
Scan doodles and watch them come alive!

In this post I'd like to share one of my projects involving doodles and bringing them to life with Unity. We prepare doodle papers and some crayons and let children color them. After they're done, we scan the papers and the drawings appear on a screen that is projected onto the walls.


Doodles come alive on the screen

Project flow

I used readily available document scanners, such as the following.

A document scanner

The scanner has many helpful features, such as cropping and rotating the scanned images so they come out straight even if the paper is slightly rotated or misaligned.

The scanned images are stored on a server, and a Unity application polls the server for new images every few seconds. For the server I initially used AWS S3 for image storage, and later we switched to a local image server built with Node JS.
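The core of the polling step is just diffing the server's file list against what the app has already processed. Here is a minimal sketch of that logic in Node JS — the `/images` endpoint, its JSON shape, and the `spawnDoodle` callback are all hypothetical names for illustration, not the actual API:

```javascript
// Sketch of the "poll for new images" step. The endpoint and JSON shape
// are assumptions; the real server only needs to return a list of files.
function findNewImages(seen, serverList) {
  // Keep only filenames we have not processed yet.
  return serverList.filter((name) => !seen.has(name));
}

// A caller would poll every few seconds, e.g.:
// setInterval(async () => {
//   const list = await (await fetch("http://localhost:3000/images")).json();
//   for (const name of findNewImages(seen, list)) {
//     seen.add(name);
//     spawnDoodle(name); // hypothetical: hand off to the Unity side
//   }
// }, 5000);

const seen = new Set(["fish_001.png"]);
console.log(findNewImages(seen, ["fish_001.png", "fish_002.png"]));
```

The Unity client does the same diff inside a coroutine, so a scan that arrives mid-session shows up on screen within one polling interval.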

Attaching 2D Texture to a 3D Model

I no longer have access to the actual doodle papers but they look like any other doodle template.
A sample of what the doodle paper looks like

After a paper is colored and scanned, the Unity application retrieves the image as a PNG file and converts it into a Texture2D object. The texture is then applied to a 3D model with predetermined UV mapping. As a result, we get a 3D model of the fish (or other doodle objects such as trains, airplanes, etc.) with the coloring as done on paper.
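One small but useful guard in this step is checking that the downloaded bytes really are a PNG before handing them to Unity (on the C# side the conversion itself is `Texture2D.LoadImage(bytes)` followed by assigning the texture to the model's material). A sketch of that validation, with the magic-number check as the only assumption:

```javascript
// Every valid PNG file starts with this fixed 8-byte signature.
const PNG_MAGIC = [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];

function isPng(bytes) {
  // Reject truncated or non-PNG downloads before they reach Texture2D.
  return PNG_MAGIC.every((b, i) => bytes[i] === b);
}
```

Dropping a bad file here is much cheaper than letting Unity fail to decode it while the installation is live.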

Differentiating Doodle Types

Since we had several types of doodles to choose from (fish, airplanes, UFOs, divers, etc.), we needed a way to separate the scanned images by type. For this, we printed QR codes on the doodle papers. Each QR code corresponds to a doodle type, and a small scanner application sorts the images by type before sending them to the image server.
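The sorting step boils down to a lookup from QR payload to doodle type. The payload strings below are made up for illustration — the real codes just need to be unique per template:

```javascript
// Hypothetical QR payloads; each template's printed code encodes its type.
const QR_TO_TYPE = {
  "doodle:fish": "fish",
  "doodle:airplane": "airplane",
  "doodle:ufo": "ufo",
  "doodle:diver": "diver",
};

function routeScan(qrPayload) {
  // Unknown payloads are rejected rather than guessed,
  // so a misread code never spawns the wrong model.
  return QR_TO_TYPE[qrPayload] ?? null;
}
```

Because the type is decided before upload, the Unity app never has to inspect image contents — it just picks the 3D model matching the type tag.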

Challenges and possible extensions

There is limited space to show the doodle objects, and it gets crowded really fast. With too many objects on screen, they start to collide with each other and become very hard to control and animate. I had to make sure only a certain number of the latest scanned doodles are active on screen. And we couldn't just make the older doodles vanish abruptly, so I moved them out of the viewport before deactivating them.
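The bookkeeping for this is a simple capped queue: new doodles push in, and anything over the cap is handed back as "to retire". A sketch, with the cap value and function names as assumptions (in Unity the retire step would animate the model out of the viewport before calling `SetActive(false)`):

```javascript
const MAX_ACTIVE = 10; // assumed cap; tune to the screen size

function addDoodle(active, doodle, maxActive = MAX_ACTIVE) {
  const retired = [];
  active.push(doodle); // newest doodle always gets on screen
  while (active.length > maxActive) {
    // Oldest first: these swim out of view, then get deactivated.
    retired.push(active.shift());
  }
  return retired;
}
```

Returning the retired doodles instead of destroying them in place keeps the exit animation decoupled from the queue logic.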

Another challenge is that the installation has no scene changes like in games, so the app has to be very stable and performance has to stay high. We ran the installation for about 12 hours every day, so making sure it stayed stable and ran without bugs for that long was very challenging.

Another part that can be improved is the interactivity of the installation. It mainly displayed 3D models made from paper doodles, but we could add more diverse interactions, such as touch support and scripted events (for example, a shark appearing and eating all the doodle fish every now and then). This could be something to explore in future installations.


