
Interactive digital sandbox

This is one of the installations I developed for a digital entertainment theme park. It is an interactive digital sandbox filled with plain white sand: a projector and a depth sensor project topographic colors and virtual objects that change with the height and shape of the sand. I also used AR markers on 3D printed objects to let players interact with the virtual content.

Overview of installation

A sandbox of 2 x 1.5 meters with projected content

As shown in the image, the projected colors change with the height of the sand: deep water, shallow water, beach, green grass, dark green forest, rocky mountains, and finally snowy mountains. These are the seven height levels defined by the system. In addition to the topographic colors, there are fish and boats in the water, lava spewing from craters, and other interactive elements.
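To make the mapping concrete, here is a minimal sketch in Python of how a depth-derived height map could be colored into the seven bands. The thresholds and RGB values are illustrative assumptions, not the installation's actual values, which were tuned by hand.

```python
import numpy as np

# Illustrative thresholds (mm above the lowest sand level) for the first six
# terrain bands; anything above the last threshold counts as snowy mountain.
LEVELS = [
    (40,  (0, 0, 139)),     # deep water
    (80,  (0, 120, 255)),   # shallow water
    (110, (240, 220, 130)), # beach
    (150, (60, 180, 60)),   # green grass
    (190, (20, 100, 20)),   # dark green forest
    (230, (130, 130, 130)), # rocky mountains
]
SNOW = (255, 255, 255)      # snowy mountains

def terrain_colors(height_map: np.ndarray) -> np.ndarray:
    """Map an (H, W) height map in mm to an (H, W, 3) RGB image."""
    out = np.empty(height_map.shape + (3,), dtype=np.uint8)
    out[:] = SNOW
    # Paint from the highest band downward so each lower band overwrites it.
    for threshold, color in reversed(LEVELS):
        out[height_map < threshold] = color
    return out
```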


The sand depth is detected in real time and the projection updated accordingly

Fish cannot go on land


Hardware setup

  1. Sandbox. Mine was made of wood, but any material works as long as it can contain the sand.
  2. Projector. Make sure the projection covers the sandbox entirely, with some margin. It is better to pick a projector whose projection area is somewhat larger than the sandbox, because once it is mounted at height, the area that actually lands on the sand shrinks a little.
  3. Depth sensor. Several are readily available, such as the Kinect or Intel's RealSense sensors. In this particular setup I used the Kinect. A word of caution though: the Asus Xtion's accuracy and resolution were not good enough to differentiate the height levels.
  4. RGB imager. Ideally this is integrated into the depth sensor so the transformation between the two is known. Since I used the Kinect, this was already part of the setup.
  5. AR markers. For this setup I used hard cardboard cut with a laser cutter.
  6. 3D printed objects shaped like magic wands to interact with the content.
  7. Computer. To run the detection and rendering of the contents.
  8. HDMI cable to connect the computer and projector.
  9. Lens polarizer to reduce glare and reflection from the projector light. Since the room is dark and the projector emits the strongest light, the reflection from the sand surface "blinds" the RGB imager and causes AR marker detection to fail.
  10. 3D printed attachment to attach the lens polarizer to the Kinect.

Software setup


  1. Unity to develop the contents
  2. OpenCV (Aruco) for AR marker detection and positioning (calibration); a minimal detection sketch follows this list
  3. OS. I used Windows 10 in this installation
  4. (Optional) Jenkins, CMake, vcpkg, and other CI tools and package managers
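For reference, below is a minimal ArUco detection sketch in Python using OpenCV's aruco module. It assumes the OpenCV 4.7+ detector API; the marker dictionary and camera index are placeholder assumptions, not the installation's actual configuration.

```python
import cv2

# Detection sketch for the RGB stream; DICT_4X4_50 and camera index 0 are
# placeholder choices.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # stands in for the Kinect's RGB stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        # With the camera-to-projector calibration known, each marker's
        # corners can be mapped into the projected scene.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("markers", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```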

3D printed objects and AR markers

A 3D printed magnifying glass with AR markers to zoom in on virtual animals.

The AR marker is detected by the RGB camera, and after its position is aligned to the virtual content, the system determines whether there are animals in the vicinity of the magnifying glass. If there are, those animals are zoomed in.
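The vicinity check itself can be as simple as a radius test around the marker's aligned position. Here is a sketch in Python with hypothetical names, positions, and radius; the real logic lives in the Unity content.

```python
import math

ZOOM_RADIUS = 0.3  # hypothetical vicinity radius in scene units

def animals_near_magnifier(magnifier_pos, animals, radius=ZOOM_RADIUS):
    """Return the animals within `radius` of the magnifying glass.

    `magnifier_pos` is the marker position after alignment to the virtual
    scene; `animals` maps names to (x, y) positions in the same coordinates.
    """
    mx, my = magnifier_pos
    return [name for name, (x, y) in animals.items()
            if math.hypot(x - mx, y - my) <= radius]

# Example: the fish at (0.5, 0.2) is close enough to be zoomed in.
print(animals_near_magnifier((0.4, 0.2),
                             {"fish": (0.5, 0.2), "boat": (1.5, 1.0)}))
```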

Another use of the marker is to implement a kind of treasure hunt, with hidden treasures or characters that players can find, as in the video below.


We can also use several markers to realize different effects and interactions.
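One simple way to realize per-marker effects, sketched here with hypothetical marker IDs and effect names, is a lookup table from the IDs returned by the detector:

```python
# Hypothetical marker IDs and effect names; the installation's real mapping
# is not documented here.
EFFECTS = {
    7: "reveal_treasure_chest",
    12: "reveal_hidden_character",
    23: "zoom_on_animals",
}

def effects_for(detected_ids):
    """Translate IDs from the marker detector into effects to trigger."""
    return [EFFECTS[i] for i in detected_ids if i in EFFECTS]
```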




Lens polarizer

As mentioned, because the room is dark and the projector is the strongest light source around, the light reflected from the sand surface is enough to blind the RGB camera, leaving the image almost entirely white. Unfortunately, the Kinect has no way to adjust the aperture in code, so I attached an adjustable lens polarizer to control the amount of light reaching the imager.

A lens polarizer adjusts the amount of light

The lens polarizer attached to the Kinect

It is a little hard to see, but the lens polarizer is attached to the Kinect with a 3D printed attachment, designed in Autodesk Fusion 360 after careful measurements of the Kinect's outer casing. Adjusting the dial on the polarizer let the imager capture usable images and detect the AR markers reliably.

Volcano detection

On the snowy mountains, if players dig a small hole, it becomes a crater and lava spews out. The algorithm to detect a crater simply checks for points at the bottom of a concave-shaped area.

Crater detection process

For every depth point, check its surrounding neighbors; if it is the lowest/deepest point, define it as a crater. This simple algorithm proved reliable enough in my testing.
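A sketch of that neighborhood check, assuming the height map is in millimeters and using hypothetical window-size and snow-level values (the scipy filters stand in for whatever neighborhood scan the installation actually used):

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

SNOW_LEVEL_MM = 230  # hypothetical snowy-mountain threshold
NEIGHBORHOOD = 7     # hypothetical window size in depth pixels

def find_craters(height_map: np.ndarray) -> np.ndarray:
    """Return a boolean mask of crater points in an (H, W) height map (mm)."""
    # The point is the lowest/deepest in its neighborhood...
    is_local_min = height_map == minimum_filter(height_map, size=NEIGHBORHOOD)
    # ...and the surrounding sand reaches the snowy-mountain band, so the
    # hole sits inside a concave area on a snowy peak.
    on_snowy_peak = maximum_filter(height_map, size=NEIGHBORHOOD) >= SNOW_LEVEL_MM
    return is_local_min & on_snowy_peak
```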

Gallery

Here are a few extra images of users painstakingly adjusting the sand height into shapes such as a fish bone and a house.


Creative players form a fish bone

A house and a car, and the moon in the top right


This version has since been updated; this photo was taken on the day of decommissioning
