
Building a native plugin for Intel Realsense D415 for Unity

Following up on a previous post, I wrote a plugin that wraps the Intel RealSense SDK methods so they can be called from within Unity. Intel also provides its own Unity wrapper in its GitHub repository, but for our projects I needed to perform image processing with OpenCV and pass the results to Unity, rather than just the raw image/depth data. There is a plugin called OpenCVForUnity for calling OpenCV functions from Unity, but previous experiments showed that image processing inside Unity can take a long time. I hope this post helps anyone else who wants to use Intel's cameras, or any other device, natively in Unity.

Test Environment

  • Windows 10 64bit
  • Unity 2017.2.0f3 (64 bit)
  • Realsense SDK from Intel
  • CMake 3.0 or higher

Steps

  1. Check out the native plugin code here. Don't worry about the other projects in the same repository; the relevant code is in the link above.
  2. Check out the Unity sample project here. However, instead of master you need to switch to the feature/realsense branch to see the sample scene for RealSense cameras.
  3. Build and install the RealSense SDK. Make sure you match the architecture of the OS and of Unity.
  4. Modify the CMakeLists.txt file in the native code repository to match your installation paths, then generate the native code solution file. If you don't change the project name, the generated solution file will be named camera_vision.sln.
  5. Build the solution, and you should find the generated DLL in build/uplugin/.
  6. Copy this DLL file (if you didn't rename the project it should be uplugin_realsense_d415.dll) into the Assets/Plugins/x64 folder of the Unity project.
  7. Open the test scene in Assets/WindowsNativePlugin/Scenes/RealsenseD415.unity.
  8. Press Instantiate -> List Devices -> Get Depth Data, in this order.
  9. You should see the depth data being passed back to Unity.
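For step 4, the fragment below sketches the kind of edits you would make to CMakeLists.txt. The variable names and paths here are illustrative assumptions, not the exact contents of the repository's file; point them at wherever you installed the RealSense SDK, matching the 64-bit architecture.

```cmake
# Illustrative sketch only: variable names and paths are assumptions,
# not the exact ones in the repository's CMakeLists.txt.
cmake_minimum_required(VERSION 3.0)
project(camera_vision)

# Adjust these to your own RealSense SDK installation (64-bit).
set(REALSENSE_INCLUDE_DIR "C:/Program Files (x86)/Intel RealSense SDK 2.0/include")
set(REALSENSE_LIB_DIR "C:/Program Files (x86)/Intel RealSense SDK 2.0/lib/x64")

include_directories(${REALSENSE_INCLUDE_DIR})
link_directories(${REALSENSE_LIB_DIR})

# Build the plugin as a shared library (the DLL Unity will load).
add_library(uplugin_realsense_d415 SHARED src/realsense_capture.cpp)
target_link_libraries(uplugin_realsense_d415 realsense2)
```

After editing, run CMake to generate the Visual Studio solution (camera_vision.sln if you kept the project name) and build it as described in step 5.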
I plan to add documentation and also reorganize/refactor the scripts, but in the meantime please look through the code and comment here if you have any questions.

Overview of plugin interface

This plugin borrows heavily from the previous Asus Xtion2 plugin.
  1. Unity calls the _Create function, which is a wrapper around the DLL method that instantiates a new realsense_capture instance. The memory address of this instance is returned to Unity as an IntPtr, and as long as Unity holds this pointer we can call the DLL's methods through it.
  2. The native plugin returns data in the form of pointers to buffers that hold the actual data. These pointers arrive as IntPtr values, and we use Unity's marshaling methods to convert them back into C# variables.
Refer to the illustration below for an overview.
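The two points above can be sketched on the native side roughly as follows. The class and function names here are hypothetical stand-ins modeled on the pattern described (the real ones live in uplugin_realsense_d415.dll), and a real Windows build would also mark the exports with __declspec(dllexport).

```cpp
// Sketch of the exported plugin interface; names are illustrative,
// not the actual exports of uplugin_realsense_d415.dll.
#include <cstdint>
#include <vector>

// Stand-in for the realsense_capture class that wraps the SDK. The real
// class would fill this buffer from the camera each frame.
class realsense_capture {
public:
    std::vector<uint16_t> depth{0, 1, 2, 3};  // dummy depth values
};

extern "C" {
// Unity's _Create call lands here: allocate an instance and hand the
// raw address back, which Unity stores as an IntPtr.
realsense_capture* Create() { return new realsense_capture(); }

// Return a pointer into the instance's depth buffer; on the Unity side
// this IntPtr is read back with Marshal.Copy into a ushort[] array.
const uint16_t* GetDepthBuffer(realsense_capture* p) {
    return p->depth.data();
}

// Unity must call this when done so the native memory is freed.
void Destroy(realsense_capture* p) { delete p; }
}
```

On the C# side, each of these would be declared with [DllImport] and the instance pointer held as an IntPtr for the lifetime of the capture object.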

Future work

Documentation, plus refactoring and reorganizing the scripts. Sorry the repos are a mess right now. Please comment if you have any problems and I will try to help as much as possible.

Comments

  1. Where are you performing the image processing with OpenCV? Also did you ever get the ReaslSense camera working with OpenCV inside of Unity?

    1. Daniel, sorry for the late reply. I perform the image processing on the native plugin side, meaning I pass all the parameters from Unity, process them on the native side, and return the results. Maybe you can check here: https://github.com/sonnyky/WindowsNativePlugin.
      In one of the sample scenes, called "Detect Shape", I take the video stream with the native plugin, detect the shapes, and send the results as a Texture2D to Unity, effectively displaying video captured and processed by OpenCV in Unity. Is this what you're looking for?

  2. I'd also like to know if you got realsense working with opencv in unity. I haven't been able to make this work yet. Do you have an example project perhaps?

    1. Hi Zite, the link is also in the steps outlined above; maybe it was a little hard to find among all that text. https://github.com/sonnyky/WindowsNativePlugin

      You can check the Detect Shape scene. There I take the video stream with the native plugin, do shape detection, and return the results to Unity.

  3. Hi, the Unity project github link is here:
    https://github.com/sonnyky/WindowsNativePlugin.

    It seems it is easy to miss with all the text in the article. I'm still working on the documentation, so if you have any problems please comment here or on the github page directly. Thank you.


