
OpenCV native plugin for Unity on iOS

In one of my recent projects, I needed to use OpenCV from within Unity on iOS. The OpenCVForUnity asset was overkill, because I only needed a few functions rather than the whole OpenCV library. It also does not wrap the entire OpenCV API, so unless you already know that the functions you need are included, you may find it lacking. Since my project involved trial and error and mixing algorithms together, I decided to go with a native plugin.

Overview

On iOS, the native library is built as a bundle, and to use it we need to put this bundle inside Unity's Plugins/OSX folder. Therefore, we need to create two projects:
  1. An Xcode project to build the native plugin.
  2. A Unity project to use the plugin.
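
The key mechanism is that Unity resolves the plugin's functions by symbol name, so anything we want to call from C# has to be exported with C linkage. Here is a minimal sketch of that idea (the function name is a placeholder, not one from my repository):

```cpp
// Symbols must be exported with C linkage so that Unity can resolve them
// by name; on the C# side the same name is declared with [DllImport].
extern "C" {

// Hypothetical export: returns a version number so the Unity project can
// verify that the bundle was loaded correctly.
int GetOpenCVPluginVersion()
{
    return 1;
}

}  // extern "C"
```

The Unity scripts then declare the same function with a [DllImport] attribute and call it like any other C# method.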

Dependencies

Of course, since we are using OpenCV, we have to install it first. Tutorials on installing OpenCV for iOS are abundant, so I will not repeat them here. Once you have OpenCV installed, go to the next step.

The Bundle from Xcode

First, we create a new Xcode project.
Select File -> New -> Project and choose the Bundle template.
After setting up the project settings, the OpenCV install paths, and other parameters, create a new file for the image processing methods. The Xcode project that I created can be found here. Inside are some computer vision algorithms that I needed for some of my projects.
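
To give an idea of what such a file can contain, here is a minimal sketch of an OpenCV-based method exported to Unity. It is not taken from the repository; the function name, parameters, and the Canny edge detection are only an example, but the pattern of wrapping the raw pixel buffer handed over by Unity in a cv::Mat, processing it, and writing the result back into the same buffer is the general idea:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Hypothetical image processing method: edge-detect an RGBA frame in place.
// Unity can pass the buffer from Texture2D.GetRawTextureData() and re-upload
// it into the texture after the call returns.
extern "C" void ProcessFrameCanny(unsigned char* rgba, int width, int height,
                                  double lowThreshold, double highThreshold)
{
    // Wrap the Unity-owned buffer without copying it.
    cv::Mat frame(height, width, CV_8UC4, rgba);

    // Grayscale -> Canny -> back to RGBA, written into the same buffer.
    cv::Mat gray, edges;
    cv::cvtColor(frame, gray, cv::COLOR_RGBA2GRAY);
    cv::Canny(gray, edges, lowThreshold, highThreshold);
    cv::cvtColor(edges, frame, cv::COLOR_GRAY2RGBA);
}
```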

The Unity part

Since I needed to use the OpenCV methods in Unity, I implemented them as a native plugin callable from Unity. The Unity project can be found here.
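
For reference, the contract between the two projects is just a set of C-compatible signatures: only plain C types cross the boundary, and the Unity scripts declare each function with [DllImport] under the same name. A hypothetical header collecting the sketches above would look like this:

```cpp
// Hypothetical plugin header: only C types cross the Unity <-> plugin boundary.
#pragma once

#ifdef __cplusplus
extern "C" {
#endif

// Sanity check that the bundle was loaded (see the Overview sketch).
int GetOpenCVPluginVersion(void);

// Edge-detect an RGBA frame in place (see the implementation sketch above).
void ProcessFrameCanny(unsigned char* rgba, int width, int height,
                       double lowThreshold, double highThreshold);

#ifdef __cplusplus
}
#endif
```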

You can clone both repositories and play with the parameters and methods to suit your project. Please let me know in the comments or on the GitHub page if you have any problems.

