Computer Vision - OpenCV, Kinect and Unity Part 1

Image By QueSera4710 - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=31586266

A little intro

Let me tell you about my adventures with Computer Vision.

My first project with this company involved object tracking. To achieve this with a high degree of accuracy, we decided to use the Kinect V2.0, mainly to take advantage of its depth sensor. The Kinect SDK has object detection tools right out of the box, but they are geared towards the human body and face rather than other shapes. Despite this, the raw depth data can still be accessed.

In order to find a solution, I loaded up my favourite search engine and looked for a way to detect objects with the depth data I had at my disposal. I came across Computer Vision using OpenCV, a tutorial covering exactly the functionality I required. The application built in the tutorial uses a Computer Vision library to process the depth data and detect objects.

What is Computer Vision?


Computer Vision is technology used so that computers can identify, analyse and understand images. It is used in sectors ranging from robotics to the medical field.

OpenCV



OpenCV is a cross-platform, open-source C/C++ library that provides Computer Vision functionality. At the time of writing this blog, it can run on Windows, Linux, Mac, Android and iOS. It can compare images and highlight the differences, or find certain shapes (ranging from squares and circles to the human face) inside an image.

Emgu CV



Emgu CV is a cross-platform .NET wrapper for the OpenCV library. It allows OpenCV functions to be called from .NET languages, including C#, and can be compiled by Visual Studio, Xamarin Studio and Unity. It runs on the same operating systems supported by OpenCV, as well as Windows Phone.

How can OpenCV utilise the depth data?


In the tutorial there is a clever function that converts the Kinect depth data (within a chosen depth range) into a 2D bitmap image. In that image, pixels representing an object are white, while pixels representing empty space are black.
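To make the idea concrete, here is a toy sketch in Python (the tutorial itself is in C#, and the depth range values below are made up) of how depth values can be thresholded into a black and white image:

```python
# Convert raw depth readings (in millimetres) into a binary image:
# pixels whose depth falls inside the range of interest become white (255),
# everything else becomes black (0).
MIN_DEPTH_MM = 500   # hypothetical near plane
MAX_DEPTH_MM = 1500  # hypothetical far plane

def depth_to_binary(depth_frame):
    """depth_frame: 2D list of depth values in mm; returns a 2D list of 0/255."""
    return [
        [255 if MIN_DEPTH_MM <= d <= MAX_DEPTH_MM else 0 for d in row]
        for row in depth_frame
    ]

frame = [
    [0,    800,  900],
    [2000, 1200, 0],
]
print(depth_to_binary(frame))  # -> [[0, 255, 255], [0, 255, 0]]
```

A reading of 0 from the Kinect typically means "no data", which is why it falls outside the range and renders black, just like anything beyond the far plane.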

The images (frames) rendered by this function are fed into the OpenCV library, which in turn detects the shapes they form. OpenCV also gives you tools to filter through the detected objects, such as minimum blob (object) size, maximum blob size and threshold. Please note that the depth data contains noise, most of which can be filtered out using OpenCV's functions.
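As a rough illustration of what that blob filtering does (a toy Python version, not OpenCV's actual implementation), here is a connected-component pass that keeps only blobs within a size range. Note how a minimum size alone already discards the kind of single-pixel speckle that depth noise produces:

```python
def find_blobs(image, min_size, max_size):
    """Return the pixel lists of each white (255) blob whose pixel count
    lies within [min_size, max_size]. Uses 4-connectivity flood fill."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == 255 and not seen[r][c]:
                # Flood fill to collect the whole connected region.
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and image[ny][nx] == 255 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if min_size <= len(blob) <= max_size:
                    blobs.append(blob)
    return blobs

img = [
    [255, 255, 0,   0],
    [255, 0,   0,   255],
    [0,   0,   0,   0],
]
# One 3-pixel blob and one lone noise pixel; keep blobs of 2..10 pixels.
print(len(find_blobs(img, 2, 10)))  # -> 1
```

OpenCV's real detectors work the same way in spirit: find connected regions, then reject the ones whose area falls outside the configured bounds.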

Adapting the code to Kinect V2.0  


Since the tutorial was published quite a while ago, it makes use of the Kinect V1.0 and its SDK. The V1.0 and V2.0 SDKs are significantly different, so the code had to be modified before I could use it in my project.

Keeping with the GitHub spirit, I forked the code that was hosted there and applied my changes. You can see the result here: https://github.com/drahcirsama/OpenCV-WPF-KinectV2

Integrating the code in Unity

Integrating Emgu

Integrating Emgu into a Unity project took a little research, but it is not too complicated to achieve. I came across a guide in a Unity forum post and, with a few variations, managed to integrate Emgu into my Unity project.

The steps are as follows:

  1. Download and install the latest version of Emgu CV from https://sourceforge.net/projects/emgucv/files/?source=navbar
  2. Once installed, open the install directory.
  3. Open the bin folder.
  4. Copy all the DLLs from the bin folder.
  5. Create a folder named Plugins in your Unity project (Assets/Plugins).
  6. Paste all the DLL files into the Plugins folder.
  7. Go to the directory where Unity is installed.
  8. Go to Unity\Editor\Data\Mono\lib\mono\2.0
  9. Copy System.Drawing.dll
  10. Paste the DLL into the Plugins folder of your project.
  11. Your Unity project should now have access to OpenCV and related tools.
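If you end up repeating the DLL-copying steps above across projects, they can be scripted. Here is a Python sketch of the same file operations; the function name is my own, and all paths in the usage example below are placeholders to adjust for your machine:

```python
# Automates the copy steps above: gather every Emgu CV DLL plus Unity's
# System.Drawing.dll into the project's Assets/Plugins folder.
import shutil
from pathlib import Path

def install_plugins(emgu_bin, unity_mono, project):
    """Copy every *.dll from the Emgu bin folder, and System.Drawing.dll
    from the Unity Mono folder, into <project>/Assets/Plugins."""
    plugins = Path(project) / "Assets" / "Plugins"
    plugins.mkdir(parents=True, exist_ok=True)
    for dll in Path(emgu_bin).glob("*.dll"):
        shutil.copy2(dll, plugins / dll.name)
    shutil.copy2(Path(unity_mono) / "System.Drawing.dll",
                 plugins / "System.Drawing.dll")
    return plugins
```

A hypothetical invocation (paths will differ on your machine) would look like `install_plugins(r"C:\Emgu\emgucv\bin", r"C:\Program Files\Unity\Editor\Data\Mono\lib\mono\2.0", r"C:\Projects\MyKinectProject")`.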


Please note: Emgu CV has licensing restrictions when it comes to commercial use. This library was used for a proof of concept project; custom tools were built for the final project.

Integrating Kinect

Integrating Kinect is even more straightforward:

  1. Download Kinect tools and resources from https://dev.windows.com/en-us/kinect/tools
  2. Uncompress the file.
  3. In your Unity project access Assets->Import Packages->Custom Package.
  4. Select one of the unitypackage files from the Kinect tools you downloaded.
  5. Your project is now ready to use Kinect. (There is also a sample project inside the unitypackage, so you can check it out and see how everything is implemented.)

Since Unity uses Texture objects instead of Bitmap objects, I made use of the functions located in this class: https://github.com/neutmute/emgucv/blob/3ceb85cba71cf957d5e31ae0a70da4bbf746d0e8/Emgu.CV/PInvoke/Unity/TextureConvert.cs to convert a Texture2D to an OpenCV image object and vice versa.
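Conceptually, that conversion boils down to reshaping a flat pixel buffer into rows and back. Here is a simplified Python illustration; the real class of course deals with Texture2D and Emgu image types rather than plain lists:

```python
# Conceptual illustration of the texture <-> image conversion.
# A Unity Texture2D exposes its pixels as one flat array; an OpenCV-style
# image is a 2D grid of rows. Converting between the two is reshaping.
def flat_to_grid(pixels, width):
    """Flat pixel list (row-major) -> list of rows."""
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

def grid_to_flat(grid):
    """List of rows -> flat row-major pixel list."""
    return [p for row in grid for p in row]

flat = [10, 20, 30, 40, 50, 60]   # a 3x2 "texture"
grid = flat_to_grid(flat, 3)
print(grid)                       # -> [[10, 20, 30], [40, 50, 60]]
assert grid_to_flat(grid) == flat # the round trip preserves the data
```

In practice the real conversion also has to worry about pixel format and row ordering (Unity stores texture rows bottom-up, as far as I recall), which is part of what the linked TextureConvert class handles.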

An implementation of these technologies

With the proof of concept complete, I set about building the real thing. Join me in the next post as I talk about the project I worked on using these technologies. If you have any questions, please let me know!

See you in the next post.
