Sunday, March 31, 2013

[TUTORIAL] Visualizing Depth in Unity, part 2

    The joys of having a laptop capable of development: I'm seriously in love with my Ultrabook.  This isn't just me shilling for the company; I'm totally sold on this thing.  Apple did right by forcing people to figure out how to build smaller, lighter laptops that still pack serious development punch.  For reference, I'm currently working off of a Gigabyte U2442.  It would be nice to get something with a Core i7 CPU, but this one's a Core i5 at 3.1 GHz with a mobile GeForce 6xx, so I'm happy with it.  It made it easy for me to bang out this second depth sample from the comfort of a...actually, I think it was a bar as opposed to a coffee shop...


The Technolust, i sorta haz it...

    I mentioned in my last post that I'd been messing around with some other methods for visualizing depth from the Creative Camera.  I took a few moments after GDC to decompress and finish this one up; it builds off the last sample.  Instead of visualizing a texture, I'm using the depth to set attributes on some particles to get that point-cloudy effect that everyone seems to know and love.  This one's a bit more complex, mainly because I added a few parameters to tweak the visualization, but if you've got some Unity under your belt, none of this will be that tricky.  In fact, you'll probably see pretty quickly how setting particle data is very similar to setting pixel data.  I should also note that the technique presented here could apply to any sort of 3D camera: pretty much, if you can get an array of depth values from your input device, you can make this work.  So here's what we're trying to accomplish when all's said and coded:


    Since this is a Unity project, we'll need to set up a scene first.  All that's required for this is a particle system, which you can create from the GameObject menu (GameObject > Create Other > Particle System).  Set the particle system's transforms (translate and rotate) to 0,0,0 and uncheck all the options except for Renderer.  Next, set the Main Camera's transform to 160,120,-240, and our scene is ready to go.  With all that in place, we can get to coding.  We'll only need a single behavior for this test, which we'll put on the particle system.  I called mine PDepth, but you'll call it Delicious (or whatever else suits your fancy)!  First, let's set up our particle grid and visualization controls:

//We'll use these to control our particle system
public float MaxPointSize;
public int XRes, YRes;

private ParticleSystem.Particle[] points;
private int mXStep, mYStep;

  • MaxPointSize: This controls the size of our particles
  • XRes, YRes: These control the number of particles in our grid
  • points: This container holds our individual particle objects
  • mXStep, mYStep: These control the spacing between particles (this is calculated, not set manually)

    With those in place, we can populate our particle grid and get some stuff on screen.  Here's what our initial Start() and Update() methods should look like:

void Start()
{
    points = new ParticleSystem.Particle[XRes*YRes];
    mXStep = 320/XRes;
    mYStep = 240/YRes;

    int pid=0;
    for(int y=0;y<240;y+=mYStep)
    {
        for(int x=0;x<320;x+=mXStep)
        {
            points[pid].position = new Vector3(x,y,0);
            points[pid].color = Color.white;
            points[pid].size = MaxPointSize;
            ++pid;
        }
    }
}

void Update()
{
    particleSystem.SetParticles(points, points.Length);
}

    If you're wondering where the values 320 and 240 came from, we're making some assumptions about the size of our depth map to set the initial bounds.  Once we add in the actual depth query, we'll fix that and won't have to rely on hardcoded values.  Otherwise, if all went according to plan, we should have a pretty grid of white particles.  Be sure to set some values for XRes, YRes, and MaxPointSize in the Inspector!  For this example, I've used the following settings:
  • XRes: 160
  • YRes: 120
  • MaxPointSize: 5
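Before moving on, it's worth sanity-checking how those Inspector settings interact with the hardcoded 320x240 bounds in Start(): the integer divisions give the step sizes, and the nested loops then visit exactly XRes*YRes grid cells.  Here's a quick standalone sketch of that arithmetic (plain Java rather than Unity C#, just to make the counting explicit):

```java
public class GridMath
{
    public static void main(String[] args)
    {
        int xRes = 160, yRes = 120;          //values set in the Inspector
        int xStep = 320 / xRes;              //integer division: 2
        int yStep = 240 / yRes;              //integer division: 2

        //count the cells the nested loops in Start() will actually touch
        int pid = 0;
        for(int y = 0; y < 240; y += yStep)
            for(int x = 0; x < 320; x += xStep)
                ++pid;

        System.out.println(pid);             //prints 19200, i.e. xRes*yRes
    }
}
```

One caveat: because the steps come from integer division, XRes and YRes values that don't divide 320 and 240 evenly will make the loops visit more cells than the points array holds, so stick to clean divisors like the settings above.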

    As I mentioned earlier, this procedure actually isn't too much different from the previous sample: we're building a block of data from the depth map, then loading it into a container object.  In this case we're just using an array of ParticleSystem.Particle objects instead of a Color array, and we're calling SetParticles() instead of SetPixels().  With that in mind, you've probably already started figuring out how to integrate the code and concepts from the previous tutorial into this project, so let's go ahead and plow forward.  First, we'll need to add a few more members to our behavior:

public float MaxPointSize;
public int XRes, YRes;
public float MaxSceneDepth, MaxWorldDepth;

private PXCUPipeline mSession;
private short[] mDepthBuffer;
private int[] mDepthSize;
private ParticleSystem.Particle[] points;
private int mXStep, mYStep;

  • MaxSceneDepth: The maximum Z-amount for particle positions
  • MaxWorldDepth: The maximum distance from the camera to search for depth points
  • mDepthBuffer: Intermediate container for depth values from the camera
  • mDepthSize: Depth map dimensions queried from the camera. We'll replace our hardcoded 320,240 with this

    The only major additions we need to make to our Start() method involve spinning up the camera and using some of that information to properly set up our particle system.  Our new Start() looks like this:

void Start()
{
    mDepthSize = new int[2];
    mSession = new PXCUPipeline();
    mSession.Init(PXCUPipeline.Mode.DEPTH_QVGA);
    mSession.QueryDepthMapSize(mDepthSize);
    mDepthBuffer = new short[mDepthSize[0]*mDepthSize[1]];

    points = new ParticleSystem.Particle[XRes*YRes];
    mXStep = mDepthSize[0]/XRes;
    mYStep = mDepthSize[1]/YRes;

    int pid=0;
    for(int y=0;y<mDepthSize[1];y+=mYStep)
    {
        for(int x=0;x<mDepthSize[0];x+=mXStep)
        {
            points[pid].position = new Vector3(x,y,0);
            points[pid].color = Color.white;
            points[pid].size = MaxPointSize;
            ++pid;
        }
    }
}

    The bulk of the changes are going to be in the Update() method.  The big difference between working with a particle cloud and working with a texture as in the previous example is that we need to know the x and y position of each particle, hence the nested loops as opposed to a single loop over the pixel data.  This makes the code a bit more verbose, but not a ton more difficult to grok, so let's take a stab at building a new Update() method:

void Update()
{
    if(mSession.AcquireFrame(false))
    {
        mSession.QueryDepthMap(mDepthBuffer);
        int pid=0;
        for(int dy=0;dy<mDepthSize[1];dy+=mYStep)
        {
            for(int dx=0;dx<mDepthSize[0];dx+=mXStep)
            {
                int didx = dy*mDepthSize[0]+dx;

                if((int)mDepthBuffer[didx]>=32000)
                {
                    points[pid].position = new Vector3(dx,mDepthSize[1]-dy,0);
                    points[pid].size = 0.1f;
                }
                else
                {
                    points[pid].position = new Vector3(dx, mDepthSize[1]-dy, lmap((float)mDepthBuffer[didx],0,MaxWorldDepth,0,MaxSceneDepth));
                    float cv = 1.0f-lmap((float)mDepthBuffer[didx],0,MaxWorldDepth,0.15f,1.0f);
                    points[pid].color = new Color(cv, cv, 0.15f);
                    points[pid].size = MaxPointSize;
                }
                ++pid;
            }
        }
        mSession.ReleaseFrame();
    }

    particleSystem.SetParticles(points, points.Length);
}

    So like I said, a bit more verbose, but hopefully not terribly difficult to understand.  A few things to be aware of:

int didx = dy*mDepthSize[0]+dx;

    We use the variable didx as an index into the depth buffer.  We need it because our particles don't correspond 1:1 to values in the depth buffer, so we use each particle's x and y position to do the depth buffer lookup.  In the next example, we'll take a look at how we can actually have a 1:1 depth-buffer-to-particle setup using generic types.
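To make that row-major lookup concrete, here's a tiny standalone sketch (plain Java, with a made-up 4x3 buffer purely for illustration) of the same didx math:

```java
public class DepthIndex
{
    public static void main(String[] args)
    {
        //a made-up 4x3 "depth map" just for illustration
        int width = 4, height = 3;
        short[] depth = new short[width * height];

        //fill each cell with a recognizable value: 10*row + column
        for(int y = 0; y < height; ++y)
            for(int x = 0; x < width; ++x)
                depth[y * width + x] = (short)(10 * y + x);

        //the same lookup Update() uses: didx = dy*width + dx
        int dx = 2, dy = 1;
        int didx = dy * width + dx;
        System.out.println(depth[didx]);    //prints 12: row 1, column 2
    }
}
```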

if((int)mDepthBuffer[didx]>=32000)
{
...
}
else
{
...
}

    Here, the reason we test against a depth value of 32000 is that this is what the Perceptual Computing SDK uses as the error term.  If the SDK can't resolve a depth value for a given pixel, it sends back 32000 or more.  In this case, if we find an error term, we make the particle really small; in the next example, we'll look at how we can skip that particle altogether when we hit an error value.  Finally, remember we need to implement some sort of range remapping function.  I call mine lmap as an homage to Cinder's remap, but you can call it whatever you like; it's basically just:

float lmap(float v, float mn0, float mx0, float mn1, float mx1)
{
    return mn1+(v-mn0)*(mx1-mn1)/(mx0-mn0);
}
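As a quick sanity check on lmap (assuming the same MaxWorldDepth of 1800 the previous post used), a raw depth value halfway to the far plane should remap to halfway through the target range.  In plain Java:

```java
public class LmapCheck
{
    //same linear remap as the tutorial's lmap
    static float lmap(float v, float mn0, float mx0, float mn1, float mx1)
    {
        return mn1 + (v - mn0) * (mx1 - mn1) / (mx0 - mn0);
    }

    public static void main(String[] args)
    {
        //900 out of a 0..1800 world range lands at 0.5 in the target range
        System.out.println(lmap(900.0f, 0, 1800.0f, 0, 1.0f));    //0.5

        //the color path in Update() inverts and compresses the range,
        //so the same depth comes out around 0.425
        float cv = 1.0f - lmap(900.0f, 0, 1800.0f, 0.15f, 1.0f);
        System.out.println(cv);
    }
}
```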

    So that's that.  In the next sample, we'll look at some different ways to map the depth buffer to a particle cloud and use the PerC SDK's UV mapping feature to add some color from the RGB stream to the particles.  Until then, email me, follow me on Twitter, find me on Facebook, or otherwise feel free to stalk me socially however you prefer.  Cheers!


What can I say, I love OpenNI...

Wednesday, March 27, 2013

[TUTORIAL] Depth maps and Ultrabooks

    Went to a really great hack-a-thon this past weekend at the Sacramento Hacker Lab to help coach some folks through working with the Perceptual Computing SDK, and got to see some really cool work being done: everything from a next-generation theremin to a telepresence bot, all powered by the Creative 3D Camera and Perceptual Computing SDK.  It does me good to actually get out into the community and see people just dive right in and start building stuff.  Compound that with the GDC Dev Day, which I personally think went amazingly well (standing room only at one point!), and it's been a good GDC for Perceptual Computing so far.  But now comes the really hard part: PerC needs to not become a victim of its own success.  As the technology gets into more hands, it becomes about not burning through goodwill by breaking features, being uncommunicative, or not keeping up with the ecosystem.  But I digress...

    Wanted to share a little Unity tip I got asked about a few times during the hack-a-thon: how to visualize the depth map.  The SDK ships with a sample for visualizing the label map, and visualizing the color map is a fairly trivial change from that, but visualizing the depth map requires a little bit of doing.  It's still pretty simple once you see it working, so let's take a look at what's required.

    To get a depth map into a usable Texture2D, the basic flow is:
  • Grab the depth buffer into a short array
  • Walk the array of depth values and remap them into the 0-1 range
  • Store the remapped value in a Color array
  • Load the Color array into a Texture2D
    If that seems really simple, fear not: it actually is.  Let's take a look at some code and see how we accomplish this.  Here's a really simple Unity behavior that populates the texture object from the depth map.  I'll leave assigning the texture as an exercise to the reader:

using UnityEngine;
using System.Collections;

public class Test : MonoBehaviour
{
    private PXCUPipeline mSession;
    private int[] mDepthSize;
    private short[] mDepthBuffer;
    private int mSize;

    private Texture2D mDepthMap;
    private Color[] mDepthPixels;

    void Start()
    {
        mDepthSize = new int[2];
        mSession = new PXCUPipeline();
        mSession.Init(PXCUPipeline.Mode.DEPTH_QVGA);
        mSession.QueryDepthMapSize(mDepthSize);
        mSize = mDepthSize[0]*mDepthSize[1];

        mDepthMap = new Texture2D(mDepthSize[0], mDepthSize[1], TextureFormat.ARGB32, false);
        mDepthBuffer = new short[mSize];
        mDepthPixels = new Color[mSize];
        for(int i=0;i<mSize;++i)
        {
            mDepthPixels[i] = Color.black;
        }
    }

    void Update()
    {
        if(mSession.AcquireFrame(false))
        {
            mSession.QueryDepthMap(mDepthBuffer);
            for(int i=0;i<mSize;++i)
            {
                float v = 1.0f-lmap((float)mDepthBuffer[i],0,1800.0f,0,1.0f);
                mDepthPixels[i] = new Color(v,v,v);
            }
            //upload the whole pixel array once per frame, not once per pixel
            mDepthMap.SetPixels(mDepthPixels);
            mDepthMap.Apply();
            mSession.ReleaseFrame();
        }
    }

    float lmap(float val, float min0, float max0, float min1, float max1)
    {
        return min1 + (val-min0)*(max1-min1)/(max0-min0);
    }
}

    So like I said, it's a fairly simple, albeit verbose, technique, and it should be easy to wrap up into a simple function for quick future use.  This same technique can also be used to visualize the IR map with some very minor tweaks.  I've actually been doing a lot of stupid depth map tricks the last few days.  I'm at GDC all this week, so I'm not sure how much dev time I'll get to polish a few more of these up, but maybe the weekend'll afford me some cycles if I'm not in full-on crash-out recovery mode...

Friday, March 1, 2013

Milestone presents!

    5000 pageviews for me just ranting is pretty cool, so have some surprise Processing code!  I won't tell you what it does; basically, it came out of a Unity UI experiment Annie's been working on and my desire to learn how Processing's PVector interface really works.  I found some interesting quirks, though they have more to do with 2D vs. 3D (PVector being a 3D construct) than with PVector's implementation, I'm going to say (since Shiffman's a genius and probably knows what he's doing).  Anyway...have fun (and send Annie a hey if you end up using this code for something)!  If it's any hint, I call this "Spider Silk"...(poetic, no?)

float thresh = 0.5;
//world vectors
PVector v_ctr_w = new PVector(0,0);
PVector v_0_w = new PVector(0,0);
PVector v_1_w = new PVector(0,0);

//image vectors
PVector v_ctr_i = new PVector(0,0);
PVector v_0_i = new PVector(0,0);
PVector v_1_i = new PVector(0,0);

void setup()
{
  size(500,500,P2D);
  stroke(128);
}

void draw()
{
  background(0);
  v_ctr_i.set(map(v_ctr_w.x,-1,1,0,width),map(v_ctr_w.y,-1,1,0,height),0);
  v_0_i.set(mouseX,mouseY,0);
  v_0_w.set(map(v_0_i.x,0,width,-1,1),map(v_0_i.y,0,height,-1,1),0);
  
  line(v_ctr_i.x,v_ctr_i.y,v_0_i.x,v_0_i.y);
  ellipse(v_ctr_i.x,v_ctr_i.y,50,50);
  fill(0);
  ellipse(v_0_i.x,v_0_i.y,30,30);
  
  fill(255);
  if(v_0_w.mag()<thresh)
    v_1_w.set(v_0_w.x,v_0_w.y,0);
  else
    v_1_w = PVector.mult(v_0_w,thresh/v_0_w.mag());

  v_1_i.set(map(v_1_w.x,-1,1,0,width),map(v_1_w.y,-1,1,0,height),0);
  ellipse(v_1_i.x,v_1_i.y,15,15);  
}

void mousePressed()
{
  thresh = v_0_w.mag();
}
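The interesting bit in draw() is the else branch: multiplying v_0_w by thresh/v_0_w.mag() rescales the vector to have exactly thresh magnitude, which is the standard way to clamp a vector to a maximum length.  Here's a minimal standalone version of that clamp in plain Java (just an illustration of the math, not Processing's actual PVector internals):

```java
public class VecClamp
{
    //clamp a 2D vector (x, y) to a maximum length, returning the new components
    static float[] clamp(float x, float y, float maxLen)
    {
        float mag = (float)Math.sqrt(x * x + y * y);
        if(mag <= maxLen)
            return new float[]{x, y};       //already inside the circle
        float s = maxLen / mag;             //scale factor shrinks it onto the circle
        return new float[]{x * s, y * s};
    }

    public static void main(String[] args)
    {
        float[] v = clamp(3.0f, 4.0f, 0.5f); //magnitude 5, clamped down to 0.5
        System.out.println(v[0] + ", " + v[1]);
    }
}
```

Note that, like the sketch above, this divides by the magnitude, so a zero-length vector would need a guard in real code.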