Wednesday, July 11, 2012

we maed a v1d w1th f!shez 1n it!!1

        So I've been following this whole OUYA thing somewhat closely, and as much as I feel like I don't know enough about their business plan to kickstart the project, I feel like I'm actually a pretty fair representation of the target audience.  I think about how I use my Xbox nowadays, and yeah, it's pretty much just to play XBLA games.  That makes the OUYA, at 99 USD, pretty much an impulse buy for me, so I'm supposing that if there's enough content, it would make sense as a gaming solution.  Here's the problem: I don't play XBLA games that often.  I either play a PC game because I'm on my PC working and need a break after a few hours, or I play games on a mobile device because I'm out and bored waiting on something for a bit.  In that vein, I thought it might be cool to have a mobile Android box with an HDMI out to run Processing sketches on, if only I didn't have to plug it in.  Really, that's probably not a huge issue though.  If the hardware ends up being as open as they say, it might be moddable to this end.  But then I suppose the question remains why I still just wouldn't use an Android tablet...I know, I know, don't kick if you don't believe, but hey, it's the internet, I'm allowed to have opinions.


...Mobile p5 engine...?

        ...But let's be honest, you're not here to listen to me rain on the internet's parade, so let's get to the meat of it.  As the title sorta alludes to, we built a sort of installation!  Or at least, we got something to an early workable stage.  Sometime last week, Chris had the idea that we should motion-track Annie's betta and use the resulting data to drive a flocking simulation.  To keep the footprint small, we decided on a webcam-based solution.  I spent a few hours building a basic frame differencing tracker from some code on processing.org, tweaked the overall performance and output, and came up with this:


        I did put a version up on my GitHub; it's not quite as usable as I want it to be (a few more dependencies than I'd like), but in a revision or two it'll be where I'd like it to be.  Meantime, it's definitely usable enough to do your own simple tracking-based sketches, so have at it!  If you're interested in putting your own motion tracker together, check these out:

Frame Differencing by Golan Levin
Frame Differencing with GSVideo
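
        For the flocking side of things, the basic idea is to treat the tracked centroid as a target the boids steer toward.  Our actual build leans on toxiclibs for the real flocking, but here's a minimal stand-alone sketch of just the seek behavior, with the mouse standing in for the tracked fish (names like NUM are mine, not from our code):

//Not the installation code; a minimal sketch of driving a flock from a
//tracked point.  This is just the "seek" half; real boids add
//separation/alignment/cohesion on top.
int NUM = 40;
PVector[] pos = new PVector[NUM];
PVector[] vel = new PVector[NUM];

void setup()
{
  size(640, 480);
  noStroke();
  for (int i=0;i<NUM;i++)
  {
    pos[i] = new PVector(random(width), random(height));
    vel[i] = new PVector(random(-1,1), random(-1,1));
  }
}

void draw()
{
  background(0);
  PVector target = new PVector(mouseX, mouseY);  //stand-in for the tracked fish
  for (int i=0;i<NUM;i++)
  {
    //classic seek: steering = desired velocity - current velocity
    PVector desired = PVector.sub(target, pos[i]);
    desired.normalize();
    desired.mult(3);       //max speed
    PVector steer = PVector.sub(desired, vel[i]);
    steer.limit(0.1);      //max force, keeps the motion loose and fishy
    vel[i].add(steer);
    pos[i].add(vel[i]);
    ellipse(pos[i].x, pos[i].y, 6, 6);
  }
}

Swap the mouse target for the tracker's centroid (p_m in the sketch further down) and you've got the basic pipeline.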

        We got all the code merged and tweaked to an initial state last night, and here's the result so far, fish courtesy of our buddy Ermal's Fishcam:


Live Fish Flocking from Chris Rojas on Vimeo.

        Next up: live-tracking Annie's fish per the original spec.  Annie had some interesting ideas about how we might enhance things with projection or some kind of external display to liven up the overall piece.  Version 2.0 incoming!  Created with Processing, toxiclibs, and GSVideo.

Tuesday, July 3, 2012

in difference...

         ...we may find the answers we seek.  Or at least a cool way to do motion-ish tracking.      

        This makes the second time this week that I've researched something just for the hell of it and suddenly found a use for it a day later, although I guess the rounding algorithm stuff did have an actual purpose.  Oddly enough, the project I want to use it for was waiting on me to figure out some version of this little snippet.  Originally I thought I was going to have to do it as a full-on hand-tracking/skeletal-tracking thing, but if I can figure out some smoothing, I think this'll work pretty nicely on its own.  We're putting together a mini-installation at work, details of which I'll not spoil for you here, but it should be fun...

        I started wondering about frame differencing after repeated visits to testwebcam.com (yes, it's SFW).  Simple effect, but it looks really cool, and it's a great way to approximate motion on a webcam stream.  The initial implementation didn't take me too long; now I just have to implement some optical flow, or maybe even just a cheap trailing-3-tap average, who knows?  Of course, you're welcome to solve that problem yourself if you wanna copy-paste this into your own copy of Processing and hit Ctrl-R...seriously, your copy of p5 looks a little lonely and unloved, you should do something with it...I did some tests with lerping and vector distances, but I think I'm going to need a real filter...

faketracker
This is totally not a photoshop, run the sketch if you don't believe me...
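
        For the curious, here's roughly what I mean by a cheap trailing-3 tap: keep the last three raw centroids, average them, and ease toward the average instead of snapping to it.  This isn't in the tracker below; smoothPoint() and friends are just my sketch of the idea, with the mouse standing in for the raw, jittery centroid:

//Hedged sketch: trailing-3-tap average plus a manual lerp.
//In the real tracker you'd feed it p_m once per frame instead of the mouse.
PVector[] taps = new PVector[3];   //ring buffer of recent raw samples
int tapIndex = 0;
PVector smoothed = new PVector(0, 0);

void setup()
{
  size(640, 480);
}

void draw()
{
  background(0);
  PVector p = smoothPoint(new PVector(mouseX, mouseY));
  fill(255);
  ellipse(p.x, p.y, 40, 40);
}

PVector smoothPoint(PVector raw)
{
  taps[tapIndex] = raw.get();           //bank a copy of the newest sample
  tapIndex = (tapIndex+1)%taps.length;

  //average whatever samples we have so far
  PVector avg = new PVector(0, 0);
  int n = 0;
  for (PVector t : taps)
  {
    if (t != null) { avg.add(t); n++; }
  }
  avg.div(n);

  //ease toward the average instead of snapping to it
  smoothed.x += (avg.x-smoothed.x)*0.25;
  smoothed.y += (avg.y-smoothed.y)*0.25;
  return smoothed;
}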

        One of these days I need to start seriously optimizing some (all) of these sketches and my ofx projects.  It's OK to suck out loud for now, but truth be told, I think I'm actually a better programmer than that.  I mean, not that I'm a good programmer, I'm just decent enough not to make silly un-optimization mistakes.  Eh, this one'll optimize itself out anyway, methinks; I imagine the filtering isn't going to be cheap...I also really need to start taking video...
//PREPARE YOURSELF FOR THE COMING OF 2.0
//GET GSVIDEOOOOkay it's actually not going to be
//that big of a transition.
import codeanticode.gsvideo.*;

PImage lastFrame;                //previous frame, kept for differencing
GSCapture vStream;               //GSVideo webcam stream
int diff;                        //sum of thresholded differences this frame
int thresh = 32;                 //per-channel difference threshold
ArrayList<PVector> dVals = new ArrayList<PVector>();  //pixels that moved
PVector p_m;                     //centroid of the moved pixels
PVector lastP;                   //previous centroid, handy for filtering

void setup()
{
  p_m = new PVector(0,0);
  lastP = new PVector(0,0);
  size(640, 480, P2D);
  frameRate(30);
  lastFrame = createImage(width,height,RGB);
  vStream = new GSCapture(this, width, height);
  vStream.start();
  background(0);
}

void draw()
{
  diff = 0;
  loadPixels();
  dVals.clear();
  
  if(vStream.available())
  {
    vStream.read();
    vStream.loadPixels();
    lastFrame.loadPixels();
    //row-major walk keeps the pixel accesses sequential
    for (int y=0;y<height;y++)
    {
      for (int x=0;x<width;x++)
      {
        int i = y*width+x;
        color c = vStream.pixels[i];
        color l = lastFrame.pixels[i];
        int c_r = int(red(c));
        int c_g = int(green(c));
        int c_b = int(blue(c));
        int l_r = int(red(l));
        int l_g = int(green(l));
        int l_b = int(blue(l));
        
        //absolute per-channel difference, so motion registers whether a
        //pixel got brighter or darker, minus the noise threshold
        int d_r = max(0,abs(c_r-l_r)-thresh);
        int d_g = max(0,abs(c_g-l_g)-thresh);
        int d_b = max(0,abs(c_b-l_b)-thresh);
        
        int d_s = d_r+d_g+d_b;
        diff += d_s;
        if(d_s>0)
        {
          dVals.add(new PVector(x,y));   //remember where the motion was
        }
        pixels[i] = vStream.pixels[i];   //show the live feed
        lastFrame.pixels[i] = c;         //bank this frame for next time
      }
    }
    lastFrame.updatePixels();
  }
  updatePixels();
  if(diff>0)
  {
    p_m = avgArrayList(dVals);   //centroid of all the moved pixels
    fill(255,255,255);
    ellipse(p_m.x,p_m.y,40,40);    
  }
  lastP = p_m;
}

PVector avgArrayList(ArrayList<PVector> arr)
{
  float sumx=0;
  float sumy=0;
  for(int i=0;i<arr.size();i++)
  {
    PVector c = arr.get(i);
    sumx+=c.x;
    sumy+=c.y;
  }
  return new PVector(sumx/arr.size(),sumy/arr.size());
}

//q/a raise/lower the threshold, clamped to [8,128]
void keyPressed()
{
  if(key=='q')
  {
    thresh+=1;
    if(thresh>128)
      thresh=128;
  }
  if(key=='a')
  {
    thresh-=1;
    if(thresh<8)
      thresh=8;
  }
}

void stop()
{
  vStream.stop();
  vStream.dispose();
}
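
        (If you run it, 'q' and 'a' adjust the threshold on the fly; lower values pick up more motion but also more sensor noise.)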

grabbing hands

        ...grab all the video frames they can.  Or maybe they don't...

        It's funny, I've always been a graphics-ish programmer.  I don't mean a hardcore rendering programmer or any of that madness, but I've always been motivated to code by graphics.  Back in high school, the first thing I dove into while learning C was the BGI, mainly so I could learn how to code graphics for demos (never mind that everyone was writing demos in assembly, which I was also learning so I could...yeah, write graphics routines).  I can only imagine where I'd be nowadays if I'd had things like openFrameworks to tinker with.  Of course, we did have GLUT back in my day, which I spent a fair amount of time mucking about with.  Cool how some things just stand the test of time.

        Taking the plunge into video capture with openFrameworks tonight, I whipped up another quick, bitcrush-esque vis.  Took some cues from the Processing vidcap samples I've been doing; some ideas just keep working:


ofVidTest
Should you lose your disc, you will be subject to immediate de-resolution...


        ...And of course, here're some codes:
/* testApp.h */ 
#pragma once

#include "ofMain.h"

class testApp : public ofBaseApp{
 public:
  void setup();
  void update();
  void draw();
  
  ofVideoGrabber grabber;   //webcam capture
  unsigned char* gPixels;   //raw RGB frame from the grabber
  ofImage img;              //currently unused
};

/* testApp.cpp */
#include "testApp.h"

void testApp::setup()
{
 grabber.setVerbose(true);
 grabber.initGrabber(640,480,true);  //640x480, textured
 ofEnableAlphaBlending();            //so the 128-alpha rects blend
}

void testApp::update()
{
 grabber.update();
}

void testApp::draw()
{
 ofBackground(0,0,0);
 ofSetRectMode(OF_RECTMODE_CENTER);  //center rects on each sample point
 gPixels = grabber.getPixels();
 //sample the 640x480 RGB frame every 10 pixels: a 64x48 grid of cells
 for(int x=0;x<64;x++)
 {
  int xStep = x*10;
  for(int y=0;y<48;y++)
  {
   int yStep = y*10;
   int i = yStep*640*3+xStep*3;  //index of this cell's RGB triplet
   //translucent outlined square in the sampled color...
   ofSetColor(gPixels[i],gPixels[i+1],gPixels[i+2],128);
   ofNoFill();
   ofRect(xStep,yStep,10,10);
   //...with a solid dot at its center
   ofSetColor(gPixels[i],gPixels[i+1],gPixels[i+2],255);
   ofFill();
   ofCircle(xStep,yStep,3);
  }
 }
}