Tuesday, June 26, 2012

post reality filter

        You know, for as much as i love music, i never really considered myself much of a music snob, or at least not a genre snob anyway.  The whole idea of creating millions of sub-genres in some feeble attempt to differentiate yourself from the next guy by something other than just having cool music always boggled my mind.  Even worse are the music critics and journos who are always coming up with new genres just to, i dunno, safeguard their idea of what a particular genre is, or some such nonsense...for as much as i rail on game journalists, that's probably true of most media journalists.  The genre qualifier "post" always bothered me the most: what, you can't think of a new genre name, so you just take the laziest path?  "Oh, this is what comes AFTER <genre>"?  Or maybe it's just arrogance, as in: really, wow, you think your music is so different that you alone are going to define what comes next?  Wow, alright then...

        Anyway, here's a fun little video experiment i knocked together in Processing yesterday.  You can copy and paste the code below into a sketch and run it; it should be that simple.  Ohhh yes, you'll also need GSVideo, as I'm not using Processing 1.5's native video.  After reading a few pages on what was required to get native video working, based on software that may or may not really exist anymore, and also reading somewhere that Processing 2 was going to move to GSVideo anyway, i figured i might as well take the plunge.  It's a pretty friendly library overall...on the subject of laziness, gotta admit i probably could've captured a video of this, but didn't.  That's probably ok though, you should really run this yourself and see the effect, maybe play around with it a bit, see what you come up with...


vidcap_02
Hello from videoland...

import codeanticode.gsvideo.*;

PVector gridStep = new PVector(16, 8);  // pixel spacing between grid samples
int gridX;                              // grid cells across
int gridY;                              // grid cells down
GSCapture vStream;                      // GSVideo camera capture stream

void setup()
{
  size(640, 480, P2D);
  frameRate(30);
  gridX = int(width / gridStep.x);
  gridY = int(height / gridStep.y);
  vStream = new GSCapture(this, width, height);
  vStream.start();
  background(0);
  noStroke();
}

void draw()
{ 
  if(vStream.available())
  {
    //Grab the newest camera frame and expose its pixels[]
    vStream.read();
    vStream.loadPixels();
    
    //Fade the previous frames out with a translucent black quad, for cheap trails
    //Thanks, RoHS
    fill(0, 0, 0, 8);
    rectMode(CORNER);
    rect(0, 0, width, height);
    
    for (int i = 0; i < gridX; i++)
    {
      for (int j = 0; j < gridY; j++)
      {
        int x = i*int(gridStep.x);
        int y = j*int(gridStep.y);
        //Sample right-to-left so the output is mirrored, webcam style
        int loc = (vStream.width - x - 1) + y*vStream.width;
      
        //Translucent sample of the camera pixel; brightness drives primitive size
        color col = color(red(vStream.pixels[loc]), green(vStream.pixels[loc]), blue(vStream.pixels[loc]), 32);
        float vradius = brightness(col)*0.1;
        
        fill(col);
        //Dim samples render as circles, bright ones as squares
        if(vradius < 12.8)
        {
          ellipseMode(CENTER);
          ellipse(x, y, vradius,vradius);
        }
        else
        {
          rectMode(CENTER);
          rect(x,y,vradius,vradius);
        }
      }
    }
  }
}

Sunday, June 24, 2012

First Light

        When i was working in SoCal, i worked with a rather brilliant tools programmer who used to refer to MVPs (minimum viable products) as the "First Light" version of something.  I like that term; "MVP" sounds so stuffy and business-y.
     
        Probably the most impactful realization I've had since coming to Intel is that I want to be a creative coder when I grow up.  My absolute dream job would be building interactive digital art pieces/installations, and I know it's possible, because there are people out there doing it.  The downside is that it's an extremely niche market.  It's like Tech Art or Concept Art on steroids; that is, far more people want to do it than there are positions available.

This does not deter me.

        As daunting a challenge as it may seem, I have a particular set of skills, skills that will only partially help me achieve my new position through poise and audacity.  To this, I must now add resolve (guess both of those movies and I'll...i dunno, buy you a shot next time i see you).  My resolve is that I need to get the rest of the skills required for this sort of work, and in the process maybe build some simple starting-out pieces.  As i mentioned in my previous post, the big skill I feel like I'm lacking is not a technical one, but an artistic one.  Taking a step back from visualizing tweets, I decided to go even simpler and visualize some simple data-over-time type sets.  So, armed with pygame, I settled on visualizing periodic functions (sine and cosine in this case), and decided to see what I could come up with.

sine_tplt
I started simple, just so I'd have a template to work with...
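
        In case you're curious, here's a minimal sketch of the kind of template i mean.  To be clear, this is NOT the exact code from my repo, just the bare "sample a periodic function over time and draw it" loop, assuming you've got pygame installed:

import math
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

t = 0.0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    screen.fill((0, 0, 0))
    # one cycle of sine across the screen, phase-shifted by time
    for x in range(0, 640, 4):
        theta = (x / 640.0) * 2 * math.pi + t
        y = int(240 + 200 * math.sin(theta))
        pygame.draw.circle(screen, (0, 255, 128), (x, y), 3)

    pygame.display.flip()
    t += 0.05
    clock.tick(30)

pygame.quit()

        From there it's all variations on a theme: more samples, trails, radius and color driven by amplitude, and so on...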


sine_vis
...but things got pretty crazy pretty quickly.

        Overall this was a fun exercise and it really helped me get my feet wet.  The thing I realized over the course of doing it is that the possibilities really are endless.  I mean, this is just a simple 2d library with no interactivity.  I haven't even started getting crazy with Processing, Cinder, vvvv, or any of the other insanely cool toolkits out there, to say nothing of something like Unity or other game engines.  I've already started working on some interactive stuff; mayhap I'll post some of that next once it gets a little more presentable.

        I uploaded all the source from this to my Github, or you can just check out the results on my Vimeo channel.  I challenge you to see what you can come up with yourself, you might be surprised how addicting it is...

Saturday, June 9, 2012

"Fun" is part of "Functional"

        It's been a really interesting...week?  I dunno, to be honest, time has completely disappeared, which is a good thing.  I've always had some interesting thoughts about time, but that's for another blog post, or maybe a good smokeout.  Given the kind of work I'm doing now, a little bit of the ol' green might not be a bad idea, but we'll save that as a last resort.

        As I think i've probably mentioned to some folks, one of the things I'm working on now, or at least was supposed to be working on, is "new" interaction ideas, natural/new user interfaces, and all kinds of other buzzwords that add up to "do what nintendo and apple did, but better".  No pressure.  I'm going to be honest with everyone: i'm NOT an incredibly brilliant, innovative, or creative person, but i think i know what i like, which i feel could be an asset in this space.  This all sort of hit home a little while back, when i was prototyping some tech for an interaction demo to showcase a...pretty freakin cool bit of technology my co-workers Stan and Sterling have been grinding on for a bit.  Here's a short demo:


        Apologies for the lame-o compression, it's been a while since i've been a video producing guy man, something i intend to change, and by that i mean change that i haven't been producing videos, not that i'm a guy man.  Aaaaaanyway...so here's the thing: I found myself just playing with this demo for...minutes at a time, seriously.  And by playing i mean putting my mouse on one of the fireflies and watching it light up.  How crazy is that??  It's funny though, i'm reminded of a story Mom used to tell me about how Dad would take me to the arcade when i was a wee bairn and sit me on the pinball glass while he played.  Apparently, i would paw at the glass trying to get at all the shiny-movey things.  While i have no memory of this, i feel like it probably affected me pretty deeply (and relevantly!).

        Fast forward to about a week ago, my co-worker Chris showed me this video from The Creators Project (which I'm so going to next year), which really got me thinking.  Unlike me, Memo Akten actually is a brilliant, innovative, creative individual, and i could watch his stuff forever.  Seriously, TRY and shut this off once you start watching it, you won't be able to...


        This got me thinking a few things, in no particular order and with no real connection between them:
  • Complexity from simplicity is beauty
  • Chronological doesn't mean linear
  • I need to start small
  • "Fun" is part of functional
  • Graphics programming can make things that aren't games
  • In the digital world, data is just numbers, and numbers make pictures
        Now, the last two points, i imagine you're all going "well duh", but here's the thing: I'm VERY new to all this.  For all i've been around graphics programming and interaction design, i have to retrain myself to think of things not as game tools or game concepts.  I understand that the delta between "game" and "interactive experience" isn't that great from a high level, but at the same time, there are things you wouldn't do in a game, presentation-wise, that i feel are probably permissible in other spaces.

        So i decided to start small and visualize an obvious and relevant known: Tweets!  More specifically, mentions.  I especially wanted to get away from linear representations and just think of mentions as a big data cloud.  How would i display and navigate said cloud?  I grabbed Tweepy and pygame and started messing around with some different ideas.  Everything here is WIP, not even anywhere close to being a solid idea; for now i'm just playing around.
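
        If you want to play along, the fetching half is only a few lines.  Here's a rough sketch, NOT my actual WIP code; the credentials are placeholders, and depending on your Tweepy version the call might be mentions() instead of mentions_timeline():

import time
import tweepy

# Placeholder credentials -- swap in your own app's keys
auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')
api = tweepy.API(auth)

now = time.time()
for mention in api.mentions_timeline(count=50):
    # age in hours is what drives placement in the cloud
    age = (now - time.mktime(mention.created_at.timetuple())) / 3600.0
    print("%s is %.1f hours old" % (mention.user.screen_name, age))

        Once you've got ages, the pygame side is just mapping that number to positions and colors, which is where the iterations below come in.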


twalaxy
Iter 0: Branching off from the most recent mention, age denoted by line color.

twalaxy_2
Iter 1: Main mentions represented by orbits, age denoted by distance from center. Related tweets branch off of each mention.


Iter 2: Same as 1, related tweets represented radially.

        This is fun, but you know, for some reason I'm not nearly as drawn into this as I am to the fireflies, even after running it on some touch-enabled hardware.

        But that's the thing, I can figure that out.  Like i said, in this space, i'm a BABY.  I need to really just immerse myself in all this stuff, the way I did when I was a game developer.  Taking a bit of my own advice, I'm going to go back to square one and start small.  Taking a page from Memo Akten, I'm going to visualize some simple data-over-time sets just to get my head more into that space.  Taking another page from Memo Akten, i'm going to start with simple functions.  Maybe I'll use a cosine though, just to be slightly different, hehe...Stay tuned.  I feel like i'm finally hitting a bit of a stride here, like I've finally given myself permission to work on this stuff.  It's funny, it's been such a big mandate for our group, but i've felt like...it was too much not like work to be working on.  Ah well...at least i'm changing before it's too late.

Friday, June 1, 2012

i can see forever

        It's funny how sometimes just being around a term or concept so much skews your perspective on it.  Case in point, at work, one of the hot topics is CV (computer vision), like, you're immersed in it here.  Sometimes it's hard to not feel like everyone in the world is researching/investigating/developing products against a certain technology and you're being left behind.  Of course, in the case of CV, that's probably absolutely true, i'm sure everyone is trying to figure out how to use it to drive the next wave of great interactive experiences, but then, i guess that's the real hot topic.  Anyway, i just thought it was funny how over the last month i've felt like CV is the new hotness and everyone in the world is playing with it...

cvtest
OpenCV...one of the few things that doesn't break when it sees my ugly mug...

        That said, the thing that caught my eye most at Maker Faire was SimpleCV, which is a wrapper around OpenCV (ya think) and a few other super useful libraries.  They had a few really cool demos running, but it was more being able to look at the code and see how simple it was that really got my attention.  Not that OpenCV by itself doesn't have a ton of functionality already; SimpleCV just adds a few more convenience functions, not to mention some slick blob finding stuff (among other things).  The big (and very specific) hurdle I ran into is that SimpleCV uses libfreenect, specifically the python bindings, for kinect access, and from everything I read online, getting the freenect python bindings to play nice with Windows is a bit of a chore.  I must say, in the last week or so, I've been amazed at how many tasks i've undertaken that turned out to be...not so simple or well documented.  That means i'm either truly blazing the trail or I'm doing it horribly wrong and missing the obvious solution.  Let's assume the latter.
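
        To give you an idea of what i mean by simple, here's roughly all it takes to get a webcam blob-finder going in SimpleCV.  This is a from-memory sketch, so double-check it against the docs:

import SimpleCV

cam = SimpleCV.Camera()      # default webcam
disp = SimpleCV.Display()

while disp.isNotDone():
    img = cam.getImage()
    blobs = img.findBlobs()  # returns None if no blobs are found
    if blobs:
        blobs.draw(color=SimpleCV.Color.GREEN, width=2)
    img.save(disp)

        That's the whole thing: camera in, blobs drawn, image out.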


         Anyway, so after a day of trying to build python freenect and only being somewhat successful, i decided to see if I could get SimpleCV and pykinect to play nice together.  Thanks to some hard work done by another Codeplex-er who goes by the handle bunkus, i got pykinect and the OpenCV python bindings talking.  From there it was a pretty simple step to get pykinect and SimpleCV talking, since SimpleCV talks straight through OpenCV.  Well, it would've been if i'd remembered to install PIL...As much time as i'd spent digging through the SimpleCV ImageClass source, you'd think the dependencies would've been burned into my brain.

Facepalm

         But you know, at the end of the day, I've got pykinect streaming into SimpleCV, which is a good thing.  There's been a ton of interesting work done in the open source kinect space, but having had my fill of trying to deal with OpenNI, i'm more than happy to use a supported, iterated-on SDK with the most current features.  Who knows what I might want to do next?  So here's some code; yes i know, it's horrible and un-pythonic and there are globals all over the place, but it works...cleanup is next.  Actually, next I'm going to work my way through the SimpleCV book and see what happens.  This + multiple kinects?  If performance doesn't murder me, could be pretty cool...

import array
import thread

import cv
import SimpleCV
import pykinect.nui

def frame_ready(frame):
    global disp, screen_lock, img_address, img_bytes, cv_img, cv_img_3, alpha_img
    with screen_lock:
        # copy the kinect frame's raw BGRA bytes into our buffer
        frame.image.copy_bits(img_address)
        cv.SetData(cv_img, img_bytes.tostring())
        # peel off the alpha channel; SimpleCV wants a 3-channel image
        cv.MixChannels([cv_img], [cv_img_3, alpha_img], [(0,0),(1,1),(2,2),(3,3)])
        # wrap the result in a SimpleCV Image and blit it to the display
        scv_img = SimpleCV.Image(cv_img_3)
        scv_img.save(disp)

if __name__=='__main__':
    screen_lock = thread.allocate()
    # 4-channel image for the raw kinect frame, plus a 3-channel image and a
    # throwaway alpha channel for the split
    cv_img = cv.CreateImage((640,480), cv.IPL_DEPTH_8U, 4)
    cv_img_3 = cv.CreateImage((640,480), cv.IPL_DEPTH_8U, 3)
    alpha_img = cv.CreateImage((640,480), cv.IPL_DEPTH_8U, 1)
    # byte buffer the kinect can copy frames into directly
    img_bytes = array.array('c', '0'*640*480*4)
    img_address = img_bytes.buffer_info()[0]
    disp = SimpleCV.Display()

    kinect = pykinect.nui.Runtime()
    kinect.video_frame_ready += frame_ready
    kinect.video_stream.open(pykinect.nui.ImageStreamType.Video, 2, pykinect.nui.ImageResolution.Resolution640x480, pykinect.nui.ImageType.Color)

    # keep the script alive while frames stream in; close the window to quit
    while not disp.isDone():
        pass