Saturday, June 21, 2014

Getting Started With Kinect.v2 and Cinder, pt. 1: Cinder-Kinect2


...Gentlefolks, start your frameworks.

    So, I'm somewhat ashamed to admit this, but I've actually had my Kinect v2 devkit for about a month and a half now and haven't really done much with it other than run some of the demos, browse the sample code, and in general be amazed at the technology.  Thank god for eyeo festival and a week off to just create and be inspired, as it allowed me to really get hands-on with the hardware and some of the solutions available for Cinder.  Let's take a look, then, shall we?  In this first installment, we'll go over Cinder-Kinect2, a wrapper around the core SDK developed by Stephen Schieberl of Wieden-Kennedy and, of course, Cinder fame.


DISCLAIMER: This is preliminary software and/or hardware and APIs are preliminary and subject to change

    Let me preface this by saying I'm not presenting anything show-stopping here; this isn't even Cinder Kinect 101 (it's more like 90), and chances are most users probably know this stuff already, but at least it's here for some amount of posterity, and I will be updating this as the APIs mature (which I guess should be any week now?).

    Ok, that all said, here we go, for real.  Cinder-Kinect2 is the simpler of the two solutions; granted, they're both simple, Cinder-Kinect2 just requires less to get started.  How simple?  Well, assuming you have the K4W DPP SDK installed...

-> Clone or fork Ian Smith-Heisters' fork of Cinder-Kinect2, as it includes some fixes to account for changes in the 1404 Developer Preview SDK.

-> "Install" it as you would any ordinary Cinder block, i.e. clone your chosen repo into your <cinder_root>\blocks folder.

-> Pop open a sample, make whatever changes you need, if any, to the Project Properties.

-> Build and Run. ??? Profit!

    As of this writing, the main fork of Cinder-Kinect2 doesn't take the 1404 changes into account.  If you're interested in fixing it yourself, you really just need to make some minor changes involving commenting out or removing references to KinectStatus, as it's been deprecated (replacement pending), but the good Mr. Heisters has also added a few other features that make his branch worth taking a look at.

    The interface itself is as simple as, if not simpler than, the installation process.  Here's a sample image and a snippet of the code that produced it, cribbed from the BasicApp and BodyApp samples:


void K2TutorialApp::update()
{
  if(mKinect->getFrame().getTimeStamp()>mFrame.getTimeStamp())
  {
    mFrame = mKinect->getFrame();
    if(mFrame.getColor())
    {
      // create the texture once per new frame, not on every draw()
      mRgbTex = gl::Texture::create(mFrame.getColor());
      auto cBodies = mFrame.getBodies();
      if(cBodies.size()>0)
        mJoints = cBodies.at(0).getJointMap();
    }
  }
}

void K2TutorialApp::draw()
{
  gl::clear(Color(0, 0, 0));
  if(mRgbTex)
  {
    gl::draw(mRgbTex,
          Rectf(Vec2f::zero(), getWindowSize()));

    if(mJoints.size()>0)
    {
      gl::pushMatrices();
      gl::scale(Vec2f(getWindowSize()) / Vec2f(mRgbTex->getSize()));
      for(auto cJoint : mJoints)
      {
        Vec2f cp = Kinect2::mapBodyCoordToColor(
                cJoint.second.getPosition(),
                mKinect->getCoordinateMapper());
        gl::drawSolidCircle(cp, 10);
      }
      gl::popMatrices();
    }
  }
}
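
    For reference, here's roughly the set of members a snippet like the one above assumes on the app class.  These declarations are my own reconstruction from memory of the block's samples, so treat the exact type names as placeholders and check BasicApp/BodyApp for the real ones:

```cpp
// Approximate member declarations (check the block's samples for exact types):
Kinect2::DeviceRef  mKinect;  // the sensor, created and started in setup()
Kinect2::Frame      mFrame;   // most recent frame we've consumed
gl::TextureRef      mRgbTex;  // color frame uploaded as a texture
std::map<JointType, Kinect2::Body::Joint>  mJoints;  // first tracked body's joints
```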

Resources

Tuesday, April 15, 2014

A video is worth 30,000 words per second?


    Just popping in to say that if you haven't checked out my Vimeo channel recently...err...there's not much new stuff there, but there's more fun stuff coming (along with new blog posts that I've promised people and myself).  I'm considering doing some Cinder video tutorials if I ever find some free time, not sure about what, maybe covering Kinect, the Intel depth cameras, that sort of thing...not sure really, we'll see.  But anyway, yeah, bookmark my channel, show your friends, loved ones, your inner circle, all that.  If nothing else, it's good for a tiny bit of inspiration...maybe.

>>> Me on Vimeo <<<

Friday, March 21, 2014

Cinder-Assimp, VS2012, and Cinder 0.8.5

If you're in a hurry, absolutely need to get Cinder-Assimp functional, and don't have time to read my drivel...TOO BAD!  I kid, I kid, here's all you need to know:
  • Download the pre-built Assimp 3.0 libs from Eric Renaud-Houde's Cinder-Skinning block and stick them in Cinder-Assimp's lib/msw folder
  • Pop open AssimpLoader.cpp and change line 460-ish from cam.setFovHorizontal to cam.setFov
  • Shake your head sadly at the fact that I'm so desperate for attention that I took a two-step process and turned it into the yarn below, which you may optionally read.  Yep, I'm definitely management material.

    More often than not, when we need to deal with 3d content here in Lab land, we tend to use Unity as our framework of choice, though with the dropping of the thermonuclear bomb that is UE4 on reality this week, that may be changing.  The problem, as you can imagine, is that sometimes we don't really need something as extensive as Unity, but we end up using it anyway because we don't have options, which leads to some really interesting design decisions.  Recently, we had a project that required only a minimal amount of 3d content management, so little that it would've been nice to keep the whole project in Cinder, but since our content was skinned and animated, Cinder's built-in OBJLoader obviously wouldn't suffice.  So we were stuck in a situation where we really needed the absolute most minimal subset of Unity's functionality, but because it was our only option, it required quite a bit of one-off development to turn Unity into something like a usable component.  Well, gee Seth, why didn't you just use Cinder-Assimp?  Great question...


...check out the big brain on the Cinder forums.

    Cinder 0.8.5 is an interesting beast; there seem to be quite a few interface changes from previous versions that, if the dev docs are to be believed, may be re-appearing in 0.8.6.  That was probably the thing that kept us from designing with Cinder-Assimp in mind originally: we couldn't really get it to work.  Thankfully, we had some time to revisit the project and wrap it up, so I made it a priority to figure out Cinder-Assimp.  If you're a Cinder user, you're probably familiar with Gabor Papp's work; if not, well, it's good, and if he says it worked, then at some point it did, so I figured it was more an error on my part than the block's.  Turns out I was right.

    First, all credit where it's due: it was actually Eric Renaud-Houde's excellent Cinder-Skinning block that solved the major problem, that of getting compatible libraries.  Sure, I could've just built Assimp myself against VS 2012, which also would have solved that problem, but hey, I'm lazy and more than happy to let other people do work for me, although I'll probably be running a build of it myself going forward just to make sure I have it.  As much as I'm excited about UE4, I'm definitely not abandoning Cinder...ever.  I like to think of UE4 (or whatever game engine I use) as the infantry and Cinder as Spetsnaz.  Anyway, if you grab the pre-built Assimp libraries linked on Eric's repo for Cinder-Skinning, you can link the Cinder-Assimp samples against them.  But wait, there's probably one more error you might run up against...


...i know, i know, it's always one more thing.

    This..."fix", as I alluded to earlier, gives me a bit of pause, but hey, if you need to get work done, you need to get work done.  As you can see, the missing setFovHorizontal() will be (is) returning (here-ish), so my actual advice is to man up, work out of the dev branch, and build your own version of Assimp, which you'll probably want to do anyway since Cinder is also moving to Boost 1.55, so why not just build everything against the same Boost?  But again, if that's not an option, you can just patch AssimpLoader.cpp to use the setFov() method vs. setFovHorizontal() for now.  So to recap (stop me if you've heard this one before):
  • Download the pre-built Assimp 3.0 libs from Eric Renaud-Houde's Cinder-Skinning block and stick them in Cinder-Assimp's lib/msw folder
  • Pop open AssimpLoader.cpp and change line 460-ish from cam.setFovHorizontal to cam.setFov

    Now to finish up some prototypes so I can go to a Systema seminar this weekend and not sleep at the office.  Hopefully I'll carve out some time to play with UE4 as well, although I had an interesting conversation with Stephen Schieberl, also of Cinder fame, at eyeo festival last year, where he alluded to the idea that Team Cinder's been looking at game engines and thinking about how to bring some of those ideas, especially on the content creation side, into Cinder land, so...maybe UE4 is just a passing fancy too.  The future is exciting, my friends; go make something.


...in the future, all UIs will be MADE OF CASCADE PARTICLES!!!

Sunday, March 9, 2014

The Human Triangulation Experiment, Stage 1

    Recently, we lucky saps in the Perceptual Computing Lab have been fortunate enough to be doing some prototypes for different external groups, and I've actually been lucky enough to work with one of my favorite groups, [REDACTED]!  Needless to say, I'm super excited, and one of the first projects I'm working on involves using depth data and object/user segmentation data to interact with virtual/digital content, much like the ever-popular kinect/box2d experiments you've probably seen floating around...


Check out more of Stephen's stuff on github or on his website.

    Depth buffer, OpenCV, cinder::Triangulator, and Box2D: seemed pretty straightforward, I mean, let's be honest, that's creative coding 101, right?  That's what I thought, but as usual, the devil's in the details, and after some (not terribly extensive) searching and a little bit more coding I had...eh, well...nothing.  My code looked correct, but no meshes were drawn that day, and even a cursory inspection of my TriMesh2ds revealed nary a vertex to be seen.  Here's what I tried originally (this is sketch code, so yeah, the pattern is a little sloppy):

//cv contours are in mContours, mMeshes is vector<TriMesh2d>
for(auto vit=mContours.begin();vit!=mContours.end();++vit)
{
  Shape2d cShape;
  vector<cv::Point> contour = *vit;
  auto pit=contour.begin();
  cShape.moveTo(pit->x,pit->y); ++pit;
  for(/* nothing to see here */;pit!=contour.end();++pit)
  {
    cShape.lineTo(pit->x, pit->y);
    cShape.moveTo(pit->x, pit->y);
  }
  cShape.close();
  Triangulator tris(cShape);
  mMeshes.push_back(tris.calcMesh());
}

    Right, so at this point, it should be a simple exercise in gl::draw()ing the contents of mMeshes, yeah?  Sadly, this method yields no trimesh for you!, and as I mentioned above, even a quick call to getNumVertices() revealed that there were, in fact, no vertices for you!, either.  The docs on Triangulator led me to believe that you can just call the constructor with a Shape2d and you should be good to go, and a quick test reveals that constructing a Triangulator with other objects does in fact yield all the verts you could ever want, so methinks maybe it's an issue with the Shape2d implementation, or perhaps I'm building my Shape2d wrong.  I rule the latter out, though (well, not decisively), since Triangulator has the concept of invalid inputs, e.g. if you don't close() your Shape2d, the constructor throws, so...what to do, what to do?  To the SampleCave!


TRIANGULATE AGAIN, ONE YEAR! NEXT!

    Mike Bostock, he of d3.js fame, gave a great talk at eyeo festival last year on the importance of good examples (Watch it on Vimeo), and you know, it's so true.  It's sorta like documentation: we employ technical writers for that sorta thing, and I feel like we should at least give some folks a solid contract to put together good sample code for whatever we're foisting onto the world, rather than relegating samples to free time and interns (no offense to either free time or interns).  Now, Cinder has amazing sample code, so a quick Google search for TriMesh2d popped up the PolygonBoolean sample, which was basically doing what I wanted, i.e. constructing and drawing a TriMesh2d from a Shape2d...kinda.  I trust the good folks at Team Cinder not to ship sample code that doesn't work, so a quick build 'n' run later and I had a solution.  I was sooooo close...

//cv contours are in mContours, mMeshes is vector<TriMesh2d>
for(auto vit=mContours.begin();vit!=mContours.end();++vit)
{
  PolyLine2f cShape;
  vector<cv::Point> contour = *vit;
  for(auto pit=contour.begin();pit!=contour.end();++pit)
  {
    cShape.push_back(fromOcv(*pit));
  }
  Triangulator tris(cShape);
  mMeshes.push_back(tris.calcMesh());
}

    The results?  Well, see for yourself:


My tribute to Harold Ramis, may you never end up in one of your own traps, sir.

    Next steps are to maybe run some reduction/smoothing on the contours (although I suppose it doesn't matter terribly for this prototype) and get it into Box2D, all of which I'll cover in Stage 2, including a quick 'n' dirty Cinder-based Box2D debug draw class.  This is awesome; it's total Tron stuff, bringing the real into the digital and all that sort of sorcery.  Once I get the Box2D stuff implemented, I'll stick a project up on github; until then, if you have specific questions, Tag The Inbox or leave a comment below, you are always welcome to try my Shogun Style...for reference, here's the complete update() and draw():

//Using Cinder-OpenCV and Intel Perceptual Computing SDK 2013
void segcvtestApp::update()
{
  mContours.clear();
  mMeshes.clear();
  if(mPXC.AcquireFrame(true))
  {
    PXCImage *rgbImg = mPXC.QueryImage(PXCImage::IMAGE_TYPE_COLOR);
    PXCImage *segImg = mPXC.QuerySegmentationImage();
    PXCImage::ImageData rgbData, segData;
    if(rgbImg->AcquireAccess(PXCImage::ACCESS_READ, &rgbData)>=PXC_STATUS_NO_ERROR)
    {
      mRGB=gl::Texture(rgbData.planes[0],GL_BGR,640,480);
      rgbImg->ReleaseAccess(&rgbData);
    }
    if(segImg->AcquireAccess(PXCImage::ACCESS_READ, &segData)>=PXC_STATUS_NO_ERROR)
    {
      mSeg=gl::Texture(segData.planes[0],GL_LUMINANCE,320,240);
      segImg->ReleaseAccess(&segData);
    }

    mSrcSurf = Surface(mSeg);
    ip::resize(mSrcSurf, &mDstSurf);
    mPXC.ReleaseFrame();
  }

  cv::Mat surfMat(toOcv(mDstSurf.getChannelRed()));
  cv::findContours(surfMat, mContours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

  for(auto vit=mContours.begin();vit!=mContours.end();++vit)
  {
    PolyLine2f cLine;
    vector<cv::Point> contour = *vit;

    for(auto pit=contour.begin();pit!=contour.end();++pit)
    {
      cLine.push_back(fromOcv(*pit));
    }

    Triangulator tris(cLine);
    mMeshes.push_back(tris.calcMesh());
  }
}

void segcvtestApp::draw()
{
  // draw camera feed
  gl::clear(Color( 0, 0, 0 ) );
  gl::color(Color::white());
  gl::draw(mRGB, Vec2f::zero());

  //draw meshes
  gl::enableWireframe();
  gl::color(Color(0,1,0));
  for(auto mit=mMeshes.begin();mit!=mMeshes.end();++mit)
  {
    gl::draw(*mit);
  }
  gl::disableWireframe();
}

Sunday, June 9, 2013

EyeO Recap[0]

    My brain is COMPLETELY full.  I mean, honestly, I don't even know where to begin; there were so many great presentations, so many awesome conversations about random cool things, and so many good ideas tossed around, and...I took a bunch of notes on individual sessions, which I'll toss up on the google drive at some point, maybe in about an hour when I go sit down for lunch (aside: MSP is one of the most connected airports I've ever been in.  Seriously, SFO and SJC, step up), and once some of the noise dies down I'll put up my actual thoughts.  I'd started some posts about each day, but eventually it just got to the point where I couldn't digest things fast enough to come up with coherent thoughts on a nightly basis.  Granted, there were open bars involved too, but I didn't really stay out that late, something I plan to remedy next year.


This guy is a genius, listening to him talk, you can totally see how he would build something like Hypercard. Photo by Charlie Cramer

    If I had to pull out one takeaway right now, Bill Atkinson (of Hypercard fame) said it best: learning to code is cool, but take a different approach.  There's the approach that says "I want to learn how to code", and there's the approach that says "I want to do something cool, and I'll need to learn how to code to make it happen", or, alternately expressed, forget about how you want to do something and focus more on what you want to do and why you want to do it.  That's not to say that tools, frameworks, languages, etc. don't matter, but I've always held that application is the best way to learn something: learning through doing, learning through projects, that sort of thing.  So often I hear people say, "Well, why would I need to learn to code?" or "Ok, I know some basics, but what do I do next?"  Answer that problem first, and learning to code becomes easy.  Learn the things you need to learn for this project, then build off of them for the next project (or shoot off in a different direction and learn new things; either way is a great approach).  But don't get so caught up in learning how to code, or learning every particular of a language, framework, paradigm, or process, that you forget to make something beautiful.  As I ranted to a co-worker a little while back, "There is no perfect tool or SDK/API.  Unity, UDK, Max, Maya, Cinder, ofx, processing, Windows, Mac OS, they ALL suck."  But I'm going to append that with: they all make beautiful things.  So quit whining and make cool shit.

    My other takeaway is that Memo Akten is in fact a machine, but his talk was probably the most personally inspiring.  More on that later, but suffice it to say, I used to think I was crazy until I heard his talk.

    EyeO this year definitely struck me as more about experience than technology.  I think this is both good and bad: good because experience is really what matters at the end of the day, bad because so many people now are talking about experience, more so than building it.  It would be really sad to see EyeO become a glorified UX/HCI conference, and I really hope they keep to the trend of only recruiting speakers who have actually MADE stuff.  I don't see that trend changing, and I really hope EyeO continues to feel the way it did this year.  Sometimes it takes a long journey fraught with setbacks, delays, time spent wandering the wilderness, time spent off the trail for a bit, or time looking at the map trying to remember where it was you were going in the first place, before you reach home.  I'm not there yet, but after EyeO this year, I feel like I'm passing the last few mile markers.  GDC had been a home of sorts for too many years; really looking forward to this new place.  It feels real.

Thursday, June 6, 2013

EyeO Festival, Day 0

    I'm not even going to try and quote Zach Lieberman's excellent keynote, it'll be up on Vimeo anyway.  I can't do it justice, but he really did say some things that lit a fire under me, especially when he called out people who spend their time building corporate demos for tradeshows and posited that CEOs shouldn't be the ones onstage telling the world what the future's going to be.  It's true, and I urge everyone, especially any of my colleagues in Perceptual Computing to keep an eye on Vimeo for when it's released (or just my facebook, since I'll be posting and reposting it probably a few times an hour).

    Watching the Ignite talks at EyeO tonight, I realized a few things:
  • I'm total weak-sauce for not submitting an Ignite talk, since I wouldn't have been the only first time speaker.
  • I haven't really given a talk on anything I REALLY care about since I spoke at Ringling.
  • If I want to do an Ignite talk next year, I need to start prepping now.  Holy jeebus those speakers were GOOD.
In fact, the last sorta public speech I gave, I was doing the exact opposite of that, i.e. I was being made to champion a cause I absolutely didn't believe in and tell a story that...wasn't really a story.  Seeing all those folks just KILL IT during the Ignite presentations really reminded me of what I love about public speaking, and that's telling stories I care about that give me an opportunity to connect with my listeners and inspire them.  I need the opportunity to do that again because, "Without Change, Something Sleeps Inside Us and Seldom Wakens."


I feel like I've been sleepwalking for the last month at least

    But enough positing!  There was actual content today, so let's talk about interesting things instead of listening to me spew opinion.  The first of two workshops today was a great intro to D3 from Scott Murray.  I'd been looking at d3 earlier this year and getting to play around with it reminded me of why I liked d3 in the first place and why I wouldn't mind doing a bit more javascript work.  I don't know if it's proper to say I want to do more javascript work, I think it's more that there are certain libraries that let me do certain things that just happen to be javascript, so really what I want to do is more work with said libraries and if that means a particular language, I guess that's it.  I mean, if it were about a particular language/toolkit's available libraries, by that logic I'd be much more of an ofx fan than a Cinder fan, yeah?  I'm really excited about the idea of using d3 and Three together, hadn't really thought much about that but the idea popped up today.  Could be fun.


It's kinda nuts how easy it is to make these in d3, even with animation and interactivity

    The second workshop was an applied math tutorial from the man himself, Memo Akten.  This course just reinforced to me how we really need to rethink the way we teach math in this country.  It also confirmed my suspicion that Memo is a machine.  Seriously, the way he talks about numbers and math, you can just tell he processes all that stuff differently than normal humans do, like...he SEES in linear algebra and trig.  I'm totally jealous, hopefully it'll come with practice.  The two things that I feel made this class work were a) the fact that the information was presented in such a way that concepts either built on top of each other or were otherwise shown in relation to each other and b) PRACTICAL EXAMPLES!  Honestly, I've always been comfortable-ish with Trig, but seeing some practical examples, like projector-to-camera mapping, for example, really just locked it in.  I put some notes online, they're probably only useful to me, but you're welcome to take a peek anyway: EyeO 2013 Applied Math Notes


...they're useful and make sense!

    Day 0 wrapped up with Zach Lieberman's amazing keynote and an incredible round of Ignite talks.  I really can't do them justice, so keep checking Vimeo for them, they're all worth watching.  Onto Day 1 for talks and hack-a-thons!

Tuesday, June 4, 2013

Eye-O Festival, Day -1

    I came to a really interesting realization this morning over some incredibly hot coffee (plus, Dunn Bros coffee on 15th? Yes, just, yes) that this may be the first time I've gone to a conference where I'm actually NOT interested in networking.  I mean, don't get me wrong, it'd be great to get my name out into the space a little, but I'd rather do it through real work.  Putting a few samples on github doesn't really count as compelling work, no matter how extensive the development might be (for the record, it's not much at this point).

    In all honesty, my interest lies in mapping the space, and again, not from a networking standpoint; I'm just very interested to see who's doing what in general, mainly because I think all this stuff is really cool (oddly enough, which is what brought me to Intel).  I think if I had meant to network really hard, I would've tried a bit harder to make sure I brought business cards, so maybe my brain just knows things that I don't.  Yeah, so, really excited to get a look at what people are working on and just spend a few days writing code.  Workshops start tomorrow: first up, D3 with Scott Murray, followed by Applied Math with the man himself, Memo Akten.  Mind preparing to be...expanded at least, probably blown.


My God, It's Full Of Code...

    So, here's a subject I don't talk about a whole lot, but it's definitely something I think about, for one reason or other.  This particular thought thread stems from my reading of this article: Intel Capital creates hundred million dollar Perceptual Computing fund.  Now, aside from this reinforcing my belief that it's time for me to go independent, a few things caught my eye.  First, there was the opening statement:

"That’s a lot of money, tech-art fans."

Second, the term "Tech Art" appeared in the Categories list for this article.  So of course I had to click on it and see what other articles fell under that category.  Quite an interesting list, one of them being an article highlighting the release of processing 2.  Being here at Eye-O festival, now somewhat surrounded by people who make art by writing code, really makes me ponder that term "Tech Art" and what a "Tech Artist" really is.  I'm probably not very qualified to speak on what the future of "Tech Art" as a game development discipline is, but ultimately, I'm not really sure there's such a thing as a Tech Artist in games anymore.  Well that's not true, but I definitely think they're becoming fewer and further between.



When code meets art (or at least my feeble attempt)...

    You see, somewhere back up the line, Tech Artists became much more specialized, almost to the point where I'm not sure the title "Tech Artist" was really applicable anymore.  All of a sudden we had riggers, technical animators, DCC tools developers, shader programmers, even physics artists, but to me, a proper Tech Artist was all of these things.  I wrote my first auto-rigging tool in 2002, and I wrote my first Cg (proper Cg, not CgFX) shader not much later, and you know, back then that was the job.  When that "normal mapping" thing started to be whispered in games circles (a lot longer ago than most of you kids think), I was one of the first people to write a normal map extractor for Maya and a Mental Ray shader to test it.  Yep.  And again, that was just the job.  A Tech Artist was rigger, DCC tools programmer, shader writer, FX artist, jack of many different languages, and sometimes even modeler and renderer.  I feel like nowadays that original diversity and spirit of exploration that once defined tech art is gone, and now the extent of it is finding new ways to solve the same old problem inside whatever tech art sandbox you've chosen.  Sheesh...borrow someone else's solution and use all that free time to learn something NEW; trust me, your pipeline isn't that complex, and your toolchain requirements aren't that special.  Your production process is not a unique, delicate flower, for shame.

    I can't really say if the current trend of specialization is going to continue; I imagine it will, and people will make the argument that the increasing complexity of AAA game content requires it, but you know, I went from PS2 and low-spec PC all the way to the dusk of the 360 and PS3, and I think I chose to work faster and smarter rather than continuing to add complexity to my chosen sandbox (or job security, whichever you want to call it).  I think it's that approach to Tech Art that continues to serve me well in the world of RED, and just to get a little sentimental, it warms my heart to see that original spirit of Tech Art living on as Creative Coding.


    Andrew Bell said it best at Eye-O last year: TDs (and of course, TAs) are Creative Coders for games and film.  I think this was much more true back in the day (FUCK ME I'M OLD), and I'd like to see Tech Artists return to that original spirit of exploration and diversity, rather than continuing to play "how many ways can I make up to solve the same Maya problems everyone else has already solved?".  That said, tomorrow is Eye-O!  Time to shut my mouth, open my eyes, and engage my brain.  This should be awesome.