Thursday, August 30, 2012

the perfect engine

        Every time I type "blogger.com" into my address bar I'm reminded of a rant penned by a game writer a while back, railing against the idea of being called a "journalist" because he was, in fact, a blogger, and as such didn't need to be bound by the tenets of that horribly oppressive, creativity-killing force called "journalistic integrity".  That got me thinking about a conversation I had a little while ago with a colleague about the death of Triple-A game development and consoles as we know them today.  I wondered at the time whether that might give rise to a new form of game journalism, one where game writers aren't afraid to put together well-thought-out, well-written articles aimed at an older readership, as opposed to the current race for the eyes, hearts, and minds of 13-year-old boys.  Oh well, one can dream.


Shuriken Particles

        This idea of games and misguided representation, self-representation or otherwise, has been quite a topic of thought for me recently, especially where the term "interaction" is concerned.  Don't get me wrong, I love lying in bed playing Xbox games on the ceiling (projectors, so awesome) or sitting at my PC playing whatever F2P MMO has my attention for the next month, but I've started to re-classify this sort of activity as something other than "interaction".  Sure, in the loosest sense of the word it's an interactive activity, but when I think of myself as an interactive developer/artist/performer, this is not what I classify as "interaction" or "interactivity".  I'll concede that games are a subset of a broader interactive media and entertainment category, but games do get most of the spotlight and are driving a lot of the technology in that space.


Relentless, The REV

      Therein lies a problem.  We can look at things like the Wii and the Kinect and talk about how they redefined interaction, and sure, they did, but this kind of merging of the physical and digital worlds through different interaction paradigms is based on much older work (watch Underkoffler's TED talk for the details) that most of the public is probably unaware of, or at least has relegated to the realm of science fiction, or more recently to games and academia.  So of course big technology hasn't had a huge incentive to make sweeping changes in how we interact with it, though I suppose that could be chalked up as "good business".  On an unrelated note, I would argue that "good business" is why the country is in such a sorry state, but that's a different blog post by someone who isn't me.


More Shuriken Particles

        So this presents some really funny irony, at least from my perspective.  While a lot of advanced interaction is being developed for and because of games, that idea of "good business" means middleware vendors aren't exactly rushing to build support for this sort of thing into their products, because how do you sell depth camera support to an industry and consumer base that have written depth cameras off as motion-control gimmicks?  Funny, that's the same ideology championed by a man who thought one of the killer features of his game was that you could pick up your own shit, but whatever.  I'm not here to skewer said vendors for leaving that support out, but it does bring up an interesting distinction: "graphics engines" vs "interaction engines".

        It's been a running debate, the idea of Unity vs UDK vs ofx vs Cinder vs whatever, so there's a lot to think about.  The distinction was solidified best for me in a post on the Cinder forums on this very topic.  Someone wrote that "Life is easy if you stick to loading models (animated or not) into the scene graph, setting up transitions triggers, using basic particle emitters...", whereas "Cinder is made in a way that it's easy to import foreign technologies."  That pretty clearly sums up the difference between graphics and interaction engines in my experience.  Being able to get webcam access as a ready-made call, or an easy path to wiring in an external SDK for some non-native hardware, is key to quickly prototyping and building advanced interaction (see the sketch below for what I mean).  While a lot of game engines do have facilities for this, it's not always quick and easy, and I don't feel like that's the way it has to be.  I've always felt that one of the things that's really helped me think in terms of interaction design is having been around games for so long, so it seems logical that there would/could/should be some sort of convergence.  Not that there isn't already; search Vimeo and you'll find games made with Cinder and ofx sitting alongside interactive installations made with Unity.  I think if it were easier to build either type of experience on either platform, we'd truly have convergence between the idea of games and advanced interactivity/interaction.
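        For what I mean by a "ready-made call": the stock Cinder capture sample from around this era looked roughly like the following.  Treat it as a sketch rather than copy-paste material, since class names and the app macro have shuffled around between Cinder versions, but the point stands that the camera is a first-class citizen, not an integration project.

    #include "cinder/app/AppBasic.h"
    #include "cinder/Capture.h"
    #include "cinder/gl/Texture.h"

    using namespace ci;
    using namespace ci::app;

    class WebcamApp : public AppBasic {
      public:
        void setup()
        {
            // One call to open the default camera, no external SDK glue required.
            try {
                mCapture = Capture( 640, 480 );
                mCapture.start();
            }
            catch( ... ) {
                console() << "no camera available" << std::endl;
            }
        }

        void update()
        {
            // Pull the newest frame into a texture whenever the camera has one.
            if( mCapture && mCapture.checkNewFrame() )
                mTexture = gl::Texture( mCapture.getSurface() );
        }

        void draw()
        {
            gl::clear();
            if( mTexture )
                gl::draw( mTexture );
        }

      private:
        Capture      mCapture;
        gl::Texture  mTexture;
    };

    CINDER_APP_BASIC( WebcamApp, RendererGl )

        That's the whole app: camera in, texture out, draw it.  Getting to the same place in most game engines means a plugin, a native wrapper, or a third-party asset, which is exactly the friction I'm talking about.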


Addition Subtraction

        Ultimately, I'm not saying Unity/UDK need gesture recognition or anything like that, just an easier way to get it into the engine.  Keep the .NET integration current, get rid of UnrealScript, don't predicate everything on the idea of mouse and keyboard events (or at least give me an easier way to hook into them; something like the little sketch below), that sort of thing.  There's no reason for game engines to stay in the dark ages of interaction, or for interaction engines to stay in the dark ages of rendering.
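        Purely as a hypothetical sketch of what that hook could look like (none of this is a real Unity or UDK API), imagine the engine exposing a generic input bus that interaction code subscribes to, so a Kinect wrapper, a TUIO bridge, or a plain mouse adapter all feed the same events:

    #include <functional>
    #include <utility>
    #include <vector>

    // Hypothetical: a device-agnostic pointer event.  A mouse, a touch point,
    // or a tracked hand from a depth camera can all be expressed this way.
    struct PointerEvent {
        int   id;        // which cursor / hand / blob
        float x, y, z;   // normalized position; z is free for depth sensors
        bool  engaged;   // "button down" for a mouse, "hand closed" for a Kinect, etc.
    };

    // Hypothetical: the engine-side bus.  Interaction code subscribes to events,
    // device integrations publish them; neither side knows about the other.
    class InputBus {
      public:
        using Handler = std::function<void( const PointerEvent & )>;

        void subscribe( Handler h )           { mHandlers.push_back( std::move( h ) ); }
        void publish( const PointerEvent &e ) { for( auto &h : mHandlers ) h( e ); }

      private:
        std::vector<Handler> mHandlers;
    };

    int main()
    {
        InputBus bus;

        // "Gameplay" side: reacts to abstract pointers, never to a specific device.
        bus.subscribe( []( const PointerEvent &e ) {
            // move a cursor, trigger a grab, whatever the experience needs
        } );

        // "Device" side: a depth-camera wrapper would translate its skeleton or
        // blob data into the same events a mouse adapter produces.
        bus.publish( { 0, 0.5f, 0.5f, 1.2f, true } );
    }

        That's really the whole ask: the engine owning a generic event path instead of assuming everything is a mouse click or a key press.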

        Just some thoughts, nothing really meaningful here; I just hadn't blogged in a while and felt like writing some words.  For the record, I'm using both Unity and Cinder.  I've been playing with some new Unity stuff recently that's got me super excited, and I'll be able to talk about that pretty soon here...


Cymatic Ferrofluid