Friday, February 24, 2012

sound advice

        Hey, gimme a break, I went to bed at like 1 last night and I've been in a bit of a crunch this week.  I keep saying that one of these days I'm going to start a job during a project that's NOT in crunch, but you know, it's a fun crunch.  Had the first dog-n-pony with the execs, and I gotta say our team is working on some cool shit.  We got to postpone our demo for a week, which is good because I've only been in the code for about a week and a half, and the scope of our demo is pretty impressive.  I mean, not really, but given the time we've had to put it together...

It seemed like a manageable project at the time..!

        So I'm still working on the Kinect camera tutorial; I'm actually going to go in and tighten up the implementation, along with the graphics on level 3, so I'll have a bit of a better presentation.  It's a bit sad that I didn't have more time to really come up with some cool camera control; I was hoping to do a little more than just build a mouse-driver type thing, but you know, it's a good starting point.  I feel like the implementation I came up with is pretty solid; it's a good hybrid of a few different paradigms that makes good sense.  Anyway, hopefully that's a fairly tantalizing preview.  I'll go over the basics of how to set up the zigFu/OpenNI stuff too so you can further hack away on Kinect.  If you haven't messed around with it yet, you should; I'm surprised it took me this long to get into it.


Thompson Eye Phone and Sogo-7s optional...

        Alright, on to some possibly meatier content, although I may be the only person who's run into this issue.  If that's the case, you guys are all jerks for not posting solutions!!  One of the things we've been beating our heads against for the last couple of days is how to manage sound, in particular playing sounds on events such as collisions.  Now, that's a fairly straightforward problem that you could solve in a few different ways, but here are some ways NOT to solve it:
  • The rather naive and seemingly obvious audio.Play() in Update().  It only seems obvious at very first glance
  • Calling Play() immediately prior to a seemingly obvious Destroy() call.  Again, it only seems obvious at very first glance (see the sketch right below for why both of these bite you)
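
        Just to spell out why those fall over, here's a minimal sketch of both traps (the hasCollided flag is hypothetical, just there to make the point):

    // Trap #1: Play() in Update() restarts the clip every single frame,
    // so you get a stutter instead of the sound.
    void Update()
    {
        if (hasCollided)      // hypothetical flag set elsewhere
            audio.Play();
    }

    // Trap #2: the AudioSource is destroyed along with the GameObject,
    // so the clip gets cut off before it finishes (or before it even starts).
    void OnCollisionEnter(Collision collision)
    {
        audio.Play();         // starts playback on this object's AudioSource...
        Destroy(gameObject);  // ...and immediately kills it, clip and all
    }
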
        So, the first method we came up with was a bit brute force, I'll admit, but if you know me, that probably comes as no surprise.  We set up a GameObject that's basically an audio container, so actually it's a GameObject and a script that exposes a bunch of public AudioClips.  You probably see where I'm going with this; if you guessed AudioSource.PlayOneShot() or AudioSource.PlayClipAtPoint(), pour yourself an expensive single malt shot.  For some reason, this made the audio sound really wonky, and it wasn't a 3D issue or a playing-multiple-clips issue.  We could have wrapped either one of these to get a bit more control over the AudioSource, but I felt like that was probably overkill.  For a larger project I could see the benefit of this, though, and it's definitely something I'll be exploring more.  I have a ton of questions about this pattern, performance issues mainly.
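
        For reference, the container looked more or less like this; the class, field, and object names are placeholders, not necessarily what we actually had (and in practice the two classes live in separate files):

    using UnityEngine;

    // The audio container: one GameObject in the scene, one script,
    // a pile of public AudioClips hooked up in the Inspector.
    public class AudioContainer : MonoBehaviour
    {
        public AudioClip bounceClip;
        public AudioClip explodeClip;
    }

    // A collision script somewhere else grabs the container and fires a clip.
    public class ExplodeOnHit : MonoBehaviour
    {
        public AudioContainer audioContainer;  // dragged in via the Inspector

        void OnCollisionEnter(Collision collision)
        {
            // PlayClipAtPoint spawns a temporary GameObject with its own
            // AudioSource, so the sound survives even if this object dies.
            AudioSource.PlayClipAtPoint(audioContainer.explodeClip,
                                        transform.position);
            Destroy(gameObject);
        }
    }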


Brute Force? Heh, more like ME Force, amirite??

        The second method is about as straightforward, but I had some perf concerns again.  It's funny how I'm still in game-developer mode and trying to make the transition to blue-sky R&D developer mode.  Performance?  Hah, we care not for your framerates and memory budgets!!  Or something...  But yeah, basically we ended up attaching multiple AudioSources to the prefab and managing them in script.  The code was pretty simple, something to this effect:
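
        Roughly this shape, with the source and object names made up for the example (the prefab has two AudioSources attached in the editor, one per sound):

    using UnityEngine;

    // Lives on the prefab; the two AudioSources on the prefab are dragged
    // onto these fields in the Inspector.
    public class CollisionAudio : MonoBehaviour
    {
        public AudioSource bounceSource;
        public AudioSource explodeSource;

        void OnCollisionEnter(Collision collision)
        {
            // Pick which source to fire based on what we hit.
            if (collision.gameObject.name == "Floor")
                bounceSource.Play();
            else
                explodeSource.Play();
        }
    }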


        Obviously this is super naive and you'd want to do a bunch of other checks, but you get the idea; it's sketch code, whadyawant?  I'm not very sanguine about this method, largely due to the multiple AudioSources on each prefab (again, perf concerns).  I may just be paranoid still...

        The solution we ended up going with is not ideal, but it serves as a bit of a springboard for a pattern I might have used if I had been thinking about it.  For each collision behavior, we exposed an AudioClip for that behavior, then switched out the object's AudioClip in OnCollisionEnter() based on the name of the colliding object.  I.e., we have an object in the world, and based on the projectile's name, we swap out the AudioClip and play the AudioSource.  Again, I don't feel like this is super ideal, because then we have assets all over the place, but it definitely made life easier for me because I could just concentrate on one behavior at a time.  So we're looking at something like this:
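
        Something along these lines (the projectile and clip names are just for illustration):

    using UnityEngine;

    // One AudioSource on the object; swap its clip based on what hit us,
    // then play.  Keying off the name string is brittle, but quick to wire up.
    public class CollisionClipSwap : MonoBehaviour
    {
        public AudioClip rocketHitClip;
        public AudioClip bulletHitClip;

        void OnCollisionEnter(Collision collision)
        {
            if (collision.gameObject.name == "Rocket")
                audio.clip = rocketHitClip;
            else if (collision.gameObject.name == "Bullet")
                audio.clip = bulletHitClip;

            audio.Play();
        }
    }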


        Again, super naive, and there are better ways to do the specifics, but you get the idea.  This seems to be working for now, so I'm going to just STFG and keep in mind what I've learned.

looks-good-to-me-ship-it

        I think, in retrospect, I would have gone with some method that was a hybrid of all of these: maybe have an audio-manager-type script attached to the prefab that exposes all the AudioClips I want to play, then have each collision behavior poke into the audio manager.  I like the idea of keeping collision behaviors as separate scripts, just because I hate looking at big scripts that manage everything.  That could just be me, honestly, I ain't to guud at readin teh codez, not bein a real programmer and all...
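
        Something like this, maybe (completely untested, names invented on the spot, and the two classes would be separate files on the same prefab):

    using UnityEngine;

    // Audio manager on the prefab: owns the clips and the single AudioSource;
    // collision behaviors just ask it to play something.
    public class PrefabAudioManager : MonoBehaviour
    {
        public AudioClip bounceClip;
        public AudioClip explodeClip;

        public void PlayBounce()  { audio.PlayOneShot(bounceClip); }
        public void PlayExplode() { audio.PlayOneShot(explodeClip); }
    }

    // A small, separate collision behavior that pokes into the manager.
    public class ExplodeOnRocketHit : MonoBehaviour
    {
        void OnCollisionEnter(Collision collision)
        {
            if (collision.gameObject.name == "Rocket")
                GetComponent<PrefabAudioManager>().PlayExplode();
        }
    }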

        Thoughts?  I'd love to hear from anyone else about how you tackle this sort of thing.  At some point I want to go back into library/SDK developer mode and start wrapping up a bunch of functionality like this for future Unity projects...
