Hey guys,
I'm testing out FaceFX 2009 at the moment and I've brought in my actor and configured his mapping table, face graph, etc., but when I do audio analysis I get animation that is very 'flappy'.
Is there any process I can implement which will limit this? I've tried clamping nodes in the face graph and also adjusted the amounts shapes get pushed to in the mapping table, but I still get very busy results.
The audio files I'm using are a broad subset of the types of things we would use in production and as such are generally quite fast bits of speech, so slowing down the delivery of audio is not really an option either...
Thanks for any tips or suggestions...
Matt.
Hi Matt,
One thing to check is targets & mapping. If the same audio looks good on our sample characters, then you may be able to improve your results by tweaking the targets or the mapping. Have you customized the mapping?
If the audio looks too flappy on the sample characters too, then generally the problem is that there are "too many" phonemes. If you have an animator chained to a pole somewhere, culling the phoneme list by hand will yield good results, but we are looking into ways to automate the solution. FaceFX 2009 includes a "FunnelPhonemes.py" script that attempts to remove unnecessary phonemes from the selected animation. It's an early attempt; one of the customizable "quick-launch" buttons in the upper-right runs the script.
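To give a feel for what "culling" means here, below is a rough, self-contained sketch of the idea. It is not the actual FunnelPhonemes.py code or the FaceFX scripting API; the tuple format, threshold, and function name are assumptions made just for this illustration.

# Illustrative sketch only -- not the actual FunnelPhonemes.py logic or FaceFX API.
# A phoneme here is just a (symbol, start_time, end_time) tuple, times in seconds.

MIN_DURATION = 0.05  # assumed threshold: phonemes shorter than this get folded away

def funnel_phonemes(phonemes):
    """Drop very short phonemes and let the previous phoneme absorb their time."""
    culled = []
    for sym, start, end in phonemes:
        if culled and (end - start) < MIN_DURATION:
            # Too short to read visually: extend the previous phoneme instead.
            prev_sym, prev_start, _ = culled[-1]
            culled[-1] = (prev_sym, prev_start, end)
        else:
            culled.append((sym, start, end))
    return culled

print(funnel_phonemes([("HH", 0.00, 0.08), ("EH", 0.08, 0.11),
                       ("L", 0.11, 0.14), ("OW", 0.14, 0.40)]))

Fewer, longer phonemes means fewer peaks driving the targets, which is exactly what takes the "flap" out.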
We've created an improved funnel phonemes script in 2010 (coming out soon), and we've also added a "smooth curves" feature that can improve results when applied to the "open" curve, which is the target most responsible for the flappy look (and a good reason why our component mapping is a good choice for resolving issues like this). Face graph solutions are also a possibility, and a rate-of-speech curve that corrects the open curve could be a decent approach too.
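As a conceptual illustration of what a smoothing pass does (this is not the shipping "smooth curves" implementation, and the key format is just an assumption for the example), a simple moving average over the open curve's keys takes a lot of the jitter out:

# Conceptual sketch of key smoothing -- not the FaceFX "smooth curves" code.
# A curve is a list of (time, value) keys, e.g. the "open" curve.

def smooth_curve(keys, window=3):
    """Average each key's value with its neighbours; keys stay on their original times."""
    half = window // 2
    smoothed = []
    for i, (t, _) in enumerate(keys):
        lo = max(0, i - half)
        hi = min(len(keys), i + half + 1)
        avg = sum(v for _, v in keys[lo:hi]) / (hi - lo)
        smoothed.append((t, avg))
    return smoothed

# A busy "open" curve calms down noticeably after one pass.
open_curve = [(0.0, 0.0), (0.1, 0.9), (0.2, 0.1), (0.3, 0.8), (0.4, 0.0)]
print(smooth_curve(open_curve))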
So in conclusion, we haven't found one solution that eliminates the problem completely, but we are trying to arm you with as many tools as possible to address the issue. We'd very much like to hear back about what techniques you have tried and what you ultimately go with.
Doug
OK, I have rebuilt all of my targets to match the targets in the online help as closely as physically possible, and I have used the default mapping.
When I analyse my audio (and also run the Python script from the toolbar in the top right), the result is not AS flappy, but it's still far from ideal. The biggest issue I am getting now is that I am just not seeing any 'M' or 'D' shapes, which is significantly throwing off the effect with this piece of audio.
I look in the phoneme tab and there are entries there for these shapes, but they just happen far too fast to really be noticed. Is there something I can do to make these important shapes more prominent?
I know I could potentially adjust things manually here, but it's not really going to be feasible for us to throw a bunch of animators at this, as we're likely to have upwards of 10,000 wavs to process in the end product...
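To make it concrete, the sort of automated pass I have in mind (purely illustrative on my side, nothing I've actually scripted against FaceFX) is roughly the reverse of the funnel script: give the closure phonemes a minimum on-screen duration by borrowing a little time from the phoneme that follows.

# Rough illustration of the idea -- not real FaceFX scripting, just plain Python.
# Phonemes are (symbol, start, end) tuples; times in seconds.

CLOSURES = {"M", "B", "P", "D"}   # the shapes I'm not seeing
MIN_CLOSURE = 0.06                # assumed duration that is long enough to read

def emphasize_closures(phonemes):
    """Extend too-short closure phonemes by borrowing time from the next phoneme."""
    phonemes = [list(p) for p in phonemes]
    for i in range(len(phonemes) - 1):
        sym, start, end = phonemes[i]
        if sym in CLOSURES and (end - start) < MIN_CLOSURE:
            needed = MIN_CLOSURE - (end - start)
            nxt = phonemes[i + 1]
            take = min(needed, (nxt[2] - nxt[1]) * 0.5)  # don't crush the neighbour
            phonemes[i][2] += take
            nxt[1] += take
    return [tuple(p) for p in phonemes]

print(emphasize_closures([("AH", 0.00, 0.20), ("M", 0.20, 0.22), ("AA", 0.22, 0.50)]))

If something along those lines could run as a batch step after analysis, it would get us most of the way there without animators touching individual files.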
Sorry about the trouble posting; it got caught up in our filter. I'll look into fixing that. Let's move this over to email support. This is definitely a good discussion to have on the forums, but there isn't much more to be said other than that 2010 will have some new tools for working with this problem.
I'll contact you via email so we can look at some concrete examples.