HeroEngine Forums

HeroEngine Support => Art & Art Pipeline => Topic started by: dmccollum on Oct 31, 11, 12:20:11 PM

Title: [Resolved] Model scale and Facegen question
Post by: dmccollum on Oct 31, 11, 12:20:11 PM
I've been looking through as much of the wiki as I can while I wait for an environment and have come up with a couple of questions around the art pipeline. I apologize if the answers are in the forums or wiki, I just haven't seen the answers yet.

What's the proper 3ds Max scale that I should be using? For example, if I want a character model that's equivalent to 6' tall, what unit size should the model be in 3ds Max?

I think I understand how the Facegen application works for the character models, but I'm still unsure how that works with dynamic models where you want to swap meshes for different clothing. If the Facegen application can provide the head mesh, do you still have to create your own body? If so, do you export the Facegen head and stitch it on in 3ds Max?

Lastly, how does clothing work with Facegen? Let's say without Facegen, you have three different races with three different body sizes. That would mean for each piece of clothing in the game you would need nine different copies of that clothing. Will the Facegen application use one and resize it automatically?

EDIT: The wiki also states that the original HJ team felt that based on the complexity of Facegen, they would have been better off just using multiple standard meshes. Is this still the case?


Title: Re: Model scale and Facegen question
Post by: HE-Cooper on Oct 31, 11, 02:16:35 PM
For Max, check out Bennett's latest tutorials to get set up in advance; they list the scale values.

Without going into too much detail (we've got dozens of pages that talk about dynamic characters), the basic breakdown, as you've hinted at, comes down to morph targets versus mesh swapping. If you have skinny guy, medium guy, fat guy, then you've got 3 different meshes (and, depending on the differences, different skeletons). If you're using scaling, then you've got one mesh with morph targets that are scaled based on data points.
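To make the morph-target half of that comparison concrete, here's a minimal sketch of the general idea: one base mesh plus per-target vertex deltas, blended by weights. All names here (blend_morph_targets, base, fat) are illustrative stand-ins, not HeroEngine or Facegen API.

```python
import numpy as np

def blend_morph_targets(base, targets, weights):
    """Blend vertex positions: base + sum(w_i * (target_i - base))."""
    result = base.astype(float).copy()
    for target, w in zip(targets, weights):
        result += w * (target - base)
    return result

# Base mesh: two vertices of a stand-in "medium guy".
base = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.0]])
# One morph target: a "fat guy" shape pushing the vertices outward.
fat = np.array([[0.2, 0.0, 0.1], [1.4, 2.0, 0.2]])

# Slide 50% of the way toward the fat target.
blended = blend_morph_targets(base, [fat], [0.5])
print(blended)  # halfway between base and fat
```

The trade-off the reply describes falls out of this: mesh swapping ships several complete meshes (more memory, no per-frame math), while morph targets ship one mesh plus deltas and pay the blend cost at run time.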

Everything becomes a push or a pull in different directions: more overhead at run time versus more texture memory, etc. If you are planning on making a straightforward character and gear system, and don't have dedicated technical artists, Facegen technology may bring in too much complexity and overhead for your team. On the other hand, if you do it correctly, Facegen can provide a level of customization across all aspects of your character pipeline that simply can't be replicated with static asset swaps.

I've been on a few different projects where we used the tech in completely different ways. On some we wired up all the different morph targets and texture tinting and let the users go wild (with frequently ugly or broken-looking results); on others we simply made a bunch of great-looking heads and faces and exported them as static atomics for use in asset swaps at character creation.

I *think* The Repopulation is our only current HeroCloud dev where I've seen video of a full-blown working Facegen system in place, but I know there are other teams who have systems working, and a couple that I've seen screens of. Pretty much all have reported that it was a huge challenge for their tech artists, but that they were glad they went down that road.