Recently, a lot of people have been referencing Richard Fine's video on ScriptableObjects. I'm gonna go ahead and rebut it.
A few details: I like ScriptableObjects. I've made videos about how to use them. I certainly don't dislike Richard Fine; I think he does pretty solid work. But I think his obsession with ScriptableObjects is really muddying the waters, and his video is misleading.
Things have changed in the prefab world.
It used to be that if you wanted something shared everywhere, you used a ScriptableObject. No matter the scene, no matter the situation, references to that ScriptableObject would point to the asset in the project, and one change would change it everywhere.
However, in the past few years, prefabs have grown a lot of muscle. It's easy to overlook, because prefabs wear baggy clothes, but now you can (and should) use prefabs for many of the things he recommends using ScriptableObjects for.
(And for the rest, you should use a real database.)
You can reference uninstantiated MonoBehaviours now. You can read their values, call their functions, trigger their events - all without instantiating. What does this mean?
It means that everything ScriptableObjects used to be good for, MonoBehaviours can do... and can unlock much more powerful patterns than ScriptableObjects tend to allow. Unity was built around the concept of prefabs and MonoBehaviours, and now we can merge that power with prefab referencing.
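To make that concrete, here's a minimal sketch (component and field names are invented for illustration) of reading serialized values straight off a prefab that never gets instantiated:

```csharp
using UnityEngine;

// Hypothetical mixin that lives on a ship-part prefab.
public class ReactorStats : MonoBehaviour
{
    public float powerOutput = 100f;
    public float fuelPerSecond = 0.5f;
}

// References the prefab's component directly; nothing is ever instantiated.
public class ReactorReport : MonoBehaviour
{
    // Drag the part prefab's ReactorStats in via the inspector,
    // exactly as you would drag in a ScriptableObject.
    public ReactorStats reactorPrefab;

    void Start()
    {
        // Reads the serialized values straight off the prefab asset.
        Debug.Log($"Power: {reactorPrefab.powerOutput}, fuel/s: {reactorPrefab.fuelPerSecond}");
    }
}
```

One change to the prefab's values shows up everywhere that reference is used, exactly as it would with a shared asset.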
Let me describe the patterns I'm using in The Galactic Line. These are MonoBehaviours-with-muscly-prefabs patterns.
The Magic Mixin
I have a lot of space ship parts. I don't like instantiating them, because they're heavy: they have LODed visuals, sound clips, lights, lots of stuff that I want to leave on the cutting room floor if the ship is too far away to see... but I still want to simulate the ships. Hundreds of ships you can't see, still being simulated as their resources drain away and their mission timers tick up.
Ships are easy enough to simulate, but you need each of their parts to tell you what it does as time passes. What does this reactor do? It drains water and creates power. It drains antimatter and creates power, heat, and radiation. It's pressurized. It's not pressurized. It has these external visuals and collision meshes, these internal visuals and collision meshes. It makes these sounds in these situations. It has beds in it for some reason. Or maybe not.
When I get close, these parts resolve into a GameObject. They have to, because we have to see them and hear them in the scene. I could bend over backwards to avoid making a prefab... but why? It makes sense to make it a prefab, that's what prefabs are for.
The problem is that I want to access all that juicy functional stuff without instantiating the part.
I could make all of them ScriptableObjects, but then we have a staggering number of floating loose ends. A radiation-production ScriptableObject saved off in some other directory, with values specific to this reactor. It makes more sense to have a radiation-production MonoBehaviour and stick it onto the part. I can just customize it right there, no need to have dangling assets. That's what the GameObject is for.
And I can access all of that directly, without instantiating anything. If I want to know how much power this reactor creates and how fast it guzzles fuel, I just ask it. If I want to know if it's air-tight, I just ask it if it has an air-tight mixin. If I want to know what its UI thumbnail is, I just ask it. Want to know what sounds it makes? Just ask. I need to simulate ten weeks of warp travel? Just ask it for those values and multiply by ten weeks.
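Here's a sketch of what "just ask it" can look like, using invented mixin names; every query runs against the prefab asset itself, with no Instantiate call anywhere:

```csharp
using UnityEngine;

// Hypothetical mixins sitting on the part prefab.
public class PressurizedSpace : MonoBehaviour { }
public class ResourceDrain : MonoBehaviour
{
    public string resource = "antimatter";
    public float unitsPerHour = 2f;
}

public static class PartQueries
{
    // Is it air-tight? Just ask whether the mixin is present.
    public static bool IsAirTight(GameObject partPrefab) =>
        partPrefab.GetComponentInChildren<PressurizedSpace>() != null;

    // Ten weeks of warp travel? Sum the drain rates and multiply.
    public static float DrainOver(GameObject partPrefab, float hours)
    {
        float total = 0f;
        foreach (var drain in partPrefab.GetComponentsInChildren<ResourceDrain>())
            total += drain.unitsPerHour * hours;
        return total;
    }
}
```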
I can load everything I need onto the part prefab. It doesn't matter how complex the part is. It doesn't matter whether it involves nested objects or flat mixins. Modders can clone and alter the part, and it won't screw up the default because the values are per-prefab.
In addition to being super easy to take care of and keep track of, the prefab can just be dropped into the scene. Whether it's play mode or edit mode, I can drop it in and it will instantly be the thing it is. I can build in anything I want, visually. LOD? Or not. Lighting or not. Hell, additional cameras displaying to holographic monitors? Sure. Sounds that play differently depending on how much the generator is being taxed? Why not!
In this way, anyone can create ship components: drop on the relevant mixins (such as "Reactor" and "PressurizedSpace") and save. They know what they'll get, and the engine automatically optimizes away any bloat. Advanced users can hook up UnityEvents to create nice triggers and visuals, or even create new scripts. Just drop 'em in. No loose ends, no complicated dependencies.
Now, if the thought of a largely undifferentiated ball of objects makes you cringe... this method of creating and saving content is fundamental to Unity. This is how Unity is engineered to work, and this is how it works most easily.
You can engineer around it, but why? Use the engine in the way it's meant to be used, and you'll have a much easier time of it.
The Meta-Instantiator
My ships have a lot to keep track of, but I don't instantiate any of the parts if I can get away with it. This means I can have a fleet of Star Destroyers, each containing thousands of crew members, and from the perspective of the game it's just a bit of data that gets processed whenever it needs to be.
The ship has a link to the blueprint prefab for the ship class, which in turn contains all the ship parts. If I instantiate that blueprint, I get a nice, shiny ship full of visuals and noises and stuff.
But I don't have to instantiate it. Instead, I crawl through the parts compiling a list of all the resources and potential mission parameters and that jazz. A small algorithm calculates out the next "keyframe" in the ship's future - when the mission completes, when resources get dangerously low, when it reaches its destination, etc.
The universe sim trickles forward at whatever pace the player wants, and when the in-universe time hits that keyframe, we crawl through the ship again to find the new situation and calculate out the next keyframe. This simple method allows us to have thousands of Star Destroyers without any slowdown at all. No per-frame update, we don't even really need an in-scene object to represent them. (And, since they're probably 500 light years away, that's good.)
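Here's a rough sketch of that keyframe loop, with invented names and a single resource; the real system obviously tracks far more than fuel:

```csharp
using System;
using UnityEngine;

// Invented structure for illustration, not the game's actual code.
[Serializable]
public class ShipRecord
{
    public GameObject blueprint;   // ship-class prefab, never instantiated out here
    public float fuel;             // resource level as of lastEvaluated
    public float lastEvaluated;    // sim time of the last crawl
    public float nextKeyframe;     // next sim time this ship needs attention
}

public class UniverseSim : MonoBehaviour
{
    public ShipRecord[] ships;
    public float simTime;
    public float timeScale = 1f;   // the player controls the pace

    void Update()
    {
        simTime += Time.deltaTime * timeScale;
        foreach (var ship in ships)
            if (simTime >= ship.nextKeyframe)
                Reevaluate(ship);
    }

    void Reevaluate(ShipRecord ship)
    {
        // Crawl the blueprint's parts for the current drain rate (hypothetical mixin below).
        float drainPerSecond = 0f;
        foreach (var drain in ship.blueprint.GetComponentsInChildren<FuelDrain>())
            drainPerSecond += drain.unitsPerSecond;

        // Advance the ship's state to "now", then schedule the next keyframe:
        // here, simply the moment fuel runs out.
        ship.fuel -= drainPerSecond * (simTime - ship.lastEvaluated);
        ship.lastEvaluated = simTime;
        ship.nextKeyframe = drainPerSecond > 0f
            ? simTime + ship.fuel / drainPerSecond
            : float.PositiveInfinity;
    }
}

// Hypothetical mixin on the blueprint's parts.
public class FuelDrain : MonoBehaviour { public float unitsPerSecond = 0.01f; }
```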
Now, say we get close to a fleet of Star Destroyers, and the ones nearby start getting instantiated into the game world. The player accidentally rams one, breaking one of those big engines. An NPC commander is generated and yells at the player.
Oh no, this is awful! How do we remember things like a specific engine being broken, or a specific commander existing? We're just referencing a blueprint!
... just save the instance as a custom ship.
Since the blueprint contains a reference to its prefab and each component contains a reference to its own prefab, we can do whatever we want. We can easily save this "baked" blueprint, and then compare its stats to the changes in the definitions of things like engines and reactors. We could even just save the one damaged component, and leave the rest as references.
We can also generate a mission to repair the ship, and when the mission completes, we can delete the custom blueprint and restore it to an ordinary blueprint reference. We could also save the commander - either as part of this ship or separately, as we prefer.
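A minimal sketch of that "baked" save, with invented fields: the saved ship is still mostly a prefab reference, plus overrides for only the parts that diverged.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Invented structure for illustration; not the game's actual save format.
[Serializable]
public class PartOverride
{
    public string partPath;      // where the part sits in the blueprint hierarchy
    public float healthPercent;  // e.g. that big, freshly rammed engine
}

[Serializable]
public class CustomShip
{
    public GameObject blueprint;         // still just the class prefab
    public List<PartOverride> overrides; // only what actually changed
    public string commanderName;         // the NPC who yelled at you, if we keep him
}
```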
Unlike a ScriptableObject, a prefab can easily be cloned, compared, reloaded, partially cloned, merged, instantiated piecemeal...
The Monster's Database
If you're a fan of ScriptableObjects, hopefully by now you're thinking "well for your specific application, sure, but IN GENERAL-"
Most of the uses of ScriptableObjects are as data objects. The big advantage is that there's no extra stuff.
For example, I have a list of various factions and species. If each was a MonoBehaviour, I could drag it onto a property on another class in the same way as I could do with a ScriptableObject - it'd automatically resolve the reference to the prefab as a reference to the specific MonoBehaviour on the prefab. It'd behave exactly like a ScriptableObject, but it'd have a GameObject lurking in the background being... nonoptimal.
I mean, why would you ever instantiate a whole faction, right? Just drag it into the scene? You'd never need that, so it's just junk stapled onto my data class!
I could argue that it allows for mixins, but we already did that. Let's argue for something else.
Before that level of garbage starts to get noticeable, we run into another problem: managing hundreds of ScriptableObjects is just as obnoxious as managing hundreds of prefabs.
There's a reason why GameMaker and RPGMaker use databases for this kind of thing: databases are a really great way to handle it. You could manipulate the editor to create a pseudo-database front end for your ScriptableObjects (or your prefabs), but... JUST MAKE A DATABASE. It's faster, less overhead, and can be easily exported and imported from Excel or an HTML form or whatever.
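As a sketch of what "just make a database" can mean in practice - plain C# rows parsed out of a CSV shipped as a TextAsset. All names and columns here are invented:

```csharp
using System.Collections.Generic;
using UnityEngine;

// One row per faction; plain C#, no Unity asset per entry.
public class FactionRow
{
    public string id;
    public string displayName;
    public int aggression;
}

public class FactionDatabase : MonoBehaviour
{
    public TextAsset csv; // e.g. exported straight from a spreadsheet
    readonly Dictionary<string, FactionRow> rows = new Dictionary<string, FactionRow>();

    void Awake()
    {
        foreach (var line in csv.text.Split('\n'))
        {
            var cells = line.Trim().Split(',');
            if (cells.Length < 3) continue;
            if (!int.TryParse(cells[2], out var aggression)) continue; // also skips a header row
            rows[cells[0]] = new FactionRow
            {
                id = cells[0],
                displayName = cells[1],
                aggression = aggression
            };
        }
    }

    // The inspector-unfriendly part: other components reference entries by id string.
    public FactionRow Get(string id) => rows[id];
}
```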
The big problem with databases is that it's hard to drag a specific entry into a field in the inspector. I don't know the best solution for this; right now I'm using an editor trick to fake it, but it's not very good.
In the end, my thoughts are simple: if you have dozens of entries, you probably need a database. If you have only a few, the extra garbage of having a GameObject attached isn't enough to worry me, and you can usually leverage mixins to add a lot of extra functionality on the cheap.
The Custom Prefab
My argument is simple: the prefab is now more powerful than the ScriptableObject.
In his video, Richard Fine creates a ScriptableObject to handle playing custom explosion sounds. This is the part of the video that upsets me the most, because it's a magic handwave that hides the fact that it does nothing useful at all.
There's no reason to have a ScriptableObject "ShellExplode" floating around separately from the shell prefab that's going to explode. Just put it on the goddamn shell. Even the "play mode editing" would work the same way, because you're editing the prefab and it gets instantiated every time you hit shoot!
And, of course, now you just have A Shell Object instead of a shell object and some random dangling object in another directory that may or may not be referenced correctly. Moreover, you can easily clone the shell prefab and create your Big Boss Shell and your Tracer Shell and your Rocket Shell, tweak away!
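For comparison, here's a minimal sketch of the "just put it on the shell" approach; field names are illustrative, not taken from the video:

```csharp
using UnityEngine;

// The explosion sound lives on the shell prefab itself;
// no separate "ShellExplode" asset dangling in another directory.
public class Shell : MonoBehaviour
{
    public AudioClip explosionSound;
    public float blastRadius = 3f;

    void OnCollisionEnter(Collision collision)
    {
        // Tweak explosionSound or blastRadius on the prefab while in play mode;
        // every newly fired shell picks up the change immediately.
        AudioSource.PlayClipAtPoint(explosionSound, transform.position);
        Destroy(gameObject);
    }
}
```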
It's not magic, it's the way Unity is built to work best.
The idea that a prefab is somehow "fragile" is no longer true. The classic example for ScriptableObjects is that if you have ten monsters, they can all share the same stats and you'll never "accidentally" edit just one monster to have different values.
If you have a lot of monsters, you'll probably be using a monster spawner that references a prefab, rather than hardcoding every monster. Even if you do hand-place each monster, changes to the prefab automatically update instances, as long as the instance's values haven't been manually altered. This works per property.
So, for example, if I have ten orcs and I want one of them to have extra HP, I can increase the HP. And I change his AI role to "leader". And I add a potion to his inventory. And I put a hat on him. Later, I change the orc prefab's damage from 4 to 6. The orc I modified will have his damage correctly updated, while still keeping his hat and potion and HP and AI role!
Sure, it's possible to accidentally change a value and not realize it, but that seems inconsequential compared to the overhead of needing a new, permanent asset file in your project directories for every slightly tweaked monster.
That's the point of Unity. Its entire approach is built around being able to tweak things in the scene view! Using ScriptableObjects to "work around" that is like "working around" the game of basketball by taking out the ball. Yeah, you could probably come up with something, but it's not going to be a very effective use of your basketball pros or your basketball courts!
Use Interfaces and Delegates
I've seen a few other arguments for ScriptableObjects - for example, externalizing coroutines so you can plug them in willy-nilly, or using delegates bound up in ScriptableObjects to do things like flexible AI processing.
Just... uh... just use C#.
Not to sound elitist, but C# has delegates and interfaces already. Use them. Don't find an excuse to wrap them in ScriptableObjects, just mainline the stuff.
Unity's support for these in the inspector is kinda crappy, which is the big argument against them. Fortunately, you can use UnityEvents instead of delegates, and those show up in the inspector just fine.
Interfaces are similarly inspector-unfriendly, but they're very powerful and useful and don't limit you to either a ScriptableObject or a MonoBehaviour - you can use either, or even a raw C# class, or have instances of all three for different applications that are referenced from a single system!
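A small sketch of what that looks like: a plain C# interface, a MonoBehaviour implementation exposing a UnityEvent for the inspector, and a ScriptableObject implementation, all interchangeable behind the interface. Names are invented.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Plain C# first: the interface doesn't care what implements it.
public interface IDamageable
{
    void TakeDamage(int amount);
}

// A MonoBehaviour implementation...
public class Breakable : MonoBehaviour, IDamageable
{
    public int hitPoints = 10;

    // ...with a UnityEvent instead of a raw delegate, so designers can wire
    // reactions up in the inspector.
    public UnityEvent onDestroyed;

    public void TakeDamage(int amount)
    {
        hitPoints -= amount;
        if (hitPoints <= 0) onDestroyed.Invoke();
    }
}

// ...or a ScriptableObject implementation, or a raw class; callers only see IDamageable.
public class DamageRule : ScriptableObject, IDamageable
{
    public void TakeDamage(int amount) { /* adjust some shared rule set */ }
}
```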
Personally, I think using MonoBehaviour mixins as implementations of interfaces is really underestimated, but that's another topic entirely.
Use ScriptableObjects
Am I arguing that ScriptableObjects are useless?
No, not at all. They're leaner than prefabs, so if it occurs to you to use them, you should use them. They're especially nice when it comes to instantiating them outside of the scene, or moving references between clients.
Fundamentally, ScriptableObjects are classes. That is, you've written lines of code. I generally suggest using them when you can write less code by using them, and I tend to find that means situations where I need to create and track arbitrary references and complex data.
I've stored options menu defaults in ScriptableObjects, as well as level code, galaxy definitions...
But, in the end, I've never found them to be earthshakingly better than either prefabs or raw data. So I only use them when I'm in the mood to optimize.
ScriptableObjects CAN do a lot of things, but that's because they're C# classes. They're not substantially better at those things than either a MonoBehaviour or a plain C# class. Any optimization you get by using them is marginal, and they don't offer any particularly astounding new patterns or workflows.
I'm not against using them. I just don't want people to think they can do these amazing things... without ever realizing that simple prefabs can already do those things, and many other things at the same time.
Monday, April 29, 2013
Avatar Creation Tool: POOFY SLEEVES TIME
TECHNICAL POST
I've made a lot of progress on the avatar creation tool, after two weeks wasted. Right now, you can paint clothes in multiple layers, and then rearrange the layers, move them to different avatars, and so on.
For example, you could paint a shirt. Then you could paint chainmail. The mail would be a specular rather than diffuse material, it'd be bumpmapped properly, and the shirt would show through the holes. You could then switch the order, putting the shirt on over the chainmail. The shirt (assuming it was 100% opacity) would obscure the mail, but the mail's heightmap would make the shirt bumpy in a muted version of the mail's bumps.
Also, you can paint on wrinkles and thickness, and you can paint skin as well if you want tattoos or warts or whatever.
This actually works really well. I am very happy with it. But this is all texture/bumpmap stuff. Meaning that your arm will always be shaped like your arm. This is fine for tight sleeves, but if you have poofy sleeves at all, you'll need to actually modify the mesh.
Originally, the plan was that I would create modified versions of the mesh, and you could just select which one you wanted to use. Your shirt is based on the short sleeve mesh variant, or the poofy sleeve mesh variant, or whatever. It'd be a lot of work, not just because I would need to create a ton of variants, but because every variant would have to be mapped to the half dozen body shape keys/morph targets so that each would look correct on beefy people, fat people, and so on.
But... the success of this in-unity clothes generation means I'm rethinking that. Is there a way to allow the users to create their own mesh?
Sure!
First off, it's easy to actually modify or create meshes. The hard part is determining what to create, how. And UV mapping it.
Let's say that you're building the top half of a Disney princess dress - floofy and pastel. You want the shoulders to floof: right now, they're just that frosting-colored blue painted right on the skin, like a tight shirt. So you switch from texture paint mode to mesh edit mode.
You mouseover the shoulder and scroll-wheel up. The shoulders inflate along their normal. Perfect. Your shoulders are floofy.
That part is easy. It's just another morph target. Morph targets are stored as "vertex N offset by XYZ", and that's literally what you're doing - except we'd store it as a 4D offset, where W is the distance along the vertex normal. By allowing a vertex to be offset along its normal instead of (or as well as) by XYZ coordinates, we allow for bodies that are different shapes. This probably isn't a big deal with shoulders, since most people's shoulders are roughly the same shape, but it really matters when you're trying to do armor on a fat guy.
Oh, is it not flounced quite right? Click and drag to move the vertices around with a grab brush. Again, very easy. Now the morph target is offset by a multiple of the normal and a certain XYZ value. Mousewheel down to pull the point inward...
You can use this trick to do mesh digging as well as floofing. Let's say you want to make someone with a robot arm, so you want the arm to have actual indents along metallic tracks. You paint the metal tracks, then you mousewheel down to pull the vertices in along their normal, digging into the mesh.
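Here's a minimal sketch of that storage and how it could be applied to a Unity Mesh - struct and method names are invented, and a real tool would cache the arrays rather than copying them per edit:

```csharp
using UnityEngine;

// One entry per affected vertex: an XYZ offset plus a "W" multiple of the vertex normal.
[System.Serializable]
public struct NormalMorphDelta
{
    public int vertexIndex;
    public Vector3 xyzOffset;
    public float alongNormal; // the "W" component
}

public static class MorphApplier
{
    public static void Apply(Mesh mesh, NormalMorphDelta[] deltas, float weight)
    {
        var vertices = mesh.vertices;
        var normals = mesh.normals;
        foreach (var d in deltas)
            vertices[d.vertexIndex] += weight *
                (d.xyzOffset + normals[d.vertexIndex] * d.alongNormal);
        mesh.vertices = vertices;
        mesh.RecalculateBounds();
    }
}
```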
This stuff is the cake. The hard part starts now:
You want a sleeve. That is, you want a short sleeve that stands slightly away from the arm and the arm is within it. You don't want to simply modify the arm's size. You want to create more mesh around the arm.
Let's start with the UI. It's the same. The user mousewheels up on the sleeve. The algorithm detects the edge of the clothing you've painted, and says "okay, this texture ends between this vertex and the next vertex on the triangle. So I can't just move this vertex, that would affect a region not covered by the texture. I have to tear it."
So it clones the vertex, lifts it along the normal of its parent vertex, and creates tris tying it back to its parent mesh on the shoulder side while leaving it detached on the elbow side. A sleeve. It's got the same bone weights and tracks fine.
Mousewheel down, it retracts towards the parent until it merges and vanishes.
Want your poofy sleeve to have a poofy shoulder? Mousewheel up on the shoulder. This is all connected space, so it doesn't detach from the parent model. Only the edge points detach, meaning that your model only gains a few vertices instead of cloning vast amounts. Depending on the quality of the model, the detachment may actually need to go in a few vertices deep: the point is that the sleeve should give the impression that there is an arm up it, so the detached portion does have to have a certain depth.
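Here's a heavily reduced sketch of just the clone-and-lift step (names invented); the real work - detecting the texture edge and re-tying the triangles around the tear - is left out:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Clone one vertex, lift the clone along its parent's normal, keep the
// parent's bone weights. Triangle rebuilding around the tear is not shown.
public static class SleeveTear
{
    public static int CloneAndLift(Mesh mesh, int parentIndex, float lift)
    {
        var vertices = new List<Vector3>(mesh.vertices);
        var normals  = new List<Vector3>(mesh.normals);
        var uvs      = new List<Vector2>(mesh.uv);
        var weights  = new List<BoneWeight>(mesh.boneWeights);

        int clone = vertices.Count;
        vertices.Add(vertices[parentIndex] + normals[parentIndex] * lift);
        normals.Add(normals[parentIndex]);
        uvs.Add(uvs[parentIndex]);          // will need its own UV island later (see below)
        weights.Add(weights[parentIndex]);  // same bone weights, so it tracks fine

        mesh.SetVertices(vertices);
        mesh.SetNormals(normals);
        mesh.SetUVs(0, uvs);
        mesh.boneWeights = weights.ToArray();
        return clone;                       // caller re-ties tris to this new index
    }
}
```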
Collars or unbuttoned fronts or whatever all have the same method, except that with collars you'd want to drag the new vertices down and out so that they fold over properly. That may actually require a special function, since otherwise they might be attached to the neck bone rather than the chest bone. Still, it's not that bad - a "reparent to closest vertex" would do.
If you did want to create something that is an entirely separate layer, that'd be a slightly different system built on the same principle: instead of editing the base mesh, you'd create a clone of the base mesh containing only the vertices that are covered by the texture, then edit that.
In both cases, these vertices can be stored in the exact same "vertex N offset by WXYZ" framework that we used for offsetting parent vertices - except, instead of moving the parent vertex, they clone it.
However - this is the rough part. They can't clone the UV map location of the parent. Instead, they have to create a new UV map position for their tris, and cut the pixels that were on their parents' UV map, pasting them to their own. This is required because otherwise the arm inside the sleeve would be the same color as the sleeve.
I'm actually... not sure about how to do that. I know there's a way to do it, but I've never tried it out. It may be annoyingly complex. But it's an absolute necessity.
...
Okay, we've discussed sleeves and collars and entirely distinct layers... but what if we want a topology that isn't simply an offset version of the parent?
The obvious example is skirts. What if we want a skirt?
When you mousewheel up along the inside of the leg, you'll quickly cross over the X axis. If you're in "merge mode", those vertices would impact and merge, causing their triangles to also merge (and average their bone weights). If any vertex only has triangles which lie entirely along X = 0, it self-destructs, "hollowing" the skirt out. (Otherwise you would have extremely loose shorts.)
This is not so hard, although it has one big downside: the skirt would end up with a very low poly count horizontally. Because of that, we may need an algorithm to add "stripes" to the skirt. This'd be easy if the skirt were just a standalone mesh: we could just do loop cuts. But since it's often going to be integrated into the overall mesh, we need to come up with a method of creating vertical cuts that don't screw up the topology near the waist. That's going to be some work, but it's not nightmarish, just annoying. Alternately, I could just give the legs themselves a high poly count so that skirts wouldn't suffer.
...
IT ALL SEEMS FEASIBLE LET'S DO IT.
Tuesday, April 16, 2013
Avatars with Layers
This is a technical implementation post.
I'm slowly creating a Unity avatar generator, so this is a post about avatar generation.
There seem to be a lot of different opinions about how to do it, but most of them seem to be either mired in the past or extremely limited. I plan to use three methods to allow users to construct their avatars. Keep in mind that users will be able to submit content and use it on their avatars, so this is a framework rather than a specific set of options. This means that we can't use any of the cheap outs like you get in the various superhero MMORPGs.
First: you can stack texture layers. Or, more accurately, material layers.
This would be used for skin tight stuff, such as tank tops, necklaces, socks, and so on. These simply require a texture with transparency, a bump map, and an optional set of material parameters if you want to make it have different shader parameters (for example, to get the look of a spandex bodysuit). This is simply stacked on top of the underlying materials, allowing underlying materials to show through.
In addition, you can use this for simple decal work - scars, tattoos, robo-skin, whatever.
The little secret to this is the use of bumpmap blending.
It's quite easy to calculate out a single, final bumpmap by layering each bump map on top of the preceding bumpmap and fading it. This allows us to get a real feel for layers. For example, if you wear a chunky necklace and then a light shirt, the normals for the necklace will be used on the part of the shirt overlaying the necklace. So even though the necklace is partly hidden beneath the shirt, it doesn't vanish, it creates a little mound under the fabric.
This trick isn't critical or anything, but it's easy and adds a bit of fun realism. It also paves the way for more advanced techniques, both those listed below and things like wet or damaged clothes, nonhuman skin types, and so on. It's also important because Unity doesn't seem to support bump map transparency, so we have to calculate the blend ourselves anyway.
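Here's a minimal sketch of that stacking - names are invented, the textures are assumed to be import-flagged as readable, and a straight color lerp is a crude stand-in for proper normal-map blending - but it shows the fade-through idea:

```csharp
using UnityEngine;

// Blends each layer's normal map over the accumulated result, weighted by the
// layer's alpha, while letting a muted fraction of the underlying bumps leak
// through (the necklace-under-the-shirt effect). showThrough is 0..1.
public static class BumpStacker
{
    public static Texture2D Stack(Texture2D[] layers, float showThrough)
    {
        var accumulated = layers[0].GetPixels();

        for (int i = 1; i < layers.Length; i++)
        {
            var top = layers[i].GetPixels();
            for (int p = 0; p < accumulated.Length; p++)
            {
                // Where this layer is opaque, use its bumps plus a muted dose of
                // what's underneath; where it's transparent, keep the underlying bumps.
                var covered = Color.Lerp(top[p], accumulated[p], showThrough);
                accumulated[p] = Color.Lerp(accumulated[p], covered, top[p].a);
            }
        }

        var result = new Texture2D(layers[0].width, layers[0].height);
        result.SetPixels(accumulated);
        result.Apply();
        return result;
    }
}
```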
Second: your outermost "tight" layer determines your base mesh.
The human body is split into three mesh segments: head, upper body, and lower body. When you wear a piece of clothing, it overrides the default mesh with the correct mesh. So if you wear a T-shirt, the upper body is overridden with a mesh that has the correct sleeves. However, if you then put a long-sleeved shirt over the T-shirt, the upper body mesh becomes the long-sleeved-shirt mesh. The texture for the mesh works just like above, with plenty of transparency to allow the underlying skin texture to show through where needed.
These meshes are simply customizations of the base mesh, and have the same shape keys attached to them. This means that if your "muscle" slider was set to max and you put on a T-shirt, you'll still have a muscly body. Obviously it would be possible to have a mesh which didn't descend from the same topology - you could theoretically do some really crazy stuff and completely override the defaults. That's fine, that's kind of the point.
The UV mapping of these meshes has to match the original UV mapping. The reason is that we need to be able to use the underlying layers correctly, and also use this layer as an underlying layer. For example, you have a neck tattoo. You put on a T-shirt. Your mesh changes, and part of your tattoo is hidden beneath the shirt texture. You put on the long-sleeved shirt, and your mesh changes again. While the T-shirt's mesh is gone, the T-shirt's texture and normal map are not. So the neck of the T-shirt shows in the V of the button-up shirt, and there are subtle hints of wrinkles and mass at the neck, hem, and shoulders where the T-shirt had bump mapping.
Obviously, this has a few restrictions. For example, if you put a T-shirt on over the long-sleeved shirt, the long-sleeved shirt sleeves will be painted on your bare arm rather than having mass. I think that's fine.
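A sketch of the "outermost tight layer wins" rule, with invented types; note the painted textures still stack regardless of which mesh ends up being used:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Each worn item may override one of the three body segments;
// the outermost override is the one actually rendered.
public enum BodySegment { Head, UpperBody, LowerBody }

[System.Serializable]
public class ClothingItem
{
    public BodySegment segment;
    public Mesh meshOverride;   // null for purely painted-on items
    public Texture2D texture;   // always stacked, even if the mesh is later overridden
}

public static class OutfitResolver
{
    public static Mesh ResolveMesh(BodySegment segment, Mesh baseMesh, List<ClothingItem> wornInOrder)
    {
        Mesh result = baseMesh;
        foreach (var item in wornInOrder)            // innermost first
            if (item.segment == segment && item.meshOverride != null)
                result = item.meshOverride;          // outermost override wins
        return result;
    }
}
```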
Third: non-tight layers are layered meshes.
If you wear something which is poofy or doesn't adhere to your body, it's a separate mesh mapped to the same skeleton. So if you put a jacket on over your long-sleeved shirt, your shirt mesh remains your shirt mesh, and the jacket is a separate mesh. This can lead to issues if your sleeves are particularly poofy or something, but it's far less problematic than most alternatives.
Similarly, your hair is an add-on mesh. Your chunky arm pouch. Your skirt. Your boots. Your wings. These are all just add-on meshes. There's no guarantee they'll all get along, but that's part of you designing your avatars. If you stick wings on and put a coat on, you're going to have wings magically sticking through your coat. Live with it, or make a new kind of coat.
Of course, you could apply decals to these as well. Put a patch on your leather jacket.
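And a sketch of attaching one of those add-on meshes, assuming it was rigged against the avatar's existing bone layout (names invented):

```csharp
using UnityEngine;

// A poofy add-on (jacket, skirt, wings) is its own SkinnedMeshRenderer
// driven by the avatar's existing skeleton.
public static class AddOnAttacher
{
    public static SkinnedMeshRenderer Attach(SkinnedMeshRenderer body, Mesh addOnMesh, Material material)
    {
        var go = new GameObject(addOnMesh.name);
        go.transform.SetParent(body.transform.parent, false);

        var renderer = go.AddComponent<SkinnedMeshRenderer>();
        renderer.sharedMesh = addOnMesh;
        renderer.sharedMaterial = material;
        renderer.bones = body.bones;         // reuse the avatar's skeleton
        renderer.rootBone = body.rootBone;   // so the add-on animates with it
        return renderer;
    }
}
```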
Final thoughts
This is what I'm intending to make. It's actually not complicated; I've tested the feasibility of each of these with proofs of concept. It's just a lot of work.
A lot, especially because it's all database-based.
I really need to take maybe two weeks off work and just do this.