Correct modeling for game development (poly count) – 2D and 3D Art
September 21, 2016 05:16 AM
Hello again everybody,
I recently completed the introduction course on zbrush 2016 from the site pluralsight and I got a brief overview of the software. I even got a decent looking character model at the end, for a beginner of course, but it turned out better than I thought it would. But with that whole process, came a few questions. And I don’t even know how to start asking.
I hear a lot of talk about low-poly modeling for game development and see that there are a lot of videos and courses on this topic, but the whole concept is somewhat confusing. Correct me if I’m wrong, but the process works somewhat like this:
- Start with a simple base model and low polys to get the base shape and topology of the object right. Add/subdivide as many polys as you need, to get as much detail as you want on the model with the sculpting tools. Create some sort of map (sorry don’t know much of the names yet) of the high poly model, and then bake (whatever that means) that map onto the low poly model of that object.
Is that correct? Am I missing some crucial steps?
Another question that I have is: how many polys actually counts as low poly? I guess it depends. Let’s say for a PC game like Diablo 3 or Path of Exile: how many polys should a character have, and how many should the models that shape the environment have, like a house, a barn or a wagon? And is there a rule for how many models/objects a “level” should have? Say, a whole map of Diablo 3 after a loading screen.
Man I know these questions are probably not good, but I have these questions in the back of my head and I can’t get rid of them.
Thanks for your time and effort to help! I really appreciate it
September 21, 2016 06:32 AM
That’s pretty much correct. The baking you’re referring to is commonly used to generate a normal map (and other maps, like AO). The general idea is that you have a low-poly and a high-poly model that essentially match/line up, except they have different poly counts, so the high-poly one is more detailed. You “bake” these high-poly details into a texture (like a normal map) and then apply the normal map to the low-poly model in-game for lighting calculations, etc.
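To make the idea above concrete, here is a toy sketch (not a real baker, and not engine code) of what the baked normal map is used for at render time: each texel stores a surface direction as RGB, which is decoded back to a unit vector and used for diffuse lighting instead of the flat low-poly surface normal. All names and values here are illustrative.

```python
# Toy illustration of using a baked normal map texel for lighting.
import math

def decode_normal(r, g, b):
    """Map 0..255 texture channels back to a -1..1 unit normal vector."""
    n = tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

def lambert(normal, light_dir):
    """Diffuse intensity: clamped dot product of normal and light direction."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# A "flat" texel (pointing straight out, RGB ~ (128, 128, 255)) lit head-on:
flat = decode_normal(128, 128, 255)
print(round(lambert(flat, (0.0, 0.0, 1.0)), 3))  # close to 1.0: fully lit
```

A texel with a tilted normal would return a lower intensity from the same light, which is exactly how the flat low-poly surface appears to have bumps and dents.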
You’re missing steps only in the sense that it can be a little more involved; it depends on the tools and process used. The normal map output depends on the target engine’s format and conventions, etc., and as I understand it, it’s not exactly hard to generate bad maps containing errors if the input models are not what/where they need to be.
A program called xNormal is often used for the “bake” process.
As for poly counts, again it’s target dependent: what are they for? I believe the characters in Doom 3, if I remember right, were somewhere between 1,500 and 3,000. That’s probably on the lower end these days, but again it depends on the target use and the effects that will be applied, so it really could be anything. Two triangles for a wall in the world, with a lot of effects on it, might do the job. For the high-poly bake source, I’ve seen extremely high counts, hundreds of thousands or more. But again, it depends on what’s needed.
September 21, 2016 07:08 AM
Triangle counts don’t mean as much as they used to, especially now that we’re drawing several million pixels (tens of thousands of triangles seems minuscule in comparison!).
If you have triangles that become smaller than a pixel when viewed from the intended distance, then you have too many triangles. Ideally, a triangle would cover roughly an 8×8 pixel area (or more), and at worst, around a 2×2 pixel area. Games tend to use several LODs of a model (levels of detail — different versions of the model with more/less triangles) and the game will automatically pick the right LOD to use at the right distance to maintain a suitable triangle density.
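The LOD selection described above can be sketched as a simple distance-to-index lookup. This is a hypothetical picker, not any engine’s actual API, and the distance thresholds are made-up illustration values; real engines often pick by projected screen size rather than raw distance.

```python
# Hypothetical LOD picker: nearer camera distances get more detailed meshes.
def pick_lod(distance, thresholds=(10.0, 30.0, 80.0)):
    """Return LOD index 0 (full detail) .. len(thresholds) (lowest detail)."""
    for lod, max_dist in enumerate(thresholds):
        if distance <= max_dist:
            return lod
    return len(thresholds)  # beyond the last threshold: lowest-detail mesh

print(pick_lod(5.0))    # 0: close up, full-detail mesh
print(pick_lod(50.0))   # 2: mid distance, reduced mesh
print(pick_lod(200.0))  # 3: far away, lowest-detail mesh
```

The goal is exactly the triangle-density rule above: as the model shrinks on screen, swap to a version whose triangles still cover a few pixels each.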
September 21, 2016 07:49 AM
Normal mapping is an image-based effect that moves detail from the geometry (triangles) into a texture. It fakes the geometric detail in the image: since you provide extra normal values that vary (generally non-linearly) between the vertices of a face, the geometry is lit as if it were not flat, even though it is. This works well as long as the faces do not cover too large a region of the screen.
This is why you need both of the explanations mentioned above: bake a higher-detail normal map onto lower-poly geometry, and have different levels of detail for the geometry (and the normal map), chosen according to screen coverage.
September 21, 2016 08:37 AM
Okay, nice, I think I understand. Thanks for all the answers. So that means, if I have a game with a bird’s-eye view (top down) and the player can adjust the camera zoom, the LOD should go down the further out he zooms and go up as he zooms in. At least this is how the pros do it for performance, I guess? Or are there other reasons for this method?
Another question I asked myself as I was learning from videos and courses.
Sometimes when I watch someone build an environment scene in a game engine like Unreal, Unity or CryEngine, they use two different approaches, or a mix of them.
One group works with a lot of building blocks, like single walls to build a whole house, or single head-sized stones to build a small wall out of them. Or they use models of single wood planks, stick them together, and build a wooden wall/roof/floor like this.
Other people just have the whole thing modeled beforehand in a modeling software and just insert the whole building to the scene.
Are there any reasons or benefits to having the whole model done before importing it into the game engine, and are there downsides to having the building made out of that many pieces? Simply put: what is the difference, and how do these two approaches influence the game in the end?
Question 2: Would it be highly uncommon to use 3ds Max and ZBrush for modeling, and then use Maya for rigging and animation?
My head is full of questions and I don’t know where else I could get the answers. So sorry.
Thanks for all the help, I really appreciate all of you.
September 21, 2016 09:02 AM
That’s it. We also do it to avoid artifacts, for example z-fighting issues or texture aliasing.
Game editors (or engine editors) are not modelers. Most of them provide tools to work on or create models, but they are not meant for modeling.
For things like walls, one would generally model them directly in the editor, first because they are easy to model, and also because walls can help the engine achieve better performance (as occluders and such). But they could also be imported from a model, for example if the wall has a lot of geometric detail, like holes, bumps and other discontinuities.
But for more complex things (characters, cars, trees and so on), people will tend to model them in a modeler, create different levels of detail and different normal maps, and then import them into the editor.
This is my knowledge about this. Real game developers will give more exact answers 🙂
September 21, 2016 09:44 AM
Oh no, you took my question the wrong way. Sorry, I have some trouble explaining what I mean, since English is not my native language. Let me try again.
I didn’t mean that they create the models inside the engine. I meant that some people use modeling software to create a whole building as one model and import that whole building as a single object/asset/model/static mesh (sorry, I don’t know the right terminology) into a game engine like Unreal, then place it in the scene with the editor to build the level.
Other people use the modeling software to create the parts that make up a whole building, like the walls, roofs, floors, doors, windows, steps, etc. They import these models separately into the game engine and use them inside the editor as building blocks to create a whole building.
I wanted to know how these two approaches influence the game in the end (if they do). I hope I explained it better this time. Sorry, I’m not very good at asking questions.
September 21, 2016 11:11 AM
What I answered before is still valid for your clarification.
Sometimes an engine needs to know about each part of a thing, let’s say, as you mentioned, a building. This lets you easily tell the editor: this is a wall, this is not a wall, this is a room, this is a door. Each will be recognized as a different thing in the engine: the walls allow the building to be split into different areas, or mark everything behind them as invisible (a major occluder); the doors allow portals to be created between these areas; and so on.
All of this is generally done to improve the performance of the rendering engine. Nowadays this is less necessary, but some engines still require you to mark this geometry as a major occluder, that area as a portal, this set of walls as a portal area…
And even if the engine does not require any of this, or if the editor allows selecting sub-pieces of the geometry, people tend to work in different ways. For example, when you split a building into many pieces, some pieces can be easily reused: you can reuse the doors in several locations, which saves geometry, requires less memory on the graphics card, and allows the engine to do instancing…
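The reuse argument can be sketched with a toy memory model: with instancing, N placed doors share one copy of the geometry, so GPU memory grows with the number of unique meshes, not with the number of placements. The sizes below are illustrative made-up numbers, not real engine figures.

```python
# Toy model: geometry memory depends on unique meshes, not placements.
def geometry_memory(placements, mesh_sizes):
    """placements: list of mesh names placed in the level.
    mesh_sizes: bytes of geometry per unique mesh name."""
    unique = set(placements)
    return sum(mesh_sizes[name] for name in unique)

sizes = {"door": 40_000, "wall": 10_000}
level = ["door"] * 20 + ["wall"] * 100  # 120 placements in the level...
print(geometry_memory(level, sizes))    # ...but only 50000 bytes of geometry
```

Each placement still costs a transform and some per-instance data, of course; the point is only that the heavy vertex data is stored once per unique piece.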
Hope I understood you correctly this time 🙂
September 21, 2016 06:35 PM
Start with a simple base model and low polys to get the base shape and topology of the object right. Add/subdivide as many polys as you need, to get as much detail as you want on the model with the sculpting tools. Create some sort of map (sorry don’t know much of the names yet) of the high poly model, and then bake (whatever that means) that map onto the low poly model of that object.
That is one way to do it. However, sometimes your final and base models look nothing alike; at that point you build a lower-poly model around the high-poly model. This is known as retopology.
Another question that I have is: How many polys is actually low poly?
Yes, like many things in game development, it depends.
For Unreal the max batch limit, that is the number of polygons a single mesh can have, is 64,000 triangles; Unity’s is around 52,000. When you load in an object larger than this, it becomes one object with more than one mesh.
The number of meshes you can have on screen at a time is decided by OpenGL or DirectX (DirectX can handle up to 50 more) and your graphics card. For a mid-range PC it is about 300-350 at the moment.
That means that Unreal can render around 22,400 triangles of static meshes with the basic shader on screen and keep above 60 frames per second on a mid-range PC.
If 22,400 triangles sounds low, consider that the mid-range screen resolution is 1600×900; that works out to about one polygon for every 64 pixels on screen.
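The pixels-per-triangle figure above is just the screen area divided by the triangle budget; here is the arithmetic, using the poster’s assumed resolution and budget:

```python
# Sanity check of the ~64-pixels-per-triangle figure quoted above.
def pixels_per_triangle(width, height, triangles):
    """Average screen pixels per triangle if coverage were spread evenly."""
    return (width * height) / triangles

budget = pixels_per_triangle(1600, 900, 22_400)
print(round(budget, 1))  # 64.3 pixels per triangle on average
```

This also connects back to the earlier LOD advice: if that average drops toward one pixel per triangle, you are drawing far more geometry than the screen can show.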
Another question I asked myself as I was learning from videos and courses. Sometimes when I watch someone build an environment scene in a game engine like Unreal, Unity or Cry, they use two different approaches or a mix of them. One group works with a lot of building blocks like single walls to build a whole house, or single head sized stones to build a small wall out of them. Or they use models of single wood planks, stick them together and build a wood wall/roof/floor like this. Other people just have the whole thing modeled beforehand in a modeling software and just insert the whole building to the scene.
This depends on how the assets were made.
Set pieces are things like walls and doors, things that will appear in the scene multiple times. Set pieces work like Lego blocks: your level designer gets a bunch of them and then builds everything from them.
This has some huge performance and production benefits. It is also the preferred way of working in Unreal, since Unreal uses instancing: whether you made one crate as a single mesh or assembled a crate out of 10 instanced pieces of wood, the performance will be the same.
This is because Unreal batches draw calls by material. That also means that if each piece of wood had its own material, the pieces would be more costly than one crate with one material, since they wouldn’t share the same material.
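A minimal sketch (not engine code) of why material count, rather than mesh count, drives the draw-call cost described above: meshes that share a material can be batched into a single draw call.

```python
# Toy batching model: one draw call per unique material in the scene.
from collections import defaultdict

def count_draw_calls(scene):
    """scene: list of (mesh_name, material_name) pairs."""
    batches = defaultdict(list)
    for mesh, material in scene:
        batches[material].append(mesh)
    return len(batches)  # one draw call per unique material

# A crate built from 10 planks that all share "wood" still costs 1 draw call:
planks = [(f"plank_{i}", "wood") for i in range(10)]
print(count_draw_calls(planks))  # 1
# Give each plank its own material and the cost grows to 10:
unique = [(f"plank_{i}", f"wood_{i}") for i in range(10)]
print(count_draw_calls(unique))  # 10
```

Real engines have more batching rules than this (state changes, lightmaps, instancing limits), but the takeaway for set pieces holds: share materials across pieces wherever you can.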
You will want to look up “environment creation for games” to understand how set pieces work.
The single-object approach is used when making a piece that will only appear at one point in the game; these are usually key game elements.
Because these models use special textures meant only for them, it’s harder to reuse the resources, and you would need more models and textures to build your scenes.
The upside of this technique is that the object can have more detail and will look more real than one assembled from other model parts.
Fallout 4 allows players to use the set pieces to build, the difference is that the level designer isn’t bound by the same rules as the players.
edit: Most tutorials are focused on a single model, a key piece. Most courses are based around a set, because they have more time.
A full set takes a week on average to make.
September 21, 2016 10:31 PM
One thing I should mention about low-poly and high-poly: which comes first is all about preference. Some people prefer to sculpt out the high-poly version and then do retopo to get the low poly. Others make the low poly and then subdivide (or use whatever method gets them the needed geometry) to sculpt the high poly. Either works. Another thing to consider, especially if you are working alone, is that you may not need to mess with low/high poly at all. Some people make just the low-poly version and then create textures in something like Substance Painter. The beauty of Painter is that you paint all the maps of a material at once. If using the PBR pipeline, this means it paints the Albedo, Metallic, Normal, and Rough/Gloss all at once. So if you are painting with a material that has heights/normals as part of it, you are adding the normals right there as part of the process, without having pre-sculpted them first. You can even directly “sculpt” normal maps in as well, simply by painting on that channel alone, though it is best to paint whole materials when possible.
The other thing: some people prefer modular design (making pieces of levels, then putting them together later), while others prefer to model whole levels. The advantage of modular pieces is that once they are done, if they are done right, you can make big levels out of small pieces, and if you change a piece, the change applies everywhere it was repeated. The disadvantage is that you have to make them so that they fit together, and they can get repetitive if there isn’t enough variety. The advantage of modeling it all at once is that you create the variety naturally as part of the process. The disadvantage is that it generally takes longer to make a complete level in the modeling software. If you have modular pieces, you can export them to the engine as you finish them, so the level designer can start right away; but if you are making whole levels at a time, it isn’t so easy to hand things over incrementally. So basically, there are advantages and disadvantages to both methods.