
correct modeling for game development (poly count)

Started by
19 comments, last by Scouting Ninja 7 years, 9 months ago

I can’t thank you enough, this was so unbelievably helpful to me!

So to recap.

If I want to create the environment of two small villages, which were built close to each other in the actual game world, I should use a set of reusable models, since the villages would use similar architectural designs and materials.

But if, for some reason, the villagers of each village worshipped a different deity, I could model the whole temple of that deity as one piece, since I would probably only use that model rarely, and I would want to give it more detail and let it stand out more from the rest of the environment.

Did I understand correctly?

About the poly count: should it be a conscious decision that, every time I create an environment with Unreal, my static meshes only take up a certain number of polygons, and that I limit myself to a certain number of meshes that can be shown on screen at a time? Is this something that is consciously done?

And is there a drastic difference between a static mesh and something movable like a character, an NPC, or a building with an animation like a windmill?

Last question. If, let’s say, I want to make a playable level with Unreal based on the principles of top-down action RPGs like Diablo, would it be advisable, or even necessary, to limit the number of environment meshes to be able to add more character models on one screen? Action RPGs tend to have loads of enemies coming at the player at the same time.

I tend to repeat myself, but thanks so much for the insane amount of help! Really awesome place!


Your two-villages example is a good one. If the temple you mention is never going to be used anywhere else, even in a different form, go ahead and model it whole and just place it in your world. And two similar-looking but distinct villages are exactly one of the advantages of using modular assets.

Poly count really depends on your game, your target audience, and how powerful you expect that audience's computers to be. The actual number of polys is nowhere near as important as the shaders that are used on those polys. A high-end deferred pipeline/shader like UE4's or Unity's PBR shaders will be more expensive than one of Unity's older forward-rendering shaders, even if you use fewer polys with the PBR shaders and more with the forward-rendered ones.

A good habit to get into with modelling is to put polys only where you need them, but don't be stingy either. If you have a nice flat surface, you don't want to put a ton of polys on it if you can fill it with just a few and have it look the same. But you need enough geometry to capture the curvature of your model, and you need extra loops anywhere geometry deforms during animation: shoulders, knees, elbows, etc. There is no perfect way to do it; all aspects of gamedev, even in AAA studios, involve making things "close enough, good enough." Modern human models can easily range from 3,000 polys to 30,000 polys or even more. Note that the fewer models in your game at any time, the more polys each can have (although that doesn't mean they should, especially if your target audience isn't likely to have computers that can handle it). By the same token, if your game is full of action and hordes of enemies, you are going to have to use less geometry per enemy.



I'm a firm believer in optimizing as you go so that you don't have a mess to clean up when you finish. So, I aim for the lowest poly count I can get without losing any detail. Anything higher is waste. And if you can afford to waste GPU cycles, go for it, but most of us are trying to pack as much in as we can and any waste means there's that much less you can put on the screen.

The bottom line, though, is that at some point the graphics card will start dropping below 30 frames per second and ruining the experience. I've seen this happen while modeling in Blender: with my 670 graphics card (a couple of years old now), it started bogging down around 3 million triangles on one object. I try not to waste a single polygon, but if I can get a high-detail model under 1,000 triangles, I feel like I've done pretty well at my current level of skill. Models that are going to be looked at closely are where you want to put most of your detail. A main character or weapon makes sense to spend time and resources detailing; a rock that will never be seen up close and will hardly even be noticed in the scene is probably not where you want to spend them. The fewer polygons per model, the more models you can get into the scene without a problem. Maybe 3,000 polygons for something that's going to get a lot of scrutiny?

Normal maps allow you to raise the detail level substantially, because you can take that million-polygon model, bake a normal map that contains all that detail, and get the benefit of the detail on a 1,000-polygon model. The normal map is applied with a UV map the same way the texture (color map) is applied; baking is the process of creating it.

In Blender, you create your low-poly model. Then you create another model by subdividing it, or whatever you need to do, to push the detail up to the full limit of what the computer can handle for a single model, and you sculpt in all the detail. Then you put the high-detail model on top of the low-poly model. Baking is an option in Blender: when you bake the normal map, it projects the high-poly model onto the low-poly model to form a picture, like your texture (color map). Then you can delete the high-poly model. You now have your normal map and the low-poly model, and you UV-wrap the map around the model exactly like the color map.

The color map stores the color of each pixel within the triangle being drawn. The normal map stores a per-pixel normal that describes which direction that pixel faces in 3D space. The lighting calculations take that direction into account when lighting the pixel, so it effectively looks as if it were facing the same direction as it did on the high-poly model, even though it's a flat triangle on a low-poly model. Smooth shading works on the same principle, except smooth shading interpolates (averages) the direction between the vertices of the triangle. Normal mapping lets you control the direction of every pixel on the face of the triangle individually rather than as a smooth gradient. That's what makes the magic work.
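To make the per-pixel lighting idea concrete, here's a minimal Python sketch (not engine code; the texel values and light direction are invented for illustration) of decoding a normal-map texel and computing a simple Lambert diffuse term with it:

```python
import math

def decode_normal(rgb):
    """Map an 8-bit normal-map texel from [0, 255] to a unit vector in [-1, 1]."""
    n = [c / 255.0 * 2.0 - 1.0 for c in rgb]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

def lambert(normal, light_dir):
    """Diffuse (Lambert) term: the dot product of normal and light, clamped at 0."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, d)

# A texel pointing straight out of the surface (the familiar "flat" blue 128,128,255)...
flat = decode_normal((128, 128, 255))
# ...versus a texel tilted toward +X, as might be baked from high-poly bumps.
tilted = decode_normal((200, 128, 200))

light = [0.0, 0.0, 1.0]  # light shining straight at the surface
print(lambert(flat, light))    # ~1.0: fully lit
print(lambert(tilted, light))  # smaller: darker, as if the pixel faced away
```

The surface geometry never changes here; only the stored per-pixel direction does, which is exactly how a flat low-poly triangle can shade as if it still had the high-poly detail.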

Baking is just a term that means freezing detail into an image. You use images (maps) for things like textures with UV mapping. You could bake the color if you were doing vertex painting and wanted to convert it to a texture instead. You can bake lighting information, ambient occlusion, and all sorts of things in the modeling program. Some of these things take more processing power than the computer even has to render in real time for a game. Baking the information down to an image, where the calculations have already been performed and you have a final result, cuts down on the amount of processing done. As processors become faster, you can get away with less baking. Lighting used to be baked into the scene for pretty much everything; now you would only do that when there's no point in wasting lighting calculations because the lighting will never change during the game, or something along those lines.
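As an illustration of why baking saves work, here's a tiny Python sketch (purely illustrative; real bakers operate on textures, not short lists) that precomputes a diffuse term per texel once, so that "runtime" shading becomes a plain lookup. As noted above, this is only valid because the light never moves:

```python
def bake_lightmap(normals, light_dir):
    """Offline step: compute the clamped diffuse term once per texel."""
    return [max(0.0, sum(n * l for n, l in zip(nrm, light_dir)))
            for nrm in normals]

# Three hypothetical texel normals: facing the light, at 45 degrees, and sideways.
normals = [(0.0, 0.0, 1.0), (0.707, 0.0, 0.707), (1.0, 0.0, 0.0)]
lightmap = bake_lightmap(normals, (0.0, 0.0, 1.0))  # expensive part, done once

def shade(texel_index, albedo):
    """Runtime step: a texture lookup replaces the lighting calculation."""
    return albedo * lightmap[texel_index]

print(lightmap)           # precomputed lighting per texel
print(shade(1, 0.5))      # runtime cost: one multiply and one lookup
```

The trade-off is exactly the one described in the post: the moment the light changes, the baked values are wrong and must be recomputed.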

Here's a video that basically shows you how to bake a normal map.

Ideally, a triangle would cover roughly an 8x8 pixel area (or more), and at worst, around a 2x2 pixel area. Games tend to use several LODs of a model (levels of detail -- different versions of the model with more/less triangles) and the game will automatically pick the right LOD to use at the right distance to maintain a suitable triangle density.
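A distance-based LOD pick like the one described could be sketched in Python as follows. The switch distances are hypothetical, and real engines typically use projected screen size rather than raw distance:

```python
def pick_lod(distance, lod_distances):
    """Return the index of the first LOD whose switch distance exceeds `distance`.

    `lod_distances` must be sorted ascending; anything beyond the last
    threshold falls through to the coarsest LOD.
    """
    for i, d in enumerate(lod_distances):
        if distance < d:
            return i
    return len(lod_distances)  # coarsest version of the model

# Hypothetical switch distances in world units:
# LOD0 under 10, LOD1 under 30, LOD2 under 80, LOD3 beyond that.
thresholds = [10.0, 30.0, 80.0]
print(pick_lod(5.0, thresholds))    # 0: full-detail mesh, close to the camera
print(pick_lod(50.0, thresholds))   # 2: mid-detail mesh
print(pick_lod(200.0, thresholds))  # 3: coarsest mesh, far away
```

Keeping the thresholds tuned so that triangles stay in the 2x2-to-8x8 pixel range mentioned above is the whole point of the LOD chain.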

I wonder, Hodgman, whether this policy has any further, or even more serious, reasoning beyond the redundant vertex-stage processing.

Extremely helpful answers once again, you’re making my life a whole lot easier =) thank you all so much.

That leaves me with one last question, at least for a few days.

Since I have a Pluralsight membership, I get a lot of my education from there at the moment. A lot of the courses there that interest me use 3ds Max, but I heard that Maya's animation tools are better. I don’t want to start a debate over whether that statement is true or not; I just want to know if it’s okay to use 3ds Max and ZBrush for modeling and then use the created models in Maya to rig and animate them.

Is this a viable practice, or rather impractical because of all the different software involved? Is it better to pick either 3ds Max or Maya instead of using both? Or is it irrelevant?

Thanks again for your immense help!

Ideally, a triangle would cover roughly an 8x8 pixel area (or more), and at worst, around a 2x2 pixel area. Games tend to use several LODs of a model (levels of detail -- different versions of the model with more/less triangles) and the game will automatically pick the right LOD to use at the right distance to maintain a suitable triangle density.


I wonder, Hodgman, whether this policy has any further, or even more serious, reasoning beyond the redundant vertex-stage processing.

You mean, is there a reason for this besides vertex-processing costs? Yes: it's actually about pixel-stage processing costs!
Almost every GPU runs pixel shaders on a 2x2 "pixel quad" at a time. If your triangle cuts partially through this 2x2 pixel grid, the edges will only partially fill those grid cells, but the pixel shader is still executed for the entire 2x2 area. For a pixel-sized triangle, this means that you run the vertex shader 3 times for that pixel, plus you run the pixel shader 4 times!
Aside from the extra vertex-shading cost, small triangles can increase your pixel-shader cost by up to 4 times!

The larger your triangles are, the better your "pixel quad efficiency" is.

As for the "should be larger than 8x8" recommendation -- the fixed-function rasterization hardware tends to operate on even larger pixel-quads, such as 16x16 or 32x32 pixels... and GPU shader cores tend to be SIMD processors that operate on 8-64 pixels at a time (or 2-16 "2x2 pixel quads" at a time).
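The 2x2 pixel-quad effect can be demonstrated with a small Python sketch that rasterizes a triangle on a pixel grid and compares the pixels actually covered against the pixels shaded (every touched quad shades all 4 of its pixels). The triangle coordinates are arbitrary, and real hardware rasterization rules are more subtle than this center-sampling test:

```python
def covered(tri, x, y):
    """Point-in-triangle test at the pixel center, via the sign of edge functions."""
    px, py = x + 0.5, y + 0.5
    def edge(a, b):
        return (b[0] - a[0]) * (py - a[1]) - (b[1] - a[1]) * (px - a[0])
    e0, e1, e2 = edge(tri[0], tri[1]), edge(tri[1], tri[2]), edge(tri[2], tri[0])
    # Accept either winding order.
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or (e0 <= 0 and e1 <= 0 and e2 <= 0)

def quad_efficiency(tri, size=64):
    """Covered pixels divided by shaded pixels (4 per touched 2x2 quad)."""
    quads = set()
    visible = 0
    for y in range(size):
        for x in range(size):
            if covered(tri, x, y):
                visible += 1
                quads.add((x // 2, y // 2))  # which 2x2 quad this pixel belongs to
    shaded = 4 * len(quads)
    return visible / shaded if shaded else 0.0

big = [(1, 1), (60, 2), (30, 60)]                   # large: mostly interior quads
tiny = [(10.1, 10.1), (12.9, 10.4), (11.0, 12.8)]   # a few pixels: mostly edge quads
print(quad_efficiency(big), quad_efficiency(tiny))
```

The large triangle's quads are almost all fully inside, so its efficiency approaches 1.0, while the few-pixel triangle wastes a large fraction of its pixel-shader invocations on partially filled quads, which is the 4x blow-up described above.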

but I heard that Maya's animation tools are better.

The person who said that is either a liar or a Maya user who is familiar with its controls and not with Max.

Max is the best 3D software around; maybe overpriced, but it's still the best.

Maya has better rigging tools, but most of them will only matter to the most advanced animation artists. The extra animation tools Maya offers aren't that useful; most of them are intended for rare animations.

In other words, when it comes to animation Maya is flashy and at first looks more impressive, but once you start producing you will ignore most of Maya's animation tools.

The largest downside to Maya is how it's falling behind in 3D modeling; it reminds me of the path Cinema 4D took.

However, that doesn't mean you shouldn't give Maya a chance.

I just want to know if it’s okay to use 3ds Max and ZBrush for modeling and then use the created models in Maya to rig and animate them.

The problem with this is that it will confuse you (the shortcuts and controls differ greatly between these packages), and it will be expensive if you start selling your work. Also remember that you will be adding two or three 2D packages for textures to your workflow as well.

The advantage of this kind of workflow is that it gives you a chance to experience more software and will help you settle on the software you plan on using. You should give all the 3D packages a try, especially because they have free trials or licenses. Some great 3D software, like Blender, is completely free.

If you plan on using the best software for each job, you will have a difficult time; it's best to use what you feel comfortable with. Remember, software is only a tool; it won't do the work for you.

but I heard that Maya's animation tools are better.

The person who said that is either a liar or a Maya user who is familiar with its controls and not with Max.
Max is the best 3D software around; maybe overpriced, but it's still the best.

Isn't "best art software" a pretty subjective quality? I wouldn't call a difference in opinion a lie :P

I just want to know if it’s okay to use 3ds Max and ZBrush for modeling and then use the created models in Maya to rig and animate them.

Look, it would be awesome if you knew all these tools, but usually we don't have time to learn them all in detail. You need to pick one decent package to learn in depth, because you want to get past the software barrier and spend all your time improving your skills and making projects.

3ds max is perfectly capable of modeling, texturing and animating your game characters, as are other 3D packages like Maya, Modo, Cinema4D etc.
Don't think twice about it. With 3ds Max and ZBrush (Autodesk will try to bring you over to Mudbox, their sculpting software) you have enough software power to make production-quality game assets.

Thanks for your immense help!

I guess I will start with 3ds Max, since it has more courses that interest me. In the future I will try Maya and decide for myself which software suits me best.

have a nice day everyone!

This topic is closed to new replies.
