Gaming Underground Network

Come for the Mods, Stay for the Community!

 Why Your Model Doesn't Normal(ly) Suck



Posts : 588
Join date : 2014-04-04
Age : 23
Location : Sydney, Australia

PostSubject: Why Your Model Doesn't Normal(ly) Suck   Sun Jun 22, 2014 5:49 am

By the end of this post, hopefully you'll get the pun. If you don't, re-read it!

Why Your Model Doesn't Normal(ly) Suck

Time and time again, I see people remark, upon loading their model in Blender/3DS Max/whatever, that their model sucks, and they are often dissuaded from continuing with it. I will try to address some misconceptions and explain why, despite your model looking bad, it usually isn't.


This is going to be quite a technical discussion. Here's a quick glossary of terms should you be lost.

Map: a bitmap image used to store either colour (texture) or vector data.
Normal: a vector perpendicular to a surface at a given point; it defines which way that surface faces.
Vector: a geometric quantity representing a direction and a magnitude.

History of Normal Maps

In case you didn't know, models are composed of three things: vertices, edges and faces (polygons).

Most of the time, polycount is not as important as vertex count, but since the two quantities are closely related (there's an equation for the maximum/minimum number of vertices per face, but that's not important), people usually use polycount as the 'limit' for their mesh.

This polycount, although less restrictive than it used to be (people still insist on using barely any polygons regardless), is still a limiting factor nowadays. High-density ("high-poly") models can easily reach millions of polygons whenever one tries to model clothing creases, zippers, or even smoothly bevelled corners. Obviously, video cards cannot handle that many vertices when they have other things to handle too.

To circumvent this issue somewhat, normal maps were invented. But before we dig in, we'll analyse how normal vectors work.


If you remember high-school maths, you might remember the Cartesian coordinate system: x, y and z. Vectors are similar, except instead of representing a point or a line, a vector represents a direction and a magnitude (a scalar). Figure 1 shows how it works.

Figure 1: Euclidean vector in 3D space. Wikipedia.

Now, there are many laws and theorems about vectors, but we won't go into them too much. One thing about vectors, however, is that they have a scalar component (magnitude): the defining characteristic of normals is that each vector's magnitude is normalised to 1. This is important later.
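To make the "magnitude of 1" idea concrete, here's a minimal Python sketch (the `normalize` helper is hypothetical, not from any 3D package) that rescales a vector to unit length:

```python
import math

def normalize(v):
    """Scale a 3D vector so its magnitude (length) becomes exactly 1."""
    length = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / length, v[1] / length, v[2] / length)

# A vector of length 5 becomes a unit vector pointing the same way.
n = normalize((3.0, 0.0, 4.0))
print(n)  # (0.6, 0.0, 0.8)
```

The direction is unchanged; only the length is rescaled to 1, which is what makes a vector a valid normal.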

A more important thing about these vectors, however, is that they determine the visible direction of your face.

If you haven't noticed, a model is basically a shell: it's not solid. For the sake of computing power, only one side of the shell is rendered.

Each face has its own normal: if you choose to view the normal directions inside your 3D package, you'll see that each normal on your mesh is represented by an arrow shooting away from your mesh. That means that the normal direction is towards the outside.
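To illustrate, a face's normal can be computed from its vertices with a cross product. This is a hand-rolled Python sketch (`face_normal` is a hypothetical helper, not part of any 3D package):

```python
import math

def face_normal(a, b, c):
    """Unit normal of a triangle (a, b, c) with counter-clockwise winding."""
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    # The cross product u x v points away from the front side of the face.
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)

# A triangle lying flat on the XY plane faces straight up (+Z).
print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```

Swapping two vertices reverses the winding order and flips the normal, which is exactly how a face ends up rendered "inside out".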

So, to summarise: normals determine the direction of your face.


Now that we know what normals do, let's take shading into account. As mentioned before, each face has one normal. A shader will take each normal into account and try to 'bend' the normals near each edge so that the edge doesn't render as a hard crease.

Each normal is then combined mathematically with lighting and textures to generate the image you see.
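This 'bending' between normals can be sketched in a few lines of Python (hypothetical helpers; real shaders do this per pixel on the GPU): blend two normals, re-normalise, and feed the result into a simple diffuse (Lambert) lighting term:

```python
import math

def blend_normals(n0, n1, t):
    """Linearly interpolate two unit normals, then re-normalise."""
    v = tuple(n0[i] + (n1[i] - n0[i]) * t for i in range(3))
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    """Diffuse brightness: max(0, N . L), both vectors unit length."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# Halfway between a face pointing up and a face pointing sideways, the
# blended normal gives smooth shading instead of a hard edge.
n_mid = blend_normals((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), 0.5)
print(lambert(n_mid, (0.0, 0.0, 1.0)))  # about 0.707
```

With only the two original normals and no blending, the brightness would jump from 1.0 to 0.0 across the edge; the interpolated normal is what hides the polygon boundary.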

Almost immediately, there is a problem. Since faces limit the number of normals, how do we model details like a recessed valley or a screw? For the sake of performance, we normally don't model this detail into the low-poly mesh. Lighting will then look strange: although your texture might show a valley, when it interacts with lighting it will just look like black paint streaked onto the mesh.

So Why Does Your Mesh Suck?

When you port a mesh, the mesh you're porting is the low-poly one. The purpose of a low-poly one is simply to capture the rough outline of a more detailed mesh. That low-poly one will thus have barely any normals, and will look unbearably bad. See figure 2.

Figure 2. No normal map, no textures

Normal Maps

Now you see what the problem is: the mesh just looks bad. That's because it's not meant to look good: it's meant to look like a bad casting of your pretty mesh.

Normal maps take into account that you only have a limited number of normals on your model. Through a vector map (the blue-ish image everyone is familiar with), a normal map 'bends' the normals of your mesh at certain locations. Each colour channel represents a different vector component (i, j, k if you do vector maths; x, y, z if not).
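As a rough sketch of the encoding (the exact conventions vary by engine), each 8-bit channel stores one normal component remapped from [-1, 1] into [0, 255]; this hypothetical Python helper decodes a texel back into a vector:

```python
def rgb_to_normal(r, g, b):
    """Decode an 8-bit normal-map texel into a (roughly unit) vector."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

# The familiar flat normal-map blue (128, 128, 255) decodes to almost
# exactly (0, 0, 1): a normal pointing straight out, i.e. unbent.
print(rgb_to_normal(128, 128, 255))
```

This is also why normal maps look blue overall: most texels leave the normal near (0, 0, 1), and the z component lives in the blue channel.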

Knowing the specifics isn't important: knowing why normal maps work is. So, figure 3 demonstrates what the model looks like with a normal map.

Figure 3. Normal maps, no textures

Putting It All Together

So, the mesh looks much better. But you still might not be convinced: "What difference does it make once it's all textured anyway?" Well, figures 4 and 5 show the difference.

Figure 4. No normal maps, textures

Figure 5. Normal maps, textures

And here are the normal maps used:

Figure 6. Baked normal map

Figure 7. Height map converted to normal space.

So What Does That Mean For Me?

For users of mods, nothing.
For porters however, it means that if you want your model to look even somewhat nice, pay attention to the normal map and the topology of your ripped mesh.

Since normal maps are defined relative to the normals of your low-poly mesh, simply moving a vertex will change the normals of all faces connected to that vertex. That means that, when you port, you should minimise localised movement and work holistically (and, more importantly, rigid hard-surface parts should not be transformed AT ALL).

Secondly, different engines read normal maps in different ways, so the normal maps you ripped might not look right in FO/Skyrim. Thankfully, the main difference between them is the orientation of the map, namely whether the R or G channel is inverted (B is very rarely inverted).

So if you plug your normal map onto your model and load FO/Skyrim, but your model looks something like Figure 4, it means the orientation is wrong.

To fix it, load up Photoshop/GIMP and invert the green channel first (the most common difference), then try again. If that doesn't work, invert the blue channel as well; if it still doesn't work, invert the green channel back to its original state (inverting twice restores the original) and test with only the blue channel inverted. For FO/Skyrim, if your normal map ISN'T mostly blue, invert the blue channel.
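The channel flip itself is trivial. Here's a pure-Python sketch of what the image editor is doing (a hypothetical `flip_green` over a list of RGB texels; in practice you'd do this in GIMP/Photoshop or with an image library):

```python
def flip_green(pixels):
    """Invert the green channel of a list of (r, g, b) texels, i.e. the
    same operation as Invert on the G channel in Photoshop/GIMP."""
    return [(r, 255 - g, b) for (r, g, b) in pixels]

print(flip_green([(128, 200, 255)]))  # [(128, 55, 255)]
```

Note that applying it twice returns the original texels, which is why inverting a channel back is always safe to try.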

Lastly, it means that a normal map created with the Nvidia Normal Map filter won't cut it by itself: the filter only converts bumps and grooves from a height map, and doesn't capture the mesh's actual normal interpolation. I'm not telling you not to use it at all (use it; most of the cuts and dirt on the above normal maps were created with it), I'm saying use it WITH your baked normal map.


Normal maps are an unfortunate hack around the limitations of present technology. Nevertheless, we must work around this limitation, acknowledge it and take advantage of it. Hopefully, with the conclusion of this article, you'll be able to do just that, at least a little.

Further Reading

Normal Map - Polycount Wiki
Making Sense of Hard Edges, Normals, etc


If you have any comments or questions, feel free to post them below!