We will be looking at noise generators and vertex shaders in detail. The main technical learning we are reinforcing is GLSL operators and functions, because we rely on functions a lot in GLSL to simplify things, such as generating random numbers in a shader; we will explore how to use these to make interesting textures.
1. Noise
In computer graphics we use noise all the time; it is an essential part of making everything look and feel more natural. We use noise in textures, objects, lighting and simulations, including most forms of AI.
In order to use noise in our shaders, we need to be able to calculate randomness. There may well be no such thing as 'random': when we call something random, we just mean that we cannot detect a pattern in it. This makes the whole problem of noise much easier to grasp, and it helps us understand the world a lot better too!
White noise is a signal with equal energy at all frequencies. This means that every frequency is represented in the signal, and we do not associate it with any form of order.
We can also 'shape' noise in different ways; there are many different 'noise' distributions, and they have different trade-offs. Throwing two dice and summing them is a good example - the totals cluster around the middle values, approximating a 'normal' distribution - but to model this, we need a generator. Brown noise and pink (1/f) noise can also generate interesting results, sometimes fractal in nature. 1/f noise is a good model of natural variation.
So, logically, we can use techniques from radio to generate noise signals. We can use analogue radio modelling to create random number generators; these make really good generators, but they are not talked about much.
Two common methods for radio transmission are AM and FM:
- AM = Amplitude modulation
- FM = Frequency modulation
Both methods require the following:
- A carrier (a sine wave)
- A modulator (a modulating wave)
- A modulation index (the amplitude of the modulator)
Both the carrier and the modulator have a frequency. In amplitude modulation, we modulate the amplitude of the carrier with the modulator; in frequency modulation, we modulate the frequency of the carrier with the modulator.
Let’s take a look at an example using FM to generate randomness in a shader:
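The example code did not survive in this copy of the post, so here is a minimal sketch of the idea: treat sin() as the carrier and multiply by a very large constant (acting like a huge modulation index) so the output phase wraps far too fast to show a pattern; fract() then keeps the result in [0,1). The uniform name u_resolution and the constant 43758.5453 are conventional choices, not values from the original post.

```glsl
#version 150

uniform vec2 u_resolution;  // assumed uniform: viewport size in pixels
out vec4 outColor;

// FM-style pseudo-random generator: sin() is the carrier, and the
// large constant scatters its output like an extreme modulation index;
// fract() keeps only the unpredictable fractional part, in [0,1).
float random1d(float x) {
    return fract(sin(x) * 43758.5453);
}

void main() {
    float st = gl_FragCoord.x / u_resolution.x; // normalised x coordinate
    float r = random1d(st * 100.0);             // scale up to break visible order
    outColor = vec4(vec3(r), 1.0);              // greyscale noise stripes
}
```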
Now let’s look at methods for 2D noise generation using FM:
We can do very interesting things using the integer or fractional parts of our coordinate systems. By using both at the same time, we can have randomness and order together; this is powerful, and it is what you need in order to create more natural-looking images.
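A sketch of that idea is 1D value noise: floor() gives the ordered part (which cell we are in), fract() gives the position inside the cell, and mix() blends the random values of neighbouring cells. The helper random1d() here is the same assumed FM-style generator as before.

```glsl
// Assumed FM-style pseudo-random helper (not from the original post).
float random1d(float x) {
    return fract(sin(x) * 43758.5453);
}

// Value noise: order from the integer part, randomness from the
// fractional part, blended smoothly between neighbouring cells.
float noise1d(float x) {
    float i = floor(x);              // integer part: which cell we are in
    float f = fract(x);              // fractional part: position inside the cell
    float u = f * f * (3.0 - 2.0 * f); // smoothstep-style easing curve
    return mix(random1d(i), random1d(i + 1.0), u);
}
```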
The problem with 1D noise approaches is that they do not calculate noise across both the x and y dimensions, so you can see lines and repetitions on both axes. By using the dot product (again), we can get noise values for fragments based on their magnitude; the dot product takes a 2D input and produces a 1D output, and if we use it as the input to our FM generator, it produces interesting effects in 2D.
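A widely used sketch of this dot-product trick is below; the vec2 constants are arbitrary 'magic' values commonly seen in shader tutorials, not values taken from the original post.

```glsl
// 2D pseudo-random: dot() collapses the 2D coordinate to a single
// float, which then drives the same FM-style generator, so the
// result varies across both axes instead of in 1D stripes.
float random2d(vec2 st) {
    return fract(sin(dot(st, vec2(12.9898, 78.233))) * 43758.5453);
}
```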
2. Vertex Shaders
Let’s take a look at the vertex shader example:
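The original example is missing here, so this is a minimal vertex shader sketch. The uniform and attribute names (modelViewProjectionMatrix, position, normal) are assumptions in the style of frameworks such as openFrameworks; your framework may bind them under different names.

```glsl
#version 150

uniform mat4 modelViewProjectionMatrix; // assumed uniform name

in vec4 position;  // attribute streamed from the mesh's VBO
in vec3 normal;    // per-vertex normal from the VBO

out vec3 vNormal;  // passed on to the fragment shader

void main() {
    vNormal = normal;
    // gl_Position is the vertex shader's required output:
    // the vertex position in clip space.
    gl_Position = modelViewProjectionMatrix * position;
}
```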
In summary, we have two interacting shaders:
- Vertex shader
- Fragment shader
Together they are used to create a 'material'.
The vertex shader allows us to pass on and modify aspects of OpenGL meshes; its final output is gl_Position. gl_Position is a vec4 (xyzw); w is the 'homogeneous' coordinate, by which xyz is divided, and it is very useful.
Materials are bound to a specific mesh; so our shaders get bound to a specific mesh, such as our model, plane or sphere, as part of the material. We can have quite a lot of shaders in memory, but only one is active at any one time - this affects how we use them.
All our meshes can be processed procedurally without issue inside the draw loop:
- Bind our material to our mesh
- Draw the mesh
- Unbind our material
- Bind a new material to new geometry
- Draw the new mesh, unbind, and so on…
Vertex shaders have the same uniforms as fragment shaders; we should use high-precision uniforms when sharing them between fragment and vertex shaders, otherwise our shader code might not run.
Some programmers just write one big vertex and fragment shader pair, and specify a variable that sets which part of both shaders should be active for each piece of geometry. They bind that single pair of shaders to a single material for all the geometry, pass in a uniform to select which part of each shader should be active, draw the specific geometry, change the active shader sections, draw the next bit of geometry, and so on.
Vertex shaders can have 'varying' variables; these can be passed to fragment shaders, but they need to be declared in both the vertex and the fragment shader in order for this to work.
Varyings are deprecated in modern OpenGL and have been replaced by 'in' and 'out'; in practice, both styles are still used all the time. Attributes are the vertex shader inputs; essentially, they are the elements of the vertex buffer object (VBO), which contains the elements of the mesh you have bound to the material. They get passed to the GPU and can be accessed as vertex shader attributes. This means we can get all the data streams from the mesh as attributes, modify them, and replace them with new data.
- in: links into a shader from the previous stage in the pipeline
- out: links out of a shader to the next stage in the pipeline
- attribute: same as 'in' for the vertex shader
- varying: same as 'out' for the vertex shader, same as 'in' for the fragment shader
We can get the normals out of the vertex buffer and use them to set the colour of the fragment at any position; this is super easy, and you already know how to do it.
Using a varying to create a light:
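The original example is missing here, so this is a minimal fragment shader sketch of a diffuse light driven by a varying normal. It assumes the vertex shader declares `out vec3 vNormal` and writes the mesh normal into it; the light direction is a made-up value for illustration.

```glsl
#version 150

in vec3 vNormal;   // must match an 'out vec3 vNormal' in the vertex shader
out vec4 outColor;

void main() {
    // Assumed light direction; any normalised vector will do.
    vec3 lightDir = normalize(vec3(0.5, 0.8, 1.0));
    // Lambertian (diffuse) term: brighter where the surface normal
    // points towards the light, clamped so back faces go to black.
    float diffuse = max(dot(normalize(vNormal), lightDir), 0.0);
    outColor = vec4(vec3(diffuse), 1.0);
}
```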
Try to understand how to generate lighting effects using normals, and feel confident that we can generate noise using simple approaches.
About this Post
This post is written by Siqi Shu, licensed under CC BY-NC 4.0.