We will discuss the storage and representation of pixel information, and how to create simple shapes and texture effects through mathematical expressions.
1.Create Image
We can use the createImageData() method to draw an image from scratch. In this process, we traverse each pixel on the canvas and use a simple expression to decide what colour each pixel should be.
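A minimal sketch of this setup might look like the following; the canvas id and the drawing flow are placeholders for illustration rather than code from the original post:

```js
// A minimal sketch, assuming an HTML <canvas> element with id "canvas"
const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');

// Create an empty pixel buffer the size of the canvas
const imageData = ctx.createImageData(canvas.width, canvas.height);

// ... decide the colour of each pixel here (see the next section) ...

// Draw the buffer back onto the canvas
ctx.putImageData(imageData, 0, 0);
```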
2.Write Output Pixel
To set the RGBA value of a pixel, we first need to work out its position in the pixel buffer:
```js
position = (x + y * imageData.width) * 4;
```
In JavaScript, the pixels are kept in the imageData buffer, and we can write values directly into it. It is actually a one-dimensional array, so we cannot index it as a rectangular image directly; we have to compute each pixel's position ourselves.
To make this easier, we can use nested for loops to access the data in rows and columns, so that each pixel has a coordinate (i, j); the four RGBA channels for that pixel then sit next to each other in the array.
Each value is 8-bit, so each pixel occupies 32 bits of information. As a quick calculation: to read the pixel at y = 50, x = 10 on a 100 * 100 pixel image, we need to look at index ((50 * 100) + 10) * 4 in the array; the next four values are that pixel's RGBA information.
Summarizing, we get a general formula:
```js
((y * imageWidth) + x) * colourChannels
```
It should be noted that different platforms may use different color description methods, but in JavaScript it is usually RGBA.
We use the nested for loops to simplify access to the pixel data, treating i as the row index (y) and j as the column index (x). To find the current pixel, we multiply i by the width of the image and add j.
Because the array holds four values per pixel, it is four times the size of the image, so we also need to multiply by 4.
```js
imageData.data[((imageWidth * i) + j) * 4 + 0] = 255;
```
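Putting this together, a sketch of the full nested loop could look like this; the colour expressions are arbitrary examples, and imageData is the buffer created with createImageData() earlier:

```js
// Assumes imageData was created with ctx.createImageData() as above
const imageWidth = imageData.width;
const imageHeight = imageData.height;

for (let i = 0; i < imageHeight; i++) {      // i is the row index (y)
  for (let j = 0; j < imageWidth; j++) {     // j is the column index (x)
    const index = ((imageWidth * i) + j) * 4;
    imageData.data[index + 0] = 255;                      // red
    imageData.data[index + 1] = (j / imageWidth) * 255;   // green fades left to right
    imageData.data[index + 2] = (i / imageHeight) * 255;  // blue fades top to bottom
    imageData.data[index + 3] = 255;                      // alpha (fully opaque)
  }
}

ctx.putImageData(imageData, 0, 0); // draw the result
```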
3.Basic Image Processing
We can also load an existing image's pixels with the getImageData() method and perform simple processing on them.
For example, to increase the brightness of the picture:
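One way to do this (a sketch rather than the original post's exact code, assuming ctx and canvas from earlier) is to add a constant to each colour channel and clamp the result to 255:

```js
// A sketch: brighten the canvas by adding a constant to each colour channel
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
const data = imageData.data;
const amount = 50; // how much brighter to make the image

for (let p = 0; p < data.length; p += 4) {
  data[p + 0] = Math.min(255, data[p + 0] + amount); // red
  data[p + 1] = Math.min(255, data[p + 1] + amount); // green
  data[p + 2] = Math.min(255, data[p + 2] + amount); // blue
  // data[p + 3] is alpha, left unchanged
}

ctx.putImageData(imageData, 0, 0);
```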
Increase contrast:
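A common approach (again a sketch, not necessarily the code behind the original example) is to scale each channel away from the midpoint of 128:

```js
// A sketch: increase contrast by scaling each channel away from the midpoint (128)
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
const data = imageData.data;
const factor = 1.5; // > 1 increases contrast, < 1 decreases it

for (let p = 0; p < data.length; p += 4) {
  for (let c = 0; c < 3; c++) { // red, green, blue
    const value = (data[p + c] - 128) * factor + 128;
    data[p + c] = Math.max(0, Math.min(255, value));
  }
  // alpha (data[p + 3]) is left unchanged
}

ctx.putImageData(imageData, 0, 0);
```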
4.Rotating an Image by Hand
The example below shows how to rotate pixels to new positions. The code has two important parts: calculating where each pixel's colour should come from, and the 2D rotation used to do that calculation. (A complete sketch of the loop is given after the breakdown below.)
```js
// This is a 2D matrix rotation!!! It's very similar to the code we used to rotate points around a circle.
```
- i is the current pixel's Y position (its row), and j is its X position (its column)
- i and j here act like the radius of the circle: together they describe how far the pixel is from the origin
- theta is the angle, or how far round the circle we want to rotate
- We use Math.floor to round down, as we need an actual pixel (x, y) coordinate; if we don't do this, things could get messy
- The two new vars, x and y, are going to be where we are copying from - we want these colour values to be written to the current pixel
- The next bit of code does that part:
```js
imageData2.data[((imageWidth * i) + j) * 4] = data[((imageWidth * y) + x) * 4];
```
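Putting those pieces together, a sketch of the whole rotation loop might look like this; theta, imageWidth, imageHeight, data (the source pixels) and imageData2 (the output buffer) are assumed to be set up as in the linked example:

```js
// A sketch of the rotation loop described above
for (let i = 0; i < imageHeight; i++) {     // i: the output pixel's row (y)
  for (let j = 0; j < imageWidth; j++) {    // j: the output pixel's column (x)
    // 2D rotation: work out which source pixel to copy from
    const x = Math.floor(j * Math.cos(theta) - i * Math.sin(theta));
    const y = Math.floor(j * Math.sin(theta) + i * Math.cos(theta));

    // Only copy if the rotated coordinate still falls inside the image
    if (x >= 0 && x < imageWidth && y >= 0 && y < imageHeight) {
      for (let c = 0; c < 4; c++) { // copy all four RGBA channels
        imageData2.data[((imageWidth * i) + j) * 4 + c] =
          data[((imageWidth * y) + x) * 4 + c];
      }
    }
  }
}
```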
5.Image Processing using Convolution
- We can produce a wide range of image processing effects with a process called ‘convolution’
- Convolution allows us to perform blurs, edge detections and other common filter processes
- To do this, we create what is called a ‘kernel’, which is a small grid of numbers, 3 * 3 for example.
- The kernel decides what the central output pixel should be by multiplying the surrounding pixels by values in the grid
- After the multiplications, all the values are added together to find the value for the centre pixel (for a blur kernel this amounts to an average); a sketch of this process is given after the links below
- The link below is an excellent visual explanation of image kernels:
- http://setosa.io/ev/image-kernels/
- There are even more here:
- http://aishack.in/tutorials/image-convolution-examples/
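To make the idea concrete, here is a hedged sketch of applying a 3 * 3 kernel to an imageData buffer. The kernel shown is a simple box blur, and names like imageWidth, imageHeight and ctx are assumptions carried over from the earlier sections rather than code from the linked pages:

```js
// A sketch: apply a 3x3 convolution kernel, writing into a fresh buffer
// so that we never read pixels we have already changed
const kernel = [
  [1 / 9, 1 / 9, 1 / 9],
  [1 / 9, 1 / 9, 1 / 9],
  [1 / 9, 1 / 9, 1 / 9]
]; // a simple box blur

const src = imageData.data;
const output = ctx.createImageData(imageWidth, imageHeight);

for (let i = 1; i < imageHeight - 1; i++) {   // skip the 1-pixel border in this sketch
  for (let j = 1; j < imageWidth - 1; j++) {
    for (let c = 0; c < 3; c++) {             // red, green, blue
      let sum = 0;
      for (let ki = -1; ki <= 1; ki++) {
        for (let kj = -1; kj <= 1; kj++) {
          sum += kernel[ki + 1][kj + 1] *
                 src[((imageWidth * (i + ki)) + (j + kj)) * 4 + c];
        }
      }
      output.data[((imageWidth * i) + j) * 4 + c] = sum;
    }
    output.data[((imageWidth * i) + j) * 4 + 3] = 255; // keep alpha opaque
  }
}

ctx.putImageData(output, 0, 0);
```

Note that this sketch leaves a one-pixel border untouched; a full implementation would handle the edges, for example by clamping coordinates to the image bounds.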
6.Performing an Edge Detection / Gradient analysis
- Here’s an example that uses the simplest possible approach to perform an edge detection
- This edge detection is very efficient and actually very powerful
- https://mimicproject.com/code/e9e74da1-ecd7-719d-9c8c-51d5b52d4cad
- (remember to press the play button if the images don’t appear)
- The example uses a very simple image processing approach which operates in place, rather than as a separate array
- So we’re taking a short-cut (no convolution kernel, just manual multiplication of pixels)
- It uses a ‘kernel’ of -1 0 1 in only one dimension. You can select either the X or Y dimension to see the effect.
- This produces a basic edge detection - but it can do much more than this (the full loop is sketched after this list)
```js
imageData2.data[((imageWidth * i) + j) * 4] =
  (-1 * data[((imageWidth * i) + (j - 1)) * 4]) + data[((imageWidth * i) + (j + 1)) * 4];
```
- The above code takes the pixel before the current position, and multiplies it by -1, which inverts it (so 128 would become -128).
- It then takes the pixel after the current pixel (e.g. 128) and adds it (it ignores the data in the current pixel entirely).
- This gives you a new image which only contains the difference between the previous pixel and the next pixel
- In the above example, the output pixel value would be 0. This would mean that nothing in the image had changed at that point
- This produces what we call gradients
- These can be used to create edge detection, but frankly, the gradients are much more powerful than that.
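For completeness, a sketch of the full horizontal-gradient loop could look like this; data (the source pixels), imageData2 (the output buffer), imageWidth and imageHeight are assumed to be set up as in the linked example:

```js
// A sketch of the -1 0 1 'kernel' applied in the X dimension only
for (let i = 0; i < imageHeight; i++) {
  for (let j = 1; j < imageWidth - 1; j++) {   // skip the first and last columns
    for (let c = 0; c < 3; c++) {              // red, green, blue
      imageData2.data[((imageWidth * i) + j) * 4 + c] =
        (-1 * data[((imageWidth * i) + (j - 1)) * 4 + c]) +
               data[((imageWidth * i) + (j + 1)) * 4 + c];
      // negative results are clamped to 0 by the Uint8ClampedArray
    }
    imageData2.data[((imageWidth * i) + j) * 4 + 3] = 255; // opaque alpha
  }
}
```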
Below is a simple kernel I made to achieve a Gaussian blur effect; you can also try adjusting the values in main.js.
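As a stand-in here (the exact values in main.js may differ), a common 3 * 3 approximation of a Gaussian kernel is:

```js
// A common 3x3 Gaussian-style kernel (a stand-in; the values in main.js may differ)
const gaussianKernel = [
  [1 / 16, 2 / 16, 1 / 16],
  [2 / 16, 4 / 16, 2 / 16],
  [1 / 16, 2 / 16, 1 / 16]
]; // the weights sum to 1, so the overall brightness is preserved
```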
About this Post
This post is written by Siqi Shu, licensed under CC BY-NC 4.0.