REMOTE SENSING: THE UNSUNG HERO OF OUR EVERYDAY LIVES

Satellites carry a variety of sensors

The first blog I wrote on remote sensing ended at the point where the light that reaches a satellite’s sensors is digitised into a matrix of numbers, a form that computers understand well but humans not so much. A bunch of numbers on a page wouldn’t be pleasing or informative to the human eye, so the matrix of numbers has to be processed back into light, or “radiance”, data.

Data processing: a peek into nerd world

This is where humans combine their understanding of light with computers’ understanding of numbers. Together they correct and alter the data until we can see it as an image. This process is called rectification, which is just a nerd word for ‘alteration’.  

Rectification starts with pre-processing. This has to be done because the radiance data that comes from the satellite can look a bit like the squiggly lines you see on your TV when it isn’t tuned properly. This is called radiometric distortion, and it happens when the light has a bumpy ride through the atmosphere.

Colorful rainbow of refracted light on water droplets arcing through the sky behind a telecommunications tower on a wet rainy day

As light passes through the atmosphere it gets scattered and reflected by dust, clouds, water vapour and, of course, pollution. Pre-processing therefore also needs to correct the data for this “atmospheric noise”.
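
One simple and commonly used way to reduce this haze is dark-object subtraction: assume the darkest pixel in a band should really be black, and subtract its value from every pixel. The sketch below uses a made-up band of numbers and is only meant to show the idea, not a full atmospheric correction.

```python
import numpy as np

def dark_object_subtraction(band):
    """Very simple haze correction: assume the darkest pixel in the band
    should really be zero (e.g. deep shade or clear water) and subtract
    that minimum value from every pixel."""
    dark_value = band.min()
    corrected = band - dark_value
    return np.clip(corrected, 0, None)  # keep brightness values non-negative

# Hypothetical 8-bit band with a haze offset of roughly 12
raw_band = np.array([[12, 40, 90],
                     [15, 60, 200],
                     [13, 55, 180]], dtype=np.int32)
print(dark_object_subtraction(raw_band))
```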

The other problem that can affect the data is that the latitude and longitude coordinates recorded by the sensor can be skewed compared to the real-world coordinates on the Earth’s surface. This is called geometric distortion, and it can happen if there is a slight change in the satellite’s orbit.
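
Geometric correction is usually done by matching “ground control points”, places that can be located both in the image and on a map, and fitting a transformation between the two sets of coordinates. The sketch below fits a simple first-order (affine) transform with NumPy; the control point coordinates are invented purely for illustration.

```python
import numpy as np

# Hypothetical ground control points: (column, row) positions in the distorted image
image_pts = np.array([[10, 10], [500, 12], [15, 480], [505, 490]], dtype=float)
# ...and the real-world map coordinates (easting, northing) of the same points
map_pts = np.array([[300100, 6200950], [300590, 6200948],
                    [300105, 6200480], [300595, 6200470]], dtype=float)

# Fit a first-order (affine) transform: map = [col, row, 1] @ coeffs
design = np.column_stack([image_pts, np.ones(len(image_pts))])
coeffs, *_ = np.linalg.lstsq(design, map_pts, rcond=None)

def image_to_map(col, row):
    """Convert an image pixel position to approximate map coordinates."""
    return np.array([col, row, 1.0]) @ coeffs

print(image_to_map(250, 250))  # map coordinates of the image centre
```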

Once the radiance data is clear of squiggles and properly aligned, the image can be classified. The image is made up of tiny, equally sized squares called pixels, and each pixel carries a number that represents the brightness of that part of the image; this conversion of light into numbered pixels is, incidentally, also how our smartphones can take selfies! Classification sorts pixels with similar brightness values into classes such as water, soil or vegetation. When humans and computers work together, with the human pointing out examples of each class for the computer to learn from, it is called supervised classification. If the human decides to have a coffee break and leaves the computer to group the pixels on its own, it is called unsupervised classification.
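
As an illustration of unsupervised classification, the sketch below lets a standard clustering algorithm (k-means, via scikit-learn, which is assumed to be available) group the pixels of a made-up three-band image into four classes without any human labelling.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical image: 100 x 100 pixels with 3 spectral bands of random brightness
np.random.seed(0)
image = np.random.randint(0, 256, size=(100, 100, 3))

# Reshape so every pixel becomes one row of band values
pixels = image.reshape(-1, 3)

# Unsupervised classification: the computer groups the pixels into 4 classes on its own
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
classified = kmeans.labels_.reshape(100, 100)

print(classified.shape)       # (100, 100) map of class numbers
print(np.unique(classified))  # the four classes found
```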

Next, the image is enhanced to make it look better. It can be stretched and filtered to make tones and patterns look clearer, just like the cell phone apps that can modify a photo to make a granny look like a teenager!
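
To make the “stretching” idea concrete, here is a minimal sketch of a linear contrast stretch, one common enhancement technique: brightness values between the 2nd and 98th percentiles are spread out to fill the full display range. The tiny example array is made up.

```python
import numpy as np

def contrast_stretch(band, low_pct=2, high_pct=98):
    """Stretch the band so the 2nd-98th percentile of brightness values
    fills the full 0-255 display range, making tones easier to see."""
    low, high = np.percentile(band, [low_pct, high_pct])
    stretched = (band - low) / (high - low)        # scale to roughly 0-1
    return (np.clip(stretched, 0, 1) * 255).astype(np.uint8)

# Hypothetical dull band with values bunched between 60 and 90
dull = np.random.randint(60, 91, size=(5, 5))
print(contrast_stretch(dull))
```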

You might want to take a break at this point, stretch your legs and make coffee, because image transformation comes next and it is the last and most technical part of the rectification process. This is where the wizard residing in the computer casts its spell and turns science into magic. The radiance data holds each wavelength of light as a separate wave band. So, visible light presents itself in the three primary colours, which are red, green and blue. Yes, sorry artists, but your primary colours of red, yellow and blue are scientifically incorrect! In the process of image transformation, the red, green and blue wavelengths can be combined into a single waveband, called the panchromatic band. Or, the three wavelengths can be presented as separate bands, giving a natural colour image, like a colour photograph.

The fun part comes when you start swapping the colour bands with invisible light, like infrared. A false colour image is made when you combine the green and red bands with the infrared band and leave out the blue one. In the resulting image, all green things look blue, all red things look green and all infrared things look red! Why would you do this, you may ask. These images can be used to see differences in plant cover, because leaves reflect infrared strongly, so the plants look red.
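
As a small illustration of how composites are put together, the sketch below stacks made-up brightness bands into a natural colour image (red, green, blue) and a standard false colour image (infrared, red, green).

```python
import numpy as np

# Hypothetical brightness bands (0-255) for one small scene
blue  = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
green = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
red   = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
nir   = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)  # near infrared

# Natural colour image: red, green and blue shown in their own channels
natural = np.dstack([red, green, blue])

# False colour image: infrared shown as red, red as green, green as blue
false_colour = np.dstack([nir, red, green])

print(natural.shape, false_colour.shape)  # (4, 4, 3) each
```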

You can take the magic a step further by spectral or band ratioing. This process uses addition, subtraction, multiplication or division algorithms to identify certain parts of an image. For example, farmers and ecologists use an index called the Normalised Difference Vegetation Index (NDVI). Because healthy vegetation absorbs visible red light and reflects near infrared, you can subtract the red band from the near infrared band and divide the result by the sum of the two; vegetation gives high values and other surfaces give low ones. So, in the new transformed image, vegetation is much easier to identify and measure than soil or water. This can show, much more clearly than a natural colour image, which part of a crop or grassland has the most vigorous vegetation, and decisions can then be made to help along the sicklier plants. If we could see infrared light, healthy trees would appear extremely bright and struggling plants would look duller.
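
The NDVI calculation itself is short enough to show directly. The sketch below applies the formula (NIR - Red) / (NIR + Red) to three made-up pixels representing a leafy crop, bare soil and open water.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy vegetation reflects NIR and absorbs red, so it scores close to +1."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-10)  # tiny value avoids division by zero

# Hypothetical pixels: a leafy crop, bare soil and open water
nir_band = np.array([200, 80, 10])
red_band = np.array([30, 70, 15])
print(ndvi(nir_band, red_band))  # high for the crop, near zero for soil, negative for water
```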

Computers can’t see light, but they can see numbers.
Light waves are converted into numbers and renamed data.

The data is analysed: a peek into a geographer’s world

In order for a human to make sense of the data revealed by the image, and to extract the information they need, the resolution of the image must be taken into account. Satellite sensors have different spatial, spectral, temporal and radiometric resolutions, depending on the type of sensor and also on the altitude and speed of the satellite’s orbit.

The amount of detail that you see in an image depends on its spatial resolution, or the size of its pixels. If the satellite’s orbit is close to the earth, the onboard sensor has a high spatial resolution and small details can be seen.

For example, the spatial resolution may be 20 metres, and each pixel will then represent 20 m x 20 m on the ground. Sensors on board satellites that orbit further from earth have a low spatial resolution, and only large features can be seen. Scale shows the ratio of a distance on an image to the actual ground distance on the earth’s surface. For example, a scale of 1:100 000 means that an object one centimetre long on the image is actually 100 000 centimetres (or 1 kilometre) long on the ground.
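
The scale arithmetic can be written out as a tiny calculation; the function name below is just for illustration.

```python
def ground_distance_cm(image_distance_cm, scale_denominator):
    """At a scale of 1:scale_denominator, one unit measured on the image
    represents scale_denominator of the same units on the ground."""
    return image_distance_cm * scale_denominator

cm = ground_distance_cm(1, 100_000)          # 1 cm on a 1:100 000 image
print(cm, "cm =", cm / 100_000, "km")        # 100000 cm = 1.0 km on the ground
```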

Spectral resolution is the range of colour in the digital image. This depends on how many wavelengths of light a sensor can read. Panchromatic sensors have a coarse spectral resolution and can only make black-and-white images using one broad spectral band that covers the blue, green and red wavelengths together. Multispectral sensors make images that record the red, green and blue wavebands separately. This gives them a finer spectral resolution, so they can distinguish features with different colours, like water and vegetation, based on their reflectance in each wavelength (e.g. Landsat). Multispectral sensors can also mix and match wavebands from across the electromagnetic spectrum to create composite images. For example, false colour images are used to detect differences in vegetation cover: because leaves reflect infrared, plants look red in these images. Hyperspectral sensors use very narrow bands to detect fine details of a feature. Each band is so sensitive that minuscule detail, such as different rock types, can be identified (e.g. ASTER).

Temporal resolution is the time it takes for a satellite to complete one orbit cycle and pass over the exact same area of earth. A sensor that captures fine spatial detail can only image a narrow strip of ground on each pass, so it takes longer to revisit the same place and is said to have a coarse temporal resolution. For example, Landsat 8 has a temporal resolution of 16 days and a spatial resolution of 30 m. Sensors that image a much wider strip of ground in coarser detail can revisit the same place quickly and have a fine temporal resolution. For example, MODIS revisits the same place daily, but has a spatial resolution of 500 m. By imaging the same places repeatedly over time, we can see changes that happen on the earth, such as changes in vegetation because of climate change, flooding, urban development or deforestation.

Radiometric resolution is the range of brightness levels an image can portray, in other words the ability of a sensor to distinguish different levels of energy. The number of levels is determined by the number of bits used to record the data, because bits are the units used to code numbers in binary format. The data is represented by digital numbers starting at zero: the absence of brightness, black, is 0, and white is the highest number in the range. The more bits, the finer the radiometric resolution, which means smaller differences in brightness can be recorded. For example, if a sensor uses 8 bits to record data, there are 2^8 = 256 digital values available, ranging from 0 to 255; but if only 4 bits were used, then 2^4 = 16, so only values 0 to 15 would be available.
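
The relationship between bit depth and brightness levels is easy to check with a few lines of code; the sketch below simply evaluates 2 to the power of the number of bits.

```python
def brightness_levels(bits):
    """Number of digital values a sensor can record with a given bit depth."""
    levels = 2 ** bits
    return levels, 0, levels - 1  # count, lowest value, highest value

for bits in (4, 8):
    count, lo, hi = brightness_levels(bits)
    print(f"{bits} bits -> {count} levels, digital numbers {lo} to {hi}")
# 4 bits -> 16 levels (0 to 15); 8 bits -> 256 levels (0 to 255)
```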

The image is interpreted: enter the spy

The image is interpreted by visual analysis of the characteristics of the features in it. This means that you look at the image and try to guess what is in it. This is not necessarily as easy as it sounds. Sometimes it is difficult to recognise an object on the ground because even familiar things seen from a bird’s eye view can look different. You might even have to take a trip to the actual target area on earth to confirm what you think is in the image. This is called ground truthing.

Sometimes it takes a human eye to identify what is being seen

Luckily there are some tricks you can use to look for clues that will help you identify features. Tone is the brightness or colour of objects in an image. A sandy beach will look much lighter than the ocean.  Shape shows the structure and outline of objects. For example, natural features are curvy in shape; urban and agricultural areas have straight lines. You can tell the size of a feature from its scale, but also relative to other easily identified features in the image. Small squares could be houses but big squares could be factories. You can see a pattern in some objects when they have similar tones, textures, shapes or sizes. For example, orchards and mealies are evenly spaced vegetation. Rough textures have a mottled tone and smooth textures have hardly any change in tone. Shadows can show what the object is from its profile, or they can show the height of an object. But they can also completely obliterate an object in the shade. Association is the relationship between other recognisable objects in the image and the feature that you can’t identify. For example, a marina would be on the ocean or a lake and not in the middle of a grassland.

In one way, this part of the job needs to be done with human eyes, because humans can recognise things through associative experience. The only problem is that humans can only analyse one band at a time, whereas computers can digitally analyse many bands at once. The computer’s identification of objects may not be perfect, but it saves a lot of time. The combination of human and machine probably gives the best outcome.

Geographic Information Systems: a little help from the computer

Thermal imaging showing bands of hot and cold temperatures on a world map with South America and Africa in the centre

Once the image has been analysed and interpreted, it can be merged with other images in layers to create a detailed, intelligence-based study of the target area using a Geographic Information System (GIS). A GIS uses the image data to make layers of maps that can be added to, or removed from, a base map.

These maps have hundreds of uses. If you are an ecologist, you can use them to track animal movements across a landscape; overlay a vegetation map on top of the animal locations and you can see what they like to eat. If you are a farmer, you can use an NDVI layer to see what condition your crops are in. If you are a geologist, you can overlay a vegetation map and a geology map to see the most likely place to find gold! If you are a policeman, you can use layers of crime incidents over a neighbourhood to identify crime hotspots. Along with the image of the area are layered data sets, which could include details such as street names, schools, shops, crime locations and even fine details such as injury locations. Trends and patterns can then be extrapolated from this data (Satellite Imaging Corporation, 2012).
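
As a rough sketch of how such layering might look in code, the example below overlays a point layer (crime incidents) on a polygon layer (neighbourhoods) using the GeoPandas library, which is assumed to be installed; the file names and the ‘neighbourhood_name’ column are hypothetical.

```python
import geopandas as gpd

# Hypothetical layers: neighbourhood polygons and crime incident points,
# both already in the same coordinate reference system
neighbourhoods = gpd.read_file("neighbourhoods.shp")   # hypothetical file
crimes = gpd.read_file("crime_incidents.shp")          # hypothetical file

# Overlay the point layer on the polygon layer: which neighbourhood
# does each incident fall inside?
joined = gpd.sjoin(crimes, neighbourhoods, how="inner", predicate="within")

# Count incidents per neighbourhood to find the hotspots
hotspots = joined.groupby("neighbourhood_name").size().sort_values(ascending=False)
print(hotspots.head())
```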

The magic of remote sensing begins when different wave lengths of light reveal hidden information about the world around us. The applications of remote sensing are numerous and multifaceted. The only limitation to the use of remote sensing lies within the imagination of the people working with and developing the technology.

REFERENCES & FURTHER READING

http://scomp5063.wur.nl/courses/grs10306/Clevers/RS%20CH4%20Preprocessing/IGI_preprocessing%20RS%20ppt.pdf

https://crisp.nus.edu.sg/~research/tutorial/process.htm

https://www.researchgate.net/publication/324943179_UNIT_12_IMAGE_ENHANCEMENT_AND_TRANSFORMATION

https://en.wikipedia.org/wiki/False_color

http://geology.wlu.edu/harbor/geol260/lecture_notes/Notes_rs_ratios.html

http://www.fis.uni-bonn.de/en/recherchetools/infobox/professionals/resolution/spatial-resolution

https://www.nrcan.gc.ca/node/9407

http://fis.uni-bonn.de/en/recherchetools/infobox/professionals/resolution/spectral-resolution

https://articles.extension.org/pages/40073/what-is-the-difference-between-multispectral-and-hyperspectral-imagery

https://www.nrcan.gc.ca/earth-sciences/geomatics/satellite-imagery-air-photos/satellite-imagery-products/educational-resources/9365

https://www.nrcan.gc.ca/node/9283

https://www.nrcan.gc.ca/node/9379

https://www.lifewire.com/the-difference-between-bits-and-bytes-816248

https://www.techopedia.com/definition/938/binary-format

https://en.wikipedia.org/wiki/Ground_truth

https://www.geospatialworld.net/article/image-interpretation-of-remote-sensing-data/
