To better visualize and interpret remotely sensed images, we need to understand the basic principles of how a sensor records an image. In many ways, imaging sensors resemble the human eye, which we can describe as a ‘sensor with three bands’.
The human eye is sensitive to electromagnetic radiation in the visible domain, meaning the blue, green and red regions of the electromagnetic spectrum. We can say the human eye is sensitive between ~400-700 nm, which in the context of remote sensing is defined as the spectral range. The human eye contains three types of receptor cells (cones) for color vision, which respond to light of short (S), medium (M) and long (L) wavelengths. The perception of color depends on the stimulation of these cones, e.g. red is perceived when the L cone is stimulated more than the other cones. The width of the response curve of each receptor at half of its maximum sensitivity (peak) is defined as the spectral resolution; the interval between the sensitivity maxima of adjacent receptors corresponds to the spectral sampling interval in the context of remote sensing.
The underlying principle is used in everyday technology, for instance in digital cameras. These are imaging sensors capturing spectral regions similar to the human eye. While we view photos as an RGB visualization, technically each color, i.e. blue, green and red, is captured and stored separately in an individual image, i.e. a band. Digital cameras therefore represent imaging sensors with three bands.
Above, a photograph of the Geography Department as an RGB visualization and separated into its red, green, and blue bands. Each band can be visualized on a grayscale from black (low values = low light intensity) to white (high values = high light intensity). You may notice that the blue sky appears brightest in the blue band, while the red facade of the old building appears brightest in the red band.
Imaging spectrometers are sensitive to wavelengths beyond the human eye’s sensitivity. Different remote sensing technologies focus on different parts of the electromagnetic spectrum:
Optical remote sensing sensors capture bands in the visible (VIS), near-infrared (nIR) and shortwave-infrared (swIR) regions. We can use the visible bands acquired in the blue, green and red domains for a grayscale or a true-color RGB visualization. However, visualizing other bands is also possible. To look at the nIR band of an image, we simply display it as a grayscale image or as part of a false-color RGB visualization.
The basic principle behind visualizing colors and color gradients on a computer monitor is the additive color model. Any color we see is a mix of the three primary colors red (R), green (G), and blue (B); we therefore talk about the RGB visualization. We can scale the intensity of each color. In technical terms, the intensity can be expressed as values ranging from 0 (lowest intensity) to 255 (maximum intensity) in each of the three colors. Your computer screen adjusts the illumination of R, G, B according to these values. For example, R=255, G=0, B=0 results in an intense red color. This principle allows us to visualize 256³, or ~16.8 million, different colors. These numbers are an example using an 8-bit color depth; even more colors can be produced using, for example, a 16- or 32-bit color depth.
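As a minimal illustration, the additive model and the number of representable colors can be sketched in a few lines of Python (the helper function and its example triples are assumptions chosen purely for demonstration):

```python
# Additive RGB color model: each channel ranges from 0 (off) to
# 255 (full intensity) at 8-bit color depth.
def describe_rgb(r, g, b):
    """Return a rough verbal description for a few example RGB triples."""
    examples = {
        (255, 0, 0): "intense red",
        (0, 255, 0): "intense green",
        (255, 255, 0): "yellow (red + green light mixed)",
        (255, 255, 255): "white (all channels at maximum)",
        (0, 0, 0): "black (all channels off)",
    }
    return examples.get((r, g, b), "mixed color")

# Number of representable colors at 8 bits per channel:
n_colors = 256 ** 3
print(n_colors)                    # 16777216, i.e. ~16.8 million
print(describe_rgb(255, 0, 0))     # intense red
```

Note how mixing full-intensity red and green light yields yellow, a direct consequence of the additive model (unlike mixing paint, which is subtractive).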
Remote sensing images can be presented in different ways using the RGB visualization capabilities of the monitor. Arbitrary bands can be displayed on screen in different RGB combinations; we therefore talk about RGB-composites. Sometimes we want to look at a single band only. In this case, we can visualize a grayscale image, where all channels of the screen display the same image band. This leads to a grayscale gradient from black to white.
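The band stacking behind an RGB-composite and a grayscale view can be sketched with NumPy (the random arrays below are stand-ins for real image bands, not actual data):

```python
import numpy as np

# Hypothetical image bands (e.g. red, green, blue reflectance); random
# data here stands in for a real remote sensing image.
rng = np.random.default_rng(0)
red, green, blue = (rng.random((100, 100)) for _ in range(3))

# True-color RGB-composite: stack three bands into one (rows, cols, 3) array.
rgb = np.dstack([red, green, blue])

# Grayscale display of a single band: the same band fills all three channels.
gray = np.dstack([red, red, red])

print(rgb.shape, gray.shape)   # (100, 100, 3) (100, 100, 3)
```

A false-color composite works identically; only the choice of which band goes into which channel changes (e.g. nIR into the red channel).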
While the above examples assume that images contain pixel values of very low and very high intensity, this is often not the case in reality. Due to surface, recording and sensor conditions, data sets (or bands) often cover only a section of the obtainable gray value range. Each RGB color of your screen has a color depth of 8 bits (equivalent to 256 gray levels). By default, pixel values are transferred to the monitor 1:1; in this setting, images often appear in low contrast.
We can change this by adjusting the visualization based on the values present in the image. We can find out which range of values is present in an image by looking at the image histogram. It illustrates the frequency distribution of the values in a single band.
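A histogram of a single band can be computed, for example, with NumPy (the skewed random data below is an assumption mimicking a typical dark image; real bands would be read from file):

```python
import numpy as np

# Hypothetical band with reflectance values in percent (0-100); a gamma
# distribution imitates values clustered at the lower end.
rng = np.random.default_rng(42)
band = rng.gamma(shape=2.0, scale=4.0, size=10_000)

# Frequency distribution of the values in 5%-wide bins
counts, bin_edges = np.histogram(band, bins=20, range=(0, 100))
for lo, hi, n in zip(bin_edges[:-1], bin_edges[1:], counts):
    print(f"{lo:5.1f}-{hi:5.1f}% : {n}")
```

Printing the counts per bin already reveals how strongly the values concentrate at the lower end of the obtainable range.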
We can see that our image mostly contains values at the lower end: the majority of pixels has values below 20% reflectance, and there is a peak of very low values at ~3% reflectance. If we visualize the image without accounting for this image-specific distribution, we will see a dark image with poor contrast. Instead, we would like to utilize the full gray value range.
Contrast enhancement describes a transfer function that is used to map pixel values to gray values. Often, linear contrast enhancement is used, where the increase in gray value per increase in pixel value remains constant for the most relevant value ranges. We can achieve this by modifying the linear stretch applied to the image, e.g. a linear contrast stretch adjusted to the specific image statistics. We may, for example, calculate the minimum and maximum value and apply a linear stretch between those values. As the extreme values (min & max) are strongly influenced by outliers, we may prefer using specific percentiles (e.g. 2% and 98%), or stretching linearly between +/- two standard deviations from the mean. This helps to maximize the contrast within the most frequently occurring image values.
In other words, we modify the range of values over which we distribute the available gray levels. Here we apply a linear stretch starting at the 2nd percentile and ending at the 98th percentile. Compare the image contrast with the graph above.
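Such a percentile-based linear stretch could be implemented as follows (a sketch: the function name and the random test band are assumptions for illustration):

```python
import numpy as np

def linear_stretch(band, lower_pct=2, upper_pct=98, n_levels=256):
    """Linearly stretch pixel values between two percentiles to 0..n_levels-1.

    Values below the lower percentile map to 0 (black); values above the
    upper percentile map to n_levels-1 (white).
    """
    lo, hi = np.percentile(band, [lower_pct, upper_pct])
    scaled = (band - lo) / (hi - lo)    # 0..1 within the percentile range
    scaled = np.clip(scaled, 0, 1)      # clip outliers at both ends
    return (scaled * (n_levels - 1)).astype(np.uint8)

# Hypothetical low-contrast band: values clustered at low reflectance
rng = np.random.default_rng(1)
band = rng.gamma(shape=2.0, scale=2.0, size=(50, 50))

gray = linear_stretch(band)
print(gray.min(), gray.max())   # 0 255 — the full gray value range is used
```

Stretching between +/- two standard deviations from the mean works the same way; only the computation of `lo` and `hi` changes.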
There are many more ways of applying contrast enhancement to an image, and the goal of today’s exercise is getting to know some of the fundamental techniques. Keep in mind that all these techniques are only used to visualize images on screen. By applying them, only the screen display changes; the actual data stays the same.
Download the session materials from our shared repository.
For displaying different colors, QGIS offers the RGB and HSV color models. Enter the color values of the following table in the RGB color model and note/describe the resulting color.
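If you want to cross-check how the two models describe the same color outside QGIS, Python’s standard `colorsys` module can be used (shown here purely as an optional aid; it works on 0..1 floats rather than 8-bit values):

```python
import colorsys

# Convert an 8-bit RGB triple to HSV; colorsys expects channels in 0..1.
r, g, b = 255, 0, 0
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
print(f"H={h * 360:.0f} deg, S={s:.2f}, V={v:.2f}")   # pure red: H=0, S=1, V=1
```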
The objective of this assignment is to get familiar with different RGB visualizations of optical remote sensing images and to learn how different land cover types are represented in true- and false-color RGB-composites.
Document your answers to the questions above with bullet points and the help of your screenshots in PowerPoint, Word or another software package. Please use the following layout suggestion:
Upload your documentation as a PDF file on Moodle.
General submission notes: The submission deadline for the weekly assignment is always the following Monday at 10 am. Please use the naming convention indicating the session number and the family names of all students in the respective team, e.g. ‘s01_surname1_surname2_surname3_surname4.pdf’. Each team member has to upload the assignment individually. Provide single-file submissions; in case you have to submit multiple files, create a *.zip archive.
Copyright © 2020 Humboldt-Universität zu Berlin. Department of Geography.