Learning goals

  • Get to know the spectral dimension of optical remote sensing data
  • Learn how to visualize optical remote sensing data on screen
  • Learn how different land cover types are represented in true- and false-color RGB visualizations

Background

The human eye

To better visualize and interpret remotely sensed images, we need to understand the basic principles of how a sensor records an image. In many ways, imaging sensors resemble the human eye, which we can describe as a ‘sensor with three bands’.

Spectral response of the human eye (Source: Wikimedia Commons).

The human eye is sensitive to electromagnetic radiation in the visible domain, i.e. the blue, green and red regions of the electromagnetic spectrum. We can say the human eye is sensitive between ~400-700 nm, which in the context of remote sensing is defined as the spectral range. For color vision, the human eye has three types of receptor cells (cones), which respond to light in the short (S), medium (M) and long (L) wavelength regions. The perception of color depends on the stimulation of these cones; e.g., red is perceived when the L cone is stimulated more strongly than the other cones. The width of each receptor’s response curve at half of its maximum sensitivity (peak) is defined as the spectral resolution, while the interval between the sensitivity peaks of neighboring receptors corresponds to the spectral sampling interval in the context of remote sensing.

Digital photography

The same principle is used in everyday technology, for instance in digital cameras. These are imaging sensors that capture spectral regions similar to those of the human eye. While we look at photos as RGB visualizations, each color, i.e. blue, green and red, is technically captured and stored separately as an individual image, i.e. a band. Digital cameras therefore represent imaging sensors with three bands.

Grayscale and RGB visualization of a digital photo of the Geography Department

Above, a photograph of the Geography Department is shown as an RGB visualization and separated into its red, green, and blue bands. Each band can be visualized on a grayscale from black (low values = low light intensity) to white (high values = high light intensity). You may notice that the blue sky appears brightest in the blue band, while the red facade of the old building appears brightest in the red band.

Optical remote sensing

Imaging spectrometers are sensitive to wavelengths beyond the human eye’s range. Different remote sensing technologies focus on different parts of the electromagnetic spectrum:

  • Optical remote sensing observes visible light (VIS; ~380 - 700 nm), the near infrared (nIR; 0.7 - 1.3 µm) and the short-wave infrared (swIR; 1.3 - 3 µm)
  • Thermal remote sensing observes thermal infrared radiation (tIR; 5 - 15 µm)
  • Radar remote sensing observes microwave radiation (1 mm - 1 m)

The electromagnetic spectrum (Source: Wikimedia Commons).

Optical remote sensing sensors capture bands in the visible (VIS), near-infrared (nIR) and shortwave-infrared (swIR) regions. We can use the visible bands acquired in the blue, green and red domains for a grayscale or a true-color RGB visualization. However, visualizing other bands is also possible. To look at the nIR band of an image, we simply display it as a grayscale image or as part of a false-color RGB visualization.

True-color, false-color and grayscale visualization of optical remote sensing data

RGB visualization

The basic principle behind visualizing colors and color gradients on our computer monitor is the additive color model. Any color we see is a mix of the three primary colors red (R), green (G) and blue (B); we therefore talk about RGB visualization. We can scale the intensity of each color. In technical terms, the intensity can be expressed as values ranging from 0 (lowest intensity) to 255 (maximum intensity) for each of the three colors. Your computer screen adjusts the illumination of R, G and B according to these values. For example, R=255, G=0, B=0 results in an intense red color. This principle allows us to visualize 256³, or roughly 16.8 million, different colors. These numbers assume an 8-bit color depth; we can produce even more colors using, for example, a 16- or 32-bit depth.

Additive RGB color model (left) and a close-up of an LCD screen displaying the letters R in red, G in green, and B in blue against a white background (Source: Wikimedia Commons).
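As a quick illustration of additive mixing, here is a minimal sketch using numpy and matplotlib (not part of the original materials): two rectangles of pure red and pure green light overlap, and the overlap appears yellow.

```python
import numpy as np
import matplotlib.pyplot as plt

# Start from a black canvas (all intensities 0) and add light per channel.
img = np.zeros((200, 200, 3), dtype=np.uint8)
img[:120, :, 0] = 255   # upper part: red channel at full intensity
img[80:, :, 1] = 255    # lower part: green channel at full intensity
# Rows 80-119 receive both red and green light and therefore appear yellow.

plt.imshow(img)
plt.axis("off")
plt.show()

# 8-bit depth: 256 levels per channel, 256**3 = 16,777,216 displayable colors
print(256 ** 3)
```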

Remote sensing images can be presented in different ways using the RGB visualization capabilities of the monitor. Arbitrary bands can be displayed on screen in different RGB combinations; we therefore talk about RGB-composites. Sometimes we want to look at a single band only. In this case, we can visualize a grayscale image, where all three channels of your screen display the same image band. This leads to a grayscale gradient from black to white.

Satellite images as single band gray, RGB-composite (true colour), RGB-composite (false colour)
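The same composites can be built programmatically. Below is a minimal sketch using numpy, rasterio and matplotlib; the filename `image.tif` and the band order (blue, green, red, nIR) are placeholder assumptions, not part of the session materials:

```python
import numpy as np
import rasterio
import matplotlib.pyplot as plt

# Hypothetical four-band image; band order assumed to be blue, green, red, nIR
with rasterio.open("image.tif") as src:
    blue, green, red, nir = (src.read(i).astype(float) for i in (1, 2, 3, 4))

def to_uint8(band):
    """Scale a band linearly from its min/max onto the 0..255 graylevel range."""
    lo, hi = band.min(), band.max()
    return np.uint8(255 * (band - lo) / (hi - lo))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(to_uint8(red), cmap="gray")  # single band shown as grayscale
ax1.set_title("Single band (red), grayscale")
# True color: red, green, blue bands assigned to the R, G, B screen channels
ax2.imshow(np.dstack([to_uint8(b) for b in (red, green, blue)]))
ax2.set_title("True-color RGB-composite")
for ax in (ax1, ax2):
    ax.axis("off")
plt.show()
```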

Enhancing image contrast

While the above examples assume that images contain pixel values of both very low and very high intensity, this is often not the case in reality. Due to surface, recording and sensor conditions, data sets (or bands) often cover only a section of the obtainable value range. Each RGB color of your screen has a color depth of 8 bit (equivalent to 256 graylevels). By default, pixel values are transferred to the monitor 1:1. In this setting, images often appear low in contrast.

We can change this by adjusting the visualization based on the values present in the image. We can find out which range of values is present in an image by looking at the image histogram, which illustrates the frequency distribution of values in a single band.

The red band of a satellite image with corresponding image histogram and assigned graylevel
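We can compute such a histogram ourselves. A short sketch, again assuming the hypothetical `image.tif` from above with the red band stored as band 3:

```python
import rasterio
import matplotlib.pyplot as plt

# Read the red band (the band index is an assumption for this sketch)
with rasterio.open("image.tif") as src:
    red = src.read(3)

# Frequency distribution of all pixel values in the band
plt.hist(red.ravel(), bins=100)
plt.xlabel("Pixel value")
plt.ylabel("Number of pixels")
plt.title("Image histogram (red band)")
plt.show()
```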

We can see that our image mostly contains values at the lower end: the majority of pixels have values below 20% reflectance, with a peak of very low values at ~3% reflectance. If we visualize the image without accounting for this image-specific distribution, we will see a dark image with poor contrast. Instead, we would like to utilize the full gray value range.

Schematic visualization of contrast enhancement.

Contrast enhancement describes a transfer function used to map pixel values to gray values. Often, linear contrast enhancement is used, where the increase in gray value per increase in pixel value remains constant across the most relevant value range. We achieve this by adjusting the linear stretch applied to the image to the specific image statistics. We may, for example, calculate the minimum and maximum values and apply a linear stretch between them. As these extreme values (min & max) are strongly influenced by outliers, we may prefer using specific percentiles (e.g. 2% and 98%), or stretching linearly between +/- two standard deviations from the mean. This maximizes the contrast within the most frequently occurring image values.

Schematic representation of a linear histogram stretch between min/max (top) or between +/- one standard deviation from the mean (bottom) (Source: Jensen, 2011)
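The percentile and standard-deviation stretches described above translate into a few lines of numpy. The function names below are our own, and the result is intended for display only:

```python
import numpy as np

def percentile_stretch(band, lower_pct=2, upper_pct=98):
    """Map the [lower_pct, upper_pct] percentile range of the pixel values
    onto the full 8-bit graylevel range 0..255; values outside are clipped."""
    lo, hi = np.percentile(band, [lower_pct, upper_pct])
    scaled = (band.astype(float) - lo) / (hi - lo)
    return np.uint8(255 * np.clip(scaled, 0, 1))

def stddev_stretch(band, n_std=2):
    """Stretch linearly between mean - n_std*std and mean + n_std*std."""
    lo = band.mean() - n_std * band.std()
    hi = band.mean() + n_std * band.std()
    scaled = (band.astype(float) - lo) / (hi - lo)
    return np.uint8(255 * np.clip(scaled, 0, 1))
```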

In other words, we modify the range of values over which we distribute the available graylevels. Here we apply a linear stretch starting at the 2nd percentile and ending at the 98th percentile. Compare the image contrast with the graph above.

The red band of a satellite image with the corresponding image histogram and assigned graylevels. We applied a linear stretch between the lowest and highest 2% of values, highlighted by red lines.

There are many more ways of applying contrast enhancement to an image, and the goal of today’s exercise is to get to know some of the fundamental techniques. Keep in mind that all these techniques are only used to visualize images on screen. By applying them, only the screen display changes; the actual data stays the same.

Session materials

Download the session materials from our shared repository.

Exercise

Colour mixing in QGIS

  • Open the shapefile ‘charlottenburg_point.shp’ in QGIS.
  • Open the ‘Select Color Tool’ of the shapefile (Properties… > Symbology > double click on colour bar)

Select Color Tool in QGIS

  • For displaying different colors, QGIS offers the RGB and HSV color models. Enter the color values of the following table in the RGB color model and note/describe the resulting color (a small scripted cross-check follows the table).

    red   green   blue   color
    255   0       0
    255   0       255
    0     255     255
    255   255     0
    0     0       0
    25    25      25
    150   150     150
    255   255     255
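For cross-checking your notes afterwards, here is a sketch that renders the same triplets as color patches with numpy and matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

# The RGB triplets from the exercise table above
triplets = [(255, 0, 0), (255, 0, 255), (0, 255, 255), (255, 255, 0),
            (0, 0, 0), (25, 25, 25), (150, 150, 150), (255, 255, 255)]

fig, axes = plt.subplots(1, len(triplets), figsize=(12, 2))
for ax, rgb in zip(axes, triplets):
    ax.imshow(np.full((20, 20, 3), rgb, dtype=np.uint8))  # uniform color patch
    ax.set_title(str(rgb), fontsize=7)
    ax.axis("off")
plt.show()
```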

Image visualization

  • Open the Sentinel-2 image ‘Sentinel2_T33UUU_20190726_10m_4bands_subset_berlin.bsq’ in QGIS. Note that the accompanying ‘.hdr’ file contains the necessary metadata. The image has the following specifications:
    • Spatial resolution: 10 m
    • Extent: 5169 x 3719 pixels
    • Spectral bands: 4 (band 1 = “b2-blue”, band 2 = “b3-green”, band 3 = “b4-red”, band 4 = “b8-nir”)
  • Open the display options of the image (Properties… > Symbology)
  • What are the default settings with regard to:
    • Render type
    • RGB band assignment
    • Min/Max Value Settings
  • Display the Sentinel-2 image in true colors.

Sentinel-2 image in true colours

  • Now, display only the blue band in grayscale (Render type > Singleband gray)
    • How did the Min/Max Value Settings change?
    • How rich in contrast does the image appear?
    • How does the image representation change when you alter the min/max value settings (i.e. cumulative count cut, etc.)?
  • Switch to the histogram section, calculate the image histogram and adjust the settings to change the contrast manually.

Settings for image histogram

  • Try to differentiate water bodies from the land mass by stretching the min/max values of the nIR band. Note that you first have to display the nIR band via the Properties… > Symbology dialog (see the sketch below for a scripted equivalent).
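As a scripted counterpart to this step, we can stretch the nIR band ourselves (a sketch; rasterio/GDAL can usually read ENVI ‘.bsq’/‘.hdr’ pairs directly). Water absorbs nIR radiation and stays dark, so the water/land boundary stands out:

```python
import numpy as np
import rasterio
import matplotlib.pyplot as plt

# Band 4 of the subset is the nIR band ("b8-nir")
with rasterio.open("Sentinel2_T33UUU_20190726_10m_4bands_subset_berlin.bsq") as src:
    nir = src.read(4).astype(float)

# A percentile stretch on the low end of the histogram emphasizes the
# contrast between dark water bodies and the brighter land surface.
lo, hi = np.percentile(nir, [2, 98])
stretched = np.uint8(255 * np.clip((nir - lo) / (hi - lo), 0, 1))

plt.imshow(stretched, cmap="gray")
plt.axis("off")
plt.show()
```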

Assignment

The objective of this assignment is to become familiar with different RGB visualizations of optical remote sensing images and to learn how different land cover types are represented in true- and false-color RGB-composites.

True- & false color RGB-composites

  • Create a true-color and a false-color (R = nIR, G = red, B = green) representation of the Sentinel-2 image and answer the following questions (a scripted sketch of the false-color composite follows this list):
    • What are the main differences between true and false color representations?
    • For what phenomena and/or surfaces is the nIR channel particularly sensitive?
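Here is a sketch of the requested false-color composite (R = nIR, G = red, B = green) built with numpy, rasterio and matplotlib; the per-band percentile stretch mirrors the contrast enhancement from the background section:

```python
import numpy as np
import rasterio
import matplotlib.pyplot as plt

def percentile_stretch(band, lower_pct=2, upper_pct=98):
    """Linear stretch between two percentiles, scaled to 0..255 (display only)."""
    lo, hi = np.percentile(band, [lower_pct, upper_pct])
    return np.uint8(255 * np.clip((band - lo) / (hi - lo), 0, 1))

# Band order of the subset: 1 = blue, 2 = green, 3 = red, 4 = nIR
with rasterio.open("Sentinel2_T33UUU_20190726_10m_4bands_subset_berlin.bsq") as src:
    green, red, nir = (src.read(i).astype(float) for i in (2, 3, 4))

# False color: R = nIR, G = red, B = green; vegetation appears bright red
# because of its high nIR reflectance.
composite = np.dstack([percentile_stretch(b) for b in (nir, red, green)])
plt.imshow(composite)
plt.axis("off")
plt.show()
```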

Visualizing surface types

  • Find one example area for each of the following surfaces:
    • Deciduous forest
    • Coniferous forest
    • Grass
    • Artificial turf
  • Provide a screenshot of each example area in true- and false-color RGB-composites and answer the following questions:
    • Are there visible differences between deciduous and coniferous forest?
    • What are the differences between grass and artificial turf?

Submission

  • Document your answers to the questions above in bullet points, supported by your screenshots, using PowerPoint, Word or another software package. Please use the following layout suggestion:

  • Upload your documentation as a PDF file on Moodle.

  • General submission notes: The submission deadline for the weekly assignment is always the following Monday at 10 am. Please use the naming convention indicating the session number and the family names of all students in the respective team, e.g. ‘s01_surname1_surname2_surname3_surname4.pdf’. Each team member has to upload the assignment individually. Provide a single file per submission; in case you have to submit multiple files, create a *.zip archive.


Copyright © 2020 Humboldt-Universität zu Berlin. Department of Geography.