Raster data#

In this part of the session we now finally get to work with some geographic data. For the next three weeks we will work with and manipulate raster data, which represent the most common type of data in our research. After that, we will tackle the use of vector data.

Looking at raster data and the different types of raster data we use, three examples come to mind:

  1. Satellite imagery (continuous data)

  2. Topographic information, e.g., a digital elevation model (continuous data)

  3. Land cover (change) maps (categorical data)

We are very familiar with these types of data and have visualized them in a GIS numerous times during our studies. However - for the sake of completeness - we want to mention that there is another type of image that carries some kind of geo-information but cannot be visualized in a GIS: so-called “geo-tagged” photos, which have a single geo-coordinate associated with the entire picture.

Today you will learn that also in the case of the “common” raster files we do not need much more than (a) one coordinate and (b) information about the pixel size and the dimensions of the entire image to visualize (and manipulate) these data.

We will do this using the Geospatial Data Abstraction Library (short: gdal). gdal is a software library for reading and writing raster and vector geospatial data formats, released under the permissive X/MIT style free software license by the Open Source Geospatial Foundation [GDAL/OGR contributors, 2023]. gdal can handle over 100 different raster formats (through so-called drivers) and in addition offers a number of built-in utilities (e.g., gdal_translate, gdal_proximity).

Important

It is important to recognize that gdal really only helps us access geospatial information and bring it into a form that allows us to manipulate the data. Specifically, and for our context, we will use gdal to access raster files and load their content into numpy arrays. To do that, however, we first need to learn something about the raster format.

Gdal raster model#

We will now take some time to get to know how raster files are organized. In gdal, the general structure of a raster file is as follows:

../../_images/Flowchart.png

Fig. 1 Overview of the gdal raster structure. For more info and background visit the gdal documentation and see Garrard (2016), Geoprocessing with Python, chapter 9 (available in moodle).#

You can imagine the structure in different levels. On the first level (blue) the dataset is accessed using gdal.Open(path). It is important that you create a variable which points to the dataset. In addition, you will have to import the gdal library. The code could look like this:

from osgeo import gdal
gdal.UseExceptions()

ds = gdal.Open("../data/landsat_tiles_chaco/tileID_410_y2000.tif")
ds
<osgeo.gdal.Dataset; proxy of <Swig Object of type 'GDALDatasetShadow *' at 0x1193673f0> >

Looking at the output, you see that the variable ds is just a pointer to the actual file; nothing has been loaded into memory yet. Now, let’s access some more information. We start with the size of the raster file. The general syntax here is that you call the raster object ds followed by . and the respective attribute or method.

cols = ds.RasterXSize
rows = ds.RasterYSize
nbands = ds.RasterCount
cols, rows, nbands
(1529, 1380, 1)

The raster has 1529 columns, 1380 rows, and 1 band. So, nothing really spectacular and nothing we haven’t seen before. What is important to recognize here is that the three variables now hold actual values and are not merely pointers. Next, we check which projection the raster is stored in - we do this using the method .GetProjection()

pr = ds.GetProjection()
pr
'GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AXIS["Latitude",NORTH],AXIS["Longitude",EAST],AUTHORITY["EPSG","4326"]]'

The output comes as Well-Known Text (WKT). For the moment this is nothing to be too concerned about, but generally remember that when you create a new raster file and want to assign a projection, that projection needs to be in the form of a WKT string. There are tools to convert, e.g., EPSG codes into WKT; we will cover this later during the course or during the final class project.
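
As a brief preview of such a conversion (not needed this week), here is a minimal sketch that builds a WKT string from an EPSG code using the osr module shipped with gdal; EPSG:4326 is chosen only as an example:

from osgeo import osr

srs = osr.SpatialReference()       # create an empty spatial reference object
srs.ImportFromEPSG(4326)           # fill it from an EPSG code (here: WGS 84)
wkt_from_epsg = srs.ExportToWkt()  # export the projection as a WKT string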

Geotransform#

Next, we look at the affine transformation. For this week, this is the most important piece of information (besides the number of columns and rows), and we will spend some time looking at it. I store the information in the variable gt to build a mental connection to the method we call to get this information, .GetGeoTransform()

gt = ds.GetGeoTransform()
gt
(-63.86859973338657,
 0.00026949458523585647,
 0.0,
 -24.446662310500248,
 0.0,
 -0.00026949458523585647)

The output is a tuple containing a total of six pieces of information (a small sketch unpacking them follows after the list):

  1. the x-coordinate of the upper left corner of the raster file. This coordinate has to correspond to the coordinate system in which the entire raster is stored

  2. The pixel size in x-direction. The units have to correspond to the coordinate system as well.

  3. The rotation of the pixel in x-direction \(\rightarrow\) I have never encountered a case where this value is not 0

  4. The y-coordinate of the upper left corner

  5. Rotation of the pixel in y-direction \(\rightarrow\) see point 3.

  6. The pixel size in y-direction.
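
To make these six entries easier to refer to, here is a minimal sketch that unpacks the tuple from above into named variables (the variable names are my own choice):

ulx, px_w, rot_x, uly, rot_y, px_h = gt   # unpack in the order listed above
ulx, uly, px_w, px_h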

You probably notice two important aspects: (1) the upper left coordinate seems to be important, and (2) the pixel size is once noted as a positive number and once as a negative number. Both points are related to each other, and the figure below visualizes this - can you explain now why 2. and 6. have the same absolute value but different signs?

../../_images/px_Structure.png

Fig. 2 Overview of the elements of the function call .GetGeoTransform().#

As a preview of and encouragement for our course and exercise, we want you to think about a couple more questions (a small code hint follows after the list):

  • How can one calculate the geo-coordinates of the other three corners of a raster file?

  • Could you apply the same strategy to get the geo-coordinate of the center?

  • How can considering the affine transformation help you when working with two raster files of different extents?
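
As a nudge for the first question (not a full answer), here is a minimal sketch for a single corner, assuming the gt and cols variables from above; the remaining corners follow the same logic:

# x-coordinate of the upper right corner: start at the upper left x and move cols pixels to the right
urx = gt[0] + cols * gt[1]
# the corresponding y-coordinate equals the upper left y, since the rotation terms are 0
ury = gt[3]
urx, ury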

Load raster into an array#

Now, all that is left is to find out how to load the data into memory so that we can work with them. Generally, we use the package numpy to manipulate the data, so what we need to do is find a method that gets the data and loads it into a numpy array. We do this by accessing the last method in the orange level, .GetRasterBand(), followed by the method .ReadAsArray(). You can call them separately, like in the examples below, or you can combine them within one call: .GetRasterBand().ReadAsArray()

The argument for the function .GetRasterBand() is simple: all you need to provide is an integer value for the band you want to access \(\rightarrow\) caution: here, we don’t have zero-based counting. The first band has the index 1. The return value is again only a pointer to the respective band (similar to the return value of gdal.Open()). We have not yet loaded any data into memory.

rb = ds.GetRasterBand(1)
rb
<osgeo.gdal.Band; proxy of <Swig Object of type 'GDALRasterBandShadow *' at 0x119366910> >

The arguments for loading the band into an array are slightly more complex. You need to provide four values (in this order):

  1. origin_x: the x-coordinate (in array coordinates) of the upper left corner of the part you want to load from the raster. In the example below we want to load the entire array, so the origin has the array coordinate (0, 0), and the value for origin_x hence is 0.

  2. origin_y: same as for origin_x

  3. sliceSize_x: the number of columns you want to load from the array \(\rightarrow\) important: this refers to array coordinates (using zero-based counting) and not to geo-coordinates. The value is an integer and is often confused with the lower right array coordinate, likely because when you load the entire array, the lower right array coordinate is the same as the value for cols.

  4. sliceSize_y: same as for sliceSize_x, but here for the y-direction. Important: because we are talking about slice sizes and not coordinates, this value can never be negative!

In the example below, we load the entire rasterband into an array:

ds_arr = rb.ReadAsArray(0, 0, cols, rows)
ds_arr
array([[10752, 11593, 12625, ...,  4840,  4553,  4645],
       [10811, 12021, 12399, ...,  5314,  5285,  5115],
       [12378, 12295, 12916, ...,  5541,  5561,  5925],
       ...,
       [ 4308,  3952,  3796, ...,  7751, 10913, 12762],
       [ 4133,  3667,  3877, ...,  7924,  9907, 12482],
       [ 5173,  5012,  4916, ...,  8940, 11699, 13095]], dtype=int32)
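
For comparison, here is a minimal sketch of how one could read only a subset of the band; the block position and size (column 500, row 200, 100 x 100 pixels) are chosen purely for illustration:

# read a 100 x 100 pixel block starting at column 500 and row 200 (array coordinates)
subset_arr = rb.ReadAsArray(500, 200, 100, 100)
subset_arr.shape   # (100, 100): rows first, then columns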

Now we have loaded the raster data into numpy and are ready to work with the data. This week we will only apply some simple functions to calculate simple statistics, such as the mean value of the entire array. Over the next weeks our operations will gradually become more complex. However, the steps required before the actual data can be manipulated are always the ones described here.
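
As a small illustration of such a simple statistic, a sketch of the mean calculation with numpy, assuming the ds_arr array from above:

import numpy as np

arr_mean = np.mean(ds_arr)   # mean value across all pixels of the loaded band
arr_mean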

Last notes#

We focus here on the general raster structure and the methods in gdal to access the different elements of that structure. Of course, you can also do a lot of other cool stuff, including (1) creating new files, (2) reprojecting images, (3) rasterizing vector data, etc. We will touch upon many of these functions in this course, but not all of them. We encourage you to continue reading, using some additional sources, e.g.,

  1. The book by Garrard. It is a bit outdated in some aspects but still contains some nice information, particularly regarding the affine transformation

  2. Documentation on the executables in gdal

Lastly, you have probably already come across some wrapper packages, such as rasterio. These are all really cool packages. The reason we largely avoid them here is that we want you to understand the structure of a raster file and how to maneuver the different pieces of information to get to a result. Packages such as rasterio do many of these things under the hood. This is fine when the number of files you work with is low and/or the files themselves are relatively small. When it comes to large datasets, however, we need to be efficient in how we access data; often we only need parts of the data and can logically exclude other parts. Wrapper functions often make this more difficult (or impossible), which is why we want you to focus on the core elements of raster processing. You also don’t run into trouble if such packages become deprecated or unmaintained, which would result in a lack of compatibility with other functions.