Thursday, November 17, 2016

Lab 6: Geometric Correction

Goals and Background
The purpose of this lab was to introduce a process known as geometric correction. Geometric correction is the manipulation of an image so that it matches a surface projection, using GCPs (Ground Control Points) to link a satellite image to a reference image. This process is necessary when a remotely sensed image does not match its planimetric position in the real world. During this lab, two forms of geometric correction were performed. First, an image-to-map first order polynomial geometric correction was performed. This was the simpler form of correction, slightly shifting an image using a provided map of the image area. Second, an image-to-image third order polynomial geometric correction was performed. This process was far more complex and required the use of a reference image to correct an image that was far outside of its proper planimetric position.

Methods
Part 1 (Image-to-Map Correction)
First, the image that needed to be corrected, Chicago_2000.img, and the reference map image, Chicago_drg.img, were brought into their own ERDAS Imagine viewers. With the viewer containing Chicago_2000.img active, the Multispectral raster processing tool "Control Points" was activated. This started the geometric correction process. The geometric model was selected to be polynomial, and the default GCP Tool Reference Setup settings were selected. The map image Chicago_drg.img was selected as the reference image, and the default Polynomial Model properties were accepted as well, as a first order polynomial transformation was being used. The existing GCPs (Ground Control Points) were removed, as a first order polynomial transformation only requires 3 pairs of GCPs, with one point of each pair on each image. Three pairs of GCPs were added to the Chicago_2000 image and the reference map image in their corresponding locations, with a fourth pair being added to increase the accuracy of the process. The GCPs were then slightly adjusted to decrease the total RMS error, a measurement of how far off the program determines the GCP pairs to be. The GCPs were adjusted until a total RMS error below 2.0 was achieved (Figure 1). The Multipoint Geometric Correction process was then run, creating the geometrically corrected image Chicago_2000gcr.img.
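The total RMS error that ERDAS reports can be reproduced with a short calculation: each GCP's residual is the distance between its reference location and the location predicted by the fitted transformation, and the total RMS error combines those distances. A minimal sketch in Python, using hypothetical residual values:

    import numpy as np

    # Hypothetical (dx, dy) residuals in pixels for four GCP pairs:
    # the offset between each transformed point and its reference point.
    residuals = np.array([(0.4, -0.6), (1.1, 0.3), (-0.8, 0.9), (0.2, -1.2)])

    # Per-GCP RMS error: the Euclidean length of each residual.
    per_gcp_rms = np.sqrt((residuals ** 2).sum(axis=1))

    # Total RMS error; the target in this lab was a value below 2.0.
    total_rms = np.sqrt((residuals ** 2).sum() / len(residuals))
    print(per_gcp_rms, total_rms)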

Part 2 (Image-to-Image Correction)
First, the geometrically incorrect sierra_leone_east1991.img was added to an ERDAS viewer, with the reference image sl_reference_image.img being added as well. The geometric distortion in sierra_leone_east1991.img was so great that a third order polynomial transformation was required to correct the image. This required a minimum of 10 pairs of GCPs rather than 3, as a more complex mathematical model was being fitted to the data. A first order polynomial transformation fits a linear model (y = ax + b) to the data, while a third order polynomial transformation fits a cubic model (y = ax^3 + bx^2 + cx + d) to the data. The Multispectral raster tool "Control Points" was launched again, with sl_reference_image.img being used as the reference image for sierra_leone_east1991.img. The only change in the setup process from part one, besides the use of different files, was to change the polynomial order to 3 in the Polynomial Model Properties. Similarly to part one, the existing GCPs were deleted and replaced with 12 pairs of corresponding GCPs on the reference and sierra_leone_east1991 images. The points were then slightly adjusted until the total RMS error was below 1 (Figure 2).
This greater accuracy was required due to the degree to which sierra_leone_east1991.img was being altered. Once this was achieved, the Geometric Correction process was run, with the interpolation method changed to bilinear interpolation and the output saved as sl_east_gcc.img.
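As a rough illustration of what the software fits behind the scenes, a polynomial transformation can be estimated from GCP pairs by least squares. This sketch is an assumption about the general approach, not ERDAS's actual implementation, and the GCP coordinates below are placeholders:

    import numpy as np

    def poly_terms(x, y, order):
        # Monomials x^i * y^j with i + j <= order: 3 terms for a first
        # order model (minimum 3 GCPs) and 10 terms for a third order
        # model (minimum 10 GCPs), matching the minimums noted above.
        return np.array([x**i * y**j
                         for i in range(order + 1)
                         for j in range(order + 1 - i)]).T

    # Placeholder GCPs: source pixel coordinates and reference coordinates.
    src = np.random.rand(12, 2) * 1000
    ref = src + np.random.rand(12, 2) * 5

    A = poly_terms(src[:, 0], src[:, 1], order=3)
    # One set of coefficients per output axis, fitted by least squares.
    coeff_x, *_ = np.linalg.lstsq(A, ref[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, ref[:, 1], rcond=None)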

Results
Part 1
The resulting geometrically corrected image Chicago_2000gcr.img, at an initial glance, shows little change from the original satellite image Chicago_2000.img. This is because the first order polynomial transformation used on this image only slightly shifted the original image so that the surface features were correct with regard to their position on the reference map (Figure 3). The amount of geometric distortion seen in the original image is relatively minor, and the image only needed to be altered slightly to match the real-world positioning of surface features.
Part 2
In comparison to the original sierra_leone_east1991.img, the geometrically corrected sl_east_gcc.img has been mostly corrected with regard to its planimetric position in the real world. When overlaid on the reference image and analysed using the swipe tool, the corrected image largely matches the reference image (Figure 4). In comparison, the original image, when overlaid on the reference image, is shown to be positioned far above where it should be. While sl_east_gcc.img is now largely geometrically correct, it is crucial to note that the image is still not perfect. In particular, the top-left corner of sl_east_gcc.img is stretched slightly beyond the boundaries of the reference image, while the bottom-left corner does not quite reach the edges of the reference image. In the future, these errors could be largely corrected by increasing the number of GCPs between the original and reference images and/or by decreasing the individual and total RMS errors of each GCP pair.

Sources

Earth Resources Observation and Science Center. United States Geological Survey. Retrieved November 17, 2016, from http://eros.usgs.gov/usa

Illinois Geospatial Data Clearinghouse. Retrieved November 17, 2016, from https://clearinghouse.isgs.illinois.edu

Wilson, C. (2016). Lab 6: Geometric correction. Eau Claire, Wisconsin.

Thursday, November 10, 2016

Lab 5: LIDAR Remote Sensing

Goals and Background
The main purpose of this lab was to gain basic knowledge and understanding of LiDAR data structure and data processing. To do this, I assumed the role of a GIS manager working in Eau Claire, Wisconsin, tasked with acquiring Eau Claire LiDAR point cloud data in LAS format. I was to use this data to process and retrieve surface and terrain models of Eau Claire and to generate an intensity image and other products from the terrain models.

Methods
First, an LAS dataset was created in ArcCatalog, and the necessary LAS files for the exercise were added to the dataset. The statistics of the dataset were then calculated and analysed for reference. The necessary XY Coordinate System and Z Coordinate System were applied to the dataset, "NAD 1983 HARN Wisconsin CRS Eau Claire (US Feet)" and "NAVD 1988 US Feet", respectively. These were chosen by analysing the metadata provided with the point cloud data. The LAS dataset was then displayed within ArcMap and overlaid on a shapefile of the larger Eau Claire area to verify it had been formatted correctly.
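The same setup step can also be scripted. Below is a hedged arcpy sketch under the assumption that the LAS tiles live in a local folder; the paths are hypothetical, and the coordinate system name string must match one available in ArcGIS:

    import arcpy

    arcpy.management.CreateLasDataset(
        input=r"C:\lab5\las_tiles",              # hypothetical folder of LAS files
        out_las_dataset=r"C:\lab5\eau_claire.lasd",
        spatial_reference=arcpy.SpatialReference(
            "NAD 1983 HARN Wisconsin CRS Eau Claire (US Feet)"),
        compute_stats="COMPUTE_STATS",           # calculate statistics on creation
    )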
Using the LAS dataset toolbar, the point cloud data was rendered as points color-coded by elevation. From here, various functions of the rendering tool were explored. This included rendering the point cloud's elevation as polygons, rendering the aspect of the point cloud, rendering the slope of the surface features, and displaying the contour lines of the features. Afterwards, the various preset classification and return filters were explored, including the Ground, Non Ground, and First Return presets. Finally, the feature points were viewed in both 2D and 3D interactive viewers.
Using the Conversion Tool "LAS Dataset to Raster", digital models of both the first-return surface (DSM) and the terrain (DTM) were created at a cell size of two meters. Hillshades of both digital models were then constructed using the 3D Analyst Tool "Hillshade". The output hillshades were then compared and analysed in order to interpret the surface features.
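These conversion and hillshade steps correspond to two standard geoprocessing tools; a minimal arcpy sketch with hypothetical file names is below (the intensity image in the next step would use value_field="INTENSITY" instead):

    import arcpy

    # Terrain model at a two-meter cell size; the first-return surface
    # model is the same call run with the dataset layer filtered to
    # first returns.
    arcpy.conversion.LasDatasetToRaster(
        "eau_claire.lasd", "dtm.tif",
        value_field="ELEVATION",
        interpolation_type="BINNING AVERAGE LINEAR",
        sampling_type="CELLSIZE",
        sampling_value=2,
    )

    # Hillshade of the terrain model with a default sun position.
    arcpy.ddd.HillShade("dtm.tif", "dtm_hillshade.tif", azimuth=315, altitude=45)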
With the point cloud filter set to "First Return", an intensity image was generated using the "LAS Dataset to Raster" Conversion Tool at the same cell size as the previous digital models. The output intensity image was then saved as a TIFF file and displayed in ERDAS, as the image appeared darkened in ArcMap. ERDAS automatically enhanced the image, allowing for analysis.

Results
Using the hillshades of both the DSM and DTM (Figure 1), terrain and surface features can be identified for the Eau Claire area. The hillshade of the DSM shows that the area is densely packed with both vegetation and buildings of various sizes and heights. However, this obscures much of the terrain. The DTM hillshade shows that the area is largely flat except where it approaches the river, sloping steeply to meet the surface of the water. The DTM hillshade also allows for the analysis of the various road networks throughout the area. This is because roads and streets are ground-level features picked up along with the terrain.
Using LiDAR data, an accurate intensity image of an area can be generated, similar to the optical remote sensing images produced by Landsat and other systems. However, unlike Landsat, the visual detail of this image is far greater. LiDAR sensors operate in the 0.7-1.5 μm NIR range. A Landsat 5 image produced in this spectral range outputs at a resolution of 30x30 meters. Using the "LAS Dataset to Raster" Conversion Tool, however, an image was generated from the LiDAR data at a spatial resolution of 2x2 meters. This far exceeds the resolution of Landsat imaging, which means that LiDAR can be used for fine-detail analysis and measurement.

Sources

Eau Claire County. www.co.eau-claire.wi.us/. Accessed 2013.

Price, Maribeth H. Mastering ArcGIS. 6th ed., McGraw-Hill, 2014.

Wilson, C. (2016). Lab 5: LiDAR Remote Sensing. Eau Claire, Wisconsin.

Tuesday, November 1, 2016

Lab 4: Miscellaneous Image Functions

Goals and Background
ERDAS Imagine possesses many effective built-in functions for the analysis and alteration of satellite and image data. The challenge comes from learning to use them. The purpose of lab four was to introduce students to a number of these built-in functions and allow for familiarization with their use. The methods and functions students were required to learn and use were as follows: delineating a study area from a larger source image, demonstrating how the spatial resolution of images can be optimized for visual interpretation, introducing radiometric enhancement techniques, linking a satellite image viewer to Google Earth for use as an image interpretation key, introducing a variety of resampling methods, examining and utilizing image mosaicking, and exploring the use of binary change detection with graphical modeling.

Methods
Part 1: Image Subsetting
In part one, image subsetting was the primary focus, both by using an inquire box and by utilizing an area of interest. First, the base image "eau_claire_2011.img" was brought into an ERDAS image view. With this image maximized in the view, an inquire box was generated around the Eau Claire - Chippewa County area of the image. Then, the Subset & Chip tool "Create Subset Image" in the raster menu was launched. This would generate an output image from the original image, "eau_claire_2011.img", marked as the input file. The output image was selected to be created "From Inquire Box". This meant that the output image would be created from only what was inside the inquire box located on the original image, the Eau Claire - Chippewa County area. The "Create Subset Image" tool was then run with the remaining default parameters, and the output saved as "eau_claire_2011sb_ib.img". This was the subset image created from the inquire box. Next, the view was cleared of all but the original "eau_claire_2011.img". Then the shapefile "ec_cpw_cts.shp" was added to the viewer. Both areas of the shapefile were selected with the shift key. The "Paste from Selected Object" function was then utilized in the "Home" menu, creating an area of interest around the shapefile. This area of interest was then saved as an "AOI Layer" under the "Save As" menu, creating the "ec_cpw_cts.aoi" area of interest layer. Next, "Create Subset Image" was launched again, once more utilizing the original image as the input file. The difference was that the "AOI" option was used in order to utilize the AOI layer created before. The tool was then run and the output saved as "ec_cpw_2011sb_ai.img", an image taken from the original of only the area of interest, the Eau Claire and Chippewa Counties.
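The inquire-box subset is conceptually a windowed read of the raster. Outside ERDAS, the same result can be approximated with rasterio; the pixel offsets below are hypothetical stand-ins for the inquire box extent:

    import rasterio
    from rasterio.windows import Window

    with rasterio.open("eau_claire_2011.img") as src:
        # Hypothetical window covering the Eau Claire - Chippewa area.
        window = Window(col_off=500, row_off=400, width=1000, height=800)
        subset = src.read(window=window)
        profile = src.profile
        profile.update(width=window.width, height=window.height,
                       transform=src.window_transform(window))

    with rasterio.open("eau_claire_2011sb_ib.img", "w", **profile) as dst:
        dst.write(subset)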
Part 2: Image Fusion
Part two involved the creation of a higher resolution image from a lower resolution image in order to aid in visual interpretation. The 30 m spatial resolution "ec_cpw_2000.img" was brought into a view, with the higher resolution 15 m panchromatic "ec_cpw_2000pan.img" being brought into another view. Under the Raster tool menu, Pan Sharpen was selected, with the "Resolution Merge" tool being selected from the drop-down menu. In the "Resolution Merge" window, the panchromatic image was selected as the high resolution input image, and the original "ec_cpw_2000.img" was selected as the multispectral input image. The method was set to "Multiplicative" and the resampling technique was set to "Nearest Neighbor". The resolution merge model was then run.
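In the "Multiplicative" method, each multispectral band value is multiplied by the corresponding panchromatic value. A minimal numpy sketch of the idea, assuming the multispectral bands have already been resampled onto the 15 m panchromatic grid (the arrays here are random placeholders):

    import numpy as np

    def multiplicative_merge(ms, pan):
        # ms: (bands, rows, cols) multispectral; pan: (rows, cols).
        fused = ms.astype(np.float64) * pan.astype(np.float64)
        # Crude rescale back to the input radiometric range.
        return fused * (ms.max() / fused.max())

    ms = np.random.randint(0, 256, (6, 100, 100))
    pan = np.random.randint(0, 256, (100, 100))
    sharpened = multiplicative_merge(ms, pan)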
Part 3: Simple Radiometric Enhancement
This part demonstrated a radiometric technique designed to reduce haze on a target image. The "eau_claire_2007.img" image was brought into an ERDAS viewer. This image showed significant haze in the bottom-right corner. To reduce this, the raster processing tool "Haze Reduction", found under the Radiometric tools, was launched. The original image was marked as the input file. All default parameters were accepted, and the tool was run, with the output being saved as "ec_2007_haze_r.img". This was the haze-reduced image, which was then compared to the original.
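ERDAS's Haze Reduction tool is a built-in routine whose internals the lab treats as a black box. As a simpler illustration of the same radiometric idea, dark-object subtraction estimates an additive haze offset from the darkest pixels of a band and removes it; note this is a related technique, not the tool's actual algorithm:

    import numpy as np

    def dark_object_subtraction(band, percentile=0.1):
        # Assume the darkest pixels should be near zero, so any
        # positive floor is treated as additive haze.
        haze = np.percentile(band, percentile)
        return np.clip(band - haze, 0, None)

    hazy_band = np.random.randint(20, 256, (100, 100))  # placeholder band
    corrected = dark_object_subtraction(hazy_band)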
Part 4: Linking Image Viewer to Google Earth
This next segment demonstrated how to link the image viewer directly with Google Earth, for quick comparison or for use as an image interpretation key. The "eau_claire_2011.img" was brought into a viewer. From here, the Google Earth key was selected from the top of the ERDAS interface, then "Connect to Google Earth" was selected. This brought up the Google Earth view. "Match GE to View" was then used to make the Google Earth viewer display the same area as the image in ERDAS. Afterwards, the "Sync GE to View" tab was used to allow synchronized viewing of both ERDAS and Google Earth.
Part 5: Resampling
In order to increase or decrease pixel size, resampling is required. This part of the lab detailed the processes behind resampling up (decreasing the pixel size). The "eau_claire_2011.img" was moved into a viewer, and the pixel size was recorded from the metadata to be 30 m. From here, the raster tool "Resample Pixel Size" was selected from the Spatial tools drop-down menu. "eau_claire_2011.img" was selected as the input image. The output cell size was changed from the original 30x30 meters to 15x15 meters and the resample method was set to nearest neighbor. All other parameters were left as default and the tool was run, with the output being saved as "eau_claire_nn.img". However, when compared to the original image, no visual change could be spotted. This was because the nearest neighbor resampling technique was dividing each of the original pixels into four new pixels without changing their color value, applying the color of the original image wherever the new pixels were. To correct this, the "Resample Pixel Size" tool was run again with the same parameters as before, but with the resampling method changed to Bilinear Interpolation. This meant that the new pixels being generated took their color from their location relative to the original surrounding pixels, with the closest pixels lending more weight to the final color. The tool was then run and the resample saved as "eau_claire_bli.img". This was compared to the original image, with the new image showing visual differences from the original.
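The difference between the two resampling methods can be reproduced with scipy, where the interpolation order selects the technique (a zoom factor of 2 halves the pixel size from 30 m to 15 m; the band array is a placeholder):

    import numpy as np
    from scipy.ndimage import zoom

    band = np.random.randint(0, 256, (100, 100)).astype(np.float64)

    # order=0: nearest neighbor; each original pixel becomes a block of
    # four identical pixels, so the image looks unchanged.
    nn = zoom(band, 2, order=0)

    # order=1: bilinear; new pixel values are distance-weighted blends
    # of the surrounding original pixels.
    bli = zoom(band, 2, order=1)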
Part 6: Image Mosaicking
Image mosaicking is a tool used to link or overlay multiple satellite images for visual interpretation when one satellite image isn't large enough to produce the desired results. First, the images "eau_claire_1995p26r29.img" and "eau_claire_1995p25r29.img" were individually added to a viewer. However, special parameters were selected for each image under the "Multiple" and "Raster Options" tabs of the "Select Layers to Add" window before each image was added. Under "Raster Options", Background Transparent and Fit to Frame were checked, and under "Multiple", the radio button "Multiple Images in Virtual Mosaic" was selected. This resulted in the images being displayed one overlaid on the other. Next, the raster Mosaic tool "Mosaic Express" was opened. From here, "eau_claire_1995p25r29.img" was added first, followed by "eau_claire_1995p26r29.img", in order to layer them correctly. All other options of the Mosaic Express tool were left as default, and the output was saved as "eau_claire1995msx.img" before running the model. After displaying the newly created image, it was discovered that the colors between the two images did not blend evenly in the overlap area. To make up for this, the "MosaicPro" raster Mosaic tool was launched. Each of the two original images was added to the tool, with the Image Area Options being changed to Compute Active Area for each image. Using the Select tool, the order of the images was adjusted until "eau_claire_1995p25r29.img" was the bottom image, meaning the other image would be the one overlaid on top. The radiometric properties of the images were synchronized by checking the "Use Histogram Matching" tab under the Color Corrections tool. The "Set Overlap Function" was checked to verify that the Overlay function was set as default, and the Mosaic process was run, with the output being compared to the original Mosaic Express output. The new MosaicPro image showed far better blending of color in the overlap area.
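The "Use Histogram Matching" color correction remaps one image's band histograms to match the other's so the overlap blends. Outside ERDAS, scikit-image offers an equivalent operation; the arrays below are random stand-ins for the two scenes:

    import numpy as np
    from skimage.exposure import match_histograms

    bottom = np.random.randint(0, 256, (100, 100, 3))  # stand-in for p25r29
    top = np.random.randint(0, 256, (100, 100, 3))     # stand-in for p26r29

    # Remap the top image's histogram to the bottom image's.
    matched = match_histograms(top, bottom, channel_axis=-1)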
Part 7: Binary Change Detection
Using images taken of the same area at different times, it is possible to use pixel brightness to detect change from one image to the next. In one viewer, "en_envs1991.img" was displayed. In a second viewer, "ec_envs2011.img" was displayed. Next, the Raster tool "Function" was selected. "en_envs1991.img" was selected as input 2 and "ec_envs2011.img" was selected as input 1. The operator was changed to "-". The Layer was changed from "All" to "4" to apply the change detection to the NIR band. Afterwards, the image differencing process was run and the results displayed. Using the metadata, the histogram was displayed for the output. The cutoff point was determined to be the mean + 1.5(standard deviation). This is entirely subjective but was determined to be the best cutoff point between areas of little change and areas of high change. Using this data, a histogram was made showing the center value of the histogram, with the cutoff points shown as the center value of the histogram plus or minus (mean + 1.5(standard deviation)). From here a model was constructed using the Model Maker tool, with the two input rasters being "ec_envs_2011_b4.img" and "ec_envs_1991b4.img", NIR images of the designated area taken in 2011 and 1991, respectively. These input rasters were input into the function
 "$n1_ec_envs_2011_b4 - $n2_ec_envs_1991_b4 + 127"
which would map the pixels that had changed between the two images. "127" was the determined constant of this function. The model output the function as a single raster, which was saved as "ec_91-11chg_b.img". This image was brought into the viewer, and its metadata was examined for a cutoff point. It was decided from this that the threshold would be determined from "mean + (3 x standard deviation)". The Model Maker tool was launched again, but instead with a single input raster for a function which output a single raster. "ec_91-11chg_b.img" was inserted as the input raster, and the function was determined to be a conditional function of
EITHER 1 IF ( $n1_ec_91 > change/no change threshold value) OR 0 OTHERWISE,
where threshold value = mean + (3 x standard deviation).
This meant that only areas of sufficient change would be displayed in the output raster. The output raster was saved as "ec_91-11bvis.img". Using ArcMap, this was displayed over the original NIR image "ec_envs1991b4.img" in order to create an aesthetically pleasing map that concisely displayed the changed areas of the region from 1991 to 2011.
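The two Model Maker steps reduce to a difference with an offset followed by a threshold test, which can be sketched in numpy (the band arrays are placeholders for the two NIR rasters):

    import numpy as np

    b4_2011 = np.random.randint(0, 256, (100, 100)).astype(np.int32)
    b4_1991 = np.random.randint(0, 256, (100, 100)).astype(np.int32)

    # Step 1: difference image with a +127 offset so that negative
    # change still fits in an unsigned 8-bit raster.
    diff = b4_2011 - b4_1991 + 127

    # Step 2: EITHER 1 IF (diff > threshold) OR 0 OTHERWISE,
    # with threshold = mean + 3 x standard deviation.
    threshold = diff.mean() + 3 * diff.std()
    change_mask = (diff > threshold).astype(np.uint8)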
Results
Part 1
The results of part one demonstrated how an image can be broken down into smaller image subsets (Figure 1). Using image subsets can allow for more effective image manipulation and interpretation, cutting out data not useful to the study. In this case, the area of interest and inquire box tools were used to create image subsets of the Eau Claire and Chippewa County areas from a larger image.
Part 2
With the use of pansharpening, an output image could be created from the original input image with the greater resolution of the panchromatic image. The pan-sharpened image now has the 15x15 meter pixel size of the panchromatic image (Figure 2). For visual interpretation, this presents an enormous advantage. For most satellite imagery, the panchromatic band can be recorded at a higher resolution than the other spectral bands. By using pan-sharpening, the higher resolution can be applied to the other spectral bands without increasing the demands on the satellite's sensors.
Part 3
By using haze reduction, the haze present in the bottom-right corner of the image has largely been removed (Figure 3). This technique proves to be very effective at cleaning up an image's haze. With this tool, images that were largely marred and unusable for image analysis can now be effectively used. It is interesting to note that while the haze has been cleared in the haze-reduced image, the shadow of the haze appears to remain. This suggests the technique may not be perfect, and that it is still best to use a satellite image without haze unless absolutely necessary.
Part 4
By linking an image in an ERDAS viewer to Google Earth, attempts at visual interpretation are greatly improved. Google Earth, when linked to a relevant satellite image, acts as a high resolution selective interpretation key. Google Earth displays the same geographical area as the linked ERDAS view in the visible spectrum while also providing annotations and labels for many of the key landmarks and features displayed (Figure 4).
Part 5
Through the use of several resampling techniques, it is possible to increase the resolution of an image without the use of pansharpening. This is particularly helpful in a case where the original image may not have an accompanying panchromatic image with a higher resolution. However, it is important to note how an image should be correctly resampled. If the resampling method is set to nearest neighbor when resampling to a higher resolution, such as in this case where the image was resampled from a 30x30 meter resolution to a 15x15 meter resolution, the original pixels will merely be broken into smaller pixels of the same color as the original (Figure 5). This is because the nearest neighbor of each new pixel will always be the original pixel.
To correct this while resampling up to a higher resolution, it is important to change the resampling technique to Bilinear Interpolation. This will generate a resampled image with a higher resolution and a better blend of colors in the newly generated pixels (Figures 6 and 7). This is because the colors of the new pixels are generated based on their relative locations to all of the original surrounding pixels, with the closer original pixels lending more weight to the color of the new pixels.
Part 6
By use of image mosaicking techniques, multiple images can be combined using their overlap area to generate a linked image. This is particularly useful when the study area is larger than one satellite image. Depending on the mosaicking technique used, a simple overlay is generated, like that with Mosaic Express (Figure 8), or a more even transition of colors between the overlap of the images can be created using MosaicPro and Histogram Matching (Figure 9).
Part 7
With the use of binary change detection, changes in pixel value can be detected from one image to the next. Using the metadata, cutoff points can be determined for how much change is necessary in order for it to be displayed. A histogram can be found within the metadata and used to display the cutoff points (Figure 10), as is shown for "ec_91-11chg_b.img". Furthermore, using binary change detection and Model Maker, an image can be generated which shows these significant changes visually. This image file can then be constructed into a map to easily display this information. Figure 10 shows a map generated from the finished product of a binary change detection process between the images "ec_envs_2011_b4.img" and "ec_envs_1991b4.img", taken in 2011 and 1991 respectively. This map shows significant change around population centers, suggesting that the change between images is due to urban expansion over the last 20 years, with areas surrounding cities being converted for residential use. Binary change detection is not limited to urban expansion analysis. It can also be used to detect deforestation and monitor land use.
Sources

Wilson, C. (2016). Lab 4: Miscellaneous Image Functions. Eau Claire, Wisconsin.

Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey

Shapefile is from the Mastering ArcGIS, 6th edition, dataset by Maribeth H. Price, McGraw-Hill, 2014.