Monday, September 23, 2013

Intro to ERDAS Imagine


This week in Remote Sensing we were introduced to ERDAS Imagine.  ERDAS Imagine is a geospatial image processing application.  We started out with a pretty simple introduction to the application's menus and functions, working with an existing raster and creating a new raster from a subset selection.  The simple navigation and manipulations we did today were fairly easy; we'll see what comes next.  For the final product (above) we had to move back to ArcGIS to create our map.

I'll be honest, I'm not super excited about jumping into a new, complicated, and apparently buggy tool this late in the program.  I feel like we were just getting to the good stuff in ArcGIS and I'd much rather come away feeling like we plumbed the depths there.  Hopefully the pay-off will be worth this detour.

Predictive Modeling


This week in Special Topics in Archaeology, we had a pretty interesting lab exercise.  We started with a digital elevation model of the Tangle Lakes Archaeological District in Alaska and some hydrographic information (mostly about the streams and ice mass).  From this raster and these features, we were able to create additional rasters that indicated favorable slope (for living areas), favorable access to sunlight (more south-facing), elevations below the ice mass (warmer), and areas within 0.5 miles of a water source (one of the many streams).

With all this information, we combined the rasters, each with its own weighting, into a single weighted overlay (above).  This map shows the areas most likely to have been inhabited in green and least likely in red.  While this was a pretty simple view of this study area, it was a good demonstration of the tools we will need to conduct a similar study in our own archaeological study regions.
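
Just to capture the gist of the workflow, here is a rough Python sketch of deriving the criterion rasters and combining them with weights using ArcGIS's Spatial Analyst (arcpy).  The paths, thresholds, and weights are placeholders, not the lab's actual values.

    # Hedged sketch: derive criterion rasters from the DEM and streams, then
    # combine them into a weighted suitability surface. Values are illustrative.
    import arcpy
    from arcpy.sa import Slope, Aspect, EucDistance, Con

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\data\tangle_lakes"   # hypothetical workspace

    dem = arcpy.Raster("dem")                       # digital elevation model
    streams = "streams.shp"                         # hydrographic features

    slope = Slope(dem)                              # slope in degrees
    aspect = Aspect(dem)                            # 0-360; south is roughly 135-225
    dist_water = EucDistance(streams)               # distance to nearest stream (map units)

    # Reclassify each criterion to a 0/1 "favorable" raster
    good_slope  = Con(slope <= 5, 1, 0)                          # gentle ground for living areas
    good_aspect = Con((aspect >= 135) & (aspect <= 225), 1, 0)   # south-facing, more sunlight
    good_elev   = Con(dem <= 900, 1, 0)                          # below the (assumed) ice-mass elevation
    near_water  = Con(dist_water <= 805, 1, 0)                   # within ~0.5 mi (805 m) of a stream

    # Weighted combination; the weights sum to 1 and stand in for the lab's values
    suitability = (0.3 * good_slope + 0.2 * good_aspect +
                   0.2 * good_elev + 0.3 * near_water)
    suitability.save("suitability.tif")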

Sunday, September 22, 2013

Ground Truthing


Our Remote Sensing lab this week focused on accuracy and ground truthing the classification that we made last week.  We started with our existing LULC classifications and then created a set of sample points.  I recalled an ArcGIS function that creates a fishnet grid over a map and decided to try that as a means to implement systematic sampling.  When the grid was created (as a polyline), ArcGIS also created a set of points in the center of each cell as a separate shapefile.  This, it turns out, was exactly what I needed: an evenly spaced set of sample points free of my own selection bias.  I then used Google Street View to zoom in and verify that each point was indeed of the classification that I had assigned.
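
For reference, the fishnet step can also be scripted with ArcPy's Create Fishnet tool; a minimal sketch is below, with coordinates, cell size, and file names that are purely illustrative.

    # Hedged sketch of the fishnet sampling step. With the labels option set to
    # "LABELS", the tool also writes a companion point feature class of cell
    # centers, which served as the ground-truthing sample points.
    import arcpy

    arcpy.env.workspace = r"C:\data\lulc"    # hypothetical workspace

    arcpy.CreateFishnet_management(
        "sample_grid.shp",   # output polyline grid
        "0 0",               # origin (lower-left corner of the study area)
        "0 10",              # point on the Y axis; sets the grid orientation
        "500", "500",        # cell width and height in map units
        "0", "0",            # rows/columns derived from the opposite corner
        "5000 5000",         # opposite (upper-right) corner of the study area
        "LABELS",            # also create the cell-center points
        "#",                 # no template extent
        "POLYLINE")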

I ended up with about 90% success and 10% misclassifications.  The bad classifications tended to be the Industrial areas (not so industrial, it turns out).  Overall, though, I think it went pretty well and showed that we can discern the land use classes of an urban area reasonably reliably.
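
Tallying the results is simple enough to script as well; here is a generic sketch (field names and values are made up, not the lab's schema) that computes the overall accuracy and a per-class breakdown with pandas.

    # Compare the class assigned during classification with the class verified
    # in Street View for each sample point. The data below is a stand-in.
    import pandas as pd

    points = pd.DataFrame({
        "assigned": ["Residential", "Industrial", "Industrial", "Commercial"],
        "verified": ["Residential", "Commercial", "Industrial", "Commercial"],
    })

    overall = (points["assigned"] == points["verified"]).mean()
    print(f"Overall accuracy: {overall:.0%}")

    # Per-class cross-tabulation shows which categories (e.g. Industrial) misfire
    print(pd.crosstab(points["assigned"], points["verified"]))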

Friday, September 20, 2013

Identifying Maya Pyramids, Part III + Angkor Wat


For the final week of our Maya Pyramids project, we focused on tools that allow us to share our ArcGIS maps in Google Earth.  We created .kmz files that captured our potential pyramid sites, some of our map views of the sites, and our final map of potential sites (above).
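
The export itself boils down to ArcGIS's Layer To KML tool; a one-call sketch follows, with a layer name and output path that are placeholders for the project's actual data.

    # Hedged sketch: write a map layer out as a .kmz that Google Earth can open.
    import arcpy

    arcpy.LayerToKML_conversion(
        "potential_pyramid_sites",              # layer in the current map document
        r"C:\data\maya\potential_sites.kmz",    # output .kmz file
        1)                                      # output scale (only matters with scale-dependent rendering)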


The graduate students completed a "bonus" section creating similar composite views (NDVI in my case) for Angkor Wat, which were then used to train a supervised classification.  My final classification is above.

The Angkor Wat site seemed fairly "trainable," though I did run into some initial problems.  Apparently it is quite cloudy at the site.  In my first download (very large, as we know) of Landsat images, the site was completely obscured by clouds.  The second set of images, from an earlier date, is what I ended up using; it still had a fair amount of cloud cover, but I was able to work around it.

The site seems to contain a lot of canals and ponds.  These are highlighted quite well by the False Color composite and especially by the NDVI view.  The Supervised Classification, however, does not seem to recognize the water (it is strangely classified as "Urban").  Even so, the linear nature of the canals and various man-made structures is still obvious.
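
For anyone curious what the supervised classification step amounts to under the hood, here is a minimal sketch using scikit-learn rather than ArcGIS's Image Classification tools: training pixels with known labels fit a classifier, which then assigns a class to every pixel in the band stack.  All arrays and class codes below are stand-ins.

    # Minimal supervised-classification sketch (scikit-learn, not ArcGIS).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    bands = np.random.rand(4, 200, 200)            # stand-in for the Landsat band stack
    train_labels = np.zeros((200, 200), dtype=int) # 0 = unlabeled pixel
    train_labels[10:20, 10:20] = 1                 # e.g. "water" training pixels
    train_labels[100:120, 50:70] = 2               # e.g. "forest" training pixels

    X = bands.reshape(bands.shape[0], -1).T        # one row of band values per pixel
    y = train_labels.ravel()

    clf = RandomForestClassifier(n_estimators=50).fit(X[y > 0], y[y > 0])
    classified = clf.predict(X).reshape(200, 200)  # class code for every pixel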

The linear indicators help highlight the extensiveness of the site beyond the monumental core.  The site seems to extend at least as far east as it does west (far left of the large rectangular pond).  Similarly, the site seems to extend north and south about the same distance as the height of the central square region.  I'm not sure the Supervised Classification provided a better view of the nature of the site, as the plain False Color and NDVI images expose its extent just as well.

Thursday, September 19, 2013

Identifying Mayan Pyramids: Data Analysis


We continued with our attempt to discover Mayan pyramids in the Guatemalan jungle using Landsat images.  This involved using NDVI (Normalized Difference Vegetation Index) and also combining some of the Landsat bands to try to find a combination that allowed us to identify a pattern we could use to search for additional pyramids.  Once we identified a pattern (not so easy, more below), we would train ArcGIS to associate that pattern with pyramids and analyze the entire 4-5-1 band composite for us.
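
As a quick reference, NDVI itself is just a band ratio; a bare-bones Python sketch is below, assuming the red and near-infrared bands (Bands 3 and 4 on Landsat 5/7) have already been read into NumPy arrays.

    # NDVI = (NIR - Red) / (NIR + Red); file reading is omitted.
    import numpy as np

    def ndvi(nir, red):
        nir = nir.astype(float)
        red = red.astype(float)
        denom = nir + red
        return (nir - red) / np.where(denom == 0, 1, denom)  # guard against divide-by-zero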

This was a difficult lab for a couple of reasons.  First, ArcGIS just seems to have some inherent performance problems when dealing with raster operations on files this large.  I have a fairly high-performance computer and it still slowed to a crawl in a few instances, sometimes badly enough to require a restart or reboot.  Second, the known Mayan pyramids that we were looking at were never more than mere smudges on the (fairly) low-resolution Landsat images.  I could see the Mirador pyramids in the Esri imagery basemap but really couldn't see anything definitive in the Landsat images regardless of the processing we put them through.

Overall, I liked the idea of the lab and I think it could be used to good effect if we had higher resolution images to manipulate and train.

Remote Sensing Mayan Pyramids: Week 1


For our first few assignments in Special Topics in Archaeology, we are doing something pretty cool - we'll be analyzing Landsat imagery to search for Mayan pyramids in the Guatemalan jungle.  We began experimenting with and combining various bands from the 8 bands available from Landsat.  The different combinations each have different advantages.  The image above shows three of the possibilities.  Landsat Band 8 provides the highest resolution of the Landsat bands (15 m panchromatic).  Natural Color combines the visible-spectrum bands (Landsat Bands 1 - 3) and includes pansharpening (using Band 8).  False Color combines the green and red bands (Bands 2 and 3) with the Near Infrared (Band 4) to highlight biomass.
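
For the record, building these composites outside ArcGIS is straightforward; here is a rough sketch with rasterio and NumPy, using placeholder file names and skipping the pansharpening step.

    # Read individual Landsat 7 band files and stack them into display composites.
    import numpy as np
    import rasterio

    def read_band(path):
        with rasterio.open(path) as src:
            return src.read(1).astype(float)

    b1, b2, b3, b4 = (read_band(f"LE7_B{n}.TIF") for n in (1, 2, 3, 4))

    # Natural color: red, green, blue display channels = Bands 3, 2, 1
    natural_color = np.dstack([b3, b2, b1])

    # False color: near-infrared in the red channel highlights biomass = Bands 4, 3, 2
    false_color = np.dstack([b4, b3, b2])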

We still don't have a great view of the pyramid at Mirador, but we will continue to home in on it in the following weeks.

Wednesday, September 18, 2013

LULC Analysis


This week's assignment in Remote Sensing focused on land use/land cover analysis of an aerial photo.  Our job was to analyze apparent land use and classify it according to a USGS two-level classification code.

The map seems to have two main “super-areas” – the bay and the urban land.  I began by creating a polygon for the whole bay (bounded by the shore and the frame edges).  I then created a series of polygons for the wetland islands.  Later I would “erase” the islands from the bay polygon to create a shapefile that was just the water areas of the bay (avoiding polygon overlap).

In the urban region, I began by identifying “natural” areas like the rivers, estuaries, lakes, and small forested areas. There didn’t seem to be any agricultural areas in this photo. These natural areas were pretty easy to identify, though I could see some difference of opinion on forest types and various classifications of wetland, estuary, and streams.  Similar to how the islands were handled in the bay, the deciduous forest was “unioned” and then the lakes “erased” so that the forest and lake polygons wouldn’t overlap.

Next, I started classifying the urban areas.  Several of these were quite easy to identify after the lecture and text descriptions – industrial areas, schools, and retail areas.  I was also quite happy to recognize the cemetery in the lower right.  The most difficult was the region bordering the highway, which I classified as “Commercial and Services”.  There is quite a mix in there and some of it may even be residential.

Finally, I deemed everything in the Urban region that wasn’t otherwise classified “Residential”.  This is all the single-residence housing that fills the urban region.  To create this region, I took the Urban Land polygon and “erased” a union of all the classified urban sub-regions out of it.
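
The erase/union bookkeeping could also be scripted; here is a compact GeoPandas illustration of the same idea (the ArcGIS Erase and Union tools), with placeholder shapefile names standing in for the lab's data.

    # "Erase" = difference overlay; used to keep polygons from overlapping.
    import geopandas as gpd

    bay     = gpd.read_file("bay.shp")
    islands = gpd.read_file("wetland_islands.shp")
    urban   = gpd.read_file("urban_land.shp")
    classed = gpd.read_file("classified_subregions.shp")  # union of all non-residential classes

    # Water areas of the bay = bay polygon minus the wetland islands
    open_water = gpd.overlay(bay, islands, how="difference")

    # Residential = everything in the urban polygon not already classified
    residential = gpd.overlay(urban, classed, how="difference")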

Tuesday, September 17, 2013

Visual Interpretation


For our first Remote Sensing lab assignment, we began with the basics of the visual interpretation of aerial photographs.  In our first exercise, we simply identified regions in the image (above) that fall along a five-step scale in tone (light to dark colors) and texture (smooth to rough).  These two features (tone and texture) can help us understand what we are looking down on in aerial images.  In the image above, for example, the very smooth area (lower left, in blue) is water, while a very "rough" area is the section of residential housing left of center.


The second part of the exercise had us identifying objects using one of several strategies: Shape & Size, Shadows, Patterns, or Association.  Some objects are quite obvious just by their shape and size.  I happen to be quite familiar with docks and piers, for example, and the pier in the image above was easy for me to identify by shape.  Similarly, the water tower may not be easily identifiable with only a top-down view, but the shadow is quite distinct to anyone who has ever lived in the Midwest.  The parking lot appears to have a distinctive herringbone pattern.  Finally, we can use association to identify features.  The beach is not particularly distinctive in this view.  However, we know we have water in the image (from the wave pattern), so the association of this feature adjacent to the water is easy to make.