Monday, July 25, 2016

Exercise 6

Goals

The goal of exercise 6 was to use a suite of raster and vector tools to identify common characteristics of landslides in Oregon. This exercise is a continuation of exercise 5. This time we used a search cursor and an update cursor to extract and correct values in tables to support our study.


Methods

The first step in exercise 6 was to gather the data provided in the exercise 6 zip file. After transferring that data to the working exercise 6 folder, the Python script could be started. As with all of the scripts written so far, the first step was a print statement to confirm that the script was running. Next, the standard modules were imported: os, shutil, time, datetime, and arcpy. The last piece of the standard setup was configuring the environment so that outputs can overwrite existing files.

After those steps were completed, variables were created for the paths to the exercise 6 geodatabase, the feature dataset, and the working folder. With the paths stored in variables, naming conventions were set up for all of the outputs for convenience, so that each output name carries a suffix indicating what it is or which tool produced it. Next came a list of the feature classes that would be created in the exercise, followed by the field names for the new fields to be added.

The first geoprocessing tool was Select, used to narrow the study sample to landslides that have a width and length greater than zero and fall in the debris flow, earth flow, or flow movement classes. The next tool was Extract Multi Values to Points, which adds the landcover, slope, and precipitation values to the landslide point feature class.

The next tool to run was Buffer, but knowing how large a buffer to draw requires a distance derived from the length and width of each point. To create this new measurement, a new field was added and the Calculate Field tool was run. The calculation adds the length to the width, divides by two, and multiplies by 0.3048 to convert feet to meters. Once the buffer distance field was populated, Buffer could be executed.
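The buffer distance calculation can be sketched as a small standalone helper. This is a minimal illustration of the arithmetic only, not the actual Calculate Field expression from the script; the function name, field units (feet), and the 0.3048 ft-to-m conversion factor are assumptions for the sketch.

```python
# Hypothetical helper mirroring the buffer distance calculation described
# above: average the landslide length and width (assumed to be recorded in
# feet) and convert the result to meters (1 ft = 0.3048 m).
FEET_TO_METERS = 0.3048

def buffer_distance_m(length_ft, width_ft):
    """Return the buffer radius in meters for one landslide point."""
    return (length_ft + width_ft) / 2 * FEET_TO_METERS

# Example: a landslide 200 ft long and 100 ft wide gets a buffer of
# (200 + 100) / 2 * 0.3048, which is about 45.72 m.
distance = buffer_distance_m(200, 100)
```

In the real script this expression would be supplied to the Calculate Field tool so that the Buffer tool can read the distance from the new field.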
The next step was to calculate statistics of the slope values within each buffered landslide using the Zonal Statistics as Table tool. The resulting table was then joined back to the buffered points feature class. After this join, some slides had null values caused by errors in the DEM, so the next step was to replace those nulls with the correct values. This is where the search cursor and update cursor came into effect: the search cursor finds the null values and the update cursor replaces them with the correct values, inside a loop that keeps searching and updating until every value is correct. The next step was to create summary statistics for precipitation, slope, and slide area. Then a table was created, using the Tabulate Area tool, that calculates how much of each buffer falls within the different landcover classes. The final piece of the script was a loop that deletes the unwanted intermediate feature classes.
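The search-then-update pattern described above can be illustrated with a small pure-Python stand-in. Here the rows are plain lists, standing in for what arcpy cursors would yield, with None playing the role of a null mean-slope value from the zonal statistics join. The replacement value used below, the point's own sampled slope, is purely an assumption for illustration; the real script supplies whatever correct value applies.

```python
# Stand-in for the cursor loop described above. Each row mimics a cursor
# row of [point_slope, mean_slope]; None represents a null MEAN value
# left behind by a DEM error after the zonal-statistics join.
rows = [
    [31.5, 30.9],
    [28.0, None],   # null mean slope from a DEM gap
    [40.2, None],   # another null to be filled
]

def fill_null_means(rows):
    """Keep passing over the rows until no null mean-slope values remain."""
    while any(r[1] is None for r in rows):   # the "search" pass: any nulls left?
        for r in rows:                       # the "update" pass: fix each null
            if r[1] is None:
                r[1] = r[0]                  # assumed fallback: the point's slope
    return rows

fill_null_means(rows)
```

The same shape applies with real cursors: a search pass detects remaining nulls, an update pass writes replacements, and the outer loop repeats until the search finds nothing left to fix.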

Results

The script itself had a couple of bugs the first time around. After going through and correcting some of the spelling mistakes, there were still errors that I couldn't find. Eventually, after going through the script a couple of times slowly, I noticed that I was missing the line of code that added and calculated the slidelengthfieldName field. Once that was corrected, the script ran correctly and produced the desired table and feature classes. Below is a series of screenshots showing the final script.











Conclusion

I can confidently say that this was the most difficult script to run so far. I think the combination of new and difficult tools made it tough at times to follow what needed to be done. That being said, this exercise gave good practice using new tools and working with a long script. The debugging process was more difficult than usual, but it was all the more rewarding when I found the error that kept my script from running.











