Wednesday, July 27, 2016

Exercise 7

Goals

Exercise seven is the last phase of generating data for landslide susceptibility in Oregon. In this last section, a risk model for landslides in Oregon will be built. The risk model will be based on multiplying reclassified raster risk values together to come up with an overall risk score.

Methods

This script starts out the same way as the previous scripts. The data was already provided in exercises five and six, so the first step is to ensure the script is running and to import all of the necessary modules. The necessary modules are the same as in exercise six: os, shutil, time, datetime, and arcpy. As usual, outputs were set to overwrite existing files. After that, variables were created to represent both the paths and the feature classes that will be used during the script. The next step was to set up field names for each of the new fields that would be created during the script.

The next step in the script was to create a fishnet that would be used as the unit of analysis in combination with the roadway buffer. The first step to creating a fishnet is getting a processing boundary; to do this, the arcpy.Describe function is used to take the extent from the slope data. Once the boundary is determined and the fishnet is created, the next step is to buffer the roadways, because we are interested in areas near roads. Next, the roadway buffer and the fishnet are intersected, which creates the layer that serves as the unit of analysis. The next step is to set up the reclass values for slope and land cover. The reclass values were given, so all that needed to be done was to set up a variable to hold the list of reclass values and run the Reclassify tool. The reclassified rasters are then multiplied together to obtain the risk raster. Once the risk raster is finished, running the Zonal Statistics as Table tool summarizes the median risk value for each unit of analysis. The last step was to join the zonal statistics table back to the unit-of-analysis layer. A hedged sketch of this workflow appears below.
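Since the finished script only appears as a screenshot, here is a minimal sketch of the core of this workflow. All of the paths, the fishnet cell size, the buffer distance, the UnitID field, and the reclass break values are placeholders made up for illustration (the real values were provided with the exercise); the Describe/CreateFishnet step, the buffer-and-intersect step, the Reclassify and raster multiplication, and the Zonal Statistics as Table summary are the parts that match the write-up above.

# Hedged sketch of the exercise 7 workflow. Paths, cell size, buffer
# distance, reclass breaks, and the UnitID field are placeholders, not the
# values used in the actual exercise.
import arcpy
from arcpy.sa import Reclassify, RemapRange, ZonalStatisticsAsTable

arcpy.env.overwriteOutput = True
arcpy.CheckOutExtension("Spatial")

gdb = r"C:\Ex7\Exercise7.gdb"          # placeholder geodatabase path
slope = gdb + r"\slope"                # slope raster from exercise five
landcover = gdb + r"\landcover"        # land cover raster
roads = gdb + r"\roads"                # roadway feature class

# Use the slope raster's extent as the processing boundary for the fishnet.
ext = arcpy.Describe(slope).extent
origin = "{} {}".format(ext.XMin, ext.YMin)
yAxis = "{} {}".format(ext.XMin, ext.YMin + 10)
corner = "{} {}".format(ext.XMax, ext.YMax)
fishnet = gdb + r"\fishnet"
arcpy.CreateFishnet_management(fishnet, origin, yAxis, "1000", "1000",
                               "0", "0", corner, "NO_LABELS", slope, "POLYGON")

# Buffer the roadways (we only care about areas near roads), then intersect
# the buffer with the fishnet to build the unit-of-analysis polygons.
roadBuffer = gdb + r"\roads_buffer"
arcpy.Buffer_analysis(roads, roadBuffer, "500 Meters", dissolve_option="ALL")
analysisUnits = gdb + r"\analysis_units"
arcpy.Intersect_analysis([roadBuffer, fishnet], analysisUnits)

# Give each unit a stable ID to use as the zone field.
arcpy.AddField_management(analysisUnits, "UnitID", "LONG")
arcpy.CalculateField_management(analysisUnits, "UnitID", "!OBJECTID!", "PYTHON_9.3")

# Reclassify slope and land cover onto a common risk scale (breaks made up here).
slopeRisk = Reclassify(slope, "VALUE",
                       RemapRange([[0, 10, 1], [10, 25, 5], [25, 90, 9]]))
coverRisk = Reclassify(landcover, "VALUE",
                       RemapRange([[0, 20, 1], [20, 60, 5], [60, 100, 9]]))

# Multiply the reclassified rasters together to get the overall risk raster.
riskRaster = slopeRisk * coverRisk
riskRaster.save(gdb + r"\risk")

# Summarize the median risk per unit of analysis and join the table back.
riskTable = gdb + r"\risk_median"
ZonalStatisticsAsTable(analysisUnits, "UnitID", riskRaster, riskTable,
                       "DATA", "MEDIAN")
arcpy.JoinField_management(analysisUnits, "UnitID", riskTable, "UnitID", ["MEDIAN"])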



Results 

The results of the exercise seven script are represented by the risk model map. Figure One is the script that was created in this exercise, and Figure Two is the resulting map of the risk model produced when the script was run.



Figure One. The script created in exercise seven to develop a risk model.


Figure Two. The resulting map from the exercise seven script.




Conclusion

Exercise seven was the final step in the process of creating a risk model for landslides in Oregon. It was cool to see the resulting map and symbolize it so that high-risk areas show up red and safer areas show up green. It is also satisfying to see the combined results of the three scripts that we ran over the last three exercises. Overall, this group of exercises was split up nicely so that it wasn't too overwhelming, but the result was still rewarding.








Monday, July 25, 2016

Exercise 6

Goals

The goals of exercise six were to use a suite of raster and vector tools to identify common characteristics of landslides in Oregon. This exercise is a continuation of exercise five. This time we will be using a search cursor and an update cursor to extract and correct values in tables for our study.


Methods

The first step in exercise 6 was to gather the data provided in the exercise 6 zip file. After transferring that data to the working exercise 6 folder, the Python script can be started. As with all of the scripts that have been written, the first step is to write a print statement to ensure that the script is running. Next, the standard modules need to be imported: os, shutil, time, datetime, and arcpy. The next step in the standard setup is to make sure that outputs can overwrite existing files. After those steps have been completed, it is necessary to create variables for the paths to our exercise 6 geodatabase, the feature dataset, and the working folder. Once the paths have variables, for convenience we set up naming conventions for all of our outputs, so that each output name has a suffix showing what it is or which tool produced it. The next thing that was done was to create a list of the feature classes that will be created in the exercise, and then to set up the field names for the new fields that we will create.

Next, the Select tool is run in order to narrow down the study sample to landslides that have a width and length greater than zero and fall in the debris flow, earth flow, or flow movement classes. The next tool is Extract Multi Values to Points, which adds the land cover, slope, and precipitation values to the landslide point feature class. The next tool that needs to be run is a buffer, but in order to know how large a buffer is wanted, the length and width of each point have to be taken into account. To create this new measurement, a new field is added and the Calculate Field tool is run: the calculation adds the length to the width, divides by two, and multiplies by a conversion factor to convert feet to meters. Once the buffer distance field is created, the buffer can be executed. The next step is to calculate statistics of the slope values for each of the buffered landslides using the Zonal Statistics as Table tool, and the resulting table is then joined back to the buffered points feature class. Once this step has been completed, some of the slides have null values caused by errors in the DEM, so the next step is to replace those null values with the correct value. This is where the search cursor and update cursor come into play: the search cursor finds the null values and the update cursor replaces them with the correct values, inside a loop that continues to search and update until they are all correct. The next step is to create summary statistics for precipitation, slope, and slide area. Then a table is created with the Tabulate Area tool that calculates how much of each buffer falls within the different land cover classes. The final thing in the script is a loop that goes through and deletes the unwanted intermediate feature classes. A condensed sketch of two of these steps appears below.
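Because the script itself only appears as screenshots, below is a condensed sketch of two of the pieces described above: the buffer-distance field calculation and the null-value fix with cursors. The paths and field names (LENGTH_ft, WIDTH_ft, MEDIAN, and so on) are placeholders, and the rule used here to pick a replacement value (the median of the valid rows) is my own assumption; the exercise had its own definition of the "correct" value.

# Hedged sketch of two exercise 6 steps; paths and field names are
# placeholders, not the ones from the actual assignment.
import arcpy

arcpy.env.overwriteOutput = True

landslidePoints = r"C:\Ex6\Exercise6.gdb\landslide_points"   # placeholder
bufferFC = r"C:\Ex6\Exercise6.gdb\landslide_buffers"         # placeholder

# Buffer distance = (length + width) / 2, converted from feet to meters.
arcpy.AddField_management(landslidePoints, "BufferDist", "DOUBLE")
arcpy.CalculateField_management(
    landslidePoints, "BufferDist",
    "((!LENGTH_ft! + !WIDTH_ft!) / 2.0) * 0.3048",   # placeholder field names
    "PYTHON_9.3")
arcpy.Buffer_analysis(landslidePoints, bufferFC, "BufferDist")

# After the zonal statistics table is joined back, some buffers have a null
# MEDIAN slope value because of DEM errors.  A search cursor gathers the
# valid values and an update cursor writes a replacement into the null rows.
with arcpy.da.SearchCursor(bufferFC, ["MEDIAN"], "MEDIAN IS NOT NULL") as search:
    validValues = sorted(row[0] for row in search)

# Assumption: fill nulls with the median of the valid rows.
fillValue = validValues[len(validValues) // 2]

with arcpy.da.UpdateCursor(bufferFC, ["MEDIAN"], "MEDIAN IS NULL") as update:
    for row in update:
        row[0] = fillValue
        update.updateRow(row)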

Results

The script itself had a couple of bugs the first time around. After going through and correcting some of the spelling mistakes, there were still errors that I couldn't find. Eventually, after going through the script a couple of times slowly, I noticed that I was missing a line of code that added and calculated the slidelengthfieldName field. Once that was corrected, the script ran correctly and produced the tables and feature classes that were desired. Below is a series of screenshots showing the final script.











Conclusion

I can confidently say that this was the most difficult script to run so far. I think the combination of new and difficult tools made it tough to follow what needed to be done at times. That being said, it gave good practice using new tools and working with a long script. The debugging process was more difficult than usual, but it was also more rewarding when I found the error that kept my script from running.












Wednesday, July 20, 2016

Exercise 5

Goals

The goals of exercise five were to use standard raster geoprocessing preparation tools, like Project and Clip, as well as basic raster analysis tools such as Hillshade and Slope. A FOR IN loop is used by creating a list of rasters from a provided geodatabase and looping over it.


Methods

To start off this exercise, it was necessary to set up the script in the usual manner. The first thing added to the script was a print statement to ensure that the script started running. The next thing that needed to be done was importing the system modules. os, time, datetime, arcpy, and env were all imported, similar to the last exercise, but this time shutil was imported as well. The overwrite output setting was turned on next, and the last thing done to set up the script was setting the workspace. The next step in writing the script was to set up smart variables; all of the variables created will be used later in the script. There were also three lists created empty to start, which would later hold the clipped rasters, the hillshade rasters, and the slope rasters.

The next step, and probably the biggest, was to create the FOR IN loop that would get each raster, reformat its name, and project it. Projecting the rasters was done using the arcpy.ProjectRaster_management tool. The loop also performs a variety of the basic geoprocessing preparation tools: it takes the projected raster, runs a clip and adds the result to the clipped raster list, then runs a hillshade and puts the output into the hillshade list, and does the same with the slope. So the FOR loop ran through every raster that was provided and produced clips, hillshades, and slopes for all of them. The last step in the script was merging all of the tiles: all of the clips were merged together, all of the hillshades were merged together, and all of the slopes were merged together. A hedged sketch of this loop appears below.
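Since the completed script is only shown in the screenshots, here is a hedged sketch of the FOR IN loop described above. The workspace path, the wildcard used to list the tiles, the output spatial reference, the clip boundary, and the naming suffixes are placeholders, not the values from the exercise; the project/clip/hillshade/slope loop and the final merges are the part that matches the write-up.

# Hedged sketch of the exercise 5 loop; workspace, wildcard, spatial
# reference, clip boundary, and output names are placeholders.
import arcpy
from arcpy.sa import Hillshade, Slope

arcpy.env.overwriteOutput = True
arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\Ex5\Exercise5.gdb"      # placeholder workspace

outSR = arcpy.SpatialReference(26910)              # placeholder: NAD83 / UTM 10N
clipBoundary = r"C:\Ex5\Exercise5.gdb\study_area"  # placeholder boundary

clippedList, hillshadeList, slopeList = [], [], []

# Loop over every raster tile, project it, clip it, then derive hillshade
# and slope, collecting each output in its own list.
for raster in arcpy.ListRasters("dem_*"):          # placeholder tile wildcard
    projected = raster + "_prj"
    arcpy.ProjectRaster_management(raster, projected, outSR, "BILINEAR")

    clipped = raster + "_clip"
    arcpy.Clip_management(projected, "#", clipped, clipBoundary,
                          "", "ClippingGeometry")
    clippedList.append(clipped)

    hs = Hillshade(clipped)
    hs.save(raster + "_hs")
    hillshadeList.append(raster + "_hs")

    slp = Slope(clipped, "DEGREE")
    slp.save(raster + "_slope")
    slopeList.append(raster + "_slope")

# Merge each set of tiles into a single raster.
arcpy.MosaicToNewRaster_management(clippedList, arcpy.env.workspace,
                                   "dem_merge", number_of_bands=1)
arcpy.MosaicToNewRaster_management(hillshadeList, arcpy.env.workspace,
                                   "hillshade_merge", number_of_bands=1)
arcpy.MosaicToNewRaster_management(slopeList, arcpy.env.workspace,
                                   "slope_merge", number_of_bands=1)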


Results

The results of exercise five are relatively simple. In our new exercise five geodatabase, there are now a lot of new rasters and combinations of those rasters. This exercise really showed the importance of using loops when you can; they can save you many lines of the same code. The completed script is shown in the images below.




Conclusion

As I stated earlier, this exercise showed us how important loops are in Python and coding in general. If we hadn't used a loop, the script would have been over ten times longer. Loops keep things more organized and easier to look over. Overall, this was a good exercise to show how helpful loops are for Python coding.

Monday, July 18, 2016

Exercise 4

Goals

The goals of this exercise were to become more familiar with adding a field, calculating a field, and applying an SQL statement to tables in Python. This exercise uses the same data as exercise three and has us take some of the exercise three outputs, add fields to them, and make calculations in the new fields.

Methods

The first step in exercise four was to set up the script and import the modules; this is almost always the first step in writing Python scripts. This included importing arcpy, os, time, and datetime, and importing env from arcpy. The next thing added to the script was the statement that allows files of the same name to overwrite older ones. Once the environments were set and everything was imported, variables were created. These variables used os.path.join to point directly at data in the exercise three geodatabase. Once variables were created for the dissolved and intersected feature classes, a field was added to the dissolved feature class using the arcpy.AddField_management tool, which lets you specify the feature class you want to add a field to, the name of the new field, and its data type. The next step was to calculate the newly created field using the arcpy.CalculateField_management tool. In this tool, you enter the feature class and the field name that you need to calculate, along with an expression that does the calculation. The next step in the exercise was to use the Select tool with an SQL statement to select polygons with an area greater than 2 square kilometers, which tested the area field that had just been created; the arcpy.Select_analysis tool was used for this. We then repeated the add-and-calculate steps on the selection that we just made: another field was added that defined the compactness of the polygon, calculated as the area divided by the length times the length. Once this step was complete, the script was done. A short sketch of these steps is included below.
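Because only the final screenshot of the script is shown, here is a short sketch of these steps. The geodatabase path, the feature class names, and the assumption that Shape_Area and Shape_Length are in meters are placeholders for illustration.

# Hedged sketch of the exercise 4 steps; paths, names, and units are placeholders.
import os
import arcpy

arcpy.env.overwriteOutput = True

ex3gdb = r"C:\Ex3\Exercise3.gdb"                        # placeholder path
dissolveFC = os.path.join(ex3gdb, "parcels_dissolve")   # placeholder name

# Add an area field and populate it in square kilometers (assumes the
# coordinate system units, and therefore Shape_Area, are in meters).
arcpy.AddField_management(dissolveFC, "Area_km2", "DOUBLE")
arcpy.CalculateField_management(dissolveFC, "Area_km2",
                                "!Shape_Area! / 1000000.0", "PYTHON_9.3")

# Select only the polygons larger than 2 square kilometers.
bigPolys = os.path.join(ex3gdb, "parcels_gt2km")
arcpy.Select_analysis(dissolveFC, bigPolys, "Area_km2 > 2")

# Add and calculate a compactness field: area / (length * length).
arcpy.AddField_management(bigPolys, "Compactness", "DOUBLE")
arcpy.CalculateField_management(
    bigPolys, "Compactness",
    "!Shape_Area! / (!Shape_Length! * !Shape_Length!)", "PYTHON_9.3")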

Results

The results of this script came out correct and both of the newly created fields had the correct values in them. The first time the script was run there were errors, which turned out to be spelling mistakes. Once the spelling errors were taken care of, the script ran correctly. Below is the final script for exercise four.



Conclusion

This exercise didn't take very long to complete, but it did give more practice with some of the things that we can do with feature classes. The further we get into the exercises, the more I am starting to see the freedom that Python scripts can bring. Adding and calculating fields will be a very important thing to understand when writing scripts.


Friday, July 15, 2016

Exercise 3

Goals

The goals of this exercise were to export a geoprocessing model as a script, add several smart variables, and modify the exported script. The model that was exported into a script was the same model that was created in exercise one.

Methods

This exercise took the model that was created in exercise one and exported it to help create a Python script. The first thing that had to be done was to export the model. Once that was exported, PyScripter was opened and the script was started. The first step was to add a comment block with the title and purpose of the script, along with the author's name and the date. The next step was to import arcpy and the other system modules, and to set the environment settings; os, time, and datetime were imported alongside arcpy. Next, the environment setting to overwrite existing files was added. The next step was to create variables pointing to our geodatabase from exercise one; from there we used os.path.join to join the existing files in the exercise one geodatabase to variable names. Next we set up the path to the newly created exercise three geodatabase, and once the path was created, we set up variables for the clipped, selected, buffered, dissolved, intersected, and final selected outputs.

After the base of the script was completed, the exported Python script from step one was brought into PyScripter. We left out the local variables that were created in ModelBuilder because those variables had already been created earlier in the script. The code taken from the model executes the tools, so all that needed to be done was to replace the input and output variables in the tool calls. The last step was to debug and run the code. A hedged example of this pattern appears below.
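The screenshots carry the full script, so here is just a hedged example of the pattern described above: inputs wired up with os.path.join and exported tool lines re-pointed at those variables. The geodatabase paths, feature class names, and the buffer distance are placeholders, and only two of the model's tools are shown.

# Hedged example of re-wiring exported model code; paths, names, and the
# buffer distance are placeholders.
import os
import arcpy

arcpy.env.overwriteOutput = True

ex1gdb = r"C:\Ex1\Exercise1.gdb"    # inputs from exercise one (placeholder)
ex3gdb = r"C:\Ex3\Exercise3.gdb"    # outputs for exercise three (placeholder)

# Inputs pulled from the exercise one geodatabase with os.path.join.
studyArea = os.path.join(ex1gdb, "study_area")
snowfall = os.path.join(ex1gdb, "snowfall")

# Output names built the same way against the exercise three geodatabase.
snowClip = os.path.join(ex3gdb, "snowfall_clip")
snowBuffer = os.path.join(ex3gdb, "snowfall_buffer")

# The exported tool calls keep their form; only the hard-coded model paths
# are replaced with the variables defined above.
arcpy.Clip_analysis(snowfall, studyArea, snowClip)
arcpy.Buffer_analysis(snowClip, snowBuffer, "1 Kilometers")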


Results

The following images show the script that was created in this exercise. The script was debugged and executed correctly, and the desired output was achieved. The errors that came up are discussed in the conclusion section.







Conclusion

Exercise three was the first Python script that we created in this course. It was nice that the output was the same as in the first exercise, so we could directly compare what it was like to build a model versus a script to execute the same geoprocessing tools. The first couple of times the script was run, it did come up with errors; this was because the path to the exercise 3 geodatabase was incorrect. Once that was fixed, there were just a couple of spelling issues that needed correcting. As soon as the spelling was correct, the script ran and gave the desired output.









Tuesday, July 12, 2016

Exercise 2

Goals

The goal of this exercise was to dive deeper into ModelBuilder and get more experience with its many uses, in particular the model iterator and inline variable substitution. In this exercise, the given scenario is that we have obtained line feature classes for existing ski runs at resorts in Colorado. We want to understand the topographical characteristics of the runs, and we want to execute tools on the feature classes multiple times.


Methods

To start off the exercise, data had to be acquired from the class folder. As stated earlier, the data were multiple feature classes showing the ski runs of multiple ski resorts. Once the data were gathered, the model could be built. The first objective in the model was to add the Iterate Feature Classes tool to the model and connect the input workspace. Once the Iterate Feature Classes tool was in the model and the parameters were set, the Buffer tool was attached to the iterator, so that every feature class sent through the iterator would be buffered. Once the Buffer tool was in place, the Zonal Statistics tool was added; it was used to gather statistics from the rasters provided in the geodatabase for each of the now-buffered ski run feature classes. The next task was to use the Join Field tool to join the zonal statistics output tables back to the buffered feature classes. The final task of the exercise was to use the Select tool and an SQL statement to create four new feature classes that classify different sections of the ski runs as beginner, intermediate, advanced, or expert. A scripted equivalent of the iterator chain is sketched below.
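The exercise itself was done entirely in ModelBuilder, but as a point of comparison here is a rough arcpy sketch of the same iterate, buffer, statistics, and join chain. The workspace, the slope raster, the RunID field, the buffer distance, and the output suffixes are all placeholders I made up; the final SQL classification step is left out because its thresholds were given with the exercise data.

# Hedged scripted equivalent of the ModelBuilder iterator chain; all names,
# distances, and the RunID field are placeholders.
import arcpy
from arcpy.sa import ZonalStatisticsAsTable

arcpy.env.overwriteOutput = True
arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\Ex2\SkiRuns.gdb"    # placeholder workspace

slopeRaster = r"C:\Ex2\SkiRuns.gdb\slope"      # placeholder raster

# Iterate over every ski-run feature class (like Iterate Feature Classes)
# and repeat buffer -> zonal statistics -> join field on each one.
for fc in arcpy.ListFeatureClasses():
    buffered = fc + "_buf"                     # inline-variable-style naming
    arcpy.Buffer_analysis(fc, buffered, "25 Meters")

    statsTable = fc + "_slopestats"
    # RunID is a placeholder unique-ID field assumed to exist on each run.
    ZonalStatisticsAsTable(buffered, "RunID", slopeRaster,
                           statsTable, "DATA", "MEAN")

    arcpy.JoinField_management(buffered, "RunID",
                               statsTable, "RunID", ["MEAN"])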



Results

The results of the model, shown below as Figure 1, have been placed into the map labeled Figure 2. The map shows each of the three ski resorts and their ski runs. The sections of the runs that were categorized by the final Select tool were classified as follows:

Blue - Beginner

Green - Intermediate

Orange - Advanced

Red - Expert

Figure 1 shows the model that was built to execute the desired tools.

Figure 2 shows the maps that were produced from the above model.



Conclusion

Exercise two did a good job of building off of exercise one. We dove into some more advanced ModelBuilder tools and got some experience with the iterator and inline variable substitution. Both exercises one and two have done a good job of getting us more comfortable with the different tools that can be used in ModelBuilder. As for the data that was processed in this exercise, I thought it was fascinating how the maps turned out and showed the different levels of ski runs.



Monday, July 11, 2016

Exercise 1

Goals

The goal of this exercise was to refresh our ModelBuilder skills and the use of geoprocessing tools. The specific scenario of this exercise was that research needed to be done to find the ideal location for a brand new ski resort in the Rocky Mountain region. Things like annual snowfall and average temperature were important to ensure that snow would last throughout the year. Another important factor was that the candidate areas were on National Forest land, to make sure they were not built up.


Methods

The majority of this exercise was built within ModelBuilder. Before the model could be built, the necessary data needed to be downloaded; it was provided through a class geodatabase containing our study area, the average temperatures, average snowfall, and the location of the National Forest land. With this data, the model could be started. The first step was to take each of the three pieces of data stated above and clip it to our study area, which got rid of any data outside the Rocky Mountain region. Once each piece of data was clipped, we needed to take the airport feature class that was provided and project it to the same coordinate system as the other feature classes. We then created selections in each of the clipped classes in order to keep the areas that meet the criteria given. SQL statements were written to ensure that the ideal areas had snowfall greater than 240 inches, a mean temperature less than 32 degrees Fahrenheit, were within 40 miles of an airport with a tower, and were on National Forest land. A rough scripted equivalent of this workflow is sketched below.
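This exercise was also built in ModelBuilder rather than scripted, but a rough arcpy equivalent helps show how the pieces fit together. The geodatabase path, feature class names, and attribute field names (SNOWFALL_IN, MEAN_TEMP_F, HAS_TOWER) are placeholders; only the numeric criteria come straight from the exercise description above.

# Hedged scripted equivalent of the exercise 1 model; paths and field
# names are placeholders, the thresholds are the ones given in the exercise.
import os
import arcpy

arcpy.env.overwriteOutput = True

gdb = r"C:\Ex1\Exercise1.gdb"        # placeholder path
study = os.path.join(gdb, "study_area")

# Clip each input to the Rocky Mountain study area.
arcpy.Clip_analysis(os.path.join(gdb, "snowfall"), study, os.path.join(gdb, "snow_clip"))
arcpy.Clip_analysis(os.path.join(gdb, "temperature"), study, os.path.join(gdb, "temp_clip"))
arcpy.Clip_analysis(os.path.join(gdb, "nat_forest"), study, os.path.join(gdb, "forest_clip"))

# Project the airports to match the coordinate system of the other layers.
arcpy.Project_management(os.path.join(gdb, "airports"),
                         os.path.join(gdb, "airports_prj"),
                         arcpy.Describe(study).spatialReference)

# Apply the selection criteria given in the exercise (field names assumed).
arcpy.Select_analysis(os.path.join(gdb, "snow_clip"),
                      os.path.join(gdb, "snow_ok"), "SNOWFALL_IN > 240")
arcpy.Select_analysis(os.path.join(gdb, "temp_clip"),
                      os.path.join(gdb, "temp_ok"), "MEAN_TEMP_F < 32")
arcpy.Select_analysis(os.path.join(gdb, "airports_prj"),
                      os.path.join(gdb, "airports_tower"), "HAS_TOWER = 1")

# Candidate areas must be within 40 miles of a towered airport and overlap
# National Forest land.
arcpy.Buffer_analysis(os.path.join(gdb, "airports_tower"),
                      os.path.join(gdb, "airport_40mi"), "40 Miles",
                      dissolve_option="ALL")
arcpy.Intersect_analysis([os.path.join(gdb, "snow_ok"),
                          os.path.join(gdb, "temp_ok"),
                          os.path.join(gdb, "airport_40mi"),
                          os.path.join(gdb, "forest_clip")],
                         os.path.join(gdb, "ideal_sites"))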

Results

The results of the model that was built are shown in this section. To start off, the model built to run the necessary tools is shown below as Figure 1. Directly below Figure 1 is the final map, which shows the ideal area for a new ski resort in blue and the study area in transparent green. A hillshade was used as a basemap to show elevation in the Rockies.


Figure 1 shows the model that was built in order to run the necessary tools to find the ideal land to place a new ski resort.

Figure 2 is the final map that shows the ideal locations in blue and the study area in green.



Conclusion

This exercise helped get us back into the swing of things for the summer session. Using ModelBuilder and all of the geoprocessing tools is an extremely important skill for expanding our GIS knowledge. This wasn't necessarily the most difficult model to create, but it was a good one to start back up with. This exercise made me excited to become more comfortable doing these sorts of workflows in a Python script.