Thursday, August 4, 2016

Exercise 9

Goals

The goal of exercise nine was to create a custom script tool that generates a Swiss hillshade. The tool is kept in a custom toolbox in a geodatabase and is accessible from within ArcMap. This is a much shorter Python script than the previous exercises. 




Methods

Exercise nine's Python script starts out like all of the previous scripts. The first step is to confirm that the script is running by using a print statement. The second step is to import the system modules; in this exercise we import os, shutil, time, datetime, and arcpy. Once the modules are imported, we can move on to the next objective. Here the GetParameterAsText function is used for each input, which allows the user to specify the name of the DEM directly in the tool, so the tool is portable and can be used in many different situations and projects. Next, variables are set up. Once the proper variables are in place, a try statement is added. Within the try statement, the process that generates the Swiss hillshade is built up using the Divide, Hillshade, Focal Statistics, and Plus functions. At the end of the try statement there needs to be an except statement, which reports an error message if anything in the try block fails. That is the end of the script for the Swiss hillshade tool. The last step in the exercise was to add the script to a toolbox and use the Add Script wizard to set it up for ArcMap. 
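The overall shape of the script can be sketched without arcpy. Below is a minimal pure-Python sketch of the try/except pattern described above; `run_processing_steps` is a hypothetical stand-in for the chain of geoprocessing calls, so only the error-handling skeleton is shown, not the actual tool.

```python
import traceback

def run_processing_steps(dem_path):
    # Placeholder for the Divide / Hillshade / Focal Statistics / Plus
    # chain that the real script runs with arcpy.
    if not dem_path:
        raise ValueError("No DEM supplied")
    return dem_path + "_swissHS"

def run_swiss_hillshade(dem_path):
    """Skeleton of the tool: do the work inside try, report inside except."""
    try:
        result = run_processing_steps(dem_path)
        print("Swiss hillshade complete: " + str(result))
        return result
    except Exception:
        # Report what went wrong instead of failing silently.
        print("Swiss hillshade failed:")
        print(traceback.format_exc())
        return None
```

In the real tool, the DEM path would come from arcpy.GetParameterAsText rather than a function argument.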


Results

This exercise was very short, but I think the results were very useful. Being able to write a short script, turn it into a tool in ArcMap, and then use it again and again is a very helpful skill to have. The results of this exercise are below in the form of the Python script for the Swiss hillshade tool. 










Conclusion

As I said earlier, this was a very short exercise but an extremely useful one. It made me think: if someone works for a company and needs to execute a GIS process many times every day, it would be extremely helpful to write a script once and create a tool for it. That way the script already has the process built in and you can save yourself a bunch of time. This exercise showed us a truly useful Python technique. 




Exercise 8

Goals

The goal of this exercise is to use a suite of loops to clip and project a directory of scanned USGS topographic sheets into a seamless raster. The topographic sheets are in a layout referred to as a tile index or quad index. These quads have a collar around the edge of the map that needs to be clipped away so that we end up with a seamless mosaic. 


Methods

This script was easily the longest and most complex script written during this class. That said, it starts off the same as just about all of the other scripts. First, a print statement shows that the script is working. Second, the system modules are imported; for this script, os, shutil, time, datetime, and arcpy were all needed. The next step, as in previous scripts, is a block of code that allows existing files to be overwritten. Then variables are created, including our workspace, geodatabase path, feature dataset path, and clip geodatabase path. After that come two empty lists that will be populated with the names of the topos as we iterate through loops later. Along with the empty lists, file locations and file names for some of the created datasets are set up, and a spatial reference object is created. Next a walk is set up that traverses the workspace and finds all files of a specified type. Inside the loop we want to track progress, so a counter variable is set to zero and incremented as the loop progresses to indicate how many rasters have been handled. The walk is applied to the directory and an iteration loop is set up. The first thing done in the iteration loop is to update the counter. Next the names of the topos are reformatted, and once that is done, the topos can be projected. We then take each projected topo and create a polygon of its footprint, which includes the collar. The tiles can now be added to the list of tiles and merged together. Next a loop clips the collar from the tiles one by one. 
There is an if/else statement inside the loop that lets a tile skip the clipping tool if it has already been clipped; in this case the first tile that went through the loop had already been clipped. Once all of the tiles have been clipped, the Mosaic tool combines the clipped tiles into one larger topo map. 
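The traverse-and-count idea can be sketched in plain Python with os.walk (the real script walks rasters with arcpy; the file names in the usage below are invented):

```python
import os

def find_files(workspace, extension):
    """Walk a directory tree, collect files of one type, and print a
    running count the way the exercise script tracks its progress."""
    found = []
    count = 0
    for dirpath, dirnames, filenames in os.walk(workspace):
        for name in filenames:
            if name.lower().endswith(extension):
                count += 1
                found.append(os.path.join(dirpath, name))
                print("Found raster number " + str(count) + ": " + name)
    return found
```

Calling `find_files(workspace, ".tif")` on a folder of scanned quads would return the full paths of every TIFF, ready for the projection and clipping loop.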



Results

The results of this exercise were the projected and clipped rasters and a mosaic of all of the rasters combined. Below are images showing first the script that was written in the exercise and, below that, the final mosaic topo map that was created. 







Conclusion

This exercise was difficult, but it definitely gave more practice with loops and cursors along with many other elements of Python scripting. The final map came out looking good aside from the discoloring, which is just a result of the way the USGS maps were scanned. All of the data in this exercise came from USGS topo maps! Overall this was a difficult but worthwhile exercise for this class. 




Wednesday, July 27, 2016

Exercise 7

Goals

Exercise seven is the last phase of generating data for landslide susceptibility in Oregon. In this last section, a risk model for landslides in Oregon is built, based on multiplying risk values and raster values to come up with an overall risk model. 

Methods

This script starts out the same as the previous scripts. The data was already provided from exercises five and six, so the first step is to ensure the script is working and to import all of the necessary modules. The modules were the same as in exercise six: os, shutil, time, datetime, and arcpy. As usual, outputs were set to overwrite existing files. After that, variables were created to represent both the paths and the feature classes that will be used during the script. The next step was to set up field names for each of the new fields that would be created. Then the script creates a fishnet that serves as the unit of analysis in combination with the roadway buffer. The first step to creating a fishnet is getting a processing boundary; to do this, the domain is taken from the slope data using the arcpy.Describe function. Once the boundary and fishnet are created, the next step is to buffer the roadways, because we are interested in areas near roads. The roadway buffer and the fishnet are then intersected, creating the layer that serves as the unit of analysis. The next step is to set up the reclass values for slope and land cover. The reclass values were given, so all that needed to be done was to set up a variable holding the list of reclass values and run the Reclassify tool. The reclassed rasters are then multiplied together to obtain the risk raster. Once the risk raster is finished, running a Zonal Statistics tool summarizes the median value for each unit of analysis. The last step was to join the zonal statistics table back to the unit-of-analysis layer. 
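The reclassify-then-multiply idea can be shown without arcpy. The remap ranges and risk values below are invented for illustration, not the values given in the exercise:

```python
def reclassify(value, remap):
    """Map a raw cell value onto a risk class using
    (low, high, new_value) ranges, like the Reclassify tool."""
    for low, high, new_value in remap:
        if low <= value <= high:
            return new_value
    return 0  # outside all ranges (NoData)

# Hypothetical remap table for slope in degrees.
slope_remap = [(0, 10, 1), (10, 25, 2), (25, 90, 3)]

def risk(slope_class, landcover_class):
    # The risk raster is the product of the two reclassed rasters.
    return slope_class * landcover_class
```

In the real script the multiplication happens raster-by-raster across every cell at once, but each cell's value is just this product.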



Results 

The results of the exercise seven script are represented by the risk model map. Figure one is the script that was created in this exercise. Figure two is the resulting map and the risk model that was created when the script was run. 



Figure One. The script created in exercise seven to develop a risk model.


Figure Two. The resulting map from the exercise seven script.




Conclusion

Exercise seven was the final step in the process of creating a risk model for landslides in Oregon. It was cool to see the resulting map and symbolize it correctly so that high-risk areas looked red and safer areas were green, and to see the combined results of the three scripts that we ran over the last three exercises. Overall, this group of exercises was split up nicely so that it wasn't too overwhelming but the result was still rewarding. 








Monday, July 25, 2016

Exercise 6

Goals

The goals of exercise six were to use a suite of raster and vector tools to identify common characteristics of landslides in Oregon. This exercise is a continuation of exercise five. This time we will use search and update cursors to extract values from tables to help our study. 


Methods

The first step in exercise 6 was to gather the data provided in the exercise 6 zip file. After transferring that data to the working exercise 6 folder, the Python script can be started. As with all of the scripts that have been written, the first step is a print statement to ensure that the script is running. Next, the standard modules are imported: os, shutil, time, datetime, and arcpy. The next step in the standard setup is to make sure that outputs can overwrite existing files. After those steps, variables are created for the paths to our exercise 6 geodatabase, the feature dataset, and the working folder. Once the paths have variables, for convenience we set up naming conventions for all of our outputs, giving each output name a suffix that shows what it is or which tool produced it. Next, a list is created of the feature classes that will be made in the exercise, and field names are set up for the new fields we will create. The Select tool is then run to select all of the landslides that have a width and length greater than zero and are in the debris flow, earth flow, or flow movement classes, in order to narrow down the study sample. The next tool is Extract Multi Values to Points, which adds the values of land cover, slope, and precipitation to the landslide point feature class. A buffer needs to be run next, but in order to know how large a buffer is wanted, the length and width of each point are needed. To create this new measurement, a new field is added and the Calculate Field tool is run: the length is added to the width, divided by two, and multiplied by a conversion factor to convert feet to meters. Once the buffer distance field is created, the buffer can be executed. 
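The buffer-distance field is simple arithmetic: average the slide's length and width, then convert feet to meters (1 ft = 0.3048 m). A sketch of that calculation:

```python
FEET_TO_METERS = 0.3048

def buffer_distance_m(length_ft, width_ft):
    """Average of slide length and width in feet, converted to meters,
    as used for the buffer distance field."""
    return (length_ft + width_ft) / 2.0 * FEET_TO_METERS
```

In the script this expression would go into the Calculate Field tool rather than a Python function, but the math is the same.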
The next step is to calculate statistics of the slope values for each of the buffered landslides using the Zonal Statistics as Table tool. The table results are then joined back to the buffered points feature class. Once this step is completed, there will be null values for some of the slides, caused by errors in the DEM, so the next step is to replace those null values with the correct values. This is where the search cursor and update cursor come into play: the search cursor finds the null values and the update cursor replaces them with the correct values, inside a loop that continues to search and update until they are all correct. The next step is to create summary statistics for precipitation, slope, and slide area. The final step is to create a table, using the Tabulate Area tool, that calculates how much of each buffer falls within the different landcover classes. The last thing in the script is a loop that goes through and deletes unwanted feature classes. 
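The cursor pass can be sketched with plain lists standing in for cursor rows; the fallback value here is a hypothetical stand-in for however the exercise derived the correct replacement:

```python
def fill_nulls(rows, field_index, fallback):
    """Mimic a SearchCursor finding None values and an
    UpdateCursor writing replacements back to the table."""
    fixed = 0
    for row in rows:
        if row[field_index] is None:
            row[field_index] = fallback
            fixed += 1
    return fixed
```

In arcpy the same loop would read rows with a search cursor and write them back with an update cursor instead of mutating a list in place.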

Results

The script itself had a couple of bugs the first time around. After correcting some spelling mistakes there were still errors that I couldn't find. Eventually, after going through the script a couple of times slowly, I noticed that I was missing a line of code that added and calculated the slidelengthfieldName field. Once that was corrected, the script ran correctly and produced the tables and feature classes that were desired. Below is a series of screenshots showing the final script. 











Conclusion

I can confidently say that this was the most difficult script to write so far. I think the combination of new and difficult tools made it tough to follow what needed to be done at times. That being said, it gave good practice using new tools and working with a long script. The debugging process was more difficult than usual, but it was all the more rewarding when I found the error that kept my script from running.












Wednesday, July 20, 2016

Exercise 5

Goals

The goal of exercise five was to use standard raster geoprocessing preparation tools, like Project and Clip, as well as basic raster analysis tools such as Hillshade and Slope. A FOR IN loop is used, built from a list of rasters taken from a provided geodatabase. 


Methods

To start off this exercise, it was necessary to set up the script in the usual manner. The first thing added was a print statement to confirm that the script started running. Next came importing the system modules: os, time, datetime, arcpy, and env were all imported as in the last exercise, but this time shutil was imported as well. The overwrite output setting was turned on next, and the last setup step was setting the workspace. The next step was to set up smart variables, all of which are used later in the script. Three empty lists were also created; these would later hold the clipped rasters, the hillshade rasters, and the slope rasters. The next, and probably the biggest, step was to create the FOR IN loop that gets each raster, reformats its name, and projects it using the arcpy.ProjectRaster_management tool. The loop also performs a variety of the basic geoprocessing preparation tools: it takes the projected raster, runs a clip and adds the result to the clipped raster list, then runs a hillshade and puts the output into the hillshade list, and the same is done with the slope. So the FOR loop ran through every raster that was provided and spit out clips, hillshades, and slopes for all of them. The last step in the script was merging all of the tiles: all of the clips were merged together, all of the hillshades were merged together, and all of the slopes were merged together. 
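Stripped of the geoprocessing calls, the loop's bookkeeping looks something like this. The raster names and "_prj"/"_clip"/"_hs"/"_slope" suffixes are invented; the comments mark where the real script calls the arcpy tools:

```python
clip_list = []
hillshade_list = []
slope_list = []

def process_rasters(raster_names):
    """One pass of the FOR IN loop: for each raster, derive the
    output names and collect them in the three lists."""
    for name in raster_names:
        projected = name + "_prj"                  # arcpy.ProjectRaster_management
        clip_list.append(projected + "_clip")      # Clip, appended to its list
        hillshade_list.append(projected + "_hs")   # HillShade, appended to its list
        slope_list.append(projected + "_slope")    # Slope, appended to its list
    return clip_list, hillshade_list, slope_list
```

After the loop, each of the three lists would be handed to the merge step in one call apiece, which is exactly why building the lists inside the loop pays off.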


Results

The results of exercise five are relatively simple. In our new exercise five geodatabase, there are now a lot of new rasters and combinations of these rasters. This exercise really showed the importance of using loops when you can; they can save you many lines of the same code. The completed script is shown in the images below. 




Conclusion

As I stated earlier, this exercise showed us how important the use of loops is in Python and in coding in general. If we hadn't used a loop, the script would have been over ten times longer. Loops keep things more organized and easier to look over. Overall, this was a good exercise to show how helpful loops are in Python coding. 

Monday, July 18, 2016

Exercise 4

Goals

The goals of this exercise were to become more familiar with adding a field, calculating a field and applying an SQL statement to tables in Python. This exercise uses the same data from exercise three and has us take some of the exercise three outputs and add fields to them along with making calculations in the new fields.

Methods

The first step in exercise four was to set up the script and import the modules; this is almost always the first step in writing Python scripts. It included importing arcpy, os, time, and datetime, and importing env from arcpy. The next thing added to the script was the statement that allows files of the same name to overwrite older ones. Once the environments were set and everything was imported, variables were created. These variables were built using os.path.join to point directly into the exercise three geodatabase. Once variables were created for the dissolved and intersected feature classes, a field was added to the dissolved feature class using the arcpy.AddField_management tool, which lets you input the feature class you want to add a field to, name the new field, and define its data type. The next step was to calculate the newly created field using the arcpy.CalculateField_management tool, in which you enter the feature class and the field name to calculate, along with an expression that performs the calculation. The next step in the exercise was to use the Select tool to select polygons with an area greater than 2 square kilometers, which tested the area field that had just been created; the arcpy.Select_analysis tool was used for this. We repeated all of these steps on the selection we had just made: another field was added that defined the compactness of the polygon, calculated as the area divided by the length times the length. Once this step was complete, the script was done. 
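The compactness calculation is a one-liner worth spelling out; a sketch of the math (the function name is illustrative, not from the script):

```python
def compactness(area, length):
    """Compactness as described in the exercise: area divided by
    perimeter length squared. By the isoperimetric inequality a
    circle maximizes this ratio at 1/(4*pi); long thin polygons
    score much lower."""
    return area / (length * length)
```

In the script this expression would go into the Calculate Field tool against the polygon's area and perimeter fields rather than being a standalone function.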

Results

The results of this script came out correct, and both of the newly created fields had the correct values in them. The first time the script was run there were errors, which turned out to be due to spelling mistakes. Once the spelling errors were taken care of, the script ran correctly. Below is the final script for exercise four. 



Conclusion

This exercise didn't take very long to complete, but it did give more practice with some of the things we can do with feature classes. The further we get into the exercises, the more I am starting to see the freedom that Python scripts can bring. Adding and calculating fields will be very important to understand when writing scripts. 


Friday, July 15, 2016

Exercise 3

Goals

The goals of this exercise were to export a geoprocessing model as a script, add several smart variables, and modify the exported script. The model that was exported was the same model that was created in exercise one. 

Methods

This exercise took the model that was created in exercise one and exported it to help create a Python script. The first thing that had to be done was to export the model. Once that was done, PyScripter was opened and the script was started. The first step was to add comments with the title and purpose of the script along with the author's name and the date. The next step was to import the system modules and the environment settings; the modules used were os, time, and datetime, plus arcpy itself. Next, the environment setting to overwrite existing files was added. The next step was to create variables pointing into our geodatabase from exercise one; os.path.join was used to join the existing files in the exercise one geodatabase to variable names. Next we set up the path to the newly created exercise three geodatabase and, once the path was created, set up variables for the clipped, selected, buffered, dissolved, intersected, and final selected outputs. 
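Building paths with os.path.join is what keeps the script portable; a sketch with made-up folder and feature class names (the exercise used its own workspace layout):

```python
import os

# Hypothetical workspace layout for illustration.
workspace = os.path.join("C:" + os.sep, "gis", "exercise3")
ex1_gdb = os.path.join(workspace, "exercise1.gdb")

# Feature class variables built the same way the script builds its inputs.
parcels_fc = os.path.join(ex1_gdb, "parcels")
dissolved_fc = os.path.join(ex1_gdb, "parcels_dissolve")
```

Because the separator is supplied by os.path.join, changing the workspace variable is the only edit needed to move the whole script to another machine.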

After the base of the script was completed, the Python script exported from step one was brought into PyScripter. We left out the local variables that were created in ModelBuilder because the variables had already been created earlier. The code taken from the model executes the tools, so all that needed to be done was to replace the input and output variables in the tool calls. The last step was to debug and run the code. 


Results

The following images show the script that was created in this exercise. The script debugged and executed correctly, and the desired output was achieved. Errors are discussed in the conclusion section.







Conclusion

Exercise three was the first Python script that we created in this course. It was nice that the output was the same as in the first exercise, so we could directly compare what it was like to create a model vs. a script to execute the same geoprocessing tools. The first couple of times the script was run, it did come up with errors because the path to the exercise three geodatabase was incorrect. Once that was fixed, there were just a couple of spelling issues that needed correcting. As soon as the spelling was correct, the script ran and gave the desired output.