
To preface my question: I am very new to stack overflow, and relatively new to Python.

I am working on setting up a sensitivity analysis. I am working with 40 parameters that range from 0.1 to 1. My analysis requires simultaneously varying these parameters by ±0.1 roughly 500 times. These values will then be fed into an ArcGIS tool, so I need 500 sets of random values for 40 parameters. The parameter values will then be compared to the tool's output to see which parameters the model is most sensitive to. I've set up an Excel sheet that recalculates these random values each time it's opened, but the issue is that they need to be in .dbf format to be read by the ArcGIS tool.
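For what it's worth, the "simultaneously vary every parameter by ±0.1" step can be sketched in plain Python without Excel at all. This is only an illustration: the `perturb` helper and the flat 0.5 baseline are made up, not from my actual setup.

```python
import random

def perturb(params, step=0.1, lo=0.1, hi=1.0):
    """Shift each parameter up or down by one step, clamped to [lo, hi]."""
    shifted = [p + random.choice([-step, step]) for p in params]
    return [round(min(hi, max(lo, q)), 1) for q in shifted]

base = [0.5] * 40                            # hypothetical baseline parameter set
runs = [perturb(base) for _ in range(500)]   # one perturbed row per model run
```

Each of the 500 rows then only needs to be written out in a format ArcGIS accepts.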

I have set up a while loop (10 iterations to start, but it will need to be ~500) and tried two different methods, in hopes of automating the process of calling the .xls to generate random numbers and then exporting it to .dbf.

The first, arcpy.CopyRows_management, correctly exported to .dbf. The issue was that the output was exactly the same for each iteration, and instead of containing values of 0.1, 0.2, 0.3, etc., it contained values like 0.22, 0.37, 0.68. The values weren't rounded to tenths, even though that was specified in the formulas in the .xls.

I also tried arcpy.TableToTable_conversion, but that threw ERROR 999999: Error executing function.

I am open to all kinds of suggestions. Perhaps there is an easier way to randomly sample and export results to .dbf in Python. This does not need to be done using arcpy, but that is all I've really worked with. I really appreciate any help that is provided! Thanks for your time.

i = 0
while i < 10:
    # Set run specific variables
    lulc = "D:\\SARuns\\lulc_nosoils_rand.xls\\lulc_nosoils$"
    folder = "D:\\SARuns"
    print "Reading lulc"
    newlulc = "D:\\SARuns\\lulc_nosoils_rand" + str(i) + ".dbf"
    print "Reading newlulc"
    # Copy rows is copying it to a dbf, but the values inside
    # are the same for each run. And, none are correct.
    arcpy.CopyRows_management(lulc, newlulc)
    # Table to table should work. But isn't.
    # arcpy.TableToTable_conversion(lulc, folder, newlulc)
    print "Converting table"
    i += 1
asked Aug 9, 2013 at 19:19
  • This may be inadequate as a sensitivity analysis, because with so many variables it is possible that the effects of the random deviations practically cancel in the vast majority of cases. You might be better served by systematically varying each variable in turn across the largest reasonable range while fixing the other variables at typical or average values. You should then explore interactions (the effects of simultaneous changes in two or more variables) and the possibility of strongly nonlinear responses. Commented Aug 9, 2013 at 20:41
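The one-at-a-time design the commenter describes could be sketched like this. The `oat_sweep` helper and the small 3-parameter baseline are purely illustrative, not something from the post:

```python
def oat_sweep(baseline, levels):
    """One-at-a-time design: sweep each parameter across `levels`
    while holding every other parameter at its baseline value."""
    runs = []
    for i in range(len(baseline)):
        for v in levels:
            row = list(baseline)
            row[i] = v      # vary only parameter i
            runs.append(row)
    return runs

# 3 parameters x 4 levels = 12 runs (the real case would use 40 parameters)
design = oat_sweep([0.5, 0.5, 0.5], [0.1, 0.4, 0.7, 1.0])
```

With 40 parameters this grows linearly (40 × levels), unlike a full factorial design, which is why it is a common first pass before exploring interactions.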

1 Answer


There are Python modules out there for working with Excel, but it's probably easier to use the random module.

With a little research, you should be able to replicate your Excel formula in Python. You can write the results to a txt file and then use that as input to CopyRows().

If you edit your Excel formulas into your post, I can get you started.

For example, the following will create a dbf file of 100 rows containing random values between 0 and 1.

from random import random
import arcpy

def formula():
    return "{}\n".format(random())

with open("D:/random.txt", "w") as randval:
    for i in xrange(100):
        randval.write(formula())

arcpy.CopyRows_management("D:/random.txt", "D:/random.dbf")
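To scale that idea to your 40 parameters per run, the text file could carry one comma-separated row per run, with a header line supplying field names. This is only a sketch: the file name and the p1..p40 field names are made up, and the commented-out arcpy call is the same one used above.

```python
import random

n_params, n_rows = 40, 500
with open("random_params.txt", "w") as f:  # hypothetical file name
    # Header line: field names p1..p40 for the output table
    f.write(",".join("p%d" % j for j in range(1, n_params + 1)) + "\n")
    # One comma-separated row of rounded random values per model run
    for _ in range(n_rows):
        f.write(",".join(str(round(random.random(), 1))
                         for _ in range(n_params)) + "\n")

# arcpy.CopyRows_management("random_params.txt", "random_params.dbf")
```

To restrict values to tenths between 0.1 and 1 (as in the question), `random.choice([round(0.1 * k, 1) for k in range(1, 11)])` could replace `round(random.random(), 1)`.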
answered Aug 9, 2013 at 20:00
  • +1 Simple solution that eliminates the need to generate random values in excel--nice! Commented Aug 10, 2013 at 14:48
