I am having some difficulty conceptualizing how the code should work for a custom ArcPy script tool. Currently our workflow involves QA/QC, taking a CSV file and joining it to a feature class, and then calculating each field individually based on the values in the CSV. This takes a tremendous amount of time, so I am trying to automate it with a script. I want to create a tool that takes one input (the CSV file), reads it, joins it to the feature class, and populates the fields automatically. I would also like errors to be thrown if the tool is unable to join, calculate, etc. I am not sure how this should flow.
import arcpy
import csv
def bringincsv(csv_path):
    with open(csv_path, "rb") as csvfile:
        reader = csv.reader(csvfile, delimiter=",")
        next(reader)  # skip the header row
        for row in reader:
            facilityid = str(row[0])
            diameter = float(row[1])
            hyperlink_cctv = str(row[2])
            hyperlink_rpt = str(row[3])
            try:
                arcpy.env.workspace = 'CURRENT'
                inspection = arcpy.GetParameterAsText(0)
                arcpy.MakeTableView_management(in_table=inspection, out_view='Inspection')
                arcpy.AddJoin_management('Memphis.GIS.ssGravityMain', 'FACILITYID',
                                         'Inspection', facilityid, 'KEEP_COMMON')
                arcpy.CalculateField_management('Memphis.GIS.ssGravityMain', diameter)        # the csv file's diameter
                arcpy.CalculateField_management('Memphis.GIS.ssGravityMain', hyperlink_cctv)  # the csv file's values
                arcpy.CalculateField_management('Memphis.GIS.ssGravityMain', hyperlink_rpt)   # the csv file's values
            except arcpy.ExecuteError:
                arcpy.AddError(arcpy.GetMessages(2))
- The structure of the csvs is always the same, same columns etc.? – Bera, May 8, 2018 at 13:11
- Yes, the .csv will always be the same. – Wazzy24, May 8, 2018 at 13:24
1 Answer
You can load the csv into a dictionary instead of joining it, and use a da.UpdateCursor instead of three field calculations. I am assuming that the first "column" in the csv is the field to match on.
import arcpy, csv

fc = r"C:\Test\Regions.shp"
idfield = 'IDFIELDNAME'
fields_to_update = ['F1', 'F2', 'F3']  # Add/remove fields here
csvfile = r"C:\Test\MoreData.csv"

with open(csvfile, mode='r') as infile:
    reader = csv.reader(infile, delimiter=';')  # Change delimiter to match your csv
    reader.next()  # Skip header
    d = {r[0]: r[1:] for r in reader}

fields_to_update.append(idfield)

with arcpy.da.UpdateCursor(fc, fields_to_update) as cursor:
    for row in cursor:
        if row[-1] in d:
            row[0], row[1], row[2] = d[row[-1]]  # Add/remove to match field count
            cursor.updateRow(row)
        else:
            print '{0} not found in csv, no update'.format(row[-1])
You can place all this in a function with the feature class and csv as inputs if you want to; a sketch of that wrapping is below.
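Since the question asks for a script tool that also reports errors, here is a minimal sketch of that wrapping, assuming a two-parameter tool (target feature class first, then the csv) and hypothetical field names (FACILITYID, DIAMETER, HYPERLINK_CCTV, HYPERLINK_RPT, borrowed from the question's csv columns); adjust these to the real schema. Unmatched ids are reported with arcpy.AddWarning and failures with arcpy.AddError:

import arcpy
import csv

def update_from_csv(fc, csvfile):
    # Hypothetical field names - change these to match the real schema
    idfield = 'FACILITYID'
    fields_to_update = ['DIAMETER', 'HYPERLINK_CCTV', 'HYPERLINK_RPT']

    # Build a lookup from the csv: first column -> list of remaining columns
    with open(csvfile, 'r') as infile:
        reader = csv.reader(infile, delimiter=',')
        next(reader)  # skip the header row
        d = {r[0]: r[1:] for r in reader}

    # Write the csv values onto matching features; the id field is read last
    with arcpy.da.UpdateCursor(fc, fields_to_update + [idfield]) as cursor:
        for row in cursor:
            if row[-1] in d:
                row[0], row[1], row[2] = d[row[-1]]  # must match the field count
                cursor.updateRow(row)
            else:
                arcpy.AddWarning('{0} not found in csv, no update'.format(row[-1]))

if __name__ == '__main__':
    # Assumes a two-parameter script tool: 0 = target feature class, 1 = csv file
    try:
        update_from_csv(arcpy.GetParameterAsText(0), arcpy.GetParameterAsText(1))
    except Exception as e:
        arcpy.AddError(str(e))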
- Thanks for the quick response. I am not sure if this will work. I am joining this to a feature class in an SDE geodatabase, so I am not sure what the exact path to that feature class will be. I wish it were a regular feature class on my machine, but I am in a distributed cloud environment. The .csv file would ideally be browsed to through the GUI of the script tool. Can you explain what the d = {r[0]:r[1:] for r in reader} line does? What are those indices? I like the idea of using a dictionary instead; that will make it more readable. The first field in the csv is indeed the field to join on. – Wazzy24, May 8, 2018 at 13:53
- To get the path, just copy and paste it from Catalog. That line reads each line in the csv and splits it by index into a dictionary key (r[0]) and a list of values (r[1:], all values except the first); see the small illustration after these comments. If this is not what you want I will remove my answer. – Bera, May 8, 2018 at 13:59
- Nice, thanks. I will give this a shot. Will replacing csvfile = "C:\Test\MoreData.csv" with csvfile = arcpy.GetParameterAsText(0) work for reading the csv? – Wazzy24, May 8, 2018 at 14:00
- Yes, as long as the user browses to a .csv (see the sketch after these comments). – artwork21, May 8, 2018 at 14:13
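To illustrate what the d = {r[0]:r[1:] for r in reader} line builds, here is a tiny standalone example; the rows are made up and simply stand in for what csv.reader would yield after the header is skipped:

# Made-up rows standing in for the csv contents (first column is the join value)
rows = [['FAC001', '8', 'cctv1.mp4', 'rpt1.pdf'],
        ['FAC002', '10', 'cctv2.mp4', 'rpt2.pdf']]
d = {r[0]: r[1:] for r in rows}
# d is now:
# {'FAC001': ['8', 'cctv1.mp4', 'rpt1.pdf'],
#  'FAC002': ['10', 'cctv2.mp4', 'rpt2.pdf']}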
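And for the last two comments, a hedged sketch of swapping the hard-coded path for the script tool parameter; the extension check is an addition of mine, not part of the answer, and just makes the tool fail early if the user did not browse to a .csv:

import arcpy

csvfile = arcpy.GetParameterAsText(0)  # the csv browsed to in the tool dialog
if not csvfile.lower().endswith('.csv'):
    arcpy.AddError('{0} does not appear to be a .csv file'.format(csvfile))
    raise SystemExit
# ...otherwise continue with the dictionary / UpdateCursor code from the answer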