
Below is part of a script I wrote a few months back when I was still learning Python. It's part of a larger script and it works fine, but it is really clunky and not 'pythonic'. It is pretty straightforward: I use an old gp SearchCursor to loop through a GDB table, extract all the columns I want into variables, use .format() to order them properly, and write to the CSV. I cannot just use the arcpy Table To Excel tool because I am only extracting certain columns, in a particular order, with different header names than the ones in the table.

def NullTest(item):
    ...  # formatting function

sorttable = arcpy.TableSelect_analysis(sortjoin, "sorttable")
z = 0
for x in gp.SearchCursor(sorttable):
    label2 = ''.join(str(NullTest(x.onflysort_OFlabel)))
    facidcsv = x.onflysort_FacilityID
    County = NullTest(x.SiteChemByMunSort_CountySEARCH)
    Municipality = NullTest(x.SiteChemByMunSort_MunSEARCH)
    FacilityName = NullTest(x.SiteChemByMunSort_Facility_Name)
    Address = NullTest(x.SiteChemByMunSort_Physical_Address)
    City = NullTest(x.SiteChemByMunSort_MunFacility)
    Contact = ''.join(str(NullTest(x.SiteChemByMunSort_Company_Contact)))
    ContactPhone = NullTest(x.SiteChemByMunSort_Contact_Phone)
    x1 = NullTest(x.SiteChemByMunSort_XFINAL)
    y = NullTest(x.SiteChemByMunSort_YFINAL)
    DOT_Number = NullTest(x.SiteChemByMunSort_DOT_Number)
    Substance = NullTest(x.SiteChemByMunSort_Substance)
    threesub = NullTest(x.SiteChemByMunSort_EPCRA_302_Substance)
    threesubabove = NullTest(x.SiteChemByMunSort_EPCRA_302_Above_Threshold)
    Inv = NullTest(x.SiteChemByMunSort_Inventory)
    InvD = NullTest(x.SiteChemByMunSort_Inventory_Description)
    reldur = NullTest(x.SiteChemByMunSort_Release_Duration__min_)
    relrate = NullTest(x.SiteChemByMunSort_Release_Rate_lb_min_)
    totalrel = NullTest(x.SiteChemByMunSort_Total_Release)
    modelrun = NullTest(x.SiteChemByMunSort_Model_Run)
    ZoneType = NullTest(x.SiteChemByMunSort_Label)
    BufferDistance = NullTest(x.SiteChemByMunSort_Buffer_Distance)
    Schools = NullTest(x.SiteChemByMunSort_Schools)
    Childcares = NullTest(x.SiteChemByMunSort_ChildCare)
    Hospitals = NullTest(x.SiteChemByMunSort_Hospitals)
    NursingHomes = NullTest(x.SiteChemByMunSort_NursingHomes)
    CI = NullTest(x.SiteChemByMunSort_CriticalInfrastructure)
    streamsw = NullTest(x.SiteChemByMunSort_StreamswithIntakes)
    streams = NullTest(x.SiteChemByMunSort_Streams)
    pop = NullTest(x.SiteChemByMunSort_ResidentialPopulation)
    header = "Id, FACID, County, Municipality, FacilityName, Address, City, Contact, ContactPhone, Xcoord, Ycoord, DOT_Number, Substance, 302 Substance?, 302 Above Threshhold?, Inventory, InventoryDescription, Release_Duration_min, Release_rate_lb_min,Total_Release,Model_Run, ZoneType, BufferDistance(Feet), Schools, Childcares, Hospitals, NursingHomes, Critical Infrastructure, StreamsWithIntakes(miles), Streams(miles), ResidentialPopulation, \n"
    row = "{},'{}',{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{}\n".format(label2,facidcsv,County,Municipality,FacilityName,Address,City,Contact,ContactPhone,x1,y,DOT_Number,Substance,threesub,threesubabove,Inv,InvD,reldur,relrate,totalrel,modelrun,ZoneType,BufferDistance,Schools,Childcares,Hospitals,NursingHomes,CI,streamsw,streams,pop)
    with open('C:\\Users\\rzagha\\Desktop\\RTK\\CRTKMunicipalityMaps\\csvs\\'+table['County']+' '+table['MunName']+'.csv', 'a') as wildcsv:
        if z == 0:
            wildcsv.write(header)
            wildcsv.write(row)
        else:
            wildcsv.write(row)
    z += 1
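(The NullTest body is omitted above. For context, a minimal stand-in for such a null-coalescing formatter might look like the hypothetical below; the real function in the script may differ.)

```python
def null_test(item):
    """Hypothetical stand-in for the elided NullTest: turn GDB nulls
    (returned as None) into an empty string and strip stray whitespace,
    so the formatted CSV row stays clean."""
    return '' if item is None else str(item).strip()
```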

I know I can use the arcpy.da SearchCursor. How can I make this code less ridiculously long and clunky? It does not seem I can use the csv module directly on a GDB table; are there any tricks I can use with the arcpy.da module?

Modified script: I put my search cursor fields and new column headers in their own tuples, wrapped the search cursor in a generator expression, and used csv's writerows method to write the rows to the CSV.

a = ('onflysort_OFlabel','onflysort_FacilityID','SiteChemByMunSort_CountySEARCH','SiteChemByMunSort_MunSEARCH','SiteChemByMunSort_Facility_Name')
b = ('SiteChemByMunSort_Physical_Address','SiteChemByMunSort_MunFacility','SiteChemByMunSort_Company_Contact','SiteChemByMunSort_Contact_Phone')
c = ('SiteChemByMunSort_XFINAL','SiteChemByMunSort_YFINAL','SiteChemByMunSort_DOT_Number','SiteChemByMunSort_Substance')
d = ('SiteChemByMunSort_EPCRA_302_Substance','SiteChemByMunSort_EPCRA_302_Above_Threshold','SiteChemByMunSort_Inventory')
e = ('SiteChemByMunSort_Inventory_Description','SiteChemByMunSort_Release_Duration__min_','SiteChemByMunSort_Release_Rate_lb_min_')
f = ('SiteChemByMunSort_Total_Release','SiteChemByMunSort_Model_Run','SiteChemByMunSort_Label','SiteChemByMunSort_Buffer_Distance')
g = ('SiteChemByMunSort_Schools','SiteChemByMunSort_ChildCare','SiteChemByMunSort_Hospitals','SiteChemByMunSort_NursingHomes')
h = ('SiteChemByMunSort_CriticalInfrastructure','SiteChemByMunSort_StreamswithIntakes','SiteChemByMunSort_Streams','SiteChemByMunSort_ResidentialPopulation')
csvtuple = a+b+c+d+e+f+g+h
header = ("Id, FACID, County, Municipality, FacilityName, Address, City, Contact, ContactPhone, Xcoord, Ycoord, DOT_Number, Substance, 302 Substance?,")
header2 = ("302 Above Threshhold?, Inventory, InventoryDescription, Release_Duration_min, Release_rate_lb_min,Total_Release,Model_Run, ZoneType, BufferDistance(Feet),")
header3 = ("Schools, Childcares, Hospitals, NursingHomes, Critical Infrastructure, StreamsWithIntakes(miles), Streams(miles), ResidentialPopulation,\n")
csvheader = ''.join(header+header2+header3)
with open('C:\\Users\\rzagha\\Desktop\\RTK\\CRTKMunicipalityMaps\\csvs\\'+table['County']+' '+table['MunName']+'.csv', 'a') as wildcsv:
    writer = csv.writer(wildcsv, lineterminator='\n')
    wildcsv.write(csvheader)
    csvarc = (row for row in arcpy.da.SearchCursor(sorttable, csvtuple))
    writer.writerows(csvarc)
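One small note on the pattern above: writer.writerows accepts any iterable of row sequences, and a da.SearchCursor already yields one tuple per row, so the intermediate generator expression is not strictly needed. A self-contained sketch of the same writerow/writerows pattern, with plain tuples standing in for the cursor (arcpy isn't assumed here):

```python
import csv
import io

# Fake rows standing in for arcpy.da.SearchCursor output: one tuple per row.
rows = [('A-1', 'Essex', 'Newark'), ('B-2', 'Bergen', 'Lodi')]
header = ['Id', 'County', 'Municipality']

buf = io.StringIO()  # in the real script this would be the open CSV file
writer = csv.writer(buf, lineterminator='\n')
writer.writerow(header)   # writerow for the single header row
writer.writerows(rows)    # writerows takes any iterable of sequences,
                          # so a cursor object could be passed directly
print(buf.getvalue())
```

Letting csv.writer emit the header as well (instead of hand-building a comma-joined header string) also gets quoting right for free if a header ever contains a comma.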
asked Oct 24, 2016 at 15:20

1 Answer

Just brainstorming here, but maybe try something like this: make a list of your fields first, then pass that to the cursor. You can then join list comprehensions of the cursor's row objects to write into your csv.

Edit: If you need the CSV's header row to be different from the field names in the table, you'll need to either modify your fields list or make a separate headers list with the CSV headers you want (or use your existing header variable to write that row).

sorttable = arcpy.TableSelect_analysis(sortjoin, "sorttable")
fields = [**list of all your fields**]
with arcpy.da.SearchCursor(sorttable, fields) as cur:
    with open('C:\\Users\\rzagha\\Desktop\\RTK\\CRTKMunicipalityMaps\\csvs\\'+table['County']+' '+table['MunName']+'.csv', 'a') as wildcsv:
        wildcsv.write(", ".join(fields) + "\n")
        for row in cur:
            wildcsv.write(", ".join(str(i) for i in row) + "\n")
answered Oct 24, 2016 at 17:21
  • yeah, I was thinking something along these lines. The hardest part would be matching up the new row names to the values in the specific order that I want; I may just have to keep it manual. Commented Oct 24, 2016 at 18:20
  • In the cursor, the order of values in the row matches the order of fields specified by the field_names argument. So as long as your two lists are in the same order your values should map properly from the table to the csv. You could also use a list of dictionary objects to map {old field name : new field name} and derive each list from that. Commented Oct 24, 2016 at 18:30
  • ahhh I like the dictionary idea! Commented Oct 24, 2016 at 18:35
  • check out the way I did it, pretty similar to your idea. It was just annoying to manually create the tuples and headers Commented Oct 26, 2016 at 16:29
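The dictionary idea from the comments can be sketched as follows: keep one ordered mapping of table field name to desired CSV header, then derive both lists from it so the column order can never drift apart. (The field names and headers below are just a short excerpt for illustration; an OrderedDict is used so the sketch also preserves order on the Python 2.7 that ArcGIS Desktop ships with.)

```python
from collections import OrderedDict

# One source of truth: GDB field name -> desired CSV header, in output order.
field_map = OrderedDict([
    ('onflysort_OFlabel', 'Id'),
    ('onflysort_FacilityID', 'FACID'),
    ('SiteChemByMunSort_CountySEARCH', 'County'),
])

fields = list(field_map.keys())     # pass this to arcpy.da.SearchCursor
headers = list(field_map.values())  # write this as the CSV header row
print(fields)
print(headers)
```

Adding, removing, or reordering a column then means touching one line of the mapping instead of editing two parallel tuples.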
