I'm trying to use DDP to produce a map book for a series of works. Each item of work is a feature in a polyline feature class, and a copy of that class is being used to define my DDPs. I have already set up dynamic text, page definition queries, etc.
As the work lines vary in length, I want to constrain the map scale so that pages are not zoomed in or out too far. My code:
import arcpy
from arcpy import env
env.workspace = r"U:"
mxd = arcpy.mapping.MapDocument(r"U:\EMS.mxd")
df = arcpy.mapping.ListDataFrames(mxd)[0]
ddp = mxd.dataDrivenPages
indexLayer = ddp.indexLayer
# def ddpScale():
#     if df.scale < 6000:
#         df.scale = 6000
#     elif df.scale > 10000:
#         df.scale = 10000
#     else:
#         df.scale = 10000
for i in range(1, mxd.dataDrivenPages.pageCount + 1):
    mxd.dataDrivenPages.currentPageID = i
    for scale in df.scale:
        if df.scale < 6000:
            df.scale = 6000
        elif df.scale > 10000:
            df.scale = 10000
        else:
            df.scale = 10000
    #df.scale = ddpScale
ddp.exportToPDF(r"U:\test.pdf")
del mxd
The commented-out lines were my attempt to define a function for setting the scale, but I opted for an if/elif statement instead. I basically want any pages with a scale less than 1:6,000 to be 1:6,000, and all others to be 1:10,000.
As an addition, I would like lines that would originally need a scale above 1:10,000 to be split across multiple pages as needed (one possible tool for this is sketched below).
How would I go about setting the scale in this manner?
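On the page-splitting point, one option is the Grid Index Features tool: it builds a regular grid of index polygons and can keep only the cells that intersect the input lines, so a single long line spans several fixed-size pages, and the resulting grid would replace the copied line class as the DDP index layer. A minimal sketch, where "WorkLines", the output path, and the cell size are all placeholder assumptions rather than names from the question:
import arcpy

# Placeholder names and cell size, not taken from the question
arcpy.GridIndexFeatures_cartography(
    out_feature_class=r"U:\GridIndex.shp",
    in_features="WorkLines",
    intersect_feature="INTERSECTFEATURE",  # keep only cells that touch a line
    polygon_width="2500 Meters",           # roughly one page at 1:10,000; adjust
    polygon_height="2500 Meters")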
The error I am getting:
Traceback (most recent call last):
  File "C:/XXXddp.py", line 20, in <module>
    for scale in df.scale:
TypeError: 'float' object is not iterable
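The TypeError itself is straightforward: df.scale is a single float, not a sequence, so the inner for cannot iterate over it. Below is a minimal sketch of the loop with that line removed, reusing the names from the question and exporting page by page so each adjusted scale actually makes it into a PDF (the original elif and else both set 10,000, so a single else suffices). Whether df.scale persists through a bulk export is a separate issue, as the first comment below notes.
# Sketch only: inner "for scale in df.scale" removed
for i in range(1, ddp.pageCount + 1):
    ddp.currentPageID = i  # moving to the page updates df.scale
    if df.scale < 6000:
        df.scale = 6000    # never zoom in past 1:6,000
    else:
        df.scale = 10000   # everything else at 1:10,000
    # export the current page so the adjusted scale is used
    ddp.exportToPDF(r"U:\test_{0}.pdf".format(i), "CURRENT")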
- I doubt what you are trying to do is possible, but I could be wrong. DDP doesn't seem to "remember" the scale for each page when set using df.scale. You could add a scale field to the index layer feature class, for example using minimum bounding geometry to estimate a suitable scale, then use this field as the Data Driven Scale field. – Bera, Dec 20, 2018 at 12:21
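A minimal sketch of that minimum-bounding-geometry idea, with "WorkLines" and the output path as placeholder names (the answer below uses the simpler per-feature extent instead; note the ENVELOPE option may require an Advanced license):
import arcpy

# Placeholder names; MBG_FIELDS writes the envelope dimensions
# (MBG_Width, MBG_Length) as attributes for a later scale calculation
arcpy.MinimumBoundingGeometry_management(
    in_features="WorkLines",
    out_feature_class=r"U:\WorkEnvelopes.shp",
    geometry_type="ENVELOPE",
    group_option="NONE",
    mbg_fields_option="MBG_FIELDS")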
- If you want, I can post an example of creating the scale field using arcpy. – Bera, Dec 20, 2018 at 12:36
- @BERA that would be very useful, thank you. The whole purpose of this is to save me time when the works kick off next year, so the more that can be done in Python the better. – Matt Houston, Dec 20, 2018 at 12:37
1 Answer
An alternative is to use a field of scales as the Data Driven Scale field:
When you specify a Data Driven Scale field, values from this field define the map scale of the detail data frame for each page in the Data Driven Pages series
The field can be calculated using arcpy from the extent of each feature:
import arcpy

fc = "Areas"  # Change to match your fc
scalefield = "Scale"  # Add this field before executing the code (see sketch below)

with arcpy.da.UpdateCursor(fc, ["SHAPE@", scalefield]) as cursor:
    for row in cursor:
        ext = row[0].extent
        maxdiffxy = max([ext.XMax - ext.XMin, ext.YMax - ext.YMin])  # Max width or height of feature
        if maxdiffxy < 2500:  # If smaller than 2500 meters, use 1:6,000. Threshold will have to be adjusted
            row[1] = 6000
        else:
            row[1] = 10000
        cursor.updateRow(row)
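The scale field itself can be created beforehand with Add Field; a minimal sketch, assuming the same names as above:
import arcpy

fc = "Areas"  # same placeholder fc as above
# LONG is sufficient since the scale denominators are whole numbers
arcpy.AddField_management(fc, "Scale", "LONG")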
- The code for adding the scale data works great; just working on getting the DDP code to read the scale field when exporting. – Matt Houston, Dec 20, 2018 at 14:33
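For that export step: as far as I know the Data Driven Scale field has to be picked in the Data Driven Pages setup dialog rather than through arcpy.mapping, but once it is set a plain export honors it. A sketch using the paths from the question:
import arcpy

mxd = arcpy.mapping.MapDocument(r"U:\EMS.mxd")
mxd.dataDrivenPages.refresh()  # pick up the newly calculated Scale values
mxd.dataDrivenPages.exportToPDF(r"U:\test.pdf", "ALL")
del mxd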