I want to compute the area of the intersection of each pair of geometries from two different shapefiles in Python with GDAL/OGR. Here is my sample code:
from osgeo import ogr

def ogrIntersection(srcShpPath1, srcShpPath2):
    # open both shapefiles and get their layers
    driver = ogr.GetDriverByName("ESRI Shapefile")
    srcShp1 = driver.Open(srcShpPath1)
    srcShp2 = driver.Open(srcShpPath2)
    srcLay1 = srcShp1.GetLayer()
    srcLay2 = srcShp2.GetLayer()
    outArea = []
    # pairwise intersection test between the two layers
    for feat1 in srcLay1:
        geom1 = feat1.GetGeometryRef()
        for feat2 in srcLay2:
            geom2 = feat2.GetGeometryRef()
            if geom2.Intersects(geom1):
                outArea.append(geom2.Intersection(geom1).GetArea())
    return outArea

ogrIntersection(srcShpPath1, srcShpPath2)
The function returns a list containing only a single number, even though the code iterates over the geometries of both input layers. What did I do wrong?
1 Answer
An OGR layer behaves like a one-shot read cursor: once the inner loop has consumed all features of the second layer, later passes over it yield nothing, so only the first feature of the first layer produces any intersections. Call ResetReading() on the second layer after each inner loop so that all of its features are available again when the next feature of the first layer is processed.
See this example:
for feat1 in srcLay1:
    geom1 = feat1.GetGeometryRef()
    for feat2 in srcLay2:
        geom2 = feat2.GetGeometryRef()
        if geom2.Intersects(geom1):
            outArea.append(geom2.Intersection(geom1).GetArea())
    # rewind the second layer's cursor for the next feature of srcLay1
    srcLay2.ResetReading()
return outArea
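For reference, here is a minimal sketch of the complete corrected function with the reset in place, assuming both shapefiles contain polygon geometries in the same coordinate reference system:

from osgeo import ogr

def ogrIntersection(srcShpPath1, srcShpPath2):
    driver = ogr.GetDriverByName("ESRI Shapefile")
    srcShp1 = driver.Open(srcShpPath1)
    srcShp2 = driver.Open(srcShpPath2)
    srcLay1 = srcShp1.GetLayer()
    srcLay2 = srcShp2.GetLayer()
    outArea = []
    for feat1 in srcLay1:
        geom1 = feat1.GetGeometryRef()
        for feat2 in srcLay2:
            geom2 = feat2.GetGeometryRef()
            if geom2.Intersects(geom1):
                outArea.append(geom2.Intersection(geom1).GetArea())
        # rewind the second layer's read cursor so the next feature
        # of srcLay1 sees all features of srcLay2 again
        srcLay2.ResetReading()
    return outArea

An alternative that avoids re-reading the second shapefile on every pass is to load its geometries into a Python list once, e.g. geoms2 = [f.GetGeometryRef().Clone() for f in srcLay2] (Clone() is needed because the geometry returned by GetGeometryRef() is owned by its feature), and then iterate over that list in the inner loop instead of the layer.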