
I have thousands of data files, each representing the trail of an aircraft flight. My goal is to analyse them and detect those which follow a specified pattern (in my case a survey pattern).

I am using Python. Because I have the raw trail data (longitudes and latitudes), my approach was to find the mean position of each aircraft trail using NumPy, then to plot with Matplotlib a confidence circle (of, let's say, 75%) centred at that mean position, and then to crop each plot to the bounding box of that confidence circle. All trails are plotted in black on a white background for contrast.
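For reference, the mean-position-and-confidence-circle step described above can be sketched in NumPy. This is a minimal sketch: the helper name and the use of raw degrees as a distance unit are my assumptions, not part of the original code.

```python
import numpy as np

def crop_to_confidence_circle(lats, lons, confidence=0.75):
    """Hypothetical helper: keep only the points of a trail inside the
    circle, centred on the mean position, that contains the given
    fraction of points. lats/lons are 1-D arrays in degrees; degrees
    are fine here because only a relative cut-off is needed."""
    lats, lons = np.asarray(lats, float), np.asarray(lons, float)
    center = np.array([lats.mean(), lons.mean()])
    # Distance of every point from the mean position
    dist = np.hypot(lats - center[0], lons - center[1])
    # Radius that encloses e.g. 75% of the points
    radius = np.quantile(dist, confidence)
    mask = dist <= radius
    return lats[mask], lons[mask], center, radius
```

The bounding box of the returned circle is then simply `center ± radius` in both axes.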

Are there methods, using for example OpenCV or something else, to recognise these patterns, or suspected patterns that look like a survey grid pattern?

For example, detecting 'quasi-parallel' lines that make some angle theta (within some tolerance) and have almost the same length, then flagging that as a survey pattern?

Or maybe directly recognising such patterns, the way OpenCV detects players, stars and other trained objects?

Here are thumbnails of some of the images; those with a red box around them are my goal: they look like a survey grid.

[Thumbnails of the trail images]

[Screenshot of my approach]

ADDED: Because someone asked if I have the vector/raw data: yes! Each flight has its own CSV file, with a header like this:

latitude, longitude, col1, ..., etc
PolyGeo
asked Aug 30, 2023 at 12:06
  • Instead of image recognition, perhaps you can stick with the GPS data? After determining the mean position and the bounding box, you could determine the distance covered by the line in the bounding box by summing the distance between each subsequent point. If that distance is e.g. 4 (or 3, or 5, or N) times the width of the box, you could assume that the aircraft flew a survey pattern? Commented Dec 19, 2023 at 23:09
  • Or maybe do something with the aggregated heading? The survey pattern (if the lines remain mostly parallel, but in opposite directions) likely has a recognisable heading pattern, e.g. something like "the top 10 headings are +/- 180 degrees different from the bottom 10 headings". I am really unsure if this works, so that's why I don't write an answer. Commented Dec 19, 2023 at 23:14
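The distance heuristic from the first comment can be sketched directly on the raw coordinates. Degrees are used as a rough unit and the threshold ratio is a guess, so treat this as an illustration of the idea rather than a tested detector:

```python
import numpy as np

def path_length_ratio(lats, lons):
    """Heuristic from the comments: total distance flown divided by
    the width of the bounding box. A back-and-forth survey pattern
    crosses the box many times, so a high ratio (e.g. > 4) is
    suspicious. Degrees are used as a crude distance unit."""
    lats, lons = np.asarray(lats, float), np.asarray(lons, float)
    # Sum of distances between subsequent points
    step = np.hypot(np.diff(lats), np.diff(lons))
    # Use the longer side of the bounding box as the "width"
    width = max(lons.max() - lons.min(), lats.max() - lats.min())
    if width == 0:
        return 0.0
    return step.sum() / width
```

A direct flight gives a ratio near 1; a trail that zig-zags across its bounding box N times gives a ratio near N.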

1 Answer

I once had a problem that was somewhat related, so I had a further look at whether the principle I used for it gives a reasonable result for your case.

It is a vector-based approach, but based on the comments I think you are open to that as well...

I used the following freely available data, and depending on whether the heuristics it needs are OK for you, it seems like it could be a usable solution.

The script loops over all ".csv" files in a directory and for each file does the following steps:

  1. Load the vector data.
  2. Apply a buffer of ~(the typical distance between passes) / 2 to the points. Because the data is in degrees, calculating distances isn't very precise, but for this case that might not be important. If you want it more precise, it is also easy to convert the data to a projected CRS before applying the buffers.
  3. Dissolve the resulting polygons.
  4. Apply a negative buffer slightly larger than the positive buffer applied previously. This removes all areas that are not "survey zones".
  5. After some extra cleaning steps you now have a relatively clean localisation of all "survey zones".
  6. If at least one "survey zone" has been found, the file is considered to be a survey-zone file.
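Step 2 mentions converting to a projected CRS for more precise buffering. A minimal sketch of that conversion, assuming GeoPandas' `estimate_utm_crs()` (available since GeoPandas 0.9) is acceptable for picking a local metric CRS:

```python
import geopandas as gpd

def to_metric(gdf: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
    """Sketch: reproject lon/lat data to a local UTM zone so that
    buffer distances can be given in metres instead of degrees."""
    return gdf.to_crs(gdf.estimate_utm_crs())
```

After reprojecting like this, `max_pass_distance` and `min_pass_distance` in the script below would be expressed in metres rather than degrees.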

Script with the (rough) implementation:

from pathlib import Path

import geopandas as gpd
import shapely


def find_survey_zones(path: Path) -> gpd.GeoDataFrame:
    # Maximum and minimum distance to treat as a survey pass
    max_pass_distance = 0.1
    min_pass_distance = 0.02
    min_survey_area = None

    df = gpd.read_file(path, engine="pyogrio")
    gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df.LON, df.LAT), crs=4326)

    # Buffer the input points
    buff_gs = gdf.buffer(distance=max_pass_distance / 2)
    # Union all the buffered points
    buff_union = buff_gs.unary_union
    # buff_union_gdf = gpd.GeoDataFrame(geometry=[buff_union], crs=4326)
    # buff_union_gdf.to_file(path.parent / f"{path.stem}_buff_union.gpkg")

    # Apply negative buffer to remove areas without surveying pattern
    buff_union_buff = buff_union.buffer(
        distance=-(max_pass_distance + min_pass_distance) / 2
    )
    # Positive buffer again, with some margin so outer survey points/lines are retained
    survey_zone = buff_union_buff.buffer(distance=min_pass_distance)

    # Small zones where single passes cross still need to be removed, e.g. by area.
    survey_zones = shapely.get_parts(survey_zone)
    survey_zones_gdf = gpd.GeoDataFrame(geometry=survey_zones, crs=4326)
    # survey_zones_gdf.to_file(path.parent / f"{path.stem}_buff_union_buff.gpkg")
    if min_survey_area is None:
        min_survey_area = (max_pass_distance / 2) ** 2 * 3.14
    survey_zones_gdf = survey_zones_gdf[
        survey_zones_gdf.geometry.area > min_survey_area
    ]

    return survey_zones_gdf


if __name__ == "__main__":
    dir = Path("C:/Temp/tracks")
    for path in dir.glob("*.csv"):
        survey_zones_gdf = find_survey_zones(path)
        # Check if any zones were found
        if len(survey_zones_gdf) > 0:
            print(f"Survey zones were found in {path}")
            # survey_zones_gdf.to_file(dir / f"{path.stem}_survey_zones.gpkg")
answered Dec 22, 2023 at 0:54
