Detect and Track Objects with ML Kit on iOS

You can use ML Kit to detect and track objects across frames of video.

When you pass images to ML Kit, it returns, for each image, a list of up to five detected objects and their position in the image. When detecting objects in video streams, each object has an ID that you can use to track the object across images. You can also optionally enable coarse object classification, which labels objects with broad category descriptions.

Before you begin

  1. If you have not already added Firebase to your app, do so by following the steps in the getting started guide.
  2. Include the ML Kit libraries in your Podfile:
    pod 'Firebase/MLVision', '6.25.0'
    pod 'Firebase/MLVisionObjectDetection', '6.25.0'
    
    After you install or update your project's Pods, be sure to open your Xcode project using its .xcworkspace.
  3. In your app, import Firebase:

    Swift

    import Firebase

    Objective-C

    @import Firebase;

1. Configure the object detector

To start detecting and tracking objects, first create an instance of VisionObjectDetector, optionally specifying any detector settings you want to change from the default.

  1. Configure the object detector for your use case with a VisionObjectDetectorOptions object. You can change the following settings:

    Object Detector Settings
    Detection mode: .stream (default) | .singleImage

    In stream mode (default), the object detector runs with very low latency, but might produce incomplete results (such as unspecified bounding boxes or category) on the first few invocations of the detector. Also, in stream mode, the detector assigns tracking IDs to objects, which you can use to track objects across frames. Use this mode when you want to track objects, or when low latency is important, such as when processing video streams in real time.

    In single image mode, the object detector waits until a detected object's bounding box and (if you enabled classification) category are available before returning a result. As a consequence, detection latency is potentially higher. Also, in single image mode, tracking IDs are not assigned. Use this mode if latency isn't critical and you don't want to deal with partial results.

    Detect and track multiple objects: false (default) | true

    Whether to detect and track up to five objects or only the most prominent object (default).

    Classify objects: false (default) | true

    Whether or not to classify detected objects into coarse categories. When enabled, the object detector classifies objects into the following categories: fashion goods, food, home goods, places, plants, and unknown.

    The object detection and tracking API is optimized for these two core use cases:

    • Live detection and tracking of the most prominent object in the camera viewfinder
    • Detection of multiple objects in a static image

    To configure the API for these use cases:

    Swift

    // Live detection and tracking
    // Live detection and tracking
    let options = VisionObjectDetectorOptions()
    options.detectorMode = .stream
    options.shouldEnableMultipleObjects = false
    options.shouldEnableClassification = true  // Optional

    // Multiple object detection in static images
    let options = VisionObjectDetectorOptions()
    options.detectorMode = .singleImage
    options.shouldEnableMultipleObjects = true
    options.shouldEnableClassification = true  // Optional
    

    Objective-C

    // Live detection and tracking
    // Live detection and tracking
    FIRVisionObjectDetectorOptions *options = [[FIRVisionObjectDetectorOptions alloc] init];
    options.detectorMode = FIRVisionObjectDetectorModeStream;
    options.shouldEnableMultipleObjects = NO;
    options.shouldEnableClassification = YES;  // Optional

    // Multiple object detection in static images
    FIRVisionObjectDetectorOptions *options = [[FIRVisionObjectDetectorOptions alloc] init];
    options.detectorMode = FIRVisionObjectDetectorModeSingleImage;
    options.shouldEnableMultipleObjects = YES;
    options.shouldEnableClassification = YES;  // Optional
    
  2. Get an instance of VisionObjectDetector:

    Swift

    let objectDetector = Vision.vision().objectDetector()

    // Or, to change the default settings:
    let objectDetector = Vision.vision().objectDetector(options: options)
    

    Objective-C

    FIRVisionObjectDetector *objectDetector = [[FIRVision vision] objectDetector];

    // Or, to change the default settings:
    FIRVisionObjectDetector *objectDetector = [[FIRVision vision] objectDetectorWithOptions:options];
    

2. Run the object detector

To detect and track objects, do the following for each image or frame of video. If you enabled stream mode, you must create VisionImage objects from CMSampleBufferRefs.

  1. Create a VisionImage object using a UIImage or a CMSampleBufferRef.

    To use a UIImage:

    1. If necessary, rotate the image so that its imageOrientation property is .up. (One way to do this is sketched after the snippets below.)
    2. Create a VisionImage object using the correctly-rotated UIImage. Do not specify any rotation metadata—the default value, .topLeft, must be used.

      Swift

      let image = VisionImage(image: uiImage)

      Objective-C

      FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
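
    If your UIImage isn't already oriented .up, one way to normalize it before creating the VisionImage is to redraw it. The sketch below is not part of the ML Kit API; it assumes UIKit, and normalizedImage(_:) is an illustrative helper name:

      Swift

      // Sketch: redraw a UIImage so its imageOrientation becomes .up.
      // `normalizedImage(_:)` is an illustrative helper, not an ML Kit API.
      import UIKit

      func normalizedImage(_ image: UIImage) -> UIImage {
        guard image.imageOrientation != .up else { return image }
        let renderer = UIGraphicsImageRenderer(size: image.size)
        // draw(in:) honors the original orientation, so the rendered copy is .up.
        return renderer.image { _ in
          image.draw(in: CGRect(origin: .zero, size: image.size))
        }
      }

      let image = VisionImage(image: normalizedImage(uiImage))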

    To use a CMSampleBufferRef:

    1. Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer.

      To get the image orientation:

      Swift

      func imageOrientation(
        deviceOrientation: UIDeviceOrientation,
        cameraPosition: AVCaptureDevice.Position
      ) -> VisionDetectorImageOrientation {
        switch deviceOrientation {
        case .portrait:
          return cameraPosition == .front ? .leftTop : .rightTop
        case .landscapeLeft:
          return cameraPosition == .front ? .bottomLeft : .topLeft
        case .portraitUpsideDown:
          return cameraPosition == .front ? .rightBottom : .leftBottom
        case .landscapeRight:
          return cameraPosition == .front ? .topRight : .bottomRight
        case .faceDown, .faceUp, .unknown:
          return .leftTop
        }
      }

      Objective-C

      - (FIRVisionDetectorImageOrientation)
          imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                                 cameraPosition:(AVCaptureDevicePosition)cameraPosition {
        switch (deviceOrientation) {
          case UIDeviceOrientationPortrait:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationLeftTop;
            } else {
              return FIRVisionDetectorImageOrientationRightTop;
            }
          case UIDeviceOrientationLandscapeLeft:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationBottomLeft;
            } else {
              return FIRVisionDetectorImageOrientationTopLeft;
            }
          case UIDeviceOrientationPortraitUpsideDown:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationRightBottom;
            } else {
              return FIRVisionDetectorImageOrientationLeftBottom;
            }
          case UIDeviceOrientationLandscapeRight:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationTopRight;
            } else {
              return FIRVisionDetectorImageOrientationBottomRight;
            }
          default:
            return FIRVisionDetectorImageOrientationTopLeft;
        }
      }

      Then, create the metadata object:

      Swift

      let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
      let metadata = VisionImageMetadata()
      metadata.orientation = imageOrientation(
        deviceOrientation: UIDevice.current.orientation,
        cameraPosition: cameraPosition
      )

      Objective-C

      FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
      AVCaptureDevicePosition cameraPosition =
          AVCaptureDevicePositionBack;  // Set to the capture device you used.
      metadata.orientation =
          [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                       cameraPosition:cameraPosition];
    2. Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata:

      Swift

      let image = VisionImage(buffer: sampleBuffer)
      image.metadata = metadata

      Objective-C

      FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
      image.metadata = metadata;
  2. Pass the VisionImage to one of the object detector's image processing methods. You can either use the asynchronous process(image:) method or the synchronous results(in:) method.

    To detect objects asynchronously:

    Swift

    objectDetector.process(image) { detectedObjects, error in
      guard error == nil else {
        // Error.
        return
      }
      guard let detectedObjects = detectedObjects, !detectedObjects.isEmpty else {
        // No objects detected.
        return
      }

      // Success. Get object info here.
      // ...
    }
    

    Objective-C

    [objectDetector processImage:image
                      completion:^(NSArray<FIRVisionObject *> *_Nullable objects,
                                   NSError *_Nullable error) {
      if (error != nil) {
        // Error.
        return;
      }
      if (objects == nil || objects.count == 0) {
        // No objects detected.
        return;
      }

      // Success. Get object info here.
      // ...
    }];
    

    To detect objects synchronously:

    Swift

    var results: [VisionObject]? = nil
    do {
      results = try objectDetector.results(in: image)
    } catch let error {
      print("Failed to detect object with error: \(error.localizedDescription).")
      return
    }
    guard let detectedObjects = results, !detectedObjects.isEmpty else {
      print("Object detector returned no results.")
      return
    }

    // ...
    // ...
    

    Objective-C

    NSError *error;
    NSArray<FIRVisionObject *> *objects = [objectDetector resultsInImage:image
                                                                   error:&error];
    if (error != nil) {
      // Error.
      return;
    }
    if (objects == nil || objects.count == 0) {
      // No objects detected.
      return;
    }

    // Success. Get object info here.
    // ...
    
  3. If the call to the image processor succeeds, it either passes a list of VisionObjects to the completion handler or returns the list, depending on whether you called the asynchronous or synchronous method.

    Each VisionObject contains the following properties:

    frame: A CGRect indicating the position of the object in the image.
    trackingID: An integer that identifies the object across images. Nil in single image mode. (A sketch of one way to use these IDs follows the samples below.)
    classificationCategory: The coarse category of the object. If the object detector doesn't have classification enabled, this is always .unknown.
    confidence: The confidence value of the object classification. If the object detector doesn't have classification enabled, or the object is classified as unknown, this is nil.

    Swift

    // detectedObjects contains one item if multiple object detection wasn't enabled.
    for obj in detectedObjects {
      let bounds = obj.frame
      let id = obj.trackingID

      // If classification was enabled:
      let category = obj.classificationCategory
      let confidence = obj.confidence
    }
    

    Objective-C

    // The list of detected objects contains one item if multiple
    // object detection wasn't enabled.
    for (FIRVisionObject *obj in objects) {
      CGRect bounds = obj.frame;
      if (obj.trackingID) {
        NSInteger id = obj.trackingID.integerValue;
      }

      // If classification was enabled:
      FIRVisionObjectCategory category = obj.classificationCategory;
      float confidence = obj.confidence.floatValue;
    }
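
    In stream mode, the trackingID values above let you correlate detections across frames. Here is a minimal sketch of one way to keep per-object state; it assumes stream mode and the Firebase import from the setup steps, and TrackedObject and trackedObjects are illustrative names, not ML Kit APIs:

    Swift

    // Sketch: keep simple per-object state across frames using tracking IDs.
    // `TrackedObject` and `trackedObjects` are illustrative, not part of ML Kit.
    struct TrackedObject {
      var lastFrame: CGRect
      var framesSeen: Int
    }

    var trackedObjects = [Int: TrackedObject]()

    func update(with detectedObjects: [VisionObject]) {
      for obj in detectedObjects {
        // trackingID is nil in single image mode.
        guard let id = obj.trackingID?.intValue else { continue }
        var entry = trackedObjects[id] ?? TrackedObject(lastFrame: obj.frame, framesSeen: 0)
        entry.lastFrame = obj.frame
        entry.framesSeen += 1
        trackedObjects[id] = entry
      }
    }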
    

Improving usability and performance

For the best user experience, follow these guidelines in your app:

  • Successful object detection depends on the object's visual complexity. Objects with a small number of visual features might need to take up a larger part of the image to be detected. You should provide users with guidance on capturing input that works well with the kind of objects you want to detect.
  • When using classification, if you want to detect objects that don't fall cleanly into the supported categories, implement special handling for unknown objects (see the sketch after this list).
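
For example, when classification is enabled you might branch on the unknown category before labeling an object. This is only a sketch over the detectedObjects array from step 3 above; showLabel(_:for:) is a hypothetical helper in your own UI code, not an ML Kit API:

Swift

    // Sketch: special-case objects that fall outside the supported coarse categories.
    for obj in detectedObjects {
      if obj.classificationCategory == .unknown {
        // Not a supported category: fall back to a generic label, or prompt the
        // user to reframe the object.
        showLabel("Object", for: obj.frame)  // hypothetical helper
      } else {
        showLabel(String(describing: obj.classificationCategory), for: obj.frame)  // hypothetical helper
      }
    }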

Also, check out the ML Kit Material Design showcase app and the Material Design Patterns for machine learning-powered features collection.

When using streaming mode in a real-time application, follow these guidelines to achieve the best framerates:

  • Don't use multiple object detection in streaming mode, as most devices won't be able to produce adequate framerates.

  • Disable classification if you don't need it.

  • Throttle calls to the detector. If a new video frame becomes available while the detector is running, drop the frame (see the capture sketch after this list).
  • If you are using the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. By doing so, you render to the display surface only once for each input frame. See the previewOverlayView and FIRDetectionOverlayView classes in the showcase sample app for an example.
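
To combine the throttling guideline with the stream-mode setup above, here is a minimal sketch of a capture delegate that drops frames while a detection is in flight. It assumes the imageOrientation(deviceOrientation:cameraPosition:) helper shown earlier is available; FrameHandler and isDetecting are illustrative names, and production code should synchronize the flag across queues:

Swift

    // Sketch: feed camera frames to the detector, dropping frames while one is processing.
    import AVFoundation
    import Firebase
    import UIKit

    class FrameHandler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
      private let objectDetector = Vision.vision().objectDetector()  // or objectDetector(options:)
      private var isDetecting = false  // simple throttle flag

      func captureOutput(_ output: AVCaptureOutput,
                         didOutput sampleBuffer: CMSampleBuffer,
                         from connection: AVCaptureConnection) {
        // Drop this frame if the previous one is still being processed.
        guard !isDetecting else { return }
        isDetecting = true

        let metadata = VisionImageMetadata()
        metadata.orientation = imageOrientation(
          deviceOrientation: UIDevice.current.orientation,
          cameraPosition: .back)

        let image = VisionImage(buffer: sampleBuffer)
        image.metadata = metadata

        objectDetector.process(image) { [weak self] detectedObjects, error in
          self?.isDetecting = false
          // Handle detectedObjects and error as shown in step 2 above.
        }
      }
    }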
