Recognize Landmarks with ML Kit on iOS

You can use ML Kit to recognize well-known landmarks in an image.

Before you begin

  1. If you have not already added Firebase to your app, do so by following the steps in the getting started guide. (A minimal Firebase initialization sketch follows this list.)
  2. Include the ML Kit libraries in your Podfile:
    pod 'Firebase/MLVision', '6.25.0'
    
    After you install or update your project's Pods, be sure to open your Xcode project using its .xcworkspace.
  3. In your app, import Firebase:

    Swift

    import Firebase

    Objective-C

    @import Firebase;
  4. If you have not already enabled Cloud-based APIs for your project, do so now:

    1. Open the ML Kit APIs page of the Firebase console.
    2. If you have not already upgraded your project to a Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project isn't on the Blaze plan.)

      Only Blaze-level projects can use Cloud-based APIs.

    3. If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.
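
For reference, here is a minimal sketch of the Firebase initialization that the getting started guide covers, assuming a UIKit app with an AppDelegate (the app delegate boilerplate below is illustrative and not part of ML Kit):

Swift

import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Configure the default Firebase app before calling any ML Kit APIs.
        FirebaseApp.configure()
        return true
    }
}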

Configure the landmark detector

By default, the Cloud detector uses the stable version of the model and returns up to 10 results. If you want to change either of these settings, specify them with a VisionCloudDetectorOptions object as in the following example:

Swift

let options = VisionCloudDetectorOptions()
options.modelType = .latest
options.maxResults = 20

Objective-C

FIRVisionCloudDetectorOptions *options =
    [[FIRVisionCloudDetectorOptions alloc] init];
options.modelType = FIRVisionCloudModelTypeLatest;
options.maxResults = 20;

In the next step, pass the VisionCloudDetectorOptions object when you create the Cloud detector object.

Run the landmark detector

To recognize landmarks in an image, pass the image as a UIImage or a CMSampleBufferRef to the VisionCloudLandmarkDetector's detect(in:) method:

  1. Get an instance of VisionCloudLandmarkDetector:

    Swift

    lazy var vision = Vision.vision()
    let cloudDetector = vision.cloudLandmarkDetector(options: options)
    // Or, to use the default settings:
    // let cloudDetector = vision.cloudLandmarkDetector()

    Objective-C

    FIRVision *vision = [FIRVision vision];
    FIRVisionCloudLandmarkDetector *landmarkDetector = [vision cloudLandmarkDetector];
    // Or, to change the default settings:
    // FIRVisionCloudLandmarkDetector *landmarkDetector =
    //     [vision cloudLandmarkDetectorWithOptions:options];
  2. Create a VisionImage object using a UIImage or a CMSampleBufferRef.

    To use a UIImage:

    1. If necessary, rotate the image so that its imageOrientation property is .up. (A sketch of one way to do this follows the Objective-C snippet below.)
    2. Create a VisionImage object using the correctly-rotated UIImage. Do not specify any rotation metadata—the default value, .topLeft, must be used.

      Swift

      let image = VisionImage(image: uiImage)

      Objective-C

      FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
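
    Here is a minimal sketch of one way to satisfy step 1, assuming UIKit: it redraws the UIImage into a new bitmap so the pixel data is upright and imageOrientation becomes .up (the normalizedImage(_:) helper name is illustrative and not an ML Kit API):

    Swift

      import UIKit

      // Returns a copy of the image whose pixel data is already upright,
      // so its imageOrientation is .up and no rotation metadata is needed.
      func normalizedImage(_ image: UIImage) -> UIImage {
          guard image.imageOrientation != .up else { return image }
          let format = UIGraphicsImageRendererFormat.default()
          format.scale = image.scale
          let renderer = UIGraphicsImageRenderer(size: image.size, format: format)
          return renderer.image { _ in
              image.draw(in: CGRect(origin: .zero, size: image.size))
          }
      }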

    To use a CMSampleBufferRef:

    1. Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer.

      To get the image orientation:

      Swift

      func imageOrientation(
          deviceOrientation: UIDeviceOrientation,
          cameraPosition: AVCaptureDevice.Position
          ) -> VisionDetectorImageOrientation {
          switch deviceOrientation {
          case .portrait:
              return cameraPosition == .front ? .leftTop : .rightTop
          case .landscapeLeft:
              return cameraPosition == .front ? .bottomLeft : .topLeft
          case .portraitUpsideDown:
              return cameraPosition == .front ? .rightBottom : .leftBottom
          case .landscapeRight:
              return cameraPosition == .front ? .topRight : .bottomRight
          case .faceDown, .faceUp, .unknown:
              return .leftTop
          }
      }

      Objective-C

      - (FIRVisionDetectorImageOrientation)
          imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                                 cameraPosition:(AVCaptureDevicePosition)cameraPosition {
        switch (deviceOrientation) {
          case UIDeviceOrientationPortrait:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationLeftTop;
            } else {
              return FIRVisionDetectorImageOrientationRightTop;
            }
          case UIDeviceOrientationLandscapeLeft:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationBottomLeft;
            } else {
              return FIRVisionDetectorImageOrientationTopLeft;
            }
          case UIDeviceOrientationPortraitUpsideDown:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationRightBottom;
            } else {
              return FIRVisionDetectorImageOrientationLeftBottom;
            }
          case UIDeviceOrientationLandscapeRight:
            if (cameraPosition == AVCaptureDevicePositionFront) {
              return FIRVisionDetectorImageOrientationTopRight;
            } else {
              return FIRVisionDetectorImageOrientationBottomRight;
            }
          default:
            return FIRVisionDetectorImageOrientationTopLeft;
        }
      }

      Then, create the metadata object:

      Swift

      let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
      let metadata = VisionImageMetadata()
      metadata.orientation = imageOrientation(
          deviceOrientation: UIDevice.current.orientation,
          cameraPosition: cameraPosition
      )

      Objective-C

      FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
      AVCaptureDevicePosition cameraPosition =
          AVCaptureDevicePositionBack;  // Set to the capture device you used.
      metadata.orientation =
          [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                       cameraPosition:cameraPosition];
    2. Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata:

      Swift

      let image = VisionImage(buffer: sampleBuffer)
      image.metadata = metadata

      Objective-C

      FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
      image.metadata = metadata;
  3. Then, pass the image to the detect(in:) method:

    Swift

    cloudDetector.detect(in: image) { landmarks, error in
        guard error == nil, let landmarks = landmarks, !landmarks.isEmpty else {
            // ...
            return
        }
        // Recognized landmarks
        // ...
    }

    Objective-C

    [landmarkDetector detectInImage:image
                         completion:^(NSArray<FIRVisionCloudLandmark *> *landmarks,
                                      NSError *error) {
      if (error != nil) {
        return;
      } else if (landmarks != nil) {
        // Got landmarks
      }
    }];

Get information about the recognized landmarks

If landmark recognition succeeds, an array of VisionCloudLandmark objects will be passed to the completion handler. From each object, you can get information about a landmark recognized in the image.

For example:

Swift

for landmark in landmarks {
    let landmarkDesc = landmark.landmark
    let boundingPoly = landmark.frame
    let entityId = landmark.entityId

    // A landmark can have multiple locations: for example, the location the image
    // was taken, and the location of the landmark depicted.
    for location in landmark.locations {
        let latitude = location.latitude
        let longitude = location.longitude
    }

    let confidence = landmark.confidence
}

Objective-C

for (FIRVisionCloudLandmark *landmark in landmarks) {
    NSString *landmarkDesc = landmark.landmark;
    CGRect frame = landmark.frame;
    NSString *entityId = landmark.entityId;

    // A landmark can have multiple locations: for example, the location the image
    // was taken, and the location of the landmark depicted.
    for (FIRVisionLatitudeLongitude *location in landmark.locations) {
        double latitude = [location.latitude doubleValue];
        double longitude = [location.longitude doubleValue];
    }

    float confidence = [landmark.confidence floatValue];
}
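
Putting the pieces together, here is a minimal end-to-end sketch, assuming a correctly oriented UIImage named uiImage and the cloudDetector created earlier (the logging is illustrative only):

Swift

let image = VisionImage(image: uiImage)
cloudDetector.detect(in: image) { landmarks, error in
    guard error == nil, let landmarks = landmarks, !landmarks.isEmpty else {
        // Handle the error or the empty result.
        return
    }
    // Log each recognized landmark with its confidence score.
    for landmark in landmarks {
        print("Landmark: \(String(describing: landmark.landmark)), " +
              "confidence: \(String(describing: landmark.confidence))")
    }
}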
