Recognize Landmarks with ML Kit on iOS
You can use ML Kit to recognize well-known landmarks in an image.
Before you begin
- If you have not already added Firebase to your app, do so by following the steps in the getting started guide.
- Include the ML Kit libraries in your Podfile:

  pod 'Firebase/MLVision', '6.25.0'

  After you install or update your project's Pods, be sure to open your Xcode project using its .xcworkspace.
- In your app, import Firebase:
Swift
import Firebase
Objective-C
@import Firebase;
- If you have not already enabled Cloud-based APIs for your project, do so now:
  - Open the ML Kit APIs page of the Firebase console.
  - If you have not already upgraded your project to the Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project isn't on the Blaze plan.) Only Blaze-level projects can use Cloud-based APIs.
  - If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.
Configure the landmark detector
By default, the Cloud detector uses the stable version of the model and
returns up to 10 results. If you want to change either of these settings,
specify them with a VisionCloudDetectorOptions object as
in the following example:
Swift
let options = VisionCloudDetectorOptions()
options.modelType = .latest
options.maxResults = 20
Objective-C
FIRVisionCloudDetectorOptions *options =
    [[FIRVisionCloudDetectorOptions alloc] init];
options.modelType = FIRVisionCloudModelTypeLatest;
options.maxResults = 20;
In the next step, pass the VisionCloudDetectorOptions
object when you create the Cloud detector object.
Run the landmark detector
To recognize landmarks in an image, pass the image as a UIImage or a CMSampleBufferRef to the VisionCloudLandmarkDetector's detect(in:) method:
- Get an instance of VisionCloudLandmarkDetector:

  Swift
lazy var vision = Vision.vision()
let cloudDetector = vision.cloudLandmarkDetector(options: options)
// Or, to use the default settings:
// let cloudDetector = vision.cloudLandmarkDetector()
Objective-C
FIRVision *vision = [FIRVision vision];
FIRVisionCloudLandmarkDetector *landmarkDetector = [vision cloudLandmarkDetector];
// Or, to change the default settings:
// FIRVisionCloudLandmarkDetector *landmarkDetector =
//     [vision cloudLandmarkDetectorWithOptions:options];
- Create a VisionImage object using a UIImage or a CMSampleBufferRef.

  To use a UIImage:
  - If necessary, rotate the image so that its imageOrientation property is .up (a sketch of one way to do this follows the snippets below).
  - Create a VisionImage object using the correctly-rotated UIImage. Do not specify any rotation metadata; the default value, .topLeft, must be used.

  Swift
let image = VisionImage(image: uiImage)
Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
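The rotation step above has no snippet in this guide, so here is a minimal Swift sketch of one way to redraw an arbitrarily oriented UIImage so that its imageOrientation becomes .up. The normalizedImage(_:) helper is hypothetical, not part of ML Kit or UIKit:

import UIKit

// Hypothetical helper: redraws the image so its pixel data is upright.
func normalizedImage(_ image: UIImage) -> UIImage {
    // Nothing to do if the pixel data is already upright.
    guard image.imageOrientation != .up else { return image }

    let format = UIGraphicsImageRendererFormat.default()
    format.scale = image.scale
    let renderer = UIGraphicsImageRenderer(size: image.size, format: format)
    return renderer.image { _ in
        // draw(in:) applies the stored orientation, so the rendered copy
        // reports an imageOrientation of .up.
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}

You could then write VisionImage(image: normalizedImage(uiImage)) instead of passing the original image.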
  To use a CMSampleBufferRef:
  - Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer (for a sketch of a capture pipeline that produces these buffers, see after this list).

    To get the image orientation:
Swift
func imageOrientation(
    deviceOrientation: UIDeviceOrientation,
    cameraPosition: AVCaptureDevice.Position
) -> VisionDetectorImageOrientation {
    switch deviceOrientation {
    case .portrait:
        return cameraPosition == .front ? .leftTop : .rightTop
    case .landscapeLeft:
        return cameraPosition == .front ? .bottomLeft : .topLeft
    case .portraitUpsideDown:
        return cameraPosition == .front ? .rightBottom : .leftBottom
    case .landscapeRight:
        return cameraPosition == .front ? .topRight : .bottomRight
    case .faceDown, .faceUp, .unknown:
        return .leftTop
    }
}
Objective-C
- (FIRVisionDetectorImageOrientation)
    imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                           cameraPosition:(AVCaptureDevicePosition)cameraPosition {
  switch (deviceOrientation) {
    case UIDeviceOrientationPortrait:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationLeftTop;
      } else {
        return FIRVisionDetectorImageOrientationRightTop;
      }
    case UIDeviceOrientationLandscapeLeft:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationBottomLeft;
      } else {
        return FIRVisionDetectorImageOrientationTopLeft;
      }
    case UIDeviceOrientationPortraitUpsideDown:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationRightBottom;
      } else {
        return FIRVisionDetectorImageOrientationLeftBottom;
      }
    case UIDeviceOrientationLandscapeRight:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationTopRight;
      } else {
        return FIRVisionDetectorImageOrientationBottomRight;
      }
    default:
      return FIRVisionDetectorImageOrientationTopLeft;
  }
}
Then, create the metadata object:
Swift
let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
let metadata = VisionImageMetadata()
metadata.orientation = imageOrientation(
    deviceOrientation: UIDevice.current.orientation,
    cameraPosition: cameraPosition
)
Objective-C
FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
AVCaptureDevicePosition cameraPosition =
    AVCaptureDevicePositionBack;  // Set to the capture device you used.
metadata.orientation =
    [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                 cameraPosition:cameraPosition];
  - Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata:

    Swift
let image = VisionImage(buffer: sampleBuffer)
image.metadata = metadata
Objective-C
FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
image.metadata = metadata;
- Then, pass the image to the detect(in:) method:

  Swift
cloudDetector.detect(in: visionImage) { landmarks, error in
    guard error == nil, let landmarks = landmarks, !landmarks.isEmpty else {
        // ...
        return
    }

    // Recognized landmarks
    // ...
}
Objective-C
[landmarkDetector detectInImage:image
                     completion:^(NSArray<FIRVisionCloudLandmark *> *landmarks,
                                  NSError *error) {
    if (error != nil) {
        return;
    } else if (landmarks != nil) {
        // Got landmarks
    }
}];
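If your frames come from the live camera, the CMSampleBufferRef is typically delivered by an AVCaptureVideoDataOutput delegate. The following Swift sketch shows how the pieces above might fit into such a delegate; CameraLandmarkDelegate is a hypothetical class name, the sketch assumes an AVCaptureSession configured elsewhere, and it reuses the imageOrientation(deviceOrientation:cameraPosition:) helper defined earlier:

import AVFoundation
import Firebase
import UIKit

// Hypothetical delegate for an AVCaptureVideoDataOutput.
class CameraLandmarkDelegate: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    lazy var vision = Vision.vision()
    lazy var cloudDetector = vision.cloudLandmarkDetector()

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let metadata = VisionImageMetadata()
        metadata.orientation = imageOrientation(
            deviceOrientation: UIDevice.current.orientation,
            cameraPosition: .back  // Assumption: frames come from the back camera.
        )

        let image = VisionImage(buffer: sampleBuffer)
        image.metadata = metadata

        // Each call hits the Cloud API and is billed, so a real app would
        // throttle detection rather than run it on every frame.
        cloudDetector.detect(in: image) { landmarks, error in
            // Handle results as described in the next section.
        }
    }
}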
Get information about the recognized landmarks
If landmark recognition succeeds, an array of VisionCloudLandmark
objects will be passed to the completion handler. From each object, you can get
information about a landmark recognized in the image.
For example:
Swift
for landmark in landmarks {
    let landmarkDesc = landmark.landmark
    let boundingPoly = landmark.frame
    let entityId = landmark.entityId

    // A landmark can have multiple locations: for example, the location the image
    // was taken, and the location of the landmark depicted.
    for location in landmark.locations {
        let latitude = location.latitude
        let longitude = location.longitude
    }

    let confidence = landmark.confidence
}
Objective-C
for (FIRVisionCloudLandmark *landmark in landmarks) {
    NSString *landmarkDesc = landmark.landmark;
    CGRect frame = landmark.frame;
    NSString *entityId = landmark.entityId;

    // A landmark can have multiple locations: for example, the location the image
    // was taken, and the location of the landmark depicted.
    for (FIRVisionLatitudeLongitude *location in landmark.locations) {
        double latitude = [location.latitude doubleValue];
        double longitude = [location.longitude doubleValue];
    }

    float confidence = [landmark.confidence floatValue];
}
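To see how the pieces fit together, here is a minimal end-to-end Swift sketch for a still image. The recognizeLandmarks(in:) wrapper is hypothetical, and the maxResults value is an arbitrary choice for illustration:

import Firebase
import UIKit

// Hypothetical wrapper that configures a detector, runs it, and logs results.
func recognizeLandmarks(in uiImage: UIImage) {
    let options = VisionCloudDetectorOptions()
    options.maxResults = 5  // Arbitrary limit for this example.

    let detector = Vision.vision().cloudLandmarkDetector(options: options)

    // Assumes uiImage.imageOrientation is already .up (see the rotation step above).
    let visionImage = VisionImage(image: uiImage)

    detector.detect(in: visionImage) { landmarks, error in
        guard error == nil, let landmarks = landmarks, !landmarks.isEmpty else {
            print("No landmarks recognized: \(error?.localizedDescription ?? "empty result")")
            return
        }
        for landmark in landmarks {
            let name = landmark.landmark ?? "Unknown landmark"
            let confidence = landmark.confidence?.floatValue ?? 0
            print("\(name) (confidence: \(confidence))")
        }
    }
}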
Next steps
- Before you deploy an app that uses a Cloud API to production, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.