Recognize Landmarks Securely with Cloud Vision using Firebase Auth and Functions on Android
In order to call a Google Cloud API from your app, you need to create an intermediate REST API that handles authorization and protects secret values such as API keys. You then need to write code in your mobile app to authenticate to and communicate with this intermediate service.
One way to create this REST API is by using Firebase Authentication and Functions, which gives you a managed, serverless gateway to Google Cloud APIs that handles authentication and can be called from your mobile app with pre-built SDKs.
This guide demonstrates how to use this technique to call the Cloud Vision API from your app. This method will allow all authenticated users to access Cloud Vision billed services through your Cloud project, so consider whether this auth mechanism is sufficient for your use case before proceeding.
Before you begin
Configure your project
- If you haven't already, add Firebase to your Android project.
- If you haven't already enabled Cloud-based APIs for your project, do so now:
  - Open the Firebase ML APIs page in the Firebase console.
  - If you haven't already upgraded your project to the pay-as-you-go Blaze pricing plan, click Upgrade to do so. (You'll be prompted to upgrade only if your project isn't on the Blaze pricing plan.) Only projects on the Blaze pricing plan can use Cloud-based APIs.
  - If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.
- Configure your existing Firebase API keys to disallow access to the Cloud Vision API:
  - Open the Credentials page of the Cloud console.
  - For each API key in the list, open the editing view, and in the Key Restrictions section, add all of the available APIs except the Cloud Vision API to the list.
Deploy the callable function
Next, deploy the Cloud Function you will use to bridge your app and the Cloud
Vision API. The functions-samples repository contains an example
you can use.
By default, accessing the Cloud Vision API through this function will allow only authenticated users of your app access to the Cloud Vision API. You can modify the function for different requirements.
To deploy the function:
- Clone or download the functions-samples repo and change to the Node-1st-gen/vision-annotate-image directory:

  ```shell
  git clone https://github.com/firebase/functions-samples
  cd Node-1st-gen/vision-annotate-image
  ```

- Install dependencies:

  ```shell
  cd functions
  npm install
  cd ..
  ```

- If you don't have the Firebase CLI, install it.
- Initialize a Firebase project in the vision-annotate-image directory. When prompted, select your project in the list:

  ```shell
  firebase init
  ```

- Deploy the function:

  ```shell
  firebase deploy --only functions:annotateImage
  ```
Add Firebase Auth to your app
The callable function deployed above will reject any request from non-authenticated users of your app. If you have not already done so, you will need to add Firebase Auth to your app.
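As a minimal sketch of how a user could be signed in before the function is called, you could use anonymous authentication. This assumes the firebase-auth dependency is in your app and that anonymous sign-in is enabled in the Firebase console; any other Firebase Auth sign-in method works just as well here.

```java
// Sketch only: assumes the firebase-auth library and that anonymous
// sign-in is enabled for this project in the Firebase console.
FirebaseAuth auth = FirebaseAuth.getInstance();
if (auth.getCurrentUser() == null) {
    auth.signInAnonymously().addOnCompleteListener(task -> {
        if (task.isSuccessful()) {
            // The user now has an auth token; calls to the
            // annotateImage callable function will be accepted.
        } else {
            // Handle the sign-in failure before calling the function.
        }
    });
}
```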
Add necessary dependencies to your app
In your module (app-level) Gradle file (usually <project>/<app-module>/build.gradle.kts or <project>/<app-module>/build.gradle), add the dependencies for the Cloud Functions for Firebase and gson libraries:

```kotlin
implementation("com.google.firebase:firebase-functions:22.1.0")
implementation("com.google.code.gson:gson:2.8.6")
```
1. Prepare the input image
In order to call Cloud Vision, the image must be formatted as a base64-encoded string. To process an image from a saved file URI:

- Get the image as a Bitmap object:

  Kotlin

  ```kotlin
  var bitmap: Bitmap = MediaStore.Images.Media.getBitmap(contentResolver, uri)
  ```

  Java

  ```java
  Bitmap bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), uri);
  ```
- Optionally, scale down the image to save on bandwidth. See the Cloud Vision recommended image sizes.

  Kotlin

  ```kotlin
  private fun scaleBitmapDown(bitmap: Bitmap, maxDimension: Int): Bitmap {
      val originalWidth = bitmap.width
      val originalHeight = bitmap.height
      var resizedWidth = maxDimension
      var resizedHeight = maxDimension

      if (originalHeight > originalWidth) {
          resizedHeight = maxDimension
          resizedWidth =
              (resizedHeight * originalWidth.toFloat() / originalHeight.toFloat()).toInt()
      } else if (originalWidth > originalHeight) {
          resizedWidth = maxDimension
          resizedHeight =
              (resizedWidth * originalHeight.toFloat() / originalWidth.toFloat()).toInt()
      } else if (originalHeight == originalWidth) {
          resizedHeight = maxDimension
          resizedWidth = maxDimension
      }
      return Bitmap.createScaledBitmap(bitmap, resizedWidth, resizedHeight, false)
  }
  ```

  Java

  ```java
  private Bitmap scaleBitmapDown(Bitmap bitmap, int maxDimension) {
      int originalWidth = bitmap.getWidth();
      int originalHeight = bitmap.getHeight();
      int resizedWidth = maxDimension;
      int resizedHeight = maxDimension;

      if (originalHeight > originalWidth) {
          resizedHeight = maxDimension;
          resizedWidth = (int) (resizedHeight * (float) originalWidth / (float) originalHeight);
      } else if (originalWidth > originalHeight) {
          resizedWidth = maxDimension;
          resizedHeight = (int) (resizedWidth * (float) originalHeight / (float) originalWidth);
      } else if (originalHeight == originalWidth) {
          resizedHeight = maxDimension;
          resizedWidth = maxDimension;
      }
      return Bitmap.createScaledBitmap(bitmap, resizedWidth, resizedHeight, false);
  }
  ```

  Kotlin

  ```kotlin
  // Scale down bitmap size
  bitmap = scaleBitmapDown(bitmap, 640)
  ```

  Java

  ```java
  // Scale down bitmap size
  bitmap = scaleBitmapDown(bitmap, 640);
  ```
- Convert the bitmap object to a base64-encoded string:

  Kotlin

  ```kotlin
  // Convert bitmap to base64 encoded string
  val byteArrayOutputStream = ByteArrayOutputStream()
  bitmap.compress(Bitmap.CompressFormat.JPEG, 100, byteArrayOutputStream)
  val imageBytes: ByteArray = byteArrayOutputStream.toByteArray()
  val base64encoded = Base64.encodeToString(imageBytes, Base64.NO_WRAP)
  ```

  Java

  ```java
  // Convert bitmap to base64 encoded string
  ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
  bitmap.compress(Bitmap.CompressFormat.JPEG, 100, byteArrayOutputStream);
  byte[] imageBytes = byteArrayOutputStream.toByteArray();
  String base64encoded = Base64.encodeToString(imageBytes, Base64.NO_WRAP);
  ```
The image represented by the Bitmap object must be upright, with no additional rotation required.
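The scaling arithmetic and encoding above can be checked off-device with plain JVM code. The `ImagePrep` class name and the sample dimensions below are illustrative, not part of any API; java.util.Base64's basic encoder produces the same single-line output as android.util.Base64 with the NO_WRAP flag.

```java
import java.util.Base64;

// Sketch: the same bandwidth-saving math as scaleBitmapDown, decoupled from
// android.graphics.Bitmap so the arithmetic is easy to verify in isolation.
public class ImagePrep {

    // Returns {width, height} scaled so the longest side equals maxDimension,
    // preserving aspect ratio (mirrors the Bitmap logic above).
    static int[] scaledDimensions(int originalWidth, int originalHeight, int maxDimension) {
        int resizedWidth = maxDimension;
        int resizedHeight = maxDimension;
        if (originalHeight > originalWidth) {
            resizedHeight = maxDimension;
            resizedWidth = (int) (resizedHeight * (float) originalWidth / (float) originalHeight);
        } else if (originalWidth > originalHeight) {
            resizedWidth = maxDimension;
            resizedHeight = (int) (resizedWidth * (float) originalHeight / (float) originalWidth);
        }
        return new int[] { resizedWidth, resizedHeight };
    }

    // Off Android, java.util.Base64's basic encoder matches
    // android.util.Base64.encodeToString(bytes, Base64.NO_WRAP):
    // a single line with no embedded newlines.
    static String toBase64(byte[] imageBytes) {
        return Base64.getEncoder().encodeToString(imageBytes);
    }

    public static void main(String[] args) {
        int[] landscape = scaledDimensions(1280, 960, 640); // 640 x 480
        int[] portrait = scaledDimensions(600, 800, 640);   // 480 x 640
        System.out.println(landscape[0] + "x" + landscape[1]);
        System.out.println(portrait[0] + "x" + portrait[1]);
        System.out.println(toBase64(new byte[] { 1, 2, 3 })); // prints "AQID"
    }
}
```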
2. Invoke the callable function to recognize landmarks
To recognize landmarks in an image, invoke the callable function, passing a JSON Cloud Vision request.

First, initialize an instance of Cloud Functions:
Kotlin

```kotlin
private lateinit var functions: FirebaseFunctions
// ...
functions = Firebase.functions
```

Java

```java
private FirebaseFunctions mFunctions;
// ...
mFunctions = FirebaseFunctions.getInstance();
```

Define a method for invoking the function:
Kotlin

```kotlin
private fun annotateImage(requestJson: String): Task<JsonElement> {
    return functions
        .getHttpsCallable("annotateImage")
        .call(requestJson)
        .continueWith { task ->
            // This continuation runs on either success or failure, but if the task
            // has failed then result will throw an Exception which will be
            // propagated down.
            val result = task.result?.data
            JsonParser.parseString(Gson().toJson(result))
        }
}
```

Java

```java
private Task<JsonElement> annotateImage(String requestJson) {
    return mFunctions
            .getHttpsCallable("annotateImage")
            .call(requestJson)
            .continueWith(new Continuation<HttpsCallableResult, JsonElement>() {
                @Override
                public JsonElement then(@NonNull Task<HttpsCallableResult> task) {
                    // This continuation runs on either success or failure, but if the task
                    // has failed then getResult() will throw an Exception which will be
                    // propagated down.
                    return JsonParser.parseString(new Gson().toJson(task.getResult().getData()));
                }
            });
}
```

Create a JSON request with type LANDMARK_DETECTION:

Kotlin

```kotlin
// Create json request to cloud vision
val request = JsonObject()
// Add image to request
val image = JsonObject()
image.add("content", JsonPrimitive(base64encoded))
request.add("image", image)
// Add features to the request
val feature = JsonObject()
feature.add("maxResults", JsonPrimitive(5))
feature.add("type", JsonPrimitive("LANDMARK_DETECTION"))
val features = JsonArray()
features.add(feature)
request.add("features", features)
```

Java

```java
// Create json request to cloud vision
JsonObject request = new JsonObject();
// Add image to request
JsonObject image = new JsonObject();
image.add("content", new JsonPrimitive(base64encoded));
request.add("image", image);
// Add features to the request
JsonObject feature = new JsonObject();
feature.add("maxResults", new JsonPrimitive(5));
feature.add("type", new JsonPrimitive("LANDMARK_DETECTION"));
JsonArray features = new JsonArray();
features.add(feature);
request.add("features", features);
```

Finally, invoke the function:

Kotlin

```kotlin
annotateImage(request.toString())
    .addOnCompleteListener { task ->
        if (!task.isSuccessful) {
            // Task failed with an exception
            // ...
        } else {
            // Task completed successfully
            // ...
        }
    }
```

Java

```java
annotateImage(request.toString())
        .addOnCompleteListener(new OnCompleteListener<JsonElement>() {
            @Override
            public void onComplete(@NonNull Task<JsonElement> task) {
                if (!task.isSuccessful()) {
                    // Task failed with an exception
                    // ...
                } else {
                    // Task completed successfully
                    // ...
                }
            }
        });
```
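For reference, the request object assembled above serializes to JSON in roughly the following shape (the content value here is a placeholder for your base64-encoded image string):

```json
{
  "image": { "content": "<base64-encoded image>" },
  "features": [
    { "maxResults": 5, "type": "LANDMARK_DETECTION" }
  ]
}
```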
3. Get information about the recognized landmarks
If the landmark recognition operation succeeds, a JSON response of BatchAnnotateImagesResponse will be returned in the task's result. Each object in the landmarkAnnotations
array represents a landmark that was recognized in the image. For each landmark,
you can get its bounding coordinates in the input image, the landmark's name,
its latitude and longitude, its Knowledge Graph entity ID (if available), and
the confidence score of the match. For example:
Kotlin

```kotlin
for (label in task.result!!.asJsonArray[0].asJsonObject["landmarkAnnotations"].asJsonArray) {
    val labelObj = label.asJsonObject
    val landmarkName = labelObj["description"]
    val entityId = labelObj["mid"]
    val score = labelObj["score"]
    val bounds = labelObj["boundingPoly"]
    // Multiple locations are possible, e.g., the location of the depicted
    // landmark and the location the picture was taken.
    for (loc in labelObj["locations"].asJsonArray) {
        val latitude = loc.asJsonObject["latLng"].asJsonObject["latitude"]
        val longitude = loc.asJsonObject["latLng"].asJsonObject["longitude"]
    }
}
```
Java

```java
for (JsonElement label : task.getResult().getAsJsonArray().get(0).getAsJsonObject().get("landmarkAnnotations").getAsJsonArray()) {
    JsonObject labelObj = label.getAsJsonObject();
    String landmarkName = labelObj.get("description").getAsString();
    String entityId = labelObj.get("mid").getAsString();
    float score = labelObj.get("score").getAsFloat();
    JsonObject bounds = labelObj.get("boundingPoly").getAsJsonObject();
    // Multiple locations are possible, e.g., the location of the depicted
    // landmark and the location the picture was taken.
    for (JsonElement loc : labelObj.get("locations").getAsJsonArray()) {
        JsonObject latLng = loc.getAsJsonObject().get("latLng").getAsJsonObject();
        double latitude = latLng.get("latitude").getAsDouble();
        double longitude = latLng.get("longitude").getAsDouble();
    }
}
```
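For orientation, the response the loops above iterate over looks roughly like the following. All values here are illustrative placeholders, not real API output; the outer array holds one entry per annotated image:

```json
[
  {
    "landmarkAnnotations": [
      {
        "mid": "/m/xxxxx",
        "description": "Example Landmark",
        "score": 0.87,
        "boundingPoly": {
          "vertices": [
            { "x": 10, "y": 20 },
            { "x": 210, "y": 20 },
            { "x": 210, "y": 180 },
            { "x": 10, "y": 180 }
          ]
        },
        "locations": [
          { "latLng": { "latitude": 48.8584, "longitude": 2.2945 } }
        ]
      }
    ]
  }
]
```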