Class FaceAnnotation (3.3.0)
public final class FaceAnnotation extends GeneratedMessageV3 implements FaceAnnotationOrBuilder

A face annotation object contains the results of face detection.

Protobuf type google.cloud.vision.v1.FaceAnnotation

Inheritance

Object > AbstractMessageLite<MessageType,BuilderType> > AbstractMessage > GeneratedMessageV3 > FaceAnnotation

Implements

FaceAnnotationOrBuilder
Static Fields

ANGER_LIKELIHOOD_FIELD_NUMBER
public static final int ANGER_LIKELIHOOD_FIELD_NUMBER

BLURRED_LIKELIHOOD_FIELD_NUMBER
public static final int BLURRED_LIKELIHOOD_FIELD_NUMBER

BOUNDING_POLY_FIELD_NUMBER
public static final int BOUNDING_POLY_FIELD_NUMBER

DETECTION_CONFIDENCE_FIELD_NUMBER
public static final int DETECTION_CONFIDENCE_FIELD_NUMBER

FD_BOUNDING_POLY_FIELD_NUMBER
public static final int FD_BOUNDING_POLY_FIELD_NUMBER

HEADWEAR_LIKELIHOOD_FIELD_NUMBER
public static final int HEADWEAR_LIKELIHOOD_FIELD_NUMBER

JOY_LIKELIHOOD_FIELD_NUMBER
public static final int JOY_LIKELIHOOD_FIELD_NUMBER

LANDMARKING_CONFIDENCE_FIELD_NUMBER
public static final int LANDMARKING_CONFIDENCE_FIELD_NUMBER

LANDMARKS_FIELD_NUMBER
public static final int LANDMARKS_FIELD_NUMBER

PAN_ANGLE_FIELD_NUMBER
public static final int PAN_ANGLE_FIELD_NUMBER

ROLL_ANGLE_FIELD_NUMBER
public static final int ROLL_ANGLE_FIELD_NUMBER

SORROW_LIKELIHOOD_FIELD_NUMBER
public static final int SORROW_LIKELIHOOD_FIELD_NUMBER

SURPRISE_LIKELIHOOD_FIELD_NUMBER
public static final int SURPRISE_LIKELIHOOD_FIELD_NUMBER

TILT_ANGLE_FIELD_NUMBER
public static final int TILT_ANGLE_FIELD_NUMBER

UNDER_EXPOSED_LIKELIHOOD_FIELD_NUMBER
public static final int UNDER_EXPOSED_LIKELIHOOD_FIELD_NUMBER

Static Methods
getDefaultInstance()
public static FaceAnnotation getDefaultInstance()

getDescriptor()
public static final Descriptors.Descriptor getDescriptor()

newBuilder()
public static FaceAnnotation.Builder newBuilder()

newBuilder(FaceAnnotation prototype)
public static FaceAnnotation.Builder newBuilder(FaceAnnotation prototype)

parseDelimitedFrom(InputStream input)
public static FaceAnnotation parseDelimitedFrom(InputStream input)

parseDelimitedFrom(InputStream input, ExtensionRegistryLite extensionRegistry)
public static FaceAnnotation parseDelimitedFrom(InputStream input, ExtensionRegistryLite extensionRegistry)

parseFrom(byte[] data)
public static FaceAnnotation parseFrom(byte[] data)

parseFrom(byte[] data, ExtensionRegistryLite extensionRegistry)
public static FaceAnnotation parseFrom(byte[] data, ExtensionRegistryLite extensionRegistry)

parseFrom(ByteString data)
public static FaceAnnotation parseFrom(ByteString data)

parseFrom(ByteString data, ExtensionRegistryLite extensionRegistry)
public static FaceAnnotation parseFrom(ByteString data, ExtensionRegistryLite extensionRegistry)

parseFrom(CodedInputStream input)
public static FaceAnnotation parseFrom(CodedInputStream input)

parseFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public static FaceAnnotation parseFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)

parseFrom(InputStream input)
public static FaceAnnotation parseFrom(InputStream input)

parseFrom(InputStream input, ExtensionRegistryLite extensionRegistry)
public static FaceAnnotation parseFrom(InputStream input, ExtensionRegistryLite extensionRegistry)

parseFrom(ByteBuffer data)
public static FaceAnnotation parseFrom(ByteBuffer data)

parseFrom(ByteBuffer data, ExtensionRegistryLite extensionRegistry)
public static FaceAnnotation parseFrom(ByteBuffer data, ExtensionRegistryLite extensionRegistry)

parser()
public static Parser<FaceAnnotation> parser()

Methods
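The static parse* overloads above follow the standard protobuf parsing pattern: build or receive a message, serialize it, and reconstruct it from bytes. A minimal round-trip sketch, assuming google-cloud-vision and protobuf-java are on the classpath (the field values are illustrative):

```java
import com.google.cloud.vision.v1.FaceAnnotation;
import com.google.cloud.vision.v1.Likelihood;
import com.google.protobuf.InvalidProtocolBufferException;

public class FaceAnnotationRoundTrip {
    public static void main(String[] args) throws InvalidProtocolBufferException {
        // Build a message by hand (normally you would receive one
        // inside an AnnotateImageResponse from the Vision API).
        FaceAnnotation original = FaceAnnotation.newBuilder()
                .setJoyLikelihood(Likelihood.VERY_LIKELY)
                .setDetectionConfidence(0.92f)
                .build();

        // Serialize to bytes and parse back with the byte[] overload.
        byte[] wire = original.toByteArray();
        FaceAnnotation parsed = FaceAnnotation.parseFrom(wire);

        // Protobuf messages implement value equality, so the round trip is lossless.
        System.out.println(parsed.equals(original)); // prints "true"
    }
}
```

The ExtensionRegistryLite overloads matter only when the message carries proto2 extensions; for this message the single-argument forms are sufficient.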
equals(Object obj)
public boolean equals(Object obj)

getAngerLikelihood()
public Likelihood getAngerLikelihood()
Anger likelihood.
.google.cloud.vision.v1.Likelihood anger_likelihood = 11;
Returns: the angerLikelihood.

getAngerLikelihoodValue()
public int getAngerLikelihoodValue()
Anger likelihood.
.google.cloud.vision.v1.Likelihood anger_likelihood = 11;
Returns: the enum numeric value on the wire for angerLikelihood.
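Each likelihood field comes in two forms: an enum getter and a raw-int getter. The int form returns the number actually sent on the wire, which is useful when the server sends a value this client version does not yet know. A self-contained sketch of that mapping, using a local mirror of the Likelihood wire numbers (assumption: UNKNOWN = 0 through VERY_LIKELY = 5, as declared in the v1 proto; `forNumber` returning null stands in for the generated enum's UNRECOGNIZED handling):

```java
public class LikelihoodDemo {
    // Local mirror of the google.cloud.vision.v1.Likelihood wire numbers.
    enum Likelihood {
        UNKNOWN(0), VERY_UNLIKELY(1), UNLIKELY(2), POSSIBLE(3), LIKELY(4), VERY_LIKELY(5);

        final int number;
        Likelihood(int number) { this.number = number; }

        // Analogue of getAngerLikelihoodValue(): the raw number on the wire.
        int getNumber() { return number; }

        // Analogue of resolving the enum from the wire number; null stands in
        // for the UNRECOGNIZED value a generated proto enum would return.
        static Likelihood forNumber(int n) {
            for (Likelihood l : values()) {
                if (l.number == n) return l;
            }
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(Likelihood.VERY_LIKELY.getNumber()); // prints 5
        System.out.println(Likelihood.forNumber(3));            // prints POSSIBLE
        System.out.println(Likelihood.forNumber(99));           // prints null
    }
}
```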
getBlurredLikelihood()
public Likelihood getBlurredLikelihood()
Blurred likelihood.
.google.cloud.vision.v1.Likelihood blurred_likelihood = 14;
Returns: the blurredLikelihood.

getBlurredLikelihoodValue()
public int getBlurredLikelihoodValue()
Blurred likelihood.
.google.cloud.vision.v1.Likelihood blurred_likelihood = 14;
Returns: the enum numeric value on the wire for blurredLikelihood.
getBoundingPoly()
public BoundingPoly getBoundingPoly()
The bounding polygon around the face. The coordinates of the bounding box are in the original image's scale. The bounding box is computed to "frame" the face in accordance with human expectations. It is based on the landmarker results. Note that one or more x and/or y coordinates may not be generated in the BoundingPoly (the polygon will be unbounded) if only a partial face appears in the image to be annotated.
.google.cloud.vision.v1.BoundingPoly bounding_poly = 1;
Returns: the boundingPoly.

getBoundingPolyOrBuilder()
public BoundingPolyOrBuilder getBoundingPolyOrBuilder()
The bounding polygon around the face. (See getBoundingPoly() for the full field description.)
.google.cloud.vision.v1.BoundingPoly bounding_poly = 1;
getDefaultInstanceForType()
public FaceAnnotation getDefaultInstanceForType()

getDetectionConfidence()
public float getDetectionConfidence()
Detection confidence. Range [0, 1].
float detection_confidence = 7;
Returns: the detectionConfidence.
getFdBoundingPoly()
public BoundingPoly getFdBoundingPoly()
The fd_bounding_poly bounding polygon is tighter than the boundingPoly, and encloses only the skin part of the face. Typically, it is used to eliminate the face from any image analysis that detects the "amount of skin" visible in an image. It is not based on the landmarker results, only on the initial face detection, hence the fd (face detection) prefix.
.google.cloud.vision.v1.BoundingPoly fd_bounding_poly = 2;
Returns: the fdBoundingPoly.

getFdBoundingPolyOrBuilder()
public BoundingPolyOrBuilder getFdBoundingPolyOrBuilder()
The fd_bounding_poly bounding polygon. (See getFdBoundingPoly() for the full field description.)
.google.cloud.vision.v1.BoundingPoly fd_bounding_poly = 2;
getHeadwearLikelihood()
public Likelihood getHeadwearLikelihood()
Headwear likelihood.
.google.cloud.vision.v1.Likelihood headwear_likelihood = 15;
Returns: the headwearLikelihood.

getHeadwearLikelihoodValue()
public int getHeadwearLikelihoodValue()
Headwear likelihood.
.google.cloud.vision.v1.Likelihood headwear_likelihood = 15;
Returns: the enum numeric value on the wire for headwearLikelihood.

getJoyLikelihood()
public Likelihood getJoyLikelihood()
Joy likelihood.
.google.cloud.vision.v1.Likelihood joy_likelihood = 9;
Returns: the joyLikelihood.

getJoyLikelihoodValue()
public int getJoyLikelihoodValue()
Joy likelihood.
.google.cloud.vision.v1.Likelihood joy_likelihood = 9;
Returns: the enum numeric value on the wire for joyLikelihood.

getLandmarkingConfidence()
public float getLandmarkingConfidence()
Face landmarking confidence. Range [0, 1].
float landmarking_confidence = 8;
Returns: the landmarkingConfidence.
getLandmarks(int index)
public FaceAnnotation.Landmark getLandmarks(int index)
Detected face landmarks.
repeated .google.cloud.vision.v1.FaceAnnotation.Landmark landmarks = 3;

getLandmarksCount()
public int getLandmarksCount()
Detected face landmarks.
repeated .google.cloud.vision.v1.FaceAnnotation.Landmark landmarks = 3;

getLandmarksList()
public List<FaceAnnotation.Landmark> getLandmarksList()
Detected face landmarks.
repeated .google.cloud.vision.v1.FaceAnnotation.Landmark landmarks = 3;

getLandmarksOrBuilder(int index)
public FaceAnnotation.LandmarkOrBuilder getLandmarksOrBuilder(int index)
Detected face landmarks.
repeated .google.cloud.vision.v1.FaceAnnotation.Landmark landmarks = 3;

getLandmarksOrBuilderList()
public List<? extends FaceAnnotation.LandmarkOrBuilder> getLandmarksOrBuilderList()
Detected face landmarks.
repeated .google.cloud.vision.v1.FaceAnnotation.Landmark landmarks = 3;
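These five accessors are the standard generated API for a repeated message field: a count, an index-based getter, and a list view. A hedged sketch of walking the landmarks (assumes `face` came from a Vision API response and that google-cloud-vision is on the classpath):

```java
import com.google.cloud.vision.v1.FaceAnnotation;

public class LandmarkDump {
    // Walks the repeated landmarks field with the index-based accessors,
    // printing each landmark's type and 2D position.
    static void dump(FaceAnnotation face) {
        for (int i = 0; i < face.getLandmarksCount(); i++) {
            FaceAnnotation.Landmark lm = face.getLandmarks(i);
            System.out.printf("%s: (%.1f, %.1f)%n",
                    lm.getType(),
                    lm.getPosition().getX(),
                    lm.getPosition().getY());
        }
    }
}
```

getLandmarksList() returns an immutable view, so the count/index pair and the list form are interchangeable; use whichever reads better at the call site.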
getPanAngle()
public float getPanAngle()
Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180, 180].
float pan_angle = 5;
Returns: the panAngle.

getParserForType()
public Parser<FaceAnnotation> getParserForType()

getRollAngle()
public float getRollAngle()
Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180, 180].
float roll_angle = 4;
Returns: the rollAngle.
getSerializedSize()
public int getSerializedSize()

getSorrowLikelihood()
public Likelihood getSorrowLikelihood()
Sorrow likelihood.
.google.cloud.vision.v1.Likelihood sorrow_likelihood = 10;
Returns: the sorrowLikelihood.

getSorrowLikelihoodValue()
public int getSorrowLikelihoodValue()
Sorrow likelihood.
.google.cloud.vision.v1.Likelihood sorrow_likelihood = 10;
Returns: the enum numeric value on the wire for sorrowLikelihood.

getSurpriseLikelihood()
public Likelihood getSurpriseLikelihood()
Surprise likelihood.
.google.cloud.vision.v1.Likelihood surprise_likelihood = 12;
Returns: the surpriseLikelihood.

getSurpriseLikelihoodValue()
public int getSurpriseLikelihoodValue()
Surprise likelihood.
.google.cloud.vision.v1.Likelihood surprise_likelihood = 12;
Returns: the enum numeric value on the wire for surpriseLikelihood.

getTiltAngle()
public float getTiltAngle()
Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180, 180].
float tilt_angle = 6;
Returns: the tiltAngle.
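Taken together, getRollAngle(), getPanAngle(), and getTiltAngle() describe the head pose in degrees: roll is in-plane rotation, pan is yaw, and tilt is pitch. A self-contained sketch of using the pan/tilt pair to test for a roughly frontal face (the inputs correspond to the getter values; the 20-degree threshold is an illustrative assumption, not an API constant):

```java
public class PoseCheck {
    // panAngle and tiltAngle correspond to face.getPanAngle() and
    // face.getTiltAngle(), both in degrees in [-180, 180].
    static boolean isRoughlyFrontal(float panAngle, float tiltAngle) {
        return Math.abs(panAngle) < 20f && Math.abs(tiltAngle) < 20f;
    }

    public static void main(String[] args) {
        System.out.println(isRoughlyFrontal(5f, -3f)); // prints "true"
        System.out.println(isRoughlyFrontal(90f, 0f)); // prints "false"
    }
}
```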
getUnderExposedLikelihood()
public Likelihood getUnderExposedLikelihood()
Under-exposed likelihood.
.google.cloud.vision.v1.Likelihood under_exposed_likelihood = 13;
Returns: the underExposedLikelihood.

getUnderExposedLikelihoodValue()
public int getUnderExposedLikelihoodValue()
Under-exposed likelihood.
.google.cloud.vision.v1.Likelihood under_exposed_likelihood = 13;
Returns: the enum numeric value on the wire for underExposedLikelihood.
getUnknownFields()
public final UnknownFieldSet getUnknownFields()

hasBoundingPoly()
public boolean hasBoundingPoly()
The bounding polygon around the face. (See getBoundingPoly() for the full field description.)
.google.cloud.vision.v1.BoundingPoly bounding_poly = 1;
Returns: whether the boundingPoly field is set.

hasFdBoundingPoly()
public boolean hasFdBoundingPoly()
The fd_bounding_poly bounding polygon. (See getFdBoundingPoly() for the full field description.)
.google.cloud.vision.v1.BoundingPoly fd_bounding_poly = 2;
Returns: whether the fdBoundingPoly field is set.
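Because bounding_poly and fd_bounding_poly are message-typed fields, their getters never return null: on an unset field, getBoundingPoly() returns BoundingPoly.getDefaultInstance(), and only the has* method distinguishes "unset" from "set". A hedged sketch of checking presence before use (assumes google-cloud-vision on the classpath; the empty-vertices check is an illustrative extra guard, not required by the API):

```java
import com.google.cloud.vision.v1.FaceAnnotation;

public class PolyPresence {
    // Returns true only when the boundingPoly field was actually set
    // and carries at least one vertex.
    static boolean hasUsablePoly(FaceAnnotation face) {
        return face.hasBoundingPoly()
                && face.getBoundingPoly().getVerticesCount() > 0;
    }
}
```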
hashCode()
public int hashCode()

internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()

isInitialized()
public final boolean isInitialized()

newBuilderForType()
public FaceAnnotation.Builder newBuilderForType()

newBuilderForType(GeneratedMessageV3.BuilderParent parent)
protected FaceAnnotation.Builder newBuilderForType(GeneratedMessageV3.BuilderParent parent)

newInstance(GeneratedMessageV3.UnusedPrivateParameter unused)
protected Object newInstance(GeneratedMessageV3.UnusedPrivateParameter unused)

toBuilder()
public FaceAnnotation.Builder toBuilder()

writeTo(CodedOutputStream output)
public void writeTo(CodedOutputStream output)