What is the ML Kit service for, and what problems can it solve during development?
Today I'd like to introduce one of the most important functions of ML Kit: face recognition.
Face recognition overview
This feature can detect facial orientation, facial expressions (joy, disgust, surprise, sadness, and anger), attributes (gender, age, clothing and accessories), and whether the eyes are open or closed. It can also locate the coordinates of the nose, eyes, lips, and eyebrows, and even detect multiple faces at the same time.
And most importantly, the face recognition feature is absolutely free and works on any Android phone.
Developing an automatic smile-capture feature for group photos
I'll walk you through how to use the features described above to build a demo of an automatic smile-capture feature. You can download the demo source code at github.com/HMS-Core/hms-ml-demo.
1. Preparation
When integrating any HMS Core development kit, the preparation steps are almost the same: add the Maven repository and import the SDK.
1.1 Add the Maven repository provided by Huawei to your build.gradle file at the project level
Add the Maven repository address:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}

allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK dependencies to build.gradle file at application level
Import the face recognition SDK and core SDK:
dependencies {
    // Import the base SDK
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    // Import the face detection SDK
    implementation 'com.huawei.hms:ml-computer-vision-face-recognition-model:1.0.2.300'
}
1.3 Add the automatic model download setting to the AndroidManifest.xml file
This setting is mainly used to update the model. The model can be downloaded automatically and updated on the device based on an optimized algorithm.
<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="face" />
    </application>
</manifest>
1.4 Declare the camera and storage permissions in the AndroidManifest.xml file
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- Storage permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
2. Code development
2.1 Create a face analyzer to take a photo when a smile is detected
First, configure smile-triggered photo capture as follows:
(1) Configure the analyzer settings.
(2) Pass the settings to the analyzer.
(3) Override transactResult in analyzer.setTransactor to process the content returned by face detection. Specifically, a confidence value (the probability that a face is smiling) is returned for each face. When the confidence reaches the configured threshold, the camera automatically takes a photo.
private MLFaceAnalyzer analyzer;

private void createFaceAnalyzer() {
    // Configure the analyzer: detect facial features (emotions), skip key points,
    // ignore faces smaller than 10% of the image, and enable face tracking.
    MLFaceAnalyzerSetting setting =
            new MLFaceAnalyzerSetting.Factory()
                    .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
                    .setKeyPointType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_KEYPOINTS)
                    .setMinFaceProportion(0.1f)
                    .setTracingAllowed(true)
                    .create();
    this.analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);
    this.analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLFace>() {
        @Override
        public void destroy() {
        }

        @Override
        public void transactResult(MLAnalyzer.Result<MLFace> result) {
            SparseArray<MLFace> faceSparseArray = result.getAnalyseList();
            // Count the faces whose smiling probability exceeds the threshold.
            int flag = 0;
            for (int i = 0; i < faceSparseArray.size(); i++) {
                MLFaceEmotion emotion = faceSparseArray.valueAt(i).getEmotions();
                if (emotion.getSmilingProbability() > smilingPossibility) {
                    flag++;
                }
            }
            // If enough of the detected faces are smiling, trigger the photo.
            if (flag > faceSparseArray.size() * smilingRate && safeToTakePicture) {
                safeToTakePicture = false;
                mHandler.sendEmptyMessage(TAKE_PHOTO);
            }
        }
    });
}
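The snippet above references several members that are defined elsewhere in the demo activity: the smile-probability threshold (smilingPossibility), the share of faces that must be smiling (smilingRate), a guard flag (safeToTakePicture), and a main-thread handler that reacts to the TAKE_PHOTO and STOP_PREVIEW messages. A rough sketch of how they might be declared is shown below; the threshold values and the STOP_PREVIEW handling are assumptions, so refer to the demo source for the actual ones.

private static final int TAKE_PHOTO = 1;
private static final int STOP_PREVIEW = 2;

// Example values only; the demo defines its own thresholds.
private float smilingPossibility = 0.95f; // per-face smile confidence threshold
private float smilingRate = 0.8f;         // share of detected faces that must be smiling
private boolean safeToTakePicture = true; // prevents repeated captures while one is in progress

private final Handler mHandler = new Handler(Looper.getMainLooper()) {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
            case TAKE_PHOTO:
                // Triggered by the analyzer when enough faces are smiling.
                takePhoto();
                break;
            case STOP_PREVIEW:
                // Stop or pause the camera preview here; the demo handles this itself.
                break;
            default:
                break;
        }
    }
};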
Second, save the photo once it has been taken:
private void takePhoto() {
    this.mLensEngine.photograph(null,
            new LensEngine.PhotographListener() {
                @Override
                public void takenPhotograph(byte[] bytes) {
                    // Stop the preview, decode the JPEG bytes, and save the image.
                    mHandler.sendEmptyMessage(STOP_PREVIEW);
                    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
                    saveBitmapToDisk(bitmap);
                }
            });
}
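saveBitmapToDisk() belongs to the demo app rather than the SDK, and its body isn't shown above. A minimal sketch, assuming the photo is written as a timestamped JPEG into the app-specific pictures directory (the demo may instead save to shared storage, which is why it requests WRITE_EXTERNAL_STORAGE), might look like this:

private void saveBitmapToDisk(Bitmap bitmap) {
    // Write the bitmap to the app-specific pictures directory as a JPEG.
    File dir = getExternalFilesDir(Environment.DIRECTORY_PICTURES);
    File file = new File(dir, "smile_" + System.currentTimeMillis() + ".jpg");
    try (FileOutputStream out = new FileOutputStream(file)) {
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
    } catch (IOException e) {
        Log.e("LiveFaceAnalyseActivity", "Failed to save photo", e);
    }
}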
2.2 Create a LensEngine instance to capture the dynamic camera stream and pass it to the analyzer
private void createLensEngine() {
    Context context = this.getApplicationContext();
    // Create LensEngine
    this.mLensEngine = new LensEngine.Creator(context, this.analyzer)
            .setLensType(this.lensType)
            .applyDisplayDimension(640, 480)
            .applyFps(25.0f)
            .enableAutomaticFocus(true)
            .create();
}
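Creating the LensEngine only configures it; the camera stream doesn't start until the engine is run against a display surface. In the demo this is driven by the preview view, but a minimal sketch of the lifecycle, assuming a SurfaceHolder obtained from a SurfaceView in the layout and the general HMS ML Kit cleanup pattern, could look like the following:

private void startLensEngine(SurfaceHolder holder) {
    if (this.mLensEngine != null) {
        try {
            // Bind the camera stream to the surface; frames are fed to the analyzer.
            this.mLensEngine.run(holder);
        } catch (IOException e) {
            Log.e("LiveFaceAnalyseActivity", "Failed to start LensEngine", e);
            this.mLensEngine.release();
            this.mLensEngine = null;
        }
    }
}

@Override
protected void onDestroy() {
    super.onDestroy();
    // Release camera and analyzer resources when the activity is destroyed.
    if (this.mLensEngine != null) {
        this.mLensEngine.release();
    }
    if (this.analyzer != null) {
        try {
            this.analyzer.stop();
        } catch (IOException e) {
            Log.e("LiveFaceAnalyseActivity", "Failed to stop analyzer", e);
        }
    }
}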
2.3 Request permission to access the dynamic camera stream, and add the code that creates the analyzer and LensEngine
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    this.setContentView(R.layout.activity_live_face_analyse);
    if (savedInstanceState != null) {
        this.lensType = savedInstanceState.getInt("lensType");
    }
    this.mPreview = this.findViewById(R.id.preview);
    this.createFaceAnalyzer();
    this.findViewById(R.id.facingSwitch).setOnClickListener(this);
    // Checking Camera Permissions
    if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
    } else {
        this.requestCameraPermission();
    }
}

private void requestCameraPermission() {
    final String[] permissions = new String[]{Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE};
    if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CAMERA)) {
        ActivityCompat.requestPermissions(this, permissions, LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE);
        return;
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    if (requestCode != LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        return;
    }
    if (grantResults.length != 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
        return;
    }
}
Going further
Pretty simple, isn't it? Even if you're unfamiliar with the development process, you can still create a useful new feature in just half an hour! Now let's see what this function can do.
Taking a photo of one smiling person:
Taking a photo of several smiling people:
How else can you use face recognition? Here are some options:
1. Beautify facial features.
2. Create fun effects by exaggerating or changing facial contours and features.
3. Build an age check that prevents children from accessing inappropriate content.
4. Design an eye protection feature by measuring how long the user looks at the screen.
5. Detect whether a live person is in front of the camera by issuing random commands (for example, shake your head, blink, open your mouth).
6. Recommend products to users based on their age and gender.
For more details, visit our website: developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4
We will share other ways to use the HUAWEI ML Kit. Stay tuned!