The ViewModel class is a business logic or screen level state holder, as explained in the ViewModel overview on the Android Developers website. It exposes state to the UI and encapsulates related business logic.
The main advantage of using the ViewModel class is that it caches state and persists it through configuration changes. This means that the UI does not have to fetch data again when navigating between activities or following configuration changes, such as screen rotation.
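To make this concrete, here is a minimal sketch (not part of the demo app): a counter held in a ViewModel survives screen rotation, whereas the same value stored in a plain Activity field would be reset when the activity is recreated.

import androidx.lifecycle.ViewModel
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow

// Minimal illustrative sketch, not part of the demo app.
class CounterViewModel : ViewModel() {
    // Held in the ViewModel, this state outlives configuration changes
    // such as screen rotation.
    private val _count = MutableStateFlow(0)
    val count: StateFlow<Int> = _count.asStateFlow()

    fun increment() {
        _count.value++
    }
}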
You can now add the Jetpack Lifecycle libraries to your app. Open libs.versions.toml and append the following line to the end of the [versions] section. This defines the version of the Jetpack Lifecycle libraries that you will be using:
lifecycle = "2.8.7"
Next, insert the following line into the [libraries] section, ideally between androidx-appcompat and material. This declares the Jetpack Lifecycle ViewModel Kotlin extensions (KTX) artifact:
androidx-lifecycle-viewmodel = { group = "androidx.lifecycle", name = "lifecycle-viewmodel-ktx", version.ref = "lifecycle" }
Then open build.gradle.kts in your project's app directory and insert the following line into the dependencies block, ideally between implementation(libs.androidx.constraintlayout) and implementation(libs.camera.core):
implementation(libs.androidx.lifecycle.viewmodel)
Create a new file named MainViewModel.kt and place it in the same directory as MainActivity.kt. Then copy and paste the code below into it:
package com.example.holisticselfiedemo
import android.content.Context
import android.util.Log
import androidx.camera.core.ImageProxy
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.launch
class MainViewModel : ViewModel(), HolisticRecognizerHelper.Listener {

    private val holisticRecognizerHelper = HolisticRecognizerHelper()

    fun setupHelper(context: Context) {
        viewModelScope.launch {
            holisticRecognizerHelper.apply {
                listener = this@MainViewModel
                setup(context)
            }
        }
    }

    fun shutdownHelper() {
        viewModelScope.launch {
            holisticRecognizerHelper.apply {
                listener = null
                shutdown()
            }
        }
    }

    fun recognizeLiveStream(imageProxy: ImageProxy) {
        holisticRecognizerHelper.recognizeLiveStream(
            imageProxy = imageProxy,
        )
    }

    override fun onFaceLandmarkerResults(resultBundle: FaceResultBundle) {
        Log.i(TAG, "Face result: $resultBundle")
    }

    override fun onFaceLandmarkerError(error: String, errorCode: Int) {
        Log.e(TAG, "Face landmarker error $errorCode: $error")
    }

    override fun onGestureResults(resultBundle: GestureResultBundle) {
        Log.i(TAG, "Gesture result: $resultBundle")
    }

    override fun onGestureError(error: String, errorCode: Int) {
        Log.e(TAG, "Gesture recognizer error $errorCode: $error")
    }

    companion object {
        private const val TAG = "MainViewModel"
    }
}
You might notice that success and failure messages are logged with different APIs. For more information on log level guidelines, see Understanding Logging: Log Level Guidelines.
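As a quick illustration of those guidelines (a sketch, not part of the demo code), android.util.Log exposes one method per severity, from most to least verbose:

import android.util.Log

private const val TAG = "LogLevelDemo" // hypothetical tag for this sketch

fun logLevelExamples() {
    Log.v(TAG, "Verbose: fine-grained tracing, usually disabled in release builds")
    Log.d(TAG, "Debug: diagnostic detail useful during development")
    Log.i(TAG, "Info: expected, noteworthy events, such as successful results")
    Log.w(TAG, "Warning: unexpected but recoverable situations")
    Log.e(TAG, "Error: failures that need attention, such as recognizer errors")
}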
Now connect MainViewModel to MainActivity by inserting the following line into MainActivity.kt, above the onCreate method. Do not forget to import the viewModels extension function via import androidx.activity.viewModels:
private val viewModel: MainViewModel by viewModels()
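For orientation, here is a sketch of how the delegate sits in the activity (assuming your MainActivity extends AppCompatActivity; your file will contain more members than shown):

import android.os.Bundle
import androidx.activity.viewModels
import androidx.appcompat.app.AppCompatActivity

// Orientation sketch only.
class MainActivity : AppCompatActivity() {
    // Created lazily, scoped to this activity, and retained across
    // configuration changes by the viewModels() delegate.
    private val viewModel: MainViewModel by viewModels()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // ...
    }
}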
Next, add the following flag and lifecycle callbacks to MainActivity. They set the helper up when the activity returns to the foreground and shut it down when it leaves:

private var isHelperReady = false

override fun onResume() {
    super.onResume()
    viewModel.setupHelper(baseContext)
    isHelperReady = true
}

override fun onPause() {
    super.onPause()
    isHelperReady = false
    viewModel.shutdownHelper()
}
Next, add an imageAnalysis member variable to MainActivity, alongside the other camera-related member variables:
private var imageAnalysis: ImageAnalysis? = null
In MainActivity's bindCameraUseCases() method, insert the following code after preview is built and above the cameraProvider.unbindAll() call:
// ImageAnalysis. Using RGBA 8888 to match how MediaPipe models work
imageAnalysis =
    ImageAnalysis.Builder()
        .setResolutionSelector(resolutionSelector)
        .setTargetRotation(targetRotation)
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
        .build()
        // The analyzer can then be assigned to the instance
        .also {
            it.setAnalyzer(
                // Forcing a serial executor without parallelism
                // to avoid packets sent to MediaPipe out-of-order
                Dispatchers.Default.limitedParallelism(1).asExecutor()
            ) { image ->
                if (isHelperReady) {
                    viewModel.recognizeLiveStream(image)
                } else {
                    // Unconsumed frames must be closed, or CameraX stops
                    // delivering new frames to the analyzer.
                    image.close()
                }
            }
        }
The isHelperReady flag is a lightweight mechanism to prevent camera image frames from being sent to the helper once you have started shutting it down. Note that the snippet above relies on kotlinx.coroutines (import kotlinx.coroutines.Dispatchers and kotlinx.coroutines.asExecutor), and depending on your kotlinx.coroutines version, limitedParallelism may also require opting in to ExperimentalCoroutinesApi.
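Because the flag is written on the main thread (in onResume and onPause) but read on the background analyzer executor, one optional hardening step, not part of the original code, is to mark it @Volatile so writes are immediately visible across threads:

// Optional hardening sketch: @Volatile guarantees that writes to the flag
// on the main thread are visible to the analyzer's background executor.
@Volatile
private var isHelperReady = false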
Finally, bind imageAnalysis to camera along with the other use cases:
camera = cameraProvider.bindToLifecycle(
    this, cameraSelector, preview, imageAnalysis
)
Build and run the app. You should now see Face result: ... and Gesture result: ... messages in your Logcat (filter by the MainViewModel tag if the output is noisy), which prove that the MediaPipe tasks are functioning properly. Good job!