Working with the camera on the HMS platform: improving shooting quality and adding various modes to your applications

Hello, Habr! HUAWEI and HONOR smartphone users have a wide range of shooting modes and effects available by default: Night Scene, Scene Recognition, HDR, Wide Aperture, etc. Using the Camera Engine toolkit, these and other modes can be added to any application. Under the cut I’ll tell you about the capabilities of this SDK and show you how to use the camera to the maximum.

Camera Engine Integration

Camera Engine is an SDK with pre-configured shooting modes that can easily be integrated into third-party applications. It is recommended to use Android Studio 3.0.1 or higher to develop applications compatible with Camera Engine. They will work on Huawei phones with a Kirin 980 chipset or newer and EMUI 10.0 or later.

First you need to register as a developer and complete identity verification on the developer portal; read more about this in the Huawei ID registration section. Among other things, you will need to sign a cooperation agreement: the system will automatically prompt you to do this when you download the SDK.

Read more about creating and configuring a project in our integration guide.

General mode operation

Create the Camera Kit mode when the application is running and preview is available:

// In version 1.0.1, it is recommended to catch exceptions. In 1.0.2 and later, exceptions do not need to be caught; the CameraKit instance is obtained directly.
if (!isGetInstance) {
   try {
       mCameraKit = CameraKit.getInstance(getApplicationContext());
   } catch (NoSuchMethodError e) {
       Log.w(TAG, "this version camerakit does not contain VersionInfoInterface");
   } finally {
       isGetInstance = true;
   }
}
// If a mobile phone does not support CameraKit, or the CameraKit SDK version is incompatible with the capability layer, the return result is empty.
if (mCameraKit == null) {
   return;
}
// Query the camera list of the mobile phone. Currently, only the rear camera supports the super night mode.
String[] cameraLists = mCameraKit.getCameraIdList();
// Query the modes supported by the current cameras.
int[] modes = mCameraKit.getSupportedModes(cameraLists[0]);
// Create a mode.
mCameraKit.createMode(cameraLists[0], mCurrentModeType, mModeStateCallback, mCameraKitHandler);
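The snippets in this article post callbacks to mCameraKitHandler, whose creation is not shown. A minimal sketch using the standard Android HandlerThread (the thread name is illustrative):

// Run CameraKit callbacks on a dedicated background thread.
HandlerThread cameraKitThread = new HandlerThread("CameraKitThread");
cameraKitThread.start();
mCameraKitHandler = new Handler(cameraKitThread.getLooper());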

We configure the settings depending on the characteristics of the mode and the requirements of the service:

// Configure the preview surface.
modeConfigBuilder.addPreviewSurface(surface);
// Set photographing parameters.
modeConfigBuilder.addCaptureImage(mCaptureSize, ImageFormat.JPEG);
// Configure the action data listener.
modeConfigBuilder.setDataCallback(actionDataCallback, mCameraKitHandler);
// Configure the action status listener.
modeConfigBuilder.setStateCallback(actionStateCallback, mCameraKitHandler);
// Configure the mode.
mMode.configure();

Start the preview when the mode state callback reports that the mode has been configured successfully:

mMode.startPreview();

Setting functional parameters:

// Query the functions supported by the mode.
List<CaptureRequest.Key<?>> parameters = mModeCharacteristics.getSupportedParameters();
// Set zoom.
int zoomSetResult = mMode.setZoom(level);

The camera takes a picture when the user touches the UI:

// In this example, the portrait mode is used by default, which simplifies the rotation angle setting logic.
mMode.setImageRotation(90);
// This example uses the default photo storage path.
mFile = new File(getExternalFilesDir(null), "pic.jpg");
// Take a photo. The photo is asynchronously called back through ActionDataCallback.onImageAvailable.
mMode.takePicture();

The mode state callback is used to track mode creation and control the overall workflow:

private final ModeStateCallback mModeStateCallback = new ModeStateCallback() {
   @Override
   public void onCreated(Mode mode) {
       // Configure the mode after the mode is created successfully.
   }
   @Override
   public void onConfigured(Mode mode) {
       // Start the preview after the mode is configured successfully.
   }
   // ...
};

The action data callback processes results asynchronously; for example, it notifies the application when photo data becomes available:

private final ActionDataCallback actionDataCallback = new ActionDataCallback() {
   @Override
   public void onImageAvailable(Mode mode, @Type int type, Image image) {
       // Store photos.
   }
};
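The onImageAvailable body above only notes that photos should be stored. A minimal sketch that writes the received JPEG Image into the mFile chosen earlier, using only standard Android and Java I/O classes (error handling is simplified):

private void saveImage(Image image) {
    // The JPEG data arrives in the single plane of the Image.
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] bytes = new byte[buffer.remaining()];
    buffer.get(bytes);
    try (FileOutputStream output = new FileOutputStream(mFile)) {
        output.write(bytes);
    } catch (IOException e) {
        Log.e(TAG, "Failed to save image", e);
    } finally {
        // Release the Image buffer once the data has been copied.
        image.close();
    }
}

Call saveImage(image) from onImageAvailable; for large files the write can be posted to a background handler.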

The action state callback handles action states asynchronously – for example, starting the preview or starting and stopping photo capture:

private final ActionStateCallback actionStateCallback = new ActionStateCallback() {
   @Override
   public void onPreview(Mode mode, int state, PreviewResult result) {
       // Preview starting callback.
   }
   @Override
   public void onTakePicture(Mode mode, int state, TakePictureResult result) {
       // Photographing action callback.
   }
};

When the application is closed, the occupied resources are released:

mMode.release();

Working with night mode

In night mode, instead of a single continuous long exposure, a series of short exposures is used, and the final picture is assembled algorithmically from the best parts of the resulting images.

To work in this mode, set mCurrentModeType to Mode.Type.SUPER_NIGHT_MODE. The rest is the same as in the general mode, except that after starting the preview you set the sensitivity and exposure time from the application's user interface (UI). If these parameters are not specified, the default settings are used.

Below is the method for setting the camera sensitivity; the exposure time (RequestKey.HW_SUPER_NIGHT_EXPOSURE) is set in the same way.

// Query the functions supported by the mode.
List<CaptureRequest.Key<?>> parameters = mModeCharacteristics.getSupportedParameters();
// If the sensitivity can be set
if ((parameters != null) && (parameters.contains(RequestKey.HW_SUPER_NIGHT_ISO))) {
   // Query the supported sensitivity range.
   List<Long> values = mModeCharacteristics.getParameterRange(RequestKey.HW_SUPER_NIGHT_ISO);
   // Set the first sensitivity parameter.
   mMode.setParameter(RequestKey.HW_SUPER_NIGHT_ISO, values.get(0));
}
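The exposure time is set by analogy; here is a minimal sketch using the RequestKey.HW_SUPER_NIGHT_EXPOSURE key mentioned above (the List<Long> value type mirrors the ISO example and is an assumption):

// If the exposure time can be set
if ((parameters != null) && (parameters.contains(RequestKey.HW_SUPER_NIGHT_EXPOSURE))) {
   // Query the supported exposure time range (List<Long> is assumed, by analogy with ISO).
   List<Long> exposures = mModeCharacteristics.getParameterRange(RequestKey.HW_SUPER_NIGHT_EXPOSURE);
   // Set the first supported exposure time as an example.
   mMode.setParameter(RequestKey.HW_SUPER_NIGHT_EXPOSURE, exposures.get(0));
}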

A touch on the user interface can stop the exposure and capture the photo. Call this API to stop taking the photo:

mMode.stopPicture();

The action state callback is again used for asynchronous processing; in night mode it also reports the exposure stages:

private final ActionStateCallback actionStateCallback = new ActionStateCallback() {
   @Override
   public void onPreview(Mode mode, int state, PreviewResult result) {
       // Preview starting callback.
   }
   @Override
   public void onTakePicture(Mode mode, int state, TakePictureResult result) {
       // Photographing action callback.
       switch (state) {
           case TakePictureResult.State.CAPTURE_STARTED:
               Log.d(TAG, "onState: STATE_CAPTURE_STARTED");
               break;
           case TakePictureResult.State.CAPTURE_EXPOSURE_BEGIN:
               /* When the long-time exposure mode is enabled, use the result class to obtain the exposure time required for the super night mode.
                  After receiving the exposure time return using the callback, you can call stopPicture to stop the exposure before the exposure time ends. */
               break;
           case TakePictureResult.State.CAPTURE_EXPOSURE_END:
               // The long-time exposure ends. When the exposure ends at the specified time or in advance, this status is called back.
               break;
           case TakePictureResult.State.CAPTURE_COMPLETED:
               // Photographing is complete.
               break;
           default:
               break;
       }
   }
};

Working with wide aperture mode

Wide aperture is a mode in which the background is blurred and the subject of the image stands out sharply.

To use this mode, set mCurrentModeType to Mode.Type.BOKEH_MODE. After starting the preview, set the aperture parameters:

// Query the functions supported by the mode.
List<CaptureRequest.Key<?>> parameters = mModeCharacteristics.getSupportedParameters();
// If the wide aperture function is supported,
if ((parameters != null) && (parameters.contains(RequestKey.HW_APERTURE))) {
   // Query the supported aperture level range.
   List<Float> values = mModeCharacteristics.getParameterRange(RequestKey.HW_APERTURE);
   // Set the first aperture level.
   mMode.setParameter(RequestKey.HW_APERTURE, values.get(0));
}
}
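In practice the aperture level is usually driven by a UI control. Below is a rough sketch of such wiring; the SeekBar and its name are purely illustrative and not part of the Camera Engine API, and values is the list obtained in the snippet above:

// Map a SeekBar position to one of the supported aperture levels (illustrative UI wiring).
apertureSeekBar.setMax(values.size() - 1);
apertureSeekBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        if (fromUser) {
            mMode.setParameter(RequestKey.HW_APERTURE, values.get(progress));
        }
    }
    @Override
    public void onStartTrackingTouch(SeekBar seekBar) { }
    @Override
    public void onStopTrackingTouch(SeekBar seekBar) { }
});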

Working with video

In this mode, you can apply real-time effects to the recording. For example, the AI movie function lets you adjust brightness and saturation and apply a film effect, so you do not have to spend time on post-processing.

To use this mode, set mCurrentModeType to Mode.Type.VIDEO_MODE and configure the parameters:

// Configure the preview surface.
modeConfigBuilder.addPreviewSurface(surface);
// Configure the recording surface.
modeConfigBuilder.addVideoSurface(videoSurface);
// Set photographing parameters.
modeConfigBuilder.addCaptureImage(mCaptureSize, ImageFormat.JPEG);
// Configure the action data listener.
modeConfigBuilder.setDataCallback(actionDataCallback, mCameraKitHandler);
// Configure the action status listener.
modeConfigBuilder.setStateCallback(actionStateCallback, mCameraKitHandler);
// Configure the mode.
mMode.configure();

After starting the preview, request and configure the supported features, for example AI movie:

// Query the functions supported by the mode.
List<CaptureRequest.Key<?>> parameters = mModeCharacteristics.getSupportedParameters();
// If the AI movie feature is supported,
if ((parameters != null) && (parameters.contains(RequestKey.HW_AI_MOVIE))) {
   // query the supported range of the AI movie feature
   List<Byte> values = mModeCharacteristics.getParameterRange(RequestKey.HW_AI_MOVIE);
   // and set the AI movie feature parameter (the first supported value is used as an example).
   mMode.setParameter(RequestKey.HW_AI_MOVIE, values.get(0));
}

After starting the preview, we call the video recording API:

// Start recording.
mMode.startRecording();
mMediaRecorder.start();
// Pause recording.
mMode.pauseRecording();
mMediaRecorder.pause();
// Resume recording.
mMode.resumeRecording();
mMediaRecorder.resume();
// Stop recording.
mMode.stopRecording();
mMediaRecorder.stop();
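The recording snippets above assume an already configured MediaRecorder whose input surface (videoSurface) was passed to addVideoSurface. A minimal sketch using only the standard Android MediaRecorder API; the output file name, resolution and frame rate are illustrative:

// Configure a MediaRecorder and obtain the surface passed to addVideoSurface().
mMediaRecorder = new MediaRecorder();
mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mMediaRecorder.setOutputFile(new File(getExternalFilesDir(null), "video.mp4").getAbsolutePath());
mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
mMediaRecorder.setVideoSize(1920, 1080);   // illustrative resolution
mMediaRecorder.setVideoFrameRate(30);      // illustrative frame rate
try {
    mMediaRecorder.prepare();
} catch (IOException e) {
    Log.e(TAG, "MediaRecorder prepare failed", e);
}
Surface videoSurface = mMediaRecorder.getSurface();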

Everything else is the same as when working in general mode.

Working with HDR mode

HDR (High Dynamic Range) is a mode for shooting in low light. It combines several shots of the same frame taken with different shutter speeds, thereby increasing the clarity of the image.

To integrate into the application, set mCurrentModeType to Mode.Type.HDR_MODE. Everything else is the same as when working with the general mode.
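As a brief sketch, the mode is created exactly as in the general flow, only with the HDR type; here getSupportedModes is checked first, reusing the objects from the first snippet (assuming the Mode.Type constants are the int values returned by getSupportedModes):

mCurrentModeType = Mode.Type.HDR_MODE;
int[] supportedModes = mCameraKit.getSupportedModes(cameraLists[0]);
// Create the HDR mode only if the current camera reports support for it.
for (int modeType : supportedModes) {
    if (modeType == mCurrentModeType) {
        mCameraKit.createMode(cameraLists[0], mCurrentModeType, mModeStateCallback, mCameraKitHandler);
        break;
    }
}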

Working with slow-motion modes

The slow-motion modes, Slow-mo and Super slow-mo, allow you to record video at 60, 120, 480, or 960 fps.

To use Slow-mo, set mCurrentModeType to Mode.Type.SLOW_MOTION_MODE.

// Set the recording frame rate.
modeConfigBuilder.setVideoFps(recordFps);

Photo capture is not supported in this mode, so do not add any photography-related configuration.

// Start recording.
mMode.startRecording();
// Stop recording.
mMode.stopRecording();

This mode supports flash, zoom, autofocus, color correction, and face detection. Slow-mo is not supported by the front camera, and recording cannot be paused or resumed.

To work in Super slow-mo, set mCurrentModeType to Mode.Type.SUPER_SLOW_MOTION.

// Start recording.
mMode.startRecording();
// Stop recording.
mMode.stopRecording();

These two configuration items (video size and frame rate) must be set as a pair based on the Map returned by modeCharacteristics.getSupportedVideoSizes(). Photo capture is not supported, so do not add any associated configuration.

The video resolution must be the same as the preview resolution. The following APIs cover the various Super slow-mo operations:

// Obtain the motion detection area.
modeCharacteristics.getParameterRange(RequestKey.HW_SUPER_SLOW_CHECK_AREA);

The range is returned in the center coordinate system.

// Start recording.
mMode.startRecording(file);

If manual recording is required, call mMode.startRecording(file) directly. The recording cannot be paused.

To make Super slow-mo work in automatic mode, first set a motion detection frame:

mMode.setParameter(RequestKey.HW_SUPER_SLOW_CHECK_AREA, rect);

The detection frame must be rectangular and converted from the preview coordinate system (with the origin at the upper left corner of the preview) to the center coordinate system. The rule for the frame edge length is as follows: on a phone that supports 7680 frames per second, the edge length can be set to any value in the range [1/3, 1] of the smaller side of the phone screen; otherwise, only a fixed value of 1/3 of the smaller side can be delivered.
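To illustrate the rule, here is a sketch that builds a centered square detection frame whose edge is 1/3 of the smaller preview side; treating the preview center as the origin of the center coordinate system is an assumption based on the description above, and previewWidth/previewHeight are illustrative names:

// Fixed edge length: 1/3 of the smaller preview side (illustrative calculation).
int shorterSide = Math.min(previewWidth, previewHeight);
int edge = shorterSide / 3;
// Assumed conversion: the origin of the center coordinate system is the preview center.
Rect rect = new Rect(-edge / 2, -edge / 2, edge / 2, edge / 2);
mMode.setParameter(RequestKey.HW_SUPER_SLOW_CHECK_AREA, rect);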

In automatic mode, mMode.startRecording(file) is called as well. Before a moving object is detected, that is, before RecordingResult.State.RECORDING_STARTED is returned, you can call mMode.stopRecording() to stop recording. Once a moving object is detected, recording cannot be stopped.

When mMode.stopRecording() is called, the RecordingResult.State.RECORDING_STOPPED event is returned. After one recording, the mode switches to manual; to perform automatic recording again, set the detection area again.

// Action status callback.
class ModeActionStateCallback extends ActionStateCallback {
   @Override
   public void onRecording(Mode mode, int state, RecordingResult result) {
       switch (state) {
           // An IO error occurs.
           case RecordingResult.State.ERROR_FILE_IO:
           // An unknown error occurs.
           case RecordingResult.State.ERROR_UNKNOWN:
            // The bottom-layer initialization is not ready.
           case RecordingResult.State.ERROR_RECORDING_NOT_READY:
               break;
           // The bottom layer is ready.
           case RecordingResult.State.RECORDING_READY:
               break;
           // Recording is started. This parameter is returned only in automatic mode.
           case RecordingResult.State.RECORDING_STARTED:
               break;
           // Recording is stopped.
           case RecordingResult.State.RECORDING_STOPPED:
               break;
           // Recording is complete.
           case RecordingResult.State.RECORDING_COMPLETED:
               break;
           // The recorded file is saved.
           case RecordingResult.State.RECORDING_FILE_SAVED:
               break;
           default:
               break;
       }
   }
}

Super slow-mo supports flash, zoom, and autofocus; it is not supported by the front camera.

Working with portrait mode

Portrait mode works with both the main and front cameras and can detect various objects in the image: faces, smiles, and so on. The user can select and apply lighting effects.

To use it, set mCurrentModeType to Mode.Type.PORTRAIT_MODE. The rest of the steps are the same as in the general mode, except that after starting the preview we apply the portrait mode settings:

// Query the facial beautification functions supported by the mode.
modeCharacteristics.getSupportedBeauty(Metadata.BeautyType.HW_BEAUTY_SKIN_SMOOTH);
modeCharacteristics.getSupportedBeauty(Metadata.BeautyType.HW_BEAUTY_FACE_SLENDER);
modeCharacteristics.getSupportedBeauty(Metadata.BeautyType.HW_BEAUTY_SKIN_COLOR);
modeCharacteristics.getSupportedBeauty(Metadata.BeautyType.HW_BEAUTY_BODY_SHAPING);
// Configure the facial beautification functions.
mMode.setBeauty(Metadata.BeautyType.HW_BEAUTY_SKIN_SMOOTH, value);
mMode.setBeauty(Metadata.BeautyType.HW_BEAUTY_FACE_SLENDER, value);
mMode.setBeauty(Metadata.BeautyType.HW_BEAUTY_SKIN_COLOR, value);
mMode.setBeauty(Metadata.BeautyType.HW_BEAUTY_BODY_SHAPING, value);

Working with Pro Mode

Pro mode works with both photos and videos. This API makes it possible to change the ISO, exposure duration, focus mode, and so on. In addition, Pro mode supports flash, zoom, autofocus, color correction, and face detection. It does not work with the front camera.

In Pro mode, the additional options are exposed as request keys. We only need to:

  1. Set mCurrentModeType to PRO_PHOTO_MODE or PRO_VIDEO_MODE.
  2. Use mCurrentModeType to create a Mode object.
  3. Call the Mode object’s setParameter method to enable the corresponding feature.
  4. Use the takePicture, startRecording, and stopRecording methods of the Mode object to take photos and record videos.

After enabling the preview function, you can adjust the Pro mode options.

// Query the functions supported by the mode.
List<CaptureRequest.Key<?>> parameters = mModeCharacteristics.getSupportedParameters();
// Query the supported ISO range.
List<Integer> values = mModeCharacteristics.getParameterRange(RequestKey.HW_PRO_SENSOR_ISO_VALUE);
// Set the ISO value. The first value is used as an example.
mMode.setParameter(RequestKey.HW_PRO_SENSOR_ISO_VALUE, values.get(0));

The other parameters are set in the same way. To enable automatic mode, set the ISO and exposure values to 0. Keep in mind that Pro mode does not support the front-facing camera or continuous shooting.

// Deliver the parameters (key-value pairs, as shown above).
mMode.setParameter(requestKey, value);
// Start shooting.
mMode.takePicture();
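As a small illustration of the automatic mode mentioned above, the ISO key from the previous snippet can simply be set to 0 (per the text, the exposure key is handled the same way):

// Setting ISO to 0 switches the mode back to automatic, as described above.
mMode.setParameter(RequestKey.HW_PRO_SENSOR_ISO_VALUE, 0);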

To take a RAW image, use getSupportedCaptureSizes(ImageFormat.RAW_SENSOR) in ModeCharacteristics to query the supported resolutions, select a captureSize, and call addCaptureImage(captureSize, ImageFormat.RAW_SENSOR) on ModeConfig.Builder.

modeConfigBuilder.addCaptureImage(captureSize, ImageFormat.RAW_SENSOR);

Parameters are delivered in the same way when operating in Pro video mode:

// Deliver the parameters (key-value pairs, as in Pro photo).
mMode.setParameter(requestKey, value);

After enabling the preview function, we call the recording API. The parameters are the same as in Pro photo, except for the shutter speed.

// Start recording.
mMode.startRecording();
mMediaRecorder.start();
// Pause recording.
mMode.pauseRecording();
mMediaRecorder.pause();
// Resume recording.
mMode.resumeRecording();
mMediaRecorder.resume();
// Stop recording.
mMode.stopRecording();
mMediaRecorder.stop();

Working with normal mode

In normal mode, you can activate the AI functions from the ML Kit arsenal. You can use them, for example, to teach an application to recognize scenes in order to automatically apply the appropriate color, brightness, and contrast settings. Our artificial intelligence services identify over 1,500 scenes from 25 categories: architecture, pets, plants, cars, and more.

To activate normal mode, set mCurrentModeType to Mode.Type.NORMAL_MODE. After enabling the preview, set the parameters. Below is an example of intelligent scene identification:

// Query whether the AI capability is supported. If true is returned, AI scene identification is supported.
modeCharacteristics.getSupportedSceneDetection();
// Enable AI scene identification. You can disable it afterwards by setting the input parameter to false.
mMode.setSceneDetection(true);
// Identify the scene and perceive the identified scene based on the callback information.
class ModeActionStateCallback extends ActionStateCallback {
   @Override
   public void onSceneDetection(Mode mode, @SceneDetectionResult.State int state,
       @Nullable SceneDetectionResult result) {
       // Confirm the effect of the scene to be enabled. For example, the scene "flower" is identified here.
       // After the scene is enabled, the parameters are adjusted to make the flowers look brighter.
       mMode.setParameter(RequestKey.HW_SCENE_EFFECT_ENABLE, true);
   }
}

// Disable the scene effects. If the scene effect is enabled and then disabled, parameter adjustment will be canceled.
mMode.setParameter(RequestKey.HW_SCENE_EFFECT_ENABLE, false);

Sports shooting (burst shooting) with the main camera:

// Query whether burst shooting is supported. If true is returned, burst shooting is supported.
modeCharacteristics.isBurstSupported();
// Start burst shooting.
mMode.takePictureBurst();

Or:

mMode.takePictureBurst(filename);
// Stop burst shooting.
mMode.stopPicture();


That’s all for now. If you have any questions about working with a camera on the HMS platform, you can ask them in the comments.
