HarmonyOS 6 Real-Time Camera Sticker Rendering: A High-Performance NDK, OH_Drawing, and OpenGL ES Implementation

This article breaks down how real-time stickers work in a HarmonyOS 6 lightweight camera app: use NDK to read rawfile images, rely on OH_Drawing for offscreen rendering, and blend the result with OpenGL ES through alpha compositing. This approach avoids ArkTS Canvas performance limits and reduces cross-layer copy overhead. Keywords: HarmonyOS 6, NDK, real-time stickers.

Technical Specifications Snapshot

Target Platform: HarmonyOS 6
Primary Languages: C++, GLSL
Rendering Protocols / APIs: OpenGL ES, OES textures, Native Drawing
Resource Input: PNG/JPG assets under rawfile
Core Dependencies: libimage_source.so, librawfile.z.so, OH_Drawing
Key Capabilities: Offscreen rendering, texture upload, alpha blending

This sticker solution is fundamentally a C++ rendering pipeline

Stickers in a lightweight camera app are not simple UI overlays. They enter the rendering pipeline directly. The sticker layer sits on top of the camera preview stream and works together with AI inference results to form a real-time closed loop of detection, drawing, and compositing.

Compared with ArkTS Canvas, this approach keeps the performance-critical work in the Native layer and reduces cross-language pixel transfers. That makes it a better fit for 60fps preview scenarios. It is especially effective for dynamic stickers, face accessories, and AR face effects.

The sticker effect follows a three-stage model

  1. Read PNG/JPG assets from rawfile and decode them into pixel data.
  2. Use OH_Drawing to complete offscreen rendering on a transparent bitmap.
  3. Upload the result as an RGBA texture and blend it with the camera OES texture in a shader.
// Main sticker rendering pipeline
void RenderStickerPipeline() {
    LoadSticker();          // Load sticker assets
    DrawToBitmap();         // Draw the sticker on an offscreen bitmap
    UploadTexture();        // Upload to a GPU texture
    BlendWithCameraFrame(); // Blend with the camera frame
}

This snippet summarizes the full path from sticker asset loading to on-screen output.

Decoding images from rawfile is the starting point of the sticker pipeline

In HarmonyOS 6, the Native layer can read app resources directly through ResourceManager and ImageSource. The key benefit is that it reduces unnecessary intermediate format conversions and avoids repeated copies between the UI layer and the Native layer.

The decoded object is typically an OH_PixelmapNative. You can further bind it to an OH_Drawing_Bitmap and use it as the source for subsequent drawing steps.

#include <cstdint>
#include <vector>
#include <multimedia/image_framework/image_source_native.h>
#include <rawfile/raw_file.h>
#include <rawfile/raw_file_manager.h>

// Load a sticker Pixelmap from rawfile: read the encoded bytes into memory,
// then decode them with the image framework.
OH_PixelmapNative* LoadStickerFromRawFile(NativeResourceManager* resMgr, const char* fileName) {
    // 1. Open the raw resource file
    RawFile* rawFile = OH_ResourceManager_OpenRawFile(resMgr, fileName);
    if (rawFile == nullptr) return nullptr; // Return immediately if the resource does not exist

    // 2. Read the encoded PNG/JPG bytes into a buffer
    long size = OH_ResourceManager_GetRawFileSize(rawFile);
    std::vector<uint8_t> buffer(static_cast<size_t>(size));
    int bytesRead = OH_ResourceManager_ReadRawFile(rawFile, buffer.data(), buffer.size());
    OH_ResourceManager_CloseRawFile(rawFile); // The buffer now owns the data
    if (bytesRead != size) return nullptr;

    // 3. Create the image source object from the in-memory data
    OH_ImageSourceNative* imageSource = nullptr;
    Image_ErrorCode ret = OH_ImageSourceNative_CreateFromData(buffer.data(), buffer.size(), &imageSource);
    if (ret != IMAGE_SUCCESS) return nullptr;

    // 4. Decode into an RGBA_8888 Pixelmap so the later GL upload needs no conversion
    OH_DecodingOptions* opts = nullptr;
    OH_DecodingOptions_Create(&opts);
    OH_DecodingOptions_SetPixelFormat(opts, PIXEL_FORMAT_RGBA_8888);
    OH_PixelmapNative* pixelmap = nullptr;
    ret = OH_ImageSourceNative_CreatePixelmap(imageSource, opts, &pixelmap);

    // 5. Clean up intermediate objects
    OH_DecodingOptions_Release(opts);
    OH_ImageSourceNative_Release(imageSource);
    return (ret == IMAGE_SUCCESS) ? pixelmap : nullptr;
}

This code converts sticker assets from application resources into a Native pixel map object.

OH_Drawing acts as the offscreen canvas layer

OH_Drawing is the 2D processing layer in this pipeline. It does not handle final presentation directly. Instead, it completes sticker layout, scaling, and positioning on an in-memory bitmap, effectively preparing a transparent overlay for the GPU.

Typical steps include creating a transparent bitmap, binding a canvas, and drawing the PixelMap into a target region based on coordinates returned by AI. Because the entire process stays inside C++, you can control latency and throughput more consistently.
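A minimal sketch of this stage, assuming the native_drawing C API names below (verify the exact signatures against your SDK headers, since they vary by API level):

```cpp
#include <native_drawing/drawing_bitmap.h>
#include <native_drawing/drawing_canvas.h>
#include <native_drawing/drawing_pixel_map.h>
#include <native_drawing/drawing_rect.h>
#include <native_drawing/drawing_sampling_options.h>

// Sketch: draw a decoded sticker Pixelmap into a transparent offscreen bitmap.
// The destination rect comes from the AI coordinate-mapping step; width/height
// match the overlay surface.
void* DrawStickerOffscreen(OH_PixelmapNative* sticker, uint32_t width, uint32_t height,
                           float left, float top, float right, float bottom) {
    // 1. Transparent RGBA bitmap as the offscreen target (premultiplied alpha)
    OH_Drawing_Bitmap* bitmap = OH_Drawing_BitmapCreate();
    OH_Drawing_BitmapFormat fmt = {COLOR_FORMAT_RGBA_8888, ALPHA_FORMAT_PREMUL};
    OH_Drawing_BitmapBuild(bitmap, width, height, &fmt);

    // 2. Bind a canvas to the bitmap and clear to fully transparent
    OH_Drawing_Canvas* canvas = OH_Drawing_CanvasCreate();
    OH_Drawing_CanvasBind(canvas, bitmap);
    OH_Drawing_CanvasClear(canvas, 0x00000000);

    // 3. Draw the sticker into the target region with linear filtering
    OH_Drawing_PixelMap* pm = OH_Drawing_PixelMapGetFromOhPixelMapNative(sticker);
    OH_Drawing_Rect* dst = OH_Drawing_RectCreate(left, top, right, bottom);
    OH_Drawing_SamplingOptions* so =
        OH_Drawing_SamplingOptionsCreate(FILTER_MODE_LINEAR, MIPMAP_MODE_NONE);
    OH_Drawing_CanvasDrawPixelMapRect(canvas, pm, nullptr, dst, so);

    // 4. Hand the raw pixels to the texture-upload step; destroy the helpers
    void* pixels = OH_Drawing_BitmapGetPixels(bitmap);
    OH_Drawing_SamplingOptionsDestroy(so);
    OH_Drawing_RectDestroy(dst);
    OH_Drawing_PixelMapDissolve(pm);
    OH_Drawing_CanvasDestroy(canvas);
    // Note: the pixel memory belongs to `bitmap`; keep the bitmap alive until upload.
    return pixels;
}
```

The key design point is that the returned buffer is exactly the RGBA layout that glTexImage2D expects, so no extra conversion happens between drawing and upload.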

The offscreen rendering stage determines the sticker’s spatial position

AI models typically output normalized facial landmarks, such as 106 key points. Before display, you must map those points into the preview view coordinate system. Otherwise, the sticker may drift, scale incorrectly, or become inconsistent under mirroring.

// Calculate the sticker area from the detection box
Rect CalcStickerRect(float faceX, float faceY, float faceW, float faceH) {
    Rect rect;
    rect.left = faceX - faceW * 0.1f;   // Expand slightly to the left to cover the face boundary
    rect.top = faceY - faceH * 0.2f;    // Leave some space above for forehead-style stickers
    rect.right = faceX + faceW * 1.1f;
    rect.bottom = faceY + faceH * 1.2f;
    return rect;
}

This code converts detection results into a drawing region that better fits sticker placement.
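Before that rect calculation can run, the normalized landmarks must be mapped into view coordinates. Here is a minimal sketch of that mapping; `MapLandmarkToView`, the `mirrored` flag, and the view dimensions are illustrative assumptions, not SDK API:

```cpp
struct ViewPoint {
    float x;
    float y;
};

// Map a normalized landmark (0..1, as output by the detector) into preview view
// coordinates, honoring front-camera mirroring. viewW/viewH are the XComponent size.
ViewPoint MapLandmarkToView(float nx, float ny, float viewW, float viewH, bool mirrored) {
    // Front-camera preview is usually flipped horizontally, so mirror x first
    float x = mirrored ? (1.0f - nx) : nx;
    return ViewPoint{x * viewW, ny * viewH};
}
```

Rotation and crop-ratio compensation would slot in the same way: apply them to the normalized coordinates before scaling by the view size.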

The OpenGL ES blending stage determines visual realism

The camera frame usually arrives through a samplerExternalOES, while the sticker texture is a standard sampler2D. The sampler types and underlying pixel layouts differ, but the fragment shader can still sample both and composite them in a single pass.

The most common blending mode is Source Over, which is standard alpha blending: Cout = Csrc × Asrc + Cdst × (1 - Asrc). This formula determines whether transparent sticker edges look natural and whether black halos appear.

The shader implements the alpha blending formula directly

#extension GL_OES_EGL_image_external : require
precision highp float;

varying vec2 vTexCoord;
uniform samplerExternalOES cameraTexture; // Camera OES texture
uniform sampler2D stickerTexture;         // Sticker RGBA texture

void main() {
    vec4 cameraColor = texture2D(cameraTexture, vTexCoord);
    vec4 stickerColor = texture2D(stickerTexture, vTexCoord);

    // Blend colors according to the alpha channel
    float r = stickerColor.r * stickerColor.a + cameraColor.r * (1.0 - stickerColor.a);
    float g = stickerColor.g * stickerColor.a + cameraColor.g * (1.0 - stickerColor.a);
    float b = stickerColor.b * stickerColor.a + cameraColor.b * (1.0 - stickerColor.a);

    gl_FragColor = vec4(r, g, b, 1.0); // Output the final pixel
}

This shader uses the most direct formula to perform per-pixel compositing between the sticker and the camera frame.

The complete rendering pipeline should cooperate with AI inference while remaining decoupled

You can split the full pipeline into two paths: one path continuously captures and renders camera frames, while the other handles AI detection and outputs landmarks. The sticker module sits at the intersection of the two, consuming inference coordinates and providing the latest texture to the rendering thread.

A recommended sequence is: detect facial landmarks with AI, convert coordinates across coordinate systems, perform offscreen rendering, upload the texture with glTexImage2D, and finally display the result on the Surface associated with XComponent. This structure is clear and also makes it easier to extend the system later with filters, watermarks, and 3D effects.
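The decoupling between the two paths can be reduced to a single-slot handoff: the AI thread publishes its newest landmark set, and the render thread consumes it when a frame starts. `LandmarkSlot` below is a hypothetical helper to illustrate the pattern, not an SDK type:

```cpp
#include <mutex>
#include <optional>
#include <utility>
#include <vector>

// Single-slot handoff between the AI thread and the render thread. Older,
// unconsumed results are simply overwritten, so rendering never blocks on
// inference and always sees the freshest landmarks.
class LandmarkSlot {
public:
    // Called from the AI thread after each inference pass
    void Publish(std::vector<float> landmarks) {
        std::lock_guard<std::mutex> lock(mutex_);
        latest_ = std::move(landmarks); // Overwrite any stale result
    }

    // Called from the render thread; returns the newest result if one
    // arrived since the last call, otherwise an empty optional
    std::optional<std::vector<float>> Take() {
        std::lock_guard<std::mutex> lock(mutex_);
        std::optional<std::vector<float>> out = std::move(latest_);
        latest_.reset();
        return out;
    }

private:
    std::mutex mutex_;
    std::optional<std::vector<float>> latest_;
};
```

When `Take()` returns empty, the render thread simply reuses the previous sticker texture, which keeps the preview at full frame rate even if inference runs slower than 60fps.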

This solution delivers both performance and extensibility in real projects

Its value is not just that it can render stickers. More importantly, it treats a sticker as a rendering operator rather than a UI element. That means the same architecture remains valid when you later add multiple sticker layers, animated frame sequences, gesture-triggered effects, or even 3D masks.

If your goal is a high-frame-rate lightweight camera app on HarmonyOS 6, this NDK-, OH_Drawing-, and OpenGL ES-based pipeline is a safer and more robust choice than a pure frontend drawing approach.

FAQ

Why not implement stickers directly with ArkTS Canvas?

ArkTS Canvas is better suited to lightweight 2D scenarios, but real-time camera stickers involve high-frequency drawing and moving large pixel buffers. Keeping the work in the Native layer significantly reduces cross-layer copy costs and makes it much easier to sustain 60fps.

Why does the sticker position keep drifting?

In most cases, the root cause is coordinate conversion. AI outputs are often normalized coordinates, so you must map them into the final view coordinate system based on preview resolution, mirror direction, crop ratio, and rotation angle.

What should I do if blending in the shader causes black edges or jagged artifacts?

A common cause is inconsistent premultiplied alpha handling, or dirty pixels around the edges of the PNG asset. You should standardize the alpha processing strategy and verify both the sticker export format and the texture filtering parameters.
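As an illustration of the premultiplied-alpha fix: premultiply the sticker pixels once before upload, then blend with Cout = Csrc + Cdst × (1 - Asrc), which in GL terms is glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA). With premultiplied data, linear filtering can no longer bleed the (usually black) RGB of fully transparent edge texels into the result. `PremultiplyAlpha` is an illustrative helper:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Premultiply straight-alpha RGBA8888 pixels in place: each color channel is
// scaled by the pixel's alpha, so fully transparent texels end up with zero RGB
// and can no longer darken their neighbors under bilinear filtering.
void PremultiplyAlpha(std::vector<uint8_t>& rgba) {
    for (std::size_t i = 0; i + 3 < rgba.size(); i += 4) {
        uint32_t a = rgba[i + 3];
        rgba[i + 0] = static_cast<uint8_t>(rgba[i + 0] * a / 255);
        rgba[i + 1] = static_cast<uint8_t>(rgba[i + 1] * a / 255);
        rgba[i + 2] = static_cast<uint8_t>(rgba[i + 2] * a / 255);
    }
}
```

Whichever convention you pick, it must be consistent end to end: export format, decode options, OH_Drawing alpha format, and the shader or blend state all have to agree on straight versus premultiplied alpha.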

AI Readability Summary

This article reconstructs the core implementation of real-time stickers in a HarmonyOS 6 lightweight camera app. It focuses on a three-stage pipeline: rawfile image decoding, OH_Drawing-based offscreen rendering, and OpenGL ES alpha blending, showing how to implement 60fps real-time dynamic stickers in the C++/NDK layer.