HarmonyOS 6 Face AR and Body AR in Practice: Build an Intelligent Rehabilitation Training Assistant

Build an intelligent rehabilitation training assistant on HarmonyOS 6 API 23 and AR Engine 6.1.0 that can quantify pain, correct movement, and track progress. Core capabilities include Face AR micro-expression recognition, Body AR skeletal tracking, and real-time safety pause control. Keywords: HarmonyOS 6, Face AR, Body AR.

Technical specifications define the implementation baseline

Parameter | Details
Platform | HarmonyOS 6 (API 23)
Development Language | TypeScript / ETS
Core Protocols/Capabilities | AR Engine 6.1.0, ArkUI, AbilityKit
Core Dependencies | @hms.core.ar.arengine, @kit.ArkUI, @kit.AbilityKit
Target Scenarios | Hospital, community, home, and remote rehabilitation
Article Status | Reconstructed from the original hands-on draft, with substantial page noise removed

This solution addresses three high-frequency problems in rehabilitation training

The biggest issue in traditional rehabilitation is not the absence of training. It is the lack of measurable training quality. At home, patients often cannot tell whether their movements are correct, pain can only be described subjectively, and training progress rarely becomes continuous data.

HarmonyOS 6 Face AR and Body AR fill these three gaps directly. Face AR identifies pain-related micro-expressions, Body AR tracks skeletal key points, and ArkUI turns the feedback into immediate, actionable training guidance.

The original architecture diagram illustrates a bimodal rehabilitation assessment pipeline

Figure (original architecture diagram): the unified entry point for the intelligent rehabilitation training assistant. The diagram centers on AR-driven visual interaction, integrating the patient's live video feed, overlaid motion trajectories, pain recognition results, and training status feedback into a single workflow, which suits continuous rehabilitation monitoring on PCs and large-screen devices.

The system can be broken down into three layers: the acquisition layer, the evaluation layer, and the interaction layer. The acquisition layer captures both facial BlendShapes and 3D body skeleton points. The evaluation layer outputs pain level, joint angle, ROM, and stability metrics. The interaction layer handles color-coded alerts, automatic pause, and training recommendations.
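To make the layering concrete, the sketch below models the data handed between the three layers. All type and field names here are illustrative assumptions for this article, not AR Engine types.

// A minimal sketch of the three-layer data flow (names are illustrative)
interface AcquisitionFrame {
  blendShapes: Record<string, number> // Face AR: BlendShape coefficients, typically 0-1
  skeleton: Array<{ x: number; y: number; z: number }> // Body AR: 3D skeletal key points
  timestamp: number
}

interface EvaluationResult {
  painLevel: number // 0-10 fused pain score
  jointAngle: number // Current angle of the target joint, in degrees
  rom: number // Range of motion achieved so far
  stability: number // Trunk stability metric
}

interface InteractionCommand {
  alertColor: string // Color-coded risk state
  autoPause: boolean // Safety pause when pain crosses the threshold
  recommendation: string // Training guidance text
}

On top of this flow, the draft defines the session configuration shown next.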

interface RehabConfig {
  targetJoint: string
  targetROM: { min: number; max: number }
  painThreshold: number
  sessionDuration: number
}

const rehabConfig: RehabConfig = {
  targetJoint: 'left_shoulder', // Specify the target joint for this training session
  targetROM: { min: 0, max: 180 }, // Define the target range of motion, in degrees
  painThreshold: 7, // Automatically pause when pain reaches this threshold
  sessionDuration: 300 // Planned session length (assumed to be seconds)
}

This code defines the minimum configuration model for a rehabilitation session and serves as the foundation for movement evaluation and safety control.
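As a usage illustration, a per-frame check against this configuration could look like the sketch below; evaluateFrame is a hypothetical helper for this article, not part of any Kit.

function evaluateFrame(config: RehabConfig, jointAngle: number, painLevel: number): string {
  if (painLevel >= config.painThreshold) {
    return 'pause' // Safety control takes priority over movement scoring
  }
  if (jointAngle < config.targetROM.min || jointAngle > config.targetROM.max) {
    return 'correct' // Outside the target range of motion
  }
  return 'continue'
}

// Example: a left-shoulder angle of 95 degrees with pain level 3 keeps the session running
const action = evaluateFrame(rehabConfig, 95, 3) // 'continue'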

Environment setup must prioritize both the AR engine and large-screen usability

The original solution depends on components such as @hms.core.ar.arengine, @kit.ArkUI, and @kit.AbilityKit. This is not just a mobile page; it is closer to a large-screen rehabilitation workstation, so window mode, background color, and refresh rate all directly affect real-world usability.

The recommended dependency setup should remain a minimal closed loop

{
  "dependencies": {
    "@hms.core.ar.arengine": "^6.1.0",
    "@kit.ArkUI": "^6.1.0",
    "@kit.AbilityKit": "^6.1.0",
    "@kit.SensorServiceKit": "^6.1.0"
  }
}

This code block establishes the core dependency set for a HarmonyOS 6 rehabilitation training application.

Window initialization should focus on three design priorities: full screen, a low-stimulation background, and a high frame rate. Rehabilitation training requires sustained visual attention, so the interface should not distract users. If skeletal overlays and joint-angle refreshes are not smooth enough, the correction experience will suffer directly.
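A minimal sketch of window setup under these three priorities, using the window module from @kit.ArkUI; the background color and keep-screen-on choices are assumptions for illustration, not the original implementation.

import { window } from '@kit.ArkUI'

// Called from a UIAbility's onWindowStageCreate(windowStage: window.WindowStage)
async function setupRehabWindow(windowStage: window.WindowStage): Promise<void> {
  const win = await windowStage.getMainWindow()
  await win.setWindowLayoutFullScreen(true) // Full screen keeps attention on the AR view
  win.setWindowBackgroundColor('#1A1A2E') // Low-stimulation dark background (color is illustrative)
  await win.setWindowKeepScreenOn(true) // A session should not be interrupted by screen-off
  // High-refresh overlay rendering can additionally be driven per frame, e.g. via displaySync
}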

The Face AR pain assessment module converts subjective pain into a quantified score

The value of Face AR is not just recognizing expressions. It is recognizing combinations of pain-related features. The original draft uses a weighted fusion of brow lowering, inner brow raising, mouth corner depression, eye squinting, and jaw tension to produce a final pain score from 0 to 10.

Compared with single-point detection, this multi-parameter fusion model is better suited to medical-assistive scenarios. Pain does not always appear as one isolated expression. A weighted combination model improves robustness and can also provide trend analysis, such as rising, stable, or decreasing pain.

// Tension indicators on a 0-10 scale, produced by the Face AR BlendShape analysis
interface PainIndicators {
  browTension: number
  mouthTension: number
  eyeTension: number
  jawTension: number
}

const weights = {
  brow: 0.35,
  mouth: 0.25,
  eye: 0.25,
  jaw: 0.15
}

declare const indicators: PainIndicators // Supplied per frame by the Face AR module

const level = Math.min(10, Math.round(
  indicators.browTension * weights.brow + // Brow tension is the strongest pain signal
  indicators.mouthTension * weights.mouth +
  indicators.eyeTension * weights.eye +
  indicators.jawTension * weights.jaw
))

This code generates a pain severity score for rehabilitation scenarios by applying weighted fusion to multi-dimensional facial tension indicators.
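The draft also mentions trend analysis (rising, stable, or decreasing pain). A minimal sliding-window sketch follows, with the window size and thresholds chosen arbitrarily for illustration.

function painTrend(history: number[], windowSize: number = 10): string {
  const recent = history.slice(-windowSize)
  if (recent.length < 2) return 'stable'
  const half = Math.floor(recent.length / 2)
  const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length
  // Compare the average of the newer half against the older half
  const delta = avg(recent.slice(half)) - avg(recent.slice(0, half))
  if (delta > 0.5) return 'rising' // Illustrative threshold
  if (delta < -0.5) return 'decreasing'
  return 'stable'
}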

The pain recognition model should prioritize four feature groups

  1. Brow tension: BROW_DOWN and BROW_INNER_UP.
  2. Mouth tension: MOUTH_FROWN and MOUTH_PUCKER.
  3. Eye tension: EYE_SQUINT and EYE_WIDE.
  4. Jaw tension: JAW_FORWARD and JAW_CLENCH.
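A hedged sketch of how these four groups could be reduced to the tension indicators used above; the 0-1 coefficient range and the per-group averaging are assumptions, and only the BlendShape keys come from the list.

// Assumes BlendShape coefficients are normalized to 0-1; scales each group to a 0-10 indicator
function toIndicators(bs: Record<string, number>): PainIndicators {
  const group = (...keys: string[]) =>
    Math.min(10, (keys.reduce((sum, k) => sum + (bs[k] ?? 0), 0) / keys.length) * 10)
  return {
    browTension: group('BROW_DOWN', 'BROW_INNER_UP'),
    mouthTension: group('MOUTH_FROWN', 'MOUTH_PUCKER'),
    eyeTension: group('EYE_SQUINT', 'EYE_WIDE'),
    jawTension: group('JAW_FORWARD', 'JAW_CLENCH')
  }
}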

The direct benefit of this design is safety control. Once the system detects that the pain score has reached the threshold, it can pause the training session immediately and prevent the patient from continuing under incorrect movement patterns and high pain levels.
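One way to wire that rule is a small controller that debounces over consecutive frames so a single noisy reading does not pause the session; the debouncing is an assumption added here, not part of the original draft.

class SafetyController {
  private threshold: number
  private debounceFrames: number
  private overThresholdFrames: number = 0

  constructor(threshold: number, debounceFrames: number = 5) {
    this.threshold = threshold
    this.debounceFrames = debounceFrames
  }

  // Returns true when the session should pause
  update(painLevel: number): boolean {
    this.overThresholdFrames = painLevel >= this.threshold
      ? this.overThresholdFrames + 1
      : 0
    return this.overThresholdFrames >= this.debounceFrames
  }
}

const safety = new SafetyController(rehabConfig.painThreshold)
// Call safety.update(level) on every Face AR frame; pause the session when it returns true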

The Body AR posture tracking module establishes the baseline for movement quality

The Body AR component uses more than 20 skeletal key points to calculate joint angle, range of motion, symmetry, and stability. For high-frequency rehabilitation targets such as the shoulder, hip, and knee, 3D angle calculation is more meaningful than 2D visual estimation because it reflects movement amplitude more accurately.

The original implementation calculates the angle between vectors formed by three points and then compares the result with the target ROM. This approach can tell users not only whether they achieved the movement, but also how far off they are and which side deviates.

interface Point3D { x: number; y: number; z: number }

private calculate3DAngle(p1: Point3D, p2: Point3D, p3: Point3D): number {
  // Vectors from the vertex joint p2 toward the two neighboring joints
  const v1 = { x: p1.x - p2.x, y: p1.y - p2.y, z: p1.z - p2.z }
  const v2 = { x: p3.x - p2.x, y: p3.y - p2.y, z: p3.z - p2.z }
  const dot = v1.x * v2.x + v1.y * v2.y + v1.z * v2.z
  const mag1 = Math.sqrt(v1.x ** 2 + v1.y ** 2 + v1.z ** 2)
  const mag2 = Math.sqrt(v2.x ** 2 + v2.y ** 2 + v2.z ** 2)
  // Clamp to [-1, 1] to guard acos against floating-point drift
  const cos = Math.max(-1, Math.min(1, dot / (mag1 * mag2)))
  return Math.round(Math.acos(cos) * 180 / Math.PI) // Joint angle at p2, in degrees
}

This code implements joint-angle calculation based on 3D key points and serves as the core algorithm for ROM analysis and movement scoring.

Movement quality assessment should cover at least four dimensions

Dimension | Calculation Target | Practical Meaning
ROM | Maximum angle minus minimum angle | Whether the range of motion meets the target
Symmetry | Left-right joint angle difference | Whether the affected side and the healthy side are imbalanced
Stability | Hip-center displacement | Whether trunk control is stable
Speed Control | Comparison with the standard trajectory | Whether the rhythm is too fast or too slow
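The first two dimensions reduce to a few lines over a session's recorded angle samples; the data shapes are assumptions.

function computeROM(angles: number[]): number {
  return Math.max(...angles) - Math.min(...angles) // Maximum angle minus minimum angle
}

function computeSymmetry(leftAngles: number[], rightAngles: number[]): number {
  const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length
  return Math.abs(avg(leftAngles) - avg(rightAngles)) // Left-right difference in degrees
}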

The main interface must put safety feedback first

The original draft uses a three-column layout: live AR view on the left, standard movement demonstration in the center, and a data panel on the right. The value of this layout is that watching the movement, comparing it against the standard, and checking the results all happen on the same screen, which reduces the patient's cognitive load.
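A skeleton of that three-column layout in ArkUI; ARCameraView, DemoPlayer, and MetricsPanel are placeholder names for this article, not Kit components.

@Component
struct RehabMainView {
  build() {
    Row() {
      // Left: live AR view with skeletal overlay
      Column() { /* ARCameraView placeholder */ }
        .layoutWeight(1)
      // Center: standard movement demonstration
      Column() { /* DemoPlayer placeholder */ }
        .layoutWeight(1)
      // Right: pain level, joint angle, ROM, and stability
      Column() { /* MetricsPanel placeholder */ }
        .layoutWeight(1)
    }
    .width('100%')
    .height('100%')
  }
}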

Color coding is also a critical interaction signal. Levels 0-2 are green, 3-4 are yellow, 5-6 are orange, and 7 or above turns red and triggers an automatic pause. This maps medical risk states into a visual language users can understand instantly.

private getPainColor(level: number): string {
  if (level <= 2) return '#00FF88' // No pain, normal training
  if (level <= 4) return '#FFD700' // Mild pain, prompt the user to pay attention
  if (level <= 6) return '#FF9500' // Moderate pain, recommend rest
  return '#FF4444' // Severe pain, alert and pause automatically
}

This code maps pain levels into a unified UI risk-color system.

Deployment must consider both algorithm performance and compliance requirements

In hospital scenarios, a large-screen PC and a more stable camera are recommended to improve key-point precision and demonstration readability. In home scenarios, the priority shifts toward lightweight deployment and on-device inference to avoid a complex installation process.

It is also important to note that facial data and skeletal data are sensitive. A more practical strategy is to complete expression and posture inference locally and upload only de-identified scores, trends, and training summaries to reduce privacy risk and compliance pressure.
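As a sketch of what "upload only de-identified scores" could mean in practice, one possible data contract follows, with all field names assumed.

interface SessionSummary {
  sessionId: string // Random ID, not linked to device identifiers
  painTrend: string // 'rising' | 'stable' | 'decreasing'
  maxPainLevel: number // 0-10 peak score; raw BlendShapes never leave the device
  romAchieved: number // Degrees, for the target joint
  symmetryGap: number // Left-right angle difference in degrees
  completedSeconds: number
}

// Raw facial and skeletal frames stay on-device; only this summary is uploaded.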

FAQ

1. Can Face AR pain assessment replace a physician’s judgment?

No. It is better suited as an assistive quantification tool for continuous monitoring and risk alerts during training. It cannot replace clinical diagnosis.

2. Why is Body AR more valuable for movement correction than a standard camera approach?

Because it calculates angle, stability, and ROM based on skeletal key points and 3D spatial relationships, which makes the results more stable than visual observation or 2D contour recognition.

3. Which scenarios are the best fit for initial deployment?

Hospital rehabilitation departments, post-operative home rehabilitation, and remote follow-up scenarios should be the first priorities. These three scenarios have the strongest need for movement quantification, risk alerts, and data continuity.

Core Summary: This article reconstructs an intelligent rehabilitation training solution built on HarmonyOS 6 API 23 and AR Engine 6.1.0. By combining Face AR pain assessment, Body AR posture tracking, and an ArkUI three-column interface, it addresses three core rehabilitation challenges: movement correction, pain quantification, and training data tracking.