This article breaks down a HarmonyOS 6 (API 23) PC “Flow Space” solution: use floating navigation to improve scene-switching efficiency, immersive lighting to reduce interface pressure, and Face AR to detect emotion and drive environmental responses. It is ideal for developers building emotion-aware applications. Keywords: HarmonyOS 6, Face AR, immersive lighting.
The technical specification snapshot outlines the project foundation
| Parameter | Description |
|---|---|
| Target Platform | HarmonyOS PC / 2-in-1 / Tablet |
| Development Language | ArkTS / ETS |
| API Level | HarmonyOS 6 (API 23) |
| Core Protocols / Capabilities | Face AR, Window Management, Immersive UI |
| Core Dependencies | @kit.ArkUI, @kit.AbilityKit, @kit.UIDesignKit, @kit.AREngineKit |
| Interaction Features | Floating Navigation, System Material Lighting, Multi-Window Coordination |
| Data State Management | AppStorage |
This project redefines the interaction boundaries of mental wellness applications
Traditional meditation or mental health apps often stop at static pages and fixed flows, without responding in real time to the user’s current state. The key breakthrough in this project is that it turns “detect emotion → adjust environment → guide behavior” into a closed loop.
It does not simply stack visual effects. Instead, it combines systemMaterialEffect, Face AR, and multi-window capabilities into a complete product architecture: the main scene delivers the immersive experience, while auxiliary windows handle breathing exercises, journaling, and white-noise controls.
AI Visual Insight: The interface uses a dark immersive background with a central lighting focal point, a borderless immersive title bar at the top, and floating navigation tabs at the bottom. The overall design emphasizes glassmorphism, soft light diffusion, and a low-distraction layout, which fits the design goal of mental wellness scenarios: weaker visible controls and a stronger atmospheric presence.
The core value can be summarized in three layers
The first layer is emotion recognition. The system uses Face AR to capture micro-expression parameters and infer states such as calm, anxious, joyful, tired, and low mood.
The second layer is environment mapping. Different emotions map to color temperature, brightness, particle density, breathing rhythm, and ambient sound, creating a responsive healing space.
The third layer is task coordination. The main window and floating tool windows share theme color and focus state so that operations and visual feedback stay consistent.
The overall architecture decouples the perception layer, rendering layer, and window layer
The project is split into three parts: ArkUI handles UI and animation, AR Engine handles expression tracking, and WindowManager handles multi-window orchestration. This separation gives each layer a single responsibility and makes the system easier to replace and extend.
The architectural relationship can be abstracted as the following pseudocode
// After the emotion state changes, drive a global environment update
AppStorage.watch('emotion_state', (emotion: string) => {
  const params = mapEmotionToEnvironment(emotion); // Map emotion to environment parameters
  AppStorage.setOrCreate('environment_params', params); // Write to shared global state
  WindowManager.getInstance().syncGlobalLightEffect(params.themeColor); // Sync theme across windows
});
This logic converts Face AR output into globally consumable UI environment parameters.
The functional modules are clearly partitioned
| Module | Implementation | Key Role |
|---|---|---|
| Immersive Title Bar | HdsNavigation + IMMERSIVE | Reduce the visual weight of the title bar |
| Meditation Scene | Canvas / Particle Animation | Build a breathing-style background |
| Emotion Sensing | AREngineKit | Output expressions and emotion states |
| Floating Navigation | HdsTabs / Custom Tabs | Quickly switch wellness scenes |
| Tool Subwindows | SubWindow | Coordinate breathing, journaling, and white noise |
Environment configuration directly determines whether the app has the foundation for immersion
The dependency set covers UI, the ability framework, AR, multimedia, and graphics capabilities. That shows this project is not a simple UI demo but a complete interaction system.
{
  "dependencies": {
    "@kit.AbilityKit": "^6.1.0",
    "@kit.ArkUI": "^6.1.0",
    "@kit.UIDesignKit": "^6.1.0",
    "@kit.AREngineKit": "^6.1.0"
  }
}
This configuration defines the core HarmonyOS 6 capability packages required by the project.
At a minimum, permissions should include camera, microphone, and network access. The camera supports Face AR, the microphone typically serves ambient sound or future voice guidance, and network access is required for resources and service integration.
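The permission set above would typically be declared in the module configuration. A minimal sketch (the exact file is not shown in the source, but on HarmonyOS these declarations live in `module.json5` under `requestPermissions`):

```json5
{
  "module": {
    "requestPermissions": [
      { "name": "ohos.permission.CAMERA" },     // Face AR expression tracking
      { "name": "ohos.permission.MICROPHONE" }, // ambient sound / future voice guidance
      { "name": "ohos.permission.INTERNET" }    // resource download and service integration
    ]
  }
}
```

Camera and microphone are user-grant permissions, so the app must also request them at runtime before starting the AR session.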
Window immersion configuration is a prerequisite for the experience
The main window initialization reflects several important decisions, such as fullscreen mode, a hidden title bar, a full-window content layout, and a transparent background (the full project also configures rounded corners, shadows, and free-size mode). Together, they shape the immersive container for the PC experience.
await this.mainWindow.setWindowMode(window.WindowMode.FULLSCREEN); // Put the main window into fullscreen immersive mode
await this.mainWindow.setWindowTitleBarEnable(false); // Disable the system title bar
await this.mainWindow.setWindowLayoutFullScreen(true); // Let content fill the entire window area
await this.mainWindow.setWindowBackgroundColor('#00000000'); // Use a transparent background for lighting layers
This code transforms the default desktop application window into an immersive wellness canvas.
An immersive title bar is not about removing the title bar but redesigning it
The project uses an HDS navigation component with SystemMaterialEffect.IMMERSIVE, then adjusts opacity, border, and shadow based on window focus and emotion state. This preserves control entry points without breaking the atmosphere.
This design is especially suitable for PC because users still need window-level controls, but they do not want a traditional toolbar interrupting meditation continuity.
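The focus- and emotion-driven opacity adjustment described above can be sketched as a pure function. The function name, the emotion labels (taken from the five states the article lists), and the concrete opacity values are illustrative assumptions, not the project's actual numbers:

```typescript
type Emotion = 'calm' | 'anxious' | 'joyful' | 'tired' | 'low';

// Hypothetical helper: derive the title bar's opacity from window focus
// and the current emotion state. Values are illustrative only.
function titleBarOpacity(focused: boolean, emotion: Emotion): number {
  const base = focused ? 0.85 : 0.4; // recede when the window loses focus
  // Soften further for tense or low states so controls intrude less
  const damp = emotion === 'anxious' || emotion === 'low' ? 0.8 : 1.0;
  return Math.round(base * damp * 100) / 100;
}
```

Keeping this as a pure function makes it trivial to unit-test and to reuse for other chrome elements (borders, shadows) that should fade with the atmosphere.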
The Face AR component handles both emotion computation and environment driving
The core of the Face AR implementation is not simply “detect a face” but “infer state by combining expression parameters.” The project uses expression coefficients such as smile, frown, browRaise, eyeWide, and mouthOpen to make rule-based judgments.
if (frown > 0.4 && browRaise > 0.3) {
  emotion = 'anxious'; // Frown + raised brows indicates anxiety
} else if (smile > 0.6 && eyeWide > 0.4) {
  emotion = 'joyful'; // Smile + open eyes indicates joy
} else {
  emotion = 'calm'; // Fall back to a calm state otherwise
}
</emotion>
These rules implement a lightweight emotion state machine that works well for a first product prototype.
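Wrapped as a self-contained function, the state machine above becomes testable in isolation. The first two rules are taken directly from the snippet; the `tired` rule is an assumed extension in the same style (the article names a tired state but does not show its thresholds):

```typescript
interface ExpressionCoeffs {
  smile: number;
  frown: number;
  browRaise: number;
  eyeWide: number;
  mouthOpen: number;
}

// Rule-based emotion state machine, following the article's rules.
// The 'tired' rule and all thresholds beyond the first two branches
// are illustrative assumptions.
function classifyEmotion(c: ExpressionCoeffs): string {
  if (c.frown > 0.4 && c.browRaise > 0.3) return 'anxious'; // frown + raised brows
  if (c.smile > 0.6 && c.eyeWide > 0.4) return 'joyful';    // smile + open eyes
  if (c.eyeWide < 0.15 && c.mouthOpen > 0.5) return 'tired'; // drooping eyes + yawn (assumed)
  return 'calm';                                             // default fallback
}
```

Ordering matters here: more specific, higher-arousal states are checked first, and `calm` serves as the safe default whenever no rule fires.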
The emotion mapping table is the system’s key knowledge layer
An anxious state maps to a cooler color temperature, lower brightness, and slower, steadier environmental feedback. Fatigue leans toward warm yellow tones and a slower breathing rhythm. Joy increases brightness and particle activity. At its core, this is a visual emotional intervention strategy.
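A plain lookup table is enough to express this mapping, and it is one of the pieces the article calls most reusable. The parameter names follow the dimensions listed above (color temperature, brightness, particle density, breathing rhythm, theme color); all concrete numbers are illustrative assumptions that follow the stated strategy, not values from the project:

```typescript
interface EnvironmentParams {
  colorTempK: number;      // correlated color temperature in kelvin
  brightness: number;      // 0..1
  particleDensity: number; // particle budget for the scene
  breathPeriodSec: number; // breathing-ring cycle length in seconds
  themeColor: string;      // shared across main and tool windows
}

// Illustrative emotion-to-environment mapping; numbers are assumptions.
const EMOTION_ENV: Record<string, EnvironmentParams> = {
  anxious: { colorTempK: 6500, brightness: 0.35, particleDensity: 40, breathPeriodSec: 8,  themeColor: '#4A90D9' },
  tired:   { colorTempK: 3000, brightness: 0.45, particleDensity: 30, breathPeriodSec: 10, themeColor: '#E3B450' },
  joyful:  { colorTempK: 5000, brightness: 0.75, particleDensity: 90, breathPeriodSec: 5,  themeColor: '#50E3C2' },
  calm:    { colorTempK: 4500, brightness: 0.55, particleDensity: 60, breathPeriodSec: 6,  themeColor: '#7B8FA1' },
};

function mapEmotionToEnvironment(emotion: string): EnvironmentParams {
  return EMOTION_ENV[emotion] ?? EMOTION_ENV['calm']; // unknown states degrade to calm
}
```

Falling back to `calm` for unrecognized states keeps the environment stable if the recognizer ever emits a label the table does not know.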
The dynamic meditation scene turns abstract emotion into perceivable feedback
The particle system serves two purposes: it provides continuous motion and visualizes changes in color temperature and brightness. Combined with a central breathing guidance ring, it helps users perceive rhythm changes more directly.
this.particles = this.particles.map(p => ({
  ...p,
  y: (p.y - p.speed + 100) % 100, // Keep particles floating upward in a continuous loop
  opacity: p.opacity * (0.8 + 0.2 * Math.sin(this.pulsePhase)) // Change opacity with the breathing rhythm
}));
This update logic gives the background a breathing quality and avoids the psychological stagnation of a static interface.
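One caveat worth noting: as written, the snippet multiplies the *current* opacity by a factor that averages below 1.0 every frame, so opacity drifts toward zero over time. A standalone variant that derives opacity from a stored base value keeps the pulse stable (the `baseOpacity` field and the explicit `pulsePhase` parameter are assumptions added to make the function pure and testable):

```typescript
interface Particle {
  y: number;           // vertical position in a 0..100 band
  speed: number;       // upward drift per frame
  opacity: number;     // current rendered opacity
  baseOpacity: number; // assumed field: the particle's resting opacity
}

// Pure version of the per-frame update: drift upward with wrap-around,
// and pulse opacity around a stored base instead of the current value,
// which would otherwise decay toward zero.
function updateParticles(particles: Particle[], pulsePhase: number): Particle[] {
  return particles.map(p => ({
    ...p,
    y: (p.y - p.speed + 100) % 100,                               // continuous upward loop
    opacity: p.baseOpacity * (0.8 + 0.2 * Math.sin(pulsePhase)), // breathe between 0.6x and 1.0x
  }));
}
```

Passing `pulsePhase` in explicitly also lets the breathing ring and the particle field share one rhythm source.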
Floating navigation and multi-window coordination improve PC task efficiency
The bottom floating navigation is not just a visual replacement for traditional tabs. It is an optimization for spatial efficiency. Through translucency, blur, and a long-press expandable opacity panel, it reduces the amount of space occupied in the main meditation area.
The multi-window portion acts more like a lightweight workstation: the breathing window guides rhythm, the journaling window records emotion, and the white-noise window adjusts the environment. All subwindows coordinate through AppStorage and a theme synchronization mechanism.
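The coordination mechanism can be modeled as a small observable key-value store: each tool window subscribes to a shared key and repaints when the main window publishes a change. The sketch below is a minimal stand-in for the AppStorage-based sync described above, not the actual ArkUI API:

```typescript
type Listener<T> = (value: T) => void;

// Minimal stand-in for AppStorage-style cross-window state sharing:
// windows watch a key and are notified whenever it is written.
class SharedStore {
  private values = new Map<string, unknown>();
  private listeners = new Map<string, Listener<unknown>[]>();

  setOrCreate<T>(key: string, value: T): void {
    this.values.set(key, value);
    (this.listeners.get(key) ?? []).forEach(fn => fn(value)); // notify all subscribers
  }

  get<T>(key: string): T | undefined {
    return this.values.get(key) as T | undefined;
  }

  watch<T>(key: string, fn: Listener<T>): void {
    const list = this.listeners.get(key) ?? [];
    list.push(fn as Listener<unknown>);
    this.listeners.set(key, list);
  }
}
```

With a single source of truth like this, the breathing, journaling, and white-noise windows never hold their own copy of the theme, which is exactly what prevents the cross-window inconsistency the FAQ warns about.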
The value of WindowManager lies in unifying window lifecycle management
async openBreathWindow(): Promise<void> {
  await this.createToolWindow({
    name: 'BreathWindow',
    width: 300,
    height: 400,
    themeColor: '#50E3C2' // Assign an independent theme color to the subwindow
  });
}
This wrapper unifies window creation, size, position, and theme management, reducing complexity in the main page.
Performance optimization must first control the cost of AR and particles
Face AR is a continuously running computation module, so optimization should start with frame rate and resolution. The particle system should dynamically limit particle count based on device performance to avoid frame drops in PC multi-window scenarios.
this.arSession.setCameraConfig({
  fps: 10,
  resolution: { width: 320, height: 240 } // Reduce sampling cost in exchange for steadier frame rates
});
This configuration lowers AR sampling intensity to achieve a more stable overall interaction experience.
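The dynamic particle limiting mentioned above can be driven by measured frame time. This is a hypothetical controller, not code from the project; the budget targets 60 fps and the thresholds and step sizes are assumptions:

```typescript
// Hypothetical adaptive limiter: shrink the particle budget when recent
// frame times blow the 60 fps budget, and grow it back slowly when
// there is headroom. All constants are illustrative.
function adaptParticleCount(current: number, avgFrameMs: number): number {
  const MIN = 20;
  const MAX = 120;
  const BUDGET_MS = 16.7; // one frame at 60 fps

  if (avgFrameMs > BUDGET_MS * 1.2) {
    return Math.max(MIN, Math.floor(current * 0.8)); // over budget: cut 20% at once
  }
  if (avgFrameMs < BUDGET_MS * 0.8) {
    return Math.min(MAX, current + 5); // clear headroom: grow in small steps
  }
  return current; // within budget: hold steady
}
```

Cutting aggressively but recovering gradually avoids oscillation, which matters when the particle loop shares the frame budget with a running AR session and multiple subwindows.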
This solution works best as a reference template for emotion-aware applications
Its real value extends beyond mental wellness scenarios. You can also adapt it to focus training, emotionally aware productivity, sleep assistance, and digital rehabilitation. The three most reusable technical pieces are the emotion state machine, the environment parameter mapping table, and cross-window theme synchronization.
If you later integrate distributed capabilities, physiological sensors, or AI recommendation models, this architecture can continue to scale without replacing the existing UI and window system.
FAQ: Structured Q&A
1. Why is rule-based emotion recognition a better starting point for Face AR than a trained model?
Rule-based methods have low latency, strong interpretability, and are easy to debug, which makes them ideal for validating the interaction loop in a first release. After enough samples are collected, you can replace them with a more complex classification model more safely.
2. Why use multi-window design on HarmonyOS PC instead of a single-page overlay?
Multi-window interaction aligns better with the desktop task model. It preserves immersion in the main scene while externalizing auxiliary functions, which reduces the cognitive load on the primary view.
3. What are the easiest pitfalls in this architecture?
There are three main risks: unstable AR permissions and lighting conditions, frame drops caused by particle animation running in parallel with AR, and UI inconsistency caused by poor cross-window state synchronization. Prioritize permissions, frame-rate reduction, and a unified source of truth for state.
Core summary: This article reconstructs and analyzes a HarmonyOS 6 API 23 PC mental wellness application, focusing on the implementation path for floating navigation, immersive lighting, Face AR emotion recognition, and multi-window coordination, with core architecture, dependency configuration, key code, and optimization guidance.