Build an Immersive PC Quantum Lab on HarmonyOS 6 with Floating Navigation, Immersive Lighting, and Dual AR Engines

[AI Readability Summary] The “Quantum Lab” for HarmonyOS 6 (API 23) is an immersive virtual experiment platform for PC. Its core capabilities include floating navigation, immersive lighting effects, and fused control through dual Face AR and Body AR engines. It addresses the common limitations of traditional virtual lab systems, such as fragmented interaction, weak spatial presence, and poor multi-window coordination. Keywords: HarmonyOS 6, AR fusion, immersive experiments.

Technical specifications are summarized below

  • Platform: HarmonyOS 6 / API 23 / PC
  • Language: ArkTS / ETS / JSON5
  • Interaction capabilities: floating navigation, immersive lighting, Face AR, Body AR
  • Window mode: full-screen main window + multiple child windows
  • Core dependencies: @kit.ArkUI, @kit.UIDesignKit, @kit.AREngineKit, @kit.GraphicsKit
  • Typical components: HdsNavigation, XComponent, Canvas, AppStorage
  • Protocol / license: original article marked as CC 4.0 BY-SA
  • Stars: not provided in the original content

Illustration: The application's immersive main interface on a large PC display. The center area hosts the experiment scene, the top carries a title bar with luminous material effects, and the bottom uses floating tab navigation. The overall design emphasizes layered blur, translucent materials, and coordinated experiment-themed colors.

This project redefines virtual lab interaction on PC

Traditional virtual experiment platforms often stop at 2D menus and button clicks, which makes it difficult to leverage the strengths of large displays, multi-window workflows, and spatial interaction. The value of this solution lies in turning the “experiment workflow” into “environmental interaction.”

Its core design combines three layers of capability: immersive lighting communicates atmosphere and state, floating navigation enables low-friction switching, and dual AR engines map facial expression and body posture into experiment control commands.

Project capabilities at a glance

  • Immersive title bar: Dynamically switches theme colors and warning light effects based on experiment type and safety level.
  • Floating tab navigation: Supports bottom-floating layout, opacity adjustment, and fast switching across multiple experiments.
  • Face AR + Body AR: Facial expressions define precision modes, while body actions define actual operations.
  • Multi-window coordination: The main scene, data logging, instrument panel, and molecular view stay synchronized.

The architecture decouples UI, AR, and window management into three layers

From an engineering perspective, the project uses ArkUI as the presentation layer, AREngineKit as the recognition layer, and WindowManager as the orchestration layer for PC multi-window workflows. This separation allows visuals, interaction logic, and window lifecycle management to evolve independently.

The key to the architecture is not any individual component, but the state bus. The code uses AppStorage extensively as a global state bridge to synchronize themes, track window focus, and dispatch experiment actions.

// Synchronize global state across components and windows through AppStorage
AppStorage.setOrCreate('current_theme', '#4ECDC4'); // Sync the current experiment theme color
AppStorage.setOrCreate('window_action', 'open_data'); // Trigger the data window to open

// AppStorage has no callback-style watch API; a component observes the key
// with @StorageLink plus @Watch and reacts in the handler instead
@StorageLink('fusion_command') @Watch('onFusionCommand') fusionCommand: FusionCommand | undefined = undefined;

onFusionCommand(): void {
  if (this.fusionCommand) {
    this.executeCommand(this.fusionCommand); // drive the experiment scene
  }
}

This code unifies events across UI, AR, and multi-window logic through a lightweight state channel.

Technical layering overview

  • UI Layer: title bar, experiment scene, floating navigation (key technologies: ArkUI, HdsNavigation, XComponent)
  • AR Engine Layer: expression recognition, skeletal tracking, command fusion (key technologies: AREngineKit, CameraKit)
  • Window Layer: child window creation, theme coordination, focus synchronization (key technologies: window, AppStorage)

Immersive lighting translates experiment state into visual language

In HarmonyOS 6, SystemMaterialEffect.IMMERSIVE is more than a visual enhancement. In this project, it acts as a state communicator: different scientific disciplines map to different primary colors, while different safety levels map to different edge glows and shadow colors.

For example, chemistry experiments can use purple tones combined with warning orange-red, biology experiments can lean green, and astronomy experiments can use deep blue. This design improves the speed at which users recognize experimental context.

HdsNavigation({
  title: `量子实验室 - ${this.currentExperiment}`,
  systemMaterialEffect: SystemMaterialEffect.IMMERSIVE,
  backgroundOpacity: this.isWindowFocused ? 0.85 : 0.55 // Stronger lighting effect for the focused window
})
.shadow({
  radius: this.isWindowFocused ? 20 : 5,
  color: this.getSafetyColor() // Safety level determines the shadow color
});

This code uses system materials, opacity, and shadow color to convert window focus and experiment risk into perceptible UI feedback.
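
The getSafetyColor() helper and the discipline-to-color mapping are referenced but not shown; a minimal sketch, assuming a safetyLevel field and the color choices described above (all names and hex values here are assumptions, not project code):

private getThemeColor(discipline: string): string {
  // Assumed discipline keys and hex values, following the mapping described above
  switch (discipline) {
    case 'chemistry': return '#9B59B6'; // purple tones, paired with warning orange-red
    case 'biology':   return '#2ECC71'; // green tones
    case 'astronomy': return '#1B2A6B'; // deep blue
    default:          return '#4ECDC4'; // fall back to the default experiment theme
  }
}

private getSafetyColor(): string {
  // Assumed safetyLevel field: higher levels shift the shadow toward warning colors
  return this.safetyLevel >= 3 ? '#E74C3C' : this.safetyLevel === 2 ? '#E67E22' : '#2ECC71';
}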

The dual AR engines separate intent recognition from action execution

The most distinctive part of this solution is that Face AR handles “intent,” while Body AR handles “action.” A frown can represent precision mode, while a smile can represent fast mode. Actions such as grabbing, pouring, stirring, and observing are recognized through skeletal tracking and gesture recognition.

This approach is more stable than using gestures alone because the system first determines “how to act” and then determines “what action to perform,” which reduces false triggers and control drift.
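
As a rough illustration of that two-stage split, a hedged sketch of the first stage might look like this (the expression labels and mode names are assumptions; the real values come from the project's Face AR pipeline):

// Hypothetical precision modes; the project's actual values are not shown in the source
enum PrecisionMode {
  PRECISE = 'precise',
  NORMAL = 'normal',
  FAST = 'fast'
}

// Stage 1: the facial expression decides "how to act" (precision mode);
// Stage 2 (Body AR) then decides "what action to perform"
private resolvePrecision(expression: string): PrecisionMode {
  if (expression === 'frown') { return PrecisionMode.PRECISE; } // frown selects precision mode
  if (expression === 'smile') { return PrecisionMode.FAST; }    // smile selects fast mode
  return PrecisionMode.NORMAL;
}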

AR fusion control flow

const command: FusionCommand = {
  precision: this.currentPrecision, // Facial expression determines the precision mode
  operation: operation, // Body posture determines the operation type
  intensity: intensity, // Movement intensity determines the magnitude of change
  targetPosition: position,
  confidence: this.calculateFusionConfidence()
};

AppStorage.setOrCreate('fusion_command', command); // Broadcast the fused command

This code combines expression, posture, intensity, and target position into a single control command that the experiment scene can consume directly.
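
The FusionCommand type itself is not defined in the snippet; a plausible shape, inferred from the fields used above (the exact field types and the Vector3 helper are assumptions):

// Inferred from the fields used above; types are assumptions, not project code
interface Vector3 {
  x: number;
  y: number;
  z: number;
}

interface FusionCommand {
  precision: string;        // precision mode derived from the facial expression
  operation: string;        // operation type derived from the body posture
  intensity: number;        // magnitude of the change, from movement intensity
  targetPosition: Vector3;  // target location in the experiment scene
  confidence: number;       // fused confidence score
}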

Multi-window coordination brings virtual experiments closer to a real research desktop

The value of a PC is not just its large display, but also its multi-window workflow. The project uses the main window to host the experiment itself, then opens data logging, instrument control, and 3D molecular view windows on demand to form a “main view + tool views” layout.

The window manager also implements focus awareness and theme synchronization. When any window gains focus, the lighting theme can propagate back into global state, improving consistency across windows.
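
One way to wire up that focus awareness, sketched with the public window event API (the window_focused key and the helper itself are assumptions, not project code):

import { window } from '@kit.ArkUI';

// Sketch: push a window's focus state and theme back into global state
function bindFocusSync(win: window.Window, themeColor: string): void {
  win.on('windowEvent', (event: window.WindowEventType) => {
    if (event === window.WindowEventType.WINDOW_ACTIVE) {
      AppStorage.setOrCreate('window_focused', true);
      AppStorage.setOrCreate('current_theme', themeColor); // propagate this window's theme
    } else if (event === window.WindowEventType.WINDOW_INACTIVE) {
      AppStorage.setOrCreate('window_focused', false);
    }
  });
}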

async openDataWindow(): Promise<void> {
  await this.createToolWindow({
    name: 'DataWindow',
    title: '实验数据',
    width: 500,
    height: 400,
    themeColor: '#4ECDC4' // Child window reuses the experiment theme color
  });
}

This code shows the minimum model for creating a child window. The key point is that size, purpose, and theme coordination are encapsulated together.
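
The createToolWindow() helper is not part of the snippet; a minimal sketch built on the sub-window APIs (the config shape, the stored windowStage reference, and the 'pages/ToolWindow' path are assumptions):

import { window } from '@kit.ArkUI';

interface ToolWindowConfig {
  name: string;
  title: string;       // window title; applying it is omitted in this sketch
  width: number;
  height: number;
  themeColor: string;
}

// Sketch: create a child window from the WindowStage saved in onWindowStageCreate
async createToolWindow(config: ToolWindowConfig): Promise<void> {
  const sub: window.Window = await this.windowStage.createSubWindow(config.name);
  await sub.setUIContent('pages/ToolWindow');            // load the tool window page
  await sub.resize(config.width, config.height);         // apply the requested size (px)
  AppStorage.setOrCreate('current_theme', config.themeColor); // reuse the experiment theme
  await sub.showWindow();
}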

Environment setup should prioritize dependency and permission completeness

At the dependency level, you need at least AbilityKit, ArkUI, UIDesignKit, AREngineKit, GraphicsKit, and SensorKit. At the permission level, camera and microphone access provide the foundation for AR interaction, while network permission supports resource loading and future collaboration features.

Minimal dependency and permission examples

{
  "dependencies": {
    "@kit.ArkUI": "^6.1.0",
    "@kit.UIDesignKit": "^6.1.0",
    "@kit.AREngineKit": "^6.1.0"
  }
}

This configuration defines the core UI and AR runtime foundation for the project.

{
  "requestPermissions": [
    { "name": "ohos.permission.CAMERA" },
    { "name": "ohos.permission.MICROPHONE" },
    { "name": "ohos.permission.INTERNET" }
  ]
}

This configuration ensures that AR capture and networking capabilities are available at runtime.
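
Declaring the permissions in module.json5 is only half of the story; the user-grant permissions also have to be requested at runtime. A minimal sketch (the function name and grant handling are assumptions):

import { abilityAccessCtrl, common } from '@kit.AbilityKit';

// Sketch: request the user-grant permissions before starting the AR sessions
async function requestArPermissions(context: common.UIAbilityContext): Promise<boolean> {
  const atManager = abilityAccessCtrl.createAtManager();
  const result = await atManager.requestPermissionsFromUser(context, [
    'ohos.permission.CAMERA',
    'ohos.permission.MICROPHONE'
  ]);
  // authResults holds 0 for each permission the user granted
  return result.authResults.every((code: number) => code === 0);
}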

Performance optimization must focus on dual AR and multi-window workloads

Running Face AR and Body AR at the same time can significantly increase camera, GPU, and UI composition load. To manage this, reduce resolution and frame rate, and enable lazy refresh for inactive windows.

In addition, pause 3D scenes or dynamic Canvas rendering when a page is hidden to avoid continuous background resource consumption.

private optimizeDualAR(): void {
  if (this.faceSession) {
    this.faceSession.setCameraConfig({ fps: 10, resolution: { width: 320, height: 240 } }); // Reduce face tracking overhead
  }
  if (this.bodySession) {
    this.bodySession.setCameraConfig({ fps: 15, resolution: { width: 480, height: 360 } }); // Balance motion recognition accuracy and performance
  }
}

This code controls the performance pressure introduced by concurrent dual AR through differentiated frame rate and resolution strategies.
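
For the earlier point about pausing dynamic Canvas rendering when a page is hidden, a minimal sketch using the page lifecycle callbacks (renderTimer and drawScene are assumed names, not project code):

// Sketch: stop the Canvas redraw loop while the page is hidden
onPageShow(): void {
  if (this.renderTimer === -1) {
    this.renderTimer = setInterval(() => this.drawScene(), 33); // roughly 30 fps redraw
  }
}

onPageHide(): void {
  clearInterval(this.renderTimer); // release the background redraw work
  this.renderTimer = -1;
}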

This solution can expand into education, research, and training platforms

At its core, “Quantum Lab” is not just a single demo, but an interaction framework template. It can be extended into medical simulation, industrial training, remote collaborative experiments, and AI-assisted teaching scenarios.

If you later integrate the distributed soft bus, AI assistants, and VR devices, the platform can evolve from a standalone immersive lab into a cross-device, multi-role, collaborative experiment operating system.

FAQ

1. Why separate Face AR and Body AR instead of using gestures alone?

Face AR is better suited for identifying user intent and control precision, while Body AR is better suited for recognizing continuous actions. When fused, the interaction feels more natural and is less likely to produce incorrect recognition.

2. What types of HarmonyOS devices are best suited for this solution?

It works best on PCs or 2-in-1 devices with large displays, multi-window support, and strong GPU performance. If you enable both AR engines at the same time, prioritize high-performance physical devices.

3. What is the key implementation detail behind synchronized lighting effects across multiple windows?

The core requirement is a unified state source. By using AppStorage to synchronize theme colors, focus state, and window actions, the main window and child windows can maintain consistent visual feedback and interaction context.

Core Summary: This article walks through an immersive scientific experiment platform built on HarmonyOS 6 (API 23). It covers floating navigation, immersive lighting, Face AR and Body AR dual-engine fusion, multi-window coordination, and performance optimization, making the solution well suited as the basis of a PC-based virtual simulation laboratory.