Maker H01
GigaAI

Maker H01 is a wheeled humanoid from GigaAI — pairing dual 7-DOF arms, rich sensing, and agile mobility to bring embodied AI into real-world tasks with flexibility and precision.

Description

Maker H01, developed by GigaAI (极佳视界), a Chinese embodied-intelligence startup backed by Huawei's Hubble Investment and others, is a wheeled humanoid robot unveiled on November 26, 2025. Positioned as a "Physical AGI Native Platform," it bridges advanced AI models with physical embodiment, pursuing "model-to-body" convergence for real-world tasks.

Departing from bipedal designs, its omnidirectional all-wheel-drive chassis prioritizes stability, speed (up to 1.5 m/s, about 5.4 km/h, with some reports citing up to 8 km/h), and endurance (4 hours per charge), making it well suited to dynamic environments such as factories, homes, and service settings. The architecture integrates dual 7-DOF bionic arms (28 DOF in total, excluding end-effectors) with a vertical reach of 0-2 m and a payload of 5 kg maximum (3 kg rated) per arm. Modular end-effectors, such as multi-finger grippers or dexterous hands, enable precise manipulation: pick-and-place, stacking, pouring, and collaborative operations. The upper body emphasizes manipulation and upper-limb capability for physical interaction and data collection.

Perception relies on a multi-modal sensor array: 5 RGB cameras and 4 RGBD cameras across the head, chest, and hands for visual-depth fusion, plus chassis-mounted 360° LiDAR and ultrasonic sensors for omnidirectional obstacle avoidance and safe human interaction (IP20 protection). This sensing feeds a low-latency control loop (250-450 ms glass-to-action).

Central to its intelligence is GigaAI's full-stack software ecosystem. GigaBrain-0, an open-source Vision-Language-Action (VLA) model, acts as the "embodied brain," producing end-to-end decisions from multi-modal inputs (images, depth, text, robot state) through to structured task and motion plans. It is reported to excel at 3D spatial reasoning, long-horizon tasks (e.g., desk tidying, coffee making), and generalization, outperforming state-of-the-art benchmarks. GigaWorld-0, the world-model platform, generates high-fidelity synthetic data (a reported ~300% generalization boost), addressing data cost, Sim2Real gaps, and simulator error through physics-accurate simulation. Their combination with reinforcement learning targets a 95% success rate across 90% of common tasks, embodying the "World Model + Action Model + RL" paradigm.

Hardware specifications include roughly 162 cm height, 88 kg weight (aluminum-alloy frame with ABS shells), a Linux-based OS with custom software stacks, and Bluetooth, Ethernet, and Wi-Fi connectivity; no specific CPU/GPU or battery kWh/voltage figures have been disclosed. Deployment is accelerating: the robot is at the prototype stage with mass production underway, alongside partnerships with top OEMs, humanoid innovation centers, and scenario leaders in industry, service, and home settings. Funded with 500M+ CNY raised over three months, it targets home service, hospitality, logistics, education, and R&D, with GigaAI projecting physical AI's "ChatGPT moment" within 2-3 years.
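The "World Model + Action Model + RL" loop described above can be caricatured as a simple perceive-plan-act cycle. The sketch below is purely illustrative: the class names, the `plan`/`act` split, and the stub policy are hypothetical stand-ins, not GigaAI's actual API, and the toy planner just splits an instruction into sub-tasks.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Multi-modal input bundle mirroring the described sensor suite."""
    rgb_frames: list      # 5 RGB camera frames
    rgbd_frames: list     # 4 RGBD camera frames
    lidar_scan: list      # 360-degree chassis LiDAR points
    joint_states: list    # proprioception for the 28 DOF
    instruction: str      # natural-language task command

class StubVLA:
    """Hypothetical stand-in for a GigaBrain-0-style policy: plan, then act."""
    def plan(self, obs):
        # Toy long-horizon planner: split the instruction into sub-tasks.
        return obs.instruction.split(" then ")

    def act(self, plan, obs):
        # One dummy joint-command vector per planned sub-task.
        return [[0.0] * len(obs.joint_states) for _ in plan]

def control_step(model, obs):
    """One glass-to-action cycle: multi-modal observation in, joint commands out.

    On the real robot this whole cycle would have to fit the quoted
    250-450 ms glass-to-action latency budget.
    """
    plan = model.plan(obs)
    return model.act(plan, obs)

obs = Observation([None] * 5, [None] * 4, [], [0.0] * 28,
                  "tidy the desk then pour coffee")
actions = control_step(StubVLA(), obs)  # two sub-tasks, one 28-dim command each
```

The plan-then-act factoring matches the listing's description of "structured task/motion planning" feeding low-level control; a real VLA model would replace both stub methods with learned inference.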

Key Features

Wheeled Omnidirectional Mobility

All-wheel-drive chassis with 360° LiDAR and ultrasonic sensors enables agile navigation, obstacle avoidance, and stable movement at up to 1.5 m/s in dynamic environments.

Dual 7-DOF Bionic Arms

28 total DOF (excluding end-effectors), 0-2 m vertical reach, and 5 kg maximum payload per arm for dexterous manipulation, with adaptable grippers or dexterous hands for complex tasks such as stacking and pouring.

Rich Multi-Modal Sensing

9 cameras (5 RGB + 4 RGBD across head, chest, and hands) plus 360° LiDAR and ultrasonic sensors provide comprehensive perception for safe human interaction and precise operations.

GigaBrain-0 VLA Control

End-to-end embodied brain for structured planning and action from vision/language inputs, supporting long-sequence generalization and high success rates.

GigaWorld-0 Data Engine

World model generates scalable synthetic data, boosting training efficiency and closing Sim2Real gaps for rapid model iteration.
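A data engine of this kind is commonly built on domain randomization over simulated rollouts. The generator below is a generic sketch of that idea under stated assumptions (the parameter names and ranges are invented for illustration), not GigaWorld-0's implementation.

```python
import random

def generate_synthetic_episodes(base_task, n_episodes, seed=0):
    """Emit randomized training episodes derived from one base task.

    Randomizing physics and appearance parameters per episode is the
    generic mechanism by which simulated data narrows the Sim2Real gap.
    """
    rng = random.Random(seed)  # fixed seed keeps datasets reproducible
    episodes = []
    for _ in range(n_episodes):
        episodes.append({
            "task": base_task,
            "friction": rng.uniform(0.4, 1.2),        # randomized physics
            "lighting": rng.uniform(0.2, 1.0),        # randomized appearance
            "pose_jitter_m": rng.uniform(0.0, 0.05),  # randomized object placement
        })
    return episodes

data = generate_synthetic_episodes("pick-and-place", 300)
```

Scaling `n_episodes` is cheap relative to real-robot teleoperation, which is the economic argument the listing makes for world-model-generated data.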

Physical AGI Native Design

Modular hardware-software stack optimized for research, prototyping, and deployment in home, industrial, and service scenarios.

Specifications

Availability: Prototype (mass production underway)
Nationality: China
Manufacturer: GigaAI
Height: 162 cm (some listings state 160 cm)
Weight: 88 kg (some listings state 64 kg)
DOF, Overall: 28 (excluding end-effectors)
DOF, Arms: 14 (7 per bionic arm)
DOF, Hands: 14
Fingers: 5 per hand
Payload per Arm: 5 kg max / 3 kg rated (arm strength listed elsewhere as 5-10 kg)
Reach: 0-2 m vertical
Mobility: Omnidirectional all-wheel-drive chassis
Max Speed: 1.5 m/s (5.4 km/h); some reports cite up to 8 km/h
Runtime per Charge: 4 hours (battery kWh/voltage not disclosed)
Sensors: 5× RGB cameras, 4× RGBD cameras (head/chest/hands), 360° LiDAR, ultrasonic sensors (specific models N/A)
Vision: Multi-modal RGB/RGBD fusion
Processors: Not disclosed (high-end GPU for VLA/world-model inference inferred)
Operating System: Likely Linux + custom robot control stack
Connectivity: Bluetooth, Ethernet, Wi-Fi
Latency, Glass to Action: 250-450 ms
End Effectors: Multi-finger grippers / dexterous hands (modular)
Main Structural Material: Aluminum alloy frame + ABS composite shells
Ingress Protection: IP20
Safe with Humans: Yes
Manipulation Performance: 4
Navigation Performance: 3
Main Market: Home service, hospitality/reception, workplace assistance, light logistics, education
Color: White
Shipping Size: 170 cm × 70 cm × 50 cm
Verified: Not verified

Frequently Asked Questions

What are the key dimensions and payload of Maker H01?

Standing at 162 cm tall and weighing 88 kg, it offers dual arms with 5 kg maximum payload (3 kg rated) and 0-2 m vertical reach, balancing power and agility for real-world tasks in varied spaces like homes and factories.

What AI systems drive Maker H01's intelligence?

Powered by GigaBrain-0 (an open-source VLA model for end-to-end control) and GigaWorld-0 (a world model for synthetic data generation), integrated under the "World Model + Action Model + RL" paradigm, enabling strong generalization, spatial reasoning, and task success in complex scenarios.

What sensors does the robot use for perception?

Equipped with 5 RGB cameras, 4 RGBD cameras (head, chest, hands), 360° chassis LiDAR, and ultrasonic sensors, providing fused visual-depth-spatial data for navigation, manipulation, and safe human co-existence.

Is Maker H01 suitable for human-shared environments?

Yes, its rich sensing (LiDAR, cameras, ultrasonics), low-latency control (250-450 ms), and wheeled stability ensure safe operation around humans, with IP20 protection and compliant motion for collaborative tasks.

What is the battery life and speed?

Offers 4 hours runtime per charge with omnidirectional mobility up to 1.5 m/s (5.4 km/h), suitable for extended shifts in service, logistics, or research, prioritizing endurance over legged designs.
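The quoted speed and runtime imply a rough upper bound on distance per charge. A quick arithmetic check (the factor 3.6 converts m/s to km/h; this ignores stops, turning, and load, so it is a ceiling, not an operating estimate):

```python
def max_range_km(speed_m_s, runtime_h):
    """Upper-bound travel distance at constant speed: m/s -> km/h, times hours."""
    return speed_m_s * 3.6 * runtime_h

kmh = 1.5 * 3.6               # 1.5 m/s is 5.4 km/h
reach = max_range_km(1.5, 4)  # at most 21.6 km on one 4-hour charge
```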

What markets does Maker H01 target?

Designed for home service, hospitality/reception, workplace assistance, light logistics, education, and embodied AI research; mass production started with OEM and center partnerships for rapid deployment.
