G2 Genie

Agibot G2 “Genie” is a wheeled humanoid for industrial-grade, people-safe interaction, pairing 7-DoF arms, 360° perception (LiDAR, RGB-D, ultrasonics, mic array), and on-device embodied AI to handle manipulation, guidance, and service tasks.

Description

The Agibot G2 Genie, launched on October 16, 2025 by AGIBOT Innovation (Shanghai) Technology Co., Ltd. (also known as Zhiyuan Robotics), is a wheeled humanoid robot designed for industrial-grade, people-safe interaction. Standing 175 cm tall and weighing 55 kg, it pairs a humanoid upper body with an omnidirectional wheeled base for stability, mobility up to 7 km/h, and compatibility with over 95% of factory floor conditions.

Its manipulation hardware comprises dual 7-degree-of-freedom (DoF) arms with full-arm high-precision torque sensing, a 3-DoF anthropomorphic waist for human-like bending, twisting, and swaying, and dexterous OmniHand Pro hands with 12 DoF across 10 fingers, each fingertip carrying a visuo-tactile sensor. This enables sub-millimeter precision assembly, force-controlled manipulation up to 5 kg per arm, and delicate tasks such as inserting RAM modules or holding fragile objects like raw eggs.

Perception comes from a 360° spatial-awareness suite: dual LiDARs for omnidirectional obstacle avoidance and hazard prediction, RGB-D cameras, ultrasonic sensors, and an 8-microphone array supporting multi-user voice conversation and eye-gaze tracking. Expressive facial animations and whole-body gestures enhance human-robot interaction, letting the robot explain its actions to bystanders.

At the core is an NVIDIA Jetson Thor T5000 compute platform delivering up to 2070 TFLOPS (FP4) or ~200 TOPS (INT8), enabling low-latency (<10 ms glass-to-action) on-device execution of large embodied AI models without cloud dependency. The AI stack combines Agibot's proprietary WorkGPT/Genie multimodal mission-level models, enhanced by vision-language models (VLMs), with the GO-1 generalist embodied foundation model and its three-layer "brain" architecture: a VLM for perception, a Latent Planner for task planning, and an Action Expert for precise execution.

The Genie Envisioner (GE-1) world model supports predictive simulation, policy learning, and neural rendering, letting the robot rehearse actions in virtual environments. Deployment is accelerated by Genie RL reinforcement learning and the LinkCraft zero-code platform, which let non-experts teach tasks by human demonstration in as little as one hour; Agibot reports 99% success rates in precision assembly, e.g. completing a component in 15 seconds, 3 seconds faster than a human. Built with 100% automotive-grade components and IP42 ingress protection, the robot tolerates temperatures from -15°C to 50°C and electrostatic discharge, and carries dual hot-swappable batteries with self-charging for 24/7 autonomous operation. The wheeled design deliberately favors industrial applications over bipedal locomotion, concentrating on manipulation, guidance, and service tasks.

Real-world deployments began shortly after launch, with commercial pilots in automotive manufacturing (e.g., seatbelt lock-cylinder assembly and material handling alongside humans) and consumer-electronics precision tasks. By late 2025, Agibot reported ongoing rollouts in these sectors, building on the company's first industrial deployment in July 2025 (with earlier models) and mass-production scaling to thousands of units. Agibot's AIDEA Giga Data Factory and the AgiBot World dataset (over 1 million real-machine interactions) fuel continuous AI improvements, positioning the G2 Genie as a benchmark for embodied AI in logistics, research, education, factories, retail, and public venues. Its force-compliant, interactive design keeps it safe in human-collaborative environments, marking a shift from rigid industrial arms to versatile, intelligent operators.
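The "rehearsal" role attributed to the GE-1 world model above can be sketched generically: a learned dynamics model predicts the outcome of candidate action sequences, and plans are scored in simulation before any real actuation. The toy dynamics and cost function below are illustrative stand-ins, not Agibot's implementation.

```python
# Illustrative sketch of world-model action rehearsal (not GE-1 itself;
# the 1-D dynamics and cost below are toy stand-ins).

def rollout(dynamics, state, actions):
    """Predict the state trajectory an action sequence would produce."""
    trajectory = [state]
    for a in actions:
        state = dynamics(state, a)
        trajectory.append(state)
    return trajectory

def best_plan(dynamics, state, candidate_plans, cost):
    """Rehearse each candidate plan in simulation and pick the cheapest."""
    return min(candidate_plans,
               key=lambda plan: cost(rollout(dynamics, state, plan)))

# Toy 1-D example: reach position 10 from 0; actions are position steps.
dynamics = lambda s, a: s + a
cost = lambda traj: abs(traj[-1] - 10.0)
plans = [[1.0] * 5, [2.0] * 5, [3.0] * 5]
print(best_plan(dynamics, 0.0, plans, cost))  # [2.0, 2.0, 2.0, 2.0, 2.0]
```

The same rehearse-then-act loop generalizes: with a neural dynamics model and a task-specific cost, the robot can discard plans that fail in imagination instead of on the factory floor.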

Key Features

High-Precision Force-Controlled Arms

Dual 7-DoF arms with full torque sensing and cross-shaped wrists enable sub-millimeter assembly, 5 kg payload, and impedance control for safe human interaction.
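The impedance control mentioned here can be sketched as a virtual spring-damper law: commanded force is proportional to position and velocity error, so the arm yields compliantly when pushed rather than resisting rigidly. The gains below are arbitrary illustration values, not G2 parameters.

```python
# Minimal 1-DoF impedance-control sketch: the joint behaves like a
# spring-damper around a target pose, yielding under external force.
# Gains are illustrative, not Agibot's.

def impedance_force(x, v, x_target, stiffness=50.0, damping=10.0):
    """Commanded force from a virtual spring-damper (toy units)."""
    return stiffness * (x_target - x) - damping * v

# At the target and at rest, no force is commanded:
print(impedance_force(x=0.2, v=0.0, x_target=0.2))  # 0.0

# Displaced 1 cm past the target, the restoring force is gentle
# rather than rigid, which is what makes shared-space contact safe:
print(round(impedance_force(x=0.21, v=0.0, x_target=0.2), 3))  # -0.5
```

Lowering the stiffness gain trades positioning accuracy for compliance, which is the basic dial behind "people-safe" force control.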

360° Omnidirectional Perception

Dual LiDARs, RGB-D cameras, ultrasonics, 8-mic array, and visuo-tactile fingertips provide comprehensive spatial awareness, obstacle avoidance, and multi-modal interaction.

On-Device Embodied AI

The NVIDIA Jetson Thor T5000 (~200 TOPS) runs the GO-1 foundation model, the VLM-enhanced Genie stack, and the GE-1 world simulator entirely on-device, with <10 ms latency for mission planning and execution.
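The three-layer split described for GO-1 (VLM perception, Latent Planner, Action Expert) amounts to a staged pipeline. A generic sketch, with every class and function name invented for illustration rather than taken from Agibot's API:

```python
# Generic sketch of a three-stage embodied-AI pipeline in the spirit of
# the GO-1 description (perception -> planning -> action). All names are
# hypothetical; this is not Agibot's software interface.

from dataclasses import dataclass

@dataclass
class SceneDescription:
    """Output of the perception stage: a structured scene summary."""
    objects: list

def perceive(image, instruction):
    """Stand-in for the VLM layer: pixels + language -> scene summary."""
    return SceneDescription(objects=["ram_stick", "motherboard"])

def plan(scene, instruction):
    """Stand-in for the Latent Planner: mission -> ordered sub-tasks."""
    return ["locate " + scene.objects[0], "align", "insert"]

def act(subtask):
    """Stand-in for the Action Expert: sub-task -> low-level command."""
    return "motor_cmd(" + subtask + ")"

def run_mission(image, instruction):
    """Single voice command in, sequence of motor commands out."""
    scene = perceive(image, instruction)
    return [act(step) for step in plan(scene, instruction)]

print(run_mission(b"", "insert the RAM stick"))
```

The point of the layering is that each stage can be trained and swapped independently, while the whole chain stays on-device to meet the stated latency budget.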

Rapid Learning and Deployment

Genie RL and LinkCraft enable task mastery in as little as one hour via human demonstration, and support zero-code customization plus an SDK for industrial integration.

24/7 Industrial Durability

IP42 automotive-grade build, dual hot-swap batteries, autonomous charging, and extreme environment tolerance (-15°C to 50°C) for continuous operation.

Human-Like Interaction

Expressive face, gestures, eye-gaze tracking, multi-user conversation, and explanatory behaviors enhance collaborative usability.

Specifications

Availability: Prototype
Manufacturer: Agibot (Zhiyuan Robotics)
Nationality: China
Website: https://www.agibot.com
Height: 175 cm
Weight: 55 kg
Max Speed: 7 km/h
Payload per Arm: 5 kg
DoF, Total: 49+
DoF, Arms: 7 each
DoF, Waist: 3
DoF, Hands: 12
Number of Fingers: 10
Compute: NVIDIA Jetson Thor T5000, up to 2070 TFLOPS (FP4) / ~200 TOPS (INT8)
Sensors: Dual LiDARs, RGB-D camera, ultrasonics, 8-mic array, visuo-tactile fingertips
Camera: RGB-D (resolution N/D)
Perception: 360° omnidirectional
Connectivity: N/D (likely Wi-Fi/Ethernet, typical of industrial platforms)
LLM Integration: WorkGPT/Genie multimodal mission-level model; VLM-enhanced
AI Models: GO-1 foundation model, WorkGPT/Genie multimodal VLM stack, GE-1 world model
OS: Lingqu OS (embodied intelligence)
Latency: <10 ms (glass-to-action)
Motors: High-precision torque-sensing joints (vendor not disclosed)
Materials: 100% automotive-grade components
IP Rating: IP42
Battery: Dual hot-swappable, autonomous charging, 24/7 operation (kWh N/S)
Mobility: Omnidirectional wheeled base
Environment Tolerance: -15°C to 50°C, electrostatic protection
Safe With Humans: Yes
Manipulation Performance: 3
Navigation Performance: 3
Main Market: Industry, logistics, research & education
Verified: Not verified

Frequently Asked Questions

What is the primary application of the G2 Genie?

Designed for industrial manufacturing, logistics, and service tasks, it excels at precision assembly, inspection, material handling, and guided tours. Deployed in automotive (seatbelt assembly) and consumer electronics (RAM insertion), it relieves humans of repetitive, risky work, with reported success rates of 99%.

What AI models power the G2 Genie?

It uses Agibot's GO-1 generalist embodied foundation model with a 3-layer brain (VLM perception, Latent Planner, Action Expert), Genie multimodal WorkGPT stack, and GE-1 world model for prediction and simulation. These enable complex, long-sequence tasks from single voice commands.

How does the G2 Genie ensure human safety?

Full-arm torque sensing, impedance control, and real-time force detection allow compliant responses to external forces. 360° perception predicts hazards, while wheeled stability and IP42 build support people-safe operation in shared spaces.

What are the compute capabilities?

Powered by NVIDIA Jetson Thor T5000 with up to 2070 TFLOPS (FP4) or ~200 TOPS, it processes multi-sensor data locally for <10 ms latency, running large VLMs/LLMs without cloud reliance for real-time decisions.

Can non-experts deploy and program it?

Yes, Genie RL and LinkCraft zero-code tools enable rapid setup and task teaching via human demos in under 1 hour. SDKs allow customization, with autonomous navigation and charging for plug-and-play industrial use.

What is the battery life and mobility?

Dual hot-swappable batteries support 24/7 operation with autonomous docking. Omnidirectional wheels achieve 7 km/h max speed, navigating 95% of factory floors with high passability.
