ALPHA

ALPHA, developed by TeknTrash, is a humanoid robot designed to transform waste handling by automating repetitive, unsafe tasks in recycling facilities. With advanced AI, 6‑DOF arms, 3D vision, and cloud connectivity, it can sort, lift, and navigate autonomously to increase efficiency and reduce contamination in waste streams.

Description

ALPHA (Automated Litter Processing Humanoid Assistant), developed by UK-based TeknTrash Robotics, is a specialized humanoid robot engineered to automate the repetitive, hazardous, and unsanitary waste-handling tasks traditionally performed by humans in recycling facilities.

Standing 200 cm tall and weighing 90 kg, ALPHA has a modular architecture: a wheeled mobile base for autonomous navigation; a lifting column for height adjustment (up to 2000 mm at 150 mm/s); dual 6-DOF arms with grippers rated for a 5 kg total payload; a head unit with a 2-DOF binocular depth camera for global object identification and obstacle avoidance; and a chest-mounted Nvidia Jetson AGX Orin 64GB compute module delivering 275 TOPS of AI performance through 2048 CUDA cores, 64 Tensor Cores, a 12-core Arm Cortex-A78AE CPU, and integrated accelerators (2x NVDLA, PVA v2.0).

The sensory suite includes hyperspectral cameras for material identification beyond visible light (UV, IR, X-ray spectra); arm-mounted 3D cameras (85° H × 58° V FOV, 0.25-2.5 m depth range); base lidar, a depth camera, and ultrasonic sensors (one safety edge plus two units); an IMU and encoders for navigation; and a monocular camera and microphone on the lifting column.

Running Ubuntu 24 with permanent WiFi connectivity, ALPHA supports team-based operation: a central conveyor-belt camera allocates tasks through Google Cloud infrastructure, offloading heavy computation (6+ MB/s of hyperspectral data per robot) to Compute Engine GPUs for real-time inference and keeping glass-to-action latency around 20 ms.

The AI stack is built on Nvidia Isaac GR00T Vision-Language-Action (VLA) models trained on data from HoloLab, TeknTrash's VR-based capture system, which uses Meta Quest 3 headsets to record operatives' motions in LeRobot format. Over 49 TB of structured and unstructured data (hyperspectral images, motion logs, video streams) is processed through Cloud Data Fusion for model fine-tuning, enabling ALPHA to match human picking rates (30-40 items/min) without fatigue, reach up to 95% material purity versus a typical 75% for human sorters, and cut contamination by 25%. The cloud-native design supports Robots-as-a-Service (RaaS), remote firmware updates, and scaling to 1,000+ sites.

Real-world deployment began in 2025 with a pilot at Sharp Group's facility in Rainham, East London (2,800 tonnes/week throughput), addressing labor shortages for 24/7 operations. HoloLab captured real-time sorting data from the facility's workers, training ALPHA for hyperspectral waste tracking, dexterous picking (grippers rather than suction), and autonomous mobility. Reported outcomes include recovery-efficiency gains of 10%+, granular EPR compliance data, and reduced injuries in a sector whose fatality rate is 17x the all-industry average. Although still a prototype (IP32 ingress protection, 7-hour runtime), ALPHA outperforms fixed robotic arms by moving freely to retrieve missed items, positioning it to take over waste processing, carrying, and policing across entire facilities.
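As a back-of-the-envelope check on the cloud-offload figures above, the aggregate uplink a robot team needs can be estimated from the stated 6 MB/s per-robot hyperspectral stream. This is a minimal sketch; the team sizes and the 20% protocol-overhead factor are illustrative assumptions, not published figures.

```python
# Rough uplink estimate for a team of ALPHA units streaming
# hyperspectral data to Compute Engine GPUs. Only the 6 MB/s
# per-robot figure comes from the text; the rest is assumed.
PER_ROBOT_MBPS = 6 * 8  # 6 MB/s -> 48 Mbit/s

def uplink_mbps(robots: int, overhead: float = 1.2) -> float:
    """Aggregate uplink in Mbit/s, assuming 20% protocol overhead."""
    return robots * PER_ROBOT_MBPS * overhead

for team in (2, 5, 10):
    print(f"{team} robots: {uplink_mbps(team):.0f} Mbit/s")
```

Even a ten-robot team stays in the range of a single dedicated gigabit link under these assumptions, which is consistent with the design choice of streaming raw sensor data out rather than scaling on-board compute.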

Key Features

Hyperspectral Vision

Advanced cameras capture UV, IR, and X-ray spectra to accurately identify dirty or incomplete waste items, enabling tracking from the start of the conveyor.
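TeknTrash has not published its classifier, but one simple way to use hyperspectral responses for material identification is nearest-signature matching. The reference spectra, band count, and material set below are invented for illustration only.

```python
import numpy as np

# Illustrative only: reference spectra and band values are made up.
# Each vector is a material's response across four example bands.
REFERENCE = {
    "PET":   np.array([0.82, 0.40, 0.15, 0.05]),
    "HDPE":  np.array([0.75, 0.55, 0.20, 0.04]),
    "metal": np.array([0.10, 0.12, 0.90, 0.60]),
}

def classify(pixel_spectrum: np.ndarray) -> str:
    """Return the material whose reference spectrum is closest (L2 norm)."""
    return min(REFERENCE, key=lambda m: np.linalg.norm(REFERENCE[m] - pixel_spectrum))

print(classify(np.array([0.80, 0.42, 0.16, 0.06])))  # closest to PET
```

A real pipeline would classify per pixel across many more bands and pool the result per object, but the matching principle is the same: spectra separate materials that look identical in visible light.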

Dual 6-DOF Dexterous Arms

Equipped with grippers (4 fingers, 2 DOF hands) for reliable picking without constant cleaning, supporting 5 kg payload and human-like manipulation.

Cloud-Coordinated Team Operation

Permanent WiFi connectivity allows central task allocation to robot teams, offloading compute to Google Cloud for scalable, low-latency performance.

Autonomous Wheeled Navigation

Lidar, depth cameras, ultrasonics, and IMU enable map building, obstacle avoidance, positioning, and auto-charging on dynamic factory floors.

Nvidia Isaac GR00T AI Training

VLA models trained on HoloLab VR captures reproduce human motions precisely, supporting 24/7 fatigue-free sorting at 30-40 picks/min.
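The HoloLab capture pipeline can be pictured as recording timestamped pose and hand-state frames per sorting episode. The record below is a simplified stand-in with invented field names; it is not the actual LeRobot schema, which the source does not detail.

```python
import json
from dataclasses import dataclass, asdict

# Simplified stand-in for a motion-capture frame as HoloLab might
# record it from a Quest 3 headset; fields are illustrative only.
@dataclass
class Frame:
    t: float                               # capture timestamp (s)
    wrist_xyz: tuple[float, float, float]  # tracked wrist position (m)
    gripper_open: bool                     # operative's hand state

def episode_to_jsonl(frames: list[Frame]) -> str:
    """Serialize one sorting episode as JSON Lines for cloud upload."""
    return "\n".join(json.dumps(asdict(f)) for f in frames)

frames = [Frame(0.00, (0.10, 0.40, 0.90), True),
          Frame(0.05, (0.10, 0.38, 0.85), False)]
print(episode_to_jsonl(frames))
```

Episodes like this, aggregated across many operatives, are what the VLA fine-tuning step consumes: the model learns to map camera observations to the next action frame.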

Specifications

Availability: Prototype
Nationality: UK
Website: https://www.tekntrash.com/
Main market: Recycling
Verified: Not verified
Height: 200 cm
Weight: 90 kg
Degrees of freedom (overall): 13
Degrees of freedom (hands): 2
Number of fingers: 4
Arms: Dual 6-DOF with 2-DOF grippers (4 fingers)
Payload (strength): 5 kg total
Max speed: 0.9 km/h
Runtime: 7 hours per charge (3-hour recharge)
Manipulation performance: 2
Navigation performance: 2
Safe with humans: Yes
Ingress protection: IP32
Main structural material: Aluminum frame, plastic components
Motor tech: Realman
Gear tech: Realman
Processor: Nvidia Jetson AGX Orin 64GB (12-core Arm Cortex-A78AE, 2048 CUDA cores, 64 Tensor Cores, 275 TOPS, 64 GB eMMC, 1 TB SSD)
Operating system: Ubuntu 24
Connectivity: WiFi (cloud via Google Compute Engine)
LLM integration: DeepSeek
Latency (glass to action): 20 ms
Camera resolution: 640×480
Sensors: Hyperspectral cameras (UV/IR/X-ray); arm-mounted 3D cameras (640×480, 85°×58° FOV, 0.25-2.5 m depth); head binocular depth camera; base depth camera and lidar; lifting-column monocular camera and microphone; ultrasonic sensors (1 safety edge + 2 units); IMU/encoder
Navigation: Deep-learning camera fusion, SLAM, dynamic obstacle avoidance
AI models: Nvidia Isaac GR00T VLA, DeepSeek LLM
Data rate: 6+ MB/s per robot (hyperspectral)
Thermal: N/A (passive cooling implied by the Orin design)

Frequently Asked Questions

What makes ALPHA suitable for recycling environments?

ALPHA is built for harsh conditions with IP32 protection, aluminum/plastic structure, grippers resistant to grime, and hyperspectral vision for contaminated waste. It reduces human exposure to hazards where injury rates are 17x industry average, enabling 24/7 operations without fatigue or errors.

How is ALPHA trained using HoloLab?

HoloLab uses Meta Quest 3 VR headsets to record operatives' motions and visuals in real-time, uploading LeRobot-format data to Google Cloud. This trains Nvidia Isaac GR00T VLA models, allowing ALPHA to replicate precise picking and sorting behaviors autonomously.

What is ALPHA's compute and AI capability?

Powered by the Nvidia Jetson AGX Orin 64GB (275 TOPS, 2048 CUDA cores, 64 Tensor Cores, 12-core Arm CPU), it runs deep learning for navigation and vision on board. Cloud integration handles hyperspectral processing (6+ MB/s per robot), with a DeepSeek LLM integrated and roughly 20 ms glass-to-action latency.

Can ALPHA work in teams?

Yes. ALPHA is designed for teams of two or more units: a central hyperspectral camera over the conveyor assigns tasks via the cloud, allowing robots to specialize in roles such as plastics or metals and outperforming solo fixed-arm systems.

What are real-world results from pilots?

In the 2025 Sharp Group pilot in Rainham, ALPHA improved material recovery by 10%+, raised purity to 95%, and provided item-level data for compliance. The facility processes 2,800 tonnes/week, a stepping stone toward fully robotic facilities.
