PR-34D
Perceptyne

The Perceptyne PR-34D is a dual-arm, semi-humanoid robot engineered for dexterous industrial tasks in the electronics and automotive sectors. It features 7-DOF arms, 10-DOF grippers, integrated force-torque sensing, and high-resolution tactile feedback, enabling precise assembly and collaborative work with a 6 kg payload capacity.

Description

The Perceptyne PR-34D is a semi-humanoid robot developed by the Indian deep-tech startup Perceptyne for dexterous industrial automation in electronics manufacturing services (EMS) and the automotive sector. Unveiled in May 2024 after 1.5 years of stealth development, the dual-arm system has 34 degrees of freedom in total: two 7-DOF arms (3-DOF shoulders, 1-DOF elbows, and 3-DOF wrists) paired with 10-DOF three-fingered grippers. This anthropomorphic design enables human-like manipulation in complex, unstructured tasks such as precision assembly, packaging, quality inspection, and delicate component handling, with a 6 kg payload capacity per arm that covers most human-scale industrial operations.

The robot's architecture centers on a proprietary full-stack, multi-stage hierarchical neuro-symbolic AI platform that fuses neural perception with symbolic reasoning, allowing it to adapt dynamically to messy factory environments without rigid programming. Core AI models include object detection and 6-DOF pose estimation from high-resolution RGB and depth-sensing 3D cameras, real-time path prediction for obstacle avoidance, and motion planning that exploits the arms' redundant kinematics for efficient, collision-free trajectories. Teleoperation (TeleOp) training is a distinguishing feature: human demonstrators perform tasks (e.g., picking parts or inserting pins), generating datasets of synchronized vision, force, torque, and proprioceptive data, from which the AI generalizes to autonomous execution via imitation learning. LLM integration supports high-level planning and low-code deployment. Perception-to-action latency is minimized through dedicated edge computing running a hybrid ROS/Linux/proprietary software stack.

Perception is multimodal: force-torque sensing in every joint for compliant control, high-resolution tactile arrays (2 mm x 2 mm resolution across the hand, approaching human touch sensitivity), and integrated vision for markerless visual servoing that can track dynamic objects without fiducials. The fully electric design, built on servo motors with proprietary custom gears, keeps maintenance low and is cobotic-rated for safe operation in human-shared workspaces.

As of December 2025, the PR-34D is in commercial pilots and early production, integrated into pilot projects with global MNCs in the automotive and electronics sectors in India to address labor shortages, with expansion targeting US/EU reshoring markets. A finalist in the 2025 Humanoid Robotics Industry Awards for 'Groundbreaking Technology' alongside NVIDIA and AgiBot, it is claimed to reduce integration cycles from quarters to weeks and cut costs by 40% without requiring line modifications. In-house R&D, from mechanics to AI, supports scaling under 'Make in India', with production ramping to hundreds of units. The PR-34D delivers consistent, fatigue-free human-level dexterity, positioning Perceptyne in affordable, intelligent automation.
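
To make the TeleOp-to-autonomy pipeline concrete, the sketch below shows how one synchronized demonstration frame and a behavior-cloning objective might look. Perceptyne has not published its data schema or training code; the field names, shapes, and loss below are illustrative assumptions only.

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical demonstration frame; Perceptyne's actual schema is not
# public. Shapes follow the spec sheet: 34 DOF total across two 7-DOF
# arms and two 10-DOF hands.
@dataclass
class DemoFrame:
    rgb: np.ndarray      # (H, W, 3) camera image
    depth: np.ndarray    # (H, W) depth map from the 3D camera
    tactile: np.ndarray  # stacked tactile-array readings, both hands
    wrench: np.ndarray   # (n_joints, 6) force-torque at every joint
    q: np.ndarray        # (34,) joint positions (proprioception)
    q_cmd: np.ndarray    # (34,) teleoperator's commanded positions

def behavior_cloning_loss(policy, frames):
    """Imitation learning at its simplest: regress the demonstrator's
    commands from the synchronized multimodal observations."""
    loss = 0.0
    for f in frames:
        pred = policy(f.rgb, f.depth, f.tactile, f.wrench, f.q)
        loss += float(np.mean((pred - f.q_cmd) ** 2))  # MSE on actions
    return loss / len(frames)
```

The description suggests a hierarchical system, so the symbolic layer would sequence skills learned this way rather than relying on a single end-to-end regressor.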

Key Features

Dual 7-DOF Arms with 10-DOF Grippers

Provides high reachability and dexterity for complex two-handed assembly tasks, handling payloads up to 6 kg per arm.
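
With 7 joints driving a 6-DOF end-effector pose, each arm has one redundant degree of freedom, which is what the motion planner can exploit. The sketch below is a generic damped-least-squares IK step with a null-space posture objective, a standard technique assumed here for illustration, not Perceptyne's actual planner.

```python
import numpy as np

def dls_ik_step(J, dx, q, q_rest, damping=0.05, k_null=0.1):
    """One velocity-level IK step for a redundant (7-DOF) arm.

    J      : (6, 7) manipulator Jacobian at the current configuration
    dx     : (6,)  desired end-effector twist (linear + angular)
    q      : (7,)  current joint angles
    q_rest : (7,)  preferred 'comfortable' posture
    """
    # Damped least-squares pseudoinverse: robust near singularities.
    JT = J.T
    J_pinv = JT @ np.linalg.inv(J @ JT + (damping ** 2) * np.eye(6))
    # Null-space projector: motions here don't disturb the end effector.
    N = np.eye(7) - J_pinv @ J
    # Primary task (track dx) plus secondary task (drift toward q_rest).
    dq = J_pinv @ dx + N @ (k_null * (q_rest - q))
    return dq
```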

Multimodal Sensing Suite

Integrates high-resolution tactile sensing (2 mm resolution), all-joint force-torque sensing, and RGB/depth vision for precise perception and safe human collaboration.
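
As a small illustration of what the 2 mm tactile pitch enables, the sketch below localizes a contact point on a pressure-taxel grid. The grid size, units, and threshold are assumptions, since the hand's actual layout is unpublished.

```python
import numpy as np

TAXEL_PITCH_MM = 2.0  # per the spec: 2 mm x 2 mm tactile resolution

def contact_centroid(pressure, threshold=0.1):
    """Locate the pressure-weighted contact point on a tactile grid.

    pressure : (rows, cols) array of taxel readings (grid dimensions
               are an assumption; Perceptyne does not publish them).
    Returns (x_mm, y_mm) in the pad frame, or None if no contact.
    """
    active = np.where(pressure > threshold, pressure, 0.0)
    total = active.sum()
    if total == 0.0:
        return None
    rows, cols = np.indices(pressure.shape)
    y = (rows * active).sum() / total * TAXEL_PITCH_MM
    x = (cols * active).sum() / total * TAXEL_PITCH_MM
    return x, y
```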

Neuro-Symbolic AI Platform

Combines ML perception with symbolic reasoning for adaptive task learning via low-code TeleOp, with LLM integration.
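 
The neuro-symbolic split can be pictured as a symbolic plan that sequences learned neural skills. The dispatch loop below is a hypothetical sketch; the plan format, skill names, and interfaces are illustrative, not Perceptyne's API.

```python
# Hypothetical neuro-symbolic dispatch: a symbolic plan sequences
# learned (neural) skills; all names here are illustrative only.
PLAN = [("locate", "pin_tray"), ("pick", "pin"), ("insert", "socket_3")]

def execute(plan, skills, world):
    """Run each symbolic step with its learned skill policy, checking
    preconditions so the symbolic layer can re-plan on failure."""
    for action, target in plan:
        skill = skills[action]            # neural policy for this verb
        if not skill.precondition(world, target):
            return ("replan", action, target)
        skill.run(world, target)          # perception-driven execution
    return ("done",)
```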

Markerless Visual Servoing

Tracks dynamic objects in real-time without fiducials or precise camera setup, enabling flexible workcell deployment.
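
Markerless visual servoing typically closes the control loop on tracked image features rather than fiducials. Below is a textbook image-based visual servoing (IBVS) step using the standard point-feature interaction matrix, shown as a generic sketch rather than Perceptyne's implementation; the depth values would come from the robot's 3D camera.

```python
import numpy as np

def ibvs_step(pts, pts_goal, Z, gain=0.5):
    """One image-based visual servoing step (classic formulation).

    pts, pts_goal : (N, 2) current/desired feature points in normalized
                    image coordinates (metric, not raw pixels)
    Z             : (N,) feature depths, e.g. from the depth camera
    Returns a 6-vector camera velocity twist [vx, vy, vz, wx, wy, wz].
    """
    pts, pts_goal = np.asarray(pts, float), np.asarray(pts_goal, float)
    L_rows = []
    for (x, y), z in zip(pts, np.asarray(Z, float)):
        # Interaction matrix of a point feature (Chaumette & Hutchinson).
        L_rows.append([-1 / z, 0, x / z, x * y, -(1 + x * x), y])
        L_rows.append([0, -1 / z, y / z, 1 + y * y, -x * y, -x])
    L = np.asarray(L_rows)                 # (2N, 6) stacked matrix
    err = (pts - pts_goal).reshape(-1)     # stacked feature error
    # Camera twist that exponentially decays the feature error.
    return -gain * np.linalg.pinv(L) @ err
```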

Plug-and-Play Integration

Deploys into existing lines without infrastructure changes, reducing setup from months to weeks.

Fully Electric Cobotic Design

Low-maintenance electric servos, proprietary gears, and safety-rated for shared workspaces.

Specifications

Availability: Commercial Pilots / Early Production
Nationality: India
Website: https://www.perceptyne.com/
Manufacturer: Perceptyne
Degrees of Freedom: 34 total; arms: 2 x 7-DOF (3 shoulder, 1 elbow, 3 wrist); grippers: 2 x 10-DOF three-fingered hands
Payload: 6 kg per arm
Manipulation Performance: 3/5
Navigation Performance: 2/5
Mobility: None (fixed-base semi-humanoid)
Safe With Humans: Yes (cobotic-rated, collaborative)
Sensors: High-resolution RGB + depth-sensing 3D cameras (vision); 2 mm x 2 mm tactile arrays across the hand; force-torque sensing in all joints; full joint encoders (proprioception)
Camera Resolution: High-resolution vision
Actuators: Electric servo motors with proprietary/custom-built gears
Compute: Dedicated CPU/GPU for AI and vision processing
Operating System: ROS/Linux/Proprietary hybrid
LLM Integration: Yes
Connectivity: Ethernet, Wi-Fi
Power: Fully electric (industrial tethered; no battery specs)
Materials/Color: Industrial-grade; white/black/industrial blue
Main Market: Automotive Industries, Electronics Manufacturing Services (EMS)
Price Estimate: $120,000
Verified: Not verified

Frequently Asked Questions

What is the primary application of the PR-34D?

The PR-34D excels at dexterous industrial tasks such as electronics assembly, automotive component handling, and packaging. Its 34 DOF and multimodal sensing enable precise operation in unstructured environments, giving it more flexibility than conventionally programmed industrial robots.

How does the AI system work for task learning?

Using neuro-symbolic AI and TeleOp training, humans demonstrate tasks while the robot records vision, force, and tactile data. The system generalizes via imitation learning and LLMs, so new skills can be deployed in days through low-code interfaces with minimal expert programming.

What is the payload and dexterity level?

Each arm supports 6 kg payloads with 7-DOF reachability, and the 10-DOF grippers have three fingers each. High-resolution tactile sensing (2 mm resolution) and force-torque sensing in all joints approximate human touch for fine manipulation such as pin insertion and delicate part handling.

Is it safe for human collaboration?

Yes. The PR-34D is designed as a collaborative (cobotic) system, with all-joint force-torque sensing for compliant control, obstacle detection, and safe speed adjustment. It operates in shared spaces without fencing and is rated for human proximity.
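
Compliant behavior of this kind is commonly implemented as admittance control, where measured external forces are turned into small motion corrections. A minimal single-axis sketch follows; the virtual mass, damping, and stiffness gains are illustrative, not Perceptyne's tuning.

```python
def admittance_update(x, v, f_ext, dt, M=2.0, D=40.0, K=200.0):
    """Single-axis admittance control: render virtual mass-damper-spring
    dynamics  M*a + D*v + K*x = f_ext  so the arm yields on contact.

    x, v  : current commanded offset and velocity along the axis
    f_ext : measured external force from the joint F/T sensors [N]
    dt    : control period [s]
    Gains M, D, K are illustrative assumptions, not Perceptyne's.
    """
    a = (f_ext - D * v - K * x) / M   # acceleration of virtual dynamics
    v = v + a * dt                    # integrate to velocity
    x = x + v * dt                    # integrate to position offset
    return x, v
```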

What are the deployment status and markets?

As of 2025 it is in commercial pilots and early production, deployed with major MNCs in India in the automotive and electronics sectors, with US/EU expansion planned amid reshoring. Integration is plug-and-play, reportedly cutting costs by 40% and avoiding downtime, with production scaling under 'Make in India'.

What sensors and processors does it use?

The sensing suite is multimodal: RGB/depth cameras, 2 mm tactile arrays, and joint force-torque sensors. Processing runs on a dedicated CPU/GPU for AI and vision atop the ROS/Linux/proprietary stack, giving low-latency edge processing without cloud dependency.
