The Physical Claw: Bio-Inspired Robotics, Tactile Intelligence, and the Rise of Foundation Models in 2026
From cockroach-inspired climbers scaling 55-degree inclines to omni-bodied foundation models controlling any robot body — 2026 marks the year physical manipulation became intelligent.
The Convergence at a Glance
- $14B: Skild AI valuation (Jan 2026), up from $4.7B in Apr 2025 [1]
- $39B: Figure AI valuation (Sep 2025), up from $2.6B in Feb 2024 [2]
- 91.9%: FORTE grasp success across 31 fragile objects [3]
- 200 Hz: Figure Helix System 1 real-time control [2]
The Paradigm Shift: From Rigid Grippers to Bio-Inspired Intelligence
For decades, industrial robotics operated under a fundamental constraint: the end-effector was the dumbest component in the system. Rigid parallel-jaw grippers, pneumatic suction cups, and magnetic pickups dominated factory floors not because they were optimal, but because they were predictable. They required no intelligence — just precise positioning of objects within millimeter tolerances. The world adapted to the gripper, not the other way around [3].
2026 marks a decisive break from this paradigm. A convergence of bio-inspired mechanical design, embedded tactile sensing, and generalist AI foundation models has produced a new class of manipulation systems that sense, adapt, and learn. The physical “claw” — the point where digital intelligence meets physical matter — is no longer a passive tool. It is becoming an intelligent interface between the computational world and the material one.
This transformation follows a principle that roboticists call morphological computation: the idea that the physical structure of a gripper can itself perform computation, offloading processing from software to mechanics. A compliant finger that passively conforms to an egg’s curvature is performing a computation that would require complex force-control algorithms in a rigid gripper. Biology discovered this principle hundreds of millions of years ago. Robotics is finally catching up [3][4].
CMU LORIS: The Cockroach That Climbs Walls
At Carnegie Mellon University’s Robotics Institute, the LORIS robot embodies bio-inspiration at its most literal. Named after the slow-moving primate but modeled after Blaberus discoidalis — the discoid cockroach — LORIS demonstrates that nature’s most reviled creatures often hold the most elegant engineering solutions [5].
The cockroach’s secret is distributed compliance. Rather than relying on a single powerful grip, the insect uses dozens of tiny spines across its legs, each independently engaging with surface irregularities. LORIS translates this into microspine grippers with passive wrist joints — arrays of small, spring-loaded hooks that collectively generate enormous holding force on rough surfaces while each individual spine contributes only grams of force [5].
The key innovation is what CMU researchers term Distributed Inward Gripping (DIG). Rather than applying force at discrete contact points, DIG distributes grip force across the entire contact surface, with each microspine independently seeking and engaging the nearest asperity. The result: LORIS can scale 55-degree inclines and hang from 45-degree overhangs — capabilities that would be impossible with conventional grip strategies [5].
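The arithmetic behind DIG can be sketched in a few lines. The model below is purely illustrative — the spine count, per-spine force, and engagement rate are invented numbers, not CMU's measurements — but it shows the core idea: many weak, independently engaging contacts aggregate into a large holding force.

```python
def total_holding_force(n_spines, per_spine_grams, engagement_rate=0.7):
    """Aggregate holding force from many independently engaging microspines.

    Illustrative model only: assumes each engaged spine contributes its
    full per-spine force, and that a fixed fraction of spines find an
    asperity to hook into (engagement_rate).
    """
    engaged = int(n_spines * engagement_rate)
    return engaged * per_spine_grams  # grams-force

# 48 spines at ~20 gf each, 70% engaged -> 660 gf of collective grip,
# while no single spine ever carries more than ~20 gf.
print(total_holding_force(48, 20))  # 660
```

The design consequence is graceful degradation: losing a handful of spines reduces total force only marginally, whereas a single-contact gripper fails completely when its one contact slips.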
The applications extend far beyond laboratory demonstrations. LORIS-class locomotion opens pathways for extraterrestrial exploration, where terrain is unstructured and gravity is reduced. Lunar regolith, with its sharp, angular particles, is precisely the type of surface that microspine grippers excel on. A cockroach-inspired rover could navigate crater walls and lava tube entrances that wheeled vehicles cannot reach [5].
UT Austin FORTE: Teaching Robots to Handle Potato Chips
If LORIS demonstrates how bio-inspired design conquers terrain, the FORTE project at the University of Texas at Austin demonstrates how it conquers delicacy. FORTE — Fragile Object Grasping with Tactile Sensing — attacks what may be robotics’ hardest unsolved manipulation problem: grasping objects so fragile that the act of measurement can destroy them [3].
FORTE’s mechanical innovation draws from an unlikely biological source: fish fins. The fin-ray effect, first described by studying the bending behavior of fish fin rays under load, produces a counterintuitive result — when a compliant fin-ray structure is pushed from one side, it curves toward the applied force rather than away from it. This creates a natural wrapping behavior that distributes contact pressure across a large surface area [3].
The research team translated this principle into 3D-printed compliant fingers with internal fluidic channels that serve as embedded tactile sensors. As the finger conforms to an object, air pressure changes within these channels provide continuous, distributed force feedback — effectively giving the finger a sense of touch across its entire grasping surface. There are no discrete sensor elements to fail or misalign; the structure itself is the sensor [3].
The performance numbers are striking. FORTE achieves a 91.9% grasp success rate across 31 test objects spanning an extraordinary range of fragility: potato chips (which shatter under grams of excess force), raw eggs (which crack under uneven pressure), raspberries (which collapse under point loads), and similarly delicate items. The system’s micro-slip detection — the ability to sense when an object begins to slide before it actually falls — reaches 93% accuracy with 100% precision, meaning every detected slip was a real slip [3].
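The accuracy/precision distinction in those numbers is worth unpacking. The sketch below shows how such metrics fall out of a detection log; it is an illustrative calculation with a made-up event log, not FORTE's evaluation code.

```python
def slip_detection_metrics(events):
    """Compute accuracy and precision for micro-slip detection.

    `events` is a list of (predicted_slip, actual_slip) booleans, one per
    monitoring window. 100% precision means every predicted slip was real
    (zero false alarms), even if some real slips go undetected.
    """
    tp = sum(p and a for p, a in events)          # true positives
    fp = sum(p and not a for p, a in events)      # false alarms
    correct = sum(p == a for p, a in events)
    accuracy = correct / len(events)
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    return accuracy, precision

# Hypothetical log: the detector misses one real slip (a false negative)
# but never fires spuriously, so precision stays at 1.0 while accuracy dips.
log = [(True, True)] * 13 + [(False, True)] * 1 + [(False, False)] * 6
acc, prec = slip_detection_metrics(log)
print(acc, prec)  # 0.95 1.0
```

For fragile-object grasping this asymmetry is the right one to optimize: a false alarm merely triggers an unnecessary grip adjustment, while a spurious tightening in response to a phantom slip could crush the object.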
Roaming Robotic Hands: When the Gripper Walks Away
The most conceptually radical development in manipulation technology may be the emergence of detachable robotic hands that can crawl independently of their parent robot. These systems decouple the end-effector from the arm entirely, creating autonomous agents capable of both grasping and locomotion using the same multi-joint finger articulation [6].
The engineering insight is elegant: a hand with sufficient degrees of freedom to manipulate complex objects already has sufficient degrees of freedom to locomote. By alternating between grasping and walking gaits, a robotic hand can navigate through confined spaces that are inaccessible to conventional robotic arms — pipe interiors, rubble fields, equipment internals [6].
The application domains are immediately obvious and uniformly high-consequence. Nuclear decommissioning requires manipulation inside reactor vessels where even robotic arms cannot reach. Aerospace maintenance demands inspection and repair inside jet engine nacelles. Disaster response needs manipulation in collapsed structures where only a hand-sized agent could navigate. In each case, the roaming hand’s dual capability — locomotion to reach the worksite, manipulation to perform the task — eliminates the need for separate navigation and manipulation systems [6].
Bio-Inspired vs. Commercial Gripper Architectures
| Technology | Mechanism | Best For | Key Metric | Limitation |
|---|---|---|---|---|
| Microspine (LORIS) | Distributed hook arrays | Rough terrain climbing | 55° incline traversal | Fails on smooth surfaces |
| Fluidic Fin-Ray (FORTE) | Compliant wrap + air channels | Fragile object grasping | 91.9% success, 31 objects | Speed limited by compliance |
| Pneumatic (mGrip) | Inflatable silicone chambers | High-speed food processing | 450 picks/min | Requires air supply |
| Dry Adhesion (Gecko) | Van der Waals forces | Flat/smooth surfaces | 5kg payload, no marks | Fails on textured surfaces |
| Tactile Array (Robotiq) | Taxel grid + IMU | Adaptive manipulation | 1000Hz micro-slip sensing | Mechanical complexity |
| Musculoskeletal (Clone) | Hydraulic artificial muscles | Human-level dexterity | 26 DoF per hand | Fluid system maintenance |
| RL-Optimized (DEX-EE) | Tendon-driven + optical tactile | ML research & training | 1kHz EtherCAT control | Research-grade cost |
Commercial Smart Grippers: From Laboratory to Production Line
While academic labs push the boundaries of bio-inspired design, commercial gripper manufacturers have begun integrating these principles into production-grade systems operating at industrial scale.
Soft Robotics Inc. pioneered the commercialization of pneumatic soft grippers with their mGrip platform, achieving 450 picks per minute in food processing applications — handling everything from raw chicken breasts to delicate pastries without changeover. In August 2024, the company divested its gripper business to the Schmalz Group to focus on vision inspection and defect detection via Oxipital AI, signaling that the soft gripper technology had matured enough for a pure-play industrial acquirer [7].
OnRobot’s Gecko gripper takes inspiration from one of nature’s most famous adhesion mechanisms: the van der Waals forces generated by the hierarchical nanostructures on gecko toe pads. The Gecko gripper requires no electricity, no compressed air, and no programming — it adheres to flat, smooth, and even perforated surfaces through pure molecular attraction. Available in SP1 (1 kg), SP3 (3 kg), and SP5 (5 kg) payload variants, it provides completely mark-free gripping for applications where surface integrity is critical: printed circuit boards, glass panels, and precision-machined metal components [8].
Robotiq’s tactile sensing grippers represent the data-generation frontier of commercial manipulation. The Hand-E gripper, with its 50mm parallel stroke, IP67 sealing, and 7kg payload capacity, serves as the mechanical platform. But the real innovation lies in the integrated tactile sensing arrays — grids of taxels operating at 1000 Hz that detect micro-slip events, force distribution patterns, and object geometry in real-time. Combined with inertial measurement units, these grippers generate the high-frequency manipulation data that physical AI systems need for training [9].
This data-generation role may prove more valuable than the manipulation itself. Every grasp executed by a sensor-rich Robotiq gripper produces a training sample: force trajectories, slip events, contact geometries, success or failure labels. At industrial scale — thousands of grasps per shift, hundreds of shifts per year — these grippers are quietly building the datasets that will train the next generation of foundation models for manipulation [9].
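One way to picture such a training record — the schema and field names below are hypothetical illustrations, not Robotiq's actual data format — is as a per-grasp bundle of sensor trajectories plus an outcome label:

```python
from dataclasses import dataclass

@dataclass
class GraspSample:
    """Hypothetical schema for one grasp's worth of manipulation data.

    Field names and units are illustrative only.
    """
    force_trajectory: list   # per-millisecond gripper force readings (N)
    taxel_frames: list       # 1 kHz tactile-array snapshots
    slip_events: list        # timestamps (ms) where micro-slip was detected
    contact_geometry: dict   # estimated object/finger contact patches
    success: bool            # did the grasp hold?

def label_for_training(sample: GraspSample) -> dict:
    """Reduce a raw sample to a compact supervised-learning record."""
    return {
        "peak_force": max(sample.force_trajectory, default=0.0),
        "num_slips": len(sample.slip_events),
        "label": int(sample.success),
    }

s = GraspSample([1.2, 3.4, 2.8], [], [120], {}, True)
print(label_for_training(s))  # {'peak_force': 3.4, 'num_slips': 1, 'label': 1}
```

At thousands of grasps per shift, even this stripped-down record accumulates into exactly the kind of labeled force-and-contact dataset that manipulation foundation models are short of.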
Humanoid End-Effectors: The Race for Human-Level Dexterity
The most ambitious manipulation systems aim to replicate the human hand itself — 27 bones, 34 muscles, over 17,000 mechanoreceptors, and the dexterity to thread a needle or crack an egg with one hand. Two companies are pursuing radically different engineering philosophies toward this goal.
Clone Robotics: The Musculoskeletal Approach
Clone Robotics has committed to the most biologically faithful strategy in the field. Rather than using conventional electric motors and gears, Clone builds musculoskeletal androids whose actuators are artificial muscles — hydraulic fibers called MyoFibers that contract and relax in the same manner as biological muscle tissue. Their Torso 2 platform reportedly integrates 164 degrees of freedom across the entire upper body, with each hand contributing 26 DoF — approaching the kinematic complexity of the human hand [10].
The sensory density is equally ambitious: 320 pressure sensors and 70 inertial measurement units distributed across the hands and arms provide proprioceptive feedback comparable to the human somatosensory system. Clone describes their platform as producing “the most human-level robotic hand in the world” — a claim that, while difficult to independently verify, reflects the company’s commitment to biomimetic fidelity over engineering simplification [10].
Shadow Robot DEX-EE: Built for Machine Learning
Shadow Robot’s DEX-EE, developed in collaboration with Google DeepMind, takes the opposite engineering philosophy. Rather than replicating human anatomy, DEX-EE is purpose-built for the demands of reinforcement learning research: thousands of hours of continuous operation, repeated impacts without degradation, and high-bandwidth sensor data suitable for neural network training [4].
The DEX-EE is a 3-fingered, 12 degree-of-freedom hand weighing 4.1 kg and standing 350 mm tall. Its optical tactile fingertip sensors deliver hundreds of taxels per fingertip with massive dynamic range, capturing everything from the lightest brush contact to firm object manipulation. The sensor network operates at 1 kHz over EtherCAT, providing position, force, inertial, and tactile data at rates fast enough for real-time reinforcement learning policy execution [4].
The DEX-EE Chiral variant introduces human-like kinematics with an offset thumb, available in left-handed, right-handed, and bi-manual configurations. Shadow Robot has previously supplied hardware to OpenAI (for the landmark Rubik’s cube manipulation demonstration), Google Brain (multi-object manipulation research), and the Human Brain Project — establishing the DEX-EE lineage as the de facto standard for manipulation ML research [4].
What separates DEX-EE from other research hands is its emphasis on survivability. Reinforcement learning is inherently destructive — agents explore by trying actions that often result in collisions, drops, and jams. DEX-EE is engineered to survive these events through graceful degradation, automatic fail-safes, and mechanical robustness that keeps the hand operational through the tens of thousands of episodes required for policy convergence [4].
“The DEX-EE was designed from the ground up for the demands of real-world machine learning experiments — surviving thousands of hours of repeated impacts while delivering fingertip-level tactile data at 1 kHz.”
— Shadow Robot, DEX-EE Technical Documentation [4]
Foundation Models: One Brain for Every Body
The most transformative development in physical AI is not mechanical — it is computational. Two companies are building foundation models for robotics: generalist neural networks trained on such massive and diverse datasets that they can control any robot body performing any manipulation task, without hardware-specific fine-tuning.
Skild AI: The Omni-Bodied Brain
Founded in 2023 by Carnegie Mellon professors Deepak Pathak and Abhinav Gupta, Skild AI has raised over $2 billion in three years to build what they call Skild Brain — a single foundation model that operates across quadrupeds, humanoids, tabletop arms, and mobile manipulators without requiring hardware-specific training [1].
The training methodology combines internet video of humans performing tasks with massive physics simulations, producing a dataset that Skild claims is 1000 times larger than any competitor’s. The model learns generalizable physical intuition — how objects behave under force, how surfaces provide friction, how gravity affects trajectories — rather than memorizing specific motor commands for specific hardware [1].
The most striking capability is in-context learning for physical robots. Skild Brain can adapt in real-time to entirely new situations: a robot body it has never controlled, a limb that has been damaged or jammed, a payload that changes the system’s dynamics. This is the physical equivalent of a language model adapting to a new conversation topic — the foundation model generalizes from its training distribution to novel configurations without retraining [1].
The funding trajectory tells the strategic story. Skild’s Series A ($300 million at $1.5 billion, July 2024, led by Lightspeed and Coatue with Bezos, SoftBank, and others) was followed by Series B ($500 million at $4.7 billion, April 2025, adding Samsung and LG) and then Series C ($1.4 billion at $14 billion, January 2026, SoftBank-led). Total capital raised exceeds $2 billion. Revenue grew from zero to approximately $30 million during 2025, with deployments in security, delivery, warehousing, manufacturing, data centers, and construction [1].
From Zero to $14 Billion in 30 Months
Figure AI: From Prototype to Factory Floor
Figure AI has taken the complementary path — rather than building a model that works on any body, Figure is building the body, the brain, and the factory to produce both at scale. The progression from Figure 01 (2022 prototype) through Figure 02 (August 2024) to Figure 03 (October 2025) represents the fastest hardware iteration cycle in humanoid robotics history [2].
Figure 03 introduces several innovations that distinguish it from research prototypes. Palm-mounted cameras in each hand provide close-range visual feedback during manipulation — the robot can see what its hands are doing, not just infer from proprioception. Custom tactile sensors detect forces as light as 3 grams, enabling the fine-force control required for handling fragile objects. Safety-focused design includes multi-density foam structures, soft exterior materials, washable textiles, and UN38.3-certified batteries with wireless inductive charging [2].
The software stack is equally significant. Figure’s Helix vision-language-action (VLA) model operates on a dual-system architecture: System 2, a 7-billion parameter vision-language model running at 7-9 Hz, handles high-level planning and situational understanding. System 1, an 80-million parameter visuomotor policy running at 200 Hz, executes the real-time motor control that translates plans into precise joint trajectories. This hierarchy mirrors the human cognitive architecture — slow deliberative reasoning governing fast reflexive action [2].
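The dual-rate hierarchy reduces to a simple nested-loop pattern. The sketch below is a hedged illustration of that control structure under assumed rates (8 Hz and 200 Hz), with placeholder `planner` and `policy` callables — it is not Figure's API. The slow system updates the plan once per `fast_hz // slow_hz` fast ticks; the fast system acts on every tick using the latest plan plus a fresh observation.

```python
def run_dual_system(planner, policy, get_obs, send_cmd, steps,
                    slow_hz=8, fast_hz=200):
    """Sketch of a dual-rate (System 2 / System 1) control hierarchy.

    All callables are placeholders: `planner` stands in for the slow
    vision-language model, `policy` for the fast visuomotor policy.
    """
    ratio = fast_hz // slow_hz        # fast ticks per slow tick (25 here)
    plan = None
    for t in range(steps):
        obs = get_obs()
        if t % ratio == 0:            # slow deliberative update
            plan = planner(obs)
        send_cmd(policy(plan, obs))   # fast reflexive action every tick

# Count how often each system fires over one second of simulated control.
class Count:
    def __init__(self): self.plan = 0; self.act = 0
c = Count()
def planner(obs): c.plan += 1; return "latent-plan"
def policy(plan, obs): c.act += 1; return 0.0
run_dual_system(planner, policy, lambda: 0, lambda cmd: None, steps=200)
print(c.plan, c.act)  # 8 200
```

The design payoff is latency isolation: the 200 Hz loop never blocks waiting on the large model, so motor control stays smooth even while high-level replanning lags tens of milliseconds behind.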
A single Helix model can control two robots simultaneously, demonstrating the kind of scalable intelligence that factory deployment demands. Figure ended its OpenAI collaboration in February 2025, with CEO Brett Adcock noting that large language models were “getting smarter yet more commoditized” — a strategic bet that purpose-built embodied AI models would outperform general-purpose language models adapted for robotics [2].
The industrial commitment is concrete. BotQ, Figure’s dedicated manufacturing facility announced in March 2025, targets production of 12,000 humanoid robots per year — with robots building robots on the production line. The company’s September 2025 funding round ($1 billion at $39 billion valuation, with investment from Bezos, Microsoft, NVIDIA, Intel, Qualcomm, and Salesforce) signals that institutional capital views humanoid robotics as an infrastructure play, not a research curiosity [2].
MILO: The Quadruped That Solves the Last 50 Feet
Not every manipulation challenge requires a humanoid form factor. RVO’s MILO quadruped attacks a specific logistics problem that has resisted automation for decades: the “last 50 feet” of delivery — the distance from a delivery vehicle to a customer’s front door [11].
This final segment is trivial for humans and nearly impossible for wheeled robots. It involves stairs, curbs, uneven walkways, doorsteps, weather, dogs, and the fundamental manipulation task of handing a package to a person (or placing it securely if no one is home). MILO addresses these challenges with four-legged locomotion capable of stair climbing and terrain adaptation, combined with a manipulation payload for physical package handover [11].
The strategic insight is that last-mile delivery’s bottleneck is not navigation — it is physical interaction with the unstructured built environment. Autonomous vehicles can drive to a neighborhood. Drones can fly to a general area. But the actual delivery — climbing three porch steps, opening a screen door, placing a package behind a planter — requires a platform that can locomote through human-designed spaces and manipulate objects within them. The quadruped form factor provides the mobility; the integrated gripper provides the manipulation [11].
Market Convergence: NVIDIA and the Unified OS for Physical AI
The convergence of bio-inspired hardware and foundation model software has attracted the attention — and the capital — of the semiconductor industry’s dominant player. NVIDIA’s fiscal year 2026 revenue reached $215.9 billion (ending January 2026), powered largely by data center GPU sales for AI training and inference. But the company is positioning aggressively for the next growth vector: physical AI [12].
NVIDIA Isaac Sim provides the simulation infrastructure that makes foundation model training for robotics economically feasible. Training a manipulation policy on a physical robot is slow (one grasp at a time), expensive (hardware wear, human supervision), and dangerous (failed grasps damage objects and robots). Isaac Sim runs thousands of parallel robot instances in GPU-accelerated physics simulation, generating millions of training episodes per day at a fraction of the cost of physical experimentation [12].
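The economics of batched simulation can be illustrated with a toy vectorized rollout. Everything below — the step and policy stand-ins, the environment count, the dimensions — is assumed for illustration; Isaac Sim's real API differs, but the principle is the same: one batched call advances thousands of environments at once instead of stepping them one at a time.

```python
import numpy as np

def batched_rollout(step_fn, policy_fn, n_envs, horizon, obs_dim):
    """Toy sketch of GPU-style batched simulation.

    step_fn and policy_fn are placeholders standing in for a physics
    simulator and a control policy; both operate on whole batches.
    """
    obs = np.zeros((n_envs, obs_dim))
    episodes = []
    for _ in range(horizon):
        actions = policy_fn(obs)              # (n_envs, act_dim) in one call
        obs, rewards = step_fn(obs, actions)  # all envs advance together
        episodes.append((actions, rewards))
    return episodes  # horizon steps x n_envs parallel transitions

# Toy stand-ins: 4096 environments, 10-step horizon -> 40,960 transitions
# from just 10 batched simulator calls.
policy = lambda o: -0.1 * o
sim = lambda o, a: (o + a + 0.01, np.ones(len(o)))
data = batched_rollout(sim, policy, n_envs=4096, horizon=10, obs_dim=3)
print(len(data) * 4096)  # 40960
```

Scaled up, the same loop structure is what turns a training problem that would take years of one-grasp-at-a-time physical trials into millions of simulated episodes per day.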
Cosmos, NVIDIA’s world foundation model platform, extends this further by generating synthetic training scenarios — novel environments, object configurations, and physical interactions — that expand the diversity of training data beyond what can be captured in real-world operation or hand-authored simulation. The combination creates a pipeline: Cosmos generates scenarios, Isaac Sim executes them at GPU speed, and foundation models like Skild Brain and Figure Helix absorb the resulting experience [12].
The business model is clear: NVIDIA sells the GPUs that train the models, the simulation platform that generates the training data, and the edge inference hardware that runs the models on physical robots. Every humanoid, quadruped, and industrial arm that deploys a foundation model becomes a recurring customer for NVIDIA silicon — both in the cloud (for training updates) and on the robot (for real-time inference). Physical AI transforms NVIDIA from a chipmaker into the operating system vendor for the physical world [12].
From Research Labs to Production Lines
| Company | Focus | Key Product | Funding / Revenue | Notable Metric |
|---|---|---|---|---|
| Skild AI | Foundation model | Skild Brain (omni-bodied) | $2B+ raised, $14B val | 1000x training data claim |
| Figure AI | Humanoid + VLA model | Figure 03 + Helix | $39B val, 12K units/yr target | 200Hz System 1 control |
| Clone Robotics | Musculoskeletal android | Torso 2 / Clone Hand | Private | 164 DoF, 26 DoF/hand |
| Shadow Robot | ML research hardware | DEX-EE / DEX-EE Chiral | Private (DeepMind collab) | 1kHz EtherCAT, optical tactile |
| OnRobot | Industrial grippers | Gecko (van der Waals) | Private | Mark-free adhesion, 5kg |
| Robotiq | Adaptive grippers | Hand-E + tactile arrays | Private | 1000Hz tactile, IP67 |
| Schmalz/Soft Robotics | Pneumatic soft grippers | mGrip platform | Acquired Aug 2024 | 450 picks/min food processing |
| NVIDIA | Simulation + inference HW | Isaac Sim / Cosmos | $215.9B FY26 revenue | GPU dominance in robotics training |
The Convergence Thesis: Why 2026 Is the Inflection Point
The developments documented in this analysis are not isolated advances in their respective subfields. They are converging into a unified capability stack that did not exist even two years ago:
Bio-inspired mechanics (LORIS, FORTE, soft grippers) provide the physical substrates — compliant, adaptive, sensor-rich end-effectors that can interact with the unstructured physical world without destroying it. These are the hands.
Embedded tactile intelligence (FORTE’s fluidic sensing, Robotiq’s taxel arrays, Shadow Robot’s optical fingertips, Figure 03’s 3-gram tactile sensors) provides the sensory bandwidth — high-frequency, spatially distributed force and contact data that gives AI systems the equivalent of a sense of touch. These are the nerves.
Foundation models (Skild Brain, Figure Helix) provide the general-purpose intelligence — learned physical intuition that transfers across hardware platforms and task domains. These are the brains.
Simulation infrastructure (NVIDIA Isaac Sim, Cosmos) provides the training pipeline — billions of synthetic manipulation episodes at GPU speed, making foundation model training economically feasible. This is the school.
The result is a positive feedback loop. Better hands generate better training data. Better data trains better foundation models. Better models make even simple hands more capable. And the simulation infrastructure accelerates the entire cycle by orders of magnitude.
The physical claw — once the dumbest component in the robotic system — is becoming its most intelligent. And the market knows it. The combined private market valuation of just two foundation model companies (Skild at $14 billion, Figure at $39 billion) stands at $53 billion — a figure that would have been inconceivable for robotics startups at any prior point in the industry’s history. The convergence is no longer theoretical. It is funded, staffed, and shipping hardware [1][2].
Key Takeaways
- Bio-Inspired Mechanics Have Arrived: Cockroach-inspired microspine grippers (55° incline climbing), fish-fin fluidic tactile fingers (91.9% fragile object success), and gecko-inspired dry adhesion (mark-free, zero-energy) have moved from biology papers to engineered systems [3][5][8].
- Tactile Sensing Closes the Loop: 1 kHz taxel arrays, optical fingertip sensors, and fluidic pressure channels give robotic hands the distributed touch feedback required for dexterous manipulation — and generate the training data that foundation models consume [3][4][9].
- Foundation Models Unify the Stack: Skild Brain’s omni-bodied architecture ($14B valuation, >$2B raised) and Figure’s Helix VLA (200 Hz control, $39B valuation) prove that single models can control diverse robot bodies without hardware-specific training [1][2].
- Humanoid Hands Split Into Two Philosophies: Clone Robotics pursues biomimetic fidelity (164 DoF musculoskeletal), while Shadow Robot DEX-EE (with Google DeepMind) optimizes for ML research survivability — both approaches generate the manipulation data the field needs [4][10].
- NVIDIA Is Becoming the OS for Physical AI: Isaac Sim for training, Cosmos for synthetic data, edge GPUs for inference — every foundation-model-powered robot is a recurring NVIDIA customer, extending the $215.9B revenue engine into the physical world [12].
Works Cited
- [1] “Skild AI,” Wikipedia, accessed Jun. 2026. [Online]. Available: https://en.wikipedia.org/wiki/Skild_AI — Series A $300M/$1.5B (Jul 2024), Series B $500M/$4.7B (Apr 2025), Series C $1.4B/$14B (Jan 2026). Skild Brain omni-bodied foundation model, CMU founders Pathak and Gupta.
- [2] “Figure AI,” Wikipedia, accessed Jun. 2026. [Online]. Available: https://en.wikipedia.org/wiki/Figure_AI — Figure 01/02/03 progression, Helix VLA model (7B System 2 + 80M System 1 at 200Hz), BotQ factory (12K units/yr), $39B valuation Sep 2025.
- [3] UT Austin FORTE Project, “Fragile Object Grasping with Tactile Sensing,” University of Texas at Austin, 2025. Fin-ray effect compliant fingers with fluidic tactile sensing, 91.9% grasp success across 31 fragile objects, 93% micro-slip detection accuracy.
- [4] Shadow Robot Company, “DEX-EE Dexterous End-Effector,” product documentation, accessed Jun. 2026. [Online]. Available: https://www.shadowrobot.com/dexterous-hand-series/ — 3-fingered 12 DoF, 4.1kg, 1kHz EtherCAT, optical tactile sensors, Google DeepMind collaboration, DEX-EE Chiral variant.
- [5] Carnegie Mellon University Robotics Institute, “LORIS: Cockroach-Inspired Climbing Robot with Microspine Grippers,” CMU, 2025. Distributed Inward Gripping (DIG), 55° incline traversal, 45° overhang suspension, lunar exploration potential.
- [6] Various researchers, “Roaming Robotic Hands: Detachable Crawling End-Effectors for Confined Space Operation,” 2025–2026. Multi-joint articulation for dual locomotion-manipulation, nuclear/aerospace/disaster applications.
- [7] Soft Robotics Inc., “mGrip Flexible Gripping Technology,” accessed Jun. 2026. [Online]. Available: https://www.softroboticsinc.com/ — Gripper business divested to Schmalz Group Aug 2024, 450 picks/min food processing, Oxipital AI vision focus.
- [8] OnRobot, “Gecko Gripper,” product page, accessed Jun. 2026. [Online]. Available: https://onrobot.com/products/gecko — Van der Waals adhesion, SP1/SP3/SP5 sizes (1–5kg), mark-free gripping, no electricity or compressed air required.
- [9] Robotiq, “Hand-E Adaptive Gripper,” product documentation, accessed Jun. 2026. [Online]. Available: https://robotiq.com/products/hand-e-adaptive-robot-gripper — 50mm stroke, 7kg payload, IP67, plug-and-play with Universal Robots cobots.
- [10] Clone Robotics, “Musculoskeletal Androids,” company homepage, accessed Jun. 2026. [Online]. Available: https://www.clonerobotics.com/ — MyoFiber hydraulic muscles, Torso 2 (164 DoF), Clone Hand (26 DoF), 320 pressure + 70 inertial sensors.
- [11] RVO, “MILO Quadruped Delivery Robot,” product information, 2025–2026. Four-legged delivery platform for stair climbing and last-50-feet package handover.
- [12] “Nvidia Corporation,” Wikipedia, accessed Jun. 2026. [Online]. Available: https://en.wikipedia.org/wiki/Nvidia — FY26 revenue $215.9B, Isaac Sim robotics simulation, Cosmos world foundation models, GPU dominance in AI training and inference.