This New AI Lets Robots Learn Touch Like Humans In Just 2 Weeks

Robots are becoming smarter every year. They can see, hear, and even make decisions using artificial intelligence. But one human ability has remained extremely difficult to replicate: the sense of touch.

Now, researchers at King's College London have developed a powerful new approach that could dramatically change the future of tactile robotics. Their new simulation platform can reduce the design and training time of touch-sensitive robots from eighteen months to just two weeks.

Published in Cyborg and Bionic Systems and supported by complementary research in Nature Communications, the work introduces two major innovations: SimTac, a physics-based simulator for bio-inspired tactile sensors, and GenForce, an AI training method that mimics human tactile memory.

Together, these breakthroughs could significantly cut the cost and time needed to develop next-generation robots.


Why Touch Matters for Robots

Most robots today rely heavily on cameras and visual systems. While vision is important, it is not enough for tasks that require delicate handling.

Humans naturally adjust force when picking up different objects. We instinctively know the difference between holding a strawberry and gripping a hammer. A robot, however, does not have this built-in understanding. Without touch feedback, it may grip too hard or too gently, leading to mistakes.

Tactile robots solve this problem by using sensors embedded in robotic hands or grippers. These sensors measure pressure, force, and surface texture, allowing robots to interact more carefully with objects.
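To build intuition for how touch feedback closes this loop — this is a generic illustration, not the controller used in the research — a simple proportional scheme tightens or relaxes the grip until the measured contact force matches a target. The sensor model, target force, and gain below are all invented for the example:

```python
def settle_grip(sensor, target_n=2.0, gain=0.5, steps=60):
    """Toy proportional controller: adjust gripper closure until the
    sensed contact force (newtons) matches the target. `sensor` maps a
    closure position to a measured force."""
    pos = 0.0
    for _ in range(steps):
        # Close more if the grip is too soft, open if it is too hard.
        pos += gain * (target_n - sensor(pos))
    return pos

# Hypothetical sensor: contact force grows linearly once closure begins.
soft_fruit = lambda pos: max(0.0, 1.0 * pos)

print(settle_grip(soft_fruit))  # converges to the position giving ~2.0 N
```

A vision-only robot has no equivalent of `sensor` here: it cannot measure the force it is applying, which is exactly the gap tactile sensing fills.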

This ability is crucial for:

  • Automated manufacturing

  • Warehouse logistics

  • Handling fragile goods

  • Medical robotics

  • Advanced prosthetic limbs

Despite these benefits, designing tactile robots has traditionally been slow and expensive. Engineers rely heavily on trial and error. Each prototype must be physically built, tested, recalibrated, and redesigned multiple times.

The result? It can take up to eighteen months to produce a single working tactile robot prototype — with no guarantee of success.


Learning from Nature’s Best Sensors

Nature has already solved the problem of touch through millions of years of evolution.

Researchers drew inspiration from some of the most sensitive biological systems:

  • Cats’ paws

  • Octopus tentacles

  • Elephant trunks

  • The human hand

Each of these structures is designed for a specific type of interaction. A cat’s paw can silently sense vibrations. An octopus tentacle can grasp irregular shapes. An elephant’s trunk can handle both delicate flowers and heavy logs.

Instead of copying these designs physically through slow prototyping, the research team decided to simulate them digitally.


SimTac: A Virtual Playground for Tactile Design

The team developed SimTac, a physics-based simulation platform that allows researchers to design and test bio-inspired tactile sensors inside a virtual environment.

Traditional simulation tools mostly modeled flat sensors, similar to a human fingertip. But real-world interactions are more complex. For example, try picking up a sheet of paper using only a flat fingertip — it’s extremely difficult.

SimTac expands the design possibilities. Engineers can now create:

  • Simulated cat-like paw pads

  • Flexible tentacle structures

  • Trunk-inspired gripping surfaces

  • Complex curved tactile geometries

The system generates realistic data based on virtual interactions with real-world object shapes. This removes the need for repeated physical prototyping during early development.

By exploring thousands of designs digitally, engineers can quickly identify the most effective structure before building a physical robot.

This drastically reduces development time from months to weeks.
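SimTac's physics engine is far more sophisticated than anything shown here, but as a rough intuition for how a simulator can generate tactile data without physical hardware, the toy below renders a "tactile image" for a rigid sphere pressed into a flat elastic pad. All geometry, names, and parameters are invented for illustration:

```python
import math

def press_sphere(radius=5.0, depth=1.0, grid=33, pad=16.0):
    """Toy tactile image: indentation depth at each point of a square
    elastic pad when a rigid sphere is pressed `depth` units into it.
    Returns a grid x grid list of lists of depths (0 outside contact)."""
    step = pad / (grid - 1)
    image = []
    for i in range(grid):
        row = []
        for j in range(grid):
            x = -pad / 2 + i * step
            y = -pad / 2 + j * step
            r2 = x * x + y * y
            # How far the sphere's surface sits above its lowest point.
            drop = radius - math.sqrt(max(radius * radius - r2, 0.0))
            row.append(max(depth - drop, 0.0))
        image.append(row)
    return image

tactile = press_sphere()
peak = max(max(row) for row in tactile)
print(len(tactile), len(tactile[0]), peak)  # 33 33 1.0
```

Sweeping object shapes and sensor geometries through a loop like this — with real contact mechanics in place of the toy formula — is what lets engineers explore thousands of candidate designs before committing to hardware.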


Cutting Costs with GenForce

Designing better sensors is only half the challenge. Training them is equally expensive.

High-accuracy force and torque sensors can cost more than £10,000 each. Training a full robotic hand often requires multiple sensors working together, increasing costs significantly.

To solve this, the researchers introduced GenForce, an AI model that mimics how humans learn to sense force.

Humans do not need to repeatedly test every finger to learn how much force to apply. If one finger touches an object, the brain quickly generalizes that knowledge across the entire hand.

GenForce works in a similar way.

Instead of training every tactile sensor individually, the AI abstracts force information into a simplified representation — similar to a 2D image. This “memory” of force can then calibrate the entire robotic hand.

In simple terms:

  • Train one sensor

  • Teach the AI the force pattern

  • Apply that knowledge to the whole device

This approach can dramatically reduce both hardware costs and training time.
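GenForce itself is a learned model described in the Nature Communications paper; the toy below only illustrates the weaker idea of "calibrate once, reuse everywhere." Each simulated finger differs by an unknown hardware gain, and a single touch with a known probe force is enough to bring every finger onto a shared force scale. All names and numbers are hypothetical:

```python
def make_sensor(gain):
    """Simulated tactile sensor: raw signal proportional to applied force,
    with a hardware gain the calibration must discover."""
    return lambda force: gain * force

def one_touch_scale(sensor, probe_force=1.0):
    """One calibration touch with a known probe force yields the sensor's
    raw-signal-to-newton conversion factor."""
    return probe_force / sensor(probe_force)

# Five 'fingers' with different (unknown to the software) gains.
hand = [make_sensor(g) for g in (0.8, 1.1, 2.5, 0.3, 1.7)]
scales = [one_touch_scale(s) for s in hand]

# After a single touch each, every finger reports in the same units.
applied = 3.0
readings = [s(applied) * k for s, k in zip(hand, scales)]
print(readings)  # every finger reads ~3.0 N
```

GenForce goes further than this per-sensor rescaling: by abstracting force into a shared image-like representation, it transfers what one sensor learned to sensors it never trained on, which is what cuts the hardware bill.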


Why This Is a Major Breakthrough

This combined approach — simulation plus intelligent training — offers several advantages:

1. Massive Time Savings

Development cycles shrink from 18 months to about 2 weeks.

2. Lower Financial Barriers

Fewer expensive physical sensors are needed.

3. Expanded Design Freedom

Engineers can test nature-inspired shapes easily.

4. Improved Dexterity

Robots can handle delicate and complex objects more effectively.

5. Scalable Manufacturing

Industries can deploy tactile robots at larger scale without excessive costs.


Real-World Impact

The impact of this research could extend across multiple sectors.

Automated Manufacturing

Factories could deploy smarter robotic pickers capable of handling fragile components without damage.

E-Commerce and Warehousing

Robots could safely sort items of varying shapes and textures.

Healthcare and Prosthetics

Advanced prosthetic hands could provide more natural grip control.

Service Robotics

Domestic robots could perform household tasks requiring gentle touch.

The key benefit is flexibility. Instead of building specialized robots for narrow tasks, companies could design adaptable tactile systems faster and more affordably.


The Future of Bio-Inspired Robotics

The researchers believe this is just the beginning.

Their long-term goal is to fully fabricate bio-inspired tactile robots based on designs first perfected in simulation. By expanding the design space even further, they aim to unlock new capabilities in robotics.

As artificial intelligence and simulation tools continue to improve, we may soon see robots that can:

  • Adjust grip strength like a human

  • Recognize surface textures instantly

  • Manipulate soft and fragile objects safely

  • Learn new force interactions after a single touch

This level of sensitivity would mark a major step forward in human-robot interaction.


A New Era for Robotic Touch

For decades, robotic touch has been a bottleneck in development. While robots became excellent at vision and movement, their inability to “feel” limited their versatility.

The combination of SimTac and GenForce changes that equation.

By learning from nature and mimicking the way the human brain processes tactile information, researchers have created a faster, smarter path toward truly dexterous machines.

If this approach scales successfully, the next generation of robots may not just see and think — they may finally feel.

And thanks to this breakthrough, they could be built in weeks instead of years.

References:

1. Zhang, X., Jiang, J., Chen, Z., Zhao, Y., Yang, T., Fernandes Gomes, D., Wang, J. & Luo, S. SimTac: A Physics-Based Simulator for Vision-Based Tactile Sensing with Biomorphic Structures. Cyborg Bionic Syst. 7, 0510 (2026). https://doi.org/10.34133/cbsystems.0510

2. Chen, Z., Ou, N., Zhang, X. et al. Training tactile sensors to learn force sensing from each other. Nat. Commun. 17, 2101 (2026). https://doi.org/10.1038/s41467-026-68753-1
