New research out of the Massachusetts Institute of Technology (MIT) demonstrates how blockchain technology could be used as a communication tool for a team of robots, providing security against deception. The research was a collaboration between MIT and the Polytechnic University of Madrid, and it was published in IEEE Transactions on Robotics.
The new research could impact multi-robot systems such as fleets of self-driving cars that deliver goods and transport people in certain cities.
Blockchain Among Robots
In the case of the robots, a blockchain can offer a record of all messages issued by robot team leaders, enabling follower robots to identify inconsistencies in the information trail.
Robot leaders use tokens to signal movements and add transactions to the chain, and they forfeit those tokens when they are caught lying. As a result, the communication system limits the number of lies a hacked robot can spread.
Eduardo Castelló is a Marie Curie Fellow in the MIT Media Lab and lead author of the paper.
“The world of blockchain beyond the discourse about cryptocurrency has many things under the hood that can create new ways of understanding security protocols,” Castelló says.
In the simulation-based study, each block stored a set of directions from a leader robot to its followers. If a compromised robot tried to alter the contents of a block, the block's hash would change, so the altered block would no longer be connected to the chain, and the follower robots would ignore the altered directions.
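To make the hash-linking idea concrete, here is a minimal sketch in Python. It illustrates the general technique rather than the authors' implementation: each block of directions stores the hash of the previous block, so tampering with any block changes its hash and disconnects it from the chain.

```python
# Minimal hash-linked chain of direction blocks (illustrative sketch, not the paper's code).
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Block:
    leader_id: str
    directions: list      # e.g. [("move", 1, 0), ("move", 0, 1)]
    prev_hash: str

    def digest(self) -> str:
        # The block's hash covers the leader, the directions, and the previous hash.
        payload = json.dumps(
            {"leader": self.leader_id, "directions": self.directions, "prev": self.prev_hash},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

def chain_is_valid(chain: list) -> bool:
    """Followers check that every block still points at the hash of the block before it."""
    for prev, curr in zip(chain, chain[1:]):
        if curr.prev_hash != prev.digest():
            return False  # a tampered block no longer links to the chain
    return True

# Example: a compromised robot edits the directions stored in the first block.
genesis = Block("leader_A", [("move", 1, 0)], prev_hash="0" * 64)
second = Block("leader_A", [("move", 0, 1)], prev_hash=genesis.digest())
chain = [genesis, second]

genesis.directions = [("move", 5, 5)]   # tampering changes genesis.digest()
assert not chain_is_valid(chain)        # so followers ignore the altered directions
```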
The system also lets followers review every direction issued by the leader robots, so they can see where they were misled.
The Blockchain System
In this new system, each leader receives a fixed number of tokens that it can use to add transactions to the chain, with each transaction requiring one token. When the followers determine the information in a block is false, the leader loses that token. Robots that run out of tokens can no longer send messages.
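One plausible way to read that token rule is sketched below; the class and parameter names are illustrative, not taken from the paper. Each transaction stakes one token, a block flagged as false forfeits the token, and a leader with an empty balance can no longer send messages.

```python
# A minimal token-ledger sketch, assuming each transaction stakes one token that is
# forfeited when followers flag the block as false (an illustrative reading of the paper).
class TokenLedger:
    def __init__(self, leaders, tokens_per_leader=10):
        # Every leader starts with the same fixed token budget.
        self.balance = {leader: tokens_per_leader for leader in leaders}

    def can_transact(self, leader) -> bool:
        # A leader with no tokens left can no longer add transactions, i.e. send messages.
        return self.balance.get(leader, 0) > 0

    def stake(self, leader):
        # Adding a transaction to the chain requires one token.
        if not self.can_transact(leader):
            raise PermissionError(f"{leader} is out of tokens and cannot send messages")
        self.balance[leader] -= 1

    def settle(self, leader, flagged_false: bool):
        # If followers judge the block truthful, the staked token is returned;
        # if they flag it as false, the leader forfeits it for good.
        if not flagged_false:
            self.balance[leader] += 1


ledger = TokenLedger(["leader_A", "leader_B"], tokens_per_leader=3)
ledger.stake("leader_B")
ledger.settle("leader_B", flagged_false=True)   # a lie: the token is gone
print(ledger.balance["leader_B"])               # 2 tokens left for further messages
```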
“We envisioned a system in which lying costs money. When the malicious robots run out of tokens, they can no longer spread lies. So, you can limit or constrain the lies that the system can expose the robots to,” Castelló says.
The system was tested by simulating multiple follow-the-leader scenarios in which the number of malicious robots was either known or unknown. Leaders used the blockchain to send directions to follower robots moving across a Cartesian plane, while malicious leaders sent incorrect directions or tried to block the followers.
These simulations demonstrated that even when follower robots were initially misled, the transaction-based system enabled them to reach their final destination.
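The toy simulation below gives a flavor of that setup; the geometry, step sizes, and probabilities are illustrative assumptions rather than the paper's parameters. A follower takes honest unit steps toward a goal on a Cartesian plane, while a hacked leader can inject only as many misleading steps as it has tokens, so the follower is delayed but still arrives.

```python
# Toy follow-the-leader sketch on a Cartesian plane (illustrative, not the paper's setup):
# a hacked leader can inject only as many misleading steps as it has tokens.
import math
import random

GOAL = (10.0, 10.0)

def honest_step(pos):
    """A unit step (or less) pointing from the current position toward the goal."""
    dx, dy = GOAL[0] - pos[0], GOAL[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)
    scale = min(1.0, dist) / dist          # never overshoot the goal
    return (dx * scale, dy * scale)

def malicious_step(_pos):
    """A compromised leader issues a unit step in a random direction."""
    angle = random.uniform(0.0, 2.0 * math.pi)
    return (math.cos(angle), math.sin(angle))

def simulate(max_steps=200, malicious_tokens=5, lie_probability=0.3):
    pos = [0.0, 0.0]
    for _ in range(max_steps):
        if malicious_tokens > 0 and random.random() < lie_probability:
            step = malicious_step(pos)
            malicious_tokens -= 1          # the lie is flagged and the token forfeited
        else:
            step = honest_step(pos)
        pos[0] += step[0]
        pos[1] += step[1]
        if math.hypot(pos[0] - GOAL[0], pos[1] - GOAL[1]) < 0.5:
            return True                    # goal reached despite the misleading steps
    return False

print(simulate())  # True: the token budget bounds how far the follower can be misled
```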
“Since we know how lies can impact the system, and the maximum harm that a malicious robot can cause in the system, we can calculate the maximum bound of how misled the swarm could be. So, we could say, if you have robots with a certain amount of battery life, it doesn’t really matter who hacks the system, the robots will have enough battery to reach their goal,” Castelló says.
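A back-of-the-envelope version of that bound can be written down directly; the additive detour model and the numbers below are illustrative assumptions, not figures from the study.

```python
# Rough worst-case bound sketch, assuming each lie costs one token and can push a
# follower at most one step off course (an illustrative model, not the paper's analysis).
def worst_case_travel(direct_distance, malicious_leaders, tokens_per_leader, max_step):
    # The total token budget of the hacked leaders caps the number of lies, and each
    # lie adds at most one detour step out plus one step back.
    max_detour = malicious_leaders * tokens_per_leader * max_step
    return direct_distance + 2.0 * max_detour

# Example: 14 m to the goal, two hacked leaders with 5 tokens each, 1 m steps.
print(worst_case_travel(14.0, 2, 5, 1.0))   # 34.0 m of range guarantees arrival
```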
The research team will now look to create new security systems for robots using transaction-based interactions, which Castelló says could build trust between humans and machines.
“When you turn these robot systems into public robot infrastructure, you expose them to malicious actors and failures. These techniques are useful to be able to validate, audit, and understand that the system is not going to go rogue. Even if certain members of the system are hacked, it is not going to make the infrastructure collapse,” he says.