Amazon’s ARMBench dataset helps train pick-and-place robots

Amazon has released a dataset that contains images of more than 190,000 objects that it said can be used to train robots for pick-and-place tasks. Amazon claims this is the largest dataset of images captured in an industrial product-sorting setting.

The dataset, called ARMBench, can be used to train pick-and-place robots to better generalize to new objects and contexts. The images were collected in an Amazon warehouse where a robotic arm retrieves a single item from a bin full of items and then transfers it to a tray on a conveyor belt. This task can be difficult because of the variety of objects in the bin and the many ways they can be arranged and interact.

Images in the dataset fall into three categories:

  • Pick images: top-down images of a bin filled with items before a robot starts picking
  • Transfer images: images captured from multiple viewpoints as the robot transfers an item to the tray
  • Place images: top-down images of the tray in which the selected item is placed
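Together, these three image types document a single pick-and-place activity from end to end. A minimal sketch of how one activity record might be organized in Python (the field names here are illustrative, not ARMBench's actual schema):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PickActivity:
    """One pick-and-place activity: a pick image of the source bin,
    several transfer images, and a place image of the destination tray.
    Field names are illustrative, not ARMBench's actual schema."""
    activity_id: str
    pick_image: str     # top-down view of the bin before picking
    place_image: str    # top-down view of the tray after placing
    transfer_images: List[str] = field(default_factory=list)  # multi-viewpoint shots during transfer
```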

ARMBench contains images for three separate tasks: object segmentation, object identification and defect detection.

The object segmentation dataset, which helps robots identify the boundaries of different products in the same bin, contains more than 50,000 images. Each image carries anywhere from one to 50 manual object segmentations, with an average of about 10.5.
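If the segmentation labels are distributed as a COCO-style annotation file, which is an assumption here rather than something the dataset's documentation is confirmed to say, per-image statistics like the 1-to-50 range and the 10.5 average could be reproduced with a few lines of Python:

```python
import json
from collections import Counter

# Hypothetical file name; check the ARMBench download for the real layout.
with open("segmentation_annotations.json") as f:
    coco = json.load(f)

# Number of manual object segmentations on each image.
per_image = Counter(ann["image_id"] for ann in coco["annotations"])
counts = list(per_image.values())

print(f"images with annotations: {len(counts)}")
print(f"segmentations per image: min {min(counts)}, max {max(counts)}, "
      f"mean {sum(counts) / len(counts):.1f}")
```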

The object identification dataset helps robots determine which product image in a reference database corresponds to the highlighted product in an image. This dataset includes more than 235,000 labeled pick activities, with each pick activity including a pick image and three transfer images. This dataset also includes reference images and text descriptions of more than 190,000 products. Models can learn to match one of these reference products to an object highlighted in pick and transfer images.
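One standard way to frame this identification problem, sketched below rather than taken from Amazon's own pipeline, is nearest-neighbor retrieval: embed the highlighted object and every reference product image with the same encoder, then return the reference whose embedding is most similar to the query's.

```python
import numpy as np

def identify(query_emb: np.ndarray, ref_embs: np.ndarray, ref_ids: list) -> str:
    """Return the ID of the reference product whose embedding has the
    highest cosine similarity with the query object's embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    r = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    return ref_ids[int(np.argmax(r @ q))]

# Toy usage: random vectors stand in for a real encoder's output, over a
# much smaller stand-in for the 190,000-product reference catalog.
rng = np.random.default_rng(0)
refs = rng.normal(size=(1_000, 128))
ids = [f"product-{i}" for i in range(len(refs))]
query = refs[42] + 0.05 * rng.normal(size=128)   # noisy view of product 42
print(identify(query, refs, ids))                # -> "product-42"
```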

From left to right: a pick image, a transfer image and a place image from Amazon’s ARMBench dataset. | Source: Amazon

The defect detection dataset, which includes both images and videos, helps systems know when a robot has committed an error, like picking up multiple items rather than one or damaging an item during transfer. The dataset has more than 19,000 images captured during the transfer phase. It also includes more than 4,000 videos that document pick-and-place activities that resulted in damage to a product.

Videos are a key aspect of this dataset: certain types of product damage can occur at any point in the transfer process and are best diagnosed through video. The defect detection dataset also contains images and videos for over 100,000 pick-and-place activities without defects.
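With those labels, defect detection can be set up as ordinary image classification over transfer images. Below is a minimal PyTorch sketch assuming three outcome classes; the class names and training details are placeholders, not Amazon's actual setup.

```python
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["nominal", "multi_pick", "package_defect"]  # placeholder labels

# ImageNet-pretrained backbone with a new three-way classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch; a real loader would
# feed ARMBench transfer images and their defect labels instead.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(len(CLASSES), (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```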

Amazon plans to continue to expand the number of images and videos, and the range of products they depict, in ARMBench.

In November 2022, Amazon unveiled Sparrow, a robotic arm capable of picking individual products before they get packaged. Sparrow can pick 65% of the over 100 million different items that could be processed at an Amazon warehouse, according to the company.

Sparrow can pick a variety of items, like DVDs, socks and stuffed animals, but it struggles with items that have loose or complex packaging. It seems likely that Amazon drew on the research behind Sparrow to build this dataset.
