NVIDIA Isaac Sim on AWS to ease robotics development

Image: a heat map overlaid on autonomous mobile robots.

NVIDIA is looking to make it easier for robotics developers to build applications in the cloud. The company recently announced that its Isaac Sim platform and L40S GPUs are coming to Amazon Web Services (AWS).

NVIDIA said bringing its L40S GPUs to AWS will deliver a 2x performance leap for Isaac Sim. With new Amazon Machine Images (AMIs) for the NVIDIA L40S in the AWS Marketplace, roboticists will have preconfigured virtual machines for running Isaac Sim workloads. Beyond simulation, the L40S GPUs can handle generative AI tasks such as real-time inference in text-to-image applications and fine-tuning of large language models in hours.
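For developers who want to try the new images, the sketch below shows one way to launch such an instance programmatically with boto3, AWS's Python SDK. The AMI ID, instance type, and key-pair name here are placeholders rather than values from NVIDIA's announcement; the actual Isaac Sim AMI ID comes from its AWS Marketplace listing for your region.

```python
# Minimal sketch: launch an EC2 instance from an Isaac Sim AMI with boto3.
# The AMI ID, instance type, and key pair below are placeholders -- look up
# the real Isaac Sim AMI in the AWS Marketplace listing for your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: Isaac Sim AMI from the Marketplace
    InstanceType="g6e.4xlarge",       # assumption: an L40S-backed GPU instance type
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder: your EC2 key pair
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sda1",
        "Ebs": {"VolumeSize": 200},   # Isaac Sim assets need generous disk space
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched Isaac Sim instance: {instance_id}")
```

Once the instance is running, you would connect to it and start Isaac Sim from the preconfigured environment; the Marketplace listing documents the exact access steps.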

Early adopters of Isaac Sim on AWS include Amazon Robotics, Soft Robotics, and Theory Studios. Amazon Robotics, for example, has used it for sensor emulation on its Proteus autonomous mobile robot (AMR), which it introduced in June 2022. Robots play an important role across Amazon's fulfillment centers, helping the company meet the demands of online shoppers. Amazon has deployed more than 750,000 robots in its warehouses around the world.

Amazon Robotics has also begun using NVIDIA Omniverse to build digital twins for automating, optimizing, and planning its autonomous warehouses in virtual environments before deploying them into the real world.

“Simulation technology plays a critical role in how we develop, test, and deploy our robots,” said Brian Basile, head of virtual systems at Amazon Robotics. “At Amazon Robotics, we continue to increase the scale and complexity of our simulations. With the new AWS L40S offering we will push the boundaries of simulation, rendering, and model training even further.”

LLMs help robotics developers

NVIDIA also recently shared a slew of 2024 predictions from 17 of its AI experts. One of them is Deepu Talla, VP of embedded and edge computing, who predicted that LLMs will bring a host of improvements for robotics engineers.

“Generative AI will develop code for robots and create new simulations to test and train them.

“LLMs will accelerate simulation development by automatically building 3D scenes, constructing environments, and generating assets from inputs. The resulting simulation assets will be critical for workflows like synthetic data generation, robot skills training, and robotics application testing.

“In addition to helping robotics engineers, transformer AI models, the engines behind LLMs, will make robots themselves smarter so that they better understand complex environments and more effectively execute a breadth of skills within them.

“For the robotics industry to scale, robots have to become more generalizable – that is, they need to acquire skills more quickly or bring them to new environments. Generative AI models – trained and tested in simulation – will be a key enabler in the drive toward more powerful, flexible and easier-to-use robots.”
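As a toy illustration of the scene-building workflow Talla describes, the sketch below has a stand-in LLM call propose object placements and then writes them into a USD scene with Pixar's open-source pxr Python API, the scene format that Omniverse and Isaac Sim build on. The llm_propose_layout() helper and its hard-coded layout are hypothetical placeholders, not part of any NVIDIA tool.

```python
# Toy sketch: an "LLM" proposes object placements, and we build a USD scene
# from them using Pixar's open-source pxr Python API.
from pxr import Usd, UsdGeom, Gf

def llm_propose_layout(prompt: str) -> list[dict]:
    """Hypothetical stand-in for an LLM call that returns object placements."""
    # In practice, you would prompt a model for structured output
    # resembling the hard-coded example below.
    return [
        {"name": "shelf_01", "position": (0.0, 0.0, 0.0)},
        {"name": "shelf_02", "position": (2.0, 0.0, 0.0)},
        {"name": "amr_01",   "position": (1.0, 3.0, 0.0)},
    ]

stage = Usd.Stage.CreateNew("warehouse_scene.usda")
UsdGeom.Xform.Define(stage, "/World")

for obj in llm_propose_layout("small warehouse aisle with two shelves and one AMR"):
    # Placeholder geometry: a real pipeline would reference detailed USD assets.
    cube = UsdGeom.Cube.Define(stage, f"/World/{obj['name']}")
    UsdGeom.XformCommonAPI(cube.GetPrim()).SetTranslate(Gf.Vec3d(*obj["position"]))

stage.GetRootLayer().Save()
```

The resulting .usda file could then be opened in Omniverse or Isaac Sim and used as a starting point for synthetic data generation or robot testing.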


Partnership between AWS, NVIDIA grows

AWS and NVIDIA have collaborated for more than 13 years, beginning with the world’s first GPU cloud instance.

“Today, we offer the widest range of NVIDIA GPU solutions for workloads including graphics, gaming, high-performance computing, machine learning, and now, generative AI,” said Adam Selipsky, CEO at AWS. “We continue to innovate with NVIDIA to make AWS the best place to run GPUs, combining next-gen NVIDIA Grace Hopper Superchips with AWS’s EFA powerful networking, EC2 UltraClusters’ hyper-scale clustering, and Nitro’s advanced virtualization capabilities.”

“Generative AI is transforming cloud workloads and putting accelerated computing at the foundation of diverse content generation,” said Jensen Huang, founder and CEO of NVIDIA. “Driven by a common mission to deliver cost-effective state-of-the-art generative AI to every customer, NVIDIA and AWS are collaborating across the entire computing stack, spanning AI infrastructure, acceleration libraries, foundation models, and generative AI services.”
