Yashar Behzadi, the CEO of Synthesis AI – Interview Series

Yashar Behzadi PhD is the CEO and Founder of Synthesis AI. He is an experienced entrepreneur who has built transformative businesses in AI, medical technology, and IoT markets. He has spent the last 14 years in Silicon Valley building and scaling data-centric technology companies. Yashar holds over 30 issued and pending patents and a Ph.D. from UCSD focused on spatio-temporal modeling of functional brain imaging.

Synthesis AI is a startup at the intersection of deep learning and CGI, creating a new paradigm for computer vision model development. The company enables customers to develop better models at a fraction of the time and cost of traditional human-annotation-based approaches.

How did you initially get involved in computer science and AI?

I earned a Ph.D. from UCSD in 2006 focused on computer vision and spatio-temporal modeling of brain imaging data. I then worked in Silicon Valley at the intersection of sensors, data, and machine learning across industries for the next 16 years. I feel very fortunate to have had the opportunity to work on some remarkable technologies, and I have over 30 patents issued or filed, focused on signal processing, machine learning, and data science.

Could you share the genesis story of Synthesis AI?

Before founding Synthesis AI in 2019, I led a global AI services company focused on developing computer vision models for leading technology enterprises. No matter the company’s size, I found we were extremely limited by the quality and amount of labeled training data. As companies expanded geographically, grew their customer base, or developed new models and new hardware, new training data was required to ensure models performed adequately. It also became clear that the future of computer vision could not be built on today’s human-in-the-loop annotation paradigm. Emerging computer vision applications in autonomy, robotics, and AR/VR/the metaverse require a rich set of 3D labels, depth information, material properties, detailed segmentation, etc., that humans simply cannot label. A new paradigm was needed to provide the rich set of labels required to train these new models. In addition to these technical drivers, we saw increasing consumer and regulatory scrutiny around ethical issues related to model bias and consumer privacy.

I established Synthesis AI with the intent of transforming the computer vision paradigm. The company’s synthetic data-generation platform enables on-demand generation of photorealistic image data with an expanded set of pixel-perfect 3D labels. Our mission is to pioneer synthetic data technologies that allow the ethical development of more capable models.

For readers who are unfamiliar with this term, could you define what synthetic data is?

Synthetic data is computer-generated data that serves as an alternative to real-world data. It is created in simulated digital worlds rather than collected from or measured in the real world. By combining tools from the world of visual effects and CGI with generative AI models, Synthesis AI enables companies to create vast amounts of photorealistic, diverse data on demand to train computer vision models. The company’s data-generation platform reduces the cost and time required to obtain high-quality image data by orders of magnitude while preserving privacy.

Could you discuss how synthetic data is generated?

A synthetic dataset is created artificially rather than captured from the real world. Technologies from the visual effects industry are coupled with generative neural networks to create vast, diverse, and photorealistic labeled image data. Synthetic data allows training data to be created at a fraction of the cost and time of current approaches.
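
To make the idea concrete, here is a minimal sketch of why generated data comes with labels "for free": because the scene is constructed programmatically, every pixel's class, bounding box, and depth are known by construction. This toy example (plain Python with NumPy and Pillow, drawing simple shapes) stands in for a full CGI/generative rendering pipeline and is not Synthesis AI's actual API.

```python
# Toy synthetic-data generator: the scene is built programmatically, so the
# segmentation mask, bounding boxes, and depth come directly from the
# generator rather than from a human annotator. Illustration only.
import json
import random

import numpy as np
from PIL import Image, ImageDraw

WIDTH, HEIGHT = 256, 256
CLASSES = {"background": 0, "box": 1, "ball": 2}


def render_scene(seed: int):
    """Render one toy scene and return (rgb, segmentation_mask, annotations)."""
    rng = random.Random(seed)
    rgb = Image.new("RGB", (WIDTH, HEIGHT), color=(200, 200, 200))
    mask = Image.new("L", (WIDTH, HEIGHT), color=CLASSES["background"])
    draw_rgb, draw_mask = ImageDraw.Draw(rgb), ImageDraw.Draw(mask)
    annotations = []

    for class_name in ("box", "ball"):
        # Randomize placement; the exact geometry is still known to the generator.
        x0, y0 = rng.randint(0, WIDTH - 80), rng.randint(0, HEIGHT - 80)
        x1, y1 = x0 + rng.randint(30, 80), y0 + rng.randint(30, 80)
        depth_m = rng.uniform(0.5, 5.0)  # simulated distance from the camera
        color = tuple(rng.randint(0, 255) for _ in range(3))

        if class_name == "box":
            draw_rgb.rectangle([x0, y0, x1, y1], fill=color)
            draw_mask.rectangle([x0, y0, x1, y1], fill=CLASSES[class_name])
        else:
            draw_rgb.ellipse([x0, y0, x1, y1], fill=color)
            draw_mask.ellipse([x0, y0, x1, y1], fill=CLASSES[class_name])

        # Pixel-perfect labels fall out of the generation process itself.
        annotations.append({
            "class": class_name,
            "bbox_xyxy": [x0, y0, x1, y1],
            "depth_m": round(depth_m, 2),
        })

    return np.asarray(rgb), np.asarray(mask), annotations


if __name__ == "__main__":
    rgb, mask, annotations = render_scene(seed=42)
    print(json.dumps(annotations, indent=2))  # ground truth, no human labeler
```

In a production pipeline, the toy shapes would be replaced by physically based rendering and generative models, but the key property is the same: every label is known exactly because the generator placed the objects in the first place.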

How does leveraging synthetic data create a competitive edge?

Currently, most AI systems rely on ‘supervised learning’, where humans label key attributes in images and then train AI algorithms to interpret them. This is a resource- and time-intensive process, and it is limited by what humans can accurately label. Additionally, concerns about AI demographic bias and consumer privacy have grown, making it increasingly difficult to obtain representative human data.

Our approach is to create photorealistic digital worlds that synthesize complex image data. Since we generate the data, we know everything about the scenes, including never-before-available information about the 3D location of objects and their complex interactions with one another and the environment. Acquiring and labeling this amount of data using current approaches would take months, if not years. This new paradigm will enable a 100x improvement in efficiency and cost and drive a new class of more capable models.

Since synthetic data is generated artificially, it eliminates many of the bias and privacy concerns that come with collecting datasets from the real world.

How does on-demand data generation enable accelerated scaling?

Capturing and preparing real-world data for model training is a long and tedious process. For complex computer vision systems like autonomous vehicles, robotics, or satellite imagery, deploying the necessary capture hardware can be prohibitively expensive. Once the data is captured, humans label and annotate the essential features. This process is prone to error, and humans are limited in their ability to label key information, such as the 3D positions required for many applications.

Synthetic data is orders of magnitude faster and cheaper than traditional human-annotated real-data approaches and will accelerate the deployment of new and more capable models across industries.

How does synthetic data enable a reduction or prevention of AI bias?

AI systems are omnipresent, but they can contain inherent biases that impact groups of people. Datasets can be unbalanced, with certain classes of data and groups of people either over- or underrepresented. Building human-centric systems on such data often leads to gender, ethnicity, and age biases. In contrast, training data that is generated by design can be properly balanced and free of human biases.

Synthetic data could become a robust solution to AI’s bias problem. Synthetic data is generated partially or completely artificially rather than measured or extracted from real-world events or phenomena. If a dataset is not diverse or large enough, generated data can fill in the holes and produce a balanced, unbiased dataset. The best part? Manually assembling such datasets can take teams months or years; with synthetic data, it can be done overnight.
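
One way to picture the "fill in the holes" idea is to top up every underrepresented group with generated samples until the distribution is even. The sketch below uses a hypothetical generate_synthetic_sample function as a stand-in for a synthetic data generator; it illustrates the balancing logic only and is not the company's product or API.

```python
# Sketch of balancing a dataset by topping up underrepresented groups with
# synthetic samples. `generate_synthetic_sample` is a hypothetical stand-in
# for a rendering/generative pipeline that returns a labeled sample on demand.
from collections import Counter


def generate_synthetic_sample(group: str) -> dict:
    # Hypothetical: in practice this would call a synthetic data service and
    # return an image plus its labels for the requested group.
    return {"group": group, "source": "synthetic"}


def balance_dataset(samples: list[dict]) -> list[dict]:
    """Top up every group to the size of the largest group."""
    counts = Counter(s["group"] for s in samples)
    target = max(counts.values())
    balanced = list(samples)
    for group, count in counts.items():
        # Generate only the missing samples for underrepresented groups.
        balanced.extend(generate_synthetic_sample(group) for _ in range(target - count))
    return balanced


if __name__ == "__main__":
    real_data = (
        [{"group": "group_a", "source": "real"}] * 900
        + [{"group": "group_b", "source": "real"}] * 100
    )
    balanced = balance_dataset(real_data)
    print(Counter(s["group"] for s in balanced))  # both groups now at 900
```

The same top-up logic applies whether the gap is a demographic group, a rare object class, or an unusual environmental condition that is hard to capture in the real world.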

Outside of computer vision, what are some other potential future use cases for synthetic data?

In addition to the multitude of computer vision use-cases related to consumer products, autonomy, robotics, AR/VR/metaverse, and more, synthetic data will also impact other data modalities. We are already seeing companies leverage synthetic data approaches for structured tabular data, voice, and natural language processing. The underlying technologies and generation pipelines differ for each modality, and in the near future, we expect to see multi-modal systems (e.g., video + voice).

Is there anything else that you would like to share about Synthesis AI?

Late last year, we released HumanAPI, a significant expansion of Synthesis AI’s synthetic data capabilities that enables the programmatic generation of millions of unique, high-quality 3D digital humans. This announcement came months after the launch of the FaceAPI synthetic data-as-a-service product, which has delivered over 10M labeled facial images to leading smartphone, teleconferencing, automobile, and technology companies. HumanAPI is the next step in the company’s journey to support advanced computer vision AI applications.

HumanAPI also enables a myriad of new opportunities for our customers, including smart AI assistants, virtual fitness coaches, and of course, the world of metaverse applications.

By creating a digital double of the real world, the metaverse will enable new applications, including reimagined social networks, entertainment experiences, teleconferencing, gaming, and more. Computer vision AI will be fundamental to how the real world is captured and recreated with high fidelity in the digital realm. Photorealistic, expressive, and behaviorally accurate humans will be an essential component of future computer vision applications. HumanAPI is the first product that enables companies to create vast amounts of perfectly labeled whole-body data on demand to build more capable AI models for pose estimation, emotion recognition, activity and behavior characterization, facial reconstruction, and more.

Thank you for the great interview; readers who wish to learn more should visit Synthesis AI.
