DeepAI founder Kevin Baragona is a professional software engineer and product developer with more than a decade of experience. His goal in designing and developing DeepAI is to create a comprehensive platform that is intuitive for general users, useful for developers who want to integrate DeepAI into their projects, and welcoming to learners new to AI and its many and varied capabilities.
What initially attracted you to AI?
I was initially attracted to AI due to its immense potential. As I delved into the academic field of deep learning, I saw major breakthroughs happening simultaneously in image and text processing, which convinced me that the potential of AI was not a fluke. We launched the world’s first online text-to-image AI generator on DeepAI. Back then, the generator could somewhat deliver what was asked of it, but the results were novelties rather than useful creations. What intrigued me was the possibility that advances in algorithms and compute power would lead to consumer-ready results.
Could you share the genesis story of DeepAI?
DeepAI began as a website in 2016, offering the first AI text-to-image generator. The following year, we incorporated and officially launched it as a business. Over time, we expanded our offerings to include a computer vision product that was used by Fortune 100 companies as well as major governments. We have since discontinued our computer vision product and shifted our focus entirely to AI generation tools. Our current offerings include Text to Image, AI Chat, and over twenty related APIs such as AI Image Editor.
DeepAI offers an art generator and other generative AI tools. Could you share some details on the LLMs and open-source code that are used?
DeepAI is a consumer- and developer-facing platform, and our commitment is to providing the best possible experience for our users. We aren’t tied to any one LLM and pull from a variety of internal and external LLMs at any given point. In the past six months we’ve used a handful of AI chat and AI image generators. We don’t publish which ones we’re using so that users won’t come to expect a particular LLM. We can’t wait to try out Anthropic’s models, for instance, and numerous other LLMs in the future. We’ll likely evaluate all the leading options at some point and potentially develop our own.
You recently signed an open letter by The Future of Life Institute calling for a six-month moratorium on AI development. Why do you believe the industry should take a pause?
At this point it’s clear that signing the letter calling for the six-month pause was largely a symbolic gesture, but I feel it was important and had other effects. One positive outcome was that, shortly after the signing, the US government allocated $140 million for a new AI oversight department headed by Kamala Harris. Perhaps through our efforts, we influenced that decision and helped governing bodies realize that oversight is necessary for the safety of society.
How will a pause benefit the industry, if bad actors and AI firms in China continue the rapid advancement of AI development?
Certainly, the same line of reasoning was used to escalate the nuclear arms race.
The difference with AI is that it has the power to harm its creators in unexpected ways. If AI can eliminate entire categories of jobs, then mass unemployment and poverty may follow. Or if it begins to be used to make medical or judicial decisions, it could lead to humanitarian injustices.
I believe that advocating for a pause in AI development is crucial for our protection from unforeseen harm, not to give an advantage to bad actors.
You’ve previously called AI the nuclear weapons of software. Could you elaborate on why you believe it poses such an existential threat?
AI is already capable of replacing a wide range of jobs, and researchers are actively working on developing Artificial General Intelligence (AGI), which would potentially surpass human intelligence. If AGI is achieved, it has the potential to render all human professions obsolete. This similarity to nuclear weapons stems from the ongoing competition and race to develop AI.
The fundamental concern lies in creating a technology that surpasses human intelligence on all fronts. We currently lack any understanding of how we could control such a powerful entity. Moreover, if we continue to deploy AI extensively without proper caution, there is a risk of humanity simply being overshadowed or outperformed by this widespread implementation of AI.
What are your concerns for the economic impact of this technology?
I’m concerned that if AI progresses faster than society can adjust, then there will be widespread job loss. Technological advancements are great, but if they move too fast then many people will find themselves falling on hard times. Cars put horses out of business, but not all at once. AI could move much faster and people might not be able to adapt.
How far away do you believe that we are from reaching AGI?
I think we’re 5-10 years away. That’s just a guess, but given the momentum and investment that’s going into it, I think we’ll get there much faster than we realize.
You’ve spoken out about the risks and pitfalls of AI. What would be some benefits of advanced AI or even AGI?
AI has enormous potential benefits, and I look forward to seeing them realized. For instance, San Francisco is already teeming with self-driving vehicles. In the future, once these become widespread, traffic accidents might become a thing of the past. From a personal perspective, I find using AI-assisted tools for writing computer code far more enjoyable, as they eliminate much of the mundane work. Moreover, AI will likely outperform humans in identifying medical issues, and medical research will be accelerated by AI. A whole host of new inventions will arise in fields ranging from physics and materials science to space travel. We’re likely going to end up with a sci-fi future whose incredible advancements will make our current lives look absurdly antiquated.
Thank you for the great interview, readers who wish to learn more should visit DeepAI.