The lessons companies can learn from the cloud’s arrival when it comes to embracing generative AI

By nature, startups are used to being the disruptors: the ‘fast movers’ that challenge the inertia of bigger organisations, finding ways to embed themselves and helping others to innovate, adapt and progress faster.

But what happens when even faster tech threatens to disrupt even the disruptors?  

Leaders today face a velocity of change that exceeds anything we’ve experienced before.

In February, Reuters reported that ChatGPT had reached an estimated 100 million monthly active users just two months after launch, making it the “fastest-growing consumer application in history”, according to a UBS study. (By way of comparison, TikTok took nine months to reach 100 million monthly users, and Instagram took around two and a half years.)

Based on what we’re seeing right now, we can safely predict ChatGPT’s radical and ongoing improvement. Exactly what that looks like remains to be seen, but there are some important fundamentals for businesses to consider as they evaluate their approach.

Capability matters 

Our brains are hardwired to assess new technology as either a threat or an opportunity. Unsurprisingly, we will often weight the likelihood that a technology like ChatGPT is a threat at around 70%, and the likelihood that it is an opportunity at just 30%.

We’ve seen the consequences of refusing to explore the ‘opportunity’ side of new technology play out dramatically over the past few decades. Blockbuster’s downfall wasn’t caused by a lack of business intelligence or even capability, but by a failure to understand the potential of, and adopt, the technology that would determine its fate. It perceived the cloud as a security threat, unaware that security was a completely solvable problem and that the cloud would give rise to a competing business model: streaming media (built in the cloud!).

Netflix and others put paid to any attempt at its recovery. 

Similarly, the emergent capabilities of ChatGPT and other generative AI platforms are somewhat nascent ‘now’, but they won’t be for long. The ability of these platforms to generate original art is a good example: most businesses didn’t take it seriously 12 months ago, yet it has quickly moved from ‘barely passable’ to highly accurate and capable of saving businesses significant sums of money.

Some of the most useful capabilities for businesses right now include the ability to query a large body of knowledge (within, for example, a database) and recast the information it holds as a marketing spreadsheet, a newsletter or even a video, almost instantly. The ability to review content (such as job adverts for gender bias) or code provides an added layer of diligence. And the ability to match generated content (from emails and Slack messages to client proposals) to a particular business’s or exec’s tone of voice provides endless scope for scaling productivity.
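To make the ‘review’ use case above concrete, here is a minimal sketch in Python using the openai client library. It is illustrative only: the model name, prompt wording and function name are assumptions for the example, not a recommended setup.

```python
# Minimal sketch: asking an LLM to review a job advert for gendered language.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def review_job_advert(advert_text: str) -> str:
    """Return the model's notes on potentially gender-biased wording."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; substitute whatever your organisation has approved
        messages=[
            {"role": "system",
             "content": "You are an HR editor. Flag gender-coded or biased wording "
                        "in the job advert and suggest neutral alternatives."},
            {"role": "user", "content": advert_text},
        ],
        temperature=0,  # keep the review deterministic rather than creative
    )
    return response.choices[0].message.content

print(review_job_advert("We need a rockstar ninja who will dominate the sales floor."))
```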

Smart businesses are asking how consequential generative AI capabilities could be to their business. They’re asking themselves: “How would we evolve and adapt if the time between requiring content (multimedia or otherwise) and gaining access to it were almost instant, and the cost fast approaching $0?”

Balancing capability with risk 

It’s important to understand that ChatGPT is a publicly accessible service, and the data users type into it may be used for further training. At this stage, the security parameters, and exactly how input data is (or isn’t) managed and used, are not fully transparent.

For this reason, many company policies right now are focused on defining what constitutes ‘acceptable use’. At their most dogmatic, these policies might deem the use of these technologies simply too risky.

Others have instituted a blanket ban on inputting content that may contain sensitive company information, such as trade secrets, personally identifiable data, IP or non-public strategic elements of the business.

Businesses today must balance the drive for innovation and creativity with the need to protect themselves. A dogmatic stance in the face of enormous technological progress is a dangerous position for industries and businesses to operate from.

“We don’t understand it, so we don’t use it” is a harbinger of future failure. A more balanced stance is a policy that considers privacy and appropriate use but actively promotes exploration.

A ‘hybrid solution’ is coming

ChatGPT and other generative AI products are, at heart, publicly accessible large language models. Each product is both the interface and the knowledge store: a model trained on huge public sources such as Wikipedia, with the ability to understand a query and articulate an answer.

The privacy concerns we have stem from the datasets this technology has been trained on. Break that apart and consider only the interface, and we’re simply experiencing an extremely powerful way to interact with information and data: a way to query large bodies of information (even using spelling mistakes and slang in our queries) in an instant.

Let’s imagine for a moment that this interface was trained on private datasets only and didn’t link back to a public database. Let’s imagine a hybrid model in which AI could understand our query and then securely articulate an answer using only an internal knowledge base (internal to a particular company, account or even individual).

This is the exciting next evolution that Qrious is seeing (and prototyping), in which companies will no longer have to spend enormous amounts of resource creating dashboards that require a particular view to be defined with 100% accuracy before the output makes sense. Using these hybrid large language models, it will be possible to create data structures for consumption in multiple formats almost instantly, without the highly specialised consultation that usually goes into this kind of work upfront. A minimal sketch of what this pattern could look like follows below.
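One common way to realise this hybrid idea is the pattern often called retrieval-augmented generation: retrieve passages from a private, internal knowledge base, then have the model answer strictly from those passages. The sketch below is illustrative only; the sample documents, retrieval logic and model name are all assumptions for the example, not a description of any particular product.

```python
# Minimal sketch of the 'hybrid' idea: the model answers only from an internal
# knowledge base, rather than from the public data it was trained on.
# Assumes the openai Python package (v1+); the documents and model name are placeholders.
from openai import OpenAI

client = OpenAI()

INTERNAL_DOCS = [  # stand-in for a company knowledge base or data warehouse extract
    "Q3 churn for the Pro plan was 4.1%, down from 5.0% in Q2.",
    "The refund policy allows cancellation within 30 days of purchase.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a real system would use embeddings or a vector store."""
    terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(terms & set(d.lower().split())), reverse=True)
    return scored[:k]

def answer_from_internal_data(query: str) -> str:
    """Answer a question using only retrieved internal context."""
    context = "\n".join(retrieve(query, INTERNAL_DOCS))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the context below. If the answer is not "
                        "in the context, say you don't know.\n\nContext:\n" + context},
            {"role": "user", "content": query},
        ],
        temperature=0,  # constrain 'creativity' when answering from internal, factual data
    )
    return response.choices[0].message.content

print(answer_from_internal_data("What was Pro plan churn in Q3?"))
```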

In future, hybrid large language models will make much of the ‘last mile’ work done by traditional data companies (such as helping define which views a company needs to query for its data to become as useful as it can be) unnecessary.

Within the financial, medical, legal and other fields with little tolerance (or need) for creativity or ‘hallucinations’, training these models on limited datasets and constraining their outputs will give rise to a whole new world of emergent use cases that rely on a low degree of error (and facts articulated without assumption).

Armed with a thorough understanding of capability, balanced against risk, the time is now for ‘disruptors’ (agile startups and businesses with their eye on future success) to ingest the ‘disruptive’: to understand it, adopt it and use it to their advantage. ‘Keeping up’ is key, but so too is an eye on how to outpace the competition, using technology such as ChatGPT as a catalyst.

 

  • Stephen Ponsford is CEO of Qrious, Spark Business Group’s AI and data innovation experts.
