By Jason Revelle, CTO, Datatron
Experts, analysts and those with first-hand implementation experience sing the praises of machine learning (ML), and a growing number of enterprises are adopting it for a variety of uses. Its appeal lies in the promise of automating tasks that once required human judgment, furthering digital transformation. But even though ML carries great promise, there's a downside: for the teams that must manage it in production, the work can become a difficult slog.
Enterprises are implementing ML for myriad tasks: classifying health risks, processing documents, predicting prices, detecting fraud, scheduling preventive maintenance and more. Teams train and evaluate models on historical data until they meet performance and accuracy targets. Enterprises are keen to reap the high-value benefits that ML promises by predicting, classifying or prescribing future outcomes, and then acting on those predictions.
All this business value, though, comes at a high operational cost. Once a model is "ready," automating it through reliable delivery mechanisms creates operational complexities and risks that demand vigilance. To make these projects efficient and effective, delivery and operational teams must manage the ML lifecycle holistically. In particular, production data must remain available and consistent in quality with the data the model was trained on.
It soon becomes apparent that this isn't quite like other engineering efforts; to truly become an AI-powered company, you need to start thinking about the challenges in new ways. Here are four things to consider for success with machine learning, and specifically with ML models.
1. Be able to rapidly deploy multiple versions behind the scenes
Machine learning models require an iterative process grounded in real production data. Invest in deployment targets that can run candidate models and log their results without production systems or customers ever seeing those results, and make it easy and fluid to deploy this way until you have a model you like. In the long term, it's much more effective and economical to assume your models will need a lot of tuning, and that you'll need to compare current versions against new candidates that promise better outcomes.
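One common way to get this "behind the scenes" capability is a shadow (champion/challenger) setup: the champion's answer is the only one served, while challenger versions score the same traffic silently for later comparison. The sketch below is a minimal illustration of the pattern, assuming models are plain Python callables; the model names and toy predictions are placeholders, not a real serving stack.

```python
import logging
from typing import Any, Callable, Dict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow")

def make_shadow_predictor(
    champion: Callable[[Dict[str, Any]], float],
    challengers: Dict[str, Callable[[Dict[str, Any]], float]],
) -> Callable[[Dict[str, Any]], float]:
    """Serve the champion's prediction; run challengers silently and log their results."""
    def predict(features: Dict[str, Any]) -> float:
        result = champion(features)          # only this value reaches callers
        for name, model in challengers.items():
            try:
                shadow = model(features)     # challenger output is logged, never served
                log.info("challenger=%s input=%s output=%s serving=%s",
                         name, features, shadow, result)
            except Exception:                # a broken candidate must not affect serving
                log.exception("challenger %s failed", name)
        return result
    return predict

# Hypothetical stand-ins for real models
champion = lambda features: 0.42
predict = make_shadow_predictor(champion, {"v2-candidate": lambda features: 0.57})
print(predict({"customer_id": 123}))  # the caller sees only the champion's answer
```

Because the challenger runs on identical inputs, its logged outputs can be compared offline against the champion's before you promote it.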
2. Understand that what your team is promising typically won't fit within traditional, application-centric "approved software" policies
As ML quickly grows and diversifies, it boasts an ever-expanding list of technology providers, both large and small. It's a given that IT needs to maintain proper support, security scans and controls for your operating environments. But be warned: applying the controls and processes designed for governing operating systems or product-hosting technology to your ML practice will probably diminish your returns significantly before you've even crossed the starting line.
It's also important to note that data scientists are in high demand. It isn't difficult for them to find another job if they feel pressured to deliver better and better results without being allowed to change the technology they use or how they use it.
3. Understand that the ML model development lifecycle is not the same as a software development lifecycle
Software development and ML model development are very different processes; treating the latter like the former is a recipe for disaster. Enterprises that treat model delivery as just another software release end up with gaps in cross-cutting capabilities like monitoring and analysis, high overhead for knowledge transfer between creators and operators, and sequential, extended timelines. Specialists who deploy and support your models need to understand how the model and the data work, not just triage service reliability and error codes. The right talent (hybrid teams composed of data and operational specialists) and the right tools will let you test and interpret not only whether the software is executing but also how accurate and explainable its responses are.
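Monitoring "how accurate the responses are" means tracking model quality, not just uptime. A minimal sketch of the idea follows, assuming you eventually receive ground-truth labels to compare against; the baseline, window size and tolerance are illustrative values, not recommendations.

```python
from collections import deque

class AccuracyMonitor:
    """Rolling check that live accuracy stays near the offline baseline."""

    def __init__(self, baseline: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline          # accuracy measured during offline evaluation
        self.tolerance = tolerance        # acceptable drop before raising an alarm
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, predicted, actual) -> None:
        self.outcomes.append(1 if predicted == actual else 0)

    def healthy(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return True                   # not enough ground truth yet to judge
        live = sum(self.outcomes) / len(self.outcomes)
        return live >= self.baseline - self.tolerance

monitor = AccuracyMonitor(baseline=0.92)
monitor.record(predicted=1, actual=1)
print(monitor.healthy())
```

A service can pass every traditional health check while a monitor like this shows the model itself quietly degrading; that is the gap a hybrid team is there to catch.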
It's also important to take a success-based approach to auditing machine learning models, because they're fallible. If you want your company's data scientists to trust their models' predictions even when things go wrong (and they will), design the ML audit process so that models can be iterated on and improved rather than simply blamed.
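An audit process that supports iteration starts with replayable records: every prediction is logged with the model version and inputs, and ground truth is backfilled when it arrives. The sketch below is one simple way to do this; the file path, field names and model version are hypothetical, and a real system would write to a durable store rather than a local file.

```python
import json
import time
import uuid

def audit_record(model_version: str, features: dict, prediction) -> str:
    """Append a replayable audit record; the path and schema are illustrative."""
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
        "actual": None,  # backfilled later, once ground truth is known
    }
    with open("predictions_audit.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

audit_record("fraud-model-1.3.0", {"amount": 125.0}, prediction=0)
```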
4. Plan for scale
You'll need to run your ML workloads on a container-orchestration platform with automated deployment, such as Kubernetes or Docker Swarm. This will help you achieve multiple objectives. You'll be able to spin up machine learning workloads dynamically and automatically monitor tasks across clusters for failures, alerting operators to potential issues before they become outages. You'll also be able to consolidate analytics reports from multiple sources into one centralized dashboard to make sure your models are performing as expected.
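To make the Kubernetes option concrete, the sketch below uses the official Kubernetes Python client (`pip install kubernetes`) to create a replicated model-serving Deployment. It assumes access to a cluster via `~/.kube/config`; the deployment name, container image, port and namespace are placeholders for your own.

```python
from kubernetes import client, config

def deploy_model(name: str, image: str, replicas: int = 3) -> None:
    """Create a replicated Deployment for a model server; names are illustrative."""
    config.load_kube_config()  # reads cluster credentials from ~/.kube/config
    labels = {"app": name}
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name, labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=replicas,  # the platform restarts failed replicas automatically
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name=name,
                        image=image,
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )

deploy_model("fraud-model", "registry.example.com/fraud-model:1.3.0")
```

Scaling a model to more replicas then becomes a one-line change to `replicas`, and the same labels can drive monitoring and dashboard aggregation across the cluster.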
Master the learning curve
Before beginning an ML project, it's crucial to understand that though ML models can offer tremendous advantages and efficiencies to your organization, they can take on a life of their own if not properly managed. Know the risks and the operational complexities from the start, keeping the above four tips in mind. With a solid foundation in place, you'll be able to create and easily change your ML models so that they serve your business goals.