By Victor Thu, president, Datatron
Machine learning initiatives cannot be treated the same way as conventional software projects. It's imperative to move quickly so that you can test, fix issues and test again. In other words, you must be able to fail fast, and to do so early in the process. Discovering problems late is expensive and time-consuming.
AI requires a new approach
Traditional software development is built on explicit decision logic: you write rules as precisely as you can so the application behaves correctly. Once that logic is in place, it typically doesn't need to change apart from bug fixes. It's a methodical process; you advance gradually, making sure each step is correct before moving on to the next. It's a tried-and-true strategy that has consistently proven effective for software development.
However, that same strategy simply won't work for AI/ML projects. Instead, you need the capacity to iterate quickly and frequently to find success with an ML project. Because an ML model has to learn from data, and learning is a process, you should go in knowing it won't be fully accurate the first time it is deployed.
This process calls for multiple iterations. The reality is that your first model will produce unexpected results 99% of the time. Even if you spend months training it in the lab, its behavior will change once it encounters real-world data and traffic.
Don’t aim for immediate perfection
To test a model and determine what modifications it needs, you must be able to put it into production swiftly, make adjustments, release it again and refine it. For this reason, don't put too much effort into making your model flawless before testing it in production; the initial attempt won't be perfect, and no one should expect it to be.
While the model is still in the lab, pushing accuracy from 92% to 95% may not be significant for some use cases. Why not? Because the model has only been trained on a small portion of the data it will eventually see, lab accuracy rarely reflects production performance. You can end up investing a lot of time and money chasing that extra bit of accuracy while forgoing the value the model could already be delivering.
Effective steps in ML deployment
Because a model might fail or produce incorrect predictions, ML scientists are sometimes reluctant to put it into production. That reluctance makes sense, to a degree. The answer, though, is not to delay deployment but to have a system that lets you see what the model is doing in real time, so you can immediately pull and update it and then swiftly release a new version. Rather than getting bogged down in “analysis paralysis,” this is the most efficient method for putting machine learning models into production.
It's far preferable to launch the model and let it gain some real-world experience. That doesn't eliminate the need for data scientists to build the model as accurately as possible from the beginning. But as soon as that initial version is finished, you should start gathering that important production data right away.
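As a concrete illustration, the data-gathering step can start as something very simple: capture every input and prediction the deployed model serves so you have material to analyze later. The sketch below assumes a scikit-learn-style model object; the `predict_and_log` helper and the `prediction_log.jsonl` file are hypothetical names, not part of any particular product.

```python
import json
import time
from pathlib import Path

# Hypothetical capture file; a real system would write to a feature store
# or event stream rather than a flat file.
LOG_PATH = Path("prediction_log.jsonl")

def predict_and_log(model, features: dict) -> float:
    """Serve a prediction and record the input and output for later analysis."""
    # Assumes a scikit-learn-style model exposing .predict() on a 2-D array.
    prediction = float(model.predict([list(features.values())])[0])
    record = {
        "timestamp": time.time(),
        "features": features,
        "prediction": prediction,
    }
    with LOG_PATH.open("a") as fh:  # append-only log of live traffic
        fh.write(json.dumps(record) + "\n")
    return prediction
```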
As part of this process, you might run your models in A/B testing mode or shadow mode against real-world data. That way, you can compare the performance of the various models and gather plenty of evidence before deciding which model to promote or demote.
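A shadow deployment can be as simple as calling a candidate model on the same input the live model serves and logging both outputs for offline comparison. This is a minimal sketch of that pattern, assuming scikit-learn-style model objects; the `serve_with_shadow` function and the logger name are hypothetical.

```python
import json
import logging

logger = logging.getLogger("shadow_comparison")

def serve_with_shadow(live_model, shadow_model, features):
    """Return the live model's prediction while recording the shadow model's
    prediction on the same input, so the two can be compared offline."""
    live_pred = live_model.predict([features])[0]
    try:
        shadow_pred = shadow_model.predict([features])[0]
    except Exception:
        # The shadow model must never affect the live response.
        shadow_pred = None
    logger.info(json.dumps({
        "features": list(features),
        "live": float(live_pred),
        "shadow": None if shadow_pred is None else float(shadow_pred),
    }))
    return live_pred  # only the live model's answer reaches the user
```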
Another best practice is to build localized models rather than concentrate on a single global model that forecasts behavior for the entire macro environment. With a local model, you can use data from a specific situation so the model behaves as it should in that scenario. This saves time, data and effort compared with an all-encompassing model, which would require far more of those resources to work reliably.
Forecasting demand for customized sneakers serves as an illustration here. A global model trained on the population of New York City could be applied to the rest of North America, yet it would probably not accurately represent demand in other parts of the country. A localized model strategy would capture those regional differences and the higher profit margins you would otherwise be losing out on.
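In code, the localized approach can amount to fitting one model per region and routing each request to the model for its region. The sketch below uses scikit-learn's LinearRegression purely as a stand-in estimator, and the `region`, feature and target column names are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

def train_localized_models(df: pd.DataFrame, feature_cols, target_col, region_col="region"):
    """Fit one model per region instead of a single global model."""
    models = {}
    for region, group in df.groupby(region_col):
        model = LinearRegression()  # stand-in estimator; any regressor works
        model.fit(group[feature_cols], group[target_col])
        models[region] = model
    return models

def predict_localized(models, region, features):
    """Route a request to the model trained on its own region's data."""
    return models[region].predict([features])[0]

# Usage with hypothetical columns, e.g. sneaker demand by region:
# models = train_localized_models(sales_df, ["price", "ad_spend"], "units_sold")
# forecast = predict_localized(models, "new_york", [120.0, 5000.0])
```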
Models also require regular updating. Unlike traditional software, which can be set once and left alone, ML models need ongoing updates because the data in their environment is always changing. A model that isn't iterated on regularly deteriorates over time, so it must be carefully monitored throughout its lifetime.
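One lightweight way to watch for that deterioration is to compare the distribution of a feature in recent production traffic against its distribution in the training data, for example with a two-sample Kolmogorov-Smirnov test, and trigger retraining when they diverge. This is a minimal sketch under that assumption; the p-value threshold and the `trigger_retraining_pipeline` hook are hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(training_values: np.ndarray,
                     recent_values: np.ndarray,
                     p_threshold: float = 0.01) -> bool:
    """Flag drift in one numeric feature by comparing the distribution seen in
    training against recent production values with a two-sample KS test."""
    statistic, p_value = ks_2samp(training_values, recent_values)
    return p_value < p_threshold  # low p-value: the distributions likely differ

# Example check, run on a schedule against recent live traffic:
# if needs_retraining(train_df["price"].to_numpy(), live_df["price"].to_numpy()):
#     trigger_retraining_pipeline()  # hypothetical hook into your retraining job
```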
Machine learning’s new paradigm
Treating machine learning models like conventional software is unwise. Still, ML practitioners benefit from a rapid deployment approach for AI/ML models, just as software engineers have with DevOps. For ML projects, you need a system that makes it possible to launch models quickly and to compare different models, contrasting one that is live with one that isn't. These and the other best practices above will help you bypass analysis paralysis and fail quickly and early so that you can scale your machine learning.