Amazon Open-Sources Fortuna, A Library For Uncertainty Quantification of Machine Learning (ML) Models

When examining the class probabilities predicted by a deep neural network classifier, one can often observe that the probability assigned to one class is noticeably higher than the others. When a large fraction of the test data yields such results, the underlying model is likely overfitting and needs to be adjusted. Overfitting can produce an ‘overconfident’ model, one that is more certain of its predictions than the data warrants. To verify whether the probabilities a classifier returns are trustworthy, researchers frequently rely on confidence calibration: a model is well calibrated if the confidence it reports for a prediction matches the actual probability of that prediction being correct. Calibration is typically assessed by comparing the model’s reported confidences with the accuracy it actually attains on a holdout data set.
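As a concrete illustration, a common way to quantify miscalibration is the expected calibration error (ECE), which bins predictions by confidence and compares each bin’s average confidence with its empirical accuracy. The sketch below is a minimal, framework-agnostic NumPy implementation; the function name and binning scheme are illustrative choices and are not part of Fortuna.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Minimal ECE sketch: probs has shape (N, C), labels has shape (N,)."""
    confidences = probs.max(axis=1)          # confidence of the predicted class
    predictions = probs.argmax(axis=1)       # predicted class indices
    accuracies = (predictions == labels).astype(float)

    ece = 0.0
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # |average confidence - empirical accuracy|, weighted by the bin's share of samples
            gap = abs(confidences[in_bin].mean() - accuracies[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Hypothetical usage on holdout predictions:
# ece = expected_calibration_error(holdout_probs, holdout_labels)
```

An ECE near zero means the reported confidences can be read as probabilities of being correct; a large ECE signals the kind of overconfidence described above.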

In applications where inaccurate predictions are especially costly, such as self-driving cars and medical diagnosis, the need for calibrated confidence scores becomes even more apparent. In these situations, calibrated uncertainty estimates are essential for deciding whether a model is safe to deploy or whether human intervention is required. By attaching a meaningful confidence score to each prediction, a calibrated model makes it possible to flag and discard its least reliable predictions, helping practitioners avoid costly errors.
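One simple way this plays out in practice is selective prediction: predictions whose calibrated confidence falls below a threshold are deferred to a human instead of being acted on automatically. The snippet below is a hedged sketch of that idea; the threshold value and names are illustrative assumptions, not part of any particular library.

```python
import numpy as np

def split_by_confidence(probs, threshold=0.9):
    """Flag predictions whose confidence falls below an illustrative threshold."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accept = confidences >= threshold        # act on these automatically
    return predictions, accept               # ~accept marks cases for human review

# Hypothetical usage:
# preds, accept = split_by_confidence(calibrated_probs, threshold=0.9)
# auto_preds, review_idx = preds[accept], np.where(~accept)[0]
```

Note that this kind of thresholding is only meaningful if the confidences are calibrated; an overconfident model would let many wrong predictions through at the same threshold.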

Confidence calibration matters because many trained deep neural networks suffer from overconfidence. As a result, a variety of methods for calibrating confidence and quantifying uncertainty have been proposed over time, each with its own advantages and disadvantages. The most commonly used approaches include conformal prediction, temperature scaling, and Bayesian inference. However, a major practical hurdle is that existing tools and libraries are limited in scope and do not bring these methodologies together in one place. The resulting integration overhead discourages researchers from putting uncertainty quantification into production.
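Of the methods listed above, temperature scaling is the simplest to illustrate: a single scalar T is fitted on a validation set so that softmax(logits / T) is better calibrated, without changing which class the model predicts. The following is a minimal NumPy sketch of that standard formulation; the grid-search fitting procedure and function names are illustrative, not Fortuna’s implementation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)     # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the temperature that minimizes negative log-likelihood on validation data."""
    def nll(t):
        probs = softmax(val_logits, t)
        return -np.log(probs[np.arange(len(val_labels)), val_labels] + 1e-12).mean()
    return min(grid, key=nll)

# Hypothetical usage:
# T = fit_temperature(val_logits, val_labels)
# calibrated_test_probs = softmax(test_logits, T)
```

Because every logit is divided by the same positive scalar, the predicted class is unchanged; only the confidences are softened (T > 1) or sharpened (T < 1).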

To address this problem, Amazon Web Services (AWS) researchers introduced Fortuna, an open-source library for uncertainty quantification. The library makes it simple to run benchmarks and helps researchers build robust, trustworthy AI systems by drawing on state-of-the-art uncertainty quantification methodologies. Fortuna collects calibration methods from across the literature that can be applied to any trained neural network, and it exposes them through a standardized, user-friendly interface. Beyond its primary use case of uncertainty estimation, Fortuna can fit a posterior distribution, calibrate model outputs, and compute evaluation metrics.

Fortuna offers three usage modes: starting from uncertainty estimates, starting from model outputs, and starting from Flax models. Starting from uncertainty estimates is the quickest way to engage with the library; it provides conformal prediction techniques for both regression and classification and has the fewest compatibility requirements. The second mode, starting from model outputs, assumes that a model has already been trained in some framework; from its outputs, users can calibrate them, compute metrics, obtain conformal sets, and evaluate uncertainty. The third mode, starting from Flax models, lets users train a deep learning model written in Flax with Bayesian inference techniques; because it requires Flax models, it has stricter compatibility requirements than the other two. The first two modes are agnostic to any particular framework, so users can derive calibrated uncertainty estimates from a model trained elsewhere, beginning either from model outputs or directly from their own uncertainty estimates.
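To make the model-outputs workflow concrete, the sketch below implements plain split conformal prediction for classification from predicted probabilities alone, which is the kind of task that mode targets. It is a framework-agnostic NumPy illustration, not Fortuna’s actual API; consult the official documentation for the library’s real classes and method signatures.

```python
import numpy as np

def conformal_sets(val_probs, val_labels, test_probs, error=0.05):
    """Split conformal prediction from class probabilities (illustrative, not Fortuna's API).

    val_probs:  (N_val, C) predicted probabilities on a held-out calibration set
    val_labels: (N_val,)   true labels for the calibration set
    test_probs: (N_test, C) predicted probabilities on the test set
    error:      target miscoverage rate (e.g. 0.05 for ~95% coverage)
    """
    n = len(val_labels)
    # Nonconformity score: one minus the probability assigned to the true class.
    scores = 1.0 - val_probs[np.arange(n), val_labels]
    # Conformal quantile with the standard finite-sample correction, clipped at 1.
    level = min(1.0, np.ceil((n + 1) * (1 - error)) / n)
    q = np.quantile(scores, level, method="higher")
    # Each prediction set contains every class whose score falls below the threshold.
    return [np.where(1.0 - row <= q)[0] for row in test_probs]

# Hypothetical usage:
# sets = conformal_sets(val_probs, val_labels, test_probs, error=0.05)
```

Under the usual exchangeability assumption, the returned sets contain the true class with probability of at least 1 - error on average, regardless of which framework produced the probabilities.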

The Fortuna team is now working on adding more uncertainty quantification techniques to the library and expanding the set of examples that illustrate its use in various contexts. In a nutshell, Fortuna gives users a standardized, user-friendly interface to popular uncertainty quantification approaches such as conformal methods and Bayesian inference. To get started with Fortuna, the team recommends referring to the GitHub repository and official documentation. AWS has also open-sourced Fortuna to encourage independent developers to contribute to the library and help improve it.


Check out the Tool and GitHub. All credit for this research goes to the researchers on this project. Also, don’t forget to join our Reddit page and Discord channel, where we share the latest AI research news, cool AI projects, and more.


Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Goa. She is passionate about the fields of Machine Learning, Natural Language Processing and Web Development. She enjoys learning more about the technical field by participating in several challenges.



