UCI Researchers Propose A New Mathematical Model That Can Improve Performance By Combining Human And Algorithmic Predictions And Confidence Scores
This article is written as a summary by Marktechpost Staff based on the paper 'Bayesian modeling of human–AI complementarity'. All credit for this research goes to the researchers of this project. Check out the paper and post. Please don't forget to join our ML subreddit.
Numerous facets of daily life are facilitated by artificial intelligence, from chatbots that answer tax questions to algorithms that drive autonomous vehicles and provide medical diagnoses. Developing more intelligent and accurate systems, however, requires a hybrid human-machine approach. To that end, UCI researchers present a new mathematical model that can improve performance by combining human and algorithmic predictions with their associated confidence scores.
The strengths and weaknesses of humans and machine algorithms complement one another, as each draws on different information and strategies to make predictions and decisions. Through empirical demonstrations and theoretical analyses, the researchers show that humans can improve an AI's predictions even when human accuracy is subpar, and vice versa. The accuracy of these hybrid combinations exceeds that of combining the predictions of two humans or of two AI algorithms.
To evaluate the framework, UCI researchers conducted an image classification experiment in which human participants and computer algorithms classified distorted images of animals and everyday objects such as chairs, bottles, bicycles, and trucks. Human participants rated their confidence in each identification as low, moderate, or high, while the machine classifier generated a continuous confidence score. The results show that humans and AI algorithms express very different levels of confidence across images.
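To make the two kinds of output concrete, here is a minimal sketch of how a single trial's responses might be represented; the field names and values are hypothetical illustrations, not taken from the paper's materials.

```python
# Hypothetical per-image records for one trial (values are illustrative only).
human_response = {
    "label": "chair",        # category chosen by the participant
    "confidence": "high",    # ordinal rating: "low", "moderate", or "high"
}
machine_response = {
    "label": "bottle",       # top-scoring category from the classifier
    "score": 0.83,           # continuous classifier confidence in [0, 1]
}
```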
In certain instances, human participants were confident that a specific image depicted, say, a chair, whereas the AI algorithm was uncertain. For other images, the AI algorithm confidently identified the object while human participants were unsure whether the distorted image contained any recognizable object at all.
Combining Human and Machine Classifier Predictions
The Bayesian combination model combines the classifications and confidence scores of different ensembles of classifiers, where a "classifier" can refer to either a human or a machine. Although the framework applies to any number of classifiers, to simplify the analysis the researchers focus on pairs of classifiers: hybrid human-machine (HM), human-human (HH), and machine-machine (MM) pairs. For each image, the predictions from the two classifiers in a pair are combined to produce a single prediction for the pair.
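As a rough illustration of how such a pairwise combination can work, the sketch below treats each classifier's confidence as a probability distribution over classes and merges the two with Bayes' rule under a conditional-independence assumption. This is a simplification for intuition only: the paper's actual model is richer (it learns how to calibrate the ordinal human ratings and can account for dependencies between classifiers), and the mapping from human confidence ratings to probabilities used here is an invented placeholder.

```python
import numpy as np

# Hypothetical mapping from an ordinal human confidence rating to the
# probability the human assigns to their chosen class (values invented
# for illustration; the paper infers calibration from data).
HUMAN_CONF_TO_PROB = {"low": 0.5, "moderate": 0.7, "high": 0.9}

def human_class_probs(chosen_class, confidence, n_classes):
    """Turn a human label plus ordinal confidence into a distribution over
    classes, spreading leftover probability evenly over unchosen classes."""
    p = HUMAN_CONF_TO_PROB[confidence]
    probs = np.full(n_classes, (1.0 - p) / (n_classes - 1))
    probs[chosen_class] = p
    return probs

def combine_pair(probs_a, probs_b, prior=None):
    """Bayes-rule combination of two classifiers' class probabilities,
    assuming their errors are conditionally independent given the true class."""
    n_classes = len(probs_a)
    if prior is None:
        prior = np.full(n_classes, 1.0 / n_classes)  # uniform class prior
    combined = probs_a * probs_b / prior  # proportional to p(y | a, b)
    return combined / combined.sum()

# Example: four classes; the human picks class 0 with high confidence,
# while the machine classifier leans toward class 1 with moderate confidence.
human = human_class_probs(0, "high", n_classes=4)
machine = np.array([0.30, 0.55, 0.10, 0.05])
print(combine_pair(human, machine))  # the hybrid prediction favors class 0
```

The point the sketch captures is that a confident classifier pulls the combined prediction toward its answer while an uncertain one largely defers, which is how a less accurate partner can still add value to the pair.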
Using the new Bayesian framework, combining the predictions and confidence scores of a human and a machine produced a hybrid model that performed better than either the human or the machine alone.
While previous research has demonstrated the advantages of combining machine predictions or human predictions (the so-called "wisdom of the crowd"), this study forges a new path by demonstrating the potential of combining human and machine forecasts, pointing to the need for new and improved approaches to human-AI collaboration.
According to the researchers, the convergence of the cognitive sciences, which seek to understand how humans think and behave, and computer science, which focuses on developing new technologies, will provide additional insight into how humans and machines can work together to build more accurate artificially intelligent systems.
The findings have implications for algorithmic systems that have not yet attained human-level accuracy: adding an algorithmic predictor to a human predictor may be more advantageous than adding another human predictor. Human-level performance is therefore not a prerequisite for an AI algorithm to be useful; even an algorithm that falls short of human accuracy can improve the accuracy of hybrid predictions. Conversely, in domains where AI approaches have surpassed human performance, the results show that human judgment can still add value in hybrid HM systems.