MIT Researchers Developed Tools to Help Data Scientists Make the Features Used in Machine-Learning Models More Understandable for End-Users

Machine-learning models excel at a wide range of tasks, but building trust in them requires understanding how they operate. Because of the complexity of the features and algorithms used to train these models, even researchers often cannot explain how a model uses particular features or arrives at a given prediction.

Recent research by the MIT team introduces a taxonomy to help developers in different fields create features that are easier for their target audience to understand. In their paper, “The Need for Interpretable Features: Motivation and Taxonomy,” they identify the properties that make features interpretable for five types of users, ranging from artificial intelligence professionals to people affected by a machine-learning model’s predictions. They also offer advice on how developers can make features more accessible to the general public.

Machine-learning models take features as input variables, and those features are often chosen to maximize model accuracy rather than for whether a decision-maker can interpret them.

In a model used to predict the risk that a patient will experience complications after cardiac surgery, the team found that some features, such as the trend of a patient’s heart rate over time, were fed in as aggregated values. Although these features were “model ready,” the clinicians using the system did not know how they had been calculated.
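
The paper describes the problem rather than a specific computation, but a minimal Python sketch (using a hypothetical least-squares slope as the aggregate) shows how a raw heart-rate series can be collapsed into a single “model-ready” value whose origin is invisible to a clinician:

```python
import numpy as np

def heart_rate_trend(hours, bpm):
    """Hypothetical 'model-ready' aggregate: the slope (bpm per hour) of a
    least-squares line fit to a patient's heart-rate readings."""
    slope, _intercept = np.polyfit(hours, bpm, deg=1)
    return slope

# The model receives only the single aggregated number; the raw series and
# the fitting step that produced it stay hidden from the clinician.
readings_hours = [0.0, 1.0, 2.0, 3.0, 4.0]
readings_bpm = [72, 75, 79, 84, 88]
print(heart_rate_trend(readings_hours, readings_bpm))  # roughly 4.1 bpm per hour
```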

In contrast, many scientists valued aggregated features. For instance, instead of a feature like “number of posts a student made on discussion forums,” they preferred related features to be grouped and labeled with terms they recognize, such as “participation.”
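
As a rough illustration (not taken from the paper), grouping several low-level activity counts under a single label the experts already use might look like this in pandas:

```python
import pandas as pd

# Hypothetical raw activity counts logged per student.
raw = pd.DataFrame({
    "forum_posts": [3, 0, 12],
    "forum_replies": [5, 1, 9],
    "threads_started": [1, 0, 4],
})

# Combine the related counts and expose them under a familiar label
# instead of three separate low-level columns.
interpretable = pd.DataFrame({"participation": raw.sum(axis=1)})
print(interpretable)
#    participation
# 0              9
# 1              1
# 2             25
```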

According to the lead researcher, a major driver of the work is that interpretability exists at many levels. The taxonomy details which properties are likely to matter most to particular users and identifies characteristics that can make features more or less interpretable for different decision-makers.

For instance, machine-learning developers may prioritize features that are predictive and compatible with the model, since these improve its performance. Decision-makers without machine-learning experience, on the other hand, may be better served by human-worded features, that is, features described in natural language that users readily understand.

When creating interpretable features, it is important to know what level of interpretability is required; according to the researchers, not every domain needs every level.

The researchers also propose feature engineering techniques that developers can use to make features more understandable to a specific audience.

To prepare data for machine-learning models, data scientists typically apply transformations such as aggregation and normalization, which are often nearly impossible for a non-expert to interpret. In addition, most models cannot process categorical data without first converting it into numerical codes.
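
A minimal scikit-learn sketch (hypothetical columns, assuming scikit-learn 1.2 or later) shows the kind of normalization and categorical encoding that makes data “model ready” but hard for a layperson to read back:

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical raw table with one numeric and one categorical column.
df = pd.DataFrame({
    "age": [34, 51, 29],
    "referral_source": ["school", "hospital", "school"],
})

# Numeric columns are commonly normalized to zero mean and unit variance...
age_scaled = StandardScaler().fit_transform(df[["age"]])

# ...and categorical columns are expanded into 0/1 indicator codes.
encoder = OneHotEncoder(sparse_output=False)
referral_encoded = encoder.fit_transform(df[["referral_source"]])

print(age_scaled.ravel())               # roughly [-0.42  1.38 -0.96]
print(encoder.get_feature_names_out())  # ['referral_source_hospital' 'referral_source_school']
print(referral_encoded)                 # rows of 0/1 indicators
```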

They note that producing interpretable features may require undoing some of that encoding. Moreover, in many fields the trade-off between interpretable features and model accuracy is minimal: in earlier work, when they retrained a model for child welfare screeners using only features that met their interpretability standards, the drop in model performance was essentially nonexistent.
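
Continuing the hypothetical encoding example above, “undoing” a one-hot encoding so a screener sees the original category rather than a block of indicator columns could be as simple as:

```python
import numpy as np

# One-hot columns and encoded rows from the hypothetical example above.
onehot_columns = ["referral_source_hospital", "referral_source_school"]
encoded_rows = np.array([
    [0.0, 1.0],
    [1.0, 0.0],
    [0.0, 1.0],
])

def decode_onehot(row, column_names, prefix="referral_source_"):
    """Map a one-hot encoded row back to its human-readable category."""
    index = int(np.argmax(row))
    return column_names[index][len(prefix):]

print([decode_onehot(row, onehot_columns) for row in encoded_rows])
# ['school', 'hospital', 'school']
```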

Their work will let model developers handle intricate feature transformations more efficiently and produce people-oriented explanations of machine-learning models. The new system will also translate explanations generated for model-ready datasets into formats that decision-makers can understand.
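
The details of that system are in the paper, but the basic idea of rolling explanations computed on transformed columns back up to the original, human-readable features can be sketched as follows (column names and importance values are hypothetical, not the researchers’ actual implementation):

```python
# Hypothetical mapping from model-ready columns to the original features
# they were derived from.
column_to_feature = {
    "age_scaled": "age",
    "referral_source_hospital": "referral source",
    "referral_source_school": "referral source",
    "heart_rate_slope": "heart-rate trend",
}

# Hypothetical per-column importances produced by some explanation method.
column_importance = {
    "age_scaled": 0.1,
    "referral_source_hospital": 0.25,
    "referral_source_school": 0.25,
    "heart_rate_slope": 0.4,
}

# Aggregate the column-level scores back onto the features a decision-maker
# would actually recognize.
feature_importance = {}
for column, weight in column_importance.items():
    feature = column_to_feature[column]
    feature_importance[feature] = feature_importance.get(feature, 0.0) + weight

print(feature_importance)
# {'age': 0.1, 'referral source': 0.5, 'heart-rate trend': 0.4}
```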

They hope their study will encourage model developers to build interpretable features into the development process from the start, rather than relying on post hoc explainability.

This article is written as a summary by Marktechpost Staff based on the research paper 'The Need for Interpretable Features: Motivation and Taxonomy'. All credit for this research goes to the researchers on this project. Check out the paper and blog post.
