Unmasking the Author: AI or Human? Exploring the Emergence of AI Forensics with IBM’s Innovative Text Detection Tools

In the era of rapidly advancing AI, a pivotal challenge demands attention: transparency and trustworthiness in generative AI. IBM researchers aim to arm the world with AI detection and attribution tools that change how we evaluate generative AI. The complication is that LLMs themselves are not good at detecting content they wrote, nor at tracing a tuned model back to its source. As these models continue to reshape day-to-day communication, researchers are building new tools to make generative AI more explainable and reliable.

By adapting their trustworthy-AI toolkit for the foundation models of the modern era, the researchers aim to ensure accountability and trust in these developing technologies. IBM and Harvard researchers helped create one of the first AI-text detectors, GLTR, which analyzes the statistical relationships among words to look for the tell-tale signs of generated text. IBM researchers have since developed RADAR, a novel tool that identifies AI-generated text even after it has been paraphrased to deceive detectors. It pits two language models against each other: one paraphrases the text, while the other judges whether the result is AI-generated. IBM has also put safety measures around generative AI use in place, restricting employee access to third-party models like ChatGPT to prevent leaks of client data.
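GLTR's core statistic is easy to illustrate: rank each token by how likely a language model thinks it is, then measure what fraction of the text comes from the model's top predictions. Machine-generated text tends to be dominated by high-probability (low-rank) tokens, while human writing contains more surprising choices. The sketch below is a simplified, hypothetical version using a toy unigram frequency model in place of the contextual GPT-2 model that GLTR actually uses; the function names are illustrative, not GLTR's API.

```python
from collections import Counter

def token_ranks(tokens, freq):
    """Rank each token under a toy language model.

    Rank 1 means the model's single most likely token. GLTR's insight:
    generated text is dominated by low-rank tokens, human text less so.
    (Real GLTR ranks tokens under GPT-2's contextual predictions; this
    unigram stand-in is only for illustration.)
    """
    ordered = [w for w, _ in freq.most_common()]
    return [ordered.index(t) + 1 for t in tokens]

def fraction_top_k(tokens, freq, k=10):
    """Share of tokens drawn from the model's top-k predictions --
    a simple GLTR-style detection statistic."""
    ranks = token_ranks(tokens, freq)
    return sum(r <= k for r in ranks) / len(ranks)

# Toy model: word frequencies standing in for LM probabilities.
freq = Counter({"the": 10, "a": 8, "cat": 5, "sat": 3, "zyx": 1})
print(fraction_top_k(["the", "cat"], freq, k=2))  # "the" is rank 1, "cat" is rank 3
```

A high top-k fraction flags text as likely machine-generated; a detector would threshold this (and related statistics) rather than rely on it alone.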

In the world of generative AI, the next challenge is attribution: identifying which model produced a given piece of text, and which base model a tuned model descends from. IBM researchers have developed a matching-pairs classifier that compares the responses of different models to the same prompts to reveal which models are related. Automated AI attribution using machine learning has helped the researchers pinpoint a specific model's origin among numerous candidates. These tools help trace a model back to its base and understand its behavior.
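The matching-pairs idea can be sketched as follows: query an unknown model and each candidate base model with the same prompts, then score how similar their responses are, pairwise. This is a minimal, hypothetical sketch, assuming simple token-overlap similarity in place of the learned classifier the IBM researchers actually use; the function names and data are invented for illustration.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two responses (0 to 1)."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def attribute(unknown_responses, candidates):
    """Score each candidate base model by how closely its responses to
    the same prompts match the unknown model's responses; return the
    best match and all scores. A stand-in for a trained matching-pairs
    classifier, which would learn this comparison from data."""
    scores = {
        name: sum(jaccard(u, r) for u, r in zip(unknown_responses, resps))
              / len(unknown_responses)
        for name, resps in candidates.items()
    }
    return max(scores, key=scores.get), scores

# Hypothetical responses to one shared prompt.
unknown = ["paris is the capital of france"]
candidates = {
    "model_a": ["paris is the capital of france"],
    "model_b": ["i cannot answer that question"],
}
best, scores = attribute(unknown, candidates)
print(best)  # the candidate whose responses best match the unknown model
```

In practice a learned classifier over many prompt-response pairs is far more robust than raw overlap, but the structure is the same: related models answer alike.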

IBM has long been an advocate for explainable and trustworthy AI. It introduced the AI Fairness 360 toolkit, incorporating bias mitigation and explainability into its products. And now, with the November release of watsonx.governance, it is enhancing transparency across AI workflows. IBM remains determined in its mission to make transparency tools accessible to everyone.


Check out the IBM Article. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our 27k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.


Astha Kumari is a consulting intern at MarktechPost. She is currently pursuing a dual-degree course in the Department of Chemical Engineering at the Indian Institute of Technology (IIT), Kharagpur. She is a machine learning and artificial intelligence enthusiast and is keen on exploring their real-life applications in various fields.
