Congress is taking action to restrict the use of artificial intelligence (AI) as the technology continues to transform numerous sectors. The House of Representatives has imposed restrictions on the use of ChatGPT, the AI-powered chatbot developed by OpenAI and backed by Microsoft, in an effort to address its potential ramifications and ensure responsible AI adoption. Let’s take a closer look at this development.
The House has drawn up rules for how its employees may use ChatGPT. According to a memo from the chamber’s Chief Administrative Officer, Catherine L. Szpindor, the premium version of the chatbot, ChatGPT Plus, is the only version permitted in congressional offices. The free version, which lacks the paid tier’s privacy features, is prohibited because of the risks it poses to sensitive House data.
Even then, ChatGPT may be used only for “research and evaluation” rather than being integrated into operational workflows, and employees may not paste sensitive or non-public information into the chatbot. These precautions are an attempt to balance the benefits of AI with the need to protect sensitive information.
The House’s decision to limit how ChatGPT may be used is part of a broader push in Congress to regulate AI. Lawmakers, including Senate Majority Leader Chuck Schumer and a bipartisan group of senators, are drafting legislation to govern generative AI models such as ChatGPT. Their goal is to promote innovation while ensuring that AI technologies are used ethically and safely.
Senator Schumer has pointed out that AI can propel scientific progress, technological innovation, and economic development, but that safety and misuse concerns must be addressed for that innovation to flourish. Proposed legislation tackles key questions around generative AI, including how users should be notified that they are interacting with it, how it should be distinguished from other forms of AI, and how to handle content produced jointly by humans and machines.
The restrictions Congress has placed on ChatGPT mirror steps taken by governments and international organizations around the world. Italy became the first country to ban the chatbot, citing concerns over data harvesting and the lack of restrictions keeping minors off the service. This underscores the ongoing global dialogue on AI regulation and the need for thorough rules that safeguard users and guarantee ethical AI practices.
Organizations across industries face challenges similar to Congress’s in bringing generative AI into their workflows. Concerned about potential breaches of confidentiality, tech giants such as Apple and Samsung have already limited employee use of ChatGPT and other generative AI tools. The education sector, particularly universities, is also grappling with plagiarism enabled by generative AI.
OpenAI chief executive Sam Altman has been publicly calling for stronger AI regulation for months, even as he has lobbied the EU to water down its forthcoming AI Act, which is designed to safeguard citizens from the dangers posed by AI development. Still, there is a growing consensus, echoed by Altman, that firm rules are needed to ensure AI technologies are applied responsibly and securely.
A comprehensive regulatory package covering disclosure, enforcement, and the distinction between generative AI and other forms of AI is expected in the coming weeks. While lawmakers work on that sweeping framework, individual bills are being introduced with the expectation that their contents will be folded into the final package. Together, these efforts aim to encourage ethical AI development and shape the future of AI law.
First reported on Yahoo Finance
Originally published on ReadWrite.