Google has restricted the training of AI systems that can produce deepfakes on its Google Colaboratory platform. Deepfake-related work is now listed among the forbidden projects in the amended terms of use, a change first spotted by BleepingComputer over the weekend.
Colaboratory, or Colab, grew out of an internal Google Research initiative in late 2017. It is intended to let anyone write and run arbitrary Python code through a web browser, particularly code for machine learning, education, and data analysis. Google makes hardware available to both free and paid Colab users, including GPUs and Google's custom-designed, AI-accelerating tensor processing units (TPUs).
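For context, a Colab notebook can check which accelerator the runtime has assigned to it. The snippet below is a minimal sketch, assuming TensorFlow, which Colab preinstalls on its standard runtimes; the menu path mentioned in the message reflects Colab's current interface and may change.

# Minimal sketch: check whether the current Colab runtime exposes a GPU.
# Assumes TensorFlow, which standard Colab runtimes preinstall.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("GPU runtime detected:", gpus)
else:
    print("No GPU assigned; select one under Runtime > Change runtime type.")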
Colab has become the de facto venue for demos within the AI research community in recent years. Researchers who release code frequently include links to Colab pages on or alongside the GitHub repositories where the code is hosted. However, Google has not always been stringent about policing Colab content, which could open the door to people who want to use the service for less-than-ethical purposes.
According to data from Archive.org, Google quietly modified the Colab terms in mid-May. The existing prohibitions on things like denial-of-service attacks, password cracking, and torrent downloads remain in place.
Deepfakes can take various forms, but one of the most prevalent is videos in which one person's face has been convincingly superimposed onto another person's. Unlike the crude Photoshop jobs of the past, AI-generated deepfakes can in some cases replicate a person's body movements, microexpressions, and skin tones better than Hollywood-produced CGI.
Several viral videos have shown that deepfakes can be harmless, even entertaining. But they have also been used for political propaganda, such as fabricated videos of Ukrainian President Volodymyr Zelenskyy giving a speech about the war in Ukraine that he never actually delivered. Hackers are also increasingly using them to target social network users in extortion and fraud schemes.
According to one report, the number of deepfakes online grew from roughly 14,000 to 145,000 between 2019 and 2021. Forrester Research estimated that deepfake fraud scams would cost $250 million by the end of 2020.
Conclusion:
Google Colab has become a popular platform for non-coders to create deepfakes. According to a recent snapshot of its terms, Colab has quietly added "making deepfakes" to its list of prohibited activities, and users who attempt such work now receive a warning notice. The prohibition appears to target DeepFaceLab (DFL), a prominent open-source deepfake generator. Faceswap, another deepfake tool, apparently remains unaffected, although the ban may simply not have reached it yet.