Over half of Russians believe deepfakes should be regulated by law
In the modern world, AI-generated content can be used for deception, fraud, and the manipulation of public opinion. In response, Russia is increasingly looking at ways to counter the spread of this type of false information. A recent poll showed that 54% of Russians believe that deepfakes need to be regulated at the legislative level. Vladimir Tabak, General Director of the autonomous non-profit organizations Dialogue and Dialogue Regions, spoke about this issue during the session ‘Stolen Identity: Legal Aspects Related to the Regulation of Deepfakes and Voice Imitation’ at the St. Petersburg International Legal Forum 2024.
In recent years, deepfake technologies have been developing rapidly and have reached a new stage of evolution. However, their use can have both positive and negative consequences. Today, deepfakes are often used for malicious and criminal purposes and pose a real threat to the security of individuals, the state, and business. This is why the dangers of deepfakes need to be discussed, said Tikhon Makarov, Advisor to the
General Director of Dialogue Regions. Experts at the SPILF answered questions
about how generated content should be regulated, what technologies exist to
identify deepfakes in Russia today, and where the line should be drawn between
the violation of rights and the freedom of creativity.
In 2023, the number of deepfake videos tripled
compared with 2022, while the number of deepfake audio recordings increased almost eightfold, said Tatyana Matveeva, Head of the Presidential
Directorate for the Development of Information and Communication Technology and
Communication Infrastructure of the Russian Federation. The expert community
predicts an even bigger increase in the number of deepfakes in the coming
years:
“What makes deepfakes unique is that such content looks real, is misleading, and spreads so quickly that people have no time to think. Deepfake technologies are evolving, so recognition technologies must evolve with them. It is important for us that people are warned, understand the information risks, and double-check their sources of information. It is also important to promote trusted artificial intelligence technology.”
Today, the biggest problems are deepfake videos and audio recordings, noted Anton Gorelkin, Deputy Chairman of the Committee
of the State Duma of the Federal Assembly of the Russian Federation on
Information Policy, Information Technologies and Communications and Chairman of
the Board of the Regional Public Center for Internet Technologies. However, the
regulatory measures that are currently under discussion, including labelling,
do not provide a comprehensive solution to the problem:
“Neural networks learn from their mistakes, so people
need an accessible tool to check content. The government needs to focus on this
task now and incorporate the best projects that exist in Russia for recognizing
deepfakes. As for regulation, we cannot get by without such a tool; it should
be in the arsenal of both investigators and judges. But we shouldn’t rush regulation, so as not to make the work of our development companies even more difficult.”
Vladimir Tabak shared the results of a study in which 47% of Russians said they know or have heard about deepfakes, but every fifth respondent gave an incorrect definition of the concept. At the same time, 50% of the respondents believe that deepfakes pose a danger. Modern technologies make it possible to create higher-quality deepfakes and have demonstrated impressive capabilities, Tabak noted. However, Russia has technological solutions to this problem:
“Our service, Zefir, detects deepfakes with an integral accuracy of 85%. Over the three years of its existence, the service has verified more than five million pieces of content. The amount of content created by artificial intelligence is increasing exponentially, so we face a big challenge. AI generates 10% of unreliable information. Moreover, the volume of such content increased 17-fold in 2023 compared with 2022. It is critical for us to decide on a strategy for working with deepfakes and artificial intelligence. Until then, it is too early to talk about legislative regulation.”
Alexey Goreslavsky, General Director of the Internet Development
Institute (IRI), said it is also crucial to understand why AI-generated content
is being created. If technology is being used with the intent to mislead or steal
a person’s identity, it will be possible to charge and punish the attacker:
“In many ways, this problem arose not as a result of
technological progress, but because people are gullible, and in some ways they
have really been misled. Can and should content be created using technology?
For sure. If new synthesized images are not created, we will not evolve and
will remain in the world of the paper book. The IRI has supported a number of
projects that involve these experiments. I think this practice will continue.”
Evgenia Ryzhova, Advisor to the General Director for Scientific and Technical Development of the Main Radio Frequency Centre, said that AI became a ‘black box’ technology back in 2014. Today, it is impossible to fully control AI or predict its development.
This technology serves several different purposes: on the one hand, generative AI helps people create content, drastically reducing production time, while on the other, it is becoming a weapon in the information war. Thus, in order to regulate this sphere effectively, it is crucial first of all to define all the essential aspects of AI technology: its objects, the parties to legal relations, and other elements.