Deepfakes And Political Manipulation

Written by Leonid SAVIN

ORIENTAL REVIEW


Not a day seems to go by without the American media writing about Russia’s Internet meddling in the US elections. Major international and specialist publications headquartered in the US routinely regurgitate the myth about “Russian trolls” and “GRU hackers” without a single shred of evidence beyond unsubstantiated accusations. Evidence has, in fact, been provided by a private company, but it points to the contrary. As one Google experiment convincingly showed, for just $100 you can create the illusion that a Russian company is trying to influence public opinion within America. All you need to do is buy a mobile phone and a few SIM cards in Panama, choose a common Russian first name and surname and use it to set up a Yandex account, then make your IP address appear to be in Saint Petersburg using NordVPN. You can then open an AdWords account, pay for advertising using the details of a legally registered company, and place political content on the Internet that could be regarded as inflammatory. This is exactly what US citizens working at Google did, and they did not hesitate to report on it. So what is stopping the NSA, the CIA, or Russophobe fanatics familiar with hacking techniques from doing exactly the same thing, regardless of whether they belong to a political party or not? Common sense suggests that this is exactly what is being done to create the appearance of Russian interference, but no one is able to provide any real evidence, of course.

Another example of how the US can influence public opinion is the creation of fake propaganda, a technique that was developed by the US military in Iraq in the early 2000s.

According to the British non-governmental organisation The Bureau of Investigative Journalism, the Pentagon paid the British PR company Bell Pottinger more than $500 million to create fake videos showing various militant and terrorist activities. A group of Bell Pottinger employees was stationed alongside the US military at its Camp Victory headquarters in Baghdad almost as soon as the American occupation began in 2003. A series of further contracts was issued between 2007 and 2011. The company’s former chairman, Lord Tim Bell, confirmed to journalists that Bell Pottinger did in fact carry out covert work for the Americans, the details of which cannot be disclosed for reasons of confidentiality.

It is worth mentioning that Bell Pottinger was once responsible for shaping Margaret Thatcher’s image and helped the Conservative Party win three elections.

Martin Wells, a former video editor with the company, said that his time at Camp Victory had opened his eyes and changed his life. On the US side, the project was supervised by then-General David Petraeus. Matters he could not decide on himself were sent to the very highest levels in Washington for approval.

The most scandalous part of the story is the propaganda videos produced by the UK company in the name of the terrorist group Al-Qaeda. Once the material was ready and in the required format, the videos were copied onto CDs and given to US Marines, who would then leave them in Iraqi homes during searches and raids. A code was embedded into the CDs that made it possible to track where they were played. It was subsequently discovered that the fake Al-Qaeda videos were being watched not just in Iraq, but also in Iran, Syria, and even the United States. It is possible that this tracking helped US security agencies trace the distribution of the fake propaganda videos, but how many people became extremists thanks to the Pentagon’s secret project?

And since technology has come on in leaps and bounds in recent years, there is now talk of possibly using artificial intelligence for projects like these – whether for the political manipulation of elections or the spread of disinformation.

In fact, AI-based technology has already been associated with several recent scandals. One of these was Cambridge Analytica’s use of information from Facebook profiles to target voters during the US presidential elections.

Commenting on the scandal, The Washington Post noted that: “Future campaigns will pick not just the issues and slogans a candidate should support, but also the candidate who should champion those issues. Dating apps, the aggregate output of thousands of swipes, provide the perfect physical composite, educational pedigree and professional background for recruiting attractive candidates appealing to specific voting segments across a range of demographics and regions. Even further in the future, temporal trends for different voter blocks might be compared to ancestry, genetic and medical data to understand generational and regional shifts in political leanings, thereby illuminating methods for slicing and dicing audiences in favor of or against a specified agenda.”

Artificial intelligence can also power bots that substitute for a person and even simulate a conversation. Natural-language generation systems like Quill, Wordsmith and Heliograf are used to convert tables of data into text documents and even write news articles on various subjects – Heliograf is used by The Washington Post, in fact – but bots can be used for both good and bad.
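Systems like Heliograf are proprietary, but the basic template-filling idea behind turning tabular data into news copy can be sketched in a few lines of Python. The template, field names and election figures below are invented purely for illustration:

```python
# A toy data-to-text generator: fill a sentence template from a structured
# record, the way automated news systems turn results tables into copy.

def render_result(row):
    """Turn one structured election result into a one-sentence news item."""
    margin = row["winner_votes"] - row["loser_votes"]
    template = ("{winner} defeated {loser} in {district} "
                "by {margin:,} votes ({winner_votes:,} to {loser_votes:,}).")
    return template.format(margin=margin, **row)

# Hypothetical input rows, as might come from a results feed or spreadsheet.
rows = [
    {"district": "District 12", "winner": "A. Smith", "loser": "B. Jones",
     "winner_votes": 54210, "loser_votes": 48993},
]

for row in rows:
    print(render_result(row))
```

Production systems add many templates, variation rules and editorial checks on top of this pattern, but the core remains the same: structured data in, grammatical sentences out – at a scale and speed no human newsroom can match.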

According to the US military, AI-based information operations tools can empathise with people, say something when needed, and alter the perceptions that control the use of physical weapons. Future information operations systems will be able to individually monitor and influence tens of thousands of people simultaneously.

In the summer of 2018, DARPA launched a project to determine whether fake video and audio generated by artificial intelligence can be reliably identified. The analysis of such files is itself done using artificial intelligence.

Videos typically have more impact on an audience because it is believed that they are harder to fake than photographs. They also look more convincing than a text read out on behalf of a politician. This is no problem for modern technologies, however. In April 2018, a video was made public called ObamaPeele after the people involved. The video showed Barack Obama giving a rather strange speech, but the text was actually being read by an unseen actor. A special programme had processed the actor’s delivery in such a way that Obama’s facial movements were fully consistent with what was being said. Computer technology experts at the University of Washington conducted a similar experiment with a Barack Obama speech in 2017 and made the results publicly available with a detailed description of how the method works.

YouTuber ‘derpfakes’ trained an AI face-swap tool to create a composite of Trump’s face over Baldwin’s speech and mannerisms.


The DARPA project targets so-called “deepfakes” – videos in which the face of one person has been superimposed onto the body of another. Experts note that technology like this has already been used to create several fake celebrity pornographic videos, but the method could also be used to create videos of politicians saying or doing something outrageous and unacceptable.

Technologists at DARPA are particularly concerned that new AI techniques for creating fake videos make it almost impossible to recognise them automatically. Using so-called generative adversarial networks, or GANs, it is possible to create realistic artificial images. Experts at DARPA are evidently concerned that this technology may be used by someone else, since, if the US loses its monopoly on the creation, verification and distribution of fake material, it will find itself facing the same problems it has been preparing for other countries.
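The adversarial game behind a GAN – a generator trying to fool a discriminator, a discriminator trying to tell real from fake – can be shown with a deliberately tiny sketch. This is a toy illustration of the GAN training objective on one-dimensional data, not DARPA’s tooling or a real deepfake pipeline; the single-unit networks, learning rate and target distribution are arbitrary choices made so the gradients can be written by hand:

```python
import numpy as np

# Toy GAN: the generator learns to mimic samples from N(4, 1).
# Real deepfake GANs use deep convolutional networks, but the
# adversarial game below is the same two-player minimax training.

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0    # generator G(z) = a*z + b
w, c = 0.1, 0.0    # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.03, 64

for step in range(3000):
    z = rng.standard_normal(batch)
    x_real = 4.0 + rng.standard_normal(batch)   # "authentic" samples
    x_fake = a * z + b                          # "forged" samples

    # Discriminator step: maximise log D(real) + log(1 - D(fake)).
    s_r = sigmoid(w * x_real + c)
    s_f = sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - s_r) * x_real + s_f * x_fake)
    c -= lr * np.mean(-(1 - s_r) + s_f)

    # Generator step: maximise log D(fake) (non-saturating loss).
    s_f = sigmoid(w * x_fake + c)
    a -= lr * np.mean(-(1 - s_f) * w * z)
    b -= lr * np.mean(-(1 - s_f) * w)

fake_mean = float(np.mean(a * rng.standard_normal(10000) + b))
print(f"generated mean after training: {fake_mean:.2f} (target 4.0)")
```

After a few thousand rounds of this tug-of-war the generator’s output distribution drifts toward the real data. Scaled up to images, the same dynamic is what lets GANs produce fakes realistic enough to defeat automatic detection – which is precisely what worries DARPA.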

And while scientists in military uniforms are racking their brains over how to get ahead of other countries in such a specific information arms race, their civilian colleagues are already calling the trend “an information apocalypse” and “disinformation on steroids”.