Tag Archives: deepfake

Voice mimicking AI dupes Alexa and other voice recognition devices

Credit: Pixabay.

Deepfakes (a portmanteau of “deep learning” and “fake”) are synthetic media in which the likeness of a person in pictures, videos, or speech is swapped with someone else’s (often a celebrity’s) artificial, AI-generated likeness. You may have come across some on the internet before, such as the Tom Cruise deepfakes on TikTok or the Joe Rogan voice clones.

While the image and video varieties tend to be the most convincing, the impression so far has been that audio deepfakes lagged behind, at least without copious amounts of training audio. But a new study serves as a wake-up call, showing that voice-copying algorithms that are easy to find on the internet are already pretty good. In fact, the researchers found that with minimal amounts of training data, these algorithms can fool voice recognition devices, such as Amazon’s Alexa.

Researchers at the University of Chicago’s Security, Algorithms, Networking and Data (SAND) Lab tested two of the most popular deepfake voice synthesis algorithms — SV2TTS and AutoVC — both of which are open-source and freely available on GitHub.

The two programs are known as ‘real-time voice cloning toolboxes’. The developers of SV2TTS boast that just five seconds’ worth of recorded speech is enough to generate a passable imitation.
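To make that pipeline concrete, below is a minimal sketch of the three-stage SV2TTS approach: a speaker encoder that distills a short reference clip into an identity embedding, a synthesizer that turns text plus that embedding into a spectrogram, and a vocoder that renders audio. The function names and placeholder math are illustrative stand-ins, not the toolbox’s actual API; the real stages are trained neural networks.

```python
import numpy as np

def embed_speaker(reference_wav: np.ndarray) -> np.ndarray:
    # Real encoder: an LSTM trained for speaker verification that maps
    # ~5 seconds of audio to a fixed-length identity embedding ("d-vector").
    return np.random.default_rng(0).standard_normal(256)

def synthesize_mel(text: str, speaker_embedding: np.ndarray) -> np.ndarray:
    # Real synthesizer: a Tacotron-style attention network conditioned on
    # the embedding, so the mel spectrogram carries the target's timbre.
    n_frames = 80 * max(len(text) // 10, 1)
    return np.zeros((80, n_frames)) + speaker_embedding[:80, None]

def vocode(mel: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    # Real vocoder: a WaveRNN/WaveNet model turning the spectrogram
    # into an audible waveform.
    return np.zeros(mel.shape[1] * (sample_rate // 80))

# Clone from a five-second reference clip, then speak arbitrary text:
reference = np.zeros(5 * 16000)                  # stand-in recording
embedding = embed_speaker(reference)             # who it sounds like
mel = synthesize_mel("Alexa, unlock the door.", embedding)
waveform = vocode(mel)                           # what an attacker would play
```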

The researchers put both systems to the test by feeding each the same 90 five-minute recordings of different people talking. They also recorded samples from 14 volunteers, who gave their permission for the researchers to test whether the computer-generated voices could unlock voice recognition systems such as Microsoft Azure, WeChat, and Amazon Alexa.

SV2TTS was able to trick Microsoft Azure about 30 percent of the time but got the best of both WeChat and Amazon Alexa almost two-thirds, or 63 percent, of the time. A hacker could use this to log into WeChat with a synthetic vocal message mimicking the real user or access a person’s Alexa to make payments to third-party apps.

AutoVC performed quite poorly, being able to fool Microsoft Azure only 15 percent of the time. Since it fell short of expectations, the researchers didn’t bother to test it against WeChat and Alexa voice recognition security.

In another experiment, the researchers enlisted 200 volunteers who were asked to listen to pairs of recordings and identify which of the two they thought was fake. The volunteers were fooled nearly half the time, making their judgments no better than a coin toss.

The most convincing audio deepfakes were those mimicking the voices of women and of non-native English speakers, a pattern the researchers are still investigating.

“We find that both humans and machines can be reliably fooled by synthetic speech and that existing defenses against synthesized speech fall short,” the researchers wrote in a report posted on the open-access server arXiv.

“Such tools in the wrong hands will enable a range of powerful attacks against both humans and software systems [aka machines].”

In 2019, a scammer performed an “AI heist”, using deepfake voice algorithms to impersonate a German executive at an energy company and convince employees to wire him $240,000. According to the Washington Post, the person who performed the wire transfer found it odd that their boss would make such a request, but the German accent and familiar voice heard over the phone were convincing. Cybersecurity firm Symantec says it has identified similar cases of deepfake voice scams that resulted in losses in the millions of dollars.

We should talk about ‘deepfake geography’: fake AI-generated satellite images

You may have heard about ‘deepfakes’ before. These are elaborate hoaxes generated by artificial intelligence, most typically in video format. In these highly realistic forgeries, an actor’s facial expressions and lip movements are mapped onto the impersonated individual’s face. This isn’t some comical Photoshop job. The voice can be impersonated too, leading to lifelike apparitions that are impressive and terrifying at the same time.

Obvious targets include celebrities like Mark Zuckerberg, Barack Obama, or Vladimir Putin, who have been turned into realistic puppets. Many other deepfakes are pornographic, mapping the faces of female celebrities onto porn stars; a staggering 96% of deepfakes posted online up to September 2019 were fake porn, showcasing how readily the technology can be weaponized against women.

Besides deepfake pictures, videos, and audio, scientists at the University of Washington now warn that maps can also be faked with the same technology, in the form of AI-generated satellite imagery.

Deepfakes: now a geography problem

Various agents, whether state-sponsored or not, have been forging satellite imagery for years; this isn’t news. What’s more, mapmakers intentionally introduce some inaccuracies of their own as a means of catching copyright infringement. These include fake streets, churches, or towns placed on purpose, so that if someone copies the map, the owner can prove the theft: no one could have independently mapped features that don’t exist.

Sometimes the cartographers have fun with these spoofs and even challenge users to find them, as a sort of Easter egg hunt. For instance, an official Michigan Department of Transportation highway map in the 1970s included the fictional cities of “Beatosu” and “Goblu,” a play on “Beat OSU” and “Go Blue,” because the then-head of the department wanted to give a shoutout to his alma mater.

But deepfake maps are anything but funny. Bo Zhao, an assistant professor of geography at the University of Washington and lead author of a recent study exposing the dangers of AI-forged maps, claims that such misleading satellite imagery could be used to do harm in a number of ways. This is even more concerning if deepfakes are ever applied to imagery such as that from the WorldView-3 satellite, whose resolution is so high you can zoom in to see individual people.

In fact, in 2019, the US military warned about this very prospect through the National Geospatial-Intelligence Agency, the organization charged with supplying maps and analyzing satellite images for the U.S. Department of Defense. Military planning software, for instance, can be misled by fake data showing a tactically important feature, such as a bridge, in an incorrect location.

“The techniques are already there. We’re just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it,” Zhao said.

For the new study, Zhao and colleagues fed maps and satellite images from three cities — Tacoma, Seattle, and Beijing — to a deep learning network that is not all that different from those used to create deepfakes of people. The technique is known as generative adversarial networks, or GANs.

After the machine was trained, it was instructed to generate new satellite images from scratch showing a fictitious region of one city, drawn from the characteristics of the other two.
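To give a sense of what that generation step looks like in practice, here’s a hedged sketch: a trained image-to-image generator (a CycleGAN-style setup is assumed here; the study’s exact architecture may differ) translates a base-map tile of one city into a fake satellite tile carrying another city’s visual style. The small untrained network below is a stand-in, just to show the data flow.

```python
import torch
import torch.nn as nn

# Stand-in generator: a real model would be a deep convolutional network
# trained adversarially on (map, satellite) tile pairs of the style city.
generator_seattle_style = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Tanh(),
)

# A 256x256 RGB base-map tile of the target neighborhood (random here).
tacoma_basemap = torch.rand(1, 3, 256, 256)

with torch.no_grad():
    fake_satellite = generator_seattle_style(tacoma_basemap)

print(fake_satellite.shape)  # torch.Size([1, 3, 256, 256])
```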

Fake videos, now fake buildings and satellite images

One such set of fake satellite images shows a supposed Tacoma neighborhood (the base map) rendered with visual patterns typical of Seattle and Beijing. In the image below, panels a) and b) show the mapping software’s view and an actual satellite image of the neighborhood as it really is, respectively. The bottom panels show the same neighborhood reimagined: with the low-rise buildings and greenery you’d expect of Seattle (panel c), and as a Beijing version with taller buildings, over which the AI has cast plausible shadows (panel d). In both the genuine and fake maps, the road networks and building locations are similar but not exact, and it is these small but misleading details that can cause mayhem.

These are maps and satellite images, real and fake, of one Tacoma neighborhood. Credit: Cartography and Geographic Information Science.

Telling the real satellite imagery apart from the fake can be challenging for the untrained eye. This is why Zhao and colleagues also performed image processing analyses that can identify fakes based on artifacts found in color histograms, as well as in the frequency and spatial domains.
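As a rough illustration of that kind of forensic check, the sketch below compares color-histogram and high-frequency Fourier statistics of a suspect tile against known-real reference imagery. The features and thresholds are illustrative assumptions, not the detector the authors actually built.

```python
import numpy as np

def color_histogram(img: np.ndarray, bins: int = 32) -> np.ndarray:
    # Normalized per-channel histograms; GAN output often shows subtly
    # skewed color statistics compared to real sensor data.
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    return np.concatenate(hists) / img[..., 0].size

def highfreq_energy(img: np.ndarray) -> float:
    # GAN upsampling tends to leave periodic artifacts in the high
    # frequencies of the Fourier spectrum.
    gray = img.mean(axis=2)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    y, x = np.ogrid[:h, :w]
    is_high = (y - h // 2) ** 2 + (x - w // 2) ** 2 > (min(h, w) // 4) ** 2
    return float(spectrum[is_high].sum() / spectrum.sum())

def looks_synthetic(img, real_ref, hist_tol=0.2, freq_tol=0.1) -> bool:
    # Flag the tile if either statistic drifts too far from the reference.
    hist_gap = np.abs(color_histogram(img) - color_histogram(real_ref)).sum()
    freq_gap = abs(highfreq_energy(img) - highfreq_energy(real_ref))
    return hist_gap > hist_tol or freq_gap > freq_tol

# Toy usage with random 8-bit RGB tiles standing in for imagery:
rng = np.random.default_rng(42)
tile = rng.integers(0, 256, (256, 256, 3)).astype(float)
ref = rng.integers(0, 256, (256, 256, 3)).astype(float)
print(looks_synthetic(tile, ref))
```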

In any event, the aim of this study wasn’t to show that satellite imagery can be falsified. That was already a foregone conclusion. Rather, scientists wanted to learn whether they could reliably detect fake satellite images, so that geographers may one day develop tools that allow them to spot fake maps similarly to how fact-checkers spot fake news today — all for the good of the public. According to Zhao, this was the first study to touch upon the topic of deepfakes in the context of geography.

“As technology continues to evolve, this study aims to encourage more holistic understanding of geographic data and information, so that we can demystify the question of absolute reliability of satellite images or other geospatial data,” Zhao said. “We also want to develop more future-oriented thinking in order to take countermeasures such as fact-checking when necessary.”

The findings appeared in the journal Cartography and Geographic Information Science.

Portrait-to-animation AI brings to life Marie Curie, Charles Darwin, and more

Marie Curie (1920). Credit: MyHeritage.

Going from pictures to moving pictures was a huge leap in technology and value. We can now archive human culture in a far richer format than simple text or static photos. Now, it is even possible to fill in the blanks from the past. Using AI, researchers have transformed photos of famous people into hyper-realistic animations that shine new light upon historical figures.

Charles Darwin (1855). Credit: MyHeritage.

Anyone can use the tool — fittingly named Deep Nostalgia — to animate faces in photos uploaded to the system. The new service, which was produced by genealogy site MyHeritage, uses deep learning to turn a static portrait into a short video with life-like facial expressions.

Amelia Earhart (1937). Credit: MyHeritage.

Specifically, the AI uses a method known as generative adversarial networks (GANs for short), in which two different networks are pitted against each other. One network is responsible for producing content while the other judges how well that content matches its reference material. Over billions of iterations, the generator can get very good, so good it might fool you into thinking you’re watching original footage.
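For the curious, here’s what that tug-of-war looks like in code: a toy PyTorch training loop in which tiny fully connected networks stand in for the image models a production system would use. This is a generic GAN sketch, not MyHeritage’s actual implementation.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, data_dim))          # produces content
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                  nn.Linear(128, 1))                 # judges real vs. fake
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    real = torch.randn(32, data_dim)                 # stand-in for real images
    fake = G(torch.randn(32, latent_dim))

    # Discriminator step: learn to tell real samples from generated ones.
    d_loss = (bce(D(real), torch.ones(32, 1)) +
              bce(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to make the discriminator label fakes "real".
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```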

The tool is ideal for animating old family photos and celebrity pictures. It can even work with drawings and illustrations.

In order to bring a portrait to life, the AI maps a person’s face onto footage of another. It’s essentially the same way deepfakes work to impersonate people, whether it’s Donald Trump joining Breaking Bad or Mark Zuckerberg saying things he never actually said. But since the tool doesn’t also come with fake audio, there shouldn’t be any risk of nefarious usage — yet.

Some will feel enchanted by Deep Nostalgia, while others will undoubtedly be creeped out. But regardless of how the products of this AI make you feel, I think we can all agree that the technology behind them is damn impressive.

Satire from South Park creators shows how eerily real deepfakes already are

The year is 2020 and Fred Sassy is a reporter for the Cheyenne News at 9, a local TV station in Cheyenne, Wyoming. Fred is looking out for the consumer, and this week, he’s uncovering the truth about deepfake videos. Except Fred is Donald Trump in a cheap costume and a wig.

Fred is himself a deepfake, produced by Trey Parker and Matt Stone, the creators of “South Park”. Welcome to 2020, where everything can be faked.

Deepwhat?

Deepfakes are sophisticated forms of image or video forgery in which the actor’s appearance is changed to resemble someone else. It’s a form of synthetic media with serious implications for the future. Just think how much we’re dealing with fake news — and that’s in written form; what if the next generation comes in audio or video format? The scariest part is that this technology is already here.

To see just how real deepfakes can be, you need look no further than the viral video “Sassy Justice”. Fred Sassy, the spitting image of President Trump, is here to tell you all about it. See, I could ramble on for a thousand words about the dangers of deepfakes and how experts have been sounding the alarm for years, but in true South Park fashion, this video does a way better job at it by just showing the dangers.

Sassy interviews the likes of Al Gore, Julie Andrews, and Michael Caine; there’s an unscrupulous Mark Zuckerberg running a shady dialysis center, a puppet Tom Cruise, and an eerie child-version of Jared Kushner: all deepfakes, of course.

It’s all so confusing it actually does a perfect job at conveying the desired message.

The child-like version of Jared Kushner is played by Betty, the 7-year-old daughter of Peter Serafinowicz, a voice actor who worked with Stone and Parker on the project.

See, this is the thing about deepfakes: they don’t necessarily need to convince people that someone said something; sowing confusion about whether they said it is enough. It’s South Park energy applied to a very scary technology.

“Before the big scary thing of coronavirus showed up, everyone was so afraid of deepfakes,” Stone said in an interview for the New York Times. “We just wanted to make fun of it because it makes it less scary.”

“It really is this new form of animation for people like us, who like to construct things on a shot-by-shot level and have control over every single actor and voice. It’s a perfect medium for us,” Parker added for NYT.

Deepfake Zuckerberg, making an honest(?) living.

For the artists, it was a way to immerse themselves in the technology and maybe even launch a new venture (they started a new studio and spent “millions” of dollars to make the video).

At the same time, it’s a reminder that deepfakes are here, and they’re probably here to stay. The next ones might not be as lighthearted as this one.

India’s first political deepfake during elections is deeply concerning

Deepfakes are AI-generated fake videos depicting individuals who, in reality, never appeared in the staged scene. This 21st-century “Photoshop” has the potential to greatly manipulate public opinion. This is evident in two party-approved deepfake videos featuring Bharatiya Janata Party (BJP) member Manoj Tiwari criticizing the incumbent Delhi government of Arvind Kejriwal. It’s the first time a deepfake video designed for political motives has been identified in India — and it won’t be the last.

Credit: YouTube.

Officials from BJP partnered with political communications firm The Ideaz Factory to employ deepfakes in order to reach different linguistic voter bases.

Although the official languages in India are Hindi and English, there are actually 22 major languages in India, written in 13 different scripts, with over 720 dialects. In a country of 1.3 billion people, politicians cannot ignore voters who exclusively speak another dialect.

In the original video, Manoj Tiwari made a brief political statement accusing the current Delhi leadership of making false promises to their electorate.

The original was then deepfaked in English and Haryanvi, a popular Hindi dialect spoken in Delhi.

According to Vice, the two deepfakes were shared across 5,800 WhatsApp groups in the Delhi region, reaching around 15 million people.

Deepfakes are the reason why you can see Obama calling Trump a “complete dipshit” or Mark Zuckerberg bragging about having “total control of billions of people’s stolen data”. These statements were never made in reality, but they show the tremendous power modern machine learning algorithms have to spread fake news. Imagine someone putting words in your mouth and making it all seem eerily genuine.

According to Deeptrace, an AI firm, there were over 15,000 deepfake videos online in September 2019, double the number from just nine months earlier. A staggering 96% of them are porn deepfakes that map the faces of female celebrities onto porn stars. Then there are deepfakes made as spoofs or satire. And, of course, there are also deepfakes used for political reasons.

In this particular case, the deepfakes were approved by Manoj Tiwari’s party to serve as a sort of high-tech dubbing, in which the speaker’s lips and facial expressions are synced with new audio uttering words Tiwari had never spoken.

This, in itself, might sound somewhat innocent. However, where can you draw the line between what’s nefarious and what isn’t when weaponizing deepfake tech during political campaigns and elections?

In the future, as deepfakes become ever harder to spot, the danger they pose to democracy and journalism cannot be overstated. A lie can travel halfway around the world while the truth is still putting on its shoes. By the time a deepfake is exposed as a ruse, many people will have already formed an opinion based on it.

Expect troubled times ahead, especially as the high-stakes US presidential election in November approaches. The solution? Social networks need to keep up and employ equally powerful AI to filter and flag potential deepfakes, but that’s easier said than done. What’s truly worrisome is that many of these deepfake algorithms are freely available online, and people who don’t even know how to code can easily use them to make their own fake videos.

Ultimately, people need to be aware that these things exist and should become more skeptical of what they come across online.

Tech experts band together to issue $10 million challenge against deepfakes

A newly-announced challenge is offering a total of $10 million in prizes to those who can create reliable deepfake-spotting software.

Image via Pixabay.

It’s a good approach in life not to believe everything you see — but it’s a vital skill on today’s Internet. In response to the dangers posed by deepfakes, realistic AI-generated videos of people doing and saying fictional things, a group of technology firms and academics have banded together to find a solution.

The group, which also includes Facebook, announced Tuesday that they’re launching a public race to develop technology for detecting deepfakes.

Fake videos, real prizes

“The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer,” said Facebook chief technical officer Mike Schroepfer.

In total, Facebook is dedicating $10 million to the program. The challenge, called the Deepfake Detection Challenge (DFDC), will have a leaderboard and prizes, which will be given out “to spur the industry to create new ways of detecting and preventing media manipulated via AI from being used to mislead others,” Schroepfer explains.

Microsoft and the Partnership on AI have also thrown their weight behind the initiative. The Partnership on AI is an industry-backed group whose mission is to promote beneficial uses of artificial intelligence. It includes members from the Massachusetts Institute of Technology, Cornell University, the University of Oxford, the University of California, Berkeley, the University of Maryland, and the University at Albany, and it is backed by Apple, Amazon, IBM, and other tech firms and non-governmental organizations.

All in all, the DFDC is likely the single most significant move ever taken against the dissemination of altered video and audio material intended to misinform public discourse. It’s also the first project on the subject of media integrity started by Partnership on AI.

Deepfakes “have significant, global implications for the legitimacy of information online, the quality of public discourse, the safeguarding of human rights and civil liberties, and the health of democratic institutions,” explains the executive director of the Partnership on AI, Terah Lyons.

Facebook said the funds it put up for grabs will go towards research collaborations and prizes for the challenge. Facebook itself will also enter the competition, but not accept any of the prize money. According to the DFDC website, the challenge will run throughout 2020. A winner will be selected using “a test mechanism that enables teams to score the effectiveness of their models, against one or more black-box test sets from our founding partners.”
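Facebook hasn’t spelled out the scoring details here, but detection challenges of this kind are typically ranked by binary log loss over the held-out test set; the self-contained sketch below shows how such a score would be computed (the labels and predictions are made up for illustration).

```python
import numpy as np

def log_loss(y_true: np.ndarray, y_pred: np.ndarray,
             eps: float = 1e-15) -> float:
    # Clip so a single overconfident wrong answer isn't penalized infinitely.
    p = np.clip(y_pred, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

# Labels: 1 = deepfake, 0 = genuine. Scores: a detector's probabilities.
labels = np.array([1, 0, 1, 1, 0])
scores = np.array([0.90, 0.20, 0.65, 0.80, 0.10])
print(f"log loss: {log_loss(labels, scores):.4f}")   # lower is better
```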

Ridiculous DeepFake video of Mark Zuckerberg stretches Facebook’s fake news policies to the limit

You’re browsing through Facebook’s newsfeed when you come across a recording of Mark Zuckerberg, none other than the social network’s founder, giving an outrageous speech on national television. “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures,” Zuckerberg says in the video. “I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future.”

https://www.instagram.com/p/ByaVigGFP2U/?utm_source=ig_embed

Except it wasn’t Zuckerberg that said any of that. It’s all a hoax. It’s part of a recent genre of AI-driven technology called ‘deepfakes’. This particular video, which was uploaded to Instagram, was produced by a partnership between a pair of artists and the advertising company Canny. While meant as a demonstration, similar deepfakes can be a lot more sinister and damaging.

Recently, a doctored video of House Speaker Nancy Pelosi, slowed down to make her speech seem slurred, made the rounds on social media and caused an uproar. Instead of removing the fake video, Facebook chose to de-prioritize it, meaning it showed up less frequently in users’ feeds. It also displayed third-party fact-checker information, just as you’d see on a fake news story shared on Facebook.

But will the social network take a more radical stand now that its own brand can be directly affected by such impersonations? When the Pelosi fake surfaced on the platform, Neil Potts, Facebook’s director of public policy, said that a similar video of Zuckerberg would be treated no differently. Now that this hypothetical has turned into reality, it remains to be seen how Facebook will react.

Deepfake tech is seriously creepy as well as dangerous. It’s been used to attribute fake statements to politicians like Barack Obama and Vladimir Putin, or to swap Nicolas Cage’s face onto characters from famous movies he never starred in. Similar tech was used to swap the faces of celebrities into porn.

It’s been getting ridiculously easy to create these, too. Previously, researchers made deepfakes starting from a single image or painting, bringing to life portraits of Einstein, Dalí, and even the Mona Lisa. Elsewhere, a recent collaboration between Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research produced software that uses machine learning to let users edit the text transcript of a video to add, delete, or change the words coming right out of somebody’s mouth.

The Zuckerberg video was created much like any other deepfake. It employed an algorithm developed at the University of Washington by the same people who made the Obama fake videos. Canny also sprinkled in some code inspired by Stanford’s Face2Face program, which enables real-time facial reenactment. The algorithm was then trained on short scenes featuring the target face, lasting no more than 45 seconds. A voice actor’s recording was then used to reconstruct frames in the fake video, showing Zuckerberg making statements he never actually voiced.

Canny used the same software to make similar deepfakes of Kim Kardashian and Donald Trump, which were showcased at Spectre, an exhibition that took place as part of the Sheffield Doc Fest in the UK.

https://www.instagram.com/p/ByPhCKuF22h/

https://www.instagram.com/p/ByKg-uKlP4C/?utm_source=ig_embed

If you pay close attention, it’s relatively easy to spot these as fakes. The voices are particularly unnatural and synthetic, but with some improvements, they could become indistinguishable from those of real people. Just a few weeks ago, someone made an AI that sounds just like Joe Rogan. Seriously, listen to the production and be prepared for one heck of a trip.

In the years to come, deepfakes will only get better and easier to make. If you thought fake news was bad, wait until you see this ungodly technology released into the wild.

This AI sounds just like Joe Rogan — and the possibilities are disturbing

The world better brace itself for an era of deepfakes.

Lately, we’ve seen that you can’t always trust what you see; now, you shouldn’t trust your ears either. Until recently, artificial voices have sounded robotic and metallic, but an AI startup has published an incredibly realistic fake voice, mimicking famous podcaster and announcer Joe Rogan.

Rogan’s voice was re-created to talk about a chimp hockey team and the advantages of being a robot, topics which, while not out of the realm of what Rogan might discuss, have never actually been addressed by the podcaster. Dessa, the company behind the new voice algorithm, says that the implications of this are massive.

“Clearly, the societal implications for technologies like speech synthesis are massive,” Dessa writes. “And the implications will affect everyone. Poor consumers and rich consumers. Enterprises and governments.”

The consequences can be both positive and negative. Just think about the possibility of offering realistic synthetic voices to people with speech impairments, or the revolution that can happen in audiobooks and dubbing. However, at the same time, the possibility for fraud is also very concerning. You think “fake news” is a problem now? Wait ’til something like this hits the shelves.

Understandably, Dessa has not released any details about how its AI works and will not be publishing the results in a scientific journal; the possibility for malicious use of the technology is simply too great. However, with over 1,300 episodes of the Joe Rogan podcast to draw on, the AI sure had a lot of material to train on.

It remains to be seen just how useful or dangerous this technology will be. So far, although deepfakes emerged quite a while ago, they’ve yet to make a real impact on the world, leading many to believe that the fears and concerns are overblown. However, if past technology cycles have taught us anything, it’s that new technologies take a while to be widely adopted, so these issues may yet emerge in the not-too-distant future.

If deepfakes actually take off, hearing AI-Joe-Rogan saying that chimps will rip your balls off will be the least of our concerns.