How Archivists Could Stop Deepfakes From Rewriting History

Excerpt from this article:

Imagine, however, if experts couldn’t readily identify the diaries as fraudulent. And imagine, also, if forgers were able to create and distribute ultra-realistic fake Nazi records at breakneck speed. Finally, imagine if some of these documents were forever preserved as authentic pieces of Nazi history. A threat like this is edging increasingly out of the hypothetical and into the real as the tools to quickly create realistic manipulated videos go mainstream. These videos, which use machine learning to graft one person’s face onto the body of another, are known as deepfakes—and they’re getting disturbingly good.

While many have feared the potential of deepfakes to spread misinformation in the here and now, these videos could distort reality long after today’s fake news goes viral if they’re improperly archived as legitimate. Gizmodo spoke with several historians, archivists, and professors who were familiar with the deepfakes phenomenon, some of whom had pragmatic concerns about it. Fortunately, archivists have rigorously established principles meant to catch forgeries and screw-ups, but these protections are only as strong as the institutions that provide them.


Oprah, Is That You? On Social Media, the Answer Is Often No.

Excerpt from this article:

The issue of fake social media accounts masquerading as public figures is acute. Facebook, Instagram and Twitter teem with accounts that mimic ordinary people to spread propaganda or to be sold as followers to those who want to appear more influential. But millions of the phony profiles pose specifically as actors, singers, politicians and other well-known figures to broadcast falsehoods, cheat people out of money — or worse. Last year, Australian authorities charged a 42-year-old man with more than 900 child sex offenses for impersonating Justin Bieber on Facebook and other sites to solicit nude photos from minors.

The sheer volume of social media impostors poses a challenge to even the wealthiest celebrities. In a video last year, Oprah Winfrey warned her Twitter followers that “somebody out there is trying to scam you using my name and my avatar on social media, asking for money.”

The Reality of Twitter Puffery. Or Why Does Everyone Now Hate Bots?

Excerpt from this article:

A friend of mine worked for an online dating company whose audience was predominantly hetero 30-somethings. At some point, they realized that a large number of the “female” accounts were actually bait for porn sites and 1-900 numbers. I don’t remember if users complained or if they found it themselves, but they concluded that they needed to get rid of these fake profiles. So they did.

And then their numbers started dropping. And dropping. And dropping.

Trying to understand why, the company sent in researchers. What they learned was that hot men had been attracted to the site because there were women there that they felt were out of their league.

Why am I telling you this story? Fake accounts and bots on social media are not new. Yet, in the last couple of weeks, there’s been newfound hysteria around Twitter bots and fake accounts. I find it deeply problematic that folks are saying that having fake followers is inauthentic. This is like saying that makeup is inauthentic. What is really going on here?

The Follower Factory

Jessica Rychly, whose social identity was stolen by a Twitter bot when she was in high school. 

Excerpt from this article:

All these accounts belong to customers of an obscure American company named Devumi that has collected millions of dollars in a shadowy global marketplace for social media fraud. Devumi sells Twitter followers and retweets to celebrities, businesses and anyone who wants to appear more popular or exert influence online. Drawing on an estimated stock of at least 3.5 million automated accounts, each sold many times over, the company has provided customers with more than 200 million Twitter followers, a New York Times investigation found.

The accounts that most resemble real people, like Ms. Rychly, reveal a kind of large-scale social identity theft. At least 55,000 of the accounts use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors, according to a Times data analysis.

The Times reviewed business and court records showing that Devumi has more than 200,000 customers, including reality television stars, professional athletes, comedians, TED speakers, pastors and models. In most cases, the records show, they purchased their own followers. In others, their employees, agents, public relations companies, family members or friends did the buying. For just pennies each — sometimes even less — Devumi offers Twitter followers, views on YouTube, plays on SoundCloud, the music-hosting site, and endorsements on LinkedIn, the professional-networking site.

Several Devumi customers acknowledged that they bought bots because their careers had come to depend, in part, on the appearance of social media influence. “No one will take you seriously if you don’t have a noteworthy presence,” said Jason Schenker, an economist who specializes in economic forecasting and has purchased at least 260,000 followers.

How an A.I. ‘Cat-and-Mouse Game’ Generates Believable Fake Photos

Excerpt from this article:

At a lab in Finland, a small team of Nvidia researchers recently built a system that can analyze thousands of (real) celebrity snapshots, recognize common patterns, and create new images that look much the same — but are still a little different. The system can also generate realistic images of horses, buses, bicycles, plants and many other common objects.

The project is part of a vast and varied effort to build technology that can automatically generate convincing images — or alter existing images in equally convincing ways. The hope is that this technology can significantly accelerate and improve the creation of computer interfaces, games, movies and other media, eventually allowing software to create realistic imagery in moments rather than the hours — if not days — it can now take human developers.

In recent years, thanks to a breed of algorithm that can learn tasks by analyzing vast amounts of data, companies like Google and Facebook have built systems that can recognize faces and common objects with an accuracy that rivals the human eye. Now, these and other companies, alongside many of the world’s top academic A.I. labs, are using similar methods to both recognize and create.

Nvidia’s images can’t match the resolution of images produced by a top-of-the-line camera, but when viewed on even the largest smartphones, they are sharp, detailed, and, in many cases, remarkably convincing.
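The “cat-and-mouse game” in the headline refers to generative adversarial networks (GANs): a generator network learns to produce fakes while a discriminator network simultaneously learns to tell real data from the generator’s output, each improving against the other. Nvidia’s actual system is far more elaborate, but the adversarial loop can be sketched on one-dimensional toy data. Everything below — the Gaussian “real” distribution, the affine generator, the logistic discriminator, the learning rate — is an illustrative assumption for the sketch, not Nvidia’s implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: a 1-D Gaussian.
def real_samples(n):
    return rng.normal(loc=4.0, scale=1.25, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: noise z -> sample, via a learnable affine map g_w*z + g_b.
g_w, g_b = 0.1, 0.0
# Discriminator: logistic regression estimating P(sample is real).
d_w, d_b = 0.1, 0.0
lr = 0.01

for step in range(5000):
    z = rng.normal(size=(64, 1))
    fake = g_w * z + g_b
    real = real_samples(64)

    # Discriminator ("cat"): push D(real) toward 1, D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_w * x + d_b)
        grad = p - label                     # d(cross-entropy)/d(logit)
        d_w -= lr * float(np.mean(grad * x))
        d_b -= lr * float(np.mean(grad))

    # Generator ("mouse"): push D(fake) toward 1, i.e. fool the critic.
    fake = g_w * z + g_b
    p = sigmoid(d_w * fake + d_b)
    grad = (p - 1.0) * d_w                   # chain rule through D's logit
    g_w -= lr * float(np.mean(grad * z))
    g_b -= lr * float(np.mean(grad))

# After training, the generator's output should have drifted from its
# starting distribution (mean near 0) toward the real data (mean 4.0).
final_mean = float(np.mean(g_w * rng.normal(size=(5000, 1)) + g_b))
print(f"generated mean: {final_mean:.2f} (real mean: 4.0)")
```

The same push-and-pull, scaled up from one number to millions of pixels and from affine maps to deep convolutional networks, is what lets the systems described above generate faces, horses, and buses that neither network has ever seen.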

The viral story of Taiwan Jones, who learned he failed his midterms on Twitter, doesn’t add up

Excerpt from this article:

In other words, the “Taiwan Jones” account that went super viral was very likely changed from a previous Twitter handle to match that of the student described in the midterm tweet. It’s a well known, relatively easy trick that shows up again and again in dubious viral Twitter moments. It also works pretty well, as the hundreds of thousands of retweets on the “Taiwan Jones” reply show.