Using deepfake technology, MIT crafts the Nixon ‘moon disaster’ speech that never happened

The project highlights the growing power of manipulated media to craft false narratives.

Screenshot: the Nixon deepfake.

MIT researchers have used advanced artificial intelligence to rewrite the story of one of mankind’s greatest achievements, asking: What if the Apollo 11 moon landing had failed?

Using archival NASA footage, the team at the Center for Advanced Virtuality made a short film called In Event of Moon Disaster, which presents an alternate timeline where the mission ended in tragedy. It culminates in an eerily convincing Richard Nixon “deepfake” reading the actual backup speech written for the president in case of catastrophe, complete with the fuzzy images and high-pitched hiss characteristic of the era’s CRT televisions.

Deepfakes are realistic manipulated videos that use AI models to simulate a person’s face and voice. They can be used to depict a figure saying something they never actually said.

The video is the centerpiece of an interactive website published Monday, the 51st anniversary of the moon landing. It includes a behind-the-scenes look at the project and articles on the implementation and consequences of deepfake technology. The short film screened at festivals and in a touring exhibition last fall, but this week’s digital release is the first time the whole video has been available online.

The producers filmed an actor reading the contingency speech and mapped his facial movements onto footage of Nixon’s resignation; they chose that clip as the target because it had “the right serious and somber tone.” The actor wasn’t doing an impression, either: his voice was converted into “Nixon’s” by a separate AI model trained on hundreds of samples from the president’s Vietnam speeches.

The implications of deepfakes are alarming, but producing a convincing fabrication isn’t that easy yet: it took the MIT team three months to make the video. Still, the technology continues to improve, and convincing deepfakes will only become easier to craft over time.

“By making [a deepfake] ourselves, we wanted to show viewers how advanced this technology has become, as well as help them guard against the more sophisticated deepfakes that will no doubt circulate in the near future,” the project’s website states. “If someone who experiences our video later recalls the believability of our piece and, as a result, turns a more critical eye on videos they see pop up in their Facebook feed, we will have been successful.”

The MIT team partnered with Scientific American to produce a half-hour documentary on the project, which was also published Monday.

If you want to test your own ability to spot deepfakes, the MIT Media Lab published a project called Detect Fakes earlier this year. For some help, an accompanying press release offers hints on which video artifacts to look for.
