Royal photo fiasco shows why no one believes what they see anymore

By Molly Roberts

March 12, 2024


Where’s Kate? The question has turned in recent months into a hashtaggable conspiracy theory, centered around the “disappearance” of Catherine, Princess of Wales, following what Kensington Palace has described as abdominal surgery. This past weekend, the world got an answer: Right here, smiling with her three children — except, hold on, there’s something weird going on with their hands.

A photo of Prince William’s happy family, released by the palace presumably to placate the speculators, ended up doing exactly the opposite as oddity upon oddity was discerned. The incident holds plenty of lessons about how the age of artificial intelligence has given us reason not to believe what we see.

The picture, admittedly, is all wrong. Look at Kate’s absent wedding ring. Look at Prince Louis’s warped fingers. Look at the misaligned sleeve of Princess Charlotte’s sweater (sorry, jumper). The conspiracy theorists, already asserting everything from plastic surgery to divorce to death, suddenly had fodder on Sunday for a whole new round of theorizing. Was the lost princess’s face added to the image after the fact, perhaps culled from an old Vogue cover? Was the photo an old one, recolored and furnished with a fresh background to fool viewers? Most tantalizing, was the whole thing a work of artificial intelligence — capable these days of dreaming up all sorts of imagined scenes, many of them notoriously adorned with anatomically dubious appendages?

The Associated Press, after initially distributing the photo, issued a “kill notification” to participating media outlets — alerting journalists that “at closer inspection it appears the source has manipulated the image.” The source herself only offered an explanation the following day: “Like many amateur photographers,” the princess wrote in a statement, “I do occasionally experiment with editing.” Ah, yes, who among us does not add an extra row of teeth to our eldest child’s mouth before posting on Instagram?

The truth is probably neither as innocent as the royals contend nor as sinister as the public claims. Maybe Catherine herself doesn’t spend her days idly airbrushing on Adobe Photoshop, as she suggests, but it seems that in this case someone did: The BBC reports that analysis of the image’s metadata shows the shot was snapped with a digital camera, then saved twice on the software. Teenage touch-up artists have been using the tool to obscure pimples in their prom pics for more than a decade now. But these days, Photoshop has even niftier capabilities.
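The kind of metadata analysis the BBC describes is less exotic than it sounds. EXIF data rides inside image files in TIFF format, and editing programs stamp a standard “Software” field (tag 0x0131) whenever they save a file; forensic tools simply read it back out. The sketch below is a toy illustration — not the BBC’s actual tooling, and the Photoshop version string is a made-up example — that builds a minimal TIFF header by hand and recovers that field using only Python’s standard library.

```python
import struct

def build_tiff_with_software(software: str) -> bytes:
    """Build a minimal little-endian TIFF whose sole IFD entry is
    tag 0x0131 ("Software") -- the field that records which program
    last saved the image."""
    value = software.encode("ascii") + b"\x00"
    # Header: byte-order mark "II", magic number 42, offset to first IFD (8)
    header = struct.pack("<2sHI", b"II", 42, 8)
    # Value data sits right after: header (8) + entry count (2)
    # + one 12-byte entry + next-IFD offset (4)
    value_offset = 8 + 2 + 12 + 4
    # Entry: tag, type 2 (ASCII), byte count, offset to value data
    entry = struct.pack("<HHII", 0x0131, 2, len(value), value_offset)
    ifd = struct.pack("<H", 1) + entry + struct.pack("<I", 0)
    return header + ifd + value

def read_software(tiff: bytes):
    """Scan the first IFD for tag 0x0131 and return its ASCII value."""
    byte_order = "<" if tiff[:2] == b"II" else ">"
    (ifd_offset,) = struct.unpack_from(byte_order + "I", tiff, 4)
    (count,) = struct.unpack_from(byte_order + "H", tiff, ifd_offset)
    for i in range(count):
        tag, typ, n, offset = struct.unpack_from(
            byte_order + "HHII", tiff, ifd_offset + 2 + 12 * i)
        if tag == 0x0131 and typ == 2:
            return tiff[offset:offset + n].rstrip(b"\x00").decode("ascii")
    return None

sample = build_tiff_with_software("Adobe Photoshop 25.0 (Windows)")
print(read_software(sample))  # -> Adobe Photoshop 25.0 (Windows)
```

Real images bury this structure inside JPEG APP1 segments alongside timestamps and camera model fields, which is how analysts can tell that a shot was taken on a digital camera and then saved twice from editing software.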

While AI can easily conjure something that never existed at all, it can also alter reality an eensy, weensy bit, so the product is neither totally real nor totally fake. Smooth your skin; add some attractive foliage; remove that random guy with the ugly hat visible over grandma’s shoulder. “Perfect your memories,” promises Google’s Magic Editor. This is tempting. You can use a simple slider in Photoshop’s Smart Portrait to make everyone look at the camera. Better yet, you can make them look younger or older. Best of all, you can make them look happier. Imagine the princess turning this last one up to 11. Under that kind of pressure, perhaps you would, too.

People might not always realize they’re relying on AI to pull off these tricks; it feels, after all, just like clicking any old button. But when they do rely on it, they invite those hallmarks of manipulation, like messed-up hands or incomprehensible text (“Encherining entertainment,” read promotional images for a scammy Willy Wonka-inspired event in Glasgow that went viral last month. “Exarserdray lollipops, a pasadise of sweet teats”), along the way. And, of course, they invite confusion, too.

The trouble these AI tools introduce to society is similar to the problem of so-called fake news. Progressives seized on the phrase to describe bona fide foreign disinformation, but conservatives eventually appropriated it to discredit accurate reports they didn’t like, or, for that matter, opinion pieces. We’re already developing a reflex to question any photo or video we see online, because we have to. But that means it’s not only easier in the age of AI to create fake images (which it is); it’s easier also to claim images are fake even when they’re not. Something like a happiness slider or a “best take” mashing all your mediocre attempts at taking a cute selfie into one actually cute selfie — something whose product is real but not real, fake but not fake — illustrates both of these problems at once.

These changes come as a boon to conspiracy theorists, who love to point out coincidences, inconsistencies and small-bore falsehoods that show up in everyday life as evidence of vast and dastardly plots. Catherine edited a photo, maybe a little, maybe a lot, and apparently, this is evidence that, next time we see her, we’ll be seeing a body double instead. The only solution to the conundrum would be for everyone to be completely and utterly real — especially people with power and prominence, and especially princesses. Unfortunately, they’re probably the least likely to do it.



© Washington Post
