Call Me Ethical


When I was in elementary school, there was a funny video of then-President Obama circulating online. Someone on YouTube had taken words the president said in his speeches and compiled them to sound like he was singing Carly Rae Jepsen’s “Call Me Maybe.”

This video stayed in my mind for a long time. How funny! Here was the President of the United States singing a pop song. And yes, it was obviously pieced together and ridiculously fake. But it was all in good fun. No one would watch it and think it was believable.

But by 2017, technology had become more advanced. According to an article by Hany Farid at Quartz, researchers at the University of Washington were able to use A.I. to create videos of the former president saying things that he didn’t actually say. Farid included one such video, created by the BBC, in his article.

Unlike the “Call Me Maybe” mashup, the University of Washington’s video is far more believable – and far more dangerous. And this was done with a former president. Imagine if someone took the image or the voice of a sitting president or a presidential candidate.

Which, by the way, has already been done.

Elon Musk came under fire for sharing an AI-generated parody video of presidential candidate Kamala Harris, complete with a voiceover that sounds exactly like the Vice President.

The dangers of AI use in a presidential election are clear: if people can use technology to make videos of either candidate saying things they didn’t actually say, it can change people’s perceptions of those candidates. This type of misleading content can alter political conversations and actions.

And that’s just within politics. Imagine the role that misleading content can have in our daily lives, including in advertising and content strategy.

If It’s Believable, It’s True

The sad reality is that if something looks fairly believable, it can quickly be accepted as true.

According to an article by Hal Conick for the American Marketing Association, The Atlantic ran a native advertisement – basically, an ad that doesn’t look like an ad – from the Church of Scientology on its well-trusted website back in 2013. The native advertisement looked exactly like an article from The Atlantic, misleading everyday readers into thinking it was a rigorous piece of journalism. “Readers expected a piece of reported journalism about the religion,” wrote Conick, “and instead got a laudatory piece of content marketing.”

Screenshot from The Guardian of The Atlantic’s Scientology native advertisement. The only thing that marks the piece as an advertisement is an easily overlooked “Sponsored Content” banner at the top of the article.

In short, readers were fooled. The advertisement looked like a believable article from The Atlantic, a magazine that publishes credible and true articles. Instead, they got a marketing stunt. The pushback was horrendous, and The Atlantic took the ad down in shame.

In the same way that AI is fooling people, native advertising is also misleading them. Let’s be honest about what native advertising really is: a way to make an ad deceptively believable in a world where people just don’t trust advertising.

Do Labels Actually Make a Difference?

Even though the piece from The Atlantic was labeled as “Sponsored Content,” the notice wasn’t noticed. Native advertising is a common practice, however, and it’s safe to assume these labels get overlooked on other pieces as well.

In an article that asks whether native advertising is ethical, Advant Technology lands on a conditional answer: “Our answer to that is, yes, as long as they conform to the necessary regulations ensuring it is clearly labelled.”

In other words, “call me maybe.”

The Scientology piece was labeled, meaning that under this definition and by modern advertising standards it was ethical, even if people were misinformed. Maybe the notice could have been made bigger and more prominent. And yet the fact that it could have been larger doesn’t mean the advertisement failed to be ethical on paper.

Let’s apply this logic to AI, which is slowly but surely being integrated into advertising. The AI campaign video reshared by Elon Musk was originally labeled as a parody. By that technical definition, the video itself is ethical: people were informed via fine print that it was just for comedic purposes, and whether or not they saw the notice is, on paper, irrelevant.

But society does not run solely on technical definitions and fine print; it also runs on truth. And the truth is that it is not ethical to mislead people, intentionally or unintentionally, even if you briefly say, “by the way, this is an ad” or “this is a comedic parody.” At the end of the day, even if a warning is clear, the intention of this content is to convince the viewer to act on it. Making it look deceptively believable is a vain attempt to make it seem more trustworthy.

Marketing is supposed to build trust between business and consumer, but native advertising can erode the viewer’s trust. Native advertising is a selfish attempt to market in the company’s best interests. And if the purpose of a good content strategy is to provide content that balances the interests of both the company and the audience, doesn’t native advertising serve the company’s interests more than those of its customers?

If a company wants a good content strategy, its content must be clear and trustworthy. I’m not sure native advertising can claim either of those traits.

Advertising Should Be Clear

Native advertising, simply put, is playing with fire. Advertisements are intentionally made to look like genuine journalism so that people trust them more, with fine print slapped on in a somewhat visible spot to make them legally acceptable. And all of this is done in the name of making people trust advertising more.

It’s not a secret that people don’t trust advertising. According to a 2022 article by Marketing Charts, “In the US, trust in advertising practitioners remains rather low.” Steve Olenski, in an article for Forbes, tells advertisers to “Stop with the sponsored posts” on social media because “consumers are not buying it. They see right through it as another attempt to sell them something.” People don’t like ads because they want the truth. Ads are seen as constant lies – how in the world could native advertising and other forms of misleading content counter that?

Advertisers have lost the trust of the people they seek to market to. And the way to build trust with people, in any scenario, is to be honest. Any relationship built on a lie will crumble, and the relationship between company and customer is no exception. The basis of content strategy is to balance the interests of both parties, but native advertising throws that balance off.

In my opinion, the answer to the ethics of native advertising shouldn’t even be a maybe. If AI can affect politics, how much more can native advertising affect basic consumer decisions?

Instead of trying to fool people, we need to build trust. A good content strategy can do this; misleading, try-hard advertising cannot. So call me ethical: native advertising should stop.

I’m Sean Formantes, a graphic designer and content creator for social media. I am a lover of music, art, and coffee.
