Interest in AI is at fever pitch, thanks to ChatGPT, Dall-E 2, Google AI, Meta AI and Bing AI, among others. Promises are plenty, stumbles are frequent, yet anticipation for where it will all land remains high. For an industry that fetishises speed, scale, efficiency and automation, what’s not to like about AI? It helps us do so much, so quickly. But have we considered the weighty topic of ethical AI in marketing? In other words, what are the ethics of using AI in what we do, and what do they mean for our industry and for us as practitioners? Before we rush with unbridled zeal to play with the latest shiny new toy, let’s take a moment to consider some thorny questions. Doing so may inform how we use AI in marketing.
TBH, marketers have been using AI for a while now. But its potential has recently become exponentially seductive in view of what OpenAI has introduced to the market. In the months since, we’ve seen an encroaching commodification, commercialisation and widespread adoption of AI, and all this is making issues of ethics and potential misuse even more glaring. So if you are already using this technology or considering widening your use of it, this article’s for you.
Applications of AI in marketing
So how are we using AI in marketing now? Communication of information, for one. AI plays an increasingly important role in the processing, curation and provision of content across news, social media and search engines. These systems curate based on our search history, the kind of content we interact with, who we interact with, and our professed interests. AI drives the engine behind search marketing and recommendation algorithms (the ubiquitous “You may also like” and “Because you watched…”), and it is also why marketers are obsessed with keyword tagging and targeting.
Another common use of AI in marketing, particularly among creatives, is in stock image/footage/music search. AI has progressed to the stage where you can simply drag a piece of content into the search box and the engine will surface similar content of interest to you. This has certainly helped speed up workflow and the creative process. And for creatives perpetually struggling with deadlines, it’s like manna from heaven.
Adobe has in recent years also endeared itself to the creative community with Adobe Sensei, which uses AI and machine learning to simplify and automate creative workflows. The tool makes manipulating videos and photos much easier. Erasing unwanted elements from multimedia can be done with a couple of clicks now; achieving photorealistic effects is just as breezy. Beyond creatives, marketers use Sensei (among other martech platforms) to zero in on customer data to predict behaviour, target more precisely, and then deliver personalised content. All in the name of providing customers with a better brand experience. Generative AI allows them to do so at scale.
And speaking of generative AI…
Marketers these days are losing their minds over ChatGPT, Bard, Dall-E 2, Midjourney and the like. Some are AI chatbots capable of writing blog articles, video scripts, ad copy, white papers and other text-based content, based on prompts and instructions entered by the user; others are AI systems that create images and art from keyword prompts. Similar AI systems that generate videos, music and audio are also now blazing up the charts and into the marketer’s vernacular.
And of course there’s the old chestnut, the chatbot–one of the earliest examples of AI in marketing. The first wave of excitement upon launch a few years ago was soon followed by a wave of disenchantment when customer experience proved subpar. Yet marketers aren’t ones to argue with numbers and speed: there were discernible gains in efficiency and reductions in operational overheads. And through machine learning, neural networks and deep learning, chatbots have since become more sophisticated, and the initial wave of criticism has ebbed. Today, Siri and Alexa are omnipresent, as are chatbots on countless brand websites.
These all sound great. So what’s the fuss over AI ethics?
Lawful use versus ethical use of AI
Ethics goes beyond what is lawful. For example, an AI algorithm that effectively manipulates people into a particular behaviour may be legal but unethical. Alcohol companies collecting the personal data of teens on social media and then targeting those who frequently look up alcohol-related content–exposing them to more alcohol ads to encourage impulse buying–may not be unlawful, but it certainly does not represent ethical AI. In many countries, laws and regulations are insufficient to ensure the ethical use of AI.
In November 2021, the 193 Member States at UNESCO’s General Conference adopted the pioneering Recommendation on the Ethics of Artificial Intelligence. The Recommendation aims not only to protect but also to promote human rights and dignity, serving as an ethical guiding compass for users and practitioners on the responsible use of AI. Yet a recent Pew Research study found that a staggering 68% of the experts surveyed believe that ethical principles focused primarily on the public good will not be employed in most AI systems by 2030. It’s a bleak scenario. What went wrong?
So what is ethical AI in marketing?
Ethical AI considers the full impact of AI usage on all stakeholders within an industry ecosystem. In marketing, we’re talking everyone from consumers, customers and suppliers to employees and society as a whole. It seeks to prevent potentially bad, biased and unethical uses of AI, ensuring usage is fair and responsible, even transparent.
Organisations that apply ethical AI have clearly stated policies and well-defined review processes. Some make it a point to inform users of the use of their data in AI applications. But many don’t. While 45% of companies now define an ethical charter to provide guidelines on AI development, the number of organisations that inform users about the ways in which AI decisions might affect them has been dropping year on year. That makes the use of AI in marketing a territory still fraught with ethical landmines. As an industry, are we doing enough? Besides peddling alcohol to teens, what other examples of unethical uses of AI can you think of? Are you complicit?
Considerations for ethical AI in marketing
Left unchecked, AI can potentially spread disinformation, misinformation, hate speech, deception, discrimination and human abuse. It can limit freedom of expression, stunt the emergence of new societal narratives, infringe upon privacy, erode human rights, and dull media and information literacy, among other issues. How?
1. AI can propagate biases
As a society we are already struggling with issues of identity, equity, equality and representation. While some organisations and brands are making headway in their handling of diversity, equity and inclusion, there’s still so much more marketers can do. There’s so much more ground we have yet to cover. When something like generative AI trawls historical content to surface new content today, it merely reproduces past discrimination and perpetuates historical biases (and let’s face it, there’s plenty). This is how biased AI marketing algorithms grow even more biased. Surely a spiral we cannot be party to.
So instead of asking “Should we use ChatGPT?”, marketers might do well to ask “What do we want to leave as a legacy?”. In your attempts at storytelling, what stories are you inadvertently telling for posterity? Should we let AI do its thing, or should we focus our energies on creating new DEI-positive content for future AI to learn from?
2. AI can groom us for specific behaviours
With the power of AI concentrated in big tech, another question we should ponder is: what does this mean for cultural and social behaviours? Think about a world where the supply of cultural content, data, markets and income is increasingly concentrated in the hands of only a few agents–namely Alphabet (Google), Amazon, Apple, Meta and Microsoft, all dominant players in the field of AI–with potentially negative implications for the diversity and pluralism of languages, media, cultural expressions, participation and equality. If we are already uncomfortable with the new economic order of surveillance capitalism, behaviour control and conditioning by the Big Five (or any company, for that matter), imagine a world where this is exponentially amplified.
3. AI and the intellectual property conundrum
When a platform like Dall-E 2 amalgamates artwork from various sources, it may not know if it’s infringing upon the intellectual property rights of the sources’ original creators, nor does it care. As a user of AI, do you? Fact is, generative AI platforms are currently not rigged to observe the 3 Cs of copyright: Consent, compensation and credit. And the reason copyright law was created in the first place was to offer creators the assurance that the law would protect their interest should anyone attempt to “copy” their work.
So this is where things get murky. AI-generated content may be largely seen as original work, “inspired” by other sources and made somewhat unrecognisable from its original form through a series of manipulations. Most creatives develop artwork from rights-managed content, or commit a style to memory and then mimic or interpret it. But when fragments of the original work form the actual basis of your work (as is often the case with AI, where it’s all 1s and 0s), the question becomes: how much change is enough to satisfy the 3 Cs? As a marketer, where do you stand on this issue? And as we’ve already established, where laws can’t reach, ethics might.
(Tidbit: The US Copyright Office has declared that AI-generated art is not entitled to IP protection. Why? Because it lacks the “nexus between the human mind and creative expression” necessary to invoke copyright protection.)
4. AI can fake it till you buy it
You must’ve heard of deepfakes. We’ve all seen them, laughed over them, been entertained by them, maybe even been fooled by them. A portmanteau of “deep learning” and “fake”, the technique has been much (ab)used in the entertainment and media industries. Bad actors use it to create child sexual abuse material, celebrity sex videos, revenge porn, fake news, hoaxes, bullying campaigns and financial fraud. None of these sound like something a marketer would want to touch with a 10-foot pole. But would you take the likeness of someone without their consent to create a beautiful ad or TVC? Well, some companies already have.
The ethical responsibility of AI users in marketing
So what is the ethical responsibility of AI users in advertising or marketing? How can we be more responsible as a cohort? Well, pondering the ethical dilemmas of AI is a good place to start. Think about where you stand. Think about not just profit-making but also social impact (in a human-centred, DEI-positive way). Prize not just speed and scale but also the integrity of the creative process. Let’s think about what it means to be human, to be productive, to exercise free will. Then think also about transparency and accountability in AI marketing. Understand your company’s policies and stance. If your company doesn’t yet have guidelines, workflows or a charter for AI ethics, develop them.
Even as AI evolves–as I’m sure it will–so should we. Let’s be responsible marketers who continually examine our biases and are conscientious about what we feed the future. Let’s be marketers who make human-robot interactions intentional and meaningful, relating to them as a collaborator rather than a quick salve. (Perhaps a better use of ChatGPT can be for research, education and inspiration rather than content creation, i.e., use it as a starting point rather than an ending one?) Let’s be marketers who consider fundamentally what we do marketing for–(pure) profit? or the betterment of society?
Everyone’s now asking what AI can be. I feel the better question isn’t what AI can be to marketing, but what kind of marketers we want to be.