BY KRISTEN GRIMM

My prediction: Without clear ethics-based guardrails, it is much more likely that artificial intelligence (AI) will be used for self-interest or, worse, nefarious purposes.

Here’s why.

As someone who has worked in public interest communication for three decades, I view the era of AI-generated communication as a mixed bag. It is going to enable a student who wants law enforcement officers out of their schools to create an Oscar-worthy documentary that just may get the job done. I can also see how turning AI loose on measuring internet speeds, pollution rates or lending rates across different neighborhoods will generate reports in record time that expose inequities we can fix. We have already seen how, using deepfake AI technology, soccer legend David Beckham delivered a public service announcement about malaria that was translated into nine languages.

When we look at how people are tapping AI, using it as a force for good may be the exception rather than the rule. Countless commentators talk about how AI will change communication. I don’t disagree. They talk about moving from predictive AI to generative AI. That basically means that instead of using AI to “surveil” (their word, not mine) a brand or issue, people can now use it to generate communications, often called “synthetic media.” These same commentators focus on how AI will save time and be more cost-effective. Notably, these experts rarely mention AI leading to communication that is more accurate or more equitable. That gives me pause.

AI is trained to find patterns. “So?” you may be thinking. Here’s how that could play out for a communication professional whose job often requires motivating people. Say I am working to enhance teen girls’ self-esteem and leadership skills through exercise. Studies show that rates of exercise decline steadily among middle school girls and become seriously problematic by the time girls reach high school, with negative effects on their grades and self-esteem compared to boys of the same age. I ask my trusty Bard or ChatGPT to give me some statistics that will motivate teen girls to exercise. Off it goes and comes back in seconds, having scraped Instagram, Snapchat, Twitter, Facebook, YouTube and who knows where else: Boys don’t want to date larger-bodied girls, and the AI provides a bunch of statistics and “expert” quotes to back that up. Without considering the accuracy of what the AI has turned up, I draft a press pitch featuring that claim.

I now use AI to find journalists who are interested in these types of pitches. It spits back a bunch of journalists who cover dating and teen lifestyle topics. I pick the most popular ones and direct message them on Twitter. A journalist picks up my AI-researched pitch, adds a provocative headline to attract clicks, likes and forwards, and posts it. It is wildly popular. The editor Slacks the journalist and says: “More like that.” We have just exposed perhaps millions of girls to information that will negatively impact their body image, promote anti-fatness and contribute to body-size discrimination.

Let’s back up to see where we might have been stopped. At any point in this chain, did the AI-generated content come with a warning about the mental health hazards of body shaming teen girls? Did I as a communication professional consider the downsides of the generated content and its implications? Did the journalist consider what promoting this research might lead to, or did the editor?

And that brings me to an important point: AI is not a technology with agency. It doesn’t know right from wrong. It will crowdsource the truth. It has no values. AI is built by people who have biases, and it will replicate the human bias from which it is learning. That means who designs it and the data it’s given are immensely important.

AI will be used by people who do have agency and values, and we need to use both to mitigate the harms that unchaperoned AI can cause. We need to keep AI’s limitations and risks top of mind and ask what we can do to address them. When considering generative AI, ask: Who created the data from which the content is generated, and why? Who was represented, and not represented, in that data? Did the AI pull from trusted, accurate sources? What context is that data missing? How will the information as presented impact people? What harms might it reinforce? Disinformation campaigns are nothing new; neither is cynical, one-sided messaging or the practice of leveraging harmful frameworks for gain. This technology is a supercharger for our instincts, good and bad.

The example of getting teenage girls to exercise falls under the self-interested category. I want young girls to grow up with high self-esteem and strong leadership skills. I understand that exercise is an important opportunity for girls to learn those skills. At the same time, journalists and editors want to generate stories of interest to their readers. AI might seem at first glance like an easy way to achieve both goals. But more often, it can be used to design communication that sows public distrust, increases polarization and exacerbates inequities rather than minimizes them.

If we use AI to generate reports to decide how to lower crime, evidence shows that we will create public safety programs that are likely to discriminate against Black people. Studies have shown that computer programs like the risk scores used to assess future criminal activity are biased against Black people. It all comes down to what data goes into the systems where AI and others look for patterns. Black people are more likely to be stopped and surveilled in their communities, and that data is fed into systems where AI “learns” patterns. If AI is then asked to generate reports based on that flawed data, we will compound problems like racial discrimination.

And these reports can get traction. People can also use AI-generated synthetic media to weaponize the public’s response to rising crime rates. Journalists and policymakers who rely on AI to generate information will get the skewed data described above, package it and amplify it. Suddenly, a group of people is held responsible for rising crime rates based on flawed data. That leads to overpolicing, and the cycle continues.

And let’s be clear: Some bad actors will use AI to write persuasive content to move disinformation and propaganda at the speed of hitting enter. Stanford has tested this, showing that people can use AI to generate content that consumers find credible. And the bad actors, like those trying to undermine elections or make conspiracy theories popular, know that. Some will flood social media and exploit search engine optimization to make lies and garbage seem popular (via bots, each using generative AI to say the same thing in ways that are just different enough to get past screening). AI programs designed to promote whatever is getting attention will then see that this content is popular and amplify it, making it more popular still. Generative AI draws on the garbage, not knowing it is inaccurate, to write press releases, statements, social media posts and more. All of it gets attention. Bots that journalists use while researching the issue find it and turn it into first drafts of stories that get lightly edited and fed back into the machine. The machine loves it and regurgitates it again and again. People believe information when they see it over and over again, even when they know it is wrong. Bad actors know that and exploit it.

If we want to use AI for its upsides and guard against its downsides, then many someones will need to be charged with that oversight. And that means not just looking at what is generated but also at the data fed into the systems AI will use to generate content. If I ask the AI that is generating content for me, “What are the downsides of the content you generated?” I won’t get an answer. It can’t regulate itself or provide ethical oversight. It can’t weigh the benefits and burdens of the information it generated.

But unless and until there is regulation or accountability of some kind, we can count on this: Bad actors will use AI for bad purposes. Marketers exclusively focused on the bottom line will use it for that purpose. Expect it. Big Tech companies that are developing and pushing this technology must be accountable first and foremost for the harms, especially as they seek to celebrate AI’s capabilities. But we know regulation is a broad brush, and bad actors will try to take advantage of gaps and the quickly changing nature of technologies. So, we also need detection and verification to be universally accessible and usable. And we need civil rights defenders to build strong connections with tech advocates who can help identify threats before they become an issue. For professionals dedicated to using communication to advance the public interest, that means we need to hold a line. We need to ask ourselves when using synthetic media: Does the information being generated lead to more trust, or does it lead to manipulation, which makes people less trusting of news, science and facts? And we need to keep on top of the ongoing conversations around this. We can’t pretend we didn’t know there were issues to consider.

AI is great at seeking patterns and generating answers. Maybe we should ask it: What oversight and ethical framework would keep AI advancing the public interest rather than undermining it? But let’s fact-check its answer.


  • Kristen Grimm is the founder and president of Spitfire Strategies, a strategic communication and campaign firm that advances racial, economic and social justice, protects the environment and promotes opportunity for all. Spitfire’s fundamental values are rooted in one core principle: everyone belongs and has the power to spark change.


Image by stokkete on Envato Elements