Image by Janne_Amunet on Envato Elements

BY PETER LOGE

“[T]he ethical implications of using AI language models in political campaigns will depend on how the technology is used and the specific context in which it is applied.” That insight comes from ChatGPT, in response to a question posed by Jessica Nix, a former student of mine in the School of Media and Public Affairs at The George Washington University. Now a graduate student at Columbia University, Jessica asked me for my take on the ethics of AI in political campaigns. I suggested she ask ChatGPT. It provided a reasonably coherent answer.1

However coherent, the answer was without depth or insight. Generative AI and large language models (LLMs) like ChatGPT do not think or reason.2 They are trained, in other words, to produce “not what is really right, but what is likely to seem right in the eyes of the mass of people who are going to pass judgment: not what is really good or fine but what will seem so…” That last quote of course comes from neither Jessica nor ChatGPT, but from Phaedrus in Plato’s dialogue of the same name. (Plato, Phaedrus and Letters VII and VIII, 1973, p. 71)

On the Political Scene podcast, The New Yorker’s Joshua Rothman said that ChatGPT is “not trying to be right, it’s just trying to be plausible.” (Foggatt, 2023) The problem, as New York Times columnist Zeynep Tufekci noted when she put ChatGPT through its paces, is that “ChatGPT sometimes gave highly plausible answers that were flat-out wrong.” (Tufekci, 2022) To return to Phaedrus, it is plausible that a donkey could be a horse if all you knew was that a horse is a large domestic animal with long ears, but that does not make a donkey something you would want to ride into battle.

What Plato would think of the internet is not a new question. In her 2014 book, Plato at the Googleplex, Rebecca Newberger Goldstein dropped the ancient philosopher into early 21st century America and had him doing a media tour and answering questions about whether all the information to which we have access means we have more knowledge. (Goldstein, 2014) Over the past few months, writers like the New York Times’ Tufekci have asked, “what would Plato think about ChatGPT?”

In Protagoras, Plato argued that the sophists risked poisoning the souls of their students by teaching them eloquence without truth. (Plato, Protagoras, 1948) The sophists, in Plato’s telling, provided the appearance of knowledge, not knowledge itself. For Plato, this was a profoundly dangerous thing to do. ChatGPT cannot present knowledge itself because ChatGPT cannot truly know anything. ChatGPT does not think; it repeats what others have said. And it chooses what to repeat based in part on what has been repeated by others most often. Most of what it draws on is in English and written by white guys, hardly the sum total of all the insights in the world by a long shot. (Weil, 2023) ChatGPT, in other words, is designed to produce only “what is likely to seem right in the eyes of the mass of people who are going to pass judgment.”

ChatGPT flagged an obvious risk here. It told Jessica: “One concern is that the technology could be used to spread false information or propaganda to influence voters. AI language models can generate text that sounds human-like, which could be used to create fake news or fake social media accounts that could spread misinformation.”

In this light, the debate over ChatGPT is not just about ChatGPT. It is a debate about the nature of knowledge and truth, and their relationship to rhetoric and persuasion. These debates have echoed over the centuries. Quintilian wrote that “[t]he orator must above all things devote his attention to the formation of moral character and must acquire a complete knowledge of all that is just and honorable,” and complained about “hack advocates” who used good gifts for bad ends. (Quintilian, 1922, pp. 2.1, 1.25) Much more recently, George Orwell complained that political language makes “lies sound truthful” and is meant to “present the appearance of solidity to pure wind.” (Orwell, (1946) 1981, p. 171) Orwell’s political opposite, the American rhetorical scholar Richard Weaver, wrote that rhetoric appropriately conceived is “truth plus its artful presentation.” A true rhetorician must know the nature of Truth because, as Weaver wrote, “[i]t is impossible to talk about rhetoric as effective expression without having as a term giving intelligibility to the whole discourse, the Good.” (Weaver, 1953, pp. 15, 23) ChatGPT cannot communicate these things because it cannot know the nature of Truth or the Good; it cannot distinguish between the rock and the wind. In a Platonic sense, ChatGPT cannot truly know at all.

All of which is to say, people have been complaining about other people making stuff up that sounds true, but that may have little or no connection to reality, for a very long time. The challenge is that our ability to make up and spread plausible-sounding nonsense keeps getting more efficient and effective.

There are entire industries devoted to making political ideas sound plausible. Lobbyists, advocates, grassroots organizers, speechwriters, social media experts, ghostwriters, press secretaries, strategic communication consultants, and more make their living by making ideas sound as persuasive as possible. Their job is to make their candidate, client or organization “seem right in the eyes of the mass of people who are going to pass judgment.” Some of these people are subject matter experts. There are a lot of very smart, very committed advocates who know their issue inside and out, and who can explain that issue in ways that change laws, attitudes and behaviors. There are many more people who have values to which they are committed, but whose expertise is explaining someone else’s ideas about how to turn those values into policy, rather than having ideas of their own. Washington DC is teeming with would-be Protagorases, Phaedruses, Gorgiases and assorted other sophists, rattling their rhetorical cans to pay the bills.3

ChatGPT is already being thrown into this political communication marketplace. The political industry trade publication Campaigns & Elections has featured two articles by political consultants who tested ChatGPT, both of whom concluded it basically provided entry-level staffer quality work in almost no time and for free. (Delany, 2023) (Arnold, 2023) A California-based political consulting firm features tips on its webpage for campaigns that want to use ChatGPT. (Ford, 2023) A network for advocacy communications professionals has started a spreadsheet where its members can list how they are using ChatGPT in their work.

Once ChatGPT helps candidates get elected, it can help govern - a state legislator in Massachusetts has used ChatGPT to write first drafts of legislation. (Lima, 2023) Legislation attracts advocates. ChatGPT is at the ready. John J. Nay, an AI leader and adjunct professor at Stanford University Law School, and his colleagues recently “demonstrate[d] a proof-of-concept of a large language model conducting corporate lobbying related activities.” (Nay, 2023) These advocates demand responses. ChatGPT is there. On its website, a company called PoliScribe says “...offices use our web-based platform to research legislative issues and surface the right messaging and content instantly.” They brag, “PoliScribe can even compile this content into draft form letters using the Member's own voice. PoliScribe instantly researches the topic, pulls your Member’s vote record and messaging, and weaves together a well-written, meaningful letter.” (PoliScribe, n.d.)

The applications, current, coming and imagined, are dizzying. ChatGPT can seamlessly assemble audio so a candidate or elected official can “say” anything, or even hold conversations with voters, based entirely on clips from old speeches. It can exchange emails and text messages. It can draft policy statements and supporting materials. And on and on. Anything a reasonably well-trained junior staffer can do, ChatGPT can do faster and for free (or nearly free; someone has to make money at some point, but no matter how expensive it becomes, it will cost less than a person, and it won’t complain about working late nights or long weekends).

As dystopian as this all sounds, at its core it is not new. Seen one way, ChatGPT is a souped-up Antiphon.4 Conceptually, ChatGPT is Joe McGinniss’ The Selling of the President taken to its technological extreme. These new concerns are really old concerns, but with emojis.

Of course, the worlds of Plato and Elon Musk are different. This is not Athens, or even the post-World War II world of Orwell and Weaver. Massive nuclear arsenals can destroy all life as we know it. Climate change is already changing our planet, and things could get much worse. We keep finding new ways to be unspeakably cruel to those with whom we disagree or see as dangerously “other.”

In addition to the stakes, other differences between Phaedrus and ChatGPT include speed, volume and reach. Mis- and disinformation can be produced faster, distributed both more widely and in more targeted ways, and cause greater damage than at any time in history. Bad actors can - and have - spread lies about proposed policies, democratic elections, public health, and more. These lies can threaten political stability, hinder social progress, and risk lives.

These things matter. That social media ranters, conspiracy theorists and democratic arsonists are basically the pamphleteers of our time doesn’t mean the amount of damage they can do is the same.5 That people have tried to kill each other with rocks since the dawn of time doesn’t mean we shouldn’t worry about nuclear weapons because they’re just really big, really powerful rocks. It would be irresponsible to toss up our hands and say, “sophists will be sophists.” But our debates will be better, more robust and more productive if we put them in this larger context.

At the very least, as ChatGPT told my former student, “It is important for political campaigns to be transparent about their use of AI language models and to use the technology in ways that are ethical and responsible.”



1. You can read the full piece here:

2. I will use ChatGPT throughout for both simplicity and readability. In using the term, I mean all generative AI and LLM services that create content that sounds as if it could have come from a person.

3. As a matter of full transparency, in addition to teaching at The George Washington University, I am one of those people. I typically know enough, or can learn enough, to have a reasonable conversation. But as a consultant I am not hired for my subject matter expertise, I am hired for my ability to make someone else’s expertise sound compelling.

4. This is neither the time nor the place to argue about how many Antiphons there were; the point is that paid political scribes have been around for a while, and people have been raising concerns about them for just as long.

5. “The pamphlet [George Orwell, a modern pamphleteer, has written] is a one-man show. One has complete freedom of expression, including, if one chooses, the freedom to be scurrilous, abusive, and seditious; or, on the other hand, to be more detailed, serious and “high brow” than is ever possible in a newspaper or in most kinds of periodicals. At the same time, since the pamphlet is always short and unbound, it can be produced much more quickly than a book, and in principle, at any rate, can reach a bigger public. Above all, the pamphlet does not have to follow any prescribed pattern. It can be in prose or in verse, it can consist largely of maps or statistics or quotations, it can take the form of a story, a fable, a letter, an essay, a dialogue, or a piece of ‘reportage.’ All that is required of it is that it shall be topical, polemical, and short.” (Bailyn, 1967, p. 2)


Works Cited

Arnold, M. (2023, February 2). What AI Tools Like ChatGPT Mean for Political Consultants. Retrieved from Campaigns & Elections:

Bailyn, B. (1967). Ideological Origins of the American Revolution. Cambridge, MA: Harvard University Press.

Delany, C. (2023, January 4). It's 2023 Consultants, Welcome to the Machine. Retrieved from Campaigns & Elections:

Foggatt, T. (2023, March 1). How ChatGPT Will Strain a Political System in Peril. The Political Scene Podcast. The New Yorker. Retrieved from

Ford, J. (2023, February 2). How Political Campaigns Can Benefit from ChatGPT. Retrieved from BASK Insights:

Goldstein, R. N. (2014). Plato at the Googleplex - Why Philosophy Won't Go Away. New York: Pantheon Books.

Lima, C. (2023, January 2). ChatGPT is now writing legislation. Is this the future? The Washington Post. Retrieved from

Nay, J. J. (2023, January 28). Large Language Models as Corporate Lobbyists. Retrieved from arXiv:

Orwell, G. ((1946) 1981). Politics and the English Language. In G. Orwell, George Orwell: A Collection of Essays. Harvest Books.

Plato. (1948). Protagoras. (S. Buchanan, Ed., & B. Jowett, Trans.) New York: Viking Press.

Plato. (1973). Phaedrus and Letters VII and VIII. (W. Hamilton, Trans.) London: Penguin Classics.

PoliScribe. (n.d.). PoliScribe. Retrieved from

Quintilian. (1922). Institutio Oratoria, Book 12. (H. E. Butler, Trans.) Cambridge, MA: Harvard University Press. Retrieved from

Tufekci, Z. (2022, December 15). What Would Plato Say About ChatGPT? The New York Times. Retrieved from

Weaver, R. (1953). The Ethics of Rhetoric. Brattleboro: Echo Point Books & Media.

Weil, E. (2023, March 1). You Are Not a Parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this. New York Magazine - Intelligencer. Retrieved from


  • Peter Loge is an Associate Professor in the School of Media and Public Affairs at The George Washington University and Director of the Project on Ethics in Political Communication.

