The Artificial Intelligence Election
If 2023 was the year of generative artificial intelligence, 2024 is already the year of its application to politics
Last year, just as ChatGPT was exploding into public consciousness, I was invited to comment on its potential political impact. A year later, I’ve been asked to do it again, this time for a radio segment.
In preparation for that, I thought I’d take a look at what’s happened since then:
All the think shops—Council on Foreign Relations, Brookings, The Atlantic, MIT Technology Review, The Brennan Center—have felt compelled to put out their reports about how AI will change the 2024 election.
In December, a congressional candidate used AI to have phone conversations with voters. Today, the FCC extended its prohibitions against scam robocalls to include artificially generated voices.
This came after “a Texas man and two companies” made AI-generated robocalls impersonating Joe Biden. That doesn’t ban AI robocalls, of course—it just applies the same scam robocall laws to calls using AI-generated voices.
Even Meta is getting antsy, claiming it will prohibit the undisclosed use of AI in political ads.
How they’ll do this, I’m not sure; I can’t effectively prohibit it even in my own classroom.
I’m exploring how to use this software to sort through and access the national database of campaign staff that I put together while writing my dissertation.
At UTA, I’m teaching a class this spring called Democracy in Theory and Practice. In addition to the Aristotle, Machiavelli, and Madison, we’ll spend a couple of weeks talking about technology. I encourage my students to think about this and all technological developments in two ways:
The democracy-weakening uses of new technology.
This is most of what you’ll read about: deep fakes, micro-targeting, and even more robocalls that turn off voters. The net effect of all this technology, I believe, is that it will be rational for us to trust less of what we see and hear on our computers.
In that way, I think the cumulative effect of AI, including generative AI, is that we’ll have to return to some old-school trust-but-verify behavior, where the “verify” part happens in person, in living rooms and at public speeches. Very Athenian, really!
The democracy-enhancing uses of new technology.
Some of the theories we study in class point to the need for citizens to be educated in how to participate in democracy. Participation is hard! and weird! and takes time. These tools could make it much easier to train people to participate in democracy. And some of those AI robocalls might turn out to be useful, drawing people into the political process.
Other theories that we study in class focus on the need for people to learn how to deliberate well—to argue effectively and empathetically. These tools can help! They can offer guides for how to unstick conversations that feel stuck, or suggest ways forward on issues that are contentious. Maybe ChatGPT will train us to argue and deliberate more effectively.
A quick post-posting update: I just saw that the Hewlett Foundation is working on the possibilities of AI rather than just its dangers—check it out.
If you really want to gaze into the crystal ball, the MIT Technology Review article is my favorite of the ones listed above. An AI-generated political party??