Democracy, ChatGPT, and Generative AI
Whether ChatGPT and other generative AI prove helpful or harmful to democracy is still a jump ball.
TLDR: The impact of generative AI on democracy isn’t foreordained. Being either optimistic or pessimistic about it is less helpful in shaping its impact than being curious.
The background: When you open ChatGPT on your phone or computer, it looks at first like you might be texting with a chatbot. But you’re actually putting words into a massive equation (or algorithm) that takes its best guess at the best response to any question you throw at it. Imagine Alexa, if Alexa were smart and could text. This is called generative artificial intelligence.
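To make that "texting with an equation" idea concrete, here is a minimal sketch of what the ChatGPT app does behind the scenes: your message is sent off to a large statistical model, which returns its best guess at a response. This assumes the official `openai` Python client (v1+); the model name is illustrative, and any comparable chat API works the same way.

```python
# Minimal sketch: send a message to a chat model and print its best-guess reply.
# Assumes the `openai` Python client (v1+); the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[
        {"role": "user", "content": "Explain generative AI in one sentence."},
    ],
)

print(response.choices[0].message.content)
```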
Data Science@SMU asked me to speak to its students today about the potential impact of generative AI on domestic politics and foreign policy. Here’s what the foreign policy experts are talking about:
The Council on Foreign Relations is tracking China’s race to put out a ChatGPT alternative—a much harder task for an authoritarian country. Foreign Policy and the Carnegie Endowment are similarly China-focused.
Foreign Policy and generative AI: Another Dr. Hand is our household’s foreign policy expert, but here are a few other potential considerations that go beyond a narrow US-China focus:
We should expect to see countries attempt to manipulate the data that feeds into generative AI models, as part of information warfare.
We should expect to see more hands-free, voice-controlled offensive and defensive weapons.
Generative AI could support or replace human translators at the UN and in other negotiations, and potentially serve as a (more or less) neutral referee, helping negotiators move past stuck points.
Generative AI can already create weird and wonderful images from a short text prompt. Combined with technology like StageCraft, it could completely rework how soldiers are trained. Even before that, it could help generate more realistic and dynamic war games.
As generative AI gets connected to more datasets, it will enable intelligence officers to sort through data intuitively and much more quickly, and to generate (false) data.
Domestic Politics and generative AI: As a political scientist, it is frankly overwhelming to consider how this technology, combined with others, might interface with politics and policy. One question is how we regulate it, which Brookings has already taken a stab at. Here are a few other considerations:
Anywhere humans have to compose text, ChatGPT will be used. Interest groups will use it to generate individual-looking letters from constituents to congressional offices. Lobbyists will use it to craft arguments, candidates will use it to create websites and speeches and even more text messages, and bureaucrats will create generative AI layers that we will have to interact with before we get to speak to a human.
The war over social media content will intensify, because once these algorithms are released into the wild, they learn on their own in ways we can’t predict.
Once language-based generative AI is connected to large datasets, we could see a revolution in how politicians are studied; you will not have to touch a keyboard to ask who the network-central legislators in gun policy are, and where mathematical leverage points in a policy system might be.
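To sketch what “network-central legislators” means in practice, here is the kind of analysis a natural-language interface would be automating: build a co-sponsorship network and rank legislators by centrality. The CSV file and its column names are hypothetical placeholders; the method (eigenvector centrality via networkx) is standard.

```python
# Sketch of the analysis a natural-language interface would automate:
# rank legislators by centrality in a gun-policy co-sponsorship network.
# The CSV file and its columns are hypothetical placeholders.
import csv
import networkx as nx

G = nx.Graph()

# Each (hypothetical) row names two legislators who co-sponsored a gun bill.
with open("gun_policy_cosponsorships.csv", newline="") as f:
    for row in csv.DictReader(f):
        a, b = row["legislator_a"], row["legislator_b"]
        # Weight repeated co-sponsorships more heavily.
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Eigenvector centrality: who is connected to other well-connected members?
centrality = nx.eigenvector_centrality(G, weight="weight")

for name, score in sorted(centrality.items(), key=lambda x: -x[1])[:10]:
    print(f"{name}: {score:.3f}")
```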
Democracy and generative AI: Most interesting to us at EOWD is how generative AI might affect democracy at a small scale. Some ideas:
Generative AI could make us even more lonely. That is not great for democracy, which from a participatory perspective requires us to develop democratic habits at small scales in order to contribute to democracy on a large scale. Imagine two people conversing with a generative AI assistant coaching them through their AirPods, and try not to shudder.
Generative AI could help us deliberate better. If you, MAGA Republican, cannot figure out how to have a conversation with your hyper-liberal cousin, generative AI can help unstick that. Here are ChatGPT’s suggestions for that hypothetical:
On the other hand, ChatGPT is just a discourse machine, and our current democratic discourse is in pretty rough shape. Generative AI can enable misinformation as much as it can help detect it. And it can reinforce our existing biases just as well as help us grow past them.
The bottom line: Democracy, in our participatory view, is not a machine that we can set up to run itself. It depends on our participation to work, and generative AI doesn’t change that. Rather than succumbing to either what MLK called “deadening pessimism” or “superficial optimism,” we can instead throw ourselves into the fray, identifying ways to use generative AI to bend people back toward, rather than away from, each other.
—
My favorite general take on generative AI is from Andrew Dean, available here. Don’t miss the part about the armadillo.
Image made using DALL-E 2, a generative AI.