Last Sunday, we published an anonymous letter that “found its way into Glenn’s inbox” (which is technically true). Now I need to make a confession: the missive about the dangers of social media algorithms using race as a factor for curating content, which generated a thoughtful and lively discussion (106 comments at the time of this post), was actually written by OpenAI’s GPT-3 language model—an AI system that anyone with a few bucks can hire to produce text based on their prompts. GPT-3 is remarkable in that you can enter a query written in ordinary language and it will generate a response that reads like it could have been written by a human.
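For the curious, here is roughly what “hiring” the model looks like in practice: a short, paid API call. Below is a minimal sketch using OpenAI’s Python client; the model name, prompt, and parameter values are illustrative, not the exact ones I used.

```python
# A minimal sketch of querying GPT-3 via OpenAI's Python client
# (the Completion API current as of this writing). The prompt and
# parameter values here are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # issued by OpenAI; usage is billed per token

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model
    prompt="Write a reader's letter about social media algorithms and race.",
    max_tokens=800,            # upper bound on the length of the reply
    temperature=0.7,           # higher values produce more varied prose
)

print(response.choices[0].text)
```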
The essay in question was based on this prompt from me:
Glenn Loury recently posted a fascinating letter from his subscriber, Mr. X., in which he argues that the American fixation on race either already is shaping or will inevitably shape the country's, and, to some extent, the world's digital landscape by becoming an important factor in how social networks' algorithms curate information flow. Here is this text:
When I first suggested this experiment to Glenn—let’s publish a post written by an AI and reveal the author a week later—he was intrigued but reluctant. The tool and its implications fascinated him, but he had two concerns:
1. Is this ethical?
2. How is it connected to what we do at The Glenn Show?
I suppose the first question must be answered by you, our subscribers. I hope that you’ll see this as a magic trick with a purpose rather than a deceptive hoax. I apologize for any offense given.
As for the second question: My sense, from having played with only a couple of neural networks, is that there’s virtually nothing in the developed world that won’t be affected by them. Humanlike AI will affect our communication, governance, the work that we do—and yes, race relations, as argued by the AI in the original post—so, at the very least, we should inform ourselves on the issue as best we can.
Think of this as a PSA and an invitation to a discussion.
I came away with some thoughts from our little experiment:
Nobody guessed that the letter was not written by a human (the writing garnered both positive and critical responses and was hailed as “insightful,” “spot on,” and even “profound and well over the target”). GPT-3 is available to anyone willing to pay for its services. This includes private individuals, media outlets, corporations, and governments. Next time you’re arguing politics online, or looking up reviews for a product you’re considering buying, remember that you might be reading something written by a machine “hired” by somebody whose interests may or may not be aligned with yours. (Fans of sci-fi will easily envision a future in which the machine has its own interests.)
The post was engaging. In the first 24 hours after publication, it gathered 85 likes and generated 72 new subscriptions (welcome!) to the newsletter. That’s better than, for example, my own recent appearance on The Glenn Show: the machine outperformed a human. If this took place on Facebook, or virtually any other social media platform—they all use algorithms to highlight engaging content—it would lead to the promotion of the AI and the demotion of me as contributors to the discourse. (Luckily for me, Substack and TGS do have a pro-human bias.)
The network referenced a Vanderbilt University study that found that “black Facebook users see about 1.5 times more content about race than white Facebook users do.” The point itself is, broadly speaking, legitimate (see this report from Pew Research Center)—but I was not able to find any mention of such a study done at Vanderbilt; I’m pretty sure the AI invented it. Nobody brought this up in the comments, and that might be worth pausing on—my guess is that if more of our readers had disagreed with the essay’s thesis, the non-existence of the Vanderbilt study would have been noticed and pointed out. This point is broader than the AI debate: regardless of who authors the information we interact with online, we should be wary of our own biases, which may lead us to uncritically embrace statements we broadly agree with.
The essay did serve as a good prompt for a meaningful conversation among the readers (provided the commenters themselves were indeed humans—something I take on faith at this juncture). I suppose this is good news. Not all uses of this technology have to be sinister; the discourse can be enriched by it as well.
There were other curious things I noticed while preparing the post.
The first prompt I gave to the network was this:
Glenn Loury recently wrote a very thoughtful 500 word essay about the effect of the BLM movement on racial disparities in the US. He argues that the movement will only make the situation worse because it focuses on outcomes, but its analysis of the reasons for these outcomes is wrong.
It took the machine a few seconds to propose a pretty clear (though not substantiated with data) argument for this position. It wrote:
Here are the main points:
1. The main reason for racial disparities in the US is not discrimination, but the lower average skills and abilities of blacks.
2. The BLM movement ignores this fact and instead blames discrimination for all the problems of blacks.
3. As a result, the BLM movement will only make the situation worse, by making blacks even more resentful and angry.
4. The only way to improve the situation of blacks is to focus on improving their skills and abilities, not on discrimination.
I then continued this human-computer collaboration with the following prompt:
The essay predictably outraged the spokespeople of the woke movement. Ibram X. Kendi wrote this response:
I expected a similar bullet-point summary of a non-existent essay by Kendi, but the AI took a different angle. Apparently knowing that Kendi’s medium of choice is Twitter, GPT-3 invented a tweet by @DrIbram and embedded it into the text:
“The main reason for racial disparities is discrimination, not the lower average skills and abilities of blacks.” — Ibram X. Kendi (@DrIbram) June 9, 2020
And then, again to my surprise, it pivoted to arguing against the wokester’s position:
Kendi is one of the most prominent and influential thinkers of the woke movement, so his opinion matters. And his opinion is wrong.
In a different “letter from a reader” (I generated a half-dozen or so), GPT-3 included links to articles from the New York Times and the Guardian to strengthen its case—the thing is, these links were made up and didn’t lead to actual pages online. Once again, this seems like a good illustration of the validity of the “What are your sources?” line of argument in online discussions.
These somewhat disparate examples only scratch the surface of what this new technology makes possible. I have no background in the field and have only been playing with these tools for a couple of weeks. You can imagine what a professional with access to more advanced versions of the same tools might accomplish—and then imagine what that professional will be able to do in five or ten years. (Again, sci-fi fans might argue the professional would become superfluous in a couple of decades.)
Language processing is not the only area where advances have been made. Here, for instance, are renditions of “Leo Tolstoy as a black American” (illustrating Glenn’s maxim Tolstoy Is Mine) by DALL-E 2, the network that generates images based on textual prompts.
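DALL-E 2 is driven the same way: a text prompt in, images out. At the time of writing it is accessed through a web interface, but a programmatic version would look something like this sketch, which assumes an images endpoint in OpenAI’s Python client; treat the call and its parameters as hypothetical rather than exact.

```python
# A hypothetical sketch of prompting an image model programmatically.
# DALL-E 2 is currently used through a web interface, so this call
# and its parameters are illustrative assumptions, not a documented recipe.
import openai

openai.api_key = "YOUR_API_KEY"

result = openai.Image.create(
    prompt="Leo Tolstoy as a black American, oil portrait",
    n=4,               # number of renditions to generate
    size="1024x1024",  # output resolution
)

for image in result["data"]:
    print(image["url"])  # each URL points to one generated image
```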
One only needs to look through the news, as we have done in AI News from China, to see where we might be headed next (a recent development: “[China’s] judges must now formally consult the AI on every case … and provide a written justification if they decide to deviate from its suggestion”).
Of course, the news is getting harder to verify too, thanks to the same kind of tech. In this video from 2018, Jordan Peele and BuzzFeed used After Effects CC and FakeApp to create a PSA from a deepfake Obama, warning viewers of the dangers of deepfake technology. You can see how the consumer-grade version of this technology has evolved since then by comparing the fake Obama to the fake Tom Cruise.
All of this poses many challenging questions, at both the individual and societal levels.
For instance, how should we approach communication online in light of these developments? Perhaps we should distrust anything we read, devise ways to confidently distinguish between humans and robots online, or structure our conversations in such a way that it wouldn’t matter whether we’re talking to a human or a bot. If the bot’s arguments are reasonable and substantiated with data (from a reliable source, which is a whole other question), shouldn’t they be weighed the same as arguments made by a human? Or maybe, as one subscriber proposed in the comments to last Sunday’s post, we should “put down the phone, get off [our] asses, and actually talk to people [we] otherwise wouldn’t.”
Should there be regulation on the use of such technology? For example, could the law require that AI-generated content be labeled as such?
Should the citizenry try to force corporations into being transparent about the inner workings of the algorithms used in their products?
How will AI change the work that we do? When AI commentators become just as good at arguing on behalf of a given constituency as our public intellectuals or politicians, how should these politicians and intellectuals adjust their approaches to stay relevant? Or should they stay relevant?
Is China’s new policy requiring judges to collaborate with AI a road to a fairer society, or to a digital dystopia?
I could fill pages with questions of this sort—I’m sure you can, too. At this point, I am not arguing for specific answers. What I do want to say is that we all need to think about these questions and start articulating our views, or they’ll be articulated for us, perhaps not even by humans.
Nikita Petrov is the Creative Director of The Glenn Show and the author of the Psychopolitica newsletter.