We’re in the midst of a culture war, with battle lines drawn between the left and the right, especially when it comes to the role of race, racial inequality, and discrimination in public and private life. At least that’s how our current moment is often described. In reality, very few people are “culture warriors” in their bones, in the sense that their actual views line up neatly with partisan doctrines of the left or the right. Enlist someone in a culture war, and you’ll get a culture warrior. Sit down with them over a meal and a couple of drinks, and you’ll almost invariably encounter someone far more complex and harder to define than a political caricature.
The trouble is, the culture war sells. It leads to higher ratings on cable news, increased readership even in “unbiased” legacy media outlets, electoral gains for political parties who use fear to motivate voters, and more clicks for hustlers on both sides who’ve figured out how to translate social division into grievance-fueled online followings.
Social media, with its reliance on algorithmic recommendation, plays a major role in this state of affairs as well. A communiqué recently found its way into my inbox, and it drove home to me in stark terms just how much we should be worrying about the way algorithmic recommendation stokes feelings of grievance and misperceptions about our social landscape. How can we gauge the true intensity of the culture war—and find a way to dial it down—when algorithms and other forms of AI generate profit by amplifying it? Is online interaction giving us a clear picture of who thinks and feels what? Or does our very participation online leave our thoughts and feelings vulnerable to manipulation?
I present the message I received below. I’d love to hear your thoughts—let me know in the comments.
I've been thinking a lot about algorithms lately, and how they are being used to curate the information we see. I think this is going to be one of the most important issues facing our country in the next few years. And I think it's going to have a major impact on race relations.
Here's what I mean: Right now, algorithms are being used to determine what content we see on social media sites like Facebook and Twitter. These algorithms take into account a variety of factors, including who our friends are, what kinds of things we've liked or shared in the past, and so forth.
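To make that concrete, here's a minimal sketch of what such a scoring step might look like. The feature names and weights are hypothetical stand-ins; no platform has published the real ones.

```python
# Hypothetical illustration: an engagement-based ranker scoring one post.
# Feature names and weights are invented; real platforms do not publish theirs.

POST_FEATURES = {
    "shared_by_friend": 1.0,      # a friend shared or liked it
    "matches_past_clicks": 0.8,   # similar to things you clicked before
    "is_recent": 0.3,             # posted in the last few hours
    "topic_you_engage_with": 0.5, # tagged with a topic you often click on
}

WEIGHTS = {
    "shared_by_friend": 2.0,
    "matches_past_clicks": 3.5,
    "is_recent": 1.0,
    "topic_you_engage_with": 1.5,
}

def engagement_score(features):
    """Weighted sum of signals: a higher score means the post is shown sooner."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

print(engagement_score(POST_FEATURES))  # posts are then ranked by this number
```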
But here's the thing: Race is also becoming an increasingly important factor in these algorithms. That's because social media companies are starting to realize that people of different races tend to interact with each other differently online.
For example, studies have shown that black users of social media sites are more likely than white users to click on links to articles about race-related topics. So if Facebook wants its algorithm to show us more stuff that we're likely to click on, it makes sense for them to start showing us more stuff about race. And once they start doing that, it becomes a self-reinforcing cycle: The algorithm starts showing us more stuff about race because we're clicking on it, and then we start clicking on even more stuff about race because that's what the algorithm is showing us.
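To see how quickly that cycle can compound, here's a toy simulation. The click rates are invented purely to illustrate the dynamic: exposure follows past clicks, clicks follow exposure, and one topic's share of the feed drifts steadily upward.

```python
# Toy simulation of the self-reinforcing cycle described above. The rates
# are invented: the user clicks the topic at twice the rate of other content.

TOPIC_CLICK_RATE = 0.30   # clicks per unit of exposure to the topic
OTHER_CLICK_RATE = 0.15   # clicks per unit of exposure to everything else

def simulate_feed(rounds=8):
    share = 0.10  # fraction of the feed devoted to the topic at the start
    for t in range(rounds):
        topic_clicks = TOPIC_CLICK_RATE * share
        other_clicks = OTHER_CLICK_RATE * (1 - share)
        # The ranker reallocates the feed toward whatever earned more clicks.
        share = topic_clicks / (topic_clicks + other_clicks)
        print(f"round {t + 1}: topic fills {share:.0%} of the feed")

simulate_feed()
```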
This is already happening. A recent study by researchers at Vanderbilt University found that, on average, black Facebook users see about 1.5 times more content about race than white Facebook users do. And this difference is even bigger for certain topics: Black Facebook users are 2.5 times more likely to see content about the shooting of Trayvon Martin than white Facebook users are, for example.
Now, you might think that this isn't a big deal; after all, what's wrong with seeing more information about things that are important to us?
But there's a danger here.
First of all, it's not clear that social media algorithms can accurately gauge our interests; they may simply be amplifying our preexisting biases.
Second, and more importantly, if we're only seeing information that confirms our existing beliefs—if we're living in an echo chamber—it becomes very difficult to have constructive dialogue or find common ground with people who don't share those beliefs.
This is a problem for everyone, but it's especially acute for people of color. That's because the United States has a long history of race-based discrimination, which means that people of color are more likely to have mistrust and suspicion towards people who don't share their racial background. If we're only seeing information that reinforces this mistrust—if we're only hearing one side of the story—it becomes very difficult to bridge the divide.
So what can be done about this?
First of all, social media companies need to be more transparent about how their algorithms work. We need to understand how they are curating our content, and why they are making the choices they are making. Second, we need to find ways to encourage cross-racial dialogue and understanding. This will require effort from everyone involved: Social media users need to make an effort to seek out diverse perspectives; social media companies need to design their algorithms in ways that promote diversity; and policy makers need to create incentives for both individuals and companies to promote cross-racial understanding.
This is not going to be easy, but it's important. The future of our country depends on it.
[Name Withheld]
When I see Tucker Carlson pushed into my YouTube recommendations, I start to worry: what am I watching? The algorithms are all about doubling down. You get an exponential effect that leads to more and more radical views. We need our institutions to show leadership in mixing people of different views and counteracting the polarizing effects of social media. But instead they just pick a side and add to the effect. Someone goes through DEIA training during the day at their work. It pisses them off. They go home and start watching YouTube videos about why it’s misguided. Those videos just lead them farther and farther down the rabbit hole. Or the opposite happens and they go down the rabbit hole of SJW videos. Either way, it’s not good.
Disclosure: I am a software engineer building online advertising systems.
// How can we gauge the true intensity of the culture war—and find a way to dial it down—when algorithms and other forms of AI generate profit by amplifying it?
You are fed what you engage with. There are millions of people whose only experience of [insert media platform] is makeup tutorials, sports, etc. Most platforms expose tools for you to curate your feed. YouTube, Facebook, et al. have little buttons that say "I'm not interested in this" - those buttons _work_. You can also uninstall or block the apps, and yes, it is possible; people do it all the time.
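As a rough sketch of why those buttons work: pressing "I'm not interested" can be treated as a strong negative signal on a per-topic score. The update rule and numbers below are assumptions for illustration, not any platform's actual implementation.

```python
# Rough sketch of how a "not interested" signal might be folded into a
# per-topic score. The update rule and numbers are assumptions.

topic_scores = {"politics": 0.9, "makeup_tutorials": 0.4, "sports": 0.6}

def register_feedback(topic, signal):
    """Nudge a topic's score up on clicks, sharply down on 'not interested'."""
    if signal == "click":
        topic_scores[topic] = min(1.0, topic_scores[topic] + 0.05)
    elif signal == "not_interested":
        topic_scores[topic] *= 0.3   # strong downweight: the button "works"

register_feedback("politics", "not_interested")
register_feedback("sports", "click")
print(sorted(topic_scores.items(), key=lambda kv: -kv[1]))  # new feed priority
```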
In general, this kind of talk surrenders way too much agency to the platforms. The main driver of the content you engage with is _you_.
// Is online interaction giving us a clear picture of who thinks and feels what?
No. Mostly it's you looking at your own reflection.
// does our very participation online leave our thoughts and feelings vulnerable to manipulation?
Yes, but so do most things. If you read the NYT, you are being manipulated. Platforms want to maximize time-on-site, and that is a different flavor of manipulation, but it's not obvious which is worse. This probably varies case by case.
EDIT:
// social media companies need to be more transparent about how their algorithms work. We need to understand how they are curating our content and why they are making the choices they are making.
I make algorithms for a living. With a few narrow exceptions, "how" and "why" are not questions that can be answered in a way that people would find satisfying. We have a few options:
1) I can point you to the formal mathematical specification in any number of textbooks. Read them, and you will know one kind of "how".
2) I can show you the ten billion numbers that constitute it, for a second kind of "how".
3) I can say "why" in the sense that X content maximized the probability of you generating money for the platform.
4) Sometimes (but more often not) I can say "why" in the sense that the model considers certain things "important". But in what way it's important, I (typically) can't say.
Imagine you have a brain in a jar, and you ask me why the brain did Y instead of Z. I answer by showing you a video of all the neurons activating at the moment the decision was made. This is an imperfect analogy for various reasons, but it illustrates why "why" and "how" are not really answerable questions here.
The reason we can't answer questions like this is not a conspiracy of silence on the part of tech companies; it's that nobody has an answer humans can comprehend, let alone accept.
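To make options 2 and 3 concrete, here is a toy version: the "numbers that constitute it" can be printed, and the "why" can be stated as expected revenue, but neither reads like an explanation a person would accept. This is an illustrative sketch, not real ad-ranking code.

```python
# Toy version of "how" option 2 and "why" option 3 above. A tiny logistic
# model predicts click probability; its weights can be dumped, and items can
# be ranked by expected revenue, but neither is a human-satisfying "why".
import math
import random

random.seed(0)
weights = [random.gauss(0, 1) for _ in range(8)]   # stand-in for the ten billion numbers

def p_click(features):
    """Predicted probability that the user engages with this item."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

def expected_revenue(features, payout):
    """The only 'why' on offer: this item maximized p(click) * payout."""
    return p_click(features) * payout

item_a = [random.random() for _ in range(8)]
item_b = [random.random() for _ in range(8)]

print("the numbers:", [round(w, 2) for w in weights])   # "how", kind 2
print("why A over B:", expected_revenue(item_a, 0.02) > expected_revenue(item_b, 0.02))
```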