We’re in the midst of a culture war, with battle lines drawn between the left and the right, especially when it comes to the role of race, racial inequality, and discrimination in public and private life. At least that’s how our current moment is often described. In reality, very few people are “culture warriors” in their bones, in the sense that their actual views line up neatly with partisan doctrines of the left or the right. Enlist someone in a culture war, and you’ll get a culture warrior. Sit down with them over a meal and a couple of drinks, and you’ll almost invariably encounter someone far more complex and harder to define than a political caricature.
The trouble is, the culture war sells. It leads to higher ratings on cable news, increased readership even in “unbiased” legacy media outlets, electoral gains for political parties who use fear to motivate voters, and more clicks for hustlers on both sides who’ve figured out how to translate social division into grievance-fueled online followings.
Social media, with its reliance on algorithmic recommendation, plays a major role in this state of affairs as well. A communiqué recently found its way into my inbox, and it drove home to me in stark terms just how much we should be worrying about the way algorithmic recommendation stokes feelings of grievance and misperceptions about our social landscape. How can we gauge the true intensity of the culture war—and find a way to dial it down—when algorithms and other forms of AI generate profit by amplifying it? Is online interaction giving us a clear picture of who thinks and feels what? Or does our very participation online leave our thoughts and feelings vulnerable to manipulation?
I present the message I received below. I’d love to hear your thoughts—let me know in the comments.
I've been thinking a lot about algorithms lately, and how they are being used to curate the information we see. I think this is going to be one of the most important issues facing our country in the next few years. And I think it's going to have a major impact on race relations.
Here's what I mean: Right now, algorithms are being used to determine what content we see on social media sites like Facebook and Twitter. These algorithms take into account a variety of factors, including who our friends are, what kinds of things we've liked or shared in the past, and so forth.
But here's the thing: Race is also becoming an increasingly important factor in these algorithms. That's because social media companies are starting to realize that people of different races tend to interact with each other differently online.
For example, studies have shown that black users of social media sites are more likely than white users to click on links to articles about race-related topics. So if Facebook wants its algorithm to show us more stuff that we're likely to click on, it makes sense to start showing us more stuff about race. And once it starts doing that, it becomes a self-reinforcing cycle: The algorithm starts showing us more stuff about race because we're clicking on it, and then we start clicking on even more stuff about race because that's what the algorithm is showing us.
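To see how that cycle can snowball, here is a minimal sketch in Python of a purely hypothetical engagement-driven recommender. The topic names, click probabilities, and ranking rule are invented for illustration; this is not Facebook's actual algorithm, just a demonstration of how a small difference in what two users click on can turn into a large difference in what they're shown.

```python
# Toy sketch of an engagement-driven recommender (hypothetical, not any
# platform's real system). Each round, the recommender looks at the user's
# observed click rate per topic and gives most of the next batch of
# impressions to whichever topic is currently "winning." Small differences
# in taste get amplified into large differences in exposure.
import random

random.seed(0)
TOPICS = ["race", "sports", "cooking"]

def simulate(click_prob, rounds=100, per_round=10, explore=1):
    """Simulate one user's feed. click_prob is the user's true chance of
    clicking an item on each topic (assumed fixed, for illustration)."""
    shown = {t: 1 for t in TOPICS}    # impression pseudo-counts
    clicks = {t: 1 for t in TOPICS}   # click pseudo-counts
    for _ in range(rounds):
        # Recommender: find the topic with the highest empirical click rate.
        best = max(TOPICS, key=lambda t: clicks[t] / shown[t])
        for t in TOPICS:
            # Most impressions go to the current "best" topic; a token
            # amount of exploration goes to everything else.
            n = per_round - explore * (len(TOPICS) - 1) if t == best else explore
            shown[t] += n
            clicks[t] += sum(random.random() < click_prob[t] for _ in range(n))
    total = sum(shown.values())
    return {t: round(shown[t] / total, 2) for t in TOPICS}

# Two hypothetical users whose underlying interests differ only slightly.
print("User A feed share:", simulate({"race": 0.35, "sports": 0.30, "cooking": 0.30}))
print("User B feed share:", simulate({"race": 0.30, "sports": 0.30, "cooking": 0.35}))
```

Under these made-up numbers, each user's feed drifts toward whichever topic they clicked slightly more often at the start, which is the self-reinforcing dynamic described above.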
This is already happening. A recent study by researchers at Vanderbilt University found that, on average, black Facebook users see about 1.5 times more content about race than white Facebook users do. And this difference is even bigger for certain topics: Black Facebook users are 2.5 times more likely to see content about the shooting of Trayvon Martin than white Facebook users are, for example.
Now, you might think that this isn't a big deal; after all, what's wrong with seeing more information about things that are important to us?
But there's a danger here.
First of all, it's not clear that social media algorithms can accurately gauge our interests; they may simply be amplifying our preexisting biases.
Second, and more importantly, if we're only seeing information that confirms our existing beliefs—if we're living in an echo chamber—it becomes very difficult to have constructive dialogue or find common ground with people who don't share those beliefs.
This is a problem for everyone, but it's especially acute for people of color. That's because the United States has a long history of race-based discrimination, which means that people of color are more likely to harbor mistrust and suspicion toward people who don't share their racial background. If we're only seeing information that reinforces this mistrust—if we're only hearing one side of the story—it becomes very difficult to bridge the divide.
So what can be done about this?
First of all, social media companies need to be more transparent about how their algorithms work. We need to understand how they are curating our content, and why they are making the choices they are making. Second, we need to find ways to encourage cross-racial dialogue and understanding. This will require effort from everyone involved: Social media users need to make an effort to seek out diverse perspectives; social media companies need to design their algorithms in ways that promote diversity; and policy makers need to create incentives for both individuals and companies to promote cross-racial understanding.
This is not going to be easy, but it's important. The future of our country depends on it.
[Name Withheld]
Our "elite" universities like Brown are major sources of toxic ideas and dialogue: https://yuribezmenov.substack.com/p/how-to-rank-the-top-npc-universities
When I see Tucker Carlson pushed into my YouTube recommendations, I start to worry: what am I watching? The algorithms are all about doubling down. You get an exponential effect that leads to more radical views. We need our institutions to show leadership in mixing people of different views and counteracting the polarizing effects of social media. But instead they just pick a side and add to the effect. Someone goes through DEIA training during the day at work. It pisses them off. They go home and start watching YouTube videos about why it’s misguided. Those videos just lead them farther and farther down the rabbit hole. Or the opposite happens and they go down the rabbit hole of SJW videos. Either way, it’s not good.