Discussion about this post

Edward:

When I see Tucker Carlson pushed into my YouTube recommendations I start to worry, “what am I watching?” The algorithms are all about doubling down: you get a compounding effect that leads to more and more radical views. We need our institutions to show leadership in mixing people of different views and counteracting the polarizing effects of social media. But instead they just pick a side and add to the effect. Someone goes through DEIA training during the day at work. It pisses them off. They go home and start watching YouTube videos about why it's misguided. Those videos just lead them farther and farther down the rabbit hole. Or the opposite happens and they go down the rabbit hole of SJW videos. Either way, it's not good.

Burke:

Disclosure: I am a software engineer building online advertising systems.

// How can we gauge the true intensity of the culture war—and find a way to dial it down—when algorithms and other forms of AI generate profit by amplifying it?

You are fed what you engage with. There are millions of people whose only experience of [insert media platform] is makeup tutorials, sports, etc. Most platforms expose tools for you to curate your feed. YouTube, Facebook, et al. have little buttons that say "I'm not interested in this" - those buttons _work_. You can also uninstall or block the apps, and yes, it is possible; people do it all the time.

In general, this kind of talk surrenders way too much agency to the platforms. The main driver of the content you engage with is _you_.
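As a toy illustration of that feedback loop, here's a minimal sketch. Everything in it (`Item`, `FeedRanker`, the scoring) is invented for illustration; no platform works exactly like this, but the shape is the same: engagement raises a topic's rank, and the "not interested" button is just a large negative signal.

```python
# Hypothetical engagement-weighted feed ranker. Not any platform's
# actual system; the names and numbers here are made up.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    topic: str

class FeedRanker:
    def __init__(self):
        # Running engagement score per topic; starts neutral.
        self.affinity = defaultdict(float)

    def record_watch(self, item: Item, seconds: float):
        # Engagement feeds back into future rankings.
        self.affinity[item.topic] += seconds

    def not_interested(self, item: Item):
        # The "I'm not interested" button: a strong negative signal
        # that outweighs accumulated watch time.
        self.affinity[item.topic] -= 10_000.0

    def rank(self, candidates: list[Item]) -> list[Item]:
        # Topics you engaged with float to the top; down-voted
        # topics sink, regardless of past engagement.
        return sorted(candidates,
                      key=lambda i: self.affinity[i.topic],
                      reverse=True)

ranker = FeedRanker()
ranker.record_watch(Item("Rant compilation #47", "politics"), 1200)
ranker.record_watch(Item("Sourdough basics", "baking"), 300)
ranker.not_interested(Item("Rant compilation #48", "politics"))
feed = ranker.rank([Item("Election hot takes", "politics"),
                    Item("Crusty loaf tips", "baking")])
print([i.title for i in feed])  # baking now outranks politics
```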

// Is online interaction giving us a clear picture of who thinks and feels what?

No. Mostly it's you looking at your own reflection.

// does our very participation online leave our thoughts and feelings vulnerable to manipulation?

Yes, but so do most things. If you read the NYT, you are being manipulated. Platforms want to maximize time-on-site, which is a different flavor of manipulation, but it's not obvious which is worse. This probably varies case-by-case.

EDIT:

// social media companies need to be more transparent about how their algorithms work. We need to understand how they are curating our content and why they are making the choices they are making.

I make algorithms for a living. With a few narrow exceptions, "how" or "why" are not questions that can be answered in a way that people would find satisfying. We have a few options:

1) I can point you to the formal mathematical specification in any number of textbooks. Read them, and you will know one kind of "how."

2) I can show you the ten billion numbers that constitute it, for a second kind of "how."

3) I can say "why" in that X content maximized the probability of you generating money for the platform.

4) Sometimes (but more often not) I can say "why" in that the model considers certain things "important." But in what way it's important, I (typically) can't say. (See the sketch below.)
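Here is a toy version of options 2 through 4, assuming a small scikit-learn model trained on synthetic click data. The feature names and the data are made up for illustration; a production ranker is the same picture at vastly larger scale.

```python
# Toy ranker: what "how" and "why" actually look like from the inside.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["watch_time_topic", "hour_of_day", "session_length",
            "past_clicks_channel", "video_age_days"]
X = rng.normal(size=(5000, len(features)))
# Synthetic "clicked" label driven by a messy nonlinear interaction.
y = ((X[:, 0] * X[:, 3] + np.sin(X[:, 1])) > 0.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Option 3's "why": this item had the highest predicted engagement.
print(model.predict_proba(X[:1])[0, 1])

# Option 4's "why": the model considers these features "important"...
for name, imp in zip(features, model.feature_importances_):
    print(f"{name:>22}: {imp:.3f}")
# ...but *in what way* watch_time_topic matters (its interaction with
# past_clicks_channel, here) is buried across hundreds of trees: the
# "ten billion numbers" of option 2, just at toy scale.
```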

Imagine you have a brain in a jar, and you ask me why the brain did Y instead of Z. I answer by showing you a video of all the neurons activating at the moment the decision was made. This is an imperfect analogy for various reasons, but it illustrates that "why" and "how" are not applicable questions.

The reason we can't answer questions like this is not a conspiracy of silence on the part of tech companies - it's that nobody has an answer humans can comprehend, let alone accept.
