Jul 24, 2022 · edited Jul 24, 2022 · Liked by Nikita Petrov

Disclosure: I am a software engineer building online advertising systems.

// How can we gauge the true intensity of the culture war—and find a way to dial it down—when algorithms and other forms of AI generate profit by amplifying it?

You are fed what you engage with. There are millions of people whose only experience of [insert media platform] is makeup tutorials, sports, etc. Most platforms expose tools for you to curate your feed. YouTube, Facebook, et al. have little buttons that say "I'm not interested in this" - those buttons _work_. You can also uninstall or block the apps, and yes, that is possible; people do it all the time.

In general, talk of this sort surrenders far too much agency to the platforms. The main driver of the content you engage with is _you_.

// Is online interaction giving us a clear picture of who thinks and feels what?

No. Mostly it's you looking at your own reflection.

// does our very participation online leave our thoughts and feelings vulnerable to manipulation?

Yes, but so do most things. If you read the NYT, you are being manipulated. Platforms want to maximize time-on-site, and that is a different flavor of manipulation, but it's not obvious which is worse. That probably varies case by case.

EDIT:

// social media companies need to be more transparent about how their algorithms work. We need to understand how they are curating our content and why they are making the choices they are making.

I make algorithms for a living. With a few narrow exceptions, "how" and "why" are not questions that can be answered in a way people would find satisfying. We have a few options:

1) I can point you to the formal mathematical specification in any number of textbooks. Read them, and you will know one kind of "how".

2) I can show you the ten billion numbers that constitute the model, for a second kind of "how".

3) I can say "why" in the sense that X content maximized the probability of you generating money for the platform.

4) Sometimes (but more often not) I can say "why" in the sense that the model considers certain things "important". But in what way they are important, I typically can't say.

Imagine a brain in a jar. You ask me why the brain did Y instead of Z, and I answer by showing you a video of all the neurons activating at the moment the decision was made. It's an imperfect example for various reasons, but it illustrates why "why" and "how" are not applicable questions.

The reason we can't answer questions like this is not a conspiracy of silence on the part of tech companies - it's that nobody has an answer humans can comprehend, let alone accept.
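To make option 2 concrete, here is a toy sketch (all layer shapes and numbers are made up for illustration, not any platform's real system): the complete "how" of a trained model is nothing more than a pile of learned parameters.

```python
# Toy illustration: the "complete how" of a model is just its numbers.
# A hypothetical 3-layer network: 50 input features -> 64 -> 64 -> 1 score.
import random

random.seed(0)

layer_shapes = [(50, 64), (64, 64), (64, 1)]

# Stand-ins for trained weights and biases; real ones come from training.
weights = [
    [[random.random() for _ in range(cols)] for _ in range(rows)]
    for rows, cols in layer_shapes
]
biases = [[random.random() for _ in range(cols)] for _, cols in layer_shapes]

# "How does the model decide?" -> here is every number it uses.
n_params = sum(rows * cols + cols for rows, cols in layer_shapes)
print(n_params)  # 7489 numbers for even this toy
```

Even this toy answer is 7,489 floats; scale it up a millionfold and "just show us how it works" stops meaning anything.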


This is all true. This is not about the ethics of individual programmers so much as about companies like Facebook wanting to maximize engagement with their platforms instead of healthy civic engagement, and not really having the will or capacity to deal with the consequences later.


// maximize engagement ... instead of healthy civic engagement

You make a site that optimizes for "healthy civic engagement" and I'll make one that maximizes Kardashians' butts. Let's see who is in business a year later.

We could make it illegal to run a competitor to Zuckerberg Inc., and then mandate "eat your vegetables" FDA/CDC/FBI propaganda. Pretty sure that's not what people want, but AFAICT it's the only fix that would address the grievances people have.

// not really having the will or capacity to deal with the consequences

What specifically would 'dealing with them' entail that isn't currently being done? Answer that question in an actionable, unproblematic way and you'll be a billionaire.

People talk about tech giants as if they were simultaneously the monocausal source of, and panacea for, all our problems - this when we have enormous difficulty defining or agreeing on what the problems even are, let alone coming up with solutions that aren't worse.


I was relatively active when it was still possible to talk about the "netroots" as a community, until the internecine conflict about Obama just got too toxic for me. (And my mother died, and making two minyanim a day was suddenly much more important than whatever some internet personality was yelling.) The political blogosphere, depending on the site, might have provided only caricatures of the other side, and the 2008 primary was so awful that I now keep my presidential-primary online discourse to a minimum, but it was very effective at getting people to participate and pay attention to small details of the political process that were under the MSM's radar. And most of the bloggers didn't make a living by blogging. You could get healthy civic engagement out of that in the offline world.

Social media incentives are very different. To be fair, it is possible to get good information from both political-science Twitter and election-data Twitter. But an ordinary citizen who doesn't know very much walks into an environment where people can confine their "activism" exclusively to being online; the activism is often of the "personal is political" type; users don't have long histories of past statements for many or most of the people they are engaging with; and many of the rewards are for one-liners at the expense of whatever foolish thing someone on the other side said. This is not a recipe for productive conversations about politics, even with one's own family.

All this leads me to conclude that you COULD make money with a civic engagement site. But scale, and trying to be everything advertiser-friendly to everybody at once, might have to be sacrificed. The sites might have to be replacements for local journalism in some way.

Extremely simple things that Facebook and Twitter could both do are:

a) let users discover whom to follow and what groups to join entirely by themselves;

b) not place content in users' feeds for the sole reason of getting an emotional rise out of them.

Social media companies need to do more to take down obvious hate speech when it is reported to them, but I am not willing to accept that the answer is taking down "disinformation" so much as encouraging people to develop more media literacy.


They could step us through a few examples, the way a programmer would with a debugger. But my complaint is not that the algorithms work too well; it's that they work poorly - if the intent is to increase my engagement by catering to my interests.

I'd never heard of Ray Epps until the NYT piece. My impression, after watching the videos, was that it was very suspicious not only that he wasn't locked up, but that the NYT would write a piece defending him. He seemed the poster boy for "insurrectionist," the person most deserving of being thrown under the jail by the Democrats. I started reading pieces reflecting my suspicions. However, when I did Google News searches on "Ray Epps," I was flooded, and still am, with articles such as - and this is a real example - "The Little Guys Being Taken Down by Trumpworld." Epps was a man urging people, starting the day before and continuing on Jan 6, to go INSIDE the Capitol, and now he is being "taken down by Trumpworld"?

Anyway, people can argue the pros and cons of algorithms feeding my existing preconceptions until the cows come home; that is a conversation worth having. My complaint is about the information pushed on me when the designers of the algorithms clearly have an interest not in feeding my preconceptions but in "educating" me.

(On the other hand, YouTube suggestions are amazingly attentive to my short and long term viewing habits. They have my number, so to speak.)


// They could step us through a few examples, the way a programmer would if he were using a debugger.

Machine learning doesn't work like that. A trained model is not a sequence of interpretable instructions. When I build a system that works, I don't have any idea "why" it works in a sense people would find satisfying.
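Here is roughly what "stepping through with a debugger" would show you (a toy sketch; the weights below are hypothetical stand-ins for trained parameters): every step is just arithmetic on opaque numbers, and no line of code says "because the user likes sports".

```python
# Stepping through a (tiny, hypothetical) trained network: 8 user
# features -> 4 hidden units -> 1 engagement score. In a real system
# W1 and W2 come out of training; nothing in them is human-readable.
import random

random.seed(1)
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
W2 = [random.uniform(-1, 1) for _ in range(4)]

def score(user_features):
    # debugger step 1: hidden = relu(features . W1) -- four unexplained numbers
    hidden = [
        max(0.0, sum(f * W1[i][j] for i, f in enumerate(user_features)))
        for j in range(4)
    ]
    # debugger step 2: score = hidden . W2 -- one final unexplained number
    return sum(h * w for h, w in zip(hidden, W2))

s = score([random.uniform(-1, 1) for _ in range(8)])
# The debugger faithfully shows each multiply-add; "why" never appears.
```

The debugger works fine; it just has nothing satisfying to show you.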


A person's history has to be stored somewhere. A devoted Sean Hannity fan is somehow distinguishable from a Rachel Maddow devotee. In the end, it's all 1's and 0's. I believe I would find it satisfying.

That said, I'm open to being disabused should a piece of reasonable length exist which makes your case. I'm not trying to be argumentative. It's just that I can't imagine how it would be impossible to trace the exact behavior of the algorithms. It's not magic.

Jul 24, 2022 · edited Jul 24, 2022

// I believe I would find it satisfying.

This is The Hitchhiker's Guide to the Galaxy in real life. The answer is 42. Are you actually happy with that? (Actually, the answer is a vector of 2.3 million numbers between 0 and 1, but the volume doesn't make it any easier to comprehend.)

// I can't imagine how it would be impossible

I can't change the boundaries of your imagination. Interestingly, that is part of the problem - people have never encountered anything like this, so they literally cannot conceive of something so complicated. Until I got into the field, I would have said the same thing.

The best I can do is the brain-in-the-jar analogy. A complete map of every neuron firing in my head is not regarded as an acceptable explanation for why I ate eggs for breakfast. We are in a field where the limit of the possible is the neuron map.

If you don't want to believe that, the only thing left is to point you to a stack of math, programming, and machine learning textbooks.


I was trying to be polite, so I used "I can't imagine" and "I believe I would find." Forgive me. Your contention amounts to saying bugs cannot be fixed, nor adjustments made, because the system is too complicated. Manifestly, that is not true. Your combativeness does not strengthen your case.


// Your contention amounts to saying bugs cannot be fixed, nor adjustments made, because the system is too complicated.

No. My contention is that there are no techniques that make these systems sufficiently interpretable _today_. Maybe we will be able to do it 10 years from now, but as of this moment it simply doesn't exist.

// Manifestly, that is not true.

Electrons are both particles and waves. It makes no sense, and yet it is so. Insisting otherwise doesn't change the facts.

// Your combativeness does not strengthen your case.

My apologies - I tend to avoid posting because I gravitate toward edgy analogies and snark. I'm not trying to be combative; it just falls out of me.

Jul 25, 2022 · edited Jul 25, 2022

It has been a few years since I watched it, but Grant Sanderson (a.k.a. 3Blue1Brown) has a really nice series of four short videos on neural networks, if you have the math background to follow it - mainly linear algebra and multivariable calculus. I recall him doing, as he always does, an excellent job of explaining why the different "layers" in such a network really can't be interpreted in a clear manner - even by those who designed the network.

https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi


Thanks much. I'll check it out.


As a person who has worked with algorithms (albeit for trading, not socials), I find this answer resonates with me. A huge issue with this debate is that people don't want to admit the solution means giving up something they value - they want every other kind of fix except the one that actually makes a difference.

Second, I was recently musing on what I think is a different issue: most people prefer one social platform over all others as their primary. Each of these has very different methods of engagement, cultures, ideologies, technologies, populations, etc. At a higher level than algorithms, is the multitude of platforms itself driving wedges into the social fabric? I see the culture-war content as less important (or perhaps as the glue uniting the field of social media) than the fact that everyone is simply veering away from a universal means of communicating.

This is a bit half-baked but seems appropriate in this thread. Perhaps it’s a reflection of Jonathan Haidt’s Tower of Babel metaphor.
