I think much of today’s social unrest is directly attributable to these algorithms that reinforce rather than educate, that inflame rather than inform. The process drives eyeballs, which drives clicks, which drives advertising revenue. It’s central to the business model of firms like Twitter and Facebook. But people like it because it’s “free”. How many of us are willing to pay to subscribe to a social media site that doesn’t do this? Perhaps the old adage is true: you get what you pay for.
How will the algorithm decide what is educational and informative?
Good question. That's above my understanding. Why does the algorithm have to decide? Can't we prompt it, much like we do with search engines, and take responsibility for defining our own areas of interest? I think you would need a paid subscription model to make this work, however. Another option might be a checklist of interests, but those seem to get overridden in my experience.
I guess it doesn’t have to make suggestions at all, but to stop it would require a curtailing of the freedoms we currently exercise. I feel perfectly capable of discerning which content has value and which is intentionally inflammatory. I think one of the big problems is that my feelings aren’t unique. Everyone feels the same way about themselves. They are approaching their feeds with clarity and good judgement. It’s all these other sheep who can’t see that they are being manipulated.
I think your suggestions would make social media far less attractive. And maybe that's the only solution: make social media so user-unfriendly that people spend far less time on it.
This is a good conversation and might be a good topic for Glenn. How to make social media more responsive and less inflammatory.