I spent a lot of my life and money (over five years) thinking about building better algorithms.
We have a bit of a chicken-and-egg problem: is the problem the algorithm, or the preferences of the users?
I'd argue the latter.
What I learned, counter-intuitively, was that the vast majority of people aren't interested in thinking hard. This community is, in large part, an exception, where many members pride themselves on engaging with intellectually challenging material.
That's not the norm. We're not the norm.
My belief that every human was by nature "curious" and wanted to be engaged deeply was proven false.
This isn't to claim that curiosity isn't in our nature, but when testing with huge populations in the US (specifically), that's not how adults behave.
The problem, to me, is deeper and is rooted in education and work systems that demand compliance over creativity. Algorithms serve what users engage with; if users lost interest in ragebait and clickbait and focused on thoughtful content instead, the algorithms would adapt.
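As a rough sketch of that last point, here is a minimal engagement-weighted ranker (the weights and field names are invented, not any real platform's formula). It has no concept of "ragebait" versus "thoughtful"; it simply amplifies whatever the audience historically rewarded:

```python
# Minimal sketch of an engagement-driven ranker (hypothetical weights and
# fields). It has no notion of content quality: it only amplifies whatever
# users historically engaged with, so if the audience's taste changed, its
# output would change with it.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    replies: int
    dwell_seconds: float

def engagement_score(post: Post) -> float:
    # Weights are invented for illustration; real systems learn them.
    return 1.0 * post.clicks + 2.0 * post.replies + 0.1 * post.dwell_seconds

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first.
    return sorted(posts, key=engagement_score, reverse=True)
```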
Aurornis
I don’t know if I buy the explanation that this was due to the feed algorithm. It looks like an artifact of being exposed to X’s current user base instead of their old followers. When Twitter switched to X there was a noticeable shift in the average political leanings of the platform toward alignment with Musk, as many left-leaning people abandoned the platform for Bluesky, Mastodon, and Threads.
So changing your feed to show popular posts on the platform instead of just your friends’ Tweets would be expected to shift someone’s intake toward the average of the platform.
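The arithmetic behind that point is simple. A toy sketch (all leanings invented): score each post on a -1 (left) to +1 (right) axis and compare a pure follower feed against one blended with platform-wide popular posts:

```python
# Toy illustration of the parent's point (all numbers invented): the lean of
# each post is scored on a -1 (left) to +1 (right) axis.
follower_posts = [-0.6, -0.3, -0.5, 0.1]  # hypothetical lean of followed accounts
platform_posts = [0.4, 0.7, 0.2, 0.5]     # hypothetical lean of platform-wide popular posts

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

def blended_lean(follow_share: float) -> float:
    # Expected lean of a feed that mixes the two sources.
    return follow_share * mean(follower_posts) + (1 - follow_share) * mean(platform_posts)

print(blended_lean(1.0))  # pure follower feed: -0.325
print(blended_lean(0.5))  # half "popular" posts: 0.0625, pulled toward the platform average
```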
arwhatever
I deleted my account after many years when X recently made the Chronological Feed setting ephemeral, defaulting back to the Algorithmic Feed each time the page is refreshed.
No way am I going to let that level of outrage-baiting garbage so much as flash before my eyes.
jmugan
Oddly enough, X is the only platform I've been able to teach not to show me culture-war stuff, from either side. It just shows me AI content in the "For You" feed.
rbanffy
And this is why the price for Twitter was, in the end, remarkably low.
periodjet
It couldn’t be possible for a social media feed to influence users in the direction of issues important to the Democratic Party, could it?
Or would that just be considered an unalloyed good?
ppeetteerr
Why anyone is still using X after 2025 is a mystery (I know, it's where everyone is, but the moral implications are wild).
apparent
What does it mean to have someone on a chronological feed, versus the algorithmic one? Does that mean a chronological feed of the accounts they follow? I hardly ever use that, since I don't follow many people, and some people I follow post about lots of stuff I don't care about.
From the study:
> We assigned active US-based users randomly to either an algorithmic or a chronological feed for 7 weeks
mikepurvis
"We need more funding into open protocols that decentralize algorithmic ownership; open platforms that give users a choice of algorithm and platform provider; and algorithmic transparency across our information ecosystem."
This sounds like a call to separate the aggregation step from the content. Reasonable enough, but does it really address the root cause? Aren't we just as polarized in a world where there are dozens of aggregators over the same data and everyone picks the one that most indulges their specific predilections for engagement, rage, and clicks?
What does "open" really buy you in this space?
Don't get me wrong, I want this figured out too, and maybe this is a helpful first step on the way to other things, but I'm not quite seeing how it plays out.
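For what it's worth, one concrete reading of "a choice of algorithm" is ranking decoupled from the content store, something like the hypothetical sketch below (not any real protocol's API):

```python
# Hypothetical sketch of "user choice of algorithm": the content store hands
# back raw posts, and the user picks which ranker to apply. Just the shape
# such a separation could take.
from typing import Callable

Post = dict  # e.g. {"text": ..., "ts": ..., "engagement": ...}
Ranker = Callable[[list[Post]], list[Post]]

def chronological(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=lambda p: p["ts"], reverse=True)

def most_engaging(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

RANKERS: dict[str, Ranker] = {
    "chronological": chronological,
    "most_engaging": most_engaging,
}

def build_feed(posts: list[Post], choice: str) -> list[Post]:
    # The user, not the platform, selects the ranker. The worry above is that
    # most users would still pick whichever ranker most indulges them.
    return RANKERS[choice](posts)
```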
kettlecorn
Underrated among X's changes is how blue-checkmark users are shown first underneath popular tweets. Most people who pay for blue checkmarks are either sympathetic to Musk's ideology or indifferent to it, and many are there to make money from engagement.
The result is that underneath any tweet that gains traction you will see countless blue-checkmark users either trolling for their side or engagement-baiting.
People who are ideologically neutral or not aligned with Musk are completely drowned out below the hundreds of bulk replies from blue checkmarks.
It used to be that if you saw someone, like a tech CEO, take an interesting position, you'd get a varied and interesting discussion in the replies. The algorithm would surface replies from people you follow in particular, and often you'd see a productive exchange that actually mattered. Now it's almost entirely drivel, and you have to scroll through rage bait and engagement slop before reaching the crumbs of meaningful exchange.
It has had a chilling effect on productive intellectual conversation while also accelerating the polarization of the platform by scaring away many people who care about measured conversation.
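A crude model of the mechanism being described here, with the caveat that X's actual reply ranking is not public and the boost value is invented: any large flat bonus for paid verification pushes unverified replies below the fold.

```python
# Crude model of the drowning-out effect (the boost value is invented; X's
# real reply ranking is not public).
from dataclasses import dataclass

@dataclass
class Reply:
    author: str
    verified: bool      # paid blue checkmark
    base_score: float   # whatever relevance signal the ranker would otherwise use

VERIFIED_BOOST = 100.0  # hypothetical: large enough to dominate base_score

def rank_replies(replies: list[Reply]) -> list[Reply]:
    return sorted(
        replies,
        key=lambda r: r.base_score + (VERIFIED_BOOST if r.verified else 0.0),
        reverse=True,
    )

# With hundreds of verified bulk replies, even the highest-base-score
# unverified reply sorts below all of them.
```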
ortusdux
There was a great study from a decade ago showing that baseball cards held by lighter-skinned hands outsold the same cards held by darker-skinned hands on eBay.
An algorithm designed today with the goal of helping users pick the most profitable product photo would probably steer people toward using Caucasian models, and because eBay's cut is a percentage, eBay would be incentivized to use it.
Studies also show that conservatives tend to respond more positively to sponsored content. If that holds, algorithm-driven, ad-sponsored social sites will tend toward conservative content.
https://onlinelibrary.wiley.com/doi/abs/10.1111/1756-2171.12...
https://www.tandfonline.com/doi/full/10.1080/00913367.2024.2...
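A toy version of that incentive (all numbers invented): if a photo-picking model maximizes expected sale revenue and the platform takes a percentage cut, the platform profits from whatever bias lifts the price, including the one in the study above.

```python
# Toy version of the incentive (all numbers invented): pick the photo that
# maximizes expected revenue; the platform's percentage cut means it profits
# from whatever bias lifts the expected price.
PLATFORM_CUT = 0.10  # hypothetical 10% fee

def pick_photo(photos: list[dict]) -> dict:
    # Each photo: {"id", "predicted_price", "predicted_sell_prob"}
    return max(photos, key=lambda p: p["predicted_price"] * p["predicted_sell_prob"])

photos = [
    {"id": "photo_a", "predicted_price": 52.0, "predicted_sell_prob": 0.30},
    {"id": "photo_b", "predicted_price": 50.0, "predicted_sell_prob": 0.25},
]
best = pick_photo(photos)
platform_take = PLATFORM_CUT * best["predicted_price"] * best["predicted_sell_prob"]
print(best["id"], round(platform_take, 2))  # photo_a 1.56
```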
It's interesting that the study's takeaway is a general bias, which I wouldn't necessarily have guessed from my own experience. My X "For You" feed mostly does not read as pro-Trump; instead it pushes very intense pro-European and pro-Canadian economic and political separation from the USA, along with very negative narratives about the USA. It does occasionally surface pro-Trump posts, but perhaps those don't sway me the same way, given that I'm a progressive American.
That said, the Trending tab does tend to push a very heavy MAGA-aligned narrative, in a way that seems comical to me, but I suppose there must be people who genuinely take it at face value, and maybe that does push people.
Less to do with the article:
The more I think about it, I'm not really sure why I use X these days, other than the fact that I don't have much of an in-person social life outside of work. Sometimes it can be enjoyable, but honestly my main takeaway is that microblogging as a format is genuinely terrible, and X in particular seems to feed you the angriest things possible. It may be exciting to debate opinions, but it's hardly possible to have a nuanced or careful discussion with limited characters and someone on the other end who just wants to shout over you.
I miss being a kid and going onto forums for Scratch or Minecraft or whatever. The internet felt way more fun when it was just making cool things and chatting with people about them. I think the USA sort of felt more that way too, though it's hard to know if that was just my privilege. When I write about X, it uncomfortably parallels how my interactions with family and friends have evolved in real life.
dagelf
There's much more diversity of thought on the right; did they get more open-minded?
fluoridation
I honestly don't understand how or why people use Twitter to keep up with the news. The only thing I use it for is following artists, and even that has been declining in recent weeks as most of my favorites move over to Bluesky. Maybe I'm just a long-winded idiot, but the character limits barely let me have a conversation on either platform. How are people consuming news like this?
It just baffles me how different my experience of using the platform is. I literally do not see any news. I'm not entirely convinced that it's Twitter being biased and not just giving each person what they most engage with.
Earlier source: https://www.nature.com/articles/s41586-026-10098-2 (https://news.ycombinator.com/item?id=47064130)
You have to find good people. Bad people will find you.
kypro
I really wish these points were made in a non-political, platform-agnostic way, because if you care about this issue it's ultimately unhelpful to frame it as a problem with just X or conservatives, given how politically divided people are.
I do share the author's concerns, and I was also concerned back in the day when Twitter was quite literally banning people for posting the wrong opinions. But it's interesting how the people who used to complain about political bias now seem not to care, while the people who argued "Twitter is a private company, they can do what they want" suddenly think X's conservative-leaning algorithm is a problem. It's hard to get people across political lines to agree when we do this.
In my opinion there are two issues here, and neither is politically partisan.
The first is that we humans are flawed and algorithms can use our flaws against us. I've repeatedly spoken about how much I love YouTube's algorithm because, despite some people calling it an echo chamber, I think it's one of the few recommendation algorithms that serves a genuinely diverse range of content. But I suspect that's because I genuinely like consuming a very wide range of political content, and I know I'm in a minority there (probably because I'm interested in politics as a meta subject but don't have strong political opinions myself). My point is that these algorithms can work really well if you genuinely want to watch a diverse range of political content.
The second is that some recommendation algorithms (and search algorithms) seem genuinely biased. I'd argue the bias isn't a problem in itself (they are private companies and can do what they want); the problem is that the bias isn't transparent. X very clearly has a conservative bias, and Bluesky very clearly has a political bias of its own. Neither would admit it, so people incorrectly assume they're being served something fairly representative of public opinion rather than something curated, whether by moderation or by algorithm tweaks.
What we need is honesty: from individuals, who should confront their own biases, and from platforms that pretend not to have a bias but do, and thereby influence where people believe the center ground is.
We can all be more honest with ourselves. If you exclusively use X or Bluesky, it's worth asking why, especially if you're engaging with political content on these platforms. But I also think we need more regulation around algorithmic transparency. I don't necessarily think it's a problem for a platform to recommend certain content above other content, or to have some algorithm that bans users who post content it doesn't like, but these decisions should be far more transparent than they are today, so people can at least factor them into how they perceive the neutrality of the content they're consuming.
jmyeet
I blame Google for a lot of this. Why? Because they, more than anyone else, succeeded in spreading the propaganda that "the algorithm" was some unbiased, even all-knowing, black box with no human influence whatsoever. They did this for obvious self-serving reasons: to defend how Google properties ranked in search results.
But now people seem to think newsfeeds, which increase the influence of "the algorithm", are purely a result of engagement, and (IMHO) nothing could be further from the truth.
Factually accurate and provable statements get labelled "misinformation" (either by human intervention or by other AI systems ostensibly created to fight misinformation) and thus get lower distribution. All while conspiracy theories get broad distribution.
Even ignoring "misinformation", certain platforms will label some content as "political" and other content as not when a "political" label often comes down to whether or not you agree with it.
One of the most laughable incidents of putting a thumb on the scale was when Grok started complaining about white genocide in South Africa in completely unrelated posts [1].
I predict a coming showdown over Section 230 about all this. Briefly, S230 establishes a distinction between being a publisher (e.g., a newspaper) and a platform (e.g., Twitter) and gives platforms broad immunity from liability for user-generated content. At the time (the 1990s), this was a good thing.
But now we have a third option: social media platforms have become de facto publishers while pretending to be platforms. How? Ranking algorithms, recommendations and newsfeeds.
Think about it this way: imagine you had a million people in an auditorium and you were taking audience questions. What if you only selected questions that were supportive of the government or a particular policy? Are you really a platform? Or are you selecting user questions to pretend something has broad consensus or to push a message compatible with the views of the "platform's" owner?
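That auditorium analogy is just selection bias, which is easy to make concrete (numbers invented): even if only 30% of submitted questions support a policy, a selector that filters for support makes the on-stage "consensus" look unanimous.

```python
# The auditorium analogy as arithmetic (numbers invented): true support is
# 30%, but a selector that filters for supportive questions makes on-stage
# "consensus" look unanimous.
import random

random.seed(0)
submitted = [random.random() < 0.30 for _ in range(1_000_000)]  # True = supportive

true_support = sum(submitted) / len(submitted)
on_stage = [q for q in submitted if q][:20]  # the "platform's" picks
apparent_support = sum(on_stage) / len(on_stage)

print(f"actual support: {true_support:.0%}, apparent support on stage: {apparent_support:.0%}")
```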
My stance is that if you, as a platform, actively suppress and promote content based on politics (as IMHO they all do), you are a publisher, not a platform, in the Section 230 sense.
[1]: https://www.theguardian.com/technology/2025/may/14/elon-musk...
Oh my. Now that X is affecting people's politics (for the better IMO), suddenly people care about the influence of algorithms over politics...
barfiure
This person is confused. Trump was a well-known pussy grabber for decades. Epstein was anything but a secret, it seems, given how many politicians, celebrities, and moguls he rubbed elbows with. Jerry stopping by the island for a lemonade and a spot of lunch with his high-school-aged girls? Yeah.
It comes down to this: you can have visibility into things and yet those in power won’t care whatsoever what you may think. That has always been the case, it is the case now, and will continue to be in the future.
Lovely thought, Ben. Good to hear from you!