frankfrank13
> Our findings highlight how asymmetric exposure to content moderation shocks can reshape market competition, drive consumers toward less regulated spaces, and alter substitution patterns across platforms.
For a content platform, it seems natural that a sharp 80% reduction in content would reduce the number of visitors. I wonder how this would play out if Spotify or YouTube removed 80% of their catalogs.
I am reminded of how people said that Spotify "won" over music piracy simply by being more convenient. Remove 80% of the music there, and my bet is that music piracy would see a massive upswing, along with growth at legal competitors. A 50% reduction in users does not sound unthinkable.
ssalka
> The shift did not take place immediately. Within six months, traffic at smaller, less regulated sites had grown by 55%, and at larger sites by 10%, with point estimates implying that the traffic was entirely diverted to competing firms. This suggests that regulating only the largest platforms may push traffic to fringe sites and less controlled spaces.
This rings true to me, especially in the recent context of AI adopters looking for uncensored alternatives. This line of thinking applies not only to models (many are moving away from OpenAI/ChatGPT in search of less restricted ones) but also to sites providing AI resources. Just the other day, CivitAI (the current leader for distributing custom checkpoints and LoRAs for image-centric models) announced it was taking a much more heavy-handed approach to moderation due to pressure from Mastercard/Visa. Its users are outraged, and I think many will leave in search of a safe haven for their models and generations going forward.
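As a rough back-of-envelope check of the "entirely diverted" reading, here is a minimal Python sketch. Only the 10% and 55% growth rates come from the quoted abstract; every traffic level and the shocked platform's loss rate below are hypothetical placeholders I made up, not figures from the paper.

    # Conservation-of-traffic check: is the shocked platform's loss
    # roughly equal to its competitors' combined gains?
    # All traffic levels here are hypothetical, not data from the paper.
    shocked_before = 100.0   # visits at the moderated platform, pre-shock
    large_rivals = 150.0     # combined visits at large competitors, pre-shock
    small_rivals = 40.0      # combined visits at small, less regulated sites

    loss = 0.30 * shocked_before                       # assume a ~30% traffic drop
    gains = 0.10 * large_rivals + 0.55 * small_rivals  # growth rates from the abstract

    print(f"lost: {loss:.1f}  gained by rivals: {gains:.1f}")
    # lost: 30.0  gained by rivals: 37.0
    # Full diversion means loss ~= gains; with these made-up shares the
    # rivals' measured growth is enough to absorb the entire outflow.

Whether the accounting actually balances depends on the real pre-shock market shares, which is presumably what the paper's point estimates are doing.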
motolov
Interesting abstract.
I can see similar concepts applying to e.g. government regulation, censorship, etc. (only one side is monitored, and the other sides absorb the monitored side's content).
BTW, it looks like your PDF is missing its figures/illustrations (there is placeholder text).
Not sure if this was a publishing tech issue or if they were missed in authoring.
trod1234
This study is an example of how you don't do science.
It fails to define critical terms, and uses those terms in contexts where, depending on the scope intended, the paper may contradict itself, with the clarifying context absent (e.g. welfare).
It fails to account for impactful events in the same time period (e.g. internal catalog search changes, external search-engine changes, and other changes stemming from requirements the FOSTA-SESTA Act of 2018 imposed on all large businesses operating in the US).
It fails to vet the data collection methodology or identify the limitations of the dataset (Similarweb; bad data in, bad data out).
Most people searching for porn use protection, in the form of privacy tools that thwart tracking; the study fails to address its collection methodology's shortcomings when collection is thwarted this way. It is also entirely unclear how the study controls for duplicate signals.
It fails by inserting value-based statements and asserting false narratives or flawed reasoning (a null hypothesis without alternatives, in a stochastic environment), and does so without proper basis (e.g. regarding the loss of 80% of content and the drastic changes in site discoverability/usability, in aggregate).
There are a few phrasings which, coupled with the poor methodology, make me think this paper was in large part generated by AI, potentially as a pre-fabricated narrative (soft propaganda).
The reasoning does not follow logically, and it fails at obvious points where an AI would fail. On its face, this does not look like a sound study.
> Our findings highlight how asymmetric exposure to content moderation shocks can reshape market competition, drive consumers toward less regulated spaces, and alter substitution patterns across platforms.
Or at least one very specific market and platform.
"What do you do?" "Study porn."