Your claims far exceed your evidence. You don't account for the basic premise of the 'radicalisation' thesis: that algorithms fine-tune recs based on user data - i.e. previously watched videos, search terms, etc. You only use an 'anonymous' user's first video recommendation.
— Vikram Singh (@wordsandsuch) December 28, 2019
4. My new article explains in detail. It takes aim at the NYT (in particular, @kevinroose), which has been on a myth-filled crusade vs social media. We should start questioning the authoritative status of outlets that have soiled themselves with agendas. https://t.co/bt3mMscJi6
— Mark Ledwich (@mark_ledwich) December 28, 2019
5/ As for b), here is monthly data since Dec 2019. It's not totally clean - there were channels added and some minor changes in processes throughout that period - so it's slightly noisy, but representative of what happened over that period. pic.twitter.com/iJz3IOXXfS
— Mark Ledwich (@mark_ledwich) December 29, 2019
That’s not to say it isn’t important to understand how people use YouTube and how YouTube makes it so -- it’s desperately important for all kinds of reasons. But the narrow focus on radicalization will, to me, always be overly mechanistic and incomplete.
— Joe Bernstein (@Bernstein) December 29, 2019
More broadly, I'm concerned there's a way reporting on tech companies cedes the validity of their dismal vision of humanity — input output machines — before the argument has even started
— Joe Bernstein (@Bernstein) December 29, 2019
YouTube pushes politics mainstream rather than radicalizing? https://t.co/BicadvS9Cm pic.twitter.com/wqCA6b2Kzi
— Elad Gil (@eladgil) December 28, 2019
so "the late 2019 version of the algorithm doesn't do the thing" means that past criticisms were invalid and a "crusade" even though youtube has publicly announced changes/fixes to the algorithm multiple times in the last couple years?
— Katelyn Gadd (@antumbral) December 28, 2019
this is the key point about studying proprietary algorithms of big platforms w/o inside access. anyone who really cares about this work will tell you that opaque platform design and super personalized algo decisions at massive scale are what holds back our understanding https://t.co/hrxd3mNftP
— Charlie Warzel (@cwarzel) December 29, 2019
"What can be asserted without evidence can also be dismissed without evidence."
— Tim Pool (@Timcast) December 29, 2019
It's amazing how so many people have a view about radicalization that was never backed by data.
Two studies have come out so far debunking the rabbit-hole narrative, but they still cling to it. https://t.co/F4Sh7CDAEy
One explanation: YouTube is better thanks to the NYT (and others’) investigations of its previous algorithms. https://t.co/FHzUqLhoFA
— Can Duruk (@can) December 28, 2019
I am glad that YouTube has made the changes it has. I hope they work! I am also reasonably certain that this wouldn’t have happened nearly as quickly without journalists and experts like @beccalew / @zeynep digging in.
— Kevin Roose (@kevinroose) December 29, 2019
Let’s not forget: the peddlers of extreme content adversarially navigate YouTube’s algorithm, optimizing the clickbaitiness of their video thumbnails and titles, while reputable sources attempt to maintain some semblance of impartiality. (None of this is modeled in the paper.)
— Arvind Narayanan (@random_walker) December 29, 2019
I agree this is a large limitation, albeit standard for the literature on this phenomenon. What's interesting to me is that despite this study and others, many (not necessarily Arvind) are nonetheless willing to accept at face value the NYT's core claim, which was made without ANY data. https://t.co/0ciTeGv4Kz
— Littlefoot (@LTF_01) December 29, 2019
Does YouTube's algo direct people down extreme right wing rabbit holes? Study of 800 channels finds it discourages viewers from visiting radicalizing or extremist content. Algo favors mainstream media over independent YouTube channels. Slants left/neutral
— André Spicer (@andre_spicer) December 28, 2019
https://t.co/aFszFEGBQC
Wow.
— Austen Allred (@Austen) December 28, 2019
Very, very well done analysis of the YouTube algorithm that shows, contrary to NYT claims, YouTube actually *reduces* radicalization.
Two questions:
1. If this is true, what does it say about YouTube?
2. If this is true, what does it say about the New York Times? https://t.co/rlkIl8b4MU
The idea that users are manipulated by an impersonal algorithm into a world of conspiracy theorists, provocateurs and racists is a story that the mainstream media has been eager to promote https://t.co/06JfdfWphV via @nuzzel
— hussein kanji (@hkanji) December 28, 2019
Also, many of the first people to identify the corrosive effects of the rabbit hole were Google employees, who, afaik, had no myth-filled crusades and rarely soiled themselves. https://t.co/1GTCfxqss5
— Mark Bergen (@mhbergen) December 29, 2019
Radicalization via YouTube, as widely understood, is when someone watches a few partisan videos and unwittingly starts a feedback loop in which the algorithm gradually recommends more and more extreme content and the viewer starts to believe more and more of it.
— Arvind Narayanan (@random_walker) December 29, 2019
the last story i worked on at @BuzzFeedNews last year tried to suss this out, but even when automating user journeys the only thing that is clear is that YouTube’s recommendation algorithm isn’t a partisan monster — it’s an engagement monster https://t.co/sQ3wp4VrxA pic.twitter.com/ugOknYkVAq
— Charlie Warzel (@cwarzel) December 29, 2019
A crucial perspective on the recent non-peer-reviewed paper about "youtube radicalization." Quantifying radicalization isn't just hard, it's impossible. Sociologists, like @JessieNYC, have been saying this for over a decade.
— Joan Donovan, PhD (@BostonJoan) December 29, 2019
Studying socio-technical systems is messy. https://t.co/5V3ksn3YrO
2/ Anonymous recs (even when averaged out over all users) could have a different influence compared to personalized ones. This is a legit limitation, but one that applies to all of the studies on recs so far.
— Mark Ledwich (@mark_ledwich) December 29, 2019
A new paper has been making the rounds with the intriguing claim that YouTube has a *de-radicalizing* influence. https://t.co/TTtWR0uBgi
— Arvind Narayanan (@random_walker) December 29, 2019
Having read the paper, I wanted to call it wrong, but that would give the paper too much credit, because it is not even wrong. Let me explain.
There’s still tons of work to be done analyzing the role of algorithmic recommendation in radicalization. No one can legitimately claim an authoritative answer bc, frankly, no one has access (yet) to the granular, longitudinal, user-specific data that we need! https://t.co/9X2JxAIlCl
— Brian Hughes (@MrBrianHughes) December 29, 2019
YouTube's recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content; it favors mainstream media & cable news content over independent channels with a slant towards left-leaning or politically neutral channels
— Matt Grossmann (@MattGrossmann) December 28, 2019
https://t.co/9JroVgUqCI
An interesting critique of the “Actually, YouTube doesn’t radicalize people” article currently making the rounds, though ignore the bit about the co-author’s intentions.
— Jeffrey Sachs (@JeffreyASachs) December 29, 2019
Attn. @Noahpinion https://t.co/FD54eKSdYF
The answer: they didn’t! They reached their sweeping conclusions by analyzing YouTube *without logging in*, based on sidebar recommendations for a sample of channels (not even the user’s home page because, again, there’s no user). Whatever they measured, it’s not radicalization.
— Arvind Narayanan (@random_walker) December 29, 2019
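For readers unfamiliar with what "sidebar recommendations for a sample of channels, without logging in" looks like in practice, here is a minimal sketch of that kind of logged-out crawl. It is not the authors' actual crawler: the assumptions that the sidebar can be read out of the ytInitialData blob embedded in a watch page, and that its entries appear as compactVideoRenderer objects, are implementation details of YouTube's markup that change over time. The point is only that no account, watch history, or personalization enters the request.

```python
import json
import re
import requests

def fetch_sidebar_recs(video_id: str) -> list[dict]:
    """Fetch a watch page with no cookies or login and pull out the sidebar
    ("Up next") recommendations. A sketch only: it assumes the page embeds its
    data in a ytInitialData JSON blob and that sidebar items appear as
    compactVideoRenderer objects, details that change as YouTube updates."""
    html = requests.get(
        f"https://www.youtube.com/watch?v={video_id}",
        headers={"Accept-Language": "en-US"},  # no session, no cookies, no account
        timeout=30,
    ).text

    match = re.search(r"ytInitialData\s*=\s*(\{.+?\})\s*;\s*</script>", html, re.DOTALL)
    if not match:
        return []
    data = json.loads(match.group(1))

    recs: list[dict] = []

    def walk(node):
        # Recursively collect sidebar recommendation renderers from the blob.
        if isinstance(node, dict):
            if "compactVideoRenderer" in node:
                item = node["compactVideoRenderer"]
                recs.append({
                    "video_id": item.get("videoId"),
                    "channel": item.get("longBylineText", {})
                                   .get("runs", [{}])[0]
                                   .get("text"),
                })
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for child in node:
                walk(child)

    walk(data)
    return recs

# The recommendations shown next to one seed video, identical for every
# anonymous visitor because nothing about a viewer's history enters the request.
print(fetch_sidebar_recs("dQw4w9WgXcQ")[:10])
```

Whatever such a crawl measures, it is the same for every anonymous visitor, which is exactly the limitation being debated in this thread.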
3. Check out https://t.co/w5YciDywBi to have a play with this new dataset. We also include categorization from @manoelribeiro et al. and other studies so you can see some alternative groupings.
— Mark Ledwich (@mark_ledwich) December 28, 2019
All of the code and data is free to review and use https://t.co/vsLVK7wYt4 pic.twitter.com/s17z9kr5lA
This story uses data about a single user’s journey through YouTube — 12,000 videos spanning 4 years. Personalized, logged-in, longitudinal data is how radicalization has to be understood, since it’s how it’s experienced on a platform like YouTube.
— Kevin Roose (@kevinroose) December 29, 2019
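To make concrete what "personalized, logged-in, longitudinal data" buys you that a snapshot crawl does not, here is a minimal sketch of the kind of over-time analysis a single user's exported watch history supports. The file name watch_history.csv, the columns, and the channel labels are assumptions made for illustration; this is not the analysis behind the Times story.

```python
import pandas as pd

# Hypothetical export: one row per watched video, with a timestamp and a label
# assigned by the researcher to the video's channel. The file name and columns
# ("watched_at", "label") are assumptions made for this sketch.
history = pd.read_csv("watch_history.csv", parse_dates=["watched_at"])

# Count views per month and per channel label, then normalize each month into
# shares. A logged-out snapshot cannot show this kind of drift over time; a
# personal, longitudinal history can.
counts = (
    history
    .groupby([pd.Grouper(key="watched_at", freq="MS"), "label"])
    .size()
    .unstack(fill_value=0)
)
monthly_share = counts.div(counts.sum(axis=1), axis=0)

print(monthly_share.round(2))
```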
If you’re wondering how such a widely discussed problem has attracted so little scientific study before this paper, that’s exactly why. Many have tried, but chose to say nothing rather than publish meaningless results, leaving the field open for authors with lower standards.
— Arvind Narayanan (@random_walker) December 29, 2019
4/ I plan on begging for stats from YouTube creators - to export data ... from their "YouTube analytics traffic sources -> suggested video" reports to compare it with our data. In the meantime, you should accept this as the best quality data you have to go on. pic.twitter.com/vsMVgoSguh
— Mark Ledwich (@mark_ledwich) December 29, 2019
For example, think about how silly it would be to focus exclusively on YouTube as the vector for radicalizing ISIS members without thinking about recent history, class, temperament, etc
— Joe Bernstein (@Bernstein) December 29, 2019
This is important for people who care about YouTube radicalisation and may have read the flawed research (which Google will have loved) downplaying the problem. https://t.co/iM8rjjFdyM
— Paul Lewis (@PaulLewis) December 29, 2019
After tussling with these complexities, my students and I ended up with nothing publishable because we realized that there’s no good way for external researchers to quantitatively study radicalization. I think YouTube can study it internally, but only in a very limited way.
— Arvind Narayanan (@random_walker) December 29, 2019
I would love to have more data about individual users’ trips through YouTube from 2014-2018, but it’s extraordinarily hard to get. (I only got it because @Faradayspeaks was incredibly generous and trusting enough to send me his entire watch history.)
— Kevin Roose (@kevinroose) December 29, 2019
8/ Recommendation-rabbit-hole proponents have been ignoring evidence and searching for compelling anecdotes since 2018.
— Mark Ledwich (@mark_ledwich) December 29, 2019
I wish you all the best in dealing with your dissonance. I hope it's not too painful.
END
I also think radicalization is much more complex than just “the algorithm.” But the algorithm is important (70% of time spent on YouTube results from it) and I would love more people with actual AI/ML expertise studying the mechanics.
— Kevin Roose (@kevinroose) December 29, 2019
Incidentally, I spent about a year studying YouTube radicalization with several students. We dismissed simplistic research designs (like the one in the paper) by about week 2, and realized that the phenomenon results from users/the algorithm/video creators adapting to each other.
— Arvind Narayanan (@random_walker) December 29, 2019
If anything this study (whose methodology is super iffy, given that it analyzes logged-out recommendations and doesn’t account for personalization) shows that YouTube’s algo changes to reduce extreme content recs are working. Which, great!
— Kevin Roose (@kevinroose) December 28, 2019
Very important study here of absence of radicalization through YouTube recommendation algorithm. Scholars need comprehensive access to YouTube data to confirm these findings. (This study demonstrates why YouTube should not be afraid to share its data with researchers!) https://t.co/8XsdxHc2JZ
— nathaniel persily (@persily) December 28, 2019
The premise of this article/study is so odd. Studying the YouTube algo of late 2019 (after YouTube made some very well-publicized algo changes to reduce recommendations of extreme content) doesn’t say anything about what YouTube recommendations were like before then. https://t.co/wgVbbGNecX
— Kevin Roose (@kevinroose) December 28, 2019
There’s been some renewed interest in this story because of a poorly designed study that claimed to debunk it. A few points! https://t.co/7UojMUuwWR
— Kevin Roose (@kevinroose) December 29, 2019
This could have been interesting empirical research, but a conclusion like this kind of gives the game away. Oh well! pic.twitter.com/T1iZEdVmyD
— Kevin Roose (@kevinroose) December 28, 2019
Examining millions of YouTube recommendations over the course of a year, two researchers have determined that the platform in fact combats political radicalization. https://t.co/cXG96LBEs2
— Will Feuer (@WillFOIA) December 28, 2019
Yep, that "paper" isn't even wrong. One tragedy of all this is that, at the moment, only the companies can fully study phenomena such as the behavior of recommendation algorithms. There are some great external studies that do give us a sense—but that's all we get. Yet. https://t.co/8PvHkskqZL
— zeynep tufekci (@zeynep) December 29, 2019
The only people who have the right data to study radicalization at scale work at YouTube, and they have made changes in 2019 they say have reduced “borderline content” recs by 70%. Why would they have done that, if that content wasn’t being recommended in the first place?
— Kevin Roose (@kevinroose) December 29, 2019
Others have pointed out many more limitations of the paper, including the fact that it claims to refute years of allegations of radicalization using late-2019 measurements. Sure, but that’s a bit like pointing out typos in the article that announced "Dewey Defeats Truman".
— Arvind Narayanan (@random_walker) December 29, 2019
Fantastic thread on why quantitative methods are often ill-suited to studying radicalization on YouTube via the algorithm. https://t.co/LiiIsfPVzN
— Becca Lewis (@beccalew) December 29, 2019
1. I worked with Anna Zaitsev (Berkeley postdoc) to study YouTube recommendation radicalization. We painstakingly collected and grouped channels (768) and recommendations (23M) and found that the algo has a deradicalizing influence.
— Mark Ledwich (@mark_ledwich) December 28, 2019
Pre-print: https://t.co/1NneHDnKHD
2. It turns out the late 2019 algorithm
— Mark Ledwich (@mark_ledwich) December 28, 2019
*DESTROYS* conspiracy theorists, provocateurs and white identitarians
*Helps* partisans
*Hurts* almost everyone else.
The chart compares an estimate of the recommendations presented (grey) to those received (green) for each of the groups: pic.twitter.com/5qPtyi5ZIP
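For readers puzzling over what "recommendations presented (grey) to those received (green)" means, the sketch below shows one way such an aggregate could be computed from a table of scraped recommendations. The file name recommendations.csv, the column names, and the idea of weighting each recommendation by the source video's view count are assumptions made for illustration, not necessarily the paper's exact procedure.

```python
import pandas as pd

# Hypothetical table of scraped recommendations, one row per (source video,
# recommended video) pair. Column names are assumptions for this sketch:
#   from_group  - ideological group of the channel the sidebar appeared on
#   to_group    - group of the recommended channel
#   from_views  - view count of the source video, used as a rough proxy for
#                 how often that sidebar was actually shown
recs = pd.read_csv("recommendations.csv")

# "Presented" traffic leaves a group; "received" traffic arrives at a group.
presented = recs.groupby("from_group")["from_views"].sum()
received = recs.groupby("to_group")["from_views"].sum()

summary = pd.DataFrame({
    "presented_share": presented / presented.sum(),
    "received_share": received / received.sum(),
}).fillna(0)

# Groups whose received share exceeds their presented share are, on this crude
# measure, advantaged by the recommendation system; the reverse means they are
# disadvantaged.
print(summary.sort_values("received_share", ascending=False).round(3))
```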
3/ There are practical reasons that make this extremely difficult - you would need a Chrome extension (or equivalent) that captures real recommendations and click-through stats from a representative set of users. I don't plan on doing that.
— Mark Ledwich (@mark_ledwich) December 29, 2019
In any event, I’m glad people are talking about this. I’m proud of the work we’ve done, and I hope that YouTube will open up data to third-party researchers like @random_walker, so that future studies can be more meaningful than what's out there now. https://t.co/YOJxps5qvk
— Kevin Roose (@kevinroose) December 29, 2019
The kids are alright. Not alt-right. https://t.co/zQQmYeb3ur
— Melissa Chen (@MsMelChen) December 28, 2019
The key is that the user’s beliefs, preferences, and behavior shift over time, and the algorithm both learns and encourages this, nudging the user gradually. But this study didn’t analyze real users. So the crucial question becomes: what model of user behavior did they use?
— Arvind Narayanan (@random_walker) December 29, 2019
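As a purely illustrative aside, the feedback loop being described can be written down as a toy model: a user with a drifting preference and a recommender that both tracks the preference and nudges it. None of the numbers, parameters, or update rules below come from the paper, the NYT reporting, or YouTube; the simulate function is a sketch of why "what does the algorithm recommend to nobody in particular?" and "what does the coupled user/algorithm system converge to?" are different questions.

```python
import random

def simulate(steps: int = 200, nudge: float = 0.02, drift: float = 0.1) -> float:
    """Toy co-adaptation loop: the user's taste for extreme content and the
    recommender's estimate of that taste pull on each other. All numbers and
    update rules are made up for illustration."""
    preference = 0.1   # user's current taste for extremity, on a 0..1 scale
    estimate = 0.1     # recommender's estimate of that taste

    for _ in range(steps):
        # The recommender serves content near its estimate, biased slightly
        # toward more "engaging" (here: more extreme) material.
        served = min(1.0, estimate + nudge + random.gauss(0, 0.01))
        # The user's preference drifts toward what was watched...
        preference += drift * (served - preference)
        # ...and the recommender updates its estimate from the user's behavior.
        estimate += 0.5 * (preference - estimate)

    return preference

print(f"preference after co-adaptation: {simulate():.2f}")
print("a one-shot, anonymous crawl only ever observes the first step")
```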
Pretty stunning indictment of that “YouTube is fine really” paper and at least one of its authors https://t.co/WzaGRywjih
— Christopher Mims (@mims) December 29, 2019
These discussions/disagreements about radicalization and the algorithm are really important, but as always, they seem to me to presume some kind of set understanding or control group state of the human mind and human context, which are irreducibly complex
— Joe Bernstein (@Bernstein) December 29, 2019
Algorithmic Radicalization — The Making of a New York Times Myth https://t.co/QkPqJ1W060 pic.twitter.com/7FwAqfrCb2
— Rich Tehrani (@rtehrani) December 29, 2019