Anti-Muslim Hate Speech Has Been A Problem On Social Media For Years [www.buzzfeednews.com]
Why can’t YouTube automatically catch re-uploads of disturbing footage? [www.theverge.com]
New Zealand shooting: Facebook says it removed 1.5 million videos of mosque attack within 24 hours [www.nbcnews.com]
Extremists Understand What Tech Platforms Have Built [www.theatlantic.com]
Facebook, YouTube work to remove copied New Zealand shooting videos [www.cnbc.com]
Recode Daily: Facebook is left cleaning up a mess after the New Zealand terrorist attack was streamed on the platform [www.recode.net]
New Zealand Massacre Video Clings to the Internet’s Dark Corners [www.wsj.com]
Valve takes down user tributes memorializing the New Zealand shooting suspect [www.theverge.com]
Facebook says it removed 1.5 million videos of the New Zealand mosque attack [www.reuters.com]
Facebook removed 1.5 million videos of mosque attacks [www.radionz.co.nz]
Facebook and Google will both fail to prevent another murder live stream. They can hire more moderators or tweak algorithms, but their core business models are designed for them to be platforms, not publishers. Moderation is an afterthought to the mission of selling data / ads
— Tom Warren (@tomwarren) March 18, 2019
I'm not sure that headline says what it means. "Disabling content moderation" != "Automatically taking suspicious content offline without the usual human review".
— Liz Fong-Jones (方禮真) (@lizthegrey) March 18, 2019
The former sounds like it disabled all reviews and let everything stay up, which my impression disagrees with. https://t.co/QxijU960IP
hmmm, if you were going to go down this path, what you'd do is pause uploads from "new" accts, or accts w policy strikes against them for previous Community Standards violations.
— Hunter Walk ☕️ (@hunterwalk) March 18, 2019
probability accts w long, productive histories are going to suddenly upload snuff films is near 0
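Walk's proposal is concrete enough to sketch. Here is a minimal illustration in Python of that kind of incident-mode upload gate; the `Account` fields (`age_days`, `strikes`, `uploads`) and the thresholds are entirely hypothetical stand-ins for whatever signals a real platform tracks:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int   # account age (hypothetical field)
    strikes: int    # prior Community Standards violations (hypothetical)
    uploads: int    # lifetime upload count (hypothetical)

def allow_upload_during_incident(acct: Account) -> bool:
    """During a crisis, pause uploads from new or previously flagged
    accounts, per the heuristic in the tweet above. Paused uploads
    would be held for review, not silently dropped."""
    if acct.strikes > 0:
        return False   # any policy strike: hold for human review
    if acct.age_days < 30 or acct.uploads < 5:
        return False   # "new" account by these (illustrative) thresholds
    return True        # long, productive history: allow

# A week-old account with no history gets paused, not banned:
print(allow_upload_during_incident(Account(age_days=7, strikes=0, uploads=1)))  # False
```

The design bet, as the thread notes, is that accounts with long, productive histories almost never suddenly upload this material, so the gate mostly inconveniences throwaway accounts.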
Some questions I’d really like reporters to ask: “Did you ever consider shutting down all uploads until you could handle this situation?” And “what are you doing to put consequences to the people who violate your usage terms?” https://t.co/ii09ZbSc5w
— Vijay Ravindran (@vijayravindran) March 18, 2019
People could easily become radicalized before social media, and many are still radicalized without it.
— CNN (@CNN) March 18, 2019
But social media algorithms -- and easy formation of communities -- often combine with other factors to make radicalization ever more efficient: https://t.co/uunHgYNIkQ pic.twitter.com/HvIx5Oh0H6
without knowing proportions and delays and # views this doesn't tell us much, but wow, 1.5 million uploads?! Absolutely staggering numbers https://t.co/fksuZMPNt7
— Alex Krasodomski (@akrasodomski) March 17, 2019
This is a pretty striking illustration of the scale of the challenge. https://t.co/eJjL9M4DNB
— Benedict Evans (@benedictevans) March 17, 2019
This is the right question. “We put out a lot of the fires after a million people got burned” may be an effective measure of response after harm, but we should still ask why they’ve been dousing everything in gasoline for years. https://t.co/q3C5GaQiW1
— Anil Dash (@anildash) March 17, 2019
These numbers are vanity metrics unless they’re shown alongside engagement and video views from the live stream. This is bordering on misinformation, which gives us no way of understanding the true scale of distribution and amplification. https://t.co/6dgRc7KiVj
— Mark Rickerby (@maetl) March 17, 2019
Feel like the NZ shooting is truly a worst case scenario for online extremism: an individual radicalized across the internet using many of those same mechanisms to amplify and spread the terror of his heinous act. My piece: https://t.co/jZjRtpF4gP
— Charlie Warzel (@cwarzel) March 15, 2019
Social-media platforms were eager to embrace live streaming because it promised growth. Now, in the wake of the attack in New Zealand, it’s clear that scale has become a problem. https://t.co/kef9O4Zyqr pic.twitter.com/GoRIXFrrey
— The New Yorker (@NewYorker) March 16, 2019
"This is what social media are supposed to do, what they were designed to do: spread the images and messages that accelerate interest, without check, and absent concern for their consequences" https://t.co/9kroazl43j
— Drew Harwell (@drewharwell) March 15, 2019
YouTube tells its side of the story to the Post from the Christchurch incident, but the company shares little detail. They only said “tens of thousands” of videos were uploaded. Even Facebook gave more stats. https://t.co/YcDTB3D22I
— Ryan Mac (@RMac18) March 18, 2019
Even the stock-photo providers are licensing stills from the New Zealand shooter’s video; the one that adorns this article was credited simply, “social media.” https://t.co/r0Y9nDFZtF
— Ian Bogost (@ibogost) March 15, 2019
"We removed 300,000 videos" means little if Facebook failed to remove 500,000 vidoes still on the site.
— Zack Whittaker (@zackwhittaker) March 17, 2019
How do you build a platform that only nice people use? I wish there were an easy answer, but I'm unclear why we keep blaming tech for the fact that some people are bad. https://t.co/be5PCfFEBc
— Mike Masnick (@mmasnick) March 15, 2019
In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload...
— Facebook Newsroom (@fbnewsroom) March 17, 2019
youtube and google and facebook insist on using tech and ai alone to fix a problem that they easily have the capital to invest in HUMAN resources to solve. tech is ruining the world and they think the responsibility can be given to a robot. https://t.co/UmOE9yiWjp
— Oliver Willis (@owillis) March 18, 2019
This is basically a confession that YouTube is badly broken and that it is vulnerable to the same sort of takeover without radical changes. https://t.co/RmvmQ0VTN5
— Ed Bott (@edbott) March 18, 2019
I've been reading James C. Scott's Against The Grain and a lot of it is about how early cities were like disease magnets and the rise of cities required them to develop more controls against that. Reminds me a bit of "virality" on social networks. https://t.co/y7S6eyOXDq
— Melissa McEwen (@melissamcewen) March 17, 2019
Update from Mia Garlick, Facebook New Zealand: "We continue to work around the clock to remove violating content using a combination of technology and people...
— Facebook Newsroom (@fbnewsroom) March 17, 2019
SCOOP: YouTube received an “unprecedented volume” of uploads, coming as quickly as 1 per second after New Zealand shooting. The company took drastic measures, including breaking its own functionality and disabling content moderation, to stop the bleeding. https://t.co/Sn7ENYQe35
— Elizabeth Dwoskin (@lizzadwoskin) March 18, 2019
When YouTube took down a video of the New Zealand massacre, another would appear, as quickly as one per second. Inside the tech giant as it raced to contain "a tragedy almost designed for the purpose of going viral" https://t.co/6vqu0gwyyv @lizzadwoskin @craigtimberg
— Drew Harwell (@drewharwell) March 18, 2019
You always hear "There's more work to be done." There's something to be said about how all these tech companies never really considered the worst actions that humans are capable of. https://t.co/aTkcM2twbd
— Gene Park (@GenePark) March 18, 2019
It isn’t that these platforms don’t act—and quickly; it’s that they’re acting in the face of a tsunami of content. Their scale literally can’t be managed. https://t.co/C4bgifzRHx
— Annemarie Bridy (@AnnemarieBridy) March 18, 2019
Facebook says:
— Nicholas Grossman (@NGrossman81) March 17, 2019
-We let at least 300,000 copies of the video onto our platform, a 20% failure rate.
-Took us 24 hours to take them down.
-No, we won't tell you how many people saw it. It's many millions.
-These are just the copies we know about/will admit.https://t.co/P1roxWbJCj
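The arithmetic behind these reactions is worth making explicit. From Facebook's own figures (1.5 million removals, 1.2 million blocked at upload), the share that made it onto the platform follows directly; the true number of upload attempts, as several people note below, was never disclosed:

```python
removed_total = 1_500_000      # Facebook's stated removals, first 24 hours
blocked_at_upload = 1_200_000  # blocked before they ever appeared

made_it_through = removed_total - blocked_at_upload
failure_rate = made_it_through / removed_total

print(f"{made_it_through:,} videos got past the upload filter "
      f"({failure_rate:.0%} of known copies)")
# 300,000 videos got past the upload filter (20% of known copies)

# Caveat: the denominator is Facebook's removal count, not total upload
# attempts, which Facebook did not disclose. The "20% failure rate" is
# therefore a floor on copies that reached the site, not a complete picture.
```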
Post says YouTube found it hard to moderate the video because users weren’t uploading it with his name.
— Ryan Mac (@RMac18) March 18, 2019
Actually the shooter’s name was the only search I was running on YouTube in the 4 hours after the incident. And plenty of videos were being uploaded with the name. pic.twitter.com/SdZbxVYmo2
Wow. Facebook claims it blocked 80% of the New Zealand shooter videos that were uploaded with its AI technology. The problem is the 20% that made it past the AI equaled 300,000 videos (!)
— Kurt Wagner (@KurtWagner8) March 17, 2019
Facebook’s scale is its own worst enemy these days https://t.co/Op63gF7OKw
In other words, 300,000 out of an unknown number of total uploaded videos were actually removed from the site. https://t.co/RSHwswMn8b
— Zack Whittaker (@zackwhittaker) March 17, 2019
this stat is being shared to suggest that FB is ‘on it.’ but talking about takedowns/speed of takedowns is sidestepping the root of the problem. what is it about fb that incentivized a vid like that to be uploaded 1.5 million times in one day? the scale alone shld give them pause https://t.co/UbPBmeUtaM
— Charlie Warzel (@cwarzel) March 17, 2019
Should these platforms consider literally suspending real-time uploads completely during an event like this? I’m asking, not prescribing. But one commenter on this thread describes quarantine — that’s what you do in an outbreak, right? https://t.co/2e34V6KZvc
— Molly Wood (@mollywood) March 18, 2019
This is good transparency. I hope we see something like it from YT.
— Renee DiResta (@noUpside) March 17, 2019
From a procedure standpoint, I'm curious if/when @facebook communicated w/their counterterrorism partners and/or got the content into GIFCT's database. GIFCT was created in part to stop spread of these videos. https://t.co/BAvW8v7Kwx
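The GIFCT database DiResta mentions works by sharing fingerprints ("hashes") of known terrorist content across platforms so copies can be blocked at upload. A minimal sketch of the idea, using an exact cryptographic hash purely for illustration; production systems rely on perceptual hashes that survive re-encoding and cropping, which is exactly the gap the rapid re-uploads exploited:

```python
import hashlib

# Hypothetical stand-in for a shared industry hash database (e.g. GIFCT's).
known_bad_hashes = set()

def fingerprint(video_bytes: bytes) -> str:
    # Exact hash for illustration only: changing a single byte defeats it.
    # Real systems use perceptual hashing so near-duplicates still match.
    return hashlib.sha256(video_bytes).hexdigest()

def register_violating_video(video_bytes: bytes) -> None:
    """Add a confirmed violating video's fingerprint to the shared set."""
    known_bad_hashes.add(fingerprint(video_bytes))

def block_at_upload(video_bytes: bytes) -> bool:
    """Return True if the upload matches known violating content."""
    return fingerprint(video_bytes) in known_bad_hashes

# A byte-identical re-upload is caught; a re-encoded copy is not.
register_violating_video(b"original footage")
print(block_at_upload(b"original footage"))    # True
print(block_at_upload(b"re-encoded footage"))  # False -- the hard problem
```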
So a 20% false negative rate on upload. An average time-to-deletion for the ones that got through would be interesting.
— Antonio García Martínez (@antoniogm) March 17, 2019
Of course, these are the uploads *they know about*. https://t.co/DMh1LkwyCI
Came out of writing retirement for this piece on how the internet is radicalizing angry white men and what big tech could do about it -- and isn't: https://t.co/eD52McjU3M
— Alex Koppelman (@AlexKoppelman) March 17, 2019
Facebook doesn’t often share stats like this and they’ve been heavily criticized in light of this event. The key tho is to see what the engagement was on the 300k videos that got through as well as the original Facebook live stream. https://t.co/6XMquc71EL
— Ryan Mac (@RMac18) March 17, 2019
I spent a good part of 2 years reporting on ISIS on the internet and how the group uses social media — in 2019 it's mind-boggling to me how well the coordinated cross-platform effort to remove them from the internet worked and how there hasn't been a similar one for white supremacists.
— Ellie Hall (@ellievhall) March 16, 2019
Anti-Muslim Hate Speech Is Absolutely Relentless On Social Media Even As Platforms Crack Down On Other Extremist Groups https://t.co/ARmNZuJrUp
— Victor Asal (@Victor_Asal) March 18, 2019
Anti-Muslim bias seems to be an exception to platform rules about weeding out hate speech https://t.co/Q8u02TDAYY
— Ben Smith (@BuzzFeedBen) March 18, 2019
Muslims have had to deal with Islamophobia and hate speech on social media for years even as platforms crack down on other dangerous content https://t.co/BF3mm2GARE
— Jane Lytvynenko (@JaneLytv) March 18, 2019
Facebook has taken down more than a million videos of the Christchurch attack, but groups with names like “War against Islam” and “Bikers Against Radical Islam Europe” continue to exist https://t.co/PFR2kUJ7tW
— Miriam Elder (@MiriamElder) March 18, 2019
WHEW READ THEM @JaneLytv https://t.co/dRgK23YTiX pic.twitter.com/wDhgpIiuym
— David Mack (@davidmackau) March 18, 2019
.@JaneLytv making sense of the horror show as usual https://t.co/V3AxVdYEmp pic.twitter.com/CrJKIg3oNR
— Anushka Patil (@anushkapatil) March 18, 2019
"Researchers say Facebook is the primary mainstream platform where extremists organize and anti-Muslim content is deliberately spread" https://t.co/XroRhZWKgt
— Eddie Vale (@evale72) March 18, 2019
Social media companies have the ability to erase extremism from their platforms.
— Bill Maxwell #ImpeachPutin (@Bill_Maxwell_) March 18, 2019
“Islamophobia happens to be something that made these companies lots and lots of money, it keeps people on the platform and available to see ads." #Resistance #Facebook https://t.co/IWtVpTDpay
Social media companies managed to effectively eradicate ISIS content from their platforms — but Islamophobia? That's made them "lots and lots of money" https://t.co/V3AxVdYEmp
— Anushka Patil (@anushkapatil) March 18, 2019
Islamophobia Is Absolutely Relentless On Social Media Even As Platforms Crack Down On Other Extremist Groups https://t.co/Ve1872zEs3 via @janelytv
— mat honan (@mat) March 18, 2019
Islamophobia Is Absolutely Relentless On Social Media Even As Platforms Crack Down On Other Extremist Groups.
— TellMAMAUK (@TellMamaUK) March 18, 2019
That is the point. There has been a ‘free speech’ approach to far right extremists. Why is that?: https://t.co/GGn0bgX2C7
Why can't/won't #socialmedia platforms rapidly remove racist & other vile content from their sites? @verge article offers insights re #YouTube: yes they have content moderators, but also they want "newsworthiness". How about wanting social responsibility? https://t.co/JYmUAgameF
— Helen Clark (@HelenClarkNZ) March 16, 2019
And here’s a good piece from @loudmouthjulia on some of the moderation challenges here particular to YouTube https://t.co/y2wOO7cYhU pic.twitter.com/fgNn7MTOA8
— Casey Newton (@CaseyNewton) March 15, 2019
This @verge article is one of the clearer explanations of the challenges in algorithmically flagging and addressing different types of problematic video content in real time: https://t.co/sgAsHvAMT6
— Matt Schruers (@MSchruers) March 15, 2019
The extra attention that extremist ideas gain after a violent attack isn’t just a side effect of news coverage. It’s part of the sound system that hate groups use to transmit their ideas to a broader public, @bostonjoan argues. https://t.co/dWrBy1E0Xw
— Siva Vaidhyanathan (@sivavaid) March 18, 2019
Amplify this https://t.co/UeDbZxD8tg
— Joan Donovan, PhD (@BostonJoan) March 18, 2019
And here are some more good things to read: @BostonJoan on the sophisticated tactics used to exploit platform dynamics of virality: https://t.co/kMlvkGTyio
— Jean Burgess (@jeanburgess) March 18, 2019
Extremists Now Understand What Tech Platforms Have Built And Know How To Beat Them at It https://t.co/SvOjUWkeeD
— Raju Narisetti (@raju) March 17, 2019
Sociologist Joan Donovan (@bostonjoan) argues the extra attention extremist ideas gain after a violent attack is part of the sound system that hate groups use to transmit their ideas to a broader public via social media. @ASAnews https://t.co/6q3Ez45jeM
— W. Carson Byrd (@Prof_WCByrd) March 18, 2019
the level of information this piece assumes to know about the christchurch shooter’s thinking and intent is borderline irresponsible and i’ve seen dozens of variants as well https://t.co/XkW46yQzIP
— brian feldman (@bafeldman) March 18, 2019
"The platform companies do not know how to fix, or perhaps do not understand, what they have built." -@BostonJoan on social media and the spread of a global white supremacist movement. https://t.co/2ieP5WOrvH
— Justin Hendrix (@justinhendrix) March 18, 2019
The "extra attention that these ideas gain in the aftermath of a violent attack...[is] the sound system by which extremist movements transmit their ideas to a broader public, and they are using it with more and more skill." https://t.co/uVUDdmZuyq
— NoelDickover (@NoelDickover) March 18, 2019
“Withholding details runs counter to the usual rules of storytelling—show, don’t tell—but it also helps slow down the spread of white-supremacist keywords.” @BostonJoan on what we can learn about how the web works today during violent attacks https://t.co/GWQ6oDmlYX
— Margarita Noriega (@margarita) March 18, 2019
Recode Daily: Facebook is left cleaning up a mess after the New Zealand terrorist attack was streamed on the platform https://t.co/SFr8m3fhMN pic.twitter.com/Xpl9bWbyXb
— Recode (@Recode) March 18, 2019
Valve takes down user tributes memorializing the New Zealand shooting suspect https://t.co/WTo43dHQi6 pic.twitter.com/bJgA8NGYeq
— The Verge (@verge) March 16, 2019
Facebook says it removed 1.5 million videos of the #NewZealand #MosqueAttack #ATUKBusiness #ATSocialMedia #MancIsMarvellous #UKSOPRO #BlackpoolRocks #SocialMedia #Londonislovinit #SheffieldisSuper #UKSmallBiz #YORKSHIREIS https://t.co/tLIiso8wjx
— Andrew Ivan (@AndrewIvan9) March 17, 2019
Facebook said it removed 1.5 million videos of the New Zealand mosque attack in the first 24 hours after it happened.
— J.R. Reed (@JRReed) March 18, 2019
1.2 million were blocked at upload, but it’s unclear how many people watched the remaining 300,000 videos before they were taken down. https://t.co/walShaiOAW
Facebook says it removed 1.5 million videos of the New Zealand mosque attack https://t.co/ANYl5h7iRa pic.twitter.com/RAxYlvKKAk
— Rich Tehrani (@rtehrani) March 17, 2019
“Islamophobia happens to be something that made these companies lots and lots of money,” said one researcher, who added that it keeps people on the platform and available to see ads. https://t.co/yRtqjTLOmu
— Melissa Ryan (@MelissaRyan) March 19, 2019
Even after the New Zealand attack, Facebook has allowed groups with names like “War against Islam” and “Bikers Against Radical Islam Europe” to exist. They have memberships in the thousands. https://t.co/n7VDaL4o66
— Scott Bixby (@scottbix) March 19, 2019
“Islamophobia happens to be something that made these companies lots and lots of money" - @wphillips49 https://t.co/BnfcuGXvZP
— Becca Lewis (@beccalew) March 19, 2019
Facebook, YouTube, and Amazon moved to remove or reduce anti-vaccination content. The platforms largely eradicated ISIS terrorists and made inroads to remove white supremacists. But through it all, anti-Muslim content has been allowed to fester @JaneLytv https://t.co/NWtdwJzrgb
— Tom Namako (@TomNamako) March 19, 2019
Anti-Muslim Hate Speech Is Absolutely Relentless On Social Media Even As Platforms Crack Down On Other Extremist Groups https://t.co/qlGGMIcOD5 via @janelytv
— GStuedler (@GStuedler) March 19, 2019
Anti-Muslim Hate Speech Is Absolutely Relentless On Social Media Even As Platforms Crack Down On Other Extremist Groups https://t.co/crqEpM81bK
— Adrienne Mahsa Varkiani (@AdrienneMahsa) March 18, 2019
"Muslims endured racial slurs, dehumanising photos, threats of violence, and targeted harassment campaigns, which continue to spread and generate significant engagement on social media platforms even though it's prohibited by most terms of service." https://t.co/Bw7j1Yszz7
— BuzzFeedOz Politics (@BuzzFeedOzPol) March 19, 2019
“Islamophobia happens to be something that made these companies lots and lots of money." https://t.co/Dyrm890kcR
— Gina Rushton (@ginarush) March 19, 2019
Until there are deliberate actions taken by tech companies to address the systemic source of the prejudice, it's unclear how this gets resolved.
— Mansoor (@MansoorMagic) March 18, 2019
Islamophobia Is Absolutely Relentless On Social Media Even As Platforms Crack Down On Other Extremist Groups https://t.co/bXFqYQ5DEB
Really interesting @TheAtlantic article by @BostonJoan on how white supremacists exploit the weaknesses in the social-media ecosystem and make social media platforms megaphones for their ideology. https://t.co/vREdLl7k3k
— Joe Mulhall (@JoeMulhall_) March 18, 2019
“Journalists and regular internet users need to be cognizant of their role in spreading these ideas, especially because the platform companies haven’t recognized theirs.” Wise words from @BostonJoan https://t.co/p4Ldib79NJ
— One Ring (doorbell) to surveil them all... (@hypervisible) March 18, 2019
Hate groups are expert manipulators of today’s tech and media ecosystem, and they’re only getting better at it. But the tech platforms, @BostonJoan writes, either don’t understand or can’t fix what they’ve built. https://t.co/DWaysGKThU
— Dante Ramos (@danteramos) March 17, 2019
Facebook, YouTube and Twitter go to extraordinary lengths to take down mosque massacre videos https://t.co/02CQyBZJF2
— Bo Snerdley (@BoSnerdley) March 18, 2019