Google aims to standardise robots.txt, 25 years on [devclass.com]
Google Wants to Establish an Official Standard for Using Robots.txt [www.searchenginejournal.com]
Formalizing the Robots Exclusion Protocol Specification [webmasters.googleblog.com]
Google open-sources robots.txt parser in push to make Robots Exclusion Protocol an official standard [venturebeat.com]
Google posts draft to formalize Robots Exclusion Protocol Specification [searchengineland.com]
Google Works To Make Robots Exclusion Protocol An Official Standard [www.seroundtable.com]
google/robotstxt: The repository contains Google's robots.txt parser and matcher as a C++ library (compliant to C++11). [github.com]
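The open-sourced library is deliberately small: a parser plus a RobotsMatcher class that answers allow/disallow queries against a robots.txt body. As a rough sketch of how a caller might use it (the robots.txt content and URL below are illustrative, and the build setup via the repo's Bazel/Abseil tooling is assumed):

#include <iostream>
#include <string>

#include "robots.h"  // googlebot::RobotsMatcher from google/robotstxt

int main() {
  // Illustrative robots.txt body. The library only parses and matches;
  // fetching the file over the network is the caller's job.
  const std::string robots_txt =
      "user-agent: FooBot\n"
      "disallow: /private/\n";

  googlebot::RobotsMatcher matcher;
  bool allowed = matcher.OneAgentAllowedByRobots(
      robots_txt, "FooBot", "https://example.com/private/page.html");

  std::cout << (allowed ? "allowed" : "disallowed") << "\n";  // disallowed
}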
Googlebot is very generous with how you spell "disallow" in your robots.txt: https://t.co/evoIySDZ8J
— Matt Holt (@mholt6) July 1, 2019
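The screenshot refers to the typo tolerance built into the parser: it accepts frequent misspellings of "disallow" (e.g. "dissalow", "disalow") as the real directive. A hedged sketch of that observable behavior, using the same matcher API as above:

#include <iostream>
#include <string>

#include "robots.h"  // googlebot::RobotsMatcher

int main() {
  // "dissalow" is misspelled, yet the parser still treats it as "disallow"
  // (the source keeps a short list of frequent typos of the directive key).
  const std::string robots_txt =
      "user-agent: *\n"
      "dissalow: /secret/\n";

  googlebot::RobotsMatcher matcher;
  bool allowed = matcher.OneAgentAllowedByRobots(
      robots_txt, "FooBot", "https://example.com/secret/");

  std::cout << std::boolalpha << allowed << "\n";  // false: still blocked
}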
robots.txt was penned 25 years ago, is now used by 500M+ sites, and — finally — has an IETF spec: https://t.co/JjJRVPXPts
Google Webmaster blog post @ https://t.co/QKIyUMBwXe
Bonus: Google's robots.txt parser is now open source! https://t.co/ZEHTkVQnuz pic.twitter.com/foUGXeh7YD
— Ilya Grigorik (@igrigorik) July 1, 2019
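For reference, the draft the tweets point to formalizes the plain-text format webmasters have used informally since 1994. A minimal file under the draft's core rules might look like this (the bot name and paths are illustrative):

user-agent: FooBot
disallow: /private/
allow: /private/public-page.html

user-agent: *
disallow: /tmp/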
Google posts draft to formalize Robots Exclusion Protocol Specification https://t.co/nYeYodowQt pic.twitter.com/fJnwzUpkXQ
— New Web Network (@NewWebNetwork) July 1, 2019
Google today took steps to make the Robots Exclusion Protocol a real standard, after 25 years https://t.co/mOjSrk0HFQ pic.twitter.com/aCdgq7fy1G
— Barry Schwartz (@rustybrick) July 1, 2019
Google Works To Make Robots Exclusion Protocol A Real Standard https://t.co/2clnREO1s5
— duane forrester (@DuaneForrester) July 1, 2019
Wait, what? Google Search is open-sourcing their robots.txt parser!
GitHub repo: https://t.co/mK5LJyz8Ky
I already have some ideas to build with it :)
And a new team is born: the Search Open Sourcing Team https://t.co/PwXYg3ECJr
— Aymen Loukil (@LoukilAymen) July 1, 2019
Google's robots.txt parser has been released as open source. / “GitHub - google/robotstxt: The repository contains Google's robots.txt parser and matcher as a C++ library (compliant to C++11).” https://t.co/rj4YesiPUf
— 齊藤貴義@サイバーメガネ (@miraihack) July 1, 2019
A robots.txt parser written in C++; in a sense, one of the most core-to-Google things ever released, now out under the Apache license. https://t.co/hbYNASrfz8
— _ (@apstndb) July 1, 2019
Seeing https://t.co/vir1f0xAuE https://t.co/DMQzRfmCHV… is awesome! Thanks @epere4 @lvandeve @methode
— Martijn Koster (@makuk66) July 1, 2019
Today we had lunch to celebrate the launch! With @methode and Lode https://t.co/6jUfOKMULd pic.twitter.com/fkkplkbw4j
— Edu Pereda (@epere4) July 1, 2019
Google wants to establish an official standard for using robots.txt https://t.co/fmPfyRPL5x
— editoy (@editoy) July 2, 2019
No need to read between the lines; it was all explained on the Webmaster Central blog. https://t.co/K5hAGP12J5 https://t.co/qtLDGtaIXm
— _ (@apstndb) July 1, 2019
A real standard! Where Google won't crawl the URLs, or list them in their index? Because "nocrawl" really means "stay the fuck away from these URLs entirely"? Where Google won't index entire URLs & their content because "it's only a hint"? HAHAHAHAHA https://t.co/pdykDi245T
— Alan Bleiweiss (@AlanBleiweiss) July 1, 2019
Google's robots.txt Parser is Now Open Source (a few C++ files, easy to read) - https://t.co/t0UltJCFHK #Google #opensource #programming #SEO
GitHub: https://t.co/T4XCEZbwaz
— Thibault Desmoulins (@t_desmoulins) July 2, 2019