This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: Make robotparser.RobotFileParser ignore blank lines
Type: enhancement
Stage: test needed
Components: Library (Lib)
Versions: Python 3.10, Python 3.9

process
Status: open
Resolution:
Dependencies:
Superseder:
Assigned To:
Nosy List: bernie9998, eric.araujo, ezio.melotti, orsenthil, petri.lehtinen, terry.reedy
Priority: normal
Keywords: needs review, patch

Created on 2011-10-27 20:30 by bernie9998, last changed 2022-04-11 14:57 by admin.

Files
File name Uploaded Description
robotparser.py.patch bernie9998, 2011-10-27 20:30 patch for RobotFileParser which ignores all blank lines
Messages (12)
msg146518 - Author: Brian Bernstein (bernie9998) Date: 2011-10-27 20:30
When attempting to parse a robots.txt file which has a blank line between allow/disallow rules, all rules after the blank line are ignored.

If a blank line occurs between the user-agent and its rules, all of the rules for that user-agent are ignored.

I am not sure whether having a blank line between rules is allowed by the spec, but I am seeing this behavior on a number of sites, for instance:

http://www.whitehouse.gov/robots.txt has a blank line between the disallow rules and all other lines, including the associated user-agent line, causing the Python RobotFileParser to ignore all rules.

http://www.last.fm/robots.txt appears to separate its rules with arbitrary blank lines. The Python RobotFileParser only sees the first two rules, those between the user-agent line and the next blank line.

If the parser is changed to simply ignore all blank lines, would it have any adverse effect on parsing robots.txt files?

I am including a simple patch which ignores all blank lines and appears to find all rules from these robots.txt files.
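
For reference, the reported behavior can be reproduced directly with urllib.robotparser; the rules below are a trimmed illustration of the layouts described above, not the actual files:

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.parse([
        "User-agent: *",
        "",                      # blank line within the record
        "Disallow: /private/",
    ])

    # The blank line ends the record before any rule is seen, so the
    # Disallow line is dropped and everything is reported as fetchable:
    print(rp.can_fetch("*", "http://www.example.com/private/"))  # True
    # With the patch (blank lines ignored), this would print False.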
msg146536 - Author: Petri Lehtinen (petri.lehtinen) (Python committer) Date: 2011-10-28 05:43
Blank lines are allowed according to the specification at http://www.robotstxt.org/norobots-rfc.txt, section 3.3 Formal Syntax.

The issue also seems to exist on 3.2 and 3.3.
msg146586 - Author: Terry J. Reedy (terry.reedy) (Python committer) Date: 2011-10-29 01:06
Because of the line break, clicking that link gives "Server error 404".
http://www.robotstxt.org/norobots-rfc.txt
works (so please pay attention to formatting). The main page is
http://www.robotstxt.org/robotstxt.html 

The way I read the grammar, 'records' (which start with an agent line) cannot have blank lines and must be separated by blank lines. Other than that, the suggestion seems reasonable, but it also seems like a feature request. Does test/test_robotparser pass with the patch?

I also do not see "Crawl-delay" and "Sitemap" (from whitehouse.gov) in the grammar referenced above. So I wonder if de facto practice has evolved.

Philip S.: do you have any opinions?
(I am asking you because of your comments on #1437699.)
msg146601 - Author: Petri Lehtinen (petri.lehtinen) (Python committer) Date: 2011-10-29 10:11
> Because of the line break, clicking that link gives "Server error 404".

I don't see a line break, but the comma after the link seems to break it. Sorry.

> The way I read the grammar, 'records' (which start with an agent
> line) cannot have blank lines and must be separated by blank lines.

Ah, true. But it seems to me that having blank lines elsewhere doesn't break the parsing. If other robots.txt parser implementations allow arbitrary blank lines, we could add a strict=False parameter to make the parser non-strict. This would be a new feature of course.
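
Roughly, that would give an API like the following sketch; the strict parameter is hypothetical (it does not exist in RobotFileParser, and later messages in this thread argue against adding it):

    from urllib.robotparser import RobotFileParser

    # Hypothetical: strict=False would tolerate blank lines in places
    # the formal grammar does not allow them.
    rp = RobotFileParser("http://www.last.fm/robots.txt", strict=False)
    rp.read()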

Does the parser currently handle blank lines between full records (agentline(s) + ruleline(s)) correctly?

> I also do not see "Crawl-delay" and "Sitemap" (from whitehouse.gov) in the grammar referenced above. So I wonder if de facto practice has evolved.

The spec says:

   Lines with Fields not explicitly specified by this specification
   may occur in the /robots.txt, allowing for future extension of the
   format.

So these seem to be nonstandard extensions.
msg146619 - Author: Terry J. Reedy (terry.reedy) (Python committer) Date: 2011-10-29 20:02
Sorry, the visual linebreak depends on font size. It *is* the comma that caused the problem.

You missed my question about the current test suite.

Senthil, you are the listed expert for urllib, which includes robotparser. Any opinions on what to do?
msg146668 - Author: Senthil Kumaran (orsenthil) (Python committer) Date: 2011-10-31 00:01
I agree with your interpretation of the RFC. The parsing rules do not specify any provision for inclusion of blank lines "within" the records.

However, I find that including them does no harm either. I checked with a robots.txt parser (Google Webmaster Tools), presenting last.fm's robots.txt file, which has blank lines within records. As expected, it did not complain.

I would say that we can be lenient on this front. The question is: if we allow blank lines, would it break any parsing rules? I think not.

The patch does not break any tests, but a new test should be added to reflect this situation. 

I don't have a strong opinion on having a strict=(True|False) parameter for accommodating blank lines within records (only). I think it is better that we don't add a new parameter and just be lenient.
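
Such a test might look like the sketch below: it parses a record containing a blank line and asserts that the rule after it still applies. The names are illustrative and do not follow the actual test_robotparser harness; the assertion holds only with the patched (lenient) parser:

    import unittest
    from urllib.robotparser import RobotFileParser

    class BlankLineWithinRecordTest(unittest.TestCase):
        def test_rule_after_blank_line_is_kept(self):
            parser = RobotFileParser()
            parser.parse([
                "User-agent: *",
                "",                      # blank line within the record
                "Disallow: /private/",
            ])
            # With blank lines ignored, the Disallow must still apply:
            self.assertFalse(
                parser.can_fetch("*", "http://example.com/private/"))

    if __name__ == "__main__":
        unittest.main()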
msg146675 - Author: Terry J. Reedy (terry.reedy) (Python committer) Date: 2011-10-31 03:22
Since following the spec is not a bug, this issue is a feature request for 3.3. I agree with just being lenient with no added parameter. Perhaps that should be mentioned in the doc with (or in) a version-added note. Senthil: does doc standard allow something like

Version added 3.3: Ignore blank lines within record groups.
?
msg146679 - Author: Ezio Melotti (ezio.melotti) (Python committer) Date: 2011-10-31 09:26
If it's added it should be a versionchanged, not a versionadded.
I'm also not entirely sure this should be considered a new feature and don't see the point of having a strict mode.  IMHO robotparser should honor what the robots.txt files say, and not doing so because there's an extra blank line doesn't strike me as a useful behavior.  Of course it shouldn't parse every kind of broken syntax, but the OP pointed to two fairly popular websites that use blank lines and that seems to indicate that blank lines are generally allowed.
msg146731 - Author: Terry J. Reedy (terry.reedy) (Python committer) Date: 2011-10-31 19:00
The robotparser is currently doing exactly what it is documented as doing. 20.9. urllib.robotparser — Parser for robots.txt
says "For more details on the structure of robots.txt files, see http://www.robotstxt.org/orig.html." (Since there are no previous details, 'more' should be deleted.) That page, in turn, says

'''The file consists of one or more records separated by one or more blank lines (terminated by CR,CR/NL, or NL). Each record contains lines of the form "<field>:<optionalspace><value><optionalspace>".'''

The formal grammar says the same thing. The page goes on with

'''Comments ... are discarded completely, and therefore do not indicate a record boundary.'''

followed by

'''The record starts with one or more User-agent lines, followed by one or more Disallow lines, as detailed below. Unrecognised headers are ignored.'''

Not allowing blank lines within records is obviously, to me, intentional and not an accidental oversight. It aids error detection.  Consider:

User-agent: A ...
Disallow: ...

User-aget: B ...
Disallow: ...

Currently, the blank line signals a new record, the misspelled 'User-aget' line is ignored, and the new record, starting with 'Disallow' instead of 'User-agent', is correctly seen as an error and ignored. The same would be true if the User-agent line were accidentally omitted. When humans edit files, perhaps from someone else's notes, such things happen.

With this change, the second Disallow line will be incorrectly attributed to A. We can justify that on the hypothesis that intentional blank lines within records, in violation of the standard, are now more common than missing or misspelled User-agent lines. Or we can decide that mis-attributing Disallow lines is a lesser sin than ignoring them. But the change is pretty plainly a feature change and not a bug fix.
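
Concretely, a quick check of the example above with the stock parser (the misspelling is deliberate):

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.parse([
        "User-agent: A",
        "Disallow: /a/",
        "",
        "User-aget: B",          # misspelled on purpose
        "Disallow: /b/",
    ])

    # Strict parsing: the blank line closes A's record, the misspelled
    # line is ignored, and the orphaned Disallow is discarded.
    print(rp.can_fetch("A", "http://example.com/b/"))  # True
    # If blank lines were ignored, "Disallow: /b/" would instead be
    # appended to A's record, and this would print False.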

My current suggested doc change is to replace the sentence quoted at the top with
"Such files are parsed according to the rules given at http://www.robotstxt.org/orig.html , with the exception that blank lines are allowed within records.
Versionchanged 3.3: allow blank lines within records"
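
In the reST markup used by the CPython docs, that note would be something like:

    .. versionchanged:: 3.3
       Blank lines within record groups are now ignored.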

Side note: The example in the doc uses musi-cal.com. We need a replacement as it was closed last June, as noted in
http://www.wolfgangsvault.com/blog/index.php/2011/06/closing-mojam-com-and-musi-cal-com/
msg146871 - Author: Petri Lehtinen (petri.lehtinen) (Python committer) Date: 2011-11-02 19:20
> My current suggested doc change is to replace the sentence quoted at the top with

Sounds good to me.
msg147478 - Author: Éric Araujo (eric.araujo) (Python committer) Date: 2011-11-12 11:51
First, I’d like to point out that the robots spec is not an official Internet spec backed by an official body.  It’s also not as important as (say) HTTP parsing.

For this bug, IMO the guiding principle should be Postel’s Law.  What harm is there in being more lenient than the spec?  People apparently want to parse the robots.txt files with blank lines from last.fm and whitehouse.gov, and I don’t think anyone depends on the fact that blank lines cause the rest of the file to be ignored.  Hence, I too think that we should be pragmatic and allow blank lines, following the precedent established by other tools.

If you feel strongly about this, I can contact the robotstxt.org people.
msg147541 - Author: Terry J. Reedy (terry.reedy) (Python committer) Date: 2011-11-13 02:41
My suggested doc change describes how to change the doc to go along with the patch.
History
Date User Action Args
2022-04-11 14:57:23 admin set github: 57490
2020-11-17 20:21:04 iritkatriel set versions: + Python 3.9, Python 3.10, - Python 3.3
2011-11-13 02:41:12 terry.reedy set messages: + msg147541
2011-11-12 11:51:38 eric.araujo set nosy: + eric.araujo; messages: + msg147478
2011-11-02 21:41:05 ezio.melotti set nosy: - osvenskan
2011-11-02 19:20:11 petri.lehtinen set messages: + msg146871
2011-10-31 19:00:40 terry.reedy set messages: + msg146731
2011-10-31 09:26:02 ezio.melotti set messages: + msg146679
2011-10-31 03:22:28 terry.reedy set type: behavior -> enhancement; title: robotparser.RobotFileParser ignores rules preceeded by a blank line -> Make robotparser.RobotFileParser ignore blank lines; messages: + msg146675; versions: - Python 2.7, Python 3.2
2011-10-31 00:01:49 orsenthil set messages: + msg146668
2011-10-29 20:02:24 terry.reedy set nosy: + orsenthil; messages: + msg146619
2011-10-29 10:11:20 petri.lehtinen set messages: + msg146601
2011-10-29 01:06:24 terry.reedy set nosy: + terry.reedy, osvenskan; messages: + msg146586
2011-10-28 06:54:20 ezio.melotti set nosy: + ezio.melotti; stage: patch review -> test needed
2011-10-28 05:43:23 petri.lehtinen set versions: + Python 3.2, Python 3.3; messages: + msg146536; components: + Library (Lib); keywords: + needs review; stage: patch review
2011-10-28 05:32:58 petri.lehtinen set nosy: + petri.lehtinen
2011-10-27 20:30:43 bernie9998 create