Rietveld Code Review Tool

Side by Side Diff: Lib/tokenize.py

Issue 26581: Double coding cookie
Patch Set: Created 4 years, 2 months ago
  1  """Tokenization help for Python programs.
  2
  3  tokenize(readline) is a generator that breaks a stream of bytes into
  4  Python tokens. It decodes the bytes according to PEP-0263 for
  5  determining source file encoding.
  6
  7  It accepts a readline-like method which is called repeatedly to get the
  8  next line of input (or b"" for EOF). It generates 5-tuples with these
  9  members:
 10
(...skipping 16 matching lines...)
 27  from builtins import open as _builtin_open
 28  from codecs import lookup, BOM_UTF8
 29  import collections
 30  from io import TextIOWrapper
 31  from itertools import chain
 32  import itertools as _itertools
 33  import re
 34  import sys
 35  from token import *
 36
-37  cookie_re = re.compile(r'^[ \t\f]*#.*coding[:=][ \t]*([-\w.]+)', re.ASCII)
+37  cookie_re = re.compile(r'^[ \t\f]*#.*?coding[:=][ \t]*([-\w.]+)', re.ASCII)
 38  blank_re = re.compile(br'^[ \t\f]*(?:[#\r\n]|$)', re.ASCII)
 39
 40  import token
 41  __all__ = token.__all__ + ["COMMENT", "tokenize", "detect_encoding",
 42                             "NL", "untokenize", "ENCODING", "TokenInfo"]
 43  del token
 44
 45  COMMENT = N_TOKENS
 46  tok_name[COMMENT] = 'COMMENT'
 47  NL = N_TOKENS + 1
(...skipping 735 matching lines...)
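The only functional change in the hunk above is the non-greedy `.*?` in `cookie_re`. A minimal sketch of the behavioral difference on a line carrying two coding cookies (the sample line is an invented example; with the greedy pattern the last cookie wins, with the patched pattern the first one does):

```python
import re

# Old (greedy) pattern: '#.*' backtracks from the end of the line, so the
# LAST 'coding' cookie on the line is the one captured.
old_cookie_re = re.compile(r'^[ \t\f]*#.*coding[:=][ \t]*([-\w.]+)', re.ASCII)

# Patched (non-greedy) pattern from this change: '#.*?' stops at the FIRST
# 'coding' cookie on the line.
new_cookie_re = re.compile(r'^[ \t\f]*#.*?coding[:=][ \t]*([-\w.]+)', re.ASCII)

# Invented example of a line with a double coding cookie (issue 26581).
line = "# -*- coding: utf-8 -*- coding: latin-1"

print(old_cookie_re.match(line).group(1))  # latin-1 (last cookie wins)
print(new_cookie_re.match(line).group(1))  # utf-8 (first cookie wins)
```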
783      except OSError as err:
784          error(err)
785      except KeyboardInterrupt:
786          print("interrupted\n")
787      except Exception as err:
788          perror("unexpected error: %s" % err)
789          raise
790
791  if __name__ == "__main__":
792      main()
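At the API level, the pattern change decides which cookie `detect_encoding()` reports when a line carries two of them, and which encoding the `ENCODING` token at the start of the `tokenize()` stream announces. A hedged sketch (the source bytes are invented for illustration; the printed results assume the patched, non-greedy pattern):

```python
import io
import tokenize

# Invented source whose first line carries a double coding cookie.
src = b"# -*- coding: utf-8 -*- coding: latin-1\nx = 1\n"

# detect_encoding() reads just enough lines to decide the encoding and
# returns (encoding, lines_consumed).
enc, consumed = tokenize.detect_encoding(io.BytesIO(src).readline)
print(enc)  # with the non-greedy pattern, the first cookie wins: utf-8

# tokenize() yields 5-tuples; the very first token is an ENCODING token
# announcing the encoding chosen for the rest of the stream.
first = next(tokenize.tokenize(io.BytesIO(src).readline))
print(first.type == tokenize.ENCODING, first.string)
```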
This is Rietveld 894c83f36cb7+