Message25692
hi,
I have found a bug in the 'tokenize' module: it merges
the COMMENT and NL tokens for lines that start with a
comment.
I have made a few changes and it seems to work
fine. A patch follows:
*** /usr/lib/python2.4/tokenize.py	2005-01-02 03:34:20.000000000 -0200
--- tokenize.py	2005-06-30 14:31:19.000000000 -0300
***************
*** 216,223 ****
                  pos = pos + 1
              if pos == max: break

!             if line[pos] in '#\r\n':           # skip comments or blank lines
!                 yield ((NL, COMMENT)[line[pos] == '#'], line[pos:],
!                        (lnum, pos), (lnum, len(line)), line)
                  continue
--- 216,235 ----
                  pos = pos + 1
              if pos == max: break

!             if line[pos] == '#':               # skip comments
!                 end = len(line) - 1
!                 while end > pos and line[end] in '\r\n':
!                     end = end - 1
!                 end = end + 1
!
!                 yield (COMMENT, line[pos:end],
!                        (lnum, pos), (lnum, end), line)
!                 yield (NL, line[end:],
!                        (lnum, end), (lnum, len(line)), line)
!                 continue
!
!             if line[pos] in '\r\n':            # skip blank lines
!                 yield (NL, line[pos:],
!                        (lnum, pos), (lnum, len(line)), line)
                  continue
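For reference, the split behavior this patch proposes is what the tokenize module in current Python versions produces (this issue has long since been fixed upstream): a comment-only line yields a COMMENT token followed by a separate NL token. A quick check with the modern tokenize API:

```python
import io
import tokenize

# Tokenize a source line that consists solely of a comment.
source = "# just a comment\n"
toks = list(tokenize.generate_tokens(io.StringIO(source).readline))

# Show the token stream: COMMENT and NL arrive as separate tokens.
for tok in toks:
    print(tokenize.tok_name[tok.type], repr(tok.string))
```

With the old (buggy) behavior described above, the comment text and the trailing newline would have been merged into a single token instead.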
Best regards,
Eduardo Aguiar (eaguiar@programmer.net)
Date                | User  | Action | Args
2007-08-23 14:32:50 | admin | link   | issue1230484 messages
2007-08-23 14:32:50 | admin | create |