
Author aguiar
Date 2005-06-30.17:35:21
Content
hi,

I have found a bug in the 'tokenize' module: it merges
the COMMENT and NL tokens for lines that start with a
comment.
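
For reference, here is a minimal sketch (against Python 2.4,
using a made-up source string) that makes the behaviour
visible by printing every token produced for a comment-only
line:

    import tokenize
    from StringIO import StringIO

    source = "# just a comment\nx = 1\n"

    # Print the type and text of each token so the handling of
    # the comment-only first line can be inspected.
    for tok in tokenize.generate_tokens(StringIO(source).readline):
        print tokenize.tok_name[tok[0]], repr(tok[1])

Without the patch below, the comment line comes back as a
single COMMENT token whose string still contains the trailing
newline, and no NL token is emitted; with the patch, a COMMENT
token ('# just a comment') is followed by a separate NL token
('\n').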

I have made a few changes and it seems to be working
fine. A patch follows:

*** /usr/lib/python2.4/tokenize.py      2005-01-02 03:34:20.000000000 -0200
--- tokenize.py 2005-06-30 14:31:19.000000000 -0300
***************
*** 216,223 ****
                  pos = pos + 1
              if pos == max: break
  
!             if line[pos] in '#\r\n':           # skip comments or blank lines
!                 yield ((NL, COMMENT)[line[pos] == '#'], line[pos:],
                             (lnum, pos), (lnum, len(line)), line)
                  continue
  
--- 216,235 ----
                  pos = pos + 1
              if pos == max: break
  
!             if line[pos] == '#':           # skip comments
!                 end = len(line) - 1
!                 while end > pos and line[end] in '\r\n':
!                     end = end - 1
!                 end = end + 1
! 
!                 yield (COMMENT, line[pos:end],
!                            (lnum, pos), (lnum, end), line)
!                 yield (NL, line[end:],
!                            (lnum, end), (lnum, len(line)), line)
!                 continue
! 
!             if line[pos] in '\r\n':           # skip blank lines
!                 yield (NL, line[pos:],
                             (lnum, pos), (lnum, len(line)), line)
                  continue

Best regards,
Eduardo Aguiar (eaguiar@programmer.net)