Title: \w not helpful for non-Roman scripts
Type:
Stage: resolved
Components: Regular Expressions
Versions: Python 3.3, Python 3.4, Python 2.7
Status: closed
Resolution: duplicate
Dependencies:
Superseder: tokenize fails on some Other_ID_Start or Other_ID_Continue
View: 24194
Assigned To:
Nosy List: ezio.melotti, l0nwlf, lemburg, loewis, mrabarnett, nathanlmiles, rsc, terry.reedy, timehorse, vstinner
Priority: normal
Keywords:

Created on 2007-04-02 15:27 by nathanlmiles, last changed 2018-03-15 00:32 by terry.reedy. This issue is now closed.

Messages (15)
msg31688 - (view) Author: nlmiles (nathanlmiles) Date: 2007-04-02 15:27
When I try to use r"\w+(?u)" to find words in Unicode Devanagari text, bad things happen: words get chopped into small pieces. I think this is likely because vowel signs such as U+093E are not considered to match \w.

I think that if you wish \w to be useful for Indic
scripts, \w will need to be expanded to include the Unicode character categories Mc, Mn, and Me.

I am using Python 2.4.4 on Windows XP SP2.

I ran the following script to see the characters which I think ought to match \w but don't

import re
import unicodedata

text = ""
for i in range(0x901,0x939): text += unichr(i)
for i in range(0x93c,0x93d): text += unichr(i)
for i in range(0x93e,0x94d): text += unichr(i)
for i in range(0x950,0x954): text += unichr(i)
for i in range(0x958,0x963): text += unichr(i)
parts = re.findall("\W(?u)", text)
for ch in parts:
    print "%04x" % ord(ch), unicodedata.category(ch)

The odd character here is U+0904. Its categorization seems to imply that you are using the Unicode 3.0 database, but perhaps later versions of Python are using the current 5.0 database.
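The same check can be sketched in Python 3 (a hypothetical modernization of the report's script, not code from the report itself): list the code points in the reported Devanagari ranges whose Unicode category is a mark (Mc, Mn, or Me), i.e. the characters the report argues should count as word characters.

```python
import unicodedata

# Devanagari ranges from the original report (start inclusive, end exclusive)
ranges = [(0x901, 0x939), (0x93c, 0x93d), (0x93e, 0x94d),
          (0x950, 0x954), (0x958, 0x963)]

marks = []
for start, end in ranges:
    for cp in range(start, end):
        cat = unicodedata.category(chr(cp))
        if cat in ('Mc', 'Mn', 'Me'):
            marks.append((cp, cat))

for cp, cat in marks:
    print('%04x %s' % (cp, cat))
```

Running this shows, for example, that U+0901 (candrabindu) is Mn and U+093E (vowel sign AA) is Mc, matching the report's claim that these are the characters \w misses.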
msg31689 - (view) Author: Marc-Andre Lemburg (lemburg) * (Python committer) Date: 2007-04-02 15:38
Python 2.4 is using Unicode 3.2. Python 2.5 ships with Unicode 4.1.

We're likely to ship Unicode 5.x with Python 2.6 or a later release.

Regarding the char classes: I don't think Mc, Mn and Me should be considered parts of a word. Those are marks which usually separate words.
msg76556 - (view) Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2008-11-28 21:14
Vowel 'marks' are condensed vowel characters and are very much part of
words and do not separate words.  Python3 properly includes Mn and Mc as
identifier characters.

For instance, the word 'hindi' has 3 consonants 'h', 'n', 'd', 2 vowels
'i' and 'ii' (long i) following 'h' and 'd', and a null vowel (virama)
after 'n'. [The null vowel is needed because no vowel mark indicates the
default vowel short a.  So without it, the word would be hinadii.]
The difference between the devanagari vowel characters, used at the
beginning of words, and the vowel marks, used thereafter, is purely
graphical and not phonological.  In short, in the sanskrit family,
word = syllable+
syllable = vowel | consonant + vowel mark

From a clp post asking why re does not see hindi as a word:


.isalpha and possibly other Unicode methods need fixing also
>>> 'हिन्दी'.isalpha()  # 2.x and 3.0
False
msg76557 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2008-11-28 21:33
Unicode TR#18 defines \w as a shorthand for

    \p{alpha}
    \p{gc=Mark}
    \p{digit}
    \p{gc=Connector_Punctuation}

which would include all marks. We should recursively check whether we
follow the recommendation (e.g. \p{alpha} refers to all character having
the Alphabetic derived core property, which is Lu+Ll+Lt+Lm+Lo+Nl +
Other_Alphabetic, where Other_Alphabetic is a selected list of
additional character - all from Mn/Mc)
msg81221 - (view) Author: Matthew Barnett (mrabarnett) * (Python triager) Date: 2009-02-05 19:51
In issue #2636 I'm using the following:

Alpha is Ll, Lo, Lt, Lu.
Digit is Nd.
Word is Ll, Lo, Lt, Lu, Mc, Me, Mn, Nd, Nl, No, Pc.

These are what are specified at
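As a rough illustration of that "Word" category set (a sketch using unicodedata, not the regex module's actual implementation), one can segment text by classifying each character and grouping consecutive word characters:

```python
import unicodedata
from itertools import groupby

# The "Word" categories listed above, plus '_' as \w traditionally allows.
WORD_CATS = {'Ll', 'Lo', 'Lt', 'Lu', 'Mc', 'Me', 'Mn',
             'Nd', 'Nl', 'No', 'Pc'}

def is_word_char(ch):
    return ch == '_' or unicodedata.category(ch) in WORD_CATS

def words(text):
    # Group consecutive word characters into words, dropping separators.
    return [''.join(group) for key, group in groupby(text, key=is_word_char) if key]

print(words('हिन्दी'))  # one word of 6 code points, marks included
```

With these categories, the Devanagari string discussed below comes out as a single 6-codepoint word, which is the behavior the regex module implements.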
msg190075 - (view) Author: Mark Lawrence (BreamoreBoy) * Date: 2013-05-26 11:02
Am I correct in saying that this must stay open as it targets the re module but as given in msg81221 is fixed in the new regex module?
msg190100 - (view) Author: Matthew Barnett (mrabarnett) * (Python triager) Date: 2013-05-26 16:56
I had to check what re does in Python 3.3:

>>> print(len(re.match(r'\w+', 'हिन्दी').group()))
1

Regex does this:

>>> print(len(regex.match(r'\w+', 'हिन्दी').group()))
6
msg190219 - (view) Author: Jeffrey C. Jacobs (timehorse) Date: 2013-05-28 15:22
Matthew, I think that is considered a single word in Sanskrit or Thai, so Python 3.x is correct.  In this case you've written the Sanskrit word for Hindi.
msg190226 - (view) Author: Matthew Barnett (mrabarnett) * (Python triager) Date: 2013-05-28 16:51
I'm not sure what you're saying.

The re module in Python 3.3 matches only the first codepoint, treating the second codepoint as not part of a word, whereas the regex module matches all 6 codepoints, treating them all as part of a single word.
msg190268 - (view) Author: Jeffrey C. Jacobs (timehorse) Date: 2013-05-29 03:34
Maybe you could show us the byte-for-byte hex of the string you're testing so we can examine if it's really a code point intending word boundary or just a code point for the sake of beginning a new character.
msg190322 - (view) Author: Matthew Barnett (mrabarnett) * (Python triager) Date: 2013-05-29 17:31
You could've obtained it from msg76556 or msg190100:

>>> print(ascii('हिन्दी'))
'\u0939\u093f\u0928\u094d\u0926\u0940'
>>> import re, regex
>>> print(ascii(re.match(r"\w+", '\u0939\u093f\u0928\u094d\u0926\u0940').group()))
'\u0939'
>>> print(ascii(regex.match(r"\w+", '\u0939\u093f\u0928\u094d\u0926\u0940').group()))
'\u0939\u093f\u0928\u094d\u0926\u0940'
msg190323 - (view) Author: Jeffrey C. Jacobs (timehorse) Date: 2013-05-29 18:23
Thanks Matthew, and sorry to put you through more work; I just wanted to verify exactly which Unicode code points (UTF-16, I take it) were being used, to check whether the Unicode standard expected them to be treated as separate words or as letters within a single word.  Sanskrit is an alphabet, not an ideograph, so each symbol is considered a letter.  So I believe your implementation is correct and, yes, re is at fault.  There are just accenting characters and letters in that sequence, so they should be interpreted as a single word of 6 letters, as you determined, and not a match of only the first letter.  Mind you, I misinterpreted msg190100: I thought you were using findall, in which case the answer should be 1; but as the length of the extracted match, yes, 6, I totally agree.  Sorry for the misunderstanding.  The Unicode code chart for Devanagari contains the characters for Hindi.
msg190324 - (view) Author: Matthew Barnett (mrabarnett) * (Python triager) Date: 2013-05-29 18:46
UTF-16 has nothing to do with it, that's just an encoding (a pair of them actually, UTF-16LE and UTF-16BE).

And I don't know why you thought I was using findall in msg190100 when the examples were using match! :-)
msg190326 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2013-05-29 20:33
Let see Modules/_sre.c:

#define SRE_UNI_IS_WORD(ch) (SRE_UNI_IS_ALNUM(ch) || (ch) == '_')

>>> [ch.isalpha() for ch in '\u0939\u093f\u0928\u094d\u0926\u0940']
[True, False, True, False, True, False]
>>> import unicodedata
>>> [unicodedata.category(ch) for ch in '\u0939\u093f\u0928\u094d\u0926\u0940']
['Lo', 'Mc', 'Lo', 'Mn', 'Lo', 'Mc']

So the matching ends at U+093f because its category is a "spacing combining" (Mc), which is part of the Mark category, where the re module expects an alphanumeric character.


Unicode TR#18 defines \w as a shorthand for

    \p{alpha}
    \p{gc=Mark}
    \p{digit}
    \p{gc=Connector_Punctuation}

So if we want to respect this standard, the re module needs to be modified to accept other Unicode categories.
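Until then, a narrow workaround is possible with the stock re module by adding the missing mark code points to the character class by hand. The sketch below is Devanagari-specific (it hard-codes the U+093E..U+094D dependent vowel signs and virama); it is an illustration, not a TR#18-conformant \w.

```python
import re

# \w plus the Devanagari dependent vowel signs and virama (U+093E..U+094D).
# A script-specific patch only; other scripts' marks would need their own ranges.
pattern = re.compile(r'(?:\w|[\u093e-\u094d])+')

m = pattern.match('\u0939\u093f\u0928\u094d\u0926\u0940')  # 'हिन्दी'
print(len(m.group()))  # 6 -- the whole word, unlike bare \w+
```

This recovers the whole 6-codepoint word because every combining mark in the string falls inside the added range.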
msg313849 - (view) Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2018-03-15 00:32
Whatever I may have said before, I favor supporting the Unicode standard for \w, which is related to the standard for identifiers.

This is one of 2 issues about \w being defined too narrowly.  I am somewhat arbitrarily closing this as a duplicate of #12731 (fewer digits ;-).

There are 3 issues about tokenize.tokenize failing on valid identifiers, defined as \w sequences whose first char is an identifier itself (and therefore a start char).  In msg313814 of #32987, Serhiy indicates which start and continue identifier characters are matched by \W for re and regex.  I am leaving #24194 open as the tokenizer name issue.
Date User Action Args
2018-03-15 00:32:39  terry.reedy  set  status: open -> closed; superseder: tokenize fails on some Other_ID_Start or Other_ID_Continue; messages: + msg313849; resolution: duplicate; stage: resolved
2014-02-03 17:08:44  BreamoreBoy  set  nosy: - BreamoreBoy
2013-05-29 20:33:58  vstinner  set  messages: + msg190326
2013-05-29 18:46:46  mrabarnett  set  messages: + msg190324
2013-05-29 18:23:52  timehorse  set  messages: + msg190323
2013-05-29 17:31:08  mrabarnett  set  messages: + msg190322
2013-05-29 03:34:40  timehorse  set  messages: + msg190268
2013-05-28 16:51:51  mrabarnett  set  messages: + msg190226
2013-05-28 15:22:33  timehorse  set  messages: + msg190219
2013-05-26 21:09:23  terry.reedy  set  versions: + Python 3.3, Python 3.4, - Python 3.1
2013-05-26 16:56:19  mrabarnett  set  messages: + msg190100
2013-05-26 11:02:14  BreamoreBoy  set  nosy: + BreamoreBoy; messages: + msg190075
2010-03-31 01:29:17  l0nwlf  set  nosy: + l0nwlf
2010-03-05 15:37:50  vstinner  set  nosy: + vstinner
2009-05-12 14:41:55  ezio.melotti  set  nosy: + ezio.melotti
2009-02-05 19:51:20  mrabarnett  set  nosy: + mrabarnett; messages: + msg81221
2008-11-28 21:33:40  loewis  set  nosy: + loewis; messages: + msg76557
2008-11-28 21:14:55  terry.reedy  set  nosy: + terry.reedy; messages: + msg76556; versions: + Python 3.1
2008-09-28 19:20:16  timehorse  set  nosy: + timehorse; versions: + Python 2.7, - Python 2.4
2008-04-24 21:07:01  rsc  set  nosy: + rsc
2007-04-02 15:27:11  nathanlmiles  create