classification
Title: A strange bug in eval() not present in Python 3
Type: behavior Stage: resolved
Components: Library (Lib) Versions: Python 2.7
process
Status: closed Resolution: not a bug
Dependencies: Superseder:
Assigned To: rhettinger Nosy List: rhettinger, xuancong84
Priority: normal Keywords:

Created on 2019-08-07 01:43 by xuancong84, last changed 2019-08-07 03:18 by rhettinger. This issue is now closed.

Messages (2)
msg349146 - (view) Author: wang xuancong (xuancong84) Date: 2019-08-07 01:43
We all know that:
[False, True, False].count(True) gives 1, and
eval('[False, True, False].count(True)') also gives 1.

However, in Python 2,
eval('[False, True, False].count(True)', {}, Counter()) gives 3, while
eval('[False, True, False].count(True)', {}, {}) gives 1.
Take note that a Counter behaves like a defaultdict(int), which in turn is a special kind of dict. Thus, passing one as the locals mapping should not alter the behaviour of eval().
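For reference, a minimal check (runnable under Python 3) of the claim above: Counter is indeed a dict subclass, and a missing key yields 0 instead of a KeyError. (Strictly, Counter subclasses dict directly; it only behaves like a defaultdict(int).)

```python
from collections import Counter, defaultdict

c = Counter()

# Counter is a dict subclass...
print(isinstance(c, dict))                 # True
# ...but not literally a defaultdict subclass.
print(issubclass(Counter, defaultdict))    # False

# Missing keys return 0 instead of raising KeyError.
print(c['anything'])                       # 0
```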

This behaviour is correct in Python 3.
msg349148 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2019-08-07 03:18
This isn't a bug.  

In Python 2, True and False are variable names rather than keywords.  That means they can be shadowed:

>>> False = 10
>>> True = 20
>>> [False, True]
[10, 20]
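By contrast, in Python 3 this shadowing is impossible: True and False are keywords, so assigning to them is rejected at compile time. A quick sketch:

```python
# In Python 3, assigning to False is a SyntaxError, caught when the
# source is compiled (no execution is needed to trigger it).
try:
    compile('False = 10', '<example>', 'exec')
    print('compiled')
except SyntaxError:
    print('SyntaxError')   # this branch runs under Python 3
```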

A Counter() is a kind of dictionary that returns zero rather than raising a KeyError.  When you give eval() a Counter as its locals dict, you're effectively shadowing the False and True variables:

>>> eval('[False, True]', {}, Counter())
[0, 0]

That follows from:

>>> c = Counter()
>>> c['True']
0
>>> c['False']
0

So effectively, your example translates to:

>>> [0, 0, 0].count(0)
3
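Under Python 3 the same call behaves as the reporter expected, because True and False are keywords compiled as constants and cannot be looked up in (or shadowed by) the locals mapping:

```python
from collections import Counter

# In Python 3 the Counter passed as locals cannot shadow True/False,
# so the list really is [False, True, False] and count(True) is 1.
result = eval('[False, True, False].count(True)', {}, Counter())
print(result)   # 1
```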
History
Date User Action Args
2019-08-07 03:18:03 rhettinger set status: open -> closed
assignee: rhettinger
nosy: + rhettinger
messages: + msg349148
resolution: not a bug
stage: resolved
2019-08-07 01:43:58 xuancong84 create