This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Title: [2.7] PyStructSequence objects not behaving like namedtuple
Type: enhancement Stage: resolved
Components: Interpreter Core Versions: Python 2.7
Status: closed Resolution: not a bug
Dependencies: Superseder:
Assigned To: Nosy List: Xiang Gao, benjamin.peterson, christian.heimes, eric.snow, rhettinger
Priority: normal Keywords:

Created on 2019-02-06 15:54 by Xiang Gao, last changed 2022-04-11 14:59 by admin. This issue is now closed.

Messages (6)
msg334946 - (view) Author: Xiang Gao (Xiang Gao) Date: 2019-02-06 15:54

On issue 1820, a number of improvements were made to PyStructSequence to make it behave like a namedtuple. These improvements were never ported to Python 2, which makes it troublesome to write Python 2/3 compatible code.

See also:
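For context, a structseq in Python 3 behaves like a namedtuple in most respects; a quick illustration (my example, not from the report) using `time.struct_time`, which is implemented as a PyStructSequence:

```python
import time

t = time.localtime()          # time.struct_time is a C "structseq" type
assert isinstance(t, tuple)   # it is a real tuple subclass
assert t.tm_year == t[0]      # with namedtuple-style attribute access
assert len(t) == 9            # and ordinary tuple behaviour (len, indexing)
```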
msg334965 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-02-06 17:53
tl;dr It's too late to change anything here.  Also, is it actually a problem in practice?


At this point enhancements cannot go into 2.7 (you're welcome to appeal to the release manager).  The changes to `PyStructSequence` (from bpo-1820) appear to have been merged for 3.2 in July 2010.  However, this is just after 2.7 went final and 3 months after the first beta release (at which point no more enhancements are allowed).

Likewise, at this point we cannot revert the original change (from bpo-1820).  Doing so in 3.7 (the only one taking bug fixes) or 3.8 (upcoming) would break compatibility with 3.2 and up.

Notably, this discrepancy has been there since 2.7/3.2 were released (8+ years).  I don't recall any previous reports of a problem here, so I expect the impact is small.

Regardless, from what I understand the problem is that pytorch was returning tuples and (with that PR) was going to return "named" tuples.  For C-extension functions it wouldn't actually be a tuple any more but a structseq instead.  So user code that checked for a tuple would suddenly work differently.  Is that right?

If so, I'm not clear on when this would be a problem in practice.  Why would users of pytorch (or any similar library) be checking the type returned from a specific function in the library's documented API?  If it was returning a tuple before then I'd expect them to be using unpacking or index access.  That would keep working if the library switched to namedtuple and structseq.
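Eric's point can be sketched with a hypothetical `old_api`/`new_api` pair (names invented for illustration): unpacking and index access behave identically across a tuple-to-namedtuple switch, and only an exact-type check notices the change.

```python
from collections import namedtuple

Result = namedtuple("Result", ["values", "indices"])

def old_api():
    return ([3, 1], [0, 2])                  # plain tuple

def new_api():
    return Result(values=[3, 1], indices=[0, 2])  # namedtuple

# Unpacking and index access work the same either way.
for api in (old_api, new_api):
    values, indices = api()
    assert values == [3, 1]
    assert api()[0] == values

# Only an exact-type check distinguishes them; isinstance still passes
# because namedtuple (like structseq) subclasses tuple.
assert type(old_api()) is tuple
assert type(new_api()) is not tuple
assert isinstance(new_api(), tuple)
```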
msg334966 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-02-06 17:57
msg334967 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-02-06 17:58
FWIW, if folks *are* checking for tuple (and I'd love to know why) then I'd recommend that they not. :)  A more Pythonic (and arguably generally better) approach is to stick tightly to the data you need and take advantage of duck-typing.  When possible, try attribute access first if you expect that (e.g. namedtuple or structseq).  Otherwise try unpacking.

For example:

    try:
        x, y = ret.x, ret.y
    except AttributeError:
        ...  # no x/y attributes; handle or fall through

or:

    try:
        x, y, _ = ret
    except TypeError:
        ...  # not iterable
    except ValueError:
        ...  # wrong number of items
Either way is easier to follow than code that relies on type (and length) checking.  They also have the advantage of allowing arbitrary types that fit (duck-typing).
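Putting the two fallbacks together, a hypothetical `coords()` helper (my sketch, not Eric's code) accepts any type that fits, whether it exposes attributes or is merely unpackable:

```python
class Point:
    """Any object with .x/.y attributes works, tuple or not."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

def coords(ret):
    # Prefer attribute access (namedtuple, structseq, arbitrary objects).
    try:
        return ret.x, ret.y
    except AttributeError:
        pass
    # Fall back to unpacking (plain tuple, list, any 2-item iterable).
    x, y = ret
    return x, y

assert coords(Point(1, 2)) == (1, 2)
assert coords((1, 2)) == (1, 2)
assert coords([3, 4]) == (3, 4)
```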
msg334970 - (view) Author: Xiang Gao (Xiang Gao) Date: 2019-02-06 18:55
Hi Eric,

Thanks for your valuable information and fast reply. You understood the problem exactly right: initially pytorch had code like `isinstance(x, tuple)` and lots of `PyTuple_Check`, but when we started to change the return type from tuple to structseq, these checks started to fail.

Yes, I agree the impact of this issue is generally small. Most users of PyTorch would just write something like `values, indices = a.max(dim=0)`, which works perfectly both before and after that PR. The impact falls mostly on libraries that use PyTorch as a dependency. These libraries might have generic code that needs to handle the return values of most operators, for example, something like:

User code:

import torch
import a_deep_learning_library

class MyModel(torch.nn.Module):
    def forward(self, input_):
        return input_.sum(dim=0)

model = MyModel()

Code of a_deep_learning_library:

def do_something(model):
    input_ = prepare_input()
    output = model(input_)
    if torch.is_tensor(output):
        ...
    elif isinstance(output, float):
        ...
    elif isinstance(output, tuple):
        sum_ = sum(output)
    elif ...

Unpacking does not always work because it is hard to differentiate between these two cases: `a, b = torch.tensor([1, 2])` and `a, b = torch.tensor([1, 2]), torch.tensor([3, 4])` — but your suggestion is very valuable.
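The ambiguity can be sketched with plain containers standing in for tensors (an analogue, not actual PyTorch code): unpacking succeeds in both cases, so it cannot tell them apart.

```python
single = [1, 2]            # analogue of torch.tensor([1, 2])
pair = ([1, 2], [3, 4])    # analogue of (tensor_a, tensor_b)

a, b = single   # unpacks the two elements of one container
x, y = pair     # unpacks two separate containers

# Both assignments succeed, so unpacking alone cannot distinguish
# "one multi-element result" from "two separate results".
assert (a, b) == (1, 2)
assert (x, y) == ([1, 2], [3, 4])
```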

I am neither a member of the PyTorch team nor a Facebook employee; I am just a community contributor working on that issue. I will open an issue on PyTorch to discuss the problem.

msg334994 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-02-07 00:43
Sounds good.  I hope that helped.
Date User Action Args
2022-04-11 14:59:10adminsetgithub: 80095
2019-02-07 00:43:55eric.snowsetstatus: open -> closed
resolution: not a bug
messages: + msg334994

stage: resolved
2019-02-06 18:55:41Xiang Gaosetmessages: + msg334970
2019-02-06 17:58:08eric.snowsetmessages: + msg334967
2019-02-06 17:57:24eric.snowsetmessages: + msg334966
2019-02-06 17:53:05eric.snowsetnosy: + eric.snow, rhettinger, christian.heimes, benjamin.peterson
type: behavior -> enhancement
messages: + msg334965
2019-02-06 15:54:42Xiang Gaocreate