
Author bob.ippolito
Recipients bob.ippolito, enedil, rhettinger, ronron
Date 2018-08-29.23:28:53
Message-id <1535585333.76.0.56676864532.issue34529@psf.upfronthosting.co.za>
In-reply-to
Content
I think the best start would be to add a bit of documentation with an example of how you could work with newline-delimited JSON using the existing module as-is. On the encoding side, you need to ensure that each serialized document is a compact representation without embedded newlines, e.g.:

    for obj in objs:
        # no indent means no embedded newlines; separators make it compact
        yield json.dumps(obj, separators=(',', ':')) + '\n'

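Spelled out as a self-contained helper, the kind of thing the docs could show might look like this (dump_ndjson and fp are just illustrative names, not a proposed API):

    import json

    def dump_ndjson(objs, fp):
        # Write one compact JSON document per line. Without indent,
        # dumps itself never emits a newline inside a document.
        for obj in objs:
            fp.write(json.dumps(obj, separators=(',', ':')) + '\n')
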
I don't think it would make sense to support this directly from dumps, as it's really multiple documents rather than the single document that every other form of dumps will output.

On the read side it would be something like:

    for doc in lines:
        # each line holds exactly one JSON document
        yield json.loads(doc)

I'm not sure if this is common enough (and separable enough from I/O and error handling constraints) to be worth adding the functionality directly into the json module. I think it would be more appropriate in the short to medium term for each service (e.g. BigQuery) to have its own module with helper functions or a framework that encapsulates the protocol that the particular service speaks.
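
As an illustration of the sort of error handling and policy decisions such a helper would have to make, a more defensive reader might look something like this (load_ndjson is just an illustrative name, and whether to skip blank lines or fail on them is exactly the kind of decision I mean):

    import json

    def load_ndjson(fp):
        # Yield one object per non-blank line; report the offending
        # line number when a document fails to parse.
        for lineno, line in enumerate(fp, 1):
            line = line.strip()
            if not line:
                continue
            try:
                yield json.loads(line)
            except json.JSONDecodeError as exc:
                raise ValueError('line %d: %s' % (lineno, exc)) from exc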