This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: add the option for json.dumps to return newline delimited json
Type: enhancement Stage:
Components: Library (Lib) Versions: Python 3.8
process
Status: open Resolution:
Dependencies: Superseder:
Assigned To: bob.ippolito Nosy List: Thibault Molleman, bob.ippolito, enedil, rhettinger, ronron, serhiy.storchaka
Priority: normal Keywords:

Created on 2018-08-28 12:47 by ronron, last changed 2022-04-11 14:59 by admin.

Messages (10)
msg324244 - (view) Author: ron (ronron) Date: 2018-08-28 12:47
Many service providers such as Google BigQuery do not accept JSON.
They accept newline-delimited JSON.

https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-json#limitations

please allow to receive this format directly from the dump.
msg324257 - (view) Author: Michał Radwański (enedil) * Date: 2018-08-28 15:08
So this format is just a series of JSON documents, delimited by newlines.
Instead of changing the API, you might consider this piece of code:

import json

def ndjsondump(objects):
    return '\n'.join(json.dumps(obj) for obj in objects)

For the reverse direction, first `str.splitlines` the text, then `json.loads` each line separately.
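A sketch of that reverse direction (the name `ndjsonload` is illustrative, not an existing API):

```python
import json

def ndjsonload(text):
    # Each non-empty line is an independent JSON document.
    return [json.loads(line) for line in text.splitlines() if line.strip()]
```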

Does it satisfy your needs?
msg324271 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2018-08-28 18:18
Would this need be fulfilled by the *separators* option?

    >>> from json import dumps
    >>> query = dict(system='primary', action='load')
    >>> print(dumps(query, separators=(',\n', ': ')))
    {"system": "primary",
    "action": "load"}
msg324305 - (view) Author: ron (ronron) Date: 2018-08-29 07:32
Raymond Hettinger's answer is incorrect.

The main difference between JSON and newline-delimited JSON is that newline-delimited JSON contains a complete, valid JSON document on each line. Meaning you can go to line #47 and what you find on that line is a valid JSON document, unlike regular JSON where if one bracket is wrong the whole file is unreadable.

You can not just add \n after one "object". You also need to change the brackets.

Keep in mind that not all JSON documents are simple; some contain a huge number of nested objects. You must identify where a document starts and where it ends without being confused by the nested ones.
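A quick sketch (not from the thread) illustrating the difference: the *separators* trick splits one document across lines, while a newline-delimited file carries a standalone document on each line.

```python
import json

# The separators trick inserts newlines *inside* a single document:
doc = json.dumps({"system": "primary", "action": "load"}, separators=(',\n', ': '))
first_line = doc.splitlines()[0]  # '{"system": "primary",'
try:
    json.loads(first_line)
    line_is_valid = True
except json.JSONDecodeError:
    line_is_valid = False  # the line is only a fragment, not valid JSON

# In newline-delimited JSON, every line parses on its own:
ndjson_line = json.dumps({"system": "primary"})
parsed = json.loads(ndjson_line)
```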


There are many programming solutions to this issue.
For example:
https://stackoverflow.com/questions/51595072/convert-json-to-newline-json-standard-using-python/


My point is that this is a new format which is going to be widely accepted since Google adopted it for BigQuery.

Reversing strings can also be easily implemented, yet Python still built a function to do that for the user.

I think it's wise to add support for this within the json library, saving programmers the trouble of working out how to implement it.
msg324358 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2018-08-29 23:11
> The main difference between Json and new line delimited json is that new line contains valid json in each line. 

It is up to Bob to decide whether this feature request is within the scope of the module.
msg324360 - (view) Author: Bob Ippolito (bob.ippolito) * (Python committer) Date: 2018-08-29 23:28
I think the best start would be to add a bit of documentation with an example of how you could work with newline delimited json using the existing module as-is. On the encoding side you need to ensure that it's a compact representation without embedded newlines, e.g.:

    for obj in objs:
        yield json.dumps(obj, separators=(',', ':')) + '\n'

I don't think it would make sense to support this directly from dumps, as it's really multiple documents rather than the single document that every other form of dumps will output.

On the read side it would be something like:

    for doc in lines:
        yield json.loads(doc)

I'm not sure if this is common enough (and separable enough from I/O and error handling constraints) to be worth adding the functionality directly into the json module. I think it would be more appropriate in the short to medium term for each service (e.g. BigQuery) to have its own module with helper functions or a framework that encapsulates the protocol that the particular service speaks.
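Putting the two sketches above together, a self-contained round trip might look like the following (the `dump_ndjson`/`load_ndjson` names are illustrative, not a proposed API):

```python
import io
import json

def dump_ndjson(objs, fp):
    # Compact separators guarantee no embedded newlines inside a document.
    for obj in objs:
        fp.write(json.dumps(obj, separators=(',', ':')) + '\n')

def load_ndjson(fp):
    # Each line of the file is parsed as an independent document.
    for line in fp:
        if line.strip():
            yield json.loads(line)

buf = io.StringIO()
dump_ndjson([{"a": 1}, {"b": [2, 3]}], buf)
buf.seek(0)
roundtrip = list(load_ndjson(buf))
```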
msg324363 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2018-08-30 04:26
This format is known as JSON Lines: http://jsonlines.org/. Its support in the user code is trivial -- just one or two lines of code.

Writing:

    for item in items:
        json.dump(item, file)
        file.write('\n')  # terminate each document with a newline

or

    jsondata = '\n'.join(json.dumps(item) for item in items)

Reading:

    items = [json.loads(line) for line in file]

or

    items = [json.loads(line) for line in jsondata.splitlines()]

See also issue31553 and issue34393. I think all these propositions should be rejected.
msg324370 - (view) Author: ron (ronron) Date: 2018-08-30 07:28
I'm a bit confused here.

On one hand you say it's two lines of code. On the other hand you suggest that each service provider will implement its own functions.

What's the harm in adding small, unbreakable functionality?

Your points about small code could also have been raised against implementing reverse() - yet Python still implemented it, saving the developer the two lines of code.

In the end, small change or not, this is a new format.
Conversion between formats is expected from any programming language.
msg324371 - (view) Author: Bob Ippolito (bob.ippolito) * (Python committer) Date: 2018-08-30 08:00
I suggested that each module would likely implement its own functions tailored to that project's IO and error handling requirements. The implementation may differ slightly depending on the protocol. This is consistent with how JSON is typically dealt with from a web framework, for example.
msg324475 - (view) Author: ron (ronron) Date: 2018-09-02 08:26
Well... when handling GBs of data, it's preferable to generate the file directly in the required format rather than doing conversions.

Newline-delimited JSON is a format... protocols don't matter here...
I still think the library should allow the user to create this format directly.

Let's get out of the scope of Google or others... Newline-delimited JSON is a great format: it allows you to take a "row" from the middle of the file and read it. You don't need to load a 1 GB file into a parser in order to see it; you can just copy one row... We are adopting this format for all our JSON files, so it would be nice to get this directly from the library.
History
Date                 User               Action  Args
2022-04-11 14:59:05  admin              set     github: 78710
2019-08-18 21:49:41  Thibault Molleman  set     nosy: + Thibault Molleman
2018-09-02 08:26:28  ronron             set     messages: + msg324475
2018-08-30 08:00:11  bob.ippolito       set     messages: + msg324371
2018-08-30 07:28:57  ronron             set     messages: + msg324370
2018-08-30 04:26:04  serhiy.storchaka   set     nosy: + serhiy.storchaka; messages: + msg324363
2018-08-29 23:28:53  bob.ippolito       set     messages: + msg324360
2018-08-29 23:11:19  rhettinger         set     versions: + Python 3.8, - Python 3.7; nosy: + bob.ippolito; messages: + msg324358; assignee: bob.ippolito; components: + Library (Lib)
2018-08-29 07:32:49  ronron             set     messages: + msg324305
2018-08-28 18:18:42  rhettinger         set     nosy: + rhettinger; messages: + msg324271
2018-08-28 15:08:49  enedil             set     nosy: + enedil; messages: + msg324257
2018-08-28 12:47:12  ronron             create