Adding an atomic FS write API #52850
Comments
Often I have contents to be written to a file at a given path.
IMO the last place where people would look for such a function is inside distutils. The idea is to provide a similar function in shutil. |
This sounds silly to me. You can write a file in two lines:

    with open("foo", "wb") as f:
        f.write(contents)

If you want to do something more useful, you can add a function for atomic writing of a file. |
Does the standard library really need something so trivial? I'd put it in your own program. And I'd make the one in distutils private (and fix it to use a with statement). |
Then perhaps it would be better as a context manager:

    with shutil.atomic_write("foo", "wb") as f:
        f.write("mycontents") |
Well, a context manager sounds like overkill since we just want to write some content to a file (and nothing else during that context). I think a simple call is more straightforward:

    shutil.atomic_write("foo", "contents", mode="wb") |
Notice that "contents" could be replaced here by an iterator that yields the data to pass to write().
Using a context manager, though, is the recommended idiom to write files. |
This idiom makes sense when you want to do several things with an open file, and replaces the usual try..finally idiom. That's not what we want to do here. We want to write data to a file in a single step, in an atomic manner. Giving developers the ability to work in a context manager means that you potentially give them the ability to break this atomicity. So I don't think the context manager idiom prevails, and it should be avoided: shutil.atomic_write is asked to write a file, given some data, and doesn't return until it's done. |
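For illustration, a minimal sketch of the single-call API argued for above (the name `atomic_write` and its signature are hypothetical, not an actual `shutil` function; it relies on `os.replace()`, which was only added later, in Python 3.3):

```python
import os
import tempfile

def atomic_write(path, data, mode="wb"):
    """Write *data* to *path* in one call so that readers see either
    the old contents or the new contents, never a partial write.

    Sketch only: the temporary file is created in the destination
    directory so the final rename stays on one filesystem.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname,
                               prefix=os.path.basename(path) + ".")
    try:
        with os.fdopen(fd, mode) as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # push data to disk before the rename
        os.replace(tmp, path)      # atomic swap on POSIX; best effort elsewhere
    except BaseException:
        os.unlink(tmp)             # leave the destination untouched on failure
        raise
```

This keeps the whole operation inside one function, so callers cannot break the atomicity the way they could inside an open-ended context manager.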
Considering that it's extremely difficult to implement this correctly and in a cross-platform way, I think it makes sense as a stdlib addition (though I'd add it as a method of a |
I agree that with the addition of the new requirement that it be an atomic write, it should be in a library. I'd also do it as a context manager, since that's the more general case. distutils2 can either call the context manager version, or have a trivial function that calls the context manager. |
AFAICT this doesn't make sense. The writing isn't atomic; the renaming is. |
If the disk is full, write fails and the new file only contains a part of 'contents'. If the file does already exist and the write fails, the original content is lost. The correct pattern is something like:

    @contextlib.contextmanager
    def atomic_write(filename):
        tempname = filename + ".tmp"
        out = open(tempname, "w")
        try:
            yield out
            if hasattr('os', 'fsync'):
                os.fsync(out.fileno())
            out.close()
            if os.name in ('nt', 'ce'):
                os.unlink(filename)
                # ... hope that it doesn't fail here ...
            os.rename(tempname, filename)
        except:
            out.close()
            os.unlink(tempname)
            raise

Remarks:
This context manager is *not* atomic on every OS. It's only atomic on some OSes, and it may depend on the kernel version (see recent discussions about ext3/ext4, fsync and write barriers). |
An article about ext3 and fsync: "Solving the ext3 latency problem" Another good article, written by the maintainer of ext4 (Theodore Ts'o): "Don’t fear the fsync!" |
We need an atomic rename function to implement atomic_write(): I opened a new issue, bpo-8828. |
Trac has an AtomicFile class which copies file mode, owner, etc. |
I have a class for overwriting files "properly" that I use in three of my projects. It aims to be as atomic as possible and supports creating backups, but it doesn't have functions to set or retain permissions on request (which might be desirable if such a function is added to the stdlib). I give it here for reference and ideas.
I also have a simple unit test for it, but I've run it on POSIX only. Here's the part of the code that does the open/close work: And the unit test: I hope that's useful. |
Well, since I had a typo in the main method which called the Windows implementation on POSIX, the unit test exercised the code for Windows when run on POSIX. Heh, I'm sorry for the noise, but it seems that re-reading the code four times and running the unit tests is not enough. Corrected revision: http://bazaar.launchpad.net/~exabyte/blackherd/async-refactor/view/62/blackherd/misc.py#L498 |
Something's missing in all the implementations presented: an fsync() on the containing directory. |
I upgraded my proposed approach to include the directory fsync, and to make a copy when backing up instead of a rename (is that good?): http://bazaar.launchpad.net/~exabyte/blackherd/async-refactor/view/66/blackherd/misc.py#L498 |
I'd like to make this move forward. |
Some OSes don't provide an atomic rename. If we only define a function when it is atomic (as we do in the posix module, which only exposes functions available on the OS), programs will have to write two versions of their function. I prefer to write a "best-effort" function, and so I consider that shutil is the best place for such a function. If this function requires an OS-specific primitive, that primitive can be added to posix (or another module). Example:

    with shutil.write_file(filename) as f:
        f.write("I hope it will be atomic!")

We need an "atomic rename file" function to implement "atomic write file": see bpo-8828. |
os.path is mostly about path manipulation functions, although it also performs directory I/O (e.g. realpath()). |
I disagree. People who explicitly use an atomic file API want atomicity/persistency; otherwise they wouldn't use it. Exposing a function that may, or may not, be atomic is just plain wrong.
As noted by Jean-Paul, shutil stands for "shell utils": that would be a rather poor choice: atomicfile fits in shutil as much as tempfile would :-)
Which one? Apparently, Windows >= XP does have an atomic rename(), and rename(2) should be atomic on every POSIX-compliant OS.
I agree. Thoughts? |
I adhere to "flat is better than nested". e.g. it always irks me to type "urllib.request" instead of "urllib". A 3-level deep nesting in the stdlib would certainly shock many people :) |
Well, that's what (selective) imports are for, no? Anyway, my "thoughts?" wasn't referring to this namespace hierarchy. |
Does it have to be a class? What would be the operations apart from
I think that's our best compromise. |
Well, I thought that making it a file-like object could be useful:
OK. |
Mmmh... I might have misunderstood the proposal, then. Am I missing something? |
Why so? |
Ah, it's a temporary file indeed.
I see. Yes, then an AtomicFile class makes sense (not for logging, though). And I understand your point about it being available in io or some io submodule. |
Yes, logging was a poor example :-)
OK, I'll try that. |
I'm opposed to adding "atomic" files to the standard library. This has nothing to do with the original issue; anybody contributing to the issue is making up user needs that have not been sufficiently demonstrated. It would be best to release such code on PyPI first, and wait a few years to see whether there really is an end-user demand for the feature. |
Since it may not be clear from the rest of the thread, the os.replace() cross-platform atomic renaming building block was added in 3.3 (in bpo-8828, which Victor linked above) Another stdlib use case now also exists in importlib (see http://hg.python.org/cpython/file/3.3/Lib/importlib/_bootstrap.py#l121). However, due to bootstrapping issues, importlib would not be able to use a version that was provided in a module like shutil. |
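To make the building block concrete: `os.replace()` overwrites the destination in a single step, and unlike `os.rename()` it also succeeds on Windows when the destination already exists. A small demonstration:

```python
import os
import tempfile

# Create a destination file and a fully-written temporary sibling
# in the same directory, then swap them in one step.
workdir = tempfile.mkdtemp()
dest = os.path.join(workdir, "config.txt")
tmp = dest + ".tmp"

with open(dest, "w") as f:
    f.write("old contents")
with open(tmp, "w") as f:
    f.write("new contents")

os.replace(tmp, dest)   # succeeds even though dest already exists

with open(dest) as f:
    print(f.read())     # prints "new contents"
```

Because the source and destination sit in the same directory (and thus on the same filesystem), the replace cannot fail with a cross-device error.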
Here is a patch based on the new os.replace() function. I only tested the patch on Linux. My patch must be tested on Windows: I don't know what happens when the file object is closed while the NamedTemporaryFile.delete attribute is set to False. Does close() raise an exception? I hesitate between the prefix and dir parameters for NamedTemporaryFile. If prefix is equal to filename, is it possible for NamedTemporaryFile to create a file called filename? Or does it always add a suffix? If the program crashes before the rename, I prefer to leave "file.XXX" temporary files (e.g. "file.tmp", "file.tmp.2", ...) instead of random names like "tmpXXX". It is important to create the temporary file in the same directory so as not to create it on a different file system: os.replace() fails on POSIX if the source and the destination are on two different file systems. If importing tempfile in shutil is a problem, it may be done inside atomic_write() (maybe using a global variable to ensure that the module is only imported once).

First, I tried to handle the temporary file manually (without the tempfile module) with a constant suffix. I only created a temporary file if the creation of the file (in exclusive mode, using the "x" mode) failed. But Antoine pointed out to me on IRC that the function is not atomic, because the file may be invalid before its content is completely written and flushed. If two threads try to create the same file, the second thread gets a FileExistsError, and the caller has to handle it. The caller may remove the temporary file of the first thread, which may make the situation even worse. I prefer to use tempfile.NamedTemporaryFile because it is portable and well tested. It also uses a random suffix. On Windows, the O_TEMPORARY flag is passed to os.open(), so the file is removed by the OS when the file is closed, or when the program crashes. |
    + # Flush Python buffers to system buffers

A fsync=True parameter may be added if calling os.fsync() is an issue. |
"I prefer to use tempfile.NamedTemporaryFile because it is portable and well tested. It also uses a random suffix. On Windows, the O_TEMPORARY flag is passed to os.open(), ..." Oh, and it also sets the close-on-exec flag, which is more secure. |
atomic_write.patch calls os.replace(src, dst) while the src file is open. It works on Linux, but it sounds a little bit strange to me and may fail on other platforms. Here is another patch (atomic_write_mkstemp.patch) using tempfile.mkstemp() instead of tempfile.NamedTemporaryFile(delete=True) to control when the file is closed and removed. |
This code contains a bug: hasattr('os', 'fsync') is never True :-)

This part is interesting:

    # fsync on Mac OS X doesn't work, it requires using the
    # F_FULLFSYNC fcntl
    if hasattr(fcntl, 'F_FULLFSYNC'):
        fcntl.fcntl(self._fd, fcntl.F_FULLFSYNC)

=> see also bpo-11877
This class copies file owner, permissions, flags, etc. atomic_write() should probably also call copystat() on the temporary file. |
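Carrying the original file's metadata over to the temporary file before the rename could be sketched as follows (`copy_metadata` is a hypothetical helper name, not part of any patch here):

```python
import os
import shutil

def copy_metadata(src, tmp):
    """Copy the original file's permissions, timestamps and, where
    possible, ownership onto the not-yet-renamed temporary file.

    Sketch only: a real implementation would also consider ACLs,
    extended attributes and SELinux contexts.
    """
    st = os.stat(src)
    shutil.copystat(src, tmp)        # mode, timestamps, flags
    if hasattr(os, "chown"):         # POSIX only
        try:
            os.chown(tmp, st.st_uid, st.st_gid)
        except PermissionError:
            pass                     # non-root callers may not chown
```

After this runs, replacing `src` with `tmp` preserves the visible permissions, which matters for files like crontabs or web-served content.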
We have one of these in Beaker (we didn't need to wait for os.replace, since Beaker only runs on Linux): http://git.beaker-project.org/cgit/beaker/tree/Common/bkr/common/helpers.py?h=develop#n90 It turns out to be beneficial to separate the three operations in order to cooperate more cleanly with other operations on the same file system (in our case, we needed to be able to write data uploaded over a network connection to the temporary file *before* acquiring an flock when we did the rename). We also create the file adjacent to the destination, since creating it under the /tmp hierarchy and then renaming it is invalid under SELinux (the security context ends up being incorrect). While Beaker is GPLv2, I'd easily be able to get permission to extract this and contribute it to the PSF. Incidentally, I'm thinking tempfile might actually be a more sensible home for this than shutil. |
(Note that the Beaker version would need to be enhanced with the extra API parameters from Victor's version, as well as updated to use the exclusive open and close-on-exec flags) |
I think the API would be nicer if it was just a thin wrapper around the underlying file object. Something like (untested, not even proofread):

    class AtomicFile:
        def __init__(self, path):
            self.dest_path = path
            dirname, basename = os.path.split(self.dest_path)
            fd, temp_path = tempfile.mkstemp(prefix='.' + basename, dir=dirname)
            try:
                self.f = os.fdopen(fd, 'w')
            except:
                os.unlink(temp_path)
                raise
            self.temp_path = temp_path

        def close(self):
            self.f.close()
            os.rename(self.temp_path, self.dest_path)

        # copied from tempfile
        def __getattr__(self, name):
            # Attribute lookups are delegated to the underlying file
            # and cached for non-numeric results
            # (i.e. methods are cached, closed and friends are not)
            file = self.__dict__['f']
            a = getattr(file, name)
            if not issubclass(type(a), type(0)):
                setattr(self, name, a)
            return a

        def __enter__(self):
            self.f.__enter__()
            return self

        def __exit__(self, exc, value, tb):
            if exc is None:
                self.close()
            else:
                self.f.close()
                os.remove(self.temp_path)

This way, one just has to do:

    with AtomicFile('foo') as f:
        f.write(...)

Otherwise, I think the calls to fsync() should be optional (maybe an optional argument). Also, fsync() should also be done on the containing directory (which is what makes the rename itself durable). |
Of course, I have my own atomic-rename thing, but I'm not going to post the code here. It's fairly limited to my own use case and I have no interest in making it cross-platform. That being said, I personally found that a context manager with a signature identical to built-in open() was the most convenient API. It looks natural:

    with atomic(filename, 'w', encoding='utf-8') as fp:
        data = do_a_bunch_of_things_that_might_fail()
        fp.write(data)

If any of that fails, the temporary file is guaranteed to be cleaned up; otherwise, filename (the ultimate destination) is guaranteed to exist. Another reason why context managers are useful is with ExitStack(), where you might have a bunch of resources that you want to guarantee get cleaned up properly if any of them fail. |
You also forgot about two things:
So, here is a real-life function that we use in production:

    # TODO: save_mtime, save_selinux, extended attrs, chattrs and so on...
    # TODO: a malicious user may replace the directory with another one while we
    # are writing to the file, so we should open the directory first, and then
    # use openat() and renameat() with a relative filename instead of specifying
    # an absolute path. Also, NamedTemporaryFile should allow specifying
    # dir=<dir_file_descriptor>.
    @contextmanager
    def replace_file(path, save_perms=False, fsync=True, **kwargs):
        realpath = os.path.realpath(path)
        operating_dir = os.path.dirname(realpath)
        uid = None
        gid = None
        mode = None
        if save_perms:
            try:
                stinfo = os.lstat(realpath)
                uid = stinfo.st_uid
                gid = stinfo.st_gid
                mode = stinfo.st_mode
            except OSError as e:
                if e.errno != errno.ENOENT:
                    raise
        with NamedTemporaryFile(dir=operating_dir, **kwargs) as fff:
            filedes = fff.fileno()
            if None not in (uid, gid, mode):
                os.fchown(filedes, uid, gid)
                os.fchmod(filedes, mode & 0o7777)
            yield fff
            # survive an application crash
            fff.flush()
            if fsync:
                # survive a power outage; not required for a temporary file
                os.fsync(filedes)
            os.rename(fff.name, realpath)
            # see http://bugs.python.org/issue21579
            fff._closer.delete = False
        if fsync:
            # Sync the directory: http://stackoverflow.com/questions/3764822/how-to-durably-rename-a-file-in-posix
            dirfd = os.open(operating_dir, os.O_RDONLY | os.O_CLOEXEC | os.O_DIRECTORY)
            try:
                os.fsync(dirfd)
            finally:
                os.close(dirfd) |
Instead of realpath, os.lstat() may be used to detect whether the target file object is not a plain file. Also, os.rename() cannot be used blindly, since it follows symlinks by default. |
Also, modern kernels allow replacing an entire directory: renameat2() + RENAME_EXCHANGE. So there could be more atomic operations. |
This issue is old, and different operating systems provide different guarantees. |
For reference, we added our own atomic_writing() context manager in Jupyter/IPython, after someone lost work when save failed on a full disk. We initially went for the same approach - write to a temporary file, fsync, rename over the original file - but it caused issues with network filesystems and folders synced by Dropbox or similar services. As a result we switched to an approach that copies the old file to a temporary name, writes the new data directly to the old name, then removes the temporary copy on success, or renames it back to the old name on failure. This is certainly less rigorous, but it means that we're not replacing the file on every save, and it's less of a problem if copying file attributes with copystat() fails. I'm sure there are use cases for atomic writing where these problems won't come up, but I hope this is a useful data point, at least for documenting any atomic writing solution. |
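The copy-then-write-back strategy described here can be roughly sketched as follows (hypothetical names, not Jupyter's actual `atomic_writing()` code):

```python
import contextlib
import os
import shutil

@contextlib.contextmanager
def copy_back_write(path):
    """Write *path* in place, keeping a temporary copy of the old
    contents to restore on failure.

    Less rigorous than rename-based atomicity, but friendlier to
    network filesystems and sync services that track files by
    identity, since the file itself is never replaced.
    """
    tmp = path + "~"
    have_backup = os.path.exists(path)
    if have_backup:
        shutil.copy2(path, tmp)       # preserve data and metadata
    try:
        with open(path, "w") as f:
            yield f                    # write the new data in place
    except BaseException:
        if have_backup:
            os.replace(tmp, path)      # restore the old contents
        raise
    else:
        if have_backup:
            os.remove(tmp)             # success: drop the backup
```

The trade-off is visible in the code: a crash between the truncating `open()` and the restore can still lose data, which is why the comment above calls it "less rigorous".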
It looks like each platform and each filesystem behaves differently, with subtle differences between kernel versions. I'm no longer sure that the Python stdlib is the best place to host an implementation of an "atomic write" API. I now suggest creating a project on PyPI, playing with it, trying to handle as many platforms as possible, etc. Then, when it is considered stable and robust enough, propose integrating it into the Python stdlib. The stdlib is not the best place for best-effort algorithms; we prefer to only add reliable and portable code, and leave "other code" outside the stdlib. |