
PEP 533 – Deterministic cleanup for iterators

Author:
Nathaniel J. Smith
BDFL-Delegate:
Yury Selivanov <yury at edgedb>
Status:
Deferred
Type:
Standards Track
Created:
18-Oct-2016
Post-History:
18-Oct-2016

Abstract

We propose to extend the iterator protocol with a new __(a)iterclose__ slot, which is called automatically on exit from (async) for loops, regardless of how they exit. This allows for convenient, deterministic cleanup of resources held by iterators without reliance on the garbage collector. This is especially valuable for asynchronous generators.

Note on timing

In practical terms, the proposal here is divided into two separate parts: the handling of async iterators, which should ideally be implemented ASAP, and the handling of regular iterators, which is a larger but more relaxed project that can’t start until 3.7 at the earliest. But since the changes are closely related, and we probably don’t want to end up with async iterators and regular iterators diverging in the long run, it seems useful to look at them together.

Background and motivation

Python iterables often hold resources which require cleanup. For example: file objects need to be closed; the WSGI spec adds a close method on top of the regular iterator protocol and demands that consumers call it at the appropriate time (though forgetting to do so is a frequent source of bugs); and PEP 342 (based on PEP 325) extended generator objects to add a close method to allow generators to clean up after themselves.

Generally, objects that need to clean up after themselves also define a __del__ method to ensure that this cleanup will happen eventually, when the object is garbage collected. However, relying on the garbage collector for cleanup like this causes serious problems in several cases:

  • In Python implementations that do not use reference counting (e.g. PyPy, Jython), calls to __del__ may be arbitrarily delayed – yet many situations require prompt cleanup of resources. Delayed cleanup produces problems like crashes due to file descriptor exhaustion, or WSGI timing middleware that collects bogus times.
  • Async generators (PEP 525) can only perform cleanup under the supervision of the appropriate coroutine runner. __del__ doesn’t have access to the coroutine runner; indeed, the coroutine runner might be garbage collected before the generator object. So relying on the garbage collector is effectively impossible without some kind of language extension. (PEP 525 does provide such an extension, but it has a number of limitations that this proposal fixes; see the “alternatives” section below for discussion.)

Fortunately, Python provides a standard tool for doing resource cleanup in a more structured way: with blocks. For example, this code opens a file but relies on the garbage collector to close it:

def read_newline_separated_json(path):
    for line in open(path):
        yield json.loads(line)

for document in read_newline_separated_json(path):
    ...

and recent versions of CPython will point this out by issuing a ResourceWarning, nudging us to fix it by adding a with block:

def read_newline_separated_json(path):
    with open(path) as file_handle:      # <-- with block
        for line in file_handle:
            yield json.loads(line)

for document in read_newline_separated_json(path):  # <-- outer for loop
    ...

But there’s a subtlety here, caused by the interaction of with blocks and generators. with blocks are Python’s main tool for managing cleanup, and they’re a powerful one, because they pin the lifetime of a resource to the lifetime of a stack frame. But this assumes that someone will take care of cleaning up the stack frame… and for generators, this requires that someone close them.

In this case, adding the with block is enough to shut up the ResourceWarning, but this is misleading – the file object cleanup here is still dependent on the garbage collector. The with block will only be unwound when the read_newline_separated_json generator is closed. If the outer for loop runs to completion then the cleanup will happen immediately; but if this loop is terminated early by a break or an exception, then the with block won’t fire until the generator object is garbage collected.

The correct solution requires that all users of this API wrap every for loop in its own with block:

with closing(read_newline_separated_json(path)) as genobj:
    for document in genobj:
        ...

This gets even worse if we consider the idiom of decomposing a complex pipeline into multiple nested generators:

def read_users(path):
    with closing(read_newline_separated_json(path)) as gen:
        for document in gen:
            yield User.from_json(document)

def users_in_group(path, group):
    with closing(read_users(path)) as gen:
        for user in gen:
            if user.group == group:
                yield user

In general if you have N nested generators then you need N+1 with blocks to clean up 1 file. And good defensive programming would suggest that any time we use a generator, we should assume the possibility that there could be at least one with block somewhere in its (potentially transitive) call stack, either now or in the future, and thus always wrap it in a with. But in practice, basically nobody does this, because programmers would rather write buggy code than tiresome repetitive code. In simple cases like this there are some workarounds that good Python developers know (e.g. in this simple case it would be idiomatic to pass in a file handle instead of a path and move the resource management to the top level), but in general we cannot avoid the use of with/finally inside of generators, and thus dealing with this problem one way or another. When beauty and correctness fight then beauty tends to win, so it’s important to make correct code beautiful.

Still, is this worth fixing? Until async generators came along I would have argued yes, but that it was a low priority, since everyone seems to be muddling along okay – but async generators make it much more urgent. Async generators cannot do cleanup at all without some mechanism for deterministic cleanup that people will actually use, and async generators are particularly likely to hold resources like file descriptors. (After all, if they weren’t doing I/O, they’d be generators, not async generators.) So we have to do something, and it might as well be a comprehensive fix to the underlying problem. And it’s much easier to fix this now when async generators are first rolling out, than it will be to fix it later.

The proposal itself is simple in concept: add a __(a)iterclose__ method to the iterator protocol, and have (async) for loops call it when the loop is exited, even if this occurs via break or exception unwinding. Effectively, we’re taking the current cumbersome idiom (with block + for loop) and merging them together into a fancier for. This may seem non-orthogonal, but makes sense when you consider that the existence of generators means that with blocks actually depend on iterator cleanup to work reliably, plus experience showing that iterator cleanup is often a desirable feature in its own right.
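To make the shape of the proposed protocol concrete, here is a sketch (illustrative only; the exact semantics are given in the Specification below) of a hypothetical class-based iterator that owns a file and would get deterministic cleanup from the new-style for loop:

import json

class JSONLinesIterator:
    # Hypothetical example: an iterator that owns a file and releases it
    # via the proposed __iterclose__ slot.
    def __init__(self, path):
        self._file = open(path)

    def __iter__(self):
        return self

    def __next__(self):
        return json.loads(next(self._file))   # StopIteration at EOF

    def __iterclose__(self):
        # Under this proposal, for loops would call this automatically on
        # loop exit, whether via exhaustion, break, or an exception.
        self._file.close()

Under the proposal, "for document in JSONLinesIterator(path): ..." would then close the file as soon as the loop exits, however it exits.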

Alternatives

PEP 525 asyncgen hooks

PEP 525 proposes a set of global thread-local hooks managed by new sys.{get/set}_asyncgen_hooks() functions, which allow event loops to integrate with the garbage collector to run cleanup for async generators. In principle, this proposal and PEP 525 are complementary, in the same way that with blocks and __del__ are complementary: this proposal takes care of ensuring deterministic cleanup in most cases, while PEP 525’s GC hooks clean up anything that gets missed. But __aiterclose__ provides a number of advantages over GC hooks alone:

  • The GC hook semantics aren’t part of the abstract async iterator protocol, but are instead restricted specifically to the async generator concrete type. If you have an async iterator implemented using a class, like:
    class MyAsyncIterator:
        async def __anext__(self):
            ...
    

    then you can’t refactor this into an async generator without changing its semantics, and vice-versa. This seems very unpythonic. (It also leaves open the question of what exactly class-based async iterators are supposed to do, given that they face exactly the same cleanup problems as async generators.) __aiterclose__, on the other hand, is defined at the protocol level, so it’s duck-type friendly and works for all iterators, not just generators.

  • Code that wants to work on non-CPython implementations like PyPy cannot in general rely on GC for cleanup. Without __aiterclose__, it’s more or less guaranteed that developers who develop and test on CPython will produce libraries that leak resources when used on PyPy. Developers who do want to target alternative implementations will either have to take the defensive approach of wrapping every for loop in a with block, or else carefully audit their code to figure out which generators might possibly contain cleanup code and add with blocks around those only. With __aiterclose__, writing portable code becomes easy and natural.
  • An important part of building robust software is making sure that exceptions always propagate correctly without being lost. One of the most exciting things about async/await compared to traditional callback-based systems is that instead of requiring manual chaining, the runtime can now do the heavy lifting of propagating errors, making it much easier to write robust code. But, this beautiful new picture has one major gap: if we rely on the GC for generator cleanup, then exceptions raised during cleanup are lost. So, again, without __aiterclose__, developers who care about this kind of robustness will either have to take the defensive approach of wrapping every for loop in a with block, or else carefully audit their code to figure out which generators might possibly contain cleanup code. __aiterclose__ plugs this hole by performing cleanup in the caller’s context, so writing more robust code becomes the path of least resistance.
  • The WSGI experience suggests that there exist important iterator-based APIs that need prompt cleanup and cannot rely on the GC, even in CPython. For example, consider a hypothetical WSGI-like API based around async/await and async iterators, where a response handler is an async generator that takes request headers + an async iterator over the request body, and yields response headers + the response body. (This is actually the use case that got me interested in async generators in the first place, i.e. this isn’t hypothetical.) If we follow WSGI in requiring that child iterators must be closed properly, then without __aiterclose__ the absolute most minimalistic middleware in our system looks something like:
    async def noop_middleware(handler, request_header, request_body):
        async with aclosing(handler(request_header, request_body)) as aiter:
            async for response_item in aiter:
                yield response_item
    

    Arguably in regular code one can get away with skipping the with block around for loops, depending on how confident one is that one understands the internal implementation of the generator. But here we have to cope with arbitrary response handlers, so without __aiterclose__, this with construction is a mandatory part of every middleware.

    __aiterclose__ allows us to eliminate the mandatory boilerplate and an extra level of indentation from every middleware:

    async def noop_middleware(handler, request_header, request_body):
        async for response_item in handler(request_header, request_body):
            yield response_item
    

So the __aiterclose__ approach provides substantial advantages over GC hooks.

This leaves open the question of whether we want a combination of GC hooks + __aiterclose__, or just __aiterclose__ alone. Since the vast majority of generators are iterated over using a for loop or equivalent, __aiterclose__ handles most situations before the GC has a chance to get involved. The case where GC hooks provide additional value is in code that does manual iteration, e.g.:

agen = fetch_newline_separated_json_from_url(...)
while True:
    document = await type(agen).__anext__(agen)
    if document["id"] == needle:
        break
# doesn't do 'await agen.aclose()'

If we go with the GC-hooks + __aiterclose__ approach, this generator will eventually be cleaned up by GC calling the generator __del__ method, which then will use the hooks to call back into the event loop to run the cleanup code.

If we go with the no-GC-hooks approach, this generator will eventually be garbage collected, with the following effects:

  • its __del__ method will issue a warning that the generator was not closed (similar to the existing “coroutine never awaited” warning).
  • The underlying resources involved will still be cleaned up, because the generator frame will still be garbage collected, causing it to drop references to any file handles or sockets it holds, and then those objects’ __del__ methods will release the actual operating system resources.
  • But, any cleanup code inside the generator itself (e.g. logging, buffer flushing) will not get a chance to run.

The solution here – as the warning would indicate – is to fix the code so that it calls __aiterclose__, e.g. by using a with block:

async with aclosing(fetch_newline_separated_json_from_url(...)) as agen:
    while True:
        document = await type(agen).__anext__(agen)
        if document["id"] == needle:
            break

Basically in this approach, the rule would be that if you want to manually implement the iterator protocol, then it’s your responsibility to implement all of it, and that now includes __(a)iterclose__.
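For concreteness, a sketch of what that manual-iteration code could look like once it takes on that responsibility directly with try/finally, using the aiterclose convenience function proposed under “New convenience functions” below:

agen = fetch_newline_separated_json_from_url(...)
try:
    while True:
        document = await type(agen).__anext__(agen)
        if document["id"] == needle:
            break
finally:
    # Manual iteration means manual cleanup: invoke the proposed
    # __aiterclose__ slot, here via the proposed operator.aiterclose.
    await operator.aiterclose(agen)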

GC hooks add non-trivial complexity in the form of (a) new global interpreter state, (b) a somewhat complicated control flow (e.g., async generator GC always involves resurrection, so the details of PEP 442 are important), and (c) a new public API in asyncio (await loop.shutdown_asyncgens()) that users have to remember to call at the appropriate time. (This last point in particular somewhat undermines the argument that GC hooks provide a safe backup to guarantee cleanup, since if shutdown_asyncgens() isn’t called correctly then I think it’s possible for generators to be silently discarded without their cleanup code being called; compare this to the __aiterclose__-only approach where in the worst case we still at least get a warning printed. This might be fixable.) All this considered, GC hooks arguably aren’t worth it, given that the only people they help are those who want to manually call __anext__ yet don’t want to manually call __aiterclose__. But Yury disagrees with me on this :-). And both options are viable.

Always inject resources, and do all cleanup at the top level

Several commentators on python-dev and python-ideas have suggested that a pattern to avoid these problems is to always pass resources in from above, e.g. read_newline_separated_json should take a file object rather than a path, with cleanup handled at the top level:

def read_newline_separated_json(file_handle):
    for line in file_handle:
        yield json.loads(line)

def read_users(file_handle):
    for document in read_newline_separated_json(file_handle):
        yield User.from_json(document)

with open(path) as file_handle:
    for user in read_users(file_handle):
        ...

This works well in simple cases; here it lets us avoid the “N+1 with blocks problem”. But unfortunately, it breaks down quickly when things get more complex. Consider if instead of reading from a file, our generator was reading from a streaming HTTP GET request – while handling redirects and authentication via OAUTH. Then we’d really want the sockets to be managed down inside our HTTP client library, not at the top level. Plus there are other cases where finally blocks embedded inside generators are important in their own right: db transaction management, emitting logging information during cleanup (one of the major motivating use cases for WSGI close), and so forth. So this is really a workaround for simple cases, not a general solution.

More complex variants of __(a)iterclose__

The semantics of __(a)iterclose__ are somewhat inspired by with blocks, but context managers are more powerful: __(a)exit__ can distinguish between a normal exit versus exception unwinding, and in the case of an exception it can examine the exception details and optionally suppress propagation. __(a)iterclose__ as proposed here does not have these powers, but one can imagine an alternative design where it did.

However, this seems like unwarranted complexity: experience suggests that it’s common for iterables to have close methods, and even to have __exit__ methods that call self.close(), but I’m not aware of any common cases that make use of __exit__’s full power. I also can’t think of any examples where this would be useful. And it seems unnecessarily confusing to allow iterators to affect flow control by swallowing exceptions – if you’re in a situation where you really want that, then you should probably use a real with block anyway.

Specification

This section describes where we want to eventually end up, though there are some backwards compatibility issues that mean we can’t jump directly here. A later section describes the transition plan.

Guiding principles

Generally, __(a)iterclose__ implementations should:

  • be idempotent,
  • perform any cleanup that is appropriate on the assumption that the iterator will not be used again after __(a)iterclose__ is called. In particular, once __(a)iterclose__ has been called then calling __(a)next__ produces undefined behavior.

And generally, any code which starts iterating through an iterable with the intention of exhausting it, should arrange to make sure that __(a)iterclose__ is eventually called, whether or not the iterator is actually exhausted.
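As an illustration of these principles, here is a sketch of an idempotent __iterclose__ on a hypothetical resource-holding iterator (the resource object and its read_next/release methods are made up for the example):

class ResourceIterator:
    def __init__(self, resource):
        self._resource = resource
        self._closed = False

    def __iter__(self):
        return self

    def __next__(self):
        if self._closed:
            # After __iterclose__, further iteration is undefined behavior;
            # raising an error is one reasonable choice.
            raise RuntimeError("iterator already closed")
        return self._resource.read_next()

    def __iterclose__(self):
        # Idempotent: a second call is a no-op.
        if not self._closed:
            self._closed = True
            self._resource.release()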

Changes to iteration

The core proposal is the change in behavior of for loops. Given this Python code:

for VAR in ITERABLE:
    LOOP-BODY
else:
    ELSE-BODY

we desugar to the equivalent of:

_iter = iter(ITERABLE)
_iterclose = getattr(type(_iter), "__iterclose__", lambda it: None)
try:
    traditional-for VAR in _iter:
        LOOP-BODY
    else:
        ELSE-BODY
finally:
    _iterclose(_iter)

where the “traditional-for statement” here is meant as a shorthand for the classic 3.5-and-earlier for loop semantics.

Besides the top-level for statement, Python also contains several other places where iterators are consumed. For consistency, these should call __iterclose__ as well using semantics equivalent to the above. This includes:

  • for loops inside comprehensions
  • * unpacking
  • functions which accept and fully consume iterables, like list(it), tuple(it), itertools.product(it1, it2, ...), and others.

In addition, a yield from that successfully exhausts the called generator should as a last step call its __iterclose__ method. (Rationale: yield from already links the lifetime of the calling generator to the called generator; if the calling generator is closed when half-way through a yield from, then this will already automatically close the called generator.)

Changes to async iteration

We also make the analogous changes to async iteration constructs, except that the new slot is called __aiterclose__, and it’s an async method that gets awaited.
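By analogy with the synchronous desugaring above, an async for loop would presumably expand to something like the following sketch (same pseudocode conventions, with “traditional-async-for” standing in for the pre-existing async for semantics):

_aiter = type(ITERABLE).__aiter__(ITERABLE)
_aiterclose = getattr(type(_aiter), "__aiterclose__", None)
try:
    traditional-async-for VAR in _aiter:
        LOOP-BODY
    else:
        ELSE-BODY
finally:
    if _aiterclose is not None:
        # the new slot is an async method, so it gets awaited
        await _aiterclose(_aiter)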

Modifications to basic iterator types

Generator objects (including those created by generator comprehensions):

  • __iterclose__ calls self.close()
  • __del__ calls self.close() (same as now), and additionally issues a ResourceWarning if the generator wasn’t exhausted. This warning is hidden by default, but can be enabled for those who want to make sure they aren’t inadvertently relying on CPython-specific GC semantics.

Async generator objects (including those created by async generator comprehensions):

  • __aiterclose__ calls self.aclose()
  • __del__ issues a RuntimeWarning if aclose has not been called, since this probably indicates a latent bug, similar to the “coroutine never awaited” warning.

QUESTION: should file objects implement __iterclose__ to close the file? On the one hand this would make this change more disruptive; on the other hand people really like writing for line in open(...): ..., and if we get used to iterators taking care of their own cleanup then it might become very weird if files don’t.

New convenience functions

The operator module gains two new functions, with semantics equivalent to the following:

def iterclose(it):
    if not isinstance(it, collections.abc.Iterator):
        raise TypeError("not an iterator")
    if hasattr(type(it), "__iterclose__"):
        type(it).__iterclose__(it)

async def aiterclose(ait):
    if not isinstance(ait, collections.abc.AsyncIterator):
        raise TypeError("not an async iterator")
    if hasattr(type(ait), "__aiterclose__"):
        await type(ait).__aiterclose__(ait)
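For example (a sketch reusing the generator from the earlier examples), code that consumes only part of an iterator by hand could use the helper to close it promptly:

gen = read_newline_separated_json(path)
try:
    first_document = next(gen)
finally:
    # For a generator, this ends up calling gen.close() via __iterclose__.
    operator.iterclose(gen)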

The itertools module gains a new iterator wrapper that can be used to selectively disable the new __iterclose__ behavior:

# QUESTION: I feel like there might be a better name for this one?
class preserve:
    def __init__(self, iterable):
        self._it = iter(iterable)

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._it)

    def __iterclose__(self):
        # Swallow __iterclose__ without passing it on
        pass

Example usage (assuming that file objects implement __iterclose__):

with open(...) as handle:
    # Iterate through the same file twice:
    for line in itertools.preserve(handle):
        ...
    handle.seek(0)
    for line in itertools.preserve(handle):
        ...
A closing-style convenience context manager can be built on top of preserve and iterclose:

@contextlib.contextmanager
def iterclosing(iterable):
    it = iter(iterable)
    try:
        yield preserve(it)
    finally:
        iterclose(it)
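A sketch of how iterclosing might be used for manual, partial consumption of an iterable (reusing names from the earlier examples):

with iterclosing(read_newline_separated_json(path)) as it:
    for document in it:
        # preserve() keeps this for loop from closing the iterator early,
        # while the surrounding context manager guarantees cleanup on exit.
        if document["id"] == needle:
            break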

__iterclose__ implementations for iterator wrappers

Python ships a number of iterator types that act as wrappers around other iterators: map, zip, itertools.accumulate, csv.reader, and others. These iterators should define a __iterclose__ method which calls __iterclose__ in turn on their underlying iterators. For example, map could be implemented as:

# Helper function
def map_chaining_exceptions(fn, items, last_exc=None):
    for item in items:
        try:
            fn(item)
        except BaseException as new_exc:
            if new_exc.__context__ is None:
                new_exc.__context__ = last_exc
            last_exc = new_exc
    if last_exc is not None:
        raise last_exc

class map:
    def __init__(self, fn, *iterables):
        self._fn = fn
        self._iters = [iter(iterable) for iterable in iterables]

    def __iter__(self):
        return self

    def __next__(self):
        return self._fn(*[next(it) for it in self._iters])

    def __iterclose__(self):
        map_chaining_exceptions(operator.iterclose, self._iters)

def chain(*iterables):
    iterables = list(iterables)   # need a mutable list for pop() below
    try:
        while iterables:
            for element in iterables.pop(0):
                yield element
    except BaseException as e:
        def iterclose_iterable(iterable):
            operator.iterclose(iter(iterable))
        map_chaining_exceptions(iterclose_iterable, iterables, last_exc=e)

In some cases this requires some subtlety; for example, itertools.tee should not call __iterclose__ on the underlying iterator until it has been called on all of the clone iterators.
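One way a tee-like wrapper might keep track of this (a sketch with hypothetical names, showing only the close bookkeeping and omitting the item buffering that a real tee needs):

class _TeeClone:
    def __init__(self, shared):
        # shared is a dict like {"source": iterator, "open_count": n_clones}
        self._shared = shared
        self._closed = False

    def __iterclose__(self):
        # Idempotent, per the guiding principles above.
        if not self._closed:
            self._closed = True
            self._shared["open_count"] -= 1
            if self._shared["open_count"] == 0:
                # Only once *every* clone has been closed do we close
                # the underlying iterator.
                operator.iterclose(self._shared["source"])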

Example / Rationale

The payoff for all this is that we can now write straightforward code like:

def read_newline_separated_json(path):
    for line in open(path):
        yield json.loads(line)

and be confident that the file will receive deterministic cleanup without the end-user having to take any special effort, even in complex cases. For example, consider this silly pipeline:

list(map(lambda key: key.upper(),
         (doc["key"] for doc in read_newline_separated_json(path))))

If our file contains a document where doc["key"] turns out to be an integer, then the following sequence of events will happen:

  1. key.upper() raises an AttributeError, which propagates out of the map and triggers the implicit finally block inside list.
  2. The finally block in list calls __iterclose__() on the map object.
  3. map.__iterclose__() calls __iterclose__() on the generator comprehension object.
  4. This injects a GeneratorExit exception into the generator comprehension body, which is currently suspended inside the comprehension’s for loop body.
  5. The exception propagates out of the for loop, triggering the for loop’s implicit finally block, which calls __iterclose__ on the generator object representing the call to read_newline_separated_json.
  6. This injects an inner GeneratorExit exception into the body of read_newline_separated_json, currently suspended at the yield.
  7. The inner GeneratorExit propagates out of the for loop, triggering the for loop’s implicit finally block, which calls __iterclose__() on the file object.
  8. The file object is closed.
  9. The inner GeneratorExit resumes propagating, hits the boundary of the generator function, and causes read_newline_separated_json’s __iterclose__() method to return successfully.
  10. Control returns to the generator comprehension body, and the outer GeneratorExit continues propagating, allowing the comprehension’s __iterclose__() to return successfully.
  11. The rest of the __iterclose__() calls unwind without incident, back into the body of list.
  12. The original AttributeError resumes propagating.

(The details above assume that we implement file.__iterclose__; if not then add a with block to read_newline_separated_json and essentially the same logic goes through.)

Of course, from the user’s point of view, this can be simplified down to just:

  1. int.upper() raises an AttributeError.
  2. The file object is closed.
  3. The AttributeError propagates out of list.

So we’ve accomplished our goal of making this “just work” without the user having to think about it.

Transition plan

While the majority of existing for loops will continue to produce identical results, the proposed changes will produce backwards-incompatible behavior in some cases. Example:

def read_csv_with_header(lines_iterable):
    lines_iterator = iter(lines_iterable)
    for line in lines_iterator:
        column_names = line.strip().split("\t")
        break
    for line in lines_iterator:
        values = line.strip().split("\t")
        record = dict(zip(column_names, values))
        yield record

This code used to be correct, but after this proposal is implemented will require an itertools.preserve call added to the first for loop.
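For illustration, the fixed-up version would look something like this sketch:

def read_csv_with_header(lines_iterable):
    lines_iterator = iter(lines_iterable)
    # preserve() stops the break below from closing lines_iterator,
    # so the second loop can continue consuming it.
    for line in itertools.preserve(lines_iterator):
        column_names = line.strip().split("\t")
        break
    for line in lines_iterator:
        values = line.strip().split("\t")
        record = dict(zip(column_names, values))
        yield record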

[QUESTION: currently, if you close a generator and then try to iterate over it then it just raises Stop(Async)Iteration, so code that passes the same generator object to multiple for loops but forgets to use itertools.preserve won’t see an obvious error – the second for loop will just exit immediately. Perhaps it would be better if iterating a closed generator raised a RuntimeError? Note that files don’t have this problem – attempting to iterate a closed file object already raises ValueError.]

Specifically, the incompatibility happens when all of these factors come together:

  • The automatic calling of __(a)iterclose__ is enabled
  • The iterable did not previously define __(a)iterclose__
  • The iterable does now define __(a)iterclose__
  • The iterable is re-used after the for loop exits

So the problem is how to manage this transition, and those are the levers we have to work with.

First, observe that the only async iterables where we propose to add __aiterclose__ are async generators, and there is currently no existing code using async generators (though this will start changing very soon), so the async changes do not produce any backwards incompatibilities. (There is existing code using async iterators, but using the new async for loop on an old async iterator is harmless, because old async iterators don’t have __aiterclose__.) In addition, PEP 525 was accepted on a provisional basis, and async generators are by far the biggest beneficiary of this PEP’s proposed changes. Therefore, I think we should strongly consider enabling __aiterclose__ for async for loops and async generators ASAP, ideally for 3.6.0 or 3.6.1.

For the non-async world, things are harder, but here’s a potential transition path:

In 3.7:

Our goal is that existing unsafe code will start emitting warnings, while those who want to opt-in to the future can do that immediately:

  • We immediately add all the __iterclose__ methods described above.
  • If from __future__ import iterclose is in effect, then for loops and * unpacking call __iterclose__ as specified above.
  • If the future is not enabled, then for loops and * unpacking do not call __iterclose__. But they do call some other method instead, e.g. __iterclose_warning__.
  • Similarly, functions like list use stack introspection (!!) to check whether their direct caller has __future__.iterclose enabled, and use this to decide whether to call __iterclose__ or __iterclose_warning__.
  • For all the wrapper iterators, we also add __iterclose_warning__ methods that forward to the __iterclose_warning__ method of the underlying iterator or iterators.
  • For generators (and files, if we decide to do that), __iterclose_warning__ is defined to set an internal flag, and other methods on the object are modified to check for this flag. If they find the flag set, they issue a PendingDeprecationWarning to inform the user that in the future this sequence would have led to a use-after-close situation and the user should use preserve().

In 3.8:

  • Switch from PendingDeprecationWarning to DeprecationWarning

In 3.9:

  • Enable the __future__ unconditionally and remove all the __iterclose_warning__ stuff.

I believe that this satisfies the normal requirements for this kind of transition – opt-in initially, with warnings targeted precisely to the cases that will be affected, and a long deprecation cycle.

Probably the most controversial / risky part of this is the use of stack introspection to make the iterable-consuming functions sensitive to a __future__ setting, though I haven’t thought of any situation where it would actually go wrong yet…

Acknowledgements

Thanks to Yury Selivanov, Armin Rigo, and Carl Friedrich Bolz for helpful discussion on earlier versions of this idea.


Source: https://github.com/python/peps/blob/main/peps/pep-0533.rst

Last modified: 2023-09-09 17:39:29 GMT