Generators have been used to implement coroutine-like functionality. This
allows us to defer to an event loop system, which will resume the
after-the-callback section of our code once the IO operation has completed:
import asyncio

@asyncio.coroutine
def compute(x, y):
    print("Compute %s + %s ..." % (x, y))
    yield from asyncio.sleep(1.0)
    return x + y

@asyncio.coroutine
def print_sum(x, y):
    result = yield from compute(x, y)
    print("%s + %s = %s" % (x, y, result))

loop = asyncio.get_event_loop()
loop.run_until_complete(print_sum(1, 2))
loop.close()
(Note that in this example we would want to spawn other async tasks alongside
this one to get any scalability benefit out of this approach.)
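To make that scalability point concrete, here is a toy scheduler in the same generator style. This is a sketch, not asyncio itself: the sleep helper and the run loop are invented for illustration. Two print_sum tasks sleep concurrently, so the total wall time is roughly one sleep, not two.

```python
import heapq
import time

def sleep(delay):
    # Yield our wake-up time; the run() loop resumes us when it arrives.
    yield time.monotonic() + delay

def compute(x, y):
    print("Compute %s + %s ..." % (x, y))
    yield from sleep(0.2)
    return x + y

def print_sum(x, y):
    result = yield from compute(x, y)
    print("%s + %s = %s" % (x, y, result))

def run(*tasks):
    # A toy event loop: always resume the task with the earliest wake-up time.
    queue = [(0.0, i, task) for i, task in enumerate(tasks)]
    heapq.heapify(queue)
    while queue:
        wake, i, task = heapq.heappop(queue)
        time.sleep(max(0.0, wake - time.monotonic()))
        try:
            wake = task.send(None)
        except StopIteration:
            continue  # task finished
        heapq.heappush(queue, (wake, i, task))

# Both tasks sleep at the same time, so this takes ~0.2s, not ~0.4s.
run(print_sum(1, 2), print_sum(3, 4))
```

The run loop plays the role the asyncio event loop plays above: it holds the suspended generators and decides which one to resume next.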
When generators were introduced in PEP 255 they were described as providing
coroutine-like functionality. PEP 342 and PEP 380 extended this with the
ability to send values and exceptions into a generator, and to delegate to a
subgenerator with yield from.
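A minimal sketch of those two extensions (the tally and delegate generators are hypothetical names, invented for illustration): PEP 342's send() delivers a value into a paused generator and throw() raises an exception at its yield, while PEP 380's yield from forwards both through a delegating frame.

```python
def tally():
    # PEP 342: send() delivers a value into the paused generator,
    # and throw() raises an exception at the paused yield.
    total = 0
    while True:
        try:
            total += yield total
        except RuntimeError:
            total = 0  # throw() lands here; reset and carry on

def delegate():
    # PEP 380: yield from forwards send() and throw() to the subgenerator.
    yield from tally()

g = delegate()
next(g)                        # prime the generator; runs to the first yield
print(g.send(5))               # 5
print(g.send(3))               # 8
print(g.throw(RuntimeError))   # 0 -- tally() caught it and reset
print(g.send(2))               # 2
```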
The term "coroutine" denotes a system where you have more than one routine -
effectively, call stack - active at a time. Because each call stack is
preserved, a routine can suspend its state and yield to a different, named
coroutine. In some languages, there is a yield keyword of some form that
invokes this behaviour.
Why would you want to do this? This provides a primitive form of multitasking
-- cooperative multitasking. Unlike threads, coroutines are not preemptive:
a routine is never interrupted until it explicitly yields.
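A minimal sketch of cooperative multitasking with plain generators (ping and pong are invented names for illustration): each task runs until it yields, and a simple round-robin loop decides who runs next.

```python
def ping():
    for _ in range(3):
        print("ping")
        yield  # give up control

def pong():
    for _ in range(3):
        print("pong")
        yield

# Round-robin trampoline: run each task until it yields, then move on.
tasks = [ping(), pong()]
while tasks:
    task = tasks.pop(0)
    try:
        next(task)
        tasks.append(task)  # still running; requeue it
    except StopIteration:
        pass  # task finished
```

Neither task is ever interrupted mid-step; the interleaving happens only at the explicit yield points.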
Generators are a subset of this behaviour, right down to the 'yield'
terminology. Wikipedia calls them semicoroutines. However, there are two
significant differences between generators and coroutines:
- Generators can only yield to the calling frame.
- Every frame in the stack needs to collaborate in yielding to the calling
frame - so the top frame might yield, and all other calls in the stack
need to be made with yield from.
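The second point can be sketched as follows (leaf, middle, and top are hypothetical names): when every intermediate frame delegates with yield from, a single yield at the bottom suspends the entire three-frame stack.

```python
def leaf():
    # The only actual yield: this is where the whole stack suspends.
    yield "suspended"

def middle():
    # A plain call to leaf() would just create a generator object;
    # yield from is what lets the suspension propagate upward.
    yield from leaf()

def top():
    yield from middle()

g = top()
print(next(g))  # suspended -- all three frames are paused at leaf's yield
```

If middle used a bare call instead of yield from, the yield in leaf could not suspend top's caller; that is the collaboration requirement described above.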
[Diagram: generator-based coroutines require every frame in the call stack
to collaborate in suspending the stack.]