Asynchronous code has many traps for the unwary. In fact, it has plenty of ground that’s treacherous even for pathologically careful coders. For example, certain ways of using asynchrony can reduce the control you have over concurrency and ordering. This can result in subtle bugs, particularly when the asynchronous work operates on shared underlying resources. In this post, I’ll talk about some of the many ways in which this can cause problems, and what to do about it.
I was reminded of this recently while reading a blog about iterators and asynchronous code in C#. It contains a useful insight. I think it also makes a significant mistake: while the author, Sergey Teplyakov, is perfectly aware that the code he shows is faulty, he misdiagnoses the main problem.
Asynchronous code often falls into the same trap as his example. I’ve seen very experienced developers make this mistake in various forms. Anyone using any asynchronous technique needs to understand the nature of this mistake, because if you’re not familiar with it, it’s very easy to get this wrong. Your code will seem to work in simple, unstressed scenarios, but then fall apart under pressure.
Here’s the fundamental issue with that example: it can produce multiple concurrent operations but it fails to control how these use a shared resource. This is a subtle but widespread problem in code that does asynchronous work, or, for that matter, deferred work of any kind. This article is about when and why that happens, and describes a couple of ways to avoid it.
First, I want to point out the useful insight in that blog, because it will turn out to be relevant to one solution to the problem. Sadly, it’s buried towards the end, so it’s worth highlighting:
> IAsyncEnumerable<T> is an important abstraction, and it is different from IEnumerable<Task<T>> in significant ways.
Unfortunately the post dismisses the potential value of
IEnumerable<Task<T>>. I agree it’s inappropriate for the example, but Sergey goes further, saying that he’s “not a fan” of this type. I do not share this view, because that type has important uses. Moreover, I think he’s putting too much of the blame on it for the problems in his example.
Just to be clear, the author knows full well that the example makes a mistake. But he attributes it to poor composability of two C# language features (iterator methods, and asynchronous code). I disagree for two reasons. First, it’s perfectly possible to combine these two language features successfully. Second, you can write code that goes wrong in precisely the same way as his example without using either of these two language features.
So the problem here is more fundamental. The basic issue is a dangerous approach to resource ownership. To understand this, let’s look in detail at how it goes wrong.
Here’s the declaration of the
ParseFile method, which is the code’s entry point:
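The original post's code isn't reproduced here, but the shape of the declaration is roughly this (the parameter name is my guess):

```csharp
public static IEnumerable<Task<int>> ParseFile(string path)
```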
What exactly is this promising? It returns an enumerable collection of tasks, each of which produces an integer result. This instantly raises a question: having called this method, am I obliged to inspect every task? One of the rules of using
Task<T> is that for each task created, something is supposed to check for failure.
Back in .NET 4.0, when
Task first shipped, the default behaviour in .NET was that if a task failed and nothing observed that failure, .NET would terminate your process. (Tasks use finalizers to detect when they’re about to be garbage collected while in an unobserved faulted state. To avoid the performance problems caused by finalizers they suppress finalization most of the time, enabling it only for as long as the task is in a faulted state, and no code has yet retrieved the
Exception property.) This makes sense when you consider that failure to inspect the exception in a faulted task is the async equivalent of allowing an unhandled exception to reach the top of the stack—.NET will terminate your process if that happens too.
.NET 4.5 changed the default behaviour, and automatic process termination is no longer the default. I always thought that was pretty questionable: it’s all a bit too
On Error Resume Next (Visual Basic’s old mechanism for completely ignoring all errors and ploughing on regardless). This changes the consequences of making this mistake: before, your process would crash out, but now it limps on in an uncertain state. But either way, it’s still a mistake—the fact remains that task failure should not go undiscovered.
So as callers of this
ParseFile method, we need to know: do all the tasks get created as soon as we call the method, or are they created on demand as we iterate? This matters because it determines whether things like this are OK:
If the tasks are created only as we take them from the
IEnumerable<Task<int>> then this is OK—the LINQ
First operator will take the first item, and then dispose the enumerator without running to the end. Only that one task will be created, so we don’t need to worry about the ‘rest’ because they never existed.
If on the other hand the code always creates all of the tasks—perhaps it creates an array and returns that—then the code above is wrong, because if any of the tasks should fail, nothing will notice. (If you write a method that works this way, I’d suggest returning a
Task<int>[] or some other materialized collection type instead, to make it clear that it’s a pre-enumerated collection. That way the caller’s responsibilities are unambiguous.)
Note that this will almost always be wrong:
Here we’re taking just the 2nd item, skipping the first. This is definitely wrong because even if the method only generates tasks as we ask for them, we will necessarily ask it to produce the first, skipped task, and there’s nothing observing failure on that.
(I said “almost” earlier because this is not absolutely certain to be true. It’s possible that an implementation of
ParseFile might attach failure completion handlers to these tasks to ensure that failures are detected in all cases. However, that opens up the question of how this method is able to know what is the correct application-wide unhandled error policy.)
As a caller of the function, it’s best to be conservative. We should presume that it’s our job to iterate through the entire collection, and to ensure that failure in any of the tasks will be correctly observed.
Likewise, as an implementer of the function, it’s best to be conservative. We should only produce tasks as and when we’re asked for them.
Given all of this, you might be sympathetic when Sergey says he is “not a big fan of
IEnumerable<Task>.” Faced with such a method, it’s difficult to know where you stand unless the method has been documented to explain how it works. However, it’s not intractable—if either the caller or the implementer (or both) stick to the conservative guidelines above, it will be fine. The example in that blog is a case in point: it will in fact only produce tasks as and when calling code asks for them thanks to the way C# iterators work, so when it comes to responsibility for observing errors, it’s not making too many assumptions about its caller.
However, it makes one different, and entirely unwarranted, assumption, and this is where the code really goes wrong: it presumes that code iterating through the returned tasks will await the completion of each task before retrieving the next! It’s not at all obvious that it makes this assumption, but using it any other way will fail (and I’ll explain why shortly).
The code should really return an
IAsyncEnumerable<int>, because that exactly expresses this one-thing-after-another characteristic. It prevents error by making it impossible to consume the sequence any other way. However, to be fair to whoever wrote the original code,
IAsyncEnumerable<int> is not built into the .NET Framework, and at the time I’m writing this, there isn’t even a single canonical definition of it. (There are a few definitions scattered across various libraries.) So it’s hardly surprising people don’t use it. Nevertheless, by returning an IEnumerable<Task<int>>,
ParseFile is writing cheques it cannot cash.
We’ll see why in a moment, but first, I just want to explain something.
In defence of IEnumerable<Task<int>>
The problem here is not, as Sergey Teplyakov suggests, that
IEnumerable<Task<int>> is simply a bad idea. Sometimes it is entirely reasonable to return a collection of tasks, all of which are in flight simultaneously. For example, in distributed systems, a common trick to improve response times at the expense of increasing overall system load is to issue the same query to several different replicas of a data store, and use whichever response comes back first. This can be a good way to reduce the variability in response time that can result from a service instance being briefly slow after it restarts: as long as at least one of the replicas you ask is ‘warm’, you’ll always get a fast response. You can pass such an
IEnumerable<Task<T>> to
Task.WhenAny to get back a task that completes as soon as the first request completes. Or it might be that you need to send requests to multiple services due to sharding. In this case, you can pass an
IEnumerable<Task<int>> to
Task.WhenAll, which will return a task that completes once all of the child tasks are complete, and it will produce an
int[] containing all the results.
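As a sketch of those two patterns (QueryReplicaAsync and QueryShardAsync are hypothetical helpers, each returning a Task<int>):

```csharp
// First-response-wins: issue the same query to every replica and use
// whichever completes first. Task.WhenAny returns a task that completes
// as soon as any of the input tasks does.
IEnumerable<Task<int>> replicaQueries = replicas.Select(QueryReplicaAsync);
Task<int> firstToComplete = await Task.WhenAny(replicaQueries);
int result = await firstToComplete;

// Sharding: query every shard, then combine all of the results.
int[] allResults = await Task.WhenAll(shards.Select(QueryShardAsync));
```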
In both cases, the concurrent execution of each of the tasks in the
IEnumerable<Task<int>> is desirable. By contrast,
ParseFile is incapable of supporting such concurrent execution, which is why
IEnumerable<Task<int>> is an unsuitable abstraction. (And if it were forced into returning that type anyway due to external constraints, the correct thing for
ParseFile to do would be to enforce the in-order execution internally. The fact that it does not is a bug.)
But why can’t these tasks returned by
ParseFile run concurrently? Finally, well over 1,500 words into this blog, I get to my main point.
Deferred execution and resource ownership
When we invoke a method in .NET, it won’t necessarily do its work immediately. Here are three ways that might work:
- Return an
IEnumerable<T> with a lazy implementation
- Return a
Task<T> that produces its result later
- Return a
Func<T> that performs the relevant work when invoked
This is not even close to being an exhaustive list, of course. It’s just enough to show that there’s a general pattern—instead of returning a result, methods can return something that produces a result later—and also that there are a few ways this can work. This is what I mean by “deferred work”. (Technically, I should not say deferred but deferrable: in all cases a method could choose to do all the work up front. But each provides some scope for doing the work later.)
With a lazily implemented IEnumerable<T>, the method doesn’t have to decide what results to produce, or whether to produce any results at all, until something starts to enumerate the enumerable (which might never happen). Conversely, a
Task<T> gets to produce a result on its own schedule (and might conceivably decide never to produce one, remaining forever in an uncompleted state). With
Func<T>, the caller is in control of when a result is produced, but unlike
IEnumerable<T>, there’s no latitude for deciding whether or not to produce a result (unless it throws an exception of course).
Interestingly, the example in the original blog combines two of these techniques. The enumerable it returns is inherently deferred, but even once the caller starts asking it for results, each item produced represents some further deferred work. The essence of
IEnumerable<Task<int>> is deferred production of objects representing deferred work. This is absolutely a valid thing to do, but it means you need to be extra careful, because it’s easy to lose track of what happens when.
In fact, it’s very easy to make the kind of mistake I’m discussing even with a single level of deferred execution. Consider this example:
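A minimal sketch of that kind of example, reconstructed to match the surrounding discussion:

```csharp
public static Func<string> GetContentGetter(string path)
{
    using (var reader = new StreamReader(path))
    {
        // Bug: by the time the caller invokes this delegate, the using
        // block will already have disposed the StreamReader.
        return () => reader.ReadToEnd();
    }
}
```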
This is a slightly silly function because it’s simplified to demonstrate the point: you’d never really write this exact code. However, the basic technique—returning a thing that doesn’t calculate the result immediately, but which can be invoked later on to calculate the result—is a pretty common one.
It doesn’t work though. Can you see why?
The problem here is that this code uses a
StreamReader to do its work, but it mishandles that resource’s lifetime. It creates it while the
GetContentGetter runs, but because of the
using block, it also disposes it before returning. That means we’ll get an
ObjectDisposedException if we ever try to invoke the function that this method returns.
Here’s a slightly more subtle version of the same class of mistake:
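Again reconstructed to match the description that follows:

```csharp
public static Task<string> GetContentAsync(string path)
{
    using (var reader = new StreamReader(path))
    {
        // Bug: this returns (and so disposes the reader) before the
        // asynchronous read has necessarily completed.
        return reader.ReadToEndAsync();
    }
}
```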
As you can see, the structure of the code is very similar, but this time it’s returning a
Task<string> instead of a
Func<string>. Seeing these two examples next to each other, it’s fairly easy to see why the second one is wrong because it makes the same mistake: it returns something representing deferred work, having already disposed the
StreamReader which that deferred operation relies on.
However, it’s quite easy to miss examples of the second kind. One reason is that you will sometimes get away with it. Sometimes, methods with asynchronous signatures complete the work before they return. (Maybe they don’t need to perform any IO because the information required to complete the work is already cached in memory.)
Additionally, the fixed version looks pretty similar to the broken one. This avoids the problem:
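A sketch of the corrected form:

```csharp
public static async Task<string> GetContentAsync(string path)
{
    using (var reader = new StreamReader(path))
    {
        // Awaiting here means we don't reach the end of the using block
        // (and so don't dispose the reader) until the read has completed.
        return await reader.ReadToEndAsync();
    }
}
```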
Here, I’ve applied the
async keyword, and I’ve put an
await in the return statement expression. The significance of this is that the C# compiler will generate code that doesn’t execute the remainder of the method until after the task returned by
ReadToEndAsync completes. And in this case “the remainder of the method” is the closing brace of the
using block. Since leaving that block is what causes the
StreamReader to be disposed, delaying it until after the task completes means that we definitely won’t dispose it before the deferred work (reading the file’s contents) is complete.
Sergey’s code made the same class of mistake, but it’s slightly more subtle because he’s written an iterator. I’ve simplified his code here, removing the details not relevant to my point:
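A reconstruction of that simplified shape (the helper name is mine; the original details differ):

```csharp
public static IEnumerable<Task<int>> ParseFile(string path)
{
    using (var reader = new StreamReader(path))
    {
        while (!reader.EndOfStream)
        {
            // Each iteration hands out a task that uses the shared reader.
            yield return ParseLineAsync(reader);
        }
    }
}

private static async Task<int> ParseLineAsync(StreamReader reader)
{
    string line = await reader.ReadLineAsync();
    return int.Parse(line);
}
```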
This should instantly get your async-sense tingling: we have a method that produces deferred work that makes use of a resource wrapped in a
using block. That’s not always wrong, as the fixed version of
GetContentAsync above shows. But you always need to ask two questions: 1) when will that
using block dispose the resource, and 2) what’s the latest point at which the deferred work might run? If the answer to 2 is later than 1, we have a problem.
So, when will that
StreamReader be disposed? In this case, it’s inside an iterator method—a method that returns an enumerable, and which uses the
yield keyword. Methods of this kind get rewritten by the compiler, enabling them to return to the caller before they have reached the end, and then to continue later on. So there are two situations in which the reader is disposed: 1) when this code reaches the end of the file, falling out of its loop, and 2) when code enumerating the collection decides to stop early (e.g., you used LINQ’s
First operator to get just the first element, or you terminated a
foreach loop early, perhaps by throwing an exception, or by using
return, or even
goto inside the loop).
The upshot is that if code using the tasks produced by
ParseFile doesn’t wait for all tasks to complete before it finishes iterating through the enumeration, we will have a problem, because we will have disposed of the
StreamReader that these tasks use while the tasks are still running. If we’re lucky this will just cause them to fault, but in practice it could cause more subtle problems, because you’re using the reader in an unsupported way: you might see incorrect results, or your process could be left in a corrupted state.
This is still the same basic error as the earlier examples, albeit in a slightly more complex form. (It’s slightly harder to analyse because the moment of disposal is determined by one deferred execution mechanism—C# iterator behaviour—but the work being done on the disposable resource is represented in a separate deferred execution mechanism: tasks.)
This code also makes a related but different mistake.
Every task that it produces uses the same
StreamReader. It only ever constructs one of those, but it returns multiple tasks, each of which calls
ReadLineAsync. And it does nothing to control the order of execution. That’s a bug.
It’s a bug because
StreamReader carries forward the unfortunate legacy of UNIX’s one-size-fits-all mentality, in which access to files has to go through an abstraction that makes sense for a reel-to-reel tape drive even though many modern computer systems now have random access storage devices. This problem is so widespread that it’s possible not to notice that it’s a problem; indeed, the extent to which people praise UNIX’s tendency to model everything through this file abstraction suggests that Stockholm syndrome is quite common here.
The problem I’m alluding to is that UNIX’s file model entails the idea of a current position. Back when tape drives were things developers would commonly encounter, this made a lot of sense: the read/write head of the tape drive must always be at some particular location on the tape, and the natural behaviour of the device is to produce the bytes stored on the tape one after another, moving the position forward as you read. This concept is part of the basics of file IO in UNIX, and is dutifully modelled in .NET by
Stream.
TextReader has a similar concept (not least because one of the more widely used concrete text reader types is
StreamReader, which layers over
Stream). But there’s really no need at all to impose this position-oriented model on everything. It’s certainly not how solid-state storage devices work. And even in the cases where you’re still using spinning rust drives, there’s no particular guarantee that the contents of a file will be neatly laid out sequentially on disk. (They often are, and when that happens, the OS can read more efficiently than when they are not. But this doesn’t depend on presenting a position-oriented abstraction for file I/O. Disk caching algorithms smart enough to optimize in the face of fragmentation on disk don’t generally need this kind of hand-holding.) So this is an unfortunate relic from UNIX’s origins, in which technology from half a century ago continues to influence modern APIs.
(As it happens, Windows introduced new file IO APIs for the ‘modern’ apps that first appeared in Windows 8. So if you’re writing UWP apps today, you no longer have to act like you’re using a tape drive. However, since .NET needs to run on descendants of UNIX such as Linux and macOS, file access remains tied to the sequential access model.)
In short, any time you call a method that reads data from a file, such as
ReadLineAsync, you are implicitly reading from the current position in the file. (And to be fair to UNIX, reading plain text from any device is inherently sequential. Without prior knowledge about line lengths there’s no way to jump straight to the 200th line of a file. Although that just makes me want to complain about UNIX’s assumption that plain text files—very much a human-oriented format—are somehow a good medium for interprocess communication.)
Consider what that means for code like this
ParseFile example. Suppose we have a file with 1,000 lines. If we just enumerate the whole thing without waiting for each task to complete before moving on to the next (e.g., we call
ToList on the result, or perhaps we pass it to
Task.WhenAll) this will rapidly create 1,000 tasks. And in the process of doing this, it will have called
ReadLineAsync 1,000 times (because when you invoke an
async method, it will run at least as far as the first
await, and it won’t return until it reaches an
await expression that returns a task that didn’t complete immediately).
The meaning of the very first of those calls to
ReadLineAsync is clear enough: the file’s current position is at the start of the file, so this should read the first line of text. And once the task is complete we expect the file’s position to be at the start of the second line (or at the end of the file if there is no second line).
What about the second call to
ReadLineAsync? Well as long as the first read has completed, the file position will now be at the 2nd line, so it should read the second line. But here’s the problem: nothing ensures that the first read has completed. So there’s no telling what will happen.
In fact, the documentation for
StreamReader says that
ReadLineAsync can throw an
InvalidOperationException if a previous read is still in progress. So hopefully that’s what’ll happen, and we’ll detect the problem.
What to do?
The general lesson here is: think carefully about shared state or resources when returning deferred work. You need to pick a strategy and stick to it. Here are some possibilities:
- Make each operation self-contained
- Reflect the lack of scope for concurrency with suitable type choices
- Enforce synchronization (and, if necessary, ordering) internally
If you’re going to return something representing deferred work, ideally it should be self-contained: it should create and manage whatever resources it needs internally. That’s what the fixed version of
GetContentAsync above does—it returns a task, and that task opens its own
StreamReader and ensures that it doesn’t dispose of it until it has finished using it. From a .NET perspective, it’s all local to the particular piece of deferred work. (That said, the fact that it uses a file on disk is potentially problematic: that’s a shared resource at the system level, so if you made multiple concurrent calls to
GetContentAsync for the same file, you could still see errors. However, we have at least moved out of the category of incorrect use of an API, and into the realm of operations that are inherently subject to failure.)
If you really can make an operation self-contained, this is the ideal because it provides maximum flexibility for the caller: they can consume your code’s output sequentially if that suits them, or concurrently if they prefer. This is what
IEnumerable<Task<T>> purports to offer, and if you can really deliver on that, it’s a good choice.
Sometimes you can’t do this. It would be quite hard to write the
ParseFile method this way: how would you create a self-contained task that returns the 492nd line of a file? The fact that this code has to process lines in a text file effectively forces it into sequential processing. I suppose each task could create its own
StreamReader, skip the first N lines, and then extract the (N+1)th before closing the reader. But even if we ignore the egregious O(N²) performance problems this would introduce, it adds a new problem: this approach could well fail because of all the tasks attempting to open the same file at the same time. (E.g., if you open the file in exclusive read mode to ensure that nothing modifies the file while you’re reading from it, it will fail as soon as you’ve got to the point where two in-flight tasks both want the file open. This is the same problem I just mentioned with
GetContentAsync, but in this case you’re guaranteed to run into it because all your tasks will definitely target the same file.) And even if you try to work around this with suitable file locking options, there’s another issue: if you’re running on something UNIXy, you’ll likely encounter problems relating to the number of file handles a process is allowed to have open at any one time.
More generally, sometimes you might need to return multiple pieces of deferrable work that all operate on something shared. In this case it’s a file. In other cases it might be some shared in-memory data structure accessible to many threads. Or it might be an abstraction around some external service.
In cases like these, ideally you’d choose the right abstraction. As Sergey points out,
IAsyncEnumerable<int> would be the ideal choice for his example because that is a natural fit for the task at hand: it exposes async behaviour at the exact level where the implementation wants it (i.e., at the moment where we want to fetch the next line), without being forced into behaviour it can’t naturally support (e.g., returning a large number of tasks that can all usefully execute concurrently). Reading all the lines in a file is inherently unsuited to concurrency, but is amenable to each step being asynchronous. This model—sequential but asynchronous—is exactly what
IAsyncEnumerable<T> embodies.
If you don’t have a choice (e.g., you have to implement a particular interface, so you don’t get to choose your return type) then you may need to do some work to bridge the gap. Here’s one way we might fix the problematic code above:
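A sketch of that fix (ParseLineAsync stands in for a small async helper that awaits reader.ReadLineAsync() and parses the result; the original code may differ in detail):

```csharp
public static IEnumerable<Task<int>> ParseFile(string path)
{
    using (var reader = new StreamReader(path))
    {
        while (!reader.EndOfStream)
        {
            Task<int> task = ParseLineAsync(reader);
            try
            {
                yield return task;
            }
            finally
            {
                // Runs when the consumer asks for the next item, or when it
                // stops enumerating early. Either way, don't proceed (or
                // dispose the reader) until the task we handed out is done.
                if (!task.IsCompleted)
                {
                    task.Wait();
                }
            }
        }
    }
}
```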
The finally block within the loop here means that before we either a) proceed to the next iteration or b) exit completely, we always ensure that the previous task we returned has now completed.
It’s far from ideal, because calling
Wait on a task is almost always a terrible idea. However, there are sometimes cases where you simply cannot avoid it, and pretty much all of those cases are of the form “Synchronous on the outside, asynchronous on the inside.” It’s most common to encounter this when you’re required to implement an interface that enforces synchronous characteristics but you need to invoke asynchronous methods in your implementation. If that’s the bind you’re in,
Wait may be unavoidable. (Some environments provide a mechanism for bridging this gap. For example, this situation comes up in Visual Studio extensions all the time, but calling
Wait in those will definitely break, so Visual Studio provides a mechanism designed specifically for it: see the “Call async methods from synchronous methods without deadlocking” section of this article.)
In conclusion: produce fully self-contained operations if you can. Failing that, use types that embody the right abstraction for the work at hand. And if you can’t do that, make sure you’ve enforced proper synchronization (and, if necessary, ordering) internally.