To begin with, queues are more deterministic than threads. Queues come in two varieties: serial and concurrent. Serial queues are FIFO, so if you enqueue two tasks, you don’t have to worry about race conditions caused by them executing in the reverse order, or both executing in parallel. Thread pools offer neither guarantee, so they expose you to race conditions.
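A minimal sketch of that FIFO guarantee using GCD’s DispatchQueue (the queue label is arbitrary):

```swift
import Dispatch

// A serial queue executes tasks one at a time, in the order enqueued.
let serial = DispatchQueue(label: "com.example.serial") // label is illustrative
var order: [Int] = []

serial.async { order.append(1) } // always runs first
serial.async { order.append(2) } // always runs second, never concurrently

// Enqueue a synchronous no-op; since the queue is FIFO, by the time it
// runs, both earlier tasks have finished.
serial.sync { }
// order is now [1, 2]
```

With a thread pool, the two tasks could run in either order, or at the same time.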
Queues make synchronisation less error-prone than synchronized blocks or, worse, raw lock/unlock calls. With both of those, I often forget to synchronise on the appropriate mutex before accessing the data it’s protecting. With a queue, all access to the data goes through the queue, so there’s no lock to forget.
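One common shape of this idea is to funnel every access to a piece of state through a private serial queue; a sketch (the `Counter` type and queue label are illustrative):

```swift
import Dispatch

// All reads and writes of `value` go through one serial queue, so there
// is no mutex to forget to lock.
final class Counter {
    private var value = 0
    private let queue = DispatchQueue(label: "com.example.counter") // illustrative

    func increment() {
        queue.async { self.value += 1 }  // writes are serialized by the queue
    }

    func read() -> Int {
        queue.sync { value }             // runs after all pending writes
    }
}

let counter = Counter()
let group = DispatchGroup()
for _ in 0..<100 {
    DispatchQueue.global().async(group: group) { counter.increment() }
}
group.wait()
// counter.read() now returns 100 — no data race despite 100 concurrent callers
```

The design choice here is that the mutable state is private, so the compiler itself stops outside code from touching it without going through the queue.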
Queues encourage non-blocking usage: the code enqueueing a task doesn’t need to wait for the task’s execution to complete before it can make progress. This is important for the UI thread, which should never block for more than 8 ms if the UI is to remain responsive at 120 FPS. Another example is real-time work, say video processing, which is very sensitive to blocking: you don’t want to drop frames and produce a janky video because a thread blocked. Mutexes are a poor fit for such situations, since they block. Queues give you synchronisation without blocking.
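The non-blocking pattern can be sketched as follows. Here `renderFrame` is a hypothetical stand-in for some expensive operation; the caller enqueues the work and returns immediately, and the result is delivered later on a queue of the caller’s choosing (in an app, that would typically be the main queue):

```swift
import Dispatch

// The caller never blocks: the work runs on a background queue, and the
// result is delivered asynchronously on `queue`.
func renderFrame(_ input: Int, resultQueue queue: DispatchQueue,
                 _ completion: @escaping (Int) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let result = input * 2            // placeholder for the heavy work
        queue.async { completion(result) }
    }
}

let resultQueue = DispatchQueue(label: "com.example.results") // illustrative
var delivered = -1
let done = DispatchSemaphore(value: 0)
renderFrame(21, resultQueue: resultQueue) { value in
    delivered = value
    done.signal()
}
done.wait() // only this test harness blocks; the caller of renderFrame never did
```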
Mutexes can deadlock unless you take great care to prevent it, while queues’ non-blocking API makes it hard for you to accidentally introduce a deadlock.
Queues are scalable, so you can create tons of them without worrying about the overhead that threads and processes carry. If you’re using unscalable primitives like threads, you have to layer on higher-level abstractions like a thread pool. That’s more complexity on your shoulders, which I’d rather leave to the system.
Queues make it trivial to execute some code on another thread. Traditional Java-style threads don’t offer that, so you have to build it yourself: create a thread-safe queue, enqueue tasks from one thread, dequeue them on another, and figure out how to encode what should be done.
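With GCD, all of that machinery collapses into one call; a sketch:

```swift
import Foundation
import Dispatch

// Running code on another thread is a single call — no hand-rolled task
// queue, worker thread, or message encoding required. The closure itself
// is the encoding of "what should be done".
let group = DispatchGroup()
var ranOffMainThread = false

DispatchQueue.global().async(group: group) {
    ranOffMainThread = !Thread.isMainThread
}
group.wait() // the main thread waits here, so the work runs on a worker thread
```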
Queues are more efficient than processes, even the lightweight ones encouraged by languages like Erlang and Go. This may not matter for your app, but there are ambitious apps for which it does. For example, I work on Noctacam, a computational photography app that processes almost a gigabyte of data per second in real time on an iPhone. The last thing I want is some well-intentioned abstraction dragging me down to the lowest common denominator. There will always be people trying to build apps at the edge of what’s possible, and these innovative apps won’t be possible if the underlying abstractions impose too high an overhead without letting you opt out. Copying that much data across a process boundary in real time may not be possible. Copy-on-write doesn’t help either: the data is modified, which triggers the very copy you were trying to avoid. Again, at almost a gigabyte of data per second, this becomes critical. Queues are efficient.
Queues can have priorities, so that the UI queue or another high-priority queue maintains its responsiveness and performance even if background queues are taking up all the CPU cores. Further, tasks can have priorities of their own that control their order of execution within the queue.
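In GCD, queue priorities are expressed as quality-of-service classes; a sketch (the labels and QoS choices are illustrative):

```swift
import Dispatch

// Under CPU contention, the scheduler favors the higher-QoS queue.
let urgent = DispatchQueue(label: "com.example.urgent", qos: .userInteractive)
let cleanup = DispatchQueue(label: "com.example.cleanup", qos: .background)

urgent.async {
    // latency-sensitive work, e.g. preparing the next UI update
}
cleanup.async {
    // deferrable work, e.g. pruning caches
}
```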
Here’s an example of when I used task priorities: I needed to perform a task repeatedly, and each invocation wouldn’t take long, so I ran it on the main queue. When it finished, I enqueued a minimum-priority task on the main queue to run it again. This let me achieve my goal without making the UI unresponsive. I could’ve used a separate queue, but that would’ve risked race conditions caused by concurrent, non-deterministic execution.
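A sketch of that repeat-at-low-priority pattern using Foundation’s OperationQueue, which layers per-task priorities on top of GCD. To keep the sketch self-contained, a serial OperationQueue stands in for the main queue:

```swift
import Foundation
import Dispatch

let tickQueue = OperationQueue()
tickQueue.maxConcurrentOperationCount = 1  // serial, like the main queue

var ticks = 0
let allDone = DispatchSemaphore(value: 0)

// Each invocation does a small unit of work, then re-enqueues itself at
// the lowest priority, so higher-priority tasks on the queue always win.
func scheduleTick() {
    let op = BlockOperation {
        ticks += 1                                   // the small unit of work
        if ticks < 3 { scheduleTick() } else { allDone.signal() }
    }
    op.queuePriority = .veryLow                      // yield to everything else
    tickQueue.addOperation(op)
}

scheduleTick()
allDone.wait()
// ticks is now 3; between repetitions, the queue was free for other work
```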
Queues support a variety of execution styles and advanced features. For example, when you enqueue a task onto a queue, you can block the current thread or let it continue running. You can add a delay, measured in either device time or wall time. You can have concurrent queues that execute multiple operations in parallel, or serial FIFO queues. You can add a dependency between tasks so that a task doesn’t begin executing until its dependencies have finished. You can block the current thread until a given task finishes, or until all tasks on a given queue finish. You can cancel a task, say if the user presses a Cancel button in the UI, or cancel all tasks on a queue. You can check the state of a task to see if it’s ready for execution, executing, finished or canceled. You can suspend a queue. You can see how many operations are in a queue, or even peek at the exact operations.
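Two of those features, dependencies and cancellation, come from Foundation’s Operation API; a sketch (the operation names are illustrative):

```swift
import Foundation

let opQueue = OperationQueue()
opQueue.maxConcurrentOperationCount = 1  // serial, to keep the log deterministic

var log: [String] = []

let fetch = BlockOperation { log.append("fetch") }
let parse = BlockOperation { log.append("parse") }
parse.addDependency(fetch)       // parse won't start until fetch has finished

let doomed = BlockOperation { log.append("doomed") }
doomed.cancel()                  // cancelled before it runs, so it never executes

// Even though parse is added first, the dependency forces fetch to run first.
opQueue.addOperations([parse, fetch, doomed], waitUntilFinished: true)
// log is ["fetch", "parse"]; "doomed" never appears
```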
Queues, then, are a full-featured abstraction, supporting a variety of asynchronous coding styles. Yet they’re easy to get started with: all this power doesn’t come with a steep learning curve. You can get going with a single API call, DispatchQueue.async(). That in itself is an accomplishment. And the resulting code is scalable, efficient and not prone to race conditions and deadlocks. Queues are an under-appreciated marvel of Cocoa.