Rust: Async vs Multi-Threading
About
In this post, we’ll dive into asynchronous (async) programming and multi-threading. Both help programs run faster and stay responsive, but they go about it with very different philosophies. We’ll explore these differences using Rust as a practical example.
Multi-threading
“Many Hands Make Light Work”
Multi-threading follows a parallel, divide-the-work philosophy. It assumes the bottleneck is the sheer amount of work to be done. To solve this, you break the work into smaller pieces and hand each piece to a separate “worker” (thread).
How does it work?
Thread vs Process
- Process: Each process has its own memory space, including code, data, stack, and heap segments. Inter-process communication (IPC) is generally slower and more complex.
- Thread: Threads within the same process share the code, data, and heap segments but maintain their own stacks. Because memory is shared, threads can communicate with each other cheaply and directly.
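The shared-memory point above can be sketched with scoped threads (stable since Rust 1.63): two threads borrow the same heap data directly, because the scope guarantees they finish before the data goes out of scope.

```rust
use std::thread;

fn main() {
    // Heap data owned by the process; every thread in the process can see it.
    let data = vec![1, 2, 3, 4];

    // Scoped threads may borrow `data` immutably at the same time;
    // no copying or message passing is needed to share it.
    let total: i32 = thread::scope(|s| {
        let front = s.spawn(|| data[..2].iter().sum::<i32>());
        let back = s.spawn(|| data[2..].iter().sum::<i32>());
        front.join().unwrap() + back.join().unwrap()
    });

    println!("{total}"); // prints 10
}
```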
Concurrency vs Parallelism
Concurrency - Single Core
How it works:
- CPU rapidly switches between threads (time slicing)
- Each thread gets a small time slice (~milliseconds)
- Context switching creates the illusion of parallel execution
Reality: Only one thread executes at any given moment.
This is true concurrency without parallelism.
Parallelism - Multi Core
How it works:
- Multiple CPU cores execute threads simultaneously
- Each core can run a different thread at the same time
- Reduced need for time slicing between truly parallel threads
Reality: Multiple threads actually execute at the same moment.
This is true parallelism with concurrency.
Note: More threads doesn’t always mean better performance. When you have more threads than CPU cores, context-switching overhead occurs, which can actually degrade performance.
Note: Executing multiple threads doesn’t guarantee parallel execution.
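Since oversubscribing cores hurts performance, a common heuristic is to size a worker pool to the machine's actual parallelism. The standard library exposes this directly:

```rust
use std::thread;

fn main() {
    // Number of hardware threads the OS reports; a sensible default
    // size for a CPU-bound thread pool. Falls back to 1 if the query fails.
    let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    println!("this machine can run {cores} threads in parallel");
}
```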
Important Considerations When Using Multi-Threading
1. Race Conditions
Race conditions occur when multiple threads access shared data simultaneously without proper coordination, leading to unpredictable and incorrect results.
The Problem
When two threads read and modify the same variable, they can overwrite each other’s changes. For instance, if Thread A and Thread B both read a counter value of 10, increment it, and write back 11, you’ve lost one increment—the counter should be 12, not 11.
The Solution
Use synchronization mechanisms like locks, mutexes, or atomic operations to ensure only one thread can access critical sections at a time.
How Rust Solves It
Rust’s ownership system prevents data races at compile time. The type system enforces that:
- Only one mutable reference OR multiple immutable references can exist at a time
- Data shared between threads must be wrapped in thread-safe types like `Mutex<T>` or `Arc<T>`
- The compiler won’t let you compile code with potential data races
```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    assert_eq!(*counter.lock().unwrap(), 10);
}
```
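For a counter this simple, a lock isn't strictly necessary: an atomic type gives the same guarantee with a single indivisible hardware operation. A sketch of the same program using `AtomicUsize`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let counter = Arc::new(AtomicUsize::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // fetch_add is a single read-modify-write instruction,
            // so no increment can be lost - no lock required.
            counter.fetch_add(1, Ordering::SeqCst);
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    assert_eq!(counter.load(Ordering::SeqCst), 10);
}
```

Atomics suit simple counters and flags; once you need to update several fields together, a `Mutex` is the right tool again.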
2. Deadlocks
A deadlock happens when two or more threads are stuck waiting for each other to release resources, creating a situation where none can proceed.
Common Scenario
- Thread A acquires Lock 1, then tries to acquire Lock 2
- Thread B acquires Lock 2, then tries to acquire Lock 1
- Both threads wait indefinitely
Prevention Strategies
- Always acquire locks in the same order across all threads
- Use timeout mechanisms when acquiring locks
- Avoid holding multiple locks when possible
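The first strategy, consistent lock ordering, can be sketched with a hypothetical `transfer` between two accounts: each account carries an `id` used only to decide which lock to take first, so two concurrent transfers in opposite directions can never hold one lock while waiting on the other.

```rust
use std::sync::Mutex;

// Hypothetical account type; the id exists only to order lock acquisition.
struct Account {
    id: u64,
    balance: Mutex<i64>,
}

fn transfer(from: &Account, to: &Account, amount: i64) {
    // Always lock the account with the smaller id first.
    let (first, second) = if from.id < to.id { (from, to) } else { (to, from) };
    let mut first_guard = first.balance.lock().unwrap();
    let mut second_guard = second.balance.lock().unwrap();

    // Work out which guard belongs to `from` and which to `to`.
    if first.id == from.id {
        *first_guard -= amount;
        *second_guard += amount;
    } else {
        *second_guard -= amount;
        *first_guard += amount;
    }
}

fn main() {
    let a = Account { id: 1, balance: Mutex::new(100) };
    let b = Account { id: 2, balance: Mutex::new(100) };
    transfer(&a, &b, 30);
    assert_eq!(*a.balance.lock().unwrap(), 70);
    assert_eq!(*b.balance.lock().unwrap(), 130);
}
```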
3. Thread Safety
Not all code and data structures are designed to handle concurrent access safely.
Key Considerations
- Many standard libraries provide non-thread-safe collections by default
- Shared objects must either be immutable or properly synchronized
- Use thread-safe alternatives (concurrent collections) or add synchronization yourself
- Be aware of which operations are atomic and which aren’t
How Rust Solves It
Rust uses marker traits to enforce thread safety at compile time:
- `Send` trait: Types that can be transferred across thread boundaries
- `Sync` trait: Types that can be safely shared between threads
- The compiler automatically prevents sending non-thread-safe types to other threads
```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // This won't compile - Rc is not Send
    // use std::rc::Rc;
    // let rc = Rc::new(5);
    // thread::spawn(move || {
    //     println!("{}", rc);
    // });

    // This works - Arc is Send + Sync
    let arc = Arc::new(5);
    let handle = thread::spawn(move || {
        println!("{}", arc);
    });
    handle.join().unwrap();
}

// Arc implementation: https://github.com/rust-lang/rust/blob/1.93.0/library/alloc/src/sync.rs#L273-L276
```
4. Proper Resource Cleanup
Threads that don’t terminate properly can cause serious issues.
Critical Points
- Always ensure threads can be shut down gracefully
- Clean up resources (files, connections, memory) when threads finish
- Use thread pools to manage thread lifecycle
- Implement proper exception handling to prevent threads from dying silently
How Rust Solves It
Rust’s RAII (Resource Acquisition Is Initialization) pattern ensures resources are cleaned up:
- `MutexGuard` automatically releases locks when dropped
- `JoinHandle` must be explicitly handled or the thread detaches
- Panics in threads are isolated and can be caught with `join()`
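A small sketch of both points: the lock is released the moment its guard goes out of scope, and a panicking worker thread surfaces as an `Err` from `join()` instead of taking down the whole process.

```rust
use std::sync::Mutex;
use std::thread;

fn main() {
    let data = Mutex::new(0);

    {
        let mut guard = data.lock().unwrap();
        *guard += 1;
    } // MutexGuard dropped here: the lock is released automatically.

    // No deadlock: the previous guard is already gone.
    assert_eq!(*data.lock().unwrap(), 1);

    // The panic is isolated to the worker thread; join() reports it as Err.
    let handle = thread::spawn(|| panic!("worker failed"));
    assert!(handle.join().is_err());
    println!("main thread is still alive");
}
```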
Async
“Don’t Just Sit There, Do Something Else”
Asynchronous programming is a cooperative philosophy. It assumes that the biggest bottleneck in a system isn’t the speed of the CPU, but the wait time for external events like a database response, a file download, or user input.
How does it work?
In asynchronous programming, instead of waiting for a slow task to finish (blocking), the program “flags” the task and moves on to the next one.
Event Loop
The event loop is the core mechanism for handling asynchronous operations in Rust. Unlike JavaScript’s built-in event loop, Rust uses async runtimes like Tokio or async-std to manage concurrent execution efficiently.
Rust’s async model is built on Futures and an Executor:
- Futures - Async functions return `Future` objects representing pending work
- Polling - The executor repeatedly polls futures to check if they can progress
- Yielding - When waiting for I/O, futures return `Poll::Pending` and yield control
- Waking - Once ready, futures are woken and polled again
- Completion - Finished futures return `Poll::Ready(value)`
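The polling cycle above can be demonstrated without any runtime at all: a toy future that stays `Pending` for two polls, driven by a hand-rolled loop with a do-nothing waker. The `Countdown` type and `noop_waker` helper are illustrative inventions, not part of any library.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A toy future that needs three polls before it completes.
struct Countdown(u32);

impl Future for Countdown {
    type Output = &'static str;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.0 == 0 {
            Poll::Ready("done")
        } else {
            self.0 -= 1;
            Poll::Pending // "not ready yet - poll me again later"
        }
    }
}

// A waker that does nothing: just enough to drive the poll loop by hand.
// Real executors install a waker that reschedules the task when I/O is ready.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Countdown(2);
    let mut pinned = Pin::new(&mut fut);

    let mut polls = 0;
    loop {
        polls += 1;
        if let Poll::Ready(msg) = pinned.as_mut().poll(&mut cx) {
            println!("{msg} after {polls} polls"); // "done after 3 polls"
            break;
        }
        // A real executor would park this task here until woken.
    }
}
```

A real executor never spins like this loop does; it parks the task and relies on the waker to re-queue it, which is the whole point of the yield/wake cycle.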
I/O Work
CPU-bound vs I/O-bound
Understanding the difference between these workload types is crucial for writing efficient async code:
CPU-bound
- Tasks limited by CPU processing power
- Examples: data processing, encryption, compression, mathematical calculations
- Best handled with: threads or `spawn_blocking`
I/O-bound
- Tasks limited by input/output operations
- Examples: network requests, file operations, database queries
- Best handled with: async/await
Synchronous vs Asynchronous I/O
Synchronous (Blocking)
- CPU waits until data is received
- Thread blocked during I/O operation
Asynchronous (Non-blocking)
- I/O operations are initiated and checked for completion without blocking the thread
- CPU continues other tasks while waiting
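The non-blocking primitive that async runtimes build on is visible even in the standard library: a socket in non-blocking mode returns `WouldBlock` immediately instead of parking the thread, leaving it free to do other work and try again later.

```rust
use std::io::ErrorKind;
use std::net::TcpListener;

fn main() {
    // Bind a listener on an OS-assigned port and switch to non-blocking mode.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    listener.set_nonblocking(true).unwrap();

    // With no pending connection, a blocking accept() would park this thread.
    // In non-blocking mode it returns WouldBlock immediately instead.
    match listener.accept() {
        Ok(_) => println!("client connected"),
        Err(e) if e.kind() == ErrorKind::WouldBlock => {
            println!("no connection ready; free to do other work");
        }
        Err(e) => panic!("unexpected error: {e}"),
    }
}
```

An async runtime wraps exactly this pattern: on `WouldBlock` it registers a waker with the OS and polls the future again once the socket is ready.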
Drawbacks of async programming
While async programming offers significant benefits for I/O-bound applications, it comes with its own set of challenges and limitations that developers should understand.
1. Increased Code Complexity
Async code is inherently more complex than synchronous code, making it harder to read, write, and maintain.
The Problem
- Code becomes non-linear with await points scattered throughout
- Error handling becomes more complicated with nested Results and potential panic points
- Stack traces from async code are often harder to interpret
- Mental model shifts from sequential thinking to state machine thinking
2. Function Coloring Problem
Once you go async, everything that calls it must also be async, creating a split in your codebase.
The Problem
- Async functions can only be awaited in other async contexts
- Cannot easily mix sync and async code
- Library authors must often maintain both sync and async versions
- Refactoring from sync to async (or vice versa) ripples through the entire codebase
In Rust
Rust makes the async/sync boundary explicit, which is both good (clarity) and challenging (inflexibility):
```rust
async fn async_function() -> String {
    "Hello".to_string()
}

fn sync_function() {
    // This won't work - can't await in a sync function
    // let result = async_function().await;

    // Must use block_on or similar, which defeats async benefits
    let result = futures::executor::block_on(async_function());
}
```
3. Runtime Overhead and Dependency
Async code requires a runtime to execute, adding complexity and dependencies to your project.
The Problem
- Must choose and configure an async runtime (Tokio, async-std, smol, etc.)
- Runtime adds binary size and startup cost
- Different runtimes may not be compatible with each other
- Runtime configuration affects performance characteristics
In Rust
```rust
// Need to choose a runtime
#[tokio::main]
async fn main() {
    // Your async code here
}

// Or manually configure
fn main() {
    let runtime = tokio::runtime::Runtime::new().unwrap();
    runtime.block_on(async {
        // Your async code here
    });
}
```
4. Debugging Difficulties
Debugging async code is significantly more challenging than synchronous code.
The Problem
- Stack traces show runtime machinery rather than logical flow
- Debugger stepping through await points is confusing
- State is spread across multiple futures and the runtime
- Race conditions and timing issues are harder to reproduce
Choosing the Right Approach
Both multi-threading and async programming are powerful tools, but they excel in different scenarios:
Use Multi-threading when:
- You have CPU-bound workloads that benefit from parallel computation
- You need true parallelism across multiple cores
- Your tasks are independent and don’t require frequent coordination
- Blocking operations are unavoidable
Use Async when:
- You have I/O-bound workloads with high wait times
- You need to handle thousands of concurrent connections efficiently
- Memory and thread overhead are concerns
- Tasks spend most time waiting for external resources
Hybrid Approach:
Modern applications often combine both! For example, Tokio allows you to use `spawn_blocking` for CPU-intensive work within an async runtime, giving you the best of both worlds.
Conclusion
Understanding when to use multi-threading vs async programming is crucial for building efficient applications. Multi-threading excels at CPU-bound parallel work, while async shines in I/O-bound concurrent scenarios. Rust’s type system helps you use both safely, preventing common concurrency pitfalls at compile time.
The key is matching your tool to your problem—and sometimes, using both together.