synchronized Blocks Are a Performance Trap (And What to Use Instead)

Unlocking true performance by moving beyond a 25-year-old concurrency primitive. It’s time for an upgrade.
If you’ve ever written multi-threaded Java code, you’ve met synchronized. It’s the trusty old hammer in our concurrency toolbox, taught in every introductory course. Need to protect a shared resource? Just wrap it in a synchronized block or slap the keyword on a method, and you’re thread-safe. Done.
But let’s be honest. In the world of high-throughput applications and multi-core processors, that trusty old hammer often behaves more like an anchor. Relying on synchronized for every concurrency problem can silently kill your application’s performance and scalability.
It's time we talk about why this happens and explore the sharper, more efficient tools that modern Java provides.
synchronized

The synchronized keyword is built right into the Java language. It uses an intrinsic lock (or monitor lock) associated with every object. When a thread enters a synchronized block, it acquires the lock on a specific object. No other thread can acquire the same lock until the first thread exits the block.
It’s wonderfully simple. Consider this classic counter:
public class SlowCounter {
    private int count = 0;

    // A synchronized method locks the 'this' instance
    public synchronized void increment() {
        count++;
    }

    public synchronized int getCount() {
        return count;
    }
}
This code is correct. It prevents race conditions, and it ensures that changes to count are visible across all threads. For simple, low-contention scenarios, it works just fine. But its simplicity hides some serious drawbacks.
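To see it doing its job, here is a minimal harness that hammers the counter from several threads (the SlowCounter class from above is repeated inside so the example is self-contained):

```java
public class SlowCounterDemo {
    // Same class as above, repeated here so the example compiles on its own
    static class SlowCounter {
        private int count = 0;
        public synchronized void increment() { count++; }
        public synchronized int getCount() { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        SlowCounter counter = new SlowCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    counter.increment();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join(); // wait for every writer before reading the total
        }
        // With synchronized in place, no increments are lost
        System.out.println("Final count: " + counter.getCount());
    }
}
```

Remove the synchronized keywords and the final count will usually come up short, because count++ is a read-modify-write that two threads can interleave.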
The problem with synchronized isn’t that it’s broken; it’s that it’s pessimistic and inflexible. It assumes the worst-case scenario and makes everyone stand in a single-file line.
Here’s where it gets expensive:
- Entering a synchronized block is a blocking operation. A thread will wait indefinitely to acquire the lock. There’s no way to time out, no way to check if the lock is available, and no way to back off if you can’t get it.
- synchronized locks are not fair. There’s no guarantee that the thread waiting the longest will get the lock next. A new, "barging" thread might get it instead, potentially leading to thread starvation where some threads make little to no progress.
- A synchronized method locks the entire this object. This means a thread calling increment() will block another thread calling a completely unrelated synchronized method on the same object. You’re locking more than you need to, creating unnecessary bottlenecks.

Think of synchronized as a single-lane bridge on a busy highway. It works, but during rush hour, you get a massive traffic jam.
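When you do stay with synchronized, one escape hatch from the whole-object lock is to synchronize on private lock objects instead of this. A minimal sketch (the hit/miss counters are invented purely for illustration):

```java
public class SplitLocks {
    private final Object hitsLock = new Object();
    private final Object missesLock = new Object();
    private int hits;
    private int misses;

    // Each counter is guarded by its own monitor, so a thread recording
    // a hit never blocks a thread recording a miss.
    public void recordHit() {
        synchronized (hitsLock) { hits++; }
    }

    public void recordMiss() {
        synchronized (missesLock) { misses++; }
    }

    public int hits() {
        synchronized (hitsLock) { return hits; }
    }

    public int misses() {
        synchronized (missesLock) { return misses; }
    }

    public static void main(String[] args) {
        SplitLocks stats = new SplitLocks();
        stats.recordHit();
        stats.recordHit();
        stats.recordMiss();
        System.out.println("hits=" + stats.hits() + " misses=" + stats.misses());
    }
}
```

This narrows the critical sections but still inherits every other limitation above: no timeouts, no fairness, no way to back off.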
ReentrantLock

The first step up from synchronized is java.util.concurrent.locks.ReentrantLock. It does the same job, providing mutual exclusion, but gives you the power tools that synchronized lacks.

A ReentrantLock is a programmatic lock. You have to explicitly lock and unlock it, typically in a try-finally block to ensure the lock is always released, even if an exception occurs.
import java.util.concurrent.locks.ReentrantLock;

public class BetterCounter {
    private int count = 0;
    private final ReentrantLock lock = new ReentrantLock();

    public void increment() {
        lock.lock(); // Acquire the lock
        try {
            count++;
        } finally {
            lock.unlock(); // Always release the lock in a finally block
        }
    }

    public int getCount() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
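Those explicit calls also unlock capabilities that synchronized simply doesn’t have. For instance, timed acquisition lets a thread back off instead of waiting forever (a sketch; the printed messages stand in for real work):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        // Wait at most 200 ms for the lock instead of blocking indefinitely
        if (lock.tryLock(200, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("Got the lock, doing work");
            } finally {
                lock.unlock();
            }
        } else {
            // Back off and do something else -- impossible with synchronized
            System.out.println("Lock busy, doing something else");
        }
    }
}
```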
Why is this better?

- Non-blocking attempts: tryLock() attempts to acquire the lock and returns true or false immediately. There’s also a version with a timeout, tryLock(long timeout, TimeUnit unit), which is a game-changer for preventing deadlocks and improving responsiveness.
- Fairness: a lock created with new ReentrantLock(true) will grant the lock to the longest-waiting thread, preventing starvation (though this comes with a slight performance overhead).
- Explicit scope: the lock() and unlock() calls force you to think about the scope of your critical section, encouraging you to keep it as small as possible.

ReadWriteLock
What if your data is read 99% of the time and written only 1% of the time? Using synchronized or a ReentrantLock is incredibly inefficient here. Both locks are exclusive, meaning they block readers from reading if another reader is already there. That’s totally unnecessary: readers don’t conflict with each other!

Enter ReentrantReadWriteLock. It maintains a pair of locks: one for reading and one for writing.
This is perfect for read-heavy data structures like a cache.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CacheWithReadWriteLock<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public V get(K key) {
        lock.readLock().lock(); // Multiple threads can hold the read lock at once
        try {
            return map.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        lock.writeLock().lock(); // Only one thread can hold the write lock
        try {
            map.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
With this pattern, dozens of threads can read from the cache concurrently, and they will only be blocked when a writer comes along. This dramatically improves throughput for read-dominant workloads.
For simple state updates like counters, flags, or single-value references, even ReentrantLock can be overkill. The most performant approach is often to avoid locks entirely.

The java.util.concurrent.atomic package provides a suite of classes like AtomicInteger, AtomicLong, and AtomicReference that use low-level hardware instructions (like Compare-And-Swap, or CAS) to perform updates atomically without ever blocking.
CAS is an optimistic strategy. A thread tries to update a value by telling the processor: "If the current value is still X, update it to Y." If another thread changed the value from X in the meantime, the operation fails, and the thread can simply retry. It's like a transaction that commits only if the data hasn't changed.
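That retry loop is easy to see if you use compareAndSet directly; conceptually, this is what the convenience methods like incrementAndGet do for you (modern JDKs compile them down to hardware instructions):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasLoopDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(5);

        int current;
        int next;
        do {
            current = value.get(); // read the current value (our "X")
            next = current * 2;    // compute the desired value ("Y")
            // compareAndSet succeeds only if the value is still `current`;
            // if another thread changed it in the meantime, loop and retry
        } while (!value.compareAndSet(current, next));

        System.out.println("Updated to: " + value.get());
    }
}
```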
Let's rewrite our counter one last time:
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        count.incrementAndGet(); // Atomic, lock-free operation
    }

    public int getCount() {
        return count.get();
    }
}
Look at how clean that is! No locks, no try-finally blocks. This version is non-blocking and scales well across many cores because threads never block waiting for a lock; they simply retry their update until it succeeds, which is extremely fast when contention is low.
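One caveat: "never blocking" doesn’t mean contention is free. Under very heavy write contention, the CAS retries themselves become the bottleneck, and LongAdder (from the same java.util.concurrent.atomic package) usually scales better by striping updates across multiple internal cells:

```java
import java.util.concurrent.atomic.LongAdder;

public class LongAdderDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder total = new LongAdder();
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    total.increment(); // contended updates land on separate cells
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // sum() folds the cells together when you actually need the value
        System.out.println("Total: " + total.sum());
    }
}
```

The trade-off is that sum() is not an atomic snapshot, so LongAdder fits statistics and counters, not values you compare-and-branch on.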
synchronized served us well for decades, but modern, high-performance systems demand more sophisticated tools. It’s not about abandoning synchronized entirely, but about knowing its limitations and when to reach for a better alternative.

Let’s recap the modern concurrency toolkit:

- synchronized: use it for simple, low-contention code where readability is paramount and performance isn’t critical.
- ReentrantLock: your go-to replacement for synchronized when you need flexibility like timeouts, interruptibility, or fairness.
- ReadWriteLock: the perfect choice for read-heavy data structures where you want to allow concurrent reads to maximize throughput.
- Atomic variables: the best and fastest option for managing simple state like counters, flags, or a single reference. Go lock-free whenever you can.

The next time you’re about to type synchronized, take a moment to pause. Ask yourself: is this really the best tool for the job? Or is there a more performant, scalable, and flexible alternative waiting in java.util.concurrent?
If you found this breakdown useful, follow for more practical deep dives into building better, faster software.