The Java Memory Model defines how threads interact through memory and what visibility and ordering guarantees the JVM provides.
1. Why the Java Memory Model exists
The JMM exists to answer one critical question:
When multiple threads read and write shared variables, what outcomes are legal?
Without a well-defined memory model:
- Different CPUs and compilers could reorder instructions differently.
- Threads could see stale or inconsistent values.
- Code that “looks obviously correct” in single-threaded reasoning could fail randomly in production under load or on a different CPU.
The JMM:
- Defines happens-before rules that constrain reordering and visibility.
- Lets you write portable concurrent code without knowing hardware details.
- Provides a foundation for the JVM to optimize aggressively while remaining correct.
Pre-Java 5, the model was underspecified; code such as double-checked locking was “apparently safe” but actually broken. The modern JMM (JSR-133) fixed this by clearly defining visibility, ordering, and publication rules.
2. CPU cache vs main memory problem
Modern CPUs do not read and write variables directly from main memory on every operation. They use multiple levels of cache and write buffers.
Conceptual view:
+------------------+          +------------------+
|      CPU 1       |          |      CPU 2       |
|    Registers     |          |    Registers     |
|   L1/L2 Cache    |          |   L1/L2 Cache    |
+--------+---------+          +--------+---------+
         |                             |
         +--------------+--------------+
                        |
                   Main Memory
Problems:
- Stale reads: CPU1 writes flag = true in its cache; CPU2 keeps seeing false from its own cache.
- Reordering (from the point of view of other cores):
- CPUs and compilers may reorder loads/stores as long as single-thread semantics remain the same.
- Across threads, this can expose unexpected intermediate states.
Real-world production bug pattern:
- Thread A writes important data and then sets a “ready” flag.
- Thread B waits until “ready” is true, then reads the data.
- On real hardware, B may see ready == true but stale data, if the writes to the data are still sitting in CPU A’s cache or were reordered past the flag write.
The JMM provides rules (via volatile, synchronized, etc.) that map to hardware memory barriers so caches are flushed/invalidated in the right order.
3. Thread working memory vs main memory
The JMM models each thread as having its own working memory (conceptual; implemented with registers, caches) plus a shared main memory.
     Thread 1                        Thread 2
+-----------------+          +-----------------+
|  Working Memory |          |  Working Memory |
|  (local copies) |          |  (local copies) |
+--------+--------+          +--------+--------+
         |                            |
         +-------------+--------------+
                       |
                  Main Memory
          (shared, "true" values)
Key points:
- Each thread may keep cached copies of variables locally.
- Reads/writes in a thread only sometimes go to/come from main memory.
- Without synchronization (HB edges), the JMM allows:
- Stale reads: local copy not updated.
- Writes not visible to other threads for arbitrary time.
- Out-of-order visibility of another thread’s writes.
Example scenario (flag-based communication):
class Task {
boolean done = false;
void worker() {
// do work
done = true; // write done
}
void waiter() {
while (!done) {
// busy wait
}
// assume done is true, but may see stale
}
}
On some CPUs:
- waiter may spin forever because done in its working memory never updates.
- Or it might observe unexpected interleavings with other state.
This is legal under the JMM because there is no happens-before edge between the write and read of done.
4. Happens-Before relationship
The happens-before (HB) relation is the core of the JMM.
Definition (informal):
If action A happens-before action B, then:
- The effects of A are visible to B.
- A is ordered before B (B cannot see A as happening later).
Important HB rules:
- Program order rule
- In a single thread, earlier actions HB later actions.
- E.g., in one method x = 1; y = 2;, the write to x HB the write to y in that thread.
- Monitor lock rule (synchronized)
- Unlock on monitor M HB subsequent lock on M.
- Everything visible to thread A before it releases a lock will be visible to thread B after it acquires the same lock.
- Volatile variable rule
- A write to a volatile variable HB every subsequent read of that same variable.
- Ensures visibility and ordering around that volatile.
- Thread start rule
- A call to Thread.start() HB any actions inside the started thread.
- Thread join rule
- All actions in a thread HB a successful return from Thread.join() on that thread.
- Transitivity
- If A HB B, and B HB C, then A HB C.
Example (using volatile flag):
class Worker {
int data = 0;
volatile boolean ready = false;
void produce() {
data = 42; // (1)
ready = true; // (2) volatile write
}
void consume() {
while (!ready) { // (3) volatile read
}
int x = data; // (4)
}
}
Happens-before chain:
- (1) → (2) by program order inside produce.
- (2) → (3) by the volatile rule.
- (3) → (4) by program order in consume.
- By transitivity, (1) HB (4), so the consumer is guaranteed to see data == 42.
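The start/join rules alone give a similar guarantee with no volatile at all: a worker thread's plain writes are visible after join() returns. A minimal sketch, assuming an illustrative JoinExample class:
class JoinExample {
    int result = 0; // plain field; no volatile needed thanks to start/join HB edges

    void compute() throws InterruptedException {
        Thread worker = new Thread(() -> result = 42); // write inside the started thread
        worker.start();  // start() HB everything the worker does
        worker.join();   // everything the worker did HB the return from join()
        System.out.println(result); // guaranteed to print 42
    }
}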
5. Visibility, atomicity, and ordering
Three distinct but related concepts:
5.1 Visibility
- When one thread writes to a variable, is that new value guaranteed to be seen by another thread?
- Provided via:
- volatile
- synchronized (lock/unlock)
- final with proper publication
- Higher-level constructs (e.g., java.util.concurrent classes)
5.2 Atomicity
- An operation is atomic if from other threads’ perspective it happens all-or-nothing; no intermediate state is visible.
- Atomic in JMM:
- Reads/writes of boolean, byte, char, short, int, float, and object references.
- Reads/writes of long and double are NOT guaranteed atomic unless the field is declared volatile (JLS 17.7 permits splitting them into two 32-bit accesses), though most 64-bit JVMs treat them atomically in practice; visibility is still not guaranteed either way.
- Not atomic:
- Read-modify-write sequences like count++ (read, increment, write).
- Compound operations across multiple variables.
Example:
int counter = 0;
void increment() {
counter++; // not atomic
}
Multiple threads calling increment() cause lost updates because counter++ decomposes to load, add, store.
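A minimal fix sketch, using java.util.concurrent.atomic.AtomicInteger so the read-modify-write happens as one atomic step (the SafeCounter class name is illustrative):
import java.util.concurrent.atomic.AtomicInteger;

class SafeCounter {
    private final AtomicInteger counter = new AtomicInteger(0);

    void increment() {
        counter.incrementAndGet(); // atomic load-add-store; no lost updates
    }

    int get() {
        return counter.get();
    }
}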
5.3 Ordering
- Even if each variable’s read/write is atomic, the relative order of operations matters.
- Within a thread, the JVM and CPU may reorder operations as long as single-thread semantics are preserved.
- JMM plus HB constrain this reordering when there are synchronization edges.
Example bug:
int a = 0, b = 0;
int x, y;
// Thread 1
a = 1; // (1)
x = b; // (2)
// Thread 2
b = 1; // (3)
y = a; // (4)
Surprising but legal result: x == 0 && y == 0 because writes and reads can be reordered and/or seen out of order without HB edges.
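A minimal litmus-test sketch of this pattern is below; whether the x == 0 && y == 0 outcome actually shows up depends on the JIT and the CPU, so a run that never prints it proves nothing:
class ReorderingDemo {
    static int a, b, x, y;

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 100_000; i++) {
            a = 0; b = 0; x = 0; y = 0;
            Thread t1 = new Thread(() -> { a = 1; x = b; });
            Thread t2 = new Thread(() -> { b = 1; y = a; });
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            if (x == 0 && y == 0) {
                System.out.println("x == 0 && y == 0 observed on iteration " + i);
            }
        }
    }
}
Declaring a and b volatile would forbid the 0/0 outcome, because volatile accesses are totally ordered with respect to each other.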
6. volatile keyword (what it guarantees and what it does NOT)
6.1 What volatile guarantees
For a volatile variable v:
- Visibility:
- A write to v is flushed to main memory.
- A subsequent read of v from another thread will see that write (or a later one).
- Ordering around volatile:
- All writes before a volatile write cannot be reordered after that volatile write.
- All reads after a volatile read cannot be reordered before that volatile read.
- HB relation:
- As noted: a write to v HB a subsequent read of v.
This is enough to implement safe one-way communication and some simple lock-free patterns.
Example: stop flag for a long-running task.
class Service {
volatile boolean running = true;
void runLoop() {
while (running) {
// do work
}
}
void stop() {
running = false;
}
}
- Without volatile, runLoop may never observe false due to caching.
- With volatile, stop()'s write will eventually be observed and the loop will exit.
6.2 What volatile does NOT guarantee
- volatile does NOT provide mutual exclusion.
- Two threads can still update a volatile variable concurrently and lose updates:
volatile int counter = 0;

void badInc() {
    counter++; // still non-atomic
}
- volatile does NOT make compound actions atomic:
- Check-then-act (if (flag) doSomething()).
- Read-modify-write (v = v + 1).
- volatile does NOT provide composite invariants across multiple variables.
- If you need to maintain a set of related fields in a consistent state, you must use locks or higher-level constructs (a sketch follows after the rule of thumb below).
Gil’s rule of thumb: use volatile mainly for flags, status fields, and simple publication, not for complex shared state management.
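As noted in the list above, composite invariants need a lock. A minimal sketch, assuming an illustrative Range class with the invariant lower <= upper: making both fields volatile would still let a reader see a mismatched pair, while a single monitor keeps them consistent.
class Range {
    private int lower = 0;
    private int upper = 10;

    // Both fields change together under one lock, so the invariant
    // lower <= upper is never visible in a broken state.
    public synchronized void set(int newLower, int newUpper) {
        if (newLower > newUpper) {
            throw new IllegalArgumentException("lower must be <= upper");
        }
        lower = newLower;
        upper = newUpper;
    }

    public synchronized boolean contains(int value) {
        return lower <= value && value <= upper;
    }
}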
7. synchronized and monitors
synchronized uses a monitor lock associated with an object.
Two key effects:
- Mutual exclusion:
- Only one thread can hold the monitor at a time → critical section.
- Visibility + ordering (HB):
- A successful exit from a synchronized block (monitor exit) HB a subsequent successful entry (monitor enter) on the same monitor.
Example with safe shared state:
class Counter {
private int count = 0;
public synchronized void inc() {
count++;
}
public synchronized int get() {
return count;
}
}
Properties:
- inc and get are atomic with respect to each other.
- Writes performed in inc are visible to subsequent get calls from other threads because:
- Unlock in inc HB lock in get.
Safe publication via synchronized:
class Holder {
private Config config;
public synchronized void setConfig(Config c) {
config = c; // publish
}
public synchronized Config getConfig() {
return config; // read safely
}
}
- If Config is immutable (final fields, no this escape), this ensures safe publication and visibility of its internals too.
Why synchronized still matters in production:
- Easy, correct-by-default mechanism if scope is small.
- Enforces both atomicity and visibility, unlike volatile.
- Matches the JMM exactly; maps to appropriate memory fences and lock instructions.
8. final fields and safe publication
final has special semantics in the JMM.
8.1 Guarantees for final fields
If:
- All writes to final fields occur in the constructor.
- The this reference does not escape during construction.
- The object reference is safely published (via HB, e.g., stored in a volatile field, seen after a lock, or via static initialization).
Then:
- Other threads that see the object are guaranteed to see the correct values of its final fields.
- Reordering is restricted so that final fields appear fully initialized.
Example immutable config:
class Config {
private final int timeoutMs;
private final String baseUrl;
Config(int timeoutMs, String baseUrl) {
this.timeoutMs = timeoutMs;
this.baseUrl = baseUrl;
}
// getters
}
Safely publishing:
class ConfigHolder {
private volatile Config config; // or use synchronized
void init() {
config = new Config(1000, "https://api");
}
Config get() {
return config;
}
}
- Once config is visible to another thread, the final fields timeoutMs and baseUrl are guaranteed to be visible as set in the constructor.
8.2 Dangers: this escape
Bad pattern:
class Bad {
final int x;
Bad(ListenerRegistry registry) {
registry.register(this); // this escapes before constructor finishes
x = 42;
}
}
Another thread may call methods on Bad via the registry before x = 42 executes, seeing x as default value (0). This breaks final’s guarantees.
Production impact: bugs in frameworks where this escapes to callbacks / event listeners during object construction.
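A common fix, sketched here as one assumption about how such a registry might be used: finish construction first, then register the fully built object, e.g. through a static factory (the Safe class and create method are illustrative; ListenerRegistry is the same hypothetical type as above).
class Safe {
    final int x;

    private Safe() {
        x = 42; // all final fields assigned before the object is published
    }

    static Safe create(ListenerRegistry registry) {
        Safe safe = new Safe();   // constructor completes fully first
        registry.register(safe);  // publish afterwards; readers see x == 42 via final-field semantics
        return safe;
    }
}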
9. Reordering and compiler/CPU optimizations
The JVM and CPU freely reorder operations as long as:
- Single-threaded program behavior is preserved.
- JMM’s HB rules are not violated.
Common kinds of reordering:
- Reordering of writes: x = 1; y = 2; may commit y before x in main memory.
- Reordering of reads with writes to other variables: r1 = y; x = 1; could be executed as x = 1; r1 = y;.
Double-checked locking (classic example):
class Singleton {
private static Singleton instance;
public static Singleton getInstance() {
if (instance == null) { // (1)
synchronized(Singleton.class) { // (2)
if (instance == null) { // (3)
instance = new Singleton(); // (4)
}
}
}
return instance; // (5)
}
}
Problem (pre-JMM fix):
- new Singleton() may be reordered as:
- Allocate memory.
- Assign the reference to instance.
- Run the constructor.
- Another thread then sees a non-null instance but a partially constructed object.
Modern JMM:
- This pattern only becomes correct if instance is declared volatile.
- volatile prevents the write to instance from being reordered with the constructor's writes, so a thread that sees a non-null reference also sees a fully constructed object.
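The corrected version changes only the field declaration:
class Singleton {
    private static volatile Singleton instance; // volatile is the fix

    public static Singleton getInstance() {
        if (instance == null) {                     // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {             // second check, under the lock
                    instance = new Singleton();     // volatile write publishes safely
                }
            }
        }
        return instance;
    }
}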
General rule:
- Assume the compiler and CPU are more aggressive than you think.
- Only HB edges (volatile, synchronized, final with safe publication, join, start, etc.) pin down the ordering you can rely on.
10. Common misconceptions and bugs
Misconception 1: “If it works on my machine, it’s correct”
- Many JMM bugs are hardware-dependent and load-dependent.
- Testing on a single core or under low concurrency often hides issues.
- Production with many cores, high load, different architecture (ARM vs x86_64) exposes them.
Misconception 2: “volatile makes my code thread-safe”
- volatile ensures visibility and ordering around that variable, but:
- It still allows race conditions on non-atomic sequences.
- Typical bug:
volatile int counter = 0;
void inc() {
counter++; // still racy
}
- Under contention, increments are lost.
Misconception 3: “Data is visible if I write then another thread reads”
- Without HB, the JMM allows:
- Reads to see stale values.
- Reordered visibility.
- You need explicit synchronization mechanisms; otherwise you have a data race, and almost any combination of observed values is legal.
Misconception 4: “Immutable object is always safe”
- Immutable object is only safe if properly published.
- If a reference to a just-constructed object is written to a non-volatile field and read without HB, another thread can:
- See a partially constructed object (non-final fields).
- Possibly even default values for non-final fields.
Misconception 5: “long/double reads and writes are always atomic”
- The JLS (17.7) still allows a non-volatile long or double access to be split into two 32-bit operations; only volatile long/double accesses are guaranteed atomic, even though most 64-bit JVMs behave atomically in practice.
- And atomicity != visibility:
- Even atomic reads can be stale or appear out of order relative to other fields.
Misconception 6: “Thread-safe library code makes my usage safe”
- Example: using ConcurrentHashMap but storing values that are mutable and not safely published.
- Even if the map is thread-safe, the objects inside must themselves be thread-safe or immutable with correct publication.
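A minimal sketch of the safer usage, assuming illustrative ConfigSnapshot and ConfigStore types: keep the values immutable and replace them wholesale instead of mutating a shared instance.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

final class ConfigSnapshot {
    final int timeoutMs;   // final fields, set once in the constructor
    final String baseUrl;

    ConfigSnapshot(int timeoutMs, String baseUrl) {
        this.timeoutMs = timeoutMs;
        this.baseUrl = baseUrl;
    }
}

class ConfigStore {
    private final ConcurrentMap<String, ConfigSnapshot> configs = new ConcurrentHashMap<>();

    void update(String service, int timeoutMs, String baseUrl) {
        // Replace the whole immutable snapshot instead of mutating a shared object.
        configs.put(service, new ConfigSnapshot(timeoutMs, baseUrl));
    }

    ConfigSnapshot get(String service) {
        return configs.get(service);
    }
}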
Typical production bugs:
- Background thread updating a shared config object without synchronization; other threads read inconsistent fields.
- Stop flags without volatile, causing threads never to terminate.
- Caches written in a “lock-free” way but missing HB edges, leading to corrupt reads.
- Broken double-checked locking without volatile.
11. How this is asked in interviews
Below are 8 typical JMM interview questions, with reasoning-focused hint answers.
- Q: Why does the Java Memory Model exist?
A: To provide a consistent specification for visibility and ordering across different CPUs and compilers, so that multi-threaded Java code is portable and predictable. It constrains reordering and cache behavior via happens-before.
- Q: Explain the difference between visibility, atomicity, and ordering.
A: Visibility is about whether one thread's write is seen by another; atomicity is about operations happening as indivisible units; ordering is about the sequence in which operations from different threads are observed. JMM tools (volatile, synchronized, final) affect these in different ways.
- Q: What guarantees does volatile provide, and when is it insufficient?
A: It guarantees visibility and ordering for that variable (write HB read) but not atomicity of compound actions. It's fine for flags and simple status but not for invariants or read-modify-write without additional synchronization.
- Q: Describe the happens-before relation and give some examples.
A: HB defines when one action's effects are guaranteed to be visible to another. Examples: program order in a single thread; unlock HB subsequent lock on the same monitor; volatile write HB subsequent read; thread start/join rules.
- Q: Why is double-checked locking broken without volatile?
A: Because without volatile, the writes that construct the object and the write to the reference can be reordered. Another thread can see a non-null reference but a partially constructed object. volatile prohibits such reorderings and ensures visibility.
- Q: How does synchronized affect the JMM?
A: It provides mutual exclusion and introduces HB edges: exit from a synchronized block on a monitor HB subsequent entry on that monitor. This guarantees visibility of changes made inside the lock to threads acquiring the same lock later.
- Q: What special guarantees does the JMM provide for final fields?
A: If final fields are set in the constructor and the object is safely published (no this escape during construction), other threads seeing the object are guaranteed to see correctly initialized final fields, even without extra synchronization.
- Q: Give a real-world scenario where a missing volatile or synchronization causes a production bug.
A: A service uses a boolean running flag to stop a worker thread. Without volatile or synchronization, the worker may loop forever because it never sees the flag update, leading to stuck threads and graceful shutdown failures.
12. Rapid revision summary (bullet points)
- JMM defines how threads interact via memory: what values they can see and in what order.
- Modern CPUs + caches + compiler optimizations cause reordering and stale reads; JMM tames this via happens-before.
- Conceptually, each thread has working memory (caches, registers) plus shared main memory; unsynchronized access can see stale/inconsistent data.
- Happens-before: if A HB B, then B sees A’s effects and A is ordered before B; key rules: program order, lock/unlock, volatile write/read, start/join, transitivity.
- Visibility: other threads seeing up-to-date values; atomicity: indivisible operations; ordering: sequence in which operations appear to occur.
- volatile: guarantees visibility + ordering for that variable; does NOT provide mutual exclusion or atomicity of compound actions.
- synchronized: mutual exclusion + visibility via monitor lock HB rules; exit HB subsequent entry on the same monitor.
- final fields: with constructor-only writes and safe publication, other threads see fully initialized final fields.
- Reordering: compiler/CPU may reorder operations unless constrained by HB; the classic bug is broken double-checked locking without volatile.
- Common bugs: non-volatile flags, “works on my machine” races, unsafe publication of supposedly immutable objects, assuming visibility without explicit HB edges.
- Use volatile for flags and simple state; use locks or high-level concurrency utilities for composite invariants and multi-field consistency.
- In interviews, emphasize why the primitives (volatile, synchronized, final) exist and how they map to visibility, ordering, and atomicity guarantees.