This article focuses on three core mechanisms in backend concurrency control: optimistic locking, pessimistic locking, and distributed locks. They solve different problems: version conflicts, strongly exclusive updates, and cross-node execution ownership. Use this guide to quickly choose the right approach for scenarios such as inventory deduction, balance debits, and scheduled job deduplication. Keywords: concurrency control, distributed locks, optimistic locking.
Technical Specification Snapshot
| Parameter | Details |
|---|---|
| Primary Technologies | Java / SQL / Redis |
| Protocols Involved | SQL transactions, Redis SET NX PX |
| Content Type | Technical blog article |
| Core Dependencies | MySQL/InnoDB, Redis, transaction mechanisms, version field |
These Three Locking Mechanisms Do Not Solve the Same Problem
Many engineers group optimistic locking, pessimistic locking, and distributed locks into the same mental model. That is the most common misunderstanding. The first two primarily address consistency when the same piece of data is modified concurrently, while distributed locks primarily address multiple instances competing for the right to execute a section of logic.
If you do not classify the problem first, you will likely misuse the solution in production. For example, using a distributed lock to replace database consistency control, or using optimistic locking to prevent duplicate execution of scheduled jobs across multiple instances, both leave critical gaps at the boundary.
A One-Sentence Rule of Thumb Helps You Decide First
If you need to protect whether a data value is being corrupted by concurrent updates, prioritize optimistic locking or pessimistic locking. If you need to protect whether a piece of logic can be executed by only one node at a time, prioritize a distributed lock.
```sql
-- Optimistic locking: verify that the version is still unchanged when updating
UPDATE product
SET stock = stock - 1,
    version = version + 1 -- Increment the version to indicate that this transaction modified the data
WHERE id = 1
  AND version = 3; -- Only allow the update if the previous version still matches
```
This SQL statement rejects overwrite updates that are based on a stale snapshot.
Optimistic Locking Fits Read-Heavy Workloads with Low Write Conflicts
Optimistic locking does not actually lock the resource first. Instead, it assumes that conflicts are rare and validates at commit time whether the data still matches the version originally read. In essence, it is a commit-time validation strategy.
Typical scenarios include product information updates, standard inventory deduction, and configuration updates. These workloads do have concurrency, but conflicts are not continuously high-frequency. The system values throughput more and does not want a large number of requests blocked while waiting for locks.
Version Numbers Are the Most Common Implementation
A common approach is to add a version field to the table. When reading the data, retrieve the version together with the business fields. During the update, include the old version in the WHERE clause. If the update succeeds, no one changed the data in the meantime. If it fails, the data is stale and the application must retry or ask the user to refresh.
```sql
-- Read both the business fields and the version during the query phase
SELECT id, stock, version
FROM product
WHERE id = 1;

-- Include the old version during the commit phase to avoid overwrite updates
UPDATE product
SET stock = stock - 1,
    version = version + 1 -- Advance the version after a successful update
WHERE id = 1
  AND version = 5; -- Core validation: only accept writes based on the latest version
```
This pair of SQL statements shows the full optimistic locking cycle of “read first, validate later.”
Optimistic Locking Improves High-Concurrency Throughput but Requires Retry Logic
Because it does not hold database lock resources for long periods, it usually delivers better throughput. However, if conflicts become frequent, the failure rate rises noticeably. The application layer must be designed with retries, user feedback, rollback, or compensation logic. Otherwise, the user experience degrades quickly.
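The retry logic can be sketched as a small read-validate-retry loop. The example below is a minimal in-memory illustration, not a real data-access layer: `ProductRow` and its methods are hypothetical stand-ins, and the synchronized `tryDeduct` plays the role of the conditional `UPDATE ... WHERE version = ?` returning a row count of zero or one.

```java
/** Hypothetical in-memory stand-in for a product row with a version column. */
class ProductRow {
    private int stock;
    private int version;

    ProductRow(int stock) { this.stock = stock; }

    /** Read phase: return {stock, version} as one consistent snapshot. */
    synchronized int[] read() { return new int[] { stock, version }; }

    /** Commit phase: mimics UPDATE ... SET stock = stock - 1, version = version + 1
     *  WHERE version = ?; returns true only if the expected version still matched. */
    synchronized boolean tryDeduct(int expectedVersion) {
        if (version != expectedVersion || stock <= 0) return false;
        stock -= 1;
        version += 1;
        return true;
    }
}

class OptimisticRetry {
    /** Read-validate-retry loop; surfaces the conflict after maxAttempts failures. */
    static boolean deductWithRetry(ProductRow row, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            int[] snapshot = row.read();
            if (snapshot[0] <= 0) return false;          // out of stock: retrying cannot help
            if (row.tryDeduct(snapshot[1])) return true; // version still matched: success
            // version moved under us: loop and re-read (a real system would back off here)
        }
        return false; // let the caller decide: retry later, or ask the user to refresh
    }
}
```

In a real system the failure branch corresponds to "zero rows updated" from the database, and the back-off and user-feedback policy is a product decision, not a library default.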
Pessimistic Locking Fits High-Conflict, Strong-Consistency Workloads
Pessimistic locking assumes that conflicts will happen, so it locks the resource before performing the modification and forces competing requests to wait. Its focus is not “recover after failure,” but “exclude others before the operation begins.”
Account debits, balance settlement, core inventory reservation, and critical order state transitions are often better suited to pessimistic locking. These workloads cannot tolerate the window between validating a value and actually writing it. The system prefers to sacrifice some concurrency in exchange for more predictable consistency.
Database Row Locks Are the Most Common Pessimistic Locking Implementation
Inside a transaction, execute `SELECT ... FOR UPDATE`. The database applies an exclusive lock to the target row. As long as the transaction has not committed, other transactions cannot perform equivalent modifications or locking operations on the same row.
```sql
BEGIN;

SELECT balance
FROM account
WHERE id = 1
FOR UPDATE; -- Apply a row lock to the account record so other transactions must wait

UPDATE account
SET balance = balance - 80 -- Perform the deduction while holding the lock in the transaction
WHERE id = 1
  AND balance >= 80; -- Validate the balance at the same time

COMMIT;
```
This transaction serializes balance validation and deduction to prevent overdrawing.
Pessimistic Locking Makes Consistency Explicit but Introduces Blocking and Deadlock Risk
It is highly effective on critical paths with strict consistency requirements, but throughput drops. If transactions run too long or lock rows in inconsistent order, the system may encounter lock waits, timeouts, or even deadlocks. You must keep transactions short and maintain a fixed locking order.
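The "fixed locking order" advice can be illustrated in-process with `ReentrantLock` (the `Account` class and `transfer` method here are hypothetical; with database row locks the same principle means always locking rows in a consistent key order, for example ascending id):

```java
import java.util.concurrent.locks.ReentrantLock;

/** Hypothetical in-memory account; the lock stands in for a database row lock. */
class Account {
    final long id;
    long balance;
    final ReentrantLock lock = new ReentrantLock();

    Account(long id, long balance) { this.id = id; this.balance = balance; }
}

class Transfer {
    /** Always acquire the lower-id lock first, so two opposite transfers
     *  (A->B and B->A) can never deadlock by locking in opposite orders. */
    static boolean transfer(Account from, Account to, long amount) {
        Account first  = from.id < to.id ? from : to;
        Account second = (first == from) ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                if (from.balance < amount) return false; // validate while holding both locks
                from.balance -= amount;
                to.balance += amount;
                return true;
            } finally { second.lock.unlock(); }
        } finally { first.lock.unlock(); }
    }
}
```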
Distributed Locks Control Execution Ownership Across Nodes
The biggest difference between distributed locks and the previous two mechanisms is this: a distributed lock does not necessarily lock a data record. Instead, it determines which service instance has the right to execute a piece of logic.
Typical scenarios include single-instance execution of scheduled jobs, globally unique tasks, duplicate consumption prevention for the same order, and batch job coordination. A local lock such as `synchronized` works only within one process. In a multi-instance deployment, it is no longer sufficient.
Redis Is the Most Common Implementation in Real Systems
The most basic pattern is SET key value NX PX ttl. Here, NX ensures the key is created only if it does not already exist, PX sets an expiration time so the lock does not remain forever after a crash, and value identifies the lock owner to prevent one instance from deleting another instance’s lock.
```
SET cancel_timeout_order_task_lock unique_value NX PX 30000
```
This command ensures that only one instance among multiple service instances can acquire a task lock with a 30-second TTL.
```lua
-- Verify the owner before releasing the lock to avoid deleting another instance's lock by mistake
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1]) -- Delete the key only if the lock value matches
else
    return 0 -- A mismatch means the lock no longer belongs to the current instance
end
```
This Lua script safely releases a Redis-based distributed lock.
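Both Redis operations have close in-process analogues, which makes their semantics easy to demonstrate. The sketch below uses `ConcurrentHashMap` (class and method names are hypothetical): `putIfAbsent` mirrors SET NX, and the two-argument `remove(key, value)` mirrors the Lua script's atomic compare-and-delete. TTL and renewal are deliberately omitted; in Redis, PX covers expiry.

```java
import java.util.concurrent.ConcurrentHashMap;

/** In-memory sketch of the lock pattern; not a substitute for Redis across nodes. */
class InMemoryLock {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    /** Analogue of SET key token NX: only the first caller creates the entry. */
    boolean tryAcquire(String key, String ownerToken) {
        return store.putIfAbsent(key, ownerToken) == null;
    }

    /** Analogue of the Lua script: delete only if the stored token still matches. */
    boolean release(String key, String ownerToken) {
        return store.remove(key, ownerToken); // atomic compare-and-delete
    }
}
```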
Figure: two service instances compete for the same task lock at the same time. Only one instance acquires execution ownership and enters the business flow, while the other exits or retries. This highlights that a distributed lock controls cross-node task mutual exclusion, not row-level database updates.
The Hard Part of Distributed Locks Is in the Edge Cases, Not the Command
The real complexity is not acquiring the lock itself, but handling renewal, expiration, release, primary-replica failover, network jitter, and zombie instances. Distributed locks should usually be paired with idempotency design: the lock reduces duplicate execution, while idempotency guarantees correct outcomes.
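The lock-plus-idempotency pairing can be sketched as follows. This is a minimal illustration: `IdempotentHandler` and its key set are hypothetical, and a production version would persist processed keys (e.g. via a unique database constraint) rather than hold them in memory.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Result-layer protection: even if the lock fails to prevent a duplicate run
 *  (expiry, failover, message redelivery), the second run becomes a no-op. */
class IdempotentHandler {
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    /** Returns true only for the first call with a given business key. */
    boolean handleOnce(String orderId) {
        if (!processed.add(orderId)) return false; // already handled: skip side effects
        // ... perform the real side effect exactly once here ...
        return true;
    }
}
```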
Locking Strategy Should Start with the Conflict Target and Then the Consistency Level
If the conflict target is the same row of data, first choose between optimistic locking and pessimistic locking. If the conflict target is execution ownership for the same business logic, you are in distributed lock territory. This is the most effective decision path.
One Table Helps You Decide Quickly
| Type | Problem Solved | Control Target | Common Implementations | Best-Fit Scenarios | Main Cost |
|---|---|---|---|---|---|
| Optimistic Locking | Prevent stale data from overwriting newer data | Data version consistency | version field, CAS, timestamp | Read-heavy workloads, low conflict | Retry overhead on failure |
| Pessimistic Locking | Guarantee serialized modification of critical data | Data access ownership | Row locks, SELECT ... FOR UPDATE | High conflict, strong consistency | Blocking, deadlock risk |
| Distributed Lock | Guarantee single execution across nodes | Business execution ownership | Redis, ZooKeeper | Scheduled jobs, global mutual exclusion | Complex edge cases, must prevent accidental unlock |
Recommended Answers for Three Common Scenarios
For standard e-commerce inventory deduction, prioritize optimistic locking. For balance debits and financial bookkeeping, prioritize pessimistic locking. For scheduled jobs, duplicate consumption prevention, and globally unique jobs, prioritize distributed locks.
FAQ
Q1: Can a distributed lock replace optimistic locking or pessimistic locking?
No. A distributed lock mainly decides who gets to execute. It does not inherently guarantee correct concurrent updates inside the database. If core data consistency is involved, you still need optimistic locking, pessimistic locking, or transactional constraints.
Q2: Should inventory deduction use optimistic locking or pessimistic locking?
If the workload involves standard products, is read-heavy, and can tolerate retry after failure, optimistic locking is the better default. If the workload is a flash sale with high conflict and high oversell risk, you usually need a combination of pessimistic control, inventory pre-deduction, queue-based traffic shaping, or similar patterns.
Q3: Do I still need idempotency after adding a distributed lock?
Yes. A distributed lock only reduces the probability of duplicate execution. It does not cover network timeouts, lock expiration, message redelivery, or similar failure modes. Idempotency protects the result layer, while distributed locks control the process layer. You should use both.
Summary
This article systematically breaks down the applicability boundaries, implementation patterns, and decision criteria for optimistic locking, pessimistic locking, and distributed locks. Using common scenarios such as inventory deduction, account debits, and scheduled jobs, it clarifies how these three mechanisms address two fundamentally different problems: data contention and execution ownership contention.