Amortized analysis shows why data structures with occasionally expensive operations can still remain efficient over time. Using dynamic array resizing as the running example, this article explains aggregate analysis, the accounting method, and the potential method, then extends the discussion to monotonic queues, two-stack queues, hash table resizing, and more. Keywords: amortized analysis, potential method, dynamic arrays.
Technical Specification Snapshot
| Parameter | Details |
|---|---|
| Domain | Data Structures and Algorithm Complexity Analysis |
| Core Methods | Aggregate Analysis, Accounting Method, Potential Method |
| Typical Structures | Dynamic Arrays, Stacks, Queues, Hash Tables, Union-Find |
| Mathematical Foundation | Geometric Series, Potential Functions, Total Cost Upper Bounds |
| Applicable Languages | C++, Java, Python, Go |
| Complexity Keywords | Amortized O(1), Amortized O(log n) |
| Source | Reconstructed from public blog content |
| Core Dependencies | No specific library dependency; relies on foundational algorithm theory |
Amortized Analysis Focuses on the Total Cost of an Entire Operation Sequence
Amortized analysis does not depend on the input distribution, which is what distinguishes it from average-case analysis. Average-case analysis usually relies on probabilistic assumptions, while amortized analysis asks only one question: can the total cost of any valid operation sequence be bounded above by a slowly growing function?
For an operation sequence σ = (o1, o2, …, om) with per-operation costs ci, the total cost is C(σ) = c1 + c2 + … + cm. If you can prove that C(σ) ≤ m·f(m) for every valid sequence, then the amortized cost per operation is O(f(m)). This means a single step may be expensive, but the long-run total is still cheap.
Dynamic Arrays Are the Best Entry Point for Understanding Amortized Analysis
A dynamic array maintains two quantities: size, the current number of elements, and capacity, the number of slots currently allocated. When size < capacity, appending at the end costs only constant time. When size = capacity, the array must allocate a larger block of memory and copy all the old elements over, so a single operation can cost O(n).
class DynamicArray:
def __init__(self):
self.size = 0
self.capacity = 1
self.data = [None] * self.capacity
def push_back(self, x):
if self.size == self.capacity:
# Trigger resizing when capacity is full; double the capacity
new_capacity = self.capacity * 2
new_data = [None] * new_capacity
for i in range(self.size):
# Copy old elements; this is the source of the expensive operation
new_data[i] = self.data[i]
self.data = new_data
self.capacity = new_capacity
# Write the new element
self.data[self.size] = x
self.size += 1
This code shows why appending to a dynamic array has the complexity pattern of “linear in a single worst case, constant in amortized cost overall.”
Aggregate Analysis Directly Proves That the Total Cost of the First n Insertions Is Linear
If the capacity grows by doubling, such as 1, 2, 4, 8, …, then in the first n insertions, the cost of ordinary writes is n. The total copy cost during resizing is 1 + 2 + 4 + … + 2^(k-1), where 2^(k-1) < n ≤ 2^k.
By the geometric series formula, 1 + 2 + 4 + … + 2^(k-1) = 2^k − 1 < 2n, so the total cost satisfies C(n) ≤ n + 2n = 3n. Therefore, the total cost of the first n push_back operations is O(n), and the amortized cost per operation is O(1).
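To see the bound concretely, here is a minimal sketch that counts elementary writes for the first n insertions and checks them against 3n; the counting mirrors the push_back implementation above:
def total_push_cost(n):
    # Count elementary writes: one per append, plus one per copied element on resize
    size, capacity, cost = 0, 1, 0
    for _ in range(n):
        if size == capacity:
            cost += size  # copy every existing element into the new block
            capacity *= 2
        cost += 1  # write the new element
        size += 1
    return cost

for n in (10, 1000, 1000000):
    assert total_push_cost(n) <= 3 * n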
The Accounting Method Charges Future Expensive Operations in Advance
The accounting method works by charging a uniform fee to each operation. For example, charge 3 for every push_back. If the current operation does not trigger resizing, its real cost is only 1, so the extra 2 is stored as credit to pay for future copy costs during resizing.
The key idea is not that a resize is cheap, but that resizes happen sparsely enough. Between two consecutive resizes, enough ordinary insertions must occur, and the credits collected from those insertions are sufficient to cover the next copy operation.
# Accounting method check: charge a flat 3 units for every push_back on a doubling array
credit = 0
size, capacity = 0, 1
for _ in range(1000):
    amortized_charge = 3  # charge the same amount for each operation
    real_cost = 1 if size < capacity else size + 1  # a resize copies size elements, then writes one
    if size == capacity:
        capacity *= 2
    size += 1
    credit += amortized_charge - real_cost  # store the surplus as credit
    assert credit >= 0  # credit must never become negative
This check shows that as long as the credit never becomes negative, the flat charge of 3 per operation is a valid amortized cost.
The Potential Method Turns Credit Into a Function of Data Structure State
The potential method is an abstract form of the accounting method. It defines a potential function Φ(Di) for each state Di and defines the amortized cost of the i-th operation as ci + Φ(Di) − Φ(Di−1): the real cost plus the change in potential.
For dynamic arrays, a classic potential function is Φ = 2 × size − capacity. It captures the intuition that the closer the array is to being full, the closer it is to the next resize, so the more potential it should accumulate. Once the array is at least half full, which a doubling policy maintains after the first resize, Φ is non-negative.
If an insertion does not trigger resizing, the real cost is 1, size increases by 1, and capacity stays the same, so the potential increases by 2 and the amortized cost is 1 + 2 = 3. If an insertion triggers resizing at size = capacity = m, the real cost is m + 1 (copy m elements, then write one), while the potential drops from 2m − m = m to 2(m + 1) − 2m = 2, so the amortized cost is (m + 1) + (2 − m) = 3, still a constant.

def amortized_cost(real_cost, phi_old, phi_new):
# Potential method definition: amortized cost = real cost + change in potential
return real_cost + (phi_new - phi_old)
This formula-style code summarizes the unified expression of the potential method.
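Plugging the two insertion cases into this function confirms the constant; the size m = 8 below is just an illustrative value:
print(amortized_cost(1, phi_old=2 * 4 - 8, phi_new=2 * 5 - 8))  # no resize: 1 + 2 = 3
print(amortized_cost(8 + 1, phi_old=8, phi_new=2))  # resize at m = 8: 9 + (2 - 8) = 3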
Many Classic Data Structures Rely on the Same Amortized Reasoning
The Total Cost of Stack MULTIPOP Operations Is Still Linear
A single MULTIPOP(k) may pop many elements, but each element can be pushed at most once and popped at most once. Therefore, in any operation sequence, the total number of pops never exceeds the total number of pushes. The total cost is naturally O(m), so the amortized cost per operation is O(1).
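A minimal sketch of MULTIPOP makes the argument concrete; the while loop can run long, but only against elements that some earlier push already paid for:
def multipop(stack, k):
    # Pop up to k elements; across any sequence, total pops never exceed total pushes
    popped = []
    while stack and k > 0:
        popped.append(stack.pop())
        k -= 1
    return popped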
Binary Counter Increment Has Amortized O(1) Cost
The cost of one INCREMENT is defined as the number of bit flips. Although an operation such as 0111 -> 1000 flips 4 bits, bit i flips only once every 2^i increments. Summing the number of flips over all bits still gives a linear total cost smaller than 2n.
def increment(bits):
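    # bits[0] is the least significant bit; a run of trailing 1s becomes a carry chain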
i = 0
while i < len(bits) and bits[i] == 1:
bits[i] = 0 # Carry propagation: flip 1 to 0
i += 1
if i < len(bits):
bits[i] = 1 # Find the first 0 and set it to 1
This code shows why counter increment exhibits the pattern of “occasional long carry chains, but linear total cost overall.”
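A quick empirical check of the "fewer than 2n total flips" claim, counting flips by comparing the bit array before and after each call:
bits = [0] * 20
n, total_flips = 1000, 0
for _ in range(n):
    before = list(bits)
    increment(bits)
    total_flips += sum(old != new for old, new in zip(before, bits))
assert total_flips < 2 * n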
The while Loop in a Monotonic Queue Does Not Cause Quadratic Complexity
In the sliding window maximum problem, the back of the queue may be popped multiple times during a single insertion. However, each element is enqueued once and dequeued once, so the total number of iterations across all while loops is still O(n).
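A sketch of one common formulation of the sliding window maximum shows the pattern; window holds indices whose values stay in decreasing order, and every index enters and leaves it at most once:
from collections import deque

def sliding_window_max(nums, k):
    window, result = deque(), []  # window stores indices; their values stay decreasing
    for i, x in enumerate(nums):
        while window and nums[window[-1]] <= x:
            window.pop()  # each index is popped at most once across the whole run
        window.append(i)
        if window[0] <= i - k:
            window.popleft()  # discard the index that slid out of the window
        if i >= k - 1:
            result.append(nums[window[0]])
    return result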
The Transfer Cost in a Queue Built from Two Stacks Can Be Paid in Advance
In a two-stack queue, one dequeue() may trigger a full transfer from one stack to the other, which looks expensive. But from the perspective of an element’s lifecycle, each element undergoes at most four primitive actions: push to S_in, pop from S_in, push to S_out, and pop from S_out. Therefore, charging each element a constant fee in advance is enough to cover its full cost.
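A minimal two-stack queue sketch; counting the four primitive actions per element is exactly the prepayment argument above:
class TwoStackQueue:
    def __init__(self):
        self.s_in, self.s_out = [], []

    def enqueue(self, x):
        self.s_in.append(x)  # action 1: push to s_in

    def dequeue(self):
        # Assumes the queue is non-empty
        if not self.s_out:
            while self.s_in:
                self.s_out.append(self.s_in.pop())  # actions 2 and 3: each element moves once
        return self.s_out.pop()  # action 4: pop from s_out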
Resize and Shrink Policies in Production Must Avoid Thrashing
If a dynamic array expands as soon as it becomes full and shrinks as soon as it has one free slot, then alternating push_back and pop_back near the capacity boundary will trigger frequent expansion and contraction, causing performance thrashing.
The correct approach is to separate the thresholds. For example, expand to 2× capacity only when full, but shrink to capacity / 2 only when size <= capacity / 4. This hysteresis interval effectively eliminates boundary oscillation and preserves amortized O(1) complexity for both push_back and pop_back.
def maybe_shrink(size, capacity):
if size <= capacity // 4 and capacity > 1:
# Use a lower shrink threshold than the expansion threshold to avoid frequent thrashing
return capacity // 2
return capacity
This code shows why separating expansion and shrink thresholds is critical for stable complexity in production systems.
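A quick illustration of the hysteresis at work: alternate one push_back and one pop_back at a former capacity boundary and count resizes (a sketch that tracks only size and capacity):
size, capacity, resizes = 8, 8, 0
for _ in range(100):
    if size == capacity:  # push_back path: grow only when completely full
        capacity *= 2
        resizes += 1
    size += 1
    size -= 1  # pop_back path: shrink only via the hysteresis rule
    new_capacity = maybe_shrink(size, capacity)
    resizes += new_capacity != capacity
    capacity = new_capacity
print(resizes)  # 1: a single grow, after which the hysteresis band absorbs the oscillation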
More Advanced Data Structures Use Amortized Analysis as a Complexity Foundation
Path compression and union by rank in Union-Find, adaptive rotations in splay trees, and deferred maintenance in Fibonacci heaps all rely on the same core idea: an expensive operation is not wasteful, but instead prepays optimization cost for many future operations.
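As one example, here is a minimal sketch of find with path compression (union by rank omitted for brevity): the expensive walk up a long chain flattens that chain, prepaying for every later query on the same nodes.
def find(parent, x):
    # First pass: locate the root of x's tree
    root = x
    while parent[root] != root:
        root = parent[root]
    # Second pass: point every node on the path directly at the root
    while parent[x] != root:
        parent[x], x = root, parent[x]
    return root
Calling find repeatedly on a deep chain shows the effect: the first call pays O(depth), and every later call on those nodes pays O(1).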
For that reason, amortized analysis is not just a mathematical trick. It is a foundational lens for understanding the real performance of advanced data structures. It answers not “how expensive can one worst-case step be,” but “is the structure efficient over sustained use?”
FAQ
FAQ 1: What is the essential difference between amortized analysis and average-case analysis?
Average-case analysis depends on a probability distribution and expected-value calculations. Amortized analysis does not rely on randomness. Instead, it guarantees that the total cost of any valid operation sequence remains bounded, so its conclusion is more robust.
FAQ 2: Why is dynamic array append worst-case O(n), but still described as O(1)?
Because O(1) here refers to amortized complexity, not worst-case complexity for a single operation. A small number of resize operations are expensive, but they occur infrequently, so the long-run total cost still grows linearly with the number of operations.
FAQ 3: If I see a while loop inside another loop, how can I tell whether amortized analysis applies?
The key question is whether the loop consumes non-recoverable work units. If each element, once processed, does not return to its previous state—for example, dequeued elements in a monotonic queue do not re-enter the queue—then the total amount of work can often be bounded linearly by amortized analysis.
Core Summary: This article systematically reconstructs the core ideas of amortized analysis. Using dynamic array resizing as the central example, it explains aggregate analysis, the accounting method, and the potential method, then extends them to classic scenarios such as stack MULTIPOP operations, binary counters, monotonic queues, two-stack queues, and hash table resizing. The goal is to help developers accurately understand why some operations are expensive individually yet efficient overall.