Technical Snapshot
| Parameter | Description |
|---|---|
| Problem | P14637 / NOIP2025 Tree Value |
| Primary Language | C++ |
| Core Algorithms | Tree DP, DFS Order, Fenwick Tree |
| Time Complexity | O(nm log n) |
| Space Complexity | O(nm) |
| Data Structure Dependencies | vector, Fenwick Tree, 2D DP |
| Original Source | Adapted from a Blog Park solution article |
This problem can be abstracted into one main chain plus several side-chain contributions
The key observation in this problem is that every optimal solution on the tree must contain one main chain that merges upward from below. Nodes on this chain keep extending value upward, while nodes outside the main chain only contribute to their own local chains.
The hard part is not proving that a main chain exists. The hard part is that membership in the main chain is not fixed. Whether a node belongs to the main chain changes the profit distribution of the entire subtree, so you cannot decide it greedily from top to bottom.
The state definition must encode both contribution length and main-chain membership
Let dp[i][j][0/1] denote the maximum value in the subtree rooted at i when node i contributes a length of j. Here, 0 means i is not on the main chain, and 1 means i is on the main chain.
This state design serves two purposes. First, it explicitly encodes contribution as a segment length j. Second, it separates “whether the node belongs to the main chain” into a binary state, which makes transitions easier to enumerate.
// dp[u][j][0]: the best value when u is not on the main chain
// and contributes a length of j
// dp[u][j][1]: the best value when u is on the main chain
// and contributes a length of j
for (int j = 1; j <= deep[u]; j++) {
dp[u][j][0] = j; // Start with u contributing j by itself
dp[u][j][1] = j; // If u is on the main chain, the base contribution still starts from j
}
This initialization means: first consider only the current node itself, then gradually merge subtree information.
The non-main-chain state can be merged first, then one child can be upgraded into the main-chain branch
For dp[u][j][0], you can first assume that none of the children belongs to the main chain. In that case, you simply add the same-level contribution from every child.
for (auto son : v[u]) {
dp[u][j][0] += dp[son][j][0]; // First merge all children as non-main-chain branches
}
This step establishes the default baseline. After that, you can select one child and turn it into the continuation of the main chain.
The essence of the main-chain transition is replacing one child’s contribution model
Because at most one child of a node can continue the main chain, the update for dp[u][j][1] works like this: start from the baseline where no child is on the main chain, then choose one child son and replace dp[son][j][0] with dp[son][j + 1][1].
for (auto son : v[u]) {
dp[u][j][1] = max(
dp[u][j][1],
dp[son][j + 1][1] + dp[u][j][0] - dp[son][j][0] // Replace a regular child with the main-chain child
);
}
This formula is critical. It avoids recombining the entire subtree from scratch and performs only one incremental replacement, which makes the transition both clean and optimizable.
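To make the equivalence concrete, here is a minimal standalone sketch (hypothetical names base0 and main1 stand in for dp[son][j][0] and dp[son][j + 1][1]; u's own term j is dropped since it is common to both sides). It checks that the O(k) replacement form matches the naive O(k²) recomputation over k children:
#include <bits/stdc++.h>
using namespace std;
// base0[c] stands for dp[c][j][0], main1[c] for dp[c][j + 1][1], over the k children of u.
// Naive: for every candidate main-chain child, rebuild the whole sum from scratch — O(k^2).
long long bestNaive(const vector<long long>& base0, const vector<long long>& main1) {
    long long best = LLONG_MIN;
    for (size_t c = 0; c < base0.size(); c++) {
        long long sum = main1[c];
        for (size_t o = 0; o < base0.size(); o++)
            if (o != c) sum += base0[o];
        best = max(best, sum);
    }
    return best;
}
// Incremental: compute the all-non-main baseline once, then replace one term — O(k).
long long bestIncremental(const vector<long long>& base0, const vector<long long>& main1) {
    long long baseline = accumulate(base0.begin(), base0.end(), 0LL);
    long long best = LLONG_MIN;
    for (size_t c = 0; c < base0.size(); c++)
        best = max(best, baseline - base0[c] + main1[c]);
    return best;
}
int main() {
    vector<long long> base0 = {4, 7, 2}, main1 = {9, 8, 6};
    assert(bestNaive(base0, main1) == bestIncremental(base0, main1)); // both give 18
    return 0;
}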
DFS order compresses each subtree into a contiguous interval and enables range updates by depth
The truly complex part of the original problem is that some contributions do not depend only on direct children. You also need to count how nodes on a deeper level can contribute before they merge into the main chain.
This is where DFS order helps. Any subtree corresponds to a contiguous interval in DFS order. Therefore, if one child subtree contributes the same increment to a certain depth layer, you can convert it into a range update plus point query.
int lowbit(int x) { return x & -x; }
void change(int x, int delta, int dep) {
for (; x <= n; x += lowbit(x)) {
tree[x][dep] += delta; // Maintain the Fenwick Tree by depth dep
}
}
int getsum(int x, int dep) {
int res = 0;
for (; x; x -= lowbit(x)) {
res += tree[x][dep]; // Query the accumulated contribution at this position and depth
}
return res;
}
This Fenwick Tree does not maintain node values in the usual sense. Instead, it maintains contribution increments indexed by DFS-order position and depth.
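As a quick sanity check of this range-add / point-query pattern, here is a minimal usage sketch (assuming n = 10 and a freshly zeroed tree, reusing the change and getsum functions above):
change(4, +5, 3);            // add +5 at depth layer 3 over DFS positions [4, 7] ...
change(8, -5, 3);            // ... by opening at 4 and cancelling just past 7
assert(getsum(5, 3) == 5);   // a point query inside [4, 7] sees the increment
assert(getsum(9, 3) == 0);   // outside the interval the two updates cancel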
Interval contribution updates solve the chain-contribution counting problem from the ancestor’s perspective
When processing node u, the quantity dp[u][j][0] - dp[son][j][0], that is, u's own contribution plus the contributions of all its other children, is a fixed background value seen identically at every DFS-order position inside the subtree of son. So you can record it as a range update:
change(in[son], dp[u][j][0] - dp[son][j][0], j); // Add the increment at the left boundary of the subtree
change(ou[son] + 1, dp[son][j][0] - dp[u][j][0], j); // Cancel it after the right boundary of the subtree
This means that inside the DFS interval covered by son’s subtree, you uniformly record the background contribution inherited from ancestor u.
Scanning nodes inside the subtree recovers the best non-main-chain state for a given depth difference
for (int p = in[u] + 1; p <= ou[u]; p++) {
int x = wrd[p];
int d = deep[x] - deep[u];
dp[u][d][0] = max(
dp[u][d][0],
getsum(p, d) + dp[x][d + 1][1] // Background contribution + gain from letting x merge into the main chain upward
);
}
This step effectively enumerates the scenario where some node x in the subtree eventually merges into the main chain: getsum(p, d) collects the background contributions that the ancestors between u and x recorded at layer d, and dp[x][d + 1][1] is the best value of x's subtree when x lies on the main chain with contribution length d + 1.
The full implementation must handle cleanup for multiple test cases carefully
The original problem has multiple test cases, so you must fully clear the adjacency list, Fenwick Tree, DP array, and DFS counter. Otherwise, leftover state from the previous case will corrupt the answer.
for (int i = 1; i <= n; i++) {
v[i].clear(); // Clear the adjacency list
}
memset(tree, 0, sizeof(tree)); // Clear the Fenwick Tree
memset(dp, 0, sizeof(dp)); // Clear the DP table
cnt = 0; // Reset the DFS-order counter
This is not a minor implementation detail. It is a required condition for getting accepted.
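If the per-case memset over the full 8005 × 805 arrays ever becomes a bottleneck, one common alternative is to clear only the entries the previous case could have touched. The following is a sketch of that idea, not part of the original solution; it assumes deep[] still holds the previous tree's depths:
int maxdep = 0;
for (int i = 1; i <= n; i++) maxdep = max(maxdep, deep[i]);
for (int i = 1; i <= n; i++) {
    v[i].clear();                            // Clear the adjacency list as before
    for (int j = 0; j <= maxdep + 1; j++) {  // +1 because transitions read index j + 1
        tree[i][j] = 0;                      // Reset only the depth layers actually used
        dp[i][j][0] = dp[i][j][1] = 0;
    }
}
cnt = 0;                                     // Reset the DFS-order counter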
The complexity of this solution comes from state count multiplied by Fenwick Tree maintenance
The total number of states is roughly n * m. Each key update uses the Fenwick Tree to maintain interval increments, so the overall complexity is O(nm log n). This is efficient enough within the problem constraints.
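Written out as a product of factors:

$$\underbrace{n \cdot m}_{\text{DP states } dp[i][j][0/1]} \times \underbrace{O(\log n)}_{\text{one Fenwick update or query per state}} = O(nm \log n)$$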
Compared with brute-force enumeration of the main-chain path, this approach has a clear advantage: first use DP to remove structural uncertainty, then use DFS order to convert the tree problem into interval operations on a sequence, and finally reduce the complicated transition into a standard data-structure workflow.
Reference Code (Cleaned-Up Version)
#include <bits/stdc++.h>
using namespace std;
int t, n, m;
int f[8005], deep[8005], in[8005], ou[8005], cnt, wrd[8005];
int dp[8005][805][2], tree[8005][805];
vector<int> v[8005];
int lowbit(int x) {
return x & (-x);
}
void change(int x, int y, int id) {
for (; x <= n; x += lowbit(x)) {
tree[x][id] += y; // Point update at position x for depth layer id
}
}
int getsum(int x, int id) {
int res = 0;
for (; x; x -= lowbit(x)) {
res += tree[x][id]; // Query the accumulated contribution at a position
}
return res;
}
void dfs(int u) {
deep[u] = deep[f[u]] + 1;
in[u] = ++cnt;
wrd[cnt] = u;
for (auto son : v[u]) dfs(son);
ou[u] = cnt;
for (int j = 1; j <= deep[u]; j++) {
dp[u][j][0] = dp[u][j][1] = j; // Initialize u's own contribution
for (auto son : v[u]) {
dp[u][j][0] += dp[son][j][0]; // First merge all non-main-chain children
}
for (auto son : v[u]) {
dp[u][j][1] = max(dp[u][j][1], dp[son][j + 1][1] + dp[u][j][0] - dp[son][j][0]);
change(in[son], dp[u][j][0] - dp[son][j][0], j); // Range add on the subtree interval
change(ou[son] + 1, dp[son][j][0] - dp[u][j][0], j); // Range subtract after the subtree interval
}
}
for (int p = in[u] + 1; p <= ou[u]; p++) {
int x = wrd[p];
int d = deep[x] - deep[u];
dp[u][d][0] = max(dp[u][d][0], getsum(p, d) + dp[x][d + 1][1]);
}
}
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
cin >> t;
while (t--) {
cin >> n >> m;
for (int i = 2; i <= n; i++) {
cin >> f[i];
v[f[i]].push_back(i); // Build the tree
}
dfs(1);
cout << dp[1][1][1] << "\n"; // The answer starts from the root as the head of the main chain
for (int i = 1; i <= n; i++) v[i].clear();
memset(tree, 0, sizeof(tree));
memset(dp, 0, sizeof(dp));
cnt = 0;
}
return 0;
}
This implementation fully realizes the core idea of main-chain modeling, tree DP, and DFS-order Fenwick Tree optimization.
FAQ: The three questions developers care about most
Why does dp[u][j][1] only need to enumerate one child as the continuation of the main chain?
Because structurally, the main chain passing through node u can continue downward along only one path. It cannot simultaneously pass through multiple child subtrees, so at most one child can lie on the main chain.
Why can DFS order support interval processing in this problem?
Because the entire subtree of any node is always contiguous in DFS order. That lets you map subtree contributions to range updates and avoid repeatedly enumerating every descendant, which significantly reduces complexity.
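A self-contained illustration (a hypothetical five-node tree, not from the problem data):
#include <bits/stdc++.h>
using namespace std;
// Edges: 1 -> {2, 3}, 2 -> {4, 5}. Preorder DFS visits 1, 2, 4, 5, 3,
// so every subtree maps to a contiguous interval [in_[u], ou_[u]].
vector<int> g[6];
int in_[6], ou_[6], tick_ = 0;
void dfs(int u) {
    in_[u] = ++tick_;
    for (int s : g[u]) dfs(s);
    ou_[u] = tick_;
}
int main() {
    g[1] = {2, 3}; g[2] = {4, 5};
    dfs(1);
    for (int u = 1; u <= 5; u++)
        printf("node %d -> [%d, %d]\n", u, in_[u], ou_[u]);
    // node 2 -> [2, 4]: exactly its subtree {2, 4, 5}
    return 0;
}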
What is the easiest part to get wrong in this problem?
First, the boundary condition and meaning of j + 1. Second, the Fenwick Tree stores layered contributions by depth rather than ordinary weights. Third, with multiple test cases, you must completely clear all arrays and adjacency lists.
Core Summary
This article reconstructs a solution to NOIP2025 P14637 Tree Value by focusing on main-chain modeling, the dp[i][j][0/1] state definition, DFS-order interval compression, and Fenwick Tree optimization. It shows why the seemingly complex transitions can be reduced to O(nm log n) and includes a complete C++ implementation with translated comments.