How XiaoHongShu Web x-s Request Signing Works: Parameter Flow, Node.js Browser Emulation, and mnsv2 Call Chain

This article examines the XiaoHongShu Web x-s request signing mechanism, covering parameter preprocessing, the core mnsv2 call, custom Base64 encoding, x-s-common device fingerprinting, and Node.js browser emulation. It addresses common issues such as hard-to-trace signing flows, strong browser dependencies, and error-prone cross-language invocation. Keywords: x-s, mnsv2, browser emulation

Technical specification snapshot

Research target: XiaoHongShu Web homefeed request signing
Primary languages: JavaScript, Python
Runtime environment: Node.js 16+, Python 3
Key protocols: HTTPS / JSON
Core request headers: x-s, x-t, x-s-common, x-b3-traceid
Core dependencies: requests, subprocess, Node.js runtime
Core challenges: JavaScript obfuscation, browser environment emulation, custom Base64

This article focuses on a structural understanding of the Web signing flow

The real value of the source material is not in reproducing a specific platform API. Instead, it demonstrates a typical Web frontend protection chain: the request body participates in hashing, an obfuscated function produces the signature, a structured payload is encoded again, and the final validation combines device fingerprints with timestamps.

From an engineering perspective, x-s is not a standalone algorithm. It is a composite chain that depends on browser context, cookies, user agent, and time. You need to understand the entire chain to identify the real cause of signature failures.

The signing flow can be abstracted into four stages

Request parameter normalization -> Hash preprocessing -> Obfuscated function generates the core signature -> Custom encoding writes the result into request headers

This flow shows that signature generation is not a single function call. It is the result of multiple linked stages.
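The four stages can be sketched as one composed Python function. This is a hypothetical outline, not the platform's code: `mnsv2` stands in for the obfuscated core (normally executed inside a Node.js runtime), and stage 4 uses standard Base64 for brevity where the real flow remaps the output through a custom alphabet.

```python
import base64
import hashlib
import json
from urllib.parse import quote

def sign_request(url_path: str, params: dict, mnsv2) -> str:
    """Hypothetical end-to-end sketch of the four signing stages."""
    # 1. Request parameter normalization: compact JSON, no whitespace
    body = json.dumps(params, separators=(',', ':'))
    f = url_path + body
    # 2. Hash preprocessing
    c = hashlib.md5(f.encode()).hexdigest()
    d = hashlib.md5(url_path.encode()).hexdigest()
    # 3. Obfuscated core produces the signature fragment
    x3 = mnsv2(f, c, d)
    # 4. Structured payload + encoding (standard Base64 shown here;
    #    the real flow remaps through a custom alphabet)
    payload = json.dumps({"x0": "4.3.1", "x1": "xhs-pc-web", "x2": "Windows",
                          "x3": x3, "x4": "object"}, separators=(',', ':'))
    encoded = base64.b64encode(quote(payload, safe="!~*'()").encode()).decode()
    return "XYS_" + encoded
```

Each later section of this article fills in one of these stages in more detail.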

The x-s input dependencies start with the path and a compact JSON request body

The research target is POST /api/sns/web/v1/homefeed. The request body must use compact JSON without spaces. Otherwise, the same business parameters can produce a different hash value, which ultimately leads to a mismatched signature.

During preprocessing, the code constructs three key inputs: f, c, and d. Specifically, f = url_path + json_body, c = MD5(f), and d = MD5(url_path). These three values are then passed into the obfuscated function mnsv2.

import json, hashlib

url_path = '/api/sns/web/v1/homefeed'
data = {
    "cursor_score": "",
    "num": 15,
    "refresh_type": 1
}
json_body = json.dumps(data, separators=(',', ':'))  # Use compact JSON to avoid whitespace affecting the signature
f = url_path + json_body                             # Concatenate the path and request body directly
c = hashlib.md5(f.encode()).hexdigest()              # Compute the MD5 of the concatenated value
d = hashlib.md5(url_path.encode()).hexdigest()       # Compute the MD5 of the path alone

This code generates the three base inputs required by mnsv2.

mnsv2 is the signing core, but it behaves more like a black-box execution point

The original material notes that the obfuscated script 02_source.js attaches mnsv2 to window. It is usually wrapped in string-array self-decryption and immediately invoked validation logic, which makes full manual static restoration inefficient. In practice, it is more effective to execute the original code directly after creating a compatible runtime environment.

That means the reverse-engineering focus should not be on full decompilation. It should be on building a minimal runnable environment: as long as the required dependent objects exist, the original function can usually generate the signature fragment x3 consistently.

The core signature value comes from the runtime result of mnsv2

window.mnsv2 = function (f, c, d) {
  // This usually contains obfuscated string decryption and internal keystream logic
  // The core goal is to generate the signature fragment x3 from f, c, and d
  return x3;
};

The key point of this pseudocode is that mnsv2 does not return the final x-s. It only returns the internal field x3.

The final x-s is an encoded structured payload, not a direct return value

After obtaining x3, you still need to place it into a fixed structure, apply a custom alphabet Base64 encoding, and prepend the XYS_ prefix. The structure usually includes the version, application identifier, system platform, signature value, and object type.

This explains a common misunderstanding: many people take the return value from mnsv2 and write it directly into the request header, but the request still fails because the server validates the fully encoded structured payload instead.

{
  "x0": "4.3.1",
  "x1": "xhs-pc-web",
  "x2": "Windows",
  "x3": "<mnsv2 output>",
  "x4": "object"
}

This JSON defines the payload that will be encoded into x-s, not the final header text itself.
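The assembly and encoding step can be sketched in Python. Note that the alphabet below is a placeholder rotation purely for illustration; the real custom alphabet must be lifted from the obfuscated script.

```python
import base64
import json
from urllib.parse import quote

STD_ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
# Placeholder rotation; substitute the alphabet recovered from the obfuscated script.
CUSTOM_ALPHABET = STD_ALPHABET[32:] + STD_ALPHABET[:32]

def custom_b64(text: str) -> str:
    # encodeURIComponent equivalent -> UTF-8 bytes -> standard Base64 -> alphabet remapping
    uri = quote(text, safe="!~*'()").encode()
    std = base64.b64encode(uri).decode()
    return std.translate(str.maketrans(STD_ALPHABET, CUSTOM_ALPHABET))

def build_x_s(x3: str) -> str:
    # Place the mnsv2 output into the fixed structure, encode, and prefix
    payload = {"x0": "4.3.1", "x1": "xhs-pc-web", "x2": "Windows",
               "x3": x3, "x4": "object"}
    return "XYS_" + custom_b64(json.dumps(payload, separators=(',', ':')))
```

Only the string returned by `build_x_s` belongs in the x-s header; writing the raw x3 value there is the failure mode described above.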

x-s-common reflects a security design that includes device fingerprint validation

x-s-common also uses custom Base64 encoding. After decoding it, you can see device, platform, version, and other fingerprint-related fields. It is relatively stable within a session, so it is often reused as a fixed header.

From a security design perspective, x-s provides request-level dynamism, while x-s-common provides environmental continuity. Together, they improve the platform’s ability to detect scripted or otherwise abnormal requests.

Its encoding method is consistent with x-s

Structured JSON -> encodeURIComponent -> UTF-8 byte sequence -> custom Base64 alphabet mapping

This process shows that the platform does not use standard Base64 directly. It customizes the output character mapping layer.
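Because the chain is deterministic, it can also be run in reverse to inspect a captured x-s-common token. The sketch below assumes the same placeholder alphabet as before (the real one must be recovered from the script), and the fingerprint field names in the test are illustrative only.

```python
import base64
import json
from urllib.parse import unquote

STD = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
CUSTOM = STD[32:] + STD[:32]  # placeholder; use the alphabet recovered from the script

def decode_x_s_common(token: str) -> dict:
    # Undo the chain in reverse: custom alphabet -> standard Base64
    # -> UTF-8 bytes -> percent-decoding -> JSON
    std = token.translate(str.maketrans(CUSTOM, STD))
    return json.loads(unquote(base64.b64decode(std).decode()))
```

Decoding a real token this way is how the device, platform, and version fields mentioned above become visible.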

Node.js browser emulation is the key prerequisite for running the original obfuscated script

Because the original JavaScript depends on browser objects such as window, document, navigator, and XMLHttpRequest, running it directly in Node.js usually throws errors. You need to construct a minimal runtime and provide the properties that the script reads.

Two consistency requirements matter most: document.cookie must match the cookie sent with the real request, and navigator.userAgent must match the user agent in the request headers. Otherwise, even if the function executes successfully, the resulting signature may still be rejected.

global.window = global;                    // Let the Node global object simulate window
global.self = global;
global.top = global;
window.addEventListener = function () {};  // Prevent errors when event listeners are registered
window.chrome = {};                        // Simulate a Chrome environment

global.navigator = {
  userAgent: 'Mozilla/5.0 ...',            // Must match the user agent in the request headers
  webdriver: false                         // Avoid automation markers
};

global.document = {
  cookie: 'a1=xxx; webId=xxx',             // Must match the real request cookies
  getElementsByTagName() { return []; }
};

This code provides the minimal browser environment required to execute the obfuscated script.

A Python and Node.js hybrid approach works well for stable signature integration

A practical architecture is to let Python orchestrate the business request while Node.js executes the original signing script inside the emulated browser environment. Python can call the Node entry file through subprocess.run and read the result from standard output.

The most common problems here are not algorithmic mistakes. They are path errors, encoding issues, and debug logs polluting the output. For that reason, the entry script should have a single responsibility and return only the final signature result.

You should use absolute paths for cross-language invocation

import os
import subprocess

js_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), '03_main.js')
result = subprocess.run(
    ['node', js_path, 'get_x_s', '{}'],   # Use an absolute path to avoid failures when the working directory changes
    capture_output=True,
    text=True,
    encoding='utf-8'
)
print(result.stdout.strip())

This code lets Python invoke the Node.js signing entry point reliably and read the returned value.
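To guard against the failure modes mentioned above, the invocation can be wrapped so that a non-zero exit code or polluted stdout fails loudly instead of silently producing a bad signature. This is a hypothetical wrapper: it assumes the Node entry prints a single JSON object such as {"x-s": "...", "x-t": ...} on stdout and nothing else; the injectable `runner` parameter exists only to make the function testable without a Node runtime.

```python
import json
import subprocess

def get_signature(js_path: str, body: str, runner=subprocess.run) -> dict:
    """Invoke the Node.js signing entry and parse its stdout as JSON.

    Assumes the entry has a single responsibility: print the signature
    object and nothing else (no debug logs).
    """
    proc = runner(['node', js_path, 'get_x_s', body],
                  capture_output=True, text=True, encoding='utf-8')
    if proc.returncode != 0:
        raise RuntimeError('signing failed: ' + proc.stderr.strip())
    return json.loads(proc.stdout.strip())  # raises if stdout is polluted
```

The returned values then go into the x-s and x-t request headers, alongside cookies and a user agent that match the ones baked into the Node environment.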

This chain reveals three common patterns in modern Web frontend protection

First, signature algorithms usually do not exist in isolation. They are tightly bound to execution context. Second, the purpose of obfuscation is not absolute irreversibility. It is to increase the cost of locating and reproducing the logic. Third, the real difficulty usually lies not in the algorithm itself, but in the runtime environment, input formatting, and multi-parameter consistency.

For that reason, the most effective way to study this kind of mechanism is not to stare at a single function. It is to build a complete view of input normalization, runtime dependencies, and structured output.

FAQ

1. Why can the same request body sometimes generate different x-s values?

Because the signature depends not only on the request body, but also on the path, timestamp, cookies, user agent, and runtime environment. In particular, whether the JSON is compact and whether the cookies are consistent both affect the final result.

2. Why does the request still fail even after locating and calling mnsv2?

Because mnsv2 only outputs x3. You still need to assemble it into a fixed structure, apply the custom Base64 encoding, and add the XYS_ prefix to produce the final x-s value.

3. What is the easiest thing to miss when emulating the browser environment in Node.js?

The most commonly missed parts are document.cookie, navigator.userAgent, and stub implementations for several browser objects. The first two directly affect signature consistency, while the last group determines whether the obfuscated script can run at all.

AI Readability Summary

This article uses public frontend code and normal network traffic as the basis for a structured analysis of the XiaoHongShu Web x-s request signing mechanism. It focuses on parameter preprocessing, mnsv2 execution, custom Base64 encoding, x-s-common device fingerprinting, Node.js browser emulation, and the Python invocation chain to help developers understand how Web request protection is designed.