How to Connect DeepSeek V4 to Claude Code: Setup, Flyway Migration Debugging, and Agent Coding Evaluation

DeepSeek V4 can connect directly to Claude Code through an Anthropic-compatible API. Its core value is enabling lower-cost multi-file code analysis, database migration diagnostics, and model configuration upgrades. This article focuses on real development tasks, setup steps, and cost-performance tradeoffs. Keywords: DeepSeek V4, Claude Code, Flyway.

The technical specification snapshot establishes the integration baseline

Primary languages: Java, TypeScript, Shell, SQL
Integration protocol: Anthropic-compatible API
Model variants: DeepSeek-V4-Pro, DeepSeek-V4-Flash
Context window: 1M tokens
Open-source license: MIT
Typical toolchain: Claude Code, CC Switch, Flyway, Spring Boot
Popularity reference: the original source did not provide a repository star count; the projects discussed are open-source implementation examples
Core dependencies: @anthropic-ai/claude-code, spring-boot-starter-flyway, flyway-database-postgresql

DeepSeek V4 can already power Claude Code workflows at a much lower cost

The core conclusion from the source material is straightforward: DeepSeek V4 is not only about chat generation. Its real value appears in Agent Coding scenarios. It targets problems that are much closer to real software development, such as cross-file understanding, project refactoring, and configuration repair.

Compared with using Anthropic’s official models directly, this approach is attractive for two reasons. First, the price is significantly lower. Second, DeepSeek exposes an Anthropic-compatible API, so Claude Code can connect with almost zero adaptation.
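As a quick smoke test outside Claude Code, you can exercise the compatible endpoint directly. The sketch below builds a request body in the standard Anthropic Messages API shape; the endpoint path matches the base URL used later in this article, but you should verify both against DeepSeek's current documentation before relying on them.

```shell
# Sketch: request shape for DeepSeek's Anthropic-compatible Messages endpoint.
# Writing the payload to a file first makes it easy to inspect before sending.
cat > /tmp/msg.json <<'EOF'
{
  "model": "DeepSeek-V4-Pro",
  "max_tokens": 128,
  "messages": [{"role": "user", "content": "Reply with OK"}]
}
EOF
# Sending it requires a valid key in DEEPSEEK_API_KEY; the header names
# follow Anthropic's standard API:
# curl -s https://api.deepseek.com/anthropic/v1/messages \
#   -H "x-api-key: $DEEPSEEK_API_KEY" \
#   -H "anthropic-version: 2023-06-01" \
#   -H "content-type: application/json" \
#   -d @/tmp/msg.json
```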

You can connect Claude Code by using a simple configuration file

If Claude Code is not installed locally yet, start by installing the official CLI:

npm install -g @anthropic-ai/claude-code  # Install the Claude Code CLI tool

This step gives you the claude command, which you can use later to call the model directly from a local project.

Then edit ~/.claude/settings.json and switch Claude Code’s backend to DeepSeek:

{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "your_deepseek_api_key",
    "ANTHROPIC_BASE_URL": "https://api.deepseek.com/anthropic",
    "ANTHROPIC_MODEL": "DeepSeek-V4-Pro",
    "API_TIMEOUT_MS": "3000000",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
  }
}

The key part of this configuration is pointing ANTHROPIC_BASE_URL to DeepSeek’s compatible endpoint and setting ANTHROPIC_MODEL to either Pro or Flash.
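If you prefer not to touch the global settings file, the same variables can be exported for a single shell session before starting claude. This assumes Claude Code reads the same ANTHROPIC_* variables from the environment, which makes it a convenient way to try Flash temporarily:

```shell
# Per-session alternative to ~/.claude/settings.json:
# export the variables, then start claude in the same shell.
export ANTHROPIC_AUTH_TOKEN="your_deepseek_api_key"
export ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic"
export ANTHROPIC_MODEL="DeepSeek-V4-Flash"   # Flash for this session only
echo "Backend: $ANTHROPIC_BASE_URL, model: $ANTHROPIC_MODEL"
```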

You can also use CC Switch to manage multiple providers visually

For teams that switch frequently between DeepSeek, Claude, MiniMax, and other models, CC Switch is a better fit. It acts as a provider management layer for Claude Code and lets you manage Providers, Skills, MCP, and prompts from one place.

AI Visual Insight: This interface shows the main workspace of a model provider management tool. The focus is unified switching between different LLM backends, organizing skill plugins, and managing runtime settings. It is especially useful for Agent Coding workflows that require frequent comparisons across providers.

When you add a custom Provider in CC Switch, you only need to enter DeepSeek’s Base URL and API Key, then set the model name to DeepSeek-V4-Pro or DeepSeek-V4-Flash.

Image: CC Switch Add DeepSeek Provider. AI Visual Insight: This image shows the custom Provider creation form, including fields such as Base URL, API Key, and model name. It demonstrates that the tool is not tied to a single vendor. Instead, it redirects Claude Code’s request path to DeepSeek through a compatible API.

You can verify the model switch directly with the status command

After starting claude, run /status. If the current model shows DeepSeek-V4-Pro, the integration is working.

Image: Verify Whether the Integration Is Active. AI Visual Insight: This screenshot focuses on the model information shown in the terminal or status panel. It helps confirm which backend model Claude Code is actually calling, so you can avoid situations where the configuration file has changed but runtime traffic still goes to the old provider.

DeepSeek V4 shows strong one-pass completion ability in model configuration upgrade tasks

The first hands-on task was to update model presets across multiple providers and upgrade the original plain-text model input field into a dropdown selector. The task looks simple, but it actually combines two steps: searching for the latest model names online and updating frontend configuration.

The key point here is not code generation. It is obtaining correct facts first. Without a search capability like Tavily, the model can easily produce outdated model versions based on stale knowledge.

/tavily-search Search for the latest models from DeepSeek, GLM, and OpenAI, then update the default model recommendations and examples in the global configuration.

This kind of prompt helps the model establish current facts before modifying code, which reduces the risk of making technically correct edits based on outdated model data.

Image: Search and Update the Latest LLM Models. AI Visual Insight: This screenshot reflects the process of running retrieval and modification tasks inside the terminal. It typically involves reading configuration files, comparing candidate model names, updating default values, and revising frontend enum lists. This is a classic example of a search-augmented coding agent workflow.

The final set of modified files included application.yml, .env.example, and SettingsPage.tsx. That shows DeepSeek V4 Pro does more than patch a single configuration point. It can understand the linkage between backend defaults, environment variable examples, and frontend interactions.

Image: Edit DeepSeek Model Configuration. AI Visual Insight: This image shows the resulting state of the model configuration UI. The key change is turning free-text input into structured dropdown presets so that providers, model names, and embedding options form a maintainable frontend enum system and reduce user input errors.

DeepSeek V4 demonstrates engineering-grade root cause analysis in database migration diagnostics

The second hands-on task highlights the value of Agent Coding even more clearly. In the same project, two SQL files existed, but only one ran automatically. The developer no longer remembered the original startup mechanism. This is a classic legacy engineering diagnosis problem.

The model first identified the root cause: init.sql was explicitly mounted through Docker Compose and executed at container startup, while V2__knowledge_skill.sql was not part of that startup path. At the same time, the project had not integrated a migration framework such as Flyway.

Image: Analysis of Why the Database Table Was Not Created. AI Visual Insight: This screenshot captures the model’s combined analysis of container mounts, SQL initialization paths, and application startup phases. The key technical distinction is that database container initialization scripts and application-level migration tools are two completely different execution paths.

It then proposed an improved solution: introduce Flyway and rename SQL files according to versioning conventions. The advantage of this approach is that it converts a one-time initialization script into a migration sequence that is traceable, replayable, and auditable.
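Applied to the two scripts from this project, the conventional layout would look roughly like this. The V1 name for the former init.sql is an assumption; Flyway only requires the V<version>__<description>.sql pattern under its configured locations:

```
src/main/resources/db/migration/
├── V1__init.sql              (former init.sql, now the first versioned migration)
└── V2__knowledge_skill.sql   (already follows the naming convention)
```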


<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-flyway</artifactId> <!-- Use the official Starter to trigger auto-configuration -->
</dependency>
<dependency>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-database-postgresql</artifactId> <!-- Add PostgreSQL dialect support -->
</dependency>

This dependency setup addresses the core reason Flyway does not run automatically under Spring Boot 4.x.
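For completeness, here is a minimal application.yml sketch that pairs with these dependencies. The property names are standard Spring Boot Flyway settings, but the specific values are illustrative for this project's situation:

```yaml
spring:
  flyway:
    enabled: true                        # on by default once the Starter is present
    locations: classpath:db/migration    # default folder for versioned scripts
    baseline-on-migrate: true            # useful if init.sql already created tables
```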

The Spring Boot 4.x auto-configuration split is a frequent hidden pitfall

One of the most valuable points in the source article is that flyway-core alone is not enough to make Spring Boot 4.x run migrations automatically. The auto-configuration capability has been split out of the traditional module, so you must use the official Starter for successful wiring.

Problems like this are hard to troubleshoot because the application may start without errors and show no obvious log output: Flyway simply fails silently. For an AI agent, if log feedback and dependency tree analysis are missing, it can easily keep changing configuration files in the wrong direction.
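One reliable way to detect this silent failure is to query Flyway's bookkeeping table directly. Flyway records every applied migration in flyway_schema_history (the framework's default table name, not anything project-specific):

```sql
-- If this table does not exist, Flyway never ran at all;
-- if it does, each row shows whether a migration succeeded.
SELECT installed_rank, version, description, success
FROM flyway_schema_history
ORDER BY installed_rank;
```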

Image: DeepSeek’s Summary After Completing Flyway Integration. AI Visual Insight: This screenshot shows the model’s converged result after multiple debugging rounds. It demonstrates that the model can do more than propose a migration strategy. It can also correct itself based on failure logs and complete dependency additions, naming convention updates, and execution-order validation.

DeepSeek V4-Flash can already support cost-sensitive business scenarios

The third hands-on task connected DeepSeek to an AI interview platform and used deepseek-v4-flash directly to generate mock interview questions and answers. This scenario prioritizes cost and throughput rather than maximum reasoning depth.

Image: Switch the Interview Platform Model to deepseek-v4-flash. AI Visual Insight: This interface shows the model-switching entry point inside a business application. The focus is replacing the default model through a configuration panel and verifying that the new model can be consumed seamlessly by business logic, prompt templates, and user interaction flows.

The result suggests that Flash can already deliver usable generation quality in non-thinking mode. For tasks such as interview generation, knowledge Q&A drafts, and batch summarization, this low-cost, high-throughput model offers strong practical value.

Image: Mock Interview Evaluation Results. AI Visual Insight: This image reflects the closed-loop flow from resume input to mock Q&A generation and then evaluation feedback. It shows that Flash can handle lightweight reasoning tasks for end users, although completeness and depth may still be limited for more complex questions.

DeepSeek V4’s positioning is defined by both its capabilities and pricing

From a specification standpoint, both V4-Pro and V4-Flash support a 1M-token context window and provide non-thinking and multiple thinking modes. The former is better suited to complex coding and engineering analysis, while the latter fits cost-sensitive tasks.

From a pricing standpoint, Flash output costs far less than Claude Sonnet 4.7, and Pro is also significantly cheaper. Their most practical value is not to replace top-tier closed-source models across the board, but to find the optimal point between acceptable quality and substantial cost reduction.

claude   # Start Claude Code in the project
/status  # Inside the session, check which model is actually active
# If you need to migrate from an older model name, replace it with DeepSeek-V4-Pro or DeepSeek-V4-Flash

This means that for teams that already rely on Anthropic-compatible workflows, migration effort centers mainly on model name updates and evaluation validation rather than SDK rewrites.
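A migration of this kind can often be scripted. The sketch below rewrites the model name in a settings file in place; the file path and the old model name are placeholders, and the sed -i syntax shown is the GNU form:

```shell
# Create a throwaway settings file with a placeholder old model name:
printf '{"env": {"ANTHROPIC_MODEL": "old-model-name"}}\n' > /tmp/claude-settings.json
# Rewrite the model name in place (GNU sed syntax):
sed -i 's/"ANTHROPIC_MODEL": "[^"]*"/"ANTHROPIC_MODEL": "DeepSeek-V4-Pro"/' /tmp/claude-settings.json
cat /tmp/claude-settings.json
# → {"env": {"ANTHROPIC_MODEL": "DeepSeek-V4-Pro"}}
```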

The conclusion is that DeepSeek V4 works best as a high-value engineering assistant rather than the only primary model

Across the three hands-on scenarios, DeepSeek V4 stands out for three reasons: it plugs into existing Claude Code toolchains, it can handle moderately complex engineering tasks, and it offers a major pricing advantage. In particular, it delivers strong ROI in scenarios such as configuration repair, migration script cleanup, and model replacement in production business platforms.

However, if the task involves highly difficult programming, complex reasoning chains, or extremely high one-shot success requirements, V4-Pro still does not fully match the strongest closed-source models. A safer strategy is to use Flash for batch and lightweight tasks, Pro for medium-complexity engineering work, and reserve the hardest tasks for stronger models.

FAQ: The three questions developers care about most

Why can DeepSeek V4 connect directly to Claude Code?

Because DeepSeek provides an Anthropic-compatible API. Once you update ANTHROPIC_BASE_URL, the API key, and the model name, Claude Code can forward requests directly to DeepSeek.

Why does adding only flyway-core fail to execute migrations automatically?

In Spring Boot 4.x, Flyway auto-configuration has been moved into the official Starter. Adding the flyway-core library alone no longer triggers auto-configuration, so you should use spring-boot-starter-flyway.

How should you choose between DeepSeek-V4-Pro and DeepSeek-V4-Flash?

If you care more about complex coding tasks, cross-file analysis, and issue diagnosis, choose Pro first. If you care more about low-cost generation, business Q&A, and large-scale invocation, choose Flash first. In practice, the best approach is usually a mixed deployment.

Core summary

This article walks through the complete path for integrating DeepSeek V4 with Claude Code in a real project, covering Anthropic-compatible API setup, visual switching with CC Switch, Flyway database migration integration, AI interview platform adoption results, and a pricing comparison between V4 Pro and Flash.