This article focuses on the end-to-end delivery workflow for Spring Boot services on Kubernetes. It covers jar vs. war artifacts, Maven builds, multi-stage Docker packaging, Jenkins releases, Kubernetes deployment, and JVM troubleshooting to solve a common operations pain point: knowing how to release, but not how to diagnose failures. Keywords: Spring Boot, Kubernetes, Jenkins.
Technical Specifications at a Glance
| Parameter | Details |
|---|---|
| Core Languages | Java, YAML, Dockerfile, Bash |
| Runtime Platforms | Spring Boot, Docker, Kubernetes, Jenkins |
| Delivery Protocols | OCI images, HTTP, K8S Deployment |
| Core Dependencies | Maven, JDK, Harbor, Git, embedded Tomcat |
Operations teams must understand the Spring Boot delivery pipeline
Many release failures do not happen at kubectl apply. They usually occur earlier, at the source code, dependency, image, or JVM parameter layer. If an operations engineer only knows how to change an image tag, they often cannot determine whether the problem is in the build, push, or startup stage.
The Spring Boot delivery pipeline can be summarized as follows: source code enters the Maven build process, produces an executable jar, gets packaged into an image with Docker, is pushed to an image registry by Jenkins, and is finally rolled out through Kubernetes orchestration. If any step is unstable, release risk increases significantly.
The nature of Java artifacts determines how applications run
A jar is the standard Java archive format. In Spring Boot, the most common form is an executable jar, which already includes dependencies and an embedded web container. That means you can start it directly with java -jar, which makes it especially suitable for containerized deployment.
A war, by contrast, follows the traditional Java web delivery model: it depends on an external Servlet container such as standalone Tomcat. Its deployment path, container configuration, and runtime responsibilities differ clearly from a jar's, so it is less streamlined than an executable jar in cloud-native environments.
myapp.jar
├── META-INF/
├── BOOT-INF/
│ ├── classes/ # Compiled business code
│ └── lib/ # Runtime dependencies
└── org/springframework/boot/loader/
This structure shows that a Spring Boot jar already contains its own launcher and dependencies, so it can run directly.
Spring Boot simplifies the deployment model through an embedded container
The core value of Spring Boot is not that it replaces Spring. Its value is that it standardizes initialization, dependency management, auto-configuration, and runtime behavior. The biggest operational benefit is that the deliverable changes from “application + external Tomcat” to “a single executable package.”
When a project introduces spring-boot-starter-web, embedded Tomcat is packaged into the application artifact itself. When the container starts, the JVM process launches both the application and the web container directly, without relying on middleware preinstalled on the host.
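As a minimal sketch of how this looks in pom.xml (versions omitted here on the assumption that they are managed by the Spring Boot parent POM):

```xml
<dependencies>
    <!-- Pulls in Spring MVC and the embedded Tomcat container -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>

<build>
    <plugins>
        <!-- Repackages the plain jar into the executable "fat" jar
             with the loader and BOOT-INF layout shown earlier -->
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
```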
Maven is the de facto standard for build and dependency management
Maven is responsible for downloading dependencies, running compilation, testing, and packaging. The pom.xml file determines which libraries the project uses, which plugins it adopts, and whether it ultimately produces a jar or a war. For that reason, when a build fails, you should first verify that the dependency source, versions, and plugin configuration are consistent.
mvn clean package -Dmaven.test.skip=true # Skip tests and package directly
This command generates the Spring Boot executable jar and is the most common packaging entry point in CI/CD pipelines.
Multi-stage Docker builds are better suited for production pipelines
In production, separating the Maven build environment from the runtime environment reduces image size and improves reproducibility at the same time. Jenkins does not need a complex build toolchain installed on the host. It only needs to trigger docker build.
FROM maven:3.3.9 AS build
COPY . /usr/app/
RUN cd /usr/app && mvn clean package -Dmaven.test.skip=true # Produce the jar during the build stage
FROM your-jdk-runtime:8
COPY --from=build /usr/app/your-app/target/your-app.jar /usr/local/apps/ # Copy only the runtime artifact
WORKDIR /usr/local/apps/
EXPOSE 8080
EXPOSE 10090
CMD ["./start.sh", "your-app.jar"] # Start the service through the startup script
This Dockerfile demonstrates the standard practice of separating the build image from the runtime image.
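The Dockerfile delegates startup to a start.sh script that is not shown in the original. A minimal sketch of what such a script might contain is below; the JAVA_OPTS pass-through and the exec form are assumptions, not the original script:

```shell
#!/bin/sh
# start.sh -- hypothetical sketch of the entrypoint the Dockerfile's CMD invokes.
# Assumptions: the jar name arrives as $1 and JAVA_OPTS is injected via the
# container environment (e.g. from the Deployment spec).
start_app() {
    app_jar="${1:?usage: start.sh <app.jar>}"
    # exec makes the JVM replace the shell as PID 1, so it receives SIGTERM
    # directly when Kubernetes stops the Pod. JAVA_OPTS is deliberately
    # unquoted so that multiple flags split into separate words.
    exec java ${JAVA_OPTS:-} -jar "$app_jar"
}

# Run only when a jar name was actually supplied on the command line.
if [ $# -gt 0 ]; then
    start_app "$@"
fi
```

Without exec, the shell stays as PID 1 and the JVM may never see the termination signal, which turns graceful shutdown into a SIGKILL after the grace period.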
Jenkins pipelines move build artifacts into the cluster
A typical workflow includes pulling deployment templates, fetching source code, building the image, generating a tag, pushing to Harbor, replacing the image in the Deployment, and triggering the release. Here, the real artifact is not the jar on the host machine, but the final runnable image that can be pulled by the cluster.
Use a consistent tag strategy, such as “timestamp + commit ID + random suffix.” This makes rollback, auditing, and cross-environment traceability much clearer, and it reduces the chance of accidentally releasing an outdated image.
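A sketch of such a tag generator in shell; the "nogit" fallback is an assumption added so the snippet also runs outside a Git repository:

```shell
# Hypothetical image-tag generator following the
# "timestamp + commit ID + random suffix" scheme.
commit_id=$(git rev-parse --short HEAD 2>/dev/null || echo nogit)
rand_suffix=$(head -c4 /dev/urandom | od -An -tx1 | tr -d ' \n')
IMAGE_TAG="$(date +%Y%m%d%H%M%S)-${commit_id}-${rand_suffix}"
echo "$IMAGE_TAG"   # e.g. 20240101120000-a1b2c3d-9f3e2c41
```

The timestamp orders tags chronologically, the commit ID ties the image back to the exact source revision for auditing, and the random suffix prevents collisions when two builds of the same commit run in the same second.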
Release failures usually cluster around these breakpoints
Common failure points include an unavailable Maven repository, expired Git credentials, mismatched JDK versions, incorrect paths in a multi-stage Docker build, insufficient Harbor quota, missing imagePullSecrets, incorrect probe configuration, and JVM heap settings that exceed the container memory limit.
kubectl describe pod <pod-name> # View events and probe failure reasons
kubectl logs <pod-name> # View application startup logs
kubectl get events -n <ns> # View namespace-level abnormal events
These commands help you quickly determine whether the issue is in scheduling, image pulling, startup, or health checking.
Successful Kubernetes releases depend on correct resource and probe settings
A Pod entering Running does not mean the service is actually available. Release quality depends on probes, configuration injection, port listening, and dependency initialization order. If the readinessProbe runs too early, the service may be repeatedly removed from traffic, creating a false failure signal.
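A hedged example of probe settings for a Spring Boot container: the /actuator/health path assumes Spring Boot Actuator is on the classpath, and the thresholds and port are illustrative, not prescriptive:

```yaml
containers:
  - name: your-app
    ports:
      - containerPort: 8080
    # startupProbe gives a slow-starting JVM time before the other probes run,
    # avoiding the "removed from traffic before it ever came up" false failure
    startupProbe:
      httpGet:
        path: /actuator/health
        port: 8080
      failureThreshold: 30
      periodSeconds: 5
    readinessProbe:
      httpGet:
        path: /actuator/health
        port: 8080
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /actuator/health
        port: 8080
      periodSeconds: 20
```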
At the same time, image, configuration, and secrets must change together. Many incidents are not caused by a broken image, but by a new version that depends on a ConfigMap or Secret that was not updated in sync, which eventually leads to CrashLoopBackOff.
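One way to make that coupling visible is to have the Deployment consume configuration by reference, so reviewers can see exactly which ConfigMap and Secret a new image version depends on. The names and registry below are placeholders:

```yaml
containers:
  - name: your-app
    image: harbor.example.com/project/your-app:<tag>
    envFrom:
      - configMapRef:
          name: your-app-config   # must ship the keys the new version reads
      - secretRef:
          name: your-app-secret   # rotated or extended in the same release
```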
JVM parameters must align with the container memory model
Java service stability depends heavily on the JVM. If the container limit is 2Gi, but you configure an oversized heap or ignore Metaspace, thread stacks, and direct memory, Kubernetes may terminate the Pod directly as OOMKilled.
env:
- name: JAVA_OPTS
value: "-XX:MaxRAMPercentage=75.0 -XX:InitialRAMPercentage=75.0" # Allocate heap based on a percentage of container memory
resources:
limits:
memory: "2Gi"
This configuration sizes the heap as a percentage of the container limit, keeping the JVM aligned with Kubernetes memory accounting and reducing OOM risk. Note that the JVM does not read JAVA_OPTS automatically; the startup command must pass it to java.
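As a quick sanity check of the numbers, the 75% figure leaves a fixed slice of the limit for everything outside the heap:

```shell
# Back-of-the-envelope JVM memory budget for a 2Gi container limit with
# -XX:MaxRAMPercentage=75.0. The remainder must cover Metaspace, thread
# stacks, direct memory, and JVM overhead.
limit_mib=2048
heap_mib=$(( limit_mib * 75 / 100 ))
headroom_mib=$(( limit_mib - heap_mib ))
echo "heap=${heap_mib}MiB non-heap-headroom=${headroom_mib}MiB"
# prints: heap=1536MiB non-heap-headroom=512MiB
```

If threads are numerous (each stack commonly defaults to around 1 MiB) or the service uses direct buffers heavily, 25% headroom may still be too tight, and the percentage should be lowered.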
JVM monitoring becomes the second battlefield after release
After deployment, operations teams should not only check whether the Pod is alive. They should also observe GC frequency, Full GC pause time, thread states, and heap usage trends. Many Java service issues are not instantaneous failures. They degrade gradually due to memory leaks, thread blocking, and large object accumulation.
If Full GC occurs more than once per minute, or if a single pause exceeds one second, treat it as a high-risk signal. At that point, combine logs, monitoring, and live diagnostic data to determine whether the issue comes from configuration or a code-level leak.
find / -name "gc*.log" 2>/dev/null # Search for GC logs
kubectl logs <pod-name> | grep "Full GC" # Filter Full GC records
jmap -dump:format=b,file=/app/heapdump.hprof 1 # Export a heap dump from the JVM (typically PID 1 inside the container)
These commands help collect GC and heap evidence from a running Pod for follow-up troubleshooting.
Operations teams should establish standardized release and rollback mechanisms
The key to improving release success is not becoming better at manual deployments. It is establishing a stable baseline: pin JDK and Maven versions, standardize image naming, build dependency caches, verify that images are pullable, pre-validate YAML and RBAC, and retain the previous known-good tag.
In production, canary releases and fast rollback matter just as much. After a failure, using rollout undo or switching back to the previous tag is safer than modifying parameters directly in production, and it restores business continuity more reliably.
FAQ
1. Why does Spring Boot on Kubernetes usually use jar instead of war?
Because an executable jar already includes embedded Tomcat and its dependencies. That makes the image structure simpler, the startup model consistent, and the application better suited for independent deployment, scaling, and rollback in container environments.
2. If Jenkins does not explicitly run mvn, why can the image still produce a jar?
Because mvn clean package is placed inside the multi-stage Docker build. Jenkins only triggers docker build, while the actual Maven packaging happens inside the build container.
3. Why is the service still unavailable even though the Pod shows Running?
Common causes include an incorrect probe path, a port that is not listening, dependencies that are not ready, missing configuration, or a JVM process that starts and then quickly runs into OOM. Running only means the container process exists. It does not mean the application is ready to serve traffic.