CloudHub is MuleSoft’s managed deployment platform. It handles container orchestration, scaling, and health monitoring so you don’t have to. That convenience comes at a steep price: you’re paying per-vCore for what is essentially managed hosting on top of Kubernetes infrastructure that MuleSoft operates behind the scenes.
Kubernetes gives you the same deployment capabilities — auto-scaling, health checks, rolling updates, self-healing — without per-vCore licensing. You run your integration workloads the same way you run every other service in your organization: as containerized applications on infrastructure you control.
This post covers how to get from CloudHub to Kubernetes. Not the theory — the actual steps, tooling decisions, and architecture patterns you need to execute the migration.
If your organization already runs Kubernetes for application workloads, the case for moving integrations onto the same platform is straightforward. You already have the cluster, the ops team, and the deployment tooling. Running MuleSoft separately means maintaining a parallel deployment model with its own scaling rules, monitoring, and cost structure — all for the privilege of paying Salesforce per-vCore licensing fees.
Here’s what Kubernetes gives you that CloudHub charges a premium for:
Your ops team likely already knows Kubernetes. Your developers likely already know Docker. The skills transfer is minimal compared to learning CloudHub’s proprietary deployment model.
Here’s what you’re building. Each component in the MuleSoft stack has a direct open-source equivalent that runs on Kubernetes:
The result is a system built entirely on open-source components running on infrastructure you control. No vendor lock-in at any layer.
Before you write any code, catalog everything. Open your Anypoint Platform and document every Mule application: what it does, what systems it connects to, how much traffic it handles, and who owns it.
Classify each flow by migration complexity:
Prioritize by combining business criticality with migration difficulty. Start with simple, non-critical flows. They let you build confidence, establish patterns, and shake out infrastructure issues without risking production traffic. Complex, critical flows come last — by then you have proven templates, battle-tested CI/CD, and a team that’s migrated a dozen flows already.
See our migration cost breakdown for per-flow pricing at each complexity level.
Stand up your Kubernetes foundation before you migrate a single flow. You want the target environment ready and proven before you start moving workloads.
Use a managed Kubernetes service unless you have a strong reason not to. Self-managed Kubernetes adds operational overhead that has nothing to do with your integration workloads.
Start small. Three nodes with m5.large (2 vCPU, 8 GB RAM) or equivalent is enough to run your first batch of migrated flows plus monitoring infrastructure. You can scale the node pool as you migrate more workloads. Kubernetes makes this trivial — add nodes, and the scheduler distributes pods automatically.
One namespace per environment is the simplest approach: dev, staging, prod. Apply resource quotas per namespace to prevent one environment from starving another. If you have multiple teams owning different integrations, consider team-based namespaces within each environment: prod-orders, prod-payments, prod-inventory.
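As a sketch of the per-environment approach, a namespace plus a ResourceQuota keeps one environment from consuming the whole cluster (quota values here are illustrative, not recommendations):

```yaml
# Illustrative: a prod namespace with a resource quota.
apiVersion: v1
kind: Namespace
metadata:
  name: prod
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: prod-quota
  namespace: prod
spec:
  hard:
    requests.cpu: "4"       # total CPU requested by all pods in prod
    requests.memory: 8Gi
    limits.cpu: "8"         # total CPU limit across all pods in prod
    limits.memory: 16Gi
```

Pods without resource requests are rejected once a quota is in place, which also forces teams to declare what each integration actually needs.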
Your pipeline needs to do four things: build the Java application (Maven or Gradle), build the Docker image, push it to a container registry, and apply Kubernetes manifests. GitHub Actions, GitLab CI, or Jenkins all work. Pick whatever your team already uses. A typical pipeline stage looks like this:
- mvn clean package — build and run tests
- docker build -t your-registry/flow-name:$GIT_SHA . — containerize
- docker push your-registry/flow-name:$GIT_SHA — push to ECR/ACR/GCR
- kubectl set image deployment/flow-name flow-name=your-registry/flow-name:$GIT_SHA — deploy

Never store credentials in ConfigMaps, code, or container images. Use Kubernetes Secrets as the baseline, and layer on the External Secrets Operator to sync from your cloud provider’s secret store (AWS Secrets Manager, Azure Key Vault, HashiCorp Vault). This keeps credentials out of your Git repository and gives you centralized rotation.
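As a sketch of the External Secrets Operator approach — store, secret, and key names here are hypothetical — an ExternalSecret resource syncs a value from AWS Secrets Manager into a regular Kubernetes Secret:

```yaml
# Illustrative ExternalSecret; the store and remote key names are placeholders.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: flow-credentials
spec:
  refreshInterval: 1h            # re-sync (and pick up rotations) hourly
  secretStoreRef:
    name: aws-secrets-manager    # a ClusterSecretStore you configure separately
    kind: ClusterSecretStore
  target:
    name: flow-credentials       # the Kubernetes Secret created by the operator
  data:
    - secretKey: db-password
      remoteRef:
        key: prod/flow-name/db-password
```

Your Deployment then references flow-credentials like any other Secret; rotation happens in the cloud secret store, not in Git.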
Your first migrated flow is the most important one — not because of the flow itself, but because it establishes the template every subsequent migration will follow. Invest the time to get this right.
Create a Spring Boot project with the Camel BOM (Bill of Materials) for dependency management. Core dependencies: camel-spring-boot-starter, camel-http, camel-jackson, and spring-boot-starter-actuator. Add protocol-specific components (SFTP, JMS, Kafka) as each flow requires them.
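In Maven terms, that setup might look like the following pom.xml fragment — the BOM version is illustrative, and the starter artifact IDs are the Spring Boot variants of the Camel components named above:

```xml
<!-- Sketch of the dependency section; pin the BOM version your team standardizes on. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.camel.springboot</groupId>
      <artifactId>camel-spring-boot-bom</artifactId>
      <version>4.4.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-spring-boot-starter</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-http-starter</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-jackson-starter</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>
</dependencies>
```

The BOM pins every Camel artifact to a compatible version, so individual dependencies omit version numbers.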
Establish a consistent layout across all migrated applications:
- src/main/java/.../routes/ — Camel route definitions
- src/main/java/.../processors/ — business logic processors
- src/main/java/.../transforms/ — data transformation classes
- src/main/java/.../config/ — Spring configuration
- src/main/resources/application.yml — externalized configuration
- src/test/ — unit and integration tests

Use a multi-stage build. The first stage compiles with Maven and runs tests. The second stage copies the fat JAR into a minimal JRE image. This keeps your production image small (~150 MB) and secure (no build tools in the runtime image).
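A minimal multi-stage Dockerfile along those lines — base image tags are illustrative; use whatever JDK version and registry your team standardizes on:

```dockerfile
# Stage 1: compile and run tests with Maven.
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline    # cache dependencies in their own layer
COPY src ./src
RUN mvn package                  # compiles and runs the test suite

# Stage 2: copy the fat JAR into a minimal JRE image.
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The dependency:go-offline step means source-only changes reuse the cached dependency layer, which keeps CI builds fast.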
Each application needs four manifests at minimum:
Map Spring Boot Actuator endpoints to Kubernetes probes:
- /actuator/health/liveness — Is the JVM alive? If this fails, Kubernetes restarts the pod.
- /actuator/health/readiness — Is the application ready to receive traffic? If this fails, Kubernetes removes the pod from service endpoints until it recovers.

Configure Logback to output JSON-formatted logs. Every log line should include the timestamp, log level, thread name, Camel route ID, and a correlation ID for request tracing. JSON logs are trivially parseable by Loki, Elasticsearch, or any centralized logging system. This is already better than CloudHub’s logging console.
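Wired into the Deployment’s container spec, that mapping looks like this (port and timing values are illustrative and should match your application):

```yaml
# Probe configuration sketch for the container spec in your Deployment manifest.
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 30   # give the JVM time to start before checking
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
```

Note that Spring Boot only exposes the liveness and readiness health groups when probes are enabled; on Kubernetes it detects this automatically, but you can force it with management.endpoint.health.probes.enabled=true.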
This template — the project structure, Dockerfile, Kubernetes manifests, health checks, and logging configuration — becomes your blueprint. Every subsequent migration starts by copying this template and replacing the route logic.
With the template established, migration becomes systematic. Each Mule flow follows the same process:
Map the Mule flow’s XML configuration to a Camel route definition. Mule’s HTTP Listener becomes Camel’s rest() DSL or a from("netty-http:...") endpoint. Mule’s Choice router becomes Camel’s .choice().when().otherwise(). The Enterprise Integration Patterns are the same in both frameworks — they just use different syntax.
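As a sketch of that mapping — route and endpoint names here are hypothetical, and it assumes the Camel Spring Boot starters and a REST-capable component (such as camel-platform-http) are on the classpath:

```java
import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

// A Mule HTTP Listener feeding a Choice router, expressed as a Camel route.
@Component
public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Mule HTTP Listener -> Camel REST DSL
        rest("/orders")
            .post().to("direct:processOrder");

        // Mule Choice router -> Camel choice()/when()/otherwise()
        from("direct:processOrder")
            .choice()
                .when(header("priority").isEqualTo("high"))
                    .to("direct:expedite")
                .otherwise()
                    .to("direct:standard")
            .end();
    }
}
```

The structure mirrors the Mule XML almost one-to-one, which is why experienced Mule developers pick up the Camel DSL quickly.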
DataWeave transforms become Java code using Jackson for JSON manipulation, JAXB or Jackson XML for XML processing, and plain Java for mapping logic. Simple transforms (field renaming, type conversion) are quick. Complex DataWeave scripts with recursive functions and dynamic schema handling take more effort. See our DataWeave vs. Java comparison for patterns and examples.
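To make the simple case concrete, here is a field-rename-and-convert transform in plain Java. The field names are hypothetical, and this minimal sketch works on a Map using only the standard library; a real implementation would typically parse the JSON payload with Jackson first.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a simple DataWeave-style transform in plain Java:
// rename fields, convert a type, and apply a default.
public class OrderTransform {
    public static Map<String, Object> transform(Map<String, Object> in) {
        Map<String, Object> out = new HashMap<>();
        out.put("customerId", in.get("cust_id"));                                 // field rename
        out.put("total", Double.parseDouble(in.get("total_amount").toString()));  // type conversion
        out.put("status", in.getOrDefault("status", "NEW"));                      // default value
        return out;
    }
}
```

Transforms like this are easy to unit test in isolation, which is something DataWeave scripts embedded in Mule XML rarely were.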
Mule connectors map to Camel components. The major ones:
- HTTP connectors → camel-http or camel-netty-http
- SFTP connector → camel-ftp (supports SFTP natively)
- JMS/AMQP connectors → camel-jms or camel-amqp
- Database connector → camel-jdbc or camel-sql
- Kafka connector → camel-kafka
- Salesforce connector → camel-salesforce
- File connector → camel-file

Camel has 300+ components. In practice, we’ve never encountered a Mule connector without a Camel equivalent.
Mule’s on-error-propagate and on-error-continue map to Camel’s onException() with .handled(true/false). Dead letter channels in Camel work the same conceptually — failed messages route to an error queue for inspection and reprocessing. Camel’s error handling is arguably more flexible because you can define exception policies at the route level, the context level, or both.
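A sketch of both mappings in one route class — exception types, queue names, and endpoints are hypothetical, and it assumes the relevant Camel components (JMS, HTTP) are configured:

```java
import java.io.IOException;
import org.apache.camel.builder.RouteBuilder;

// Camel error handling equivalent to Mule's error scopes.
public class ErrorHandlingRoute extends RouteBuilder {
    @Override
    public void configure() {
        // on-error-continue equivalent: mark handled, retry, then dead-letter
        onException(IOException.class)
            .handled(true)
            .maximumRedeliveries(3)
            .redeliveryDelay(2000)
            .to("jms:queue:orders.dlq");   // failed messages parked for inspection

        // on-error-propagate equivalent: log, then rethrow to the caller
        onException(IllegalArgumentException.class)
            .handled(false)
            .log("Validation failed: ${exception.message}");

        from("jms:queue:orders.in")
            .to("http://inventory-service/api/reserve");
    }
}
```

Defining these onException() blocks in a shared base RouteBuilder gives you a context-level policy that individual routes can still override.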
Write tests before you cut over. Camel’s test framework (camel-test-spring-junit5) lets you mock endpoints, inject test messages, and assert on message bodies and headers. Use @MockEndpoints and AdviceWith to isolate route segments for unit testing. Integration tests hit real test instances of external systems — your staging database, a test SFTP server, a dev Kafka cluster.
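A route test along those lines might look like the following sketch — route and endpoint names are hypothetical, and it assumes the camel-test-spring-junit5 and Spring Boot test dependencies are present:

```java
import org.apache.camel.EndpointInject;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.spring.junit5.CamelSpringBootTest;
import org.apache.camel.test.spring.junit5.MockEndpoints;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@CamelSpringBootTest
@SpringBootTest
@MockEndpoints("direct:expedite")   // wrap the real endpoint with a mock
class OrderRouteTest {

    @Autowired
    ProducerTemplate template;

    @EndpointInject("mock:direct:expedite")
    MockEndpoint expedite;

    @Test
    void highPriorityOrdersAreExpedited() throws Exception {
        expedite.expectedMessageCount(1);
        template.sendBodyAndHeader("direct:processOrder", "{\"id\":1}", "priority", "high");
        expedite.assertIsSatisfied();   // fails if the mock saw the wrong traffic
    }
}
```

Tests like this run in CI without any external systems, so they gate every deployment in the pipeline described earlier.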
For critical flows, run both the MuleSoft and Camel versions simultaneously. Route a copy of production traffic to the Camel application (shadow mode) or split traffic between them (canary deployment). Compare outputs. Once the Camel version produces identical results over a sufficient period — typically 1–2 weeks — cut over traffic entirely and decommission the Mule flow.
Replacing Anypoint Monitoring is not a sacrifice — it’s an upgrade. The open-source observability stack running on Kubernetes gives you more visibility than Anypoint ever did, with no per-seat licensing.
Spring Boot Actuator exposes Camel metrics via the /actuator/prometheus endpoint. Prometheus scrapes these automatically using Kubernetes service discovery. Out of the box you get: route-level throughput, processing latency (mean, p95, p99), error rates, inflight exchange counts, and JVM metrics (heap usage, GC pauses, thread counts).
Build Grafana dashboards for each Camel application. Key panels: requests per second by route, latency distribution, error rate over time, pod CPU and memory utilization, and JVM heap usage. Create a top-level dashboard that shows all integration applications at a glance — this replaces the Anypoint Runtime Manager view.
Configure Grafana alerts or Prometheus Alertmanager to fire on SLA breaches: error rate above threshold, latency exceeding target, pod restart loops, or disk pressure on persistent volumes. Route alerts to Slack, PagerDuty, or email — whatever your team already uses for incident response.
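For teams using Prometheus Alertmanager, an error-rate rule might be sketched like this — the metric names assume Camel’s Micrometer metrics exposed via the Prometheus endpoint, and the threshold and labels are illustrative:

```yaml
# Illustrative Prometheus alerting rule; verify metric names against your
# /actuator/prometheus output, which depends on the Camel/Micrometer versions.
groups:
  - name: camel-integrations
    rules:
      - alert: HighRouteErrorRate
        expr: |
          sum(rate(camel_exchanges_failed_total[5m])) by (routeId)
            / sum(rate(camel_exchanges_total[5m])) by (routeId) > 0.05
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "Route {{ $labels.routeId }} error rate above 5% for 10 minutes"
```

The for: clause prevents paging on transient blips; only a sustained breach fires the alert.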
Instrument your Camel applications with OpenTelemetry (Camel has native support via camel-opentelemetry). Traces flow through Jaeger or Grafana Tempo, giving you end-to-end visibility across service boundaries. When a request touches an API gateway, a Camel route, a database, and a downstream HTTP service, you see the full trace with timing for each hop. Anypoint Monitoring doesn’t give you this.
JSON-formatted logs from all Camel pods flow into Grafana Loki (lightweight, integrates natively with Grafana) or Elasticsearch (more powerful querying, higher operational overhead). Either way, you get centralized search, filtering by application/route/correlation ID, and log-to-trace correlation. Set up log retention policies to manage storage costs.
If you’re using Anypoint API Manager for rate limiting, authentication, and API governance, you need a replacement. Two solid open-source options run natively on Kubernetes.
Kong deploys as a Kubernetes Ingress controller. It handles rate limiting, key authentication, OAuth2, request/response transformation, and logging at the gateway level. Configuration is declarative via Kubernetes CRDs (Custom Resource Definitions) — no GUI required, everything lives in version control alongside your application manifests.
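As a sketch of that declarative style — plugin and service names are hypothetical — a KongPlugin CRD defines a rate limit, and an annotation attaches it to a Service:

```yaml
# Illustrative Kong Ingress Controller configuration via CRDs.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-orders
plugin: rate-limiting
config:
  minute: 60        # at most 60 requests per minute per consumer
  policy: local
---
apiVersion: v1
kind: Service
metadata:
  name: orders-api
  annotations:
    konghq.com/plugins: rate-limit-orders   # attach the plugin to this service
spec:
  selector:
    app: orders-api
  ports:
    - port: 80
      targetPort: 8080
```

Because the gateway policy is just another manifest, it goes through the same review and CI/CD process as the application it protects.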
APISIX is a high-performance alternative to Kong with a similar feature set. It’s particularly strong on dynamic routing and plugin extensibility. Both are solid choices — pick whichever has better community support for your specific requirements.
Replace RAML specs in Anypoint Exchange with OpenAPI (Swagger) specifications. Camel can auto-generate OpenAPI specs from your REST DSL route definitions. Host API docs on a developer portal (Kong Developer Portal, Backstage, or even a static site generated from your OpenAPI specs).
See our API gateway comparison page for a detailed breakdown of Kong, APISIX, and other options.
This is where the math gets compelling. Let’s compare a typical deployment: 8 vCores on CloudHub with a Platinum-tier Anypoint subscription.
- m5.xlarge nodes on-demand (4 vCPU, 16 GB each): ~$16K/year

With reserved instances or savings plans, the compute cost drops to roughly $10K/year, bringing the total under $15K/year.
The difference is staggering: $600K–$860K vs. $15K–$21K. Even accounting for the one-time migration cost, you break even within months, not years. The ongoing savings are pure licensing margin that Salesforce was capturing and you’re now keeping.
Run your own numbers through our savings calculator to see the comparison for your specific deployment size.
We’ve done enough of these migrations to know where teams get into trouble. Avoid these mistakes:
- m5.large nodes are a reasonable starting point, but monitor resource utilization from the beginning. Running out of CPU or memory during migration creates unnecessary production risk.

The migration from CloudHub to Kubernetes is not a leap of faith. It’s a well-understood pattern that thousands of organizations have executed. The tools are mature, the architecture patterns are proven, and the cost savings are unambiguous.
Start with the assessment. Catalog your flows, classify by complexity, and build a phased migration plan. Set up your Kubernetes cluster and CI/CD pipeline. Migrate a simple flow end-to-end to prove the pattern. Then accelerate.
The hardest part is not the technology — it’s the decision to start. Every month you delay is another month of vCore licensing fees you didn’t need to pay.