Network
Shiprr routes app traffic through managed edge entry points before forwarding requests to healthy app replicas. This gives apps regional placement, managed TLS, access control, redirects, rewrites, and custom-domain routing without making you operate the edge layer yourself.
Architecture#
Shiprr deploys applications across distributed runtime regions and routes traffic through managed edge and runtime entry points. Each app's region selection determines where new replicas are placed first; if that region lacks healthy capacity, the platform may place load in the nearest healthy region according to policy.
Default app hostnames and platform entry points can use anycast-backed edge routing where available. Custom domains point at stable Shiprr edge targets, so your DNS records rarely need to change while the platform manages routing behind them.
Capacity#
- Regional uplink: runtime infrastructure uses uplink profiles of at least 10 Gbps for workload traffic.
- Aggregate edge-to-runtime: regional infrastructure capacity currently exceeds 100 Gbps across active clusters, supporting high-volume routing for busy apps.
- Load behavior: runtime placement and network balancing follow platform placement and routing rules rather than fixed single-host affinity.
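As a back-of-envelope illustration of what these link profiles mean in practice (the 10 Gbps figure is the stated minimum uplink, not a per-app guarantee):

```python
def seconds_to_transfer(payload_gb: float, link_gbps: float) -> float:
    """Ideal time to move payload_gb gigabytes over a link_gbps link,
    ignoring protocol overhead and contention."""
    payload_bits = payload_gb * 1e9 * 8  # gigabytes -> bits
    return payload_bits / (link_gbps * 1e9)

# A 10 GB transfer over a 10 Gbps uplink has an ideal floor of 8 seconds.
print(seconds_to_transfer(10, 10))  # 8.0
```

Real transfers add TCP/TLS overhead and share the link with other traffic, so treat this as a lower bound, not a forecast.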
Edge routing vs CDN#
Shiprr's edge layer is regional ingress for apps. It handles routing, TLS, custom domains, access control, redirects, and rewrites before forwarding requests to your app.
Shiprr is not currently a general-purpose CDN. Dynamic app requests are proxied to your app, and Shiprr does not yet provide customer-facing edge cache rules, cache purging, or static asset CDN controls.
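Since the platform does not cache responses for you, apps that serve static assets can still set standard HTTP caching headers so browsers and any downstream proxies reuse responses. A minimal sketch; the helper name and values are illustrative, not a Shiprr API:

```python
def static_asset_headers(max_age: int = 86400, immutable: bool = False) -> dict:
    """Build Cache-Control headers the app attaches to static responses.
    Shiprr proxies these through unchanged; caching happens in the
    client or any intermediary that honors the header."""
    directives = [f"public, max-age={max_age}"]
    if immutable:
        directives.append("immutable")
    return {"Cache-Control": ", ".join(directives)}

print(static_asset_headers(31536000, immutable=True))
# {'Cache-Control': 'public, max-age=31536000, immutable'}
```

Fingerprinted assets (hashed filenames) can safely use a long `max-age` with `immutable`; HTML entry points usually want a short one.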
What this means for app performance#
High-capacity links support healthy sustained traffic and burst handling, but user-facing latency is still bound by request path, app runtime efficiency, database location, and upstream dependencies.
For API-heavy workloads, gains usually come from better region selection and app-level efficiency rather than raw link speed.
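A rough latency budget shows why region selection matters more than link speed for API workloads. All numbers below are illustrative, not measured figures:

```python
def request_latency_ms(client_rtt: float, app_time: float,
                       db_rtt: float, db_queries: int) -> float:
    """Ideal end-to-end latency: one client round trip, app compute,
    and one database round trip per query. Ignores TLS setup and queueing."""
    return client_rtt + app_time + db_rtt * db_queries

# Database in the same region as the app: ~1 ms per query round trip.
same_region = request_latency_ms(40, 10, 1, 5)    # 55 ms
# Database one region away: ~30 ms per query round trip.
cross_region = request_latency_ms(40, 10, 30, 5)  # 200 ms
print(same_region, cross_region)
```

Per-query round trips multiply, so co-locating the database (or batching queries) dominates any improvement in link throughput.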
Egress policy and billing boundaries#
Shiprr bills outbound transfer by usage band. High-traffic workloads should use caching and response compression to reduce repeat payload volume.
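Response compression is the simplest lever against egress volume. A sketch using Python's standard library; actual ratios depend on content, though JSON and HTML typically compress well:

```python
import gzip
import json

# A repetitive JSON payload, typical of list endpoints.
payload = json.dumps(
    [{"id": i, "status": "ok", "region": "eu-west"} for i in range(1000)]
).encode()

compressed = gzip.compress(payload)
ratio = len(compressed) / len(payload)
print(f"{len(payload)} -> {len(compressed)} bytes ({ratio:.0%})")
```

In production you would enable this in your web framework or server (e.g. gzip or Brotli middleware) and let `Accept-Encoding` negotiation decide per request.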
Reliability and operations#
- Deployment failures are isolated per app replica; retries are handled per workflow.
- Traffic can be shifted by region preference and placement rules when capacity changes.
- Platform monitoring tracks runtime health and job behavior for faster issue detection.
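Deploy retries are handled by the platform, but application-side calls to upstream dependencies benefit from the same pattern. A generic exponential-backoff sketch, not a Shiprr SDK call:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.5):
    """Run fn, retrying on exception with exponential backoff.
    One failed attempt stays isolated to this call and does not
    abort the surrounding workflow."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2 ** attempt)
```

Callers wrap flaky upstream requests, e.g. `with_retries(lambda: fetch_upstream())`, where `fetch_upstream` stands in for your own client code.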
FAQ#
Is this a premium network?
Shiprr prioritizes high-throughput infrastructure and regional routing. We describe realistic operational throughput and placement behavior rather than leaning on branded network claims.
Can you guarantee latency?
No fixed global latency guarantee is made. Public internet, client geography, and upstream services impact end-to-end response times.
Is it all in one region?
No. Runtime and build capacity is distributed across regions and placed according to region selection and healthy capacity.