01: Overview¶
1.1 Introduction¶
LocalCloudLab is the name of this learning and development environment project.
LocalCloudLab is a fully self-hosted, production-grade development and testing environment designed to replicate modern cloud-native architectures using open‑source technologies. Instead of relying on remote cloud providers such as AWS, Azure, or GCP, LocalCloudLab allows engineers, architects, and DevOps practitioners to build an entire microservices ecosystem on a single hosted Linux server.
The goal is simple: provide a safe, isolated, fully controlled environment where complex real‑world systems can be designed, deployed, tested, observed, secured, and scaled—without external dependencies and without the unpredictable limitations of managed cloud platforms.
LocalCloudLab enables developers to:
• Experiment with Kubernetes in a realistic environment.
• Build multi‑service .NET applications.
• Test service‑to‑service communication under an Envoy Gateway.
• Manage SSL certificates with cert‑manager.
• Deploy and observe services using Grafana, Prometheus, Loki, Seq, and Jaeger.
• Work with backing services such as PostgreSQL, Redis, and RabbitMQ.
• Design CI/CD pipelines using GitHub Actions and eventually ArgoCD.
• Simulate production‑grade networking with MetalLB.
• Implement security patterns like RBAC, NetworkPolicies, secrets control, and more.
LocalCloudLab is intentionally opinionated. It brings together dozens of DevOps practices into one unified project so that you can learn, test, debug, and master them.
1.2 Why LocalCloudLab Exists¶
Modern production systems are extremely complex. Real microservice ecosystems involve:
• Containers
• Orchestrators (Kubernetes)
• API gateways
• Observability stacks
• Persistent storage
• Secrets management
• SSL certificates
• Message brokers
• Distributed tracing
• Continuous deployment
• Multi‑service architectures
• Strict security controls
The challenge is that cloud providers hide many details. When you deploy to AWS EKS, you do not manage your control plane. When you deploy to Azure App Service, you do not deal with ingress controllers. When you use cloud databases, you rarely think about PVCs (PersistentVolumeClaims) or StatefulSets.
LocalCloudLab forces you to understand everything — because you control everything.
This environment allows you to:
• Learn Kubernetes deeply without abstractions.
• Understand how traffic flows from the Internet → Gateway → Service → Pod.
• Debug SSL issues, routing problems, certificate renewals, load balancer behavior.
• See exactly how logs, metrics, and traces are collected and correlated.
• Practice proper architecture design, clean code, and microservice isolation.
• Reproduce production incidents in a safe environment.
• Train for DevOps roles by managing a full stack alone.
LocalCloudLab is both a learning tool and a practical deployment environment for real projects.
1.3 High‑Level Architecture¶
LocalCloudLab consists of several layers stacked on top of each other. Each layer has a responsibility and interacts with other layers cleanly.
Below is a simplified ASCII diagram representing the architecture:
┌───────────────────────────────────────────────────────────┐
│                     CLIENT / INTERNET                     │
└───────────────────────────────────────────────────────────┘
                              |
                              v
┌───────────────────────────────────────────────────────────┐
│                       Envoy Gateway                       │
│          (HTTP, HTTPS, Routing, TLS termination)          │
└───────────────────────────────────────────────────────────┘
                              |
                              v
┌───────────────────────────────────────────────────────────┐
│                     Kubernetes (k3s)                      │
│    Deployments | Services | Pods | ConfigMaps | Secrets   │
└───────────────────────────────────────────────────────────┘
                              |
            ┌─────────────────┴───────────────┐
            v                                 v
┌───────────────────────┐        ┌────────────────────────┐
│   Application Layer   │        │  Infrastructure Layer  │
│ Search API (C#/.NET)  │        │   PostgreSQL | Redis   │
│  Checkin API (.NET)   │        │   RabbitMQ | Storage   │
└───────────────────────┘        └────────────────────────┘
                              |
                              v
┌───────────────────────────────────────────────────────────┐
│                       Observability                       │
│  Prometheus | Grafana | Loki | Promtail | Seq | Jaeger    │
└───────────────────────────────────────────────────────────┘
Each component plays an essential role. Later in this guide, every box in the diagram will be broken down in depth with installation, configuration, YAML manifests, troubleshooting, and best practices.
1.4 Major Components Overview¶
Here is a brief summary of the technologies that form the foundation of LocalCloudLab:
Kubernetes (k3s):
A lightweight, certified Kubernetes distribution well suited to a single-node cluster.
It bundles all essential components: kube-apiserver, scheduler, controller-manager,
kubelet, containerd, a CNI network plugin, and more.
MetalLB:
A LoadBalancer implementation for bare-metal environments. It assigns external IPs
to Services of type LoadBalancer, mimicking public cloud behavior.
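As a sketch, an L2-mode MetalLB configuration looks like the following. The pool name and address range are assumptions; pick free addresses on your own LAN:

```yaml
# Hypothetical address pool; adjust the range to unused IPs on your network.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
# Announce pool addresses to the local network via ARP (Layer 2 mode).
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```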
Envoy Gateway:
A powerful, modern ingress/gateway built on the Envoy proxy. Handles HTTP/S traffic,
routing, path-based rules, TLS termination, and more.
cert-manager:
Automates issuance and renewal of SSL/TLS certificates via the ACME protocol (e.g., Let's Encrypt).
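A minimal ClusterIssuer sketch for such a setup might look like the following. The issuer name, email, and Gateway reference are assumptions, and it presumes the HTTP-01 solver with cert-manager's Gateway API integration:

```yaml
# Sketch of a Let's Encrypt issuer; replace the email before applying.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com            # placeholder address
    privateKeySecretRef:
      name: letsencrypt-account-key     # Secret holding the ACME account key
    solvers:
      - http01:
          gatewayHTTPRoute:
            parentRefs:
              - name: main-gateway      # assumed Gateway name (section 1.8)
                namespace: envoy-gateway-system
                kind: Gateway
```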
Observability Stack:
Prometheus – Metrics scraping and storage.
Grafana – Dashboards and visualization.
Loki – Logs aggregation.
Promtail – Log shipping from nodes.
Seq – Structured logging from .NET.
Jaeger – Distributed tracing.
OpenTelemetry – Instrumentation and trace exporting.
Data Services:
PostgreSQL – Primary relational database.
Redis – In-memory cache / key-value store.
RabbitMQ – Message broker for event-driven patterns.
CI/CD:
GitHub Actions – Builds, tests, deploys applications.
ArgoCD – GitOps deployment tool (optional but recommended).
Application Layer:
Search API – Example microservice (C#/.NET).
Checkin API – Second microservice (C#/.NET).
Both deployed in Kubernetes using Deployment, Service, and HTTPRoute.
1.5 Philosophy and Goals of LocalCloudLab¶
LocalCloudLab is designed around five core principles:
1. Everything must be reproducible.
2. All configuration should be version-controlled in Git.
3. The environment must reflect real production patterns as closely as possible.
4. Observability must be first-class, not an afterthought.
5. Security must be built-in from the beginning.
These principles ensure that the environment grows organically into a fully‑fledged DevOps/Cloud platform that is both educational and practical.
(End of Part 1. More will be appended.)
LocalCloudLab – Section 01: Overview (Part 2)¶
1.6 Understanding the Layered Architecture in Depth¶
In a modern cloud-native system, each layer of the infrastructure has well-defined responsibilities. LocalCloudLab mirrors these real-world patterns closely, ensuring that engineers gain hands-on familiarity with concepts that directly translate into production and enterprise environments.
1.6.1 The External Interface Layer¶
This is the point of entry for all client requests, whether originating from:
• Browsers
• Mobile applications
• API consumers
• Internal microservices
• Monitoring systems
• Developer tools and automated scripts
In LocalCloudLab, the external interface layer is represented by:
• DNS records (e.g., search.hershkowitz.co.il)
• Public IPs assigned by MetalLB
• Envoy Gateway listeners on ports 80 and 443
This layer determines how traffic is exposed, secured, and routed.
1.6.2 Gateway and Ingress Layer¶
Envoy Gateway is the heart of external traffic delivery. It handles:
• HTTP and HTTPS protocols
• Routing rules
• Virtual host definitions
• Path-based routing
• TLS termination via certificates issued by cert-manager
• Advanced filters, rewrites, caching policies (future options)
Envoy Gateway is more modern and flexible than traditional ingress controllers such as NGINX Ingress. It uses the Kubernetes Gateway API standard, promoting consistent, future-proof patterns.
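As an illustration of the Gateway API pattern, a Gateway plus HTTPRoute pair for the Search API might be sketched as follows. The gatewayClassName and certificate Secret name are assumptions; the object names follow section 1.8:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main-gateway
  namespace: envoy-gateway-system
spec:
  gatewayClassName: envoy-gateway   # assumed; check `kubectl get gatewayclass`
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: search.hershkowitz.co.il
      tls:
        mode: Terminate
        certificateRefs:
          - name: search-tls        # hypothetical Secret managed by cert-manager
      allowedRoutes:
        namespaces:
          from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: search-route
  namespace: kg-search
spec:
  parentRefs:
    - name: main-gateway
      namespace: envoy-gateway-system
  hostnames:
    - search.hershkowitz.co.il
  rules:
    - backendRefs:
        - name: search-api          # Kubernetes Service in kg-search
          port: 80
```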
1.6.3 Control Plane (Kubernetes API)¶
The Kubernetes control plane manages the entire cluster's operation. In LocalCloudLab’s k3s:
• kube-apiserver manages API requests.
• etcd (or SQLite in k3s single-node) stores cluster state.
• scheduler assigns pods to nodes.
• controller-manager ensures workloads match their desired state.
• cloud-controller-manager functionality is replaced by built-in k3s logic.
The control plane reacts automatically whenever:
• Deployments change
• Pods crash or restart
• Services are created or deleted
• Configurations update
• TLS certificates change
• Infrastructure scales
1.6.4 Data Plane (Worker Components)¶
The data plane actually runs workloads:
• kubelet runs containers on the node.
• containerd manages container runtime execution.
• CoreDNS handles internal DNS resolution.
• CNI (Container Network Interface) provides networking between pods.
Applications such as Search API and Checkin API live here.
1.6.5 Internal Services Layer¶
Services provide stable network access to pods. A Service abstracts:
• Internal load balancing
• Pod-to-pod communication
• Service discovery via DNS names (e.g., search-api.kg-search.svc.cluster.local)
This layer hides implementation details (number of pods, IP addresses) behind DNS-based identities.
1.6.6 Observability Layer¶
Modern distributed systems cannot operate without observability. LocalCloudLab includes:
• Metrics (Prometheus)
• Dashboards (Grafana)
• Logs (Loki, Promtail, Seq)
• Distributed traces (Jaeger)
• Trace propagation via OpenTelemetry
Observability provides answers to:
• Why did a request fail?
• Which microservice was slow?
• What SQL queries were executed?
• How long did each part of a request take?
• Did errors originate from client, network, or code?
• Was the system overloaded at the time?
1.6.7 Backing Services Layer¶
LocalCloudLab includes optional but realistic data services:
• PostgreSQL — primary database
• Redis — cache/store for temporary or volatile data
• RabbitMQ — message broker for async communication
These services represent the majority of real-world backend architectures for .NET systems.
1.6.8 CI/CD and GitOps Layer¶
Automation is critical in a modern system. LocalCloudLab introduces:
• GitHub Actions — build and deploy applications
• ArgoCD (optional) — declarative GitOps pipeline
• Container registry (GHCR or Harbor)
With GitOps:
• Kubernetes becomes fully managed by version-controlled manifests.
• Every change is traceable.
• Rollbacks become trivial.
• Deployment drift is eliminated.
1.7 End-to-End Request Flow¶
Understanding how a single HTTP request flows through the system is essential. This view helps developers reason about:
• Latency
• Routing failures
• TLS issues
• Networking misconfigurations
• Correctness of Kubernetes manifests
Below is a detailed flow for a request to the Search API:
[1] User enters URL:
https://search.hershkowitz.co.il
[2] DNS resolves the domain to the public IP assigned by MetalLB.
[3] Request arrives at the server’s network interface.
[4] Envoy Gateway receives the request on port 443 (HTTPS).
[5] Envoy terminates TLS using the certificate provided by cert-manager.
[6] Envoy inspects hostname:
search.hershkowitz.co.il
and matches it to an HTTPRoute bound to the main Gateway.
[7] Envoy forwards the request to the Kubernetes Service:
search-api.kg-search.svc.cluster.local
[8] The Service load-balances the request to one of the running pods.
[9] Search API (.NET) processes the request:
• Validates input
• Calls Redis or PostgreSQL
• Triggers business logic
• Emits logs, metrics, and traces
[10] OpenTelemetry SDK exports traces:
• To OTel Collector
• Then to Jaeger
[11] Serilog writes logs:
• To Seq (structured logs)
• To console (for Promtail → Loki ingestion)
[12] Prometheus scrapes metrics from pods and system components.
[13] Search API returns a response to Envoy Gateway.
[14] Envoy Gateway returns the response to the client.
Understanding this chain helps isolate issues quickly. For example:
• TLS misconfig? Steps 4–5.
• Wrong routing? Step 6.
• Pod unreachable? Steps 7–8.
• Exception thrown? Step 9.
• Missing traces? Steps 10–11.
• Slow metrics? Step 12.
1.8 Naming Conventions in LocalCloudLab¶
Consistent naming is essential in scalable systems. LocalCloudLab uses clean conventions to simplify maintenance.
1.8.1 Namespace naming¶
kg-search
kg-checkin
monitoring
logging
data
envoy-gateway-system
cert-manager
1.8.2 Service and Deployment naming¶
search-api
checkin-api
postgres
redis
rabbitmq
1.8.3 ConfigMap and Secret naming¶
search-config
search-secrets
postgresql-credentials
rabbitmq-config
1.8.4 Gateway and Routing objects¶
main-gateway
search-route
checkin-route
1.8.5 Hostnames¶
search.hershkowitz.co.il
checkin.hershkowitz.co.il
These conventions reflect real-world standards used across cloud environments.
1.9 Requirements for Running LocalCloudLab¶
Hardware, software, and networking requirements include:
1.9.1 Server hardware¶
Minimum:
• 4 vCPU
• 16 GB RAM
• 120 GB SSD
Recommended:
• 8+ vCPU
• 32 GB RAM
• 250+ GB NVMe SSD
1.9.2 Networking¶
• Static public IP
• Ability to create DNS A records
• Open ports 22, 80, 443
1.9.3 Software prerequisites¶
On server:
• Ubuntu 22.04
• Docker (optional)
• k3s
• Helm
On Windows development machine:
• Visual Studio 2022
• Docker Desktop (optional)
• kubectl
• Git CLI
(End of Part 2. More will be appended.)
LocalCloudLab – Section 01: Overview (Part 3)¶
1.10 Deep Dive: Interaction Between Core Components¶
A cloud-native system is a living organism where each subsystem communicates, reacts, fails, recovers, and self-heals in predictable ways—if designed correctly. LocalCloudLab models this behavior at a smaller scale, making every interaction visible and understandable.
Below is a detailed exploration of how LocalCloudLab’s core components work together.
1.10.1 DNS → Gateway → Kubernetes → Pod Flow¶
The end-to-end path for any request begins outside the cluster:
• The client queries DNS.
• DNS resolves domain → the MetalLB-assigned IP for the Gateway.
• The request reaches Envoy Gateway.
• Envoy validates hostnames, TLS certificates, and routing rules.
• Envoy forwards the request to the appropriate Kubernetes Service.
• The Service routes the request to a Pod (via kube-proxy, in iptables or IPVS mode).
• The Pod processes the request.
• Logs, traces, and metrics are generated and exported.
• The response flows backward through the same chain.
Each step is observable via logs or metrics, making LocalCloudLab ideal for debugging distributed issues.
1.10.2 How MetalLB Assigns an External IP¶
MetalLB watches Kubernetes Services of type LoadBalancer:
• When a new LoadBalancer Service is created (e.g., the Envoy Gateway service), MetalLB observes it through the Kubernetes API.
• It allocates an external IP from the configured pool.
• It announces the IP via ARP (L2 mode) to the local network.
• The server now effectively “owns” that IP on the LAN.
This behavior simulates how cloud providers such as AWS attach public IPs to their managed load balancers.
1.10.3 How Envoy Gateway Discovers Kubernetes Routes¶
Envoy Gateway monitors objects defined under Kubernetes Gateway API:
• Gateways
• HTTPRoutes
• TLSRoutes (if used)
• Backend policies (e.g., Envoy Gateway’s BackendTrafficPolicy)
• Service objects
When a new HTTPRoute is created:
• Envoy validates the route’s parentRefs and hostnames.
• It builds routing tables dynamically.
• It updates its listener configuration.
• Envoy reloads config without downtime.
This dynamic config system is what makes Envoy a highly reliable gateway.
1.11 Kubernetes Core Concepts (Deep Explanation)¶
Understanding Kubernetes fundamentals is crucial before diving deeper into deployments, observability, or CI/CD pipelines. Below are the main conceptual building blocks.
1.11.1 Pods: The Smallest Compute Unit¶
A Pod represents:
• One or more tightly coupled containers
• A shared network namespace (one IP per pod)
• Shared storage volumes
• A unit of deployment, restart, scaling
Pods are not designed to be created manually for production. Instead, you use controllers (Deployments, StatefulSets) that maintain pod lifecycle automatically.
1.11.2 Deployments and ReplicaSets¶
Deployments define:
• Desired number of replicas
• Container images
• Environment variables / secrets
• Labels and selectors
• Pod template
Deployments manage ReplicaSets, and ReplicaSets manage Pods. This layering provides:
• Rollouts
• Rollbacks
• Zero-downtime updates
• Scaling behavior
Example failures solved by Deployments:
• If a Pod crashes → it is recreated.
• If a node reboots → Pods reschedule.
• If you push a new image → rollout begins.
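A minimal Deployment sketch for the Search API ties these ideas together. The image path, port, and resource numbers are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search-api
  namespace: kg-search
spec:
  replicas: 2                     # desired state: two Pods at all times
  selector:
    matchLabels:
      app: search-api
  template:
    metadata:
      labels:
        app: search-api           # must match the selector above
    spec:
      containers:
        - name: search-api
          image: ghcr.io/example/search-api:latest   # hypothetical registry path
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 512Mi
```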
1.11.3 Services: Stable Network Endpoints¶
A Kubernetes Service provides:
• A stable virtual IP (ClusterIP)
• Load balancing across Pod replicas
• DNS resolution inside the cluster
• A contract between consumers and providers
Types of Services:
ClusterIP → internal-only access
NodePort → exposes ports on all nodes (rarely needed)
LoadBalancer → integrates with MetalLB for external IPs
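The Service types above can be illustrated with a sketch for the Search API. Port numbers are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: search-api
  namespace: kg-search
spec:
  type: ClusterIP        # the default; change to LoadBalancer for a MetalLB IP
  selector:
    app: search-api      # selects Pods carrying this label
  ports:
    - port: 80           # Service port consumers connect to
      targetPort: 8080   # container port inside the Pod
```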
1.11.4 ConfigMaps and Secrets¶
ConfigMaps → non-sensitive configuration (YAML, JSON, strings)
Secrets → sensitive configuration (passwords, tokens, connection strings)
In LocalCloudLab, Secrets may store:
• PostgreSQL passwords
• Redis authentication
• RabbitMQ credentials
• API keys for external services
Best practice: never commit Secrets to Git; Kubernetes only base64-encodes Secret values, it does not encrypt them. Use Sealed Secrets or an external secrets operator when secrets must be stored in a repository.
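A sketch of such a Secret, using stringData so Kubernetes performs the base64 encoding on creation. The names follow section 1.8; the values are placeholders and must never be committed with real data:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-credentials
  namespace: data
type: Opaque
stringData:                       # plain values; stored base64-encoded
  POSTGRES_USER: app_user         # placeholder
  POSTGRES_PASSWORD: change-me    # placeholder
```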
1.11.5 Namespaces¶
Namespaces isolate groups of resources logically. LocalCloudLab uses:
kg-search
kg-checkin
monitoring
logging
data
default
envoy-gateway-system
cert-manager
Namespaces help avoid naming collisions, apply security policies, and group workloads by domain.
1.11.6 Kubernetes DNS¶
Every Service gets a DNS name:
service-name.namespace.svc.cluster.local
Example:
search-api.kg-search.svc.cluster.local
This makes inter-service communication straightforward.
1.12 Networking Foundations¶
Networking is one of the most misunderstood parts of Kubernetes. LocalCloudLab helps clarify these concepts because everything happens in one node—making it easier to trace.
1.12.1 Pod Networking¶
Each Pod receives:
• A virtual Ethernet interface
• An IP address from the Pod CIDR range
• Routing rules injected by the network plugin
Pod-to-pod communication flows through virtual interfaces without NAT; the Kubernetes network model requires that Pods can reach each other directly by IP.
1.12.2 Service Networking¶
Services receive IPs from the Service CIDR.
The cluster’s networking is defined by:
• Pod CIDR range
• Service CIDR range
These ranges must not overlap with your LAN or VPN.
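For reference, k3s exposes both ranges in its configuration file; the values below are the k3s defaults, shown as a sketch rather than a required change:

```yaml
# /etc/rancher/k3s/config.yaml (sketch; these are the k3s defaults)
cluster-cidr: "10.42.0.0/16"    # Pod CIDR
service-cidr: "10.43.0.0/16"    # Service CIDR
```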
1.12.3 How Envoy Reaches Pods¶
Envoy does not connect directly to Pods. Instead, it connects to:
search-api.kg-search.svc.cluster.local
The kube-proxy component handles the translation:
• Maps Service IP → Pod IP
• Balances traffic evenly
• Removes dead endpoints automatically
1.12.4 Health Probes¶
Every Pod should define:
readinessProbe (is the Pod ready to serve traffic?)
livenessProbe (should the Pod be restarted?)
Without probes, Kubernetes may route traffic prematurely.
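A probe sketch for a .NET container, shown as a fragment of a Pod template. The health endpoint paths, port, and timings are assumptions:

```yaml
# Fragment of a Deployment's pod template, not a complete manifest.
containers:
  - name: search-api
    image: ghcr.io/example/search-api:latest   # hypothetical image
    readinessProbe:                # gate traffic until the app is ready
      httpGet:
        path: /healthz/ready       # assumed endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                 # restart the container if this fails
      httpGet:
        path: /healthz/live        # assumed endpoint
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```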
1.13 Observability Philosophy (The Golden Triangle)¶
Modern DevOps relies on three pillars:
Logs – What happened?
Metrics – How is the system behaving?
Traces – Why did it happen?
LocalCloudLab implements:
• Logs → Loki + Seq
• Metrics → Prometheus
• Traces → Jaeger + OTel
1.13.1 Logs¶
Logs reveal:
• Exceptions
• Request details
• Background job behavior
• Pod events
Promtail ships logs → Loki, while .NET sends structured logs → Seq.
1.13.2 Metrics¶
Prometheus scrapes:
• CPU, RAM
• Pod restarts
• HTTP latency
• Database query durations (if instrumented)
Metrics answer: Is the system healthy?
1.13.3 Traces¶
Traces follow a request end-to-end across services.
In LocalCloudLab:
ASP.NET Core → OTel SDK → OTel Collector → Jaeger
Traces answer: Where is the slowdown?
1.14 Production Parallels¶
LocalCloudLab is not a toy. It mirrors real environments:
AWS EKS → k3s
AWS ELB → MetalLB
AWS ACM → cert-manager
AWS ALB → Envoy Gateway
CloudWatch → Grafana/Loki
X-Ray → Jaeger
RDS → PostgreSQL
ElastiCache → Redis
SQS → RabbitMQ
This makes the environment ideal for training, prototyping, and DevOps practice.
(End of Part 3 — more will be appended.)
LocalCloudLab – Section 01: Overview (Part 4)¶
1.15 Reliability & Failure Behavior in LocalCloudLab¶
In a distributed system, reliability is not an optional luxury—it is a necessity. Even in a single-node environment like LocalCloudLab, failure modes occur regularly. Understanding how the system behaves under these failures is foundational to becoming proficient in DevOps, SRE, and cloud-native engineering.
Below are the core reliability mechanisms LocalCloudLab demonstrates.
1.15.1 Pod Crashes and Automatic Restarts¶
Pods may crash for many reasons:
• Uncaught exceptions in .NET applications
• Memory leaks
• Out-of-memory (OOMKill)
• Misconfigured environment variables
• Failing health probes
• External dependency timeouts (e.g., PostgreSQL unavailable)
Kubernetes responds by:
• Restarting the Pod automatically
• Logging the cause in `kubectl describe pod`
• Recording events in the namespace
• Updating ReplicaSet status
• Maintaining the desired replica count (e.g., always 1 or 3)
This self-healing behavior is one of the biggest strengths of Kubernetes and is demonstrated clearly in LocalCloudLab.
1.15.2 Node Pressure and Eviction Scenarios¶
When the node runs out of:
• Memory
• Disk space
• CPU cycles
• Inodes
Kubernetes begins to evict Pods to protect system stability.
Example eviction reasons:
• Evicted: The node had insufficient memory.
• Evicted: The node encountered disk pressure.
This is critically important to understand, as such events can cause service disruptions.
In LocalCloudLab, because everything runs on one machine, resource planning and monitoring become extremely visible and educational.
1.15.3 Gateway and Ingress Reliability¶
Envoy Gateway operates independently of application workloads:
• If Search API pods restart → Envoy updates routes dynamically.
• If a Service is unavailable → Envoy returns 503s with diagnostic details.
• If TLS certificates are rotated → Envoy automatically reloads new certs.
This decoupling ensures the gateway remains stable even when internal services experience instability.
1.15.4 Observability During Failures¶
The LocalCloudLab observability stack reveals failures transparently:
• Loki → container logs show specific errors
• Seq → structured logs expose application logic failures
• Prometheus → CPU/RAM spikes, query durations, pod restarts
• Jaeger → slow traces or missing spans
• Grafana → dashboards aggregating all components
This makes LocalCloudLab a perfect environment to simulate incident response procedures.
1.16 The Role of Automation and Declarative Design¶
Kubernetes and modern DevOps tools rely on automation principles that ensure consistency, predictability, and safety.
1.16.1 Declarative vs Imperative¶
Imperative approach:
“Create a pod now with these settings.”
Declarative approach:
“The system should always have 2 instances of this application.”
Kubernetes reconciles the desired state with the actual state continuously.
1.16.2 Control Loops and Reconciliation¶
Many Kubernetes components operate as “loops”:
Controller:
“Should there be a Deployment with 2 Pods? Yes.”
“Are there 2 Pods running? No.”
→ Create Pods until desired state is met.
cert-manager:
“Should this certificate be valid? Yes.”
“Is the certificate expired? Yes.”
→ Renew certificate.
HorizontalPodAutoscaler:
“Is CPU above threshold? Yes.”
→ Increase replicas.
This automated behavior simplifies operational overhead dramatically.
1.16.3 Eliminating Configuration Drift¶
With GitOps (ArgoCD), the system reads configuration from Git. If the cluster drifts (e.g., manual kubectl apply), ArgoCD self-corrects by reapplying the expected manifests.
This gives you:
• Auditable history
• Rollback capability
• Immutable infrastructure patterns
1.17 Layered Security Architecture¶
LocalCloudLab is designed with a layered defense model inspired by real-world cloud deployments.
1.17.1 External → Internal Trust Boundaries¶
Traffic entering the environment should cross several layers:
• TLS termination at Envoy
• Routing validation via HTTPRoutes
• Namespace partitioning in Kubernetes
• Pod-level security contexts
• RBAC permissions for API access
• NetworkPolicies limiting service communication
1.17.2 Secret Management¶
Secrets include:
• Database usernames/passwords
• RabbitMQ credentials
• Redis passwords
• API tokens
• TLS private keys
Best practices:
• Store sensitive values in Kubernetes Secrets
• Never store plaintext secrets in Git
• Use encrypted systems (e.g., Sealed Secrets or External Secrets Operator)
1.17.3 TLS and Encryption¶
cert-manager ensures:
• Automated issuance of certificates
• Automated renewal
• Trusted connections between clients and gateway
Future expansions could include:
• mTLS between services
• Service mesh (e.g., Istio, Linkerd)
1.17.4 NetworkPolicies¶
These allow:
• Denying all cross-namespace traffic by default
• Allowing only approved flows (e.g., Search API → PostgreSQL)
• Protecting backing services from unauthorized access
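A sketch of the default-deny plus explicit-allow pattern for the data namespace. The label selectors are assumptions:

```yaml
# Deny all ingress to Pods in the data namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: data
spec:
  podSelector: {}          # empty selector = all Pods in the namespace
  policyTypes:
    - Ingress
---
# Allow only the Search API to reach PostgreSQL on its port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-search-to-postgres
  namespace: data
spec:
  podSelector:
    matchLabels:
      app: postgres        # assumed label on the PostgreSQL Pods
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kg-search
          podSelector:
            matchLabels:
              app: search-api
      ports:
        - port: 5432
```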
Security must be intentional, not accidental.
1.18 Scaling Considerations¶
Although LocalCloudLab runs on a single-node server, scaling concepts still apply.
1.18.1 Vertical vs Horizontal Scaling¶
Vertical:
• Increase CPU/RAM on the server
• Resize pods with resource requests/limits
Horizontal:
• Increase replicas of apps
• Use load balancing across pods
1.18.2 Horizontal Pod Autoscaling (HPA)¶
HPA can scale based on:
• CPU usage
• Memory usage
• Custom metrics
For example:
search-api:
scale to 3 replicas when CPU > 70%
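The pseudocode above could be expressed as an autoscaling/v2 manifest roughly like this. It assumes a metrics source such as the metrics-server that k3s bundles:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: search-api
  namespace: kg-search
spec:
  scaleTargetRef:                 # the Deployment to scale
    apiVersion: apps/v1
    kind: Deployment
    name: search-api
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add replicas when average CPU > 70%
```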
1.18.3 Database Scaling¶
PostgreSQL in a single-node cluster cannot be horizontally scaled easily. Future enhancements include:
• Streaming replicas
• External managed databases
• Sharding strategies
1.18.4 Gateway Scaling¶
Envoy Gateway can be scaled by increasing its replica count. MetalLB continues to direct traffic to all gateway pods.
1.18.5 Multi-node k3s Expansion¶
LocalCloudLab supports expansion to multi-node clusters, covered later in Section 21.
1.19 Designing Applications for LocalCloudLab¶
Your services (Search API, Checkin API) should follow cloud-native design principles.
1.19.1 The 12-Factor Application Principles¶
Key principles include:
• Codebase in Git
• Config in environment variables or Secrets
• Backing services treated as attachable resources
• Stateless processes
• Logs emitted to stdout/stderr
• Declarative dependencies
1.19.2 Use of Environment Variables¶
Applications should read:
• Database connection strings
• Redis URL
• RabbitMQ credentials
• Service-specific configs
All injected via Kubernetes Deployments.
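A container fragment sketching this injection; the ConfigMap and Secret names follow section 1.8, while the keys are assumptions. Note that .NET maps double underscores in environment variable names to nested configuration sections:

```yaml
# Fragment of a Deployment's pod template, not a complete manifest.
containers:
  - name: search-api
    image: ghcr.io/example/search-api:latest   # hypothetical image
    envFrom:
      - configMapRef:
          name: search-config       # non-sensitive settings
      - secretRef:
          name: search-secrets      # sensitive settings
    env:
      - name: ConnectionStrings__Postgres   # maps to ConnectionStrings:Postgres
        valueFrom:
          secretKeyRef:
            name: postgresql-credentials
            key: connection-string  # assumed key name
```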
1.19.3 Retry, Timeout, and Circuit Breakers¶
Distributed systems fail often. Implement:
• Retry policies
• Timeout policies
• Circuit breakers (Polly for .NET)
• Exponential backoff strategies
1.19.4 Idempotency and Safe Operations¶
Idempotent endpoints:
• Prevent duplicate writes
• Improve resiliency
• Ensure safe recovery after failures
1.19.5 API Versioning Strategy¶
Always design APIs with:
/v1/search
/v2/search
This avoids breaking changes downstream.
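With the Gateway API, such versioning can be sketched as path-based rules in an HTTPRoute; the v2 backend name is an assumption:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: search-route
  namespace: kg-search
spec:
  parentRefs:
    - name: main-gateway
      namespace: envoy-gateway-system
  hostnames:
    - search.hershkowitz.co.il
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /v1/search
      backendRefs:
        - name: search-api        # current service
          port: 80
    - matches:
        - path:
            type: PathPrefix
            value: /v2/search
      backendRefs:
        - name: search-api-v2     # hypothetical next-version deployment
          port: 80
```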
1.19.6 Logging Strategy¶
Logs should be:
• Structured (JSON)
• Consistent
• Correlated with TraceId and SpanId
• Useful for debugging distributed flows
1.20 Summary of Section 1¶
Section 1 provided a comprehensive foundation for LocalCloudLab, including:
• Architectural overview
• Layered system explanation
• Interaction between components
• Networking fundamentals
• Observability philosophy
• Security considerations
• Application design principles
• Real-world production parallels
You now have the conceptual framework necessary to understand the installation, configuration, and operation of the full LocalCloudLab environment.
In Section 2, we begin the hands-on journey with setting up a fresh Linux server, implementing security best practices, installing necessary tools, and preparing the environment for Kubernetes (k3s).
(End of Section 01 — Complete)