PostgreSQL – Full Guide¶
7.1 Introduction¶
PostgreSQL is the primary relational database engine used in LocalCloudLab. It provides:
• ACID-compliant transactions
• Rich SQL capabilities
• JSON support
• Full-text search
• Strong indexing capabilities
• Excellent performance in microservice architectures
• Native compatibility with .NET via Npgsql
In this section, we will cover:
• Installing PostgreSQL on Kubernetes (using Helm)
• Understanding the storage model and persistence
• Connecting microservices (.NET APIs) to PostgreSQL
• Running schema migrations safely
• Monitoring database health and performance
• Backups and restore strategies
• High-level failover considerations
• Best practices for production-like deployments
By the end of Section 7, your database layer will be fully operational, persistent, observable, and integrated with the LocalCloudLab cluster.
7.2 PostgreSQL Architecture in LocalCloudLab¶
Your environment runs single-node k3s, so PostgreSQL is deployed:
• As a StatefulSet in Kubernetes
• With a PersistentVolumeClaim for data durability
• Behind a ClusterIP service within the cluster
The typical flow:
+-------------------+
| .NET API |
| (Search / Checkin)|
+---------+---------+
|
| Npgsql (pooling, SSL, tracing)
v
+---------+---------+
| PostgreSQL |
| StatefulSet + PVC |
+-------------------+
|
| Persistent storage (local-path on k3s)
v
+-------------------+
| Disk / Volume |
+-------------------+
Why PostgreSQL?¶
✔ Stable and proven
✔ Excellent for relational workloads
✔ Very strong ecosystem with .NET (Npgsql, EF Core)
✔ Easy horizontal read-scaling (replicas) if needed later
✔ Strong indexing & performance tooling
✔ Easier migration path from MS SQL than many other engines
7.3 Installing PostgreSQL with Helm (Bitnami Chart)¶
We use the Bitnami PostgreSQL Helm chart because it is:
• Production-grade
• Actively maintained
• Well-documented
• Capable of replication and backup strategies
• Compatible with k3s and local-path storage
All commands below run against your k3s cluster from either the Linux server or your Windows machine using kubectl with the correct kubeconfig.
7.3.1 Add the Helm repo¶
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
7.3.2 Create namespace “database”¶
kubectl create namespace database
7.3.3 Install PostgreSQL¶
helm install postgres bitnami/postgresql -n database \
  --set auth.postgresPassword=YourStrongPassword \
  --set primary.persistence.size=20Gi \
  --set primary.persistence.storageClass="local-path"
Parameters:
• auth.postgresPassword
The password for the postgres superuser. Use a strong, unique value.
• primary.persistence.size
The PersistentVolumeClaim size. 20Gi is fine for a lab; adjust as needed.
• primary.persistence.storageClass
For k3s, the default storage class is often "local-path".
You can verify with:
kubectl get storageclass
Check installation:
kubectl get pods -n database
kubectl get pvc -n database
kubectl get svc -n database
Expected service name:
postgres-postgresql.database.svc.cluster.local
7.3.4 Understanding StatefulSet and PVC¶
PostgreSQL is deployed as a StatefulSet:
• Pod name is stable: postgres-postgresql-0
• PersistentVolumeClaim keeps its data even if the Pod restarts
• Data directory lives under /bitnami/postgresql inside the container
You can inspect the pod:
kubectl describe statefulset postgres-postgresql -n database
7.4 Connecting .NET APIs to PostgreSQL¶
Your .NET microservices (Search API, Checkin API) talk to PostgreSQL using the Npgsql driver (and often EF Core).
7.4.1 Connection string format¶
Inside the cluster, the hostname is:
postgres-postgresql.database.svc.cluster.local
A typical connection string:
Host=postgres-postgresql.database.svc.cluster.local;Port=5432;
Database=appdb;Username=postgres;Password=YourStrongPassword;
If you don’t use SSL inside the cluster, you can explicitly set:
SSL Mode=Disable
Example full string in appsettings.json:
"ConnectionStrings": {
"Postgres": "Host=postgres-postgresql.database.svc.cluster.local;Port=5432;Database=appdb;Username=postgres;Password=YourStrongPassword;Pooling=true;Minimum Pool Size=5;Maximum Pool Size=50;SSL Mode=Disable"
}
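Rather than committing the password in appsettings.json, you can assemble the same connection string from environment variables, which Kubernetes can inject from a Secret. A minimal shell sketch — the variable names are illustrative, not part of the chart:

```shell
# Assemble the Npgsql connection string from environment variables.
# In the cluster these would be injected from a Secret (secretKeyRef);
# the defaults below mirror the values used elsewhere in this section.
DB_HOST="${DB_HOST:-postgres-postgresql.database.svc.cluster.local}"
DB_NAME="${DB_NAME:-appdb}"
DB_USER="${DB_USER:-postgres}"
DB_PASSWORD="${DB_PASSWORD:-YourStrongPassword}"

CONN="Host=${DB_HOST};Port=5432;Database=${DB_NAME};Username=${DB_USER};Password=${DB_PASSWORD};Pooling=true;Minimum Pool Size=5;Maximum Pool Size=50;SSL Mode=Disable"
echo "$CONN"
```

In .NET, an environment variable named ConnectionStrings__Postgres (double underscore) overrides the appsettings.json value automatically.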
7.4.2 Registering DbContext in .NET¶
Example for Search API:
builder.Services.AddDbContext<SearchDbContext>(options =>
options.UseNpgsql(builder.Configuration.GetConnectionString("Postgres")));
Typical pattern for Program.cs:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDbContext<SearchDbContext>(options =>
options.UseNpgsql(builder.Configuration.GetConnectionString("Postgres")));
var app = builder.Build();
app.MapGet("/health", () => "ok from Search");
app.Run();
7.4.3 Performance recommendations for connection usage¶
• Use connection pooling (enabled by default in Npgsql).
• Set MaxPoolSize and MinPoolSize explicitly in the connection string.
• Always use async versions of EF Core / Npgsql methods.
• Avoid long-lived transactions.
• Only select the columns you need (avoid SELECT * in hot paths).
• Close/Dispose DbContext instances quickly (scoped per-request).
7.4.4 Testing connectivity from inside the cluster¶
Open a shell in a pod (e.g., Search API or any utility pod):
kubectl exec -it <search-pod-name> -n search -- bash
Inside the container:
apt-get update && apt-get install -y postgresql-client
psql -h postgres-postgresql.database.svc.cluster.local -U postgres -d appdb
If you can connect and run:
SELECT 1;
then networking, DNS, and authentication are correct.
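If you'd rather not install packages into an application container, a throwaway client Pod does the same job. A sketch — the Pod name and the postgres:16 image are arbitrary choices:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: psql-client
  namespace: database
spec:
  restartPolicy: Never
  containers:
    - name: psql
      image: postgres:16
      command: ["sleep", "3600"]   # keep the pod alive so you can exec into it
```

Then run: kubectl exec -it psql-client -n database -- psql -h postgres-postgresql -U postgres -d appdb (and delete the pod when done).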
7.5 Database Schema Migrations¶
In a microservices environment, schema management must be predictable, automated, and safe. LocalCloudLab uses EF Core migrations for schema evolution.
This section covers:
• How migrations work
• How to generate and apply them
• Zero-downtime migration patterns
• CI/CD considerations
7.5.1 EF Core migrations workflow¶
Typical local workflow:
1. Modify your DbContext / entity classes.
2. Generate a new migration:
dotnet ef migrations add AddSearchHistoryTable
3. Examine the generated migration to verify correctness.
4. Apply migrations locally:
dotnet ef database update
5. Run tests.
6. Commit the migration files to Git.
Your migration file contains two methods:
• Up() → defines changes to apply
• Down() → defines changes to revert
Rule: All schema changes MUST go through migrations and MUST be committed to Git.
7.5.2 Applying migrations inside Kubernetes¶
You have two main strategies:
Strategy A – Apply migrations on app startup (simple)
Inside Program.cs:
using (var scope = app.Services.CreateScope())
{
var db = scope.ServiceProvider.GetRequiredService<SearchDbContext>();
db.Database.Migrate();
}
Pros:
✔ Easy to implement.
✔ Works fine for single-node, low-traffic dev environments (like LocalCloudLab).
Cons:
✗ Risky in high-traffic or multi-replica setups, where several replicas may race to apply migrations at startup.
✗ Complicates rollbacks.
Strategy B – Apply migrations via Kubernetes Job (recommended for larger setups)
Create a Job that:
• Runs the .NET app in "migration" mode
• Applies migrations, then exits
This is more explicit and easier to control in CI/CD pipelines.
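Strategy B can be sketched as a Job manifest. The image name, the --migrate-only flag, and the Secret are assumptions — your app would need a mode that runs Migrate() and exits:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: search-migrate
  namespace: search
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.local/search-api:latest   # hypothetical image
          args: ["--migrate-only"]                  # hypothetical flag: apply migrations, then exit
          env:
            - name: ConnectionStrings__Postgres
              valueFrom:
                secretKeyRef:
                  name: search-db                   # hypothetical Secret holding the connection string
                  key: connection-string
```

Your CI/CD pipeline runs this Job to completion before rolling out the new application Deployment.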
7.5.3 Zero-downtime migration patterns¶
To avoid breaking production traffic (even in a lab you should practice this), follow these patterns:
✔ Add fields in a backwards-compatible way:
• Add new columns as nullable
• Add new tables without removing old ones
✔ Stepwise migrations:
1. Add new column (nullable).
2. Deploy application that writes to both old and new fields.
3. Backfill data into the new column.
4. Make new column non-nullable and update app to rely on it.
5. Remove old column once nothing uses it.
Avoid:
✗ Direct column renames without alias columns.
✗ Dropping columns still read by any service.
✗ Making new columns non-nullable without defaults or backfill.
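The stepwise pattern above can be sketched as plain DDL; the schema, table, and column names are illustrative:

```sql
-- Step 1: add the new column as nullable.
ALTER TABLE search.search_history ADD COLUMN normalized_query text;

-- Step 2 (deploying an app version that writes both columns) happens between these statements.

-- Step 3: backfill existing rows (batched in real systems; one pass here for brevity).
UPDATE search.search_history
SET normalized_query = lower(query)
WHERE normalized_query IS NULL;

-- Step 4: enforce the constraint once the backfill is complete.
ALTER TABLE search.search_history
  ALTER COLUMN normalized_query SET NOT NULL;

-- Step 5: drop the old column once no service reads it.
ALTER TABLE search.search_history DROP COLUMN query;
```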
7.5.4 Shared vs per-service schemas¶
Option A: Shared database, separate schemas per service (recommended here)
• Single PostgreSQL database instance.
• One schema per service: search, checkin, etc.
• No cross-schema foreign keys to prevent tight coupling.
Example DB layout:
appdb:
schema search
schema checkin
schema shared (optional)
Option B: Separate database per service
• More isolation, more complexity.
• Suitable for very large systems.
LocalCloudLab adopts Option A for simplicity and teaching value.
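A sketch of that layout in SQL — role names and passwords are illustrative (in EF Core, each service can target its schema via modelBuilder.HasDefaultSchema):

```sql
-- One schema per service, each owned by a dedicated login role.
CREATE ROLE search_svc LOGIN PASSWORD 'ChangeMe1';
CREATE ROLE checkin_svc LOGIN PASSWORD 'ChangeMe2';

CREATE SCHEMA search AUTHORIZATION search_svc;
CREATE SCHEMA checkin AUTHORIZATION checkin_svc;

-- Optional shared schema, readable by both services.
CREATE SCHEMA shared;
GRANT USAGE ON SCHEMA shared TO search_svc, checkin_svc;
```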
7.6 PostgreSQL Monitoring¶
Monitoring the health and performance of PostgreSQL is critical. PostgreSQL exposes rich metrics via:
• pg_stat_activity
• pg_stat_statements
• System catalogs
• Logs (slow queries, errors)
7.6.1 Enabling pg_stat_statements¶
pg_stat_statements tracks execution statistics of all SQL statements.
In the Bitnami chart, you can enable it by adding to values:
--set primary.extendedConfiguration="shared_preload_libraries='pg_stat_statements'"
Or by using a custom ConfigMap that appends:
shared_preload_libraries = 'pg_stat_statements'
After changing, you must restart PostgreSQL (Helm upgrade or Pod restart).
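After the restart, the extension still has to be created in the target database before its view returns data. A sketch — the column names below apply to PostgreSQL 13+ (older versions use total_time/mean_time):

```sql
-- Activate the extension in appdb:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 statements by cumulative execution time:
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```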
7.6.2 Install PostgreSQL Exporter¶
If you haven't already, add the prometheus-community Helm repo:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
Then install postgres-exporter:
helm install postgres-exporter prometheus-community/prometheus-postgres-exporter -n database \
  --set config.datasource.host=postgres-postgresql.database.svc.cluster.local \
  --set config.datasource.user=postgres \
  --set config.datasource.password=YourStrongPassword
This exposes a /metrics endpoint with:
• Connection counts
• Database size
• Cache hit stats
• Query timings (if stat_statements enabled)
• Deadlocks, temp files, etc.
7.6.3 Import Grafana dashboards¶
In Grafana, import community dashboards such as:
• PostgreSQL Overview (e.g. ID 9628)
• PostgreSQL Details (e.g. ID 455)
These dashboards will show:
• Database size and growth
• Connections per DB
• Query latency
• Index usage and scan types
7.6.4 Key metrics to watch¶
• deadlocks
• sequential scan rate vs. index scans
• index usage ratio
• buffer cache hit ratio (aim for > 95%)
• temp file usage (spills to disk)
• slow query counts
• active connections
• replication lag (if using replicas)
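Several of these can be checked directly with SQL. For example, a sketch of the buffer cache hit ratio per database, computed from the pg_stat_database view:

```sql
SELECT datname,
       round(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database
WHERE datname IS NOT NULL
ORDER BY cache_hit_pct NULLS LAST;
```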
7.7 Backups & Disaster Recovery¶
Backups are non-negotiable. Even in a lab environment, practice good backup hygiene to build habits.
You have two broad options:
• Logical backups (pg_dump)
• Physical backups (base backups + WAL)
7.7.1 Logical backups using pg_dump¶
Logical backups are easy to inspect and restore. For LocalCloudLab they are sufficient.
You can run pg_dump either from your machine (via port-forward) or from inside the pod.
Inside the PostgreSQL pod:
kubectl exec -it postgres-postgresql-0 -n database -- sh -c 'pg_dump -U postgres appdb > /tmp/appdb_backup.sql'
The sh -c wrapper matters: without it, the shell redirection runs on your local machine, and the file never lands inside the pod.
Then copy the file out if needed:
kubectl cp database/postgres-postgresql-0:/tmp/appdb_backup.sql ./appdb_backup.sql
Pros:
✔ Easy to read/edit.
✔ Great for small-to-medium DBs.
Cons:
✗ Slower for huge datasets.
✗ No point-in-time restore via WAL logs.
7.7.2 Physical backups¶
For completeness, PostgreSQL can be backed up using physical methods like:
pg_basebackup
Typically used when:
• Data volume is large.
• You require PITR (point-in-time recovery).
• You have replicas and WAL archiving.
For LocalCloudLab, logical backups are generally enough.
7.7.3 Automated backups with CronJob¶
You can schedule nightly backups using a Kubernetes CronJob.
Create k8s/database/backup-job.yaml:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
  namespace: database
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup
              image: postgres:16
              command: [ "/bin/sh", "-c" ]
              args:
                - pg_dump -h postgres-postgresql -U postgres appdb > /backup/appdb-$(date +%F).sql
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgres-postgresql
                      key: postgres-password
              volumeMounts:
                - name: backupdir
                  mountPath: /backup
          restartPolicy: OnFailure
          volumes:
            - name: backupdir
              hostPath:
                path: /var/backups/postgres
Apply it:
kubectl apply -f k8s/database/backup-job.yaml
Backups will appear under /var/backups/postgres on the node.
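The CronJob above accumulates one dump per night. A small retention sketch can prune old files, e.g. from a second CronJob or a cron entry on the node; the 7-day window and file pattern are assumptions:

```shell
# prune_backups DIR [DAYS]: delete appdb dump files older than DAYS (default 7).
prune_backups() {
  dir="$1"
  days="${2:-7}"
  find "$dir" -name 'appdb-*.sql' -type f -mtime +"$days" -delete
}

# Example: prune_backups /var/backups/postgres 7
```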
7.7.4 Restoring a logical backup¶
Copy the backup into the pod:
kubectl cp ./appdb_backup.sql database/postgres-postgresql-0:/tmp/appdb_backup.sql
Then restore:
kubectl exec -it postgres-postgresql-0 -n database -- psql -U postgres -d appdb -f /tmp/appdb_backup.sql
Warning: Restores should typically go into a fresh database, or happen after dropping and recreating the target database. In a real environment, always verify the restore plan on a non-production copy first.
7.8 Failover & High Availability¶
LocalCloudLab currently runs on a single node, so true high availability is not implemented. However, understanding HA patterns is important for future scaling.
Bitnami PostgreSQL supports replication architecture.
7.8.1 Enabling replication in the chart¶
You can install PostgreSQL in replicated mode via:
helm install postgres bitnami/postgresql -n database \
  --set architecture=replication \
  --set auth.postgresPassword=YourStrongPassword \
  --set auth.replicationPassword=AnotherStrongPassword \
  --set primary.persistence.size=20Gi \
  --set readReplicas.persistence.size=20Gi
This creates:
• 1 primary pod
• N read replica pods (configurable)
7.8.2 Failover behavior¶
In this basic Bitnami replication mode:
• If primary fails, replicas do not automatically become writable.
• Manual promotion is required (or an external tool).
For automatic failover, use:
• Patroni
• Stolon
• Other HA orchestration tools
These are outside the minimal LocalCloudLab scope but good for learning later.
7.8.3 Application connection patterns¶
When using replication, applications must consider:
• Reads vs writes:
- Writes → primary
- Reads → replica(s) (optional)
• Connection URLs:
- Primary service: postgres-postgresql-primary
- Read service: postgres-postgresql-read
For LocalCloudLab’s single-node target, you typically use only the primary.
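In replicated mode, the appsettings fragment from 7.4.1 would then carry two connection strings, one per service name (credentials as before; the read connection is optional):

```json
"ConnectionStrings": {
  "Postgres": "Host=postgres-postgresql-primary.database.svc.cluster.local;Port=5432;Database=appdb;Username=postgres;Password=YourStrongPassword",
  "PostgresRead": "Host=postgres-postgresql-read.database.svc.cluster.local;Port=5432;Database=appdb;Username=postgres;Password=YourStrongPassword"
}
```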
7.9 Summary of Section 7¶
In this section, you:
✔ Installed PostgreSQL using the Bitnami Helm chart
✔ Configured persistent storage via Kubernetes PVC
✔ Connected .NET microservices using Npgsql and EF Core
✔ Designed and applied EF Core database migrations
✔ Learned safe, zero-downtime migration strategies
✔ Enabled monitoring with postgres-exporter + Grafana dashboards
✔ Implemented backup and restore workflows using pg_dump
✔ Reviewed replication and high availability patterns
With PostgreSQL now fully operational and observable, LocalCloudLab’s data layer is complete.
In the next section (Section 08), you will:
• Install Redis
• Configure it as a cache and transient data store
• Use it for rate limiting, background work coordination, or Pub/Sub
• Integrate it with your .NET APIs to improve performance and reduce database load
(End of Section 07 — Complete)