# Staging Deployment (Phase 10-07, Task 2)

## Overview

This document describes the deployment of Gravl services to the Kubernetes staging environment.

## Prerequisites

- Staging namespace configured (see `setup-staging.sh`, Task 1)
- `kubectl` installed and configured for the staging cluster
- Docker images built and available in the registry or local cache
## Deployment Process

### 1. PostgreSQL StatefulSet

- Image: `postgres:15-alpine`
- Replicas: 1 (staging only)
- PVC: 10Gi volume for data persistence
- Health check: liveness and readiness probes using the `pg_isready` command
- Expected time: 10-30 seconds to reach the Ready state

```bash
kubectl get statefulsets -n gravl-staging
kubectl describe statefulset gravl-db -n gravl-staging
```
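The probe and storage settings above map onto the StatefulSet manifest roughly as follows. This is an illustrative sketch, not the actual `k8s/deployments/postgresql.yaml`; label names, the `postgres` user, and probe timings are assumptions.

```yaml
# Sketch of the relevant StatefulSet fields (values are assumptions)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gravl-db
  namespace: gravl-staging
spec:
  replicas: 1
  serviceName: gravl-db
  selector:
    matchLabels:
      component: database
  template:
    metadata:
      labels:
        component: database
    spec:
      containers:
        - name: postgres
          image: postgres:15-alpine
          livenessProbe:
            exec:
              command: ["pg_isready", "-U", "postgres"]
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            exec:
              command: ["pg_isready", "-U", "postgres"]
            periodSeconds: 5
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  # volumeClaimTemplates gives each replica its own PVC (the 10Gi volume above)
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```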
### 2. Backend Deployment

- Image: `gravl-backend:latest` (from registry or local cache)
- Replicas: 1 (staging only; production uses 3)
- Port: 3001 (HTTP)
- Environment variables: sourced from ConfigMap and Secrets
- Health check: HTTP liveness probe on the `/api/health` endpoint
- Expected time: 5-15 seconds to reach the Ready state (after the database is ready)

```bash
kubectl get deployments -n gravl-staging
kubectl logs -f deployment/gravl-backend -n gravl-staging
```
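The container spec for the backend might look like the fragment below: the ConfigMap and Secret are injected wholesale via `envFrom`, and the liveness probe hits the health endpoint over HTTP. This is a sketch only; the names `gravl-config` and `gravl-secrets` and the probe timings are assumptions, not taken from the actual manifest.

```yaml
# Illustrative backend container spec (names and timings are assumptions)
containers:
  - name: backend
    image: gravl-backend:latest
    ports:
      - containerPort: 3001
    envFrom:
      - configMapRef:
          name: gravl-config    # assumed ConfigMap name
      - secretRef:
          name: gravl-secrets   # assumed Secret name
    livenessProbe:
      httpGet:
        path: /api/health
        port: 3001
      initialDelaySeconds: 5
      periodSeconds: 10
```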
### 3. Frontend Deployment

- Image: `gravl-frontend:latest` (from registry or local cache)
- Replicas: 1 (staging only; production uses 3)
- Port: 80 (HTTP)
- Content: served by an Nginx static file server
- Health check: HTTP liveness probe on the `/` endpoint
- Expected time: 3-10 seconds to reach the Ready state

```bash
kubectl get deployments -n gravl-staging
kubectl logs -f deployment/gravl-frontend -n gravl-staging
```
### 4. Ingress Configuration

- Host: `gravl-staging.homelab.local`
- TLS: not configured for staging (HTTP only)
- Routing:
  - `/api/*` → backend:3001
  - `/*` → frontend:80
- Annotations: CORS enabled, compression enabled

```bash
kubectl get ingress -n gravl-staging
kubectl describe ingress gravl-ingress -n gravl-staging
```
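The routing rules above could be expressed in an Ingress resource like this sketch. The `enable-cors` annotation is a real ingress-nginx annotation; the service names and path layout are assumptions, and gzip compression is typically enabled controller-wide (via the ingress-nginx ConfigMap) rather than per-Ingress.

```yaml
# Sketch of the Ingress routing (service names assumed, not from the real manifest)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gravl-ingress
  namespace: gravl-staging
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: gravl-staging.homelab.local
      http:
        paths:
          # /api/* is routed to the backend; everything else to the frontend
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: gravl-backend
                port:
                  number: 3001
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gravl-frontend
                port:
                  number: 80
```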
## Deployment Commands

### Option 1: Use the automation script

```bash
./scripts/deploy-staging.sh
```

### Option 2: Manual kubectl apply

```bash
# Deploy all services at once
kubectl apply -f k8s/deployments/postgresql.yaml \
  -f k8s/deployments/gravl-backend.yaml \
  -f k8s/deployments/gravl-frontend.yaml \
  -f k8s/deployments/ingress-nginx.yaml
```

Note: Replace the `gravl-prod` namespace with `gravl-staging` in the manifests.
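The namespace swap in the note above can be done mechanically with `sed` rather than by hand-editing each file. The snippet below is a self-contained sketch: `manifest.yaml` stands in for the real files under `k8s/deployments/`, and the rewritten output would then be applied with `kubectl apply -f manifest-staging.yaml`.

```shell
# Sketch: rewrite the namespace in a manifest for staging use.
# 'manifest.yaml' is a stand-in for the real files under k8s/deployments/.
cat > manifest.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: gravl-config
  namespace: gravl-prod
EOF

# Swap every namespace reference, writing a staging copy of the manifest
sed 's/namespace: gravl-prod/namespace: gravl-staging/g' \
  manifest.yaml > manifest-staging.yaml

cat manifest-staging.yaml
```

The same loop over `k8s/deployments/*.yaml` (or piping `sed` straight into `kubectl apply -f -`) avoids maintaining two diverging sets of manifests.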
## Verification

### Check pod status

```bash
kubectl get pods -n gravl-staging
kubectl describe pod <pod-name> -n gravl-staging
```

Expected output (all pods Ready 1/1):

```
NAME                            READY   STATUS    RESTARTS   AGE
gravl-db-0                      1/1     Running   0          2m
gravl-backend-xxxxxxxx-xxxxx    1/1     Running   0          1m
gravl-frontend-xxxxxxxx-xxxxx   1/1     Running   0          1m
```
### Check service connectivity

From inside the cluster (in a debug pod):

```bash
kubectl run -it --image=curlimages/curl:latest debug -n gravl-staging -- sh
curl http://gravl-backend:3001/api/health
curl http://gravl-frontend/
```

From outside the cluster:

```bash
curl http://gravl-staging.homelab.local/api/health
curl http://gravl-staging.homelab.local/
```
### Check logs

```bash
# Backend logs
kubectl logs -n gravl-staging -l component=backend

# Frontend logs
kubectl logs -n gravl-staging -l component=frontend

# PostgreSQL logs
kubectl logs -n gravl-staging -l component=database
```
## Troubleshooting

### Pod stuck in Pending

- Check node resources: `kubectl describe node <node-name>`
- Check PVC availability: `kubectl get pvc -n gravl-staging`

### Pod crashing (CrashLoopBackOff)

- Check logs from the previous container: `kubectl logs -n gravl-staging -p <pod-name>`
- Check resource limits: `kubectl describe pod <pod-name> -n gravl-staging`
- Verify secrets are applied: `kubectl get secrets -n gravl-staging`

### Service not accessible via Ingress

- Check Ingress status: `kubectl describe ingress gravl-ingress -n gravl-staging`
- Check DNS: `nslookup gravl-staging.homelab.local`
- Verify the Nginx Ingress Controller is running: `kubectl get pods -n ingress-nginx`
## Next Steps

- Run integration tests (Task 3)
- Set up monitoring (Task 4): Prometheus, Grafana, Loki
- Perform load testing (Task 5): k6 script to verify performance
- Production readiness review (Task 6): security, checklist, rollback procedures
## Success Criteria
✓ All pods (PostgreSQL, backend, frontend) running and Ready
✓ No pod restarts in the last 5 minutes
✓ Service-to-service communication verified
✓ Ingress accessible from outside cluster
✓ API health endpoint responds with 200 OK
Document Version: 1.0
Last Updated: 2026-03-04
Status: Task 2 Complete