Busy reworking the demo. I recently migrated the K3S instance
from Vultr to AWS. Both work fine, but I wanted to consolidate workflows
and put the unused RI credits I have in AWS to use. Nothing I run
on this instance incurs any significant network traffic cost.
I’m posting the draft as a non-draft to test the new website layout and
behavior across devices and viewport sizes.
Shane
## Fenced code block syntax highlighting test
```bash
# Comment
function ls_lh {
  base_dir="${1:-.}"
  for idx in $(seq 1); do
    ls -lh "${base_dir}" | cat
  done
}

echo "scale=512; 4*a(1);" | bc -l
```
## Objectives

- Identify scaling bottlenecks
- Interact with scaling levers
- Identify key reliability metrics
- Assess infrastructure performance efficiency
- Assess infrastructure cost efficiency
## Sections

- instance bringup
  - terraform unit
    - sizing (mem, disk), ami, network
  - terraform ops workflow (sketch below)
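
A minimal sketch of the terraform ops workflow I have in mind for the bringup section, assuming a single unit under `terraform/instance/` that exposes sizing, AMI, and network settings as variables; the paths, variable names, and the `public_ip` output are placeholders, not decisions.

```bash
# Hypothetical unit layout: terraform/instance/{main.tf,variables.tf,outputs.tf}
cd terraform/instance

terraform init                      # install providers, configure state backend
terraform fmt -check
terraform validate

# Review the plan before touching anything; the vars shown are example sizing knobs
terraform plan \
  -var 'instance_type=t3.small' \
  -var 'root_volume_gb=30' \
  -out=tfplan

terraform apply tfplan              # apply exactly the reviewed plan
terraform output -raw public_ip     # assumes the unit defines a "public_ip" output
```
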
- instance provisioning
  - ansible directory structure
  - ansible ops workflow (sketch below)
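
The ansible ops workflow would look roughly like the following on the command line; the inventory path and the instance-caddy playbook name are placeholders for whatever the directory-structure section settles on.

```bash
# Hypothetical layout: ansible/{inventory/hosts.ini,playbooks/instance-caddy.yml,roles/...}
cd ansible

ansible-inventory -i inventory/hosts.ini --list    # sanity-check the inventory
ansible all -i inventory/hosts.ini -m ping         # confirm SSH + Python reachability

# Dry run with a diff first, then apply for real
ansible-playbook -i inventory/hosts.ini playbooks/instance-caddy.yml --check --diff
ansible-playbook -i inventory/hosts.ini playbooks/instance-caddy.yml
```
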
- secrets management
  - access control, rotation, revocation
  - vault secrets
  - ci/cd secrets
  - sealed secrets (sketch below)
  - k8s secrets
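
For the sealed secrets item, roughly this flow, assuming the Bitnami sealed-secrets controller ends up installed in the cluster; the namespace, secret name, and key below are made up for illustration.

```bash
# Render a plain Kubernetes Secret manifest locally without applying it
kubectl create secret generic demo-credentials \
  --namespace demo \
  --from-literal=API_TOKEN=changeme \
  --dry-run=client -o yaml > secret.yaml

# Encrypt it against the controller's public key; only the in-cluster controller
# can decrypt it, so the SealedSecret is safe to commit for the ci/cd workflow
kubeseal --format yaml < secret.yaml > sealed-secret.yaml

kubectl apply -f sealed-secret.yaml
rm secret.yaml    # never commit the plaintext Secret
```
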
- observability
  - logging
    - cluster-level logging
  - metrics (install sketch below)
    - prometheus
    - grafana
    - influxdb
  - tracing
    - service mesh
    - jaeger
  - choosing k8s or standalone deployment
  - convergence using google hosted prometheus (multi-layer)
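
For the metrics sub-section, one hedged option is the community kube-prometheus-stack Helm chart, which bundles Prometheus and Grafana; the release name, namespace, and port-forward target are assumptions rather than a final choice (the chart names the Grafana service `<release>-grafana` by default).

```bash
# Assumes helm is installed and kubeconfig points at the k3s cluster
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

# Reach Grafana locally while evaluating dashboards
kubectl -n monitoring port-forward svc/monitoring-grafana 3000:80
```
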
- app scaling
  - manual
  - hpa (sketch below)
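
Manual scaling versus the HPA, in kubectl terms; the deployment name, namespace, and thresholds are placeholders (k3s ships metrics-server, which the HPA needs for CPU metrics).

```bash
# Manual: pin the replica count by hand
kubectl -n demo scale deployment/web --replicas=3

# HPA: track average CPU utilisation and scale between 2 and 10 replicas
kubectl -n demo autoscale deployment/web --cpu-percent=70 --min=2 --max=10

# Watch the autoscaler react while load is applied
kubectl -n demo get hpa web --watch
```
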
- cluster scaling
  - manual (sketch below)
  - node-autoscaler
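
For the manual path, a sketch of joining an extra k3s agent node by hand; the server address and token are placeholders, and the node-autoscaler path would replace this step entirely.

```bash
# On the new EC2 instance (brought up by the terraform unit above),
# join it to the existing k3s server as an agent node.
curl -sfL https://get.k3s.io | \
  K3S_URL="https://<k3s-server-private-ip>:6443" \
  K3S_TOKEN="<contents of /var/lib/rancher/k3s/server/node-token>" \
  sh -

# Back on the server, confirm the new node registered
kubectl get nodes -o wide
```
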
- load generation (example invocations below)
  - method-1-k6.sh (see ansible playbook for instance-caddy)
  - method-2-ab.sh (see ansible playbook for instance-caddy)
  - method-3-wrk.sh (see ansible playbook for instance-caddy)
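
The three methods map to invocations along these lines; the target URL, durations, and concurrency figures are placeholders, and the real parameters live in the instance-caddy playbook scripts.

```bash
TARGET="https://demo.example.com/"    # placeholder target

# method 1: k6, where script.js is whatever the playbook drops on the instance
k6 run --vus 20 --duration 60s script.js

# method 2: ApacheBench, 10k requests at concurrency 50
ab -n 10000 -c 50 "${TARGET}"

# method 3: wrk, 2 threads, 50 connections, 60 seconds
wrk -t2 -c50 -d60s "${TARGET}"
```
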