** This section was generated by Google Gemini: every error made while creating this RLO was tracked, and Gemini summarized the resulting list below.
1. GitLab Runner namespace missing
• Problem: helm install failed because the gitlab-runner namespace didn't exist
• Solution: microk8s kubectl create namespace gitlab-runner before running helm install
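The two steps in order, sketched; the chart name, release name, and values file are illustrative:

```shell
# The chart does not create its target namespace, so create it first
microk8s kubectl create namespace gitlab-runner

# Then install the runner into that namespace
microk8s helm install gitlab-runner gitlab/gitlab-runner \
  --namespace gitlab-runner \
  -f values.yaml
```

Helm 3 can also fold the first step into the second with the --create-namespace flag on helm install.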
2. Kaniko couldn't reach the MicroK8s built-in registry (localhost:32000)
• Problem: localhost:32000 inside a pod refers to the pod itself, not the host
• Solution: Eventually abandoned the built-in registry entirely; its NodePort is implemented purely in iptables rules with no real TCP socket listening, which makes it unreachable from pods on VMware
3. Jobs running on GitLab's shared SaaS runners instead of self-hosted runner
• Problem: Pipeline ran on saas-linux-small-amd64 GitLab infrastructure, which has no access to the local registry
• Solution: Added a tags: entry to the .gitlab-ci.yml jobs matching the self-hosted runner's tag
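A sketch of the tag pinning in .gitlab-ci.yml; the job name and tag are illustrative and must match whatever tag the runner was registered with:

```yaml
build:
  stage: build
  tags:
    - self-hosted        # matches the tag on the self-hosted Kubernetes runner
  script:
    - echo "runs on the self-hosted runner, not GitLab SaaS"
```

Without a tags: entry, GitLab is free to schedule the job on any available shared SaaS runner.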
4. Kaniko couldn't reach registry via internal cluster DNS (registry.container-registry.svc.cluster.local)
• Problem: Runner pods used 169.254.169.254:53 as DNS instead of CoreDNS — DNS resolution failed
• Solution: Skipped DNS entirely and moved to IP-based addressing
5. Kaniko couldn't reach registry via node IP (192.168.234.128:32000) or cluster IP (10.152.183.197:5000)
• Problem: VMware NAT networking caused i/o timeouts - iptables-based NodePort/ClusterIP routing doesn't work reliably on VMware single-node setups
• Solution: Ran a standalone Docker registry container on the host (docker run -p 5000:5000 registry:2), creating a real TCP socket, bypassing iptables entirely
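A minimal sketch of the standalone-registry workaround; the container name is illustrative:

```shell
# A plain registry container; -p 5000:5000 binds a real TCP socket on the host
docker run -d --name registry --restart=always -p 5000:5000 registry:2

# Sanity check from the host: the v2 catalog endpoint should answer with JSON
curl http://127.0.0.1:5000/v2/_catalog
```

Because the socket is a real host listener rather than an iptables rewrite, pods can reach it directly by the host's IP and port.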
6. localhost resolving to IPv6 [::1]
• Problem: Even with host networking, localhost:32000 hit [::1] which the registry wasn't listening on
• Solution: Use 127.0.0.1 explicitly
7. GitLab variable masking rejected kubeconfig
• Problem: GitLab can't mask variables containing whitespace/newlines — raw kubeconfig failed
• Solution: Base64 encoded it with microk8s config | base64 -w 0, then decoded in the pipeline with base64 -d
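The round trip, sketched with a stand-in file; in practice the input is the output of microk8s config, and the encoded line is stored in a masked CI/CD variable (the name KUBECONFIG_B64 is illustrative):

```shell
# Stand-in for `microk8s config` output: any multi-line YAML
printf 'apiVersion: v1\nkind: Config\n' > kubeconfig.orig

# Encode to a single line with no whitespace, which GitLab can mask
KUBECONFIG_B64=$(base64 -w 0 < kubeconfig.orig)

# In the pipeline: decode it back before calling kubectl
echo "$KUBECONFIG_B64" | base64 -d > kubeconfig.yaml
diff kubeconfig.orig kubeconfig.yaml && echo "round trip OK"
```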
8. Deploy job couldn't reach the Kubernetes API server
• Problem: Kubeconfig used 192.168.234.128:16443 — the node IP, unreachable from inside pods on VMware
• Solution: Replaced with https://kubernetes.default.svc:443 — the internal cluster API address always reachable from within the cluster
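The rewrite can be done with sed before encoding; sketched here on a one-line stand-in (the node address is the one from the text, the file name is illustrative):

```shell
# Stand-in for the server line emitted by `microk8s config`
printf 'server: https://192.168.234.128:16443\n' > kubeconfig.yaml

# Point kubectl at the in-cluster API address instead of the node IP
sed -i 's|https://192.168.234.128:16443|https://kubernetes.default.svc:443|' kubeconfig.yaml
cat kubeconfig.yaml   # server: https://kubernetes.default.svc:443
```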
9. Base64 kubeconfig was invalid in GitLab
• Problem: base64: invalid input — the value was copied incorrectly (truncated)
• Solution: Regenerated with microk8s config | sed '...' | base64 -w 0, verified by decoding locally before storing
10. rollout restart returned "not found"
• Problem: The command ran without -n default, so kubectl fell back to the runner's own gitlab-runner namespace, where the deployment doesn't exist
• Solution: Added -n default to all kubectl rollout commands
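With the namespace made explicit (the deployment name web is illustrative):

```shell
kubectl rollout restart deployment/web -n default
kubectl rollout status deployment/web -n default
```

The status command blocks until the restarted pods are ready, so the deploy job fails loudly if the rollout does not converge.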
11. Updated HTML not reflecting after pipeline ran
• Problem: imagePullPolicy defaulted to IfNotPresent — Kubernetes reused the cached image even though a new one was pushed
• Solution: Added imagePullPolicy: Always to the deployment and kubectl rollout restart to the deploy job
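The relevant fragment of the deployment manifest, sketched; the container and image names are illustrative:

```yaml
spec:
  template:
    spec:
      containers:
        - name: web
          image: 127.0.0.1:5000/web:latest
          imagePullPolicy: Always   # re-pull on every pod start, even for an unchanged tag
```

Together with kubectl rollout restart in the deploy job, this forces each pipeline run to pull the freshly pushed image instead of reusing the node's cache.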
12. Ingress YAML rejected by Kubernetes
• Problem: rules list item was missing the - before http:, making it a map instead of a list — Kubernetes error: "cannot restore slice from map"
• Solution: Added the missing - before http: in the ingress spec
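The broken vs. fixed shape, sketched; the path, service name, and port are illustrative:

```yaml
# Broken: without the dash, rules is parsed as a map
# ("cannot restore slice from map")
rules:
  http:
    paths: ...

# Fixed: the leading dash makes the rule a list item
rules:
  - http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: web
              port:
                number: 80
```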