🔧 Minikube Remote Access from Local Machine
This guide explains how to access a Minikube Kubernetes cluster running on an AWS EC2 instance from your local machine using a tunnel-based setup.
📘 Prerequisites
- Minikube is installed and running on an AWS EC2 instance.
- You have SSH access to the EC2 server.
- `kubectl` is installed on your local machine.
- A local file path (e.g., `~/minikube-ai/config`) is used to store the remote `kubeconfig`.
✅ Step-by-Step Guide
🔹 Step 1: Get Minikube kubeconfig from EC2
On your EC2 server (as root or minikube user):
```bash
cat ~/.kube/config
```
Copy the output into a file on your local machine:
```bash
mkdir -p ~/minikube-ai
nano ~/minikube-ai/config
# Paste the contents here
```
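Copy-pasting works, but the step can also be scripted. A minimal sketch (the helper name `fetch_kubeconfig` is hypothetical; it assumes your key is `./key.pem` and that the `ubuntu` user can `sudo` on the instance):

```shell
# Hypothetical helper: pull the remote kubeconfig over SSH in one step.
# Assumes ./key.pem and passwordless sudo for the ubuntu user on EC2.
fetch_kubeconfig() {
  mkdir -p ~/minikube-ai
  ssh -i key.pem ubuntu@"$1" 'sudo cat /root/.kube/config' > ~/minikube-ai/config
}
# Usage: fetch_kubeconfig <EC2_PUBLIC_IP>
```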
🔹 Step 2: Fix kubeconfig paths (cert/key files)
In the copied `config` file, update all file paths to absolute paths on your local machine.
You’ll need the following files from EC2:
- `/root/.minikube/ca.crt`
- `/root/.minikube/profiles/ai/client.crt`
- `/root/.minikube/profiles/ai/client.key`
Transfer them using `scp`:
```bash
scp -i key.pem ubuntu@<EC2_PUBLIC_IP>:/root/.minikube/ca.crt ~/minikube-ai/
scp -i key.pem ubuntu@<EC2_PUBLIC_IP>:/root/.minikube/profiles/ai/client.crt ~/minikube-ai/
scp -i key.pem ubuntu@<EC2_PUBLIC_IP>:/root/.minikube/profiles/ai/client.key ~/minikube-ai/
```
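Note that `scp` as the `ubuntu` user often cannot read files under `/root` directly. One workaround is to stage the files in `/tmp` with `sudo` first. A sketch (the helper name `fetch_certs` is hypothetical):

```shell
# Hypothetical helper: stage root-owned cert files in /tmp on EC2, then copy them down.
fetch_certs() {
  local host="$1"
  ssh -i key.pem ubuntu@"$host" \
    'sudo cp /root/.minikube/ca.crt /root/.minikube/profiles/ai/client.crt /root/.minikube/profiles/ai/client.key /tmp/ \
     && sudo chown ubuntu /tmp/ca.crt /tmp/client.crt /tmp/client.key'
  for f in ca.crt client.crt client.key; do
    scp -i key.pem ubuntu@"$host":/tmp/"$f" ~/minikube-ai/
  done
}
# Usage: fetch_certs <EC2_PUBLIC_IP>
```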
Now update `~/minikube-ai/config` to reference the correct local paths:
```yaml
certificate-authority: /home/<your-user>/minikube-ai/ca.crt
client-certificate: /home/<your-user>/minikube-ai/client.crt
client-key: /home/<your-user>/minikube-ai/client.key
```
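If you'd rather not edit the paths by hand, `sed` can rewrite them. A sketch on a throwaway sample file (adjust `$HOME/minikube-ai` if you stored the certs elsewhere; the profile-specific pattern must be replaced before the shorter one):

```shell
# Demo on a throwaway copy: rewrite the EC2 cert paths to local ones.
cat > /tmp/config.sample <<'EOF'
certificate-authority: /root/.minikube/ca.crt
client-certificate: /root/.minikube/profiles/ai/client.crt
client-key: /root/.minikube/profiles/ai/client.key
EOF
# Longer (profile) path first, then the shorter prefix.
sed -i -e "s|/root/.minikube/profiles/ai|$HOME/minikube-ai|" \
       -e "s|/root/.minikube|$HOME/minikube-ai|" /tmp/config.sample
cat /tmp/config.sample
```

Run the same two `sed` expressions against `~/minikube-ai/config` once the sample looks right.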
🔹 Step 3: Setup SSH Tunnel
Create a local port-forward tunnel to Minikube’s Kubernetes API Server:
```bash
ssh -i key.pem -L 8443:192.168.49.2:8443 ubuntu@<EC2_PUBLIC_IP>
```
- `192.168.49.2` is Minikube’s internal IP (the default with the Docker driver).
- Keep this terminal running, or run the tunnel with `autossh` or as a background job.
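If you want the tunnel to survive dropped connections, `autossh` can supervise it. A sketch (the helper name `start_tunnel` is hypothetical; assumes `autossh` is installed):

```shell
# Hypothetical helper: keep the tunnel alive in the background with autossh.
# -M 0 disables autossh's extra monitor port; -N opens no remote shell; -f backgrounds.
start_tunnel() {
  autossh -M 0 -f -N -i key.pem \
    -L 8443:192.168.49.2:8443 ubuntu@"$1"
}
# Usage: start_tunnel <EC2_PUBLIC_IP>
```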
🔹 Step 4: Update API Server URL in kubeconfig
Edit your `~/minikube-ai/config`:
```yaml
server: https://localhost:8443
```
This tells `kubectl` to connect to the local port opened by your SSH tunnel.
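This edit can also be done with a one-line `sed`. A sketch on a sample (run the same expression against `~/minikube-ai/config`):

```shell
# Demo on a sample: point the cluster's server at the local tunnel port.
cat > /tmp/server.sample <<'EOF'
    server: https://192.168.49.2:8443
EOF
sed -i 's|server: https://.*|server: https://localhost:8443|' /tmp/server.sample
cat /tmp/server.sample
```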
🔹 Step 5: Set kubeconfig and default namespace
You can either pass `--kubeconfig` every time or set it globally:
```bash
export KUBECONFIG=~/minikube-ai/config
```
👉 Add this to your `~/.bashrc` or `~/.zshrc`:
```bash
echo 'export KUBECONFIG=~/minikube-ai/config' >> ~/.bashrc
source ~/.bashrc
```
Set the default namespace (`ai-assistant` in this case):
```bash
kubectl config set-context ai --namespace=ai-assistant
```
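The two settings can be bundled into a small helper you source from your shell rc file. A sketch (the name `use_ai_cluster` is hypothetical; the context name `ai` matches the minikube profile used in this guide, so adjust it if yours differs):

```shell
# Hypothetical helper: point kubectl at the tunnel config and pin the namespace.
use_ai_cluster() {
  export KUBECONFIG=~/minikube-ai/config
  kubectl config use-context ai
  kubectl config set-context ai --namespace=ai-assistant
}
# Usage: use_ai_cluster   (then run kubectl as normal)
```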
✅ Final Test
Once the SSH tunnel is active and everything is configured:
```bash
kubectl get pods
```
You should see the pods running in the `ai-assistant` namespace.
🔁 Troubleshooting
| Issue | Solution |
|---|---|
| `localhost:8080` connection refused | You're not using the correct kubeconfig. Set `KUBECONFIG` or pass `--kubeconfig=...` |
| certificate signed by unknown authority | Ensure the `ca.crt`, `client.crt`, and `client.key` paths are correct |
| `CrashLoopBackOff` | The pod is crashing; check its logs with `kubectl logs <pod>` |
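A common cause of connection errors is simply that the tunnel died. A quick bash-only check for whether anything is listening on the local tunnel port (uses bash's `/dev/tcp` feature, so it needs bash, not plain `sh`):

```shell
# Quick check: is anything listening on the tunnel's local port 8443?
if (exec 3<>/dev/tcp/127.0.0.1/8443) 2>/dev/null; then
  echo "tunnel is up"
else
  echo "tunnel is down - re-run the ssh -L command"
fi
```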
🔐 Security Notes
- Only open port 22 on EC2 (no need to expose Kubernetes API publicly).
- Always use an SSH key with limited access.
- Optionally, restrict SSH to specific IPs via AWS Security Groups.