Kops : download the latest release from https://github.com/kubernetes/kops/releases
sudo apt install python-pip
sudo pip install awscli
Create a user for the k8s purpose in AWS.
--------------------------
AWS console -- Security & Identity -- Identity & Access Management (IAM) -- Users -- Create New User (with access key generation checked).
Attach the AdministratorAccess policy to this user in the IAM Permissions tab.
Create a new S3 bucket to hold the kops state.
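The bucket can also be created from the CLI. The name kops-state-b429b matches the one used in the kops commands later; yours must be globally unique, and the region is an assumption:

```shell
# Create the S3 bucket that will hold the kops cluster state
aws s3 mb s3://kops-state-b429b --region us-east-1

# Enable versioning so earlier cluster states can be recovered (recommended by kops)
aws s3api put-bucket-versioning \
    --bucket kops-state-b429b \
    --versioning-configuration Status=Enabled
```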
Get a free domain from dot.tk, then go to DNS management in AWS Route53: DNS management -> Create Hosted Zone -> provide the domain name we have registered.
It will list some name servers (ns-something); copy them to the registrar (namecheap.com) as NS records so DNS routes through Route53.
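The hosted zone can also be created from the CLI; example.tk below is a placeholder for your own domain, and the caller reference just needs to be unique per request:

```shell
# Create a hosted zone for the domain (replace example.tk with your domain)
aws route53 create-hosted-zone \
    --name example.tk \
    --caller-reference "$(date +%s)"

# List the zone again to see the name servers to copy to the registrar
aws route53 list-hosted-zones-by-name --dns-name example.tk
```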
aws configure (provide the access key details)
ls -ahl ~/.aws/
Install Kubectl:
-----------------
Download the binary, move it to a directory on the PATH (e.g. /usr/local/bin),
and give it execute permission.
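A minimal sketch of those steps, assuming a Linux amd64 machine; the version below is only an example, pick the release matching your cluster:

```shell
# Download a kubectl release binary (adjust the version as needed)
curl -LO https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl

# Make it executable and move it onto the PATH
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl

# Verify the install
kubectl version --client
```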
create new ssh keys to login to cluster:
-----------------------------------------------
ssh-keygen -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub --> public key --> this is uploaded to the instances; we log in with the private key
Now create the cluster on AWS using kops:
-------------------------------------------
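A sketch of the create command, assuming the state bucket from above; substitute your own domain for <our_domain_created>, and note that the zone, node count, and instance sizes below are assumptions:

```shell
# Write the cluster definition into the S3 state store (nothing is built yet)
kops create cluster \
    --name=<our_domain_created> \
    --state=s3://kops-state-b429b \
    --zones=us-east-1a \
    --node-count=2 \
    --node-size=t2.micro \
    --master-size=t2.micro

# Actually build the cluster on AWS
kops update cluster <our_domain_created> --state=s3://kops-state-b429b --yes
```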
At the end of the output it will print suggested commands to update the configuration, as below.
As our state is in S3, provide the --state parameter for all of the above commands.
kops edit cluster <our_domain_created> --state=s3://kops-state-b429b
cat ~/.kube/config
At the end of that file, the login details for the cluster will be there.
kubectl get node --> will show nodes
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
kubectl get service
AWS console -- Networking -- VPC -- Security Groups -- edit inbound rules of masters.<our_domain>.
Add an inbound rule for the NodePort shown by the service, open to all IP ranges (0.0.0.0/0).
If you look at Route53, there is an api.<our_domain> record; use this to connect to our master node.
Open this hostname with the NodePort in the browser to check the echoserver response.
Delete the cluster using the below command.
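The delete command itself is not shown in these notes; a sketch assuming the same domain placeholder and state bucket as above:

```shell
# Preview what would be removed
kops delete cluster <our_domain_created> --state=s3://kops-state-b429b

# Actually tear everything down
kops delete cluster <our_domain_created> --state=s3://kops-state-b429b --yes
```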