3 building blocks:
-----------------
providers
resources
outputs (formatted info to show to the user, like the nginx URL, etc.)
Other parts include: data sources (pull data from the provider, like an image id, etc.) and variables.
A provisioner is part of a resource.
Within providers we have resources.
In Packer we search for builders; here we search for providers (like aws, azure, etc.).
A VM, a network adapter, a security group: anything is considered a resource.
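A rough sketch of how these building blocks sit together in a .tf file (the AMI filter, owner id and resource names here are illustrative assumptions, not values from these notes):

  provider "aws" {                          # provider: which cloud/API to talk to
    region = "${var.region}"
  }

  variable "region" {                       # variable: an input we can override
    default = "us-west-2"
  }

  data "aws_ami" "ubuntu" {                 # data source: pull data from the provider (an image id)
    most_recent = true
    owners      = ["099720109477"]          # assumed owner id for Canonical's public AMIs
    filter {
      name   = "name"
      values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
    }
  }

  resource "aws_instance" "appserver1" {    # resource: the thing terraform creates
    ami           = "${data.aws_ami.ubuntu.id}"
    instance_type = "t2.micro"

    provisioner "local-exec" {              # provisioner: lives inside a resource
      command = "echo instance created"
    }
  }

  output "public_ip" {                      # output: formatted info shown to the user
    value = "${aws_instance.appserver1.public_ip}"
  }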
.tf --> terraform files
To the terraform commands we give the directory where the files exist (unlike Packer, where the input is a single template file).
So multiple terraform files (.tf) in the same directory will all be considered together.
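For example, a working directory might look like this (the split into three files is just a convention; terraform reads every .tf file in the directory):

  myproject/
    main.tf             # provider, resources, provisioners
    variables.tf        # variable declarations
    terraform.tfvars    # values for variables that have no default
    keys/keyfile.pem    # private key referenced from the connection block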
Terraform Homepage
To see the list of resources for a particular provider, you can navigate to the terraform webpage.
Example for aws: https://www.terraform.io/docs/providers/aws/index.html
When we run terraform, it maintains a state file (where the results are stored) once the machine is created.
This state is cross-checked when we run the same command again.
By default it is kept in the same directory (.tfstate).
It is used for idempotency.
We can also keep the state in a common place, so multiple people are checked against the same state and we don't end up with multiple environments when several people run the same command.
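One common way to keep the state in a shared place is a remote backend; the bucket and key names below are assumptions for illustration only:

  terraform {
    backend "s3" {
      bucket = "my-terraform-state"              # assumed shared bucket
      key    = "appserver1/terraform.tfstate"    # assumed path inside the bucket
      region = "us-west-2"
    }
  }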
Terraform uses a DSL --- it has its own domain-specific language.
# is the comment start there.
Packer, in contrast, uses the generic JSON format.
Terraform picks up all the .tf files in the directory completely.
We can follow a standard for our own ease, like main.tf for the main terraform file.
terraform init --> downloads the terraform providers into the .terraform folder.
terraform validate --> checks that the configuration in the current directory is valid.
terraform plan --> explains what terraform is going to do if we run the apply command.
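A typical first run against a directory of .tf files looks roughly like this:

  terraform init        # download provider plugins into .terraform/
  terraform validate    # catch syntax errors before planning
  terraform plan        # preview what apply would change
  terraform apply       # actually create/update the resources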
----
In terraform, provisioning will not work until we have the login details and log in to the machine, because Packer works from inside the machine it is building, whereas terraform stays outside and just sends AWS commands.
So we need a connection object.
We need to tell it the username and the location of the .pem file.
connection { ... }
Types of connections for any kind of provider: https://www.terraform.io/docs/provisioners/connection.html
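A minimal SSH connection block inside a resource looks roughly like this (the user name and key path depend on your image and setup; this mirrors the main.tf shown later):

  connection {
    type        = "ssh"
    host        = "${self.public_ip}"
    user        = "ubuntu"                         # default user on the Ubuntu AMI
    private_key = "${file(var.privatekeypath)}"    # contents of the .pem file
  }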
We can use the variables declared in variables.tf.
Along with declared variables, we can also use attributes of other resources as variables,
like ${aws_security_group.allow_all.name}, where
aws_security_group - the type of resource
allow_all - the resource name we gave
and with the dot operator we say that we want its name attribute.
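For example (a shortened version of the main.tf further down), the instance picks up the security group's name through that reference:

  resource "aws_security_group" "allow_all" {
    name = "allow_all"
  }

  resource "aws_instance" "appserver1" {
    # aws_security_group = resource type, allow_all = resource name, .name = attribute
    security_groups = ["${aws_security_group.allow_all.name}"]
    ami             = "${var.imageid}"
    instance_type   = "${var.instancetype}"
  }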
You can keep the key=value pairs for variables that are declared in variables.tf but have no default value in a separate .tfvars file, or pass them as arguments on the command line using -var "<key>=<value>".
You can apply after validation by passing the same arguments.
Or, run plan with -out to write a plan file and then apply that plan file:
terraform apply file1.plan
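Putting those options together (the placeholder values and file names are just examples):

  terraform plan -var "accesskey=<accesskeyaws>" -var "secretkey=<secretkeyaws>"
  terraform plan -var-file="myvalues.tfvars"     # explicitly pass a .tfvars file by name
  terraform plan -out=file1.plan                 # save the plan to a file
  terraform apply file1.plan                     # apply exactly that saved plan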
For any resource, search Google for "terraform aws provider" and, in the first link, search for that particular resource type, like ec2 instance, rds, etc.
For Azure, like the AWS CLI, we have the Azure CLI for programmatic access.
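For example, logging in with the Azure CLI before pointing terraform at Azure might look like this (a sketch; your account and subscription will differ):

  az login          # interactive browser login
  az account show   # confirm which subscription is active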
main.tf
provider "aws"{ region = "${var.region}" access_key = "${var.accesskey}" secret_key = "${var.secretkey}" } resource "aws_instance" "appserver1"{ ami = "${var.imageid}" instance_type = "${var.instancetype}" key_name = "${var.key}" security_groups = ["${aws_security_group.allow_all.name}"]#["JPMC Mart Test"] connection{ host = self.public_ip user = "ubuntu" private_key = "${file(var.privatekeypath)}" } provisioner "remote-exec"{ inline = [ "sudo apt-get update", "sudo apt-get install tomcat7 -y" ] } } resource "aws_security_group" "allow_all"{ name = "allow_all" description = "Allow all inbound traffic" vpc_id = "${var.vpcid}" ingress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } }
variables.tf
variable "region" { default = "us-west-2" } variable "accesskey"{ type="string" } variable "secretkey"{ type="string" } variable "imageid" { default = "<amiid>" description="ubuntu 14 image" } variable "key" { default = "<keypairname>" } variable "instancetype" { default = "t2.micro" } variable "vpcid" { default = "<vpcid_aws>" } variable "privatekeypath" {}
terraform.tfvars
accesskey      = "<accesskeyaws>"
secretkey      = "<secretkeyaws>"
privatekeypath = "./keys/keyfile.pem"
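With the placeholders above filled in (and the .pem file present at ./keys/keyfile.pem), the run is the usual sequence; terraform.tfvars in the working directory is picked up automatically, so no -var-file flag is needed:

  terraform init
  terraform plan
  terraform apply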