Day 71 - Let's prepare for some interview questions of Terraform 🔥 (Feb 5, 2024)

🙏 Introduction:

In this blog, we will cover some common Terraform interview questions.

What is Terraform, and how is it different from other IaC tools?

Terraform is an Infrastructure as Code (IaC) tool that allows you to define and provision infrastructure resources in a declarative way. It helps you manage infrastructure across various cloud providers and on-premises environments.

Terraform is different from other IaC tools because of its unique characteristics:

  • Multi-cloud support: Terraform supports multiple cloud providers, including AWS, Azure, Google Cloud, and more, allowing you to manage infrastructure resources across different platforms from a single configuration.

  • Declarative language: Terraform uses a declarative language to describe the desired state of your infrastructure, allowing you to define what resources you need without specifying the detailed steps for provisioning them.

  • Infrastructure Graph: Terraform builds an infrastructure graph based on your configuration, which helps determine the order of resource creation or modification, ensuring the desired state is achieved efficiently.

  • Resource abstraction: Providers abstract the underlying platform APIs, so you describe resources with the same configuration language regardless of which cloud or service actually provisions them.

  • Plan and Apply: Terraform provides a plan phase that shows you what changes will be made to your infrastructure before applying them, giving you an opportunity to review and validate the changes before implementation.

Overall, Terraform stands out for its multi-cloud support, declarative language, infrastructure graph, resource abstraction, and plan/apply workflow, making it a popular choice for managing infrastructure as code.
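
To make the declarative style concrete, here is a minimal, hypothetical configuration: it states only the desired resource, and Terraform works out the API calls needed to create it (the region and AMI ID are placeholders).

provider "aws" {
  region = "us-east-1"   # placeholder region
}

# We declare *what* we want, not the steps to provision it.
resource "aws_instance" "web" {
  ami           = "ami-abc123"   # placeholder AMI ID
  instance_type = "t2.micro"
}

Running terraform plan then previews the actions Terraform would take to reach this state, and terraform apply executes them.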

How do you call a main.tf module?

In Terraform, a configuration is typically organized into modules. You don't call the main.tf file directly; instead, you call the module's directory (which contains main.tf) using a module block in your Terraform configuration. Here's an example:

module "example" {
  source = "./path/to/module"
  # other module configuration options
}

In the example above, we call a module named example located in the ./path/to/module directory; Terraform automatically loads main.tf (and any other .tf files) from that directory.
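
For context, here is a hedged sketch of what the called module's directory might contain; the file contents and names below are illustrative assumptions, not part of the original example.

# ./path/to/module/main.tf (illustrative module contents)
variable "instance_type" {
  type    = string
  default = "t2.micro"
}

resource "aws_instance" "this" {
  ami           = "ami-abc123"   # placeholder AMI ID
  instance_type = var.instance_type
}

output "instance_id" {
  value = aws_instance.this.id
}

The calling configuration can then set instance_type inside its module block and read the result as module.example.instance_id.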

What exactly is Sentinel? Can you provide a few examples of where Sentinel policies can be used?

Sentinel is HashiCorp's policy-as-code framework. It helps you define and enforce rules (policies) for your infrastructure before Terraform applies any changes.

Here are a few examples of how Sentinel policies can be used with Terraform:

  • Resource Naming Conventions: Ensure that resources are named according to a specific naming convention.

import "tfplan"

# Enforce resource naming conventions
main = rule {
  all tfplan.resources.aws_instance as _, instances {
    all instances as _, r {
      r.attributes.name =~ "^prod-.*$"
    }
  }
}
  • Tagging Requirements: Enforce the presence of specific tags on resources.

import "tfplan"

# Enforce tagging requirements
main = rule {
  all tfplan.resources as _, resources {
    all resources as _, r {
      r.attributes.tags != null and r.attributes.tags.Environment == "production"
    }
  }
}
  • Security Group Rules: Enforce specific security group rules to control network access.

import "tfplan"

# Enforce security group rules
main = rule {
  all tfplan.resources.aws_security_group_rule as _, rules {
    all rules as _, r {
      r.attributes.type == "ingress" and r.attributes.from_port < 1024
    }
  }
}
  • Instance Size Limits: Enforce limitations on the size of EC2 instances.

import "tfplan"

# Enforce instance size limits
main = rule {
  all tfplan.resources.aws_instance as _, instances {
    all instances as _, r {
      r.attributes.instance_type != "t2.micro"
    }
  }
}
  • Provider Restrictions: Limit the use of specific cloud providers.

import "tfplan"

# Enforce provider restrictions
main = rule {
  all tfplan.resources as _, resources {
    all resources as _, r {
      r.provider != "aws"
    }
  }
}

These policies help you maintain good practices, security, and organization in your infrastructure. Sentinel checks these rules before applying any changes to your cloud resources; if the changes violate the rules, Sentinel prevents them from being applied.
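
In Terraform Cloud/Enterprise, policies like these are grouped into policy sets. As a rough sketch (the file and policy names are illustrative), a sentinel.hcl file in the policy set repository declares each policy and its enforcement level:

# sentinel.hcl (illustrative policy set definition)
policy "enforce-tagging" {
  source            = "./enforce-tagging.sentinel"
  enforcement_level = "soft-mandatory"   # advisory, soft-mandatory, or hard-mandatory
}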

You have a Terraform configuration file that defines an infrastructure deployment. However, there are multiple instances of the same resource that need to be created. How would you modify the configuration file to achieve this?

To create multiple instances of the same resource in Terraform, you can utilize the count or for_each meta-arguments in your configuration file.

  1. Using the count meta-argument:
  • Add a count parameter to the resource block that requires multiple instances.

  • Specify the desired number of instances in the count parameter.

  • Use the count.index variable within the resource block to differentiate each instance.

resource "aws_instance" "example" {
  count = 3

  # Configuration for each instance
  ami           = "ami-abc123"
  instance_type = "t2.micro"

  # Unique tags for each instance
  tags = {
    Name = "Instance ${count.index}"
  }
}

  2. Using the for_each meta-argument:

  • Define a map or set variable that specifies the different instances and their respective configurations.

  • Use the for_each parameter in the resource block to iterate over the map or set variable.

  • Access the current element of the map or set within the resource block using the each keyword.

variable "instances" {
  type = map(object({
    ami           = string
    instance_type = string

  default = {
    "instance1" = {
      ami           = "ami-abc123"
      instance_type = "t2.micro"
    },
    "instance2" = {
      ami           = "ami-def456"
      instance_type = "t2.small"
    }
  }
}

resource "aws_instance" "example" {
  for_each = var.instances

  # Configuration for each instance
  ami           = each.value.ami
  instance_type = each.value.instance_type

  # Unique tags for each instance
  tags = {
    Name = each.key
  }
}

By using either the count or for_each meta-arguments, you can dynamically create multiple instances of the same resource with distinct configurations based on your needs.
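
Once created, the instances are referenced differently depending on which meta-argument was used. The outputs below are an illustrative sketch; each belongs with the corresponding resource definition above (they would not appear together in one configuration, since both define aws_instance.example).

# With count, instances are addressed by numeric index.
output "count_instance_ids" {
  value = aws_instance.example[*].id   # splat over all indexed instances
}

# With for_each, instances are addressed by map key ("instance1", "instance2").
output "for_each_instance_ids" {
  value = { for key, instance in aws_instance.example : key => instance.id }
}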

You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (*.tf files). You need to enable debug messages to find this out. How would you achieve this?

Set the environment variable TF_LOG=TRACE

Setting TF_LOG=TRACE as an environment variable enables Terraform's most verbose debug logging, including details about which paths providers are loaded from. You can additionally set TF_LOG_PATH to write these logs to a file.

How would you save any particular resource while destroying the complete infrastructure?

Running the terraform destroy command removes all the resources managed by the configuration. If you want to protect a particular resource from being destroyed, you can use Terraform's resource lifecycle settings.

To protect a specific resource, add a lifecycle block to its resource block in your Terraform configuration file and set the prevent_destroy argument to true. With this setting, any plan that would destroy that resource, including terraform destroy, fails with an error instead of deleting it. If you need to destroy everything else, you can also remove the protected resource from state first with terraform state rm, so Terraform no longer manages (and therefore will not destroy) it.

Here’s an example of how you can save a resource from being destroyed:

resource "aws_instance" "example" {
  # Resource configuration...

  lifecycle {
    prevent_destroy = true
  }
}

Which module is used to store .tfstate file in S3?

Storing state in S3 is handled by a backend rather than a module: Terraform's built-in "s3" backend stores the .tfstate file in an S3 bucket and enables remote state management, commonly together with a DynamoDB table for state locking. After adding or changing the backend configuration, run terraform init so Terraform can initialize the backend and migrate any existing state.

To configure the S3 backend in your Terraform configuration, you need to specify the backend configuration block in your main.tf file or in a separate backend configuration file. Here's an example:

terraform {
  backend "s3" {
    bucket         = "your-bucket-name"
    key            = "path/to/your/state/file.tfstate"
    region         = "your-aws-region"
    encrypt        = true
    dynamodb_table = "your-dynamodb-lock-table"
  }
}

How do you manage sensitive data in Terraform, such as API keys or passwords?

To ensure security and compliance, HashiCorp recommends the following approaches:

Use Variables and Input Parameters: Define input variables for sensitive data and pass them in as parameters during Terraform execution. This helps keep sensitive information out of your Terraform configuration files.

variable "aws_access_key" {
  description = "AWS Access Key"
}

provider "aws" {
  access_key = var.aws_access_key
  # other provider configurations
}

Environment Variables: Leverage environment variables to supply sensitive information like API keys. This keeps secrets out of your code and allows for flexibility across different environments. For example, the AWS provider automatically reads credentials from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, so you can omit them from the configuration entirely; similarly, any declared Terraform variable can be supplied through a TF_VAR_<name> environment variable.

provider "aws" {
  # No credentials here: they are read from the AWS_ACCESS_KEY_ID and
  # AWS_SECRET_ACCESS_KEY environment variables at runtime.
  region = "us-east-1"
  # other provider configurations
}

Terraform Secret Management Systems: Integrate with secret management systems such as HashiCorp Vault or other third-party solutions. These tools securely store and manage sensitive information; in Terraform, you typically read them through the Vault provider's data sources rather than hard-coding them.

data "vault_generic_secret" "aws" {
  path = "secret/aws"
}

provider "aws" {
  access_key = data.vault_generic_secret.aws.data["access_key"]
  secret_key = data.vault_generic_secret.aws.data["secret_key"]
  # other provider configurations
}

Use the sensitive Attribute: Mark sensitive variables with the sensitive attribute to prevent their values from being displayed in Terraform CLI output.

variable "db_password" {
  type        = string
  description = "Database password"
  sensitive   = true
}
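
The sensitive value itself still has to be supplied at run time. One common pattern, shown here as an assumption rather than the only option, is a variable definitions file that is excluded from version control; alternatively, the value can be passed through a TF_VAR_db_password environment variable.

# terraform.tfvars (add to .gitignore; the value below is a placeholder)
db_password = "change-me-placeholder"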

How would you provision an S3 bucket and a user with read and write access to the bucket in Terraform?

To provision an S3 bucket and a user with read and write access to the bucket in Terraform, you would use the following resources:

aws_s3_bucket: This resource is used to create an S3 bucket. You can define its properties such as the bucket name, access control, and other configuration options. (Note that in AWS provider v4 and later, the acl argument shown below is deprecated in favor of the separate aws_s3_bucket_acl resource.)

resource "aws_s3_bucket" "example_bucket" {
  bucket = "my-bucket"
  acl    = "private"
}

aws_iam_user: This resource is used to create an IAM user. IAM users are used to manage access to AWS services.

resource "aws_iam_user" "example_user" {
  name = "my-user"
}

aws_iam_user_policy_attachment: This resource is used to attach an IAM policy to the IAM user. The policy defines the permissions and access rights for the user.

resource "aws_iam_user_policy_attachment" "example_user_attachment" {
  user       = aws_iam_user.example_user.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}
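
Note that the AmazonS3FullAccess managed policy grants access to every bucket in the account. If the goal is read and write access to just this bucket, a policy scoped to the bucket's ARN is closer to the question; the sketch below uses an inline aws_iam_user_policy with illustrative names and a minimal action list.

resource "aws_iam_user_policy" "example_bucket_rw" {
  name = "example-bucket-read-write"
  user = aws_iam_user.example_user.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
        Resource = [
          aws_s3_bucket.example_bucket.arn,
          "${aws_s3_bucket.example_bucket.arn}/*"
        ]
      }
    ]
  })
}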

Who maintains Terraform providers?

Terraform providers are published and maintained at different tiers in the Terraform Registry. Official providers such as the AWS, AzureRM, and Google Cloud providers are maintained by HashiCorp (often in collaboration with the cloud vendors), partner providers are maintained by the respective technology companies, and community providers are maintained by individual contributors and groups. Most providers are open source, so the wider community also contributes to them.

How can we export data from one module to another?

In Terraform, you can export data from one module to another by using outputs.

To export data from a module, you define output variables in the module’s configuration. For example, within the module’s main.tf file, you can define an output block like this:

output "example_output" {
  value = <value_to_export>
}

The value attribute specifies the value you want to export. It can be a variable, resource attribute, or any other valid expression.

To use the exported data in another module, reference the output from the parent configuration that calls both modules, and pass it in as an input variable of the receiving module. For example, in the parent configuration you can include:

module "example_module" {
  source = "./path/to/module"

  example_input = module.module_name.example_output
}

In this example, module_name is the name of the module block where the output variable is defined, and example_output is the exported value; example_input must be declared as a variable inside the example_module module.

By referencing module.module_name.example_output, you can access the exported value and use it within the other module's configuration.
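
Putting the pieces together, here is a hedged sketch of the full wiring, assuming both modules are called from the same parent configuration and that the receiving module declares example_input as a variable (the paths are illustrative):

# Parent (root) configuration
module "module_name" {
  source = "./path/to/source_module"   # this module defines output "example_output"
}

module "example_module" {
  source = "./path/to/module"

  # Pass the first module's output into the second module's input variable.
  example_input = module.module_name.example_output
}

# Inside ./path/to/module, the receiving module declares the matching input:
variable "example_input" {
  type = string
}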