Author: admin


Mastering Podman on Ubuntu: A Comprehensive Guide with Detailed Command Examples

Podman has become a popular alternative to Docker due to its flexibility, security, and rootless operation capabilities. This guide will walk you through the installation process and various advanced usage scenarios of Podman on Ubuntu, providing detailed examples for each command.

Table of Contents

1. How to Install Podman
2. How to Search for Images
3. How to Run Rootless Containers
4. How to Search for Containers
5. How to Add Ping to Containers
6. How to Expose Ports
7. How to Create a Network
8. How to Connect a Network Between Pods
9. How to Inspect a Network
10. How to Add a Static Address
11. How to Log On to a Container with bash

1. How to Install Podman

To get started with Podman on Ubuntu, follow these steps:

Update Package Index

Before installing any new software, it’s a good idea to update your package index to ensure you’re getting the latest version of Podman:

sudo apt update

Install Podman

With your package index updated, you can now install Podman. This command will download and install Podman and any necessary dependencies:

sudo apt install podman -y

Example Output:

Reading package lists… Done

Building dependency tree

Reading state information… Done

The following additional packages will be installed:

After this operation, X MB of additional disk space will be used.

Do you want to continue? [Y/n] y

Setting up podman (4.0.2) …

Verifying Installation

After installation, verify that Podman is installed correctly:

podman --version

Example Output:

podman version 4.0.2

2. How to Search for Images

Before running a container, you may need to find an appropriate image. Podman allows you to search for images in various registries.

Search Docker Hub

To search for images on Docker Hub:

podman search ubuntu

Example Output:

INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED

docker.io docker.io/library/ubuntu Ubuntu is a Debian-based Linux operating sys… 12329 [OK]

docker.io docker.io/ubuntu-upstart Upstart is an event-based replacement for the … 108 [OK]

docker.io docker.io/tutum/ubuntu Ubuntu image with SSH access. For the root p… 39

docker.io docker.io/ansible/ubuntu14.04-ansible Ubuntu 14.04 LTS with ansible 9 [OK]

This command will return a list of Ubuntu images available in Docker Hub.

3. How to Run Rootless Containers

One of the key features of Podman is the ability to run containers without needing root privileges, enhancing security.

Running a Rootless Container

As a non-root user, you can run a container like this:

podman run --rm -it ubuntu

Example Output:

root@d2f56a8d1234:/#

This command runs an Ubuntu container in an interactive shell, without requiring root access on the host system.

Configuring Rootless Environment

Ensure your user is added to the subuid and subgid files for proper UID/GID mapping:

echo "$USER:100000:65536" | sudo tee -a /etc/subuid /etc/subgid

Example Output:

user:100000:65536

user:100000:65536
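If you had already run rootless containers before adding these mappings, Podman may not pick up the new ranges until its user namespace is reset. A cautious way to do that (note: this stops your running rootless containers) is:

podman system migrate

You can then confirm the mapping Podman sees with podman unshare cat /proc/self/uid_map.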

4. How to Search for Containers

Once you start using containers, you may need to find specific ones.

Listing All Containers

To list all containers (both running and stopped):

podman ps -a

Example Output:


CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

d13c5bcf30fd docker.io/library/ubuntu:latest bash 3 minutes ago Exited (0) 2 minutes ago confident_mayer

Filtering Containers

You can filter containers by their status, names, or other attributes. For instance, to find running containers:

podman ps --filter status=running

Example Output:


CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

No output below the header indicates there are no running containers at the moment.
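You can also filter by name, or trim the output to just the fields you care about with a Go template; confident_mayer here is just the example container from the listing above:

podman ps -a --filter name=confident_mayer --format "{{.ID}} {{.Names}} {{.Status}}"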

5. How to Add Ping to Containers

Some minimal Ubuntu images don’t come with ping installed. Here’s how to add it.

Installing Ping in an Ubuntu Container

First, start an Ubuntu container:

podman run -it --cap-add=CAP_NET_RAW ubuntu

Inside the container, install ping (part of the iputils-ping package):


apt update

apt install iputils-ping

Example Output:

Get:1 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]

Setting up iputils-ping (3:20190709-3) …

Now you can use ping within the container.
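For example, a quick connectivity test from inside the container (8.8.8.8 is just a convenient public address):

ping -c 3 8.8.8.8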

6. How to Expose Ports

Exposing ports is crucial for running services that need to be accessible from outside the container.

Exposing a Port

To expose a port, use the -p flag with the podman run command:

podman run -d -p 8080:80 ubuntu bash -c "apt update && apt install -y nginx && nginx -g 'daemon off;'"

Example Output:

54c11dff6a8d9b6f896028f2857c6d74bda60f61ff178165e041e5e2cb0c51c8

This command runs an Ubuntu container, installs Nginx, and publishes port 80 in the container as port 8080 on the host.
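Once Nginx has finished installing inside the container (the apt steps take a minute), you can verify the published port from the host:

curl -I http://localhost:8080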

Exposing Multiple Ports

You can expose multiple ports by specifying additional -p flags:

podman run -d -p 8080:80 -p 443:443 ubuntu bash -c "apt update && apt install -y nginx && nginx -g 'daemon off;'"

Example Output:

b67f7d89253a4e8f0b5f64dcb9f2f1d542973fbbce73e7cdd6729b35e0d1125c

7. How to Create a Network

Creating a custom network allows you to isolate containers and manage their communication.

Creating a Network

To create a new network:

podman network create mynetwork

Example Output:


mynetwork

This command creates a new network named mynetwork.

Running a Container on a Custom Network

podman run -d --network mynetwork ubuntu bash -c "apt update && apt install -y nginx && nginx -g 'daemon off;'"

Example Output:


1e0d2fdb110c8e3b6f2f4f5462d1c9b99e9c47db2b16da6b2de1e4d9275c2a50

This container will now communicate with others on the mynetwork network.
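Since the inspect output in section 9 shows dns_enabled is true for this network, containers on mynetwork can also reach each other by name. A quick sketch, using a hypothetical container name (web) and fully qualified images:

podman run -d --name web --network mynetwork docker.io/library/nginx
podman run --rm --network mynetwork docker.io/library/alpine ping -c 2 web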

8. How to Connect a Network Between Pods

Podman allows you to manage pods, which are groups of containers sharing the same network namespace.

Creating a Pod and Adding Containers

podman pod create mypod

podman run -dt --pod mypod ubuntu bash -c "apt update && apt install -y nginx && nginx -g 'daemon off;'"

podman run -dt --pod mypod ubuntu bash -c "apt update && apt install -y redis-server && redis-server"

Example Output:


f04d1c28b030f24f3f7b91f9f68d07fe1e6a2d81caeb60c356c64b3f7f7412c7

8cf540eb8e1b0566c65886c684017d5367f2a167d82d7b3b8c3496cbd763d447

4f3402b31e20a07f545dbf69cb4e1f61290591df124bdaf736de64bc3d40d4b1

Both containers now share the same network namespace and can communicate over the mypod network.
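You can confirm the pod and its members (Podman adds an infra container that holds the shared namespaces):

podman pod ps
podman ps --pod

Because the network namespace is shared, the Redis container can reach Nginx on localhost:80 and vice versa, with no port publishing between them.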

Connecting Pods to a Network

To connect a pod to an existing network:

podman pod create --network mynetwork mypod

Example Output:


f04d1c28b030f24f3f7b91f9f68d07fe1e6a2d81caeb60c356c64b3f7f7412c7

This pod will use the mynetwork network, allowing communication with other containers on that network.

9. How to Inspect a Network

Inspecting a network provides detailed information about the network configuration and connected containers.

Inspecting a Network

Use the podman network inspect command:

podman network inspect mynetwork

Example Output:

[
    {
        "name": "mynetwork",
        "id": "3c0d6e2eaf3c4f3b98a71c86f7b35d10b9d4f7b749b929a6d758b3f76cd1f8c6",
        "driver": "bridge",
        "network_interface": "cni-podman0",
        "created": "2024-08-12T08:45:24.903716327Z",
        "subnets": [
            {
                "subnet": "10.88.1.0/24",
                "gateway": "10.88.1.1"
            }
        ],
        "ipv6_enabled": false,
        "internal": false,
        "dns_enabled": true,
        "network_dns_servers": [
            "8.8.8.8"
        ]
    }
]

This command will display detailed JSON output, including network interfaces, IP ranges, and connected containers.

10. How to Add a Static Address

Assigning a static IP address can be necessary for consistent network configurations.

Assigning a Static IP

When running a container, you can assign it a static IP address within a custom network:

podman run -d --network mynetwork --ip 10.88.1.100 ubuntu bash -c "apt update && apt install -y nginx && nginx -g 'daemon off;'"

Example Output:


f05c2f18e41b4ef3a76a7b2349db20c10d9f2ff09f8c676eb08e9dc92f87c216

Ensure that the IP address is within the subnet range of your custom network.
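To double-check the address that was assigned, you can read it back with an inspect template (a sketch, assuming the container is attached to mynetwork as above):

podman inspect -f '{{ .NetworkSettings.Networks.mynetwork.IPAddress }}' <container_id>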

11. How to Log On to a Container with bash

Accessing a container's shell is often necessary for debugging or managing running applications.

Starting a Container with bash

If the container image includes bash, you can start it directly:

podman run -it ubuntu bash

Example Output:

root@e87b469f2e45:/#

Accessing a Running Container

To access an already running container:

podman exec -it <container_id> bash

Replace <container_id> with the actual ID or name of the container.

Example Output:

root@d2f56a8d1234:/#


How to re-enable the TempURL in the latest cPanel


As some of you have noticed, the new cPanel ships with a bunch of new default settings that nobody likes.

The FTP server is not configured out of the box.

TempURL is disabled for security reasons: under certain conditions, a user can attack another user's account if they access a malicious script through a mod_userdir URL.

So they removed it by default.


They did not provide instructions for people who need it. You can easily enable it, BUT PHP won't work on the temp URL unless you do the following.

Remove the modules below (and by remove, I mean you need to recompile EasyApache 4 with the following changes):

mod_ruid2
mod_passenger
mod_mpm_itk
mod_proxy_fcgi
mod_fcgid

Install:

mod_suexec
mod_suphp

Then go into the Apache mod_userdir tweak in WHM, enable it, and exclude the default host only.
It won't save the setting in the portal, but the configuration is updated. If you go back and look, it will appear as though the settings didn't take; this looks like a bug in cPanel's front end that they need to fix.

Then PHP will work again on the TempURL.


How to integrate VROPS with Ansible

Automating VMware vRealize Operations (vROps) with Ansible

In the world of IT operations, automation is the key to efficiency and consistency. VMware’s vRealize Operations (vROps) provides powerful monitoring and management capabilities for virtualized environments. Integrating vROps with Ansible, an open-source automation tool, can take your infrastructure management to the next level. In this blog post, we’ll explore how to achieve this integration and demonstrate its benefits with a practical example.

What is vRealize Operations (vROps)?

vRealize Operations (vROps) is a comprehensive monitoring and analytics solution from VMware. It helps IT administrators manage the performance, capacity, and overall health of their virtual environments. Key features of vROps include:

• Performance Monitoring: Continuous tracking of VMs, hosts, and other resources.
• Capacity Management: Planning and optimizing resource usage.
• Troubleshooting: Identifying and resolving issues promptly.
• Automated Actions: Responding to specific events with predefined actions.

Why Integrate vROps with Ansible?

Integrating vROps with Ansible allows you to automate routine tasks, enforce consistent configurations, and rapidly respond to changes or issues in your virtual environment. This integration enables you to:

• Automate Monitoring Setup: Configure monitoring for new virtual machines or environments automatically.
• Trigger Remediation Actions: Automate responses to alerts generated by vROps.
• Generate Reports: Automate the creation and distribution of performance and capacity reports.
• Maintain Configuration Compliance: Ensure consistent vROps configurations across environments.

Setting Up the Integration

Prerequisites

Before you start, ensure you have:

1. vROps Environment: A running instance of VMware vRealize Operations.
2. Ansible Installed: Ansible should be installed on your control node.

Step-by-Step Guide

Step 1: Configure API Access in vROps

First, ensure you have the necessary API access in vROps. You’ll need:

• vROps Host: The URL of your vROps instance.
• vROps Username: A user with API access permissions.
• vROps Password: The password for the above user.

Step 2: Install Ansible

If you haven’t installed Ansible yet, you can do so by following these commands:

sudo apt update

sudo apt install ansible

Step 3: Create an Ansible Playbook

Create an Ansible playbook to interact with vROps. Below is an example playbook that retrieves the status of vROps resources.

Note: to use the other API endpoints, you will need to acquire the auth token and set it as a fact to pass to later tasks.

Example

If you want to acquire the auth token:

---
- name: Authenticate with vROps and Check vROps Status
  hosts: localhost
  vars:
    vrops_host: "your-vrops-host"
    vrops_username: "your-username"
    vrops_password: "your-password"

  tasks:
    - name: Authenticate with vROps
      uri:
        url: "https://{{ vrops_host }}/suite-api/api/auth/token/acquire"
        method: POST
        body_format: json
        body:
          username: "{{ vrops_username }}"
          password: "{{ vrops_password }}"
        headers:
          Content-Type: "application/json"
        validate_certs: no
      register: auth_response

    - name: Fail if authentication failed
      fail:
        msg: "Authentication with vROps failed: {{ auth_response.json }}"
      when: auth_response.status != 200

    - name: Set auth token as fact
      set_fact:
        auth_token: "{{ auth_response.json.token }}"

    - name: Get vROps status
      uri:
        url: "https://{{ vrops_host }}/suite-api/api/resources"
        method: GET
        headers:
          Authorization: "vRealizeOpsToken {{ auth_token }}"
          Content-Type: "application/json"
        validate_certs: no
      register: vrops_response

    - name: Display vROps status
      debug:
        msg: "vROps response: {{ vrops_response.json }}"

Save this playbook to a file, for example, check_vrops_status.yml.

Step 4: Define Variables

Create a variables file to store your vROps credentials and host information.
Save it as vars.yml:

vrops_host: your-vrops-host

vrops_username: your-username

vrops_password: your-password
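Since vars.yml holds credentials in plain text, it's worth encrypting it with Ansible's built-in vault (the playbook command in the next step then just needs --ask-vault-pass added):

ansible-vault encrypt vars.yml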

Step 5: Run the Playbook

Execute the playbook using the following command:

ansible-playbook -e @vars.yml check_vrops_status.yml

The above command runs the playbook and retrieves the status of vROps resources, displaying the results if you used the first example.

Here are some of the key API functions you can use:

Authentication: to use the endpoints listed below, you will need to acquire the auth token and set it as a fact, then pass it to other tasks inside Ansible for use with the various endpoints.

• Login: Authenticate and get a session token.
  Endpoint: POST /suite-api/api/auth/token/acquire

Resource Management

• Get Resources: Retrieve a list of resources managed by vROps.
  Endpoint: GET /suite-api/api/resources
• Get Resource by ID: Retrieve details of a specific resource.
  Endpoint: GET /suite-api/api/resources/{resourceId}
• Create Resource: Add a new resource to vROps.
  Endpoint: POST /suite-api/api/resources
• Update Resource: Update information for an existing resource.
  Endpoint: PUT /suite-api/api/resources/{resourceId}
• Delete Resource: Remove a resource from vROps.
  Endpoint: DELETE /suite-api/api/resources/{resourceId}

Metrics and Data

• Get Metrics for a Resource: Retrieve metrics for a specific resource.
  Endpoint: GET /suite-api/api/resources/{resourceId}/stats
• Get Metric Definitions: List available metrics for a resource kind.
  Endpoint: GET /suite-api/api/resources/kind/{resourceKindKey}/statkeys
• Get Historical Metrics: Retrieve historical metric data for a resource.
  Endpoint: GET /suite-api/api/resources/{resourceId}/stats/historical

Alerts and Notifications

• Get Alerts: Retrieve a list of alerts.
  Endpoint: GET /suite-api/api/alerts
• Get Alert by ID: Retrieve details of a specific alert.
  Endpoint: GET /suite-api/api/alerts/{alertId}
• Acknowledge Alert: Acknowledge a specific alert.
  Endpoint: POST /suite-api/api/alerts/{alertId}/acknowledge
• Cancel Alert: Cancel a specific alert.
  Endpoint: POST /suite-api/api/alerts/{alertId}/cancel
• Generate Notifications: Send notifications based on specific conditions.
  Endpoint: POST /suite-api/api/notifications

Policies and Configurations

• Get Policies: Retrieve a list of policies.
  Endpoint: GET /suite-api/api/policies
• Get Policy by ID: Retrieve details of a specific policy.
  Endpoint: GET /suite-api/api/policies/{policyId}
• Create Policy: Add a new policy.
  Endpoint: POST /suite-api/api/policies
• Update Policy: Update an existing policy.
  Endpoint: PUT /suite-api/api/policies/{policyId}
• Delete Policy: Remove a policy.
  Endpoint: DELETE /suite-api/api/policies/{policyId}

Dashboards and Reports

• Get Dashboards: Retrieve a list of dashboards.
  Endpoint: GET /suite-api/api/dashboards
• Get Dashboard by ID: Retrieve details of a specific dashboard.
  Endpoint: GET /suite-api/api/dashboards/{dashboardId}
• Create Dashboard: Add a new dashboard.
  Endpoint: POST /suite-api/api/dashboards
• Update Dashboard: Update an existing dashboard.
  Endpoint: PUT /suite-api/api/dashboards/{dashboardId}
• Delete Dashboard: Remove a dashboard.
  Endpoint: DELETE /suite-api/api/dashboards/{dashboardId}
• Get Reports: Retrieve a list of reports.
  Endpoint: GET /suite-api/api/reports
• Generate Report: Generate a new report based on a template.
  Endpoint: POST /suite-api/api/reports/{reportTemplateId}/generate
• Get Report by ID: Retrieve details of a specific report.
  Endpoint: GET /suite-api/api/reports/{reportId}

Capacity and Utilization

• Get Capacity Remaining: Retrieve remaining capacity for a specific resource.
  Endpoint: GET /suite-api/api/resources/{resourceId}/capacity/remaining
• Get Capacity Usage: Retrieve capacity usage for a specific resource.
  Endpoint: GET /suite-api/api/resources/{resourceId}/capacity/usage

Additional Functionalities

• Get Custom Groups: Retrieve a list of custom groups.
  Endpoint: GET /suite-api/api/groups
• Create Custom Group: Add a new custom group.
  Endpoint: POST /suite-api/api/groups
• Update Custom Group: Update an existing custom group.
  Endpoint: PUT /suite-api/api/groups/{groupId}
• Delete Custom Group: Remove a custom group.
  Endpoint: DELETE /suite-api/api/groups/{groupId}
• Get Recommendations: Retrieve a list of recommendations.
  Endpoint: GET /suite-api/api/recommendations
• Get Recommendation by ID: Retrieve details of a specific recommendation.
  Endpoint: GET /suite-api/api/recommendations/{recommendationId}

These are just a few examples of the many functions available through the vROps REST API.
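If you want to poke at an endpoint before wiring it into Ansible, the same auth flow works from plain curl. A minimal sketch, reusing the placeholder host and credentials from the playbook above (-k mirrors validate_certs: no, and I'm assuming jq is installed for parsing):

# Acquire a session token (equivalent to the playbook's first task)
TOKEN=$(curl -sk -X POST "https://your-vrops-host/suite-api/api/auth/token/acquire" \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  -d '{"username": "your-username", "password": "your-password"}' | jq -r '.token')

# Use it against the resources endpoint
curl -sk "https://your-vrops-host/suite-api/api/resources" \
  -H "Accept: application/json" \
  -H "Authorization: vRealizeOpsToken $TOKEN"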


How to Power Up or Power Down multiple instances in OCI using CLI with Ansible

• This assumes you have already configured the OCI CLI and added your key to the user inside the OCI interface, so your Ubuntu or jump box can connect to your OCI infrastructure
• Ansible
• A role to control powering instances up/down using the OCI CLI
• This assumes you already have Ansible set up
• You will need to install the Ansible OCI collections


Now, the reason you would probably want this over Terraform is that Terraform is more suited to infrastructure orchestration and not really suited to dealing with instances once they are up and running.

If you have scaled servers out in OCI, powering servers up and down in bulk is currently not available. If you are doing a migration or using a staging environment, you may only need the machines running while building or troubleshooting.

So having a way to power up/down multiple machines at once is convenient.


Install the OCI collections if you don't have them already.

Linux/macOS

curl -L https://raw.githubusercontent.com/oracle/oci-ansible-collection/master/scripts/install.sh | bash -s -- --verbose


ansible-galaxy collection list will list the collections installed:

# /path/to/ansible/collections

Collection          Version
------------------- -------
amazon.aws          1.4.0
ansible.builtin     1.3.0
ansible.posix       1.3.0
oracle.oci          2.10.0


Once you have it installed, you need to test that the OCI client is working:

oci iam compartment list --all   (this will list the compartment OCIDs for your instances)

Compartments in OCI are a way to organise infrastructure and control access to those resources. This is great if you have contractors coming in and you only want them to have access to certain things, not everything.
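As an aside, the CLI's JMESPath --query flag can trim that output down to just the fields you care about; something like this (a sketch, adjust fields to taste) prints a readable table of compartment names and OCIDs:

oci iam compartment list --all --query "data[].{name:name, id:id}" --output table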

Now there are two ways you can get your instance names.

• One is logging in via the OCI interface and going to the correct compartment, which is very slow and mind-numbing to wait for.
• Or you can use an automated approach, which is what you should be doing with everything that needs to be done over and over.


Bash Script to get the instance names from OCI

• This will use the OCI CLI and provide all instance names and IPs
• It loops through each availability domain.
• For each availability domain, it lists the instance IDs and writes them to instance_ids.txt.
• It cleans up the instance_ids.txt file to remove brackets, quotes, and commas.
• It reads each instance ID from instance_ids.txt.
• For each instance, it retrieves the VNIC information.
• It extracts the display name, public IP, and private IP, and prints them.
• The script ends the loop and moves to the next availability domain.

#!/bin/bash

compartment_id="ocid1.compartment.oc1..insert compartment ID here"

# Explicitly define the availability domains based on your provided data
availability_domains=("zcLB:US-CHICAGO-1-AD-1" "zcLB:US-CHICAGO-1-AD-2" "zcLB:US-CHICAGO-1-AD-3")

# For each availability domain, list the instances
for ad in "${availability_domains[@]}"; do

    # List instances within the specific AD and compartment, extracting the "id" field
    oci compute instance list --compartment-id "$compartment_id" --availability-domain "$ad" --query "data[].id" --raw-output > instance_ids.txt

    # Clean up the instance IDs (removing brackets, quotes, and commas)
    sed -i 's/\[//g' instance_ids.txt
    sed -i 's/\]//g' instance_ids.txt
    sed -i 's/"//g' instance_ids.txt
    sed -i 's/,//g' instance_ids.txt

    # Read each instance ID from instance_ids.txt
    while read -r instance_id; do
        # Get instance VNIC information
        instance_info=$(oci compute instance list-vnics --instance-id "$instance_id")

        # Extract the required fields and print them
        display_name=$(echo "$instance_info" | jq -r '.data[0]."display-name"')
        public_ip=$(echo "$instance_info" | jq -r '.data[0]."public-ip"')
        private_ip=$(echo "$instance_info" | jq -r '.data[0]."private-ip"')

        echo "Availability Domain: $ad"
        echo "Display Name: $display_name"
        echo "Public IP: $public_ip"
        echo "Private IP: $private_ip"
        echo "-----------------------------------------"
    done < instance_ids.txt

done


The output of the script, when piped into a file (here called instance.names), will look like:

Availability Domain: zcLB:US-CHICAGO-1-AD-1
Display Name: Instance1
Public IP: 192.0.2.1
Private IP: 10.0.0.1
-----------------------------------------
Availability Domain: zcLB:US-CHICAGO-1-AD-1
Display Name: Instance2
Public IP: 192.0.2.2
Private IP: 10.0.0.2
-----------------------------------------


You can now grep this file for the names of the servers you want to power on or off quickly:

grep "<instance-name>" instance.names


Now we have an Ansible playbook that can power instances on or off by name, using the IDs provided by the OCI client.

Ansible playbook to power on or off multiple instances via OCI CLI

---
- name: Control OCI Instance Power State based on Instance Names
  hosts: localhost
  vars:
    instance_names_to_stop:
      - instance1
      # Add more instance names here if you wish to stop them...

    instance_names_to_start:
      # List the instance names you wish to start here...
      # Example:
      - Instance2

  tasks:
    - name: Fetch all instance details in the compartment
      command:
        cmd: oci compute instance list --compartment-id ocid1.compartment.oc1..aaaaaaaak7jc7tn2su2oqzmrbujpr5wmnuucj4mwj4o4g7rqlzemy4yvxrza --output json
      register: oci_output

    - name: Parse the OCI CLI output
      set_fact:
        instances: "{{ oci_output.stdout | from_json }}"

    - name: Extract relevant information
      set_fact:
        clean_instances: "{{ clean_instances | default([]) + [{'name': item['display-name'], 'id': item.id, 'state': item['lifecycle-state']}] }}"
      loop: "{{ instances.data }}"
      when: "'display-name' in item and 'id' in item and 'lifecycle-state' in item"

    # Loop-based filtering...
    - name: Filter out instances to stop
      set_fact:
        instances_to_stop: "{{ instances_to_stop | default([]) + [item] }}"
      loop: "{{ clean_instances }}"
      when: "item.name in instance_names_to_stop and item.state == 'RUNNING'"

    - name: Filter out instances to start
      set_fact:
        instances_to_start: "{{ instances_to_start | default([]) + [item] }}"
      loop: "{{ clean_instances }}"
      when: "item.name in instance_names_to_start and item.state == 'STOPPED'"

    # ...and an equivalent selectattr version that produces the same lists
    - name: Filter out instances to stop
      set_fact:
        instances_to_stop: "{{ clean_instances | selectattr('name', 'in', instance_names_to_stop) | selectattr('state', 'equalto', 'RUNNING') | list }}"

    - name: Filter out instances to start
      set_fact:
        instances_to_start: "{{ clean_instances | selectattr('name', 'in', instance_names_to_start) | selectattr('state', 'equalto', 'STOPPED') | list }}"

    - name: Display instances to stop (you can remove this debug task later)
      debug:
        var: instances_to_stop

    - name: Display instances to start (you can remove this debug task later)
      debug:
        var: instances_to_start

    - name: Power off instances
      command:
        cmd: "oci compute instance action --action STOP --instance-id {{ item.id }}"
      loop: "{{ instances_to_stop }}"
      when: instances_to_stop | length > 0
      register: state

#    - debug:
#        var: state

    - name: Power on instances
      command:
        cmd: "oci compute instance action --action START --instance-id {{ item.id }}"
      loop: "{{ instances_to_start }}"
      when: instances_to_start | length > 0


The output will look like

PLAY [Control OCI Instance Power State based on Instance Names] **********************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************
ok: [localhost]

TASK [Fetch all instance details in the compartment] *********************************************************************************************
changed: [localhost]

TASK [Parse the OCI CLI output] ******************************************************************************************************************
ok: [localhost]

TASK [Extract relevant information] **************************************************************************************************************
ok: [localhost] => (item={'display-name': 'Instance1', 'id': 'ocid1.instance.oc1..exampleuniqueID1', 'lifecycle-state': 'STOPPED'})
ok: [localhost] => (item={'display-name': 'Instance2', 'id': 'ocid1.instance.oc1..exampleuniqueID2', 'lifecycle-state': 'RUNNING'})

TASK [Filter out instances to stop] **************************************************************************************************************
ok: [localhost]

TASK [Filter out instances to start] *************************************************************************************************************
ok: [localhost]

TASK [Display instances to stop (you can remove this debug task later)] **************************************************************************
ok: [localhost] => {
    "instances_to_stop": [
        {
            "name": "Instance2",
            "id": "ocid1.instance.oc1..exampleuniqueID2",
            "state": "RUNNING"
        }
    ]
}

TASK [Display instances to start (you can remove this debug task later)] *************************************************************************
ok: [localhost] => {
    "instances_to_start": [
        {
            "name": "Instance1",
            "id": "ocid1.instance.oc1..exampleuniqueID1",
            "state": "STOPPED"
        }
    ]
}

TASK [Power off instances] ***********************************************************************************************************************
changed: [localhost] => (item={'name': 'Instance2', 'id': 'ocid1.instance.oc1..exampleuniqueID2', 'state': 'RUNNING'})

TASK [Power on instances] ************************************************************************************************************************
changed: [localhost] => (item={'name': 'Instance1', 'id': 'ocid1.instance.oc1..exampleuniqueID1', 'state': 'STOPPED'})

PLAY RECAP ****************************************************************************************************************************************
localhost                  : ok=9    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
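For reference, the run itself is just the usual invocation (assuming you saved the playbook as power_control.yml; the filename is arbitrary):

ansible-playbook power_control.yml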


How to Deploy Another VPC in AWS with Scalable EC2s for HA using Terraform

• This will configure a new VPC
• Create a new subnet for use
• Create a new security group with a bunch of rules
• Create a key pair for your new instances
• Allow you to scale your instances cleanly using the count attribute

So we are going to do this a bit differently than the other post, as the other post just deploys one instance in an existing VPC.

This one is more fun. The structure we will use this time will allow you to scale your EC2 instances very cleanly. If you are using git repos to push out changes, then having a main.tf for your instances is much simpler to manage at scale.

File structure:

terraform-project/
├── main.tf              <- Your main configuration file
├── variables.tf         <- Variables file that has the inputs to pass
├── outputs.tf           <- Outputs file
├── security_group.tf    <- File containing security group rules
└── modules/
    └── instance/
        ├── main.tf      <- This file contains your EC2 instances
        └── variables.tf <- Variables file that defines what we will pass for the module in main.tf to use


Explaining the process:

main.tf

• We have defined the provider and availability zone; if you have more than one cloud, then it's good to create a provider.tf and carve them out.
• The key pair to import into AWS in the second availability zone was generated locally in my terraform directory using:
  ssh-keygen -t rsa -b 2048 -f ./terraform-aws-key
• We are then saying let's create a new VPC called vpc2
• with the subnet CIDR block 10.0.1.0/24 to use internally
• this will also map the public address to the new internal address assigned upon launch
• We will be creating servers using variables defined in variables.tf:
• instance type
• AMI ID
• key pair name to use
• the new subnet to use
• and assign the new security group to the EC2 instance deployed
• We also added a count on the module, so when we deploy EC2s we can simply adjust the count number and push the code with one tiny change as opposed to an entire block. You will see what I mean later.
main.tf

provider "aws" {
  region = "us-west-2"
}

resource "aws_key_pair" "my-nick-test-key" {
  key_name   = "my-nick-test-key"
  public_key = file("${path.module}/terraform-aws-key.pub")
}

resource "aws_vpc" "vpc2" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "newsubnet" {
  vpc_id                  = aws_vpc.vpc2.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

module "web_server" {
  source            = "./modules/instance"
  ami_id            = var.ami_id
  instance_type     = var.instance_type
  key_name          = var.key_name_instance
  subnet_id         = aws_subnet.newsubnet.id
  instance_count    = 2  // Specify the number of instances you want
  security_group_id = aws_security_group.newcpanel.id
}

variables.tf

• Here we define the variables we want to pass to the module in main.tf for the instance:
• the Linux image
• the instance type (size of the machine)
• the key pair to use for the image

variable "ami_id" {
  description = "The AMI ID for the instance"
  default     = "ami-0913c47048d853921" // Amazon Linux 2 AMI ID
}

variable "instance_type" {
  description = "The instance type for the instance"
  default     = "t2.micro"
}

variable "key_name_instance" {
  description = "The key pair name for the instance"
  default     = "my-nick-test-key"
}

security_group.tf

• This will create a new security group named newcpanel in us-west-2, with inbound rules similar to cPanel's

resource "aws_security_group" "newcpanel" {
  name        = "newcpanel"
  description = "Allow inbound traffic"
  vpc_id      = aws_vpc.vpc2.id

  // One ingress rule per cPanel service port; a dynamic block keeps this
  // readable instead of repeating eighteen near-identical ingress blocks.
  dynamic "ingress" {
    for_each = [
      { desc = "POP3", port = 110, proto = "tcp" },
      { desc = "Custom TCP 20", port = 20, proto = "tcp" },
      { desc = "Custom TCP 587", port = 587, proto = "tcp" },
      { desc = "DNS (TCP)", port = 53, proto = "tcp" },
      { desc = "SMTPS", port = 465, proto = "tcp" },
      { desc = "HTTPS", port = 443, proto = "tcp" },
      { desc = "DNS (UDP)", port = 53, proto = "udp" },
      { desc = "IMAP", port = 143, proto = "tcp" },
      { desc = "IMAPS", port = 993, proto = "tcp" },
      { desc = "Custom TCP 21", port = 21, proto = "tcp" },
      { desc = "Custom TCP 2086", port = 2086, proto = "tcp" },
      { desc = "Custom TCP 2096", port = 2096, proto = "tcp" },
      { desc = "HTTP", port = 80, proto = "tcp" },
      { desc = "SSH", port = 22, proto = "tcp" },
      { desc = "POP3S", port = 995, proto = "tcp" },
      { desc = "Custom TCP 2083", port = 2083, proto = "tcp" },
      { desc = "Custom TCP 2087", port = 2087, proto = "tcp" },
      { desc = "Custom TCP 2095", port = 2095, proto = "tcp" },
      { desc = "Custom TCP 2082", port = 2082, proto = "tcp" },
    ]
    content {
      description = ingress.value.desc
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = ingress.value.proto
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}

output "newcpanel_sg_id" {
  value       = aws_security_group.newcpanel.id
  description = "The ID of the security group 'newcpanel'"
}


outputs.tf

• We want some information to be output upon creating the machines, like the assigned public addresses. Terraform needs the module to export these values for the root outputs to work; in Ansible you aren't forced to do this, but in Terraform you are.

output "public_ips" {
  value       = module.web_server.public_ips
  description = "List of public IP addresses for the instances."
}


Okay, so now we want to create the scalable EC2s.

• Upon deployment in us-west-2, which essentially is for HA purposes,
• you want the key pair to be used,
• and the security group we defined earlier to be added to the instance.

We create a modules/instance directory and inside it define the instances as resources.

• Now there are a couple of ways to do this, depending on how you grew your infrastructure out. If all your machines are the same, then you don't need a resource block for each instance, which can make the code uglier to manage. You can use the count attribute and simply add or subtract inside main.tf, where instance_count = 2 is defined under the module.

modules/instance/main.tf

resource "aws_instance" "Tailor-Server" {
  count = var.instance_count  // Control the number of instances with a variable

  ami                    = var.ami_id
  instance_type          = var.instance_type
  subnet_id              = var.subnet_id
  key_name               = var.key_name
  vpc_security_group_ids = [var.security_group_id]

  tags = {
    Name = format("Tailor-Server%02d", count.index + 1)  // Naming instances with a sequential number
  }

  root_block_device {
    volume_type           = "gp2"
    volume_size           = 30
    delete_on_termination = true
  }
}


modules/instance/variables.tf

Each variable serves as an input that can be set externally when the module is called, allowing for flexibility and reusability of the module across different environments or scenarios.

So here we define the list of inputs the module needs. We provide the actual parameters to these variables when calling the module in main.tf.

Cheat sheet:

ami_id: Specifies the Amazon Machine Image (AMI) ID that will be used to launch the EC2 instances. The AMI determines the operating system and software configurations that will be loaded onto the instances when they are created.

instance_type: Determines the type of EC2 instance to launch. This affects the computing resources available to the instance (CPU, memory, etc.).

Type: It is expected to be a string that matches one of AWS’s predefined instance types (e.g., t2.micro, m5.large).

key_name: Specifies the name of the key pair to be used for SSH access to the EC2 instances. This key should already exist in the AWS account.

subnet_id: Identifies the subnet within which the EC2 instances will be launched. The subnet is part of a specific VPC (Virtual Private Cloud).

instance_names: A list of names to be assigned to the instances. This helps in identifying the instances within the AWS console or when querying using the AWS CLI.

security_group_id: Specifies the ID of the security group to attach to the EC2 instances. Security groups act as a virtual firewall for your instances to control inbound and outbound traffic.

• We are also adding a count here so we can scale EC2s very efficiently; especially if you have a lot of hands in the pot, it keeps things very easy to manage.

variable "ami_id" {}

variable "instance_type" {}

variable "key_name" {}

variable "subnet_id" {}

variable "instance_names" {
  type        = list(string)
  description = "List of names for the instances to create."
  default     = []  // Not set by the module call in main.tf; defaulted so the plan doesn't prompt for it
}

variable "security_group_id" {
  description = "Security group ID to assign to the instance"
  type        = string
}

variable "instance_count" {
  description = "The number of instances to create"
  type        = number
  default     = 1  // Default to one instance if not specified
}


Time to deploy your code. I didn't bother showing the plan here, just the apply:

my-terraform-vpc$ terraform apply

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_subnet.newsubnet: Destroying… [id=subnet-016181a8999a58cb4]
aws_subnet.newsubnet: Destruction complete after 1s
aws_subnet.newsubnet: Creating…
aws_subnet.newsubnet: Still creating… [10s elapsed]
aws_subnet.newsubnet: Creation complete after 11s [id=subnet-0a5914443d2944510]
module.web_server.aws_instance.Tailor-Server[1]: Creating…
module.web_server.aws_instance.Tailor-Server[0]: Creating…
module.web_server.aws_instance.Tailor-Server[1]: Still creating… [10s elapsed]
module.web_server.aws_instance.Tailor-Server[0]: Still creating… [10s elapsed]
module.web_server.aws_instance.Tailor-Server[0]: Still creating… [20s elapsed]
module.web_server.aws_instance.Tailor-Server[1]: Still creating… [20s elapsed]
module.web_server.aws_instance.Tailor-Server[1]: Still creating… [30s elapsed]
module.web_server.aws_instance.Tailor-Server[0]: Still creating… [30s elapsed]
module.web_server.aws_instance.Tailor-Server[0]: Still creating… [40s elapsed]
module.web_server.aws_instance.Tailor-Server[1]: Still creating… [40s elapsed]
module.web_server.aws_instance.Tailor-Server[1]: Still creating… [50s elapsed]
module.web_server.aws_instance.Tailor-Server[0]: Still creating… [50s elapsed]
module.web_server.aws_instance.Tailor-Server[0]: Creation complete after 52s [id=i-0d103937dcd1ce080]
module.web_server.aws_instance.Tailor-Server[1]: Still creating… [1m0s elapsed]
module.web_server.aws_instance.Tailor-Server[1]: Still creating… [1m10s elapsed]
module.web_server.aws_instance.Tailor-Server[1]: Creation complete after 1m12s [id=i-071bac658ce51d415]

Apply complete! Resources: 3 added, 0 changed, 1 destroyed.

Outputs:

newcpanel_sg_id = "sg-0df86c53b5de7b348"
public_ips = [
  "34.219.34.165",
  "35.90.247.94",
]

Results:

VPC successful:

EC2 successful:

Security-Groups:

Key Pairs:

Ec2 assigned SG group:

How to deploy an EC2 instance in AWS with Terraform


  • How to install terraform
  • How to configure your aws cli
  • How to set up your file structure
  • How to deploy your instance
  • You must have an AWS account already set up
    • You have an existing VPC
    • You have existing security groups

It depends on which machine you like to use; I use varied distros for fun.

For this we will use Ubuntu 22.04.

How to install terraform

  • Once you are logged into your linux jump box or whatever you choose to manage.

wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update && sudo apt install terraform


ThanosJumpBox:~/myterraform$ terraform -v

Terraform v1.8.2

on linux_amd64

+ provider registry.terraform.io/hashicorp/aws v5.47.0

  • Okay next you want to install the awscli

sudo apt update

sudo apt install awscli

2. Now you need to go into AWS and create a user and an AWS CLI key

  • Log into your aws console
  • Go to IAM
    • Under Users, create a user called Terrform-thanos

Next you want to either create a group or add the user to an existing one. To make things easy, for now we are going to add it to the administrator group.

Next click on the new user and create the ACCESS KEY

Next select the use case for the key

Once you create the ACCESS-KEY you will see the key and secret

Copy these to a text pad and save them somewhere safe.

Next we are going to create the RSA key pair:

  • Go under EC2 Dashboard
  • Then Network & Security
  • Then Key Pairs
  • Create a new key pair and give it a name

Now configure your Terraform to use the credentials:

thanosjumpbox:~/myterraform$ aws configure

AWS Access Key ID [****************RKFE]:

AWS Secret Access Key [****************aute]:

Default region name [us-west-1]:

Default output format [None]:


So a good Terraform file structure to use in a work environment would be:

my-terraform-project/
├── main.tf
├── variables.tf
├── outputs.tf
├── provider.tf
├── modules/
│   ├── vpc/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── ec2/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── prod/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── terraform.tfstate
├── terraform.tfvars
└── .gitignore

That said, for the purposes of this post we will keep it simple. I will be adding separate posts to deploy VPCs, autoscaling groups, security groups, etc.

This is also very easy to view if you use VS Code to connect to your Linux machine.

mkdir myterraform

cd myterraform

touch main.tf outputs.tf variables.tf


So we are going to create an instance as follows:

• EC2 in my existing VPC
• Using the Amazon Linux 2023 AMI (the AMI catalog will have the ID)
• ami-0827b6c5b977c020e (ID)
• t2.micro instance type
• using a subnet that is available in the us-west-1 zone for my VPC
• You can find the ID in the console under VPC > Subnets
• Security groups, again, will be found under Network & Security > Security Groups
• Use a general-purpose 30G SSD volume
• which is using a custom security group I created earlier
• The outputs will be provided via outputs.tf

Main.tf

provider "aws" {
  region = var.region
}

resource "aws_instance" "my_instance" {
  ami           = "ami-0827b6c5b977c020e"  # Use a valid AMI ID for your region
  instance_type = "t2.micro"               # Free Tier eligible instance type
  key_name      = ""                       # Ensure this key pair is already created in your AWS account

  subnet_id              = "subnet-0e80683fe32a75513"  # Ensure this is a valid subnet in your VPC
  vpc_security_group_ids = ["sg-0db2bfe3f6898d033"]    # Ensure this is a valid security group ID

  tags = {
    Name = "thanos-lives"
  }

  root_block_device {
    volume_type = "gp2"  # General Purpose SSD, which is included in the Free Tier
    volume_size = 30     # Maximum size covered by the Free Tier
  }
}

Outputs.tf

output "instance_ip_addr" {
  value       = aws_instance.my_instance.public_ip
  description = "The public IP address of the EC2 instance."
}

output "instance_id" {
  value       = aws_instance.my_instance.id
  description = "The ID of the EC2 instance."
}

output "first_security_group_id" {
  value       = tolist(aws_instance.my_instance.vpc_security_group_ids)[0]
  description = "The first Security Group ID associated with the EC2 instance."
}

Variables.tf

variable "region" {
  description = "The AWS region to create resources in."
  default     = "us-west-1"
}

variable "ami_id" {
  description = "The AMI ID to use for the server."
}

terraform.tfvars

region = "us-west-1"

ami_id = "ami-0827b6c5b977c020e"  # Replace with your chosen AMI ID
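One thing to note: main.tf above hardcodes the AMI, so this ami_id value is only consumed if you point the instance at the variable instead (ami = var.ami_id); as written, only region actually flows through from the variables.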

Deploying your code:

thanosjumpbox:~/my-terraform$ terraform init

Initializing the backend…

Initializing provider plugins…

Reusing previous version of hashicorp/aws from the dependency lock file

Using previously-installed hashicorp/aws v5.47.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

thanosjumpbox:~/my-terraform$ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.my_instance will be created
  + resource "aws_instance" "my_instance" {
      + ami                                  = "ami-0827b6c5b977c020e"
      + arn                                  = (known after apply)
      + associate_public_ip_address          = (known after apply)
      + availability_zone                    = (known after apply)
      + cpu_core_count                       = (known after apply)
      + cpu_threads_per_core                 = (known after apply)
      + disable_api_stop                     = (known after apply)
      + disable_api_termination              = (known after apply)
      + ebs_optimized                        = (known after apply)
      + get_password_data                    = false
      + host_id                              = (known after apply)
      + host_resource_group_arn              = (known after apply)
      + iam_instance_profile                 = (known after apply)
      + id                                   = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_lifecycle                   = (known after apply)
      + instance_state                       = (known after apply)
      + instance_type                        = "t2.micro"
      + ipv6_address_count                   = (known after apply)
      + ipv6_addresses                       = (known after apply)
      + key_name                             = "nicktailor-aws"
      + monitoring                           = (known after apply)
      + outpost_arn                          = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      + placement_partition_number           = (known after apply)
      + primary_network_interface_id         = (known after apply)
      + private_dns                          = (known after apply)
      + private_ip                           = (known after apply)
      + public_dns                           = (known after apply)
      + public_ip                            = (known after apply)
      + secondary_private_ips                = (known after apply)
      + security_groups                      = (known after apply)
      + source_dest_check                    = true
      + spot_instance_request_id             = (known after apply)
      + subnet_id                            = "subnet-0e80683fe32a75513"
      + tags                                 = {
          + "Name" = "Thanos-lives"
        }
      + tags_all                             = {
          + "Name" = "Thanos-lives"
        }
      + tenancy                              = (known after apply)
      + user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = [
          + "sg-0db2bfe3f6898d033",
        ]

      + root_block_device {
          + delete_on_termination = true
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + tags_all              = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = 30
          + volume_type           = "gp2"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + first_security_group_id = "sg-0db2bfe3f6898d033"
  + instance_id             = (known after apply)
  + instance_ip_addr        = (known after apply)

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

thanosjumpbox:~/my-terraform$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.my_instance will be created
  + resource "aws_instance" "my_instance" {
      + ami                                  = "ami-0827b6c5b977c020e"
      + arn                                  = (known after apply)
      + associate_public_ip_address          = (known after apply)
      + availability_zone                    = (known after apply)
      + cpu_core_count                       = (known after apply)
      + cpu_threads_per_core                 = (known after apply)
      + disable_api_stop                     = (known after apply)
      + disable_api_termination              = (known after apply)
      + ebs_optimized                        = (known after apply)
      + get_password_data                    = false
      + host_id                              = (known after apply)
      + host_resource_group_arn              = (known after apply)
      + iam_instance_profile                 = (known after apply)
      + id                                   = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_lifecycle                   = (known after apply)
      + instance_state                       = (known after apply)
      + instance_type                        = "t2.micro"
      + ipv6_address_count                   = (known after apply)
      + ipv6_addresses                       = (known after apply)
      + key_name                             = "nicktailor-aws"
      + monitoring                           = (known after apply)
      + outpost_arn                          = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      + placement_partition_number           = (known after apply)
      + primary_network_interface_id         = (known after apply)
      + private_dns                          = (known after apply)
      + private_ip                           = (known after apply)
      + public_dns                           = (known after apply)
      + public_ip                            = (known after apply)
      + secondary_private_ips                = (known after apply)
      + security_groups                      = (known after apply)
      + source_dest_check                    = true
      + spot_instance_request_id             = (known after apply)
      + subnet_id                            = "subnet-0e80683fe32a75513"
      + tags                                 = {
          + "Name" = "Thanos-lives"
        }
      + tags_all                             = {
          + "Name" = "Thanos-lives"
        }
      + tenancy                              = (known after apply)
      + user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = [
          + "sg-0db2bfe3f6898d033",
        ]

      + root_block_device {
          + delete_on_termination = true
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + tags_all              = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = 30
          + volume_type           = "gp2"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + first_security_group_id = "sg-0db2bfe3f6898d033"
  + instance_id             = (known after apply)
  + instance_ip_addr        = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_instance.my_instance: Creating…
aws_instance.my_instance: Still creating… [10s elapsed]
aws_instance.my_instance: Still creating… [20s elapsed]
aws_instance.my_instance: Creation complete after 22s [id=i-0ee382e24ad28ecb8]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

first_security_group_id = "sg-0db2bfe3f6898d033"
instance_id = "i-0ee382e24ad28ecb8"
instance_ip_addr = "50.18.90.217"

Result:

TightVNC Security Hole

Virtual Network Computing (VNC) is a graphical desktop-sharing system that uses the Remote Frame Buffer protocol (RFB) to remotely control another computer. It transmits the keyboard and mouse input from one computer to another, relaying the graphical-screen updates, over a network.[1]

VNC servers work on a variety of platforms, allowing you to share screens and keyboards between Windows, Mac, Linux, and Raspberry Pi devices. The RDP server is proprietary and only works with one operating system. As for VNC vs RDP performance, RDP provides a better and faster remote connection.

There are a number of reasons why people use it.

• RDP requires licenses and VNC does not.
• You can also have multiple sessions per user.
• You can set it to connect to an existing session (which is what a lot of folks use it for).
• It can be used on multiple OSes, including Linux, while RDP is just for Windows.

There are a few VNC tools out there.

RealVNC

 Has an enterprise version
 Requires licenses
 Has no AD authentication
 Has decent encryption

UltraVNC – Best one to use.

 Has AD authentication
 Has good encryption
 File transfer inside the VNC connection
 Multi-user connections and connecting to existing sessions
 Loads of features the others don't have
 Is considered the most secure
 Free for personal and commercial use
 Available through the chocolatey package manager

Tight-VNC – Security Hole

 Free
 Has encryption, but it's DES with an 8-character limit
 Available through the chocolatey package manager

.

Tight-VNC has its encryption algorithm hardcoded into the software, and it appears its encryption standards have NOT been updated in years.

.

.

DES Encryption used

# This is hardcoded in VNC applications like TightVNC. (The function wrapper
# below is added so the excerpt runs as-is; the function name is mine.)
function ConvertTo-TightVncPassword {
    [CmdletBinding()]
    param([Parameter(Mandatory)][securestring]$Password)

    $magicKey = [byte[]]@(0xE8, 0x4A, 0xD6, 0x60, 0xC4, 0x72, 0x1A, 0xE0)
    $ansi = [System.Text.Encoding]::GetEncoding(
        [System.Globalization.CultureInfo]::CurrentCulture.TextInfo.ANSICodePage)

    $pass = [System.Net.NetworkCredential]::new('', $Password).Password
    $byteCount = $ansi.GetByteCount($pass)
    if ($byteCount -gt 8) {
        $err = [System.Management.Automation.ErrorRecord]::new(
            [ArgumentException]'Password must not exceed 8 characters',
            'PasswordTooLong',
            [System.Management.Automation.ErrorCategory]::InvalidArgument,
            $null)
        $PSCmdlet.WriteError($err)
        return
    }

    # Pad (or zero-fill) the password to exactly 8 ANSI bytes.
    $toEncrypt = [byte[]]::new(8)
    $null = $ansi.GetBytes($pass, 0, $pass.Length, $toEncrypt, 0)

    $des = $encryptor = $null
    try {
        # Single DES block, no padding, fixed key, zero IV.
        $des = [System.Security.Cryptography.DES]::Create()
        $des.Padding = 'None'
        $encryptor = $des.CreateEncryptor($magicKey, [byte[]]::new(8))

        $data = [byte[]]::new(8)
        $null = $encryptor.TransformBlock($toEncrypt, 0, $toEncrypt.Length, $data, 0)

        , $data
    }
    finally {
        if ($encryptor) { $encryptor.Dispose() }
        if ($des) { $des.Dispose() }
    }
}

.

What this means is: IF you are using admin credentials on your machine while running Tight-VNC, a hacker far better than I could gain access to your infrastructure by simply glimpsing the windows registry. I'm sure there are other ways to exploit it, too.

I will demonstrate:

Now, you can install Tight-vnc manually or via chocolatey. I used chocolatey, installing from a publicly available repo.
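If you go the chocolatey route, the install is a one-liner (assuming the community package id is tightvnc):

choco install tightvnc -y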

.

.

Now let's set the password by right-clicking the tightvnc icon in the bottom corner, clicking on change primary password, and typing in an 8-character password of whatever you like; I used:

'Suck3r00'

.


.

.

Now let's open powershell without administrator privileges. Let's say I got in remotely, chocolatey is there, and I want to check whether tight-vnc is installed.

.


.

As you can see, I found this without administrator privileges.

.

Now let's say I was able to view the registry and grab the encrypted value for tight-vnc; all I need is to see it for a few seconds.

.

.

There have been tools online to convert that hexadecimal into binary and decimal values since long before AI was around. But since I love GPT, I'm going to ask it to convert it for me.
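For completeness, here is a minimal sketch of the reverse step. The function name is mine, and the registry location (HKLM:\SOFTWARE\TightVNC\Server, REG_BINARY value Password) is assumed from a default install; all it does is run the same hardcoded DES key in decrypt mode:

function ConvertFrom-TightVncPassword {
    [CmdletBinding()]
    param([Parameter(Mandatory)][byte[]]$Encrypted)

    # Same hardcoded key as the encryption routine above.
    $magicKey = [byte[]]@(0xE8, 0x4A, 0xD6, 0x60, 0xC4, 0x72, 0x1A, 0xE0)
    $ansi = [System.Text.Encoding]::GetEncoding(
        [System.Globalization.CultureInfo]::CurrentCulture.TextInfo.ANSICodePage)

    $des = [System.Security.Cryptography.DES]::Create()
    $des.Padding = 'None'
    $decryptor = $des.CreateDecryptor($magicKey, [byte[]]::new(8))
    try {
        $plain = [byte[]]::new(8)
        $null = $decryptor.TransformBlock($Encrypted, 0, 8, $plain, 0)
        # Drop the NUL padding and decode with the ANSI code page.
        $ansi.GetString($plain).TrimEnd([char]0)
    }
    finally {
        $decryptor.Dispose()
        $des.Dispose()
    }
}

# Example usage (assumed default registry path): recover the plaintext.
$enc = (Get-ItemProperty 'HKLM:\SOFTWARE\TightVNC\Server' -Name Password).Password
ConvertFrom-TightVncPassword -Encrypted $enc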

.

.

I have a script that didn't take long to put together after digging around online for about an hour. I'm obviously not going to share it, BUT if I can do it, someone with skills could do it pretty easily. A professional hacker? NO SWEAT.


.

As you can see, if you have rolled this out, this is how dangerous it is.

Having said that, I have also written an Ansible role which will purge tightvnc from your infrastructure and deploy ultravnc, which will use encryption and AD authentication; the other two currently do NOT.

.

Hope you enjoyed getting P0WNed.

.


How to Deploy VMs in Hyper-V with Ansible

Thought it would be fun to do…..

If you can find another public repo that has this working online, please send me a message so I can kick myself.

 This role will allow you to use a vhdx image to deploy vm's in hyperv
 It will use the vm name to create a sub-folder to place the new vm image in
 It will configure the network switch
 It will set up the vlan tag/id and enable it
 It will also set the smart-paging file location to the destination path of the vm
 It will configure the OS network configuration
 It will power on the machine and wait for a successful response
 It can also remove vms
 You can also call the role with tags if you want

.

How to use this role (the ansible-hyperv repo is set to private; you must request access):

1. You must first download the git repository into your roles directory, usually ansible/roles/
2. Now edit the hosts file for your environment, or create it if it doesn't exist, under your ansible/inventory/<dev|staging|prod> directory. This is a good way to separate environments with ansible; inside each environment you should have a hosts file as indicated below.

Example file: hosts.dev, hosts.staging, hosts.prod

b. Put your server under the appropriate group inside the file and save
i. testmachine1.nicktailor.com ansible_host=192.168.1.101 (the ip points to the hypervisor)

Note: If there is no group, simply list the server outside any grouping; the --limit flag will pick it up.
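For example, a minimal hosts.dev entry might look like this (the group name is just an illustration):

[hyperv]
testmachine1.nicktailor.com ansible_host=192.168.1.101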

3. Now inside this directory you should see hosts & host_vars, group_vars

Descriptions:

c. Hosts – where you list your servers under specific groups; the group tells the playbook what the server is, whether a specific task should run on it, and how to find it.
d. Host_vars – inside this directory you create a file per server, named exactly as the server is listed under hosts. In these files you pass variable parameters to the specific roles when running your playbook; without them the playbook can't do the tasks you want it to.
e. Group_vars – a way to group variables for sets of servers, which keeps the code cleaner and easier to manage.

Operational Use:

4. Move inside host_vars
f. cd host_vars
g. create a file called {{ servername }} and save it; for us that is testmachine1.nicktailor.com
h. add the following parameters to that file and save.

passed parameters example: inventory/host_vars/testmachine1.nicktailor.com

vms:
  - type: testserver
    name: nicktest

    cpu: 2
    memory: 4096MB

    network:
      ip: 192.168.23.26
      netmask: 255.255.255.0
      gateway: 192.168.23.254
      dns: 192.168.0.17,192.168.0.18

#    network_switch: 'External Virtual Switch'
    network_switch: 'Cisco VIC Ethernet Interface #6 - Virtual Switch'
    vlanid: 1113

#   source-image
    src_vhd: 'Z:\volumes\devops\devopssysprep\devopssysprep.vhdx'

#   destination will be created in Z:\volumes\servername\servername.vhdx by default
#   to change the paths you need to update the first three task paths in prov_vm.yml

Running your playbook:

1. You must always run your playbook from inside the parent directory, "ansible"
2. There is a playbook called createvm.yml in the ansible directory which simply calls the ansible-hyperv role inside the roles directory.

Example of ansible/createvm.yml:

- name: Provision VM
  hosts: hypervdev.nicktailor.com
  gather_facts: no

  tasks:
    - import_tasks: roles/ansible-hyperv/tasks/prov_vm.yml

.

Command:

ansible-playbook -i inventory/dev/hosts createvm.yml --limit='testmachine1.nicktailor.com'

 -i : tells the ansible-playbook command which hosts file to use; these are always defined per environment, like hosts.dev or hosts.staging
 -u : the ssh_user you will be connecting to the servers with
 -Kkb : tells ansible that you will be using sudo su - for the ssh_user when running all roles/tasks
 --ask-become : become root
 --limit='server' : lets you segment which server you want to run the playbook against.

.

Successful example run of the playbook:

.

[ntailor@ansible-home ~]$ ansible-playbook -i inventory/hosts createvm.yml --limit='testmachine1.nicktailor.com'

.

PLAY [Provision VM] ****************************************************************************************************************************************************************

.

TASK [Create directory structure] **************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [Check whether vhdx already exists] *******************************************************************************************************************************************

ok: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [Clone vhdx] ******************************************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘changed’: False, ‘invocation’: {module_args: {‘path’: ‘Z:\\\\volumes\\\\devops\\nicktest\\nicktest.vhdx, checksum_algorithm: ‘sha1’, get_checksum: False, ‘follow’: False, ‘get_md5’: False}}, ‘stat’: {‘exists’: False}, ‘failed’: False, ‘item’: {‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx}, ansible_loop_var: ‘item’})

.

TASK [set_fact] ********************************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [debug] ***********************************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com] => {

    path_folder: “Z:\\\\volumes\\\\devops\\nicktest\\nicktest.vhdx”

}

.

TASK [set_fact] ********************************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [debug] ***********************************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com] => {

    page_folder: “Z:\\\\volumes\\\\devops\\nicktest”

}

.

TASK [Create VMs] ******************************************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [Set SmartPaging File Location for new Virtual Machine to use destination image path] *****************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [Set Network VlanID] **********************************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [Configure VMs IP] ************************************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [add_host] ********************************************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘changed’: True, ‘failed’: False, ‘item’: {‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx}, ansible_loop_var: ‘item’})

.

TASK [Poweron VMs] *****************************************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [Wait for VM to be running] ***************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com -> localhost] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [debug] ***********************************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com] => {

    “wait”: {

        “changed”: false,

        msg: “All items completed”,

        “results”: [

            {

                ansible_loop_var: “item”,

                “changed”: false,

                “elapsed”: 82,

                “failed”: false,

                “invocation”: {

                    module_args: {

                        active_connection_states: [

                            “ESTABLISHED”,

                            “FIN_WAIT1”,

                            “FIN_WAIT2”,

                            “SYN_RECV”,

                            “SYN_SENT”,

                            “TIME_WAIT”

                        ],

                        connect_timeout: 5,

                        “delay”: 0,

                        exclude_hosts: null,

                        “host”: “192.168.23.36”,

                        msg: null,

                        “path”: null,

                        “port”: 5986,

                        search_regex: null,

                        “sleep”: 1,

                        “state”: “started”,

                        “timeout”: 100

                    }

                },

                “item”: {

                    cpu: 2,

                    “memory”: “4096MB”,

                    “name”: nicktest,

                    “network”: {

                        dns: 192.168.0.17,192.168.0.18,

                        “gateway”: “192.168.23.254”,

                        ip: “192.168.23.36”,

                        “netmask”: “255.255.255.0”

                    },

                    network_switch: “Cisco VIC Ethernet Interface #6 – Virtual Switch”,

                    src_vhd: “C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx”,

                    “type”: testservers,

                    vlanid: 1113

                },

                match_groupdict: {},

                match_groups: [],

                “path”: null,

                “port”: 5986,

                search_regex: null,

                “state”: “started”

            }

        ]

    }

}

.

PLAY RECAP *************************************************************************************************************************************************************************

testmachine1.nicktailor.com      : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

.


How to Configure Redhat 7 & 8 Network Interfaces using Ansible

 This role will configure redhat 7 and up interfaces for both virtual and physical machines
(bonded nics, gateways, routes, interface names)

How to use this role:

1. You must first download the git repository into your roles directory, usually ansible/roles/
2. Now edit the hosts file for your environment, or create it if it doesn't exist, under your ansible/inventory/<dev|staging|prod> directory. This is a good way to separate environments with ansible; inside each environment you should have a hosts file as indicated below.

Example file: hosts.dev, hosts.staging, hosts.prod

c. Put your server under the appropriate group inside the file and save
d. testmachine1 ansible_host=192.168.1.101

.

Cool Stuff: If you deployed a virtual machine using the ansible-vmware modules, it will set the hostname of the host using the short name of the vm. If you require the FQDN instead of the short name on the host, I added some code to set the FQDN as the new_hostname if you define it under your hosts file, as shown below (and in the sketch that follows).

e. testmachine1 ansible_host=192.168.1.101 new_hostname=testmachine1.nicktailor.com

.
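A minimal sketch of what that logic amounts to, assuming the role applies it with Ansible's standard hostname module:

- name: Set the FQDN when new_hostname is defined in the hosts file
  hostname:
    name: "{{ new_hostname }}"
  when: new_hostname is defined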

3. Now inside this directory you should see hosts & host_vars, group_vars

Descriptions:

f. Hosts – where you list your servers under specific groups; the group tells the playbook what the server is, whether a specific task should run on it, and how to find it.
g. Host_vars – inside this directory you create a file per server, named exactly as the server is listed under hosts. In these files you pass variable parameters to the specific roles when running your playbook; without them the playbook can't do the tasks you want it to.
h. Group_vars – a way to group variables for sets of servers, which keeps the code cleaner and easier to manage.

Operational Use:

4. Move inside host_vars
i. cd host_vars
j. create a file called {{ servername }} and save it; for us that is testmachine1.nicktailor.com
k. add the following parameters to that file and save.

passed parameters: example: var/testmachine1

#Configure network, can be used on physical and virtual machines

nic_devices:
    - device: ens192
      ip: 192.168.10.100
      nm: 255.255.255.0
      gw: 192.168.10.254
      uuid:
      mac:

.

Note: you do not need to specify the UUID, though you can if you wish. You do need the MAC if you are doing bonded nics on the host. If you are using physical machines with satellite deployments, it's probably a good idea to use the mac of the nic you want the dhcp request to hit, to avoid accidentally deploying to the wrong host; with physical machines you don't have the same forgiveness of snapshots or quick rebuilds that you get with a vm. You can do more complicated configurations, as indicated below. You can always email or contact me via linkedin (top right of the blog) if you need assistance.
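For reference, the simple example above corresponds to a standard ifcfg file along these lines (a sketch of typical RHEL syntax, not the role's exact template output):

# /etc/sysconfig/network-scripts/ifcfg-ens192
DEVICE=ens192
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.10.100
NETMASK=255.255.255.0
GATEWAY=192.168.10.254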

More Advanced configurations: bonded nics, routes, multiple nics and gateways

bond_devices:
    - device: ens1
      mac: ec:0d:9a:05:3b:f0
      master: mgt
      eth_opts: '-C ${DEVICE} adaptive-rx off rx-usecs 0 rx-frames 0; -K ${DEVICE} lro off'
    - device: ens1d1
      mac: ec:0d:9a:05:3b:f1
      master: mgt
      eth_opts: '-C ${DEVICE} adaptive-rx off rx-usecs 0 rx-frames 0; -K ${DEVICE} lro off'
    - device: mgt
      ip: 10.100.1.2
      nm: 255.255.255.0
      gw: 10.100.1.254
      pr: ens1
    - device: ens6
      mac: ec:0d:9a:05:16:g0
      master: app
    - device: ens6d1
      mac: ec:0d:9a:05:16:g1
      master: app
    - device: app
      ip: 10.101.1.3
      nm: 255.255.255.0
      pr: ens6

routes:
    - device: app
      route:
        - 100.240.136.0/24
        - 100.240.138.0/24

    - device: app
      gw: 10.156.177.1
      route:
        - 10.156.148.0/24

.
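On disk, those route entries typically land in route-<device> files along these lines (again, standard RHEL syntax rather than the role's exact output):

# /etc/sysconfig/network-scripts/route-app
100.240.136.0/24 dev app
100.240.138.0/24 dev app
10.156.148.0/24 via 10.156.177.1 dev app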

.

Running your playbook:

1. You must always run your playbook from inside the parent directory, "ansible"
2. There is a playbook called setup-networkonly.yml in the ansible directory which simply calls the setup-redhat-interfaces role inside the roles directory.

Example of ansible/setup-networkonly.yml:

- hosts: all
  gather_facts: no
  roles:
   - role: setup-redhat-interfaces

.

Command:

ansible-playbook -i inventory/dev/hosts setup-networkonly.yml --limit='testmachine1.nicktailor.com'

.

 -i : tells the ansible-playbook command which hosts file to use; these are always defined per environment, like hosts.dev or hosts.staging
 -u : the ssh_user you will be connecting to the servers with
 -Kkb : tells ansible that you will be using sudo su - for the ssh_user when running all roles/tasks
 --ask-become : become root
 --limit='server' : lets you segment which server you want to run the playbook against.

.

.

Test Run:

[root@ansible-home]# ansible-playbook -i inventory/dev/hosts setup-networkonly.yml --limit='testmachine1.nicktailor.com' -k

SSH password:

.

PLAY [all] *************************************************************************************************************************************************************************

.

TASK [setup-redhat-network : Gather facts] ************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : set_fact] ****************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : Cleanup network confguration] ********************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : find] ********************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : file] ********************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={u’rusr: True, u’uid: 0, u’rgrp: True, u’xoth: False, u’islnk: False, u’woth: False, u’nlink: 1, u’issock: False, u’mtime: 1530272815.953706, u’gr_name: u’root‘, u’path: u’/etc/sysconfig/network-scripts/ifcfg-enp0s3′, u’xusr: False, u’atime: 1665494779.63, u’inode: 1055173, u’isgid: False, u’size: 285, u’isdir: False, u’ctime: 1530272816.3037066, u’isblk: False, u’wgrp: False, u’xgrp: False, u’isuid: False, u’dev: 64769, u’roth: True, u’isreg: True, u’isfifo: False, u’mode: u’0644′, u’pw_name: u’root‘, u’gid: 0, u’ischr: False, u’wusr: True})

changed: [testmachine1.nicktailor.com] => (item={u’rusr: True, u’uid: 0, u’rgrp: True, u’xoth: False, u’islnk: False, u’woth: False, u’nlink: 1, u’issock: False, u’mtime: 1530272848.538762, u’gr_name: u’root‘, u’path: u’/etc/sysconfig/network-scripts/ifcfg-enp0s8′, u’xusr: False, u’atime: 1665494779.846, u’inode: 2769059, u’isgid: False, u’size: 203, u’isdir: False, u’ctime: 1530272848.6417623, u’isblk: False, u’wgrp: False, u’xgrp: False, u’isuid: False, u’dev: 64769, u’roth: True, u’isreg: True, u’isfifo: False, u’mode: u’0644′, u’pw_name: u’root‘, u’gid: 0, u’ischr: False, u’wusr: True})

.

TASK [setup-redhat-network : file] ********************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : Setup bond devices] ******************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={u’device: u’enp0s8′, u’mac: u’08:00:27:13:b2:73′, u’master: u’mgt‘})

changed: [testmachine1.nicktailor.com] => (item={u’device: u’enp0s9′, u’mac: u’08:00:27:e8:cf:cd’, u’master: u’mgt‘})

changed: [testmachine1.nicktailor.com] => (item={u’device: u’mgt‘, u’ip: u’192.168.10.200‘, u’nm: u’255.255.255.0′, u’gw: u’10.0.2.2′, u’pr: u’enp0s8′})

.

TASK [setup-redhat-network : Setup NIC] ***************************************************************************************************************************************

.

TASK [setup-redhat-network : Setup static routes] *****************************************************************************************************************************

.

PLAY RECAP *************************************************************************************************************************************************************************

testmachine1.nicktailor.com : ok=7    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

.

[root@testmachine1.nicktailor.com]# cat /proc/net/bonding/mgt

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

.

Bonding Mode: fault-tolerance (active-backup)

Primary Slave: enp0s8 (primary_reselect failure)

Currently Active Slave: enp0s8

MII Status: up

MII Polling Interval (ms): 100

Up Delay (ms): 0

Down Delay (ms): 0

.

Slave Interface: enp0s8

MII Status: up

Speed: 1000 Mbps

Duplex: full

Link Failure Count: 0

Permanent HW addr: 08:00:27:13:b2:73

Slave queue ID: 0

.

Slave Interface: enp0s9

MII Status: up

Speed: 1000 Mbps

Duplex: full

Link Failure Count: 0

Permanent HW addr: 08:00:27:e8:cf:cd

Slave queue ID: 0

.

[root@testmachine1.nicktailor.com]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 08:00:27:63:63:0e brd ff:ff:ff:ff:ff:ff

    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3

       valid_lft 86074sec preferred_lft 86074sec

    inet6 fe80::a162:1b49:98b7:6c54/64 scope link noprefixroute

       valid_lft forever preferred_lft forever

3: enp0s8: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master mgt state UP group default qlen 1000

    link/ether 08:00:27:13:b2:73 brd ff:ff:ff:ff:ff:ff

4: enp0s9: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master mgt state UP group default qlen 1000

    link/ether 08:00:27:13:b2:73 brd ff:ff:ff:ff:ff:ff

5: enp0s10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 08:00:27:05:b4:e8 brd ff:ff:ff:ff:ff:ff

6: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN group default qlen 1000

    link/ether ae:db:dc:52:22:f8 brd ff:ff:ff:ff:ff:ff

7: mgt: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000

    link/ether 08:00:27:13:b2:73 brd ff:ff:ff:ff:ff:ff

    inet 192.168.10.200/24 brd 192.168.56.255 scope global mgt

       valid_lft forever preferred_lft forever

    inet6 fe80::a00:27ff:fe13:b273/64 scope link

       valid_lft forever preferred_lft forever

.

How to Join Windows Servers to your DC with Ansible

 This role will simply join a new windows server to the domain
 You simply need to define the passed parameters in defaults/main.yml as indicated below
 This role will ask you for the domain admin password at runtime, so you will need to know it; no need to worry about vaulting the AD admin password in the code
 This role assumes your windows host is already configured to use winrm

How to use this role:

1. You must first download the git repository into your roles directory, usually ansible/roles/
2. Now edit the hosts file for your environment, or create it if it doesn't exist, under your ansible/inventory/<dev|staging|prod> directory. This is a good way to separate environments with ansible; inside each environment you should have a hosts file as indicated below.

Example file: hosts.dev, hosts.staging, hosts.prod

c. Put your server under the appropriate group inside the file and save
i. testmachine1.nicktailor.com ansible_host=192.168.1.101

Note: If there is no group, simply list the server outside any grouping; the --limit flag will pick it up.

3. Now inside this directory you should see hosts & host_vars, group_vars

Descriptions:

d. Hosts – where you list your servers under specific groups; the group tells the playbook what the server is, whether a specific task should run on it, and how to find it.
e. Host_vars – inside this directory you create a file per server, named exactly as the server is listed under hosts. In these files you pass variable parameters to the specific roles when running your playbook; without them the playbook can't do the tasks you want it to.
f. Group_vars – a way to group variables for sets of servers, which keeps the code cleaner and easier to manage.

Operational Use:

4. Move inside host_vars
g. cd host_vars
h. create a file called {{ servername }} and save it; for us that is testmachine1.nicktailor.com
i. add the following parameters and save.

passed parameters example: roles/add-server-to-dc/defaults/main.yml

dns_domain_name: ad.nicktailor.com

computer_name: testmachine1

domain_ou_path: "OU=Admin,DC=nicktailor,DC=local"

domain_admin_user: adminuser@nicktailor.com

state: domain

.
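Under the hood, the role's main task is essentially a win_domain_membership call fed by those variables, followed by a reboot when Windows asks for one. A minimal sketch (the exact task layout is assumed; the module and the reboot step are confirmed by the run output further below):

- name: Join windows host to Domain Controller
  win_domain_membership:
    dns_domain_name: "{{ dns_domain_name }}"
    hostname: "{{ computer_name }}"
    domain_ou_path: "{{ domain_ou_path }}"
    domain_admin_user: "{{ domain_admin_user }}"
    domain_admin_password: "{{ domain_pass }}"
    state: "{{ state }}"
  register: domain_state

- name: Reboot to complete the join
  win_reboot:
  when: domain_state.reboot_required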

Running your playbook:

1. You must always run your playbook from inside the parent directory, "ansible"
2. There is a playbook called joinservertodomain.yml in the ansible directory which simply calls the add-servers-to-dc role inside the roles directory.

Example of ansible/joinservertodomain.yml:

- hosts: all
  gather_facts: no

  vars_prompt:
  - name: domain_pass
    prompt: Enter Admin Domain Password

  roles:
    - role: addservers-todc

.

Command:

ansible-playbook -i inventory/dev/hosts joinservertodomain.yml --limit='testmachine1.nicktailor.com'

 -i : tells the ansible-playbook command which hosts file to use; these are always defined per environment, like hosts.dev or hosts.staging
 -u : the ssh_user you will be connecting to the servers with
 -Kkb : tells ansible that you will be using sudo su - for the ssh_user when running all roles/tasks
 --ask-become : become root
 --limit='server' : lets you segment which server you want to run the playbook against.

.

Successful example run of the playbook:

.

[alfred@ansible.nicktailor.com ~]$ ansible-playbook -i inventory/hosts joinservertodomain.yml --limit='testmachine1.nicktailor.com'

ansible-playbook 2.9.27

  config file = /etc/ansible/ansible.cfg

  configured module search path = [‘/home/alfred/.ansible/plugins/modules’, ‘/usr/share/ansible/plugins/modules’]

  ansible python module location = /usr/lib/python3.6/site-packages/ansible

  executable location = /usr/bin/ansible-playbook

  python version = 3.6.8 (default, Nov 10 2021, 06:50:23) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3.0.2)]

.

PLAYBOOK: joinservertodomain.yml *****************************************************************************************************************************************************

Positional arguments: joinservertodomain.yml

verbosity: 4

connection: smart

timeout: 10

become_method: sudo

tags: (‘all’,)

inventory: (‘/home/alfred/inventory/hosts’,)

subset: testmachine1.nicktailor.com

forks: 5

1 plays in joinservertodomain.yml

Enter Domain Password:

.

PLAY [all] ***********************************************************************************************************************************************************************

META: ran handlers

.

TASK [addservertodc : Join windows host to Domain Controller] ********************************************************************************************************************

task path: /home/alfred/roles/addservertodc/tasks/main.yml:1

Using module file /usr/lib/python3.6/site-packages/ansible/modules/windows/win_domain_membership.ps1

Pipelining is enabled.

<testmachine1.nicktailor.com> ESTABLISH WINRM CONNECTION FOR USER: ansibleuser on PORT 5986 TO testmachine1.nicktailor.com

EXEC (via pipeline wrapper)

changed: [testmachine1.nicktailor.com] => {

    “changed”: true,

    reboot_required: true

}

.

TASK [addservertodc : win_reboot] ************************************************************************************************************************************************

win_reboot: system successfully rebooted

changed: [testmachine1.nicktailor.com] => {

    “changed”: true,

    “elapsed”: 23,

    “rebooted”: true

}

META: ran handlers

META: ran handlers

.

PLAY RECAP ***********************************************************************************************************************************************************************

testmachine1.nicktailor.com       : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

.
