Sunday, February 23, 2025

Some Important Engineering Questions to ask at the end of a DevSecOps Job Interview.


A,     Can you tell me a little more about my day-to-day role?

B,    What can I do to impress the company in the first three months of my role?

C,    What are the plans for the company for the next five years?

D,    Could you describe the culture of the organization?

E,    What is the best thing about working for your company?

F,    Can you tell me more about the team I will be working with?

G,    What is the Work Culture like in the organization?

H,    How long does the company onboarding and orientation process last?

I,    Does the company have Standard Operating Procedures (SOPs) for the various departments?

J,    Does the organization have a budget allocation for capacity-building training and seminars?

K,   If I wanted to find out more about the organization while waiting for the outcome of the interview, where should I look?

L,    When and how can I contact you to see if I have been successful?

M, How many environments does your company support?

N, Do you use an IaC tool to provision resources?

O, How do you handle your state files?

P, How do you manage your code?

Q, Do all commits to the staging branch go through pull requests before being merged into the main branch?

R, How many people are in the team?

S, What will be required of me immediately if I am hired?

Special Questions that can break the ice.

1, What would be the definition of excellence and success in your environment?

2, How soon should I expect to hear from you?

3, Can you describe the team I will be working with and how collaboration is encouraged?

4, What are the key objectives of this role in the first three months?

5, How does the company support career growth and goal advancement?

6, What are the biggest challenges someone in this role may face on a daily basis?

7, What tools and technologies will I be using regularly?

8, Have you incorporated Artificial Intelligence (AI) into your automation, debugging, and security processes?

Tuesday, February 18, 2025

Atlassian Jira and Project Management

 


Atlassian Jira is a powerful tool for project management, especially in Agile and DevSecOps environments. It helps teams plan, track, and manage software development projects, issue tracking, and workflow automation.

Jira for Project Management

Jira can be customized for different project management methodologies, such as Agile (Scrum/Kanban), Waterfall, or hybrid approaches. Below are some key features:

1. Agile Project Management

  • Scrum Support: Sprint planning, backlog grooming, story points, and burndown charts.
  • Kanban Boards: Visual workflow tracking with WIP (Work In Progress) limits.
  • Roadmaps: High-level planning to track project milestones.

2. Issue & Task Tracking

  • Epics, Stories, Tasks, and Bugs: Hierarchical issue tracking.
  • Custom Workflows: Define stages (To Do → In Progress → Done).
  • Automation Rules: Auto-assign tasks, update statuses, trigger notifications.

3. Reporting & Analytics

  • Velocity Charts: Measure team progress over sprints.
  • Burnup/Burndown Charts: Track scope changes and completed work.
  • Cumulative Flow Diagrams: Identify bottlenecks in workflows.

4. Integrations & CI/CD

  • DevOps Integrations: Connect with Bitbucket, GitHub, Jenkins, and AWS services.
  • Confluence Integration: Link documentation with Jira tickets.
  • Jira Service Management (JSM): ITSM support for incident and change management.

5. Permissions & Security

  • Role-Based Access Control (RBAC): Manage user permissions.
  • Audit Logs: Track changes for compliance.
  • Encryption & SSO Support: Secure authentication and user management.

Jira Best Practices for Project Management

  1. Use Epics for Large Workstreams: Break them into stories and tasks.
  2. Define Clear Workflows: Customize statuses and transitions.
  3. Set Up Dashboards: Track KPIs and team progress in real time.
  4. Automate Repetitive Tasks: Use Jira’s automation rules.
  5. Regularly Groom Backlogs: Keep priorities aligned with business goals.

Project Kanban Board on GitHub

 



A Kanban board on GitHub is a project management tool available in GitHub Projects that helps teams visualize work, track progress, and manage tasks efficiently. Here’s how to set up and use a Kanban board on GitHub:

1. Create a Project Board

  1. Go to your repository or organization.
  2. Click on the Projects tab.
  3. Click New Project.
  4. Choose Board view (Kanban-style layout).
  5. Name your project and select Public or Private visibility.
  6. Click Create project.

2. Add Columns (Kanban Stages)

GitHub provides default columns, but you can customize them:

  • To Do – Backlog or upcoming tasks.
  • In Progress – Tasks currently being worked on.
  • Review – Tasks under code review or testing.
  • Done – Completed tasks.

To add, remove, or rename columns:

  • Click the column title and select Edit or Delete.

3. Add Issues, Pull Requests, or Notes

  • Click + Add and select Issue or Pull Request.
  • Drag and drop items between columns as they progress.

4. Automate with GitHub Actions (Optional)

  • Use automation rules to move issues automatically when:
    • An issue is assigned.
    • A PR is merged.
    • A review is requested.

5. Track Progress

  • Use filters to view tasks by assignee, labels, or milestones.
  • Enable Charts (if using GitHub Projects Beta) to visualize work.

6. Integration with GitHub Workflows

  • Link Kanban with GitHub Issues, Pull Requests, and CI/CD workflows.
  • Use webhooks to integrate with Slack, Jira, or Trello.

Saturday, February 8, 2025

Terraform Meta-Arguments & Key Concepts | Overview.

An overview of Terraform infrastructure as code (IaC)

Focus:

Tailored for DevOps, DevSecOps, and SRE.

Breakdown:

  • Intro,
  • How Terraform Works,
  • Key Features and Benefits,
  • Core concepts of Terraform,
  • twtech Most commonly used meta-arguments,
  • twtech-Insight.

Intro:

  • Terraform is an open-source Infrastructure as Code (IaC) tool that allows twtech to define, provision, and manage cloud and on-premises resources using human-readable configuration files.
  • Terraform was developed by HashiCorp (acquired by IBM in February 2025). It provides a consistent workflow for managing infrastructure across multiple service providers such as AWS, Azure, and Google Cloud.

How Terraform Works

Terraform uses a declarative approach: twtech describes the desired end state of its infrastructure, and Terraform automatically figures out the steps required to reach that state.

The core workflow involves three stages:

  1. Write: twtech defines the desired infrastructure using HashiCorp Configuration Language (HCL) or JSON in configuration files.
  2. Plan: Terraform generates an execution plan that outlines exactly which resources will be created, updated, or destroyed to match the configuration. This step allows for review before changes are applied.
  3. Apply: Upon approval, Terraform executes the plan, provisions the infrastructure, and records the real infrastructure's status in a state file, which acts as the source of truth for the twtech environment.

Key Features and Benefits

  • Multi-Cloud Support: Terraform is platform-agnostic, supporting numerous providers and services with accessible APIs, which simplifies managing hybrid or multi-cloud environments.
  • Modularity: Infrastructure configurations can be organized into reusable modules, promoting best practices, reducing redundancy, and enabling self-service infrastructure models within teams.
  • State Management: It keeps track of the current infrastructure in a state file, which helps in detecting configuration drift and ensuring accurate updates and deletions.
  • Automation: Terraform automates the provisioning process, reducing manual errors and improving efficiency and scalability.
  • Version Control: Configuration files can be stored in version control systems like Git, allowing for change tracking, collaboration, and rollbacks to previous versions

NB:

Terraform is an open-source Infrastructure as Code (IaC) tool that allows twtech to define and provision infrastructure resources in a consistent, repeatable, and automated manner. 

 Core concepts of Terraform

1. Provider:

  • A provider manages the lifecycle of a resource, such as AWS, Azure, GCP, or even other services like GitHub, Kubernetes, etc.
  • Providers contain the configurations needed to interact with the APIs of cloud platforms.
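
A provider block, as a minimal sketch (the region value is illustrative), might look like this:

    # hcl
    provider "aws" {
      region = "us-east-2"
    }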

2. Resource:

  • Resources are the infrastructure components that Terraform manages (e.g., virtual machines, networks, databases).
  • twtech defines resources in Terraform using configuration files, and Terraform manages their lifecycle (create, read, update, delete).

3. Data Sources:

  • Data sources allow twtech to fetch information about existing infrastructure resources.
  • For example, querying an existing AWS VPC or getting information about a resource created outside of Terraform.
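
As a hedged sketch (the tag name and CIDR are assumptions), a data source that looks up an existing VPC and feeds its ID into a new subnet might look like this:

    # hcl
    data "aws_vpc" "existing" {
      tags = {
        Name = "twtech-vpc"
      }
    }

    resource "aws_subnet" "twtech-subnet" {
      vpc_id     = data.aws_vpc.existing.id   # value fetched from the existing VPC
      cidr_block = "10.0.1.0/24"
    }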

4. Modules:

  • A module is a container for multiple resources that are used together.
  • Modules help in organizing and reusing code, allowing twtech to break its configuration into smaller, more manageable parts.
  • twtech can use both local modules (from a local path) and remote modules (e.g., from a GitHub repository).

5. State:

  • Terraform maintains a state file (terraform.tfstate) that tracks the infrastructure’s current state.
  • The state file is used to map twtech configuration to real-world resources and helps Terraform determine what actions need to be taken during the apply phase.

6. Variables:

  • Variables allow twtech to parameterize Terraform configuration.
  • twtech can pass in values for variables via command-line arguments, environment variables, or by defining default values in configuration files.
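
A minimal sketch of a variable and its use (names and defaults are illustrative):

    # hcl
    variable "instance_type" {
      description = "EC2 instance type for twtech workloads"
      type        = string
      default     = "t2.medium"
    }

    resource "aws_instance" "twtech-instance" {
      ami           = "ami-xxxxxxxxxxxx"
      instance_type = var.instance_type   # referenced with the var. prefix
    }

The default can be overridden at apply time, for example with terraform apply -var="instance_type=t2.large".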

7. Outputs:

  • Outputs are values that are displayed after the successful execution of a terraform apply command.
  • These values can be used to pass information between different modules or to provide information to users.
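
As an illustrative sketch, an output exposing an instance attribute might look like this:

    # hcl
    output "instance_public_ip" {
      description = "Public IP of the twtech instance"
      value       = aws_instance.twtech-instance.public_ip
    }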

8. Plan and Apply:

  • terraform plan: Previews the changes Terraform will make to twtech infrastructure based on the current configuration and state.
  • terraform apply: Applies the changes to the infrastructure and updates the state.

9. Provisioners:

  • Provisioners allow twtech to execute scripts or other actions on resources after they are created or modified (e.g., bootstrapping packages).
  • Examples include running a shell script on a virtual machine or configuring software on a newly provisioned resource.

10. Backend:

  • The backend defines where Terraform stores its state file.
  • Backends can be local (store the state files on twtech local filesystem) or remote (e.g., in an S3 bucket, Terraform Cloud, or Consul).
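
A hedged sketch of a remote S3 backend with DynamoDB state locking (bucket, key, and table names are assumptions):

    # hcl
    terraform {
      backend "s3" {
        bucket         = "twtech-terraform-state"
        key            = "prod/terraform.tfstate"
        region         = "us-east-2"
        dynamodb_table = "twtech-terraform-locks"   # enables state locking
        encrypt        = true
      }
    }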

11. Terraform CLI Commands:

  • terraform init: Initializes the working directory, configures the backend, and downloads the required provider plugins (e.g., the AWS provider).
  • terraform validate: Validates the configuration files for syntax and configuration errors.
  • terraform fmt: Rewrites the configuration files into the canonical Terraform format and style.
  • terraform destroy: Destroys the infrastructure that Terraform created and keeps a backup of the previous state file in the backend (local filesystem or remote S3 bucket).
  • terraform apply -auto-approve: Plans and applies the changes without the interactive confirmation prompt.
  • terraform import: Imports resources that were not created with Terraform so that they can henceforth be managed by Terraform.
  • terraform refresh: Updates the state file with the actual state of resources.

  • In Terraform, meta-arguments are special arguments that can be used with resources, modules, or providers to control how Terraform manages infrastructure.
  • Meta-arguments are not specific to the resource itself but affect the behavior of the Terraform operation.

twtech Most commonly used meta-arguments

1. depends_on

  • Purpose: Specifies explicit dependencies between resources, modules, or outputs. 
  • It can be used when the implicit order of resource creation is not sufficient, and twtech needs to explicitly control the order.
  • Example: If twtech wants to ensure a resource is created only after another resource is successfully created:
    # hcl
    resource "aws_security_group" "twtech-SG" {
      // security group config
    }

    resource "aws_instance" "twtech-instance" {
      depends_on = [aws_security_group.twtech-SG]
      // instance config
    }

    resource "aws_vpc" "twtech-vpc" {
      // vpc config
    }

    resource "aws_instance" "twtech-vpc-instance" {
      depends_on = [aws_vpc.twtech-vpc]
      // instance config
    }

2. count

  • Purpose: Allows twtech to create multiple instances of a resource based on a condition or number. It's one of the most powerful meta-arguments for resource scaling.
  • Example: To create multiple EC2 instances:
    # hcl
    resource "aws_instance" "twtech-instance" { count = 20 ami = "ami-xxxxxxxxxxxxxxx" instance_type = "t2.medium" }
NB:
  • This will create 20 EC2 instances.
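
If each instance needs a distinct name, count.index can be interpolated into the tags; a sketch (the tag naming is illustrative):

    # hcl
    resource "aws_instance" "twtech-instance" {
      count         = 3
      ami           = "ami-xxxxxxxxxxxxxxx"
      instance_type = "t2.medium"

      tags = {
        Name = "twtech-instance-${count.index}"   # twtech-instance-0, -1, -2
      }
    }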

3. for_each

  • Purpose: Similar to count, but instead of just an integer, twtech can provide a collection (like a list or a map).
  • It is more flexible and allows twtech to create resources dynamically based on a set of values.
  • Example: Creating an EC2 instance for each element in a list:
    # hcl
    resource "aws_instance" "family-instances" { for_each = toset(["focnha-DBinstance", "abunga-DBinstance", "atem-BDinstance"]) ami = "ami-xxxxxxxxxxxx" instance_type = "t2.large" tags = { Name = each.key } }
resource "aws_instance" "department-DBinstances" { for_each = toset(["HR-instance", "Safety-instance", "Health-instance"]) ami = "ami-xxxxxxxxxxxx" instance_type = "t2.xlarge" tags = { Name = each.key } }

NB:
Each of these resources creates three EC2 instances, named according to the list values.
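
Since for_each also accepts a map, here is a hedged sketch (bucket names and tags are assumptions) using each.key and each.value:

    # hcl
    resource "aws_s3_bucket" "twtech-buckets" {
      for_each = {
        logs    = "twtech-logs-bucket"
        backups = "twtech-backups-bucket"
      }

      bucket = each.value   # the map value becomes the bucket name

      tags = {
        Purpose = each.key   # the map key becomes the Purpose tag
      }
    }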

4. lifecycle

  • Purpose: Controls how Terraform manages the lifecycle of resources
  • Terraform uses this to define behavior like preventing resource destruction or preventing changes to certain attributes.
  • Sub-arguments:
    • create_before_destroy: Ensures that resources are created before the old ones are destroyed (useful for replacements).
    • prevent_destroy: Prevents the resource from being destroyed (useful for critical resources).
    • ignore_changes: Specifies resource attributes that should not trigger updates when their values change.
  • Example:
    # hcl
    resource "aws_security_group" "twtech-SG" { name = "twtech-SG" lifecycle { prevent_destroy = true } }
         resource "aws_instance" "twtech-instance" { 
              name = "twtech-instance" 
              lifecycle { 
         create_before_destroy= true 
         } 
     }

         resource "aws_vpc" "twtech-vpc" { 
              name = "twtech-vpc" 
              lifecycle { 
         ignore_changes =  [ami, tags, instance_type]
         } 
     }

5. provider

  • Purpose: Specifies which provider is used for a specific resource, allowing twtech to use different providers within the same configuration.
  • Example:
    # hcl
    resource "aws_s3_bucket" "twtech-bucket" { provider = aws.us_east-2 bucket = "twtech-s3-bucket" }
NB:
  • In this case, the resource uses a specific provider configuration.
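
The example above assumes an aliased provider has been defined; a minimal sketch (the alias name is assumed):

    # hcl
    provider "aws" {
      alias  = "us_east_2"
      region = "us-east-2"
    }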

6. provisioner

  • Purpose: While not strictly a "meta-argument," provisioners are used to execute scripts or commands on a resource after it is created or updated. They can be used for bootstrapping or configuration tasks.
  • Example:
    # hcl
    resource "aws_instance" "twtech-instance" { ami = "ami-xxxxxxxxxxxxxxx" instance_type = "t2.xlarge" provisioner "remote-exec" { inline = [ "echo 'Hello, World from twtech Terraform Team' > /tmp/twtechhello.txt" ] } }
         resource "aws_instance" "twtech-instance" { 
              ami = "ami-xxxxxxxxxxxxxxx" 
              instance_type = "t2.xlarge"
         
         provisioner "remote-exec" { 
               user_data = file("${path.module}/bootstrap.sh")
        } 
    }

7. module

  • Purpose: This meta-argument is used within a module to specify how it should interact with resources and configurations. It allows twtech to reuse and encapsulate configurations, making it easier to maintain complex infrastructures.
  • Example:
    # hcl
    module "vpc" { source = "./modules/vpc" cidr_block = "10.0.0.0/16" }

8. ignore_changes (within lifecycle)

  • Purpose: Specifies that changes to specific attributes of a resource should be ignored by Terraform. This is useful for attributes that might change frequently (e.g., dynamically assigned IPs).
  • Example:
    # hcl
    resource "aws_instance" "twtech-instance" { ami = "ami-xxxxxxxxxxxxx" instance_type = "t2.medium" lifecycle { ignore_changes = [ami] } }

9. connection (for provisioners)

  • Purpose: Specifies how Terraform would connect to a resource (e.g., via SSH or WinRM) for running provisioners.
  • Example:
    # hcl
    resource "aws_instance" "twtech-webappinstance" { ami = "ami-xxxxxxxxxxxxxx" instance_type = "t2.xlarge" provisioner "remote-exec" { connection { type = "ssh" user = "ec2-user" private_key = file("~/.ssh/id_rsa") host = self.public_ip } inline = [ "echo 'Hello, World from twtech Terraform Team' > /tmp/twtechhello.txt" ] } }

twtech-Insight:

  • depends_on: Specifies explicit resource dependencies.
  • count: Creates multiple instances of a resource.
  • for_each: Loops over a set of values to create resources.
  • lifecycle: Customizes resource lifecycle behavior (e.g., prevent destruction).
  • provider: Specifies which provider to use for a resource.
  • provisioner: Executes commands or scripts after resource creation.
  • module: Encapsulates a set of resources and configurations into reusable modules.
  • ignore_changes: Prevents changes to specific attributes.
  • connection: Specifies how Terraform should connect for provisioners.

Terraform workflow 

  • This involves a sequence of steps that twtech follows to manage its infrastructure using Terraform.
  • Below is an overview of the typical workflow used for infrastructure as code (IaC) with Terraform:

1. Write Configuration Files

  • Define Infrastructure: twtech starts by writing Terraform configuration files (usually with the .tf extension) that describe its infrastructure.
  • This is where twtech defines the resources, data sources, providers, and other components that it wants to manage with Terraform.
  • HCL (HashiCorp Configuration Language) is used to define resources like EC2 instances, databases, networking components, etc.
  • Example:
    # hcl

    provider "aws" { region = "us-east-2" } resource "aws_instance" "twtech-instance" { ami = "ami-xxxxxxxxxxxx" instance_type = "t2.medium" }

2. Initialize the Working Directory

  • terraform init: Before twtech can run Terraform, twtech needs to initialize the working directory. 
  • This command downloads necessary provider plugins and prepares the environment for Terraform operations.
  • Example:
    terraform init
NB:
  • This step will:
    • Download the required provider(s) based on twtech configuration.
    • Set up the backend for storing twtech state (local or remote).

3. Validate the Configuration

  • terraform validate: This command checks the syntax of twtech configuration files and ensures that there are no errors.
  • Example:
    terraform validate

4. Plan the Changes

  • terraform plan: This command creates an execution plan, showing twtech what actions Terraform will take to apply twtech changes (create, update, or destroy resources). 
  • It compares the desired state (from twtech's .tf files) with the current state (from the Terraform state file).
  • It provides a preview of the changes Terraform is about to make, allowing twtech to review before applying.
  • Example:
    terraform plan
NB:
  • Output might show a plan like:
    # plaintext

    ~ aws_instance.example
        ami:           "ami-XXXXXXXXXXXXX" => "ami-YYYYYYYYYYYY"
        instance_type: "t2.micro" => "t2.medium"

5. Apply the Changes

  • terraform apply: Once twtech reviews the plan, twtech can apply it, which will provision the actual resources described in twtech configuration files.
  • twtech will be prompted to confirm before applying, but twtech can skip the prompt using the -auto-approve flag.
  • Example:
    terraform apply
NB:
  • After applying, Terraform updates the state file to reflect the newly provisioned infrastructure.

6. Inspect and Manage the Infrastructure

  • terraform show: After applying, twtech  uses this command to inspect the current state of the infrastructure.
  • Example:
    terraform show
NB:
  • This command displays the current state of twtech resources, including details like IP addresses, IDs, and configuration values.

7. Destroy Infrastructure if no longer needed (Optional) 

  • terraform destroy: If twtech wants to tear down the infrastructure and remove all the resources it created, twtech can use the terraform destroy command.
NB:
  • This will prompt twtech for confirmation, remove all the resources, and keep a backup of the previous state file.
  • Example:
    terraform destroy
  • twtech uses the -auto-approve flag to bypass the confirmation prompt.

8. Manage State

  • State Management: Terraform keeps track of twtech infrastructure using a state file (terraform.tfstate). 
  • This terraform.tfstate file stores information about the resources created and their configurations.
  • twtech handles this file carefully, especially in a team environment. 
  • twtech recommends that Terraform state files be stored remotely (using services like Terraform Cloud, or S3 with DynamoDB) to prevent corruption or tampering.
  • If the state file is corrupted or lost, twtech may no longer be able to map the configuration to the exact resources that exist.
  • twtech uses a DynamoDB table to lock its state files and prevent corruption.
  • With locking configured, only one operation is allowed to modify the state at a time.
  • To manage the state, use the terraform state commands:
    terraform state list                      # List resources tracked in the state
    terraform state show <resource address>   # Show details of a resource in the state

9. Collaborate (Optional)

  • Terraform Cloud or Remote Backend: In a team setting, it’s common to use Terraform Cloud or a remote backend (like an S3 bucket) to store the state file. 
  • This allows for collaboration and ensures consistency across multiple users.
  • Terraform Cloud also provides features like workspaces and run triggers for better collaboration and automation.
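
A hedged sketch of a Terraform Cloud configuration (organization and workspace names are assumptions):

    # hcl
    terraform {
      cloud {
        organization = "twtech"

        workspaces {
          name = "twtech-prod"
        }
      }
    }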

Example Terraform Workflow

Let’s walk through a simplified example:

  1. Write Configuration File (main.tf):

    # hcl

    provider "aws" { region = "us-east-2" } resource "aws_instance" "twtech-dbinstance" { ami = "ami-xxxxxxxxxxxxxxxx" instance_type = "t2.medium" }
  2. Initialize Terraform:

    terraform init
  3. Validate the Configuration:

    terraform validate
  4. Create an Execution Plan:

    terraform plan
  5. Apply the Changes:

    terraform apply
  6. Destroy the Resources (if needed):

    terraform destroy

twtech Best Practices for Terraform Workflow

  • Version Control: Store Terraform configuration files in version control (e.g., Git) to track changes and collaborate with the team.
  • State Locking: Store state files in a remote backend (e.g., S3) and use a DynamoDB table to prevent corruption from concurrent runs.
  • Remote State: Use a remote backend to store the Terraform state file securely and enable collaboration.
  • Modules: Break up large configurations into reusable modules to improve maintainability.
  • Plan Before Apply: Always run terraform plan before terraform apply to ensure that the changes are what you expect.
  • Workspaces: Use Terraform workspaces for managing multiple environments (e.g., dev, staging, UAT, QA, prod); see the sketch below.
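
A sketch of how workspaces might drive per-environment settings (workspace names and instance sizes are assumptions):

    # hcl
    # CLI (illustrative): terraform workspace new dev && terraform workspace select dev
    locals {
      instance_type_by_env = {
        dev  = "t2.micro"
        prod = "t2.medium"
      }
    }

    resource "aws_instance" "twtech-instance" {
      ami           = "ami-xxxxxxxxxxxx"
      # terraform.workspace is the name of the currently selected workspace
      instance_type = lookup(local.instance_type_by_env, terraform.workspace, "t2.micro")
    }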
