Sunday, June 22, 2025

Amazon S3 Event Notifications

 

Amazon S3 Event Notifications allow twtech to automatically trigger actions when specific events occur in an S3 bucket. These events could include the creation, deletion, or restoration of an object.

 Use Cases

  • Triggering AWS Lambda functions after an object is uploaded.
  • Sending messages to an Amazon SQS queue or Amazon SNS topic.
  • Starting workflows or automation (e.g., media processing, indexing, or data analysis).

 How It Works

  1. Events: twtech defines what events to listen for, such as:
    • s3:ObjectCreated:* — any object creation (e.g., upload, copy).
    • s3:ObjectRemoved:* — any object deletion.
    • s3:ObjectRestore:* — restoring an archived object.
  2. Destinations:
    • Lambda function – run code automatically in response to the event.
    • Amazon SNS topic – send a notification message.
    • Amazon SQS queue – queue the message for asynchronous processing.
  3. Filters (optional):
    • Prefix – only trigger events for objects with a specific prefix (e.g., "images/").
    • Suffix – only trigger events for objects with a certain suffix (e.g., ".jpg").

 Example: S3 Event Notification to Lambda

# json
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "InvokeLambdaOnImageUpload",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-2:12345678xxx:function:ProcessImage",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [
            { "Name": "prefix", "Value": "uploads/" },
            { "Name": "suffix", "Value": ".jpg" }
          ]
        }
      }
    }
  ]
}

twtech can configure this via:

  • The AWS Management Console
  • The AWS CLI
  • Infrastructure as Code tools (e.g., CloudFormation, Terraform)
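
For the AWS CLI option, a minimal sketch could look like the following (assuming the JSON above is saved as notification.json and the bucket is named twtech-s3bucket; both names are illustrative):

# bash
# Apply the Lambda notification configuration to the bucket.
# Note: this call replaces the bucket's existing notification configuration.
aws s3api put-bucket-notification-configuration \
  --bucket twtech-s3bucket \
  --notification-configuration file://notification.json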

S3 Event Notifications – IAM Permissions:

To enable Amazon S3 Event Notifications, twtech must set IAM permissions properly for both:

  1. Amazon S3 to invoke the target service (like Lambda, SQS, or SNS).
  2. The twtech user or role to configure the S3 event notification.

 1. Permissions for S3 to Invoke the Target

Depending on the twtech target (Lambda, SNS, or SQS), S3 must be allowed to invoke that resource.

 Example: S3 → Lambda Invocation

The Lambda function's resource-based policy must allow S3 to invoke it:

# json
{
  "Sid": "AllowS3Invoke",
  "Effect": "Allow",
  "Principal": {
    "Service": "s3.amazonaws.com"
  },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessImage",
  "Condition": {
    "ArnLike": {
      "AWS:SourceArn": "arn:aws:s3:::twtech-s3bucket"
    }
  }
}

twtech can attach this using the AWS CLI:

# bash
aws lambda add-permission \
  --function-name ProcessImage \
  --principal s3.amazonaws.com \
  --statement-id AllowS3Invoke \
  --action "lambda:InvokeFunction" \
  --source-arn arn:aws:s3:::twtech-s3bucket

 2. Permissions for the User/Role Configuring the Notification

To allow an IAM user/role to set up S3 event notifications, grant permissions like:

# json 
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketNotification",
        "s3:PutBucketNotification",
        "s3:GetBucketNotificationConfiguration",
        "s3:PutBucketNotificationConfiguration"
      ],
      "Resource": "arn:aws:s3:::twtech-s3bucket"
    }
  ]
}
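
As a sketch, this policy could be attached inline to the configuring role with the AWS CLI (the role name twtech-s3-admin-role and the file name s3-notification-permissions.json are illustrative):

# bash
# Attach the notification-configuration permissions as an inline policy
aws iam put-role-policy \
  --role-name twtech-s3-admin-role \
  --policy-name AllowS3NotificationConfig \
  --policy-document file://s3-notification-permissions.json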

3. Additional Target-Specific IAM Examples

 S3 → SQS (Amazon Simple Queue Service)

SQS queue policy must allow S3 to send messages:

# json
{
  "Effect": "Allow",
  "Principal": { "Service": "s3.amazonaws.com" },
  "Action": "sqs:SendMessage",
  "Resource": "arn:aws:sqs:us-east-1:123456789xxx:MyQueue",
  "Condition": {
    "ArnEquals": { "aws:SourceArn": "arn:aws:s3:::twtech-s3bucket" }
  }
}
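
One way to apply this from the CLI is to set the queue's Policy attribute; a sketch (the queue URL and file name are illustrative, and the file holds the full policy document escaped into a single JSON string):

# bash
# set-queue-attrs.json (assumed file) contains:
# { "Policy": "<full queue policy document, escaped into a single JSON string>" }
aws sqs set-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789xxx/MyQueue \
  --attributes file://set-queue-attrs.json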

 S3 → SNS (Amazon Simple Notification Service)

SNS topic policy must allow S3 to publish:

# json 
{
  "Effect": "Allow",
  "Principal": { "Service": "s3.amazonaws.com" },
  "Action": "sns:Publish",
  "Resource": "arn:aws:sns:us-east-1:123456789012:MyTopic",
  "Condition": {
    "ArnLike": { "aws:SourceArn": "arn:aws:s3:::twtech-s3bucket" }
  }
}
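
Similarly, a CLI sketch for attaching the topic policy (the full policy document, i.e., a Version plus a Statement list wrapping the statement above, is assumed to be saved as sns-topic-policy.json):

# bash
# Set the SNS topic's access policy so S3 can publish to it
aws sns set-topic-attributes \
  --topic-arn arn:aws:sns:us-east-1:123456789012:MyTopic \
  --attribute-name Policy \
  --attribute-value file://sns-topic-policy.json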

 twtech-Important Notes

  • Only one event notification configuration can be set per bucket via the S3 API; use the combined configuration to include multiple destinations.
  • Permissions must be granted on the destination (e.g., S3 must be allowed to invoke the twtech Lambda function).

Project: Hands-on

How twtech creates an S3 bucket, then configures event notifications & EventBridge

Create an S3 bucket: twtech-s3-eventbridge
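
A CLI sketch of the same step (us-east-2 is assumed here to match the ARNs used later in this post):

# bash
# Create the bucket; regions other than us-east-1 require a LocationConstraint
aws s3api create-bucket \
  --bucket twtech-s3-eventbridge \
  --region us-east-2 \
  --create-bucket-configuration LocationConstraint=us-east-2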



How twtech configures the event notification on the created bucket: twtech-s3-eventbridge

Select the bucket and click to open it: twtech-s3-eventbridge

Navigate to the Properties tab of the bucket: twtech-s3-eventbridge


From the Properties tab, scroll down to the Event notifications section to create an event notification.

Two options come into play: 1) enable Amazon EventBridge, or 2) create an event notification.


Option 1: How twtech enables EventBridge for the bucket: twtech-s3-eventbridge

From: off


To: on

Remember to: Save changes
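
The same toggle can be sketched with the CLI; EventBridge delivery is enabled by putting a notification configuration that contains an EventBridgeConfiguration block (this call replaces any existing notification configuration on the bucket):

# bash
# Turn on Amazon EventBridge delivery for all events in the bucket
aws s3api put-bucket-notification-configuration \
  --bucket twtech-s3-eventbridge \
  --notification-configuration '{"EventBridgeConfiguration": {}}'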


Option 2: How twtech creates a simple event notification.

Navigate to the properties tab for the bucket created: twtech-s3-eventbridge


From the Properties tab, scroll down to the Event notifications section and choose to create an event notification.

Assign a name: twtech-event-notification

Event types: specify

Specify at least one event for which you want to receive notifications. For each group, you can choose an event type for all events, or you can choose one or more individual events.



To use SQS as the destination:

 twtech needs to create an SQS queue and allow the S3 bucket to publish messages to it.

Create the SQS queue

Search for SQS in the AWS services search bar.


Create the queue: twtech-sqs-queue

Assign a name: twtech-sqs-queue
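
A CLI sketch of the same queue creation; the queue ARN retrieved here is what the S3 notification and the queue policy refer to:

# bash
# Create the queue and capture its URL
QUEUE_URL=$(aws sqs create-queue \
  --queue-name twtech-sqs-queue \
  --query 'QueueUrl' --output text)

# Look up the queue ARN for use in the access policy and the S3 notification
aws sqs get-queue-attributes \
  --queue-url "$QUEUE_URL" \
  --attribute-names QueueArn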





How twtech enhances the queue's access policy with a queue policy

Generate a new policy with the AWS Policy Generator.

Select policy type: SQS Queue Policy

Add statement:

Principal: *

Action: SendMessage

Amazon Resource Name (ARN): arn:aws:sqs:us-east-2:98xxxxxxxx:twtech-sqs-queue

From the SQS policy generator, finish creating the queue: twtech-sqs-queue

Remember to always add the statement before generating the policy.


# Copy the generated SQS policy:

# json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "sqs:SendMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-2:980xxxxxxxx:twtech-sqs-queue"
    }
  ]
}

Paste the generated policy into the Access policy editor to update the access policy.

Save changes to update the SQS policy.


Return to the event notification setup, refresh, and select the created SQS queue.

Save changes for the event notification.
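
For reference, the same SQS wiring can be sketched from the CLI; this assumes the queue ARN from the earlier steps, a file named notification-sqs.json, and it replaces any existing notification configuration on the bucket:

# bash
# notification-sqs.json (assumed file):
# {
#   "QueueConfigurations": [
#     {
#       "Id": "twtech-event-notification",
#       "QueueArn": "arn:aws:sqs:us-east-2:980xxxxxxxx:twtech-sqs-queue",
#       "Events": ["s3:ObjectCreated:*"]
#     }
#   ]
# }
aws s3api put-bucket-notification-configuration \
  --bucket twtech-s3-eventbridge \
  --notification-configuration file://notification-sqs.json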

Go to the created SQS queue and click to open it: twtech-sqs-queue



How twtech tests an event from AWS S3: Poll for messages

From: No messages


To: Message sent.

Select the message and click it to access the sent message.


How twtech tests whether the event notification is working with SQS

Return to the bucket list, select the bucket configured for the event notification, and open it: twtech-s3-eventbridge

Upload an object to the bucket.


Upload object: Michael-learns.mp3

From: in progress 

To: successful
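
The upload and the polling can also be sketched with the CLI (the queue URL's account ID is illustrative):

# bash
# Upload the test object to the bucket
aws s3 cp Michael-learns.mp3 s3://twtech-s3-eventbridge/

# Long-poll the queue for the resulting event message (waits up to 20 seconds)
aws sqs receive-message \
  --queue-url https://sqs.us-east-2.amazonaws.com/980xxxxxxxx/twtech-sqs-queue \
  --wait-time-seconds 20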


How twtech verifies that an event message was created after uploading the object: Poll for messages

Yes: the message notification was created.

Access the message to see what happened in the bucket: Notification.

Message: Michael-learns.mp3 was uploaded to the bucket.

twtech-insights

Amazon SQS (Simple Queue Service) is a fully managed message queuing service provided by AWS that enables decoupling and scaling of microservices, distributed systems, and serverless applications.

Here are the key concepts and features of Amazon SQS:

 Core Concepts:

  1. Queue: A buffer that stores messages until they are processed.
  2. Message: A data record stored in a queue (up to 256 KB in size).
  3. Producer: Sends messages to the queue.
  4. Consumer: Retrieves and processes messages from the queue.

 Types of Queues:

  1. Standard Queue (default)
    • Nearly unlimited throughput.
    • At-least-once delivery (a message might be delivered more than once).
    • Best-effort ordering (messages may not be delivered in the order they were sent).
  2. FIFO Queue (First-In-First-Out)
    • Guarantees exactly-once processing and strict message order.
    • Limited throughput (up to 300 messages per second without batching).

 Key Features:

  • Scalability: Automatically scales to handle any workload.
  • Durability: Messages are redundantly stored across multiple AWS availability zones.
  • Visibility Timeout: Prevents other consumers from processing the same message while it's being handled.
  • Dead-Letter Queues (DLQs): For messages that fail to process after multiple attempts.
  • Long Polling: Reduces cost and latency by waiting until a message is available or timeout.
  • Message Delay: Allows delaying the delivery of new messages.
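
A brief producer/consumer sketch with the AWS CLI, illustrating message delay, long polling, visibility timeout, and deleting a message after processing (the queue URL is illustrative):

# bash
QUEUE_URL=https://sqs.us-east-2.amazonaws.com/980xxxxxxxx/twtech-sqs-queue

# Producer: send a message, delayed by 5 seconds
aws sqs send-message \
  --queue-url "$QUEUE_URL" \
  --message-body '{"job": "process-audio", "key": "uploads/Michael-learns.mp3"}' \
  --delay-seconds 5

# Consumer: long-poll for up to 20 seconds; the received message stays invisible
# to other consumers for the visibility timeout while it is being processed
aws sqs receive-message \
  --queue-url "$QUEUE_URL" \
  --wait-time-seconds 20 \
  --visibility-timeout 60

# After successful processing, delete the message using its receipt handle
aws sqs delete-message \
  --queue-url "$QUEUE_URL" \
  --receipt-handle "<ReceiptHandle from the receive-message output>"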

 Use Cases:

  • Decoupling microservices
  • Asynchronous task processing (e.g., background jobs)
  • Buffering and batching workloads
  • Integrating with AWS Lambda for serverless architectures.
