When twtech creates a normal AWS Lambda function (not Lambda@Edge), the default deployment model works as follows:
1. Deployment Location
- Lambda functions are regional by default.
- twtech chooses an AWS Region (e.g., us-east-2) when creating the function (see the creation sketch after this list).
- The function only runs in that region unless twtech replicates it to other regions.
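To make the region choice concrete, here is a minimal creation sketch using boto3 (Python). The function name, role ARN, and zip path are hypothetical placeholders:

```python
import boto3

REGION = "us-east-2"  # the region twtech chose; the function exists only here

lambda_client = boto3.client("lambda", region_name=REGION)

# Read a local deployment package (hypothetical path).
with open("function.zip", "rb") as f:
    zip_bytes = f.read()

response = lambda_client.create_function(
    FunctionName="twtech-demo-fn",                       # hypothetical name
    Runtime="python3.12",
    Role="arn:aws:iam::123456789012:role/lambda-exec",   # hypothetical execution role
    Handler="app.handler",
    Code={"ZipFile": zip_bytes},
)
print(response["FunctionArn"])  # the ARN embeds the region the function was created in
```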
2. Invocation Flow
- Clients (apps, APIs, AWS services) must send requests to that region to invoke the function (see the invocation sketch after this list).
- AWS provisions the function on demand in the chosen region.
- If the function hasn’t been invoked for a while, a cold start occurs:
  - AWS allocates a container.
  - Loads the function code and runtime.
  - Then executes the handler.
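A minimal synchronous invocation sketch, reusing the same hypothetical function name; note that the client has to target the function’s own region:

```python
import json
import boto3

# The client targets the function's region; a client pointed at another region
# cannot invoke it unless the function is replicated there.
lambda_client = boto3.client("lambda", region_name="us-east-2")

resp = lambda_client.invoke(
    FunctionName="twtech-demo-fn",                  # hypothetical name
    InvocationType="RequestResponse",               # synchronous invoke
    Payload=json.dumps({"ping": "pong"}).encode(),
)
print(resp["StatusCode"], resp["Payload"].read().decode())
```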
3. Scaling
- Automatic scaling: AWS runs multiple instances of twtech's function in parallel if needed.
- Each execution environment handles one request at a time.
- No extra configuration is needed for scaling unless twtech wants reserved or provisioned concurrency (a concurrency sketch follows this list).
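If twtech does want to control concurrency, here is a sketch of the two knobs. The function name and alias are hypothetical, and provisioned concurrency must be set on a published version or alias, not on $LATEST:

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-2")

# Reserved concurrency: caps (and sets aside) how many concurrent
# executions this function may use.
lambda_client.put_function_concurrency(
    FunctionName="twtech-demo-fn",            # hypothetical name
    ReservedConcurrentExecutions=50,
)

# Provisioned concurrency: keeps environments pre-initialized to avoid cold starts.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="twtech-demo-fn",
    Qualifier="prod",                         # hypothetical alias
    ProvisionedConcurrentExecutions=10,
)
```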
4. Networking
- By default, Lambda functions are created outside a VPC and can access the public internet (unless twtech attaches them to a VPC; see the VPC sketch after this list).
- They can also integrate with other AWS services in the same region without going over the internet.
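A sketch of attaching an existing function to a VPC; the subnet and security group IDs are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-2")

lambda_client.update_function_configuration(
    FunctionName="twtech-demo-fn",                       # hypothetical name
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],   # placeholder IDs
        "SecurityGroupIds": ["sg-0123456789abcdef0"],          # placeholder ID
    },
)
# Once attached to a VPC, the function loses its default internet access
# unless its subnets route out through a NAT gateway.
```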
5. Deployment Package
- Code + dependencies are uploaded as:
- A ZIP file
- A container image (if using container-based Lambda)
- AWS stores the package and deploys it to execution environments on demand (a deployment sketch follows this list).
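A sketch of pushing a new deployment package to an existing function, first as a ZIP file and then as a container image. The function names and the ECR image URI are hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-2")

# ZIP-based deployment: upload new code for an existing function.
with open("function.zip", "rb") as f:
    lambda_client.update_function_code(
        FunctionName="twtech-demo-fn",
        ZipFile=f.read(),
    )

# Container-image deployment: the image must already exist in ECR in the same region.
lambda_client.update_function_code(
    FunctionName="twtech-container-fn",
    ImageUri="123456789012.dkr.ecr.us-east-2.amazonaws.com/twtech-fn:latest",
)
```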
6. Lifecycle
Default (Regional) Lambda lifecycle:
- Creates the Lambda in a region.
- Deploys twtech's code (publish a version or use $LATEST).
- On invocation, AWS:
  - Allocates an execution environment (cold start if needed).
  - Runs the handler function.
  - Keeps the environment warm for a short time for subsequent requests (warm starts).
- Scales horizontally when traffic increases (a handler sketch follows this list).
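To observe cold vs warm starts from inside the function, here is a minimal handler sketch: module-level code runs once per execution environment (the cold start), and warm invocations reuse it:

```python
import time

# Runs once per execution environment (cold start); warm invocations reuse it,
# so expensive setup (SDK clients, config loads) belongs at module level.
COLD_START_AT = time.time()
INVOCATION_COUNT = 0

def handler(event, context):
    global INVOCATION_COUNT
    INVOCATION_COUNT += 1
    return {
        "environmentAgeSeconds": round(time.time() - COLD_START_AT, 2),
        "invocationsInThisEnvironment": INVOCATION_COUNT,  # >1 means a warm start
    }
```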
The biggest difference for twtech: default Lambda vs Lambda@Edge
- Default Lambda → lives in one AWS Region.
- Lambda@Edge → replicated automatically to multiple global edge locations.