
Introduction to AWS Lambda: Build and Deploy Serverless Functions

Tags: aws lambda, serverless, aws, cloud computing, event-driven architecture, iam, cloudwatch, devops


AWS Lambda is a compute service that runs your code only when needed, without you provisioning or managing servers. You upload a function (your code), configure when it should run (an event source), and AWS handles the infrastructure: scaling, availability, and patching. This tutorial walks through Lambda fundamentals and then builds and deploys real functions using the AWS CLI—covering packaging, IAM permissions, logs, environment variables, versions/aliases, and event sources.


Table of Contents

  1. What is AWS Lambda?
  2. Core concepts
  3. Prerequisites
  4. Set up AWS CLI and verify identity
  5. Create an IAM role for Lambda
  6. Build your first Lambda (Python) and deploy with AWS CLI
  7. Invoke the function and read logs
  8. Environment variables and configuration
  9. Versions and aliases (safe deployments)
  10. Add an event source: API Gateway HTTP API
  11. Add an event source: S3 object-created notifications
  12. Packaging dependencies (zip) and Lambda layers
  13. Troubleshooting and common pitfalls
  14. Clean up resources
  15. Where to go next


What is AWS Lambda?

Lambda is “serverless” in the sense that you don’t manage the servers. You still write and own the application code and configuration, but AWS manages:

  - Provisioning, patching, and maintaining the underlying compute
  - Scaling with request volume, including down to zero
  - Availability across multiple Availability Zones

You are billed based on:

  - The number of requests
  - Execution duration, metered in GB-seconds (allocated memory multiplied by run time)

Lambda is a great fit for:

  - Event-driven processing: file uploads, queue messages, scheduled jobs
  - HTTP APIs with spiky or unpredictable traffic
  - Glue code that connects AWS services

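Because Lambda bills per request and per GB-second of compute, you can estimate costs with simple arithmetic. The sketch below uses illustrative default prices (check current AWS pricing for your region) and ignores the monthly free tier:

```python
def estimate_monthly_cost(invocations, avg_duration_ms, memory_mb,
                          price_per_million_requests=0.20,
                          price_per_gb_second=0.0000166667):
    """Back-of-envelope Lambda cost: requests plus GB-seconds of duration.

    Prices are illustrative defaults; the free tier is not subtracted.
    """
    request_cost = invocations / 1_000_000 * price_per_million_requests
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    duration_cost = gb_seconds * price_per_gb_second
    return round(request_cost + duration_cost, 2)
```

For example, one million 100 ms invocations at 128 MB come to well under a dollar, which is why Lambda shines for low-traffic and spiky workloads.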

Core concepts

Understanding these terms will make the rest of the tutorial much easier:

Function

A Lambda function is your code plus configuration:

  - A runtime (for example, python3.12) and a handler (the function Lambda calls for each event)
  - Resource settings: memory size and timeout
  - An execution role, plus optional environment variables and layers

Event source and trigger

Lambda runs in response to events. An event source is the service that produces them: API Gateway requests, S3 object uploads, SQS messages, EventBridge schedules, and many more. A trigger is the configuration that connects an event source to your function.

Execution role (IAM)

Lambda assumes an IAM role, the execution role, when it runs. This role determines which AWS services your function can access. For example, writing logs to CloudWatch requires the logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents permissions, all granted by the managed policy AWSLambdaBasicExecutionRole.

Cold start vs warm start

When Lambda needs a new execution environment, it performs a “cold start” (initialization). Subsequent invocations may reuse the same environment (“warm start”), which is faster. Initialization code outside the handler runs only during cold starts.
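The effect is easy to see in a sketch: module-level code runs once per execution environment, while the handler runs on every invocation, so expensive setup (SDK clients, config loads) belongs at module scope.

```python
import time

# Module scope runs once per execution environment (the cold start).
# Put expensive setup here so warm invocations can reuse it.
INIT_TIME = time.time()
INVOCATION_COUNT = 0

def handler(event, context):
    global INVOCATION_COUNT
    INVOCATION_COUNT += 1
    return {
        # True only for the first invocation in this environment
        "cold": INVOCATION_COUNT == 1,
        "invocation": INVOCATION_COUNT,
        "env_age_seconds": round(time.time() - INIT_TIME, 3),
    }
```

Invoking this function twice in quick succession typically shows one cold and one warm result, because Lambda reuses the environment.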

Timeout and memory

Every function has a timeout (up to 900 seconds; the default is 3) and a memory setting from 128 MB to 10,240 MB. CPU is allocated proportionally to memory, so raising memory can also speed up CPU-bound code.

Logs and metrics

By default, Lambda integrates with CloudWatch Logs and CloudWatch Metrics:

  - Anything your code writes to stdout or stderr (for example, print) lands in a log group named /aws/lambda/<function-name>
  - Lambda automatically emits metrics such as Invocations, Errors, Duration, and Throttles


Prerequisites

You will need:

  - An AWS account with permissions to manage IAM, Lambda, API Gateway, S3, and CloudWatch Logs
  - AWS CLI v2, installed and configured
  - Python 3 and the zip utility

Check AWS CLI:

aws --version

Check Python:

python3 --version

Set up AWS CLI and verify identity

Configure credentials (if not already configured):

aws configure

Verify who you are:

aws sts get-caller-identity

Set a default region (example uses us-east-1):

aws configure set region us-east-1

Create an IAM role for Lambda

Lambda needs an execution role that allows it to write logs to CloudWatch. We’ll create:

  1. A trust policy that allows the Lambda service to assume the role
  2. The role itself
  3. Attach the managed policy AWSLambdaBasicExecutionRole

Create a trust policy file:

cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

Create the role:

aws iam create-role \
  --role-name lambda-basic-execution-role \
  --assume-role-policy-document file://trust-policy.json

Attach the basic execution policy:

aws iam attach-role-policy \
  --role-name lambda-basic-execution-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

Fetch the role ARN (you’ll need this for function creation):

ROLE_ARN=$(aws iam get-role \
  --role-name lambda-basic-execution-role \
  --query 'Role.Arn' --output text)

echo "$ROLE_ARN"

Why this role is necessary:
When your function runs, Lambda assumes this role and uses its permissions. Without CloudWatch log permissions, you might still run code, but you won’t see logs, which makes debugging painful.


Build your first Lambda (Python) and deploy with AWS CLI

We’ll build a simple function that returns a JSON response and logs useful info.

Create the function code

Create a folder and file:

mkdir -p lambda-hello
cd lambda-hello

Create lambda_function.py:

cat > lambda_function.py <<'EOF'
import json
import os
import time

def handler(event, context):
    # Log basic request context
    print("Function name:", context.function_name)
    print("AWS request id:", context.aws_request_id)
    print("Event:", json.dumps(event))

    # Example environment variable usage
    greeting = os.environ.get("GREETING", "Hello")

    # Simulate small amount of work
    start = time.time()
    time.sleep(0.05)
    elapsed_ms = int((time.time() - start) * 1000)

    body = {
        "message": f"{greeting} from AWS Lambda!",
        "elapsed_ms": elapsed_ms,
        "input_event_keys": list(event.keys()) if isinstance(event, dict) else None
    }

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body)
    }
EOF

Package the function as a ZIP

For a basic function with no external dependencies, you can zip the single file:

zip function.zip lambda_function.py

Create the Lambda function

Choose a unique function name:

FUNCTION_NAME="hello-lambda-cli"

Create it (Python 3.12 runtime shown; you can adjust if needed):

aws lambda create-function \
  --function-name "$FUNCTION_NAME" \
  --runtime python3.12 \
  --role "$ROLE_ARN" \
  --handler lambda_function.handler \
  --zip-file fileb://function.zip \
  --timeout 10 \
  --memory-size 128

Explanation of key flags:

  - --runtime: the language runtime that executes your code
  - --handler: file_name.function_name; here, the handler function inside lambda_function.py
  - --zip-file: the deployment package; the fileb:// prefix tells the CLI to read the file as raw binary
  - --timeout: maximum execution time in seconds (default 3, maximum 900)
  - --memory-size: memory in MB; CPU is allocated proportionally


Invoke the function and read logs

Invoke synchronously

Invoke the function and write the response to a file:

aws lambda invoke \
  --function-name "$FUNCTION_NAME" \
  --cli-binary-format raw-in-base64-out \
  --payload '{"source":"tutorial","action":"test"}' \
  response.json

Note: AWS CLI v2 expects --payload to be base64-encoded by default; --cli-binary-format raw-in-base64-out lets you pass raw JSON. AWS CLI v1 does not need this flag.

View the response:

cat response.json

You should see JSON with "statusCode": 200 and a body string.

Tail logs in CloudWatch

Lambda writes logs to a log group like:

/aws/lambda/<function-name>

Tail logs:

aws logs tail "/aws/lambda/$FUNCTION_NAME" --follow

If you don’t see logs immediately, invoke again in another terminal. Also note that log delivery can lag slightly.

What to look for in logs:

  - START, END, and REPORT lines that Lambda emits for every invocation
  - In the REPORT line: Duration, Billed Duration, Memory Size, and Max Memory Used
  - Your own print output: function name, request id, and the event

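The REPORT line carries the per-invocation metrics that matter for tuning. A small parser, written against a simplified sketch of that log format, can pull them out for ad hoc analysis:

```python
import re

# Simplified sketch of the REPORT line Lambda writes per invocation, e.g.:
# REPORT RequestId: ... Duration: 52.34 ms  Billed Duration: 53 ms
#   Memory Size: 128 MB  Max Memory Used: 37 MB
_REPORT_RE = re.compile(
    r"Duration: (?P<duration_ms>[\d.]+) ms.*"
    r"Billed Duration: (?P<billed_ms>[\d.]+) ms.*"
    r"Memory Size: (?P<memory_mb>\d+) MB.*"
    r"Max Memory Used: (?P<max_used_mb>\d+) MB"
)

def parse_report_line(line):
    """Extract invocation metrics from a REPORT log line, or return None."""
    match = _REPORT_RE.search(line)
    if match is None:
        return None
    return {name: float(value) for name, value in match.groupdict().items()}
```

Comparing Max Memory Used against Memory Size over many invocations tells you whether the memory setting can be lowered safely.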

Environment variables and configuration

Environment variables are a standard way to configure code without changing it. Examples include API keys (often stored in Secrets Manager), feature flags, or environment-specific settings.

Set an environment variable:

aws lambda update-function-configuration \
  --function-name "$FUNCTION_NAME" \
  --environment "Variables={GREETING=Hi}"

Invoke again:

aws lambda invoke \
  --function-name "$FUNCTION_NAME" \
  --cli-binary-format raw-in-base64-out \
  --payload '{"source":"tutorial","action":"env-test"}' \
  response.json

cat response.json

You should see "Hi from AWS Lambda!".

Security note:
Environment variables can be encrypted at rest by Lambda, but you should avoid placing long-lived secrets directly in environment variables. Prefer AWS Secrets Manager or SSM Parameter Store with appropriate IAM permissions.
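As a sketch of that approach, a handler can fetch a secret from SSM Parameter Store on cold start and cache it for warm invocations. The parameter name below is hypothetical, the execution role would additionally need ssm:GetParameter (plus KMS decrypt for SecureString values), and the client is injectable so the caching logic is testable without AWS:

```python
_PARAM_CACHE = {}

def get_parameter(name, ssm_client=None):
    """Fetch a decrypted SSM parameter, caching it for warm invocations.

    The client is injectable for testing; by default it is created lazily
    with boto3, which is bundled in the Lambda Python runtime.
    """
    if name in _PARAM_CACHE:
        return _PARAM_CACHE[name]
    if ssm_client is None:
        import boto3
        ssm_client = boto3.client("ssm")
    response = ssm_client.get_parameter(Name=name, WithDecryption=True)
    value = response["Parameter"]["Value"]
    _PARAM_CACHE[name] = value
    return value
```

Caching matters here: without it, every invocation pays an SSM round trip and counts against API rate limits.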


Versions and aliases (safe deployments)

Lambda supports publishing immutable versions. A version is a snapshot of your code and most configuration at publish time. You can then point an alias (like dev, staging, prod) to a version. This enables safer rollouts and rollbacks.

Publish a version

aws lambda publish-version --function-name "$FUNCTION_NAME"

List versions:

aws lambda list-versions-by-function --function-name "$FUNCTION_NAME"

Create an alias pointing to a version

Suppose the published version is 1:

aws lambda create-alias \
  --function-name "$FUNCTION_NAME" \
  --name prod \
  --function-version 1

Invoke via alias:

aws lambda invoke \
  --function-name "$FUNCTION_NAME:prod" \
  --cli-binary-format raw-in-base64-out \
  --payload '{"source":"tutorial","action":"alias"}' \
  response.json

cat response.json

Update code, publish a new version, and repoint alias

Make a code change (for example, default greeting). Edit lambda_function.py and re-zip:

sed -i.bak 's/"Hello"/"Hello (v2)"/' lambda_function.py
zip -r function.zip lambda_function.py

Update function code:

aws lambda update-function-code \
  --function-name "$FUNCTION_NAME" \
  --zip-file fileb://function.zip

Publish version 2:

aws lambda publish-version --function-name "$FUNCTION_NAME"

Point prod alias to version 2:

aws lambda update-alias \
  --function-name "$FUNCTION_NAME" \
  --name prod \
  --function-version 2

Why versions/aliases matter:
If you deploy directly to $LATEST, you can accidentally break production. With aliases, you can validate a version and then move the alias when ready. Rollback is simply repointing the alias to the previous version.


Add an event source: API Gateway HTTP API

A common pattern is exposing Lambda as an HTTP endpoint. API Gateway HTTP APIs are usually simpler and cheaper than REST APIs for many use cases.

We’ll create:

  - An HTTP API
  - A Lambda proxy integration (payload format 2.0)
  - A GET /hello route
  - A resource-based permission so API Gateway may invoke the function
  - An auto-deployed $default stage

Create an HTTP API

API_ID=$(aws apigatewayv2 create-api \
  --name hello-lambda-http-api \
  --protocol-type HTTP \
  --query 'ApiId' --output text)

echo "$API_ID"

Create a Lambda integration

API Gateway needs the Lambda invocation URI. We can fetch it from Lambda:

LAMBDA_ARN=$(aws lambda get-function \
  --function-name "$FUNCTION_NAME" \
  --query 'Configuration.FunctionArn' --output text)

echo "$LAMBDA_ARN"

Create integration:

INTEGRATION_ID=$(aws apigatewayv2 create-integration \
  --api-id "$API_ID" \
  --integration-type AWS_PROXY \
  --integration-uri "$LAMBDA_ARN" \
  --payload-format-version 2.0 \
  --query 'IntegrationId' --output text)

echo "$INTEGRATION_ID"

Create a route

Create a GET /hello route:

aws apigatewayv2 create-route \
  --api-id "$API_ID" \
  --route-key "GET /hello" \
  --target "integrations/$INTEGRATION_ID"

Grant API Gateway permission to invoke Lambda

API Gateway must be allowed to call your function. Add permission:

ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
REGION=$(aws configure get region)

aws lambda add-permission \
  --function-name "$FUNCTION_NAME" \
  --statement-id apigw-invoke-permission \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:$REGION:$ACCOUNT_ID:$API_ID/*/*/*"

Create a stage (auto-deploy)

aws apigatewayv2 create-stage \
  --api-id "$API_ID" \
  --stage-name '$default' \
  --auto-deploy

Note the single quotes: the stage is literally named $default. With double quotes the shell would expand $default as an (empty) variable and the command would fail.

Call the endpoint

Get the API endpoint:

API_ENDPOINT=$(aws apigatewayv2 get-api \
  --api-id "$API_ID" \
  --query 'ApiEndpoint' --output text)

echo "$API_ENDPOINT"

Call your route:

curl -i "$API_ENDPOINT/hello"

What happens on each request:

  1. API Gateway receives the HTTP request.
  2. It transforms the request into an event payload (v2.0 format).
  3. It invokes Lambda with that event.
  4. Lambda returns a response object; API Gateway translates it back to HTTP.
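For reference, the 2.0 event format exposes the method under requestContext.http.method, the path as rawPath, and the query string as a flat queryStringParameters dict (absent when there is no query string). A minimal handler reading those fields might look like:

```python
import json

def handler(event, context):
    # HTTP API payload format 2.0: method lives under requestContext.http,
    # the path is rawPath, and the query string is a flat dict (or absent).
    http = event.get("requestContext", {}).get("http", {})
    query = event.get("queryStringParameters") or {}

    body = {
        "method": http.get("method"),
        "path": event.get("rawPath"),
        "name": query.get("name", "anonymous"),
    }
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

With this deployed, curl "$API_ENDPOINT/hello?name=Ada" would greet Ada instead of the anonymous default.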

Add an event source: S3 object-created notifications

Another classic serverless pattern is reacting to file uploads.

We’ll:

  - Update the Lambda code to recognize S3 events
  - Create an S3 bucket
  - Grant S3 permission to invoke the function
  - Configure a bucket notification for object creation
  - Upload a file and watch the logs

Update the Lambda code to handle S3 events

Replace lambda_function.py with a handler that prints S3 event details but still works for API Gateway:

cat > lambda_function.py <<'EOF'
import json
import os

def handler(event, context):
    print("Event:", json.dumps(event))

    # Detect a common S3 event shape
    if isinstance(event, dict) and "Records" in event:
        records = event["Records"]
        s3_records = []
        for r in records:
            if r.get("eventSource") == "aws:s3":
                bucket = r["s3"]["bucket"]["name"]
                key = r["s3"]["object"]["key"]
                s3_records.append({"bucket": bucket, "key": key})

        if s3_records:
            return {
                "statusCode": 200,
                "body": json.dumps({"message": "Processed S3 event", "objects": s3_records})
            }

    greeting = os.environ.get("GREETING", "Hello")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"{greeting} from Lambda", "event_keys": list(event.keys()) if isinstance(event, dict) else None})
    }
EOF

Re-zip and update code:

zip -r function.zip lambda_function.py
aws lambda update-function-code \
  --function-name "$FUNCTION_NAME" \
  --zip-file fileb://function.zip

Create an S3 bucket

Bucket names must be globally unique. Create one:

BUCKET="lambda-s3-trigger-$RANDOM-$RANDOM"
REGION=$(aws configure get region)

aws s3api create-bucket \
  --bucket "$BUCKET" \
  --region "$REGION" \
  $( [ "$REGION" = "us-east-1" ] && echo "" || echo "--create-bucket-configuration LocationConstraint=$REGION" )

Allow S3 to invoke Lambda

Add permission for S3:

aws lambda add-permission \
  --function-name "$FUNCTION_NAME" \
  --statement-id s3-invoke-permission \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn "arn:aws:s3:::$BUCKET"

Configure bucket notification to trigger Lambda on object creation

Create a notification configuration file:

cat > notification.json <<EOF
{
  "LambdaFunctionConfigurations": [
    {
      "LambdaFunctionArn": "$LAMBDA_ARN",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
EOF

Apply it:

aws s3api put-bucket-notification-configuration \
  --bucket "$BUCKET" \
  --notification-configuration file://notification.json

Upload a file to trigger the function

echo "hello from s3" > test.txt
aws s3 cp test.txt "s3://$BUCKET/test.txt"

Tail logs:

aws logs tail "/aws/lambda/$FUNCTION_NAME" --follow

You should see an event that includes Records with S3 bucket and object key information.

Important behavior:
S3 invokes Lambda asynchronously. That means:

  - Your function’s return value is discarded; nothing is waiting for the response
  - Failed invocations are retried (twice by default), so handlers should be idempotent
  - For reliable failure handling, add a dead-letter queue or an on-failure destination

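Because asynchronous retries can deliver the same event more than once, handlers should be idempotent. A sketch of the pattern, using an in-memory set purely for illustration (production code would use a conditional write to DynamoDB, since warm-environment memory is neither shared nor durable):

```python
# Illustrative in-memory dedupe store; see the caveat in the lead-in.
_PROCESSED = set()

def handler(event, context):
    results = []
    for record in event.get("Records", []):
        if record.get("eventSource") != "aws:s3":
            continue
        obj = record["s3"]["object"]
        bucket = record["s3"]["bucket"]["name"]
        # Bucket + key + eTag identifies one version of the uploaded content
        dedupe_id = f"{bucket}/{obj['key']}/{obj.get('eTag', '')}"
        if dedupe_id in _PROCESSED:
            results.append({"key": obj["key"], "skipped": True})
        else:
            _PROCESSED.add(dedupe_id)
            results.append({"key": obj["key"], "skipped": False})
    return {"processed": results}
```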

Packaging dependencies (zip) and Lambda layers

Real functions often need third-party libraries. With ZIP-based deployment, you must bundle dependencies into the ZIP (or use a Lambda Layer).

Option A: Bundle dependencies into the ZIP

Example: add requests for outbound HTTP calls.

Create a clean build directory:

cd ..
mkdir -p lambda-hello-build
cd lambda-hello-build

Copy your function:

cp ../lambda-hello/lambda_function.py .

Install dependencies into the current directory:

python3 -m pip install requests -t .

Zip everything:

zip -r function.zip .

Update Lambda code:

aws lambda update-function-code \
  --function-name "$FUNCTION_NAME" \
  --zip-file fileb://function.zip

Why this works:
Lambda’s Python runtime includes your deployment package directory in sys.path, so imported modules are found.

Option B: Use a Lambda Layer

Layers let you package dependencies separately and attach them to multiple functions.

Create a layer structure:

mkdir -p layer/python
python3 -m pip install requests -t layer/python
cd layer
zip -r requests-layer.zip python

Publish the layer:

LAYER_ARN=$(aws lambda publish-layer-version \
  --layer-name requests-layer \
  --zip-file fileb://requests-layer.zip \
  --compatible-runtimes python3.12 \
  --query 'LayerVersionArn' --output text)

echo "$LAYER_ARN"

Attach the layer to your function:

aws lambda update-function-configuration \
  --function-name "$FUNCTION_NAME" \
  --layers "$LAYER_ARN"

When to prefer layers:

  - Several functions share the same dependencies
  - You want smaller function packages and faster code-only deploys
  - Dependencies change less often than your application code

Note that layers still count toward the 250 MB unzipped deployment size limit.


Troubleshooting and common pitfalls

1) AccessDeniedException when creating or invoking

This usually means your IAM user/role lacks permissions. Ensure you can:

  - Manage roles: iam:CreateRole, iam:AttachRolePolicy, and iam:PassRole (required to hand the execution role to Lambda)
  - Manage and run functions: lambda:CreateFunction, lambda:UpdateFunctionCode, lambda:InvokeFunction

2) No logs appear

Common causes:

  - The execution role is missing AWSLambdaBasicExecutionRole, so the function cannot create log streams
  - You are tailing the wrong log group name
  - Log delivery lag; wait a few seconds and invoke again

3) Handler not found / import errors

Typical errors:

  - Unable to import module 'lambda_function': the file is not at the root of the ZIP, or a dependency it imports is missing
  - Handler 'handler' missing on module 'lambda_function': the --handler value does not match file_name.function_name

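Lambda resolves the --handler string file_name.function_name by importing the module and looking up the attribute. A simplified Python sketch of that lookup (the real runtime does more, such as init phases and error wrapping) shows why a mismatched file or function name fails:

```python
import importlib

def resolve_handler(handler_string):
    """Simplified sketch of how 'file_name.function_name' is resolved."""
    module_name, _, function_name = handler_string.rpartition(".")
    # An ImportError here corresponds to "Unable to import module ...";
    # an AttributeError corresponds to "Handler ... missing on module ...".
    module = importlib.import_module(module_name)
    return getattr(module, function_name)
```

So renaming lambda_function.py or the handler function without updating --handler produces exactly these errors.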
4) Timeouts

If your function times out:

  - Raise --timeout (the default is only 3 seconds)
  - Check for slow external calls; VPC-attached functions need a NAT gateway or VPC endpoints to reach the internet and AWS APIs
  - Compare the REPORT line’s Duration against the configured limit

5) API Gateway returns 502

Often indicates:

  - An unhandled exception in your function; check the CloudWatch logs for the stack trace
  - A response that is not the shape API Gateway expects, such as a missing statusCode or a non-string body


Clean up resources

To avoid ongoing charges, delete what you created.

Delete API Gateway

List APIs (optional):

aws apigatewayv2 get-apis

Delete the API:

aws apigatewayv2 delete-api --api-id "$API_ID"

Delete S3 notifications and bucket

Remove notification configuration:

aws s3api put-bucket-notification-configuration \
  --bucket "$BUCKET" \
  --notification-configuration '{}'

Delete objects and bucket:

aws s3 rm "s3://$BUCKET" --recursive
aws s3api delete-bucket --bucket "$BUCKET"

Delete Lambda function

aws lambda delete-function --function-name "$FUNCTION_NAME"

Delete layer (optional)

List layer versions:

aws lambda list-layer-versions --layer-name requests-layer

Delete a specific version (example version 1):

aws lambda delete-layer-version --layer-name requests-layer --version-number 1

Delete IAM role

Detach policy:

aws iam detach-role-policy \
  --role-name lambda-basic-execution-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

Delete role:

aws iam delete-role --role-name lambda-basic-execution-role

Where to go next

Once you can deploy and trigger Lambda functions, the next skills to build are:

  - Infrastructure as code with AWS SAM, CDK, or Terraform instead of raw CLI calls
  - More event sources: SQS queues, DynamoDB Streams, EventBridge schedules
  - Observability: structured logging, X-Ray tracing, CloudWatch alarms
  - Deployment safety: weighted aliases and gradual rollouts

From there, pick your preferred runtime (Python, Node.js, Java, Go) and target trigger (HTTP API, S3, SQS, schedule) and turn this walkthrough into a complete end-to-end project.