Goodbye Serverless: Deploying a Puppeteer Lambda via Bitbucket OIDC
I already wrote about my first approach to OG images on Lambda a few years ago (Generating Open Graph images to S3 on AWS Lambda).
This time I ditched Serverless/SAM and pushed a container image to Lambda directly from Bitbucket Pipelines using OIDC—no static keys anywhere. Puppeteer/Chromium renders the image, I write to S3, and I serve it via a Lambda Function URL.
Lambda changes
I won’t bore you with every little detail of upgrading Puppeteer and Chromium layers, or the handful of Node.js APIs that moved around since then.
In broad strokes: I updated the Lambda handler to the modern Node 20 runtime, refreshed puppeteer and swapped in the maintained Chromium build, fixed a couple of deprecated methods in the screenshot flow, and cleaned up some package.json scripts that no longer worked.
The core idea of generating HTML and capturing it as a JPEG stayed the same; the real story here is the deployment pipeline, so that's where I'll keep the focus.
Architecture at a glance
Files you need
/ (root)
├─ generator.js # HTML → JPEG render logic
├─ lambda.js # Lambda handler
├─ templates/ # HTML/CSS templates, fonts
├─ Dockerfile # headless Chromium + Node 20
├─ bitbucket-pipelines.yml # OIDC-enabled pipeline (see below)
└─ ci/
   └─ deploy_lambda_image.sh # idempotent deployer (see below)
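The Dockerfile can stay minimal when Chromium ships inside an npm dependency rather than a system package. A sketch only, assuming @sparticuz/chromium (which bundles its own Chromium binary) and the file names from the tree above:

```dockerfile
# Sketch: assumes @sparticuz/chromium is in package.json, so no system
# Chromium install is needed. Adjust file names to your repo.
FROM public.ecr.aws/lambda/nodejs:20

# Install production dependencies first so this layer caches well
COPY package*.json ${LAMBDA_TASK_ROOT}/
RUN npm ci --omit=dev

# Handler, render logic, and templates
COPY lambda.js generator.js ${LAMBDA_TASK_ROOT}/
COPY templates/ ${LAMBDA_TASK_ROOT}/templates/

# "file.export" format: lambda.js must export a function named handler
CMD ["lambda.handler"]
```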
Bitbucket Pipelines (OIDC + Docker + deploy)
Put this file in the root directory of your repository. Bitbucket picks it up automatically, although you have to trigger the deployment manually the first time.
# bitbucket-pipelines.yml
# -----------------------------------------------------------------------------
# This pipeline builds a Docker image for the Lambda function, pushes it to ECR,
# and then deploys/updates the Lambda from that image. It uses Bitbucket OIDC,
# so there are no long-lived AWS access keys anywhere.
# -----------------------------------------------------------------------------
image: node:20

options:
  size: 2x # more CPU/RAM for Docker builds; remove if not needed

pipelines:
  branches:
    master:
      - step:
          name: Build, push, deploy (prod)
          deployment: production
          oidc: true # <-- CRITICAL: enable OIDC for this step
          services: [docker]
          caches: [node]
          script:
            # --- Install AWS CLI v2 (needed for 'aws' commands) and Python
            - apt-get update && apt-get install -y unzip curl python3
            - curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
            - unzip -q awscliv2.zip
            - ./aws/install
            - aws --version
            # --- OIDC: export role + write the token file for web-identity
            #     The AWS CLI detects the web identity token via AWS_WEB_IDENTITY_TOKEN_FILE
            - export AWS_REGION="${AWS_REGION}"
            - export AWS_ROLE_ARN="${AWS_ROLE_ARN}"
            - export AWS_WEB_IDENTITY_TOKEN_FILE="$PWD/web-identity-token"
            - echo "$BITBUCKET_STEP_OIDC_TOKEN" > "$AWS_WEB_IDENTITY_TOKEN_FILE"
            # --- Inspect OIDC token claims (iss, aud, sub) for debugging
            #     Useful when the trust policy conditions don't match.
            - python3 -c "import os, base64, json; p=os.environ['BITBUCKET_STEP_OIDC_TOKEN'].split('.')[1]; p += '=' * (-len(p)%4); print(json.dumps(json.loads(base64.urlsafe_b64decode(p)), indent=2))"
            # --- Verify OIDC wiring and assumed identity
            - aws sts get-caller-identity
            # --- ECR coordinates
            - export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
            - export ECR_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
            - export IMAGE_TAG="${BITBUCKET_COMMIT}" # prod: use commit sha
            - export IMAGE_URI="${ECR_URI}/${ECR_REPO}:${IMAGE_TAG}"
            # --- Ensure ECR repo exists (idempotent)
            - |
              if ! aws ecr describe-repositories --repository-names "$ECR_REPO" >/dev/null 2>&1; then
                aws ecr create-repository --repository-name "$ECR_REPO" >/dev/null
              fi
            # --- Login to ECR (Docker)
            - aws ecr get-login-password --region "$AWS_REGION" | docker login --username AWS --password-stdin "$ECR_URI"
            # --- Build & push container image for Lambda
            - docker build -t "$IMAGE_URI" .
            - docker push "$IMAGE_URI"
            # --- Deploy Lambda (create/update + function URL etc.)
            - bash ./ci/deploy_lambda_image.sh
  pull-requests:
    "**":
      - step:
          name: Build & push (no deploy)
          oidc: true
          services: [docker]
          caches: [node]
          script:
            - apt-get update && apt-get install -y unzip curl
            - curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
            - unzip -q awscliv2.zip
            - ./aws/install
            - export AWS_REGION="${AWS_REGION}"
            - export AWS_ROLE_ARN="${AWS_ROLE_ARN}"
            - export AWS_WEB_IDENTITY_TOKEN_FILE="$PWD/web-identity-token"
            - echo "$BITBUCKET_STEP_OIDC_TOKEN" > "$AWS_WEB_IDENTITY_TOKEN_FILE"
            - export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
            - export ECR_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
            - export IMAGE_TAG="pr-${BITBUCKET_PR_ID}-${BITBUCKET_COMMIT}" # distinct tag for PRs
            - export IMAGE_URI="${ECR_URI}/${ECR_REPO}:${IMAGE_TAG}"
            - |
              if ! aws ecr describe-repositories --repository-names "$ECR_REPO" >/dev/null 2>&1; then
                aws ecr create-repository --repository-name "$ECR_REPO" >/dev/null
              fi
            - aws ecr get-login-password --region "$AWS_REGION" | docker login --username AWS --password-stdin "$ECR_URI"
            - docker build -t "$IMAGE_URI" .
            - docker push "$IMAGE_URI"

definitions:
  services:
    docker:
      memory: 2048 # bump if headless-chromium layers are large
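The claim-dump one-liner in the script section is handy outside the pipeline too. Wrapped as a local helper (the function name is mine), it decodes the payload of any JWT without verifying the signature:

```shell
# decode_jwt_claims: print the claims (payload) of a JWT, unverified.
# Usage: decode_jwt_claims "$BITBUCKET_STEP_OIDC_TOKEN"
decode_jwt_claims() {
  payload=$(printf '%s' "$1" | cut -d. -f2)
  # base64url payloads are unpadded; re-pad before decoding
  python3 -c "import sys, base64, json; p = sys.argv[1]; p += '=' * (-len(p) % 4); print(json.dumps(json.loads(base64.urlsafe_b64decode(p)), indent=2))" "$payload"
}
```

Check that `iss`, `aud`, and `sub` match your trust policy conditions exactly; a single stray character there is the most common reason `AssumeRoleWithWebIdentity` fails.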
Deployment script (with defensive waits)
This is the deployer I call from the pipeline. It creates/updates the execution role, ensures ECR tag visibility, updates code first, then configuration, and (if missing) creates a public Function URL.
Spoiler: it's a long file. Don't worry, the comments walk you through each step.
#!/usr/bin/env bash
set -euo pipefail
# Required env
: "${AWS_REGION:?Missing AWS_REGION}"
: "${FUNCTION_NAME:?Missing FUNCTION_NAME}"
: "${OG_BUCKET:?Missing OG_BUCKET}"
: "${ECR_REPO:?Missing ECR_REPO}"
export AWS_DEFAULT_REGION="$AWS_REGION"
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
ECR_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
IMAGE_TAG="${BITBUCKET_COMMIT}"
IMAGE_URI="${ECR_URI}/${ECR_REPO}:${IMAGE_TAG}"
EXEC_ROLE_NAME="${FUNCTION_NAME}-exec"
EXEC_ROLE_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:role/${EXEC_ROLE_NAME}"
echo "Account: ${AWS_ACCOUNT_ID}"
echo "Region: ${AWS_REGION}"
echo "ECR Repo: ${ECR_REPO}"
echo "Image URI: ${IMAGE_URI}"
echo "Exec Role ARN: ${EXEC_ROLE_ARN}"
# --- Wait until Lambda is ready (no in-progress update) ---
wait_lambda_ready() {
  local phase="${1:-}"
  echo "Waiting for Lambda ($phase) to be Active/Successful..."
  for i in {1..60}; do
    local state status reason
    state=$(aws lambda get-function-configuration --function-name "$FUNCTION_NAME" --query 'State' --output text 2>/dev/null || echo "UNKNOWN")
    status=$(aws lambda get-function-configuration --function-name "$FUNCTION_NAME" --query 'LastUpdateStatus' --output text 2>/dev/null || echo "UNKNOWN")
    reason=$(aws lambda get-function-configuration --function-name "$FUNCTION_NAME" --query 'LastUpdateStatusReason' --output text 2>/dev/null || echo "")
    echo "  State=$state LastUpdateStatus=$status ${reason:+Reason=$reason}"
    if [ "$state" = "Active" ] && [ "$status" = "Successful" ]; then
      return 0
    fi
    if [ "$status" = "Failed" ]; then
      echo "Lambda update failed: $reason"
      exit 1
    fi
    sleep 5
  done
  echo "Timed out waiting for Lambda to become Active/Successful."
  exit 1
}
# 1) Ensure execution role exists (+ CloudWatch logs)
if ! aws iam get-role --role-name "$EXEC_ROLE_NAME" >/dev/null 2>&1; then
  echo "Creating Lambda execution role: $EXEC_ROLE_NAME"
  cat > /tmp/trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "lambda.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
  aws iam create-role --role-name "$EXEC_ROLE_NAME" --assume-role-policy-document file:///tmp/trust.json >/dev/null
  aws iam attach-role-policy --role-name "$EXEC_ROLE_NAME" --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole >/dev/null
fi
# 1a) Upsert inline S3 PutObject
cat > /tmp/s3put.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:PutObject", "s3:PutObjectAcl"],
    "Resource": ["arn:aws:s3:::${OG_BUCKET}/images/*"]
  }]
}
EOF
aws iam put-role-policy --role-name "$EXEC_ROLE_NAME" --policy-name "S3PutImages" --policy-document file:///tmp/s3put.json >/dev/null || true
# 1b) Upsert inline ECR pull (broadened to "*") + auth token
cat > /tmp/ecrpull.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ecr:BatchCheckLayerAvailability",
      "ecr:BatchGetImage",
      "ecr:GetDownloadUrlForLayer",
      "ecr:DescribeImages"
    ],
    "Resource": "*"
  }]
}
EOF
aws iam put-role-policy --role-name "$EXEC_ROLE_NAME" --policy-name "EcrImagePull" --policy-document file:///tmp/ecrpull.json >/dev/null || true
cat > /tmp/ecrauth.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ecr:GetAuthorizationToken"],
    "Resource": "*"
  }]
}
EOF
aws iam put-role-policy --role-name "$EXEC_ROLE_NAME" --policy-name "EcrAuthToken" --policy-document file:///tmp/ecrauth.json >/dev/null || true
# 1c) If ECR repo is KMS-encrypted with a customer key, grant decrypt
ENC_TYPE=$(aws ecr describe-repositories --repository-names "$ECR_REPO" --query "repositories[0].encryptionConfiguration.encryptionType" --output text 2>/dev/null || echo "AES256")
KMS_KEY=$(aws ecr describe-repositories --repository-names "$ECR_REPO" --query "repositories[0].encryptionConfiguration.kmsKey" --output text 2>/dev/null || echo "None")
if [ "$ENC_TYPE" = "KMS" ] && [ "$KMS_KEY" != "None" ]; then
  echo "ECR repo uses KMS key: $KMS_KEY — adding decrypt permission to exec role"
  cat > /tmp/ecrkms.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["kms:Decrypt", "kms:DescribeKey"],
    "Resource": "${KMS_KEY}"
  }]
}
EOF
  aws iam put-role-policy --role-name "$EXEC_ROLE_NAME" --policy-name "EcrKmsDecrypt" --policy-document file:///tmp/ecrkms.json >/dev/null || true
fi
# Give IAM a moment to propagate
sleep 10
# Diagnostics (optional): simulate exec-role permissions for ECR pull
echo "Simulating exec role permissions for ECR pull..."
aws iam simulate-principal-policy --policy-source-arn "$EXEC_ROLE_ARN" --action-names ecr:BatchGetImage ecr:GetDownloadUrlForLayer ecr:BatchCheckLayerAvailability ecr:DescribeImages ecr:GetAuthorizationToken --resource-arns "arn:aws:ecr:${AWS_REGION}:${AWS_ACCOUNT_ID}:repository/${ECR_REPO}" "*" --query "EvaluationResults[].{Action:EvalActionName,Decision:EvalDecision}" --output table || true
# --- helper: check image via AWS (preferred) or Docker (fallback)
check_image() {
  echo "Checking image presence in ECR (describe-images)..."
  if aws ecr describe-images --repository-name "$ECR_REPO" --image-ids imageTag="$IMAGE_TAG" >/dev/null 2>&1; then
    return 0
  fi
  echo "describe-images failed; trying 'docker manifest inspect'..."
  if docker manifest inspect "$IMAGE_URI" >/dev/null 2>&1; then
    return 0
  fi
  echo "Last aws ecr describe-images error:" >&2
  aws ecr describe-images --repository-name "$ECR_REPO" --image-ids imageTag="$IMAGE_TAG" || true
  return 1
}
# 1.5) Wait until the image tag is visible (or docker can see it)
echo "Waiting for image ${IMAGE_URI} to be available..."
for i in {1..12}; do
  if check_image; then
    echo "Image is available."
    break
  fi
  echo "Not yet available, retrying in 5s... ($i/12)"
  sleep 5
  if [ "$i" -eq 12 ]; then
    echo "Image ${IMAGE_URI} not available after 60s"; exit 1
  fi
done
# 2) Create or update the function (container image)
if aws lambda get-function --function-name "$FUNCTION_NAME" >/dev/null 2>&1; then
  echo "Updating Lambda code: $FUNCTION_NAME"
  aws lambda update-function-code --function-name "$FUNCTION_NAME" --image-uri "$IMAGE_URI" >/dev/null
  # Wait for async code update to finish before config change
  wait_lambda_ready "after code update"
  echo "Updating configuration (memory/timeout/env/ephemeral storage)"
  aws lambda update-function-configuration --function-name "$FUNCTION_NAME" --memory-size 2048 --timeout 60 --environment "Variables={OG_BUCKET=${OG_BUCKET}}" --ephemeral-storage "Size=1024" >/dev/null
  # Wait again so subsequent steps don’t race
  wait_lambda_ready "after config update"
else
  echo "Creating Lambda function: $FUNCTION_NAME"
  aws lambda create-function --function-name "$FUNCTION_NAME" --package-type Image --code "ImageUri=${IMAGE_URI}" --role "$EXEC_ROLE_ARN" --memory-size 2048 --timeout 60 --environment "Variables={OG_BUCKET=${OG_BUCKET}}" --architectures x86_64 --ephemeral-storage "Size=1024" >/dev/null
  # New functions spend time in Pending/Active — wait it out
  wait_lambda_ready "after create"
fi
# 3) Function URL (public). Lock down later if needed.
wait_lambda_ready "before function URL"
if ! aws lambda get-function-url-config --function-name "$FUNCTION_NAME" >/dev/null 2>&1; then
  echo "Creating Function URL"
  aws lambda create-function-url-config --function-name "$FUNCTION_NAME" --auth-type NONE --cors '{"AllowOrigins":["*"],"AllowMethods":["*"],"AllowHeaders":["*"]}' >/dev/null
  aws lambda add-permission --function-name "$FUNCTION_NAME" --statement-id "FunctionURLAllowPublic" --action "lambda:InvokeFunctionUrl" --principal "*" --function-url-auth-type "NONE" >/dev/null
else
  echo "Function URL already exists"
fi
LAMBDA_URL=$(aws lambda get-function-url-config --function-name "$FUNCTION_NAME" --query FunctionUrl --output text)
echo "Deployed: ${FUNCTION_NAME}"
echo "Image: ${IMAGE_URI}"
echo "URL: ${LAMBDA_URL}"
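Once deployed, fetching an image is a plain GET against the Function URL. A sketch of building the request; the title query parameter is my assumption for illustration, so match it to whatever your handler actually reads:

```shell
# Build a request URL against the Function URL, URL-encoding the query value.
# The 'title' parameter is illustrative; match it to what lambda.js reads.
og_image_url() {
  base="$1"; title="$2"
  encoded=$(python3 -c "import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))" "$title")
  printf '%s?title=%s\n' "$base" "$encoded"
}

# Then: curl -fsS "$(og_image_url "$LAMBDA_URL" "Hello, OG!")" -o og.jpg
```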
Deployer role (policy + trust)
You can set this up manually in the AWS Console, or scroll down for the CLI commands you can use for the one-time setup.
Deployer inline policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LambdaAndIAM",
      "Effect": "Allow",
      "Action": [
        "lambda:CreateFunction",
        "lambda:UpdateFunctionCode",
        "lambda:UpdateFunctionConfiguration",
        "lambda:GetFunction",
        "lambda:GetFunctionConfiguration",
        "lambda:CreateFunctionUrlConfig",
        "lambda:GetFunctionUrlConfig",
        "lambda:AddPermission",
        "iam:GetRole",
        "iam:CreateRole",
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy",
        "iam:PassRole",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:lambda:<REGION>:*:function:<FUNCTION_PREFIX>*",
        "arn:aws:iam::*:role/<FUNCTION_PREFIX>-exec*",
        "arn:aws:s3:::<BUCKET>/images/*"
      ]
    },
    {
      "Sid": "EcrPushAndDescribe",
      "Effect": "Allow",
      "Action": [
        "ecr:CreateRepository",
        "ecr:DescribeRepositories",
        "ecr:GetRepositoryPolicy",
        "ecr:SetRepositoryPolicy",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage",
        "ecr:DescribeImages",
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetAuthorizationToken",
        "sts:GetCallerIdentity"
      ],
      "Resource": "*"
    }
  ]
}
Deployer trust policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/api.bitbucket.org/2.0/workspaces/<WORKSPACE_SLUG>/pipelines-config/identity/oidc"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "api.bitbucket.org/2.0/workspaces/<WORKSPACE_SLUG>/pipelines-config/identity/oidc:aud": "ari:cloud:bitbucket::workspace/<WORKSPACE_UUID>"
        },
        "StringLike": {
          "api.bitbucket.org/2.0/workspaces/<WORKSPACE_SLUG>/pipelines-config/identity/oidc:sub": "{<REPO_UUID>}:{<ENV_UUID>}:*"
        }
      }
    }
  ]
}
One-time setup
Replace placeholders with your real values. Compute the SHA-1 thumbprint for api.bitbucket.org (AWS prompts for it in some accounts). Below are the exact commands I run.
# 1) Create the OIDC provider for your Bitbucket workspace
aws iam create-open-id-connect-provider --url https://api.bitbucket.org/2.0/workspaces/<WORKSPACE_SLUG>/pipelines-config/identity/oidc --thumbprint-list <BITBUCKET_SHA1_THUMBPRINT> --client-id-list ari:cloud:bitbucket::workspace/<WORKSPACE_UUID>
# 2) Create the deployer role with the trust policy from above
aws iam create-role --role-name bitbucket-deployer --assume-role-policy-document file://bitbucket-trust-policy.json
# 3) Attach the inline policy granting ECR/Lambda/IAM actions
aws iam put-role-policy --role-name bitbucket-deployer --policy-name BitbucketDeployerPolicy --policy-document file://bitbucket-deployer-policy.json
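If your account does ask for the thumbprint, one way to compute it is with openssl. A sketch (thumbprint_of is my helper name); IAM wants 40 hex characters without colons, and AWS documents taking the fingerprint of the CA certificate in the issuer's chain:

```shell
# Print the SHA-1 fingerprint of a PEM certificate in the format IAM expects
# (lowercase hex, no colons). Feed it the relevant CA cert from the chain.
thumbprint_of() {
  openssl x509 -in "$1" -fingerprint -sha1 -noout \
    | cut -d= -f2 | tr -d ':' | tr 'A-F' 'a-f'
}

# Fetch the chain first (requires network), then pick the CA cert out of it:
#   openssl s_client -connect api.bitbucket.org:443 -servername api.bitbucket.org \
#     -showcerts </dev/null 2>/dev/null > chain.pem
```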
Environment variables you need
This setup relies on a few environment variables that must be defined in your Bitbucket repository (Repository Settings → Pipelines → Repository variables).
- AWS_REGION – the AWS region where your Lambda and ECR live (e.g. eu-central-1).
- AWS_ROLE_ARN – the ARN of the IAM role Bitbucket should assume via OIDC (your deployer role).
- ECR_REPO – the name of your ECR repository (it will be auto-created if missing).
- FUNCTION_NAME – the name of your Lambda function (e.g. og-image-generator).
- OG_BUCKET – the target S3 bucket where rendered images will be uploaded (e.g. cdn.imrecsige.dev).
Some environment variables come from Bitbucket automatically:
- BITBUCKET_COMMIT – commit hash of the build, used as the image tag.
- BITBUCKET_STEP_OIDC_TOKEN – the OIDC token injected into the pipeline step when oidc: true is enabled.
- BITBUCKET_PR_ID – the pull request ID, used in image tagging for PR builds.
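Before the first run it's worth confirming the five repository variables are actually set. A small preflight sketch (check_required_vars is my name) mirroring the :? guards at the top of the deploy script, but listing every missing name instead of stopping at the first:

```shell
# Fail fast when a required repository variable is missing, reporting all
# absent names rather than aborting on the first one.
check_required_vars() {
  missing=0
  for v in AWS_REGION AWS_ROLE_ARN ECR_REPO FUNCTION_NAME OG_BUCKET; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "Missing required variable: $v" >&2
      missing=1
    fi
  done
  return $missing
}
```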
Runtime defaults that kept Chromium happy
- Arch: x86_64
- Memory: 2048 MB
- Timeout: 60 s
- Ephemeral storage (/tmp): 1 GB
- Bucket/prefix: OG_BUCKET/images/
- Image tag: ${BITBUCKET_COMMIT}
Function URL & custom domain
Function URLs are stable unless you delete them. I keep it public for this use-case. If you want your own domain + TLS, put CloudFront in front and set the origin to the Function URL (or switch the URL to AWS_IAM and add an Origin Access Control).
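For the CloudFront route, the distribution's origin entry points at the Function URL host. A fragment with placeholder values; the field names follow CloudFront's DistributionConfig origin shape, but treat this as a sketch, not a complete configuration:

```json
{
  "Id": "og-lambda-url",
  "DomainName": "<url-id>.lambda-url.<region>.on.aws",
  "CustomOriginConfig": {
    "HTTPPort": 80,
    "HTTPSPort": 443,
    "OriginProtocolPolicy": "https-only"
  }
}
```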
If broken
- Verify OIDC: decode the step token (iss/aud/sub) and run aws sts get-caller-identity.
- ECR visibility: aws ecr describe-images --image-ids imageTag=... and, if it’s slow to propagate, docker manifest inspect <image-uri>.
- Lambda races: only call update-function-configuration after update-function-code settles; the script’s wait_lambda_ready prevents ResourceConflictException.
- Policy sanity: run the IAM Policy Simulator against the execution role for the ecr:* pull actions; if it’s stubborn, temporarily attach AmazonEC2ContainerRegistryReadOnly to prove it’s a permissions issue.
- Same account/region: confirm Lambda and ECR are in the same account & region.
Conclusion
I now build a Puppeteer/Chromium container, push it to ECR, and deploy a Lambda from Bitbucket without Serverless/SAM and without long-lived AWS keys. The function writes OG images to S3 and is reachable via a Function URL.
The rocks along the way:
- Using the exact Bitbucket OIDC issuer/audience/sub.
- Giving the execution role explicit ECR pull permissions.
- Adding defensive waits around Lambda updates.
Everything else is boring plumbing — exactly how I like my deploys.