Serving assets from Cloudflare R2
Deploy compiled assets to a CDN for better performance

Plain can compile and fingerprint your assets for production, and for many apps, serving them directly from your app server works fine. But as your assets grow larger or traffic increases, offloading them to a CDN can reduce server load and improve performance for users far from your server.
This guide walks through setting up Cloudflare R2 for asset hosting, with specific instructions for Heroku deployments. The tradeoff is added complexity in your deployment pipeline.
Why R2? No egress fees (unlike S3), automatic compression at the edge, and custom domain support.
Cloudflare setup
Create a bucket
- Go to the Cloudflare dashboard and select R2
- Create a new bucket (e.g., myapp-assets)
- Under Settings > Public access, connect a custom domain (e.g., assets.example.com)
Create API credentials
- In R2, go to Manage R2 API Tokens
- Create a new token with Object Read & Write permissions for your bucket
- Save the Access Key ID, Secret Access Key, and note your account's S3 endpoint
Plain configuration
ASSETS_CDN_URL
Tell Plain where your assets will be served from:
# app/settings.py
import os

if "DYNO" in os.environ:
    ASSETS_CDN_URL = "https://assets.example.com/site/"
Or use the PLAIN_ASSETS_CDN_URL environment variable instead of modifying settings.py.
The trailing slash and path (/site/) should match where you upload files in R2.
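On Heroku, for example, you could set that environment variable as a config var instead of branching on DYNO in settings.py:
heroku config:set PLAIN_ASSETS_CDN_URL="https://assets.example.com/site/"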
Build without compression
Since Cloudflare compresses responses at the edge, you don't need Plain to generate .gz files:
plain build --no-compress
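If you want to double-check that no precompressed files were generated, you can look for .gz files in the compiled output (this is the same directory uploaded in the next step):
find .plain/assets/compiled/ -name "*.gz" | wc -l   # should print 0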
Uploading with rclone
rclone can be configured for R2 entirely via environment variables, which is ideal for CI/CD:
RCLONE_CONFIG_R2_TYPE=s3 \
RCLONE_CONFIG_R2_PROVIDER=Cloudflare \
RCLONE_CONFIG_R2_ACCESS_KEY_ID=$R2_ACCESS_KEY_ID \
RCLONE_CONFIG_R2_SECRET_ACCESS_KEY=$R2_SECRET_ACCESS_KEY \
RCLONE_CONFIG_R2_ENDPOINT=$R2_ENDPOINT \
rclone copy .plain/assets/compiled/ r2:$R2_BUCKET/site/ --checksum -v
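To confirm the files landed where you expect, you can list them back with the same environment variables using rclone's ls command:
RCLONE_CONFIG_R2_TYPE=s3 \
RCLONE_CONFIG_R2_PROVIDER=Cloudflare \
RCLONE_CONFIG_R2_ACCESS_KEY_ID=$R2_ACCESS_KEY_ID \
RCLONE_CONFIG_R2_SECRET_ACCESS_KEY=$R2_SECRET_ACCESS_KEY \
RCLONE_CONFIG_R2_ENDPOINT=$R2_ENDPOINT \
rclone ls r2:$R2_BUCKET/site/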
Use copy instead of sync. The sync command deletes files in R2 that don't exist locally,
which is dangerous when your upload happens during the build phase (before deployment succeeds).
If the release fails, the old version is still running but its assets are gone from R2.
With copy, old assets accumulate but don't conflict since filenames are fingerprinted.
You can clean up periodically if storage becomes a concern.
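If you do prune, one option is an occasional manual pass with rclone's age filter. This is only a sketch: because unchanged files are skipped on upload, an old object can still be referenced by the live deploy, and the 90-day window is arbitrary, so review the listing before deleting anything.
# Uses the same RCLONE_CONFIG_R2_* variables as above; remove --dry-run after reviewing the output
rclone delete r2:$R2_BUCKET/site/ --min-age 90d --dry-run -v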
Heroku example
Add these environment variables to your Heroku app:
- R2_ACCESS_KEY_ID
- R2_SECRET_ACCESS_KEY
- R2_ENDPOINT (e.g., https://<account-id>.r2.cloudflarestorage.com)
- R2_BUCKET
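For example, with the Heroku CLI (values shown are placeholders):
heroku config:set \
  R2_ACCESS_KEY_ID=xxxxxxxx \
  R2_SECRET_ACCESS_KEY=xxxxxxxx \
  R2_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com \
  R2_BUCKET=myapp-assets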
In bin/post_compile:
#!/bin/bash -e
plain build --no-compress
# Download rclone
curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
unzip -q rclone-current-linux-amd64.zip
RCLONE=$(find rclone-*-linux-amd64 -name rclone -type f)
# Upload compiled assets to R2
RCLONE_CONFIG_R2_TYPE=s3 \
RCLONE_CONFIG_R2_PROVIDER=Cloudflare \
RCLONE_CONFIG_R2_ACCESS_KEY_ID=$R2_ACCESS_KEY_ID \
RCLONE_CONFIG_R2_SECRET_ACCESS_KEY=$R2_SECRET_ACCESS_KEY \
RCLONE_CONFIG_R2_ENDPOINT=$R2_ENDPOINT \
$RCLONE copy .plain/assets/compiled/ r2:$R2_BUCKET/site/ --checksum -v --stats 0
# Cleanup (assets are served from R2 now)
rm -rf rclone-* .plain/assets/compiled/
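The standard Heroku Python buildpack runs bin/post_compile automatically at the end of the build, as long as the file is committed and executable:
chmod +x bin/post_compile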
Verifying it works
After deploying, open browser dev tools and check the Network tab for asset requests. Look for:
- Request URL pointing to your R2 domain with a fingerprinted filename
- Cf-Cache-Status: HIT confirming Cloudflare is caching the asset
- Content-Encoding: zstd (or gzip) showing Cloudflare is compressing at the edge
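You can also spot-check from the command line; the filename here is a placeholder, so substitute one from your compiled output:
# HEAD request for a deployed asset (placeholder filename)
curl -sI -H "Accept-Encoding: gzip, zstd" https://assets.example.com/site/main.abc123.css | grep -iE "cf-cache-status|content-encoding"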