Overview

By default, Tornado uploads files to our managed storage. You can configure your own cloud storage to receive downloads directly.
Tornado supports four major cloud providers, giving you the flexibility to use the storage solution that best fits your infrastructure.

Supported Providers

AWS S3 / S3-Compatible

AWS S3, Cloudflare R2, MinIO, DigitalOcean Spaces, Backblaze B2, Wasabi, OVH

Azure Blob Storage

Azure Storage Accounts with Blob containers

Google Cloud Storage

GCS buckets with service account authentication

Alibaba OSS

Alibaba Cloud Object Storage Service
Each provider has its own dedicated API endpoint:
Provider                Configure          Remove
S3 / S3-Compatible      POST /user/s3      DELETE /user/s3
Azure Blob              POST /user/blob    DELETE /user/blob
Google Cloud Storage    POST /user/gcs     DELETE /user/gcs
Alibaba OSS             POST /user/oss     DELETE /user/oss

S3 / S3-Compatible Storage

Works with AWS S3 and any S3-compatible provider.

Supported S3 Providers

Provider               Endpoint Format
AWS S3                 https://s3.{region}.amazonaws.com
Cloudflare R2          https://{account_id}.r2.cloudflarestorage.com
DigitalOcean Spaces    https://{region}.digitaloceanspaces.com
Backblaze B2           https://s3.{region}.backblazeb2.com
Wasabi                 https://s3.{region}.wasabisys.com
MinIO                  https://your-minio-server.com
OVH Object Storage     https://s3.{region}.cloud.ovh.net

Configure S3 Storage

curl -X POST "https://api.tornadoapi.io/user/s3" \
  -H "x-api-key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "endpoint": "https://s3.us-east-1.amazonaws.com",
    "bucket": "my-tornado-downloads",
    "region": "us-east-1",
    "access_key": "AKIAIOSFODNN7EXAMPLE",
    "secret_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    "folder_prefix": "downloads/"
  }'
Example: Cloudflare R2

1. Create R2 Bucket

In the Cloudflare dashboard, go to R2 and create a new bucket.

2. Create API Token

Create an R2 API token with Object Read & Write permissions.

3. Get Account ID

Find your Account ID in the Cloudflare dashboard URL or on the overview page.

4. Configure Tornado

curl -X POST "https://api.tornadoapi.io/user/s3" \
  -H "x-api-key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "endpoint": "https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
    "bucket": "my-downloads",
    "region": "auto",
    "access_key": "YOUR_R2_ACCESS_KEY_ID",
    "secret_key": "YOUR_R2_SECRET_ACCESS_KEY"
  }'

Required S3 Permissions

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-tornado-downloads",
        "arn:aws:s3:::my-tornado-downloads/*"
      ]
    }
  ]
}
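Before pointing Tornado at the bucket, you may want to confirm your credentials actually carry these four actions. Below is a minimal, hedged sketch: `verify_s3_permissions` is a hypothetical helper (not part of Tornado) that exercises each action once through any boto3-style S3 client you pass in.

```python
def verify_s3_permissions(client, bucket, key="tornado-permission-check"):
    """Exercise the four required S3 actions once each using a boto3-style client.

    Returns True if all calls succeed, False on any access error.
    """
    try:
        client.put_object(Bucket=bucket, Key=key, Body=b"ok")          # s3:PutObject
        client.get_object(Bucket=bucket, Key=key)                      # s3:GetObject
        client.list_objects_v2(Bucket=bucket, Prefix=key, MaxKeys=1)   # s3:ListBucket
        client.delete_object(Bucket=bucket, Key=key)                   # s3:DeleteObject
    except Exception:
        return False
    return True

# Example wiring (endpoint and credentials are placeholders):
# import boto3
# client = boto3.client("s3", endpoint_url="https://s3.us-east-1.amazonaws.com",
#                       aws_access_key_id="...", aws_secret_access_key="...")
# verify_s3_permissions(client, "my-tornado-downloads")
```

Injecting the client keeps the check usable against any S3-compatible endpoint from the table above.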

Azure Blob Storage

Use Azure Storage Accounts with Blob containers.

Configure Azure Blob

curl -X POST "https://api.tornadoapi.io/user/blob" \
  -H "x-api-key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "account_name": "mystorageaccount",
    "container": "tornado-downloads",
    "account_key": "your-storage-account-key-base64...",
    "folder_prefix": "videos/"
  }'
1. Create Storage Account

In the Azure Portal, go to Storage Accounts > Create.
  • Choose Standard performance
  • Select the Hot access tier
  • Enable Blob public access if needed for direct URLs

2. Create Container

In your Storage Account, go to Containers > + Container. Name it (e.g., tornado-downloads).

3. Get Access Key

Go to Access keys in your Storage Account. Copy key1 or key2.

4. Configure Tornado

curl -X POST "https://api.tornadoapi.io/user/blob" \
  -H "x-api-key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "account_name": "mystorageaccount",
    "container": "tornado-downloads",
    "account_key": "xxxxxxxxxxxxxxxxxxx=="
  }'

Alternative: SAS Token

You can use a SAS token instead of the account key for more granular permissions:
curl -X POST "https://api.tornadoapi.io/user/blob" \
  -H "x-api-key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "account_name": "mystorageaccount",
    "container": "tornado-downloads",
    "sas_token": "sv=2022-11-02&ss=b&srt=co&sp=rwdlacyx&se=2025-01-01..."
  }'
Provide either account_key OR sas_token, not both. SAS tokens should have read, write, delete, and list permissions.

Required Azure Permissions

When using a SAS token, ensure these permissions are enabled:
  • Read (r) - For generating download URLs
  • Write (w) - For uploading files
  • Delete (d) - For cleanup operations
  • List (l) - For validation
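A SAS token encodes its granted permissions in the `sp` query field, so you can check it locally before submitting it. The sketch below is an illustrative helper (not an Azure SDK call); it simply parses the token string and reports which of the four required permissions are missing.

```python
from urllib.parse import parse_qs

REQUIRED_SAS_PERMISSIONS = set("rwdl")  # read, write, delete, list

def missing_sas_permissions(sas_token):
    """Return the set of required permissions absent from a SAS token's sp field."""
    granted = set(parse_qs(sas_token).get("sp", [""])[0])
    return REQUIRED_SAS_PERMISSIONS - granted
```

An empty result means the token covers everything Tornado needs; anything returned would trigger a credential validation failure.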

Google Cloud Storage

Use GCS buckets with service account authentication.

Configure GCS

curl -X POST "https://api.tornadoapi.io/user/gcs" \
  -H "x-api-key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "project_id": "my-gcp-project",
    "bucket": "tornado-downloads",
    "service_account_json": "{\"type\":\"service_account\",\"project_id\":\"my-gcp-project\",...}",
    "folder_prefix": "videos/"
  }'
1. Create GCS Bucket

In the Google Cloud Console, go to Cloud Storage > Create Bucket.
  • Choose a unique name
  • Select your preferred region
  • Choose the Standard storage class

2. Create Service Account

Go to IAM & Admin > Service Accounts > Create Service Account. Name it (e.g., tornado-storage).

3. Grant Permissions

Assign the Storage Object Admin role to the service account for your bucket:
gsutil iam ch \
  serviceAccount:tornado-storage@PROJECT.iam.gserviceaccount.com:objectAdmin \
  gs://tornado-downloads

4. Download JSON Key

In the service account details, go to Keys > Add Key > Create new key > JSON. Download and save the JSON file.

5. Configure Tornado

Use the JSON key content (minified):
curl -X POST "https://api.tornadoapi.io/user/gcs" \
  -H "x-api-key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "project_id": "my-gcp-project",
    "bucket": "tornado-downloads",
    "service_account_json": "{\"type\":\"service_account\",\"project_id\":\"my-gcp-project\",\"private_key_id\":\"...\",\"private_key\":\"-----BEGIN PRIVATE KEY-----\\n...\\n-----END PRIVATE KEY-----\\n\",\"client_email\":\"tornado-storage@my-gcp-project.iam.gserviceaccount.com\",\"client_id\":\"...\",\"auth_uri\":\"https://accounts.google.com/o/oauth2/auth\",\"token_uri\":\"https://oauth2.googleapis.com/token\"}"
  }'
For the service_account_json field, you can either:
  • Pass the JSON as an escaped string
  • Base64 encode the JSON file: base64 -w0 service-account.json
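If you prefer to prepare the value programmatically, the following sketch (`encode_service_account` is a hypothetical helper, not part of Tornado) loads the key file, validates it as JSON, and produces both the minified string and its Base64 encoding:

```python
import base64
import json

def encode_service_account(path):
    """Return (minified_json, base64_json) for a service-account key file."""
    with open(path) as fh:
        key = json.load(fh)  # raises ValueError if the file is not valid JSON
    minified = json.dumps(key, separators=(",", ":"))  # no whitespace
    encoded = base64.b64encode(minified.encode()).decode()
    return minified, encoded
```

Validating the JSON locally avoids the "Invalid service account JSON" error described under Troubleshooting.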

Required GCS Permissions

The service account needs the Storage Object Admin role, which includes:
  • storage.objects.create
  • storage.objects.delete
  • storage.objects.get
  • storage.objects.list

Alibaba Cloud OSS

Alibaba OSS uses its own endpoint and credential format.

Configure Alibaba OSS

curl -X POST "https://api.tornadoapi.io/user/oss" \
  -H "x-api-key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "endpoint": "https://oss-cn-hangzhou.aliyuncs.com",
    "bucket": "tornado-downloads",
    "access_key_id": "your-oss-access-key-id",
    "access_key_secret": "your-oss-access-key-secret",
    "folder_prefix": "videos/"
  }'

OSS Endpoint Regions

Region               Endpoint
China (Hangzhou)     https://oss-cn-hangzhou.aliyuncs.com
China (Shanghai)     https://oss-cn-shanghai.aliyuncs.com
China (Beijing)      https://oss-cn-beijing.aliyuncs.com
Singapore            https://oss-ap-southeast-1.aliyuncs.com
US West              https://oss-us-west-1.aliyuncs.com
Germany              https://oss-eu-central-1.aliyuncs.com

Folder Prefix

All providers support an optional folder_prefix to organize your downloads:
{
  "folder_prefix": "tornado/downloads/2024/"
}
Files will be uploaded to:
your-bucket/
└── videos/
    └── tornado/
        └── downloads/
            └── 2024/
                ├── video-title-1.mp4
                └── video-title-2.mp4
The folder prefix is placed inside the base folder (videos/ by default) and combined with any folder parameter you specify in individual job requests.

Base Folder

All providers support an optional base_folder parameter to change the top-level folder where files are organized. By default, files are placed inside a videos/ folder.
{
  "base_folder": "media"
}
The full path structure is:
your-bucket/
└── [base_folder]/            ← Configurable (default: "videos")
    └── [folder_prefix]/
        └── [job folder]/
            └── filename.ext

Examples

Default behavior (no base_folder specified):
your-bucket/videos/my-video.mp4
your-bucket/videos/my-folder/my-video.mp4
Custom base_folder:
{
  "base_folder": "downloads"
}
your-bucket/downloads/my-video.mp4
your-bucket/downloads/my-folder/my-video.mp4
Combined with folder_prefix:
{
  "folder_prefix": "alex/test/",
  "base_folder": "premier"
}
your-bucket/premier/alex/test/my-video.mp4
your-bucket/premier/alex/test/my-folder/my-video.mp4
If you don’t specify base_folder, it defaults to videos for backward compatibility. The base_folder is always the top-level folder, with folder_prefix nested inside it.
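The path rules above can be sketched as a small composition function. This is an illustration of the documented layout, not Tornado's actual implementation; the function name and argument handling are assumptions.

```python
def build_object_key(filename, base_folder="videos", folder_prefix="", job_folder=""):
    """Compose a destination key: [base_folder]/[folder_prefix][job_folder]/filename."""
    parts = [base_folder.strip("/"), folder_prefix.strip("/"), job_folder.strip("/")]
    return "/".join(p for p in parts if p) + "/" + filename
```

Running it against the examples above reproduces `videos/my-video.mp4` for the defaults and `premier/alex/test/my-video.mp4` for the combined configuration.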

Presigned URLs

When you poll job status, the s3_url field contains a presigned/signed URL for your bucket:
{
  "status": "Completed",
  "s3_url": "https://my-bucket.s3.amazonaws.com/video.mp4?X-Amz-Signature=..."
}
Provider       URL Format          Validity
S3/R2          AWS Signature V4    24 hours
Azure Blob     SAS URL             24 hours
GCS            Signed URL V4       24 hours
Alibaba OSS    OSS Signature       24 hours
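If you need to know exactly when a link will stop working, AWS Signature V4 URLs carry that information in their standard query parameters (`X-Amz-Date`, the signing time in UTC, and `X-Amz-Expires`, the lifetime in seconds). A minimal sketch for S3/R2 URLs only; the other providers encode expiry differently:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

def sigv4_url_expiry(url):
    """Compute when an AWS Signature V4 presigned URL expires (as an aware UTC datetime)."""
    qs = parse_qs(urlparse(url).query)
    signed_at = datetime.strptime(qs["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ").replace(tzinfo=timezone.utc)
    return signed_at + timedelta(seconds=int(qs["X-Amz-Expires"][0]))
```

Download the file (or re-poll the job for a fresh URL) before this moment passes.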

Legacy Endpoint (S3 Only)

The /user/bucket endpoint still works for S3-compatible storage only:
# Legacy endpoint (S3 only)
curl -X POST "https://api.tornadoapi.io/user/bucket" \
  -H "x-api-key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"endpoint": "...", "bucket": "...", "region": "...", "access_key": "...", "secret_key": "..."}'
The /user/bucket endpoint is deprecated. Use /user/s3 for all new S3 integrations, or the provider-specific endpoints for other cloud providers.

Reset to Default Storage

To switch back to Tornado’s managed storage, use the DELETE endpoint for your provider:
# Remove S3 storage
curl -X DELETE "https://api.tornadoapi.io/user/s3" \
  -H "x-api-key: sk_your_api_key"

# Remove Azure Blob storage
curl -X DELETE "https://api.tornadoapi.io/user/blob" \
  -H "x-api-key: sk_your_api_key"

# Remove GCS storage
curl -X DELETE "https://api.tornadoapi.io/user/gcs" \
  -H "x-api-key: sk_your_api_key"

# Remove Alibaba OSS storage
curl -X DELETE "https://api.tornadoapi.io/user/oss" \
  -H "x-api-key: sk_your_api_key"
After removing, all new downloads will use Tornado’s managed storage. Existing files in your custom storage remain untouched.

Troubleshooting

Common Errors

Error                                          Cause                            Solution
Credential validation failed: Access Denied    Invalid credentials              Verify access key/secret/token
Credential validation failed: NoSuchBucket     Bucket/container doesn't exist   Create the bucket first
Credential validation failed: timeout          Endpoint unreachable             Check endpoint URL and network
Invalid service account JSON                   Malformed GCS credentials        Validate JSON format
Account key must be valid Base64               Azure key format error           Copy the full key from Azure Portal

Testing Your Configuration

After configuring storage, create a test job to verify everything works:
curl -X POST "https://api.tornadoapi.io/jobs" \
  -H "x-api-key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "max_resolution": "360"
  }'
If the job completes successfully with a valid s3_url, your storage is configured correctly.
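In a script, you would poll the job status until it completes and then read s3_url. The sketch below is a generic polling loop, not official Tornado client code: `fetch_status` stands in for whatever call retrieves the job-status JSON (the exact status-endpoint path is not shown here and is an assumption).

```python
import time

def wait_for_s3_url(fetch_status, poll_interval=5.0, timeout=600.0, sleep=time.sleep):
    """Poll a job-status callable until the job completes, then return its s3_url.

    fetch_status: any zero-argument callable returning the job-status JSON as a dict.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status()
        if job.get("status") == "Completed":
            return job["s3_url"]
        if job.get("status") == "Failed":
            raise RuntimeError(f"job failed: {job}")
        sleep(poll_interval)  # wait before the next poll
    raise TimeoutError("job did not complete in time")
```

Injecting `fetch_status` and `sleep` keeps the loop testable and independent of any particular HTTP library.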

Inline Storage (Per-Request)

For marketplace users or one-off configurations, you can provide storage credentials directly in the job request:
curl -X POST "https://api.tornadoapi.io/jobs" \
  -H "x-api-key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "format": "mp4",
    "storage": {
      "provider": "s3",
      "endpoint": "https://s3.amazonaws.com",
      "bucket": "my-videos",
      "region": "us-east-1",
      "access_key": "AKIAIOSFODNN7EXAMPLE",
      "secret_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
    }
  }'
Inline storage credentials:
  • Take priority over pre-configured storage
  • Are validated before the job is accepted
  • Are never stored or logged
  • Support all providers (S3, Azure Blob, GCS, OSS)
  • Support folder_prefix and base_folder parameters
For API marketplace users (RapidAPI, Apify, Zyla), inline storage credentials are required for every request. See the Marketplace Integration guide for details.
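When building these requests programmatically, a small guard against typos in the provider name saves a round trip. The helper below is an assumption for illustration (the function name is not part of the Tornado API); it only assembles and sanity-checks the request body shown above.

```python
INLINE_PROVIDERS = {"s3", "blob", "gcs", "oss"}

def job_payload_with_inline_storage(url, storage, **job_options):
    """Build a /jobs request body carrying inline storage credentials."""
    if storage.get("provider") not in INLINE_PROVIDERS:
        raise ValueError(f"unsupported provider: {storage.get('provider')!r}")
    return {"url": url, **job_options, "storage": storage}
```

Serialize the returned dict as JSON and POST it to /jobs exactly as in the curl example.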

Security Best Practices

Create dedicated credentials with only the permissions needed:
  • S3: Custom IAM policy with specific bucket access
  • Azure: SAS token with limited scope
  • GCS: Service account with only Storage Object Admin on specific bucket
Set up credential rotation:
  • AWS: Use IAM Access Analyzer
  • Azure: Set SAS token expiration
  • GCS: Rotate service account keys
Monitor access to your storage:
  • S3: Enable Server Access Logging
  • Azure: Enable Storage Analytics
  • GCS: Enable Cloud Audit Logs