Storage
Provider-agnostic file storage. Upload files, generate signed URLs, and manage assets across your application.
Read this when you need to upload files, serve downloads, or manage user assets.
Useful for developers building features that handle images, documents, or any file uploads.
Overview
The storage module provides a unified interface for file operations that works across different storage providers. Write your code once with Supabase Storage, then swap to S3 or Azure later by changing an environment variable.
All storage code lives in lib/storage/ with detailed documentation in lib/storage/LIB-STORAGE.md.
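The provider-agnostic idea can be sketched as a small interface with swappable backends. The interface and in-memory backend below are illustrative assumptions for this sketch, not the module's actual types; see lib/storage/LIB-STORAGE.md for the real API.

```typescript
// Hypothetical minimal adapter interface; the module's real types may differ.
interface StorageAdapter {
  write(path: string, data: Uint8Array, opts?: { contentType?: string }): Promise<void>
  exists(path: string): Promise<boolean>
  delete(path: string): Promise<void>
}

// An in-memory backend of the kind STORAGE_PROVIDER=memory suggests,
// handy for unit tests: no network, no credentials.
class MemoryStorage implements StorageAdapter {
  private files = new Map<string, { data: Uint8Array; contentType?: string }>()

  async write(path: string, data: Uint8Array, opts?: { contentType?: string }) {
    this.files.set(path, { data, contentType: opts?.contentType })
  }

  async exists(path: string) {
    return this.files.has(path)
  }

  async delete(path: string) {
    this.files.delete(path)
  }
}
```

Because every backend satisfies the same interface, application code written against it does not change when the provider does.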
Key Features
What the storage module provides:
File Upload
Upload files from buffers, strings, or blobs. Handles content types and metadata automatically.
Signed URLs
Secure access to private files with time-limited signed URLs that expire automatically.
Validation
Validate uploads for file size, type, and format before storing.
Quick Start
Upload a file and get a signed URL:
import { createStorageFromServer } from "@/lib/storage/server"
import { validateFile, MIME_TYPES } from "@/lib/storage"
export async function uploadAvatar(userId: string, file: File) {
// Validate the file
validateFile({ size: file.size, type: file.type }, {
maxSize: 2 * 1024 * 1024, // 2MB
allowedTypes: MIME_TYPES.images,
})
// Upload to storage
const storage = await createStorageFromServer()
const path = `${userId}/avatar.png`
const buffer = Buffer.from(await file.arrayBuffer())
await storage.write(path, buffer, { contentType: file.type })
// Get a signed URL for access
const url = await storage.getSignedUrl(path)
return { path, url }
}
Buckets
Files are organized into buckets. Catalyst provides two platform buckets out of the box:
uploads (private): For user documents, sensitive files, and private data. Requires signed URLs for access.
Use for: User avatars, documents, private attachments
uploads-public (public): For shared assets and public downloads. Files are accessible via direct URL.
Use for: Shared images, public downloads, marketing assets
Naming convention: Public buckets always end with -public suffix for clarity. Module-specific buckets use the pattern {module}-{purpose}.
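The naming convention above can be expressed as two small helpers. These are hypothetical illustrations of the convention only, not functions exported by the module:

```typescript
// Hypothetical: build a module bucket name following {module}-{purpose}.
function moduleBucketName(module: string, purpose: string): string {
  return `${module}-${purpose}`
}

// Hypothetical: public buckets are identified by the -public suffix.
function isPublicBucket(bucket: string): boolean {
  return bucket.endsWith("-public")
}
```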
File Paths
Platform buckets require user-prefixed paths for security (RLS enforces this):
// Required format for platform buckets
{user-id}/{folder}/{filename}
// Examples
abc123/avatar.png
abc123/documents/report.pdf
abc123/exports/data.csv
This ensures users can only access their own files. The storage RLS policies check that the first folder in the path matches the authenticated user's ID.
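The ownership check the RLS policies perform can be mirrored in application code before a request ever reaches storage. The helpers below are a hypothetical sketch of that first-segment check, not part of the module:

```typescript
// Hypothetical: build a user-prefixed path in the required format.
function userPath(userId: string, ...segments: string[]): string {
  return [userId, ...segments].join("/")
}

// Mirrors the RLS rule: the first path segment must equal the user's ID.
function ownsPath(userId: string, path: string): boolean {
  return path.split("/")[0] === userId
}
```

Rejecting mismatched paths early gives a clearer error than a storage-layer RLS denial.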
Common Operations
Upload a file
const storage = await createStorageFromServer()
await storage.write(`${userId}/doc.pdf`, buffer, {
contentType: "application/pdf",
})
Get a signed URL (private files)
const url = await storage.getSignedUrl(`${userId}/doc.pdf`, {
expiresIn: 3600, // 1 hour
})
Get a public URL (public bucket)
const storage = await createStorageFromServer({ bucket: "uploads-public" })
const url = storage.getPublicUrl(`${userId}/image.png`)
Check if file exists
if (await storage.exists(`${userId}/avatar.png`)) {
// File exists
}
Delete a file
await storage.delete(`${userId}/old-file.pdf`)
List files in a folder
const { files } = await storage.list(`${userId}/documents`)
File Validation
Always validate uploads before storing them:
import { validateFile, MIME_TYPES } from "@/lib/storage"
// Validate before upload (throws if invalid)
validateFile({ size: file.size, type: file.type }, {
maxSize: 5 * 1024 * 1024, // 5MB max
allowedTypes: MIME_TYPES.images, // Only images
})
// Available MIME type groups
MIME_TYPES.images // JPEG, PNG, GIF, WebP, SVG
MIME_TYPES.documents // PDF, text files, Word, Excel
MIME_TYPES.videos // MP4, WebM
MIME_TYPES.audio // MP3, WAV
MIME_TYPES.archives // ZIP, TAR
Helper Functions
sanitizeFilename: Clean a filename for safe storage
sanitizeFilename("My Doc (1).pdf") // "my-doc-1.pdf"
generateUniqueFilename: Generate a unique filename with timestamp
generateUniqueFilename("avatar.png") // "1705678901234-a1b2c3d4.png"
formatFileSize: Format bytes for display
formatFileSize(1536000) // "1.5 MB"
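To make the formatFileSize behavior concrete, here is a rough sketch of the kind of logic involved, under the assumption that the library uses 1024-based units rounded to one decimal place. This is illustrative only; the real implementation lives in lib/storage and may differ:

```typescript
// Illustrative sketch of 1024-based size formatting; not the library's code.
function formatFileSizeSketch(bytes: number): string {
  const units = ["B", "KB", "MB", "GB", "TB"]
  let value = bytes
  let i = 0
  // Step up through units until the value fits below 1024.
  while (value >= 1024 && i < units.length - 1) {
    value /= 1024
    i++
  }
  // Round to one decimal; whole numbers render without a trailing ".0".
  const rounded = Math.round(value * 10) / 10
  return `${rounded} ${units[i]}`
}
```

With these assumptions, 1536000 bytes is about 1.46 MB, which rounds to the "1.5 MB" shown above.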
Setup
Storage uses Supabase by default. Configure your environment:
# .env.local
STORAGE_PROVIDER=supabase        # or "s3", "memory"
STORAGE_BUCKET=uploads           # default bucket
STORAGE_MAX_FILE_SIZE=10485760   # 10MB in bytes
STORAGE_SIGNED_URL_EXPIRY=3600   # 1 hour
Configure Supabase
Ensure your Supabase credentials are set in .env.local.
Run migrations
Run supabase db push to create the platform buckets and RLS policies.
Start using storage
Import from @/lib/storage/server and start uploading files.
Module Buckets
Modules can create their own storage buckets via SQL migrations. This keeps module data isolated and allows for custom RLS policies.
Example: Feedback Module Bucket
-- modules/feedback/supabase/migrations/..._storage.sql
INSERT INTO storage.buckets (id, name, public)
VALUES ('feedback-attachments', 'feedback-attachments', false)
ON CONFLICT (id) DO NOTHING;
-- Add RLS policies for the bucket...
See lib/storage/LIB-STORAGE.md for the full pattern on adding storage to modules.
For AI Agents
Key rules:
- Always use createStorageFromServer() for server-side uploads
- Always prefix paths with user ID: `${userId}/filename`
- Always validate files before uploading with validateFile()
- Use getSignedUrl() for private files, getPublicUrl() for public
- Public buckets end with the -public suffix
- Read lib/storage/LIB-STORAGE.md for the full API reference