Storing

Documentation

Everything you need to optimize your cloud image assets with Storing.

Getting Started

Storing is a cloud asset optimization service that automatically classifies, compresses, and converts your images to next-gen formats. It uses a 3-tier classification system (icon, standard, hero) to apply the right compression strategy for each image, achieving typical savings of 40-70% without visible quality loss.

Three ways to use Storing

Web Dashboard

Visual interface at use.storing.app for connecting buckets, running audits, and monitoring jobs.

CLI

Command-line tool for local compression, batch processing, and scripting into CI/CD pipelines.

MCP Server

Model Context Protocol server for AI coding agents like Claude Code, Cursor, and Windsurf.

Quick start: compress a file in 30 seconds

bash
npm install -g @supertype.ai/storing
storing login --email [email protected] --password yourpassword
storing compress hero-banner.png

That's it. Storing will classify the image, pick the optimal format (AVIF by default), and save the compressed file alongside the original. Requires authentication — sign up for free to get started.

CLI Reference

The storing CLI compresses images through Storing's optimization pipeline and manages remote bucket operations via the Storing API. All operations require authentication.

Installation

bash
npm install -g @supertype.ai/storing

Requires Node.js 18+. After installing, the storing command is available globally.

Authentication

An account is required for all operations. Sign up for free at use.storing.app.

bash
# Create an account
storing register --email [email protected] --password yourpassword --name "Your Name"

# Log in (saves token to ~/.storing/credentials.json)
storing login --email [email protected] --password yourpassword

# Log out
storing logout

Compression

The compress command sends images to Storing's cloud optimization pipeline. Each image is classified into a tier (icon, standard, hero) and compressed with tier-appropriate quality settings. The compressed file is downloaded back to your machine automatically. Requires authentication.

bash
# Compress a single file
storing compress image.png

# Specify output format
storing compress image.png --format avif
storing compress image.png --format webp
storing compress image.png --format jpeg

# Compression profiles
storing compress image.png --profile conservative  # higher quality
storing compress image.png --profile balanced       # default
storing compress image.png --profile aggressive     # smaller files

# Size-based compression
storing compress image.png --max-size 200KB
storing compress image.png --max-size 100KB --min-quality 70

# Compress a directory
storing compress ./images/ --recursive
storing compress ./images/ --recursive --format avif --output ./optimized/

# JSON output (for scripting)
storing compress image.png --json

Options

| Flag | Default | Description |
| --- | --- | --- |
| `-f, --format` | `auto` | Output format: `original`, `avif`, `webp`, `jxl`, `jpeg`, `png`, `auto`, `multi` |
| `-p, --profile` | `balanced` | Compression profile: `conservative`, `balanced`, `aggressive` |
| `--max-size` | | Target maximum file size (e.g. `200KB`, `1.5MB`) |
| `--min-quality` | | Minimum SSIMULACRA2 quality score floor |
| `-o, --output` | same as input | Output directory |
| `-r, --recursive` | `false` | Process directories recursively |
| `--json` | `false` | Machine-readable JSON output |
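The `--json` flag makes `compress` easy to wire into scripts. The exact output schema is not documented here, so the field names below are illustrative assumptions; a small wrapper that computes savings from one result might look like:

```python
import json

# Hypothetical output from `storing compress image.png --json`.
# Field names here are illustrative assumptions, not the documented schema.
sample = """
{
  "file": "image.png",
  "tier": "standard",
  "format": "avif",
  "inputBytes": 842130,
  "outputBytes": 198422
}
"""

def savings_percent(result: dict) -> float:
    """Percentage saved, computed from input and output byte counts."""
    return round(100 * (1 - result["outputBytes"] / result["inputBytes"]), 1)

result = json.loads(sample)
print(f"{result['file']}: {savings_percent(result)}% smaller")  # 76.4% smaller
```

In a real pipeline you would feed the actual `--json` output into a script like this instead of the inline sample.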

Bucket Operations

These commands interact with the Storing API and require authentication via storing login.

Connect a bucket

bash
# Connect an S3 bucket (uses CloudFormation for IAM role)
storing connect s3://my-bucket --role-arn arn:aws:iam::123456:role/Storing-ReadOnly

# Connect a GCS bucket (opens OAuth in browser)
storing connect gs://my-bucket

For S3, running storing connect s3://my-bucket without a --role-arn prints a CloudFormation Quick Create link to set up the read-only IAM role.

Audit

bash
# Audit a connected bucket
storing audit s3://my-bucket

# Audit a local directory
storing audit ./public/images

Audits scan all images, classify them, and project potential savings without modifying any files.

Optimize

bash
# Optimize with defaults (balanced profile, auto format)
storing optimize s3://my-bucket

# Optimize with specific settings
storing optimize s3://my-bucket --profile aggressive --format avif

Starts an optimization job on the server. The CLI polls for progress and shows a live progress bar with ETA.

Status & Apply

bash
# Check all jobs
storing status

# Check a specific job
storing status <job-id>

# Apply optimized files (shows the aws s3 cp command)
storing apply <job-id>

# Preview without executing
storing apply <job-id> --dry-run

History

bash
# View all past jobs
storing history

# Filter by bucket
storing history s3://my-bucket

Environment Variables

| Variable | Description |
| --- | --- |
| `STORING_API_KEY` | API key for non-interactive authentication (MCP, CI/CD, scripts). Generate from the Settings page. |
| `STORING_TOKEN` | JWT token for authentication. Alternative to an API key; use the token from `storing login`. |

MCP Integration

Storing provides a Model Context Protocol (MCP) server that lets AI coding agents compress images, audit directories, and manage bucket optimizations through natural language. Works with Claude Code, Cursor, Windsurf, and any MCP-compatible client.

Setup

Add the MCP server to your project's .mcp.json or global Claude config:

json
{
  "mcpServers": {
    "storing": {
      "command": "npx",
      "args": ["-y", "@supertype.ai/storing-mcp"],
      "env": {
        "STORING_API_KEY": "sk-your-api-key-here"
      }
    }
  }
}

The MCP server requires an API key for authentication. Generate one from the Settings page and set it as the STORING_API_KEY environment variable. You can also use STORING_TOKEN with a JWT token from storing login.

Available Tools

| Tool | Description | Requires API Key |
| --- | --- | --- |
| `storing_compress` | Compress or convert local image files. Supports AVIF, WebP, JXL, JPEG, PNG. | Yes |
| `storing_audit` | Audit a local directory or bucket for optimization opportunities. | Yes |
| `storing_connect` | Connect a cloud storage bucket (S3 or GCS). | Yes |
| `storing_optimize` | Start an optimization job on a connected bucket. | Yes |
| `storing_status` | Check the status of optimization jobs. | Yes |
| `storing_reclassify` | Override automatic tier classifications for specific files. | Yes |
| `storing_rules` | Manage classification rules (add, list, remove glob-based rules). | Yes |
| `storing_apply` | Get the CLI command to apply optimization results. | Yes |
| `storing_history` | View optimization history for a bucket. | Yes |

storing_compress parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `files` | `string[]` | Yes | Array of local file paths to compress |
| `format` | `string` | No | Output format (default: `auto`) |
| `profile` | `string` | No | Compression profile (default: `balanced`) |
| `max_size` | `string` | No | Maximum output size, e.g. `"200KB"` |
| `min_quality` | `number` | No | Minimum SSIMULACRA2 quality score |
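For illustration, an agent invoking storing_compress with these parameters might pass arguments like the following (the file path is hypothetical):

json
{
  "files": ["./public/images/hero-banner.png"],
  "format": "avif",
  "profile": "balanced",
  "max_size": "200KB"
}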

storing_audit parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `path` | `string` | Yes | Local directory path or bucket URI (e.g. `s3://my-bucket`) |

Example AI Agent Interaction

Here's how a typical interaction works when an AI agent has the Storing MCP server configured:

1. User: "The images in public/images are too large. Can you optimize them?"
2. Agent: runs storing_audit with path: "./public/images" and discovers 24 images totaling 18.3 MB across 3 tiers (2 hero, 15 standard, 7 icon).
3. Agent: runs storing_compress with files: ["./public/images/hero-banner.png", ...] and format: "avif", compressing all 24 files. Total: 18.3 MB to 5.1 MB (72% saved).
4. Agent: reports results to the user with a breakdown by tier and suggestions for format choices.

Dashboard Guide

The web dashboard at use.storing.app provides a visual interface for all bucket operations.

1. Sign up: Create an account with email/password, or sign in with GitHub or Google OAuth. GitHub-only users should use the GitHub sign-in button.
2. Connect a bucket: For S3, click Connect, enter your bucket name, follow the CloudFormation link to create a read-only IAM role, then paste the Role ARN from CloudFormation Outputs. For GCS, click Connect and authorize via Google OAuth; your buckets are listed automatically.
3. Run an audit: Click Audit on a connected bucket. Storing scans all images, classifies them into tiers, and projects savings. Audits are free and non-destructive.
4. Start optimization: Select a compression profile (conservative, balanced, aggressive) and output format. Click Optimize to start the job. Files are processed in the background.
5. Monitor jobs: Track progress on the Jobs page. See real-time file counts, percentage complete, and estimated time remaining. You'll receive an email when the job finishes.
6. Apply results: Once a job completes, optimized files are staged in a temporary S3 bucket. Copy the provided CLI command (aws s3 cp) to apply the optimized files to your bucket.

Per-Bucket Optimization Settings

Each connected bucket can have its own default optimization settings. Configure these from the bucket detail page to avoid re-entering options every time you start a job. When you create an optimization job, the bucket's defaults are pre-filled automatically.

| Setting | Description |
| --- | --- |
| Max widths per tier | Maximum resize dimensions for each tier (icon, standard, hero). |
| Size targets per tier | Target maximum file size for each tier after compression. |
| Default format | Preferred output format (e.g. `avif`, `webp`, `auto`). |
| Default profile | Compression profile (`conservative`, `balanced`, `aggressive`). |
| Apply mode | How optimized files are delivered: `stage`, `writeback`, or `subfolder`. |

Settings are stored as part of the bucket configuration and can also be managed via the API:

bash
# Get current bucket optimization defaults
curl -H "Authorization: Bearer <token>" \
  https://use.storing.app/api/buckets/:id/settings

# Update bucket optimization defaults
curl -X PATCH -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"format":"avif","profile":"aggressive","applyMode":"writeback"}' \
  https://use.storing.app/api/buckets/:id/settings

Write-Back Modes

Storing supports three modes for delivering optimized files, controlled by the applyMode setting:

| Mode | Behavior |
| --- | --- |
| `stage` | Default. Optimized files go to a temporary staging bucket. Download via ZIP from the dashboard or copy with `aws s3 cp`. |
| `writeback` | Optimized files are written back to the source bucket in-place (same path, new file extension). |
| `subfolder` | Optimized files are written to an `optimized/` subfolder within the source bucket. |
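As an illustration of the table above, the three modes differ only in where the optimized object lands. The sketch below maps a source key to its output location; the staging-bucket placeholder and the exact extension-rewrite behavior are assumptions, not documented behavior:

```python
def output_key(mode: str, source_key: str, new_ext: str = "avif") -> str:
    """Where an optimized copy of source_key lands, per apply mode.
    Staging-bucket naming and extension rewriting are assumptions."""
    stem = source_key.rsplit(".", 1)[0]
    if mode == "writeback":
        # Same path in the source bucket, new file extension
        return f"{stem}.{new_ext}"
    if mode == "subfolder":
        # Under an optimized/ prefix in the source bucket
        return f"optimized/{stem}.{new_ext}"
    # stage (default): object goes to a temporary staging bucket instead
    return f"<staging-bucket>/{stem}.{new_ext}"

print(output_key("writeback", "images/hero.png"))  # images/hero.avif
print(output_key("subfolder", "images/hero.png"))  # optimized/images/hero.avif
```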

Write-back and subfolder modes require additional permissions on the source bucket. For S3, deploy the write-back IAM role using the CloudFormation template:

bash
# CloudFormation template for write-back IAM role
# Adds s3:PutObject and s3:DeleteObject to the Storing role
aws cloudformation create-stack \
  --stack-name Storing-WriteBack \
  --template-url https://storing-staging.s3.ap-southeast-1.amazonaws.com/templates/write-back-role.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters ParameterKey=BucketName,ParameterValue=your-bucket-name

For GCS, you will be prompted to re-authenticate with write scope (full_control) when enabling write-back or subfolder mode.

Upload to Bucket

Upload images directly to a connected bucket through Storing. Files are automatically optimized using the bucket's configured defaults before being written to the bucket.

From the bucket detail page, use the drag-and-drop upload zone to upload files. You can also use the API directly:

bash
# Upload and optimize a file to a connected bucket
curl -X POST -H "Authorization: Bearer <token>" \
  -F "[email protected]" \
  -F "destinationPath=images/hero/" \
  https://use.storing.app/api/buckets/:id/upload

The destinationPath parameter is optional; if omitted, the file is placed at the bucket root. The file is compressed using the bucket's default format, profile, and tier settings before upload.

Classification System

Storing uses a 3-tier classification system to apply the right compression strategy for each image. Classification is based on dimensions and file size.

| Tier | Criteria | Strategy | SSIMULACRA2 Target |
| --- | --- | --- | --- |
| Icon | ≤ 512px, ≤ 100KB | Aggressive compression. Small UI elements where size matters most. | 60 |
| Standard | 512–1920px | Balanced compression. Typical content images and photos. | 70 |
| Hero | ≥ 1920px | Conservative, quality-first. Large hero images, banners, backgrounds. | 80 |
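The tier boundaries above can be expressed as a small function. This is an illustration of the published criteria, not Storing's actual implementation; the handling of sizes exactly at the 512px/1920px boundaries, and of small-but-heavy files that miss the icon size cap, is an assumption:

```python
def classify(width: int, height: int, size_bytes: int) -> str:
    """Sketch of the 3-tier classification from the table above.
    Boundary and fall-through handling are assumptions."""
    longest = max(width, height)
    if longest <= 512 and size_bytes <= 100 * 1024:
        return "icon"
    if longest >= 1920:
        return "hero"
    return "standard"

print(classify(64, 64, 4_096))        # small UI asset -> icon
print(classify(1200, 800, 250_000))   # typical content image -> standard
print(classify(2560, 1440, 900_000))  # large banner -> hero
```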

Confidence scoring

Each classification includes a confidence score (0-100%). Images that fall near tier boundaries receive lower confidence scores. Low-confidence files are flagged in audit reports so you can review and reclassify them if needed.

Reclassification

You can override automatic classifications using the storing_reclassify MCP tool or by setting up glob-based rules with storing_rules. For example, you might force all files under /icons/ to the icon tier regardless of dimensions.

Profile offsets

Compression profiles shift the SSIMULACRA2 target up or down from the base tier value:

| Profile | Offset | Effect |
| --- | --- | --- |
| Conservative | +5 | Higher quality, larger files |
| Balanced | 0 | Default; optimal balance |
| Aggressive | -5 | Smaller files, slightly lower quality |
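Combining a tier's base target with a profile offset is simple arithmetic. For example, a hero image (base 80) compressed with the aggressive profile targets a SSIMULACRA2 score of 75:

```python
# Base targets and offsets taken from the two tables above.
TIER_TARGETS = {"icon": 60, "standard": 70, "hero": 80}
PROFILE_OFFSETS = {"conservative": +5, "balanced": 0, "aggressive": -5}

def quality_target(tier: str, profile: str = "balanced") -> int:
    """Effective SSIMULACRA2 target = tier base + profile offset."""
    return TIER_TARGETS[tier] + PROFILE_OFFSETS[profile]

print(quality_target("hero", "aggressive"))    # 80 - 5 = 75
print(quality_target("icon", "conservative"))  # 60 + 5 = 65
```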

Per-tier optimization defaults

Each tier has default resize dimensions and maximum file size targets that are applied automatically during compression. These defaults ensure images are appropriately sized for their role without manual configuration.

| Tier | Max Width | Max File Size |
| --- | --- | --- |
| Icon | 256px | 30 KB |
| Standard | 1200px | 200 KB |
| Hero | 1920px | 500 KB |

These defaults can be overridden per-bucket via the bucket optimization settings. Images smaller than the max width are not upscaled.

Output Formats

| Format | Notes |
| --- | --- |
| AVIF | Primary next-gen format. Typically ~50% smaller than JPEG at equivalent quality. Broad and growing browser support. |
| WebP | Safe next-gen choice. Supported by all modern browsers. Good compression, slightly behind AVIF. |
| JPEG (MozJPEG) | Universal compatibility. Uses the MozJPEG encoder for better compression than standard JPEG. |
| PNG | Lossless format. Best for images requiring transparency or pixel-perfect reproduction. |
| JPEG XL | Opt-in format. Supports lossless JPEG recompression (re-compress JPEG with zero quality loss). Limited browser support. |

Format modes

When specifying the --format flag (CLI) or format parameter (MCP), you can use these modes:

| Mode | Behavior |
| --- | --- |
| `original` | Keep the original format, just re-compress with better settings. |
| `avif` | Convert all images to AVIF. |
| `webp` | Convert all images to WebP. |
| `jxl` | Convert all images to JPEG XL. |
| `auto` | Automatically pick the best format per image (default). Usually AVIF. |
| `multi` | Generate multiple formats per image for use with `<picture>` srcset. |
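The multi mode is intended for responsive delivery with the `<picture>` element, where the browser chooses the first source format it supports. A typical markup pattern (file names are illustrative, not Storing's output naming):

html
<picture>
  <source srcset="hero-banner.avif" type="image/avif">
  <source srcset="hero-banner.webp" type="image/webp">
  <img src="hero-banner.jpg" alt="Hero banner" width="1920" height="800">
</picture>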

Works with Any Cloud

Storing's compression works with images from any source — AWS S3, Google Cloud Storage, Azure Blob Storage, Cloudflare R2, DigitalOcean Spaces, Backblaze B2, or your local file system. Just download your images, compress with Storing, and re-upload.

For S3 and GCS, Storing offers a fully automated pipeline (connect → audit → optimize → deliver). For other providers, use the CLI or MCP with your existing tools:

Azure Blob Storage

bash
# Download
az storage blob download-batch -d ./images -s my-container --account-name myaccount

# Compress
storing compress ./images/ -r --format avif

# Re-upload
az storage blob upload-batch -s ./images -d my-container --account-name myaccount --overwrite

Cloudflare R2

bash
# Download (R2 is S3-compatible)
aws s3 sync s3://my-bucket ./images --endpoint-url https://<account-id>.r2.cloudflarestorage.com

# Compress
storing compress ./images/ -r --format webp --profile balanced

# Re-upload
aws s3 sync ./images s3://my-bucket --endpoint-url https://<account-id>.r2.cloudflarestorage.com

DigitalOcean Spaces

bash
# Download (Spaces is S3-compatible)
aws s3 sync s3://my-space ./images --endpoint-url https://nyc3.digitaloceanspaces.com

# Compress
storing compress ./images/ -r --format avif

# Re-upload
aws s3 sync ./images s3://my-space --endpoint-url https://nyc3.digitaloceanspaces.com

Any provider with rclone

bash
# rclone works with 40+ cloud providers
rclone sync remote:my-bucket ./images

# Compress
storing compress ./images/ -r

# Re-upload
rclone sync ./images remote:my-bucket

With AI agents (MCP)

AI coding agents with the Storing MCP server can automate this workflow for any cloud. Just ask:

“Download all images from our Azure container, optimize them with Storing, and re-upload the compressed versions.”

The agent handles the download/compress/upload loop automatically using the appropriate cloud CLI + storing_compress.

Authentication

Storing supports two authentication methods. Choose based on your use case:

| Method | Best For | How to Get | Header Format |
| --- | --- | --- | --- |
| JWT Token | CLI, web dashboard, interactive use | `storing login` or web sign-in | `Authorization: Bearer <token>` |
| API Key | MCP servers, CI/CD, scripts, non-interactive use | Settings page → Generate Key | `Authorization: ApiKey sk-...` |

When to use API keys

Use API keys when you can't do an interactive login — MCP servers, CI/CD pipelines, cron jobs, or any script that runs unattended. API keys don't expire but can be deleted from the Settings page. Each key tracks its own usage count.

When to use JWT tokens

Use JWT tokens for interactive CLI sessions and the web dashboard. Tokens expire after 7 days. Run storing login to get a fresh token.

Generating an API key

bash
# Option 1: Web dashboard
# Go to Settings → API Keys → Generate Key
# Copy the key immediately — it's only shown once

# Option 2: CLI
storing login --email [email protected] --password yourpass
# Then use the Settings page to generate an API key

# Use the key in MCP config, scripts, or CI/CD:
STORING_API_KEY=sk-your-key-here storing compress image.png

API Reference

The Storing API is a REST service running at https://use.storing.app. The CLI and MCP server are thin clients over this API.

Auth

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | `/api/auth/register` | Create a new account |
| POST | `/api/auth/login` | Log in, receive JWT token |
| POST | `/api/auth/apikey` | Generate an API key |
| GET | `/api/auth/apikeys` | List your API keys (preview + usage count) |
| DELETE | `/api/auth/apikeys/:id` | Delete an API key |
| GET | `/api/auth/github` | Start GitHub OAuth flow |
| GET | `/api/auth/google` | Start Google OAuth flow |

Compression

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | `/api/compress` | Compress a single image (multipart upload) |
| POST | `/api/compress/audit` | Classify and estimate savings for uploaded images |

Buckets & Jobs

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | `/api/buckets` | List connected buckets |
| POST | `/api/buckets/connect` | Connect a new bucket |
| DELETE | `/api/buckets/:id` | Disconnect a bucket |
| GET | `/api/buckets/:id/settings` | Get bucket optimization defaults |
| PATCH | `/api/buckets/:id/settings` | Update bucket optimization defaults |
| POST | `/api/buckets/:id/upload` | Upload and optimize a file to a bucket |
| GET | `/api/jobs` | List optimization jobs |
| POST | `/api/jobs` | Create a new optimization job |
| GET | `/api/jobs/:id` | Get job details + results |
| GET | `/api/jobs/:id/progress` | Get job progress (for polling) |
| GET | `/api/jobs/:id/stream` | SSE stream for real-time job progress |
| POST | `/api/audit` | Start a bucket audit |
| GET | `/api/audit/:id` | Get audit report |
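The progress endpoint suits a simple polling loop. Below is a minimal Python sketch; the response fields (`processed`, `total`) are assumptions rather than the documented schema, and the final lines use a mocked payload so the sketch runs without network access:

```python
import json
import urllib.request

API = "https://use.storing.app"

def get_progress(job_id: str, token: str) -> dict:
    """Fetch /api/jobs/:id/progress once. Requires a JWT or API key."""
    req = urllib.request.Request(
        f"{API}/api/jobs/{job_id}/progress",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def is_done(progress: dict) -> bool:
    """Assumed payload fields: 'processed' and 'total' file counts."""
    return progress.get("processed", 0) >= progress.get("total", 1)

# Mocked payload instead of a live call, for illustration:
sample = {"processed": 24, "total": 24}
print("done" if is_done(sample) else "running")  # done
```

A real client would call `get_progress` in a loop with a delay (or consume the `/stream` SSE endpoint) until `is_done` returns true.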

Usage & Monitoring

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | `/api/usage` | Your usage history (paginated) |
| GET | `/api/usage/summary` | Aggregated usage stats (totals, by format/tier, daily) |
| POST | `/api/monitoring/:projectId/enable` | Enable weekly monitoring |
| GET | `/api/monitoring/:projectId` | Get monitoring status |
| GET | `/api/health` | Health check (no auth required) |

All endpoints except /api/health, /api/auth/register, and /api/auth/login require authentication via Authorization: Bearer <token> or Authorization: ApiKey sk-....

Full API documentation coming soon.

Until then, the CLI and MCP server cover all functionality and are the recommended way to interact with Storing programmatically.