@firebase-backup-platform/storage-connectors
The storage-connectors package provides a unified interface for multi-cloud storage operations. It supports AWS S3, Google Cloud Storage, and DigitalOcean Spaces with a consistent API.
Installation
npm install @firebase-backup-platform/storage-connectors
# or
yarn add @firebase-backup-platform/storage-connectors
Overview
This package abstracts away cloud provider differences, allowing you to use the same code regardless of your storage backend.
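For example, a backup routine can stay provider-agnostic by depending only on the connector interface (a minimal sketch; writeBackup is a hypothetical helper name):

import type { StorageConnector } from '@firebase-backup-platform/storage-connectors';

// Works unchanged against S3, GCS, or Spaces connectors
async function writeBackup(connector: StorageConnector, id: string, payload: object) {
  await connector.upload({
    path: `backups/${id}.json`,
    data: Buffer.from(JSON.stringify(payload)),
    contentType: 'application/json'
  });
}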
Quick Start
Create a Connector
import { createStorageConnector } from '@firebase-backup-platform/storage-connectors';
// AWS S3
const s3 = createStorageConnector({
type: 's3',
bucket: 'my-backup-bucket',
region: 'us-east-1',
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
}
});
// Google Cloud Storage
const gcs = createStorageConnector({
type: 'gcs',
bucket: 'my-backup-bucket',
credentials: require('./service-account.json')
});
// DigitalOcean Spaces
const spaces = createStorageConnector({
type: 'spaces',
bucket: 'my-backup-space',
region: 'nyc3',
credentials: {
accessKeyId: process.env.DO_SPACES_KEY,
secretAccessKey: process.env.DO_SPACES_SECRET
}
});
Basic Operations
// Upload
await connector.upload({
path: 'backups/2024-01-15/data.json',
data: Buffer.from(JSON.stringify(backupData)),
contentType: 'application/json'
});
// Download
const data = await connector.download({
path: 'backups/2024-01-15/data.json'
});
// List files
const files = await connector.list({
prefix: 'backups/2024-01/'
});
// Delete
await connector.delete({
path: 'backups/2024-01-15/data.json'
});
API Reference
createStorageConnector(config)
Factory function to create storage connectors.
function createStorageConnector(config: StorageConfig): StorageConnector
StorageConfig:
interface StorageConfig {
type: 's3' | 'gcs' | 'spaces';
bucket: string;
region?: string;
credentials: CredentialConfig;
prefix?: string;
endpoint?: string;
}
StorageConnector Interface
All connectors implement this interface:
interface StorageConnector {
// Core operations
upload(options: UploadOptions): Promise<UploadResult>;
download(options: DownloadOptions): Promise<Buffer>;
delete(options: DeleteOptions): Promise<void>;
deleteMany(options: DeleteManyOptions): Promise<void>;
list(options: ListOptions): Promise<ListResult>;
exists(path: string): Promise<boolean>;
copy(options: CopyOptions): Promise<void>;
// Metadata
getMetadata(path: string): Promise<ObjectMetadata>;
setMetadata(path: string, metadata: Metadata): Promise<void>;
// Signed URLs
getSignedUrl(options: SignedUrlOptions): Promise<string>;
// Multipart uploads
createMultipartUpload(options: MultipartOptions): Promise<MultipartUpload>;
// Stream operations
createReadStream(path: string): Readable;
createWriteStream(path: string): Writable;
// Connection testing
testConnection(): Promise<ConnectionTestResult>;
}
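For example, exists can guard against re-uploading a backup that is already present (a minimal sketch):

const path = 'backups/2024-01-15/data.json';
if (await connector.exists(path)) {
  console.log('Backup already present, skipping upload');
} else {
  await connector.upload({ path, data: Buffer.from('{}') });
}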
Operations
Upload
Upload files to storage:
// Basic upload
const result = await connector.upload({
path: 'backups/data.json',
data: buffer
});
// With options
const result = await connector.upload({
path: 'backups/data.json',
data: buffer,
contentType: 'application/json',
metadata: {
'x-backup-id': 'bkp_123',
'x-project-id': 'proj_456'
},
encryption: {
algorithm: 'AES256' // Server-side encryption
},
acl: 'private',
cacheControl: 'no-cache'
});
console.log('Uploaded:', result.path);
console.log('ETag:', result.etag);
console.log('Size:', result.size);
UploadOptions:
| Property | Type | Required | Description |
|---|---|---|---|
| path | string | Yes | Destination path |
| data | Buffer \| Readable | Yes | File content |
| contentType | string | No | MIME type |
| metadata | object | No | Custom metadata |
| encryption | object | No | Server-side encryption |
| acl | string | No | Access control |
| cacheControl | string | No | Cache headers |
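Because data also accepts a Readable, an upload can stream straight from disk without buffering the whole file in memory (a minimal sketch):

import fs from 'fs';

await connector.upload({
  path: 'backups/large-export.json',
  data: fs.createReadStream('./export.json'), // Streamed, not buffered
  contentType: 'application/json'
});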
Download
Download files from storage:
// Basic download
const data = await connector.download({
path: 'backups/data.json'
});
// With range (partial download)
const partialData = await connector.download({
path: 'backups/large-file.json',
range: { start: 0, end: 1024 * 1024 } // First 1MB
});
// Parse JSON
const backupData = JSON.parse(data.toString());
Delete
Delete files:
// Delete single file
await connector.delete({
path: 'backups/old-backup.json'
});
// Delete multiple files
await connector.deleteMany({
paths: [
'backups/file1.json',
'backups/file2.json',
'backups/file3.json'
]
});
List
List files in storage:
// List with prefix
const result = await connector.list({
prefix: 'backups/2024-01/'
});
for (const file of result.objects) {
console.log(`${file.path} - ${file.size} bytes`);
}
// With pagination
let continuationToken = null;
do {
const result = await connector.list({
prefix: 'backups/',
maxKeys: 1000,
continuationToken
});
for (const file of result.objects) {
console.log(file.path);
}
continuationToken = result.nextContinuationToken;
} while (continuationToken);
// With delimiter (for folder-like listing)
const folderResult = await connector.list({
  prefix: 'backups/',
  delimiter: '/'
});
console.log('Folders:', folderResult.prefixes);
console.log('Files:', folderResult.objects);
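To walk an arbitrarily large prefix, the pagination loop above can be wrapped in an async generator (a sketch built only on the list API shown here; listAllObjects is a hypothetical helper name):

import type { StorageConnector } from '@firebase-backup-platform/storage-connectors';

async function* listAllObjects(connector: StorageConnector, prefix: string) {
  let continuationToken: string | undefined;
  do {
    const page = await connector.list({ prefix, maxKeys: 1000, continuationToken });
    yield* page.objects;
    continuationToken = page.nextContinuationToken;
  } while (continuationToken);
}

for await (const file of listAllObjects(connector, 'backups/')) {
  console.log(file.path);
}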
ListResult:
interface ListResult {
objects: ObjectInfo[];
prefixes: string[];
isTruncated: boolean;
nextContinuationToken?: string;
}
interface ObjectInfo {
path: string;
size: number;
lastModified: Date;
etag: string;
storageClass?: string;
}
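As a quick example of consuming ObjectInfo, this snippet totals the storage used under a prefix (it only covers the first page; combine it with the pagination loop above for large prefixes):

const { objects } = await connector.list({ prefix: 'backups/2024-01/' });
// Sum the object sizes reported by the provider
const totalBytes = objects.reduce((sum, obj) => sum + obj.size, 0);
console.log(`${objects.length} objects, ${(totalBytes / (1024 * 1024)).toFixed(1)} MB`);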
Provider-Specific Configuration
AWS S3
const s3 = createStorageConnector({
type: 's3',
bucket: 'my-bucket',
region: 'us-east-1',
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
sessionToken: process.env.AWS_SESSION_TOKEN // Optional, for STS
},
// Optional S3-specific settings
s3: {
forcePathStyle: false, // Use virtual-hosted style
accelerateEndpoint: false,
useArnRegion: false
}
});
IAM Policy Requirements:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
]
}
Google Cloud Storage
const gcs = createStorageConnector({
type: 'gcs',
bucket: 'my-bucket',
credentials: require('./service-account.json'),
// Or use environment variable
// credentials: JSON.parse(process.env.GOOGLE_CREDENTIALS)
});
// Alternative: use ADC (Application Default Credentials)
const gcsWithADC = createStorageConnector({
  type: 'gcs',
  bucket: 'my-bucket',
  useADC: true // Reads GOOGLE_APPLICATION_CREDENTIALS
});
Required IAM Roles:
- roles/storage.objectAdmin - Full object access
- Or specific permissions:
  - storage.objects.create
  - storage.objects.get
  - storage.objects.delete
  - storage.objects.list
DigitalOcean Spaces
const spaces = createStorageConnector({
type: 'spaces',
bucket: 'my-space',
region: 'nyc3', // nyc3, sfo3, ams3, sgp1, fra1
credentials: {
accessKeyId: process.env.DO_SPACES_KEY,
secretAccessKey: process.env.DO_SPACES_SECRET
},
// Custom endpoint (optional)
endpoint: 'https://nyc3.digitaloceanspaces.com'
});
Advanced Features
Multipart Uploads
For large files (recommended for files > 100MB):
// Start multipart upload
const upload = await connector.createMultipartUpload({
path: 'backups/large-backup.json',
contentType: 'application/json'
});
// Upload parts
const partSize = 10 * 1024 * 1024; // 10MB parts
const parts = [];
let partNumber = 1;
for (let offset = 0; offset < data.length; offset += partSize) {
const chunk = data.slice(offset, offset + partSize);
const part = await upload.uploadPart({
partNumber,
data: chunk
});
parts.push(part);
partNumber++;
}
// Complete upload
await upload.complete({ parts });
// Or abort if needed
// await upload.abort();
Streaming
For memory-efficient processing:
import fs from 'fs';
import { pipeline } from 'stream/promises';
import { createGzip, createGunzip } from 'zlib';
// Upload with streaming
const readStream = fs.createReadStream('large-file.json');
const writeStream = connector.createWriteStream('backups/large-file.json.gz');
await pipeline(
readStream,
createGzip(),
writeStream
);
// Download with streaming
const downloadStream = connector.createReadStream('backups/large-file.json.gz');
const outputStream = fs.createWriteStream('restored-file.json');
await pipeline(
downloadStream,
createGunzip(),
outputStream
);
Signed URLs
Generate pre-signed URLs for temporary access:
// Generate download URL
const downloadUrl = await connector.getSignedUrl({
path: 'backups/data.json',
action: 'read',
expiresIn: 3600 // 1 hour
});
// Generate upload URL
const uploadUrl = await connector.getSignedUrl({
path: 'backups/new-file.json',
action: 'write',
expiresIn: 3600,
contentType: 'application/json'
});
console.log('Download URL:', downloadUrl);
console.log('Upload URL:', uploadUrl);
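The returned strings are ordinary HTTPS URLs, so a client can consume them without any SDK (a minimal sketch; the Content-Type of the request must match what the upload URL was signed with):

// Client-side upload through the pre-signed URL
const response = await fetch(uploadUrl, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ hello: 'world' })
});
if (!response.ok) {
  throw new Error(`Upload failed: ${response.status}`);
}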
Copy Operations
// Copy within same bucket
await connector.copy({
source: 'backups/original.json',
destination: 'backups/copy.json'
});
// Copy with new metadata
await connector.copy({
source: 'backups/original.json',
destination: 'backups/copy.json',
metadata: {
'x-copied-at': new Date().toISOString()
}
});
Metadata Operations
// Get metadata
const metadata = await connector.getMetadata('backups/data.json');
console.log('Size:', metadata.size);
console.log('Content-Type:', metadata.contentType);
console.log('Last Modified:', metadata.lastModified);
console.log('Custom Metadata:', metadata.metadata);
// Set metadata
await connector.setMetadata('backups/data.json', {
'x-backup-verified': 'true',
'x-verified-at': new Date().toISOString()
});
Connection Testing
const result = await connector.testConnection();
if (result.success) {
console.log('Connection successful!');
console.log('Bucket:', result.bucket);
console.log('Region:', result.region);
console.log('Permissions:', result.permissions);
} else {
console.error('Connection failed:', result.error);
console.error('Details:', result.details);
}
ConnectionTestResult:
interface ConnectionTestResult {
success: boolean;
bucket: string;
region?: string;
permissions: {
read: boolean;
write: boolean;
delete: boolean;
list: boolean;
};
error?: string;
details?: string;
}
Error Handling
import {
createStorageConnector,
StorageError,
NotFoundError,
PermissionDeniedError,
QuotaExceededError,
NetworkError,
InvalidCredentialsError
} from '@firebase-backup-platform/storage-connectors';
try {
await connector.download({ path: 'backups/file.json' });
} catch (error) {
if (error instanceof NotFoundError) {
console.log('File not found:', error.path);
} else if (error instanceof PermissionDeniedError) {
console.log('Permission denied:', error.message);
console.log('Required permissions:', error.requiredPermissions);
} else if (error instanceof QuotaExceededError) {
console.log('Storage quota exceeded');
} else if (error instanceof NetworkError) {
console.log('Network error:', error.message);
// Retry logic
} else if (error instanceof InvalidCredentialsError) {
console.log('Invalid credentials:', error.message);
} else if (error instanceof StorageError) {
console.log('Storage error:', error.message);
console.log('Provider:', error.provider);
console.log('Code:', error.code);
}
}
Retry and Resilience
Configure automatic retries:
const connector = createStorageConnector({
type: 's3',
bucket: 'my-bucket',
region: 'us-east-1',
credentials: { ... },
retry: {
maxAttempts: 3,
initialDelay: 1000,
maxDelay: 30000,
backoffMultiplier: 2,
retryableErrors: ['NetworkError', 'ServiceUnavailable', 'Throttled']
}
});
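With the settings above, the delay between attempts grows geometrically from initialDelay by backoffMultiplier (roughly 1 s, 2 s, 4 s, ...) and is capped at maxDelay; only errors named in retryableErrors trigger a retry.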
TypeScript Support
Full TypeScript support with comprehensive types:
import {
createStorageConnector,
StorageConnector,
StorageConfig,
S3Config,
GCSConfig,
SpacesConfig,
UploadOptions,
UploadResult,
DownloadOptions,
ListOptions,
ListResult,
ObjectInfo,
ObjectMetadata,
SignedUrlOptions,
MultipartUpload,
ConnectionTestResult,
StorageError
} from '@firebase-backup-platform/storage-connectors';
// Typed configuration
const config: S3Config = {
type: 's3',
bucket: 'my-bucket',
region: 'us-east-1',
credentials: {
accessKeyId: 'key',
secretAccessKey: 'secret'
}
};
// Typed connector
const connector: StorageConnector = createStorageConnector(config);
// Typed operations
const uploadResult: UploadResult = await connector.upload({
path: 'test.json',
data: Buffer.from('{}')
});
const listResult: ListResult = await connector.list({
prefix: 'backups/'
});
Best Practices
Path Naming
// Use consistent, organized paths
const path = `backups/${projectId}/${year}/${month}/${backupId}.json`;
// Example: backups/proj_123/2024/01/bkp_456.json
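A small helper keeps these paths consistent; zero-padding the month means lexicographic listing matches chronological order (a sketch; buildBackupPath is a hypothetical name):

function buildBackupPath(projectId: string, backupId: string, date = new Date()): string {
  const year = date.getUTCFullYear();
  const month = String(date.getUTCMonth() + 1).padStart(2, '0'); // '01'..'12'
  return `backups/${projectId}/${year}/${month}/${backupId}.json`;
}

// e.g. buildBackupPath('proj_123', 'bkp_456') -> 'backups/proj_123/2024/01/bkp_456.json'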
Prefix Configuration
// Set a global prefix for all operations
const connector = createStorageConnector({
type: 's3',
bucket: 'shared-bucket',
region: 'us-east-1',
credentials: { ... },
prefix: 'firebackup/' // All paths prefixed with this
});
// upload to 'backups/file.json' becomes 'firebackup/backups/file.json'
await connector.upload({
path: 'backups/file.json',
data: buffer
});
Memory Management
// For large files, use streaming
const readStream = connector.createReadStream('backups/large-file.json');
readStream.on('data', (chunk) => {
// Process chunk
});
readStream.on('end', () => {
console.log('Download complete');
});
Related Packages
- backup-core - Uses storage connectors for backups
- encryption - Encrypt data before upload
- compression - Compress data before upload