Architecture Overview
FireBackup is built on a modern, scalable architecture designed for enterprise-grade reliability. This document provides a comprehensive overview of the system components, data flow, and design decisions.
System Overview
Core Components
1. Web Dashboard (apps/web)
The web dashboard provides a user-friendly interface for managing backups, projects, and settings.
Technology Stack:
| Component | Technology | Purpose |
|---|---|---|
| Framework | React 18 | UI component framework |
| Build Tool | Vite | Fast development and bundling |
| State Management | Zustand | Lightweight state stores |
| Data Fetching | TanStack Query | Server state and caching |
| UI Components | shadcn/ui | Accessible component library |
| Styling | Tailwind CSS | Utility-first CSS |
| Real-time | Socket.io Client | Live status updates |
State Stores:
```
// Organization store - global organization context
organization.store.ts

// Project store - current project selection
project.store.ts

// Socket store - real-time connection management
socket.store.ts
```
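A minimal sketch of what one of these Zustand stores could look like (the state shape and field names are assumptions for illustration, not the actual implementation):

```typescript
import { create } from 'zustand';

// Hypothetical shape of the project store
interface ProjectState {
  currentProjectId: string | null;
  setCurrentProject: (id: string) => void;
  clearProject: () => void;
}

export const useProjectStore = create<ProjectState>((set) => ({
  currentProjectId: null,
  setCurrentProject: (id) => set({ currentProjectId: id }),
  clearProject: () => set({ currentProjectId: null }),
}));
```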
Key Features:
- Project connection via OAuth
- Backup scheduling and management
- Real-time backup progress (see the client sketch after this list)
- Storage destination configuration
- Team and organization management
- Audit log viewing
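Real-time updates reach the dashboard over Socket.io. A sketch of how the client might subscribe to backup progress, assuming a hypothetical endpoint, event name, and payload shape:

```typescript
import { io } from 'socket.io-client';

// Hypothetical API endpoint and event name
const socket = io('https://api.firebackup.example', { withCredentials: true });

socket.on('connect', () => {
  console.log('Connected, listening for backup progress');
});

socket.on('backup:progress', (update: { backupId: string; percent: number }) => {
  console.log(`Backup ${update.backupId}: ${update.percent}% complete`);
});
```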
2. API Server (apps/api)
The API server handles all business logic, authentication, and orchestration.
Technology Stack:
| Component | Technology | Purpose |
|---|---|---|
| Framework | NestJS | Modular backend framework |
| HTTP Server | Fastify | High-performance HTTP |
| Database ORM | Prisma | Type-safe database access |
| Job Queue | BullMQ | Reliable job processing |
| Real-time | Socket.io | WebSocket connections |
| Validation | class-validator | DTO validation |
Module Architecture:
Entry Points:
| Entry Point | File | Purpose |
|---|---|---|
| API Server | main.ts | HTTP API and WebSocket server |
| Backup Worker | worker.ts | Background job processing |
| PITR Worker | pitr-worker.ts | Change capture processing |
3. Background Workers
Workers process jobs asynchronously, ensuring the API remains responsive.
Backup Worker:
The backup worker consumes queued backup jobs and runs the export, compression, encryption, and upload pipeline provided by the backup-core package (see below).
PITR Worker:
The Point-in-Time Recovery worker continuously captures Firestore changes so that data can later be restored to a specific point in time.
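A minimal sketch of what a worker process could look like with BullMQ (the queue name, Redis connection, and job payload are assumptions for illustration):

```typescript
import { Worker } from 'bullmq';

// Hypothetical queue name and Redis connection
const worker = new Worker(
  'backup',
  async (job) => {
    // job.data would carry the project, collections, and destination to back up
    console.log(`Processing backup job ${job.id} for project ${job.data.projectId}`);
    // ... run the backup-core pipeline here ...
  },
  { connection: { host: 'localhost', port: 6379 }, concurrency: 5 },
);

worker.on('failed', (job, err) => {
  console.error(`Backup job ${job?.id} failed:`, err.message);
});
```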
Package Architecture
FireBackup uses a monorepo structure with shared packages for modularity and reusability.
Package Overview
```
packages/
├── backup-core/         # Core backup orchestration
├── change-capture/      # PITR change capture
├── compression/         # Brotli/gzip compression
├── encryption/          # AES-256-GCM encryption
├── firebase-oauth/      # Firebase OAuth connector
├── shared-types/        # TypeScript types
├── storage-connectors/  # Multi-cloud storage
└── workflows/           # Workflow orchestration
```
backup-core
The core backup package orchestrates the entire backup process:
```typescript
import { BackupCore, quickBackup } from '@firebase-backup-platform/backup-core';

// Full configuration
const backupCore = new BackupCore({
  firebase: {
    projectId: 'my-project',
    credentials: serviceAccount
  },
  storage: {
    type: 's3',
    bucket: 'my-backups',
    region: 'us-east-1'
  },
  encryption: {
    algorithm: 'aes-256-gcm',
    key: encryptionKey
  },
  compression: {
    algorithm: 'brotli',
    level: 6
  }
});

const result = await backupCore.backup({
  collections: ['users', 'orders'],
  includeAuth: true
});

// Or use the quick backup helper
const quickResult = await quickBackup(config, options);
```
storage-connectors
Abstraction layer for multi-cloud storage:
```typescript
import { createStorageConnector } from '@firebase-backup-platform/storage-connectors';

// AWS S3
const s3 = createStorageConnector({
  type: 's3',
  bucket: 'my-bucket',
  region: 'us-east-1',
  credentials: {
    accessKeyId: '...',
    secretAccessKey: '...'
  }
});

// Google Cloud Storage
const gcs = createStorageConnector({
  type: 'gcs',
  bucket: 'my-bucket',
  credentials: serviceAccountJson
});

// DigitalOcean Spaces
const spaces = createStorageConnector({
  type: 'spaces',
  bucket: 'my-bucket',
  region: 'nyc3',
  credentials: {
    accessKeyId: '...',
    secretAccessKey: '...'
  }
});

// Every connector exposes the same interface
const connector = s3;
await connector.upload({ path: 'backup.data', data: buffer });
await connector.download({ path: 'backup.data' });
await connector.list({ prefix: 'backups/' });
await connector.delete({ path: 'backup.data' });
```
encryption
Secure encryption with AES-256-GCM:
```typescript
import { encrypt, decrypt, generateKey } from '@firebase-backup-platform/encryption';

// Generate a new key
const key = generateKey();

// Encrypt data
const encrypted = await encrypt(data, key);
// Returns: { ciphertext, iv, authTag }

// Decrypt data
const decrypted = await decrypt(encrypted, key);
```
compression
High-performance compression utilities:
```typescript
import { compress, decompress } from '@firebase-backup-platform/compression';

// Brotli compression (best ratio)
const brotliCompressed = await compress(data, {
  algorithm: 'brotli',
  level: 6
});

// Gzip compression (faster)
const gzipCompressed = await compress(data, {
  algorithm: 'gzip',
  level: 6
});

// Decompress
const original = await decompress(brotliCompressed);
```
Data Flow
Backup Creation Flow
Real-time Status Updates
Restore Flow
Database Schema
Entity Relationship Diagram
Key Tables
| Table | Purpose |
|---|---|
| User | User accounts and authentication |
| Organization | Multi-tenant organization data |
| UserOrganization | User-org membership and roles |
| Project | Connected Firebase projects |
| Backup | Backup records and metadata |
| Schedule | Automated backup schedules |
| StorageDestination | Cloud storage configurations |
| Webhook | Webhook configurations |
| AuditLog | Security audit trail |
| PITRConfig | Point-in-time recovery settings |
| PITRChange | Captured Firestore changes |
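As an illustration of how these tables are consumed through Prisma, a query for a project's recent backups might look like the following (model and field names are assumptions, not the actual schema):

```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Hypothetical query: the ten most recent backups for a project,
// including the related project record. Field names are illustrative.
const backups = await prisma.backup.findMany({
  where: { projectId: 'proj_123' },
  orderBy: { createdAt: 'desc' },
  take: 10,
  include: { project: true },
});
```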
Security Architecture
Authentication Flow
Authorization Model
FireBackup uses organization-scoped RBAC (Role-Based Access Control), with roles assigned per user-organization membership (see the guard sketch after the table):
| Role | Permissions |
|---|---|
| Owner | Full access, billing, delete org |
| Admin | Manage projects, storage, team |
| Member | Create backups, view data |
| Viewer | Read-only access |
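Roles are enforced in the API layer. A minimal sketch of what such a check could look like as a NestJS guard (the decorator, metadata key, and request shape are assumptions for illustration):

```typescript
import { CanActivate, ExecutionContext, Injectable, SetMetadata } from '@nestjs/common';
import { Reflector } from '@nestjs/core';

// Hypothetical decorator to tag a route with the roles allowed to call it
export const Roles = (...roles: string[]) => SetMetadata('roles', roles);

@Injectable()
export class RolesGuard implements CanActivate {
  constructor(private readonly reflector: Reflector) {}

  canActivate(context: ExecutionContext): boolean {
    const required = this.reflector.get<string[]>('roles', context.getHandler());
    if (!required || required.length === 0) return true;

    // Assumes the auth layer has attached the caller's role within the
    // current organization to the request (illustrative shape).
    const request = context.switchToHttp().getRequest();
    const role: string | undefined = request.user?.organizationRole;

    return role !== undefined && required.includes(role);
  }
}
```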
Encryption Architecture
Encryption Layers:
Layer 1: Transport Encryption
- TLS 1.3 for all API communication
- HTTPS enforced for all endpoints
Layer 2: Data-at-Rest Encryption
- AES-256-GCM for backup files
- Unique IV per encryption operation
- Authentication tag verification (see the sketch after the layer list)
Layer 3: Credential Encryption
- OAuth tokens encrypted in database
- Storage credentials encrypted
- Master key managed by customer (BYOK)
Layer 4: Storage Provider Encryption
- S3 SSE-S3 or SSE-KMS
- GCS customer-managed encryption keys
- Spaces server-side encryption
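The Layer 2 scheme maps directly onto Node's built-in crypto module. A minimal sketch of AES-256-GCM with a unique IV and authentication tag verification (key management omitted; the encryption package above wraps this kind of logic):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

// Encrypt with AES-256-GCM: unique IV per operation, auth tag returned alongside
function encryptBuffer(plaintext: Buffer, key: Buffer) {
  const iv = randomBytes(12); // 96-bit IV, never reused with the same key
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { ciphertext, iv, authTag: cipher.getAuthTag() };
}

// Decrypt and verify: final() throws if the authentication tag does not match
function decryptBuffer(payload: { ciphertext: Buffer; iv: Buffer; authTag: Buffer }, key: Buffer) {
  const decipher = createDecipheriv('aes-256-gcm', key, payload.iv);
  decipher.setAuthTag(payload.authTag);
  return Buffer.concat([decipher.update(payload.ciphertext), decipher.final()]);
}
```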
Scalability Design
Horizontal Scaling
Queue-Based Processing
- Jobs are distributed across workers via BullMQ
- Each worker can process multiple concurrent jobs
- Failed jobs are automatically retried with exponential backoff (see the sketch after this list)
- Dead letter queue for persistent failures
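A sketch of how such a retry policy might be configured when enqueuing a backup job with BullMQ (queue name, connection, and option values are illustrative):

```typescript
import { Queue } from 'bullmq';

// Hypothetical queue name and Redis connection
const backupQueue = new Queue('backup', { connection: { host: 'localhost', port: 6379 } });

await backupQueue.add(
  'run-backup',
  { projectId: 'proj_123', collections: ['users', 'orders'] },
  {
    attempts: 5,                                   // retry failed jobs up to 5 times
    backoff: { type: 'exponential', delay: 5000 }, // 5s, 10s, 20s, ...
    removeOnComplete: true,
  },
);
```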
Database Scaling
- Connection pooling with PgBouncer
- Read replicas for reporting queries
- Partitioning for large audit log tables
- Index optimization for common queries
Deployment Architecture
Cloud Deployment (SaaS)
Self-Hosted Deployment
Monitoring & Observability
Metrics Collection
Key Metrics:
- `backup_duration_seconds`
- `backup_size_bytes`
- `backup_success_total`
- `backup_failure_total`
- `queue_jobs_waiting`
- `queue_jobs_active`
- `api_request_duration_seconds`
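If these metrics are exported in Prometheus format, registering a few of them with prom-client could look like the sketch below (the use of prom-client here is an assumption, not confirmed instrumentation):

```typescript
import { Counter, Gauge, Histogram } from 'prom-client';

// Histogram for how long backups take
const backupDuration = new Histogram({
  name: 'backup_duration_seconds',
  help: 'Duration of backup jobs in seconds',
  buckets: [30, 60, 300, 900, 3600],
});

// Counter for successful backups
const backupSuccess = new Counter({
  name: 'backup_success_total',
  help: 'Total number of successful backups',
});

// Gauge for queue depth
const jobsWaiting = new Gauge({
  name: 'queue_jobs_waiting',
  help: 'Jobs currently waiting in the queue',
});

// Example usage around a backup run
const end = backupDuration.startTimer();
// ... run backup ...
end();
backupSuccess.inc();
jobsWaiting.set(12);
```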
Related Documentation
- Security & Compliance - Security details
- Self-Hosted Installation - Deployment guide
- API Reference - API documentation
- Troubleshooting - Common issues