Architecture Overview

FireBackup is built on a modern, scalable architecture designed for enterprise-grade reliability. This document provides a comprehensive overview of the system components, data flow, and design decisions.

System Overview


Core Components

1. Web Dashboard (apps/web)

The web dashboard provides a user-friendly interface for managing backups, projects, and settings.

Technology Stack:

| Component | Technology | Purpose |
| --- | --- | --- |
| Framework | React 18 | UI component framework |
| Build Tool | Vite | Fast development and bundling |
| State Management | Zustand | Lightweight state stores |
| Data Fetching | TanStack Query | Server state and caching |
| UI Components | shadcn/ui | Accessible component library |
| Styling | Tailwind CSS | Utility-first CSS |
| Real-time | Socket.io Client | Live status updates |

State Stores:

  • `organization.store.ts` — global organization context
  • `project.store.ts` — current project selection
  • `socket.store.ts` — real-time connection management
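The stores are intentionally small. A minimal sketch of what the project store might hold — the `Project` shape and the vanilla `createStore` helper below are illustrative stand-ins for the actual Zustand implementation:

```typescript
// Illustrative shape of the project store; the real app uses Zustand's
// create(), but the state/actions pattern is the same.
interface Project {
  id: string;
  name: string;
}

interface ProjectState {
  currentProject: Project | null;
  setProject: (p: Project) => void;
  clearProject: () => void;
}

// Minimal vanilla store exposing the same get/set contract Zustand does.
function createStore<T>(init: (set: (partial: Partial<T>) => void) => T) {
  let state: T;
  const set = (partial: Partial<T>) => {
    state = { ...state, ...partial };
  };
  state = init(set);
  return { getState: () => state };
}

const useProjectStore = createStore<ProjectState>((set) => ({
  currentProject: null,
  setProject: (p) => set({ currentProject: p }),
  clearProject: () => set({ currentProject: null }),
}));

useProjectStore.getState().setProject({ id: 'p1', name: 'my-project' });
console.log(useProjectStore.getState().currentProject?.name); // prints "my-project"
```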

Key Features:

  • Project connection via OAuth
  • Backup scheduling and management
  • Real-time backup progress
  • Storage destination configuration
  • Team and organization management
  • Audit log viewing

2. API Server (apps/api)

The API server handles all business logic, authentication, and orchestration.

Technology Stack:

| Component | Technology | Purpose |
| --- | --- | --- |
| Framework | NestJS | Modular backend framework |
| HTTP Server | Fastify | High-performance HTTP |
| Database ORM | Prisma | Type-safe database access |
| Job Queue | BullMQ | Reliable job processing |
| Real-time | Socket.io | WebSocket connections |
| Validation | class-validator | DTO validation |

Module Architecture:

Entry Points:

| Entry Point | File | Purpose |
| --- | --- | --- |
| API Server | main.ts | HTTP API and WebSocket server |
| Backup Worker | worker.ts | Background job processing |
| PITR Worker | pitr-worker.ts | Change capture processing |

3. Background Workers

Workers process jobs asynchronously, ensuring the API remains responsive.

Backup Worker:

PITR Worker:

The Point-in-Time Recovery worker continuously captures Firestore changes:
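The capture loop can be sketched without the Firestore client. The `ChangeRecord` shape and `ChangeBuffer` below are hypothetical, not the worker's actual code; they illustrate buffering listener events and flushing them in batches for later replay:

```typescript
// Hypothetical change record produced by a Firestore listener.
interface ChangeRecord {
  collection: string;
  docId: string;
  type: 'created' | 'updated' | 'deleted';
  timestamp: number;
}

// Buffers captured changes and flushes them in fixed-size batches,
// the way a PITR worker would persist them for point-in-time replay.
class ChangeBuffer {
  private pending: ChangeRecord[] = [];

  constructor(
    private batchSize: number,
    private flush: (batch: ChangeRecord[]) => void
  ) {}

  capture(change: ChangeRecord) {
    this.pending.push(change);
    if (this.pending.length >= this.batchSize) {
      // Hand off a full batch and clear the buffer.
      this.flush(this.pending.splice(0, this.pending.length));
    }
  }

  pendingCount(): number {
    return this.pending.length;
  }
}
```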


Package Architecture

FireBackup uses a monorepo structure with shared packages for modularity and reusability.

Package Overview

```
packages/
├── backup-core/          # Core backup orchestration
├── change-capture/       # PITR change capture
├── compression/          # Brotli/gzip compression
├── encryption/           # AES-256-GCM encryption
├── firebase-oauth/       # Firebase OAuth connector
├── shared-types/         # TypeScript types
├── storage-connectors/   # Multi-cloud storage
└── workflows/            # Workflow orchestration
```

backup-core

The core backup package orchestrates the entire backup process:

```typescript
import { BackupCore, quickBackup } from '@firebase-backup-platform/backup-core';

// Full configuration
const backupCore = new BackupCore({
  firebase: {
    projectId: 'my-project',
    credentials: serviceAccount
  },
  storage: {
    type: 's3',
    bucket: 'my-backups',
    region: 'us-east-1'
  },
  encryption: {
    algorithm: 'aes-256-gcm',
    key: encryptionKey
  },
  compression: {
    algorithm: 'brotli',
    level: 6
  }
});

const result = await backupCore.backup({
  collections: ['users', 'orders'],
  includeAuth: true
});

// Or use the quick backup helper
const quickResult = await quickBackup(config, options);
```

storage-connectors

Abstraction layer for multi-cloud storage:

```typescript
import { createStorageConnector } from '@firebase-backup-platform/storage-connectors';

// AWS S3
const s3 = createStorageConnector({
  type: 's3',
  bucket: 'my-bucket',
  region: 'us-east-1',
  credentials: {
    accessKeyId: '...',
    secretAccessKey: '...'
  }
});

// Google Cloud Storage
const gcs = createStorageConnector({
  type: 'gcs',
  bucket: 'my-bucket',
  credentials: serviceAccountJson
});

// DigitalOcean Spaces
const spaces = createStorageConnector({
  type: 'spaces',
  bucket: 'my-bucket',
  region: 'nyc3',
  credentials: {
    accessKeyId: '...',
    secretAccessKey: '...'
  }
});

// Every connector exposes the same interface (shown here with s3)
await s3.upload({ path: 'backup.data', data: buffer });
await s3.download({ path: 'backup.data' });
await s3.list({ prefix: 'backups/' });
await s3.delete({ path: 'backup.data' });
```
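The abstraction reduces to a small interface. A hypothetical in-memory implementation (not part of the package; the method shapes mirror the calls above) shows the contract the real S3/GCS/Spaces connectors fulfill:

```typescript
// Hypothetical connector contract; method names mirror the calls above.
interface StorageConnector {
  upload(args: { path: string; data: Buffer }): Promise<void>;
  download(args: { path: string }): Promise<Buffer>;
  list(args: { prefix: string }): Promise<string[]>;
  delete(args: { path: string }): Promise<void>;
}

// In-memory connector, useful for tests; real connectors talk to cloud APIs.
class MemoryConnector implements StorageConnector {
  private files = new Map<string, Buffer>();

  async upload({ path, data }: { path: string; data: Buffer }): Promise<void> {
    this.files.set(path, data);
  }
  async download({ path }: { path: string }): Promise<Buffer> {
    const data = this.files.get(path);
    if (!data) throw new Error(`not found: ${path}`);
    return data;
  }
  async list({ prefix }: { prefix: string }): Promise<string[]> {
    return [...this.files.keys()].filter((k) => k.startsWith(prefix));
  }
  async delete({ path }: { path: string }): Promise<void> {
    this.files.delete(path);
  }
}
```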

encryption

Secure encryption with AES-256-GCM:

```typescript
import { encrypt, decrypt, generateKey } from '@firebase-backup-platform/encryption';

// Generate a new key
const key = generateKey();

// Encrypt data
const encrypted = await encrypt(data, key);
// Returns: { ciphertext, iv, authTag }

// Decrypt data
const decrypted = await decrypt(encrypted, key);
```
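The API shape suggests a thin wrapper over Node's built-in crypto. A minimal sketch of an equivalent AES-256-GCM round trip (assumed, not the package's actual source):

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from 'node:crypto';

// AES-256-GCM: 32-byte key, fresh 12-byte IV per operation, auth tag
// verified on decrypt. Mirrors the { ciphertext, iv, authTag } shape above.
function generateKey(): Buffer {
  return randomBytes(32);
}

function encrypt(data: Buffer, key: Buffer) {
  const iv = randomBytes(12); // unique IV per encryption operation
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(data), cipher.final()]);
  return { ciphertext, iv, authTag: cipher.getAuthTag() };
}

function decrypt(
  enc: { ciphertext: Buffer; iv: Buffer; authTag: Buffer },
  key: Buffer
): Buffer {
  const decipher = createDecipheriv('aes-256-gcm', key, enc.iv);
  decipher.setAuthTag(enc.authTag); // decryption fails if data was tampered with
  return Buffer.concat([decipher.update(enc.ciphertext), decipher.final()]);
}
```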

compression

High-performance compression utilities:

```typescript
import { compress, decompress } from '@firebase-backup-platform/compression';

// Brotli compression (best ratio)
const brotliCompressed = await compress(data, {
  algorithm: 'brotli',
  level: 6
});

// Gzip compression (faster)
const gzipCompressed = await compress(data, {
  algorithm: 'gzip',
  level: 6
});

// Decompress
const original = await decompress(brotliCompressed);
```
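Node's zlib module provides both algorithms out of the box; a sketch of what the package plausibly wraps (the quality/level settings match the examples above):

```typescript
import {
  brotliCompressSync,
  brotliDecompressSync,
  gzipSync,
  gunzipSync,
  constants,
} from 'node:zlib';

const input = Buffer.from('some backup data '.repeat(100));

// Brotli at quality 6: good ratio at reasonable speed.
const brotli = brotliCompressSync(input, {
  params: { [constants.BROTLI_PARAM_QUALITY]: 6 },
});

// Gzip at level 6: faster, typically a slightly larger output.
const gzip = gzipSync(input, { level: 6 });

// Both round-trip losslessly.
const restored = brotliDecompressSync(brotli);
console.log(restored.equals(input)); // prints "true"
```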

Data Flow

Backup Creation Flow

Real-time Status Updates

Restore Flow


Database Schema

Entity Relationship Diagram

Key Tables

| Table | Purpose |
| --- | --- |
| User | User accounts and authentication |
| Organization | Multi-tenant organization data |
| UserOrganization | User-org membership and roles |
| Project | Connected Firebase projects |
| Backup | Backup records and metadata |
| Schedule | Automated backup schedules |
| StorageDestination | Cloud storage configurations |
| Webhook | Webhook configurations |
| AuditLog | Security audit trail |
| PITRConfig | Point-in-time recovery settings |
| PITRChange | Captured Firestore changes |

Security Architecture

Authentication Flow

Authorization Model

FireBackup uses organization-scoped RBAC (Role-Based Access Control):

| Role | Permissions |
| --- | --- |
| Owner | Full access, billing, delete org |
| Admin | Manage projects, storage, team |
| Member | Create backups, view data |
| Viewer | Read-only access |
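Because the roles form a strict hierarchy, a permission check reduces to comparing the user's role on their organization membership against a required minimum. A hypothetical sketch (role names from the table; the helper is illustrative, not FireBackup's actual guard code):

```typescript
// Roles ordered by privilege, mirroring the table above.
type Role = 'OWNER' | 'ADMIN' | 'MEMBER' | 'VIEWER';

const ROLE_RANK: Record<Role, number> = {
  OWNER: 3,
  ADMIN: 2,
  MEMBER: 1,
  VIEWER: 0,
};

// True when the member's role meets or exceeds the required role.
function hasRole(memberRole: Role, required: Role): boolean {
  return ROLE_RANK[memberRole] >= ROLE_RANK[required];
}

console.log(hasRole('ADMIN', 'MEMBER')); // prints "true"
console.log(hasRole('VIEWER', 'MEMBER')); // prints "false"
```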

Encryption Architecture

Encryption Layers:

Layer 1: Transport Encryption

  • TLS 1.3 for all API communication
  • HTTPS enforced for all endpoints

Layer 2: Data-at-Rest Encryption

  • AES-256-GCM for backup files
  • Unique IV per encryption operation
  • Authentication tag verification

Layer 3: Credential Encryption

  • OAuth tokens encrypted in database
  • Storage credentials encrypted
  • Master key managed by customer (BYOK)

Layer 4: Storage Provider Encryption

  • S3 SSE-S3 or SSE-KMS
  • GCS customer-managed encryption keys
  • Spaces server-side encryption

Scalability Design

Horizontal Scaling

Queue-Based Processing

  • Jobs are distributed across workers via BullMQ
  • Each worker can process multiple concurrent jobs
  • Failed jobs are automatically retried with exponential backoff
  • Dead letter queue for persistent failures
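The retry schedule itself is simple to state. A sketch of the exponential-backoff delay computation — the base delay, cap, and attempt counts below are illustrative, not FireBackup's actual queue settings:

```typescript
// Exponential backoff: the delay doubles on each attempt, capped at a maximum
// so late retries don't wait unboundedly long.
function backoffDelayMs(attempt: number, baseMs = 1000, maxMs = 60_000): number {
  return Math.min(baseMs * 2 ** (attempt - 1), maxMs);
}

// Attempts 1..5 produce delays of 1s, 2s, 4s, 8s, 16s.
console.log([1, 2, 3, 4, 5].map((a) => backoffDelayMs(a)));
```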

Database Scaling

  • Connection pooling with PgBouncer
  • Read replicas for reporting queries
  • Partitioning for large audit log tables
  • Index optimization for common queries

Deployment Architecture

Cloud Deployment (SaaS)

Self-Hosted Deployment


Monitoring & Observability

Metrics Collection

Key Metrics:

  • backup_duration_seconds
  • backup_size_bytes
  • backup_success_total
  • backup_failure_total
  • queue_jobs_waiting
  • queue_jobs_active
  • api_request_duration_seconds
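The names above follow Prometheus conventions (`_total` for counters, `_seconds`/`_bytes` for measured values). A minimal in-process counter sketch to show the naming scheme — illustrative only; a real deployment would use a client library such as prom-client:

```typescript
// Tiny counter; enough to show how the *_total metrics accumulate.
class Counter {
  private value = 0;
  constructor(public readonly name: string) {}

  inc(by = 1): void {
    this.value += by;
  }
  get(): number {
    return this.value;
  }
}

const backupSuccess = new Counter('backup_success_total');
const backupFailure = new Counter('backup_failure_total');

backupSuccess.inc();
backupSuccess.inc();
backupFailure.inc();

console.log(`${backupSuccess.name} ${backupSuccess.get()}`); // prints "backup_success_total 2"
```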