
Tasmanian Cloud Architecture

High-level architecture overview of Tasmanian Cloud's sovereign infrastructure

Tasmanian Cloud is built on a sovereign-first architecture, ensuring all data remains within Tasmania while providing enterprise-grade reliability, security, and performance.


Architecture Overview

```mermaid
flowchart TB
    subgraph "Internet"
        USERS[Customers & End Users]
        CF[Cloudflare CDN/WAF]
    end

    subgraph "Tasmanian Cloud - Launceston DC"
        subgraph "Edge Layer"
            LB1[HAProxy Load Balancer]
            WAF1[Local WAF]
        end

        subgraph "Compute Cluster"
            PVE1[Proxmox Node 1]
            PVE2[Proxmox Node 2]
            PVE3[Proxmox Node 3]
        end

        subgraph "Storage Cluster"
            CEPH1[Ceph OSD 1]
            CEPH2[Ceph OSD 2]
            CEPH3[Ceph OSD 3]
            CEPH_MON[Ceph MON/MGR]
        end

        subgraph "Services Layer"
            PANEL[Paymenter Panel]
            RUSTFS[RustFS Storage]
            MONITOR[Monitoring Stack]
        end
    end

    subgraph "Network"
        NETBIRD[NetBird Mesh VPN]
        FIREWALL[pfSense Cluster]
    end

    USERS --> CF
    CF --> LB1
    LB1 --> PANEL

    PANEL --> PVE1
    PANEL --> PVE2
    PANEL --> PVE3

    PVE1 --> CEPH1
    PVE2 --> CEPH2
    PVE3 --> CEPH3

    CEPH_MON --> CEPH1
    CEPH_MON --> CEPH2
    CEPH_MON --> CEPH3

    NETBIRD --> PVE1
    NETBIRD --> PVE2
    NETBIRD --> PVE3
```

Core Principles

1. Sovereign by Design

All infrastructure is physically located in Tasmania. No data leaves Australian jurisdiction unless explicitly configured by the customer.

```mermaid
flowchart LR
    subgraph "Data Sovereignty"
        A[Customer Data] --> B[Tasmanian Cloud]
        B --> C[Tasmania Only]
        C --> D[No Offshore Transfer]
    end

    subgraph "Compliance"
        E[ISO 27001]
        F[Essential 8]
        G[Privacy Act]
    end

    D --> E
    D --> F
    D --> G
```

2. Zero-Trust Security

Every component authenticates and authorizes every request. No implicit trust based on network location.

3. High Availability

No single points of failure. All critical services run in HA configuration.

4. API-First

All services expose RESTful APIs for automation and integration.
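
As an illustration of the API-first model, here is a minimal Python sketch of provisioning a VM over REST. The endpoint, payload fields, and token are hypothetical placeholders, not the published Tasmanian Cloud API:

```python
import requests

# Hypothetical endpoint and token -- placeholders, not the real API surface.
API_BASE = "https://api.tasmanian.cloud/v1"
TOKEN = "your-api-token"

# Request a new Standard VM (field names assumed for illustration).
resp = requests.post(
    f"{API_BASE}/vms",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"type": "standard", "vcpus": 2, "memory_gb": 4, "disk_gb": 40},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. the new VM's ID and provisioning status
```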


Compute Layer

Proxmox VE Cluster

| Node | Specs | Role |
| --- | --- | --- |
| pve1.tasmanian.cloud | 2x EPYC 7443, 512GB RAM, 8x 3.84TB NVMe | Primary compute |
| pve2.tasmanian.cloud | 2x EPYC 7443, 512GB RAM, 8x 3.84TB NVMe | Secondary compute |
| pve3.tasmanian.cloud | 2x EPYC 7443, 512GB RAM, 8x 3.84TB NVMe | Tertiary compute |

```mermaid
flowchart TB
    subgraph "Proxmox Cluster"
        CM[Corosync Cluster Manager]

        subgraph "Nodes"
            N1[Node 1]
            N2[Node 2]
            N3[Node 3]
        end

        subgraph "VM Distribution"
            VM1[Customer VMs]
            VM2[Service VMs]
            VM3[Management VMs]
        end
    end

    CM --> N1
    CM --> N2
    CM --> N3

    N1 --> VM1
    N2 --> VM2
    N3 --> VM3
```
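
Proxmox itself exposes a REST API, so cluster state can be scripted. A sketch using the proxmoxer Python client, assuming an API token has been created on the cluster (hostname and token names are placeholders):

```python
from proxmoxer import ProxmoxAPI  # pip install proxmoxer requests

# Placeholder credentials -- substitute a real Proxmox API token.
prox = ProxmoxAPI(
    "pve1.tasmanian.cloud",
    user="automation@pam",
    token_name="readonly",
    token_value="xxxx-xxxx-xxxx",
    verify_ssl=True,
)

# Print the online state of every member reported by the cluster.
for entry in prox.cluster.status.get():
    if entry["type"] == "node":
        print(entry["name"], "online" if entry.get("online") else "offline")
```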

Virtual Machine Types

| Type | Use Case | Specifications |
| --- | --- | --- |
| Standard | General workloads | 1-16 vCPUs, 2-64GB RAM |
| High-Memory | Databases, caches | 1-32 vCPUs, 8-256GB RAM |
| GPU | AI/ML inference | 8-64 vCPUs, 64-512GB RAM, 1-8 L40S |
| Burstable | Variable workloads | 1-4 vCPUs, 2-16GB RAM, CPU credits (modeled below) |
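
The Burstable tier meters CPU with a credit balance. A rough model of how such accounting typically works, with illustrative rates that are assumptions rather than published Tasmanian Cloud figures: credits accrue while usage sits below a baseline and are spent while it bursts above it.

```python
def credit_balance(balance, baseline_pct, usage_pct, minutes, cap):
    """Advance a burstable VM's CPU-credit balance by `minutes`.

    Illustrative model only: one credit = one vCPU-minute at 100% CPU.
    Credits accrue below the baseline, drain above it, never go
    negative, and are capped at `cap`.
    """
    delta_per_min = (baseline_pct - usage_pct) / 100.0
    return max(0.0, min(cap, balance + delta_per_min * minutes))

# One idle hour at 5% usage against a 20% baseline accrues 9 credits...
b = credit_balance(0.0, baseline_pct=20, usage_pct=5, minutes=60, cap=288)
# ...which then drain at 0.8 credits/minute while bursting at 100%.
b = credit_balance(b, baseline_pct=20, usage_pct=100, minutes=5, cap=288)
print(b)  # 5.0 credits left after a 5-minute burst
```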

Storage Layer

Ceph Cluster

```mermaid
flowchart TB
    subgraph "Ceph Storage Architecture"
        MON[Ceph MON<br/>Quorum: 3 nodes]
        MGR[Ceph MGR<br/>Active/Standby]

        subgraph "OSD Pool"
            OSD1[OSD 1-8<br/>Node 1]
            OSD2[OSD 9-16<br/>Node 2]
            OSD3[OSD 17-24<br/>Node 3]
        end

        subgraph "Storage Pools"
            POOL1[VM Disks<br/>3x Replica]
            POOL2[Object Storage<br/>Erasure Coding]
            POOL3[Backups<br/>2x Replica]
        end
    end

    MON --> MGR
    OSD1 --> POOL1
    OSD2 --> POOL1
    OSD3 --> POOL1
```

Storage Tiers

| Tier | Performance | Use Case | Redundancy |
| --- | --- | --- | --- |
| NVMe Hot | 500K+ IOPS | Databases, active VMs | 3x Replica |
| NVMe Warm | 100K+ IOPS | General workloads | 3x Replica |
| HDD Cold | 200+ MB/s | Archives, backups | Erasure Coding |
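
Redundancy directly sets usable capacity. A back-of-the-envelope calculation from the raw NVMe figures in the node table (3 nodes x 8 x 3.84TB); the 4+2 erasure-coding profile is an assumed example, not a stated Tasmanian Cloud setting:

```python
raw_tb = 3 * 8 * 3.84             # raw NVMe across the three nodes: 92.16 TB

# 3x replication stores every object three times.
usable_replica3 = raw_tb / 3      # ~30.7 TB usable

# Erasure coding with k data + m parity chunks keeps k/(k+m) of raw.
k, m = 4, 2                       # assumed profile for illustration
usable_ec = raw_tb * k / (k + m)  # ~61.4 TB usable

print(f"3x replica: {usable_replica3:.1f} TB | EC {k}+{m}: {usable_ec:.1f} TB")
```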

RustFS Integration

RustFS provides S3-compatible object storage hardened with post-quantum cryptography:

```mermaid
flowchart LR
    CLIENT[S3 Client] --> API[S3 API Gateway]
    API --> RUSTFS[RustFS Cluster]
    RUSTFS --> PQ[Post-Quantum Crypto]
    PQ --> DISK[Encrypted Storage]

    subgraph "RustFS Features"
        KYBER[Kyber-768 KEM]
        DILITHIUM[Dilithium-3 Signatures]
        AES[AES-256-GCM]
    end

    PQ --> KYBER
    PQ --> DILITHIUM
    PQ --> AES
```
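
Because the gateway is S3-compatible, any stock S3 SDK should work unchanged; as the diagram suggests, the post-quantum encryption is applied server-side. A minimal boto3 sketch with a placeholder endpoint and credentials:

```python
import boto3  # pip install boto3

# Placeholder endpoint and keys for the S3-compatible gateway.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.tasmanian.cloud",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"stored in Tasmania")
obj = s3.get_object(Bucket="my-bucket", Key="hello.txt")
print(obj["Body"].read())  # b'stored in Tasmania'
```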

Network Architecture

Physical Network

```mermaid
flowchart TB
    subgraph "Network Topology"
        INTERNET[Internet]

        subgraph "Edge"
            FW1[pfSense Primary]
            FW2[pfSense Secondary]
            LB[HAProxy Cluster]
        end

        subgraph "Core"
            CORE1[Core Switch 1]
            CORE2[Core Switch 2]
        end

        subgraph "Access"
            TOR1[ToR Switch 1]
            TOR2[ToR Switch 2]
            TOR3[ToR Switch 3]
        end
    end

    INTERNET --> FW1
    INTERNET --> FW2
    FW1 --> CORE1
    FW2 --> CORE2
    CORE1 --> TOR1
    CORE1 --> TOR2
    CORE2 --> TOR3
```

Network Segmentation

| VLAN | Purpose | CIDR |
| --- | --- | --- |
| 10 | Management | 10.0.10.0/24 |
| 20 | Proxmox Cluster | 10.0.20.0/24 |
| 30 | Ceph Storage | 10.0.30.0/24 |
| 40 | Customer VMs | 10.0.40.0/22 |
| 50 | Services | 10.0.50.0/24 |
| 100 | NetBird VPN | 100.64.0.0/10 |
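
The /22 for customer VMs is the only oversized block; Python's ipaddress module makes the address budget of each segment explicit:

```python
import ipaddress

vlans = {
    10: "10.0.10.0/24",   # Management
    20: "10.0.20.0/24",   # Proxmox Cluster
    30: "10.0.30.0/24",   # Ceph Storage
    40: "10.0.40.0/22",   # Customer VMs -- 4x the space of a /24
    50: "10.0.50.0/24",   # Services
}

for vlan, cidr in vlans.items():
    net = ipaddress.ip_network(cidr)
    # num_addresses counts every address, including network/broadcast.
    print(f"VLAN {vlan}: {cidr} -> {net.num_addresses} addresses")
```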

NetBird Mesh VPN

```mermaid
flowchart TB
    subgraph "NetBird Mesh"
        NB[NetBird Management]

        subgraph "Peers"
            PEER1[Proxmox Node 1]
            PEER2[Proxmox Node 2]
            PEER3[Proxmox Node 3]
            PEER4[Customer Site 1]
            PEER5[Customer Site 2]
        end

        subgraph "Access Control"
            ACL1[Group: Infrastructure]
            ACL2[Group: Customers]
            ACL3[Group: Management]
        end
    end

    NB --> PEER1
    NB --> PEER2
    NB --> PEER3
    NB --> PEER4
    NB --> PEER5

    ACL1 --> PEER1
    ACL2 --> PEER4
    ACL3 --> PEER5
```
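
NetBird's management plane also exposes a REST API, which makes the peer inventory scriptable. A sketch using a personal access token against a self-hosted management URL; both values are placeholders, and the response fields shown are assumptions to verify against the NetBird API docs:

```python
import requests

# Placeholder management URL and personal access token.
NETBIRD_API = "https://netbird.tasmanian.cloud/api"
TOKEN = "nbp_xxxxxxxx"

resp = requests.get(
    f"{NETBIRD_API}/peers",
    headers={"Authorization": f"Token {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# List each peer with its connection state and group membership.
for peer in resp.json():
    groups = [g["name"] for g in peer.get("groups", [])]
    state = "connected" if peer.get("connected") else "offline"
    print(peer["hostname"], state, groups)
```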

Security Layer

Defense in Depth

```mermaid
flowchart TB
    subgraph "Security Layers"
        L1[Layer 1: Perimeter<br/>Cloudflare WAF<br/>DDoS Protection]
        L2[Layer 2: Network<br/>Firewall Rules<br/>IDS/IPS]
        L3[Layer 3: Host<br/>Wazuh EDR<br/>Tetragon eBPF]
        L4[Layer 4: Application<br/>Input Validation<br/>AuthN/AuthZ]
        L5[Layer 5: Data<br/>Encryption at Rest<br/>Encryption in Transit]
    end

    L1 --> L2
    L2 --> L3
    L3 --> L4
    L4 --> L5
```

Wazuh XDR

| Component | Function | Coverage |
| --- | --- | --- |
| SIEM | Log aggregation and analysis | All systems |
| EDR | Endpoint detection and response | All VMs |
| FIM | File integrity monitoring | Critical files |
| Vulnerability | CVE scanning | Weekly |

Service Layer

Core Services

```mermaid
flowchart TB
    subgraph "Tasmanian Cloud Services"
        subgraph "Management"
            PANEL[Paymenter Panel<br/>panel.tasmanian.cloud]
            API[REST API]
            CLI[CLI Tool]
        end

        subgraph "Compute"
            VM[VPS/VMs]
            K8S[Kubernetes]
            GPU[GPU Instances]
        end

        subgraph "Storage"
            BLOCK[Block Storage]
            OBJECT[S3-Compatible]
            BACKUP[Backup Service]
        end

        subgraph "Networking"
            VPC[Virtual Private Cloud]
            LB[Load Balancers]
            DNS[Managed DNS]
        end
    end

    PANEL --> API
    API --> VM
    API --> K8S
    API --> GPU
    API --> BLOCK
    API --> OBJECT
    API --> VPC
```

Monitoring & Observability

Stack Components

| Component | Purpose | Stack |
| --- | --- | --- |
| Metrics | Time-series data | Prometheus + Grafana |
| Logs | Centralized logging | Loki + Grafana |
| Traces | Distributed tracing | Tempo |
| Alerts | Alert management | Alertmanager |
| Uptime | Service monitoring | Uptime Kuma |

```mermaid
flowchart LR
    subgraph "Monitoring Pipeline"
        AGENTS[Node Exporters<br/>VM Agents] --> PROM[Prometheus]
        LOGS[Application Logs<br/>System Logs] --> LOKI[Loki]
        TRACES[Request Traces] --> TEMPO[Tempo]

        PROM --> GRAFANA[Grafana Dashboards]
        LOKI --> GRAFANA
        TEMPO --> GRAFANA

        PROM --> ALERT[Alertmanager]
        ALERT --> PAGER[PagerDuty/Slack]
    end
```
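
Prometheus also answers ad-hoc queries over its standard HTTP API (/api/v1/query), so a quick health check doesn't need Grafana. The server URL below is a placeholder:

```python
import requests

PROM_URL = "https://prometheus.tasmanian.cloud"  # placeholder

# Instant query: exporter targets that are currently down.
resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": 'up{job="node"} == 0'},
    timeout=30,
)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    print("down:", result["metric"].get("instance"))
```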

Disaster Recovery

Backup Strategy

| Data Type | Frequency | Retention | Location |
| --- | --- | --- | --- |
| VM Snapshots | Daily | 30 days | On-site |
| Database | Hourly | 7 days | On-site |
| Object Storage | Real-time | Versioned | On-site |
| Off-site Backup | Daily | 90 days | Secondary DC |

Recovery Objectives

| Metric | Target | Implementation |
| --- | --- | --- |
| RPO (Recovery Point Objective) | 1 hour | Continuous replication |
| RTO (Recovery Time Objective) | 4 hours | Automated failover |
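
An RPO target is only useful if it is continuously measured. A self-contained sketch of the check an alerting job could run; where the last-replication timestamp comes from is left as a hypothetical input:

```python
from datetime import datetime, timedelta, timezone

RPO = timedelta(hours=1)

def rpo_breached(last_replicated_at: datetime, now: datetime | None = None) -> bool:
    """True if the newest replicated data point is older than the RPO."""
    now = now or datetime.now(timezone.utc)
    return now - last_replicated_at > RPO

# Example: a replica 90 minutes behind breaches the 1-hour RPO.
behind = datetime.now(timezone.utc) - timedelta(minutes=90)
print(rpo_breached(behind))  # True
```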

Scalability

Horizontal Scaling

```mermaid
flowchart TB
    subgraph "Scaling Strategy"
        CURRENT[Current: 3 Nodes] --> PHASE1[Phase 1: 5 Nodes]
        PHASE1 --> PHASE2[Phase 2: 10 Nodes]
        PHASE2 --> PHASE3[Phase 3: Multi-Site]

        subgraph "Capacity"
            C1[100 VMs<br/>50TB Storage]
            C2[500 VMs<br/>200TB Storage]
            C3[2000 VMs<br/>1PB Storage]
            C4[10000 VMs<br/>Multi-PB]
        end

        CURRENT --- C1
        PHASE1 --- C2
        PHASE2 --- C3
        PHASE3 --- C4
    end
```