# Architecture Overview

*High-level architecture overview of tasmanian.cloud's sovereign infrastructure.*
tasmanian.cloud is built on a sovereign-first architecture, ensuring all data remains within Tasmania while providing enterprise-grade reliability, security, and performance.
## Architecture Diagram

```mermaid
flowchart TB
    subgraph "Internet"
        USERS[Customers & End Users]
        CF[Cloudflare CDN/WAF]
    end

    subgraph "tasmanian.cloud - Launceston DC"
        subgraph "Edge Layer"
            LB1[HAProxy Load Balancer]
            WAF1[Local WAF]
        end
        subgraph "Compute Cluster"
            PVE1[Proxmox Node 1]
            PVE2[Proxmox Node 2]
            PVE3[Proxmox Node 3]
        end
        subgraph "Storage Cluster"
            CEPH1[Ceph OSD 1]
            CEPH2[Ceph OSD 2]
            CEPH3[Ceph OSD 3]
            CEPH_MON[Ceph MON/MGR]
        end
        subgraph "Services Layer"
            O2S[OpenSelfServe Portal]
            RUSTFS[RustFS Storage]
            MONITOR[Monitoring Stack]
        end
    end

    subgraph "Network"
        NETBIRD[Netbird Mesh VPN]
        FIREWALL[pfSense Cluster]
    end

    USERS --> CF
    CF --> LB1
    LB1 --> O2S
    O2S --> PVE1
    O2S --> PVE2
    O2S --> PVE3
    PVE1 --> CEPH1
    PVE2 --> CEPH2
    PVE3 --> CEPH3
    CEPH_MON --> CEPH1
    CEPH_MON --> CEPH2
    CEPH_MON --> CEPH3
    NETBIRD --> PVE1
    NETBIRD --> PVE2
    NETBIRD --> PVE3
```
## Core Principles

### 1. Sovereign by Design
All infrastructure is physically located in Tasmania. No data leaves Australian jurisdiction unless explicitly configured by the customer.
```mermaid
flowchart LR
    subgraph "Data Sovereignty"
        A[Customer Data] --> B[tasmanian.cloud]
        B --> C[Tasmania Only]
        C --> D[No Offshore Transfer]
    end

    subgraph "Compliance"
        E[ISO 27001]
        F[Essential 8]
        G[Privacy Act]
    end

    D --> E
    D --> F
    D --> G
```
### 2. Zero-Trust Security
Every component authenticates and authorizes every request. No implicit trust based on network location.
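As a sketch of what this means in practice, an internal service validates credentials on every call instead of trusting the caller's network segment. The example below assumes JWT bearer tokens verified with PyJWT; the key path, audience value, and scope claim are illustrative assumptions, not the documented internal mechanism.

```python
# Minimal per-request verification sketch. Assumes JWT bearer tokens and an
# RS256 issuer key; the actual internal credential format is an assumption.
import jwt  # PyJWT

with open("issuer_public_key.pem") as f:  # hypothetical key location
    PUBLIC_KEY = f.read()

def authorize(headers: dict, required_scope: str) -> dict:
    """Reject any request without a valid, in-scope token: no network trust."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        raise PermissionError("missing bearer token")
    claims = jwt.decode(
        auth.removeprefix("Bearer "),
        PUBLIC_KEY,
        algorithms=["RS256"],          # pin the algorithm explicitly
        audience="internal-services",  # hypothetical audience claim
    )
    if required_scope not in claims.get("scope", "").split():
        raise PermissionError("token lacks required scope")
    return claims  # verified caller identity, established on this request alone
```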
### 3. High Availability

No single points of failure: all critical services run in an HA configuration.
### 4. API-First
All services expose RESTful APIs for automation and integration.
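For instance, provisioning a VM should be a single authenticated HTTP call. The endpoint path, payload fields, and auth header below are illustrative assumptions, not the published tasmanian.cloud API schema:

```python
# Illustrative only: the endpoint path, payload fields, and auth header are
# assumptions, not the published tasmanian.cloud API schema.
import requests

API = "https://api.tasmanian.cloud/v1"  # hypothetical base URL

resp = requests.post(
    f"{API}/instances",
    headers={"Authorization": "Bearer <API_TOKEN>"},
    json={"name": "web-01", "type": "standard", "vcpus": 2, "memory_gb": 4},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. the new instance's ID and provisioning state
```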
## Compute Layer

### Proxmox VE Cluster
| Node | Specs | Role |
|---|---|---|
| pve1.tasmanian.cloud | 2x EPYC 7443, 512GB RAM, 8x 3.84TB NVMe | Primary compute |
| pve2.tasmanian.cloud | 2x EPYC 7443, 512GB RAM, 8x 3.84TB NVMe | Secondary compute |
| pve3.tasmanian.cloud | 2x EPYC 7443, 512GB RAM, 8x 3.84TB NVMe | Tertiary compute |
```mermaid
flowchart TB
    subgraph "Proxmox Cluster"
        CM[Corosync Cluster Manager]
        subgraph "Nodes"
            N1[Node 1]
            N2[Node 2]
            N3[Node 3]
        end
        subgraph "VM Distribution"
            VM1[Customer VMs]
            VM2[Service VMs]
            VM3[Management VMs]
        end
    end

    CM --> N1
    CM --> N2
    CM --> N3
    N1 --> VM1
    N2 --> VM2
    N3 --> VM3
```
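Since Proxmox VE is itself API-first, cluster state can be inspected over its standard REST API. A minimal sketch using an API token (the hostname and token are placeholders; `/api2/json/nodes` is the stock Proxmox VE endpoint):

```python
# Inspect cluster node status over the standard Proxmox VE REST API.
# Hostname and API token are placeholders for this deployment.
import requests

resp = requests.get(
    "https://pve1.tasmanian.cloud:8006/api2/json/nodes",
    headers={"Authorization": "PVEAPIToken=monitor@pve!readonly=<SECRET>"},
    timeout=10,
)
resp.raise_for_status()
for node in resp.json()["data"]:
    print(node["node"], node["status"], f"cpu={node.get('cpu', 0):.0%}")
```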
### Virtual Machine Types
| Type | Use Case | Specifications |
|---|---|---|
| Standard | General workloads | 1-16 vCPUs, 2-64GB RAM |
| High-Memory | Databases, caches | 1-32 vCPUs, 8-256GB RAM |
## Storage Layer

### Ceph Cluster
```mermaid
flowchart TB
    subgraph "Ceph Storage Architecture"
        MON["Ceph MON<br/>Quorum: 3 nodes"]
        MGR["Ceph MGR<br/>Active/Standby"]
        subgraph "OSD Pool"
            OSD1["OSD 1-8<br/>Node 1"]
            OSD2["OSD 9-16<br/>Node 2"]
            OSD3["OSD 17-24<br/>Node 3"]
        end
        subgraph "Storage Pools"
            POOL1["VM Disks<br/>3x Replica"]
            POOL2["Object Storage<br/>Erasure Coding"]
            POOL3["Backups<br/>2x Replica"]
        end
    end

    MON --> MGR
    OSD1 --> POOL1
    OSD2 --> POOL1
    OSD3 --> POOL1
```
### Storage Tiers
| Tier | Performance | Use Case | Redundancy |
|---|---|---|---|
| NVMe Hot | 500K+ IOPS | Databases, active VMs | 3x Replica |
| NVMe Warm | 100K+ IOPS | General workloads | 3x Replica |
| HDD Cold | 200+ MB/s | Archives, backups | Erasure Coding |
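Capacity across these tiers can be read programmatically with the librados Python bindings (`python3-rados`). A minimal sketch, assuming the default config and keyring paths on a cluster member:

```python
# Read cluster-wide capacity via librados (python3-rados). Assumes default
# config/keyring paths on a cluster member; adjust for the real layout.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    stats = cluster.get_cluster_stats()  # raw KiB across all OSDs
    used_pct = 100 * stats["kb_used"] / stats["kb"]
    print(f"raw: {stats['kb'] / 1024**3:.1f} TiB, used: {used_pct:.1f}%")
    print("pools:", cluster.list_pools())  # e.g. VM disk, object, backup pools
finally:
    cluster.shutdown()
```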
### RustFS Integration
For S3-compatible object storage with post-quantum cryptography:
```mermaid
flowchart LR
    CLIENT[S3 Client] --> API[S3 API Gateway]
    API --> RUSTFS[RustFS Cluster]
    RUSTFS --> PQ[Post-Quantum Crypto]
    PQ --> DISK[Encrypted Storage]

    subgraph "RustFS Features"
        KYBER[Kyber-768 KEM]
        DILITHIUM[Dilithium-3 Signatures]
        AES[AES-256-GCM]
    end

    PQ --> KYBER
    PQ --> DILITHIUM
    PQ --> AES
```
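Because the gateway speaks the S3 API, standard tooling such as boto3 works unchanged; only the endpoint differs. The endpoint URL, credentials, and bucket below are placeholders:

```python
# Standard S3 tooling works against the gateway; the endpoint URL,
# credentials, and bucket name here are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.tasmanian.cloud",  # hypothetical endpoint
    aws_access_key_id="<ACCESS_KEY>",
    aws_secret_access_key="<SECRET_KEY>",
)
s3.upload_file("report.pdf", "my-bucket", "reports/report.pdf")
for obj in s3.list_objects_v2(Bucket="my-bucket", Prefix="reports/").get("Contents", []):
    print(obj["Key"], obj["Size"])
```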
## Network Architecture

### Physical Network
```mermaid
flowchart TB
    subgraph "Network Topology"
        INTERNET[Internet]
        subgraph "Edge"
            FW1[pfSense Primary]
            FW2[pfSense Secondary]
            LB[HAProxy Cluster]
        end
        subgraph "Core"
            CORE1[Core Switch 1]
            CORE2[Core Switch 2]
        end
        subgraph "Access"
            TOR1[ToR Switch 1]
            TOR2[ToR Switch 2]
            TOR3[ToR Switch 3]
        end
    end

    INTERNET --> FW1
    INTERNET --> FW2
    FW1 --> CORE1
    FW2 --> CORE2
    CORE1 --> TOR1
    CORE1 --> TOR2
    CORE2 --> TOR3
```
### Network Segmentation
| VLAN | Purpose | CIDR |
|---|---|---|
| 10 | Management | 10.0.10.0/24 |
| 20 | Proxmox Cluster | 10.0.20.0/24 |
| 30 | Ceph Storage | 10.0.30.0/24 |
| 40 | Customer VMs | 10.0.40.0/22 |
| 50 | Services | 10.0.50.0/24 |
| 100 | Netbird VPN | 100.64.0.0/10 |
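The segmentation table translates directly into tooling. A small standard-library sketch that classifies an address by VLAN:

```python
# Map an address to its VLAN using the segmentation table above
# (standard library only).
from ipaddress import ip_address, ip_network

VLANS = {
    10: ip_network("10.0.10.0/24"),    # Management
    20: ip_network("10.0.20.0/24"),    # Proxmox Cluster
    30: ip_network("10.0.30.0/24"),    # Ceph Storage
    40: ip_network("10.0.40.0/22"),    # Customer VMs
    50: ip_network("10.0.50.0/24"),    # Services
    100: ip_network("100.64.0.0/10"),  # Netbird VPN
}

def vlan_for(addr: str) -> int | None:
    ip = ip_address(addr)
    return next((vid for vid, net in VLANS.items() if ip in net), None)

print(vlan_for("10.0.41.7"))   # 40 (10.0.40.0/22 spans 10.0.40.0-10.0.43.255)
print(vlan_for("100.92.1.5"))  # 100
```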
### Netbird Mesh VPN
External access to customer resources is provided exclusively via VPN using Netbird. We do not offer public IPs or floating IPs due to limited IPv4 resources.
```mermaid
flowchart TB
    subgraph "Netbird Mesh"
        NB[Netbird Management]
        subgraph "Peers"
            PEER1[Proxmox Node 1]
            PEER2[Proxmox Node 2]
            PEER3[Proxmox Node 3]
            PEER4[Customer Site 1]
            PEER5[Customer Site 2]
        end
        subgraph "Access Control"
            ACL1[Group: Infrastructure]
            ACL2[Group: Customers]
            ACL3[Group: Management]
        end
    end

    NB --> PEER1
    NB --> PEER2
    NB --> PEER3
    NB --> PEER4
    NB --> PEER5
    ACL1 --> PEER1
    ACL2 --> PEER4
    ACL3 --> PEER5
```
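Peer state can be inspected through NetBird's management API. The self-hosted management URL below is a placeholder, and the token scheme and response fields follow NetBird's published API, so treat both as assumptions to verify against the deployed version:

```python
# List enrolled peers from the NetBird management API. The self-hosted URL
# is a placeholder; the Token auth scheme and response fields follow
# NetBird's published API and should be checked against your version.
import requests

resp = requests.get(
    "https://netbird.tasmanian.cloud/api/peers",  # hypothetical management URL
    headers={"Authorization": "Token <PERSONAL_ACCESS_TOKEN>"},
    timeout=10,
)
resp.raise_for_status()
for peer in resp.json():
    print(peer["name"], peer["ip"], "connected" if peer["connected"] else "offline")
```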
## Security Layer

### Defense in Depth
```mermaid
flowchart TB
    subgraph "Security Layers"
        L1["Layer 1: Perimeter<br/>Cloudflare WAF<br/>DDoS Protection"]
        L2["Layer 2: Network<br/>Firewall Rules<br/>IDS/IPS"]
        L3["Layer 3: Host<br/>Wazuh EDR<br/>Tetragon eBPF"]
        L4["Layer 4: Application<br/>Input Validation<br/>Auth/AuthZ"]
        L5["Layer 5: Data<br/>Encryption at Rest<br/>Encryption in Transit"]
    end

    L1 --> L2
    L2 --> L3
    L3 --> L4
    L4 --> L5
```
### Wazuh XDR
| Component | Function | Coverage |
|---|---|---|
| SIEM | Log aggregation and analysis | All systems |
| EDR | Endpoint detection and response | All VMs |
| FIM | File integrity monitoring | Critical files |
| Vulnerability detection | CVE scanning (weekly) | All systems |
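Agent coverage can be audited via the Wazuh API (port 55000 by default). This sketch assumes you already hold a JWT obtained through the authentication flow for your Wazuh version; the hostname is a placeholder:

```python
# Audit active agents via the Wazuh API (default port 55000). Assumes a JWT
# already obtained through the auth flow for your Wazuh version; the
# hostname is a placeholder.
import requests

resp = requests.get(
    "https://wazuh.tasmanian.cloud:55000/agents",
    headers={"Authorization": "Bearer <JWT>"},
    params={"status": "active"},
    timeout=10,
)
resp.raise_for_status()
for agent in resp.json()["data"]["affected_items"]:
    print(agent["id"], agent["name"], agent["status"])
```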
## Service Layer

### Core Services
```mermaid
flowchart TB
    subgraph "tasmanian.cloud Services"
        subgraph "Management"
            O2S["OpenSelfServe<br/>o2s.tasmanian.cloud"]
            API[REST API]
            CLI[CLI Tool]
        end
        subgraph "Compute"
            VM[VPS/VMs]
            K8S[Kubernetes]
            TEMPLATES[Templates]
        end
        subgraph "Storage"
            BLOCK[Block Storage]
            OBJECT[S3-Compatible]
            BACKUP[Backup Service]
        end
        subgraph "Networking"
            VPN[Netbird VPN]
            PRIVATE[Private Networks]
        end
    end

    O2S --> API
    API --> VM
    API --> K8S
    API --> TEMPLATES
    API --> BLOCK
    API --> OBJECT
    API --> VPN
```
## Monitoring & Observability

### Stack Components
| Component | Purpose | Stack |
|---|---|---|
| Metrics | Time-series data | Prometheus + Grafana |
| Logs | Centralized logging | Loki + Grafana |
| Traces | Distributed tracing | Tempo |
| Alerts | Alert management | Alertmanager |
| Uptime | Service monitoring | Uptime Kuma |
```mermaid
flowchart LR
    subgraph "Monitoring Pipeline"
        AGENTS["Node Exporters<br/>VM Agents"] --> PROM[Prometheus]
        LOGS["Application Logs<br/>System Logs"] --> LOKI[Loki]
        TRACES[Request Traces] --> TEMPO[Tempo]
        PROM --> GRAFANA[Grafana Dashboards]
        LOKI --> GRAFANA
        TEMPO --> GRAFANA
        PROM --> ALERT[Alertmanager]
        ALERT --> PAGER[PagerDuty/Slack]
    end
```
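Anything in this pipeline is queryable over Prometheus's standard HTTP API. For example, listing instances that are currently down (the Prometheus hostname is a placeholder):

```python
# Query current availability from Prometheus's standard HTTP API
# (/api/v1/query). The Prometheus hostname is a placeholder.
import requests

resp = requests.get(
    "https://prometheus.tasmanian.cloud/api/v1/query",
    params={"query": 'up{job="node"} == 0'},  # instances reporting down
    timeout=10,
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print("DOWN:", series["metric"]["instance"])
```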
## Disaster Recovery

### Backup Strategy
| Data Type | Frequency | Retention | Location |
|---|---|---|---|
| VM Snapshots | Daily | 30 days | On-site |
| Database | Hourly | 7 days | On-site |
| Object Storage | Real-time | Versioned | On-site |
| Off-site Backup | Daily | 90 days | Secondary DC |
### Recovery Objectives
| Metric | Target | Implementation |
|---|---|---|
| RPO (Recovery Point Objective) | 1 hour | Continuous replication |
| RTO (Recovery Time Objective) | 4 hours | Automated failover |
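A simple way to keep the RPO honest is to alert whenever replication lag exceeds the objective. A minimal sketch (how the last-replication timestamp is obtained is deployment-specific, so it is a parameter here):

```python
# Sketch of an RPO compliance check: flag when the newest replicated
# point is older than the 1-hour objective. How the last-replication
# timestamp is obtained is deployment-specific, so it is a parameter.
from datetime import datetime, timedelta, timezone

RPO = timedelta(hours=1)

def rpo_met(last_replication: datetime, now: datetime | None = None) -> bool:
    now = now or datetime.now(timezone.utc)
    return now - last_replication <= RPO

last = datetime(2025, 6, 1, 10, 30, tzinfo=timezone.utc)
print(rpo_met(last, now=datetime(2025, 6, 1, 11, 15, tzinfo=timezone.utc)))  # True: 45 min lag
```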
## Scalability

### Horizontal Scaling
```mermaid
flowchart TB
    subgraph "Scaling Strategy"
        CURRENT[Current: 3 Nodes] --> PHASE1[Phase 1: 5 Nodes]
        PHASE1 --> PHASE2[Phase 2: 10 Nodes]
        PHASE2 --> PHASE3[Phase 3: Multi-Site]
        subgraph "Capacity"
            C1["100 VMs<br/>50TB Storage"]
            C2["500 VMs<br/>200TB Storage"]
            C3["2000 VMs<br/>1PB Storage"]
            C4["10000 VMs<br/>Multi-PB"]
        end
        CURRENT --- C1
        PHASE1 --- C2
        PHASE2 --- C3
        PHASE3 --- C4
    end
```