1. API keys
Every project mints two kinds of API keys, prefixed so the difference is visible at a glance:
- gflux_pub_… is the publishable kind. Safe to embed in browser code. Ingest-only. Rate-limited per IP. An optional per-key origin allowlist locks ingest to a specific list of domains, so a key that leaks from one customer's site can't be used from another.
- gflux_secret_… is the server kind. Treated like a database password. Rate-limited per key (higher ceilings). Unlocks the management API and the identity-alias / GDPR-erasure endpoints for scripted workflows.
Every endpoint declares which key types it will accept. A correctly authenticated key whose key_type is not in the endpoint's allowed set yields 403 with error: "wrong_key_type", a status code distinct from the 401 returned for unknown keys, so customers can tell "wrong key" from "unknown key" without log-spelunking.
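The prefix classification and the 401/403 split above can be sketched as follows. This is a minimal illustration, not our actual auth pipeline; the names (classify_key, check_access, known_hashes) are hypothetical.

```python
import hashlib
from typing import Optional

# Prefixes from the text map a raw key to its type at a glance.
KEY_PREFIXES = {"gflux_pub_": "publishable", "gflux_secret_": "server"}

def classify_key(raw_key: str) -> Optional[str]:
    """Return the key type implied by the prefix, or None if unrecognized."""
    for prefix, key_type in KEY_PREFIXES.items():
        if raw_key.startswith(prefix):
            return key_type
    return None

def check_access(raw_key: str, known_hashes: set, allowed_types: set) -> int:
    """Return the HTTP status an endpoint would answer with."""
    key_type = classify_key(raw_key)
    digest = hashlib.sha256(raw_key.encode()).hexdigest()
    if key_type is None or digest not in known_hashes:
        return 401  # unknown key
    if key_type not in allowed_types:
        return 403  # authenticated, but wrong key type for this endpoint
    return 200
```

A management endpoint would pass `allowed_types={"server"}`, so a leaked publishable key gets a clear 403 "wrong_key_type" instead of a misleading 401.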
1.1 Encryption at rest
Only the SHA-256 hash of an API key is used for authentication. The raw key is also stored encrypted with AES-256-GCM and a 12-byte random nonce per row, AAD bound to the owning project_id so a stolen ciphertext cannot be replayed under a different project. The master key lives in the control plane's environment, never in the database. A leaked database backup is opaque ciphertext on its own.
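A minimal sketch of that scheme, using the `cryptography` package's AESGCM primitive. The row shape and function names are illustrative; only the mechanics (SHA-256 hash for auth lookups, 12-byte nonce, AAD bound to project_id) come from the text.

```python
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def store_key(master_key: bytes, project_id: str, raw_api_key: str) -> dict:
    """Produce the row we would persist: a hash for authentication plus
    AES-256-GCM ciphertext bound to the owning project via AAD."""
    nonce = os.urandom(12)        # fresh 12-byte random nonce per row
    aad = project_id.encode()     # ties the ciphertext to this project
    ciphertext = AESGCM(master_key).encrypt(nonce, raw_api_key.encode(), aad)
    return {
        "key_hash": hashlib.sha256(raw_api_key.encode()).hexdigest(),
        "nonce": nonce,
        "ciphertext": ciphertext,
    }

def reveal_key(master_key: bytes, project_id: str, row: dict) -> str:
    """Decryption raises InvalidTag if the ciphertext is replayed under a
    different project_id, because the AAD no longer matches."""
    aad = project_id.encode()
    plaintext = AESGCM(master_key).decrypt(row["nonce"], row["ciphertext"], aad)
    return plaintext.decode()
```

Because the master key is supplied from the environment, the same two functions are the only seam that would change in a move to KMS or Vault Transit.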
The encryption seam is one file. Moving to AWS KMS, GCP KMS, or Vault Transit is a swap of two functions; the column layout, the reveal endpoint, and the audit log don't need to change.
1.2 Rotation
Customers rotate a key from the dashboard with one click. The rotate flow:
- Mints a new key inheriting the old one's type, scope, and (for publishable) origin allowlist.
- Clamps the old key's expiry to now + 5 minutes, giving the customer a graceful swap window.
- After the grace window, the old key naturally stops authenticating via the standard expiry filter. No worker, no scheduler. The auth pipeline already does the work.
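The clamp step above is the whole trick: no expiry worker is needed because auth already filters expired keys. A sketch, assuming a per-key expires_at timestamp (the column name is our illustration, not necessarily the real schema):

```python
from datetime import datetime, timedelta, timezone

GRACE = timedelta(minutes=5)

def clamp_expiry(old_expires_at, now=None):
    """Clamp the rotated-out key's expiry to now + 5 minutes. A key that
    was already due to expire sooner keeps its earlier expiry."""
    now = now or datetime.now(timezone.utc)
    cutoff = now + GRACE
    if old_expires_at is None or old_expires_at > cutoff:
        return cutoff
    return old_expires_at
```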
2. Audit trail
Every successful management-API mutation writes one row to audit_events in the same Postgres transaction as the change it describes. Failure modes are explicit:
- Mutation rolled back, audit row rolled back. No orphan log rows.
- Audit insert fails, whole mutation fails. No untracked changes.
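The same-transaction property can be demonstrated with sqlite3 standing in for Postgres (the real system is Postgres; table and column names here are illustrative, mirroring the text):

```python
import sqlite3
import json

def apply_mutation(conn, key_id, new_name, actor):
    """Mutation and audit row commit or roll back together."""
    with conn:  # one transaction: commits on success, rolls back on error
        before = conn.execute(
            "SELECT name FROM api_keys WHERE id = ?", (key_id,)
        ).fetchone()
        conn.execute(
            "UPDATE api_keys SET name = ? WHERE id = ?", (new_name, key_id)
        )
        conn.execute(
            "INSERT INTO audit_events (actor, action, target, snapshot)"
            " VALUES (?, ?, ?, ?)",
            (actor, "rename_key", f"api_keys:{key_id}",
             json.dumps({"before": before[0], "after": new_name})),
        )
```

If the audit insert raises, the surrounding transaction rolls the mutation back too, which is exactly the "no untracked changes" guarantee.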
Each row carries the actor (user or API key id), the action, the target table + id, and a before/after JSON snapshot. Project admins can read their own project's audit log via Row-Level Security; nobody but the platform service-role can write.
Required for SOC 2 CC7.2, ISO 27001 A.12.4, and GDPR Art. 30 records-of-processing. We're not certified for any of those yet, but the controls are in place so when we are, the evidence is reconstructible from a single SQL query.
3. Rate limits
Two windows per request, sliding-bucket:
- Publishable: 1,000 events/minute per /24-anonymized IP, 30,000 events/hour per IP.
- Server: 10,000 events/minute per key, 500,000 events/hour per key.
These are operator defaults, configurable per environment. They fail open, so a Redis outage cannot block legitimate ingest. The limiter is a guardrail against abuse, not an authentication layer (that already happened upstream).
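A sketch of the two-window check with fail-open behavior. The Redis lookup is abstracted behind a count_in_window callable (our invention for illustration), which is assumed to return the number of events seen in the trailing window for a bucket (a key id or anonymized IP):

```python
# Operator defaults from the text: (window seconds, ceiling) pairs per key type.
LIMITS = {
    "publishable": [(60, 1_000), (3_600, 30_000)],
    "server": [(60, 10_000), (3_600, 500_000)],
}

def allow_event(key_type, bucket, count_in_window):
    """Check both windows; any Redis failure fails open."""
    for window_seconds, ceiling in LIMITS[key_type]:
        try:
            if count_in_window(bucket, window_seconds) >= ceiling:
                return False  # over the ceiling for this window
        except Exception:
            return True  # fail open: a Redis outage must not block ingest
    return True
```

Failing open is safe precisely because authentication already happened upstream; the limiter only throttles abuse volume.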
4. Privacy & consent
4.1 GPC (Global Privacy Control)
GPC is honored by default at both the SDK and the ingest endpoint. When a browser sends Sec-GPC: 1 we drop the event silently with HTTP 202 so the SDK does not retry. Honoring GPC is legally binding under the CCPA: California fined Sephora $1.2M in 2022 for ignoring it.
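The drop-with-202 behavior, sketched framework-agnostically (the real ingest endpoint runs behind Django; handle_event and store are illustrative names):

```python
def handle_event(headers, store):
    """Return the HTTP status for an ingest request."""
    if headers.get("Sec-GPC", "").strip() == "1":
        return 202  # drop silently; same status, so the SDK does not retry
    store(headers)  # normal path: enqueue the event
    return 202
```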
4.2 DNT (Do Not Track)
DNT is configurable per project. The default is to ignore it, matching industry behavior (GA, Segment, Mixpanel). Projects that want to honor DNT set the mode to honor; the SDK then buffers pre-consent events in memory and replays them on identify() or banner accept, with each replayed event tagged so backend analytics can segment them.
4.3 GDPR Article 17 (erasure)
A single SQL function, public.erase_profile(), cascades a hard delete across every PII table for one profile in one transaction. The function is SECURITY DEFINER and writes an audit row before it returns. The same function is wrapped by an admin REST endpoint and (per the keen-donut plan) is exposed for server-key automation so customers can fulfill a deletion request from their own backend.
4.4 IP anonymization
Raw IP addresses never touch durable storage. IPv4 is truncated to /24, IPv6 to /48, at ingest. The truncated value goes onto the Redis queue and into the database. Both the application layer and a matching SQL function (public.anonymize_ip()) implement the same rule so backfill scripts and worker ingestion paths can't drift.
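The application-layer half of that rule fits in a few lines of stdlib Python; this sketch mirrors the truncation described above (the real function names may differ):

```python
import ipaddress

def anonymize_ip(raw: str) -> str:
    """Truncate IPv4 to its /24 network address, IPv6 to its /48."""
    addr = ipaddress.ip_address(raw)
    prefix = 24 if addr.version == 4 else 48
    network = ipaddress.ip_network(f"{raw}/{prefix}", strict=False)
    return str(network.network_address)
```

The SQL twin, public.anonymize_ip(), implements the identical rule so the worker path and backfill scripts produce byte-for-byte matching values.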
5. Data residency
Customer event data lives in Supabase Postgres in the EU (Frankfurt). The control plane runs on Hetzner in the EU. Cloudflare sits in front of the API and dashboard for TLS termination and DDoS scrubbing. Cloudflare may briefly process request data for that purpose; nothing about a customer's events leaves the EU after the proxy hop.
Customer-managed encryption keys (CMEK) and per-customer region selection are not in scope for the private beta. If your contract requires a specific region or a customer-managed key, talk to us before signing up.
6. Test vs live separation
Every API key carries an environment label (test or live). The ingest layer tags every event with test_mode derived from the key's environment. Dashboard reads default to live data only; customers explicitly toggle into test data when they need to debug. Smoke traffic from CI never pollutes a customer's production charts.
7. Network & transport
- All public surfaces are HTTPS-only. Customer tracking domains terminate with Let's Encrypt certificates provisioned on-demand by Caddy, gated by a two-layer hostname check (Caddy callback + Django middleware) so a random hostname can't cause a cert issuance.
- The control plane and Redis are bound to loopback on the droplet; only Caddy listens on the public ports. There is no path to the database that doesn't go through the Django auth layer first.
- Service-to-service: Postgres connections use sslmode=require. Redis is internal-network only.
8. What we don't have yet
We're a private beta. Here's what isn't shipped yet:
- SOC 2 Type 1 or Type 2 attestation. Controls are structured to be ready for it; the formal audit waits until post-GA.
- SAML / SSO login. Today the dashboard authenticates via Supabase Auth (email + Google / GitHub / Apple).
- Customer-managed encryption keys (CMEK).
- An independent third-party pen test on the production environment. Internal threat modeling and code review only so far.
- A status page. www.getfluxly.com is the current source of truth for "is the service up."
- mTLS or IP allowlists for server-key authentication.
Each of these is on a roadmap. If a specific control would gate your evaluation, email hello@getfluxly.com and we'll tell you honestly where it sits.
9. Reporting a vulnerability
Found something? Email security@getfluxly.com with reproduction steps. We'll acknowledge within two business days and keep you in the loop while we fix it.
We don't run a paid bounty yet, but we will publicly credit any disclosure that materially helps us (with your permission) on the changelog at getfluxly.com/changelog.
10. Related reading
Engineers who want the full operator-facing detail can read the internal write-ups that this page summarizes. They live in the repo under docs/security-and-compliance/:
- api-keys-human.md: the publishable/server split, rotation, rate limits, origin allowlist, and the test/live data store.
- key-management.md: AES-GCM details, AAD binding, KMS migration path.
- audit-logging.md: full audit_events schema and retention.
- dnt-and-gpc.md: consent decision tree, buffer-then-replay, where SDK and backend split work.
- gdpr-compliance.md: Art. 5 / 17 / 25 / 30 / 32 obligations and where each is enforced.
- ip-anonymization.md: the matching Python and SQL implementations of the IP truncation rule.
- data-residency.md: region decision + what EU customers should expect.
- access-model.md + rls-and-grants.md: Row-Level Security policies and per-role grants.