Security

Open source, as-is, early beta. We're transparent about where we are and where we're going.

Own your security

AI Curator is open source software you run yourself. You deploy it, you configure it, you expose it to the internet (or you don't). You manage access. We provide the software — you provide the operational security.

This isn't a temporary gap. "Own your security" is the model, and it matches the product positioning: your infrastructure, your data, your security posture. If we claimed to handle security for you, it would contradict the core message.

Your infrastructure. Your data. Your security posture. The reason you chose local-first is the reason you own your security.

Current state

We believe in being specific about where we are rather than making vague promises. Here's the honest state of security in AI Curator today.

Active: Open Source
MIT license. Full source on GitHub. You can read every line of code, audit it yourself, or fork it. Transparency is the most fundamental security guarantee.
Active: Automated Scans
Dependency scanning and static analysis run on every commit. Known-vulnerable dependencies and common code patterns are caught before they ship.
Current: Early Beta
Provided as-is. No warranty, no SLA, no security guarantees. You run it at your own risk. This is honest software — we don't make claims we can't back up.
Planned: Human Audit
A professional security audit will be commissioned once development stabilizes out of beta. The results will be published. This is a concrete commitment, not a vague promise.

Security principles

Transparency over theater

Every company says they "take security seriously." That means nothing. Instead, we're specific: automated scans on every commit, human audit coming post-beta, open source code you can read. Vague promises are theater. Specifics are transparency.

Local-first means local-secure

AI Curator runs on your infrastructure by default. It doesn't phone home, doesn't send telemetry, doesn't make network requests you didn't initiate. Data stays in local SQLite at ~/.curator/. You can back it up, encrypt it, or air-gap it. It's yours.

Your deployment, your attack surface

If you run AI Curator on your local machine behind a firewall, the attack surface is minimal. If you deploy it to a VPS with the API exposed to the internet, you own that attack surface — just like any other self-hosted software. The software is the same. The security posture changes based on how you deploy it.
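One way to own that attack surface is to never expose the API or Web UI directly. As a sketch (not an official configuration), here is what fronting the service with Caddy v2 and basic auth could look like. The hostname curator.example.com and the port 8080 are hypothetical, and the bcrypt hash placeholder would come from running caddy hash-password:

```
# Caddyfile sketch (Caddy v2) — assumes AI Curator listens on localhost:8080
curator.example.com {
    # Require a username/password before any request reaches the app.
    basic_auth {
        admin REPLACE_WITH_BCRYPT_HASH
    }
    # Forward authenticated traffic to the locally bound service.
    reverse_proxy localhost:8080
}
```

A side benefit of this setup is that Caddy provisions TLS for the hostname automatically, so credentials and data are encrypted in transit.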

The landscape is shifting

The security landscape is evolving rapidly — new attack vectors, new vulnerability classes, new regulatory requirements. ElGap has deliberately chosen the "own your security" model because it matches the product positioning and because it's honest about what early beta software can guarantee. As the landscape stabilizes and AI Curator matures, this approach will be revisited.

Securing your deployment

If you deploy AI Curator beyond your local machine, here are practical recommendations. These aren't official security guarantees — they're common-sense practices for self-hosting any service.

Don't expose the API to the public internet without auth
The Live Capture API and Web UI have no built-in authentication. If you deploy to a server, put it behind a reverse proxy with auth (basic auth, Cloudflare Access, Tailscale, etc.).
Encrypt the data directory
Your data lives at ~/.curator/. Use filesystem-level encryption (LUKS, FileVault, BitLocker) if the data is sensitive.
Back up regularly
It's a local SQLite database. Back it up like you would any other database — scheduled copies, S3 sync, whatever fits your workflow.
Keep dependencies updated
When you update AI Curator, you get the latest dependency tree with known vulnerabilities patched. Run brew upgrade ai-curator or pull the latest Docker image regularly.
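The backup recommendation above can be sketched as a small script. Note the assumptions: the SQLite file is assumed to be named curator.db directly under ~/.curator/ (check your install for the actual filename), and SQLite's online .backup command is used so the copy stays consistent even while the service is writing.

```shell
#!/bin/sh
# Backup sketch for the AI Curator data directory.
# Assumption: the database file is ~/.curator/curator.db
# (the filename is hypothetical — verify it on your install).

DB="${HOME}/.curator/curator.db"
BACKUP_DIR="${HOME}/curator-backups"
mkdir -p "$BACKUP_DIR"

if [ -f "$DB" ]; then
  # SQLite's online backup produces a consistent snapshot even
  # if the service is writing to the database at the same time.
  sqlite3 "$DB" ".backup '${BACKUP_DIR}/curator-$(date +%F).db'"
else
  echo "no database found at $DB" >&2
fi
```

Run it from cron or a systemd timer, and sync the backup directory to off-site storage (S3, another machine) if the data matters to you.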

Audit roadmap

Done: Automated dependency scanning
Every commit runs dependency vulnerability scans. Known CVEs are caught before merge.
Done: Static code analysis
Automated SAST checks run on every commit to detect common security anti-patterns.
Planned: Professional security audit
A third-party security audit will be commissioned once development stabilizes out of beta. Results will be published publicly.
Planned: Coordinated vulnerability disclosure
A formal process for reporting and disclosing vulnerabilities will be established post-audit.

We're transparent about where we are: beta software, automated scans, human audit planned. When the audit is complete, the findings will be published here. Until then, open source, as-is, own your security.

Reporting vulnerabilities

If you discover a security vulnerability in AI Curator, please report it responsibly. We don't have a formal bug bounty program yet, but we do review and respond to every report.

How to report

  1. Open a private issue via GitHub Security Advisories
  2. Include the vulnerability details, affected versions, and any proof of concept
  3. Allow reasonable time for review and a fix before public disclosure