Veylant IA — AI Governance Hub
The enterprise intelligence layer between your teams and the LLMs.
PII anonymization · Intelligent routing · GDPR/EU AI Act compliance · Cost control · Full audit trail
Why Veylant IA?
Most organizations adopting AI face the same problems: employees using personal ChatGPT accounts with sensitive data, no visibility into what is sent to which model, no cost control, and zero compliance posture for GDPR or the EU AI Act.
Veylant IA solves this by acting as a transparent reverse proxy in front of every LLM your company uses. It intercepts every request, strips PII before it reaches any provider, routes traffic according to configurable policies, logs everything to an immutable audit trail, and gives compliance officers a one-click GDPR Article 30 report.
```
        Your app / IDE / Slack bot
                   │
                   ▼
┌──────────────────────────────────────────────┐
│               Veylant IA Proxy               │
│   Auth → PII Scan → Route → Audit → Bill     │
└──────────────────────────────────────────────┘
      │              │               │
  OpenAI API    Anthropic API   Mistral / Ollama
```
Zero code change required. Point your `OPENAI_BASE_URL` at the proxy — everything else stays the same.
Features
| Category | Capability |
|---|---|
| Shadow AI Prevention | Drop-in proxy; works with any OpenAI-compatible SDK |
| PII Anonymization | 3-layer detection: regex → Presidio NER → LLM validation; pseudonymization with Redis mapping |
| Intelligent Routing | Priority-based rules engine (JSONB conditions: role, department, sensitivity, model, token estimate) |
| Fallback Chains | Automatic failover across providers with circuit breaker (threshold=5, TTL=60s) |
| GDPR Compliance | Art.30 registry, Art.15 access, Art.17 erasure, DPIA reports — all generated as PDF |
| EU AI Act | Risk classification (Minimal/Limited/High/Unacceptable) from a 5-question questionnaire |
| Audit Logs | Append-only ClickHouse storage; exportable as CSV; access-of-access logging |
| RBAC | 4 roles (admin, manager, user, auditor); per-model and per-department permissions |
| Cost Tracking | Token-level billing per provider; budget alerts by email |
| Rate Limiting | Token-bucket per tenant + per user; DB overrides without restart |
| Multi-tenancy | PostgreSQL Row-Level Security; logical isolation with no data bleed |
| Streaming | Full SSE pass-through; PII applied to request, not streamed response |
| Provider Hot-reload | Add/update/remove LLM providers from the admin UI without restarting the proxy |
| Observability | Prometheus metrics, Grafana dashboards, SLO 99.5%, 7 alerting rules (PagerDuty + Slack) |
Architecture
```
                        ┌─────────────────────────────────────────────────────┐
                        │                 Go Proxy [cmd/proxy]                │
                        │   chi · zap · viper · HS256 JWT · distroless image  │
                        │                                                     │
 Client request         │ ┌───────┐  ┌──────────┐  ┌──────────┐  ┌──────┐     │
 ──────────────────────►│ │ Auth  │→ │ Rate Lim │→ │ Router   │→ │ PII  │     │
 OpenAI-compatible      │ └───────┘  └──────────┘  └──────────┘  └──┬───┘     │
                        │                                           │         │
                        │ ┌─────────────────────────────────────────▼────┐    │
                        │ │          Provider Dispatch + Fallback        │    │
                        │ │ OpenAI · Anthropic · Azure · Mistral · Ollama│    │
                        │ └─────────────────────────────────────────┬────┘    │
                        │                                           │         │
                        │ ┌──────────┐  ┌──────────┐  ┌─────────────▼───┐     │
                        │ │ Billing  │  │ Metrics  │  │  Audit Logger   │     │
                        │ └──────────┘  └──────────┘  └─────────────────┘     │
                        └─────────────────────────────────────────────────────┘
                              │ gRPC (<2ms)                │ async batch
                              ▼                            ▼
                  ┌───────────────────────┐       ┌─────────────────┐
                  │      PII Service      │       │   ClickHouse    │
                  │  FastAPI + grpc.aio   │       │  (append-only)  │
                  │ Regex → Presidio NER  │       └─────────────────┘
                  │   → LLM validation    │
                  └───────────────────────┘
                              │
             ┌────────────────┼────────────────┐
             ▼                ▼                ▼
      PostgreSQL 16        Redis 7         Prometheus
      (RLS tenancy)     (rate limit,       + Grafana
                        PII mapping)
```
Stack at a glance:
| Layer | Technology |
|---|---|
| Proxy | Go 1.24, chi, zap, viper |
| PII sidecar | Python 3.12, FastAPI, Presidio, spaCy fr_core_news_lg |
| Relational DB | PostgreSQL 16 with Row-Level Security |
| Analytics | ClickHouse (append-only audit logs, TTL retention) |
| Cache / sessions | Redis 7, AES-256-GCM encrypted mappings |
| Frontend | React 18, TypeScript, Vite, shadcn/ui, Recharts |
| Observability | Prometheus, Grafana, Alertmanager |
| Secrets | HashiCorp Vault (90-day API key rotation) |
| Infra | Helm + Kubernetes (EKS), Terraform, Istio blue/green |
Quick Start
Prerequisites
- Docker + Docker Compose
- Go 1.24+ (for local development)
- buf (`brew install buf`) — for proto regeneration only
1. Clone and start
```bash
git clone https://github.com/DH7789-dev/Veylant-IA.git
cd Veylant-IA

# Copy the example config
cp config.yaml.example config.yaml   # or use the default config.yaml

# Start the full local stack
# (PostgreSQL · ClickHouse · Redis · PII service · Proxy · Prometheus · Grafana · React dashboard)
make dev
```
The first start downloads ~2 GB of images and model data. Subsequent starts take ~10 seconds.
2. Verify
```bash
make health
# → {"status":"ok","timestamp":"2026-01-01T00:00:00Z","version":"1.0.0"}
```
3. Open the dashboard
| Service | URL | Credentials |
|---|---|---|
| Dashboard | http://localhost:3000 | admin@veylant.dev / admin123 |
| Playground | http://localhost:8090/playground | — (public) |
| Documentation | http://localhost:3000/docs | — (public) |
| Grafana | http://localhost:3001 | admin / admin |
| Prometheus | http://localhost:9090 | — |
4. Send your first proxied request
# Obtain a JWT
```bash
# Obtain a JWT
TOKEN=$(curl -s -X POST http://localhost:8090/v1/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email":"admin@veylant.dev","password":"admin123"}' \
  | jq -r '.token')

# Send a request — identical to the OpenAI API
curl http://localhost:8090/v1/chat/completions \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role":"user","content":"Mon IBAN est FR7614508 — peux-tu m'\''aider?"}]
  }'
```
The proxy will strip `FR7614508` before sending the request upstream and return the response with the pseudonymized token.
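As a rough illustration of the first (regex) detection layer and the pseudonymization step, here is a minimal Python sketch. The pattern, placeholder format, and plain-dict mapping are assumptions for illustration only; the real service adds Presidio NER, an LLM validation pass, and an AES-encrypted Redis mapping.

```python
import re

# Simplified IBAN pattern, for illustration only (real detection is stricter
# and is followed by Presidio NER and an LLM validation pass).
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{1,30}\b")

def pseudonymize(text, mapping):
    """Replace each detected IBAN with a placeholder token and record the
    reverse mapping (the real service stores it encrypted in Redis)."""
    def repl(match):
        token = f"<IBAN_{len(mapping) + 1}>"
        mapping[token] = match.group(0)
        return token
    return IBAN_RE.sub(repl, text)
```

On the example request above, the upstream provider would see the placeholder instead of the IBAN, and the mapping allows the token to be resolved back on the way out.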
5. Use with any OpenAI-compatible SDK
```python
from openai import OpenAI
import httpx

# Get a JWT
resp = httpx.post(
    "http://localhost:8090/v1/auth/login",
    json={"email": "admin@veylant.dev", "password": "admin123"},
)
token = resp.json()["token"]

# Point the OpenAI SDK at Veylant IA
client = OpenAI(
    api_key=token,
    base_url="http://localhost:8090/v1",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello from Veylant IA!"}],
)
print(response.choices[0].message.content)
```
Configuration
All configuration lives in `config.yaml`. Every key can be overridden via an environment variable using the `VEYLANT_` prefix, with `.` replaced by `_`.
```yaml
server:
  port: 8090
  env: development          # "production" → fatal on any missing service
  tenant_name: "Acme Corp"

auth:
  jwt_secret: "change-me-in-production"
  jwt_ttl_hours: 24

pii:
  grpc_addr: "localhost:50051"
  timeout_ms: 100
  fail_open: true           # false in production

notifications:
  smtp:
    host: "smtp.example.com"
    port: 587
    username: "alerts@example.com"
    password: "..."
    from: "alerts@example.com"
    from_name: "Veylant IA"
```

```bash
# Environment variable override example
VEYLANT_AUTH_JWT_SECRET=my-secret \
VEYLANT_SERVER_ENV=production \
./bin/proxy
```
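The key-to-variable mapping can be sketched as a tiny helper (illustrative only; the proxy itself does this via viper):

```python
def env_key(config_key, prefix="VEYLANT"):
    """Map a config.yaml key path to its environment-variable override,
    per the documented rule: prefix + key path with '.' replaced by '_'."""
    return f"{prefix}_{config_key.replace('.', '_').upper()}"
```

For example, `env_key("auth.jwt_secret")` yields `VEYLANT_AUTH_JWT_SECRET`, matching the override shown above.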
Development
```bash
# Build
make build              # → bin/proxy

# Test
make test               # go test -race ./...
make test-cover         # HTML coverage report → coverage.html
make test-integration   # testcontainers (requires Docker)

# Single test
go test -run TestRuleEngine ./internal/routing/
pytest services/pii/tests/test_regex.py::test_iban

# Code quality
make lint               # golangci-lint + black --check + ruff check
make fmt                # gofmt + black
make check              # Full pre-commit: build + vet + lint + test

# Frontend
cd web && npm install && npm run dev   # Vite dev server on :3000 with HMR
cd web && npm run build                # Production build → web/dist/
cd web && npm run lint                 # ESLint (max-warnings: 0)

# Database
make migrate-up         # Apply pending migrations
make migrate-down       # Roll back last migration
make migrate-status     # Show current version

# Proto (only needed when editing .proto files)
make proto              # buf generate → gen/ and services/pii/gen/
make proto-lint         # buf lint
```
Development mode graceful degradation
When server.env=development, the proxy starts even if services are unavailable:
- PostgreSQL unreachable → routing disabled, feature flags use in-memory fallback
- ClickHouse unreachable → audit logging falls back to an in-memory `MemLogger`
- PII service unreachable → PII scanning disabled if `pii.fail_open=true`
In production mode, any unavailable service causes a fatal startup error.
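The degradation rules above can be sketched as a small decision helper (illustrative Python; the function and return values are assumptions, the real logic lives in the Go proxy):

```python
def on_pii_outage(env, fail_open):
    """Decide behaviour when the PII service is unreachable:
    fatal in production, skip-or-reject in development per pii.fail_open."""
    if env == "production":
        # Production: any unavailable service is a fatal error.
        raise RuntimeError("PII service unavailable")
    if fail_open:
        return "skip-pii"       # forward the request without scanning
    return "reject-request"     # fail closed
```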
API Reference
The proxy exposes a fully documented REST API. All endpoints return errors in OpenAI JSON format.
| Group | Endpoints |
|---|---|
| Auth | POST /v1/auth/login |
| Proxy | POST /v1/chat/completions (streaming supported) |
| PII | POST /v1/pii/analyze |
| Admin — Logs | GET /v1/admin/logs, GET /v1/admin/compliance/export/logs |
| Admin — Users | GET/POST /v1/admin/users, PUT/DELETE /v1/admin/users/{id} |
| Admin — Providers | GET/POST /v1/admin/providers, PUT/DELETE/POST-test /v1/admin/providers/{id} |
| Admin — Rules | GET/POST /v1/admin/routing-rules, PUT/DELETE /v1/admin/routing-rules/{id} |
| Admin — Rate Limits | GET/POST /v1/admin/rate-limits, PUT/DELETE /v1/admin/rate-limits/{id} |
| Admin — Flags | GET/POST /v1/admin/flags, PUT /v1/admin/flags/{key} |
| Compliance | GET/POST /v1/admin/compliance/entries, PUT/DELETE /v1/admin/compliance/entries/{id} |
| Compliance — GDPR | GET /v1/admin/compliance/report/article30 (PDF/JSON), POST /v1/admin/compliance/gdpr/access, DELETE /v1/admin/compliance/gdpr/erasure |
| Compliance — AI Act | POST /v1/admin/compliance/classify, GET /v1/admin/compliance/report/aiact, GET /v1/admin/compliance/dpia/{id} |
| Notifications | POST /v1/notifications/send |
- Interactive docs (Swagger UI): http://localhost:8090/docs
- Raw OpenAPI spec: http://localhost:8090/docs/openapi.yaml
Deployment
Docker Compose (single server)
```bash
# Production-like stack on a single machine
docker compose -f docker-compose.yml up -d

# Set secrets via environment
VEYLANT_AUTH_JWT_SECRET=your-secret \
VEYLANT_DATABASE_DSN=postgres://... \
docker compose up -d
```
Kubernetes + Helm
```bash
# Staging deploy
IMAGE_TAG=1.0.0 KUBECONFIG=~/.kube/config make helm-deploy

# Blue/green production deploy
make deploy-blue IMAGE_TAG=1.1.0        # Deploy to blue slot
make deploy-green IMAGE_TAG=1.1.0       # Switch traffic to green
make deploy-rollback ACTIVE_SLOT=blue   # Instant rollback (<5s)
```

The Helm chart is published to GHCR as an OCI artifact:

```bash
helm install veylant-proxy oci://ghcr.io/DH7789-dev/charts/veylant-proxy --version 1.0.0
```
Terraform (AWS EKS)
```bash
cd deploy/terraform
terraform init
terraform plan -var="cluster_name=veylant-prod" -var="region=eu-west-3"
terraform apply
```
The Terraform module provisions: EKS v1.31 (3-AZ node groups), RDS PostgreSQL, ElastiCache Redis, S3 backup bucket with IRSA, and configures Istio for blue/green traffic management.
Public site (Landing page + Documentation)
The standalone web-public/ app can be deployed independently:
```bash
# Build
docker build -f web-public/Dockerfile \
  --build-arg VITE_DASHBOARD_URL=https://app.veylant.io \
  --build-arg VITE_PLAYGROUND_URL=https://proxy.veylant.io/playground \
  -t veylant-public .

# Portainer stack — see web-public/docker-compose.yml
```
Security
Veylant IA was designed around a Zero Trust security model and underwent a grey-box penetration test (2026-06-09 to 2026-06-20) that reported zero Critical and zero High findings.
| Control | Implementation |
|---|---|
| Transport | TLS 1.3 external, mTLS between services |
| Authentication | HS256 JWT, bcrypt password hashing |
| Authorization | RBAC with PostgreSQL Row-Level Security |
| Secrets | AES-256-GCM at application level; API keys stored as SHA-256 hashes |
| API keys | HashiCorp Vault, 90-day rotation cycle |
| Audit | Every request logged; access to audit logs is itself logged |
| SAST | Semgrep rules enforced in CI (SQL injection, context propagation, sensitive field logging) |
| Container scan | Trivy (CRITICAL/HIGH blocking) |
| Secrets detection | gitleaks in CI |
| DAST | OWASP ZAP (non-blocking, main branch only) |
Responsible disclosure: Please report security vulnerabilities by opening a private advisory on GitHub or emailing security@veylant.io.
Observability
- Metrics: Prometheus scrapes the proxy on `:9090`; 7 pre-built alerting rules cover latency, error rate, circuit breaker state, certificate expiry, DB connections, and PII anomalies.
- Dashboards: two Grafana dashboards — `proxy-overview.json` (operational) and `production-slo.json` (SLO 99.5%, error budget burn rate).
- Alerting: PagerDuty for `critical` severity; Slack for `warning`.
- Load testing: k6 scenarios (`smoke`/`load`/`stress`/`soak`) — run with `make load-test SCENARIO=load`.
Tenant Onboarding
```bash
# After `make dev`, seed a new tenant with default routing rules and rate limits
./deploy/onboarding/onboard-tenant.sh

# Bulk import users from CSV (email, first_name, last_name, department, role)
./deploy/onboarding/import-users.sh users.csv
```
Project Structure
```
cmd/proxy/          Go entry point — wires all modules, starts HTTP server
internal/           Go modules (auth, middleware, router, pii, auditlog, compliance,
                    admin, billing, circuitbreaker, ratelimit, flags, crypto,
                    metrics, provider, proxy, apierror, health, notifications, config)
gen/                Generated gRPC stubs (buf generate — never edit manually)
services/pii/       Python FastAPI + gRPC PII detection service
proto/pii/v1/       gRPC .proto definitions
migrations/         golang-migrate SQL files (up/down pairs)
clickhouse/         ClickHouse DDL applied at startup
web/                React 18 dashboard (Vite, shadcn/ui)
  src/pages/docs/   Public documentation site (37 pages, shared with web-public)
web-public/         Standalone React app: landing page + docs (separate build)
test/integration/   Integration tests (testcontainers-go, //go:build integration)
test/k6/            k6 load test scripts (smoke/load/stress/soak)
deploy/             Helm, Kubernetes, Terraform, Prometheus, Grafana, Alertmanager
  onboarding/       Tenant seed scripts
docs/               PRD, execution plan, ADRs, runbooks, commercial docs
CHANGELOG.md        Full version history
```
Contributing
Contributions are welcome! Please read through the guidelines below before opening a PR.
Development workflow
- Fork the repository and create a feature branch from `main`
- Run `make check` before committing — this runs build, vet, lint, and tests
- Follow Conventional Commits: `feat:`, `fix:`, `chore:`, `docs:`
- Ensure Go internal packages maintain ≥80% test coverage; Python PII service ≥75%
- Integration tests (`//go:build integration`) must pass — they use testcontainers and require Docker
- Open a pull request against `main` — CI runs automatically
Code style
- Go: `goimports` with local prefix `github.com/veylant/ia-gateway`; three import groups (stdlib · external · internal)
- Python: `black` + `ruff`; no `eval()` or `exec()` on external data
- React: ESLint with `max-warnings: 0`; UI copy in French; `date-fns` with the `fr` locale
Custom Semgrep rules
CI enforces project-specific SAST rules:
- No `context.Background()` in HTTP handlers → use `r.Context()`
- No SQL string concatenation → use parameterized queries
- No sensitive fields in structured logs → use redaction helpers
- No hardcoded API keys (strings starting with `sk-`)
- `json.NewDecoder(r.Body)` must be preceded by `http.MaxBytesReader`
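As an illustration of the shape such a rule takes, a minimal Semgrep rule for the first check might look like the following (a hypothetical sketch, not the project's actual rule file, which may scope the pattern to handler functions):

```yaml
rules:
  - id: no-context-background-in-handlers
    languages: [go]
    severity: ERROR
    message: Use r.Context() in HTTP handlers instead of context.Background()
    pattern: context.Background()
```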
Adding a new LLM provider
Implement the `provider.Adapter` interface (`Send()`, `Stream()`, `Validate()`, `HealthCheck()`) in `internal/provider/<name>/`. Add the provider type to the factory in `internal/admin/provider_configs.go` and register it in the Helm chart's allowed-providers list.
License
MIT © 2026 Veylant IA — see LICENSE for details.
Built with Go · Python · React | Made in France 🇫🇷