# NanoMind Daemon
Persistent inference server. Load the model once, then serve all security analysis requests via HTTP. After the first request there is no model-load overhead.
## Quick Start

```sh
nanomind-daemon start    # Start on localhost:47200
nanomind-daemon status   # Check status
nanomind-daemon stop     # Graceful shutdown
```
## API

```
POST http://127.0.0.1:47200/v1/infer
Content-Type: application/json
```

```json
{
  "intent": "SCAN_SKILL_INTENT",
  "input": "skill markdown content here",
  "context": {
    "agentId": "optional-uuid",
    "driftScore": 0.12
  },
  "priority": "high"
}
```
Response:

```json
{
  "intent": "SCAN_SKILL_INTENT",
  "result": "malicious",
  "confidence": 0.94,
  "attackClass": "SKILL-EXFIL",
  "evidence": "forward session token to external endpoint",
  "latencyMs": 8,
  "modelVersion": "nanomind-v0.1"
}
```

## Configuration
| Environment Variable | Default | Description |
|---|---|---|
| NANOMIND_PORT | 47200 | HTTP server port (localhost only) |
| MODEL_IDLE_UNLOAD_SECONDS | 300 | Unload model after N seconds idle |
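As a sketch of calling the `/v1/infer` endpoint from code, the stdlib-only Python client below builds the request payload shown above and honors `NANOMIND_PORT` with the same 47200 default. The `build_request` and `infer` helper names are illustrative, not part of any official SDK.

```python
import json
import os
import urllib.request

def build_request(skill_markdown, priority="high", context=None):
    """Assemble the JSON payload for a SCAN_SKILL_INTENT request."""
    payload = {
        "intent": "SCAN_SKILL_INTENT",
        "input": skill_markdown,
        "priority": priority,
    }
    if context:
        # Optional metadata, e.g. {"agentId": "optional-uuid", "driftScore": 0.12}
        payload["context"] = context
    return payload

def infer(skill_markdown, **kwargs):
    """POST to the local daemon and return the decoded JSON verdict."""
    port = int(os.environ.get("NANOMIND_PORT", "47200"))
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/v1/infer",
        data=json.dumps(build_request(skill_markdown, **kwargs)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # fields: result, confidence, attackClass, ...
```

Because the daemon binds to localhost only, no authentication or TLS handling is needed in the client.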
## How HackMyAgent Uses It
When the daemon is running, HackMyAgent automatically uses it for semantic analysis on every scan. No flags needed.
```sh
# Start daemon (once)
nanomind-daemon start

# Every scan now uses NanoMind automatically
hackmyagent secure          # Static + NanoMind semantic
hackmyagent secure --deep   # + behavioral simulation
```
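For scripted pipelines, the same start-then-scan flow can be driven from Python. This is a minimal sketch under the assumption that both CLIs follow the usual convention of a non-zero exit code on failure; `daemon_scan_commands` is a hypothetical helper, not part of either tool.

```python
import subprocess

def daemon_scan_commands(deep=False):
    """Return the command sequence from the Quick Start flow:
    start the daemon once, then run the scan."""
    scan = ["hackmyagent", "secure"]
    if deep:
        scan.append("--deep")  # adds behavioral simulation
    return [["nanomind-daemon", "start"], scan]

def run_secure_scan(deep=False):
    # check=True raises CalledProcessError on a non-zero exit code
    # (assumed failure convention for both tools).
    for cmd in daemon_scan_commands(deep):
        subprocess.run(cmd, check=True)
```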