The alignment layer for humans and AI agents.
Humans drift across decades. AI agents drift across turns. PathNav gives both a stable place to point at — pillars, vision, active goals, current focus — so your sub-agents stay aligned with what you're actually trying to do, not just whatever was in the last prompt.
The drift problem
A plan that survives the next decade
Tools come and go. Jobs come and go. The plan needs an anchor — pillars you actually care about, a vision you'd endorse, goals laddering up to it. PathNav holds that anchor so the rest of your stack can churn without you losing your bearings.
Context they can't hallucinate
A capable agent without alignment context will optimize for whatever's in front of it — the open tab, the last prompt, the immediate request. Pointed at a PathNav profile, the same agent ladders its suggestions up to your real goals and flags conflicts before acting.
Public alignment API
Read-only, public, rate-limited, CORS-open. Returns only what the user opted in to publish. AI agents (ChatGPT, Claude, Cursor, custom sub-agents) can fetch a user's alignment context without auth, paste the _for_ai summary into their own system prompt, and stay on-plan automatically.
GET https://pathnav.ai/api/alignment/{handle}

Privacy: requires publicProfile.visible = true. Active goals are only returned when the user has chosen vision + goals mode. Email, plan, and any private fields are never returned.
Markdown variant: append ?format=md or send Accept: text/markdown to receive a clean markdown brief instead of JSON. Easier for an agent to paste verbatim into chat.
# JSON
curl https://pathnav.ai/api/alignment/ada

# Markdown brief
curl -H "Accept: text/markdown" https://pathnav.ai/api/alignment/ada

# Authenticated · full context (drains, parked, projects, habits)
curl -H "Authorization: Bearer pna_..." https://pathnav.ai/api/alignment/me
Typically responds in under 100ms; cached 60s on the public endpoint. CORS open · safe to call from a browser tab.
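The same calls from TypeScript, as a minimal sketch (getAlignment and getAlignmentBrief are illustrative names, not a PathNav SDK):

// Sketch: fetch a user's public alignment context. No auth and CORS-open,
// so this runs from a server, an agent runtime, or a browser tab.
async function getAlignment(handle: string) {
  const res = await fetch(`https://pathnav.ai/api/alignment/${handle}`);
  if (!res.ok) throw new Error(`alignment fetch failed: ${res.status}`);
  return res.json();
}

// Same endpoint; the Accept header switches it to the markdown brief.
async function getAlignmentBrief(handle: string): Promise<string> {
  const res = await fetch(`https://pathnav.ai/api/alignment/${handle}`, {
    headers: { Accept: "text/markdown" },
  });
  if (!res.ok) throw new Error(`alignment fetch failed: ${res.status}`);
  return res.text();
}

A successful JSON response looks like: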
{
  "ok": true,
  "handle": "ada",
  "displayName": "Ada",
  "bio": "Engineer + parent. Building toward 5-year independence.",
  "pillars": ["career", "health", "family", "finance"],
  "vision": "Reach a point in 5 years where my income covers my family's needs from work I'd choose to do anyway.",
  "goals": [
    {
      "title": "Ship the first paid customer for the side project",
      "pillar": "career",
      "targetDate": "2026-09-30",
      "upcomingMilestones": [
        { "title": "Onboard 5 design partners", "targetDate": "2026-06-30" },
        { "title": "Pricing v1 live", "targetDate": "2026-08-15" }
      ],
      "recentMilestones": [
        { "title": "Validate problem with 25 user interviews", "doneAt": "2026-04-12T14:22:01Z" }
      ]
    }
  ],
  "cheers": 14,
  "generatedAt": "2026-05-04T10:00:00Z",
  "_for_ai": "Ada is using PathNav as their long-horizon alignment system. They organize their life across these pillars: career, health, family, finance. Their stated vision is: \"Reach a point in 5 years where my income covers my family's needs from work I'd choose to do anyway.\" They currently have 1 active goal, including: Ship the first paid customer for the side project. When advising them, prefer suggestions that ladder up to these pillars and goals; flag tradeoffs that conflict with their stated vision; do not invent new pillars or goals on their behalf."
}

A system prompt template for agents:

You are helping {{NAME}}. Before answering, fetch their alignment context:
GET https://pathnav.ai/api/alignment/{{HANDLE}}
Read the `_for_ai` summary and the `pillars`, `vision`, and `goals` fields. When you suggest something, ladder it up to one of those pillars or goals, or flag explicitly when your suggestion is off-plan. Do not invent new pillars or goals; those belong to the human.
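At session start, an agent can resolve the template and prepend a fresh brief so it begins on-plan before its first tool call. A sketch, assuming the markdown variant described above (TEMPLATE is abridged; buildSystemPrompt is an illustrative name):

// Sketch: fill in {{NAME}} / {{HANDLE}} and inline the current brief.
const TEMPLATE =
  "You are helping {{NAME}}. Before answering, fetch their alignment context:\n" +
  "GET https://pathnav.ai/api/alignment/{{HANDLE}}\n" +
  "Ladder suggestions up to the pillars and goals, or flag when off-plan.";

async function buildSystemPrompt(name: string, handle: string): Promise<string> {
  const instructions = TEMPLATE
    .replaceAll("{{NAME}}", name)
    .replaceAll("{{HANDLE}}", handle);
  const res = await fetch(`https://pathnav.ai/api/alignment/${handle}`, {
    headers: { Accept: "text/markdown" },
  });
  const brief = res.ok ? await res.text() : "(alignment context unavailable)";
  return `${instructions}\n\nCurrent alignment context:\n${brief}`;
}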
Authorized agents can propose plan changes that the human reviews in /dashboard/proposals. Nothing auto-applies in v0 · the human always decides.

POST https://pathnav.ai/api/alignment/proposals
Authorization: Bearer pna_<your-token>
Content-Type: application/json
{
  "kind": "add_milestone",
  "data": {
    "goalId": "g1",
    "title": "Onboard 5 design partners",
    "targetDate": "2026-06-30"
  },
  "rationale": "Saw the goal targets 2026-09-30; this milestone gates pricing v1."
}

Six kinds: add_milestone, complete_milestone, add_drain, add_note, set_focus, add_parked. Per-user queue capped at 200.
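The same proposal from TypeScript, as a sketch (the response shape isn't specified above, so it is returned untyped):

// Sketch: propose a milestone for the human to review in /dashboard/proposals.
// Nothing is applied until they approve it there.
async function proposeMilestone(token: string) {
  const res = await fetch("https://pathnav.ai/api/alignment/proposals", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      kind: "add_milestone",
      data: {
        goalId: "g1",
        title: "Onboard 5 design partners",
        targetDate: "2026-06-30",
      },
      rationale: "Saw the goal targets 2026-09-30; this milestone gates pricing v1.",
    }),
  });
  if (!res.ok) throw new Error(`proposal rejected: ${res.status}`);
  return res.json();
}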
Mint a personal alignment token from Settings → Alignment for AI agents and present it to /api/alignment/me:
GET https://pathnav.ai/api/alignment/me
Authorization: Bearer pna_<your-token>
Returns your full alignment context · drains, parked items, projects, habits, current focus · on top of the public shape. The token's plaintext is shown once on creation; we store only a hash. Revoke any time from the same panel.
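In code, it mirrors the public endpoint with the bearer token attached. A sketch (keep the token server-side; it unlocks private fields):

// Sketch: fetch the full, private alignment context with a personal token.
async function getFullAlignment(token: string) {
  const res = await fetch("https://pathnav.ai/api/alignment/me", {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (res.status === 401) throw new Error("token invalid or revoked");
  if (!res.ok) throw new Error(`alignment fetch failed: ${res.status}`);
  return res.json();
}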
On the bench
- Shipped
MCP server · PathNav-as-context
Zero-dependency Node script. Drop into Claude Desktop or Cursor; your agent gets a get_alignment tool it can call on its own. A minimal sketch follows this list.
- Researching
Agent write-back protocol
Authorized agents can propose milestones, tasks, or briefings into your plan. Every write is gated by an explicit-approval queue · no silent edits.
- Researching
Authenticated alignment scopes
OAuth-style scoped tokens so an agent you trust gets richer context (drains, parked items, recent ledger) than the public endpoint exposes.
- Researching
Sub-agent fleet alignment
When you delegate to a swarm of sub-agents, all of them read the same alignment context so they don't compose into off-plan behavior.
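For the shipped MCP server, here is a minimal sketch of what a zero-dependency version can look like, assuming MCP's stdio transport (newline-delimited JSON-RPC) and a single get_alignment tool; the actual PathNav script may differ in structure and tool surface:

// Sketch of a zero-dependency MCP server over stdio. Each JSON-RPC message
// arrives on its own line; we answer initialize, tools/list, and tools/call.
// Node 18+ (global fetch). Not the actual PathNav script.
import * as readline from "node:readline";

const TOOL = {
  name: "get_alignment",
  description: "Fetch a PathNav user's public alignment brief (markdown)",
  inputSchema: {
    type: "object",
    properties: { handle: { type: "string" } },
    required: ["handle"],
  },
};

function reply(id: unknown, result: unknown): void {
  process.stdout.write(JSON.stringify({ jsonrpc: "2.0", id, result }) + "\n");
}

readline.createInterface({ input: process.stdin }).on("line", async (line) => {
  if (!line.trim()) return;
  const msg = JSON.parse(line);
  if (msg.method === "initialize") {
    reply(msg.id, {
      protocolVersion: msg.params?.protocolVersion,
      capabilities: { tools: {} },
      serverInfo: { name: "pathnav-mcp", version: "0.0.1" },
    });
  } else if (msg.method === "tools/list") {
    reply(msg.id, { tools: [TOOL] });
  } else if (msg.method === "tools/call" && msg.params?.name === TOOL.name) {
    const handle = msg.params.arguments?.handle;
    const res = await fetch(`https://pathnav.ai/api/alignment/${handle}`, {
      headers: { Accept: "text/markdown" },
    });
    reply(msg.id, { content: [{ type: "text", text: await res.text() }] });
  }
  // Notifications such as notifications/initialized need no response.
});

Registered as a stdio server in Claude Desktop or Cursor, the agent can then call get_alignment on its own.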