
The graph

Every [[link]] written by the compiler is parsed and stored as an edge -- creating a dense, queryable graph that powers retrieval, backlinks, and structural health checks. The compiler assembles the graph as it writes pages, so every accepted source, answer, and correction can add nodes, edges, aliases, and provenance that compound over time.

[Diagram: a sample graph. Nodes: Protocol V3 (topic, hub), Head of R&D (person), Protocol V2 (topic), OR-3 (topic), Q1 Outcomes (meeting), OR-3 Calib. (red link), Protocols (index). Legend: resolved link, red link (unresolved), parent / index.]

A real knowledge base becomes this dense in hours. The hub pages (topics, people) surface as high-degree nodes; red links drive health-agent work queues.

How the graph is built

The graph is not a separate system -- it is updated on every page write. No background jobs, no eventual consistency. Link parsing and edge writes happen synchronously in the same transaction as the page write, so newly saved links are available to search and agents immediately. Nightly jobs still compute health signals and propose structural cleanup, but they are not required for a new link to appear in the graph.
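A minimal sketch of that synchronous write path, assuming SQLite-style storage and hypothetical `pages` and `page_links` tables (the document does not specify a schema): the page update, the re-parse of `[[links]]`, and the edge writes all commit in one transaction, so no background job is needed for a new link to appear.

```python
import re
import sqlite3

LINK_RE = re.compile(r"\[\[([^\]]+)\]\]")

def write_page(conn: sqlite3.Connection, page_id: int, content: str) -> None:
    """Save a page and refresh its outgoing link edges in one transaction."""
    with conn:  # page write + edge writes commit (or roll back) together
        conn.execute("UPDATE pages SET content = ? WHERE id = ?", (content, page_id))
        conn.execute("DELETE FROM page_links WHERE source_page_id = ?", (page_id,))
        for text in LINK_RE.findall(content):
            row = conn.execute(
                "SELECT id FROM pages WHERE title = ?", (text,)
            ).fetchone()
            target = row[0] if row else None  # None => red link, edge still written
            conn.execute(
                "INSERT INTO page_links (source_page_id, target_page_id, link_text) "
                "VALUES (?, ?, ?)",
                (page_id, target, text),
            )
```

Because the delete-and-reinsert happens inside the same transaction as the content update, readers never observe a page whose edges are stale or half-written.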

Graph growth is cumulative. A page can start as a thin node created from a source file, then gain aliases, backlinks, temporal events, citations, source agreement, and revision context as connectors sync newer evidence and agents add accepted updates. Beakr does not let that growth become unbounded: clustering, condensation, reorganization, and density checks merge duplicate structure, tighten sparse neighborhoods, and keep the graph information-dense and useful.

Page links

Every parsed [[link]] is stored as an edge that records the source page, the target page (or null for red links), and the link text as written.

Title aliases

Preserves old titles after renames so existing links keep resolving. Aliases are checked during link resolution when an exact title match fails.

Parent hierarchy

Pages can declare a parent page, creating a tree structure for navigation and scoped search.
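One way the parent tree can power scoped search is a recursive walk over a hypothetical `parent_id` column; this sketch (table and column names are assumptions, not documented schema) collects every page id under a root using a recursive CTE.

```python
import sqlite3

def subtree_page_ids(conn: sqlite3.Connection, root_id: int) -> list[int]:
    """All page ids under a root page, following the parent hierarchy."""
    rows = conn.execute(
        """
        WITH RECURSIVE subtree(id) AS (
            SELECT ?                       -- the root itself
            UNION ALL
            SELECT p.id                    -- then every declared child, recursively
            FROM pages p JOIN subtree s ON p.parent_id = s.id
        )
        SELECT id FROM subtree
        """,
        (root_id,),
    ).fetchall()
    return [r[0] for r in rows]
```

A scoped search can then filter its candidate set with `WHERE page_id IN (...)` over the returned ids.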

Backlinks

Computed from page links -- all pages that reference a given page. Powers "what links here" queries and hub detection.
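Since backlinks are derived rather than stored separately, a "what links here" query is just an inverted read of the link edges. A sketch, again assuming the hypothetical `pages` and `page_links` tables:

```python
import sqlite3

def backlinks(conn: sqlite3.Connection, page_id: int) -> list[str]:
    """Titles of all pages whose stored link edges point at the given page."""
    rows = conn.execute(
        """
        SELECT DISTINCT p.title
        FROM page_links l JOIN pages p ON p.id = l.source_page_id
        WHERE l.target_page_id = ?
        ORDER BY p.title
        """,
        (page_id,),
    ).fetchall()
    return [r[0] for r in rows]
```

The same `GROUP BY target_page_id` aggregation over this table yields the backlink counts used for hub detection.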

How a [[link]] resolves

Every time a page is written, the compiler parses all [[links]] in the content and resolves each one against the existing knowledge base. Resolution happens synchronously in the same database transaction as the page write.

1. PARSE: extract the link text from [[Head of R&D]].
2. LOOKUP ALIAS: match against titles, then title aliases.
3. RESOLVE: produce target_page_id, or null (red link).
4. WRITE EDGE: persist the page link, always.

Resolution happens on every write. An edge always exists, even when the target doesn't.
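The title-then-alias lookup can be sketched as a small function; the `pages` and `title_aliases` table names are assumptions for illustration. Note that a `None` result is not an error: the caller still writes the edge, just with a null target.

```python
import sqlite3
from typing import Optional

def resolve_link(conn: sqlite3.Connection, link_text: str) -> Optional[int]:
    """Resolve [[link_text]]: exact title first, then aliases, else None (red link)."""
    # Step 1: exact title match.
    row = conn.execute("SELECT id FROM pages WHERE title = ?", (link_text,)).fetchone()
    if row:
        return row[0]
    # Step 2: fall back to title aliases (old titles preserved across renames).
    row = conn.execute(
        "SELECT page_id FROM title_aliases WHERE alias = ?", (link_text,)
    ).fetchone()
    # Step 3: None means red link; the edge is still persisted with a null target.
    return row[0] if row else None
```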

Renames don't break links

When a page is renamed, the old title is stored as a title alias. Any existing [[old title]] links continue to resolve because the resolution pipeline checks aliases after failing an exact title match. No global find-and-replace is needed.

Before rename
Page title: Auth Service. Three other pages link to [[Auth Service]]. All resolve normally.
Rename action
Title changed to Authentication Service. The old title Auth Service is saved as a title alias automatically.
After rename
All three [[Auth Service]] links still resolve to the same page via alias lookup. New links can use either [[Auth Service]] or [[Authentication Service]].
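The rename flow above fits in one transaction: update the title and record the old one as an alias. A sketch under the same assumed schema (`pages`, `title_aliases`):

```python
import sqlite3

def rename_page(conn: sqlite3.Connection, page_id: int, new_title: str) -> None:
    """Rename a page and keep the old title resolvable as an alias."""
    with conn:  # title change and alias insert commit together
        (old_title,) = conn.execute(
            "SELECT title FROM pages WHERE id = ?", (page_id,)
        ).fetchone()
        conn.execute("UPDATE pages SET title = ? WHERE id = ?", (new_title, page_id))
        conn.execute(
            "INSERT INTO title_aliases (page_id, alias) VALUES (?, ?)",
            (page_id, old_title),
        )
```

No edges are touched: existing [[Auth Service]] links keep resolving because resolution falls through to the alias table.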
Red links are a feature, not a bug. They represent concepts the knowledge base references but has not yet compiled into dedicated pages. Red link density is a health signal -- a high count means the knowledge base has identified gaps it can fill. The maintenance agent uses red links as its primary work queue, prioritizing pages that would resolve the most unresolved references.
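Because red-link edges are persisted with a null target, the work queue described above falls out of a single aggregation: group the unresolved edges by link text and rank by reference count, so creating one page resolves the most dangling references. A sketch over the assumed `page_links` table:

```python
import sqlite3

def red_link_queue(conn: sqlite3.Connection, limit: int = 10) -> list[tuple[str, int]]:
    """Unresolved link texts, ranked by how many references a new page would resolve."""
    return conn.execute(
        """
        SELECT link_text, COUNT(*) AS refs
        FROM page_links
        WHERE target_page_id IS NULL      -- red links only
        GROUP BY link_text
        ORDER BY refs DESC, link_text     -- highest-impact gaps first
        LIMIT ?
        """,
        (limit,),
    ).fetchall()
```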

Fuzzy dates and semantic context

The graph stores dates with explicit precision, such as day, month, quarter, or year, instead of pretending every source provides exact timestamps. Embeddings help retrieve fuzzy references like "early Q2" or "after the advisory meeting", while temporal metadata remains the source of truth for ordering and filtering.
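One way to model "date plus explicit precision" so that ordering and filtering stay deterministic is to expand each value into a half-open interval. This is an illustrative sketch, not the documented storage format; the convention that the anchor is the first day of the stated period is an assumption.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass(frozen=True)
class FuzzyDate:
    """A date with explicit precision. By convention (assumed here), `anchor`
    is the first day of the stated period."""
    anchor: date
    precision: str  # "day" | "month" | "quarter" | "year"

    def span(self) -> tuple[date, date]:
        """Inclusive start and exclusive end, usable for ordering and range filters."""
        a = self.anchor
        if self.precision == "day":
            return a, a + timedelta(days=1)
        if self.precision == "month":
            return a, date(a.year + a.month // 12, a.month % 12 + 1, 1)
        if self.precision == "quarter":
            end_month = a.month + 3
            return a, date(a.year + (end_month > 12), (end_month - 1) % 12 + 1, 1)
        if self.precision == "year":
            return a, date(a.year + 1, 1, 1)
        raise ValueError(f"unknown precision: {self.precision}")
```

Two fuzzy dates can then be ordered by span start, and a filter like "during Q2 2024" becomes an interval-overlap check, while embeddings handle looser references like "early Q2".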

Graph health signals

The nightly health scan computes structural metrics from the graph. These signals drive maintenance agent work queues and surface in the dashboard.

| Signal | What it measures | Why it matters |
| --- | --- | --- |
| Backlink count | Number of inbound links to a page | High-backlink pages are knowledge hubs. Zero-backlink pages may be orphaned or poorly integrated. |
| Orphan count | Pages with no inbound or outbound links | Orphans are invisible to graph traversal. The maintenance agent attempts to connect them or flags them for review. |
| Red-link density | Ratio of unresolved to total links | A high ratio means the knowledge base has many references to concepts it has not yet compiled. Drives compiler prioritization. |
| Thin content | Pages below a word-count threshold | Thin pages are often stubs created by red-link resolution. They need enrichment from additional sources. |
| Staleness | Time since last revision relative to source update frequency | A page whose sources have been updated but which has not been recompiled may contain outdated information. |
| Source agreement | Agreement, contradiction, and coverage across citations, sources, and revisions | Pages with thin or conflicting evidence are candidates for review, enrichment, or contradiction resolution before agents rely on them for high-stakes answers. |
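Two of the purely structural signals above (orphan count and red-link density) can be computed straight from the link edges; a sketch of what the nightly scan might do, under the same assumed `pages` and `page_links` schema (staleness and source agreement need revision and citation data not modeled here):

```python
import sqlite3

def graph_health(conn: sqlite3.Connection) -> dict:
    """Structural health metrics derived from the link table."""
    total, red = conn.execute(
        "SELECT COUNT(*), SUM(target_page_id IS NULL) FROM page_links"
    ).fetchone()
    # Orphans: pages with no inbound and no outbound edges.
    orphans = conn.execute(
        """
        SELECT COUNT(*) FROM pages p
        WHERE NOT EXISTS (SELECT 1 FROM page_links l WHERE l.source_page_id = p.id)
          AND NOT EXISTS (SELECT 1 FROM page_links l WHERE l.target_page_id = p.id)
        """
    ).fetchone()[0]
    return {
        "red_link_density": (red or 0) / total if total else 0.0,
        "orphan_count": orphans,
    }
```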