Open Research Agenda
Every live discipline is defined by its open problems as much as by its settled commitments. Hi-Centric-AI publishes its agenda rather than concealing it. What follows are six questions the discipline currently investigates.
What we are working on, and what we cannot yet resolve.
- Authority
How is human authority preserved at scale?
Hi-Centric-AI commits to named human authority at the structural center of every system. As systems scale across organizational boundaries — across teams, across institutions, across regulatory jurisdictions — how is named authority preserved without devolving into anonymous bureaucracy? What architectural and institutional patterns prevent authority drift over the operational lifetime of a system? This is the discipline's most consequential open question.
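One way to hold this question concretely is to treat named authority as a structural field rather than a documentation habit. The sketch below is illustrative only, with hypothetical identifiers and an arbitrary staleness threshold: every output must carry a reference to a named, recently affirmed human authority, and an output that lacks one becomes a detectable drift finding rather than a silent default.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass(frozen=True)
class NamedAuthority:
    person: str            # a named individual, never a team alias or role inbox
    role: str              # the capacity in which they hold authority
    affirmed_at: datetime  # when they last affirmed responsibility for the system

@dataclass(frozen=True)
class SystemOutput:
    content: str
    authority: Optional[NamedAuthority]

# An illustrative threshold, not a proposed standard.
MAX_AFFIRMATION_AGE = timedelta(days=90)

def authority_drift_findings(output: SystemOutput) -> list[str]:
    """Return drift findings for one output; an empty list means no drift detected."""
    findings: list[str] = []
    if output.authority is None:
        findings.append("no named authority attached to this output")
        return findings
    if not output.authority.person.strip():
        findings.append("authority record does not name a person")
    age = datetime.now(timezone.utc) - output.authority.affirmed_at
    if age > MAX_AFFIRMATION_AGE:
        findings.append(f"authority affirmation is {age.days} days old")
    return findings

# Example: an output whose named authority last affirmed responsibility a year ago.
stale = SystemOutput(
    content="automated triage decision",
    authority=NamedAuthority("A. Reviewer", "clinical lead",
                             datetime.now(timezone.utc) - timedelta(days=365)),
)
print(authority_drift_findings(stale))
```

The open question is precisely what such a check cannot answer on its own: how the named person stays a real locus of authority, rather than a field that is filled in, as the system crosses teams, institutions, and jurisdictions.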
- Knowledge
How are knowledge boundaries defined and defended?
Bounded knowledge is the precondition of accountable AI. But how should boundaries be drawn — by domain, by regulation, by institution, by named expert? When two bounded-knowledge systems must interoperate, what protocols preserve their respective bounds? How does the discipline recognize when a knowledge boundary has been silently breached? The commitment to bounded knowledge is easy to articulate; the engineering of bounded-knowledge practice remains open.
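The following sketch shows one way a boundary could be declared and defended at query time; the identifiers and fields are hypothetical, not a Hi-Centric-AI protocol. The point is that a bound which is declared explicitly, owned by a named steward, and enforced at retrieval turns a silent breach into a logged event.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeBound:
    domain: str                    # e.g. "EU medical-device labelling rules"
    approved_sources: set[str]     # the only corpora the field may draw on
    steward: str                   # the named expert who owns and revises the bound
    breach_log: list[str] = field(default_factory=list)

    def admit(self, source_id: str, query: str) -> bool:
        """Allow retrieval only from sources inside the bound; record everything else."""
        if source_id in self.approved_sources:
            return True
        self.breach_log.append(f"blocked {source_id!r} for query {query!r}")
        return False

bound = KnowledgeBound(
    domain="device labelling rules",
    approved_sources={"regulation_2017_745", "internal_labelling_sop"},
    steward="a named regulatory expert",
)
bound.admit("regulation_2017_745", "symbol requirements")   # inside the bound
bound.admit("open_web_crawl", "symbol requirements")        # outside: refused and logged
print(bound.breach_log)
```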
- Compounding
What are the empirical signatures of value compounding?
Hi-Centric-AI claims that systems built to the discipline grow more valuable under operation rather than depreciating. This is a testable empirical claim. What are the operational signatures of compounding? What metrics distinguish a system that genuinely deepens institutional knowledge from one that merely accumulates use logs? How long must a Hi-Centric-AI system be in operation before compounding can be empirically demonstrated?
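As a toy illustration of the kind of signal this question asks for (the metric and the data below are invented, not a proposed standard), one candidate signature is the share of answers served from curated, expert-affirmed institutional knowledge rather than from raw accumulated logs, tracked over time.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MonthlyUsage:
    month: str
    total_answers: int
    answers_from_affirmed_knowledge: int  # served from curated, expert-affirmed entries

def affirmed_share(records: list[MonthlyUsage]) -> list[tuple[str, float]]:
    """Share of answers grounded in affirmed institutional knowledge, per month."""
    return [
        (r.month, r.answers_from_affirmed_knowledge / max(r.total_answers, 1))
        for r in records
    ]

history = [
    MonthlyUsage("2024-01", 400, 60),
    MonthlyUsage("2024-06", 900, 310),
    MonthlyUsage("2024-12", 1500, 820),
]
# A rising share (here roughly 0.15, 0.34, 0.55) is one candidate signature of
# compounding; a flat share alongside growing totals would look like mere
# accumulation of use logs.
print(affirmed_share(history))
```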
- Practitioner
Who is the Hi-Centric-AI practitioner?
Disciplines have practitioners. The discipline of Hi-Centric-AI is in early articulation; the practitioner roles, training pathways, and professional identities are open to definition. What are the constitutive skills of a Hi-Centric-AI practitioner? What kind of training prepares them? What relationship do they hold to existing professions — software architecture, knowledge engineering, organizational design, applied epistemology? The practitioner question is the question of the discipline's social form.
- Failure
What are the named failure modes?
Live disciplines name their failure modes. Hi-Centric-AI's failure modes include authority drift (silent erosion of named authority), knowledge bleed (loss of field bounding under operation), value depreciation (use that fails to compound), and substrate escape (artificial intelligence operating outside the bounds the discipline architects). Each of these failure modes deserves a body of empirical work characterizing how it arises, how it is detected, and how it is corrected. That body of work is open.
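One modest step toward that body of work is to make the four modes operational categories rather than prose, so that findings can accumulate against them. The sketch below is illustrative and the identifiers are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class FailureMode(Enum):
    AUTHORITY_DRIFT = "silent erosion of named authority"
    KNOWLEDGE_BLEED = "loss of field bounding under operation"
    VALUE_DEPRECIATION = "use that fails to compound"
    SUBSTRATE_ESCAPE = "AI operating outside the bounds the discipline architects"

@dataclass(frozen=True)
class Finding:
    mode: FailureMode
    system: str
    evidence: str
    detected_at: datetime
    corrected: bool = False

findings = [
    Finding(FailureMode.AUTHORITY_DRIFT, "claims-triage",
            "last authority affirmation is fourteen months old",
            datetime.now(timezone.utc)),
]

# Count uncorrected findings per named failure mode.
open_by_mode = {
    mode: sum(1 for f in findings if f.mode is mode and not f.corrected)
    for mode in FailureMode
}
print(open_by_mode)
```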
- Frontier
What does the discipline say about frontier models?
The frontier of autonomous AI development — large foundation models, increasingly capable agentic systems — does not lie outside the scope of Hi-Centric-AI. The discipline must articulate how its commitments apply to systems whose capabilities are growing rapidly along axes the older lineage did not anticipate. Authority over what the model produces; bounded knowledge in the face of unbounded training data; compounding institutional expertise built on top of substrates the institution does not control. Open work, on the most important AI question of the present.
A discipline that hides its open problems isn't one.
Hi-Centric-AI is articulated as a contemporary, live, working discipline — not a finished doctrine. The open questions above are the work the founders are doing and the work available to anyone who wishes to extend the field. We publish them so that the discipline can be inherited, tested, and pushed forward by named human practitioners other than ourselves.