Research
AXIOM: Automated eXpert for Industrial Output Management. A three-layer neuro-symbolic architecture
where LLMs propose engineering specifications, a constraint engine validates against physics,
and a Colored Petri Net supervisor orchestrates the feedback loop.
Reduced invalid AI-generated battery cell designs from ~47% to ~8%.
Python · CadQuery · PyBaMM · Pydantic · NetworkX
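The propose-validate-supervise loop described above can be sketched in a few lines. This is a minimal illustration, not AXIOM's implementation: the spec fields, the two placeholder guards (the real system defines fourteen), and the plain retry loop standing in for the Colored Petri Net supervisor are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical cell spec; field names are illustrative, not AXIOM's actual schema.
@dataclass
class CellSpec:
    capacity_ah: float
    nominal_voltage_v: float
    electrode_thickness_um: float

# Layer 2: constraint engine. Each guard returns None (pass) or a violation
# message; the bounds below are placeholder physics, not AXIOM's real guards.
def check_voltage(spec):
    if not 2.5 <= spec.nominal_voltage_v <= 4.5:
        return f"nominal_voltage_v={spec.nominal_voltage_v} outside Li-ion range [2.5, 4.5]"

def check_thickness(spec):
    if not 20 <= spec.electrode_thickness_um <= 200:
        return f"electrode_thickness_um={spec.electrode_thickness_um} outside [20, 200] um"

CONSTRAINTS = [check_voltage, check_thickness]

def validate(spec):
    return [msg for guard in CONSTRAINTS if (msg := guard(spec)) is not None]

# Layer 3: supervisor. The real system drives this with a Colored Petri Net;
# here it is a plain loop that feeds violations back to the proposer.
def supervise(propose, max_rounds=3):
    feedback = []
    for _ in range(max_rounds):
        spec = propose(feedback)      # Layer 1: the LLM proposer (stubbed by caller)
        feedback = validate(spec)     # Layer 2: physics validation
        if not feedback:
            return spec               # spec accepted
    return None                       # supervisor gives up after max_rounds
```

A stub proposer that corrects its voltage once told about the violation completes the loop in two rounds, which is the feedback behavior the architecture targets.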
DAEDALUS: Dual-memory cognitive architecture for AI agents with explicit trust gradients.
An analysis of 12 agent memory frameworks (Zep, Mem0, MemGPT, LangChain, and others)
found that none implements explicit trust differentiation between human-validated
knowledge and AI-generated hypotheses. DAEDALUS introduces a tri-color trust model
(blue/green/red) for provenance tracking and fact promotion pipelines.
Neo4j · MCP · Python
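The tri-color model amounts to typing every edge by its epistemic origin and gating the RED-to-GREEN transition on human review. A minimal in-memory sketch follows; the real store is Neo4j, and all property keys, relationship names, and the promotion rule shown here are illustrative assumptions.

```python
from enum import Enum

class Trust(Enum):
    BLUE = "human_validated"   # asserted or verified by a human
    GREEN = "promoted"         # AI-generated, later confirmed by review
    RED = "ai_hypothesis"      # AI-generated, unverified

# In-memory stand-in for Neo4j relationship properties; keys are illustrative.
edges = [
    {"src": "AXIOM", "rel": "SUPERVISES", "dst": "SpecGen", "trust": Trust.BLUE},
    {"src": "LFP", "rel": "SUITS", "dst": "HighCycleLife", "trust": Trust.RED},
]

def promote(edge, reviewer):
    """Lifecycle transition RED -> GREEN: only an explicit human review promotes."""
    if edge["trust"] is Trust.RED and reviewer:
        edge["trust"] = Trust.GREEN
        edge["reviewed_by"] = reviewer
    return edge

def path_trustworthy(path):
    """Path-aware check: a derivation is only as strong as its weakest edge,
    so any RED edge taints the whole path."""
    return all(e["trust"] is not Trust.RED for e in path)
```

The path check is where the color typing pays off: a conclusion reached through even one unverified hypothesis is flagged, rather than silently inheriting the confidence of its verified neighbors.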
ARIADNE: Neo4j-based working memory specification for cognitive AI agents.
Implements bi-temporal edges and three-factor retrieval scoring
(recency × importance × relevance) for the high-volume, low-trust
memory layer within the DAEDALUS architecture.
Neo4j · Cypher · Graph Schema Design
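The three-factor score can be written as a single multiplicative function. This sketch assumes exponential recency decay with a one-day half-life and inputs normalized to [0, 1]; ARIADNE's actual decay curve and weighting may differ.

```python
def retrieval_score(last_accessed_s_ago, importance, relevance,
                    half_life_s=86_400.0):
    """Three-factor retrieval score: recency x importance x relevance.

    last_accessed_s_ago -- seconds since the memory node was last touched
    importance          -- assumed normalized to [0, 1]
    relevance           -- e.g. cosine similarity to the query, in [0, 1]
    half_life_s         -- assumed one-day half-life for recency decay
    """
    recency = 0.5 ** (last_accessed_s_ago / half_life_s)
    return recency * importance * relevance
```

The multiplicative form means any single factor near zero suppresses the score, which suits a low-trust, high-volume layer: stale, unimportant, or off-topic memories all drop out of retrieval rather than accumulating.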
Open Access Publications · Zenodo
AXIOM: Colored Petri Net Supervision Framework for LLM-Generated Engineering Specifications
Technical Note · February 2026
First formal CPN model for supervising LLM outputs, defining a complete CPN tuple with fourteen guard functions mapping physics constraints for Li-ion battery cells. Validated through a 346-test suite in the CellCAD-Python framework.
The Tri-Color Trust Model: A Provenance-Typed Edge Taxonomy for Trustworthy AI-Augmented Knowledge Graphs
Technical Note · February 2026
Provenance-typed edge taxonomy assigning every connection a color based on epistemic origin (Blue/Green/Red), with formal lifecycle transitions and path-aware conflict detection. Integrated with Colored Petri Net supervision.
ARIADNE: Neo4j Working Memory Specification for Cognitive AI Agents
Technical Note · 38 pages · February 2026
Three-tier graph memory (episodic, semantic, community) with bi-temporal edges; hybrid retrieval combining vector search, BM25, and graph traversal; and a trust-gradient promotion pipeline with confidence thresholds.
DAEDALUS Research Convergence Matrices: Multi-Model Analysis of Cognitive Architecture Patterns
Dataset · February 2026
Identical research questions posed to 3–4 frontier models independently, then systematically compared for convergence. Across 44 patterns and 12 memory frameworks, found 0/12 implement trust-level differentiation.
Dual-Memory Architecture with Trust Gradients for Cognitive AI Agents
Technical Note · 30 pages · February 2026
Gap analysis of 12 agent memory frameworks revealing that none differentiates between verified facts and AI suggestions. Proposes a dual-memory architecture with a confidence-based fact promotion pipeline in which AI agents can propose but never auto-commit.
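The propose-but-never-auto-commit rule reduces to a structural invariant: agent-facing methods can only touch the candidate queue, and the verified store is reachable only through an explicit human approval. A minimal sketch; the threshold value, class, and method names here are illustrative, not the paper's API.

```python
# Candidates at or above this confidence are surfaced for review;
# the 0.9 value is an illustrative placeholder, not the paper's threshold.
PROMOTION_THRESHOLD = 0.9

class DualMemory:
    def __init__(self):
        self.verified = []    # human-validated facts (trusted layer)
        self.candidates = []  # AI-generated proposals (untrusted layer)

    def propose(self, fact, confidence):
        """Agents may only enqueue proposals; nothing is auto-committed."""
        self.candidates.append({"fact": fact, "confidence": confidence})

    def review_queue(self):
        """Candidates above the confidence threshold, queued for human review."""
        return [c for c in self.candidates if c["confidence"] >= PROMOTION_THRESHOLD]

    def approve(self, fact):
        """Explicit human approval is the only path into verified memory."""
        self.candidates = [c for c in self.candidates if c["fact"] != fact]
        self.verified.append(fact)
```

Note that even a high-confidence candidate only reaches the review queue; confidence gates visibility to the reviewer, never the commit itself.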
PROTEUS: Graph-Based Idea Evolution Through Autonomous AI Agents
Technical Note · February 2026
Concepts as entities within a graph-based ecology where five specialized AI agents apply evolutionary pressures with full provenance tracking. Includes a pre-registered 78-run controlled experiment comparing human-only, human–AI, multi-agent, and single-agent ideation.
The Engage Law: Cognitive Preservation as Structural Constraint in Self-Evolving AI Agent Architectures
Technical Note · February 2026
Analysis of 100+ architectures finding none treats cognitive preservation as a structural constraint. Proposes a fourth design principle requiring that autonomous self-modification must not degrade human expertise.