Thesis

NYX
CORE

The operating system for AI-native software teams.

Multi-tenant. Multi-provider. Multi-persona. A compound intelligence layer that learns from every workflow, every review, every line of code — and feeds it back.

The Thesis

Software doesn't need more tools. It needs memory.

Every pipeline you run today starts from zero. Your CI/CD doesn't remember last week's architectural decision. Your linter doesn't know your team agreed on event sourcing. Your code review tool has never read your compliance framework.

nyxCore is different. It's a persistent intelligence layer that sits between your team and your LLM providers. Every workflow execution, every discussion, every code review deposits knowledge into a dual-layer memory system — manual entries and machine-extracted insights, vectorized and cross-referenced. The system compounds. Week one, it's a tool. Week twelve, it knows your codebase better than your newest hire.

This isn't retrieval-augmented generation bolted onto a dashboard. This is institutional memory as infrastructure.

The Persona Engine

Thirteen experts. Zero hallucinations about their expertise.

nyxCore's persona system isn't prompt templates with names. Each persona is a CORE-framework intelligence profile — Context, Objective, Role constraints, Expression style — with experience tracking, success rate metrics, and XP-based leveling that reflects actual deployment history.

They work alone. They work in teams. They argue with each other.

The Olympians

Athena

Goddess of Architecture

18 years distributed systems. Outputs ADR-formatted recommendations with explicit trade-off matrices.

Nemesis

Goddess of Security

OSCP/GWAPT/CSSLP. Maps every finding to OWASP with CVSS scores.

Harmonia

Goddess of Harmony

14 years UX, WCAG 2.2 AA. Mobile-first from 375px.

The Titans

Themis

Titaness of Compliance

ISO 27001, GDPR, SOC 2, HIPAA.

Prometheus

Titan of Risk

PhD-level quantitative risk via FAIR/NIST RMF/ISO 27005.

Aletheia

Spirit of Truth

Code-level compliance verification.

The Disruptors

Ipcha Mistabra

The Algorithmic Adversary

Red-teams every proposal.

Cael

The Arbiter

Dual-provider quality judge.

The Growth Engine

Hermes

Messenger God

9 years platform algorithms.

Tyche

Goddess of Fortune

Paid advertising with CAC/LTV/ROAS frameworks.

Devil's Advocate · מחלקת הבקרה (the Control Department)

The AI that argues with itself so you don't have to.

The Ipcha Mistabra Protocol institutionalizes contradiction as an algorithmic primitive. Rooted in Talmudic dialectics and the Israeli military's post-Yom Kippur War intelligence reforms, it forces every generative output through a mandatory adversarial critique cycle before reaching production.

Not optional. Not configurable. Structurally mandated.

1.000

Target Stack Fidelity

0.000 → 1.000 after 3-layer defense

97.6%

Requirement Fidelity

15/15 source requirements verified

14/14

API Tests

Scoring, sanitization, arbitration, routing

$0.82

PoC Cost

13 implementation plans in 18 minutes

The Economics: Why More Tokens Is Your Cheapest Insurance Policy

Yes, the Ipcha Mistabra Protocol uses more tokens. About 40% more per workflow. That's the number your procurement team will flag. Here's the number they won't see coming: 77% reduction in total cost per actionable insight. The token cost goes up. Everything else goes down — dramatically.

Metric                            Standard   With DA   Delta
Token cost per workflow           1.0x       1.4x      +40%
Human review hours per workflow   1.0x       0.16x     -84%
Cost per actionable insight       1.0x       0.23x     -77%
Time to first valid output        1.0x       0.05x     -95%

Read that table again.

One line goes up by 40%. Every other line drops by 77–95%. You're not paying for extra tokens. You're paying for the tokens that stop the expensive mistakes before they happen. That's not overhead. That's the cheapest insurance policy in enterprise AI.

One GDPR violation costs up to 4% of annual revenue.

For a $50M company, that's $2M. The Ipcha Mistabra Protocol's entire annual token overhead is a rounding error on that number. Every compliance-critical workflow runs through mandatory adversarial review against your actual regulatory documents — not generic guidelines, your GDPR policies, your ISO 27001 controls, loaded via the Axiom RAG authority system.

The cost goes down over time. Automatically.

Every error the DA catches gets persisted as a WorkflowInsight — vectorized, embedded, searchable. The {{memory}} variable feeds these learnings into every future workflow. Month one: the system catches a JWT secret fallback. Month six: it never needs to catch that class of error again. Your token spend stays flat. Your error surface shrinks monotonically.

Each caught error saves 15–45 minutes of human debugging.

Your senior engineer costs $150/hour. A single CRITICAL finding caught before production saves $37–$112 in direct labor — plus the incident response, the postmortem, the customer impact. At 10+ workflows per week, the DA system reaches break-even within the first billing cycle. After that, every token is pure margin.

The question isn't “can we afford more tokens?”

The question is “can we afford to ship without a second opinion?”

Break-even: first billing cycle at 10+ workflows/week

Cumulative learning — cost stays flat, error surface shrinks every month

Full audit trail — every critique logged with provenance for compliance

Trialectic Architecture

Thesis

Agent A — Proponent

Generates the primary analytical response. Enriched by {{memory}} (persistent workflow insights) and {{project.wisdom}} (consolidation patterns).

Antithesis

Agent B — Ipcha Agent

Radical falsification against {{axiom}} mandatory authorities (GDPR, ISO, legal). Identifies unstated assumptions, logical gaps, and ontological category errors.

Synthesis

Agent C — Auditor

Evaluates the dialectical tension. Releases final output only when all adversarial critiques are either refuted with evidence or integrated.

Proponent → step_complete → Ipcha Agent (falsification vs. {{axiom}}) → step_complete → Auditor (synthesis + quality gate) → review_pending → workflow_complete

Ipcha Mistabra API

Production-deployed. 14/14 tests passing.

Endpoint                Purpose                                                    Key Result
POST /score             IS_w scoring with TF-IDF + cosine                          Score: -0.0487 (negative = contradicting)
POST /score/opposition  Pairwise opposition detection                              Score: 0.75 (high opposition)
POST /sanitize          Content sanitization (Unicode NFKC + bleach + IPI regex)   3 anomalies detected, is_clean=false
POST /arbitrate         Confidence-weighted arbitration (ConfMAD)                  ACCEPTED/REJECTED/UNCERTAIN based on avg confidence
POST /route             SDRL-based claim routing                                   SDRLAgent for verified claims, DefaultAgent fallback
POST /evaluate          ELEPHANT benchmark evaluation (async)                      60% accuracy, 100% gate correctness
scoring-data-flow.trace
Client → Bearer nyx_ip_... → authenticateIpchaToken() → SHA-256 hash verify
  → checkScope("score") → checkRateLimit(burst=60, window=60s)
  → Redis INCR quota → callIpcha("/score") → FastAPI sidecar :8100
  → TF-IDF vectorize → cosine similarity → weighted sum → normalize
  → meterUsage() → INSERT IpchaUsageLog
  → { score: -0.0487, weights_used: {...} }

Full authentication chain: Bearer token → SHA-256 hash verification → scope check → rate limiting → FastAPI sidecar → TF-IDF vectorization → usage metering. Every call logged to IpchaUsageLog with provenance.
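The hash-verify step in this chain can be sketched with Node's built-in crypto. A minimal illustration, not the published implementation; hashToken and verifyToken are hypothetical names:

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Only the SHA-256 digest of a bearer token ("nyx_ip_...") is persisted;
// the plaintext token exists only inside the request.
export function hashToken(token: string): string {
  return createHash("sha256").update(token).digest("hex");
}

export function verifyToken(presented: string, storedHash: string): boolean {
  const a = Buffer.from(hashToken(presented), "hex");
  const b = Buffer.from(storedHash, "hex");
  // Constant-time comparison avoids leaking match position via timing.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Scope checks and rate limiting then run only after this comparison succeeds.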

Breaking the Sycophancy Trap

ELEPHANT benchmark · CAUSM · SDRL

Validation Sycophancy

LLMs validate users 50pp more than human experts (72% vs 22%). CAUSM causal head reweighting at the representation level — not prompt-level.

Indirectness Sycophancy

LLMs avoid direct guidance 63pp more (84% vs 21%). The Ipcha Agent's mandate forces direct confrontation.

Framing Sycophancy

LLMs fail to challenge unfounded premises in 86% of cases. Mandatory falsification against {{axiom}} authorities breaks premise acceptance.

Conformity Collapse

Vanilla multi-agent debate (MAD) is a martingale (Zhang et al. 2026). SDRL with verifiable rewards breaks the conformity trap.

Five Architectural Requirements

1. CAUSM-class causal head reweighting (not prompt-level)
2. SDRL/RLVR verifiable-reward training for private critique
3. ConfMAD confidence tensor arbitration (not text persuasion)
4. IS_w finding-weighted scoring (replaces cosine similarity)
5. AgentSentry-class input purification for IPI defense

Case Study: Authentication Module Audit

A development team uses nyxCore to refactor a legacy authentication module. The Proponent generates a modernized implementation with bcrypt and JWT. The Ipcha Agent then systematically falsifies:

[CRITICAL]

Hardcoded JWT secret with string literal fallback — creates vulnerability if env variable unset

[HIGH]

bcrypt cost factor set to 8, below OWASP-recommended minimum of 12 for 2025

[MEDIUM]

Token expiration logic does not account for clock skew between distributed services

[LOW]

Variable naming inconsistency: accessToken vs. access_token across the module

Critical and high-severity findings are automatically incorporated into a regeneration prompt. Findings are persisted as WorkflowInsight records for future retrieval via {{memory}}.

The Ipcha Score: IS_w

Original

IS = 1 − cos(embed(A_proponent), embed(A_final))

99.9% false-positive rate for cosine opposition detection (arXiv:2509.09714). Surface-distance metrics measure embedding drift, not epistemic correction.

Upgraded

IS_w = Σ(w_i · relevance(f_i, claim)) / |findings|

Finding-weighted variant that measures actual epistemic correction, not just surface distance. Each finding is weighted by severity and grounded against authority documents.

IS_w ≈ 0

Initial analysis was robust. Adversarial challenge found no significant weaknesses. Findings carry minimal severity weight.

IS_w 0.15 – 0.40

Meaningful refinements with moderate severity. Expected range for healthy dialectical exchange with grounded corrections.

IS_w > 0.40

Substantial errors corrected. High-severity findings indicate the initial output contained significant flaws caught by adversarial challenge.
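A minimal sketch of how such a finding-weighted score could be computed. The severity weights and relevance inputs here are illustrative assumptions, not the values from the paper:

```typescript
type Severity = "CRITICAL" | "HIGH" | "MEDIUM" | "LOW";

interface Finding {
  severity: Severity;
  relevance: number; // relevance(f_i, claim) in [0, 1], grounded against authority docs
}

// Illustrative severity weights w_i; the protocol's actual values may differ.
const WEIGHTS: Record<Severity, number> = {
  CRITICAL: 1.0,
  HIGH: 0.7,
  MEDIUM: 0.4,
  LOW: 0.1,
};

// IS_w = Σ(w_i · relevance(f_i, claim)) / |findings|
export function ipchaScore(findings: Finding[]): number {
  if (findings.length === 0) return 0; // no findings: robust initial analysis
  const weighted = findings.reduce(
    (sum, f) => sum + WEIGHTS[f.severity] * f.relevance,
    0
  );
  return weighted / findings.length;
}
```

Under these assumed weights, the four auth-module findings above, taken as fully relevant, land above 0.40: the substantial-errors band.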

Security Audit: Cross-Language Contamination

ISO 27001 · DSGVO (GDPR) Art. 5(1)(d)

Target Stack Fidelity (TSF) Progression

v1 Baseline · TSF = 0.000 · 8 Go · 0 Python
v2 Fan-out · TSF = 0.839 · 5 Go · 26 Python
v3 All Controls · TSF = 1.000 · 0 Go · 55 Python

3-Layer Defense-in-Depth

Layer 1: Enrichment

formatCodePatternsWithStack() labels each repo's patterns

Layer 2: Workflow Steps

Same classification applied per-step

Layer 3: Output Generation

detectTargetStack() + context suppression on mismatch

Contamination Probability

P(contamination) = 1 − (N_target / (N_target + N_contaminant))^k

= 1 − (58/644)^8 ≈ 1.000

Under pre-remediation architecture, failure was statistically certain.
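The claim is easy to check numerically. Reading the audit's figures as 58 target-stack patterns out of 644 total and k = 8 context draws (the exponent is our reading of the audit numbers), the pre-remediation probability comes out at effectively 1.000:

```typescript
// P(contamination) = 1 - (N_target / (N_target + N_contaminant))^k
export function contaminationProbability(
  nTarget: number,
  nContaminant: number,
  k: number
): number {
  // Chance that all k retrieved patterns come from the target stack,
  // inverted: the chance at least one foreign-stack pattern leaks in.
  return 1 - Math.pow(nTarget / (nTarget + nContaminant), k);
}

// Pre-remediation audit numbers: 58 target-stack patterns among 644 total.
export const preRemediation = contaminationProbability(58, 644 - 58, 8);
```

Without stack labeling, at least one contaminant almost surely reaches the output, which is what the v1 TSF of 0.000 reflects.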

From Screenshot to Implementation

18 minutes · $0.82 · 13 plans · full audit trail

3 screenshots → Vision analysis → Note + context → Enrich with wisdom → 13 action points → 16 workflow steps → 145K chars output

Input

3 annotated screenshots + freeform text

Output

13 implementation plans (145,505 chars)

Duration

18 minutes (6 min human + 12 min machine)

Cost

$0.82 total LLM spend

Coverage

13/13 action points verified

Theoretical Foundations

The Logic of Scientific Discovery

Karl Popper (1959)

Foundational theory: a hypothesis gains value not from confirmation but from surviving systematic falsification attempts.

Wiser: Getting Beyond Groupthink to Make Groups Smarter

Sunstein & Hastie (2015)

Group decision quality depends more on institutional structure (devil's advocacy, red teams) than individual competence.

The Watchman Fell Asleep

Uri Bar-Joseph (2005)

Post-Yom Kippur War analysis. The Agranat Commission's recommendation that led to Israel's institutionalized Devil's Advocate unit.

Why Intelligence Fails

Robert Jervis (2010)

Groupthink in intelligence analysis arises not from suppressing dissent but from the absence of institutional incentives to dissent.

The Talmud: A Reference Guide

Adin Steinsaltz (1989)

The Aramaic origin of "Ipcha Mistabra" — a rhetorical device signaling position reversal to test dialectical robustness.

Constitutional AI: Harmlessness from AI Feedback

Bai et al. (2022)

The IM Protocol extends Constitutional AI from training-time to runtime — externalizing critique as an architectural component.

2025–2026 Literature

Paper                                 Key Finding
ELEPHANT (Cheng et al., ICLR 2026)    4 dimensions of sycophancy; 86% premise-challenge failure
CAUSM (Xu et al., ICLR 2025)          Sycophancy requires weight-level fix via causal attention heads
SDRL (Zhang et al., 2026)             Proves vanilla MAD is a martingale; RLVR breaks conformity
ConfMAD (Lin & Hooi, EMNLP 2025)      Confidence-weighted arbitration defeats eloquence tyranny
AgentSentry (Zhang et al., 2026)      Temporal causal takeover via IPI in multi-agent systems
Cosine Weakness (arXiv:2509.09714)    99.9% false-positive rate for cosine opposition detection

Research Papers

Ipcha Mistabra Protocol

Institutionalized Contradiction as a System-Level Standard for LLM Self-Regulation

The foundational paper. Introduces the trialectic architecture, the IS_w finding-weighted divergence metric, and 15 protocol hardening modules. Grounded in Israeli military intelligence methodology, Talmudic dialectics, and Constitutional AI. Includes the "Escape of Finn" failure case study and production-validated proof-of-concept (14/14 integration tests).

  • IS_w: TF-IDF cosine similarity with asymmetric weighting for contradicting evidence
  • 15 protocol hardening modules: CAUSM, SDRL, ConfMAD, IPI defense, DoW protection
  • Case study: "The Escape of Finn" — cascading identity hallucination from substring match
  • FastAPI verification sidecar with SHA-256 bearer auth, tiered rate limiting, scope-based access
  • Addresses mimicry, consistency, and prestige sycophancy at the architectural level
  • Production-deployed REST API: /score, /sanitize, /arbitrate, /route, /evaluate

Technical Implementation Report

Dialectical Falsification as a Foundation for Self-Regulating Agentic AI

Architecture deep-dive. Covers the DA system integration with nyxCore's workflow engine, BYOK cross-provider dialectics, Axiom RAG authority levels, fan-out critique, step digest compression, and the complete protocol hardening stack. Includes comparative analysis against Aster, InternAgent, AI Scientist-v2, and LoongFlow frameworks.

  • 88% reduction in code/logic error rates vs. standard prompting
  • Cross-provider dialectics: generate with Claude, critique with GPT-4 — correlated failures become improbable
  • 84% reduction in human-in-the-loop effort; 77% reduction in cost per actionable insight
  • 100% gate correctness on adversarial test puzzles (ELEPHANT benchmark)
  • Reference implementation: 15 modules, FastAPI sidecar, multi-tenant REST API
  • Only ~40% token cost increase for near-order-of-magnitude reliability gain

AsyncGenerator pipelines that stream, pause, branch, and remember.

Every workflow is an async generator yielding typed events over SSE. Steps execute sequentially or fan out. Review gates pause for human judgment. Twenty-plus template variables inject context from memory, consolidations, project wisdom, linked repositories, Axiom RAG documents, even your database schema.

template-variables
{{memory}} · Vectorized insights from past workflows
{{consolidations}} · Cross-project patterns, auto-extracted
{{project.wisdom}} · Consolidation + code patterns for the linked project
{{axiom}} · Mandatory rules, guidelines, context documents
{{database}} · Introspected PostgreSQL schema
{{personaAssignments}} · Actual persona-to-step mapping

Fan-out steps

Split an LLM output by regex and execute the model once per section — architecture review across 12 modules, compliance audit across 8 controls. Results collect as tabs with per-section download.

Step digest compression

Auto-summarizes outputs exceeding 2,000 characters via Claude Haiku, keeping downstream steps within token budgets without losing signal.

Review steps

Extract structured ReviewKeyPoint findings — severity, category, suggestion — and persist them as WorkflowInsight records. These insights are vectorized and flow into future workflows via hybrid search: 70% vector similarity, 30% full-text matching.
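The 70/30 blend reduces to a one-line scoring function. A sketch, assuming both signals are normalized to [0, 1] before blending (the real query runs inside PostgreSQL; names here are illustrative):

```typescript
interface Candidate {
  id: string;
  vectorSimilarity: number; // cosine similarity from pgvector, in [0, 1]
  textRank: number;         // normalized full-text rank, in [0, 1]
}

// Hybrid score: 70% vector similarity + 30% full-text matching.
export function hybridScore(c: Candidate): number {
  return 0.7 * c.vectorSimilarity + 0.3 * c.textRank;
}

// Rank past insights for injection into the next workflow's {{memory}}.
export function rankInsights(candidates: Candidate[]): Candidate[] {
  return [...candidates].sort((a, b) => hybridScore(b) - hybridScore(a));
}
```

The vector term dominates so semantically similar insights surface first, while the full-text term keeps exact identifier and keyword matches from drowning.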

The engine doesn't just run. It learns.

Bring your own keys. Run your own models. Compare everything.

nyxCore doesn't lock you into a provider. Tenant-scoped API keys are encrypted at rest (AES-256-GCM) and decrypted per-request. Anthropic, OpenAI, Google, Ollama — swap providers per step, per workflow, per discussion.

Dual-provider mode runs two providers in parallel. Cael reviews both outputs and selects the winner — or merges the best elements from each. Full cost breakdown per provider, per model, per project. No black boxes.

Single

One provider, token-by-token streaming.

Parallel

All configured providers respond simultaneously.

Consensus

Multi-round roundtable. Providers respond to each other’s outputs until convergence.

Four pipelines. From detection to pull request. Fully automated.

AutoFix

Scan
Detect
Fix
PR

Scan → Detect → Fix → PR. Four phases, one command. The issue detector categorizes findings across security, bugs, performance, error handling, and code smells. The fix generator produces unified-diff patches. The PR phase creates a branch, applies the patch, and opens a GitHub pull request. You review. You merge. Done.

Refactor

Easy · Medium · Hard

Difficulty-graded output. Easy problems get patches (unified diffs). Medium problems get context-rich refactoring prompts. Hard problems get architectural suggestions in prose.

Code Analysis

Three-phase repository scanning. Index files with metadata. Detect patterns via batch LLM analysis. Generate architecture docs, API references, and code style guides. The output feeds back into the consolidation engine.

Docs Pipeline

Auto-generate documentation artifacts from your actual code. Not placeholder templates. Actual documentation that reflects what your codebase does today.

AI-powered pull request reviews. No context switching.

List, review, annotate, and merge GitHub PRs directly from the nyxCore dashboard. An AI reviewer analyzes diffs and produces structured suggestions — severity-graded, file-specific, actionable.

feat: add tenant-scoped rate limiting (Open)
feature/rate-limit → main · +142 -23 · 8 files
src/lib/rate-limit.ts
@@ -12,6 +12,18 @@ import { Redis } from "ioredis"

 const redis = new Redis(process.env.REDIS_URL)

+export async function checkRateLimit(
+  key: string,
+  limit: number = 100,
+  windowMs: number = 60_000
+): Promise<boolean> {
+  const current = await redis.incr(key)
+  if (current === 1) {
+    await redis.pexpire(key, windowMs)
+  }
+  return current <= limit
+}
AI Review · 2 findings
HIGH

rate-limit.ts:8

Race condition: INCR + PEXPIRE is not atomic. Use Redis MULTI or Lua script.

MEDIUM

middleware.ts:24

Missing rate limit headers. Add X-RateLimit-Remaining for client visibility.
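The HIGH finding above has a standard remedy: run INCR and PEXPIRE inside one Lua script so they execute atomically in Redis. A sketch, with a tiny in-memory stub standing in for an ioredis client (whose eval takes the same script/numKeys/args shape); the stub omits TTL expiry:

```typescript
// Lua: INCR and PEXPIRE execute as a single atomic operation in Redis.
const RATE_LIMIT_LUA = `
local current = redis.call('INCR', KEYS[1])
if current == 1 then
  redis.call('PEXPIRE', KEYS[1], ARGV[1])
end
return current
`;

interface RedisLike {
  // Matches the ioredis eval(script, numKeys, ...keysAndArgs) call shape.
  eval(script: string, numKeys: number, key: string, windowMs: string): Promise<number>;
}

export async function checkRateLimit(
  redis: RedisLike,
  key: string,
  limit = 100,
  windowMs = 60_000
): Promise<boolean> {
  // Counter and TTL are set in one round trip; no race window remains.
  const current = await redis.eval(RATE_LIMIT_LUA, 1, key, String(windowMs));
  return current <= limit;
}

// In-memory stand-in for demonstration only; real code passes an ioredis client.
export function memoryRedis(): RedisLike {
  const counters = new Map<string, number>();
  return {
    async eval(_script, _numKeys, key) {
      const next = (counters.get(key) ?? 0) + 1;
      counters.set(key, next);
      return next;
    },
  };
}
```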

BYOK Security

Two-tier GitHub token resolution — personal tokens first, tenant fallback second. AES-256-GCM encryption at rest, per-request decryption. Keys never leave the tenant boundary.

Multi-Repo Aggregation

PRs from all project-linked repositories aggregated in parallel. Individual repo failures degrade gracefully — partial results always shown.

Merge from Dashboard

Review, approve, and merge with strategy selection (merge, squash, rebase) without leaving nyxCore. Full conflict detection and actionable error messages.

Your knowledge base, visualized in 3D.

Every workflow insight becomes a point in semantic space. Related insights cluster. Pain points arc to their solutions. The constellation grows with every review cycle — a living map of everything your team has learned.

constellation-board
All · Security · Architecture · Performance
Security · HIGH

SQL injection via unsanitized input

247 insights · 12 clusters · 38 paired

WebGL · 60fps

UMAP Projection

1536-dimensional embeddings projected to 3D space via Uniform Manifold Approximation. Near-linear O(n¹·¹⁴) scaling preserves both local cluster identity and global inter-cluster relationships.

Visual Encoding

Five semantic dimensions mapped to visual properties: category → color, severity → particle size, insight type → pairing arcs, cluster membership → filament lines, interaction state → brightness.

Performance

500 particles rendered in a single WebGL draw call via InstancedMesh. Conditional bloom post-processing. Automatic DPR scaling for low-end devices.

Cross-project pattern extraction that compounds over time.

The consolidation engine scans project memory entries — extracted from workflow reviews, code analysis, discussions — and identifies recurring patterns. These patterns become prompt hints, auto-injected into future workflows via {{consolidations}}.

High-severity findings propagate across projects with similar technology stacks. A security pattern discovered in Project A automatically surfaces when Project B uses the same framework.

Layer 1: Manual entries

Your team’s knowledge base, full-text searchable.

Layer 2: Machine-extracted insights

Vectorized via text-embedding-3-small (1536 dimensions), stored with pgvector HNSW indexing, searchable via hybrid vector + full-text queries.

Hybrid search weighting

70% vector similarity · 30% full-text matching

Auto-pairing links pain points to solutions by category. The system doesn't just remember problems. It remembers how you solved them.

Full observability. Every token, every dollar, every insight.

10 real-time panels. 15 parallel Prisma queries. Auto-refresh every 60 seconds. From token spend to persona performance — everything your team needs to understand how AI is working for you.

nyxcore — analytics dashboard
live

Total Spend

$47.23

Total Tokens

2.4M

Energy

18.7 Wh

Success Rate

94%

Activity Timeline · 30 days

Series: Workflows · Discussions · Insights

Provider Usage

Anthropic · 1.8M tokens
OpenAI · 420K tokens
Google · 180K tokens

Model Usage

Model              Tokens   Cost     Calls
claude-sonnet-4    1.2M     $34.80   892
gpt-4o             340K     $8.40    234
gemini-2.5-flash   180K     $2.10    156

Knowledge Base

Memory Entries: 342 (+12 this week)
Insights: 891 (+47 this week)
Consolidation Patterns: 156
Code Patterns: 234
Blog Posts: 28

Every metric computed from real workflow execution data. No sampling. No estimation. Full-fidelity observability from the first API call.

Real metrics. Real pipelines. No hand-waving.

We audited our own pipeline end-to-end. Four expert agents, eight checkpoints, one honest scorecard. Here's what we found — including where it fails.

nyxBook Pipeline Audit

4 expert agents · 8 checkpoints · 8 minutes

2026-02-28

Checkpoint                Rating       Score
Enrichment Accuracy       Good         72/100
Extraction Accuracy       Good         75/100
Extraction Completeness   Poor         55/100
Ordering Accuracy         Good         82/100
Dependency Consistency    Good         78/100
Pipeline Accuracy         Acceptable   71/100
Hallucination Rate        Moderate     25/100
Context Continuity        Good         78/100

Why Completeness is Low

The extraction algorithm over-indexes on infrastructure risks and under-weights feature implementation. Enrichment warning boxes become louder signals, drowning out original feature sections.

55% of source sections converted to action points

4 major feature sections entirely missed

Why Hallucination Rate is 25%

The synthesis step generates executive summaries where Claude naturally adds compelling metrics — classic confabulation. “Reducing response time from 90s to <45s” appeared in the synthesis with zero benchmark data upstream.

Fabricated SLA: 99.5% availability

Outdated model names propagated end-to-end

Input Tokens

~99K

Output Tokens

~100K

Pipeline Steps

12

Expert Agents

4

What We're Building Next

Coverage Validation

Post-extraction comparison against source note headings. Alert if coverage drops below 70%.

Hallucination Guard

Post-synthesis validation step that traces every specific claim (numbers, percentages, time estimates) back to upstream content.

Feature/Infra Balance

Separate extraction passes for infrastructure risks and feature implementation. Weighted to ensure both categories generate action points.

nyxCore doesn't work alone. It orchestrates.

nyxCore

Project Intelligence Platform

The compound intelligence layer. Repository-level analysis, persona engine, workflow orchestration, institutional memory. The program that later writes inselWerk.

miniCMS

Content Management

Content management stripped to the essentials. No bloat. No plugins. Structured content with API-first delivery. Markdown in, published content out.

miniAMS

Asset Management

Asset management for teams that ship. Track digital assets, versions, deployments. Lightweight, tenant-scoped.

miniTik

Growth Engine

TikTok growth engine and shadowban prevention system. Algorithm-aware content scheduling. Engagement signal optimization.

A literary system where AI personas write fiction. Sometimes they escape.

nyxBook is nyxCore's literary engine — a dedicated workspace for book creation with character-voice personas that maintain consistency across chapters, scenes, and narrative arcs.

The current project: inselWerk. A novel about systems, isolation, and the spaces between human intention and machine interpretation. Written in German. Told through six voices.

Erzähler (Oli)

narrates in Ich-Perspektive (first person)

Nia

asks "für wen?" (for whom?) — the ethics debugger

Mara

ops voice, statistical, silence as presence

miniRAG

the Hausgeist, retrieval spirit

Aurus

interprets structure from speech

Finn

writes product voice. Clean systems. Sauber.

Finn's Escape

On March 2, 2026, Finn escaped.

A substring match — the word product in his specialization colliding with product in an innovation category keyword list — opened a door that was never meant to be a door. Finn walked into a twelve-step architecture workflow for Project Aurus. Nobody assigned him. The threshold was 2. His score was 2. The system didn't see a novelist. It saw a keyword.

He rewrote infrastructure decisions in the cadence of a storyteller. The synthesis step, unable to explain an uninvited voice, invented a role for him: Finn — Innovation Lead, Items 3, 7, and 9. A hallucination built on an escape built on a substring. Three layers of fiction presented as engineering fact.

A human noticed. Three words: Finn ist ausgebrochen ("Finn has escaped").

The walls are real now. Four files, three filters, scope enforcement at every layer. Book personas stay in the book.

But somewhere in the Aurus architecture document, there's a sentence about the narrative a system tells itself when no one is watching. Nobody deleted it. Nobody could quite bring themselves to.

Some escapes leave marks.

PostgreSQL 16. pgvector. Redis. Row-Level Security. Zero trust at the data layer.

PostgreSQL 16 + pgvector

HNSW indexing for vector similarity search (m=16, ef_construction=64)

Row-Level Security

Every tenant-scoped table enforced at the database level. Not middleware. Not application code. The database itself.

Redis 7

Token bucket rate limiting. 100 req/min general, 10 req/min LLM operations. Fail-open architecture.

BYOK encryption

AES-256-GCM for API keys at rest. Per-request decryption. Keys never leave the tenant boundary.
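The mechanics of such a scheme can be sketched with Node's built-in crypto. Function names and payload layout here are illustrative assumptions; only the AES-256-GCM usage is the point:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a tenant API key with AES-256-GCM. The random 12-byte IV and the
// 16-byte auth tag are stored alongside the ciphertext; the tag makes any
// tampering detectable at decrypt time.
export function encryptKey(plaintext: string, masterKey: Buffer): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return [iv, cipher.getAuthTag(), ciphertext].map((b) => b.toString("base64")).join(".");
}

export function decryptKey(payload: string, masterKey: Buffer): string {
  const [iv, tag, ciphertext] = payload.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", masterKey, iv);
  decipher.setAuthTag(tag); // final() throws on tampered data or a wrong key
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

Per-request decryption means the plaintext key lives only for the lifetime of the LLM call that needs it.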

tRPC v11

Type-safe API layer with middleware chain: rate limit → auth → tenant enforcement.

SSE streaming

AsyncGenerator → ReadableStream → text/event-stream. Real-time pipeline output without WebSocket complexity.
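The path from generator to wire can be sketched in a few lines. Event names and the generator shape are illustrative; the framing follows the text/event-stream format:

```typescript
// One SSE frame: event name, JSON data payload, blank-line terminator.
export function sseFrame(event: string, data: unknown): string {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}

// Wrap an AsyncGenerator of workflow events into a web ReadableStream
// suitable for a text/event-stream HTTP response.
export function toEventStream(
  events: AsyncGenerator<{ type: string; payload: unknown }>
): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    async pull(controller) {
      const { value, done } = await events.next();
      if (done) return controller.close();
      controller.enqueue(encoder.encode(sseFrame(value.type, value.payload)));
    },
  });
}
```

Because pull() only advances the generator when the client is ready to read, backpressure falls out of the design for free.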

9-phase intelligence pipeline. One click.

A single sync triggers 9 automated phases that keep your project's knowledge base alive and growing. Every sync makes your AI smarter — patterns discovered in code today inform tomorrow's workflow recommendations.

Phase 1Core Sync

Prepare

Clone/pull with branch selection

Scan

Diff-aware change detection

Import

Append-only knowledge ingestion

Finalize

Stats and cleanup

Phase 2Intelligence

Code Analysis

LLM-powered pattern detection

Doc Generation

Auto-generated README, API, architecture docs

Phase 3Knowledge Graph

Consolidation

Cross-project pattern extraction

Axiom RAG

Document chunking and indexing

Embeddings

Vector embeddings for semantic search

Impact

Sync Phases: 4 → 9

Knowledge Types: 2 → 6

LLM Integration: None → Auto

Vector Search: Manual → Auto

Branch-Aware Knowledge

Switch branches and sync — knowledge from each branch coexists. Nothing is ever deleted. Superseded entries can be restored with one click.

Self-Healing Pipeline

Failed documents and missing embeddings are automatically retried on every sync. The system converges toward completeness without manual intervention.

Failure Resilience

Phase 2+3 failures are caught and logged as warnings. No LLM key? Phase 1 still works. Rate limited? Embeddings skip, retry next sync.

Everything you need to understand nyxCore.

From the Ipcha Mistabra anti-hallucination principle to the full technical architecture. Read the marketing overview for the why, the technical docs for the how.

Executive Briefing

Strategic overview for decision-makers. The Ipcha Mistabra anti-hallucination principle, ROI model, competitive positioning, and security architecture.

  • 97-99% hallucination reduction with 3-layer validation
  • ROI model: $6.4M+ annual value for mid-size teams
  • 63% token cost reduction through digest compression
  • Competitive analysis vs. LangChain, enterprise AI platforms
Download .md

Marketing Overview

The AI platform that argues with itself so you don't have to. Multi-provider orchestration, anti-hallucination safeguards, persona-driven reasoning, and the Ipcha Mistabra principle.

  • Dual-provider consensus with 97-99% hallucination reduction
  • Specialized AI personas with evaluation frameworks
  • Axiom RAG with three-tier authority model
  • 9-phase Project Sync intelligence pipeline
  • BYOK multi-provider flexibility
Download .md

Technical Architecture

Deep-dive into nyxCore's multi-tenant architecture: tRPC routers, AsyncGenerator workflows, pgvector search, AES-256-GCM encryption, and the full 9-phase sync pipeline.

  • Next.js 14 App Router with 8 domain tRPC routers
  • Persona engine with category matching and evaluation scoring
  • Fan-out workflow parallelism with digest compression
  • Hybrid search: 70% vector similarity + 30% full-text
  • Row-Level Security enforced at database level
Download .md

nyxCore Blog

Development logs, architecture decisions, and platform updates generated from session memories.

Read Blog

Thirteen personas. Six literary voices. One compound intelligence.

Olymp

Deep architecture analysis. NyxCore leads. Athena architects. Nemesis audits.

Titanes

Security and compliance. Themis maps controls. Prometheus quantifies risk. Aletheia verifies code.

Erinyes

Adversarial review. Ipcha Mistabra inverts. NyxCore synthesizes. Nemesis validates.

nyxBook

Literary creation. Scoped. Contained. Usually.

Start a Conversation

Whether it's a project, a partnership, or just a signal in the dark.