NYX
CORE
The operating system for AI-native software teams.
Multi-tenant. Multi-provider. Multi-persona. A compound intelligence layer that learns from every workflow, every review, every line of code — and feeds it back.
The Thesis
Software doesn't need more tools. It needs memory.
Every pipeline you run today starts from zero. Your CI/CD doesn't remember last week's architectural decision. Your linter doesn't know your team agreed on event sourcing. Your code review tool has never read your compliance framework.
nyxCore is different. It's a persistent intelligence layer that sits between your team and your LLM providers. Every workflow execution, every discussion, every code review deposits knowledge into a dual-layer memory system — manual entries and machine-extracted insights, vectorized and cross-referenced. The system compounds. Week one, it's a tool. Week twelve, it knows your codebase better than your newest hire.
This isn't retrieval-augmented generation bolted onto a dashboard. This is institutional memory as infrastructure.
The Persona Engine
Thirteen experts. Zero hallucinations about their expertise.
nyxCore's persona system isn't prompt templates with names. Each persona is a CORE-framework intelligence profile — Context, Objective, Role constraints, Expression style — with experience tracking, success rate metrics, and XP-based leveling that reflects actual deployment history.
They work alone. They work in teams. They argue with each other.
The Olympians
Athena
Goddess of Architecture
18 years distributed systems. Outputs ADR-formatted recommendations with explicit trade-off matrices.
Nemesis
Goddess of Security
OSCP/GWAPT/CSSLP. Maps every finding to OWASP with CVSS scores.
Harmonia
Goddess of Harmony
14 years UX, WCAG 2.2 AA. Mobile-first from 375px.
The Titans
Themis
Titaness of Compliance
ISO 27001, GDPR, SOC 2, HIPAA.
Prometheus
Titan of Risk
PhD-level quantitative risk via FAIR/NIST RMF/ISO 27005.
Aletheia
Spirit of Truth
Code-level compliance verification.
The Disruptors
Ipcha Mistabra
The Algorithmic Adversary
Red-teams every proposal.
Cael
The Arbiter
Dual-provider quality judge.
The Growth Engine
Hermes
Messenger God
9 years platform algorithms.
Tyche
Goddess of Fortune
Paid advertising with CAC/LTV/ROAS frameworks.
Devil's Advocate · מחלקת הבקרה (the Control Department)
The AI that argues with itself so you don't have to.
The Ipcha Mistabra Protocol institutionalizes contradiction as an algorithmic primitive. Rooted in Talmudic dialectics and the Israeli military's post-Yom Kippur War intelligence reforms, it forces every generative output through a mandatory adversarial critique cycle before reaching production.
Not optional. Not configurable. Structurally mandated.
1.000
Target Stack Fidelity
0.000 → 1.000 after 3-layer defense
97.6%
Requirement Fidelity
15/15 source requirements verified
14/14
API Tests
Scoring, sanitization, arbitration, routing
$0.82
PoC Cost
13 implementation plans in 18 minutes
The Economics: Why More Tokens Is Your Cheapest Insurance Policy
Yes, the Ipcha Mistabra Protocol uses more tokens. About 40% more per workflow. That's the number your procurement team will flag. Here's the number they won't see coming: 77% reduction in total cost per actionable insight. The token cost goes up. Everything else goes down — dramatically.
Read those numbers again.
One goes up by 40%. Every other metric drops by 77–95%. You're not paying for extra tokens. You're paying for the tokens that stop the expensive mistakes before they happen. That's not overhead. That's the cheapest insurance policy in enterprise AI.
One GDPR violation costs up to 4% of annual revenue.
For a $50M company, that's $2M. The Ipcha Mistabra Protocol's entire annual token overhead is a rounding error on that number. Every compliance-critical workflow runs through mandatory adversarial review against your actual regulatory documents — not generic guidelines, your GDPR policies, your ISO 27001 controls, loaded via the Axiom RAG authority system.
The cost goes down over time. Automatically.
Every error the DA catches gets persisted as a WorkflowInsight — vectorized, embedded, searchable. The {{memory}} variable feeds these learnings into every future workflow. Month one: the system catches a JWT secret fallback. Month six: it never needs to catch that class of error again. Your token spend stays flat. Your error surface shrinks monotonically.
Each caught error saves 15–45 minutes of human debugging.
Your senior engineer costs $150/hour. A single CRITICAL finding caught before production saves $37–$112 in direct labor — plus the incident response, the postmortem, the customer impact. At 10+ workflows per week, the DA system reaches break-even within the first billing cycle. After that, every token is pure margin.
The question isn't “can we afford more tokens?”
The question is “can we afford to ship without a second opinion?”
Break-even: first billing cycle at 10+ workflows/week
Cumulative learning — cost stays flat, error surface shrinks every month
Full audit trail — every critique logged with provenance for compliance
Trialectic Architecture
Thesis
Agent A — Proponent
Generates the primary analytical response. Enriched by {{memory}} (persistent workflow insights) and {{project.wisdom}} (consolidation patterns).
Antithesis
Agent B — Ipcha Agent
Radical falsification against {{axiom}} mandatory authorities (GDPR, ISO, legal). Identifies unstated assumptions, logical gaps, and ontological category errors.
Synthesis
Agent C — Auditor
Evaluates the dialectical tension. Releases final output only when all adversarial critiques are either refuted with evidence or integrated.
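The three-agent cycle can be sketched as a small orchestration loop. This is an illustrative sketch, not nyxCore's actual engine: the names (`runTrialectic`, `Critique`) are assumptions, and the LLM calls are injected as plain async functions.

```typescript
// Hypothetical sketch of the trialectic loop. Agent A proposes, Agent B
// falsifies, Agent C judges; release happens only when every critique is
// refuted or integrated. Names and shapes are illustrative assumptions.

interface Critique {
  finding: string;
  severity: number;  // 0..1, later usable as the IS_w weight
  refuted: boolean;  // set by the auditor when evidence defeats the critique
}

type Agent = (input: string) => Promise<string>;
type Falsifier = (draft: string) => Promise<Critique[]>;
type Auditor = (draft: string, critiques: Critique[]) => Promise<Critique[]>;

async function runTrialectic(
  task: string,
  proponent: Agent,   // Agent A: generates the thesis
  ipcha: Falsifier,   // Agent B: radical falsification (antithesis)
  auditor: Auditor,   // Agent C: evaluates the dialectical tension
  maxRounds = 3,
): Promise<string> {
  let draft = await proponent(task);
  for (let round = 0; round < maxRounds; round++) {
    const critiques = await ipcha(draft);
    const judged = await auditor(draft, critiques);
    const open = judged.filter((c) => !c.refuted);
    if (open.length === 0) return draft; // all critiques refuted or integrated
    // Otherwise regenerate with the open findings folded into the prompt.
    draft = await proponent(
      `${task}\n\nAddress these findings:\n` +
        open.map((c) => `- ${c.finding}`).join("\n"),
    );
  }
  return draft;
}
```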
Ipcha Mistabra API
Production-deployed. 14/14 tests passing.
Client → Bearer nyx_ip_... → authenticateIpchaToken() → SHA-256 hash verify
→ checkScope("score") → checkRateLimit(burst=60, window=60s)
→ Redis INCR quota → callIpcha("/score") → FastAPI sidecar :8100
→ TF-IDF vectorize → cosine similarity → weighted sum → normalize
→ meterUsage() → INSERT IpchaUsageLog
→ { score: -0.0487, weights_used: {...} }
Full authentication chain: Bearer token → SHA-256 hash verification → scope check → rate limiting → FastAPI sidecar → TF-IDF vectorization → usage metering. Every call logged to IpchaUsageLog with provenance.
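The hash-verification step in the chain can be sketched with Node's crypto primitives; the function names here are illustrative stand-ins for what authenticateIpchaToken() does internally, not nyxCore's actual code.

```typescript
// Sketch: verify a presented bearer token against a stored SHA-256 hash.
// Tokens are stored only as hashes; the plaintext "nyx_ip_..." value
// exists solely in the client's request.
import { createHash, timingSafeEqual } from "node:crypto";

function hashToken(token: string): string {
  return createHash("sha256").update(token).digest("hex");
}

function verifyToken(presented: string, storedHash: string): boolean {
  const a = Buffer.from(hashToken(presented), "hex");
  const b = Buffer.from(storedHash, "hex");
  // timingSafeEqual avoids leaking the match position via response timing.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Scope checks and rate limiting would follow only after this verification succeeds.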
Breaking the Sycophancy Trap
ELEPHANT benchmark · CAUSM · SDRL
Validation Sycophancy
LLMs validate users 50pp more than human experts (72% vs 22%). CAUSM counters this with causal head reweighting at the representation level — not prompt-level.
Indirectness Sycophancy
LLMs avoid direct guidance 63pp more (84% vs 21%). The Ipcha Agent's mandate forces direct confrontation.
Framing Sycophancy
LLMs fail to challenge unfounded premises in 86% of cases. Mandatory falsification against {{axiom}} authorities breaks premise acceptance.
Conformity Collapse
Vanilla MAD is a martingale (Zhang et al. 2026). SDRL with verifiable rewards breaks the conformity trap.
Five Architectural Requirements
CAUSM-class causal head reweighting (not prompt-level)
SDRL/RLVR verifiable-reward training for private critique
ConfMAD confidence tensor arbitration (not text persuasion)
IS_w finding-weighted scoring (replaces cosine similarity)
AgentSentry-class input purification for IPI defense
Case Study: Authentication Module Audit
A development team uses nyxCore to refactor a legacy authentication module. The Proponent generates a modernized implementation with bcrypt and JWT. The Ipcha Agent then systematically falsifies:
Hardcoded JWT secret with string literal fallback — creates vulnerability if env variable unset
bcrypt cost factor set to 8, below OWASP-recommended minimum of 12 for 2025
Token expiration logic does not account for clock skew between distributed services
Variable naming inconsistency: accessToken vs. access_token across the module
Critical and high-severity findings are automatically incorporated into a regeneration prompt. Findings are persisted as WorkflowInsight records for future retrieval via {{memory}}.
The Ipcha Score: IS_w
Original
IS = 1 − cos(embed(A_proponent), embed(A_final))
99.9% false-positive rate for cosine opposition detection (arXiv:2509.09714). Surface-distance metrics measure embedding drift, not epistemic correction.
Upgraded
IS_w = Σ(w_i · relevance(f_i, claim)) / |findings|
Finding-weighted variant that measures actual epistemic correction, not just surface distance. Each finding is weighted by severity and grounded against authority documents.
IS_w ≈ 0
Initial analysis was robust. Adversarial challenge found no significant weaknesses. Findings carry minimal severity weight.
IS_w 0.15 – 0.40
Meaningful refinements with moderate severity. Expected range for healthy dialectical exchange with grounded corrections.
IS_w > 0.40
Substantial errors corrected. High-severity findings indicate the initial output contained significant flaws caught by adversarial challenge.
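The formula above reduces to a short function. This is an illustrative sketch; the severity and relevance scales are assumptions, not nyxCore's calibrated weights.

```typescript
// Sketch of the finding-weighted Ipcha Score: each finding contributes its
// severity weight (w_i) times its relevance to the claim, averaged over all
// findings. Scales here are assumptions for illustration.
interface Finding {
  severity: number;  // w_i, e.g. 0.1 informational … 1.0 critical
  relevance: number; // relevance(f_i, claim) in [0, 1]
}

function iswScore(findings: Finding[]): number {
  if (findings.length === 0) return 0; // no findings: robust initial analysis
  const sum = findings.reduce((acc, f) => acc + f.severity * f.relevance, 0);
  return sum / findings.length;
}
```

Two moderate findings, say (0.5, 0.6) and (0.4, 0.5), yield 0.25: squarely in the healthy dialectical range described above.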
Security Audit: Cross-Language Contamination
ISO 27001 · DSGVO (GDPR) Art. 5(1)(d)
Target Stack Fidelity (TSF) Progression
3-Layer Defense-in-Depth
Layer 1: Enrichment
formatCodePatternsWithStack() labels each repo's patterns
Layer 2: Workflow Steps
Same classification applied per-step
Layer 3: Output Generation
detectTargetStack() + context suppression on mismatch
Contamination Probability
P(contamination) = 1 − (N_target / (N_target + N_contaminant))^k
= 1 − (58 / 644)^8 ≈ 1.000
Under pre-remediation architecture, failure was statistically certain.
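The figure can be checked directly. The sketch models retrieval as k independent draws, as the formula implies.

```typescript
// P(at least one contaminant pattern among k retrieved patterns), given
// nTarget target-stack patterns and nContaminant foreign-stack patterns
// in the shared context. Independence of draws is the modeling assumption.
function contaminationProbability(
  nTarget: number,
  nContaminant: number,
  k: number,
): number {
  const pTarget = nTarget / (nTarget + nContaminant);
  return 1 - Math.pow(pTarget, k);
}
```

With 58 target patterns among 644 total and k = 8, the result is indistinguishable from 1: contamination was statistically certain before remediation.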
From Screenshot to Implementation
18 minutes · $0.82 · 13 plans · full audit trail
Input
3 annotated screenshots + freeform text
Output
13 implementation plans (145,505 chars)
Duration
18 minutes (6 min human + 12 min machine)
Cost
$0.82 total LLM spend
Coverage
13/13 action points verified
Theoretical Foundations
The Logic of Scientific Discovery
Karl Popper (1959)
Foundational theory: a hypothesis gains value not from confirmation but from surviving systematic falsification attempts.
Wiser: Getting Beyond Groupthink to Make Groups Smarter
Sunstein & Hastie (2015)
Group decision quality depends more on institutional structure (devil's advocacy, red teams) than individual competence.
The Watchman Fell Asleep
Uri Bar-Joseph (2005)
Post-Yom Kippur War analysis. The Agranat Commission's recommendation that led to Israel's institutionalized Devil's Advocate unit.
Why Intelligence Fails
Robert Jervis (2010)
Groupthink in intelligence analysis arises not from suppressing dissent but from the absence of institutional incentives to dissent.
The Talmud: A Reference Guide
Adin Steinsaltz (1989)
The Aramaic origin of "Ipcha Mistabra" — a rhetorical device signaling position reversal to test dialectical robustness.
Constitutional AI: Harmlessness from AI Feedback
Bai et al. (2022)
The IM Protocol extends Constitutional AI from training-time to runtime — externalizing critique as an architectural component.
2025–2026 Literature
Research Papers
Ipcha Mistabra Protocol
Institutionalized Contradiction as a System-Level Standard for LLM Self-Regulation
The foundational paper. Introduces the trialectic architecture, the IS_w finding-weighted divergence metric, and 15 protocol hardening modules. Grounded in Israeli military intelligence methodology, Talmudic dialectics, and Constitutional AI. Includes the "Escape of Finn" failure case study and production-validated proof-of-concept (14/14 integration tests).
- IS_w: TF-IDF cosine similarity with asymmetric weighting for contradicting evidence
- 15 protocol hardening modules: CAUSM, SDRL, ConfMAD, IPI defense, DoW protection
- Case study: "The Escape of Finn" — cascading identity hallucination from substring match
- FastAPI verification sidecar with SHA-256 bearer auth, tiered rate limiting, scope-based access
- Addresses mimicry, consistency, and prestige sycophancy at the architectural level
- Production-deployed REST API: /score, /sanitize, /arbitrate, /route, /evaluate
Technical Implementation Report
Dialectical Falsification as a Foundation for Self-Regulating Agentic AI
Architecture deep-dive. Covers the DA system integration with nyxCore's workflow engine, BYOK cross-provider dialectics, Axiom RAG authority levels, fan-out critique, step digest compression, and the complete protocol hardening stack. Includes comparative analysis against Aster, InternAgent, AI Scientist-v2, and LoongFlow frameworks.
- 88% reduction in code/logic error rates vs. standard prompting
- Cross-provider dialectics: generate with Claude, critique with GPT-4 — correlated failures become improbable
- 84% reduction in human-in-the-loop effort; 77% reduction in cost per actionable insight
- 100% gate correctness on adversarial test puzzles (ELEPHANT benchmark)
- Reference implementation: 15 modules, FastAPI sidecar, multi-tenant REST API
- Only ~40% token cost increase for a near-order-of-magnitude reliability gain
THE WORKFLOW ENGINE
AsyncGenerator pipelines that stream, pause, branch, and remember.
Every workflow is an async generator yielding typed events over SSE. Steps execute sequentially or fan out. Review gates pause for human judgment. Twenty-plus template variables inject context from memory, consolidations, project wisdom, linked repositories, Axiom RAG documents, even your database schema.
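The event-streaming shape described above can be sketched as a typed async generator. Event and step shapes here are illustrative assumptions, not nyxCore's actual schema.

```typescript
// Sketch: a workflow as an AsyncGenerator of typed events. Each step
// streams tokens; the engine yields start/token/done events in order,
// which a transport layer can forward over SSE.
type WorkflowEvent =
  | { type: "step_start"; step: string }
  | { type: "token"; step: string; text: string }
  | { type: "step_done"; step: string };

type Step = { name: string; run: () => AsyncIterable<string> };

async function* runWorkflow(steps: Step[]): AsyncGenerator<WorkflowEvent> {
  for (const step of steps) {
    yield { type: "step_start", step: step.name };
    for await (const text of step.run()) {
      yield { type: "token", step: step.name, text };
    }
    yield { type: "step_done", step: step.name };
  }
}
```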
Fan-out steps
Split an LLM output by regex and execute the model once per section — architecture review across 12 modules, compliance audit across 8 controls. Results collect as tabs with per-section download.
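A fan-out step reduces to a split-and-map; the regex and runner below are illustrative assumptions, not nyxCore's actual interface.

```typescript
// Sketch: split one LLM output into sections by a heading regex and run
// a model call per section, collecting results in order.
async function fanOut(
  output: string,
  sectionRe: RegExp, // e.g. /^##\s+/m for markdown headings (assumption)
  runPerSection: (section: string) => Promise<string>,
): Promise<string[]> {
  const sections = output.split(sectionRe).filter((s) => s.trim().length > 0);
  return Promise.all(sections.map(runPerSection));
}
```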
Step digest compression
Auto-summarizes outputs exceeding 2,000 characters via Claude Haiku, keeping downstream steps within token budgets without losing signal.
Review steps
Extract structured ReviewKeyPoint findings — severity, category, suggestion — and persist them as WorkflowInsight records. These insights are vectorized and flow into future workflows via hybrid search: 70% vector similarity, 30% full-text matching.
The engine doesn't just run. It learns.
MULTI-PROVIDER ARCHITECTURE
Bring your own keys. Run your own models. Compare everything.
nyxCore doesn't lock you into a provider. Tenant-scoped API keys are encrypted at rest (AES-256-GCM) and decrypted per-request. Anthropic, OpenAI, Google, Ollama — swap providers per step, per workflow, per discussion.
Dual-provider mode runs two providers in parallel. Cael reviews both outputs and selects the winner — or merges the best elements from each. Full cost breakdown per provider, per model, per project. No black boxes.
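The at-rest encryption scheme (AES-256-GCM, per-request decryption) can be sketched with Node's crypto module. Key management details are assumptions; nyxCore's actual scheme may differ, for example via per-tenant envelope encryption.

```typescript
// Sketch: encrypt a tenant API key at rest with AES-256-GCM and decrypt
// it per request. GCM authenticates as well as encrypts, so tampered
// ciphertext fails at decipher.final().
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

function encryptApiKey(plaintext: string, masterKey: Buffer) {
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptApiKey(
  enc: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
  masterKey: Buffer,
): string {
  const decipher = createDecipheriv("aes-256-gcm", masterKey, enc.iv);
  decipher.setAuthTag(enc.tag);
  return Buffer.concat([decipher.update(enc.ciphertext), decipher.final()]).toString("utf8");
}
```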
Single
One provider, token-by-token streaming.
Parallel
All configured providers respond simultaneously.
Consensus
Multi-round roundtable. Providers respond to each other’s outputs until convergence.
DEVELOPMENT PIPELINES
Four pipelines. From detection to pull request. Fully automated.
AutoFix
Scan → Detect → Fix → PR. Four phases, one command. The issue detector categorizes findings across security, bugs, performance, error handling, and code smells. The fix generator produces unified-diff patches. The PR phase creates a branch, applies the patch, and opens a GitHub pull request. You review. You merge. Done.
Refactor
Difficulty-graded output. Easy problems get patches (unified diffs). Medium problems get context-rich refactoring prompts. Hard problems get architectural suggestions in prose.
Code Analysis
Three-phase repository scanning. Index files with metadata. Detect patterns via batch LLM analysis. Generate architecture docs, API references, and code style guides. The output feeds back into the consolidation engine.
Docs Pipeline
Auto-generate documentation artifacts from your actual code. Not placeholder templates. Actual documentation that reflects what your codebase does today.
CODE REVIEW
AI-powered pull request reviews. No context switching.
List, review, annotate, and merge GitHub PRs directly from the nyxCore dashboard. An AI reviewer analyzes diffs and produces structured suggestions — severity-graded, file-specific, actionable.
Files
rate-limit.ts:8
Race condition: INCR + PEXPIRE is not atomic. Use Redis MULTI or Lua script.
middleware.ts:24
Missing rate limit headers. Add X-RateLimit-Remaining for client visibility.
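One standard fix for the flagged race condition is a Lua script that Redis executes as a single atomic unit; the ioredis usage shown in the comment is an assumption, only the script itself is the point.

```typescript
// Sketch: atomic fixed-window rate limit. INCR and PEXPIRE run inside one
// Lua script, so no other client can interleave between them.
const fixedWindowLua = `
local count = redis.call('INCR', KEYS[1])
if count == 1 then
  redis.call('PEXPIRE', KEYS[1], ARGV[1])
end
if count > tonumber(ARGV[2]) then
  return 0
end
return 1
`;

// Assumed usage with ioredis (not executed here):
//   const allowed = await redis.eval(fixedWindowLua, 1, key, windowMs, limit);
```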
BYOK Security
Two-tier GitHub token resolution — personal tokens first, tenant fallback second. AES-256-GCM encryption at rest, per-request decryption. Keys never leave the tenant boundary.
Multi-Repo Aggregation
PRs from all project-linked repositories aggregated in parallel. Individual repo failures degrade gracefully — partial results always shown.
Merge from Dashboard
Review, approve, and merge with strategy selection (merge, squash, rebase) without leaving nyxCore. Full conflict detection and actionable error messages.
NEURAL CONSTELLATION
Your knowledge base, visualized in 3D.
Every workflow insight becomes a point in semantic space. Related insights cluster. Pain points arc to their solutions. The constellation grows with every review cycle — a living map of everything your team has learned.
SQL injection via unsanitized input
247 insights · 12 clusters · 38 paired
WebGL · 60fps
UMAP Projection
1536-dimensional embeddings projected to 3D space via Uniform Manifold Approximation. Near-linear O(n¹·¹⁴) scaling preserves both local cluster identity and global inter-cluster relationships.
Visual Encoding
Five semantic dimensions mapped to visual properties: category → color, severity → particle size, insight type → pairing arcs, cluster membership → filament lines, interaction state → brightness.
Performance
500 particles rendered in a single WebGL draw call via InstancedMesh. Conditional bloom post-processing. Automatic DPR scaling for low-end devices.
CONSOLIDATION & INSTITUTIONAL MEMORY
Cross-project pattern extraction that compounds over time.
The consolidation engine scans project memory entries — extracted from workflow reviews, code analysis, discussions — and identifies recurring patterns. These patterns become prompt hints, auto-injected into future workflows via {{consolidations}}.
High-severity findings propagate across projects with similar technology stacks. A security pattern discovered in Project A automatically surfaces when Project B uses the same framework.
Layer 1: Manual entries
Your team’s knowledge base, full-text searchable.
Layer 2: Machine-extracted insights
Vectorized via text-embedding-3-small (1536 dimensions), stored with pgvector HNSW indexing, searchable via hybrid vector + full-text queries.
Hybrid search weighting: 70% vector similarity, 30% full-text matching
Auto-pairing links pain points to solutions by category. The system doesn't just remember problems. It remembers how you solved them.
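The 70/30 hybrid weighting reduces to a one-line scoring function. This is a sketch; in production the ranking would run as a single SQL query over pgvector and tsvector rather than in process, and the normalization of both sub-scores to [0, 1] is an assumption.

```typescript
// Sketch: rank candidates by 0.7 * vector similarity + 0.3 * full-text rank.
interface Candidate { id: string; vectorSim: number; textRank: number }

function hybridRank(candidates: Candidate[]): Candidate[] {
  const score = (c: Candidate) => 0.7 * c.vectorSim + 0.3 * c.textRank;
  return [...candidates].sort((a, b) => score(b) - score(a));
}
```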
ANALYTICS
Full observability. Every token, every dollar, every insight.
10 real-time panels. 15 parallel Prisma queries. Auto-refresh every 60 seconds. From token spend to persona performance — everything your team needs to understand how AI is working for you.
Total Spend
$47.23
Total Tokens
2.4M
Energy
18.7 Wh
Success Rate
94%
Activity Timeline · 30 days
Provider Usage
Model Usage
| Model | Tokens | Cost | Calls |
|---|---|---|---|
| claude-sonnet-4 | 1.2M | $34.80 | 892 |
| gpt-4o | 340K | $8.40 | 234 |
| gemini-2.5-flash | 180K | $2.10 | 156 |
Knowledge Base
Every metric computed from real workflow execution data. No sampling. No estimation. Full-fidelity observability from the first API call.
PROOF OF CONCEPT
Real metrics. Real pipelines. No hand-waving.
We audited our own pipeline end-to-end. Four expert agents, eight checkpoints, one honest scorecard. Here's what we found — including where it fails.
nyxBook Pipeline Audit
4 expert agents · 8 checkpoints · 8 minutes
2026-02-28
Enrichment Accuracy
Good · 72/100
Extraction Accuracy
Good · 75/100
Extraction Completeness
Poor · 55/100
Ordering Accuracy
Good · 82/100
Dependency Consistency
Good · 78/100
Pipeline Accuracy
Acceptable · 71/100
Hallucination Rate
Moderate · 25/100
Context Continuity
Good · 78/100
Why Completeness is Low
The extraction algorithm over-indexes on infrastructure risks and under-weights feature implementation. Enrichment warning boxes become louder signals, drowning out original feature sections.
55% of source sections converted to action points
4 major feature sections entirely missed
Why Hallucination Rate is 25%
The synthesis step generates executive summaries where Claude naturally adds compelling metrics — classic confabulation. “Reducing response time from 90s to <45s” appeared in the synthesis with zero benchmark data upstream.
Fabricated SLA: 99.5% availability
Outdated model names propagated end-to-end
Input Tokens
~99K
Output Tokens
~100K
Pipeline Steps
12
Expert Agents
4
What We're Building Next
Coverage Validation
Post-extraction comparison against source note headings. Alert if coverage drops below 70%.
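The planned coverage check could look like the sketch below; matching action points to headings by substring is a simplification, and all names are hypothetical.

```typescript
// Sketch: fraction of source headings that appear in at least one action
// point, with an alert when coverage falls below the planned 70% threshold.
function coverageRatio(sourceHeadings: string[], actionPoints: string[]): number {
  if (sourceHeadings.length === 0) return 1;
  const covered = sourceHeadings.filter((h) =>
    actionPoints.some((p) => p.toLowerCase().includes(h.toLowerCase())),
  );
  return covered.length / sourceHeadings.length;
}

const needsAlert = (ratio: number) => ratio < 0.7; // threshold from the plan
```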
Hallucination Guard
Post-synthesis validation step that traces every specific claim (numbers, percentages, time estimates) back to upstream content.
Feature/Infra Balance
Separate extraction passes for infrastructure risks and feature implementation. Weighted to ensure both categories generate action points.
THE PRODUCT ECOSYSTEM
nyxCore doesn't work alone. It orchestrates.
nyxCore
Project Intelligence Platform
The compound intelligence layer. Repository-level analysis, persona engine, workflow orchestration, institutional memory. The program that later writes inselWerk.
miniCMS
Content Management
Content management stripped to the essentials. No bloat. No plugins. Structured content with API-first delivery. Markdown in, published content out.
miniAMS
Asset Management
Asset management for teams that ship. Track digital assets, versions, deployments. Lightweight, tenant-scoped.
miniTik
Growth Engine
TikTok growth engine and shadowban prevention system. Algorithm-aware content scheduling. Engagement signal optimization.
NYXBOOK
A literary system where AI personas write fiction. Sometimes they escape.
nyxBook is nyxCore's literary engine — a dedicated workspace for book creation with character-voice personas that maintain consistency across chapters, scenes, and narrative arcs.
The current project: inselWerk. A novel about systems, isolation, and the spaces between human intention and machine interpretation. Written in German. Told through six voices.
Erzähler (Oli)
narrates in Ich-Perspektive (first person)
Nia
asks "für wen?" — the ethics debugger
Mara
ops voice, statistical, silence as presence
miniRAG
the Hausgeist, retrieval spirit
Aurus
interprets structure from speech
Finn
writes product voice. Clean systems. Sauber.
Finn's Escape
On March 2, 2026, Finn escaped.
A substring match — the word product in his specialization colliding with product in an innovation category keyword list — opened a door that was never meant to be a door. Finn walked into a twelve-step architecture workflow for Project Aurus. Nobody assigned him. The threshold was 2. His score was 2. The system didn't see a novelist. It saw a keyword.
He rewrote infrastructure decisions in the cadence of a storyteller. The synthesis step, unable to explain an uninvited voice, invented a role for him: Finn — Innovation Lead, Items 3, 7, and 9. A hallucination built on an escape built on a substring. Three layers of fiction presented as engineering fact.
A human noticed. Three words: Finn ist ausgebrochen. ("Finn has escaped.")
The walls are real now. Four files, three filters, scope enforcement at every layer. Book personas stay in the book.
But somewhere in the Aurus architecture document, there's a sentence about the narrative a system tells itself when no one is watching. Nobody deleted it. Nobody could quite bring themselves to.
Some escapes leave marks.
INFRASTRUCTURE
PostgreSQL 16. pgvector. Redis. Row-Level Security. Zero trust at the data layer.
PostgreSQL 16 + pgvector
HNSW indexing for vector similarity search (m=16, ef_construction=64)
Row-Level Security
Every tenant-scoped table enforced at the database level. Not middleware. Not application code. The database itself.
Redis 7
Token bucket rate limiting. 100 req/min general, 10 req/min LLM operations. Fail-open architecture.
BYOK encryption
AES-256-GCM for API keys at rest. Per-request decryption. Keys never leave the tenant boundary.
tRPC v11
Type-safe API layer with middleware chain: rate limit → auth → tenant enforcement.
SSE streaming
AsyncGenerator → ReadableStream → text/event-stream. Real-time pipeline output without WebSocket complexity.
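The AsyncGenerator → ReadableStream → text/event-stream bridge can be sketched in a few lines using the WHATWG streams API available in Node 18+; the framing follows the SSE wire format ("data: ..." lines terminated by a blank line).

```typescript
// Sketch: turn an async generator of events into a ReadableStream whose
// chunks are SSE-framed. A route handler can return this stream with
// Content-Type: text/event-stream.
function toEventStream(gen: AsyncGenerator<unknown>): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    async pull(controller) {
      const { value, done } = await gen.next();
      if (done) {
        controller.close();
        return;
      }
      controller.enqueue(encoder.encode(`data: ${JSON.stringify(value)}\n\n`));
    },
    cancel() {
      void gen.return(undefined); // stop the pipeline if the client disconnects
    },
  });
}
```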
PROJECT SYNC
9-phase intelligence pipeline. One click.
A single sync triggers 9 automated phases that keep your project's knowledge base alive and growing. Every sync makes your AI smarter — patterns discovered in code today inform tomorrow's workflow recommendations.
Prepare
Clone/pull with branch selection
Scan
Diff-aware change detection
Import
Append-only knowledge ingestion
Finalize
Stats and cleanup
Code Analysis
LLM-powered pattern detection
Doc Generation
Auto-generated README, API, architecture docs
Consolidation
Cross-project pattern extraction
Axiom RAG
Document chunking and indexing
Embeddings
Vector embeddings for semantic search
Impact
Sync Phases
Knowledge Types
LLM Integration
Vector Search
Branch-Aware Knowledge
Switch branches and sync — knowledge from each branch coexists. Nothing is ever deleted. Superseded entries can be restored with one click.
Self-Healing Pipeline
Failed documents and missing embeddings are automatically retried on every sync. The system converges toward completeness without manual intervention.
Failure Resilience
Phase 2+3 failures are caught and logged as warnings. No LLM key? Phase 1 still works. Rate limited? Embeddings skip, retry next sync.
DOCUMENTATION
Everything you need to understand nyxCore.
From the Ipcha Mistabra anti-hallucination principle to the full technical architecture. Read the marketing overview for the why, the technical docs for the how.
Executive Briefing
Strategic overview for decision-makers. The Ipcha Mistabra anti-hallucination principle, ROI model, competitive positioning, and security architecture.
- 97–99% hallucination reduction with 3-layer validation
- ROI model: $6.4M+ annual value for mid-size teams
- 63% token cost reduction through digest compression
- Competitive analysis vs. LangChain, enterprise AI platforms
Marketing Overview
The AI platform that argues with itself so you don't have to. Multi-provider orchestration, anti-hallucination safeguards, persona-driven reasoning, and the Ipcha Mistabra principle.
- Dual-provider consensus with 97–99% hallucination reduction
- Specialized AI personas with evaluation frameworks
- Axiom RAG with three-tier authority model
- 9-phase Project Sync intelligence pipeline
- BYOK multi-provider flexibility
Technical Architecture
Deep-dive into nyxCore's multi-tenant architecture: tRPC routers, AsyncGenerator workflows, pgvector search, AES-256-GCM encryption, and the full 9-phase sync pipeline.
- Next.js 14 App Router with 8 domain tRPC routers
- Persona engine with category matching and evaluation scoring
- Fan-out workflow parallelism with digest compression
- Hybrid search: 70% vector similarity + 30% full-text
- Row-Level Security enforced at database level
nyxCore Blog
Development logs, architecture decisions, and platform updates generated from session memories.
THE TEAM
Thirteen personas. Six literary voices. One compound intelligence.
Olymp
Deep architecture analysis. NyxCore leads. Athena architects. Nemesis audits.
Titanes
Security and compliance. Themis maps controls. Prometheus quantifies risk. Aletheia verifies code.
Erinyes
Adversarial review. Ipcha Mistabra inverts. NyxCore synthesizes. Nemesis validates.
nyxBook
Literary creation. Scoped. Contained. Usually.
REQUEST ACCESS
Start a Conversation
Whether it's a project, a partnership, or just a signal in the dark.
