Report Generation


8. Report Generation System

The report generation system transforms workflow, auto-fix, and refactor run data into styled reports. Each report is shaped by a combination of report style (format/audience), persona (voice/expertise), and structured context (run data).

8.1 Four Report Styles

Each style is defined by a system prompt instruction in STYLE_TEMPLATES:

| Style | Audience | Instruction |
|---|---|---|
| `executive` | C-suite, stakeholders | Focus on key metrics, business impact, strategic recommendations. Concise and action-oriented. |
| `security` | Security teams, auditors | Include risk levels, vulnerability summary, remediation priority, CVSS-like severity language. |
| `marketing` | End-users, customers | Release notes format: "What Changed" and "Why It Matters" sections. User-facing benefits. |
| `technical` | Engineers, developers | Code-level findings, architecture impact, file paths, implementation notes. |
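The style table can be sketched as a typed map. This is an illustrative shape for `STYLE_TEMPLATES`, not a verbatim copy of the source; the instruction strings are paraphrased from the table above:

```typescript
// Hypothetical shape of STYLE_TEMPLATES: each report style maps to the
// system-prompt instruction injected for that style.
type ReportStyle = "executive" | "security" | "marketing" | "technical";

const STYLE_TEMPLATES: Record<ReportStyle, string> = {
  executive:
    "Focus on key metrics, business impact, and strategic recommendations. Be concise and action-oriented.",
  security:
    "Include risk levels, a vulnerability summary, remediation priority, and CVSS-like severity language.",
  marketing:
    'Use a release-notes format with "What Changed" and "Why It Matters" sections. Emphasize user-facing benefits.',
  technical:
    "Report code-level findings, architecture impact, file paths, and implementation notes.",
};
```

A `Record<ReportStyle, string>` makes the style union exhaustive: adding a fifth style to the union without a template becomes a compile error.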

Provider Resolution

The report generator uses a fallback chain for provider selection:

```typescript
async function resolveWorkingProvider(tenantId: string, preferredProvider?: string): Promise<LLMProvider> {
  // 1. Use the preferred provider if specified (failures here propagate to the caller)
  if (preferredProvider) return resolveProvider(preferredProvider, tenantId);

  // 2. Fall through the LLM_PROVIDERS list: anthropic -> openai -> google -> ollama -> kimi
  for (const providerName of LLM_PROVIDERS) {
    try { return await resolveProvider(providerName, tenantId); }
    catch { continue; }
  }
  throw new Error("No working LLM provider available");
}
```

Generation Configuration

| Parameter | Value |
|---|---|
| Temperature | 0.5 |
| Max Tokens | 4,096 |
| System Prompt | Persona prompt (if any) + style template |
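A minimal sketch of how this configuration might be assembled into a completion request. The names `CompletionRequest` and `buildRequest` are illustrative, not the documented API:

```typescript
// Hypothetical request shape combining the config table above with the
// persona-first, style-second system prompt composition.
interface CompletionRequest {
  messages: { role: "system" | "user"; content: string }[];
  temperature: number;
  maxTokens: number;
}

function buildRequest(styleInstruction: string, context: string, personaPrompt?: string): CompletionRequest {
  // Persona voice first (if any), style format second
  const systemParts = personaPrompt ? [personaPrompt, styleInstruction] : [styleInstruction];
  return {
    messages: [
      { role: "system", content: systemParts.join("\n\n") },
      { role: "user", content: `Generate a report for this run:\n\n${context}` },
    ],
    temperature: 0.5,
    maxTokens: 4096,
  };
}
```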

8.2 Context Formatting

Three context formatters convert run data into structured markdown:

formatAutoFixContext(runId, tenantId)

```markdown
# AutoFix Run: {owner}/{repo}
Status: {status}
Issues Found: {count}

## Issues
### {title} [{severity}] [{category}]
File: {filePath}:{lineStart}
{detail}

**Fix**: {fixExplanation}
```

Issues are ordered by ascending severity rank (critical first), then by creation date.
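The ordering rule can be sketched as a comparator. The numeric rank values here are an assumption about the schema, not the documented representation:

```typescript
// Lower rank = more severe, so an ascending sort puts critical first.
// These rank values are illustrative.
const SEVERITY_RANK: Record<string, number> = { critical: 0, high: 1, medium: 2, low: 3 };

interface Issue { title: string; severity: string; createdAt: Date; }

function sortIssues(issues: Issue[]): Issue[] {
  return [...issues].sort(
    (a, b) =>
      (SEVERITY_RANK[a.severity] ?? 99) - (SEVERITY_RANK[b.severity] ?? 99) ||
      a.createdAt.getTime() - b.createdAt.getTime() // tie-break: oldest first
  );
}
```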

formatRefactorContext(runId, tenantId)

```markdown
# Refactor Run: {owner}/{repo}
Status: {status}
Opportunities Found: {count}

## Opportunities
### {title} [{difficulty}] [{impact}] [{category}]
Files: {filePaths.join(", ")}
{detail}

**Improvement**: {explanation}
```

Items are ordered by ascending impact rank (high first), then by difficulty.

formatWorkflowContext(workflowId, tenantId)

```markdown
# Workflow: {name}
Status: {status}

## Steps
### {label} [{status}]
Provider: {provider} / {model}
{digest ?? output.content}
```

Steps appear in execution order. Each step uses its digest when available, falling back to the raw output, with displayed content capped at 2,000 characters.
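A sketch of the digest fallback and truncation rule; the helper name is hypothetical:

```typescript
// Prefer the precomputed digest; otherwise fall back to the raw step output,
// capped at 2,000 characters to keep the context compact.
const MAX_STEP_CHARS = 2000;

function stepBody(step: { digest?: string; output: { content: string } }): string {
  const text = step.digest ?? step.output.content;
  return text.length > MAX_STEP_CHARS ? text.slice(0, MAX_STEP_CHARS) : text;
}
```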

8.3 Persona-Driven Reports

When a persona is selected for report generation, the persona's system prompt is prepended to the style instruction:

```typescript
const systemParts: string[] = [];
if (opts.personaSystemPrompt) {
  systemParts.push(opts.personaSystemPrompt);  // Persona voice first
}
systemParts.push(styleInstruction);  // Style format second

// System message = persona + style
messages.push({ role: "system", content: systemParts.join("\n\n") });
```

This creates a layered instruction: the persona defines who is writing (voice, expertise, perspective), while the style defines how to write (format, audience, focus areas).

Persona usage is tracked asynchronously after generation:

```typescript
if (opts.personaId) {
  incrementPersonaUsage(opts.personaId).catch(() => {});
}
```
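The unawaited call with `.catch(() => {})` detaches usage tracking from the report path: a failed increment can never fail the report or surface as an unhandled rejection. A self-contained sketch, with illustrative names:

```typescript
// Fire-and-forget tracking: the increment is started but never awaited,
// and its errors are swallowed so they cannot affect report generation.
function trackPersonaUsage(increment: (id: string) => Promise<void>, personaId?: string): void {
  if (personaId) {
    increment(personaId).catch(() => {});
  }
}
```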

8.4 Example: Security Report for an AutoFix Run

```mermaid
sequenceDiagram
  participant UI as Dashboard
  participant API as tRPC Router
  participant RC as formatAutoFixContext()
  participant RG as generateReport()
  participant LLM as LLM Provider
  UI->>API: autoFix.generateReport({ runId, style: "security", personaId })
  API->>RC: formatAutoFixContext(runId, tenantId)
  RC-->>API: Structured markdown context
  API->>API: Load persona system prompt
  API->>RG: generateReport({<br/>reportStyle: "security",<br/>personaSystemPrompt: "You are a CISO...",<br/>context: markdown,<br/>featureLabel: "AutoFix"<br/>})
  RG->>LLM: complete([<br/>system: persona + security style,<br/>user: "Generate a security report..."<br/>])
  LLM-->>RG: Report content
  RG->>RG: incrementPersonaUsage(personaId)
  RG-->>API: ReportResult
  API-->>UI: { content, model, tokenUsage, costEstimate }
```

System prompt composition for a security report with a "CISO" persona:

```text
[Persona]
You are a seasoned Chief Information Security Officer with 20 years of experience
in enterprise security. You think in terms of risk frameworks, compliance requirements,
and organizational impact...

[Style]
Write a security-focused report. Include risk levels, vulnerability summary,
remediation priority, and use CVSS-like severity language.
```

User prompt:

```markdown
Generate a security report for this AutoFix run:

# AutoFix Run: acme/api-gateway
Status: completed
Issues Found: 12

## Issues
### SQL injection in query builder [critical] [Security]
File: src/db/queries.ts:47
Raw user input concatenated into SQL string...
**Fix**: Use parameterized queries via Prisma.sql template literals

### Missing rate limit on /api/auth/login [high] [Security]
File: src/app/api/auth/route.ts:23
No rate limiting on authentication endpoint...
**Fix**: Add Redis-backed rate limiter (10 req/min)
...
```

Output: a full security report with a risk matrix, CVSS-aligned severity ratings, a remediation timeline, and an executive summary, all written in the CISO persona's voice.


Part 2 complete. Covers sections 5-8: Project & Consolidation, Workflow Intelligence Pipeline, Energy & Cost Models, and Report Generation.