The Decision System
The Decision System is the core architectural pattern that provides transparency and control over AI-driven DSL extraction and code generation.
Overview
Every LLM interaction produces a structured Decision object that encapsulates what was discovered or modified, the LLM's reasoning and confidence, the current phase and readiness, and what is needed next.
This creates a transparent, debuggable, and reversible system where humans maintain control.
The Decision Object
interface Decision {
// What was discovered/modified
entities: Array<{
name: string
fields: Record<string, Field>
relationships: string[]
confidence: number
reasoning: string
}>
// Current state assessment
phase: 'discovering' | 'clarifying' | 'proposing' | 'confirming' | 'building'
readiness: number // 0-100% ready to build
// LLM's reasoning (transparency)
reasoning: string
confidence: number
// What's needed next
questions: string[]
missingPieces: string[]
suggestedActions: string[]
// Metadata
messageId: string
timestamp: Date
modelUsed: string
tokensUsed: number
}
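The object above is what the model must return; since raw LLM output can be malformed, it is worth validating before trusting it. A minimal sketch (`parseDecision`, `RawDecision`, and the specific checks are illustrative assumptions, not the documented API):

```typescript
// Guard for parsing raw LLM output into a decision before trusting it.
// `parseDecision` and `RawDecision` are illustrative, not the documented API.
interface RawDecision {
  entities?: unknown;
  phase?: string;
  confidence?: number;
  [extra: string]: unknown;
}

const PHASES = ["discovering", "clarifying", "proposing", "confirming", "building"];

function parseDecision(json: string): RawDecision | null {
  let raw: unknown = null;
  try {
    raw = JSON.parse(json);
  } catch {
    return null; // malformed JSON from the model
  }
  if (raw === null || typeof raw !== "object") return null;
  const d = raw as RawDecision;
  // Reject structurally invalid decisions instead of applying them blindly
  if (!Array.isArray(d.entities)) return null;
  if (!PHASES.includes(d.phase ?? "")) return null;
  if (typeof d.confidence !== "number" || d.confidence < 0 || d.confidence > 1) return null;
  return d;
}
```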
Why Decisions Matter
1. Transparency
Every DSL change can be traced back to a specific decision:
// User can see exactly why an entity was added
{
decision: {
entities: [{
name: 'User',
confidence: 0.95,
reasoning: "User mentioned 'users need to login', indicating authentication system"
}]
}
}
2. Debuggability
When something goes wrong, you can examine the decision chain:
// Trace back through decisions to find issues
const decisions = await getDecisionHistory(chatId);
decisions.forEach(d => {
console.log(`${d.timestamp}: ${d.reasoning}`);
console.log(`Confidence: ${d.confidence}`);
console.log(`Changes: ${d.entities.length} entities`);
});
3. Human Override
Decisions can be reviewed and corrected before applying:
interface CachedDecision {
decision: Decision
humanFeedback?: {
approved: boolean
corrections?: Partial<Decision>
notes?: string
timestamp: Date
}
}
// Human can correct misunderstandings
if (decision.confidence < 0.8) {
const review = await requestHumanReview(decision);
if (review.corrections) {
decision = mergeCorrections(decision, review.corrections);
}
}
4. Intelligent Caching
Decisions are the cacheable unit, not just responses:
// Cache similar decisions for faster responses
const cacheKey = hashDecisionContext({
message: userMessage,
currentEntities: Object.keys(dsl.entities),
phase: dsl.phase
});
const cachedDecision = await cache.get(cacheKey);
if (cachedDecision && cachedDecision.confidence > 0.9) {
return cachedDecision;
}
Decision Flow Architecture
graph TD
A[User Message] --> B[LLM Processing]
B --> C[Generate Decision]
C --> D{Confidence Check}
D -->|High| E[Auto-Apply]
D -->|Medium| F[Request Confirmation]
D -->|Low| G[Request Clarification]
E --> H[Update DSL]
F --> I[Human Review]
I --> H
G --> J[Generate Questions]
J --> A
H --> K[Store Decision]
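The confidence branch in the diagram can be sketched as a routing function. The cutoff values here are illustrative; the concrete values the system uses are listed in the Confidence Thresholds section:

```typescript
// Sketch of the confidence branch in the flow diagram; thresholds are
// illustrative assumptions, not the system's canonical values.
type Route = "auto-apply" | "confirm" | "clarify";

function routeDecision(confidence: number): Route {
  if (confidence >= 0.9) return "auto-apply"; // High → Update DSL directly
  if (confidence >= 0.5) return "confirm";    // Medium → Human Review
  return "clarify";                           // Low → Generate Questions
}
```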
Decision to Command Pattern
Decisions generate reversible commands for DSL modification:
interface DSLCommand {
type: 'ADD_ENTITY' | 'MODIFY_FIELD' | 'ADD_RELATIONSHIP' | 'REMOVE_ENTITY'
target: string // Entity or field path
payload: any // The change to make
confidence: number
reasoning: string
reversible: boolean
undo?: () => DSLCommand // Reverse command
}
// Convert decision to commands
function decisionToCommands(decision: Decision): DSLCommand[] {
const commands: DSLCommand[] = []
for (const entity of decision.entities) {
// dslContext holds the current DSL state; only emit commands for new entities
if (!dslContext.entities[entity.name]) {
commands.push({
type: 'ADD_ENTITY',
target: entity.name,
payload: entity,
confidence: entity.confidence,
reasoning: decision.reasoning,
reversible: true,
undo: () => ({
type: 'REMOVE_ENTITY',
target: entity.name,
payload: null,
confidence: 1.0,
reasoning: 'Undo entity addition',
reversible: true
})
})
}
}
return commands
}
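To make the reversibility concrete, here is a hedged sketch (simplified types, not the documented implementation) of applying a command to a plain entity map and rolling it back via its `undo()` factory:

```typescript
// Sketch: applying a DSLCommand-style object to a plain entity map and
// reversing it with its undo() factory. Types are simplified assumptions.
type EntityMap = Record<string, unknown>;

interface Cmd {
  type: "ADD_ENTITY" | "REMOVE_ENTITY";
  target: string;
  payload: unknown;
  undo?: () => Cmd;
}

function applyCommand(entities: EntityMap, cmd: Cmd): void {
  if (cmd.type === "ADD_ENTITY") entities[cmd.target] = cmd.payload;
  else if (cmd.type === "REMOVE_ENTITY") delete entities[cmd.target];
}

const entities: EntityMap = {};
const addUser: Cmd = {
  type: "ADD_ENTITY",
  target: "User",
  payload: { fields: { email: { type: "string" } } },
  undo: () => ({ type: "REMOVE_ENTITY", target: "User", payload: null }),
};

applyCommand(entities, addUser);          // "User" is now present
applyCommand(entities, addUser.undo!());  // rolled back: map is empty again
```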
Confidence Scoring
Multi-Level Confidence
interface ConfidenceScores {
// Overall decision confidence
overall: number
// Confidence per component
entities: Record<string, number>
fields: Record<string, number>
relationships: Record<string, number>
// Factors affecting confidence
factors: {
explicitness: number // How explicit was the mention?
consistency: number // Consistent with context?
completeness: number // Complete information?
domainMatch: number // Matches known patterns?
}
}
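One plausible way to reduce the factor scores to an overall confidence is a weighted average; the weights below are illustrative assumptions, not values the system prescribes:

```typescript
// Weighted average of confidence factors; the weights are assumptions.
interface Factors {
  explicitness: number; // how explicit was the mention?
  consistency: number;  // consistent with context?
  completeness: number; // complete information?
  domainMatch: number;  // matches known patterns?
}

const WEIGHTS: Factors = {
  explicitness: 0.4,
  consistency: 0.25,
  completeness: 0.2,
  domainMatch: 0.15,
};

function overallConfidence(f: Factors): number {
  return (Object.keys(WEIGHTS) as (keyof Factors)[])
    .reduce((sum, k) => sum + f[k] * WEIGHTS[k], 0);
}
```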
Confidence Thresholds
const CONFIDENCE_THRESHOLDS = {
AUTO_APPLY: 0.9, // Apply without confirmation
SUGGEST: 0.7, // Suggest with explanation
CLARIFY: 0.5, // Need clarification
IGNORE: 0.3 // Too uncertain
}
function handleDecision(decision: Decision) {
if (decision.confidence >= CONFIDENCE_THRESHOLDS.AUTO_APPLY) {
return applyAutomatically(decision);
}
if (decision.confidence >= CONFIDENCE_THRESHOLDS.SUGGEST) {
return suggestWithExplanation(decision);
}
if (decision.confidence >= CONFIDENCE_THRESHOLDS.CLARIFY) {
return requestClarification(decision);
}
return null; // Ignore low confidence
}
Decision Examples
High Confidence Decision
{
"entities": [
{
"name": "User",
"fields": {
"email": { "type": "string", "required": true },
"password": { "type": "string", "required": true }
},
"confidence": 0.95,
"reasoning": "User explicitly mentioned 'users login with email and password'"
}
],
"phase": "clarifying",
"readiness": 35,
"confidence": 0.92,
"reasoning": "Clear authentication requirements specified",
"questions": [],
"missingPieces": ["User roles", "Password reset flow"],
"suggestedActions": ["Add user roles", "Implement forgot password"]
}
Low Confidence Decision
{
"entities": [
{
"name": "Report",
"fields": {},
"confidence": 0.4,
"reasoning": "User mentioned 'reporting' but unclear if entity or feature"
}
],
"phase": "discovering",
"readiness": 15,
"confidence": 0.45,
"reasoning": "Ambiguous requirements need clarification",
"questions": [
"What kind of reports do you need?",
"Are reports generated from existing data or created manually?",
"Who creates and views reports?"
],
"missingPieces": ["Report structure", "Report types", "Report generation"],
"suggestedActions": ["Clarify reporting requirements"]
}
Decision Storage
Event Sourcing Decisions
CREATE TABLE decision_log (
id UUID PRIMARY KEY,
chatId UUID NOT NULL,
messageId UUID NOT NULL,
decision JSONB NOT NULL,
confidence DECIMAL(3,2),
applied BOOLEAN DEFAULT false,
humanFeedback JSONB,
createdAt TIMESTAMPTZ DEFAULT NOW()
);
-- Query decision history
SELECT
decision->>'phase' as phase,
AVG((decision->>'confidence')::numeric) as avg_confidence,
COUNT(*) as decision_count
FROM decision_log
WHERE chatId = $1
GROUP BY decision->>'phase'
ORDER BY MIN(createdAt);
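A small sketch of preparing a decision for that table; `decisionToRow`, `LoggedDecision`, and the parameter ordering are assumptions matching the schema above:

```typescript
// Builds the parameter array for the decision_log INSERT above.
// LoggedDecision is a minimal assumed shape, not the full Decision interface.
interface LoggedDecision {
  chatId: string;
  messageId: string;
  confidence: number;
  [extra: string]: unknown;
}

const INSERT_SQL =
  "INSERT INTO decision_log (id, chatId, messageId, decision, confidence) " +
  "VALUES ($1, $2, $3, $4, $5)";

function decisionToRow(id: string, d: LoggedDecision): unknown[] {
  // The full decision goes into the JSONB column; confidence is duplicated
  // into its own column so it can be filtered without JSON extraction.
  return [id, d.chatId, d.messageId, JSON.stringify(d), d.confidence];
}
```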
Decision Analytics
interface DecisionAnalytics {
totalDecisions: number
averageConfidence: number
acceptanceRate: number // % approved by users
correctionRate: number // % needing corrections
byPhase: Record<Phase, {
count: number
avgConfidence: number
avgEntities: number
}>
commonPatterns: Array<{
pattern: string
frequency: number
confidence: number
}>
}
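A sketch of deriving a few of these metrics from stored decisions, with a simple `approved` flag standing in for the nested humanFeedback:

```typescript
// Computes a subset of DecisionAnalytics from an in-memory decision list.
interface StoredDecision {
  confidence: number;
  approved?: boolean; // stands in for humanFeedback.approved
}

function summarize(decisions: StoredDecision[]) {
  const total = decisions.length;
  const averageConfidence =
    total === 0 ? 0 : decisions.reduce((s, d) => s + d.confidence, 0) / total;
  // Acceptance rate only counts decisions a human actually reviewed
  const reviewed = decisions.filter((d) => d.approved !== undefined);
  const acceptanceRate =
    reviewed.length === 0
      ? 0
      : reviewed.filter((d) => d.approved).length / reviewed.length;
  return { totalDecisions: total, averageConfidence, acceptanceRate };
}
```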
Learning from Decisions
Feedback Loop
class DecisionLearning {
async learnFromFeedback(decision: Decision, feedback: HumanFeedback) {
// Store feedback
await this.storeFeedback(decision.id, feedback);
// Update pattern matching
if (feedback.corrections) {
await this.updatePatterns(decision, feedback.corrections);
}
// Adjust confidence weights
if (!feedback.approved) {
await this.adjustConfidenceWeights(decision.factors);
}
// Retrain if significant corrections
if (feedback.corrections && this.significantCorrection(feedback)) {
await this.scheduleRetraining(decision.context);
}
}
}
Pattern Recognition
// Learn from successful decisions
const successfulPatterns = await db.query(`
SELECT
decision->'reasoning' as pattern,
COUNT(*) as usage_count,
AVG((decision->>'confidence')::numeric) as avg_confidence
FROM decision_log
WHERE
humanFeedback->>'approved' = 'true'
AND confidence > 0.8
GROUP BY pattern
HAVING COUNT(*) > 5
`);
// Apply learned patterns to new decisions
function applyLearnedPatterns(context: Context): Decision {
const matchingPatterns = findMatchingPatterns(context, successfulPatterns);
if (matchingPatterns.length > 0) {
// Use proven patterns with higher confidence
return generateDecisionFromPattern(matchingPatterns[0]);
}
// Fall back to standard processing
return generateDecision(context);
}
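`findMatchingPatterns` is not defined above; a naive word-overlap version might look like the sketch below (a production system would more likely use embeddings):

```typescript
// Hypothetical pattern matcher: ranks learned reasoning patterns by how
// much of the pattern's wording appears in the new message.
interface LearnedPattern {
  pattern: string;
  confidence: number;
}

function findMatchingPatterns(message: string, patterns: LearnedPattern[]): LearnedPattern[] {
  const words = new Set(message.toLowerCase().split(/\W+/).filter(Boolean));
  return patterns
    .map((p) => {
      const pw = p.pattern.toLowerCase().split(/\W+/).filter(Boolean);
      const overlap = pw.filter((w) => words.has(w)).length;
      return { p, score: pw.length === 0 ? 0 : overlap / pw.length };
    })
    .filter((x) => x.score >= 0.5) // require at least half the pattern's words
    .sort((a, b) => b.score - a.score)
    .map((x) => x.p);
}
```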
Decision UI Components
Decision Review Panel
function DecisionReviewPanel({ decision }: { decision: Decision }) {
return (
<div className="decision-panel">
<div className="confidence-meter">
<ProgressBar value={decision.confidence * 100} />
<span>{Math.round(decision.confidence * 100)}% confident</span>
</div>
<div className="reasoning">
<h4>AI Reasoning:</h4>
<p>{decision.reasoning}</p>
</div>
<div className="changes">
<h4>Proposed Changes:</h4>
{decision.entities.map(entity => (
<EntityChange
key={entity.name}
entity={entity}
confidence={entity.confidence}
/>
))}
</div>
<div className="actions">
<Button onClick={() => applyDecision(decision)}>
Apply Changes
</Button>
<Button variant="secondary" onClick={() => modifyDecision(decision)}>
Modify
</Button>
<Button variant="ghost" onClick={() => rejectDecision(decision)}>
Reject
</Button>
</div>
</div>
);
}
Best Practices
1. Always Generate Decisions
Never modify the DSL directly; always route changes through decisions:
// ❌ Bad: Direct modification
dsl.entities.User = { fields: {...} }
// ✅ Good: Through decision
const decision = generateDecision(context);
const commands = decisionToCommands(decision);
applyCommands(commands);
2. Track Decision Lineage
Maintain parent-child relationships between decisions:
interface Decision {
id: string
parentId?: string // Links to previous decision
// ... other fields
}
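Given the `parentId` link, the full lineage of any decision can be reconstructed by walking the chain; a small sketch:

```typescript
// Walks parentId links to reconstruct a decision's full lineage,
// oldest ancestor first.
interface LinkedDecision {
  id: string;
  parentId?: string;
}

function getLineage(all: LinkedDecision[], id: string): LinkedDecision[] {
  const byId = new Map(all.map((d) => [d.id, d] as const));
  const chain: LinkedDecision[] = [];
  let current = byId.get(id);
  while (current) {
    chain.unshift(current); // prepend so the root ends up first
    current = current.parentId ? byId.get(current.parentId) : undefined;
  }
  return chain;
}
```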
3. Implement Rollback
Allow undoing decisions:
async function rollbackDecision(decisionId: string) {
const decision = await getDecision(decisionId);
const commands = decisionToCommands(decision);
// Apply undo operations in reverse of the original command order
for (const command of commands.reverse()) {
if (command.undo) {
await applyCommand(command.undo());
}
}
}
4. Monitor Decision Quality
// Track metrics
const metrics = {
decisionsPerConversation: avg(decisionsCount),
confidenceOverTime: trackProgression(confidences),
userSatisfaction: measureApprovalRate(),
errorRate: countRejections() / totalDecisions
};
// Alert on quality issues
if (metrics.errorRate > 0.2) {
alertTeam('High decision error rate detected');
}