Recommendations
In-process LLM recommendation generation — deprecated external worker, current queue schema, and output entities.
Deprecated: Recommendations are no longer generated by an external HTTP worker. The RecommendationGenerationService calls the OpenAI API directly from within the NestJS process. See AI Inference Pipeline – Recommendation Generation for the current architecture.
The RECOMMENDATIONS_WORKER_URL environment variable is still accepted but unused. The BullMQ recommendations queue remains active for retry semantics and pipeline stage progression, but dispatches in-process work rather than HTTP requests.
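The in-process dispatch can be sketched as below. The names here (RecommendationsJob, processRecommendationsJob, generateForRun) are illustrative assumptions, not the actual identifiers in src/modules/analysis, and the BullMQ wiring is omitted:

```typescript
// Sketch: a queue processor that dispatches work in-process instead of
// issuing an HTTP request to an external worker. All names are
// illustrative assumptions; BullMQ Worker/Queue setup is omitted.

interface RecommendationsJob {
  jobId: string;                 // UUID
  version: string;               // "1.0"
  type: 'recommendations';
  metadata: { pipelineId: string; runId: string };
  publishedAt: string;           // ISO 8601
}

class RecommendationGenerationService {
  public calls: string[] = [];

  // Stand-in for the method that calls the OpenAI API directly.
  async generateForRun(pipelineId: string, runId: string): Promise<void> {
    this.calls.push(`${pipelineId}/${runId}`);
  }
}

// The processor receives only pipeline/run metadata and invokes the
// service in the same NestJS process; no HTTP request is made.
async function processRecommendationsJob(
  job: RecommendationsJob,
  service: RecommendationGenerationService,
): Promise<void> {
  const { pipelineId, runId } = job.metadata;
  await service.generateForRun(pipelineId, runId);
}
```

Because the queue still mediates dispatch, BullMQ's retry semantics and pipeline stage progression are preserved even though no network hop occurs.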
Current Schema
Source of Truth: src/modules/analysis/dto/recommendations.dto.ts
Job Message
The queue job contains only pipeline/run metadata (no aggregated data payload):
```ts
{
  jobId: string;        // UUID
  version: string;      // "1.0"
  type: 'recommendations';
  metadata: {
    pipelineId: string;
    runId: string;
  };
  publishedAt: string;  // ISO 8601
}
```

Output Entity
Each RecommendedAction entity stores:
| Field | Type | Description |
|---|---|---|
| category | STRENGTH \| IMPROVEMENT | Positive finding vs. area for improvement |
| headline | text | Short title (5-10 words) |
| description | text | 1-2 sentences on the observed pattern |
| actionPlan | text | 2-4 sentences with concrete steps |
| priority | HIGH \| MEDIUM \| LOW | Urgency level |
| supportingEvidence | JSONB | Structured sources with confidence score |
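The stored shape can be sketched as a TypeScript type. The field names and enum values follow the table above; the inner structure of supportingEvidence and the isRecommendedAction helper are assumptions for illustration:

```typescript
// Sketch of the RecommendedAction output shape, mirroring the table above.
// The concrete shape of supportingEvidence is an assumption: the entity only
// guarantees a JSONB column holding structured sources and a confidence score.

type RecommendationCategory = 'STRENGTH' | 'IMPROVEMENT';
type RecommendationPriority = 'HIGH' | 'MEDIUM' | 'LOW';

interface RecommendedAction {
  category: RecommendationCategory;
  headline: string;            // short title (5-10 words)
  description: string;         // 1-2 sentences on the observed pattern
  actionPlan: string;          // 2-4 sentences with concrete steps
  priority: RecommendationPriority;
  supportingEvidence: unknown; // JSONB: structured sources + confidence score
}

// Illustrative narrowing helper for rows read back from the database.
function isRecommendedAction(value: any): value is RecommendedAction {
  return (
    ['STRENGTH', 'IMPROVEMENT'].includes(value?.category) &&
    typeof value?.headline === 'string' &&
    typeof value?.description === 'string' &&
    typeof value?.actionPlan === 'string' &&
    ['HIGH', 'MEDIUM', 'LOW'].includes(value?.priority)
  );
}
```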
Versioning
The RecommendationRun.workerVersion field stores the OpenAI model name used for generation (e.g., gpt-4o-mini), configured via RECOMMENDATIONS_MODEL.
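A minimal sketch of how the model name might be resolved from configuration; the function name and the gpt-4o-mini fallback are assumptions for illustration, not the service's actual default:

```typescript
// Sketch: resolving the model name that gets recorded in
// RecommendationRun.workerVersion. The fallback value is an assumption.

function resolveRecommendationsModel(
  env: Record<string, string | undefined>,
): string {
  return env.RECOMMENDATIONS_MODEL ?? 'gpt-4o-mini';
}
```

In the service this would typically be called with process.env, so the recorded workerVersion always reflects the model actually used for the run.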