Faculytics Docs

Recommendations

In-process LLM recommendation generation — deprecated external worker, current queue schema, and output entities.

Deprecated: Recommendations are no longer generated by an external HTTP worker. The RecommendationGenerationService calls the OpenAI API directly from within the NestJS process. See AI Inference Pipeline -- Recommendation Generation for the current architecture.

The RECOMMENDATIONS_WORKER_URL environment variable is still accepted but unused. The BullMQ recommendations queue remains active for retry semantics and pipeline stage progression, but dispatches in-process work rather than HTTP requests.
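The in-process dispatch described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the class and method names other than RecommendationGenerationService (which the text names) are assumptions, and the real processor is wired through NestJS/BullMQ decorators omitted here.

```typescript
// Sketch: the queue processor no longer issues an HTTP request to an
// external worker; it awaits the injected service directly, so a thrown
// error still triggers BullMQ's retry semantics.

interface RecommendationsJobData {
  jobId: string;
  metadata: { pipelineId: string; runId: string };
}

// Assumed interface for the service named in the text; the method name
// generateForRun is hypothetical.
interface RecommendationGenerationService {
  generateForRun(pipelineId: string, runId: string): Promise<void>;
}

class RecommendationsProcessor {
  constructor(private readonly service: RecommendationGenerationService) {}

  // Called once per queue job; rejecting the promise marks the job failed
  // and lets the queue apply its retry/backoff policy.
  async process(job: { data: RecommendationsJobData }): Promise<void> {
    const { pipelineId, runId } = job.data.metadata;
    await this.service.generateForRun(pipelineId, runId);
  }
}
```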

Current Schema

Source of Truth: src/modules/analysis/dto/recommendations.dto.ts

Job Message

The queue job contains only pipeline/run metadata (no aggregated data payload):

{
  jobId: string; // UUID
  version: string; // "1.0"
  type: 'recommendations';
  metadata: {
    pipelineId: string;
    runId: string;
  };
  publishedAt: string; // ISO 8601
}
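A message matching this shape could be constructed as below. The buildRecommendationsJob helper is hypothetical (the real publisher lives elsewhere in the codebase); the field values follow the schema above.

```typescript
import { randomUUID } from "node:crypto";

interface RecommendationsJobMessage {
  jobId: string;       // UUID
  version: string;     // "1.0"
  type: "recommendations";
  metadata: {
    pipelineId: string;
    runId: string;
  };
  publishedAt: string; // ISO 8601
}

// Hypothetical helper: builds a queue message for a given pipeline run.
function buildRecommendationsJob(
  pipelineId: string,
  runId: string,
): RecommendationsJobMessage {
  return {
    jobId: randomUUID(),
    version: "1.0",
    type: "recommendations",
    metadata: { pipelineId, runId },
    publishedAt: new Date().toISOString(),
  };
}
```

Note that the message deliberately carries no aggregated data payload; the processor re-reads whatever it needs using the pipelineId and runId.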

Output Entity

Each RecommendedAction entity stores:

| Field | Type | Description |
| --- | --- | --- |
| category | STRENGTH \| IMPROVEMENT | Positive finding vs. area for improvement |
| headline | text | Short title (5-10 words) |
| description | text | 1-2 sentences on the observed pattern |
| actionPlan | text | 2-4 sentences with concrete steps |
| priority | HIGH \| MEDIUM \| LOW | Urgency level |
| supportingEvidence | JSONB | Structured sources with confidence score |
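The table above maps to a TypeScript shape like the following. The type reflects the documented fields; the isRecommendedAction guard is an illustrative assumption, not part of the codebase (the actual entity is defined with ORM decorators).

```typescript
// Field names and enums follow the table; supportingEvidence is JSONB,
// so it is left as unknown here.
type Category = "STRENGTH" | "IMPROVEMENT";
type Priority = "HIGH" | "MEDIUM" | "LOW";

interface RecommendedAction {
  category: Category;
  headline: string;            // short title (5-10 words)
  description: string;         // 1-2 sentences on the observed pattern
  actionPlan: string;          // 2-4 sentences with concrete steps
  priority: Priority;
  supportingEvidence: unknown; // structured sources with confidence score
}

// Hypothetical runtime guard, e.g. for validating LLM output before
// persisting it as a RecommendedAction row.
function isRecommendedAction(value: unknown): value is RecommendedAction {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    (v.category === "STRENGTH" || v.category === "IMPROVEMENT") &&
    typeof v.headline === "string" &&
    typeof v.description === "string" &&
    typeof v.actionPlan === "string" &&
    (v.priority === "HIGH" || v.priority === "MEDIUM" || v.priority === "LOW")
  );
}
```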

Versioning

The RecommendationRun.workerVersion field stores the OpenAI model name used for generation (e.g., gpt-4o-mini), configured via RECOMMENDATIONS_MODEL.
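Resolving the model name might look like the sketch below. The fallback value is an assumption for illustration (the doc only gives gpt-4o-mini as an example); the actual default and config mechanism live in the application's configuration layer.

```typescript
// Hypothetical resolver: reads RECOMMENDATIONS_MODEL from the environment,
// falling back to a default model name (the fallback is an assumption).
function resolveRecommendationsModel(
  env: Record<string, string | undefined> = process.env,
): string {
  return env.RECOMMENDATIONS_MODEL ?? "gpt-4o-mini";
}
```

Whatever value is resolved here is what ends up recorded in RecommendationRun.workerVersion for each run.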