Faculytics Docs

Core Components

Technology stack, module architecture, login strategies, cron jobs, and analysis pipeline components.

This document describes the high-level components, technology stack, and module architecture of the api.faculytics project.

1. System Overview

api.faculytics serves as an intermediary layer between Moodle and local institutional data. Its primary responsibilities include:

  • Authentication: Authenticating users via Moodle tokens and issuing local JWTs.
  • Data Synchronization: Mirroring Moodle's institutional hierarchy (Campuses, Semesters, Departments, Programs) and course enrollments.
  • Entity Management: Maintaining a normalized local database for analytics and extended features.
  • Questionnaire Management: Managing weighted questionnaires for student and faculty feedback. See Questionnaire Management for detailed architecture.

2. Technology Stack

  • Backend Framework: NestJS (v10+)
  • Database ORM: MikroORM with PostgreSQL
  • Authentication: Passport.js (JWT and Refresh Token strategies)
  • External API: Moodle Web Services (REST)
  • Task Scheduling: NestJS Schedule (Cron)
  • Caching: @nestjs/cache-manager with Redis (@keyv/redis)
  • Job Queue: BullMQ (@nestjs/bullmq) on Redis
  • Health Checks: @nestjs/terminus with custom indicators
  • Validation: Zod (Environment variables), class-validator (DTOs)

3. Module Architecture

The application is structured into Infrastructure and Application layers, coordinated by the AppModule.

4. Login Strategy Pattern

Authentication uses a priority-based strategy pattern (src/modules/auth/strategies/). Each strategy implements the LoginStrategy interface:

  • CanHandle(localUser, body): Determines if this strategy applies to the login request.
  • Execute(em, localUser, body): Performs authentication and returns the user + optional Moodle token.
  • priority: Numeric ordering (lower = higher precedence).

| Strategy | Priority | When it handles |
| --- | --- | --- |
| LocalLoginStrategy | 10 | User exists and has a local password |
| MoodleLoginStrategy | 100 | User has no local password or doesn't exist yet |

Priority ranges: 0-99 core auth, 100-199 external providers, 200+ fallbacks. To add a new provider, implement LoginStrategy and register it under the LOGIN_STRATEGIES injection token.
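
A minimal sketch of this contract, assuming simplified user/body shapes (the real `em` is a MikroORM EntityManager, elided here as `unknown`); method names and priorities follow the docs, everything else is illustrative:

```typescript
// Sketch of the LoginStrategy contract and priority-based resolution.
interface LoginBody { username: string; password?: string }
interface LocalUser { id: number; passwordHash?: string }
interface LoginResult { user: LocalUser; moodleToken?: string }

interface LoginStrategy {
  priority: number; // lower = higher precedence
  CanHandle(localUser: LocalUser | null, body: LoginBody): boolean;
  Execute(em: unknown, localUser: LocalUser | null, body: LoginBody): Promise<LoginResult>;
}

class LocalLoginStrategy implements LoginStrategy {
  priority = 10;
  CanHandle(localUser: LocalUser | null): boolean {
    return localUser?.passwordHash !== undefined;
  }
  async Execute(_em: unknown, localUser: LocalUser | null): Promise<LoginResult> {
    return { user: localUser! }; // password verification elided
  }
}

class MoodleLoginStrategy implements LoginStrategy {
  priority = 100;
  CanHandle(localUser: LocalUser | null): boolean {
    return localUser?.passwordHash === undefined;
  }
  async Execute(_em: unknown, localUser: LocalUser | null): Promise<LoginResult> {
    // Would authenticate against Moodle; a user is provisioned if none exists yet.
    return { user: localUser ?? { id: -1 }, moodleToken: 'moodle-token' };
  }
}

// The auth service tries strategies in ascending priority order.
function pickStrategy(strategies: LoginStrategy[], user: LocalUser | null, body: LoginBody): LoginStrategy {
  const hit = [...strategies].sort((a, b) => a.priority - b.priority).find((s) => s.CanHandle(user, body));
  if (!hit) throw new Error('No login strategy can handle this request');
  return hit;
}
```

A provider registered under LOGIN_STRATEGIES only needs to slot into this ordering; the sort makes registration order irrelevant.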

5. Cron Jobs

Background jobs extend BaseJob and register in StartupJobRegistry. All jobs are in src/crons/jobs/.

| Job | Schedule | Purpose |
| --- | --- | --- |
| CategorySyncJob | Startup + cron | Syncs Moodle categories to local hierarchy |
| CourseSyncJob | Startup + cron | Syncs Moodle courses |
| EnrollmentSyncJob | Startup + cron | Syncs user-course enrollments and roles; invalidates enrollment cache |
| RefreshTokenCleanupJob | Every 12 hours | Purges refresh tokens older than 7 days |

6. Moodle Connectivity & Error Handling

The MoodleClient enforces a 10-second timeout (MOODLE_REQUEST_TIMEOUT_MS) on all Moodle API calls via AbortSignal.timeout(). Network failures are wrapped in MoodleConnectivityError:

  • Timeout: "Moodle request timed out during {operation}"
  • Connection failure: "Failed to connect to Moodle service during {operation}"
  • General network error: "Network error during Moodle {operation}"

The MoodleLoginStrategy catches MoodleConnectivityError and translates it to a 401 Unauthorized with a user-friendly message.
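
The timeout and message templates above can be sketched like this; the injectable `doRequest` stands in for the real fetch call so the wrapping is testable without a network, and the connection-failure branch is elided:

```typescript
// Sketch of MoodleClient's timeout enforcement and error wrapping.
class MoodleConnectivityError extends Error {}

const MOODLE_REQUEST_TIMEOUT_MS = 10_000;

async function callMoodle<T>(
  operation: string,
  doRequest: (signal: AbortSignal) => Promise<T>,
): Promise<T> {
  try {
    // AbortSignal.timeout() aborts the underlying request once the deadline passes.
    return await doRequest(AbortSignal.timeout(MOODLE_REQUEST_TIMEOUT_MS));
  } catch (err) {
    if (err instanceof Error && err.name === 'TimeoutError') {
      throw new MoodleConnectivityError(`Moodle request timed out during ${operation}`);
    }
    // A connection-refused check would map to "Failed to connect to Moodle service…".
    throw new MoodleConnectivityError(`Network error during Moodle ${operation}`);
  }
}
```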

7. Analysis Pipeline

The AnalysisModule provides a multi-stage analysis pipeline that orchestrates AI processing of qualitative feedback. See AI Inference Pipeline for the full architecture and Analysis Pipeline Workflow for the stage-by-stage flow.

Pipeline Orchestrator

The PipelineOrchestratorService manages the full analysis lifecycle through a confirm-before-execute pattern:

  1. Create — Computes coverage stats (response rate, submission/comment counts) and generates warnings
  2. Confirm — Validates configuration and dispatches the first stage
  3. Stage progression — Each processor calls back into the orchestrator to advance to the next stage
  4. Terminal states — COMPLETED, FAILED, or CANCELLED

Components

| Component | Purpose |
| --- | --- |
| PipelineOrchestratorService | Creates pipelines, manages stage transitions, dispatches batch jobs |
| AnalysisService | Low-level entry point — EnqueueJob() and EnqueueBatch() for ad-hoc jobs |
| AnalysisController | REST API for pipeline CRUD (POST/GET /analysis/pipelines) |
| BaseBatchProcessor | Abstract base — HTTP dispatch, Zod validation, retry, stall detection |
| RunPodBatchProcessor | RunPod-specific subclass — auth headers, { input/output } envelope handling |
| SentimentProcessor | Batch sentiment analysis, triggers sentiment gate on completion |
| TopicModelProcessor | Batch topic modeling via RunPod, chunked assignment persistence |
| TopicLabelService | LLM-based labeling of BERTopic topics (gpt-4o-mini, inline before recommendations) |
| RecommendationGenerationService | Builds LLM prompts from DB data, calls OpenAI, computes confidence and evidence |
| RecommendationsProcessor | BullMQ processor — delegates to RecommendationGenerationService, persists results |
| EmbeddingProcessor | Per-submission embedding generation (upsert, extends BaseAnalysisProcessor) |

Pipeline Stages

AWAITING_CONFIRMATION → SENTIMENT_ANALYSIS → SENTIMENT_GATE → TOPIC_MODELING → TOPIC_LABELING → GENERATING_RECOMMENDATIONS → COMPLETED

Each stage has a corresponding RunStatus (PENDING → PROCESSING → COMPLETED / FAILED).

Queue Architecture

Four BullMQ queues with independent concurrency:

| Queue | Processor | Default concurrency |
| --- | --- | --- |
| sentiment | SentimentProcessor | 3 |
| embedding | EmbeddingProcessor | 3 |
| topic-model | TopicModelProcessor | 1 |
| recommendations | RecommendationsProcessor | 1 |
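
Expressed as code, the defaults above amount to a per-queue concurrency map like the following; the map and the `workerOptions` helper are hypothetical, only the queue names and numbers come from the table:

```typescript
// Per-queue concurrency defaults, as worker options a BullMQ setup might consume.
const QUEUE_CONCURRENCY: Record<string, number> = {
  sentiment: 3,       // SentimentProcessor
  embedding: 3,       // EmbeddingProcessor
  'topic-model': 1,   // TopicModelProcessor
  recommendations: 1, // RecommendationsProcessor
};

// Hypothetical helper producing the options for one worker.
function workerOptions(queue: string): { concurrency: number } {
  return { concurrency: QUEUE_CONCURRENCY[queue] ?? 1 };
}
```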

REST Endpoints

| Method | Path | Description |
| --- | --- | --- |
| POST | /analysis/pipelines | Create a pipeline (returns coverage stats + warnings) |
| POST | /analysis/pipelines/:id/confirm | Confirm and start execution |
| POST | /analysis/pipelines/:id/cancel | Cancel a non-terminal pipeline |
| GET | /analysis/pipelines/:id/status | Get pipeline status with stage details |
| GET | /analysis/pipelines/:id/recommendations | Get recommendations for a completed pipeline |

Resilience: Exponential backoff retries, stall detection, graceful degradation when Redis is unavailable (ServiceUnavailableException), HTTP timeout via AbortController.
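
The exponential-backoff retry mentioned above can be sketched as follows; the attempt count, base delay, and injectable `sleep` are illustrative assumptions (in practice BullMQ's built-in backoff settings play this role):

```typescript
// Sketch of exponential backoff: the delay doubles after each failed attempt.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // 100ms, 200ms, 400ms, … before the next attempt.
      if (attempt < attempts - 1) await sleep(baseDelayMs * 2 ** attempt);
    }
  }
  throw lastError;
}
```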

Local development: docker compose up starts Redis and a mock worker (Hono HTTP server on port 3001) that simulates worker responses.

8. Health Checks

The HealthModule uses @nestjs/terminus to provide structured health checks at GET /health:

| Indicator | Checks |
| --- | --- |
| database | SELECT 1 via MikroORM EntityManager |
| redis | Read/write test via cache manager |

Returns HTTP 200 with status: 'ok' when healthy, HTTP 503 with status: 'error' and per-indicator details when unhealthy.
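
The roll-up from indicator results to the response above can be sketched like this; the result shape is terminus-style, and the stand-in records replace the real database and Redis probes:

```typescript
// Sketch: aggregate per-indicator results into the /health response.
type IndicatorResult = { status: 'up' | 'down'; message?: string };

function aggregateHealth(indicators: Record<string, IndicatorResult>): {
  httpStatus: number;
  body: { status: 'ok' | 'error'; details: Record<string, IndicatorResult> };
} {
  const healthy = Object.values(indicators).every((i) => i.status === 'up');
  return {
    httpStatus: healthy ? 200 : 503,
    body: { status: healthy ? 'ok' : 'error', details: indicators },
  };
}
```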

9. Startup & Initialization Flow

The application enforces a strict initialization sequence in InitializeDatabase before it begins accepting traffic. This ensures that the database schema and required infrastructure state are always synchronized with the code.

  1. Migration (orm.migrator.up()): Automatically applies any pending database migrations.
  2. Infrastructure Seeding (orm.seeder.seed(DatabaseSeeder)): Executes idempotent seeders (e.g., DimensionSeeder) to populate required reference data.
  3. Application Bootstrap: Only after both steps succeed does app.listen() execute. If any step fails, the process exits with code 1.
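
The three-step sequence above can be sketched with injected steps standing in for orm.migrator.up(), orm.seeder.seed(DatabaseSeeder), and app.listen():

```typescript
// Sketch of the strict initialization order: migrate, seed, then listen.
async function initializeAndListen(steps: {
  migrate: () => Promise<void>;
  seed: () => Promise<void>;
  listen: () => Promise<void>;
}): Promise<void> {
  await steps.migrate(); // 1. apply pending migrations
  await steps.seed();    // 2. idempotent infrastructure seeding
  await steps.listen();  // 3. only now accept traffic
  // On any rejection the real bootstrap exits the process with code 1.
}
```

Because each step awaits the previous one, a failed migration guarantees the server never starts listening against a stale schema.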