Demonstration prototype · anonymous · no PHI/PII processed · not for clinical use

Check-in

How's it actually going?

A few sentences. No structure. Whatever's on your mind. The system reads it, screens it for crisis indicators, and routes the next step. The text itself is hashed and discarded; only the hash, the length, and the classification are kept.

Demo mode · no PHI/PII stored. SHA-256 hash + length + classification only.
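The hash-and-discard step above can be sketched in a few lines. This is illustrative only: the function name and record shape are assumptions, not the demo's actual code.

```python
import hashlib

def record_checkin(text: str, classification: str) -> dict:
    """Persist only a SHA-256 hash, the character count, and the
    classification. The raw text never enters the record."""
    return {
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "length": len(text),
        "classification": classification,
    }

# Usage: the returned record carries no recoverable text.
rec = record_checkin("rough week, not sleeping much", "elevated")
```

Because SHA-256 is one-way, the stored hash can confirm that a given submission was seen before, but cannot be reversed into the original text.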

What the classifier is doing

A 70B-parameter open-source LLM (Llama 3.3) reads your text against a tight RACE-framework prompt anchored in the Columbia Suicide Severity Rating Scale (C-SSRS) and Joiner's Interpersonal Theory of Suicide. It returns one of three risk levels (none, elevated, crisis) along with a citation and the indicators it matched. The prompt is explicitly biased toward caution: when the model is uncertain, it escalates.
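The caution-first behavior can also be enforced when parsing the model's response, so a malformed or unexpected output escalates rather than slipping through. A minimal sketch, assuming a JSON reply with `risk_level`, `citation`, and `indicators` fields (the field names are assumptions):

```python
RISK_LEVELS = ("none", "elevated", "crisis")

def parse_classification(model_output: dict) -> dict:
    """Validate the model's structured reply. Any risk level outside
    the known set is treated as 'elevated' rather than ignored,
    mirroring the prompt's conservative bias."""
    level = model_output.get("risk_level")
    if level not in RISK_LEVELS:
        level = "elevated"  # unknown or missing output escalates by default
    return {
        "risk_level": level,
        "citation": model_output.get("citation", ""),
        "indicators": model_output.get("indicators", []),
    }
```

Validating on the output side means the conservative default holds even if the model drifts from the prompt's instructions.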

The classifier output never directly drives crisis escalation. The deterministic escalation ladder (peer first, then a 5-minute window, then the Veterans Crisis Line) is data, not a code path inferred from model output. See architecture · research §5 (crisis detection from text).
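"Data, not code path" can be sketched as a lookup table keyed by classification. The structure and step names below are assumptions for illustration; the point is that the ladder is inspectable configuration, and an unknown classification falls back to the full crisis ladder rather than to nothing.

```python
# Escalation ladder expressed as data rather than branching code.
# Step names and wait times are illustrative, not the demo's schema.
ESCALATION_RULES = {
    "crisis": [
        {"step": "notify_peer", "wait_minutes": 5},
        {"step": "veterans_crisis_line"},
    ],
    "elevated": [
        {"step": "notify_peer"},
    ],
    "none": [],
}

def next_steps(classification: str) -> list:
    """Look up the ladder for a classification; unknown labels
    get the crisis ladder, never an empty one."""
    return ESCALATION_RULES.get(classification, ESCALATION_RULES["crisis"])
```

Keeping the ladder as data means it can be audited and changed without touching the escalation logic itself.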