#llm-monitoring

A one-line system prompt change dropped model quality from 84% to 52%. How are people monitoring semantic quality in production?

Reddit r/AI_Agents · 5d ago

A developer shares their experience of a single system prompt change degrading LLM response quality without triggering traditional monitoring alerts, and describes internal tooling they built to monitor semantic quality in production LLM applications.
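The post does not describe the internal tooling in detail, so the following is only an illustrative sketch of the core idea: score each response with a pluggable semantic scorer (an LLM-as-judge call, embedding similarity to reference answers, etc.), keep a rolling mean, and alert when it drops below a threshold. The class name, the injected `score_fn`, and the specific parameters are all assumptions for illustration.

```python
from collections import deque


class SemanticQualityMonitor:
    """Rolling monitor for LLM response quality scores.

    `score_fn(prompt, response)` is a stand-in for whatever semantic
    scorer is in use (LLM-as-judge, embedding similarity, rubric
    grading); it is injected so the alerting logic stays testable.
    """

    def __init__(self, score_fn, window=50, alert_threshold=0.7):
        self.score_fn = score_fn
        self.scores = deque(maxlen=window)  # rolling window of recent scores
        self.alert_threshold = alert_threshold

    def observe(self, prompt, response):
        """Score one production response and record it."""
        self.scores.append(self.score_fn(prompt, response))

    def mean_score(self):
        """Mean quality over the current window, or None if empty."""
        return sum(self.scores) / len(self.scores) if self.scores else None

    def should_alert(self):
        """True once the rolling mean falls below the threshold."""
        m = self.mean_score()
        return m is not None and m < self.alert_threshold
```

A regression like the one in the post (84% → 52%) would surface here as the rolling mean crossing below the alert threshold a window or so after the prompt change ships, even though latency and error-rate dashboards stay green.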


Most injection detectors score each prompt in isolation. I built one that tracks the geometric trajectory of the full session. Here is a concrete result.

Reddit r/artificial · 2026-04-20

A developer built Arc Gate, a monitoring proxy for LLMs that applies Fisher information manifold geometry to detect session-level prompt injection attacks. Rather than relying on per-turn phrase detection, it tracks a session-level t-value against a phase-transition threshold t* = 1.2247, catching Crescendo-style gradual manipulation that no single turn would trigger.
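Arc Gate's actual computation rests on Fisher information manifold geometry, which the summary does not spell out. The sketch below shows only the session-level thresholding idea in the abstract: accumulate a per-turn drift signal into a decaying session trajectory and alert once it crosses the fixed threshold, so gradual escalation trips the alarm while isolated spicy turns do not. The decay factor, the `observe_turn` interface, and the per-turn risk scores are illustrative assumptions, not Arc Gate's method; only the threshold value t* = 1.2247 comes from the post.

```python
T_STAR = 1.2247  # phase-transition threshold cited in the post


class SessionTrajectoryMonitor:
    """Illustrative session-level accumulator (not Arc Gate's geometry).

    Each turn contributes a drift score in [0, 1] from any per-turn
    detector; the trajectory decays between turns so that only
    sustained escalation pushes the t-value past the threshold.
    """

    def __init__(self, threshold=T_STAR, decay=0.9):
        self.threshold = threshold
        self.decay = decay          # assumed decay factor, for illustration
        self.t_value = 0.0          # session trajectory state

    def observe_turn(self, turn_risk):
        """Fold one turn's drift score into the session trajectory."""
        self.t_value = max(0.0, self.decay * self.t_value + turn_risk)
        return self.t_value

    def is_escalating(self):
        """True once the session trajectory crosses the threshold."""
        return self.t_value > self.threshold
```

With a decay of 0.9, a steady benign drift of 0.05 per turn converges to a t-value of 0.5 and never alerts, while a Crescendo-style ramp (0.1, 0.2, 0.3, ...) crosses t* within a handful of turns. That is the qualitative behavior the post claims per-prompt detectors miss.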
