Anomaly Feed

Zero-configuration detection of latency spikes, error bursts, retry storms, and dependency degradation — from day one.

The anomaly feed is a chronological stream of detected anomalies across your instrumented services. It provides debugging value from day one — without waiting for an incident, without configuring alerts, and without requiring more than one instrumented service.

What gets detected

Anomaly type             Trigger condition                                         Baseline
Latency spike            p95 exceeds 3x the 24-hour rolling p95                    Rolling 24h window
Error burst              Error rate exceeds 5% over a 1-minute window              Baseline below 1%
Retry storm              Retry count exceeds 10x the 1-hour rolling average        Rolling 1h window
Dependency degradation   Ghost service error rate or latency threshold exceeded   Per-service baseline

Detection runs incrementally on CE ingest, not in a batch job. Anomalies appear within minutes of the pattern emerging.
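The trigger conditions in the table above can be sketched as simple threshold checks. This is an illustrative sketch only — the function names and data shapes are assumptions, not Incidentary's internal implementation.

```python
from statistics import quantiles

def p95(samples):
    """95th percentile of a list of latency samples (ms)."""
    # quantiles(n=20) yields 19 cut points; the last one is p95.
    return quantiles(sorted(samples), n=20)[-1]

def is_latency_spike(current_window_ms, rolling_24h_ms):
    """Latency spike: current p95 exceeds 3x the 24-hour rolling p95."""
    return p95(current_window_ms) > 3 * p95(rolling_24h_ms)

def is_error_burst(errors, total, baseline_rate):
    """Error burst: error rate over a 1-minute window exceeds 5%,
    measured against a baseline below 1%."""
    return baseline_rate < 0.01 and total > 0 and errors / total > 0.05

def is_retry_storm(retries_last_minute, rolling_1h_avg):
    """Retry storm: retry count exceeds 10x the 1-hour rolling average."""
    return retries_last_minute > 10 * rolling_1h_avg
```

Because each check compares a short current window against a rolling baseline, it can run incrementally on each ingest batch rather than as a periodic batch job.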

What the feed looks like

Today
|-- 14:32  Latency spike — payments service p95: 120ms -> 890ms
|          Affected traces: 3 | Duration: 8 min | Resolved
|
|-- 11:15  Error burst — inventory service 503 rate hit 12%
|          Affected traces: 47 | Duration: 2 min | Auto-resolved
|
|-- 09:44  Retry storm — shipping service: 340 retries in 60s
|          Affected traces: 12 | Duration: 4 min | Auto-resolved

Each entry links to the affected traces, so you can drill into the causal chain directly from the anomaly.

Informational, not interruptive

Anomalies are surfaced in the feed — they do not send push notifications, trigger pages, or create alerts. This is a deliberate design choice. Alert fatigue is the enemy of trust. Incidentary integrates with your existing alerting tools (PagerDuty, OpsGenie, Slack) for incident triggers. The anomaly feed is for proactive observation, not reactive interruption.

Severity levels

Severity   Meaning
info       Mild deviation from baseline — worth noting, not urgent
warning    Significant deviation — investigate when convenient
critical   Large deviation — likely affecting users, investigate promptly
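One way to picture the three levels is as bands on the ratio of an observed value to its baseline. The cutoffs below are invented for illustration — Incidentary does not document its exact severity thresholds.

```python
def severity(observed, baseline):
    """Classify an anomaly by how far `observed` deviates from `baseline`.
    The 5x and 10x cutoffs are hypothetical, chosen only to illustrate
    the info / warning / critical bands."""
    ratio = observed / baseline if baseline else float("inf")
    if ratio >= 10:
        return "critical"  # large deviation, likely user-facing
    if ratio >= 5:
        return "warning"   # significant deviation
    return "info"          # mild deviation, worth noting
```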

Zero configuration

The anomaly feed requires no setup beyond installing the SDK. There are no thresholds to configure, no alert rules to write, and no dashboards to build. Detection starts as soon as there is enough baseline data (typically 24 hours of CE ingest).

The thresholds are intentionally conservative. False positives erode trust faster than false negatives. If an anomaly appears in the feed, it is likely worth looking at.

Filtering

The feed can be filtered by:

  • Service: show anomalies for a specific service
  • Type: latency spike, error burst, retry storm, or dependency degradation
  • Severity: info, warning, or critical
  • Time range: focus on a specific period
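The four filters above compose as simple predicates over feed entries. The dict keys and function signature here are assumptions for illustration, not Incidentary's actual API.

```python
def filter_feed(entries, service=None, anomaly_type=None,
                severity=None, start=None, end=None):
    """Return feed entries matching every filter that is set.
    Each entry is assumed to be a dict with 'service', 'type',
    'severity', and a comparable 'time' field."""
    def keep(e):
        if service and e["service"] != service:
            return False
        if anomaly_type and e["type"] != anomaly_type:
            return False
        if severity and e["severity"] != severity:
            return False
        if start and e["time"] < start:
            return False
        if end and e["time"] > end:
            return False
        return True
    return [e for e in entries if keep(e)]
```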

Deploy correlation

When you report deploys via POST /api/v1/deploys, the anomaly feed correlates anomalies with recent deployments. An anomaly that started shortly after a deploy shows the deploy SHA and deployer, so you can quickly determine whether the deploy caused the issue.
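The correlation itself reduces to a time-window match: given an anomaly's start time, find the most recent reported deploy that landed shortly before it. The 30-minute window and record shapes below are assumptions for illustration, not documented behavior.

```python
from datetime import datetime, timedelta

# Hypothetical correlation window; the real cutoff is not documented here.
CORRELATION_WINDOW = timedelta(minutes=30)

def correlate(anomaly_start, deploys):
    """Return the latest deploy within the window before anomaly_start,
    or None. Each deploy is assumed to be a dict with 'sha', 'deployer',
    and 'time' (datetime) fields, as reported via POST /api/v1/deploys."""
    candidates = [
        d for d in deploys
        if timedelta(0) <= anomaly_start - d["time"] <= CORRELATION_WINDOW
    ]
    return max(candidates, key=lambda d: d["time"], default=None)
```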

Acknowledging anomalies

You can acknowledge an anomaly to indicate that it has been seen and is being handled. Acknowledged anomalies remain in the feed but are visually de-emphasized.
