Checklist · draft, not yet final

The Amplitude Audit Checklist

Every check the Amplitude Auditor runs, organised by category and importance — the same checklist Adasight uses on every client engagement. The importance flag tells you which findings to act on first.

[H] high importance — fix before the next analytical sprint. [M] medium — fix in the next governance pass. [L] low — worth knowing about but not blocking.

Events in Amplitude

  • [M] Number of implemented events

    Count of events present in the Amplitude taxonomy. Sanity check that the platform is actually receiving event types.

    Why this matters · Zero or very low event counts indicate the SDK isn't installed or the project is dormant. Very high counts (>200) often indicate event sprawl — generic events that should be consolidated.

    events-count-implemented
  • [H] Event naming — snake_case convention

    Events should follow snake_case naming (lowercase, words separated by underscores).

    Why this matters · Inconsistent naming conventions break analyst workflows, downstream pipelines, and dbt models. Mixed casing means duplicated reporting work and inconsistent dashboards over time.

    events-naming-snake-case
  • [L] Event names within reasonable length

    Event names should be ≤40 characters.

    Why this matters · Overlong event names hurt readability in dashboards and indicate the event is doing too much (combining what should be event + property).

    events-naming-length
  • [M] Event categories are in use

    Events should be organised into categories within Amplitude's Govern.

    Why this matters · Categories make the taxonomy navigable as it grows. Without them, finding the right event becomes a search problem instead of a structured browse.

    events-categories-used
  • [L] Hidden / deleted events housekeeping

    Track how many events are hidden or deleted vs. visible. Excessive hidden events suggest unfinished cleanup.

    Why this matters · A long tail of hidden/deleted events indicates the team has tried to clean up but didn't follow through. Worth surfacing in the audit so it can be resolved or accepted.

    events-hidden-events
  • [H] Event names — no spaces or special characters

    Event names should not contain spaces or special characters beyond underscores. Spaces in event names are a classic source of downstream bugs.

    Why this matters · Event names with spaces (`Sign Up Completed`) require constant escaping in queries, dashboards, and dbt models. They also tend to be camel-cased inconsistently across SDKs. Pick a clean name once.

    gov-event-special-chars
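
Several of the naming checks above are mechanical and easy to automate. A minimal Python sketch, assuming event names have already been exported from the taxonomy (the names and the 40-character limit below mirror the checks; the sample events are made up):

```python
import re

# Hypothetical event names, as if exported from an Amplitude taxonomy.
EVENT_NAMES = ["sign_up_completed", "Sign Up Completed", "signupFlowStep3", "checkout_started"]

SNAKE_CASE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")
MAX_LENGTH = 40

def check_event_name(name: str) -> list[str]:
    """Return a list of naming-convention violations for one event name."""
    issues = []
    if not SNAKE_CASE.fullmatch(name):
        issues.append("not snake_case")                    # [H] events-naming-snake-case
    if len(name) > MAX_LENGTH:
        issues.append(f"longer than {MAX_LENGTH} chars")   # [L] events-naming-length
    if re.search(r"[^a-z0-9_]", name):
        issues.append("space or special character")        # [H] gov-event-special-chars
    return issues

for name in EVENT_NAMES:
    problems = check_event_name(name)
    if problems:
        print(f"{name!r}: {', '.join(problems)}")
```

A real audit would pull names via the Taxonomy API rather than a hard-coded list, but the per-name logic stays this simple.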

Tracking Plan

  • [M] Events have descriptions

    Every event in the taxonomy should have a human-readable description so analysts know what it actually means.

    Why this matters · Undocumented events become tribal knowledge. Six months later, no one remembers when `signup_flow_step_3` fires. Descriptions are the cheapest form of institutional memory.

    plan-event-descriptions
  • [H] User properties — sufficient richness

    User properties enrich every event with stable user-level attributes (plan, signup date, role, etc.).

    Why this matters · Without user properties you can only segment by behavior in a session. With them you can answer 'did Pro plan users convert at a higher rate than Free?' without joining external systems.

    plan-user-properties-count
  • [H] Event properties — coverage on top events

    The most-used events should carry meaningful event properties — not just user properties.

    Why this matters · Event properties capture the *context* of what just happened (which button, which feature, which value). Without them, all you have is a count of fires — no ability to drill in.

    plan-event-properties-coverage
  • [L] Events with no documented properties

    Events that have zero documented properties in the taxonomy. Often signals a fire-and-forget event that lost its context.

    Why this matters · An event with no properties tells you something happened — but not the context. If an event consistently has no properties, ask whether it's actually useful or just noise.

    gov-event-required-properties
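
The description and property checks in this section reduce to scanning a taxonomy export. A rough sketch, assuming a hypothetical export shape (the field names below are illustrative, not Amplitude's API schema):

```python
# Hypothetical taxonomy rows, shaped roughly like an export from Amplitude's Govern area.
taxonomy = [
    {"event": "sign_up_completed", "description": "User finished onboarding", "properties": ["plan", "referrer"]},
    {"event": "signup_flow_step_3", "description": "", "properties": []},
]

# plan-event-descriptions: events with no human-readable description
undocumented = [e["event"] for e in taxonomy if not e["description"].strip()]

# gov-event-required-properties: fire-and-forget events with zero documented properties
propertyless = [e["event"] for e in taxonomy if not e["properties"]]

print("missing descriptions:", undocumented)
print("no documented properties:", propertyless)
```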

Feature Usage & Adoption

  • [M] Cohorts in use

    Number of cohorts defined. Cohorts are the foundation of segmentation — low usage means the team isn't slicing data.

    Why this matters · Without cohorts, every chart is a global view. Real product insight comes from comparing segments. A team with <5 cohorts after 6+ months is under-utilising the platform.

    feature-cohorts-count
  • [M] Behavioral cohort sophistication

    What share of cohorts are behavioral (defined by user actions) vs. lookup (uploaded ID lists)?

    Why this matters · Behavioral cohorts compound — they auto-update as new users meet the criteria. Lookup cohorts are static and decay. A healthy mix is 80%+ behavioral.

    feature-cohorts-behavioral
  • [L] Annotations in use

    Annotations mark key events on charts (releases, campaigns, incidents). Their presence indicates a culture of context-rich analysis.

    Why this matters · Without annotations, anomalies in charts have to be explained from memory or git logs. Annotations turn analytics into a shared timeline the whole team can reason from.

    feature-annotations
  • [M] Dashboards in use

    Number of saved dashboards. Active dashboard usage is a leading indicator of healthy data culture.

    Why this matters · Dashboards are how non-analysts consume Amplitude. Zero or very low counts mean the analytics team is doing all the work in ad-hoc charts — analysis stays trapped in their heads, doesn't compound.

    feature-dashboards
  • [L] Saved charts in use

    Number of saved charts. Charts are the unit of analysis — saved (vs. throwaway) charts indicate the team is building reusable analytical assets.

    Why this matters · Throwaway analyses are forgotten. Saved charts compound — they get linked to dashboards, referenced in docs, embedded in Slack. Low save rates indicate ad-hoc culture.

    feature-charts
  • [M] Alerts configured

    Number of alerts (smart anomaly + custom threshold). Alerts move the team from reactive to proactive.

    Why this matters · Without alerts, problems are discovered when someone opens a dashboard. With alerts, the team is notified the moment something breaks. The cost of one missed regression is higher than 50 false-positive alerts.

    feature-alerts
  • [L] Notebooks in use

    Number of notebooks. Notebooks are where Amplitude stops being a query tool and starts being a report-sharing tool.

    Why this matters · Notebooks let analysts share narrative analyses (with charts, text, and links) instead of dumping screenshots into Slack. Adoption is a leading indicator of analytical maturity.

    feature-notebooks
  • [L] Cohort definitions — recently used

    Cohorts that have not been computed recently are likely abandoned. A cohort that hasn't run in 30+ days is taking up registry space.

    Why this matters · Healthy analytics teams retire cohorts that no longer serve their purpose. Stale cohorts clutter the search surface and confuse new analysts about which cohort to use.

    gov-cohort-recency
  • [L] Annotations — recent activity

    When was the last annotation added? If it's been months, the team probably stopped maintaining annotations — or never started.

    Why this matters · Annotations only help future analysis if they're current. A team that annotated a release in February but not the three since has lost the value.

    gov-annotations-recency
  • [L] Cohorts — owner attributed

    Cohorts should have an owner. Orphan cohorts (no owner attribution) tend to be poorly maintained and ambiguous in intent.

    Why this matters · Cohorts without owners decay in clarity over time — no one is sure why they exist or whether they're still correct. Adding an owner is cheap insurance.

    gov-cohort-ownership
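
Three of the cohort checks above (behavioral share, recency, ownership) can be computed directly from cohort metadata. A minimal sketch, assuming hypothetical field names rather than the Amplitude API schema:

```python
from datetime import date, timedelta

# Hypothetical cohort metadata; field names are illustrative, not Amplitude's API schema.
cohorts = [
    {"name": "activated_last_7d", "type": "behavioral",
     "last_computed": date.today() - timedelta(days=2), "owner": "ana"},
    {"name": "q1_webinar_list", "type": "lookup",
     "last_computed": date.today() - timedelta(days=90), "owner": None},
]

# feature-cohorts-behavioral: healthy mix is 80%+ behavioral
behavioral_share = sum(c["type"] == "behavioral" for c in cohorts) / len(cohorts)

# gov-cohort-recency: cohorts not computed in 30+ days are likely abandoned
stale = [c["name"] for c in cohorts if (date.today() - c["last_computed"]).days > 30]

# gov-cohort-ownership: orphan cohorts with no owner attribution
orphans = [c["name"] for c in cohorts if c["owner"] is None]
```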

Data Sources & Quality

  • [M] User properties — snake_case naming

    User property keys should follow snake_case (lowercase, words separated by underscores).

    Why this matters · Property naming inconsistencies create downstream pain in dashboards, dbt models, and shared analyses. The cost compounds.

    ds-user-properties-naming
  • [M] Group analytics — configured for B2B

    If the product is B2B (account/workspace/organisation pattern), group analytics should be configured.

    Why this matters · Without group analytics, B2B teams cannot easily answer 'which accounts are healthy / churning' or run cohort analyses at the company level. Adding it after the fact is expensive — it's a re-instrumentation effort.

    ds-group-analytics
  • [L] Top events — share of meaningful business events

    A sanity check: of the top 10 events, how many look like business actions vs. generic page views or auto-collected events?

    Why this matters · When the top events are dominated by `Page Viewed`, `[Amplitude] Active Session`, and similar, the team is over-tracking generic activity and under-tracking meaningful business actions — which inverts the value of the data.

    ds-event-volume-meaningful-share
  • [M] Suspected duplicate events — naming variants

    Detects events whose names differ only in casing, punctuation, or trivial wording (e.g., `signup` vs `sign_up` vs `Signup`). Strong signal of taxonomy drift.

    Why this matters · Duplicate events fragment data — half the users on `signup`, half on `sign_up`, neither dashboard tells the truth. Common pattern when teams scale without governance.

    ds-event-doubles
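
Naming-variant detection like this usually works by normalising names and grouping the collisions. A minimal sketch (the event names below are made up):

```python
import re
from collections import defaultdict

def normalise(name: str) -> str:
    """Collapse casing and punctuation so trivial variants map to the same key."""
    return re.sub(r"[^a-z0-9]+", "", name.lower())

events = ["signup", "sign_up", "Signup", "checkout_started", "Checkout Started"]

groups = defaultdict(list)
for name in events:
    groups[normalise(name)].append(name)

# ds-event-doubles: any normalised key with 2+ original spellings is a suspected duplicate
duplicates = {key: names for key, names in groups.items() if len(names) > 1}
```

This catches casing and punctuation variants; trivial wording differences (`signup` vs `signup_completed`) would need a fuzzier comparison, such as edit distance.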

That's 24 of the 90+ checks shipping in Sprint 2. The full list — including Session Replay, Guides & Surveys, and Web Experimentation checks — lands alongside the public-facing audit UI.

Want to skip the manual work?

Run an automated audit on your project.

Same checklist, run automatically against your Amplitude configuration. ~60 seconds. Configuration metadata only.