Customising the Dashboard
Learn how to customise the QA/QC Dashboard, switch between project and hub views, and use scope controls to aggregate or compare project data.
Accessing the Dashboard
Go to the Dashboard tab on the QA/QC page and pick a hub from the selector at the top. You can view QA data in two ways:
- This project — the traditional single-project view. Select a project from the hub/project selector and the dashboard loads data for that project only. All other QA tabs (Run Check, Rules, Zone Templates, Check History) continue to work on the selected project as before.
- All projects in hub — roll up QA data from every project in the hub. Available as soon as a hub is selected (no project required). A separate multi-project picker lets you narrow the roll-up to a specific subset of projects.
The hub-wide toggle and project multi-select are unique to the Dashboard tab — they don't affect any other QA tab.
If no hub/project is selected, an onboarding prompt is shown. If the current scope has no completed check runs, a prompt directs you to the Run Check tab.
Dashboard Scope Controls (Dashboard tab only)
A scope bar appears under the main hub/project selector whenever the Dashboard tab is active and a hub is selected:
- This project / All projects in hub — switches between single-project and hub-wide scope.
- Project multi-select (hub mode only) — a dropdown listing every project in the hub. "All N projects" is the default; tick the checkboxes to narrow to a subset. Select-all / Clear shortcuts reset the picker.
- Aggregated / Compare projects (hub mode only) — controls how hub data is rendered in the charts. See below.
Switching hubs, or toggling back to "This project", resets the scope controls and compare mode.
Aggregated vs. Compare Projects
When you're in hub scope you can choose how to render the charts:
- Aggregated (default) — every chart combines data across all selected projects into a single series. KPIs become hub totals (sum of files, combined health score, etc.), and the Projects Breakdown tile lists per-project totals in a table.
- Compare projects — two time-series tiles switch to one line per project so you can compare trajectories directly:
- Health Score Trend — by project
- Files Checked per Run — by project
Each project gets its own colour, and the chart legend lists project names so you can map a colour back to a project. Hovering a point shows the project name and value. All other tiles (status donut, violations by rule, register charts, etc.) stay aggregated — they don't gain from per-project splitting.
Compare mode is capped at the top 8 projects by file volume across the selected runs. If your hub or selection contains more projects, Foreman shows an amber "Showing top 8 projects by volume" note next to the toggle and limits comparison to the busiest 8.
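Under the hood this cap is a simple rank-and-slice by file volume. A minimal sketch of the idea, using a hypothetical `RunSummary` shape rather than Foreman's actual data model:

```typescript
// Hypothetical run summary shape; field names are illustrative only.
interface RunSummary {
  projectId: string;
  filesChecked: number;
}

// Rank projects by total files checked across the selected runs and
// keep the busiest 8, mirroring the "top 8 projects by volume" cap.
function topProjectsByVolume(runs: RunSummary[], cap = 8): string[] {
  const totals = new Map<string, number>();
  for (const run of runs) {
    totals.set(run.projectId, (totals.get(run.projectId) ?? 0) + run.filesChecked);
  }
  return [...totals.entries()]
    .sort((a, b) => b[1] - a[1]) // busiest first
    .slice(0, cap)
    .map(([projectId]) => projectId);
}
```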
Run Filter
Above the charts (in the Customise panel), a row of run pills lets you choose which completed runs feed the dashboard. By default the dashboard aggregates the 10 most recent completed runs. Click any pill to toggle individual runs in/out — the charts and KPIs immediately re-aggregate. In single-project mode up to 20 recent runs are available; in hub-wide mode the window widens to 100 so you see enough coverage across projects. In hub mode the run label also shows the source project name so you can tell at a glance which project each run belongs to.
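The window logic amounts to filtering out incomplete runs, sorting newest-first, and capping by mode. A minimal sketch, assuming a hypothetical `CheckRun` shape (field names are illustrative):

```typescript
// Hypothetical check-run shape; only the fields the filter needs.
interface CheckRun {
  id: string;
  completedAt: Date | null; // null while still running
}

// Newest-first window of completed runs: 20 in single-project mode,
// 100 in hub-wide mode. The 10 most recent are selected by default.
function runFilterWindow(runs: CheckRun[], hubMode: boolean): CheckRun[] {
  return runs
    .filter((r) => r.completedAt !== null)
    .sort((a, b) => b.completedAt!.getTime() - a.completedAt!.getTime())
    .slice(0, hubMode ? 100 : 20);
}
```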
Dashboard Components
The dashboard is built from a grid of customisable tiles. You can show, hide, reorder and resize each tile from the Customise panel.
KPI Stats Cards (full-width row)
Always pinned at the top, the stats row shows aggregated metrics for the selected runs:
| Card | Value |
|---|---|
| Health Score | Pass rate as a percentage: Passed / (Total − Skipped) × 100 (see the sketch below the table) |
| Total Files | Total files evaluated across the selected runs |
| Passed / Failed / Warnings / Skipped | Counts split by status |
| Total Violations | Sum of all rule violations |
| Time Saved | Estimated hours saved vs manual review |
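The Health Score formula reads naturally as a small function. A minimal sketch, assuming a hypothetical `KpiCounts` shape (not Foreman's actual API):

```typescript
interface KpiCounts {
  passed: number;
  failed: number;
  warnings: number;
  skipped: number;
}

// Health Score = Passed / (Total − Skipped) × 100, where Total is the
// sum of all four statuses. Skipped files don't count against the score.
function healthScore(c: KpiCounts): number {
  const total = c.passed + c.failed + c.warnings + c.skipped;
  const evaluated = total - c.skipped;
  return evaluated === 0 ? 0 : (c.passed / evaluated) * 100;
}

// e.g. 90 passed, 8 failed, 2 warnings, 10 skipped → 90 / 100 = 90%
```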
File & Quality Charts
File Status Distribution (Donut)
Proportion of Passed, Failed, Warnings, and Skipped files. Empty segments (zero count) are hidden.
Health Score Trend (Line)
Pass rate plotted across the selected runs. Y-axis 0–100%, smooth curve with markers, run labels include date and time for runs on the same day.
Violations by Rule (Horizontal Bar)
Top 7 rules with the most violations across the selected runs.
Violations Over Time (Stacked Bar)
Per-run breakdown of error and warning counts over time. Helps you spot whether violation counts are trending up or down.
Pass Rate by Folder (Progress Bars)
Custom progress bars listing folders with their compliance percentage. Folders with lower pass rates appear first so you can immediately see which areas need attention.
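The ordering is a straightforward group-and-sort. A minimal sketch of the idea, assuming a hypothetical per-file result shape:

```typescript
// Hypothetical per-file result; fields are illustrative.
interface FileResult {
  folder: string;
  passed: boolean;
}

// Group results by folder, compute each folder's pass rate, and sort
// ascending so the least compliant folders surface first.
function passRateByFolder(results: FileResult[]): { folder: string; passRate: number }[] {
  const byFolder = new Map<string, { passed: number; total: number }>();
  for (const r of results) {
    const agg = byFolder.get(r.folder) ?? { passed: 0, total: 0 };
    agg.total += 1;
    if (r.passed) agg.passed += 1;
    byFolder.set(r.folder, agg);
  }
  return [...byFolder.entries()]
    .map(([folder, { passed, total }]) => ({ folder, passRate: (passed / total) * 100 }))
    .sort((a, b) => a.passRate - b.passRate); // lowest pass rate first
}
```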
File Types Checked (Donut)
Distribution of file types (e.g. PDF, DWG, RVT, IFC) across checked files.
Violations by Rule Type (Donut)
Violation count broken down by rule type — Naming, Metadata, Allowed Values, Format, Freshness, Content Match, Content Convention, Content Extraction, List Validation, Numeric Range, Register Cross-Reference, Segment Consistency. Pair with Violations by Rule Category below to drill from "where's the biggest problem area?" → "which specific type within that area?".
Violations by Rule Category (Donut)
A higher-level view that buckets all 12 rule types into three categories — File & Metadata, Lists & Ranges, and PDF Content. Use this to see which broad area of quality concerns dominates the project, then click through to the matching specific rule type in Violations by Rule Type above.
Register & Validation Charts
These tiles only show data when at least one List Validation or Register Cross-Reference rule has run.
Register Completeness (Progress Cards)
Per Register Cross-Reference rule, a colour-coded radial completeness ring with:
- % complete in the centre (green ≥95%, amber ≥80%, red <80%)
- Found out of total register entries
- Missing count (entries in the register with no matching file)
- Unexpected count (files on disk that aren't in the register)
This tile is the fastest way to see how close your project is to delivering everything on the MIDP/TIDP.
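The ring's percentage and colour follow directly from the counts above. A minimal sketch, with a hypothetical `RegisterResult` shape (field names are illustrative):

```typescript
// Hypothetical cross-reference result; fields are illustrative.
interface RegisterResult {
  found: number;      // register entries with a matching file
  total: number;      // entries in the register
  unexpected: number; // files on disk not in the register
}

type RingColour = 'green' | 'amber' | 'red';

// Completeness % and ring colour per the documented thresholds:
// green ≥95%, amber ≥80%, red <80%.
function completenessRing(r: RegisterResult): { percent: number; colour: RingColour; missing: number } {
  const percent = r.total === 0 ? 0 : (r.found / r.total) * 100;
  const colour: RingColour = percent >= 95 ? 'green' : percent >= 80 ? 'amber' : 'red';
  return { percent, colour, missing: r.total - r.found };
}
```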
Register Completeness Trend (Line)
Multi-series smooth line chart, one series per Register Cross-Reference rule. Y-axis 0–100% completeness, X-axis spans the selected runs. Shows deliverables converging toward completion as the project approaches a milestone.
Top Missing Register Entries (Horizontal Bar)
Top 10 register entries that have been missing across the most check runs in your selection. Each bar's value is a count of runs the entry has been missing from, not a percentage — an entry stuck at 8 across 10 selected runs has been missing for nearly the whole window.
Use this as a chase list — the entries at the top are the persistent stragglers that haven't been delivered yet. Pair with Register completeness to see whether the overall percentage is improving while specific entries persist.
Hidden when no Register Cross-Reference rule is in scope. Adapts immediately when you toggle runs in the Customise panel.
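The ranking is a per-entry count over the selected runs. A minimal sketch, assuming a hypothetical per-run shape for the missing-entry lists:

```typescript
// Hypothetical per-run list of missing register entries.
interface RunMissing {
  runId: string;
  missingEntries: string[]; // register entries with no matching file
}

// Count how many selected runs each entry has been missing from and
// keep the 10 most persistent stragglers.
function topMissingEntries(runs: RunMissing[], top = 10): { entry: string; runsMissing: number }[] {
  const counts = new Map<string, number>();
  for (const run of runs) {
    for (const entry of run.missingEntries) {
      counts.set(entry, (counts.get(entry) ?? 0) + 1);
    }
  }
  return [...counts.entries()]
    .map(([entry, runsMissing]) => ({ entry, runsMissing }))
    .sort((a, b) => b.runsMissing - a.runsMissing)
    .slice(0, top);
}
```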
Top Invalid Values (Horizontal Bar)
Top 10 values that fail List Validation rules most often across the selected runs. Each bar's value is a count of failures (one per file × rule × run that didn't match the list).
Surfaces typos and rogue codes — for example, if STRC appears here repeatedly when the approved discipline list only has STR, you've got a consistent typo to chase. Pair with the Validation Lists management UI to add the value to the list (if it's a legitimate code) or hand the chase to the responsible team (if it's a typo).
Hidden when no List Validation rule is in scope.
Run Operations Charts
Trigger Source Breakdown (Donut)
Distribution of check runs by how they were triggered. Five sources are tracked, each with its own colour pill in the chart and the run history list:
| Source | Where it comes from |
|---|---|
| Manual | A user clicked Start check from the Run Check tab |
| Scheduled | The scheduled-jobs runner fired a recurring job |
| Webhook | A folder webhook trigger detected a file change and queued a check |
| Assistant (Chat) | A user asked the in-app AI assistant to run a check on their behalf |
| MCP (McpClient) | An external MCP-compatible AI tool (e.g. Claude Desktop) called the MCP server's run_qa_check tool |
Use this tile to spot whether automation is doing its job — a healthy hub usually shows the bulk of runs as Scheduled / Webhook, with Manual reserved for one-off investigations.
Check Duration (Bar)
Per-run execution time in seconds. Useful for monitoring check performance and spotting runs that took unusually long.
Files Checked per Run (Stacked Bar)
Per-run file counts split by Passed and Failed. Two columns wide by default.
Repeat Failures (Table)
A full-width table listing files that have failed across multiple consecutive check runs. Each row shows the file name, the number of consecutive failures, and the rule(s) that triggered them. Repeat failures highlight persistent issues that need escalation rather than a simple fix.
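The consecutive-failure count can be pictured as a streak ending at the most recent run. A minimal sketch (the outcome values are illustrative, not Foreman's internal statuses):

```typescript
// Hypothetical per-run outcome for a single file, newest run first.
type Outcome = 'passed' | 'failed' | 'warning' | 'skipped';

// Count consecutive failures ending at the most recent run; the streak
// breaks at the first non-failed outcome.
function consecutiveFailures(newestFirst: Outcome[]): number {
  let streak = 0;
  for (const outcome of newestFirst) {
    if (outcome !== 'failed') break;
    streak += 1;
  }
  return streak;
}

// e.g. ['failed', 'failed', 'passed', 'failed'] → 2
```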
Projects Breakdown (Table)
Only populated in hub scope. A table with one row per project contributing to the currently selected runs:
| Column | Meaning |
|---|---|
| Project | Project name (resolved from the hub) |
| Runs | Number of selected runs that came from that project |
| Files | Total files checked for that project |
| Passed / Failed | Counts split by outcome |
| Health | Pass rate pill — green ≥80%, amber ≥60%, red below |
This tile is the fastest way to spot which projects are healthy and which are dragging the hub-wide average down. It's hidden in single-project mode.
Time Saved Calculation
The Time Saved card uses:
- 3 minutes per file for manual review (opening the document, checking naming, verifying title block fields, recording results).
- 20 minutes per batch for compiling a manual report across all checked files.
The card displays the cumulative time saved across the selected check runs in the current scope. Individual check runs also show their per-run time saved in the Check History.
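The arithmetic is simple enough to express directly. A minimal sketch of the calculation, using the constants above:

```typescript
// Time Saved = 3 minutes per file + 20 minutes per check run (batch),
// expressed in hours. Constants come straight from the formula above.
function timeSavedHours(filesChecked: number, runCount: number): number {
  const MINUTES_PER_FILE = 3;
  const MINUTES_PER_BATCH = 20;
  return (filesChecked * MINUTES_PER_FILE + runCount * MINUTES_PER_BATCH) / 60;
}

// e.g. 200 files across 4 runs → (600 + 80) / 60 ≈ 11.3 hours
```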
Scope Switching
When you change the selected hub, project, or scope mode:
- All dashboard data is immediately cleared (no stale data from the previous scope).
- New data is loaded for the selected scope — a single project, all projects in the hub, or a narrowed project subset.
- Charts and cards refresh automatically.
- Switching hubs also resets the multi-project picker and turns Compare mode off.
The dashboard shows data from check runs visible to your account. If multiple team members run checks on the same project, all results contribute to the dashboard.
All dates and times on the dashboard are shown in your timezone preference. You can set your preferred timezone in Account Settings.
Next Steps
- Running QA Checks — run more checks to build trend data
- Exporting Results & PDF Reports — generate reports for stakeholders