The MIDP/TIDP gap problem
Every project governed by ISO 19650 has a Master Information Delivery Plan (MIDP), typically built up from the Task Information Delivery Plans (TIDPs) of each task team. These registers define exactly which documents must be produced, by whom, and by when. They are the contractual backbone of information delivery — the appointing party will measure your performance against them.
The problem is that registers live in spreadsheets, and documents live in Autodesk Forma. Keeping the two in sync is a manual exercise that document controllers perform by exporting folder contents, pasting them into a second worksheet, and running VLOOKUP formulas to find mismatches. On a project with 200-300 deliverables, this takes most of a working day. On a programme with 2,000 or more, it becomes a recurring multi-day effort.
The gaps fall into two categories. First, files that the register says should exist but are not in the CDE — these are missing deliverables. Second, files that exist in the CDE but are not on the register — these are unexpected uploads. Both are problems. Missing deliverables mean information hasn't been produced (or hasn't been uploaded). Unexpected files mean someone is working outside the agreed scope, or naming their files in a way that doesn't match the register's expected filenames.
Why this matters at stage gates
Stage gate reviews typically require evidence that all planned deliverables have been issued. Discovering 15 missing documents the week before a gate review creates a scramble that leads to rushed, low-quality submissions — or a failed gate. Catching gaps early gives teams time to respond.
What a register cross-reference check does
A register cross-reference rule compares two lists: the list of expected filenames from your deliverables register, and the list of actual filenames in one or more Forma folders. The comparison is bidirectional.
In the forward direction, the check looks at every entry in your register and asks: does a file with this name exist in the target folder? If not, it's flagged as a missing deliverable. In the reverse direction, the check looks at every file in the folder and asks: is this filename on the register? If not, it's flagged as an unexpected file.
The result is a completeness score — a percentage that tells you how many of the planned deliverables are actually present. On a project where the MIDP lists 247 documents and the folder contains 232 matching files plus 5 extras, the completeness score is 93.9%, with 15 missing items and 5 unexpected items. That is much more useful than "it looks about right."
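The underlying logic is just two set differences and a ratio. Here is a minimal Python sketch of that bidirectional comparison, with invented filenames standing in for your register export and folder listing:

```python
# Bidirectional register comparison: a minimal sketch of the logic described
# above, using illustrative filenames rather than real project data.

register = {"XYZ-ARC-DR-001", "XYZ-ARC-DR-002", "XYZ-STR-DR-001"}   # expected (from the MIDP/TIDP)
folder   = {"XYZ-ARC-DR-001", "XYZ-STR-DR-001", "XYZ-MEP-SK-099"}   # actual (from the CDE folder)

missing    = register - folder   # on the register, not in the CDE
unexpected = folder - register   # in the CDE, not on the register
matched    = register & folder   # present in both

completeness = len(matched) / len(register) * 100
print(f"{len(matched)} of {len(register)} deliverables found ({completeness:.1f}%)")
print("Missing:", sorted(missing))
print("Unexpected:", sorted(unexpected))
```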
Matching modes
Register cross-reference supports two matching modes: exact match (the filename must match the register entry character-for-character, excluding the file extension) and prefix match (the file must start with the register entry, allowing for revision suffixes like "-P01" or "-C02"). Prefix match is more forgiving and works well when your register lists base document IDs rather than full filenames.
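The difference between the two modes comes down to a few lines of comparison logic. The sketch below assumes the register stores base document IDs without extensions and that revisions are appended as hyphenated suffixes, as in the examples above:

```python
from pathlib import Path

def matches(register_entry: str, filename: str, mode: str = "exact") -> bool:
    """Return True if a file satisfies a register entry under the chosen mode."""
    stem = Path(filename).stem          # filename without the extension
    if mode == "exact":
        return stem == register_entry   # character-for-character match
    if mode == "prefix":
        return stem.startswith(register_entry)  # tolerates "-P01", "-C02", etc.
    raise ValueError(f"unknown mode: {mode}")

# "XYZ-ARC-DR-001-P01.pdf" fails exact match but passes prefix match
print(matches("XYZ-ARC-DR-001", "XYZ-ARC-DR-001-P01.pdf", "exact"))   # False
print(matches("XYZ-ARC-DR-001", "XYZ-ARC-DR-001-P01.pdf", "prefix"))  # True
```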
Setting up a register cross-reference check
The setup involves three steps: creating a validation list from your register, creating a register cross-reference rule that uses that list, and running the check against a folder.
Step 1: Create a validation list
A validation list is a reusable set of allowed values stored in Foreman. For a register check, the list contains every expected filename (or document ID) from your MIDP or TIDP. You can create one by going to the QA/QC section, opening the Rules tab, and selecting "Validation Lists" from the sidebar. Click "New List" and give it a descriptive name — something like "MIDP Stage 3 Expected Deliverables."
You have two options for populating the list. You can paste values directly — one per line — which works well if your register is in Excel and you can copy the filename column. Or you can type them manually, which is practical for smaller lists. The paste option is recommended for registers with more than 20 entries. Once created, the list is available to any rule in your tenant.
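If the register is large, a short script can lift the filename column out of a CSV export and print it one value per line, ready to paste. The file name and the "Document ID" column header below are assumptions about your own register export, not a Foreman requirement:

```python
import csv

# Turn the "Document ID" column of an exported register into one-per-line
# text ready to paste into a validation list. File name and column header
# are placeholders for whatever your register export actually uses.
with open("midp_stage3.csv", newline="", encoding="utf-8") as f:
    entries = [row["Document ID"].strip() for row in csv.DictReader(f) if row["Document ID"].strip()]

# De-duplicate while preserving order, then print for pasting
seen = set()
unique = [e for e in entries if not (e in seen or seen.add(e))]
print("\n".join(unique))
```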
Step 2: Create a register cross-reference rule
In your rule set, add a new rule and choose the "Register Cross-Reference" type. Select the validation list you just created as the reference register. Choose the matching mode — exact or prefix — depending on whether your register entries include revision suffixes.
Configure the directionality. "Check for missing" compares the register against the folder (which expected files are absent). "Check for unexpected" compares the folder against the register (which actual files are not on the list). Most teams enable both directions to get a full picture.
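Conceptually, the rule captures four settings. The dataclass below is purely an illustrative model of those options, not Foreman's internal schema:

```python
from dataclasses import dataclass

# Illustrative model of a register cross-reference rule's settings,
# mirroring the options described above.
@dataclass
class RegisterCrossReferenceRule:
    validation_list: str               # name of the list created in Step 1
    matching_mode: str = "prefix"      # "exact" or "prefix"
    check_for_missing: bool = True     # register -> folder direction
    check_for_unexpected: bool = True  # folder -> register direction

rule = RegisterCrossReferenceRule(validation_list="MIDP Stage 3 Expected Deliverables")
```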
Step 3: Run the check
Navigate to the Run Check tab, select the folder (or folders) you want to check, assign the rule set containing your register cross-reference rule, and run the check. If you're using per-folder mode, you can assign different registers to different folders — useful when your MIDP is broken down by discipline or work package and each folder corresponds to a different subset of deliverables.
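In per-folder mode, the assignment is effectively a folder-to-register mapping, along these lines (folder paths and list names are invented for illustration):

```python
# Illustrative per-folder assignment: each discipline folder is checked
# against its own validation list. Paths and names are placeholders.
folder_registers = {
    "WIP/Structural": "TIDP Structural Deliverables",
    "WIP/Mechanical": "TIDP Mechanical Deliverables",
    "WIP/Electrical": "TIDP Electrical Deliverables",
}
```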
Reading the results
After the check completes, the results page shows each rule that was evaluated. The register cross-reference rule appears with a "[Register Check]" tag. Its summary line shows the completeness percentage — for example, "232 of 247 deliverables found (93.9%)."
Expanding the rule reveals two sub-lists. The "Missing from CDE" list shows every register entry that has no matching file in the checked folder. These are the deliverables that haven't been uploaded yet (or have been uploaded with an incorrect filename). The "Not on register" list shows every file in the folder that doesn't correspond to a register entry. These are the unexpected uploads that need investigation.
Each item in the missing list includes the expected filename from the register. Each item in the unexpected list includes the actual filename from Forma, plus a link to the file so you can inspect it directly. You can export both lists as CSV or XLSX for further analysis — for example, to share with a design manager who needs to chase missing submissions.
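As one example of what you can do with the export, the sketch below groups the "Missing from CDE" list by the second field of the filename, assuming a hyphen-delimited naming convention in which that field identifies the originator or discipline. The export filename and column header are assumptions about the CSV layout, not guaranteed names:

```python
import csv
from collections import defaultdict

# Group the exported missing-deliverables list by originator so each design
# manager receives only their own chaser list. File name and column header
# are assumptions about the export format.
by_originator = defaultdict(list)
with open("missing_from_cde.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        name = row["Expected Filename"]
        originator = name.split("-")[1] if "-" in name else "unknown"  # e.g. XYZ-ARC-DR-001 -> ARC
        by_originator[originator].append(name)

for originator, names in sorted(by_originator.items()):
    print(f"{originator}: {len(names)} outstanding")
    for n in names:
        print("  -", n)
```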
Example results summary
| Metric | Value |
|---|---|
| Register entries | 247 |
| Files in folder | 237 |
| Matched deliverables | 232 |
| Missing deliverables | 15 |
| Unexpected files | 5 |
| Completeness | 93.9% |
Tracking register completeness on the dashboard
The QA/QC dashboard aggregates results from all your check runs. When a register cross-reference rule is part of the check, the dashboard includes a register completeness gauge that shows the current score for each register you're tracking. If you're running separate registers for structural, mechanical, and electrical deliverables, each one gets its own gauge.
The trend chart is where the dashboard becomes particularly valuable. It plots completeness over time, so you can see whether the project is converging toward 100% or stalling. On a healthy project, you expect to see a steady upward curve as design teams submit their deliverables. A plateau — where completeness stops improving for two or three consecutive runs — is a warning sign that submissions have stalled and someone needs to intervene.
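A plateau is easy to detect programmatically if you export the trend data. The sketch below flags a run history where the score has not meaningfully improved over the last three runs; the figures are illustrative:

```python
# Minimal plateau check over the completeness trend.
history = [74.1, 81.4, 88.7, 93.9, 93.9, 93.9]   # completeness % per scheduled run

def plateaued(scores, runs=3, tolerance=0.1):
    """True if the last `runs` scores show no meaningful improvement."""
    recent = scores[-runs:]
    return len(recent) == runs and max(recent) - min(recent) <= tolerance

if plateaued(history):
    print("Completeness has stalled - time to chase outstanding deliverables.")
```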
The "top missing entries" chart shows which specific deliverables have been missing for the longest. If the same 8 documents have been absent for three consecutive runs, they appear at the top of this list. This is useful for prioritizing chaser communications — you can export the list and send it directly to the responsible party.
Scheduling register checks for milestone gates
Running register checks manually is fine for ad-hoc audits, but the real value comes from scheduling them to run automatically in the weeks approaching a stage gate. If your Stage 3 gate is on May 15th, you might schedule a weekly register check starting on April 1st, then increase to daily checks from May 5th onwards.
To set this up, go to the scheduled jobs section and create a new QA Check job. Select the rule set containing your register cross-reference rule, choose the target folders, and set the cron schedule. The job will run at the specified times and produce a new check run each time. Dashboard trends will update automatically.
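The ramp-up described above translates into two simple cron expressions; whether you edit a single job on May 5th or create a second job for the final push is up to you. A sketch:

```python
# Two cron expressions covering the ramp-up described above (server local time).
weekly_until_gate = "0 7 * * MON"   # every Monday at 07:00, used from 1 April
daily_final_push  = "0 7 * * *"     # every day at 07:00, used from 5 May onwards
# Cron itself does not encode date ranges, so the switch from weekly to daily
# is a one-off edit to the job (or a second job created on 5 May).
```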
You can also configure email notifications so that key stakeholders receive a summary after each run. The notification includes the completeness percentage and the count of missing and unexpected files. If the completeness drops below a threshold you've defined, the email subject line reflects the urgency.
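The threshold behaviour amounts to a simple comparison. The sketch below shows the idea with an example threshold of 90%, which is an assumption for illustration rather than a Foreman default:

```python
# Illustrative subject-line logic: flag the run as urgent when completeness
# falls below a configured threshold. The 90% figure is an example value.
threshold = 90.0
completeness = 86.2
subject = f"Register check: {completeness:.1f}% complete"
if completeness < threshold:
    subject = "[ACTION NEEDED] " + subject
print(subject)
```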
Updating the register mid-project
MIDPs evolve. Deliverables get added, removed, or renamed as the project progresses. When you update your MIDP, update the corresponding validation list in Foreman to match. The next scheduled run will use the updated list automatically. There's no need to recreate the rule or the schedule — only the list contents change.
Key takeaway
A deliverables register is only useful if someone is checking it against reality. VLOOKUP in Excel works, but it takes hours, it's error-prone, and it gives you a snapshot — not a trend. A register cross-reference check in Foreman takes minutes to set up and seconds to run. Schedule it weekly and you have continuous visibility into completeness, not just a pre-gate-review panic.
The combination of validation lists (for your register data), register cross-reference rules (for the bidirectional comparison), and scheduled runs (for continuous monitoring) replaces the manual reconciliation workflow entirely. If you're already using Foreman for naming convention checks, adding a register check is a 10-minute extension of your existing rule set.