The DCIM Platform Migration Checklist: 47 Things to Verify Before You Switch
A comprehensive pre-migration checklist covering data quality, platform readiness, team preparation, and rollback planning — for teams migrating between any two DCIM platforms.
A DCIM platform migration touches every team that depends on accurate asset data. This 47-point checklist covers the data quality, platform readiness, team preparation, and rollback planning steps that separate successful migrations from ones that drag on for months.
Key Takeaways
- Most DCIM migrations fail not because of technical problems but because of data quality issues discovered mid-migration — normalise your source data before starting the migration project.
- The cutover window is the highest-risk phase. Define a maximum acceptable downtime, a rollback trigger, and a rollback procedure before the migration starts.
- Parallel running — keeping both platforms live for 2–4 weeks post-cutover — is the most effective way to catch data gaps that were not visible during testing.
- Every team that uses the DCIM (operations, capacity planning, change management, compliance) needs to sign off on the migrated data before the old platform is decommissioned.
- A post-migration data quality audit 30 days after cutover will surface issues that were not caught during testing — budget time for this in the project plan.
Why DCIM Migrations Fail
A DCIM platform migration is one of the highest-risk infrastructure projects a DC team can undertake. The platform is the source of truth for capacity planning, change management, and compliance reporting. A failed migration does not just mean a delayed project — it means weeks or months of unreliable asset data across every team that depends on it.
Most migrations fail for the same reason: data quality problems that were not discovered until the migration was already underway. The source data looked clean only because the old platform had learned to work around its own inconsistencies. The new platform has no such tolerance.
This checklist covers 47 verification points across five phases. Work through it before you start the migration project, not during it.
Phase 1: Data Quality (12 checks)
1. Export a complete asset inventory from the source platform, including all fields.
2. Count records with blank manufacturer fields. Target: < 5% of total records.
3. Count records with blank model fields. Target: < 10% of total records.
4. Count records with blank or placeholder serial numbers. Target: < 15% of total records.
5. Count records with incomplete location data (missing site, rack, or U position). Target: < 20% of total records.
6. Identify all unique manufacturer names. Expect 20–50% more unique values than actual manufacturers — these are aliases, typos, and abbreviations that need normalisation.
7. Identify all unique model names. Expect 30–60% more unique values than actual models.
8. Check for duplicate serial numbers. Any duplicate is a data quality issue that must be resolved before migration.
9. Validate that all rack locations correspond to racks that exist in the source platform. Orphaned location references will fail in the target platform.
10. Check for records with status values that do not map to the target platform's status enum. Build a mapping table.
11. Identify records that belong to decommissioned or archived sites. Decide whether to migrate these or exclude them.
12. Run the source data through a normalisation tool (such as Struktive) to get a quality score distribution. Any score below 60 represents a record that will likely cause problems in the target platform.
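Checks 2–4 are straightforward to script against the raw export. The sketch below is a minimal illustration, assuming the export has `manufacturer`, `model`, and `serial` columns — the field names, placeholder list, and thresholds are all assumptions you should adjust to your own data.

```python
import csv
import io

# Thresholds from checks 2-4, as fractions of total records allowed blank.
# Field names are assumptions -- adjust to your export's column headers.
THRESHOLDS = {"manufacturer": 0.05, "model": 0.10, "serial": 0.15}
# Values that count as "blank or placeholder" (check 4).
PLACEHOLDERS = {"", "n/a", "none", "unknown", "tbd", "-"}

def blank_field_report(rows, thresholds=THRESHOLDS):
    """Return {field: (blank_fraction, passes_threshold)} for each checked field."""
    total = len(rows)
    report = {}
    for field, limit in thresholds.items():
        blanks = sum(
            1 for r in rows
            if r.get(field, "").strip().lower() in PLACEHOLDERS
        )
        fraction = blanks / total if total else 0.0
        report[field] = (fraction, fraction < limit)
    return report

# Example with an in-memory CSV standing in for the source export.
sample = io.StringIO(
    "manufacturer,model,serial\n"
    "Dell,R740,ABC123\n"
    ",R640,DEF456\n"
    "HPE,,unknown\n"
    "Cisco,C9300,GHI789\n"
)
rows = list(csv.DictReader(sample))
for field, (fraction, ok) in blank_field_report(rows).items():
    print(f"{field}: {fraction:.0%} blank -> {'PASS' if ok else 'FAIL'}")
```

Treating placeholder values like "unknown" as blank matters: a serial column can be 100% populated and still fail check 4 if half the values are filler.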
Phase 2: Target Platform Readiness (10 checks)
13. Create all required sites in the target platform before importing devices.
14. Create the full location hierarchy (buildings, floors, rooms, rows, racks) for each site.
15. Import all required manufacturers from the target platform's device type library.
16. Import all required device types from the target platform's device type library.
17. Create all required device roles and custom roles that are not in the platform's default set.
18. Configure all required custom fields that carry data from the source platform.
19. Set up user accounts and permissions for all teams that will use the target platform.
20. Configure integrations (ITSM, monitoring, automation) in test mode against the target platform.
21. Verify that the target platform's API is accessible from all systems that will integrate with it.
22. Run a test import of 50–100 records and verify the output matches expectations before running the full import.
Try Struktive on your own data
Upload a raw asset CSV and get back a normalised, DCIM-ready file in minutes. No account required.
Phase 3: Migration Execution (10 checks)
23. Freeze changes in the source platform for the duration of the migration window.
24. Export the final source dataset immediately before the migration starts.
25. Run the normalisation pass on the final export and verify the quality score distribution.
26. Generate the target-format import file (NetBox CSV, dcTrack XLSX, Device42 CSV).
27. Run the pre-flight validation report and resolve all errors before importing.
28. Import in batches of 500–1,000 records, not all at once. Verify each batch before proceeding.
29. After each batch, spot-check 10–20 records by comparing the source platform record to the target platform record.
30. Verify rack utilisation totals match between source and target after each site is imported.
31. Verify power totals per rack and per site match between source and target.
32. Confirm that all custom field values have been correctly migrated.
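The batching discipline in check 28 can be sketched as a simple loop that stops at the first batch that fails verification. The `import_fn` and `verify_fn` callables here are placeholders for your platform's API calls (for example, a POST to the target's device endpoint and a read-back comparison) — this is an illustration of the control flow, not a client for any specific DCIM.

```python
def import_in_batches(records, import_fn, verify_fn, batch_size=500):
    """Import `records` in fixed-size batches, stopping at the first batch
    that fails verification (check 28). `import_fn` pushes a batch to the
    target platform; `verify_fn` returns True if the batch landed correctly."""
    imported = 0
    for start in range(0, len(records), batch_size):
        batch = records[start : start + batch_size]
        import_fn(batch)
        if not verify_fn(batch):
            raise RuntimeError(
                f"Batch starting at record {start} failed verification; "
                f"{imported} records imported so far -- investigate before continuing."
            )
        imported += len(batch)
    return imported
```

Failing fast like this is the point of batching: a bad mapping discovered at record 500 is a one-batch cleanup, not a full re-import.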
Phase 4: Validation and Sign-Off (9 checks)
33. Operations team: verify that all active devices are present in the target platform.
34. Capacity planning team: verify that rack utilisation and power density reports match the source platform.
35. Change management team: verify that all open change records have been linked to the correct devices in the target platform.
36. Compliance team: verify that all assets required for the current compliance scope are present and correctly classified.
37. Network team: verify that all network devices have correct IP addresses and management interfaces.
38. Run a record count comparison: source platform total vs. target platform total. Investigate any discrepancy > 1%.
39. Run a duplicate detection check in the target platform. Any duplicates introduced during migration must be resolved before cutover.
40. Get written sign-off from each team lead before proceeding to cutover.
41. Document all known data gaps (records that could not be migrated due to data quality issues) and assign owners for remediation.
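Checks 38 and 39 are both mechanical enough to script. A minimal sketch — the 1% threshold comes from check 38, while the `serial` field name is an assumption about your export:

```python
def count_discrepancy(source_total, target_total, threshold=0.01):
    """Check 38: return (within_threshold, gap) for the source/target
    record-count comparison. The default threshold is 1%, per the checklist."""
    if source_total == 0:
        return target_total == 0, 0.0
    gap = abs(source_total - target_total) / source_total
    return gap <= threshold, gap

def find_duplicates(records, key="serial"):
    """Check 39: return values of `key` that appear more than once,
    compared case-insensitively so 'ABC123' and 'abc123' collide."""
    seen, dupes = set(), set()
    for rec in records:
        value = rec.get(key, "").strip().lower()
        if value and value in seen:
            dupes.add(value)
        seen.add(value)
    return dupes
```

Note that the duplicate check is case-insensitive by design: two serials differing only in case are almost always the same physical asset entered twice.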
Phase 5: Cutover and Post-Migration (6 checks)
42. Define the cutover window, maximum acceptable downtime, and rollback trigger before starting.
43. Switch all integrations (ITSM, monitoring, automation) from source to target platform.
44. Update documentation, runbooks, and SOPs to reference the target platform.
45. Keep the source platform in read-only mode for 2–4 weeks post-cutover (parallel running period).
46. Run a post-migration data quality audit 30 days after cutover to surface issues not caught during testing.
47. Decommission the source platform only after all teams have confirmed the target platform is the authoritative source of truth.
The Normalisation Step That Changes Everything
Of the 47 checks above, the single most impactful is check 12: running your source data through a normalisation tool before the migration starts. A normalisation pass will surface data quality issues that would otherwise be discovered mid-migration, when the cost of fixing them is highest.
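The core of the manufacturer normalisation described in check 6 is an alias map from observed values to canonical names. The sketch below illustrates the idea only — the alias table is a tiny hand-picked sample, not Struktive's actual implementation, and a real tool maintains a far larger, curated mapping:

```python
# Illustrative alias map: observed value (lowercased) -> canonical name.
ALIASES = {
    "hp": "HPE", "hewlett-packard": "HPE", "hewlett packard enterprise": "HPE",
    "dell emc": "Dell", "dell inc.": "Dell",
    "cisco systems": "Cisco", "cisco systems, inc.": "Cisco",
}

def normalise_manufacturer(raw):
    """Map an alias, typo, or abbreviation to a canonical manufacturer name,
    leaving unknown values untouched so they can be flagged for manual review."""
    cleaned = raw.strip().lower()
    return ALIASES.get(cleaned, raw.strip())

print(normalise_manufacturer("  Hewlett-Packard "))  # canonicalised
print(normalise_manufacturer("Supermicro"))          # unknown: passed through
```

Passing unknown values through unchanged, rather than guessing, is deliberate: an unrecognised name should become a review item, not a silent canonicalisation error.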
For a complete guide to the normalisation process, see What Is DCIM Data Normalisation. For a project management framework for the full migration, see DCIM Migration Project Playbook.