dcTrack · 8 min read · 22 January 2026

dcTrack Data Migration: What to Expect and How to Prepare

dcTrack is one of the most capable DCIM platforms available — but migrating data into it requires careful preparation. Here is what experienced data centre teams know before they start.

The Struktive Team
Struktive

Understanding dcTrack's Data Model

Sunbird's dcTrack is a full-featured DCIM platform that manages physical infrastructure from the rack level up. Its data model is built around the concept of a "Make and Model" — every device in dcTrack must be associated with a manufacturer (Make) and a device type (Model) that exists in the dcTrack Models Library. This library defines the physical properties of each device: form factor, U height, power draw, port configurations, and visual representation.

This architecture is powerful because it gives you accurate capacity modelling and visual rack diagrams out of the box. But it also means that before you can import a single device record, every Make and Model combination in your source data must exist in your dcTrack instance's Models Library. If it does not, the import will reject the record.

The Models Library Problem

The Models Library verification step is where most dcTrack migrations stall. A typical enterprise asset inventory contains hundreds of unique Make and Model combinations. Some are common devices that dcTrack's default library already includes. Many are not — particularly older hardware, niche vendors, and recently released models.

The only way to know which models are missing is to compare your source data against the library. This comparison needs to happen before you start the import, not during it. The practical approach is to extract every unique Make and Model combination from your source data, run it against the library, and produce a gap list. That gap list becomes a work item for your dcTrack administrator to resolve — either by importing model definitions from the vendor or by creating custom model entries.

For a dataset with 500 unique devices, this gap analysis typically reveals 60 to 100 models that need to be added to the library — a figure that rises sharply when the inventory includes older hardware or niche vendors. Each one requires either a vendor-supplied definition file or manual entry of the physical specifications. Plan for this work in your project timeline.
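The gap analysis described above can be sketched in a few lines. This is a minimal illustration, not dcTrack's API — the field names and sample data are assumptions, and in practice the library set would be exported from your dcTrack instance's Models Library:

```python
from collections import Counter

def model_gap_report(source_assets, library_models):
    """Compare (make, model) pairs in the source inventory against the
    Models Library and return the gaps, sorted so that the additions
    that unblock the most device records come first."""
    counts = Counter((a["make"], a["model"]) for a in source_assets)
    gaps = {pair: n for pair, n in counts.items() if pair not in library_models}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical data for illustration only.
library = {("Dell", "PowerEdge R740"), ("Cisco", "Nexus 9336C-FX2")}
assets = [
    {"make": "Dell", "model": "PowerEdge R740"},
    {"make": "APC", "model": "AP8959"},
    {"make": "APC", "model": "AP8959"},
]
print(model_gap_report(assets, library))  # [(('APC', 'AP8959'), 2)]
```

Sorting by affected record count turns the gap list directly into a prioritised work queue for your dcTrack administrator.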

Location Hierarchy in dcTrack

dcTrack organises physical locations in a hierarchy: Data Centre → Room → Aisle → Rack. Every device must be assigned to a rack, and every rack must exist in the hierarchy before you can assign devices to it.

If your source data has location strings in a free-text format — "Row 3, Rack 12" or "NYC-DC1/Hall-A/R03-C12" — you need to parse those strings into the four-level hierarchy dcTrack expects. This parsing step is often underestimated. Location strings in real-world asset inventories are inconsistent, abbreviated, and sometimes wrong. A robust parser needs to handle multiple formats, resolve abbreviations (R03 = Row 3, C12 = Column 12), and flag records where the location string cannot be reliably parsed.
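A parser along these lines handles the two formats quoted above; the patterns and default parameters are illustrative assumptions, and a real inventory will need more of them. The important design choice is returning `None` for anything ambiguous rather than guessing:

```python
import re

# Two example patterns; real-world inventories will need more.
PATTERNS = [
    # "NYC-DC1/Hall-A/R03-C12" -> data centre, room, aisle, rack
    re.compile(r"^(?P<dc>[\w-]+)/(?P<room>[\w-]+)/R(?P<aisle>\d+)-C(?P<rack>\d+)$"),
    # "Row 3, Rack 12" (data centre and room not encoded in the string)
    re.compile(r"^Row (?P<aisle>\d+), Rack (?P<rack>\d+)$"),
]

def parse_location(raw, default_dc=None, default_room=None):
    """Parse a free-text location string into the four-level hierarchy.
    Returns None when the string cannot be reliably parsed, so the
    record can be flagged for manual review instead of silently guessed."""
    for pat in PATTERNS:
        m = pat.match(raw.strip())
        if m:
            g = m.groupdict()
            return {
                "data_centre": g.get("dc", default_dc),
                "room": g.get("room", default_room),
                "aisle": f"Row {int(g['aisle'])}",   # R03 -> Row 3
                "rack": f"Rack {int(g['rack'])}",    # C12 -> Rack 12
            }
    return None

print(parse_location("NYC-DC1/Hall-A/R03-C12"))
```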

Power and Connectivity Data

One of dcTrack's strengths is its power chain modelling — the ability to trace power from a PDU outlet through a power strip to a device. To take advantage of this, your import data needs to include not just device records but also PDU records with outlet configurations, and the connectivity relationships between them.

Most source data does not have this level of detail. The practical approach is to import the device records first, then build the power chain data separately — either through a structured data collection exercise or by using dcTrack's discovery integrations.
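If you run a separate power-chain collection exercise, it helps to settle on a record shape up front. The sketch below is one way to capture the hop-by-hop relationships; the field names are illustrative assumptions, not dcTrack's actual import columns:

```python
# Each record captures one power path: upstream PDU breaker -> rack
# power-strip outlet -> device power supply. Dual-corded devices get
# one record per PSU. All identifiers here are hypothetical.
power_connections = [
    {"device": "SN-0012", "psu": "PSU1", "strip_outlet": "STRIP-A1:14", "pdu_breaker": "PDU-1:B3"},
    {"device": "SN-0012", "psu": "PSU2", "strip_outlet": "STRIP-B1:14", "pdu_breaker": "PDU-2:B3"},
]

def trace_chain(conn):
    """Render one power path for human review before import."""
    return f"{conn['pdu_breaker']} -> {conn['strip_outlet']} -> {conn['device']}/{conn['psu']}"

for c in power_connections:
    print(trace_chain(c))
```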

Preparing Your Data for Import

A dcTrack import CSV requires specific column names that map to dcTrack's internal field names. The required fields for a device record are: Make, Model, Serial Number, Data Centre, Room, Aisle, Rack, and U Position. Optional but valuable fields include: Asset Tag, IP Address, Power Draw (W), and Status.

Before generating the import CSV, your data needs to go through several normalisation steps. Vendor names must match the Make values in dcTrack's Models Library exactly. Model names must match the Model values exactly — including capitalisation and punctuation. Location strings must be parsed into the four-level hierarchy. Status values must map to dcTrack's controlled vocabulary.
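The make and status normalisation steps can be driven by lookup tables. The alias and status maps below are illustrative assumptions — the canonical values must come from your own instance's Models Library and picklists:

```python
# Illustrative maps only; populate these from your dcTrack instance.
MAKE_ALIASES = {"dell inc.": "Dell", "hewlett-packard": "HPE", "hp": "HPE"}
STATUS_MAP = {"prod": "Installed", "live": "Installed", "spare": "Storage"}

def normalise_record(rec):
    """Canonicalise the Make value and map Status to the controlled
    vocabulary; unknown values pass through unchanged for later review."""
    out = dict(rec)
    make = rec["make"].strip()
    out["make"] = MAKE_ALIASES.get(make.lower(), make)
    out["status"] = STATUS_MAP.get(rec["status"].strip().lower(), rec["status"])
    return out

print(normalise_record({"make": "dell inc.", "model": "PowerEdge R740", "status": "prod"}))
# {'make': 'Dell', 'model': 'PowerEdge R740', 'status': 'Installed'}
```

Because model names must match the library exactly, including capitalisation and punctuation, resist the temptation to "clean up" model strings automatically — verify each one against the library instead.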

The Import Process

dcTrack's import process has three stages. First, upload the CSV and run a validation pass. dcTrack will flag every row where the Make and Model combination does not exist in the library, where the location does not exist in the hierarchy, or where a required field is missing. Second, resolve all validation errors — add missing models to the library, create missing location objects, and fix data quality issues in the source CSV. Third, re-run the import. Repeat until the validation pass returns zero errors.

For large datasets, this cycle can take several iterations. The key to minimising iterations is doing thorough pre-import validation — running your own gap analysis before you ever touch dcTrack's import wizard.
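A pre-import validation pass can mirror the three classes of error dcTrack flags. This is a sketch under assumed field names, with the library and rack sets supplied from your own instance:

```python
# Assumed column names for illustration; map these to your actual CSV headers.
REQUIRED = ["make", "model", "serial", "data_centre", "room", "aisle", "rack", "u_position"]

def validate_rows(rows, library_models, known_racks):
    """Apply the same checks as dcTrack's validation pass before the
    import wizard ever sees the file. Returns (row index, message) pairs."""
    errors = []
    for i, row in enumerate(rows):
        for field in REQUIRED:
            if not row.get(field):
                errors.append((i, f"missing required field: {field}"))
        if (row.get("make"), row.get("model")) not in library_models:
            errors.append((i, "Make/Model not in Models Library"))
        loc = (row.get("data_centre"), row.get("room"), row.get("aisle"), row.get("rack"))
        if loc not in known_racks:
            errors.append((i, "rack not in location hierarchy"))
    return errors

rows = [{"make": "APC", "model": "AP8959", "serial": "SN-1", "data_centre": "DC1",
         "room": "Hall-A", "aisle": "Row 3", "rack": "Rack 12", "u_position": "14"}]
library = {("Dell", "PowerEdge R740")}
racks = {("DC1", "Hall-A", "Row 3", "Rack 12")}
print(validate_rows(rows, library, racks))
# [(0, 'Make/Model not in Models Library')]
```

Running a pass like this locally means the first upload into dcTrack's wizard starts from a far smaller error list.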

What Struktive Generates for dcTrack

Struktive's dcTrack export produces a normalised CSV with Make and Model values standardised, location fields parsed into the four-level hierarchy, and a separate gap report listing every Make and Model combination that could not be matched — along with the number of device records affected by each gap, so you can prioritise library additions that unblock the most records.

Tags: dcTrack · DCIM migration · data migration · Sunbird · data center

Put this into practice

Upload your asset inventory and get back normalised, DCIM-ready data in minutes. No login required to try.