HAM for Servers & Datacenter Equipment

Rack-mounted hardware lives in a different world from the laptop on someone's desk: physical density, virtualized workloads, supply-chain integrity concerns, and decommission steps that have to handle sensitive data on multiple drives at once.

Last reviewed on 2026-04-27

Why Datacenter HAM Differs From Endpoint HAM

The lifecycle stages from the main lifecycle guide still apply, and the data model from the data model guide still works at the field level. What changes in the datacenter is granularity: location is no longer a building or a desk but a specific rack and a specific U-position, a single physical host runs many logical workloads, and "the device" often has child components — drives, line cards, NVMe modules — that need their own records.

This guide is the datacenter-specific extension to those base models. It covers the things that endpoint HAM doesn't have to deal with and the things that endpoint HAM gets approximately right but datacenter HAM has to get exactly right.

The Rack as a Data Structure

Endpoint HAM treats locations as where the device "is". Datacenter HAM has to treat locations as a structured allocation problem. The model that scales:

  • Site — the named facility (your datacenter A, your colo provider's facility B).
  • Room / Hall — the data hall within the site, when the site is large enough to have more than one.
  • Row — the aisle the rack sits in.
  • Rack — the named cabinet, with a documented capacity in rack units (Us), a documented power capacity in kW, and a documented cooling allocation.
  • Elevation (U-position) — the specific rack-unit slot where the device starts. A 2U server starting at U23 occupies U23 and U24.

Each rack is itself an asset record: it has a serial number (or at least an organizational ID), a manufacturer, a power-strip configuration, and a useful life. Devices reference the rack record; the rack references the room and site. This nested structure lets HAM answer questions like "what is the power load and weight on rack DC-A-12 right now" without anyone having to count manually.
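The nested site → room → row → rack → elevation model can be sketched in a few lines. This is an illustrative sketch, not any particular HAM product's schema; the class and field names are invented, and a real record would also carry power, cooling, and ownership fields.

```python
from dataclasses import dataclass, field

@dataclass
class Rack:
    """One cabinet: an asset record in its own right, with capacity limits."""
    rack_id: str          # e.g. "DC-A-12" (illustrative naming)
    site: str
    room: str
    row: str
    u_capacity: int = 42  # rack units
    # occupied[u] -> asset_id; a 2U server starting at U23 claims U23 and U24
    occupied: dict = field(default_factory=dict)

    def mount(self, asset_id: str, start_u: int, height_u: int) -> None:
        """Claim a contiguous span of U-positions, refusing overlaps."""
        span = range(start_u, start_u + height_u)
        if start_u < 1 or span.stop - 1 > self.u_capacity:
            raise ValueError("elevation outside rack")
        clash = [u for u in span if u in self.occupied]
        if clash:
            raise ValueError(f"U-positions already occupied: {clash}")
        for u in span:
            self.occupied[u] = asset_id

    def free_spans(self, height_u: int) -> list:
        """Starting elevations with enough contiguous free space for height_u."""
        return [u for u in range(1, self.u_capacity - height_u + 2)
                if all(u + i not in self.occupied for i in range(height_u))]

rack = Rack("DC-A-12", site="DC-A", room="Hall-1", row="R3")
rack.mount("srv-0042", start_u=23, height_u=2)   # occupies U23 and U24
```

The `free_spans` query is what drives the deployment view: given an incoming 2U server, it answers "where can this go in rack DC-A-12" directly from the record.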

The most useful operational view is a rack elevation diagram driven directly off the HAM record. When deployment requests come in, the diagram shows where the available U-space is, what power phases it has, and how close it sits to the cooling baffles. When something fails at 3am, the on-call engineer looks at the diagram before getting in a car.

Server Records: Chassis, Components, and Children

A server record is more layered than a laptop record. The pattern that holds up:

The Chassis Record

The top-level asset: the physical box, with its own serial number, asset tag, manufacturer, model, acquisition cost, warranty, and rack location. The chassis is what shows up on the financial register and what gets retired at end-of-life.

Component Sub-Records

Inside the chassis, several components are worth tracking individually:

  • Storage drives. Each drive has its own serial number and warranty; drives are replaced on different cycles than chassis. Tracking them as child records of the chassis lets you handle drive-level RMAs and prove sanitization at end-of-life.
  • NVMe modules and high-value accelerators. GPUs, FPGAs, and AI accelerators often cost more than the host they sit in. Track each as its own asset linked to the chassis.
  • Memory and CPU configuration. Usually tracked as attributes of the chassis rather than as separate records, unless there's a specific reason (regulated environment, frequent reconfiguration).
  • Power supplies. Redundant PSUs in a chassis are worth recording as components if they get replaced often enough to matter; otherwise, attributes of the chassis are sufficient.

The right level of granularity is whatever corresponds to the units that get bought, replaced, and disposed of independently. Tracking screws is overkill; tracking GPU cards in a 4-GPU host is essential.
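The chassis-plus-children pattern can be sketched as below. All names here are invented for illustration; a real chassis record would also carry acquisition cost, warranty, and rack location as described above.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """Child record: a unit that is bought, replaced, and disposed of independently."""
    serial: str
    kind: str          # "drive", "gpu", "psu", ... (illustrative vocabulary)
    warranty_end: str  # ISO date, kept as a string for simplicity

@dataclass
class Chassis:
    """Top-level asset: what appears on the financial register."""
    asset_tag: str
    chassis_serial: str
    bmc_serial: str
    components: list = field(default_factory=list)

    def drives(self) -> list:
        """The sub-records that matter for drive-level RMAs and sanitization."""
        return [c for c in self.components if c.kind == "drive"]

host = Chassis("A-1001", chassis_serial="SN-CH-77", bmc_serial="SN-BMC-91")
host.components += [
    Component("SN-D1", "drive", "2027-01-01"),
    Component("SN-D2", "drive", "2027-01-01"),
    Component("SN-G1", "gpu",   "2026-06-30"),  # tracked individually: high value
]
```

The `drives()` view is what a drive-level RMA or an end-of-life sanitization workflow would iterate over, while finance only ever sees the chassis.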

Logical Host Identity

A server has a hardware identity (serial, MAC, IPMI/BMC address) and a logical identity (hostname, primary IP, role). HAM tracks both, but the bridge to the operating world is the BMC/iDRAC/iLO management address. That out-of-band address is what monitoring, recovery tools, and the firmware update workflow actually talk to. Recording it on the HAM record is what lets the on-call engineer find the right lights-out console at 3am.

The Hypervisor Layer: Hardware to VMs

A virtualized environment turns one physical host into many logical machines, and most of the operational world cares about the VMs, not the host. HAM's job is to keep the physical-to-logical mapping legible.

What HAM Does and Doesn't Track

HAM tracks the physical host. The CMDB or the virtualization platform's inventory tracks the VMs. The two are linked by the hypervisor: the host has a list of VMs running on it, and each VM record carries the host it currently runs on. The HAM vs CMDB guide covers the integration patterns; for hypervisor-managed environments specifically, the integration usually comes from the virtualization platform's API rather than from generic discovery.

The point HAM has to be careful not to get wrong: a VM is not a HAM asset. It has no serial number, no warranty, no purchase order, no disposal certificate. Trying to make HAM the system of record for VMs leads to massive churn and an inventory of records that turn over every few hours. Let HAM track the box; let the right tool track what runs on it.

What the Linkage Enables

When a physical host fails, the linkage answers the immediate questions: which VMs were running on it, what services those VMs provided, and where they failed over to. When a host is being decommissioned, the linkage drives the migration plan: which VMs need to move first, how much capacity has to exist on the destination hosts, what storage paths have to be re-mapped.
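The host-failure question above is a simple filter once the platform inventory is in hand. As a sketch, assume the virtualization platform's API has been polled into a plain dict (hostnames, VM names, and service labels here are invented):

```python
# What the virtualization platform's inventory would return: each VM record
# carries the physical host it currently runs on.
vm_inventory = {
    "vm-web-01": {"host": "esx-07", "service": "storefront"},
    "vm-db-01":  {"host": "esx-07", "service": "orders-db"},
    "vm-ci-02":  {"host": "esx-09", "service": "build"},
}

def impacted_vms(failed_host: str) -> dict:
    """On host failure: which VMs were on it, and what services they carried."""
    return {name: rec["service"]
            for name, rec in vm_inventory.items()
            if rec["host"] == failed_host}
```

The same query, inverted, drives the decommission migration plan: everything `impacted_vms` returns for a host has to move before the host can enter Drained.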

Hyperconverged and Composable

Hyperconverged platforms (where compute, storage, and networking blur into a single fabric) and composable infrastructure (where a logical "server" is assembled on demand from disaggregated pools) both stress the asset record concept. The right model: HAM still tracks the physical building blocks — the hyperconverged appliance chassis, the disaggregated compute trays, the storage shelves — as individual assets. The platform layer assembles them into logical services. HAM does not try to track the logical services.

Supply-Chain Serial Integrity

For most asset classes, serial number integrity is a quiet correctness concern. For datacenter equipment in regulated or high-security environments, it is a compliance and security concern.

Why It Matters More in the Datacenter

  • Counterfeit risk. Counterfeit network and server components turn up in the gray market. A device with a swapped serial label may not be the device the manufacturer thinks it is.
  • Warranty and support. Vendor support entitlements key off serial number. A serial-number mismatch between HAM and the device means the support entitlement is unverifiable when you need it.
  • Regulatory. Some frameworks (DFARS, supply-chain risk management programs) require demonstrable chain of custody from purchase through disposal.

The Practical Controls

  • Receive directly from authorized sellers. Trade-in markets are a fine source of refurbished equipment, but the supply-chain provenance has to be documented before the device enters HAM.
  • Verify at receiving. Photograph the serial number on the device (not just the box) and cross-check against the PO and the manufacturer's portal. Some vendors offer entitlement verification APIs that confirm a serial belongs to the buyer of record.
  • Track all the serials. Many enterprise servers have multiple serial numbers — chassis, motherboard, IPMI, drives. The HAM record should capture at least the chassis serial and the BMC/IPMI serial; component serials live on the component sub-records.
  • Photo evidence at major lifecycle events. Receiving, deployment, and disposal each produce a photo of the serial label. The audit story is much shorter when it can be traced through pictures.
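The receiving cross-check in the second control is, at its core, a set comparison between the PO and what arrived on the dock. A minimal sketch (serials invented for illustration):

```python
def verify_receiving(po_serials, received_serials):
    """Cross-check serials photographed at the dock against the purchase order.
    Returns (matched, missing_from_shipment, unexpected_on_dock)."""
    po, got = set(po_serials), set(received_serials)
    return sorted(po & got), sorted(po - got), sorted(got - po)

matched, missing, unexpected = verify_receiving(
    po_serials=["SN-A", "SN-B", "SN-C"],
    received_serials=["SN-A", "SN-B", "SN-X"],
)
```

Anything in `unexpected` is a provenance question before it is a receiving question: a serial that is not on the PO should not enter HAM until it is explained.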

Datacenter-Specific Status States

The state machine in the data model guide needs a couple of additional states for datacenter-resident equipment:

  • Pre-Racked. Mounted in the rack but not yet cabled or powered. Distinct from "In Preparation" because the physical work has happened; what's missing is provisioning.
  • Provisioned. Cabled, powered, configured, ready to take workload. Equivalent to "In Service" but specific to compute hosts that are joined to the management plane.
  • Drained. Workloads have been migrated off; the host is running no production VMs but is still in service for evacuation purposes. The state immediately preceding decommission for hypervisor hosts.
  • Awaiting Sanitization. Decommissioned, removed from the rack, awaiting drive sanitization or destruction. Important to model explicitly because the elapsed time in this state is a security control: a drained host with drives still in it that sits for months is a risk.
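The four states above slot into the base state machine as a small transition table. This is a sketch of one plausible wiring (the base-guide states and the exact allowed moves are assumptions; adapt to your own lifecycle model):

```python
# Allowed transitions for datacenter-resident compute. Drained can return to
# Provisioned (evacuation was temporary) or proceed toward decommission.
TRANSITIONS = {
    "In Preparation":        {"Pre-Racked"},
    "Pre-Racked":            {"Provisioned"},
    "Provisioned":           {"Drained"},
    "Drained":               {"Provisioned", "Awaiting Sanitization"},
    "Awaiting Sanitization": {"Disposed"},
    "Disposed":              set(),  # terminal
}

def move(state: str, new_state: str) -> str:
    """Enforce the state machine: no skipping straight to Disposed."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

The deliberate gap is that nothing reaches Disposed except through Awaiting Sanitization, which is what makes the elapsed-time security control enforceable.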

Decommission: Multiple Drives, Multiple Standards

Server decommission is not laptop decommission scaled up. The differences:

Many Drives, Possibly Different Generations

A 2U server might have eight bays. A storage chassis might have 24 or 60. Each drive needs its own sanitization record, its own serial number reference, and its own certificate of destruction or sanitization. The right HAM model records this at the component sub-record level, with the chassis-level disposal record summarizing the drive disposal.

Mixed Sensitivity Levels

A single host may have run workloads with different data classifications across its life. The disposal decision has to be driven by the highest classification the host ever handled, not the most recent. Tracking peak data classification on the host record is the practical way to keep this honest.
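The peak-classification attribute is a one-way ratchet: it only ever moves up. A sketch, with an illustrative classification ordering (your organization's labels will differ):

```python
# Ordered classifications, lowest to highest (labels are illustrative).
LEVELS = ["public", "internal", "confidential", "restricted"]

def record_workload(host: dict, classification: str) -> None:
    """Ratchet the host's peak classification upward; it never goes down."""
    current = host.get("peak_classification", LEVELS[0])
    if LEVELS.index(classification) > LEVELS.index(current):
        host["peak_classification"] = classification

host = {"asset_tag": "A-1001"}
record_workload(host, "confidential")
record_workload(host, "internal")   # lower level: peak stays "confidential"
```

At decommission time, the disposal decision reads `peak_classification`, never the classification of whatever happened to run last.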

Encryption Changes the Calculus

Self-encrypting drives (SEDs) and encrypted storage volumes allow cryptographic erase: the data is rendered unrecoverable by destroying the encryption key, without overwriting every block. This is faster and is acceptable under standards like NIST SP 800-88 for many sensitivities, but only if the encryption was correctly enforced for the host's entire life. HAM should record the encryption posture as an attribute so the decom decision has the evidence it needs.

Hypervisor Memory and State

Most server data lives on disk, but RAM dumps and swap can contain credentials and key material. A clean decommission powers the host off in a controlled way and confirms RAM is no longer powered before any drives are removed. For DRAM this is automatic; for non-volatile DIMMs (NVDIMMs), the same sanitization rigor as a drive applies.

The Decommission Sequence

  1. Workloads drained (host moves to Drained).
  2. Configuration backed up (BMC settings, BIOS settings, RAID configuration if relevant).
  3. Removed from monitoring, the management plane, and the CMDB.
  4. Powered off, drives removed (or retained per disposal plan), serials photographed.
  5. Drive-by-drive sanitization or destruction per the data classification — software wipe, crypto erase, degaussing for magnetic media, or physical destruction.
  6. Per-drive certificates collected and attached to the chassis disposal record.
  7. Chassis sanitized (BMC/IPMI factory-reset, BIOS reset to defaults, identifying labels removed).
  8. HAM record moves to Disposed with the chassis disposal evidence and links to all child component disposal records.
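A disposal gate along these lines keeps step 8 honest: the record cannot move to Disposed while per-drive evidence is missing. A sketch, with field names invented for illustration:

```python
def disposal_blockers(chassis: dict) -> list:
    """Return what still blocks the move to Disposed; empty list means go."""
    missing = []
    for drive in chassis["drives"]:
        if not drive.get("sanitization_cert"):
            missing.append(f"certificate for drive {drive['serial']}")
    if not chassis.get("bmc_reset"):
        missing.append("BMC factory reset")
    return missing

chassis = {
    "drives": [
        {"serial": "SN-D1", "sanitization_cert": "CERT-001"},
        {"serial": "SN-D2", "sanitization_cert": None},   # still outstanding
    ],
    "bmc_reset": True,
}
```

Run as a pre-condition on the state change, this turns "per-drive certificates collected" from a process note into an enforced check.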

The ITAD guide covers the wider disposal framework; the points above are the datacenter-specific extensions that endpoint HAM doesn't have to deal with.

Power, Cooling, and Capacity as HAM-Adjacent Data

HAM doesn't manage capacity planning, but the rack and chassis records carry the data capacity planning needs:

  • Power draw per chassis. Either nameplate (worst-case, conservative) or measured (more realistic, requires PDU integration).
  • Rack-level totals. Sum of chassis power, compared against rack PDU capacity and the room's power budget.
  • Weight per chassis. Important for upper rack-unit placements and for rack manufacturers' weight ratings.
  • Cooling implications. Hot-aisle / cold-aisle layout influences which racks can take what density.

Treating these as attributes of the chassis and rack records — populated automatically where possible, manually where not — turns HAM into the data source for the capacity-planning conversations that otherwise rely on spreadsheets that drift away from reality.
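The rack-level roll-up is simple arithmetic once the attributes exist on the chassis records. A sketch assuming nameplate values stored per chassis (field names are illustrative):

```python
def rack_load(chassis_records, pdu_capacity_kw: float) -> dict:
    """Sum per-chassis power and weight and compare against PDU capacity."""
    total_kw = sum(c["power_kw"] for c in chassis_records)
    total_kg = sum(c["weight_kg"] for c in chassis_records)
    return {"power_kw": total_kw,
            "weight_kg": total_kg,
            "headroom_kw": pdu_capacity_kw - total_kw}

load = rack_load(
    [{"power_kw": 0.8, "weight_kg": 25},    # nameplate values, conservative
     {"power_kw": 1.2, "weight_kg": 32}],
    pdu_capacity_kw=5.0,
)
```

A deployment request is then a comparison against `headroom_kw` rather than a walk to the rack with a clipboard.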

Common Mistakes

  • Tracking servers as endpoints. Same fields, no rack location detail, no component sub-records. Works until the first time someone has to find a specific failing drive in a 24-bay storage host.
  • Letting "the rack" be a free-text field. "Rack 12" means different things in DC-A and DC-B. Site-and-rack-and-elevation as structured fields is the only model that scales.
  • Treating VMs as HAM assets. Floods HAM with records that turn over hourly and dilutes the population that matters for finance and compliance.
  • Skipping the BMC/iDRAC/iLO address. Out-of-band management is exactly what you need at 3am; not having it on the HAM record means you can't recover the host without first hunting for the address.
  • Forgetting drive sub-records. A chassis disposal record without per-drive sanitization references is a compliance gap waiting for an auditor to find.
  • Approximating power and weight. Rack capacity decisions made on guesswork eventually produce an outage. Capture the data once, accurately, and let it drive the decisions afterward.
  • Letting the Awaiting Sanitization queue grow. Drained hosts with drives still in them sitting for months are a security finding. The state should have an SLA — typically a few business days — and a queue report that surfaces it.
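The queue report in the last mistake above is trivial once the state-entry timestamp lives on the record. A sketch with an illustrative five-day SLA (field names invented):

```python
from datetime import date

SLA_DAYS = 5  # illustrative: "a few business days", calendar days used here

def overdue_sanitization(queue, today: date) -> list:
    """Hosts sitting in Awaiting Sanitization past the SLA."""
    return [h["asset_tag"] for h in queue
            if (today - h["entered_state"]).days > SLA_DAYS]

queue = [
    {"asset_tag": "A-1001", "entered_state": date(2026, 4, 1)},   # 26 days old
    {"asset_tag": "A-1002", "entered_state": date(2026, 4, 25)},  # 2 days old
]
```

Surfaced daily, this list is the difference between a managed queue and a future audit finding.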

Audit Considerations Specific to Datacenter Equipment

The audit methodology in the audit guide applies directly. Datacenter-specific notes:

  • Audit by rack, not by random sample. Walking a rack and verifying every device against HAM is more efficient than sampling individual chassis spread across the data hall, because you're already there.
  • Reverse-test from the management plane. The list of hosts the hypervisor cluster, monitoring system, or out-of-band management network can reach is the most reliable second source for reconciliation.
  • Drive-level evidence at disposal audits. When external auditors review disposal evidence, they will ask for per-drive certificates on at least a sample of disposed servers. The Disposed record should make those certificates easy to surface.

Cloud and Colocated Equipment

Two boundary cases worth being explicit about:

  • Cloud-hosted infrastructure (IaaS, dedicated bare-metal services). Generally not in HAM, even when the bare metal is "yours" for a contract period. The financial relationship is operating expense, the supplier owns the chassis, and the chain of custody for sanitization is theirs to document. What HAM should track is the contract relationship and the data-classification boundary, not the underlying hardware.
  • Colocated hardware in a third-party datacenter. Owned by your organization, hosted by someone else. Fully in HAM, with the colo facility as the site, the rack as either yours or shared, and the access controls (smart hands, biometric access logs) as part of the location's compliance posture.

Next Steps

Network Equipment HAM

The companion guide covering switches, routers, firewalls, and the rest of the network gear that shares the rack with your servers.

Read the network guide →

HAM vs CMDB

How the physical-host record in HAM relates to the CIs, VMs, and services in the CMDB.

Read HAM vs CMDB →

ITAD & Disposal

Sanitization standards and disposal evidence — the broader framework the per-drive disposal records sit inside.

Read the ITAD guide →