How to remove a stale datastore from VMware vCenter

When a datastore loses its backend storage connection, vCenter can keep it registered in its inventory even though the underlying volume is no longer accessible. The GUI unmount task fails with a “datastore in use” error, and the ESXi host reports a stale file handle on the mount point. In this situation, the only reliable path forward is to force-unmount the datastore directly on the ESXi host via CLI, then remove the orphaned record from the vCenter database manually.

This guide covers the complete procedure for VMware vSphere with vCenter Server Appliance (VCSA).

Problem symptoms

You are likely dealing with a stale datastore if you see one or more of the following:

  • The datastore is listed as Inaccessible in the vCenter inventory
  • Unmounting via the vCenter GUI fails with a task error such as “The resource is in use”
  • On the ESXi host, ls -lah /vmfs/volumes shows a Stale file handle error for the datastore UUID
  • The datastore symlink in /vmfs/volumes points to a UUID that has no corresponding directory
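The dangling-symlink symptom can also be checked programmatically. A minimal sketch, assuming a `find` that supports `-L` (GNU and most BusyBox builds do, but verify on your ESXi build; VOLDIR defaults to /vmfs/volumes and is only a variable here so the same check can be exercised elsewhere):

```shell
# List symlinks under VOLDIR whose targets no longer resolve.
# With find -L, a link that still resolves is treated as its target's
# type, so only dangling links are still reported as type l.
VOLDIR="${VOLDIR:-/vmfs/volumes}"
if [ -d "$VOLDIR" ]; then
  find -L "$VOLDIR" -maxdepth 1 -type l
fi
```

Any path this prints is a symlink whose target directory is missing — the signature of a stale datastore mount.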

Root cause

The ESXi host holds a mount point entry for the datastore UUID, but the underlying storage volume is no longer reachable. Because the mount point is in a broken state rather than cleanly unmounted, vCenter cannot complete a normal unmount task — it sees the datastore as still attached. At the same time, vCenter retains a database record for the datastore even after the storage becomes permanently unavailable, keeping it visible in the inventory in an inaccessible state.

The fix requires two separate actions: force-unmounting the stale file system entry on each affected host, then deleting the orphaned rows from the vCenter Postgres database.

Prerequisites

  • SSH access to each ESXi host that had the datastore mounted
  • SSH access to the vCenter Server Appliance (VCSA) with root credentials
  • Sufficient time to stop and restart the vpxd service (brief vCenter management outage)

Warning: This procedure involves direct edits to the vCenter Postgres database. Take a snapshot of the vCenter Server Appliance before starting, so that you have a clean rollback point if something goes wrong during the database operation.

Step 1 — Identify and force-unmount the stale datastore on each ESXi host

Connect to the affected ESXi host via SSH and list the volumes directory:

ls -lah /vmfs/volumes

The output will include a Stale file handle error for the broken UUID, along with a symlink that points to that UUID but has no matching directory. Look for the symlink whose target does not appear as a drwx directory line in the listing:

ls: /vmfs/volumes/xxxxxxxx-xxxxxxxx-0000-000000000000: Stale file handle
total 5640
drwxr-xr-x    1 root     root         512 Mar 23 06:59 .
drwxr-xr-x    1 root     root         512 Oct 26 20:03 ..
drwxr-xr-t    1 root     root       76.0K Oct 16 12:54 674a490e-12d76a41-9f6d-043201326090
drwxr-xr-t    1 root     root       72.0K Oct 16 12:54 674a490e-23721f31-0eef-043201326090
drwxr-xr-t    1 root     root       76.0K Mar 22 07:41 674f27e4-4daf8fad-1c89-043201326520
lrwxr-xr-x    1 root     root          35 Mar 23 06:59 BOOTBANK1 -> bab81f18-6f4e2cbd-0bba-8b147f5c4efa
lrwxr-xr-x    1 root     root          35 Mar 23 06:59 Datastore-Name -> xxxxxxxx-xxxxxxxx-0000-000000000000
lrwxr-xr-x    1 root     root          35 Mar 23 06:59 Datastore1 -> 674f27e4-4daf8fad-1c89-043201326520
lrwxr-xr-x    1 root     root          35 Mar 23 06:59 Local_ESXi01 -> 674a490e-23721f31-0eef-043201326090

Note the datastore name and its UUID. To confirm no processes are holding the mount point open, run:

lsof | grep xxxxxxxx-xxxxxxxx-0000-000000000000

Replace xxxxxxxx-xxxxxxxx-0000-000000000000 with the actual UUID from your output. If the command returns no output, nothing is actively using the mount point; if it does return results, stop those processes before continuing. Force-unmount using the UUID:

esxcli storage filesystem unmount -u xxxxxxxx-xxxxxxxx-0000-000000000000

A successful unmount produces no output. The command returns silently to the prompt.
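Because the UUID is pasted into a destructive command, a quick format check can catch a truncated or mangled copy-paste before it does damage. A hypothetical helper — valid_vmfs_uuid is a local shell function, not a VMware command; the pattern matches the 8-8-4-12 hex shape of the UUIDs shown above:

```shell
# Return success only for strings shaped like a VMFS volume UUID,
# e.g. 674f27e4-4daf8fad-1c89-043201326520 (hex groups of 8-8-4-12).
valid_vmfs_uuid() {
  echo "$1" | grep -Eq '^[0-9a-f]{8}-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{12}$'
}

UUID="674f27e4-4daf8fad-1c89-043201326520"
if valid_vmfs_uuid "$UUID"; then
  echo "UUID format looks valid"
else
  echo "not a VMFS UUID: $UUID" >&2
fi
```

A guard like this is cheap insurance before any command that takes the UUID as an argument.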

Note: If the datastore was mounted on more than one ESXi host, repeat this step on each host before proceeding to Step 2.
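When several hosts are involved, the per-host unmount can be scripted. A dry-run sketch, assuming SSH is enabled on the hosts — esxi01 and esxi02 are placeholder host names, and the echo is there so the commands are printed for review rather than executed (drop it to actually run them):

```shell
# Print the per-host unmount commands for review before running them.
# esxi01/esxi02 are placeholder host names; UUID is the stale volume's UUID.
UUID="xxxxxxxx-xxxxxxxx-0000-000000000000"
for H in esxi01 esxi02; do
  echo ssh "root@$H" "esxcli storage filesystem unmount -u $UUID"
done
```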

After unmounting, verify the stale symlink is gone:

ls -lah /vmfs/volumes

The stale file handle error and the Datastore-Name symlink should no longer appear in the output:

total 5640
drwxr-xr-x    1 root     root         512 Mar 23 07:03 .
drwxr-xr-x    1 root     root         512 Oct 26 20:03 ..
drwxr-xr-t    1 root     root       76.0K Oct 16 12:54 674a490e-12d76a41-9f6d-043201326090
drwxr-xr-t    1 root     root       72.0K Oct 16 12:54 674a490e-23721f31-0eef-043201326090
drwxr-xr-t    1 root     root       76.0K Mar 22 07:41 674f27e4-4daf8fad-1c89-043201326520
lrwxr-xr-x    1 root     root          35 Mar 23 07:03 BOOTBANK1 -> bab81f18-6f4e2cbd-0bba-8b147f5c4efa
lrwxr-xr-x    1 root     root          35 Mar 23 07:03 Datastore1 -> 674f27e4-4daf8fad-1c89-043201326520
lrwxr-xr-x    1 root     root          35 Mar 23 07:03 Local_ESXi01 -> 674a490e-23721f31-0eef-043201326090

Step 2 — Remove the stale entry from the vCenter database

Even after force-unmounting on the host, vCenter retains the datastore record in its Postgres database. You need to delete it manually.

2.1 Stop the vpxd service

Connect to the VCSA via SSH and stop the vCenter management service:

service-control --stop vpxd

Wait for the confirmation message before continuing:

Operation not cancellable. Please wait for it to finish...
Performing stop operation on service vpxd...
Successfully stopped service vpxd

2.2 Connect to the Postgres database

cd /opt/vmware/vpostgres/current/bin/
psql -U postgres -d VCDB

You should see the Postgres prompt:

psql (14.19)
Type "help" for help.

VCDB=#

2.3 Find the datastore ID

Run the following query to list all registered datastores:

select id, name, storage_url from vpx_datastore;

The output will list all datastores with their numeric ID and storage URL. Locate the stale datastore by matching the name or UUID in the storage_url column:

  id   | name           | storage_url
-------+----------------+---------------------------------------------------------
  1017 | Datastore1     | ds:///vmfs/volumes/674f27e4-4daf8fad-1c89-043201326520/
  1018 | Datastore2     | ds:///vmfs/volumes/674f27f7-555ed453-fdd4-043201326520/
  1234 | Datastore-Name | ds:///vmfs/volumes/xxxxxxxx-xxxxxxxx-0000-000000000000/
  2004 | Local_ESXi01   | ds:///vmfs/volumes/674a490e-23721f31-0eef-043201326090/
(4 rows)

Note the numeric id value for the stale datastore — you will use it in the next step.

Warning: Double-check the ID before running any delete commands. Deleting the wrong datastore ID from the database will remove a healthy datastore from the vCenter inventory.

To confirm you have the correct record, run:

select * from vpx_entity where id=1234;

Replace 1234 with the actual ID from your environment.

2.4 Delete the stale records

Delete the datastore entry from all four affected tables in the order shown, so that dependent rows are removed before the datastore and entity records they reference. Replace 1234 with the actual ID from your environment:

delete from vpx_ds_assignment where ds_id=1234;
delete from vpx_vm_ds_space where ds_id=1234;
delete from vpx_datastore where id=1234;
delete from vpx_entity where id=1234;

Each statement reports the number of rows it deleted. For a datastore that was mounted on a single host and had no registered VMs, each typically deletes one row:

DELETE 1
DELETE 1
DELETE 1
DELETE 1

If the vpx_datastore or vpx_entity statement returns DELETE 0, verify that the correct ID was used; DELETE 0 from the first two tables can be normal when no assignment or space records existed. Exit the database session:

\q
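If you prefer an extra safety net, the deletes above can instead be run inside a transaction before quitting, so that a mistake can still be undone prior to commit. A sketch, using 1234 as the example ID:

```sql
BEGIN;
delete from vpx_ds_assignment where ds_id=1234;
delete from vpx_vm_ds_space where ds_id=1234;
delete from vpx_datastore where id=1234;
delete from vpx_entity where id=1234;
-- Review the DELETE counts above, then:
COMMIT;   -- or issue ROLLBACK; instead to undo
```

Postgres discards all four deletes on ROLLBACK, which pairs well with the snapshot taken earlier as a second line of defense.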

2.5 Start the vpxd service

service-control --start vpxd

Wait for the confirmation before checking the vCenter inventory:

Operation not cancellable. Please wait for it to finish...
Performing start operation on service vpxd...
Successfully started service vpxd

Verification

After vpxd is running again, open the vCenter UI and navigate to the datastore inventory. The stale datastore should no longer appear.

On the ESXi host, confirm the volume is fully gone:

ls -lah /vmfs/volumes

No symlink or directory entry should remain for the removed UUID.
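For scripted verification, a hypothetical helper can express the same check — uuid_gone is a local shell function, not an esxcli command, and VOLDIR is overridable only so the logic can be exercised outside ESXi:

```shell
# Succeed (exit 0) only when no entry for the given UUID remains in VOLDIR.
uuid_gone() {
  _dir="${VOLDIR:-/vmfs/volumes}"
  ! ls "$_dir" 2>/dev/null | grep -q "$1"
}
```

Calling uuid_gone with the removed UUID should succeed on every host that was cleaned up.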

Result: The datastore has been cleanly removed from both the ESXi host and the vCenter inventory. No further cleanup is required unless the same datastore was registered on additional hosts.

Related guides