Unexpected Solaris Cluster behavior when replacing a ZFS zpool with newer pool
We have a Solaris Cluster which contains several zpools under the control of the HAStoragePlus resource type. Much of the user data has been migrated off the zpools over time, and in order to save on storage we want to migrate the remaining data to new smaller zpools with newer storage LUNs. We would then destroy the older large zpools and free their SAN LUNs for use elsewhere.
In order to preserve our standard pool naming, we want the new pools to have the same names as the old pools.
The implementation of our plan went well: we used zfs send/receive to copy all the data to the new pools, renamed the old pools and exported them, then imported the new pools under the original names. Finally, we exported the pools in preparation for bringing them online in the Cluster.
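Concretely, the migration steps just described can be sketched as follows. The pool names 'userdata' and 'userdata-new' are illustrative (the real environment had several pools), and the helper only prints each command so the sketch is safe to run anywhere; replace the echo with "$@" to execute for real.

```shell
#!/bin/sh
# Dry-run sketch of the pool migration. Pool names are assumptions.
run() { echo "$@"; }   # prints each command; replace echo with "$@" to execute

OLD=userdata          # existing pool on the large LUNs
NEW=userdata-new      # temporary name for the pool on the new, smaller LUNs

# 1. Copy the data with a recursive snapshot and zfs send/receive.
run zfs snapshot -r ${OLD}@migrate
run sh -c "zfs send -R ${OLD}@migrate | zfs receive -F ${NEW}"

# 2. Rename the old pool out of the way (zpool import can rename on import).
run zpool export ${OLD}
run zpool import ${OLD} ${OLD}-old

# 3. Give the new pool the original name the same way.
run zpool export ${NEW}
run zpool import ${NEW} ${OLD}

# 4. Export everything in preparation for bringing the pools online in the Cluster.
run zpool export ${OLD}-old
run zpool export ${OLD}
```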
We then brought the Cluster Resource Groups online, only to find that Solaris Cluster had imported the old storage and given the old pools their old names back. There was no data loss or application impact, but the migration was essentially undone.
After much research, we discovered that Solaris Cluster has its own internal zpool cache files, which it keeps in the Cluster Configuration Repository (CCR). When the cluster imports a pool, it consults these cache files and imports the devices (still the old ones) that are in the file. The cache files are kept in /etc/cluster/ccr/global, and are named with the form <poolname>.cachefile.
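On a live cluster node you can see these cache files, and inspect which devices a pool would import from (assuming zdb's standard -C -U invocation for reading a pool configuration from an alternate cache file). A sketch, using a dry-run helper that only prints the commands:

```shell
#!/bin/sh
# View-only inspection of the cluster's per-pool cache files.
# Never edit or delete these files by hand; use ccradm for changes.
run() { echo "$@"; }   # prints each command; replace echo with "$@" to execute

# List the cache files the cluster keeps, one per pool:
run ls -l /etc/cluster/ccr/global

# Dump the device configuration recorded for the 'userdata' pool
# (pool name is illustrative; assumes zdb -C -U reads an alternate cachefile):
run zdb -C -U /etc/cluster/ccr/global/userdata.cachefile
```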
The fix is to delete the zpool cache files associated with the pools being replaced. Do NOT delete the files manually: you MUST use the cluster utility 'ccradm' for this step, since ccradm ensures data consistency among all cluster nodes and recomputes the file checksums.
For example, for a zpool named 'userdata':

- First, take the resource group offline so that the zpool is exported.
- Next, rename the old pool and the new pool so that the new pool has the desired name. This will require a few 'zpool import' commands.
- Next, export all the affected pools.
- Then run the following to delete the pool's cache file:

  # /usr/cluster/lib/sc/ccradm remtab userdata.cachefile

- Finally, bring the RG online to import the pool. The pool should now be on the new storage. After verification, the old pool can be imported, destroyed, and its SAN LUNs unzoned from the server.
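Putting the whole repair procedure together, a sketch for a single pool follows. The resource-group name 'userdata-rg' and the temporary pool names are illustrative assumptions; as above, the helper only prints each command, so the sketch is safe to run as-is.

```shell
#!/bin/sh
# Dry-run sketch of the repair procedure for one pool.
# Names (userdata-rg, userdata-new, userdata-old) are assumptions.
run() { echo "$@"; }   # prints each command; replace echo with "$@" to execute

RG=userdata-rg        # assumed resource group controlling the pool
POOL=userdata

# 1. Take the resource group offline so HAStoragePlus exports the pool.
run clresourcegroup offline ${RG}

# 2. Swap the names: park the old pool under a new name, then give
#    the new pool the original name (zpool import renames on import).
run zpool import ${POOL} ${POOL}-old
run zpool import ${POOL}-new ${POOL}

# 3. Export all the affected pools.
run zpool export ${POOL}-old
run zpool export ${POOL}

# 4. Remove the stale cluster cache file with ccradm (never delete it by hand).
run /usr/cluster/lib/sc/ccradm remtab ${POOL}.cachefile

# 5. Bring the resource group online; the cluster now imports the new storage.
run clresourcegroup online ${RG}
```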