Question

Validating Storage Policy Migration Workflow to new HyperScale Edge

  • February 24, 2026
  • 1 reply
  • 10 views


Hi everyone,

We recently deployed a new HyperScale Edge appliance.

I am planning a migration/cutover of an existing Storage Policy (currently writing to legacy local storage) to the new HyperScale Edge. I want to validate the following GUI-based procedure to ensure it aligns with Commvault Best Practices, specifically regarding data retention, limits, and potential known issues during the cutover.

Additionally, if anyone has an official flowchart or graphical diagram of this specific cutover process, please share it.

Proposed Migration Flow:

Step 1: Target Infrastructure Verification

  • Verify the newly installed HyperScale Edge has automatically provisioned its dedicated Storage Pool.

Step 2: Create Secondary Copy

  • Navigate to the relevant Storage Policy (e.g., SP_HafatzaDan_All).

  • Right-click -> All Tasks -> Create New Copy -> Synchronous Copy.

  • Name the new copy (e.g., HafatzaDan_HSX).

  • From the drop-down menu, point the default Datapath to the new HyperScale Storage Pool.

  • Match the Retention settings exactly with the Primary copy (e.g., 30 days).
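Since mismatched retention would age jobs off the two copies at different times during the transition, a trivial pre-flight check can make the "match exactly" rule explicit. This is a hypothetical helper, not a Commvault API; the day/cycle values are illustrative:

```python
def retention_matches(primary_days, primary_cycles,
                      secondary_days, secondary_cycles):
    """Sanity check that the new copy's retention mirrors the primary's,
    so jobs do not age off the copies at different times mid-migration."""
    return (primary_days == secondary_days
            and primary_cycles == secondary_cycles)

# Example: primary keeps 30 days / 2 cycles, new copy must match
ok = retention_matches(30, 2, 30, 2)
```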

Step 3: Data Synchronization (Auxiliary Copy)

  • Right-click the Storage Policy -> All Tasks -> Run Auxiliary Copy.

  • Under the selection criteria, choose Select A Copy and select the newly created copy (HafatzaDan_HSX).

  • Configure the number of Data Streams (e.g., 10) and execute to seed the data from the legacy storage to the new HSX node.
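Before launching the seed, a back-of-the-envelope runtime estimate from data volume and per-stream throughput helps set expectations for the cutover window. The numbers below (50 TB, 10 streams, ~100 MB/s per stream) are entirely illustrative; measure your own environment:

```python
def estimate_seed_hours(total_tb, streams, mb_per_s_per_stream):
    """Very rough aux copy duration estimate. Assumes streams scale
    linearly, which they rarely do once legacy read IOPS or the
    network path to the HSX node saturates."""
    total_mb = total_tb * 1024 * 1024  # TB -> MB (binary units)
    aggregate_mb_s = streams * mb_per_s_per_stream
    return total_mb / aggregate_mb_s / 3600

# Hypothetical: 50 TB to seed over 10 streams at ~100 MB/s each
hours = estimate_seed_hours(50, 10, 100)  # roughly 14.6 hours
```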

Step 4: Cutover (Promote to Primary)

  • Wait for the Aux Copy to complete 100% (verify no jobs are left to be copied).

  • Right-click the new copy (HafatzaDan_HSX) -> Promote to Primary.
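The "no jobs left to be copied" condition above can be expressed as a simple gate: promote only when the aux copy backlog is zero and nothing is still writing to the legacy primary. The counts here are hypothetical values you would read from the Jobs view or a report, not a real API call:

```python
def safe_to_promote(jobs_to_be_copied, jobs_running_to_legacy):
    """Cutover gate: promote the new copy only when the aux copy
    backlog is empty and no backup job is still writing to the
    legacy primary copy."""
    return jobs_to_be_copied == 0 and jobs_running_to_legacy == 0

# Example: backlog drained and no active jobs -> safe to promote
ready = safe_to_promote(0, 0)
```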

Step 5: Decommissioning the Legacy Copy

  • The legacy copy (HafatzaDan) is now automatically demoted to a synchronous secondary copy.

  • Right-click the legacy copy -> Properties -> clear the Active checkbox to disable it.

  • Monitor the environment for 1-3 days to ensure production backups are writing successfully to the new HSX primary copy.

  • Once validated, right-click the legacy copy -> Delete to permanently remove it and prune the legacy storage.

Additional Queries for Best Practices:

  1. Network Throttling: During the initial massive Aux Copy (Step 3), what are the best practices for implementing network bandwidth throttling between the legacy storage and the new HSX nodes? Should this be configured via Network Topologies/Routes, or is there a better approach to prevent impacting production storage traffic?

  2. CLI / Automation: Are there specific qoperation or qcli commands recommended for executing or automating this cutover (specifically the promotion to Primary in Step 4) and monitoring the exact queue of jobs pending to be copied?

  3. Performance Impact: Are there any caveats or known performance impacts on the CommServe database or the underlying legacy storage during Step 3 (Aux Copy) and Step 5 (Pruning/Deletion) that I should be aware of?

Thanks in advance.



Hi @Jean-Paul,

 

Step 1: Target Infrastructure Verification

Verify the newly installed HyperScale Edge has automatically provisioned its dedicated Storage Pool.

In addition to the above, please verify the following:

- Deduplication DB (DDB) health on the HSE (if using dedupe).
- Disk library mount paths show Ready.
- The data path is enabled and online.
- A Space Reclamation schedule is configured.

Best Practice:
Run a small test backup to the new Storage Pool before seeding production data.

 

Step 2: Create Secondary Copy

Navigate to the relevant Storage Policy (e.g., SP_HafatzaDan_All).
Right-click -> All Tasks -> Create New Copy -> Synchronous Copy.
Name the new copy (e.g., HafatzaDan_HSX).
From the drop-down menu, point the default Datapath to the new HyperScale Storage Pool.
Match the Retention settings exactly with the Primary copy (e.g., 30 days).

The steps above look good.

 

Step 3: Data Synchronization (Auxiliary Copy)

Right-click the Storage Policy -> All Tasks -> Run Auxiliary Copy.
Under the selection criteria, choose Select A Copy and select the newly created copy (HafatzaDan_HSX).
Configure the number of Data Streams (e.g., 10) and execute to seed the data from the legacy storage to the new HSX node.

Expect:

- High read IOPS on the legacy storage.
- High CPU on the HSE during ingestion.

This is normal.

 

Step 4: Cutover (Promote to Primary)

Wait for the Aux Copy to complete 100% (verify no jobs are left to be copied).
Right-click the new copy (HafatzaDan_HSX) -> Promote to Primary.

Follow the steps below when promoting to primary:

- Ensure that no jobs are running to the legacy storage at the time of promotion.
- Go to the storage policy properties -> Copy Precedence -> move the synchronous copy above the primary copy and save.
- Then promote the synchronous copy to primary by right-clicking the copy, and verify that the legacy primary copy changes to a secondary copy automatically.
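The precedence change above amounts to moving the copy being promoted to the top of the copy-precedence list before the promotion itself. A toy sketch of that reorder (the copy names come from the question; this is not Commvault code):

```python
def reorder_precedence(copies, promote):
    """Move the copy being promoted to the top of the copy-precedence
    list, keeping the relative order of the remaining copies.
    In the GUI this is the Copy Precedence tab of the policy."""
    if promote not in copies:
        raise ValueError("unknown copy: " + promote)
    return [promote] + [c for c in copies if c != promote]

# Example: HSX copy must sit above the legacy copy before promotion
order = reorder_precedence(["HafatzaDan", "HafatzaDan_HSX"],
                           "HafatzaDan_HSX")
```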

 

Step 5: Decommissioning the Legacy Copy

The legacy copy (HafatzaDan) is now automatically demoted to a synchronous secondary copy.
Right-click the legacy copy -> Properties -> clear the Active checkbox to disable it.
Monitor the environment for 1-3 days to ensure production backups are writing successfully to the new HSX primary copy.
Once validated, right-click the legacy copy -> Delete to permanently remove it and prune the legacy storage.


For decommissioning, follow the steps below:

- Right-click the legacy copy -> Properties -> clear the Active checkbox to disable it.
- Monitor the environment for 1-3 days to ensure production backups are writing successfully to the new HSX primary copy.
- Once validated, delete all the backup jobs in the legacy storage pool and seal the DDB.
- Sealing the DDB helps delete the baseline data in the legacy storage pool, so the deletion goes smoothly.
- Run a manual Data Aging job and let pruning happen automatically, after which the decommission can be performed.
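The decommissioning bullets above are order-sensitive: disabling and monitoring come before deleting jobs, sealing the DDB precedes data aging, and the storage is retired last. A small sketch that encodes that sequence (the step names are made up for illustration):

```python
# Order matters: skipping ahead risks deleting data that is still
# the only valid copy, or decommissioning storage that still holds
# unpruned baseline data.
DECOMMISSION_ORDER = [
    "disable_legacy_copy",
    "monitor_new_primary",
    "delete_legacy_jobs",
    "seal_ddb",
    "run_data_aging",
    "decommission_storage",
]

def next_step(done):
    """Return the next decommissioning step given the steps already
    completed, or None once the sequence is finished."""
    for step in DECOMMISSION_ORDER:
        if step not in done:
            return step
    return None

# Example: after disabling the copy, the next action is monitoring
step = next_step(["disable_legacy_copy"])
```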

Since you are deleting all the jobs and pruning has to run, this will be resource intensive, but it should not impact the backups written to the HSE storage appliance.