Solved

Restore AWS EC2 instance to VMware infrastructure

  • 22 September 2021
  • 9 replies
  • 279 views

Userlevel 4
Badge +14

Dear community,

I am trying to restore an EC2 instance to our on-premises VMware environment.

Currently we do this manually, but being able to do it through Commvault would be very helpful.

Here is the log file for the restore job (see the sample errors below):

 

The VM is created on VMware, but I think the data transfer is not completing correctly.

More information:

On premises we use direct SAN transport mode.

When I do a cross restore with the default settings, it uses NBD transport mode.


Thanks !!


-------------------------------------

Sample errors:

4184  2848  09/22 13:27:01 147179 10-# [DM_BASE    ] IncrOvlThds: Changed Simulated Ovl pool thread count. Max=32, Min=4, Limit=64, Rdrs [1]
4184  33c8  09/22 13:27:01 147179  [DEVICE_IO] DirectReadFile() - Error:  Message: The operation is not valid for the object's storage class
4184  358   09/22 13:27:01 147179 10-# [DM_BASE    ] SyncWithReadAheadThread: Cannot wait for the I/O to complete. iRet [0], LastError [0xEC02AD1D:{CCloudFile::Read(1365)} + {CVBaseRemoteFile::GetLastMMErrorCode(1195)/MM.44317- Message: The operation is not valid for the object's storage class
4184  358   09/22 13:27:01 147179 10-# [DM_BASE    ] ReadTagHeader: Cannot sync up with the read ahead thread. iRet [-1]
4184  358   09/22 13:27:01 147179 10-# [DM_BASE    ] [error] Invalid Read size - -1
4184  358   09/22 13:27:01 147179 10-# [DM_READER  ] DataReader::Read: Failed to READ data from ARCHIVE FILE [499212] COPY [189]. PHYSICAL LEN = [96] LOGICAL LEN =[0]. TAGHEADER = [1]
4184  358   09/22 13:27:01 147179 19-# [FSRESTHEAD ] Encountered error when reading tagHdr from media
4184  358   09/22 13:27:01 147179 19-# [FSRESTHEAD ] Encountered error when reading tagHdr from media
4184  358   09/22 13:27:01 147179 19-# [FSRESTHEAD ] SendDataBuffer: fsRestoreRead reported failure


Best answer by Damian Andre 22 September 2021, 19:52


9 replies

Userlevel 6
Badge +14

Hi @Bloopa ,

 

I note the error in the log: “The operation is not valid for the object's storage class”.
- What’s the Storage Type of the Library that you are restoring the Data from here?
- Do you have any other Copies of the Job?
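While checking the above, you can also spot-check an object's storage class directly from the AWS side. A minimal boto3 sketch (the bucket name and key here are placeholders for whatever backs your cloud library):

import boto3

s3 = boto3.client("s3")

# Placeholder bucket and key - substitute the bucket and chunk path backing your cloud library
resp = s3.head_object(Bucket="example-commvault-library", Key="path/to/chunk-file")

# head_object omits StorageClass for STANDARD; anything else (e.g. GLACIER) is reported explicitly
print(resp.get("StorageClass", "STANDARD"))

# A "Restore" field appears once a recall has been requested or has completed
print(resp.get("Restore", "no restore requested"))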

 

Best Regards,

Michael 

Userlevel 4
Badge +14

Hi,

I only have snapshots and then a backup copy, because we use IntelliSnap.

 

Userlevel 6
Badge +14

Thanks @Bloopa ,

Since you’re using IntelliSnap, the VM Conversion (restore) to VMware would be performed from the Backup Copy in the S3 Bucket.

Is this bucket definitely Standard-IA, or is it a combined tier?
- If it's combined with an archive tier, then a recall would be required: https://documentation.commvault.com/11.22/expert/9218_restoring_data_from_archive_cloud_storage.html
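For context, a recall of this kind maps to S3's RestoreObject API. Commvault drives the recall itself per the documentation above, but for a single object it looks roughly like this minimal boto3 sketch (bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")

# Ask S3 to stage a temporary copy of an archived object for 7 days.
# Tier trades speed for cost: "Expedited", "Standard", or "Bulk".
s3.restore_object(
    Bucket="example-commvault-library",   # placeholder bucket name
    Key="path/to/archived-chunk",         # placeholder object key
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)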

 

Best Regards,

Michael

Userlevel 4
Badge +14

@MichaelCapon it is Standard-IA.

But support seems to be asking me to perform a recall, and that is what I am testing.

It is very slow :(

 

Userlevel 4
Badge +14

Does the data go to Glacier after a certain period with S3 Standard-IA?

Userlevel 6
Badge +14

Thanks @Bloopa ,

 

Do you have a Lifecycle Policy configured on the bucket (to move data into an archive tier) at all here?
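You can check this from the AWS side; a minimal boto3 sketch (the bucket name is a placeholder):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    cfg = s3.get_bucket_lifecycle_configuration(Bucket="example-commvault-library")
    for rule in cfg["Rules"]:
        # Transitions list the target storage class (e.g. GLACIER) and the age threshold
        print(rule.get("ID", "<unnamed rule>"), rule.get("Transitions", []))
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
        print("No lifecycle policy on this bucket")
    else:
        raise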

 

Best Regards,

Michael

Userlevel 7
Badge +23

@MichaelCapon wrote: “Do you have a Lifecycle Policy configured on the bucket (to move data into an archive tier) at all here?”

I was just going to say this. The error message indicates the object has been archived, so a lifecycle policy on the bucket may be moving data to Glacier. Neither S3-IA nor Commvault will move it to Glacier on their own; it sounds like a lifecycle policy has been configured.

At this point you should be careful: recovering data from Glacier costs money. Commvault will attempt as granular a recovery as possible, but if your entire bucket is now in Glacier, those recall costs can add up quickly.
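Before triggering recalls, it may be worth measuring how much of the bucket is actually archived. A minimal boto3 sketch that tallies objects and bytes per storage class (the bucket name is a placeholder):

import boto3
from collections import Counter

s3 = boto3.client("s3")
counts, sizes = Counter(), Counter()

# Walk every object in the bucket and tally by storage class
for page in s3.get_paginator("list_objects_v2").paginate(Bucket="example-commvault-library"):
    for obj in page.get("Contents", []):
        storage_class = obj.get("StorageClass", "STANDARD")
        counts[storage_class] += 1
        sizes[storage_class] += obj["Size"]

for storage_class in counts:
    print(f"{storage_class}: {counts[storage_class]} objects, {sizes[storage_class] / 2**30:.1f} GiB")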

Userlevel 4
Badge +14

Hello, sorry for the delay.

Yes, a lot of the data is in Glacier.

I asked the AWS team and they confirmed it.

The lifecycle policy was disabled two weeks ago.

But when I try to restore an instance from a synthetic full that completed one week ago, I can see in the logs that some chunks are still in Glacier… :(

Should I seal the dedup DB and create a new one to keep all the data in S3? It is not a problem if we cannot restore older data; what matters is that data from now on can be restored without having to recall from Glacier.

 

Thanks for your help !!

Userlevel 7
Badge +23

Hey @Bloopa - yes, a deduplicated synthetic full is just a logical operation: it does not actually recreate the data, it links to data already contained in other jobs (which is why it's so quick). That's why it's still referring to jobs that are in Glacier. If you seal the DB, it won't refer to any of those old jobs; that could be the solution, but it will create a new baseline and use more storage.
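To illustrate the idea (a conceptual sketch only, not Commvault's actual implementation): a deduplicated store keeps each unique block once, and a synthetic full is just a fresh list of references into that store, so it can still point at blocks written by much older jobs.

# Conceptual sketch of deduplicated backups - NOT Commvault's real data structures.
block_store = {}  # signature -> block payload (the "library" in cloud storage)

def backup(data_blocks, store):
    """Store only blocks not seen before; return the job's list of references."""
    refs = []
    for block in data_blocks:
        signature = hash(block)
        store.setdefault(signature, block)  # dedup: known blocks are referenced, not rewritten
        refs.append(signature)
    return refs

full = backup([b"aaa", b"bbb", b"ccc"], block_store)
incremental = backup([b"bbb", b"ddd"], block_store)

# A synthetic full merges references from earlier jobs - no block is copied,
# which is why it can still point at old (possibly archived) blocks.
synthetic_full = list(dict.fromkeys(full + incremental))
print(len(synthetic_full), "references;", len(block_store), "unique blocks stored once")

Sealing the dedup DB is like starting over with an empty block_store: the next full has nothing old to reference, so everything is written fresh to S3, which is the new baseline mentioned above.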

Reply