Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
Can DDB Backups be encrypted?
Hello, I am enabling encryption for my backup data per a new requirement, by turning it on through the storage policy copy encryption setting. After subsequent backup jobs completed, I verified from the storage policy copy report that the backup data is encrypted. However, I see no indication that the DDB backups are encrypted. Do they need to be? This is a requirement from our auditors; they will see the same report I did and may ask why the DDB backups are excluded. Thanks.
Cloud library migration from Azure one tenant to another
We are running Commvault 11.20. Backup jobs currently use an Azure Blob cloud disk library with the default container setting (the Azure container type is Cool). We would like to move this storage to another tenant with a different storage account and a Cool/Archive container type. Looking for the best approach to migrate the storage, ideally driven from Commvault rather than from Azure.
Compression vs. deduplication for SQL and ORA transaction logs
How do I read this job status data? What are my deduplication savings, if any, or is it compression alone? Where do I look for the actual deduplication savings for the job? Can I leave deduplication enabled for transaction logs, or will that affect performance? Once the job completes I only see this: Could it be that my dedupe savings are nil? Or am I mixing up the terms, and what I see as compression is actually deduplication?
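One way to separate the two effects is to compare the sizes at each stage of the job. This is a minimal sketch with hypothetical figures (not values from the post); substitute the application size, post-compression size, and data-written figures from your own job details:

```python
# Hypothetical job-summary figures; substitute values from your own
# job details (Application Size vs. Data Written / Size on Media).
app_size_gb = 100.0            # data as read from the client
data_after_compress_gb = 60.0  # size after software compression
data_written_gb = 12.0         # size actually written to the dedup store

# Each ratio isolates one stage of the reduction pipeline.
compression_savings = 1 - data_after_compress_gb / app_size_gb
dedup_savings = 1 - data_written_gb / data_after_compress_gb
total_reduction = 1 - data_written_gb / app_size_gb

print(f"compression: {compression_savings:.0%}")  # 40%
print(f"dedup:       {dedup_savings:.0%}")        # 80%
print(f"total:       {total_reduction:.0%}")      # 88%
```

If the "data written" figure is close to the post-compression figure, the dedup savings for that job are indeed near nil, which is common for transaction logs since their content rarely repeats.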
Configuring WORM on cloud storage
Hi All, I was documenting WORM activation on our cloud storage, drawing from different threads here and from the documentation. I came across a few questions which I hope will get answered in this topic.

1 - The linked page states: "Note: Once applied, the WORM functionality is irreversible." Does that mean that once we activate WORM on the storage through the workflow, we cannot change the retention? As a first test of WORM, we wanted to set the retention of one storage policy copy on the storage pool to just 1 day. Does that mean we cannot later change the retention in the workflow to something else, say 15 days?

2 - From the same link: since our storage pool uses deduplication, it is stated that the retention on the storage will be set to twice the retention on the storage pool. Our copies on the storage pool will be set to 15 days; does that mean the data will remain on the storage for 30 days without being deleted, af…
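The doubling described in question 2 can be written out as a small worked example. This is only a sketch of the rule as stated in the post (lock period ≈ twice the copy retention for deduplicated storage, to cover reuse of dedup blocks by newer jobs); the exact formula for your version is in the Commvault WORM documentation:

```python
# Rule as described above for deduplicated WORM cloud storage:
# the object-lock period is roughly twice the storage policy copy
# retention, because dedup blocks can be referenced by later jobs.
copy_retention_days = 15
worm_lock_days = 2 * copy_retention_days

print(worm_lock_days)  # 30 -> data stays locked on storage this long
```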
Catalog jobs from a cloud storage object
Hi Guys, Is there a way to catalog jobs from a bucket within a cloud storage library, like below: The tool offers only Tape or Disk as a media type. How do we retrieve our DR backups from cloud storage in case we lose everything and need to perform a disaster recovery? I found the link below, however it doesn't show how to retrieve the DR DB: https://documentation.commvault.com/11.24/expert/43588_retrieving_disaster_recovery_dr_backups_from_cloud_storage_using_cloud_test_tool.html I've also found the note below: does this mean that if deduplication is enabled, there is no way to retrieve the DR DB? Thanks a lot. Best Regards
Failed to mount media drive
Hello Team, I am getting the error below on the majority of running jobs. I have checked the storage end and also the media agents (the LUNs are attached to the media agent); everything looks good. What could be the cause of the error?

Failed to mount the disk media in library [ARCHIVE_DISKPROD] with mount path [B:\Archive_DiskLibrary\MP10] on MediaAgent [hq_media_svr3]. Operation could not be completed in timeout interval. Please check the following: 1. Library and drive is functioning correctly. 2. Library and Drive management services are running. 3. All other MediaAgent services are running. 4. The time out period on the Expert Storage Configuration Properties Window in the CommCell Console. 5. Cleaning media in Assigned Media Group. Source: hq-vm-commserv, Process: MediaManager
DDB Backups: Is the media agent that has a DDB partition associated with it supposed to back that partition up (and not another media agent)?
We have several DDBs, all partitioned across several media agents. When the DDB backups run, I see most media agents doing a backup for "themselves", meaning the client and the media agent are the same when the DDB backup runs. But for one of them, I cannot get the DDB backup copy to choose the primary/default media agent (in the copy → data paths settings) for any of the DDB backups; it always chooses the alternate data path for both DDB backup partitions. I have *not* yet enabled the "use preferred data path" setting (where it should only use the primary media agent and no alternates), as I feel it should choose the primary and automatically choose the secondary media agent for the other partition if it needs to. Also: I want the DDB backups to be split over 2 media agents, because one media agent is very overpowered (lots of CPU/memory) relative to the other (older, with few CPUs). The media agent the DDB backups keep choosing is this underpowered media agent…
Media Agent Down
Hi, One of our media agents is down. It runs Windows Server, and we are unable to bring the server back up; the MA is currently offline. The server also holds over 10 TB of critical backed-up data. Our OS team has failed to bring up the server. Please suggest how we can recover from this situation.
Keep first week's data on SSD disks, and data from the 2nd to 4th week on NL-SAS disks
Hi, We have data to back up with a retention period of 4 weeks. The challenge is the following: the data within the first week of the retention period must be kept on SSD disks, and the data from the 2nd through the 4th week of retention must go to NL-SAS disks. The goal is to not have the first week's data on the NL-SAS disks, to reduce the space used. Is there a way to reach this goal? Thanks, Regards,
Health report | exclude sealed DDBs from the DDB disk space utilization / strike count
Hello team, I noticed two sealed DDBs have a space warning under the DDB Disk Space Utilization section of the Web Console health report. We have long-term retention for mailbox backups that prevents the DDB store from reclaiming space. I'm looking for a way to exclude sealed DDBs from the DDB disk space utilization / strike count (I have searched CV BOL but it returns no results). Or will I need to contact support to manually free up sealed DDB space? Do I need to upload the CS DB for CV staging? Thank you.
Pure Storage as Immutable Secondary Copy
Hi Community, We are using a disk library as our primary backup storage. We would like to configure an immutable secondary DASH copy on Pure FlashBlade. I would like to understand: Can I create a disk library from Pure FlashBlade with hardware immutability? Can I use a Pure FlashBlade array to create a DASH aux copy, with the disk library primary copy as source and the library created from Pure FlashBlade as target? Backups are streaming and VSA, not IntelliSnap. Per the CV documentation and videos, I see that Pure is only configured as primary backup storage with IntelliSnap backups. Can we use it as a backup library target for aux copies?
Large S3 Bucket Backup
Hi Community, Can we take a backup of an S3 bucket that is 80 TB in size using Commvault? Consider a 10-15% daily change rate. How does Commvault back up S3? Is it a streaming backup, reading objects one by one (which I expect would be very slow), or is some sort of IntelliSnap capability available for S3 backups? Regards, Mohit
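The stated change rate already bounds the daily incremental volume. This is a back-of-the-envelope sketch only (the 1 GB/s throughput is an assumed figure, not a Commvault benchmark), useful for judging whether a streaming incremental is feasible in a backup window:

```python
# Rough sizing sketch with hypothetical throughput: an 80 TB bucket
# with a 10-15% daily object change rate implies this much incremental
# data per day, and roughly how long moving it would take.
def daily_change_tb(bucket_tb: float, change_rate: float) -> float:
    """Incremental data volume per day in TB."""
    return bucket_tb * change_rate

def transfer_hours(tb: float, gb_per_sec: float = 1.0) -> float:
    """Hours to move `tb` terabytes at an assumed sustained rate."""
    return tb * 1024 / gb_per_sec / 3600

for rate in (0.10, 0.15):
    tb = daily_change_tb(80, rate)
    print(f"{rate:.0%} change: {tb:.0f} TB/day, ~{transfer_hours(tb):.1f} h at 1 GB/s")
```

Even under these optimistic assumptions the incremental alone is 8-12 TB per day, so object-level enumeration overhead, not just raw transfer, tends to dominate for buckets with many small objects.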
Archive backups on tapes, from a local CommServe to Azure
Hello, One of our clients has archive backups on tapes, configured in one location, with that infrastructure managed by a local CommServe server. There is a question about the possibility of migrating those archive backups to Azure. In the end, the client would like all archive backups in the Azure cloud, with the ability to manage them from a CommServe in Azure that is a different machine (a global solution for the organization). My question: what would be the best approach for that migration, and how do we calculate the Azure storage usage for the migration process? Regards, Michal
Selective copies using same Global Deduplication Copy
Hello, When using plans, we would like to extend the retention time for some clients using selective copies while reusing the same global deduplication policy. Most of our backups are IntelliSnap snapshots. Is this possible, and how would we best do it? Are there any best practices for this?
AUX Copy optimization from Disklib to S3 library
We're running CV 11.24.25 with a two-node grid (physical) with CIFS mount paths from a Nexsan Unity. It takes secondary copies from the MAs that perform the backups (no direct backups other than DDB), with a partition on each MA. We decided to replace this with a four-node (virtual) grid with S3 (NetApp) storage. The four-node grid was set up with a global dedupe policy using a 512 KB dedupe block size, with a partition on each node; the two-node grid uses the standard 128 KB dedupe block size. We had ~600 TB of back-end storage (~3.3 PB front-end) and have ~1.75 PB front-end left to process after about two months of copying. There were 105 storage policies (multi-tenant environment) with retentions ranging from 30 days to 12 years (DB, file, VM, O365 apps), with anything beyond 30 days being extended retention (normally 30 days/1 cycle, then monthly/yearly with extended retention). We do not seem able to maintain any reasonably high copy rates. Having looked at other conversations here, we've tried…
Disk Pruning Not Running on DDB
Hi All. I have a DDB on a Linux media agent running in Azure. The "CommServe Job Records to be Deleted" count is very high and reports as Critical in the Command Center health report. I have confirmed that the storage policies associated with this DDB are enabled for data aging, and physical pruning is enabled on the DDB. When running data aging against this specific DDB/copy, there is no entry in the MediaManagerPrune log on the CommServe for this DDB ID; all the other DDBs are listed. There is also no SIDBPhysicalDeletes log file on the MA. I have checked the jobs, and no jobs are retained past the retention period. Any idea what would cause the records to remain on this DDB? Let me know if you require any additional information that could assist. Thank you. Ignes
Regarding Dedup path for cloud Storage pool creation
Hi Team, I have a HyperScale 6-node cluster on-prem for the primary copy. For the secondary copy I need to move the data to cloud (archive storage). My doubt is: when creating the cloud storage pool, do I select the existing (on-prem) dedup path (/ws/ddb/P_1/Copy/_21/Files/31), or do I need to create a dedicated dedup path on the on-prem MA? If I need option 2, what is the recommended dedup partition value, and the reason behind it? Please also share your best practices for hybrid data protection, if any. Thanks, Manikandan
Stream allocation for Auxcopy
Could someone explain the process flow for allocating readers for an aux copy job? Where exactly does stream allocation take place in CVJobReplicatorODS? I see that the number of readers is a function of the number of CPUs and RAM: by default, each CPU in a proxy can support 10 streams, and each stream requires 100 MB of memory (e.g., for VSA). For example, my VSA backups were backed up with x readers and y writers to HPE StoreOnce Catalyst. How does the replication agent (aux copy) decide and allocate readers for copying a list of N jobs to tape? Imagine there is a primary copy → StoreOnce Catalyst, and a secondary copy → tape (combine streams to 2 tape drives with a multiplexing factor of 25). The number of readers varies from 12 to 38 in my environment. It appears that the number of readers used for the subclients plays a role in the number of readers assigned to the aux copy. My goal is to increase the readers for aux copy jobs to improve performance. My aux copy with 38 readers…
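The per-CPU and per-stream figures quoted above can be turned into a quick ceiling calculation. This is only a sketch of that sizing rule as stated in the post (10 streams per CPU, 100 MB RAM per stream); the actual allocator in CVJobReplicatorODS also weighs source-copy stream counts and drive availability, so treat this as an upper bound, not the assigned number:

```python
# Sketch of the sizing rule quoted above: each proxy CPU supports
# ~10 streams and each stream needs ~100 MB of RAM; the achievable
# stream count is capped by whichever resource runs out first.
def max_streams(cpus: int, ram_gb: int,
                per_cpu: int = 10, mb_per_stream: int = 100) -> int:
    by_cpu = cpus * per_cpu               # CPU-bound ceiling
    by_ram = ram_gb * 1024 // mb_per_stream  # memory-bound ceiling
    return min(by_cpu, by_ram)

print(max_streams(cpus=8, ram_gb=16))   # CPU-bound:    min(80, 163) = 80
print(max_streams(cpus=16, ram_gb=8))   # memory-bound: min(160, 81) = 81
```

Note that on the tape side the 2 drives with a multiplexing factor of 25 cap the usable write streams at 50 regardless of how many readers the proxy could support.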
Hello Team, Just thought to discuss with everyone: we have a lot of media marked "Deprecated" and moved to the Retire Pool that show very low usage. When I look at the information in Media Properties, it shows the message below, and when I check the Side information, it shows very low usage info. The default Commvault Media Hardware Maintenance settings show as below. My question is: is it advisable to mark those media as good and re-use them in the future? Can someone from MM clarify? @Christian Kubik