Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
Hello,

We have run out of space in one of our two libraries, and to expand it we have to replace the current hard disks with higher-capacity ones, as we have no option to add additional disks or modules. The affected library contains the DR copy, so we have changed the path in the storage policy to copy to another library, which will let us free up the affected library, delete it, and configure it again with the new disks.

When we try to do this we get an error, because the DR copy is WORM-type and has a long retention; until that retention expires we cannot delete it, even though we have another copy in the other library.

How can we proceed?

Best regards, and thanks in advance.
Someone on my team wanted to try adding a new tape library to the environment using Command Center. I had never done this in Command Center before, so I watched the user go through the steps. We found that it created the library and the storage pools, but we could not find any way to create barcode patterns or to create another scratch pool from Command Center. We are an MSP, and this is an essential step in being able to use the library.

We then tried to create the barcode patterns and other scratch pools from the CommCell Console. We created the entities but then found we could not associate them with the pool/plan. It appears that the option to change the scratch pool is greyed out in this case.

Is it expected that you cannot edit the scratch pool when the library/storage policy/plan is created from Command Center? Or are we missing something here?

Furthermore, I was expecting a much more user-friendly approach to adding tape to the environment. For example, the user had to s
Hello

At the request of the client company, I proceeded with the recovery of an Oracle DB backed up to PTL media, as follows:

1. Backed up the original DB to PTL media (SAN).
2. Restored the DB backed up to PTL to the recovery server (network, using the backup server MA).

Different drives were used for backup and recovery. No issues occurred during the backup.

The following message is output with a failed-unmount error during the recovery process:

The path is being accessed by another application.
Advice: please make sure that no other device explorer application like SAN explorer is running on the machine.

The PTL devices are used by Commvault only. I would appreciate it if you could advise me on what to check to resolve the issue.
CommCell v11. A backed-up server folder needs to be restored from multiple backups, which are on multiple tapes spanning one year (38 full backups). We need all backups from the full year. Looking for a way to complete this short of doing 38 individual restores. Suggestions? Thank you.
Hello, a customer asked whether it would be possible to make all their primary copies (for disk libraries) WORM-protected, and what the implications would be.

Until now our standard has been n days / 1 cycle retention on primary copies and n days / 0 cycles retention on a secondary copy. We basically use the 1 cycle as a safety net: if for whatever reason the backup of a client fails for a long time, there is always one backup available without setting manual retention on those jobs.

Now we are having an internal discussion about how retention works with WORM, specifically whether the cycle retention is also relevant for manual deletion of the jobs/clients that hold those jobs.

For example: a client uses a WORM storage policy with 14 days / 1 cycle retention. Data aging will not age out and delete the jobs automatically until both conditions are met. But is it possible to manually delete the jobs on day 15? I would say it is not possible, because they are still retained by the cycle. If that is the case
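A simplified model of the aging rule under discussion may help frame the question. The helper below is a hypothetical illustration, not Commvault's actual implementation: it encodes only the idea that a job becomes prunable when both the day-based and the cycle-based conditions are satisfied, and that until then it is retained (and on WORM storage therefore also locked against manual deletion).

```python
from datetime import date, timedelta

def is_retained(job_date: date, newer_full_cycles: int, today: date,
                retention_days: int = 14, retention_cycles: int = 1) -> bool:
    """Simplified retention model: a job stays retained until BOTH the
    days condition AND the cycles condition have been met."""
    days_met = (today - job_date) > timedelta(days=retention_days)
    cycles_met = newer_full_cycles >= retention_cycles
    return not (days_met and cycles_met)

# Day 15, but no newer completed full cycle yet: still retained,
# so under WORM it could not be deleted manually either.
print(is_retained(date(2022, 11, 1), 0, date(2022, 11, 16)))  # True
# Day 15 with one newer completed cycle: both conditions met.
print(is_retained(date(2022, 11, 1), 1, date(2022, 11, 16)))  # False
```

In this model the answer to the day-15 question would be no: without a newer completed cycle the job is still inside the retained cycle, regardless of the elapsed days.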
Hello all, is there a way to move saved data from one library to another?

We have two full backups for which we have set retention until next year. The rest of the backups continue with the normal retention of 30 days and 4 cycles. I would like to move the backups that need to be retained until next year to our object storage. I have already prepared the connection to the object storage; I just need to know how to move the data.

Kind regards
Thomas
The signature does not match. Message: The required information to complete authentication was not provided or was incorrect.
Hello. I’m trying to configure Oracle Cloud Infrastructure Object Storage, but it is showing an error. I have already entered all the information required for the configuration: Service Host, Tenancy OCID, User OCID, Key Fingerprint, PEM Key Filename, and Bucket. What do I have to do to solve this problem?
Hello,

I am trying to fully understand restore point retention to achieve my goals. I also have Incident 221117-488 currently open about this.

Here is what I am trying to achieve: a low-priority backup plan that runs on a daily schedule and consistently retains as close to 3 restore points as possible.

Here are the settings I currently have for my base and derived plans.

Base plan: WIN_SYS_STD_BASE_LOW
- SLA: 1 week, inherited from CommCell
- Backup destinations:
  - Primary: 3 days retention period
  - Secondary: 3 days retention period
- Database options: Log backup RPO: 4 hour(s)
- Run full backup every: 1 week
- Storage pool: Override not allowed
- RPO: Override required
- Folders to back up: Override optional

Derived plan (defines scheduling only): WIN_SYS_STD_BASE_LOW_10PM
- Defined in Java GUI: Run synthetic full every 3 days
- Backup frequency: Run incremental every day at 10:00 PM
- Backup destinations (inherited from base plan):
  - Primary: 3 days retention period
  - Secondary: 3 days retention period
- Database options: Log backup RPO
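As a rough sanity check on the "close to 3 restore points" goal, here is a back-of-the-envelope estimate. It is a simplified model only: it ignores cycle retention, aging lag, and synthetic full scheduling, and just assumes points accrue at the backup frequency and age out after the retention period.

```python
def approx_restore_points(retention_days: int, backups_per_day: int = 1) -> int:
    """Rough steady-state estimate of restore points held by a schedule:
    one point per backup, aged out after retention_days. Ignores cycle
    retention and the timing of data-aging runs."""
    return retention_days * backups_per_day

# Daily backups with a 3-day retention period: roughly 3 points
# exist at any moment (a fourth may linger until aging runs).
print(approx_restore_points(3))  # 3
```

So a 3-day retention on a daily schedule is at least in the right arithmetic neighbourhood; whether cycle retention then holds extra jobs beyond that is the part worth confirming with support.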
We have two Synology 24 TB storage libraries, one for the primary copy and the other, at another site, for the aux copy.

The primary has 15.34 TB free space, 8.59 TB size on disk, and 30.5 TB total application size. The aux copy has 3.62 TB free space, 20.33 TB size on disk, and 30.33 TB total application size.

Shouldn’t the aux copy be an exact replica of the primary? Why would it be larger? Thanks for any suggestions.
Larry
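One common explanation is deduplication efficiency rather than missing or extra data: both copies hold almost the same application (logical) size, but the physical footprint depends on each copy's dedup ratio. A quick back-of-the-envelope check using the figures quoted above:

```python
def dedup_ratio(application_size_tb: float, size_on_disk_tb: float) -> float:
    """Deduplication ratio = logical (application) size / physical size."""
    return application_size_tb / size_on_disk_tb

primary = dedup_ratio(30.5, 8.59)     # primary copy
auxcopy = dedup_ratio(30.33, 20.33)   # secondary (aux) copy
print(round(primary, 2), round(auxcopy, 2))  # 3.55 1.49
```

A secondary copy typically uses its own deduplication database, so a noticeably worse ratio (here roughly 3.55x vs 1.49x reduction) can be expected rather than a sign of corruption; sealed dedup stores or dedup being disabled on the copy would widen the gap further. These causes are worth verifying in the copy's properties.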
Hello,

I’ve set up a new MediaAgent, v11 SP24.34, and installed the CV MA software. I’ve also created my MagLib, a storage policy, and my primary copy. What I want to do now is create a secondary copy. I’ve done that, but when I click OK I get an error: “Internal Error. Incorrect parameter passed to the SIDB engine”. When I click OK on that error, I get another: “Invalid library”. My secondary copy does not get created. Has anyone experienced this before?

Regards
Fergus
Hello all, I’m trying to configure my OCI in Commvault to test the tool (I’m using a trial licence), but I’m running into some errors, as shown below. What certificate is this? How can I install it, and where?

Then I used the CloudTestTool and it shows this. Look at the log file:

4228 2274 10/12 12:54:02 ### GetTenancy() failed. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### Failed to get the namespace. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### CheckServerStatus() - Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### EvMMConfigMgr::onMsgConfigStorageLibrary() - CVMACMediaConfig::AutoConfigLibrary returns error creating Library [Failed to verify the device from MediaAgent [dphwinbkpprd] with the error [Failed to check cloud serv
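"The required information to complete authentication was not provided or was incorrect" usually means one of the credential fields (Tenancy OCID, User OCID, fingerprint, or PEM key) does not match what is registered in OCI. One sanity check you can do outside Commvault is to recompute the API key fingerprint yourself: OCI's fingerprint is the colon-separated MD5 digest of the public key in DER form (the same value `openssl rsa -pubout -outform DER -in key.pem | openssl md5 -c` prints). A small sketch:

```python
import hashlib

def oci_api_key_fingerprint(public_key_der: bytes) -> str:
    """Colon-separated MD5 of the DER-encoded public key, which is
    the format OCI displays as the API key fingerprint."""
    digest = hashlib.md5(public_key_der).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Feed in the DER bytes of your public key and compare the result
# against the fingerprint configured in Commvault and the one shown
# under the user's API keys in the OCI console.
```

If the three values don't all match, regenerate and re-upload the key pair; that resolves this class of error more often than certificate issues do.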
Hi, is there an option to import media from the Catalogic app? I have a customer that migrated to Commvault from Catalogic, and they want to know whether they can import the Catalogic tapes into a Commvault library. They have tapes holding the last backups taken with Catalogic. I think they will have to maintain the old backup system, but I’m not 100% sure.
Hello CV community!

I see that from 11.24 you can add snapshot copies to server plans:
https://documentation.commvault.com/v11/essential/139040_new_features_for_snapshot_management_in_1124.html

I’m not sure whether this snap copy is supported only with specific types of storage. Does anyone actually use it? Thanks in advance for your feedback,

Nikos
Hi,

I'm setting up a new Linux (Red Hat) MediaAgent right now and have a "which would be better" question; maybe someone would like to share their experience. :)

On this new MediaAgent I plan to create a new disk library. The MediaAgent will have resources available via SAN from an array (several volumes of 8 TB each). Is it better to use LVM on these volumes (create VGs, create LVs, and finally create a filesystem, for example ext4), or to create GPT (parted) partitions and put ext4 directly on them, without VGs and LVs?

I am very curious about your opinions on which would be the better solution.

Greetings
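For comparison, here is roughly what the LVM route looks like. This is a sketch only: device names like /dev/sdb and the mount point are placeholders, and the main argument for the extra LVM layer is that a volume group lets you grow the filesystem online later by adding SAN volumes.

```shell
# Sketch: pool several 8 TB SAN volumes into one LVM volume group
# and carve out a single ext4 filesystem for the disk library.
# Device names below are placeholders; adjust for your environment.
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate vg_disklib /dev/sdb /dev/sdc /dev/sdd
lvcreate -l 100%FREE -n lv_disklib vg_disklib
mkfs.ext4 /dev/vg_disklib/lv_disklib
mkdir -p /mnt/disklib
mount /dev/vg_disklib/lv_disklib /mnt/disklib

# Growing later: add a volume, extend the VG and LV, resize online.
# pvcreate /dev/sde
# vgextend vg_disklib /dev/sde
# lvextend -l +100%FREE /dev/vg_disklib/lv_disklib
# resize2fs /dev/vg_disklib/lv_disklib   # ext4 supports online grow
```

The plain-partition route (parted + mkfs.ext4 per volume, one mount path each) is simpler, and a Commvault disk library can spread writes across multiple mount paths itself, so both approaches are workable; LVM mainly buys online growth and fewer mount paths to manage.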
We have an old physical MediaAgent doing aux copies AND writing tapes every month. When tape jobs start on this MediaAgent, the throughput of the aux copies suffers greatly: the CPU goes to 100% and stays there until the tapes finish (which might take 10+ days). We wanted to see if we could easily offload the tape-writing jobs, as we have a newer (but not very powerful) physical system we could repurpose. This would entail installing a new physical MediaAgent in the rack, adding SCSI HBAs to it, and connecting them to the tape drives.

Are there any gotchas when setting up a new MediaAgent for tape jobs (special physical hardware considerations, software/config gotchas, or guides I should be looking at)? I haven't set up a MediaAgent before (especially for tapes), and this is a system that was maintained by a person who is no longer working with us.

Also: we're using LTO-7 M8 tapes with LTO-8 drives.
Hi, I came into work and noticed dozens of jobs in a waiting state because the mount path does not have enough free space. I am aware that we need to add more storage, and we are going to, but in the interim I tried lowering the reserve space from 6 TB to 2 TB so that the jobs can finish and I can see what can be cleared. It's not letting me change it; it will only go down to 5960 GB. I currently have a ticket open with CV (221017-401). Is there a way to fix this?