Q&A. Technical expertise. Configuration tips. And more!
Recently active
Hello, I am looking for a solution that can trigger a script outside of Commvault whenever a backup job fails. The script only needs to be handed the failing job ID. The plan is to automate a quick analysis of the failure and notify the right team to fix the issue without manual intervention. Thanks in advance for your suggestions. Regards, Jürgen
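A minimal sketch of the kind of handler such a trigger could call, assuming Commvault is set up to run a command when a job fails and passes the failing job ID as the first command-line argument; the webhook URL and the triage logic below are placeholders for illustration, not a real integration:

#!/usr/bin/env python3
# Hypothetical handler invoked when a backup job fails.
# Assumes the failing job ID arrives as the first command-line argument.
import json
import sys
import urllib.request

TICKET_WEBHOOK = "https://ticketing.example.local/api/incidents"  # placeholder URL

def classify(job_id: str) -> str:
    # Placeholder triage: a real version would pull job details
    # (e.g. via the Commvault REST API) and decide which team owns the failure.
    return "backup-operations"

def main() -> None:
    if len(sys.argv) < 2:
        sys.exit("usage: failed_job_handler.py <job_id>")
    job_id = sys.argv[1]
    payload = json.dumps({"job_id": job_id, "team": classify(job_id)}).encode()
    request = urllib.request.Request(
        TICKET_WEBHOOK, data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=10)

if __name__ == "__main__":
    main()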
Hi all, we are reviewing our backup strategy and investigating a few scenarios for back-end storage. Right now we are considering object storage via the S3 protocol and file storage (JBOD) over the NFS protocol. The data that will be sent there is on-prem file systems, databases, VMs, etc. Total capacity is over 10 PB. We have tested some object storage over S3, but we ran into issues with data reclamation (garbage collection for expired objects takes far too long, and waiting for capacity to be reclaimed can take a month or more). Can you share your experience with back-end storage: what challenges you faced, how you solved the issues I mentioned, and what advantages you see when comparing the S3 and NFS protocols for backups? All feedback is very much appreciated. Thanks!
Hello, while the CommServe DR backup is stopped, what is the procedure to modify the path used for the DR backup?
One of the benefits of application modernization, containers, and orchestrating your data center with Kubernetes is the ability to move to 'fully programmable' infrastructure. Now, while this makes the developer's day much easier, since they can provision apps and storage whenever and wherever they need them, what impact does that have on backup & recovery? Well, for Metallic (and Commvault) it is simple: Commvault is cloud-native, and we use Kubernetes label selectors to automatically discover applications based on the 'labels' that developers apply to their apps. Let's take a look at a standard application definition:

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config-demo
  namespace: env-prod
  labels:
    app: postgres
data:
  POSTGRES_DB: demopostgresdb
  POSTGRES_USER: demopostgresadmin
  POSTGRES_PASSWORD: demopostgrespwd
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: env-prod
  labels:
    app: postgres
spec:
  ports:
    - port: 5432
      name: postgres
  clusterIP: N
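To show what label-selector discovery looks like at the Kubernetes API level, here is a minimal sketch using the official Kubernetes Python client; the 'env-prod' namespace and 'app=postgres' selector match the definition above, and this is only the underlying mechanism, not Commvault or Metallic code:

# Minimal sketch: list workloads by label selector with the Kubernetes
# Python client (pip install kubernetes). Illustrates the discovery idea
# described above; not Commvault/Metallic code.
from kubernetes import client, config

config.load_kube_config()            # uses your local kubeconfig
core = client.CoreV1Api()

selector = "app=postgres"            # the label the developer applied
pods = core.list_namespaced_pod("env-prod", label_selector=selector)
services = core.list_namespaced_service("env-prod", label_selector=selector)

for pod in pods.items:
    print("pod:", pod.metadata.name)
for svc in services.items:
    print("service:", svc.metadata.name)

Anything the developer tags with app=postgres is picked up automatically, which is exactly why consistent labelling makes discovery-based protection possible.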
We converted several VMs that were backing up as FSA to be backed up via VSA. Question: what settings can I use in Reporting to show me ONLY the clients that were disabled and deconfigured? REF: Job Summary Report
My boss found something he wants as a report, and I don't know if there is a way to provide it specifically. The FLA (File Level Analytics) report shows, for specific backed-up servers, how much data we are backing up per client that hasn't been modified within: less than 3 months, 3-6 months, 6-12 months, 1-2 years, 2-5 years, and more than 5 years. My boss loves this viewpoint at the client level, but wants to know if Commvault can do the same at a higher level, showing how much data we are backing up as a whole that hasn't been modified within those same age buckets. Does anyone know if there is a report like this? My boss wants it per site, but thinking in Commvault terms, is this possible at the library level or the MediaAgent level?
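Not an answer on whether a built-in report exists, but as a stopgap: if the per-client FLA figures can be exported to CSV, rolling them up to a single company-wide (or per-site) view is a simple aggregation. A minimal sketch, assuming a hypothetical export file fla_per_client.csv with one row per client and one column per age bucket (sizes in GB):

# Minimal sketch: roll per-client file-age buckets up to a grand total.
# Assumes a hypothetical CSV export "fla_per_client.csv" with columns:
#   client, lt_3m, m3_6, m6_12, y1_2, y2_5, gt_5y
import csv
from collections import defaultdict

BUCKETS = ["lt_3m", "m3_6", "m6_12", "y1_2", "y2_5", "gt_5y"]
totals = defaultdict(float)

with open("fla_per_client.csv", newline="") as f:
    for row in csv.DictReader(f):
        for bucket in BUCKETS:
            totals[bucket] += float(row[bucket])

for bucket in BUCKETS:
    print(f"{bucket}: {totals[bucket]:.1f} GB")

The same loop could group by a site column instead, to give the per-site breakdown.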
We have a subclient that is a Drobo share, which we are backing up as a file system; effectively it is a Drobo share on a server we are backing up. I see the error Cannot scan [\] but I am able to browse the share. However, there is a failed drive on the Drobo device (RAID 1). My question is: will a failed drive on a Drobo device cause a Cannot scan [\] error?
Hello all, I'm just starting to play with Office 365 configurations in my lab. As I'm going through the documentation and watching some YouTube videos, I've come across a question I haven't seen mentioned: can I have the Index Server and the Exchange proxy server on the same Windows server? I'm with an MSP and we haven't started using Office 365 for our tenants yet. Going through the documentation, it seems we need a separate Exchange proxy for each tenant, but I'm unsure about the Index Server. Do I need an Index Server for each tenant, and can it be on the Exchange proxy server? Basically, I'm looking to cut back on the number of Windows VMs to manage. If they can't be on the same server, can I use the same Index Server for multiple tenants and then have separate Exchange proxies? Any other tips would be welcome. Thanks.
Hi all, is the Offline Mining Tool compatible with Exchange 2019? Unfortunately I cannot use live browse, as the backup is performed to a VTL library. With Exchange 2013 it works fine. Now, when I try to open the restored DB (Exchange 2019), I see the following errors:
Something that has been annoying me for some time now is that we often see running jobs whose job update time lags behind by 15 minutes or more. This affects information such as average speed and also gives an "incorrect" indication to the end user performing a backup or recovery. It is especially noticeable during a restore, where the user has no idea how long the recovery will take. I did some investigating already, but without results. Does anyone have an idea if there is something to tweak?
I was installing a 3-node HS1300 (HS 1.5) as a secondary CS, and the install failed after about 50-70% of the run. It did create the secondary CS, and I am able to log in there, but soon afterwards the setup failed while running some scripts and ended there. A support case is already logged, but I was wondering if I can do some cleanup and retry the setup?