Question

Successful full FS backup did not include all the files


Badge +4

We are running a full Ubuntu FS backup for a client. The source data is ~18 TB with more than 100 crore (1+ billion) files. The backup completes successfully, but the backed-up data size is only ~9 TB. The backup finished without any errors or warnings, and there are no exclusions in the subclient content.
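
For reference, this is roughly how we measured the source data on the client (the path /data below is only an example, not the real mount point):

# total size of the source mount point (example path)
du -sh /data

# total number of files under the source mount point
find /data -type f | wc -l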

When we try to restore, most of the directories show as empty, even though they are not actually empty on the client server.

Can someone suggest why this is happening?


13 replies

Userlevel 6
Badge +15

Hi @Arun Kumar Dadi 

Are you sure that there are no global or local file filters applied in the subclient configuration?

I have to ask: when you try to restore, do you restore from the FS client/subclient, or do you try to restore from the job history?

 

Badge +4

There are no global or local filters. We have not even restored the data yet, because the total backed-up data size is only 50% of the actual data on the client server. That is where we stopped; we are not able to start the restore because the data size of a few sub-directories shows as zero in the console browse and restore view.

Userlevel 6
Badge +17

Is the missing data on an NFS mount path? If so, it may be excluded by default.

https://documentation.commvault.com/2022e/expert/24174_frequently_asked_questions_linux_file_system.html
https://documentation.commvault.com/additionalsetting/details?name=ignoreFStype
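
If in doubt, one quick way to confirm the filesystem type from the client side (using /data as an example path):

# show the filesystem type backing the path
df -T /data

# or query it directly
stat -f -c %T /data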

Thanks,
Scott

Badge +4

Hi Scott, this is not an NFS mount. It is a storage mount on an Ubuntu server with crores of files, ~18 TB in size.

Userlevel 6
Badge +15

Hi, 

There are still several possibilities.

If you check the full path + name length of the ‘missing’ files, how long is it?

What charset is used for the names?

I have never come across any limitation on the number of files that a Unix filesystem client can back up.

Most likely the limit has always been the storage itself, the access rights of the files to back up compared to the rights used by the file system agent, or the depth of the paths/file names.
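
For example, you could check the longest full paths and look for unusual characters in the file names like this (/data is just an example path):

# print the 5 longest full paths under the source
find /data -type f | awk '{ print length, $0 }' | sort -rn | head -n 5

# list paths containing characters outside printable ASCII (possible charset issues)
find /data -type f | LC_ALL=C grep '[^ -~]' | head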

Have you tried logging a case with Commvault support and reviewing the initial backup job logs?

Badge +4

The directory structure is parent/subdirectories/files

It is only a 3-layer file structure, but in the 2nd layer we have 10,000 directories, and each directory holds individual files of ~1.8 GB.

Userlevel 6
Badge +15

OK, and how many data readers / streams are configured in this subclient?

Open the subclient, then Advanced subclient properties, and check the Performance tab. You can post a screenshot of the result here.

And by the way, how many CPUs does this server have? Is it physical or virtual?

Badge +4

This is a physical server, and the maximum number of streams (4) is allowed.

Userlevel 1
Badge +5

Hi @Arun Kumar Dadi 
Are these backups taken using the default subclient, and are there any other subclients in the backup set configured with the same content? The default subclient does not back up content that is specified in the other subclients within the same backup set.

Userlevel 4
Badge +10

Hi @Arun Kumar Dadi 

 

1. What is the mount point type for which data is missing?

 

2. Do you have multiple mounts with the same name but different case? For instance, /usr/folder1 and /usr/FOLDER1? (See the example commands after point 3.)

 

3. Could you restore the troubleshooting folders for the job? Take a sample file that is not visible during browse. If the file name is “file1.txt”, grep for it using “grep file1.txt CollectTot*”.

This will help us identify whether the file even qualified for backup or not.
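
For example (the restore location /tmp/collect used below is only an example):

# point 2: list mount points that collide when case is ignored
mount | awk '{ print $3 }' | sort -f | uniq -di

# point 3: after restoring the job's troubleshooting folder, search the collect files
cd /tmp/collect          # example restore location
grep "file1.txt" CollectTot*
# no match would suggest the file never qualified for backup during the scan phase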

 

Thanks,

Sparsh

Userlevel 7
Badge +23

Are you backing up from a snapshot - i.e. an exposed NFS mount from a snapshot copy of production data?

You should also try explicitly targeting the directory that is empty and see if it works.

Userlevel 3
Badge +10

I think this scenario is weird enough to warrant a ticket. Without details on your specific configuration, there is really nothing we can do but guess.

Badge +4

No, these are flat files, around 1,500 million in count and 18 TB in size.

We configured IntelliSnap, but it is still taking 3 days to complete, with 2 TB IC and 3 TB JR.
