Question

MP deletion

  • December 5, 2024
  • 3 replies
  • 35 views


Hi Guys,

I have a disk library with 8 mount paths (MPs) inside, and one of them is planned for decommission because I don't need it / must destroy the hardware under it (by "planned for decommission" I mean I selected, at the MP level, "disable MP for new data" + "prevent data block references for new backups").

My 1st question is: why, after so many years on the market, does CV still have no way to decommission an MP? I mean a simple click on an MP in the GUI and then a "decommission" option, which would move/spill all data (jobs) and dedup data from that MP to the other MPs in the same library. And before anyone suggests it: I don't want to move the MP somewhere else, because I want to get rid of it and have a clean environment, instead of having to remember that the MP called X is still lying around but is not really needed :(

 

But since what I want does not exist, I must take the longer way to achieve it and wait until the data ages off... and that only works if the user does not have INFINITE retention, because then CV, like a child, "can not do anything".

But let's imagine we have a 1 month retention, and after one month the data has disappeared = no jobs on the MP.

But the MP still cannot be deleted, as we see: "Note: Mount path may contain baseline data still in use by a deduplication store. Please view the Deduplication DBs tab on Mount path properties." In other words, in my environment there are other deduplicated jobs whose referenced blocks lie on my MP.

And now what, CV... if the data with the longest retention that uses those blocks is kept for 3 years, do I need to wait 3 years?? So stupid.

Of course we can fight this by setting the DDB to not deduplicate against objects older than X days (btw, the minimum is 365 days), but again that is such a roundabout road to the target :(

Here comes my 2nd question: why doesn't CV support, let's call it, "selective" sealing at the MP level? And again, I cannot seal the full DDB, as that would first create PBs/TBs of redundant data, and it also would not make the old blocks from the sealed DDB stop being used any faster.

 

And last, let's say the cherry on top of the cake: dear CV, please tell me how to check, after the data/jobs have disappeared from an MP, which jobs still reference blocks on that MP. Yes, an SQL query will be enough. If you ask why I want to know that: maybe then I can "manually decide" that deleting (read: losing) particular backup jobs is better than waiting, for example, years before data stops being referenced on that MP / the DDB behind it... On the margin: I already had a case where I proved to you that sometimes you forget about data on disk, read: I had leftover V_ folders which occupied space but were not used by CV anymore...

 

 

 

3 replies

Damian Andre
Vaulter
  • Vaulter
  • 1306 replies
  • December 13, 2024

It’s not possible to decommission mount paths to other mount paths because of the deduplication architecture. The benefit Commvault has over so many other solutions is that the deduplication database is not required for restores (which is hugely beneficial). To accomplish that, static references between blocks have to exist, so that no lookup, which would harm performance, is required when stringing together the data for a job. If you move a block from one MP to another, the static reference is lost and the data won’t be found. Decommissioning mount paths is extremely rare, so it is a good trade-off.
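To illustrate the trade-off with a deliberately simplified, hypothetical data model (not Commvault's actual on-disk format): with static references, the index written at backup time records the physical location of every block, so a restore never needs a DDB lookup, but it also means a block can never change its physical address.

```python
# Hypothetical sketch: a job index stores (mount_path, offset) pairs
# recorded at backup time, so restores read blocks directly with no
# deduplication-database lookup. Names and layout are illustrative only.

# Simulated physical storage: blocks addressed by (mount path, offset).
disk = {
    ("mp1", 0): b"hello ",
    ("mp3", 4096): b"world",
}

# The job's chunk index, written once at backup time (the "static" part).
job_index = [("mp1", 0), ("mp3", 4096)]

def restore(index, storage):
    # Reassemble the job by reading each block at its recorded address.
    return b"".join(storage[addr] for addr in index)

print(restore(job_index, disk))  # b'hello world'

# If a block were physically moved to another mount path, the recorded
# address would dangle and the restore would fail, which is why blocks
# cannot simply be migrated between mount paths after the fact.
```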

As you mentioned, moving the mount path is the best option in this situation; otherwise you will have to wait for all the data to age in order to remove it.

‘Selective sealing’ as you mentioned does exist. Support has a script that will disable referencing blocks from a specific mount path for new data.

The CommServe database does not keep track of data blocks to jobs; that is the job of the deduplication database, which is why a query does not exist for this. Again, this is all by design to support performant deduplication. The deduplication database holds billions of records, so it is not appropriate for the CommServe to hold that information. If you feel orphaned data exists, there is a tool to scan for and remove data that may have failed data aging and is orphaned. It’s part of the disk reclamation feature; you can read more about it in this thread:
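As a rough illustration of what such a scan does conceptually (a hypothetical sketch, not the disk reclamation tool itself; the V_ folder naming comes from the original post): compare the folders present on a mount path against the set the catalog still knows about, and report the difference.

```python
# Hypothetical orphan sweep: list V_* folders on a mount path that no
# longer appear in the backup catalog. Illustrative only; the real disk
# reclamation feature works from Commvault's internal metadata.
import os

def find_orphans(mount_path, cataloged_folders):
    """Return V_* folders on disk that the catalog no longer references."""
    on_disk = {name for name in os.listdir(mount_path)
               if name.startswith("V_")}
    return sorted(on_disk - set(cataloged_folders))
```

Anything such a sweep returns would still need verification before deletion, precisely because (as discussed above) the catalog is not the only holder of block references.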

 


  • Author
  • Byte
  • 14 replies
  • December 16, 2024

To start, thank you for all your answers, but...

It’s not possible to decommission mount paths to other mount paths because of the deduplication architecture. The benefit Commvault has over so many other solutions is that the deduplication database is not required for restores (which is hugely beneficial). To accomplish that, static references between blocks have to exist, so that no lookup, which would harm performance, is required when stringing together the data for a job. If you move a block from one MP to another, the static reference is lost and the data won’t be found. Decommissioning mount paths is extremely rare, so it is a good trade-off.

 

This is what I don't understand. You say "static references between blocks must exist"... so please explain how those references work, and I will write my own program/query/script etc. to look up what I need.

 

Second thing... I understand that if I move data just by copy-paste, the references will be lost, but that is not what I propose. I am asking CV development to create such a function, so it would not be a dumb copy-paste but would still keep the references intact...

 

Third thing... rare... not rare... I bet nuts against dollars that you would be surprised if you ran a simple survey of how many people would vote for this. If you don't believe me, just use Google and check when the first question about this was asked: it was not 1 or 2 years ago but much longer. You must understand that the fact that CV doesn't see the need doesn't mean that the customers using it in their daily routine don't need it too... I can give an example from another area: how many years have we been waiting for CV console windows to have a "maximize button"? Nearly 20 years ago, when I was programming in plain VB, I made that for my own programs, but here, in almost 2025, we still have CV windows which take 1/4 of the screen, where the data presented takes 1/4 of that window and is hardly readable... We have AI on the scene and you still don't have resizable windows ;)

 

As you mentioned, moving the mount path is the best option in this situation; otherwise you will have to wait for all the data to age in order to remove it.

 

And here we come full circle... because you say it cannot be done, but no one asks why the customer needs it... 1st, some people have infinite retention and must remove an MP because the hardware under it is being decommissioned, and keeping it only because CV can provide an override script for static references seems a little stupid to me... in other words, to make things easier for you, I must keep a mess in my environment... 2nd, even worse is the fact that I don't trust CV's data cleanup: I already had a case where your software forgot about data on disk, and your development wrote a script which cleaned it up for me... read: it deleted data that should have been deleted many years earlier.

‘Selective sealing’ as you mentioned does exist. Support has a script that will disable referencing blocks from a specific mount path for new data.

 

Of course it exists, but it is not developed as it should be... I can close the MP for new data + block it from being referenced by new data + set, at the DDB level, not to deduplicate against data older than X days (here you unfortunately made the minimum 365 days), and then, with luck, after a year maybe I achieve what I need... if the DDB logic works... but it hasn't, as I still have data on the MP... no more job data, as CV states clearly in the MP properties, but CV still sees blocks used/referenced by the DDB, and that is what I need to understand: who is still referencing that data, because to my knowledge, after such a long time, it should be no one.

 

The CommServe database does not keep track of data blocks to jobs; that is the job of the deduplication database, which is why a query does not exist for this. Again, this is all by design to support performant deduplication. The deduplication database holds billions of records, so it is not appropriate for the CommServe to hold that information. If you feel orphaned data exists, there is a tool to scan for and remove data that may have failed data aging and is orphaned. It’s part of the disk reclamation feature; you can read more about it in this thread:

 

And here we are at the crux of my question... what if the built-in mechanism for orphaned files doesn't work... I mean, when I start it, it still says the data is needed, but no one in CV can say needed by what... even though the date & time on the files show that no one should still be using them.

And I believe you that the "database does not keep track of data blocks to jobs", but something surely does, because if not, how does it know when data is no longer needed and can be deleted? In other words, how does CV know that an MP which has no more jobs still holds DDB data that must be kept because it is still referenced? Yes, I am asking about that counter which says there are no more references and the DDB data can also be deleted... Just put yourself in my perspective for a second: I have X TB of data on an MP which, to the best of my knowledge, should no longer be utilized, but it is... and the tip I get from you is: just forget about it... move it somewhere (as CV gives us with "MP move") and someday it will be deleted, but only CV knows when... and if you keep being persistent, then CV development will write a one-time script which really does what should have been done years ago, but still won't say why the deletion did not happen automatically and had to be triggered by a script on customer request.


Damian Andre
Vaulter
  • Vaulter
  • 1306 replies
  • December 19, 2024

 

To start, thank you for all your answers, but...

 

Just trying to help where I can with the knowledge I have 😊

 

This is what I don't understand. You say "static references between blocks must exist"... so please explain how those references work, and I will write my own program/query/script etc. to look up what I need.

 

Unfortunately it is not possible to share this information. If you like, you could try cranking up the debug level for the auxcopy process and running a verification of your entire disk library. That could print out each chunk and each of the billions of block offsets if the debug level is high enough, but I would not recommend it.

 

Second thing... I understand that if I move data just by copy-paste, the references will be lost, but that is not what I propose. I am asking CV development to create such a function, so it would not be a dumb copy-paste but would still keep the references intact...


Here I would suggest opening a support case and putting in a feature request or CMR for this if you have not already. To me this seems impractical to do, especially while jobs are running and continuously referencing these blocks. This is my own opinion and I don’t speak for the engineering team. I understand your frustration, but this is not a trivial ask. A CMR would be the best way to request this feature and for engineering to explore the possibility.

 

...I can give an example from another area: how many years have we been waiting for CV console windows to have a "maximize button"? Nearly 20 years ago, when I was programming in plain VB, I made that for my own programs, but here, in almost 2025, we still have CV windows which take 1/4 of the screen, where the data presented takes 1/4 of that window and is hardly readable... We have AI on the scene and you still don't have resizable windows ;)

 

You are right: the Java UI has not been improved for several years; instead the company is developing the Command Center (which includes AI). Access to the CommCell console has been blocked for new deployments for a year now. But I understand many customers are comfortable with the CommCell console and currently prefer that UI, which is why it is still available for deployments prior to 11.34.

 

And here we come full circle... because you say it cannot be done, but no one asks why the customer needs it...

 

I can understand why it’s needed, but that does not make it any more practical to do, at least from my perspective. I can’t speak for engineering, so the CMR is probably the best way forward on this.

 

Of course it exists, but it is not developed as it should be... I can close the MP for new data + block it from being referenced by new data + set, at the DDB level, not to deduplicate against data older than X days (here you unfortunately made the minimum 365 days), and then, with luck, after a year maybe I achieve what I need... if the DDB logic works... but it hasn't, as I still have data on the MP... no more job data, as CV states clearly in the MP properties, but CV still sees blocks used/referenced by the DDB, and that is what I need to understand: who is still referencing that data, because to my knowledge, after such a long time, it should be no one.


I think we’re going in circles here. You already mentioned this is not a good solution for you. I was merely indicating that an option does exist to stop referencing blocks against a mount path. You asked for this capability, and I am saying it exists, via a script that support can provide.

 

And here we are at the crux of my question... what if the built-in mechanism for orphaned files doesn't work... I mean, when I start it, it still says the data is needed, but no one in CV can say needed by what... even though the date & time on the files show that no one should still be using them.


If you are certain you have orphaned data, ask for an escalation to engineering in your support case.

 

And I believe you that the "database does not keep track of data blocks to jobs", but something surely does, because if not, how does it know when data is no longer needed and can be deleted? In other words, how does CV know that an MP which has no more jobs still holds DDB data that must be kept because it is still referenced?

 

Ah, this is all part of the secret sauce. There is a big difference between knowing how many secondary references there are to a primary block and knowing where those secondary references are. We don’t need to keep track of where the secondary references are, only a count of how many references there are against a primary. When the count reaches zero, nothing else is referencing that block and it can be deleted. The way those counts are managed depends on whether you are using garbage collection or not, but that is not really relevant here.
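A minimal sketch of that counting model (my own simplified illustration, not Commvault's implementation): each primary block carries only a counter of secondary references, so the system can tell *when* a block becomes deletable without ever recording *which* jobs hold the references.

```python
# Simplified reference counting for deduplicated blocks. Each primary
# block keeps only a count of secondary references; when a job ages off,
# its references are released, and a block whose count reaches zero can
# be physically deleted. Note what is NOT stored: which jobs reference
# which blocks -- which is why "who still references this MP?" cannot
# be answered from the counter alone.

class BlockStore:
    def __init__(self):
        self.refcount = {}  # primary block id -> reference count

    def reference(self, block_id):
        # First write creates the primary; later writes just bump the count.
        self.refcount[block_id] = self.refcount.get(block_id, 0) + 1

    def release(self, block_id):
        # Called as a job ages off. Returns True if the block is now unused.
        self.refcount[block_id] -= 1
        if self.refcount[block_id] == 0:
            del self.refcount[block_id]
            return True   # safe to delete the physical block
        return False      # still referenced by some other job

store = BlockStore()
store.reference("blk-A")       # job 1 writes the block
store.reference("blk-A")       # job 2 deduplicates against it
print(store.release("blk-A"))  # False: job 2 still references it
print(store.release("blk-A"))  # True: count hit zero, block deletable
```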

I’ll wrap up by saying: I totally understand your frustration. I am not trying to be obstructionist or combative, just sharing my experiences and opinions. I wish we had a better solution to your problem, but, and it may not seem like it, it’s a really difficult problem to solve.



