Hello Jmiamaral,
It might be best to contact Amazon and ask them how much it would cost to egress all of the data you have stored, rather than actually egressing the data and incurring that cost.
DDB verifications against deduplicated data backed by Archive/Glacier storage are not recommended, specifically because of the cost incurred when retrieving the data from the Archive/Glacier bucket.
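If you just want a ballpark figure before talking to Amazon, a rough sketch along these lines (assuming boto3; the bucket name and per-GB rates are placeholders, not real pricing) can total the stored bytes and multiply by your region's published rates:

```python
# Rough sketch, not an official tool: sum object sizes in the
# Glacier-backed bucket, then apply per-GB rates. The rates below are
# placeholders -- confirm current pricing with AWS before relying on
# any number this produces.
import boto3

BUCKET = "my-commvault-bucket"   # placeholder bucket name
RETRIEVAL_PER_GB = 0.01          # placeholder USD/GB, check AWS pricing
EGRESS_PER_GB = 0.09             # placeholder USD/GB, check AWS pricing

s3 = boto3.client("s3")
total_bytes = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        total_bytes += obj["Size"]

total_gb = total_bytes / 1024**3
print(f"Stored: {total_gb:,.1f} GB")
print(f"Estimated retrieval: ${total_gb * RETRIEVAL_PER_GB:,.2f}")
print(f"Estimated egress:    ${total_gb * EGRESS_PER_GB:,.2f}")
```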
Regards,
Collin
@Collin Harper
Sure, I can do that, but what about my main question?
Kind regards,
Jmiamaral
@jmiamaral
There is no way to run the workflow “for the whole DDB”; it would have to be done per individual job associated with the DDB. There used to be a manual way to recall the data (we may have removed the steps from our documentation and deprecated the procedure), but it would still require you to browse each job to generate a chunk listing and then run CloudTestTool to rehydrate the data.
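For a sense of what the rehydration step amounts to under the hood: each chunk object has to be restored out of Glacier before it is readable again. A rough illustration using the plain S3 API (this is not CloudTestTool; chunks.txt is a hypothetical file holding one S3 key per line from a job's chunk listing):

```python
# Illustration only -- not CloudTestTool, just what a per-object
# Glacier restore request looks like. "chunks.txt" is a hypothetical
# file with one S3 key per line from a job's chunk listing.
import boto3

BUCKET = "my-commvault-bucket"  # placeholder bucket name

s3 = boto3.client("s3")
with open("chunks.txt") as f:
    for line in f:
        key = line.strip()
        if not key:
            continue
        # Request a temporary (1-day) restore out of Glacier.
        s3.restore_object(
            Bucket=BUCKET,
            Key=key,
            RestoreRequest={
                "Days": 1,
                "GlacierJobParameters": {"Tier": "Bulk"},
            },
        )
        print(f"Restore requested for {key}")
```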
Additionally, if the data is written direct to Glacier then, depending on the amount of data, recalling all of it may not be feasible. This is because Direct to Glacier does not honor recall retention: data is dehydrated back to Glacier after 24 hours. This means that if the recall takes longer than 24 hours, the first data recalled would already be dehydrated while the recall job is still running.
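As a quick sanity check on whether a full recall can even fit inside that 24-hour window, you can compare the estimated recall time against it (the data size and throughput below are assumptions you would need to replace with your own measurements):

```python
# Back-of-the-envelope check: can the recall finish within the 24-hour
# window before the first restored data is dehydrated back to Glacier?
# Both figures are assumptions -- substitute your own.
TOTAL_TB = 50                # placeholder: amount of data to recall
THROUGHPUT_GB_PER_HR = 500   # assumed effective recall throughput

recall_hours = TOTAL_TB * 1024 / THROUGHPUT_GB_PER_HR
print(f"Estimated recall time: {recall_hours:.1f} hours")
if recall_hours > 24:
    print("Recall exceeds the 24-hour window; early chunks will "
          "dehydrate before the job finishes.")
```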
Regards,
Collin
@jmiamaral
A custom workflow may be possible, but that would require Professional Services to design it for you.
Regards,
Collin
@Collin Harper
Ok thank you for your insight :)
Kind regards
Jmiamaral