Solved

AWS snowball as temporary S3 endpoint for S3 AUX COPY to catch up delay on Aux copy

  • October 6, 2025
  • 2 replies
  • 39 views

JosserandM
Apprentice

Hello Team,

We are using the AWS GIR S3 public endpoint as the aux copy target for a remote site.

We are having trouble with the SD-WAN, and we have a large backlog of data to catch up over a 100 Mb link… which is not enough.

The idea is to use a local Snowball to catch up the aux copy. Has anyone tried this?


2 replies

  • Explorer
  • October 6, 2025

I did this exact same scenario about 5-6 years ago: I seeded a locally racked Snowball unit and then copied the data up to an AWS S3 bucket. Here are the steps I used, updated for today's newer process.
 

Step-by-Step: NAS to Amazon S3 Cold Storage via Snowball

1. Order the Snowball Device

  • Go to the AWS Snowball console and create a job.
  • Choose Snowball Edge Storage Optimized for large-scale data transfer.
  • Specify your destination S3 bucket (you can later transition objects to Glacier tiers via lifecycle policies).
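If you prefer the CLI over the console, creating the import job looks roughly like this. This is a sketch only — the bucket name, address ID, and role ARN below are placeholders you get from your own account:

```shell
# Sketch only -- all ARNs/IDs are placeholders. An import job ties the
# device to the destination S3 bucket and the IAM role Snowball assumes.
aws snowball create-job \
  --job-type IMPORT \
  --resources 'S3Resources=[{BucketArn=arn:aws:s3:::example-auxcopy-bucket}]' \
  --address-id ADID00000000-0000-0000-0000-000000000000 \
  --role-arn arn:aws:iam::123456789012:role/example-snowball-import-role \
  --shipping-option SECOND_DAY \
  --description "Aux copy seed"
```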

2. Receive and Set Up Snowball

  • AWS ships the encrypted Snowball device to your location.
  • Connect it to your local network (same subnet as your NAS).
  • Use the Snowball client or AWS OpsHub to manage the transfer.
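Once it's racked, the device has to be unlocked with the manifest file and unlock code from the Snowball console before OpsHub or the client can talk to it. A sketch, assuming placeholder values for the endpoint IP, manifest path, and code:

```shell
# Sketch -- endpoint IP, manifest path and unlock code are placeholders
# copied from the job page in the Snowball console.
snowballEdge unlock-device \
  --endpoint https://192.0.2.10 \
  --manifest-file ./JID-example_manifest.bin \
  --unlock-code 01234-abcde-01234-abcde-01234

# Confirm the device reports as unlocked before starting the copy
snowballEdge describe-device \
  --endpoint https://192.0.2.10 \
  --manifest-file ./JID-example_manifest.bin \
  --unlock-code 01234-abcde-01234-abcde-01234
```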

3. Copy Data from NAS to Snowball

  • Mount your NAS shares (e.g., via SMB or NFS).
  • Use the Snowball client or standard copy tools (e.g., robocopy, rsync, or PowerShell) to transfer files.
  • Snowball encrypts data on-device using AWS KMS-managed keys.

4. Finalize and Ship Back

  • Once the copy is complete, use the client to mark the job as complete.
  • AWS provides prepaid shipping labels — send the device back.

5. AWS Imports Your Data

  • At the AWS data center, your data is decrypted and copied into the specified S3 bucket.
  • You’ll get notifications once the import is complete.

6. Transition to Cold Storage

  • Use S3 Lifecycle Policies to automatically move data to:
    • S3 Glacier (low-cost archival)
    • S3 Glacier Deep Archive (lowest-cost, long-term storage)
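A lifecycle rule for that last step can be set from the CLI. This is a config sketch — the bucket name is a placeholder and the 30/180-day thresholds are just example values:

```shell
# Sketch: transition everything in the bucket to Glacier after 30 days
# and to Deep Archive after 180. Bucket name and day counts are examples.
aws s3api put-bucket-lifecycle-configuration \
  --bucket example-auxcopy-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "tier-to-cold",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [
        {"Days": 30,  "StorageClass": "GLACIER"},
        {"Days": 180, "StorageClass": "DEEP_ARCHIVE"}
      ]
    }]
  }'
```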

🔐 Security & Integrity

  • Data is encrypted end-to-end.
  • Snowball has tamper detection and logs all activity.
  • You can verify file integrity using checksums or hash comparisons.
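The checksum verification can be sketched like this: build a sorted SHA-256 manifest of the source tree and of the copied tree, then diff them. Temp dirs stand in for the NAS share and the copied data so the sketch is self-contained:

```shell
set -eu
# Temp dirs stand in for the NAS share and the copied/imported data.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "backup data" > "$src/file1.bin"
cp -r "$src/." "$dst/"

# One checksum per file, relative paths, sorted so the manifests diff cleanly.
(cd "$src" && find . -type f -print0 | sort -z | xargs -0 sha256sum) > src.sha256
(cd "$dst" && find . -type f -print0 | sort -z | xargs -0 sha256sum) > dst.sha256

diff src.sha256 dst.sha256 && echo "integrity OK"
```

Generating the source manifest before the device ships gives you something to compare against once the import into S3 completes.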

 



JosserandM
Apprentice
  • Author
  • Answer
  • October 6, 2025

Thanks!!

The only thing is that we already have an S3 cloud library using GIR, so in this case the CommServe already knows the bucket and we have some existing directory trees… I'm afraid of having two libraries on the same bucket at the same time.

Or I would have to bring the entire bucket back locally to continue the aux copy on it in the meantime.