I use put_object to copy objects from one S3 bucket to another, cross-region and cross-partition. The problem is that the file sizes have become unpredictable, and since get_object reads the whole object into memory, I end up provisioning more memory than is needed most of the time.
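Roughly what I do today looks like this (a simplified sketch; the profile, region, bucket, and key names are placeholders):

```python
import boto3

# Separate sessions/credentials per partition (placeholder profile names)
src_s3 = boto3.Session(profile_name="commercial").client("s3", region_name="us-east-1")
dst_s3 = boto3.Session(profile_name="govcloud").client("s3", region_name="us-gov-west-1")

# get_object reads the entire body into memory before put_object runs,
# so the job has to be sized for the largest possible object
body = src_s3.get_object(Bucket="source-bucket", Key="some/key")["Body"].read()
dst_s3.put_object(Bucket="dest-bucket", Key="some/key", Body=body)
```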
Ideally I want to "stream" the download/upload process.
For example, given an object with hash 123abc456def789:

Scenario: download/upload the object in chunks
- Download a part of the object (123) and save it to memory
- Upload that part of the object (123) and remove it from memory
- ... and so on until 789
This way the buffer uses constant space regardless of the object size.
It was suggested that I use copy_object, but I transfer between the standard partition and GovCloud, so a cross-partition copy is not possible. Ideally I also want to avoid downloading to disk.
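What I'm imagining is a ranged get_object on the source combined with a multipart upload on the destination, something like the rough sketch below (profiles, bucket/key names, and part size are placeholders; parts other than the last must be at least 5 MiB). Is this a sensible way to do it?

```python
import boto3

src_s3 = boto3.Session(profile_name="commercial").client("s3", region_name="us-east-1")
dst_s3 = boto3.Session(profile_name="govcloud").client("s3", region_name="us-gov-west-1")

SRC_BUCKET, DST_BUCKET, KEY = "source-bucket", "dest-bucket", "some/key"
PART_SIZE = 8 * 1024 * 1024  # 8 MiB per chunk; must be >= 5 MiB for all but the last part

size = src_s3.head_object(Bucket=SRC_BUCKET, Key=KEY)["ContentLength"]
upload = dst_s3.create_multipart_upload(Bucket=DST_BUCKET, Key=KEY)

parts = []
try:
    for part_number, start in enumerate(range(0, size, PART_SIZE), start=1):
        end = min(start + PART_SIZE, size) - 1
        # Download only this byte range into memory
        chunk = src_s3.get_object(
            Bucket=SRC_BUCKET, Key=KEY, Range=f"bytes={start}-{end}"
        )["Body"].read()
        # Upload the same range as one multipart part, then let the chunk be freed
        resp = dst_s3.upload_part(
            Bucket=DST_BUCKET, Key=KEY, UploadId=upload["UploadId"],
            PartNumber=part_number, Body=chunk,
        )
        parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
    dst_s3.complete_multipart_upload(
        Bucket=DST_BUCKET, Key=KEY, UploadId=upload["UploadId"],
        MultipartUpload={"Parts": parts},
    )
except Exception:
    # Clean up the partial upload so incomplete parts don't accumulate
    dst_s3.abort_multipart_upload(Bucket=DST_BUCKET, Key=KEY, UploadId=upload["UploadId"])
    raise
```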