AWS S3 bucket - Moving all XML files from one S3 bucket to another S3 bucket using Python Lambda - aws-lambda

In my case, I want to read all the XML files from my S3 bucket, parse them, and then move the parsed files to another S3 bucket.
The parsing logic is working fine for me, but I am not able to move the files.
This is the example I am trying to use:
import boto3

s3 = boto3.resource('s3')
src_bucket = s3.Bucket('bucket1')
dest_bucket = s3.Bucket('bucket2')
for obj in src_bucket.objects.all():
    filename = obj.key.split('/')[-1]
    dest_bucket.put_object(Key='sample/' + filename, Body=obj.get()["Body"].read())
The above code is not working for me at all (I have given the S3 folder full access, and for testing I granted public full access as well).
Thanks

Check out this answer. You could use Python's endswith() function and pass ".xml" to it, get a list of those files, copy them to the destination bucket, and then delete them from the source bucket.
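A minimal sketch of that approach, assuming the bucket names 'bucket1' and 'bucket2' from the question (boto3's managed copy avoids reading each object's body into Lambda memory):

import boto3

s3 = boto3.resource('s3')
src_bucket = s3.Bucket('bucket1')
dest_bucket = s3.Bucket('bucket2')

for obj in src_bucket.objects.all():
    # Only move keys ending in ".xml"
    if not obj.key.endswith('.xml'):
        continue
    # Copy to the destination bucket, then delete the original (S3 has no atomic "move")
    dest_bucket.copy({'Bucket': src_bucket.name, 'Key': obj.key}, obj.key)
    obj.delete()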

Related

Upload CSV payload to Google Storage using Ruby

I am trying to upload a string payload to Google Storage directly using Ruby, but it seems there's no direct way to do this without creating a temporary file on disk.
I am using the CSV library to generate the string payload.
The current method suggests storing the string payload in a temporary file and then using code like the following to upload the file to Google Storage:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket bucket_name
file = bucket.create_file local_file_path, file_name
Is there a way to avoid creating a temporary file to upload?
I found the documentation, which states that we can use any File-like object, such as StringIO, to upload a string payload directly.
Here's my code:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
bucket.create_file StringIO.new("Hello world!"), "hello-world.txt"
See the Creating a File section of the documentation for an example:
https://googleapis.dev/ruby/google-cloud-storage/latest/Google/Cloud/Storage/Bucket.html#create_file-instance_method
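For what it's worth, the Python client supports the same pattern without a temporary file; a minimal sketch, assuming the same bucket and object names as above:

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-todo-app")
# upload_from_string accepts an in-memory payload directly, no temp file needed
bucket.blob("hello-world.txt").upload_from_string("Hello world!")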

Placing file inside folder of S3 bucket

I have a Spring Boot application where I am trying to place a file inside a folder of the S3 target bucket: target-bucket/targetsystem-folder/file.csv
The targetsystem-folder name will differ for each file and is retrieved from a YAML configuration file.
The targetsystem-folder has to be created via code if it does not exist, and the file should be placed under that folder.
As far as I know, there is no folder concept in an S3 bucket and everything is stored as objects.
I have read in some documents that to place the file under a folder, I have to give a key-expression like targetsystem-folder/file.csv with bucket = target-bucket.
But it does not work out. I would like to achieve this using spring-integration-aws without using the aws-sdk directly.
<int-aws:s3-outbound-channel-adapter id="filesS3Mover"
        channel="filesS3MoverChannel"
        transfer-manager="transferManager"
        bucket="${aws.s3.target.bucket}"
        key-expression="headers.targetsystem-folder/headers.file_name"
        command="UPLOAD">
</int-aws:s3-outbound-channel-adapter>
Can anyone offer guidance on this issue?
Your problem is that the SpEL in the key-expression is wrong. Just try to start from regular Java code and imagine how you would build such a value. Then you'll see that you are missing a concatenation operation in your expression:
key-expression="headers.targetsystem-folder + '/' + headers.file_name"
Also, please provide more information about the error in the future. In most cases the stack trace is very helpful.
In a project that I worked on before, I just used the Java AWS SDK directly. In my implementation, I did something like this:
private void uploadFileTos3bucket(String fileName, File file) {
    // Note: the key should not start with "/"; a leading slash becomes part of the key itself
    s3client.putObject(new PutObjectRequest("target-bucket", "targetsystem-folder/" + fileName, file)
            .withCannedAcl(CannedAccessControlList.PublicRead));
}
I didn't create any more configuration. If targetsystem-folder does not exist inside the bucket, it is created automatically and the file is placed inside it; otherwise, the file is simply placed under the existing folder.
You can take this answer as a reference for further explanation of the subject:
There are no "sub-directories" in S3. There are buckets and there are keys within buckets.
You can emulate traditional directories by using prefix searches. For example, you can store the following keys in a bucket:
foo/bar1
foo/bar2
foo/bar3
blah/baz1
blah/baz2
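A short boto3 sketch of that idea, using the placeholder names from the question (the "folder" comes into existence simply by writing a key that contains the / delimiter):

import boto3

s3 = boto3.client('s3')

# Writing this key implicitly "creates" targetsystem-folder; there is no separate mkdir step
s3.put_object(Bucket='target-bucket', Key='targetsystem-folder/file.csv', Body=b'col1,col2\n')

# Emulate a directory listing with a prefix search
response = s3.list_objects_v2(Bucket='target-bucket', Prefix='targetsystem-folder/')
for obj in response.get('Contents', []):
    print(obj['Key'])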

AWS copying one object to another

I'm trying to copy data from one bucket to another using the Ruby "aws-sdk" gem, version 3.
My code is shown below:
temporary_object = @temporary_bucket.object(temporary_path)
permanent_object = @permanent_bucket.object(permanent_path)
temporary_object.copy_to(permanent_object)
However, I keep getting the error Aws::S3::Errors::NoSuchKey: The specified key does not exist. This makes sense, as the permanent bucket doesn't exist at this moment; however, I thought that copy_to would create the bucket if it did not exist.
Any advice would be very helpful.
Thanks
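For what it's worth, an S3 copy never creates the destination bucket; the bucket has to exist (and be accessible) before the copy. A minimal boto3 sketch of the same flow under that assumption (bucket and key names are hypothetical):

import boto3

s3 = boto3.resource('s3')

# The destination bucket must be created explicitly before copying into it
# (outside us-east-1, a CreateBucketConfiguration with the region is also required)
permanent_bucket = s3.create_bucket(Bucket='permanent-bucket')
permanent_bucket.Object('permanent/key').copy_from(
    CopySource={'Bucket': 'temporary-bucket', 'Key': 'temporary/key'})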

Manually populate an ImageField

I have a models.ImageField which I sometimes populate with the corresponding forms.ImageField. Sometimes, instead of using a form, I want to update the image field with an AJAX POST. I am passing both the image filename and the image content (base64 encoded), so in my API view I have everything I need. But I do not really know how to do this manually, since I have always relied on form processing, which automatically populates the models.ImageField.
How can I manually populate the models.ImageField, given the filename and the file contents?
EDIT
I have reached the following status:
instance.image.save(file_name, File(StringIO(data)))
instance.save()
And this is updating the file reference, using the right value configured in upload_to in the ImageField.
But it is not saving the image. I would have imagined that the first .save call would:
Generate a file name in the configured storage
Save the file contents to the selected file, including handling of any kind of storage configured for this ImageField (local FS, Amazon S3, or whatever)
Update the reference to the file in the ImageField
And the second .save would actually save the updated instance to the database.
What am I doing wrong? How can I make sure that the new image content is actually written to disk, in the automatically generated file name?
EDIT2
I have a very unsatisfactory workaround, which works but is very limited. It illustrates the problems that using the ImageField directly would solve:
import os
from django.conf import settings

# TODO: workaround because I do not yet know how to correctly populate the ImageField
# This is very limited because:
# - it only uses the local filesystem (no AWS S3, ...)
# - it does not provide the path handling provided by upload_to
local_file = os.path.join(settings.MEDIA_ROOT, file_name)
with open(local_file, 'wb') as f:
    f.write(data)
instance.image = file_name
instance.save()
EDIT3
So, after some more playing around, I have discovered that my first implementation does the right thing, but fails silently if the passed data has the wrong format (I was mistakenly passing the base64 string instead of the decoded data). I'll post this as the solution.
Just save the file and the instance:
from StringIO import StringIO  # on Python 3, use io.BytesIO for binary data
from django.core.files import File

instance.image.save(file_name, File(StringIO(data)))
instance.save()
No idea where the docs for this use case are.
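As a side note, Django also ships django.core.files.base.ContentFile, which wraps an in-memory string/bytes payload directly; a sketch of the same save, assuming data_b64 is the base64-encoded payload from the request:

import base64
from django.core.files.base import ContentFile

# Decode first: passing the still-encoded base64 string is exactly the silent failure described above
# save=True also saves the model instance, so no separate instance.save() is needed
instance.image.save(file_name, ContentFile(base64.b64decode(data_b64)), save=True)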
You can use InMemoryUploadedFile directly to save data:
import base64
from cStringIO import StringIO  # on Python 3, use io.BytesIO
from django.core.files.uploadedfile import InMemoryUploadedFile

raw = base64.b64decode(request.POST['file'])
file = StringIO(raw)
image = InMemoryUploadedFile(file,
                             field_name='file',
                             name=request.POST['name'],
                             content_type="image/jpeg",
                             size=len(raw),  # length of the decoded payload, not sys.getsizeof(file)
                             charset=None)
instance.image = image
instance.save()

Need help transferring a jpg from web server to S3 - PHP and CodeIgniter

I have a Flash application that captures an image and passes an encoded image to my web server. The web server then decodes the image and saves it to a tmp directory. That part works fine. Next, I want to move this image from the web server to my S3 account, but I am having trouble. I am using the code below. Any help is appreciated.
In the CodeIgniter Controller Constructor:
$this->load->library('S3');
In the function (also within the controller for now):
/************** S3 upload example ***************/
if (!defined('awsAccessKey')) define('awsAccessKey', 'xxxxxxx');
if (!defined('awsSecretKey')) define('awsSecretKey', 'xxxxxxx');
$s3 = new S3(awsAccessKey, awsSecretKey);
if ($s3->putObjectFile($filePathTemp, "bucket", $filePathNew, "ACL_PUBLIC_READ_WRITE")) {
    #unlink($filePathTemp);
}
I can't even get the $s3 variable to return anything in an echo statement, even when returning a value on the first line of the S3 constructor. I can't access the S3 object/class, yet the following statement returns "1":
echo "S3 --> " . class_exists('S3');
Thanks in advance for your time.
-Tim
I'm assuming you are using this (you don't say; you should specify which one). Your usage looks correct, so I'm assuming your keys are wrong. If
print_r($s3->listBuckets());
doesn't return anything, check your server logs.
