AWS copying one object to another - ruby

I'm trying to copy data from one bucket to another using the ruby "aws-sdk" gem version 3.
My code is shown below:
temporary_object = @temporary_bucket.object(temporary_path)
permanent_object = @permanent_bucket.object(permanent_path)
temporary_object.copy_to(permanent_object)
However, I keep getting the error Aws::S3::Errors::NoSuchKey: The specified key does not exist. That makes sense, as the permanent bucket doesn't exist at this moment; however, I thought that copy_to would create the bucket if it did not exist.
Any advice would be very helpful.
Thanks

Related

AWS S3 bucket - Moving all XML files from one S3 bucket to another S3 bucket using Python lambda

In my case, I wanted to read all the XML files from my S3 bucket, parse them, then move all the parsed files back to the same S3 bucket.
The parsing logic is working fine for me, but I am not able to move all the files.
This is the example I am trying to use:
s3 = boto3.resource('s3')
src_bucket = s3.Bucket('bucket1')
dest_bucket = s3.Bucket('bucket2')
for obj in src_bucket.objects.all():
    filename = obj.key.split('/')[-1]
    dest_bucket.put_object(Key='sample/' + filename, Body=obj.get()["Body"].read())
The above code is not working for me at all (I had to give the S3 folder full access, and for testing I also granted full public access).
Thanks
Check out this answer. You could use Python's endswith() function and pass ".xml" to it, get a list of those files, copy them to the destination bucket, and then delete them from the source bucket.
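In Ruby, which the rest of this page uses, the same extension filter over a list of keys would use String#end_with? (the key names below are made up for illustration):

```ruby
# Keep only the ".xml" keys before copying/deleting, mirroring the
# endswith(".xml") suggestion above.
keys = ["sample/a.xml", "sample/b.txt", "sample/c.xml"]
xml_keys = keys.select { |key| key.end_with?(".xml") }
# xml_keys => ["sample/a.xml", "sample/c.xml"]
```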

How to Fix Document Not Found errors with find

I have a collection of Person, stored in a legacy mongodb server (2.4) and accessed with the mongoid gem via the ruby mongodb driver.
If I perform a
Person.where(email: 'some.existing.email@server.tld').first
I get a result (let's assume I store the id in a variable called "the_very_same_id_obtained_above")
If I perform a
Person.find(the_very_same_id_obtained_above)
I get a
Mongoid::Errors::DocumentNotFound
exception
If I use the javascript syntax to perform the query, the result is found
Person.where("this._id == #{the_very_same_id_obtained_above}").first # this works!
I'm currently trying to migrate the data to a newer version; right now I'm running mongorestore against Amazon DocumentDB (MongoDB 3.6 compatible) to test, and the issue remains.
One thing I noticed is that those object ids are peculiar:
5ce24b1169902e72c9739ff6 - this works anyway
59de48f53137ec054b000004 - this requires the trick
The run of zeroes toward the end of the id seems to be highly correlated with the problem (I have no idea why).
That's the default:
# Raise an error when performing a #find and the document is not found.
# (default: true)
raise_not_found_error: true
Source: https://docs.mongodb.com/mongoid/current/tutorials/mongoid-configuration/#anatomy-of-a-mongoid-config
If this doesn't answer your question, it's very likely the find method is overridden somewhere in your code!
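Another possibility worth checking (an assumption on my part, not something the question confirms) is a type mismatch: lookups on _id behave differently when the stored value is a BSON ObjectId but the query receives a plain String. A self-contained stand-in illustrates the idea without a database:

```ruby
# StandInObjectId is a hypothetical stand-in for BSON::ObjectId, used only
# to show the String-vs-ObjectId mismatch; it is not part of mongoid.
StandInObjectId = Struct.new(:hex)

docs = [{ "_id" => StandInObjectId.new("59de48f53137ec054b000004") }]

# An exact match against a String never equals the stored ObjectId-like value:
by_string = docs.find { |d| d["_id"] == "59de48f53137ec054b000004" }          # => nil
by_oid    = docs.find { |d| d["_id"] == StandInObjectId.new("59de48f53137ec054b000004") }
```

If this turns out to be the cause, converting before the lookup, along the lines of Person.find(BSON::ObjectId.from_string(the_id)), would be the fix.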

Ruby: Undefined method `bucket' for #<Aws::S3::Client>

Using the aws-sdk-s3 gem, I am currently able to upload items to buckets and create signed URLs, and am now trying to determine whether an object exists in a bucket. All the documentation I see says client.bucket('bucketname') should work, but in my case it does not. I've tried:
client.bucket('bucketname')
client.bucket['bucketname']
client.buckets('bucketname')
client.buckets['bucketname']
but none work. This suggestion using head_object is a possibility (https://github.com/cloudyr/aws.s3/issues/160), but I'm still curious why bucket isn't working.
DOCS:
https://gist.github.com/hartfordfive/19097441d3803d9aa75ffe5ecf0696da
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/index.html#Resource_Interfaces
You should call bucket or buckets on an Aws::S3::Resource instance, not on an Aws::S3::Client instance, as the error states.
The links you provided, as well as the docs, show exactly that:
s3 = Aws::S3::Resource.new(
  region: 'us-east-1',
  credentials: Aws::InstanceProfileCredentials.new
)
bucket = s3.bucket('my-daily-backups')

Placing file inside folder of S3 bucket

I have a Spring Boot application where I am trying to place a file inside a folder of the S3 target bucket: target-bucket/targetsystem-folder/file.csv.
The targetsystem-folder name will differ for each file and is retrieved from a YML configuration file.
The targetsystem-folder has to be created via code if the folder does not exist, and the file should be placed under that folder.
As far as I know, there is no folder concept in an S3 bucket; everything is stored as objects.
I have read in some documents that to place the file under a folder, you have to give a key-expression like targetsystem-folder/file.csv with bucket = target-bucket.
But it does not work. I would like to achieve this using spring-integration-aws without using the AWS SDK directly.
<int-aws:s3-outbound-channel-adapter id="filesS3Mover"
    channel="filesS3MoverChannel"
    transfer-manager="transferManager"
    bucket="${aws.s3.target.bucket}"
    key-expression="headers.targetsystem-folder/headers.file_name"
    command="UPLOAD">
</int-aws:s3-outbound-channel-adapter>
Can anyone guide me on this issue?
Your problem is that the SpEL in the key-expression is wrong. Just try to start from regular Java code and imagine how you would build such a value. Then you'll figure out that you are missing a concatenation operation in your expression:
key-expression="headers.targetsystem-folder + '/' + headers.file_name"
Also, please provide more info about the error in the future. In most cases the full stack trace is genuinely helpful.
In the project I was working on before, I just used the Java AWS SDK directly. In my implementation, I did something like this:
private void uploadFileTos3bucket(String fileName, File file) {
    s3client.putObject(new PutObjectRequest("target-bucket", "/targetsystem-folder/" + fileName, file)
        .withCannedAcl(CannedAccessControlList.PublicRead));
}
I didn't create any more configuration. It automatically creates /targetsystem-folder inside the bucket (and puts the file inside it) if it doesn't exist; otherwise it just puts the file in the existing folder.
You can take this answer as a reference for further explanation of the subject.
There are no "sub-directories" in S3. There are buckets and there are keys within buckets.
You can emulate traditional directories by using prefix searches. For example, you can store the following keys in a bucket:
foo/bar1
foo/bar2
foo/bar3
blah/baz1
blah/baz2
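The prefix-search emulation described above, as a quick Ruby sketch over plain key strings:

```ruby
# Listing the "foo/" pseudo-directory is just a prefix filter over keys,
# which is what S3's prefix parameter does server-side.
keys = %w[foo/bar1 foo/bar2 foo/bar3 blah/baz1 blah/baz2]
foo_entries = keys.select { |key| key.start_with?("foo/") }
# foo_entries => ["foo/bar1", "foo/bar2", "foo/bar3"]
```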

What are likely root causes of "Failed to list data bag items in data bag"?

I keep getting this error from Chef but can't find any documentation or other people who have had it.
What are the likely root causes?
Some more info would be helpful here. What workflow are you going down when you see this?
I'm going to assume that it's not a knife call. I put some debug output around the source of your error in the chef gem and called data bag list and data bag show; neither seemed to hit the mixin code.
The following is the source of your error in the chef gem under mixin/language
def data_bag(bag)
  DataBag.validate_name!(bag.to_s)
  rbag = DataBag.load(bag)
  rbag.keys
rescue Exception
  Log.error("Failed to list data bag items in data bag: #{bag.inspect}")
  raise
end
Now I'm at a loss as to what is accessing that mixin code because all other references to data_bag() in the gem refer to the code around the data_bag_item object.
Is this custom code you've created? Is there a chance you are referencing the wrong module?
You normally get this error when Chef cannot find the data bag item's "id".
Say I would like to load the following data_bag
data_bags
apps
mywebserver.json
apps
recipes
default.rb
[mywebserver.json]
{
  "id": "mywebserver"
}
[default.rb]
data_bag_item("apps", "mywebserver") # The id specified in the json
I believe Chef does not care about the data bag item's file name; it only cares about the "id" specified in each data bag item's JSON file.
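That lookup rule can be sketched in plain Ruby (the file name and contents are the ones from the example above):

```ruby
require "json"

# Chef matches a data bag item by the "id" field inside the JSON,
# not by the file name the item happens to live in.
raw_items = { "mywebserver.json" => '{ "id": "mywebserver" }' }
items = raw_items.transform_values { |raw| JSON.parse(raw) }

found = items.values.find { |item| item["id"] == "mywebserver" }
# found => { "id" => "mywebserver" }
```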
