AWS S3 API - Objects don't contain Metadata - ajax

Trying to figure out the AWS S3 API and failing miserably...
I currently have a bucket that contains lots of videos.
I need to request all the videos as objects, each with the metadata I set when uploading and a link to share the video.
Problem is, I'm getting the objects without any of the above...
What I've got so far -
AWS.config.update({
  accessKeyId: 'id',
  secretAccessKey: 'key',
  region: 'eu-west-1'
});
var s3 = new AWS.S3();
var params = {
  Bucket: 'Bucket-name',
  Delimiter: '/',
  Prefix: 'resource/folder-with-videos/'
};
s3.listObjects(params, function (err, data) {
  if (err) throw err;
  console.log(data);
});
Thanks for reading :)
UPDATE - found that when using getObject and adding ExposeHeader to the CORS settings, I can indeed get the metadata I set.
Problem is, getObject only works on a specific object (a video in my case).
Any idea how I can list all the objects like listObjects does, but get each object's values like I do with getObject?
The only solution I can think of is doing listObjects to get a list of all the objects, and then making a getObject ajax call for each object in the result?... rip UX
thanks :)

A couple of things to sort out first.
As per the documentation, http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property, the listObjects API returns exactly what is described in the callback parameters. Using a delimiter causes S3 to list objects only one level below the given prefix; it won't traverse all objects recursively.
In order to get the URL, you can use http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getSignedUrl-property. I am not really sure what you meant by meta-data.
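If by meta-data you mean the user-defined metadata (the x-amz-meta-* values) set at upload, one possible approach is to combine listObjects with a headObject call per key, and getSignedUrl for a shareable link. A rough, untested sketch reusing the bucket and prefix from the question:
var s3 = new AWS.S3();
var params = {
  Bucket: 'Bucket-name',
  Delimiter: '/',
  Prefix: 'resource/folder-with-videos/'
};
s3.listObjects(params, function (err, data) {
  if (err) throw err;
  data.Contents.forEach(function (item) {
    // one extra request per object: fine for small listings, but consider
    // caching or batching if the folder grows large
    s3.headObject({ Bucket: params.Bucket, Key: item.Key }, function (err, head) {
      if (err) throw err;
      var url = s3.getSignedUrl('getObject', {
        Bucket: params.Bucket,
        Key: item.Key,
        Expires: 3600 // shareable link, valid for one hour
      });
      console.log(item.Key, head.Metadata, url); // head.Metadata holds the x-amz-meta-* values
    });
  });
});
This is still one extra round trip per video, essentially the listObjects-then-getObject idea from the update, but headObject only fetches headers rather than the whole file.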

Related

Ruby: Undefined method `bucket' for #<Aws::S3::Client>

Using the aws-sdk-s3 gem, I am currently able to upload items to buckets and create signed URLs, and am trying to determine whether an object exists in a bucket or not. All the documentation I see says client.bucket('bucketname') should work, but in my case it does not. I've tried:
client.bucket('bucketname')
client.bucket['bucketname']
client.buckets('bucketname')
client.buckets['bucketname']
but none work. This suggestion using head_object is a possibility (https://github.com/cloudyr/aws.s3/issues/160), but I'm still curious why bucket isn't working.
DOCS:
https://gist.github.com/hartfordfive/19097441d3803d9aa75ffe5ecf0696da
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/index.html#Resource_Interfaces
You should call bucket or buckets on an Aws::S3::Resource instance, not on an Aws::S3::Client instance, as the error states.
The links you provided, as well as the docs, show that:
s3 = Aws::S3::Resource.new(
  region: 'us-east-1',
  credentials: Aws::InstanceProfileCredentials.new
)
bucket = s3.bucket('my-daily-backups')

how can I get ALL records from route53?

Referring to the code snippet here, which seemed to work for someone but isn't clear to me: https://github.com/aws/aws-sdk-ruby/issues/620
I'm trying to get all of my records (about ~7000) via resource record sets, but I can't seem to get the pagination to work with list_resource_record_sets. Here's what I have:
route53 = Aws::Route53::Client.new
response = route53.list_resource_record_sets({
  start_record_name: fqdn(name),
  start_record_type: type,
  max_items: 100, # fyi - aws api maximum is 100 so we'll need to page
})
response.last_page?
response = response.next_page until response.last_page?
I verified I'm hooked into the right region, and I can see the record I'm trying to get (so I can delete it later) in the AWS console, but I can't seem to get it through the API. I used this as a starting point: https://github.com/aws/aws-sdk-ruby/issues/620.
Any ideas on what I'm doing wrong? Or is there an easier way, perhaps another method in the API I'm not finding, to get just the record I need given the hosted_zone_id, type and name?
The issue you linked is for the Ruby AWS SDK v2, but the latest is v3. It also looks like things may have changed around a bit since 2014, as I'm not seeing the #next_page or #last_page? methods in the v2 API or the v3 API.
Consider using the #next_record_name and #next_record_type from the response when #is_truncated is true. That's more consistent with how other paginations work in the Ruby AWS SDK, such as with DynamoDB scans for example.
Something like the following should work (though I don't have an AWS account with records to test it out):
route53 = Aws::Route53::Client.new
hosted_zone = ? # Required field according to the API docs

next_name = fqdn(name)
next_type = type

loop do
  response = route53.list_resource_record_sets(
    hosted_zone_id: hosted_zone,
    start_record_name: next_name,
    start_record_type: next_type,
    max_items: 100, # fyi - aws api maximum is 100 so we'll need to page
  )
  records = response.resource_record_sets

  # Break here if you find the record you want
  # Also break if we've run out of pages
  break unless response.is_truncated

  next_name = response.next_record_name
  next_type = response.next_record_type
end

Socket.io: Client receiving empty object/array despite data existing on server

I'm working with socket.io and am having trouble receiving data from the server, even though I can console.log() the data (an array of objects) right before I try to emit it back to the calling client. If I hard-code info into the emit, it displays on the client, but when I use the dynamically created array of objects, it doesn't go through. My gut tells me it's an async issue, but I'm using Bluebird to manage that. Perhaps it's an encoding issue? I tried JSON.stringify and JSON.parse, no dice.
Server
socket.on('getClassList', function () {
  sCon.getClassList()
    .then(function (data) {
      console.log(data) // data is an array full of objects
      socket.emit('STC_gotDatList', data)
    })
})
Hard-Coded Expected Output:
classes['moof'] = {
  accessList: [888],
  connectedList: [],
  firstName: "sallyburnsjellyworth"
}
Client
socket.on('STC_gotDatList', function (info) {
  console.log(info) // prints [] or {}
})
EDIT:
I remember reading somewhere that console.log() may not print data at the time the data is available/populated. Could that be it, even though I'm using Promises? That at the time I'm emitting the data, it just hasn't been populated into the array? How would I troubleshoot this scenario?
EDIT2:
A step closer. In order to get anything back, for some reason I have to return each specific object in the 'classes' array. It won't let me send the entire array: for the example above, to get data to the client I have to return(classes['moof']); I can't return(classes) to get the entire array... Not sure why.
EDIT3: Solution: You just can't do it this way. I had to put 'moof' inside classes as a property (className) and then I was able to pass the whole classes array.
How is the 'classes' array created?
The problem might be that it is being given properties dynamically but was created as an array rather than as an object.
If you plan to dynamically add properties to it (like property moof), you should create it as an object (with {}) rather than an array (with []).
Try
var classes = {}; // instead of classes = []
// then fill it however you do it
classes[property] = {
  accessList: [888],
  connectedList: [],
  firstName: "sallyburnsjellyworth"
};
Also, the credit goes to Felix, I'm just paraphrasing his answer: https://stackoverflow.com/a/8865468/7573546
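To illustrate why the array version arrives empty: JSON serialization (which socket.io uses for plain data) keeps only the numeric indices of an array, so string-keyed entries like classes['moof'] are dropped in transit, while a plain object keeps them. A small sketch, including an EDIT3-style conversion to an array of named entries (className is just an assumed field name):
// string keys on an *array* are lost during serialization
var asArray = [];
asArray['moof'] = { accessList: [888] };
JSON.stringify(asArray); // "[]"

// the same data on a plain *object* survives
var classes = {};
classes['moof'] = { accessList: [888], connectedList: [], firstName: "sallyburnsjellyworth" };
JSON.stringify(classes); // '{"moof":{...}}'

// or, as in EDIT3, flatten it into an array whose entries carry their own name
var list = Object.keys(classes).map(function (name) {
  return Object.assign({ className: name }, classes[name]);
});
socket.emit('STC_gotDatList', list); // or emit `classes` directly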

Direct (and simple!) AJAX upload to AWS S3 from (AngularJS) Single Page App

I know there's been a lot of coverage on upload to AWS S3. However, I've been struggling with this for about 24 hours now and I have not found any answer that fits my situation.
What I'm trying to do
Upload a file to AWS S3 directly from my client to my S3 bucket. The situation is:
It's a Single Page App, so the upload request must be made via AJAX
My server and my client are not on the same domain
The S3 bucket is of the newest sort (Frankfurt), for which some signature-generating libraries don't work (see below)
Client is in AngularJS
Server is in ExpressJS
What I've tried
Heroku's article on direct upload to S3. Doesn't fit my client/server configuration (plus it really does not fit harmoniously with Angular)
ready-made directives like ng-s3upload. Does not work because their signature-generating algorithm is not accepted by recent s3 buckets.
Manually creating a file upload directive and logic on the client like in this article (using FormData and Angular's $http). It consisted of getting a signed URL from AWS on the server (and that part worked), then AJAX-uploading to that URL. It failed with some mysterious CORS-related message (although I did set a CORS config on Heroku)
It seems I'm facing 2 difficulties: having a file input that works in my Single Page App, and getting AWS's workflow right.
The kind of solution I'm looking for
If possible, I'd like to avoid 'all included' solutions that manage the whole process while hiding all of the complexity, making it hard to adapt to special cases. I'd much rather have a simple explanation breaking down the flow of data between the various components involved, even if it requires some more plumbing from me.
I finally managed. The key points were:
Let go of Angular's $http, and use native XMLHttpRequest instead.
Use the getSignedUrl feature of AWS's SDK, instead of implementing my own signature-generating workflow like many libraries do.
Set the AWS configuration to use the proper signature version (v4 at the time of writing) and region ('eu-central-1' in the case of Frankfurt).
Below is a step-by-step guide of what I did; it uses AngularJS and NodeJS on the server, but should be rather easy to adapt to other stacks, especially because it deals with the most pathological cases (an SPA on a different domain than the server, with a bucket in a recent - at the time of writing - region).
Workflow summary
The user selects a file in the browser; your JavaScript keeps a reference to it.
the client sends a request to your server to obtain a signed upload URL.
Your server chooses a name for the object to put in the bucket (make sure to avoid name collisions! see the naming sketch after this list).
The server obtains a signed URL for your object using the AWS SDK, and sends it back to the client. This involves the object's name and the AWS credentials.
Given the file and the signed URL, the client sends a PUT request directly to your S3 Bucket.
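As a sketch of the naming step (the 'uploads/' prefix and the makeObjectName helper are made up for illustration, not part of any SDK), a random key plus the original extension is usually enough to avoid collisions:
var crypto = require('crypto');
var path = require('path');

// hypothetical helper: random hex id + original extension under an arbitrary prefix
function makeObjectName(originalFilename) {
  var ext = path.extname(originalFilename); // e.g. ".mp4"
  return 'uploads/' + crypto.randomBytes(16).toString('hex') + ext;
}

// clientProvidedName is whatever filename the client sent along with its request
var objectName = makeObjectName(clientProvidedName); // pass this to getSignedUrl below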
Before you start
Make sure that:
Your server has the AWS SDK
Your server has AWS credentials with proper access rights to your bucket
Your S3 bucket has a proper CORS configuration for your client.
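Regarding the last point, the CORS rule can be set in the S3 console, but the same SDK can also apply it from Node. A sketch with placeholder origins and headers (adjust to your client's domain; it assumes the aws.config setup shown in Step 2):
var aws = require('aws-sdk');
var s3 = new aws.S3();

s3.putBucketCors({
  Bucket: MY_BUCKET_NAME, // same placeholder as in Step 2
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ['https://my-spa.example.com'], // placeholder: your client's origin
      AllowedMethods: ['PUT', 'GET'],
      AllowedHeaders: ['*'],
      ExposeHeaders: ['ETag'],
      MaxAgeSeconds: 3000
    }]
  }
}, function (err) {
  if (err) console.error('could not set the CORS configuration', err);
});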
Step 1: setup a SPA-friendly file upload form / widget.
All that matters is to have a workflow that eventually gives you programmatic access to a File object - without uploading it.
In my case, I used the ng-file-select and ng-file-drop directives of the excellent angular-file-upload library. But there are other ways of doing it (see this post for example.).
Note that you can access useful information in your file object such as file.name, file.type etc.
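For instance, a minimal library-free version of that step, using a plain <input type="file"> (the #file-input id is just an assumed element in your markup):
var input = document.querySelector('#file-input');
var selectedFile = null; // keep a reference; nothing is uploaded yet

input.addEventListener('change', function () {
  selectedFile = input.files[0]; // a File object
  console.log(selectedFile.name, selectedFile.type, selectedFile.size);
});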
Step 2: Get a signed URL for the file on your server
On your server, you can use the AWS SDK to obtain a secure, temporary URL to PUT your file from someplace else (like your frontend).
In NodeJS, I did it this way:
// ---------------------------------
// some initial configuration
var aws = require('aws-sdk');
aws.config.update({
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
  signatureVersion: 'v4',
  region: 'eu-central-1'
});
// ---------------------------------
// now say you want to fetch a URL for an object named `objectName`
var s3 = new aws.S3();
var s3_params = {
  Bucket: MY_BUCKET_NAME,
  Key: objectName,
  Expires: 60,
  ACL: 'public-read'
};
s3.getSignedUrl('putObject', s3_params, function (err, signedUrl) {
  // send signedUrl back to client
  // [...]
});
You'll probably want to know the URL to GET your object to (typically if it's an image). To do this, I simply removed the query string from the URL:
var url = require('url');
// ...
var parsedUrl = url.parse(signedUrl);
parsedUrl.search = null;
var objectUrl = url.format(parsedUrl);
Step 3: send the PUT request from the client
Now that your client has your File object and the signed URL, it can send the PUT request to S3. My advice in Angular's case is to just use XMLHttpRequest instead of the $http service:
var signedUrl, file;
// ...
var d_completed = $q.defer(); // since I'm working with Angular, I use $q for asynchronous control flow, but it's not mandatory
var xhr = new XMLHttpRequest();
xhr.file = file; // not necessary if you create scopes like this
xhr.onreadystatechange = function (e) {
  if (4 == this.readyState) {
    // done uploading! HURRAY!
    d_completed.resolve(true);
  }
};
xhr.open('PUT', signedUrl, true);
xhr.setRequestHeader("Content-Type", "application/octet-stream");
xhr.send(file);
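For completeness, a purely illustrative way to chain steps 2 and 3 on the client; the /api/sign-upload endpoint and the uploadToS3 wrapper (which would wrap the XHR code above and return d_completed.promise) are assumptions, not part of the original setup:
$http.get('/api/sign-upload', { params: { name: file.name, type: file.type } })
  .then(function (res) {
    return uploadToS3(res.data.signedUrl, file); // resolves when the PUT completes
  })
  .then(function () {
    console.log('upload finished');
  });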
Acknowledgements
I would like to thank emil10001 and Will Webberley, whose publications were very valuable to me for this issue.
You can use the ng-file-upload $upload.http method in conjunction with the aws-sdk getSignedUrl to accomplish this. After you get the signedUrl back from your server, this is the client code:
var fileReader = new FileReader();
fileReader.readAsArrayBuffer(file);
fileReader.onload = function (e) {
  $upload.http({
    method: 'PUT',
    headers: { 'Content-Type': file.type != '' ? file.type : 'application/octet-stream' },
    url: signedUrl,
    data: e.target.result
  }).progress(function (evt) {
    var progressPercentage = parseInt(100.0 * evt.loaded / evt.total);
    console.log('progress: ' + progressPercentage + '% ' + file.name);
  }).success(function (data, status, headers, config) {
    console.log('file ' + file.name + ' uploaded. Response: ' + data);
  });
};
To do multipart uploads, or to upload objects larger than 5 GB, this process gets a bit more complicated, as each part needs its own signature. Conveniently, there is a JS library for that:
https://github.com/TTLabs/EvaporateJS
via https://github.com/aws/aws-sdk-js/issues/468
Use the s3-file-upload open-source directive, which offers dynamic data-binding and auto-callback functions - https://github.com/vinayvnvv/s3FileUpload

SharpGS how to download file?

I am using SharpGS for Google Cloud Storage. I could upload a file using the GetBucket("some-bucket").AddObject() method, but I could not download the file using the following code:
GetBucket("some-bucket").GetObjectHead("some-file").Content
It gave me a null value for the returned bytes.
Any idea?
Thanks
The GetObjectHead looks up the object using a HEAD request, so it doesn't retrieve the content.
If you take a look at the demo code, you can retrieve object contents by listing the bucket:
var bucket = GetBucket("some-bucket");
foreach (var o in bucket.Objects) {
    Console.WriteLine(Encoding.UTF8.GetString(o.Retrieve().Content));
}
There doesn't seem to be a way to get an IObject without listing the bucket. I would suggest adding a method to the IObjectContent class returned from GetObjectHead to fetch the IObject. The project is on GitHub.
