SharpGS: how to download a file?

I am using SharpGS for Google Cloud Storage. I can upload a file using the
GetBucket("some-bucket").AddObject() method, but I cannot download the file using the following code:
GetBucket("some-bucket").GetObjectHead("some-file").Content
It gives me a null value for the returned byte array.
Any idea?
Thanks

The GetObjectHead looks up the object using a HEAD request, so it doesn't retrieve the content.
If you take a look at the demo code, you can retrieve object contents by listing the bucket:
var bucket = GetBucket("some-bucket");
foreach (var o in bucket.Objects) {
    Console.WriteLine(Encoding.UTF8.GetString(o.Retrieve().Content));
}
There doesn't seem to be a way to get an IObject without listing the bucket. I would suggest adding a method to the IObjectContent class returned from GetObjectHead to fetch the IObject. The project is on GitHub.

Related

Cloudinary Error: {"error":{"message":"Missing required parameter - timestamp"}}

I am trying to use Cloudinary's .downloadMulti(String tag, Map options) to generate a URL for downloading multiple images with the same tag as a zip. I am generating the URL seemingly just fine, but when I go to the URL, I am met with {"error":{"message":"Missing required parameter - timestamp"}}.
I've researched a bit and saw that I need to sign the request, but the error isn't saying the signature is missing - just the timestamp. I believe the request is already being signed and just needs a proper timestamp. I assume it needs to be set within the constructor, but when I call Util.timestamp() it isn't recognized as a reference.
My Cloudinary initializer:
private final Cloudinary cloudinary = new Cloudinary(ObjectUtils.asMap(
        "cloud_name", "dxoa7bbix",
        "api_key", "161649288458746",
        "api_secret", "..."));
My upload method:
public Photo uploadOrderImage(String imageURL, String publicId, Order order, String photoType) throws IOException {
    Map result = cloudinary.uploader().upload(new File(imageURL), ObjectUtils.asMap(
            "public_id", publicId,
            "tags", order.getId().toString()));
    Photo sellOrderPhoto = new Photo(
            result.get("secure_url").toString(),
            photoType,
            order
    );
    return photoRepository.save(sellOrderPhoto);
}
This is my download method:
public String downloadPhotos(String tag) throws IOException {
    return cloudinary.downloadMulti(tag, ObjectUtils.asMap(
            "tags", tag
    ));
}
An example URL my download method returns: https://api.cloudinary.com/v1_1/dxoa7bbix/image/multi?mode=download&async=false&signature=5f5da549fc78ea3fd50f034cdc76e2cce3089d48&api_key=161649288458746&tag=137×tamp=1638583257
Overall, I think the problem is the lack of a timestamp. If you have any ideas, that would be great!
The reason for the error is that the parameter is expected to be called timestamp but based on the URL you shared, it is actually ×tamp.
If you want to generate a URL for a ZIP file containing assets that share a particular tag, then you will want to use the generate_archive method and not multi, which provides different functionality.
If you replace the following code:
return cloudinary.downloadMulti(tag, ObjectUtils.asMap(
        "tags", tag
));
With:
return cloudinary.downloadZip(ObjectUtils.asMap(
        "tags", tag,
        "resource_type", "image")
);
That will generate and return a URL to a ZIP file; the file is created when the URL is accessed and will contain the images from your cloud that carry the tag you specified.
The Cloudinary Java SDK you're using handles the signature and timestamp generation automatically for any of its built-in methods, so you don't need to make any changes to the SDK code or calculate the signature yourself when using them. Signature generation is only necessary if you don't intend to use any of the SDKs and instead integrate with the Cloudinary API using your own custom code - for example, if you're using a language for which a Cloudinary SDK doesn't yet exist. In such cases, if you want to perform authenticated API calls, you will need to generate the authentication signatures yourself.
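For completeness, here is a rough sketch of what manual signing involves when no SDK is available. It is shown in Python purely for illustration; the exact set of parameters that must be included in the signature is an assumption here and should be checked against the API reference for the endpoint you call.
import hashlib
import time

def sign_params(params, api_secret):
    # Drop empty values and add the Unix timestamp the API expects.
    params = {k: v for k, v in params.items() if v is not None}
    params["timestamp"] = int(time.time())
    # Sort the parameters alphabetically, serialize them as key=value pairs
    # joined by "&", append the API secret, and SHA-1 the result.
    to_sign = "&".join("{}={}".format(k, params[k]) for k in sorted(params))
    params["signature"] = hashlib.sha1((to_sign + api_secret).encode("utf-8")).hexdigest()
    return params

# Hypothetical usage: signing the parameters of an archive-generation request.
signed = sign_params({"tags": "137", "mode": "download"}, "my_api_secret")
print(signed["timestamp"], signed["signature"])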

Upload CSV payload to Google Storage using Ruby

I am trying to upload a string payload to Google Storage directly using Ruby. But it seems there's no direct way to do this without creating a temporary file on disk.
I am using the CSV library to generate a string payload.
The current method suggests storing the string payload in a temporary file and then using code like the following to upload the file to Google Storage:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket bucket_name
file = bucket.create_file local_file_path, file_name
Is there a way to avoid creating a temporary file to upload?
I found the documentation where it states that we can use any File-like object, such as StringIO, to upload a string payload directly.
Here's my code:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
bucket.create_file StringIO.new("Hello world!"), "hello-world.txt"
See the "Creating a File" section of the documentation link below for an example.
Documentation Link:
https://googleapis.dev/ruby/google-cloud-storage/latest/Google/Cloud/Storage/Bucket.html#create_file-instance_method

Google sheets IMPORTXML fails for ASX data

I am trying to extract the "Forward Dividend & Yield" value from https://finance.yahoo.com/ for multiple companies in different markets, into Google Sheets.
This is successful:
=IMPORTXML("https://finance.yahoo.com/quote/WBS", "//*[#id='quote-summary']/div[2]/table/tbody/tr[6]/td[2]")
But this fails with #N/A:
=IMPORTXML("https://finance.yahoo.com/quote/CBA.AX", "//*[#id='quote-summary']/div[2]/table/tbody/tr[6]/td[2]")
I cannot work out what needs to be different for ASX ticker codes. Why does CBA.AX cause a problem?
Huge thanks for any help
When I tested the formula =IMPORTXML("https://finance.yahoo.com/quote/CBA.AX", "//*"), an error of Resource at url not found. occurred. I thought that this might be the reason for your issue.
But, fortunately, when I tried to retrieve the HTML from the same URL using Google Apps Script, the HTML could be retrieved. So, in this answer, I would like to propose retrieving the value using a custom function created with Google Apps Script. The sample script is as follows.
Sample script:
Please copy and paste the following script into the script editor of the Google Spreadsheet and save it. Then put the formula =SAMPLE("https://finance.yahoo.com/quote/CBA.AX") in a cell. By this, the value is retrieved.
function SAMPLE(url) {
  const res = UrlFetchApp.fetch(url).getContentText().match(/DIVIDEND_AND_YIELD-value.+?>(.+?)</);
  return res && res.length > 1 ? res[1] : "No value";
}
Result:
When the above script is used, the "Forward Dividend & Yield" value is retrieved into the cell.
Note:
When this script is used, you can also use =SAMPLE("https://finance.yahoo.com/quote/WBS").
Please note that if the HTML structure of the page is changed, this script might no longer work. I think this situation is the same for IMPORTXML and the xpath, so please be careful about this.
References:
Custom Functions in Google Sheets
Class UrlFetchApp
Another solution is to decode the JSON contained in the source of the web page. Of course you can't use IMPORTXML, since the web page is built on your side by JavaScript and not on the server's side. You can access the data this way and get a lot of information:
var source = UrlFetchApp.fetch(url).getContentText()
var jsonString = source.match(/(?<=root.App.main = ).*(?=}}}})/g) + '}}}}'
For example, for what you are looking for, you can use:
function trailingAnnualDividendRate(){
  var url = 'https://finance.yahoo.com/quote/CBA.AX'
  var source = UrlFetchApp.fetch(url).getContentText()
  var jsonString = source.match(/(?<=root.App.main = ).*(?=}}}})/g) + '}}}}'
  var data = JSON.parse(jsonString)
  var dividendRate = data.context.dispatcher.stores.QuoteSummaryStore.summaryDetail.trailingAnnualDividendRate.raw
  Logger.log(dividendRate)
}

AWS S3 bucket - Moving all XML files from one S3 bucket to another S3 bucket using a Python Lambda

In my case, I want to read all the XML files from my S3 bucket, parse them, and then move all the parsed files to the same S3 bucket.
The parsing logic is working fine for me, but I am not able to move all the files.
This is the example I am trying to use:
s3 = boto3.resource('s3')
src_bucket = s3.Bucket('bucket1')
dest_bucket = s3.Bucket('bucket2')
for obj in src_bucket.objects.all():
    filename = obj.key.split('/')[-1]
    dest_bucket.put_object(Key='sample/' + filename, Body=obj.get()["Body"].read())
The above code is not working for me at all (I have given the S3 folder full access and, for testing, given public full access as well).
Thanks
You could use Python's endswith() function and pass ".xml" to it to get a list of those files, copy them to the destination bucket, and then delete them from the source bucket, as in the sketch below.
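A minimal sketch of that approach, assuming the same bucket names and the sample/ destination prefix from the question's snippet; adjust both to your setup:
import boto3

s3 = boto3.resource('s3')
src_bucket = s3.Bucket('bucket1')
dest_bucket = s3.Bucket('bucket2')

for obj in src_bucket.objects.all():
    # Only move the XML files.
    if not obj.key.endswith('.xml'):
        continue
    filename = obj.key.split('/')[-1]
    # Server-side copy, so the object body never has to pass through the Lambda.
    dest_bucket.copy({'Bucket': src_bucket.name, 'Key': obj.key}, 'sample/' + filename)
    # Remove the original once the copy has succeeded.
    obj.delete()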

AWS S3 API - Objects don't contain metadata

Trying to figure out the AWS S3 API and failing miserably...
I currently have a bucket which contains lots of videos.
I need to request all the videos as objects, each with the video metadata I set when uploading and a link for sharing the video.
The problem is I'm getting the objects without any of the above...
What I've got so far -
AWS.config.update({accessKeyId: 'id', secretAccessKey: 'key', region: 'eu-west-1'});
var s3 = new AWS.S3();
var params = {
    Bucket: 'Bucket-name',
    Delimiter: '/',
    Prefix: 'resource/folder-with-videos/'
}
s3.listObjects(params, function (err, data) {
    if (err) throw err;
    console.log(data);
});
Thanks for reading :)
UPDATE - I found that when using getObject and adding ExposeHeader to the CORS settings, I can indeed get the metadata I set.
The problem is that getObject only works on a specific object (a video in my case).
Any idea how I can get all the objects, as listObjects does, but with the values each object has in getObject?
The only solution I can think of is calling listObjects to get a list of all the objects, and then making a getObject ajax call for each one?... RIP UX
thanks :)
A couple of things to sort out first.
As per the documentation, http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property, the listObjects API returns exactly what is mentioned in the callback parameters. Using a delimiter causes S3 to list objects only one level below the given prefix; it won't traverse all objects recursively.
In order to get a URL, you can use http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getSignedUrl-property. I am not really sure what you meant by metadata.
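To address the follow-up about combining listing with per-object metadata: listing does not return user-defined metadata, so the usual pattern is to list the keys and then make one HEAD request per key. A minimal sketch of that pattern, shown here in Python with boto3 purely for illustration (the corresponding JS SDK calls are listObjects, headObject and getSignedUrl); the bucket and prefix are placeholders taken from the question:
import boto3

s3 = boto3.client('s3')

resp = s3.list_objects_v2(Bucket='Bucket-name', Prefix='resource/folder-with-videos/')
for item in resp.get('Contents', []):
    # Listing only returns key, size, ETag, etc.; the user-defined metadata set
    # at upload time requires a separate HEAD request per object.
    head = s3.head_object(Bucket='Bucket-name', Key=item['Key'])
    metadata = head.get('Metadata', {})
    # A pre-signed URL gives a shareable, time-limited link to the video.
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'Bucket-name', 'Key': item['Key']},
        ExpiresIn=3600,
    )
    print(item['Key'], metadata, url)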
