How to use StorageProvider.Download(...) to download private files? - GeneXus

When using "StorageProvider.Download(...)", it seems that I can only download files uploaded to the public storage.
Is there a way to download private uploaded files to my server local storage?
I´m using Azure external storage provider.
Thanks!
Example:
// In this case the file is downloaded to the local filesystem:
&LocalFile.Source = 'LocalFile.txt'
&Result = &StorageProvider.Download('AzurePublicFile.txt', &LocalFile, &Messages)
// In this case the file is not downloaded locally; it only loads a reference to the URI in the Azure blob container:
&LocalFile.Source = 'LocalFile.txt'
&Result = &StorageProvider.GetPrivate('AzurePrivateFile.txt', &LocalFile, 5, &Messages)

Try something like this:
&StorageProvider.GetPrivate(...., &File)
&URL = &File.GetURI()
&HttpClient.Execute('GET', &URL)
&HttpClient.ToFile('C:\....')
Update: StorageProvider.DownloadPrivate() will be available in GeneXus 16 Upgrade 2.

Related

Can't access the blob folder, but the files inside it can be downloaded

I have Azure storage where I am using containers to store blobs. I am trying to download a blob from this container, but whether I use the Python SDK or REST, I get the error "The specified blob does not exist." However, when I give the full path down to the final file (a .txt or whatever) instead of the root folder, it downloads fine.
For example:
the following URL gives the error https://mlflowsmodeltorage.blob.core.windows.net/mlflow-test/110/63e7b9f2482b45e29b8c2983fa9522ef/artifacts/models ("The specified blob does not exist."),
but the URL https://mlflowsmodeltorage.blob.core.windows.net/mlflow-test/110/63e7b9f2482b45e29b8c2983fa9522ef/artifacts/models/conda.yaml downloads the file.
The same thing happens with the Python SDK, but I want to download the whole folder rather than the files inside it.
How can I achieve this?
Below is the code I am using to access the blob with the Python SDK:
from azure.storage.blob import BlobServiceClient
STORAGEACCOUNTURL = "https://mlflowsmodeltorage.blob.core.windows.net"
STORAGEACCOUNTKEY = "xxxxxxxxxxxxxx"
CONTAINERNAME = "mlflow-test"
BLOBNAME = "110/63e7b9f2482b45e29b8c2983fa9522ef/artifacts/models/"
blob_service_client_instance = BlobServiceClient(
    account_url=STORAGEACCOUNTURL, credential=STORAGEACCOUNTKEY,
)
blob_client_instance = blob_service_client_instance.get_blob_client(
    CONTAINERNAME, BLOBNAME, snapshot=None)
blob_data = blob_client_instance.download_blob()
data = blob_data.readall()
print(data)
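Blob storage has no real directories; the "folder" is just a name prefix, so one way to fetch everything under it is to list the blobs with that prefix and download each one. A minimal sketch along those lines, assuming the v12 azure-storage-blob SDK, reusing the constants from the snippet above, and writing into an arbitrary local directory chosen here for illustration:
import os
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(account_url=STORAGEACCOUNTURL, credential=STORAGEACCOUNTKEY)
container = service.get_container_client(CONTAINERNAME)

local_dir = "models_download"  # illustrative local target, not from the question
for blob in container.list_blobs(name_starts_with=BLOBNAME):
    # Mirror the blob's path under local_dir and write its content to disk
    local_path = os.path.join(local_dir, os.path.relpath(blob.name, BLOBNAME))
    os.makedirs(os.path.dirname(local_path), exist_ok=True)
    with open(local_path, "wb") as f:
        f.write(container.download_blob(blob.name).readall())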

How to make Terraform archive_file resource pick up changes to source files?

Using TF 0.7.2 on a Win 10 machine.
I'm trying to set up an edit/upload cycle for development of my lambda functions in AWS, using the new "archive_file" resource introduced in TF 0.7.1
My configuration looks like this:
resource "archive_file" "cloudwatch-sumo-lambda-archive" {
source_file = "${var.lambda_src_dir}/cloudwatch/cloudwatchSumologic.js"
output_path = "${var.lambda_gen_dir}/cloudwatchSumologic.zip"
type = "zip"
}
resource "aws_lambda_function" "cloudwatch-sumo-lambda" {
function_name = "cloudwatch-sumo-lambda"
description = "managed by source project"
filename = "${archive_file.cloudwatch-sumo-lambda-archive.output_path}"
source_code_hash = "${archive_file.cloudwatch-sumo-lambda-archive.output_sha}"
handler = "cloudwatchSumologic.handler"
...
}
This works the first time I run it - TF creates the lambda zip file, uploads it and creates the lambda in AWS.
The problem comes with updating the lambda.
If I edit the cloudwatchSumologic.js file in the above example, TF doesn't appear to know that the source file has changed - it doesn't add the new file to the zip and doesn't upload the new lambda code to AWS.
Am I doing something wrong in my configuration, or is the archive_file resource not meant to be used in this way?
You could be seeing a bug. I'm on 0.7.7 and the issue now is that the SHA changes even when you don't make changes. HashiCorp will be updating this resource to a data source in 0.7.8:
https://github.com/hashicorp/terraform/pull/8492

Passbook save pem file in AWS

I would like to sign a passbook on my Ruby server hosted on AWS. What is the best way to store .pem or .p12 files in AWS, and to retrieve them to sign the passbook?
I'm using the passbook gem from https://github.com/frozon/passbook, but note that in its example it uses files from a local path:
Passbook.configure do |passbook|
  passbook.wwdc_cert = Rails.root.join('wwdc_cert.pem')
  passbook.p12_key = Rails.root.join('key.pem')
  passbook.p12_certificate = Rails.root.join('certificate.pem')
  passbook.p12_password = 'cert password'
end
In my case I want to read them from AWS.
Just use the URL of your files hosted on Amazon, like:
https://<bucket-name>.s3.amazonaws.com/<key>

How to set up Autosmush from GitHub

I have some images inside a bucket hosted on Amazon S3 (http://aws.amazon.com/) and I want to optimise them using Autosmush. I understand the command line to use, as shown below:
./autosmush some-s3-bucket-name/path/to/files
but how do I set it up, once I've cloned the repo from GitHub, to make it work? This is the repo: https://github.com/tylerhall/Autosmush
The autosmush file itself has some basic instructions to follow:
Autosmush requires the Amazon PHP SDK, which is not included in this project.
// To download and install the SDK, follow these steps...
//
// 1) Download the 1.6.x AWS SDK for PHP from here: https://github.com/amazonwebservices/aws-sdk-for-php/releases
// 2) Unzip file
// 3) Inside the unzipped folder, copy the 'sdk-x.x.x' folder into Autosmush's 'lib' folder
// 4) Rename 'sdk-x.x.x' to 'sdk'
// Enter your credentials
define('AWS_S3_KEY', '');
define('AWS_S3_SECRET', '');
That should be enough to start testing the function.

wsadmin upload file from local machine to remote

I'm trying to automate the deployment process and I want to upload some files to WAS using wsadmin (Jython). My question is whether it is possible to upload a file from my standalone wsadmin to a remote WAS server. And if so, is it possible to upload the file somewhere outside the application (e.g. /opt/IBM/WebSphere/AppServer/temp)? I don't want to upload it to a specific profile, but to the server root.
When I'm deploying an application, the war/ear file gets copied to WAS, so is there some mechanism to upload a separate file?
Many thanks
AntAgent allows you to upload any file, provided that the content of the file can fit in memory:
https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.javadoc.doc/web/mbeanDocs/AntAgent.html
In wsadmin you'll need to use the invoke_jmx method of the AdminControl object.
from java.lang import String
import jarray

fileContent = 'hello!'
# Locate the deployment manager's AntAgent MBean
antAgent = AdminControl.makeObjectName(AdminControl.queryNames('WebSphere:*,type=AntAgent,process=dmgr'))
# Convert the content to a Java byte array
str = String(fileContent)
bytes = str.getBytes()
# putScript(String name, byte[] content) writes the bytes to a file on the server
AdminControl.invoke_jmx(antAgent, 'putScript', [String('hello.txt'), bytes], jarray.array(['java.lang.String', '[B'], String))
Afterwards you'll find the 'hello.txt' file in the WAS profile's temp directory. You may use relative paths as well.
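To push a real file from the machine running wsadmin instead of an in-memory string, one option (a sketch, not part of the original answer; it assumes the wsadmin JVM is Java 7 or later, and the file names are placeholders) is to read the local file into a Java byte array and pass it to the same putScript invocation:
# Sketch: read a hypothetical local file into a Java byte array and upload it
from java.lang import String
from java.nio.file import Files, Paths
import jarray

localBytes = Files.readAllBytes(Paths.get('C:/temp/report.xml'))  # placeholder local path
antAgent = AdminControl.makeObjectName(AdminControl.queryNames('WebSphere:*,type=AntAgent,process=dmgr'))
AdminControl.invoke_jmx(antAgent, 'putScript', [String('report.xml'), localBytes], jarray.array(['java.lang.String', '[B'], String))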
