I'm trying to generate a presigned url for an s3 bucket using Ruby.
client = Aws::S3::Client.new(
  region: 'eu-west-1', # or any other region
  access_key_id: ENV['AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
)
signer = Aws::S3::Presigner.new(client: client)
url = signer.presigned_url(
  :put_object,
  bucket: ENV['S3_PROFILES_BUCKET'],
  key: "test-#{SecureRandom.uuid}"
)
I then take the URL that is returned, something like:
"https://some-bucket.s3.eu-west-1.amazonaws.com/test-4ad40444-e907-4748-a025-a12515580450?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATTSSBDQFDFFX36UU4%2F20191204%2Feu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20191204T002242Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=31b0a90127f43e79462713b101b5fc80146c50f800cfce31c493d206ea142333"
When I make a POST (or PUT) request to this URL with an image binary (I'm using Postman), I get an error saying the signature is not correct.
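For reference, here is a minimal stdlib-only sketch of what a request against such a URL looks like (the URL below is a placeholder for the string returned by signer.presigned_url; the sending line is commented out since the URL is fake). The key point: the HTTP verb is part of what gets signed, so a URL presigned for :put_object is only valid for PUT, and a POST produces a signature mismatch.

```ruby
require 'net/http'
require 'uri'

# Placeholder for the presigned URL returned by signer.presigned_url(:put_object, ...)
presigned_url = 'https://some-bucket.s3.eu-west-1.amazonaws.com/test-key?X-Amz-Signature=abc'
uri = URI(presigned_url)

# The verb must be PUT: the method is included in the SigV4 signature,
# so a POST to a :put_object URL fails with SignatureDoesNotMatch.
request = Net::HTTP::Put.new(uri.request_uri)
request.body = 'raw image bytes'  # the file body itself, not a multipart form field
request['Content-Type'] = 'application/octet-stream'  # unsigned headers are not checked here

# Sending it (disabled in this sketch because the URL is a placeholder):
# response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
```

In Postman the equivalent is choosing PUT and a "binary" body, not "form-data".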
We are moving away from the DocuSign::Esign gem and are trying to make the API calls directly, following the "How to get an access token with JWT Grant authentication" instructions. Consent was already granted for this application when we originally set it up with the DocuSign::Esign gem.
I am getting the following error:
{"error"=>"invalid_grant", "error_description"=>"unsupported_grant_type"}
I am using Ruby and am running this code in the console:
config = Padrino.config.docusign
current_time = Time.now.utc
header = {
  typ: 'JWT',
  alg: 'RS256'
}
body = {
  iss: config.integrator_key,
  sub: config.user_id,
  iat: current_time.to_i,
  exp: (current_time + 1.hour).to_i,
  aud: config.host,
  scope: 'signature impersonation'
}
key_file = "-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAiOMDM5jdGYTEOC/nFVUTQ3+5U2TCUpEKyUD+mByldDbgvT9q
. . .
jDjfX6L15x8JcY9eiXvCvZNF6Za2dg8cagK+ff5d6KLodmVFD5o=
-----END RSA PRIVATE KEY-----"
private_key = OpenSSL::PKey::RSA.new(key_file)
token = JWT.encode(body, private_key, 'RS256')
uri = 'https://account-d.docusign.com/oauth/token'
data = {
  grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',
  assertion: token
}
auth_headers = { content_type: 'application/x-www-form-urlencoded' }
However, when I call the API, I get a RestClient::BadRequest error:
begin
  RestClient.post(uri, data.to_json, auth_headers)
rescue RestClient::BadRequest => e
  JSON.parse(e.http_body)
end
=> {"error"=>"invalid_grant", "error_description"=>"unsupported_grant_type"}
I am not sure what I am doing wrong. The JWT decodes correctly when I check it at https://jwt.io/. I am using the grant_type exactly as given in the documentation.
Hmmm,
The scope claim only needs to be signature (impersonation is implied, since you're using the JWT grant flow).
For the aud claim, what is config.host? It should be account-d.docusign.com for the developer system (do not include https://).
Your main error is that you are sending the data hash as JSON. That's wrong: it must be sent as URL-encoded form data. Try
RestClient.post(uri, data, auth_headers)
instead. (Don't convert the data to JSON.)
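The difference between the two encodings can be shown with only the standard library (the assertion value below is a placeholder for the signed JWT). When RestClient is given the hash itself, it produces the form-encoded body automatically:

```ruby
require 'uri'
require 'json'

data = {
  grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',
  assertion: 'eyJhbGciOiJSUzI1NiJ9.payload.signature'  # placeholder for the real JWT
}

# What the token endpoint expects: application/x-www-form-urlencoded pairs.
form_body = URI.encode_www_form(data)

# What data.to_json produces: a JSON document the endpoint does not parse as
# a form, so it sees no grant_type at all and answers unsupported_grant_type.
json_body = data.to_json
```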
As SoftLayer (IBM Cloud) has moved from Swift-based object storage to S3-based Cloud Object Storage, I am using fog/aws instead of fog/softlayer.
Below is the code:
require 'fog/aws'
fog_properties = {
  provider: 'AWS',
  aws_access_key_id: username,
  aws_secret_access_key: api_key
}
client = Fog::Storage.new(fog_properties)
client.directories
But it failed even with a valid key and id:
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>####</AWSAccessKeyId><RequestId>####</RequestId><HostId>##</HostId></Error>
The endpoint IBM COS uses is "https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints".
When I tried to use fog alone (require 'fog'), it throws the below error:
Unable to activate google-api-client-0.23.9, because mime-types-2.99.3 conflicts with mime-types (~> 3.0) (Gem::ConflictError)
Please suggest how to resolve these issues.
https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints
This is not an endpoint but a list of endpoints in JSON.
Choose the endpoint for your bucket location.
For example, if your bucket is in us-south, the public endpoint is
https://s3.us-south.cloud-object-storage.appdomain.cloud
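Picking that endpoint out of the document programmatically might look like the sketch below. The JSON here is an illustrative, heavily abbreviated sample of the endpoints document, not the real response, which lists many regions and both public and private endpoints:

```ruby
require 'json'

# Abbreviated, illustrative shape of the JSON served at
# https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints
endpoints_json = <<~JSON
  {
    "service-endpoints": {
      "regional": {
        "us-south": {
          "public": { "us-south": "s3.us-south.cloud-object-storage.appdomain.cloud" }
        }
      }
    }
  }
JSON

doc = JSON.parse(endpoints_json)
host = doc.dig('service-endpoints', 'regional', 'us-south', 'public', 'us-south')
endpoint = "https://#{host}"
```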
The following code worked for IBM Cloud Object Storage:
properties = {
  region: region,
  endpoint: URI('https://s3.us-south.cloud-object-storage.appdomain.cloud'),
  credentials: Aws::Credentials.new(access_key_id, secret_access_key)
}
Aws.config.update(properties)
client = Aws::S3::Client.new
Properties for the config can also be set as ENV variables.
Below are a few basic operations performed on COS.
List all the bucket names
client.list_buckets.buckets.map(&:name)
Create a bucket
client.create_bucket(bucket: bucket_name)
Upload a file
client.put_object(bucket: bucket_name, key: object_key, body: file_contents)
Download a file
client.get_object(bucket: bucket_name, key: object_key)
Delete a file
client.delete_object(bucket: bucket_name, key: object_key)
Delete a bucket
client.delete_bucket(bucket: bucket_name)
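As a sketch of the ENV-variable route mentioned above: the variable names below are the standard ones the aws-sdk gem reads on its own, and the values are placeholders. Endpoint handling via environment variables varies by SDK version, so the endpoint is still safest to pass in code:

```ruby
# Standard variables the aws-sdk gem picks up automatically; with these set,
# region and credentials can be dropped from the properties hash.
ENV['AWS_REGION']            = 'us-south'
ENV['AWS_ACCESS_KEY_ID']     = 'my-access-key-id'       # placeholder
ENV['AWS_SECRET_ACCESS_KEY'] = 'my-secret-access-key'   # placeholder

# With those set, only the endpoint still needs to be passed explicitly:
# client = Aws::S3::Client.new(endpoint: 'https://s3.us-south.cloud-object-storage.appdomain.cloud')
```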
So I am trying to write a simple script to connect to AWS S3 and create a bucket, but I keep getting Access Denied (Aws::S3::Errors::AccessDenied).
This is my code:
require 'aws-sdk'
require 'csv'

def test
  creds = CSV.read('accessKeys.csv')
  s3_client = Aws::S3::Client.new(
    region: 'us-west-2',
    credentials: Aws::Credentials.new(creds[1][0], creds[1][1])
  )
  s3 = Aws::S3::Resource.new(client: s3_client)
  s3.create_bucket(bucket: "dns-complaint-bucket")
end

test
I have also attached AmazonS3FullAccess policy to the IAM user that I am using.
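One thing worth ruling out is how the credentials file is parsed. Assuming the standard layout of the accessKeys.csv downloaded from the IAM console (a header row followed by one row of values), reading it with headers makes the intent explicit instead of relying on positional indexes like creds[1][0]. The file contents below are fake, illustrative values:

```ruby
require 'csv'
require 'tempfile'

# Illustrative stand-in for the accessKeys.csv downloaded from the IAM console.
csv = Tempfile.new(['accessKeys', '.csv'])
csv.write("Access key ID,Secret access key\nAKIAIOSFODNN7EXAMPLE,wJalrXUtnFEMIK7MDENGbPxRfiCYEXAMPLEKEY\n")
csv.rewind

# headers: true skips the header row and lets us address columns by name,
# avoiding accidentally using the literal header strings as credentials.
row = CSV.read(csv.path, headers: true).first
access_key_id     = row['Access key ID']
secret_access_key = row['Secret access key']
```

If the keys parse correctly and AmazonS3FullAccess is attached, the denial usually comes from outside the script (e.g. an explicit deny in a service control or permissions boundary policy).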
I'm unable to upload to a folder by providing an id for that folder using the Boxr gem. Previously I didn't have the enterprise settings as shown in this post, which I have now fixed. I'm creating a token using the JWT authentication get_user_token method the following way:
token = Boxr::get_user_token("38521XXXX", private_key: ENV.fetch('JWT_PRIVATE_KEY'), private_key_password: ENV.fetch('JWT_PRIVATE_KEY_PASSWORD'), public_key_id: ENV.fetch('JWT_PUBLIC_KEY_ID'), client_id: ENV.fetch('BOX_CLIENT_ID'), client_secret: ENV.fetch('BOX_CLIENT_SECRET'))
I then pass this token when creating a client:
client = Boxr::Client.new(token)
When I check the current user on the client, this is what I get:
client.current_user
=> {"type"=>"user",
"id"=>"60853XXXX",
"name"=>"OnlineAppsPoC",
"login"=>"AutomationUser_629741_06JgxiPtPj#boxdevedition.com",
"created_at"=>"2018-10-04T08:41:32-07:00",
"modified_at"=>"2018-10-04T08:41:50-07:00",
"language"=>"en",
"timezone"=>"America/Los_Angeles",
"space_amount"=>10737418240,
"space_used"=>0,
"max_upload_size"=>2147483648,
"status"=>"active",
"job_title"=>"",
"phone"=>"",
"address"=>"",
"avatar_url"=>"https://app.box.com/api/avatar/large/6085300897"}
When I run client.methods I see there is a folder_from_id method; however, when I call it I get the following error:
pry(#<FormsController>)> client.folder_from_id("123456", fields: [])
Boxr::BoxrError: 404: Not Found
from /usr/local/bundle/gems/boxr-1.4.0/lib/boxr/client.rb:239:in `check_response_status'
I have the following settings:
I have also authorized the application. Not sure what else to do.
Note that the client is built from token.access_token rather than the token object returned by get_user_token:
token = Boxr::get_user_token(user_id,
private_key: ENV.fetch('JWT_PRIVATE_KEY'),
private_key_password: ENV.fetch('JWT_PRIVATE_KEY_PASSWORD'),
public_key_id: ENV.fetch('JWT_PUBLIC_KEY_ID'),
client_id: ENV.fetch('BOX_CLIENT_ID'),
client_secret: ENV.fetch('BOX_CLIENT_SECRET'))
client = Boxr::Client.new(token.access_token)
folder = client.folder_from_id(folder_id)
client.upload_file(file_path, folder)
For anybody using C# and Box JWT:
You just need a BoxManager set up, and it will get you anything you need, say a BoxFile, a Folder, etc.
If you have the folder ID, well and good; but if you need to retrieve it, that can be done as shown below:
string inputFolderId = _boxManager.GetFolder(RootFolderID).Folders.Where(i => i.Name == boxFolder).FirstOrDefault().Id; //Retrieves FolderId
Folder inputFolder = _boxManager.GetFolder(inputFolderId);
Hello, I need some help sending a PUT request to my Elasticsearch domain on AWS to create a snapshot in an S3 bucket, using Postman.
I have created an S3 bucket called cb-search-es-backup.
I've created a role and a policy for S3 (see this post of mine for the steps I've taken).
REQUEST URL https://myelasticsearchendpoint.eu-west-1.es.amazonaws.com/
REQUEST METHOD: PUT
BODY : RAW / json
{
"type": "s3",
"settings": {
"bucket": "cb-search-es-backup", // my bucketname
"region": "eu-west-1", // region
"role_arn": "arn:aws:iam::12345676890:role/Role_ES_TO_S3" // my role arn
}
}
I've also tried the authorization type 'AWS Signature', with the access and secret key filled in.
It looks like you are not passing AWS credentials with this request.
There is a detailed guide on how to make a Postman request with AWS authentication here: Use Postman to Call an API.
Your Postman window might look like this:
To do the same from Python, check out the Sample python client section of this documentation page; note that an AWS4Auth object is created and passed as the auth parameter to requests.put():
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)
# Register repository
path = '_snapshot/my-snapshot-repo' # the Elasticsearch API endpoint
url = host + path
payload = {
...
}
headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers)
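For intuition about what Postman's 'AWS Signature' auth (or AWS4Auth above) computes, here is a stdlib-only Ruby sketch of just the Signature Version 4 signing-key derivation described in the AWS docs. The inputs are placeholders; the derived key is then used to HMAC the canonical request, which the tools handle for you:

```ruby
require 'openssl'

def hmac_sha256(key, data)
  OpenSSL::HMAC.digest('sha256', key, data)
end

# SigV4 key derivation: a chain of HMACs over the secret key, request date
# (YYYYMMDD), region, and service, terminated with the literal "aws4_request".
def sigv4_signing_key(secret_access_key, date_yyyymmdd, region, service)
  k_date    = hmac_sha256("AWS4#{secret_access_key}", date_yyyymmdd)
  k_region  = hmac_sha256(k_date, region)
  k_service = hmac_sha256(k_region, service)
  hmac_sha256(k_service, 'aws4_request')
end

# Placeholder credentials and request scope for an Elasticsearch call:
signing_key = sigv4_signing_key('wJalrXUtnFEMIK7MDENGbPxRfiCYEXAMPLEKEY',
                                '20191204', 'eu-west-1', 'es')
```

Because the region and service are baked into the key, a request signed for one region or service cannot be replayed against another.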