I'm generating a fog pre-signed URL for AWS using the following snippet:
bucket = "..."
object = "demo.jpg"
expires = Integer(Time.now + 4.hours)
headers = {}
options = { path_style: true }
fog.put_object_url(bucket, object, expires, headers, options)
This works great - except that the uploaded objects aren't accessible to the public. How can a public-read access control list (ACL) be applied to the upload path?
You have to list these extra parameters (e.g. x-amz-acl, Content-Type) under the "query" key of the options hash.
So your example would be:
bucket = "..."
object = "demo.jpg"
expires = Integer(Time.now + 4.hours)
headers = {}
query = {"x-amz-acl" => "public-read"}
options = { path_style: true, query: query }
fog.put_object_url(bucket, object, expires, headers, options)
You have probably solved this by now, but I'm posting it in case anyone else is stuck on this, as the lack of surrounding documentation does not make it very straightforward to implement.
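For completeness: the uploader then issues a plain HTTP PUT against the returned URL. Here is a minimal TypeScript sketch (the URL value and file bytes are hypothetical placeholders; it assumes the ACL travels in the signed query string, as in the answer above):
// Hypothetical client-side upload against the pre-signed URL from put_object_url.
// The signed query string already carries x-amz-acl=public-read, so the client
// just PUTs the bytes to the URL unchanged.
const presignedUrl = "https://s3.amazonaws.com/my-bucket/demo.jpg?..."; // placeholder
const fileBytes = new Blob([/* file contents */], { type: "image/jpeg" });

const response = await fetch(presignedUrl, { method: "PUT", body: fileBytes });
if (!response.ok) {
  throw new Error(`Upload failed with status ${response.status}`);
}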
I'm using Serverless Stack (SST), and I'm now attempting to add a Lambda custom authorizer to validate authorization tokens with Auth0 and add custom data to my request context when authentication passes.
Everything works mostly fine at this point, except for when I cache the Authenticator response for the same token.
I'm using a 5-second cache for development. The first request with a valid token goes through as it should. The next requests in the 5-second window fail with a mysterious 500 error without ever reaching my code.
Authorizer configuration
// MyStack.ts
const authorizer = new sst.Function(this, "AuthorizerFunction", {
  handler: "src/services/Auth/handler.handler",
});

const api = new sst.Api(this, "MarketplaceApi", {
  defaultAuthorizationType: sst.ApiAuthorizationType.CUSTOM,
  defaultAuthorizer: new HttpLambdaAuthorizer("Authorizer", authorizer, {
    authorizerName: "LambdaAuthorizer",
    resultsCacheTtl: Duration.seconds(5), // <-- this is the cache config
  }),
  routes: {
    "ANY /{proxy+}": "APIGateway.handler",
  },
});
Authorizer handler
const handler = async (event: APIGatewayAuthorizerEvent): Promise<APIGatewayAuthorizerResult> => {
  // Authenticates with Auth0 and serializes the context data I want
  // to forward to the underlying service
  const authentication = await authenticate(event);
  const context = packAuthorizerContext(authentication.value);

  const result: APIGatewayAuthorizerResult = {
    principalId: authentication.value?.id || "unknown",
    policyDocument: buildPolicy(authentication.isSuccess ? "Allow" : "Deny", event.methodArn),
    context, // context has the following shape:
    // {
    //   info: {
    //     id: string,
    //     marketplaceId: string,
    //     roles: string,
    //     permissions: string
    //   }
    // }
  };

  return result;
};
CloudWatch logs
Every uncached request succeeds, with status code 200, an integration ID and everything, as it's supposed to. Every other request during the 5-second cache window fails with a 500 error code and no integration ID, meaning it never reaches my code.
Any tips?
Update
I just found this in an api-gateway.d.ts @types file (attention to the comments, please):
// Poorly documented, but API Gateway will just fail internally if
// the context type does not match this.
// Note that although non-string types will be accepted, they will be
// coerced to strings on the other side.
export interface APIGatewayAuthorizerResultContext {
  [name: string]: string | number | boolean | null | undefined;
}
And I did have this problem before I could get the Authorizer to work in the first place. I had my roles and permissions properties as string arrays, and I had to transform them to plain strings. Then it worked.
Lo and behold, I just ran a test right now, removing the context information I was returning for successfully validated tokens, and now the cache is working 😔. Every request succeeds, but I do need my context information...
Maybe there's a max length for the context object? Please let me know of any restrictions on the context object. As the @types file states, that thing is poorly documented. These are the only docs I know about.
The issue is that none of the context object values may contain "special" characters.
Your context object must be something like:
"context": {
"someString": "value",
"someNumber": 1,
"someBool": true
},
You cannot set a JSON object or array as a valid value of any key in the context map. The only valid value types are string, number and boolean.
In my case, though, I needed to send a string array.
I tried to get around the type restriction by JSON-serializing the array, which produced "[\"valueA\",\"valueB\"]" and, for some reason, AWS didn't like it.
TL;DR
What solved my problem was using myArray.join(",") instead of JSON.stringify(myArray)
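To illustrate the workaround, here is a minimal TypeScript sketch (packRoles and unpackRoles are hypothetical helpers, not part of the question's code):
// Arrays have to be flattened to plain strings before being returned in the
// authorizer context, since API Gateway only accepts
// string | number | boolean | null | undefined values there.
const packRoles = (roles: string[]): string => roles.join(",");
const unpackRoles = (packed: string): string[] =>
  packed.length > 0 ? packed.split(",") : [];

// Authorizer side:
const context = { id: "user-123", roles: packRoles(["admin", "seller"]) };

// Downstream handler side:
const roles = unpackRoles(context.roles); // ["admin", "seller"]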
The request to start the client-initiated account linking fails.
The console is showing a WARN of type CLIENT_INITIATED_ACCOUNT_LINKING_ERROR with error: invalid_token.
The URL was generated by a PHP backend system, as described here: https://www.keycloak.org/docs/latest/server_development/#client-initiated-account-linking.
I also made sure to use UTF-8 encoding when generating the hash.
All prerequisites described in that section have been fulfilled.
I'm using Keycloak 15.0.2 and Laravel with Socialite to authenticate users.
This is how the hash is generated:
$keycloack_user = Socialite::driver('keycloak')->user();
$bearerToken = $keycloack_user->token;

// Split the JWT and decode its parts to extract the needed claims
$tokenParts = explode(".", $bearerToken);
$tokenHeader = base64_decode($tokenParts[0]);
$tokenPayload = base64_decode($tokenParts[1]);
$jwtHeader = json_decode($tokenHeader);
$jwtPayload = json_decode($tokenPayload);

$client_id = $jwtPayload->azp;
$host = $jwtPayload->iss;
$session_state = $jwtPayload->session_state;

// The hash input is nonce + session_state + client_id + provider
$nonce = Str::random(20);
$provider = "google";
$input = $nonce . $session_state . $client_id . $provider;
$utf8encoded = utf8_encode($input);
$hashed = hash('sha256', $utf8encoded);

// Base64url-encode the result
$encoded = rtrim(strtr(base64_encode($hashed), '+/', '-_'), '=');
Then the linking URL is constructed as shown below:
$redirect_uri = urlencode(...);
$full_url = $host . "/broker/". $provider ."/link?client_id=". $client_id ."&redirect_uri=". $redirect_uri ."&nonce=". $nonce ."&hash=" . $encoded;
I'm currently testing on my local machine, without using HTTPS for any of the applications. Logging in works fine, and when inspecting the JWT token, the needed role mappings are present:
"account": {
"roles": [
"manage-account",
"manage-account-links",
"view-profile"
]
}
But when accessing the URL, it says "Invalid request" and the Keycloak console indicates the token is invalid.
Update: The solution was to return the result of the hash method as raw binary data:
$hashed = hash('sha256', $utf8encoded, true);
I had to work on the same task lately, but with the client implemented in JavaScript. I was also stuck for quite a while until I realized how unusually Keycloak expects the hash value to be encoded. You need to consider the following two points:
Encode the hash string into hexadecimal before the Base64 conversion
Replace + with - and / with _; besides that, remove trailing = symbols
Below you'll find a working snippet written in JS:
import sjcl from "sjcl";

// Convert a hex string to Base64
function hexToBase64(hexstring) {
  return btoa(hexstring.match(/\w{2}/g).map(function(a) {
    return String.fromCharCode(parseInt(a, 16));
  }).join(""));
}

// Assume nonce, session_state, clientId, provider to be given
var data = nonce + session_state + clientId + provider;
var myBitArray = sjcl.hash.sha256.hash(data);
var hashedData = sjcl.codec.hex.fromBits(myBitArray);
var base64HashedData = hexToBase64(hashedData);

// Make it URL-safe: swap +/ for -_ and strip the = padding
base64HashedData = base64HashedData.replaceAll('+','-').replaceAll('/','_').replaceAll('=','');
base64HashedData is then what you need to pass as the hash query parameter to the link endpoint of Keycloak.
We have an ASP.NET Web API application which uses Swagger/Swashbuckle for its API documentation. The API is secured by Azure AD using OAuth/OpenID Connect. The configuration for Swagger is done in code:
var oauthParams = new Dictionary<string, string>
{
    { "resource", "https://blahblahblah/someId" }
};

GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion(Version, Name);
        c.UseFullTypeNameInSchemaIds();
        c.OAuth2("oauth2")
            .Description("OAuth2 Implicit Grant")
            .Flow("implicit")
            .AuthorizationUrl("https://login.microsoftonline.com/te/ourtenant/ourcustompolicy/oauth2/authorize")
            .TokenUrl("https://login.microsoftonline.com/te/ourtenant/ourcustompolicy/oauth2/token");
        c.OperationFilter<AssignOAuth2SecurityRequirements>();
    })
    .EnableSwaggerUi(c =>
    {
        c.EnableOAuth2Support(_applicationId, null, "http://localhost:49919/swagger/ui/o2c-html", "Swagger", " ", oauthParams);
        c.BooleanValues(new[] { "0", "1" });
        c.DisableValidator();
        c.DocExpansion(DocExpansion.List);
    });
When Swashbuckle constructs the auth URL for login, it automatically adds:
&scope=
However I need this to be:
&scope=openid
I have tried adding this:
var oauthParams = new Dictionary<string, string>
{
    { "resource", "https://blahblahblah/someId" },
    { "scope", "openid" }
};
But this then adds:
&scope=&someotherparam=someothervalue&scope=openid
Any ideas how to add
&scope=openid
to the auth URL that Swashbuckle constructs?
Many thanks
So, I found out what the issue was; the offending code can be found here:
https://github.com/swagger-api/swagger-ui/blob/2.x/dist/lib/swagger-oauth.js
These JS files are from a Git submodule that references an old version of the UI.
I can see on lines 154-158 we have this code:
url += '&redirect_uri=' + encodeURIComponent(redirectUrl);
url += '&realm=' + encodeURIComponent(realm);
url += '&client_id=' + encodeURIComponent(clientId);
url += '&scope=' + encodeURIComponent(scopes.join(scopeSeparator));
url += '&state=' + encodeURIComponent(state);
It basically adds the scope parameter regardless of whether there are any scopes or not. This means you cannot add scopes in the additionalQueryParams dictionary that gets sent into EnableOAuth2Support, as you will get a URL that contains two scope query params, i.e.:
&scope=&otherparam=otherparamvalue&scope=openid
A simple length check around the scopes would fix it.
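For instance, something along these lines (an illustrative sketch of the guard, not a tested patch to swagger-oauth.js):
// Append the scope query parameter only when at least one scope is
// configured, so the URL never contains a bare "&scope=".
function appendScope(url: string, scopes: string[], scopeSeparator: string): string {
  if (scopes.length === 0) return url;
  return url + "&scope=" + encodeURIComponent(scopes.join(scopeSeparator));
}
// appendScope(url, [], " ") returns url unchanged; ["openid"] yields "&scope=openid".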
I ended up removing Swashbuckle from the Web API project and adding a different NuGet package called Swagger-Net, found here:
https://www.nuget.org/packages/Swagger-Net/
This is actively maintained; it resolved the issue and uses a newer version of the Swagger UI. The configuration remained exactly the same; the only thing you need to change is your reply URL, which is now:
http://your-url/swagger/ui/oauth2-redirect-html
I'm trying to integrate Sinch into my RoR web app, and am having some difficulty formatting the signedUserTicket to start the SinchClient.
Here is my view, using HAML:
#{@signedUserTicket}
%script{src: "//cdn.sinch.com/latest/sinch.min.js", type: "text/javascript"}
= javascript_tag do
  $(function(){
    $sinchClient = new SinchClient({
      applicationKey: 'APP_KEY',
      capabilities: {messaging: true, calling: true},
      supportActiveConnection: true,
      onLogMessage: function(message) {
        console.log(message);
      },
    });
    $sinchClient.start({
      'userTicket' : "#{@signedUserTicket}",
    });
  });
And whatever formatting I try in the controller, the closest I get to succeeding is:
DOMException [InvalidCharacterError: "String contains an invalid character"
code: 5
nsresult: 0x80530005
location: http://cdn.sinch.com/latest/sinch.min.js:5]
I'd appreciate a little help and would even build a Rubygem for integrating Sinch in Rails if I get the right info and can spare some time.
Cheers,
James
Edit:
I have tried a few modifications and am getting closer (I think).
The InvalidCharacterError came from the trailing '=' characters, which apparently don't decode well in JavaScript.
My new controller is now:
class SinchController < ApplicationController
  skip_before_filter :verify_authenticity_token
  before_filter :authenticate_user!

  def client
    username = current_user.username
    applicationKey = "APP_KEY"
    applicationSecret = "APP_SECRET_B64"

    userTicket = {
      "identity" => {"type" => "username", "endpoint" => username},
      "expiresIn" => 3600,
      "applicationKey" => applicationKey,
      "created" => Time.now.utc.iso8601
    }

    userTicketJson = userTicket.to_json
    userTicketBase64 = Base64.strict_encode64(userTicketJson).chop

    digest = Digest::HMAC.digest(Base64.decode64(applicationSecret), userTicketJson, Digest::SHA256)
    signature = Base64.strict_encode64(digest).chop

    @signedUserTicket = (userTicketBase64 + ':' + signature).remove('=')
  end
end
But now I'm facing the following error:
POST https://api.sinch.com/v1/instance 500 (Internal Server Error)
client:1 XMLHttpRequest cannot load https://api.sinch.com/v1/instance. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access. The response had HTTP status code 500.
I added Rack::Cors to my Rails server to try to allow cross-domain requests, in case the problem came from my own requests, but whatever configuration I tried, it seems the requests never contain the right headers.
Am I misunderstanding CORS requests? Does the problem come from the requests generated by sinch.min.js?
Regards,
James
The error message is due to Firefox's base64 decoder, which can't decode the token because it contains symbols (such as #) that are not in the base64 character set. This suggests that the ticket is actually not being passed to start(), and that this line may be incorrect:
'userTicket' : "#{@signedUserTicket}",
I don't know HAML, but shouldn't
'userTicket' : "#{@signedUserTicket}",
be
'userTicket' : @signedUserTicket,
instead?
I am using the Fog gem to generate pre-signed URLs. I can do this successfully to get read access to the file. Here's what I do:
fog_s3 = Fog::Storage.new({
  :provider => 'AWS',
  :aws_access_key_id => key,
  :aws_secret_access_key => secret
})
object_path = 'foo.wav'
expiry = Date.new(2014,2,1).to_time.to_i
url = fog_s3.directories.new(:key => bucket).files.new(:key => object_path).url(expiry,path_style: true)
But this doesn't work when I try to upload the file. Is there a way to specify the HTTP verb, so it would be a PUT and not a GET?
EDIT: I see a method, put_object_url, which might help. I don't know how to access it.
Thanks
EDIT based upon your suggestion:
It helped: I now get a PUT, not a GET. However, I'm still having issues. I added a content type:
headers = { "Content-Type" => "audio/wav" }
options = { path_style: true }
object_path = 'foo.wav'
expiry = Date.new(2014,2,1).to_time.to_i
url = fog_s3.put_object_url(bucket,object_path, expiry, headers, options)
but the URL does not contain Content-Type in it. When done from JavaScript in HTML, I get the Content-Type in the URL and that seems to work. Is this an issue with Fog, or is my header incorrect?
I think put_object_url is indeed what you want. If you follow the url method back to where it is defined, you can see it uses a similar underlying method called get_object_url here (https://github.com/fog/fog/blob/dc7c5e285a1a252031d3d1570cbf2289f7137ed0/lib/fog/aws/models/storage/files.rb#L83). You should be able to do something similar, and can do so by calling this method from the fog_s3 object you already created above. It should end up just looking like this:
headers = {}
options = { path_style: true }
url = fog_s3.put_object_url(bucket, object_path, expires, headers, options)
Note that unlike get_object_url, there is an extra headers option snuck in there (which you can use to do things like set Content-Type, I believe).
Hope that sorts it for you, but just let me know if you have further questions. Thanks!
Addendum
Hmm, seems there may be a bug related to this after all (I'm wondering now how much this portion of the code has been exercised). I think you should be able to work around it though (but I'm not certain). I suspect you can just duplicate the value in the options as a query param also. Could you try something like this?
headers = query = { 'Content-Type' => 'audio/wav' }
options = { path_style: true, query: query }
url = fog_s3.put_object_url(bucket, object_path, expires, headers, options)
Hopefully that fills in the blanks for you (and if so we can think some more about fixing that behavior within fog if it makes sense to do so). Thanks!
Instead of using put_object_url, might I suggest that you try using the bucket.files.create action, which takes a Hash of Fog file attributes and returns a Fog::Storage::AWS::File.
I prefer to break it down into a few more steps; here is an example:
fog_s3 = Fog::Storage.new({
  :provider => 'AWS',
  :aws_access_key_id => key,
  :aws_secret_access_key => secret
})

# Define the filename
ext = :wav
filename = "foo.#{ext.to_s}"

# Path to your audio file?
path = "/"

# Define your expiry in seconds
expiry = 1.day.to_i

# Initialize the bucket to store to (using the connection created above)
fog_bucket = fog_s3.directories.get(bucket)

file = {
  :key => "#{filename}",
  :body => IO.read("#{path}#{filename}"),
  :content_type => Mime::Type.lookup_by_extension(ext),
  :cache_control => "public, max-age=#{expiry}",
  :expires => CGI.rfc1123_date(Time.now + expiry),
  :public => true
}

# Returns a Fog::Storage::AWS::File
file = fog_bucket.files.create( file )

# Now retrieve the public URL
url = file.public_url
Note: For subdirectories, check out the :prefix option for an AWS bucket.
Fog File documentation (the optional attributes are at the bottom of the page): http://rubydoc.info/gems/fog/Fog/Storage/AWS/File
Hopefully the example will help explain the steps in creating a Fog file... Cheers! :)