I'm using the Ruby AWS SDK v2 for this. I want to make my object private and then share a link that can be opened in a web browser, so I used pre-signed URLs. But the connection is getting refused for some reason.
bucket = resource.bucket(bucket_name)
object = bucket.object(key)
object.upload_file(file)
client.put_object_acl(bucket: bucket_name, key: key, acl: 'private')
# presigned_url is called on the object, so bucket and key don't need to be passed again
url = object.presigned_url(:get, expires_in: 86400)
The client is an object of Aws::S3::Client initialised with the correct region, credentials, and endpoint (I know these are accurate because I'm able to create a bucket with them). What can be done?
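For comparison, here is the same flow sketched in Python with boto3 (the bucket, key, and file names are placeholders):
import boto3

s3 = boto3.client("s3")  # region, credentials, endpoint configured as usual

# Upload the object and keep it private ('private' is also the default ACL)
s3.upload_file("local_file.txt", "example-bucket", "example-key")
s3.put_object_acl(Bucket="example-bucket", Key="example-key", ACL="private")

# Presign a GET URL that stays valid for 24 hours
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "example-key"},
    ExpiresIn=86400,
)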
I am currently writing an S3 multipart upload program where the client side ideally makes only one request to the server side to complete the upload. This means the server side:
creates the multipart upload
creates presigned URLs for the part uploads
creates presigned URLs for CompleteMultipartUpload/AbortMultipartUpload
returns all those URLs to the client side for the client to make all those requests.
Currently I have everything working except the CompleteMultipartUpload/Abort presigned URL. I get the error:
<Response [403]>
The request signature we calculated does not match the signature you provided. Check your key and signing method.
My code looks something like this
Server side in Go:
completeInput := &s3.CompleteMultipartUploadInput{
    Bucket:   aws.String(BUCKET),
    Key:      aws.String(FILE),
    UploadId: uploadId,
    MultipartUpload: &s3.CompletedMultipartUpload{
        Parts: completedParts,
    },
}
resp, _ := svc.CompleteMultipartUploadRequest(completeInput)
// Note that calling resp.Send() works and uploads the file correctly
completeUrl, _ := resp.Presign(15 * time.Minute)
Then I call the client-side code in Python:
import requests

# Note: I also upload from this environment, so I believe my credentials are good to go
def complete(completeUrl):
    complete = requests.put(completeUrl)
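One detail that may matter here: CompleteMultipartUpload is an HTTP POST carrying an XML body that lists each part number and ETag, and with SigV4 the HTTP method is part of the signed canonical request (I believe the body itself is not, since presigned S3 requests use an unsigned payload). So a URL presigned for POST will fail with a signature mismatch if it's called with PUT. A sketch of a matching client call (the parts list is a placeholder):
import requests

def complete(completeUrl, parts):
    # parts: [(part_number, etag), ...] collected from the part-upload responses
    body = "<CompleteMultipartUpload>"
    for number, etag in parts:
        body += f"<Part><PartNumber>{number}</PartNumber><ETag>{etag}</ETag></Part>"
    body += "</CompleteMultipartUpload>"
    # The URL was presigned for POST, so the request must be a POST as well
    return requests.post(completeUrl, data=body)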
I've set up an Azure Functions proxy (using proxies.json). It should pick up the value of the original request's url query string parameter and use it as the value for backendUri. The goal is that a call to the proxy returns the response of calling the URL in the url query string parameter directly. I need this because of CORS.
Here's my proxies.json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "proxy1": {
      "debug": true,
      "matchCondition": {
        "methods": ["GET"],
        "route": "/proxy/"
      },
      "backendUri": "{request.querystring.url}"
    }
  }
}
When I call the proxy using https://not-an-actual-url.azurewebsites.net/proxy/?url=https://stackoverflow.com I'm getting back a 404. Same if I encode the value of the url parameter. If I set the backendUri in proxies.json to a static URL instead of trying to use the query string, it works, however.
To summarize, I want the value of backendUri to depend on the URL of the original request. As stated in the docs this should be possible. Quote from the docs:
Set the backend URL to another endpoint. This endpoint could be a function in another function app, or it could be any other API. The value does not need to be static, and it can reference application settings and parameters from the original client request.
Judging from your problem description, you don't seem to have a real HttpTrigger; you want to use the function app as a server that forwards requests to an arbitrary address, right?
I don't think you can dynamically take the URL from the request and apply it in proxies.json. That file is loaded when the function app starts, so the request's information can't get into it at that point; the backendUri value is read as a plain string, and if it isn't a direct URL it can't be resolved.
For CORS, you can find a free public forwarding service, or build a forwarding server yourself; the proxies.json of a function app probably can't realise your idea.
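A minimal sketch of such a forwarding endpoint as an HTTP-triggered Azure Function in Python (the query parameter name and the wide-open CORS header are assumptions for illustration):
import azure.functions as func
import requests

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Read the target address from the ?url=... query string parameter
    target = req.params.get("url")
    if not target:
        return func.HttpResponse("Missing url parameter", status_code=400)

    upstream = requests.get(target)
    # Relay the upstream body and add a permissive CORS header
    return func.HttpResponse(
        body=upstream.content,
        status_code=upstream.status_code,
        headers={"Access-Control-Allow-Origin": "*"},
    )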
Is it possible to read and use the data values of a Kubernetes secret from within a Kubernetes Go operator? Specifically, I need the reconcile function to be able to make a call out to a private GitHub repository. The authorization token is stored in a Kubernetes secret, so I need to load that secret and extract the token from the secret data.
I am able to get the secret:
secret := &corev1.Secret{}
err = r.client.Get(context.TODO(), secretNamespaceName, secret)
reqLogger.Info("Secret", "object", secret, "data", secret.Data)
The output shows the correct secret data:
"data":{"Token":"<the token string>"}
I was expecting that I could then just use the secret.Data["Token"] as a string in the github request Authorization header:
reqLogger.Info(string(secret.Data["Token"]))
req.Header.Add("Authorization", "Token" + " " + string(secret.Data["Token"]))
This isn't working, and the log shows the token string as a non-readable series of Unicode characters.
Is there some decode step or similar that I am missing here or is this even possible?
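For reference, my understanding of the round trip, sketched in Python (the token value is a made-up placeholder): client-go base64-decodes the data: values, so secret.Data should already hold the raw token bytes, and garbage usually means the value was never base64-encoded (or was encoded twice) when the secret was created.
import base64

token = "ghp_exampleToken123"  # hypothetical token value

# What belongs in the manifest's data: field: base64 of the raw token
encoded = base64.b64encode(token.encode()).decode()

# What client-go hands back in secret.Data: the decoded bytes
decoded = base64.b64decode(encoded).decode()
assert decoded == token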
I would like to apply a reverse proxy on top of S3 to make content serving decisions based on the incoming request.
The S3 bucket is set to website mode and hosted publicly.
I'll obviously have more logic to determine where I am getting the files from, but hopefully this will illustrate my desire.
This is using JavaScript, happy to use Go as well.
The following code does not work, but I'm not sure how best to get it working. Can I just send an arrayBuffer through?
module.exports.handler = async (event, context) => {
  const data = await fetch(S3WebsiteURL + event.path)
  const buffer = await data.arrayBuffer()
  return {
    headers: data.headers,
    body: buffer,
    statusCode: 200
  }
}
I would use https://www.npmjs.com/package/request-promise
var rp = require('request-promise');

const htmlString = await rp(S3WebsiteURL + event.path);
return {
  body: htmlString,
  statusCode: 200
}
Try without headers and if it works, add header support.
I've found it difficult to use Lambda to proxy data - with API Gateway, at least, it expects binary data in base64 format at various points depending on how you set it up. They've improved things since I last tried it that way, so hopefully someone else can answer based on more recent experience.
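If you do proxy the content through Lambda behind API Gateway, the usual shape is to base64-encode the body and set the isBase64Encoded flag (binary media types must also be enabled on the API). A sketch in Python, with the website URL as a placeholder:
import base64
import urllib.request

S3_WEBSITE_URL = "http://example-bucket.s3-website-us-east-1.amazonaws.com"  # placeholder

def handler(event, context):
    # Fetch the object from the S3 website endpoint using the request path
    with urllib.request.urlopen(S3_WEBSITE_URL + event["path"]) as resp:
        body = resp.read()
        content_type = resp.headers.get("Content-Type", "application/octet-stream")

    # API Gateway's Lambda proxy integration expects binary bodies as base64
    return {
        "statusCode": 200,
        "headers": {"Content-Type": content_type},
        "body": base64.b64encode(body).decode(),
        "isBase64Encoded": True,
    }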
If your content serving decisions are limited to access control (you don't need to transform the data you're serving), then you can use your lambda as a URL provider instead of a content provider - switch public sharing of the S3 bucket off, access the items using the S3 API, and have your lambda call S3.getSignedUrl() to get a link to the actual content. This means that only the callers of the lambda will have a valid URL to the content you want to protect, and depending on your application you can set the timeout on the pre-signed URL to be short enough you don't have to worry about it being shared.
The advantage here is that since the content itself doesn't get proxied through the lambda, your lambda runtime and memory costs can be lower and performance should be better.
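The same URL-provider pattern sketched in Python with boto3 (the bucket name, key derivation, and the redirect response are all illustrative choices):
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Presign a short-lived GET link instead of proxying the content itself
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "example-bucket", "Key": event["path"].lstrip("/")},
        ExpiresIn=300,  # short enough that a shared link expires quickly
    )
    # Redirect the caller straight to the signed link
    return {"statusCode": 302, "headers": {"Location": url}, "body": ""}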
A couple of OAuth tutorials use the Flask session to store state parameters and access tokens (Brendan McCollam's very useful presentation from PyCon is an example).
I understand that Flask stores the session in cookies on the client side and that they are fairly easy to expose (see Miguel Grinberg's how-secure-is-the-flask-user-session). I tried this myself and was able to see the token, the expiration, etc.
Is it correct to store the state and tokens in the Flask session, or should they be stored somewhere else?
Code example:
from flask import Flask, session, redirect, url_for, request, render_template
from requests_oauthlib import OAuth2Session
import requests

app = Flask(__name__)

@app.route('/login', methods=['GET'])
def login():
    provider = OAuth2Session(
        client_id=CONFIG['client_id'],
        scope=CONFIG['scope'],
        redirect_uri=CONFIG['redirect_uri'])
    url, state = provider.authorization_url(CONFIG['auth_url'])
    session['oauth2_state'] = state
    return redirect(url)

@app.route('/callback', methods=['GET'])
def callback():
    provider = OAuth2Session(
        CONFIG['client_id'],
        redirect_uri=CONFIG['redirect_uri'],
        state=session['oauth2_state'])
    token_response = provider.fetch_token(
        token_url=CONFIG['token_url'],
        client_secret=CONFIG['client_secret'],
        authorization_response=request.url)
    session['access_token'] = token_response['access_token']
    session['access_token_expires'] = token_response['expires_at']
    transfers = provider.get('https://transfer.api.globusonline.org/v0.10/task_list?limit=1')
    return redirect(url_for('index'))

@app.route('/')
def index():
    if 'access_token' not in session:
        return redirect(url_for('login'))
    transfers = requests.get('https://transfer.api.globusonline.org/v0.10/task_list?limit=1',
                             headers={'Authorization': 'Bearer ' + session['access_token']})
    return render_template('index.html.jinja2',
                           transfers=transfers.json())
I think some tutorials over-simplify in order to keep the code short. A good rule of thumb is to use session cookies only for information that MUST be known by your application and your user's browser, and is not private. That normally translates into a session ID and possibly other non-sensitive information such as a language selection.
Applying that rule of thumb, here is my suggestion for each of the tokens:
Authorization Token: this data is by definition known to both the user and the application, so it shouldn't be a security concern to expose it in the cookie. However, there really is no need to keep this token once you're given an access code, so I advise against keeping it locally or in your cookies.
Access Code: this data must be considered secret, and must only be known by your application and the provider. There is no reason to make it known to any other parties, including the user, therefore it should NOT be included in cookies. If you need to store it, keep it locally on your servers (perhaps in your database, keyed by your user's session ID).
CSRF State Token: this data is ideally included as a hidden form field and validated against a server-side variable, so cookies seem like an unnecessary complication. But I wouldn't be concerned about this data being in a cookie, since it's part of the response anyway.
Keep in mind there are extensions such as Flask-Session, with which practically the same code stores session data server-side instead of in cookie variables; see the sketch below.
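A minimal sketch of that switch using the Flask-Session extension (filesystem storage here is just for illustration; Redis or a database are common in production):
from flask import Flask, session
from flask_session import Session

app = Flask(__name__)
app.config['SESSION_TYPE'] = 'filesystem'  # keep session data on the server
Session(app)

# session['access_token'] = ... works exactly as before, but the cookie now
# carries only a session ID; the tokens themselves never leave the server.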