Fine Uploader: set ACL to public-read

My upload script works fine except when I try to set the ACL to public-read:
objectProperties: {
  acl: "public-read",
  // other properties
},
I get "access denied" when uploading. If I remove this, or change it to acl: "private", it works.
I think it has something to do with S3 CORS, or maybe the headers being sent. Could you please set me straight?

I worked it out, sorry, my bad. The IAM user did not have s3:PutObjectAcl provisioned.
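For anyone hitting the same error, a minimal sketch of an IAM policy statement that grants the missing permission is below; the statement ID and bucket name are placeholders, not values from the original setup:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUploadWithAcl",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::your-upload-bucket/*"
    }
  ]
}
Without s3:PutObjectAcl the IAM user can still upload objects, which is why removing the acl property made the upload succeed.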

Related

Auth0 with Cypress gives Content Security Policy directive: "frame-ancestors 'none'"

I have been trying to run Cypress against an implementation of Auth0 on our website. I have tried plenty of things that the Auth0 community has already suggested, but nothing seems to work.
Here is the problem:
When I visit my URL endpoint, it redirects me to the Auth0 login page, where I can enter my username and password to log in. Doing this manually is not a problem, but when I do it with cy.get or cy.visit, I get the following error:
Refused to frame 'auth0tenant.auth0.com' because an ancestor violates the following Content Security Policy directive: "frame-ancestors 'none'".
I have tried adding the following headers to my request:
X-Frame-Options: deny
Content-Security-Policy: frame-ancestors 'none'
This does not work.
Following this blog that Auth0 has for Cypress does not work either.
I have also set clickjacking to ON in my Auth0 tenant settings. That did not do the job either.
I have also tried setting the cookies in local storage exactly the way the browser sets them, but that doesn't work either. Basically, I get the JWT, decode it, and add the audience and domain values to local storage in my browser. But the page itself fails to load because of the CSP error, which I have not been able to fix.
Any help in the area would be highly appreciated.
Thank you!
I had the same problem. What I had to do was disable Clickjacking Protection for Universal Login at the tenant level in Auth0.
Refer to this link for where and how to configure: https://auth0.com/docs/troubleshoot/product-lifecycle/past-migrations/clickjacking-protection-for-universal-login
The "New" login page always drops the Content Secuirty Policy and X-frame: deny headers
Switch back to the "classic"

Manually enter auth credentials in dialog

I need Cypress to authenticate an XHR request made in my app. The authentication is not Basic but Digest, which has made finding help harder.
There also seems to be a bug when authenticating requests like this:
cy.request({
  url: 'http://postman-echo.com/digest-auth',
  auth: { user: 'postman', pass: 'password', sendImmediately: false },
})
https://github.com/cypress-io/cypress/issues/2128
I'm wondering if there is a temporary workaround that involves having Cypress enter the credentials in the dialog manually.
I've looked into listening to events such as window:alert and window:confirm, but these don't seem to catch the auth dialog.
TL;DR: How can I make Cypress enter the credentials manually in the dialog?
cy.visit currently only supports sending Basic HTTP auth, but I've opened a feature request for digest auth: https://github.com/cypress-io/cypress/issues/4669
In the meantime, the workaround would be to disable HTTP authentication in your local testing environment.
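For reference, the Basic flavour that cy.visit does support is passed through its auth option; a minimal sketch (URL and credentials are placeholders) looks like this:
// Basic HTTP auth via cy.visit's auth option; Digest is not supported, per the issue above
cy.visit('http://yoururl/', {
  auth: {
    username: 'username',
    password: 'password',
  },
})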
I tried this:
Try sending the username and password in the URL.
Send auth headers with your request.
It worked fine in the example below:
https://www.youtube.com/watch?v=0HAn6B4E-Kc
You would probably need something like this:
cy.visit('http://yoururl/')
cy.get('input#username').type('username')
cy.get('input#password').type('password')
cy.get('button#signin').click()
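Hypothetical sketches of the two suggestions above, with placeholder URLs and credentials; note that both only cover Basic auth, so a Digest-protected endpoint would still need a properly computed Authorization header:
// Option 1: embed the credentials directly in the URL
cy.visit('http://username:password@yoururl/')

// Option 2: send the Authorization header yourself with cy.request
cy.request({
  url: 'http://yoururl/api/resource',
  headers: {
    Authorization: 'Basic ' + btoa('username:password'),
  },
})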

How can I add an email adapter to my Amazon EC2 instance to enable password resets for my app

The steps that I have already done are as follows:
1) I've set up my EC2 instance already.
2) I've linked it up to AWS CodeDeploy.
3) I've created an S3 bucket that will hold my cloud code when I push it to my instance.
4) I've created a folder that will contain my cloud code.
5) I've initialised npm within it and created an index.js file (it is a Sublime Text file actually; not sure if this is correct or not?).
6) I've set things up from the command line so that index.js is the main entry point.
7) I have put the following email adapter code within it:
var server = new ParseServer({
  // Enable email verification
  verifyUserEmails: true,

  // The public URL of your app.
  // This will appear in the link that is used to verify email addresses and reset passwords.
  // Set the mount path as it is in serverURL
  publicServerURL: 'etc',

  // Your app's name. This will appear in the subject and body of the emails that are sent.
  appName: 'etc',

  // The email adapter
  emailAdapter: {
    module: 'parse-server-simple-mailgun-adapter',
    options: {
      // The address that your emails come from
      fromAddress: 'parse@example.com',
      // Your domain from mailgun.com
      domain: 'example.com',
      // Your API key from mailgun.com
      apiKey: 'key-mykey',
    }
  }
});
8) I've set up an account with Mailgun, and it's asking me for a domain to send the emails from. I'm not sure what this is.
My main question is regarding the code I posted above. Is this enough to put into index.js to create an email adapter? And is uploading a Sublime Text file OK? How can the cloud know what the "ParseServer" class is without importing libraries? Do I have to add any more code to index.js?
Additionally, what else do I need in the cloud code package besides the index.js file? This has been such an obscure topic, and there seem to be no clear guides online on how to upload functional cloud code to Amazon EC2 instances.
Any help appreciated, cheers.
Part of your steps are correct, but you must also modify the from address and the domain. Your domain (to answer your question) must be taken from your Mailgun account. You must follow some extra steps in order to set up your domain with your DNS provider (e.g. GoDaddy etc.). If you don't want to do that, you can try the default sandbox domain that Mailgun provides, but it's better to use your own domain. In the from field you need to put an email address so that users who receive the email can see which address the message was sent from; usually what I like to use is donotreply@<your domain> (<your domain> is your domain, of course).
In my project, this is how I configure it (and it works):
"verifyUserEmails": true,
"emailAdapter": {
"module": "parse-server-simple-mailgun-adapter",
"options": {
"fromAddress": "donotreply#*******.com",
"domain": "mail.*******.com",
"apiKey": "<API_KEY_TAKEN_FROM_MAILGUN>"
}
},
Your list of domains in Mailgun can be found here: https://app.mailgun.com/app/domains (login is required, of course).
Here you can read how to verify your domain: https://documentation.mailgun.com/en/latest/user_manual.html#verifying-your-domain
Hope it helps.
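To answer the part about how index.js knows what ParseServer is: you have to require the parse-server module and mount it on an Express app, so the config block alone is not enough. Below is a minimal sketch of a complete index.js; all ids, keys, URLs, and domains are placeholders rather than values from the original post:
// index.js - a minimal Parse Server setup with the Mailgun email adapter
// (all ids, keys, URLs and domains below are placeholders)
var express = require('express');
var ParseServer = require('parse-server').ParseServer;

var api = new ParseServer({
  databaseURI: 'mongodb://localhost:27017/dev', // your MongoDB connection string
  appId: 'YOUR_APP_ID',
  masterKey: 'YOUR_MASTER_KEY',
  serverURL: 'http://localhost:1337/parse',
  publicServerURL: 'https://yourdomain.com/parse',
  appName: 'YourAppName',
  verifyUserEmails: true,
  emailAdapter: {
    module: 'parse-server-simple-mailgun-adapter',
    options: {
      fromAddress: 'donotreply@yourdomain.com',
      domain: 'mail.yourdomain.com', // the domain you verified with Mailgun
      apiKey: 'YOUR_MAILGUN_API_KEY'
    }
  }
});

var app = express();
app.use('/parse', api); // mount the Parse API
app.listen(1337, function () {
  console.log('parse-server running on port 1337');
});
You would also need express, parse-server, and parse-server-simple-mailgun-adapter listed as dependencies in the package.json that npm init created. As for the editor question: it just needs to be a plain .js file; whether you write it in Sublime Text or anything else makes no difference.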

How do I use a custom bucket URL with aws-sdk

Say I have an S3 bucket:
media.coolstuff.com
I have enabled static website hosting for the domain, and this is what AWS gives me as the URL:
media.coolstuff.com.s3-website-us-east-1.amazonaws.com
I've made a CNAME from my subdomain to the AWS domain. In my bucket I've put an index.html and made it public, and visiting media.coolstuff.com successfully renders it, so I know the redirect works as it should.
I am trying to figure out how to use the aws-sdk gem to return URLs using the subdomain for objects in the bucket.
Here's the code:
creds = Aws::Credentials.new(access_key, secret_access_key)
s3 = Aws::S3::Resource.new(
  credentials: creds,
  endpoint: 'http://media.coolstuff.com',
  region: 'us-east-1'
)
bucket = s3.bucket('media.coolstuff.com')
obj = bucket.object('index.html')
obj.public_url
=> "http://media.coolstuff.com.media.coolstuff.com/index.html"
As you can see, the public URL has the bucket name duplicated. I get why this is: the bucket name is prefixed to the endpoint. But here, it shouldn't be. I don't know how to get around this.
Similarly, trying to get the etag fails because it can't figure out the right URL:
obj.etag
Seahorse::Client::NetworkingError: unable to connect to `media.coolstuff.com.media.coolstuff.com`; SocketError: getaddrinfo: nodename nor servname provided, or not known
According to the documentation you can do this with:
object.public_url(virtual_host: true)
Don't change the endpoint from the default; it is used for the actual API calls to S3.
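Put together, a minimal sketch of that suggestion, reusing the names from the question, would be:
creds = Aws::Credentials.new(access_key, secret_access_key)

# Leave the endpoint at its default so the actual API calls still go to S3
s3 = Aws::S3::Resource.new(credentials: creds, region: 'us-east-1')

bucket = s3.bucket('media.coolstuff.com')
obj = bucket.object('index.html')

# Because the bucket name matches the CNAME, virtual_host returns the custom URL
obj.public_url(virtual_host: true)
# => "http://media.coolstuff.com/index.html"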

How to do Facebook auth through a remote proxy

Say I have an app with a Sinatra REST API at http://example.com:4567. With my app I have a designer working on the front-end. Rather than set him up with a local back-end I edit his hosts file as follows:
127.0.0.1 local.example.com
and his httpd-vhosts.conf as follows:
ProxyPass /api http://example.com:4567
ProxyPassReverse /api http://example.com:4567
so that API calls are proxied to the remote host. I then create a vhost for local.example.com in apache that maps to his local directory where our front-end repo is. This allows me to give him a remote back-end with a local front-end.
The reason for the subdomain is that we do Facebook authentication, which has restrictive domain policies for auth. We can successfully Facebook-auth a user and get redirected back to the app, but when attempting to get an access token we get a 400 response with the message:
{"error"=>{"message"=>"Missing client_id parameter.", "type"=>"OAuthException", "code"=>101}}
I believe the client_id is set correctly, since it is set correctly in the rack:oauth:client object, and the flow is identical and only fails when the domain is different. The only thought I have is that Facebook might not like that the user authenticates from local.example.com while the access token is requested from example.com, but my understanding is that Facebook will authenticate on all subdomains. I've also whitelisted local.example.com on my app.
Any insight or advice into how to accomplish this? Thanks in advance.
Turns out it wasn't a domain issue. Rather, fb_graph, the open-source Facebook API gem from nov, uses Basic auth by default; you need to set auth to something other than ":basic" when you get the access token in order to resolve this error.
