CORS issue from CloudFront to server for font - amazon-ec2

We are getting a CORS error from CloudFront to my site, for fonts only:
Access to Font at 'http://d2v777xrj.cloudfront.net/assets/simple-line-icons/fonts/Simple-Line-Icons-ff94ad94c3a9d04bd2f80cb3c87dcccb.woff' from origin 'http://example.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://example.com' is therefore not allowed access.

References (after looking into several references, I found these worked for me; I didn't add any CORS on the S3 bucket, only CloudFront accessing S3) [for your case, change themes to assets]
https://deliciousbrains.com/wp-offload-media/doc/font-cors/
http://thelazylog.com/correct-configuration-to-fix-cors-issue-with-cloudfront/

I tried everything but nothing worked! In the end the solution was easy: just two steps and nothing else.
Go to S3 Bucket -> Permissions -> Edit: Cross-origin resource sharing (CORS) -> paste the configuration below. Most other articles make the mistake of using the wrong header: you have to put "Origin" in AllowedHeaders.
[
    {
        "AllowedHeaders": [
            "Origin"
        ],
        "AllowedMethods": [
            "HEAD",
            "GET"
        ],
        "AllowedOrigins": [
            "http://www.yourdomain.com",
            "https://www.yourdomain.com",
            "https://yourdomain.com",
            "http://yourdomain.com"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
Go to CloudFront -> Behaviours -> Default (*) -> Edit
Change "Cache and origin request settings" to: Use legacy cache settings
Change "Cache Based on Selected Request Headers" to: Whitelist
Then add to the whitelisted headers: Origin [only]
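A quick way to check the result, sketched in Python with the requests library (the URL and origin below are taken from the error message above; substitute your own). Once CloudFront whitelists and forwards the Origin header, S3's CORS response headers should come back through the cache:

import requests

# URL and origin from the error in the question; substitute your own.
url = 'http://d2v777xrj.cloudfront.net/assets/simple-line-icons/fonts/Simple-Line-Icons-ff94ad94c3a9d04bd2f80cb3c87dcccb.woff'
resp = requests.get(url, headers={'Origin': 'http://example.com'})

# With the whitelist in place, CloudFront forwards Origin to S3, and S3's
# Access-Control-Allow-Origin header is cached and returned to the browser.
print(resp.status_code)
print(resp.headers.get('Access-Control-Allow-Origin'))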

CloudFront added origin request policies recently. Once S3 was configured correctly, updating the distribution's origin request policy to the managed CORS-S3Origin policy worked for me, as sketched below.
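A minimal boto3 sketch of that change, assuming the managed policy is the one named Managed-CORS-S3Origin (which forwards Origin and the Access-Control-Request-* headers to the S3 origin) and DISTRIBUTION_ID is a placeholder for your own distribution:

import boto3

cf = boto3.client('cloudfront')

# Look up the managed CORS-S3Origin origin request policy by name.
managed = cf.list_origin_request_policies(Type='managed')
cors_s3 = next(
    item['OriginRequestPolicy']
    for item in managed['OriginRequestPolicyList']['Items']
    if item['OriginRequestPolicy']['OriginRequestPolicyConfig']['Name'] == 'Managed-CORS-S3Origin'
)

# Attach it to the default cache behavior of the distribution.
current = cf.get_distribution_config(Id='DISTRIBUTION_ID')
config = current['DistributionConfig']
config['DefaultCacheBehavior']['OriginRequestPolicyId'] = cors_s3['Id']
cf.update_distribution(Id='DISTRIBUTION_ID', DistributionConfig=config, IfMatch=current['ETag'])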

Related

Can't make CloudFront supply cache-control headers

I've done lots of reading on this but none of the solutions I have seen seem to work for IIS websites - most seem to suggest some server-side solution but none of that works for me.
I'm optimising one of our sites, and PageSpeed, YSlow and Lighthouse all complain that images I'm serving from our CloudFront CDN don't have any cache headers. The CDN serves from an S3 bucket.
e.g. https://static.edie.net/webimages/new_new_new.png (expiration not specified)
Crops up as both 'There are static components without a far-future expiration date' and 'Leverage browser caching for the following cacheable resources'
I can't for the life of me work out how to make CloudFront serve a cache header for images like this.
I have set
Cache-Control: max-age=5500000
on the s3 bucket/file itself, and if you check the file via the bucket: https://devedienet.s3.amazonaws.com/webimages/new_new.png then it has the cache header present.
But that doesn't seem to affect the CloudFront image, which only has these headers:
Age: 12153
Connection: keep-alive
Date: Mon, 22 Oct 2018 11:18:49 GMT
ETag: "940fd4d68428cf3e4f88a45aab4d7157"
Server: AmazonS3
Via: 1.1 4f95eb10423b781564e79d7c85f85795.cloudfront.net (CloudFront)
X-Amz-Cf-Id: TZAWy8U12-ohhe-dwTkCLqXHbJKI7CJqQd21I-lvq-8rloZjTew6aw==
x-amz-meta-s3b-last-modified: 20181017T105350Z
X-Cache: Hit from cloudfront
I've tried adding custom behaviours in AWS' Control Panel for the CloudFront distribution:
webimages/*.png
Minimum TTL: 5500000
But again this seems to have no effect.
Note that I invalidated all the images in the folder after adding the new rule above, but no dice.
Am I missing something or misunderstanding what is required?
Since you are serving content from S3 through CloudFront, you need to add the following header to objects while uploading them to S3: Expires: {some future date}
Bonus: you do not need to specify this header for every object individually. You can upload a bunch of files together to S3, click next, and then on the screen that asks for the S3 storage class, scroll down and add these headers. And don't forget to click save!
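For scripted uploads, the same thing can be done with boto3; a sketch (the bucket name is taken from the question's URL, the local path is hypothetical):

import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client('s3')

# Cache-Control and Expires are stored as object metadata and served back
# by S3 (and through CloudFront) on every GET.
s3.upload_file(
    'webimages/new_new.png',
    'devedienet',
    'webimages/new_new.png',
    ExtraArgs={
        'ContentType': 'image/png',
        'CacheControl': 'max-age=5500000',
        'Expires': datetime.now(timezone.utc) + timedelta(days=60),
    },
)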
I was facing a similar problem, and after doing some reading and experimenting, what I figured out is that CloudFront's Object Caching values (Minimum TTL, Maximum TTL, Default TTL) do not explicitly add a Cache-Control header to the response headers if the resource doesn't have one at the server level. Secondly, even if the resource has Cache-Control metadata added at S3, its max-age should fall in between:
Min TTL < S3 Cache-Control max-age < Max TTL
The Object Caching values only determine how long the resource will be cached at the edge location; no Cache-Control header is added to the response headers for the resource.
What I did instead was to create a Lambda function and add it under Lambda associations by updating the Cache Behaviour settings for Viewer Response. This added the Cache-Control header to the requested resource. Here is my Lambda function:
'use strict';

exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;
    const headerCache = 'Cache-Control';

    // CloudFront expects lowercase keys in the headers object.
    headers[headerCache.toLowerCase()] = [{
        key: headerCache,
        value: 'max-age=1096000'
    }];

    callback(null, response);
};

Socket.io-client Failed to load resource

I have this issue trying to connect to a websocket using socket.io-client (Socket.IO.js build:0.9.16):
Failed to load resource: the server responded with a status of 404 (Not Found)
XMLHttpRequest cannot load https://myserver.it/socket.io/1/?t=1488475368547. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://my_vps_ip' is therefore not allowed access. The response had HTTP status code 404.
The strange things are:
on localhost it works fine
on an Amazon EC2 instance it works fine
on another VPS (cloud VPS bought here) it doesn't work
This is my code:
socket = io.connect('https://myserver.it', {
    transports: ['websocket'],
    secure: true,
    'force new connection': false,
    'reconnect': true,
});
Apache 2.4.18 on both VPSs, same configuration, same modules.
I really don't understand ...
I don't know if this helps, but I faced the same kind of problem some time ago.
Check your URL and ensure it does NOT end with /
e.g. https://myserver.it/ != https://myserver.it
It looks like you've run into a CORS issue. You need to make sure that you are explicitly allowing access to the requested resource.
You may want to look at enable-cors.org for more guidance.

Firefox WebExtensions and Cross-domain privileges

I am trying to port a Chrome extension to Firefox using the relatively new WebExtensions API from Firefox.
I always get the following error:
Cross-Origin Request Blocked:
The Same Origin Policy disallows reading the remote resource at .... (Reason: CORS header 'Access-Control-Allow-Origin' missing)
I added the website I would like to access to the permissions section inside the manifest.json, as explained in the documentation, and it works on Google Chrome.
Normally it should work that way; at least that's how it's explained at https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Content_scripts#Cross-domain_privileges
I would be very thankful for any help, since I am out of ideas.
manifest.json
{
    ...
    "permissions": [
        "<all_urls>"
    ]
}
I think you need to add a CSP header to your HTML page (see http://content-security-policy.com/). I had to add one to get mine to work after a similar warning.
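For illustration only, a policy along these lines (the connect-src origin is a hypothetical stand-in for the remote resource being fetched):

Content-Security-Policy: default-src 'self'; connect-src https://api.example.com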

Uploading pictures using Rack::Cors not working

I am trying to upload some pictures from my controller to my bucket on Amazon S3. I am using the Ruby Volt framework. I need CORS in order to do this, so I am using rack-cors. I have declared it in my initializers/boot.rb file; this code was taken directly from the README.
Volt.current_app.middleware.use Rack::Cors do
  allow do
    origins '*'
    resource '*', :headers => :any, :methods => [:get, :post, :options]
  end
end
Unfortunately, it does not work correctly. When I try to post a picture to my S3, I get the following error:
XMLHttpRequest cannot load https://s3.amazonaws.com/bucket-name/uploads.
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://localhost:3000' is therefore not allowed access.
The response had HTTP status code 403.
Any idea as to what might be causing this?
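One thing worth checking: the 403 response in this trace comes from s3.amazonaws.com, not from the Volt app, and Rack::Cors only adds CORS headers to responses served by your own Rack app. If the browser posts directly to the bucket, the bucket itself needs a CORS configuration. A minimal sketch with boto3 (bucket name taken from the error above, origin assumed to be the dev host):

import boto3

s3 = boto3.client('s3')

# Allow the local dev origin to POST uploads directly to the bucket.
s3.put_bucket_cors(
    Bucket='bucket-name',
    CORSConfiguration={
        'CORSRules': [{
            'AllowedOrigins': ['http://localhost:3000'],
            'AllowedMethods': ['GET', 'POST', 'PUT'],
            'AllowedHeaders': ['*'],
            'MaxAgeSeconds': 3000,
        }]
    },
)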

How to tell cloudfront to not cache 302 responses from S3 redirects, or, how else to workaround this image caching generation issue

I'm using Imagine via the LIIPImagineBundle for Symfony2 to create cached versions of images stored in S3.
Cached images are stored in an S3 web enabled bucket served by CloudFront. However, the default LIIPImagineBundle implementation of S3 is far too slow for me (checking if the file exists on S3 then creating a URL either to the cached file or to the resolve functionality), so I've worked out my own workflow:
Pass the client the CloudFront URL where the cached image should exist
Client requests the image via the CloudFront URL; if it does not exist, the S3 bucket has a redirect rule which 302 redirects the user to an Imagine webserver path, which generates the cached version of the file and saves it to the appropriate location on S3
The webserver 301 redirects the user back to the CloudFront URL where the image is now stored, and the client is served the image.
This is working fine as long as I don't use cloudfront. The problem appears to be that cloudfront is caching the 302 redirect response (even though the http spec states that they shouldn't). Thus, if I use cloudfront, the client is sent in an endless redirect loop back and forth from webserver to cloudfront, and every subsequent request to the file still redirects to the webserver even after the file has been generated.
If I use S3 directly instead of cloudfront there are no issues and this solution is solid.
According to Amazon's documentation, S3 redirect rules don't allow me to specify custom headers (to set Cache-Control headers or the like), and I don't believe that CloudFront allows me to control the caching of redirects (if it does, it's well hidden). CloudFront's invalidation options are so limited that I don't think they will work (you can only invalidate 3 objects at any time). I could pass an argument back to CloudFront on the first redirect (from the Imagine webserver) to fix the endless redirect (e.g. image.jpg?1), but subsequent requests to the same object will still 302 to the webserver and then 301 back to CloudFront even though the file exists. I feel like there should be an elegant solution to this problem, but it's eluding me. Any help would be appreciated!
I'm solving this same issue by setting the "Default TTL" in CloudFront's "Cache Behavior" settings to 0, while still allowing my resized images to be cached by setting the CacheControl metadata on the S3 file with max-age=12313213.
This way redirects will not be cached (default TTL behavior), but my resized images will be (CacheControl max-age on S3 cache hit).
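Setting that metadata on objects that are already in the bucket means re-copying them in place; a boto3 sketch (bucket and key are hypothetical):

import boto3

s3 = boto3.client('s3')

# Copying an object onto itself with MetadataDirective='REPLACE' is the
# standard way to change metadata such as Cache-Control after upload.
s3.copy_object(
    Bucket='my-bucket',
    Key='cache/media/image.jpg',
    CopySource={'Bucket': 'my-bucket', 'Key': 'cache/media/image.jpg'},
    MetadataDirective='REPLACE',
    ContentType='image/jpeg',
    CacheControl='max-age=12313213',
)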
If you really need to use CloudFront here, the only thing I can think of is to not directly subject the user to the 302/301 dance. Could you introduce some sort of proxy script/page to front S3 and that whole process? (Or does that then defeat the point?)
So a cache miss would look like this:
Visitor requests proxy page through CloudFront.
Proxy page requests image from S3.
Proxy page receives 302 from S3, follows this to the Imagine web server.
Ideally just return the image from here (while letting it update S3), or follow the 301 back to S3.
Proxy page returns image to visitor.
Image is cached by CloudFront.
TL;DR: Make use of Lambda@Edge
We face the same problem using LiipImagineBundle.
For development, an NGINX server delivers the content from the local filesystem and resolves images that are not yet stored using a simple proxy_pass:
location ~ ^/files/cache/media/ {
    try_files $uri @public_cache_fallback;
}

location @public_cache_fallback {
    rewrite ^/files/cache/media/(.*)$ media/image-filter/$1 break;
    proxy_set_header X-Original-Host $http_host;
    proxy_set_header X-Original-Scheme $scheme;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://0.0.0.0:80/$uri;
}
As soon as you want to integrate CloudFront, things get more complicated due to caching. While you can easily add S3 (static website hosting, see below) as a distribution origin, CloudFront itself will not follow the resulting redirects but return them to the client. In the default configuration, CloudFront will then cache this redirect and NOT the desired image (see https://stackoverflow.com/a/41293603/6669161 for a workaround with S3).
The best way would be to use a proxy as described here. However, this adds another layer, which might be undesirable. Another solution is to use Lambda@Edge functions (see here). In our case, we use S3 as a normal distribution origin and make use of the "Origin Response" event (you can edit this in the "Behaviors" tab of your distribution). Our Lambda function just checks if the request to S3 was successful. If it was, we can just forward it. If it was not, we assume that the desired object was not yet created. The Lambda function then calls our application, which generates the object and stores it in S3. For simplicity, the application replies with a redirect (to CloudFront again), too, so we can just forward that to the client. A drawback is that the client itself will see one redirect. Also make sure to set the cache headers so that CloudFront does not cache the Lambda redirect.
Here is an example Lambda function. This one just redirects the client to the resolve URL (which then redirects to CloudFront again). Keep in mind that this will result in more round trips for the client (which is not perfect), but it will reduce the execution time of your Lambda function. Make sure to add the base Lambda@Edge policy (related tutorial).
env = {
    'Protocol': 'http',
    'HostName': 'localhost:8000',
    'HttpErrorCodeReturnedEquals': '404',
    'HttpRedirectCode': '307',
    'KeyPrefixEquals': '/cache/media/',
    'ReplaceKeyPrefixWith': '/media/resolve-image-filter/'
}


def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    if int(response['status']) == int(env['HttpErrorCodeReturnedEquals']):
        request = event['Records'][0]['cf']['request']
        original_path = request['uri']
        if original_path.startswith(env['KeyPrefixEquals']):
            new_path = env['ReplaceKeyPrefixWith'] + original_path[len(env['KeyPrefixEquals']):]
        else:
            new_path = original_path
        location = '{}://{}{}'.format(env['Protocol'], env['HostName'], new_path)
        response['status'] = env['HttpRedirectCode']
        response['statusDescription'] = 'Resolve Image'
        response['headers']['location'] = [{
            'key': 'Location',
            'value': location
        }]
        response['headers']['cache-control'] = [{
            'key': 'Cache-Control',
            'value': 'no-cache'  # also make sure your minimum TTL is set to 0 (for the distribution)
        }]
    return response
If you just want to use S3 as a cache (without CloudFront), static website hosting with a redirect rule will redirect clients to the resolve URL in case of missing cache files (you will need to rewrite the S3 cache resolver URLs to the website version, though):
<RoutingRules>
    <RoutingRule>
        <Condition>
            <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
            <KeyPrefixEquals>cache/media/</KeyPrefixEquals>
        </Condition>
        <Redirect>
            <Protocol>http</Protocol>
            <HostName>localhost</HostName>
            <ReplaceKeyPrefixWith>media/image-filter/</ReplaceKeyPrefixWith>
            <HttpRedirectCode>307</HttpRedirectCode>
        </Redirect>
    </RoutingRule>
</RoutingRules>
