Setting Access-Control-Allow-Origin on CloudFront - Firefox

I am having problems serving static assets to Firefox using AWS CloudFront.
Chrome works perfectly, but Firefox returns a CORS error.
If I execute curl, I get:
HTTP/1.1 200 OK
Content-Type: application/x-font-opentype
Content-Length: 39420
Connection: keep-alive
Date: Mon, 11 Aug 2014 21:53:50 GMT
Cache-Control: public, max-age=31557600
Expires: Sun, 09 Aug 2015 01:28:02 GMT
Last-Modified: Fri, 08 Aug 2014 19:28:05 GMT
ETag: "9df744bdf9372cf4cff87bb3e2d68fc8"
Accept-Ranges: bytes
Server: AmazonS3
Age: 2743
X-Cache: Hit from cloudfront
Via: 1.1 c445b20dfbf3128d810e975e5d84e2cd.cloudfront.net (CloudFront)
X-Amz-Cf-Id: ...
I think the response needs the header:
Access-Control-Allow-Origin: *
Can anyone help me? Why is it a problem on Firefox and not Chrome? How can I solve it?

Have you configured your distribution to support CORS by whitelisting the Origin header so that it is forwarded to the origin?
Reference:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html#header-caching-web-cors
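For reference, here is a minimal origin-side sketch; the bucket name, distribution domain, and requesting origin below are placeholders, not values from the question. CloudFront only relays the CORS headers that S3 itself generates, so the bucket needs a CORS rule, and you can replay Firefox's behavior (it enforces CORS for web fonts and therefore sends an Origin header) with curl:

# Placeholders: my-asset-bucket, d1234abcd.cloudfront.net, https://www.example.com
# 1. Give the S3 origin a CORS rule; CloudFront only relays what S3 returns.
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF
aws s3api put-bucket-cors --bucket my-asset-bucket --cors-configuration file://cors.json
# 2. With the Origin header whitelisted for forwarding in the distribution,
#    replay Firefox's font request and check for the CORS header:
curl -s -o /dev/null -D - -H "Origin: https://www.example.com" \
  https://d1234abcd.cloudfront.net/fonts/myfont.otf | grep -i access-control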

Related

Character Encoding Issue when activating Cloudflare

When I activate Cloudflare, we have an encoding or caching issue whereby special characters appear all over the page.
Header response with cloudflare deactivated:
Alt-Svc: quic=":443"; ma=2592000; v="35,39,43,44"
Cache-Control: no-cache, must-revalidate
Connection: close
Content-Encoding: gzip
Content-Length: 8156
Content-Type: text/html; charset=UTF-8
Date: Wed, 14 Aug 2019 14:19:31 GMT
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Last-Modified: Wed, 14 Aug 2019 14:19:31 GMT
Pragma: no-cache
Server: LiteSpeed
Set-Cookie: pmd_template=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; Max-Age=0; path=/New/; domain=www.eastlondonbusinessdirectory.co.za
Set-Cookie: pmd_template=listimia; expires=Fri, 13-Sep-2019 14:19:31 GMT; Max-Age=2592000; path=/New/; domain=www.eastlondonbusinessdirectory.co.za
Vary: Accept-Encoding,User-Agent
X-Powered-By: PHP/7.0.33
Header response with cloudflare activated:
Cache-Control: no-cache, must-revalidate
CF-RAY: 50639e64dd188074-CPT
Connection: keep-alive
Content-Encoding: zlib,gzip,deflate
Content-Type: text/html; charset=UTF-8
Date: Wed, 14 Aug 2019 14:29:03 GMT
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Last-Modified: Wed, 14 Aug 2019 14:29:02 GMT
Pragma: no-cache
Server: cloudflare
Set-Cookie: pmd_template=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; Max-Age=0; path=/New/; domain=www.eastlondonbusinessdirectory.co.za
Set-Cookie: pmd_template=listimia; expires=Fri, 13-Sep-2019 14:29:02 GMT; Max-Age=2592000; path=/New/; domain=www.eastlondonbusinessdirectory.co.za
Transfer-Encoding: chunked
Vary: User-Agent
X-Powered-By: PHP/7.0.33
X-Turbo-Charged-By: LiteSpeed
I believe I might need to make sure my origin server sends a header that tells the cache to serve pages based on the content encoding, but I'm not sure this logic is correct, because with Cloudflare activated I only see Vary: User-Agent, which is in fact ignored by Cloudflare. If the logic is correct, I'm not sure how to go about fixing it. I have tried adding a page rule in Cloudflare to cache everything, and also added the following to .htaccess:
AddDefaultCharset UTF-8
<IfModule mod_headers.c>
  <FilesMatch "\.(js|css|xml|gz|html)$">
    Header append Vary Accept-Encoding
  </FilesMatch>
</IfModule>
but neither works.
Any help to get this issue resolved would be much appreciated, and I will accept the answer.
Thank you.
Cloudflare currently only respects the Accept-Encoding vary header.
If you want to vary on other factors, you can consider one of the following:
A custom caching setup (Enterprise plan)
Bypassing the cache entirely using Page Rules
Serving different content types from distinct URLs so caching still applies
A workaround using Cloudflare Workers
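As a rough sanity check, assuming the origin serves HTTPS for the hostname directly (the origin IP below is a placeholder, and the /New/ path is a guess based on the cookie path in your trace), you can compare the encoding-related headers straight from the origin against what Cloudflare returns; the Content-Encoding: zlib,gzip,deflate in the activated trace is not a normal single encoding and is worth pinning down:

# Placeholder origin IP; replace with your server's real address.
# Fetch the page straight from the origin, bypassing Cloudflare:
curl -s -o /dev/null -D - --resolve www.eastlondonbusinessdirectory.co.za:443:203.0.113.10 \
  -H "Accept-Encoding: gzip" "https://www.eastlondonbusinessdirectory.co.za/New/" \
  | grep -iE 'content-encoding|content-type|vary'
# Fetch the same page through Cloudflare and compare:
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" \
  "https://www.eastlondonbusinessdirectory.co.za/New/" \
  | grep -iE 'content-encoding|content-type|vary'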

WebHDFS REST API throwing FileNotFoundException

I am trying to open an HDFS file that is present on a CDH4 cluster from a CDH5 machine, using WebHDFS from the command line as below:
curl -i -L "http://namenodeIpofCDH4:50070/webhdfs/v1/user/quad/source/JSONML.java?user.name=quad&op=OPEN"
I am getting a "File Not Found Exception" even though the file JSONML.java is present at the mentioned path on the namenode as well as the datanode. Its trace is as follows:
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Date: Mon, 22 Feb 2016 13:25:35 GMT
Pragma: no-cache
Date: Mon, 22 Feb 2016 13:25:35 GMT
Pragma: no-cache
Set-Cookie: hadoop.auth="u=quad&p=quad&t=simple&e=1456183535737&s=KdZYcA5iwJeIU2F9ZJfLSaT4qMY=";Path=/
Location: http://n3.quadratics.com:50075/webhdfs/v1/user/quad/source/JSONML.java?op=OPEN&user.name=quad&namenoderpcaddress=n1.quadratics.com:8020&offset=0
Content-Type: application/octet-stream
Content-Length: 0
Server: Jetty(6.1.26.cloudera.4)
HTTP/1.1 404 Not Found
Cache-Control: no-cache
Expires: Mon, 22 Feb 2016 13:26:28 GMT
Date: Mon, 22 Feb 2016 13:26:28 GMT
Pragma: no-cache
Expires: Mon, 22 Feb 2016 13:26:28 GMT
Date: Mon, 22 Feb 2016 13:26:28 GMT
Pragma: no-cache
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26.cloudera.4)
{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File does not exist: /user/quad/source/JSONML.java\n\tat org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)\n\tat org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1932)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1873)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1853)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1825)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:559)\n\tat org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:363)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:415)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)\n"}}
However, I get no error and receive the status of the same file when I use the command below:
curl -i -L "http://namenodeIpofCDH4:50070/webhdfs/v1/user/quad/source/JSONML.java?user.name=quad&op=GETFILESTATUS"
I get the output response as below:
HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Date: Mon, 22 Feb 2016 13:38:48 GMT
Pragma: no-cache
Date: Mon, 22 Feb 2016 13:38:48 GMT
Pragma: no-cache
Set-Cookie: hadoop.auth="u=quad&p=quad&t=simple&e=1456184328134&s=sE6esO8J39O+itl+ggNzX4/WzjQ=";Path=/
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26.cloudera.4)
{"FileStatus":{"accessTime":1456147448567,"blockSize":134217728,"group":"quad","length":14849,"modificationTime":1456143798039,"owner":"quad","pathSuffix":"","permission":"644","replication":3,"type":"FILE"}}
Any ideas about why opening the file fails, and how to fix it, would be greatly appreciated.
I saw a similar error when I had misconfigured my /etc/hosts
The OPEN command above returns a redirect that provides a hostname. Your local machine will try to resolve this hostname using its local DNS setup,
and will then look for the file at the IP address that the hostname resolves to, which is not necessarily the machine you issued the command to.
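A sketch of how to check and work around that (the IP address below is a placeholder; the hostname is the one from the Location header in your trace):

# See how the datanode hostname from the redirect resolves on this machine.
getent hosts n3.quadratics.com
# If it resolves to the wrong address (or not at all), map it to the correct
# datanode IP in /etc/hosts (203.0.113.20 is a placeholder):
echo "203.0.113.20  n3.quadratics.com" | sudo tee -a /etc/hosts
# Then retry the OPEN call; curl -L follows the redirect to the datanode.
curl -i -L "http://namenodeIpofCDH4:50070/webhdfs/v1/user/quad/source/JSONML.java?user.name=quad&op=OPEN"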

Azure storage - image URLs served as binary?

I use Azure Storage to upload my images,
but some images are served as binary (downloaded),
like: http://fungogo.blob.core.windows.net/asdf0/18263359_e0d9199e-b2d3-11e5-b71b-46c19c40c550.jpg
while other images display in the browser,
like: https://fungogo.blob.core.windows.net/images/14600328358_a00eaa35c5_o.jpg
I want to display the image in the browser instead of downloading it. How can I fix it?
I believe the issue is with the Content-Type. The first block below contains the headers for the failing link. You can see the Content-Type is listed as image/jpg/jpeg, which is not a valid MIME type.
HTTP/1.1 200 OK
Content-Length: 137496
Content-Type: image/jpg/jpeg
Content-MD5: zPyz4CSRnPhQtW7PT1w9LQ==
Last-Modified: Wed, 03 Feb 2016 11:06:14 GMT
ETag: 0x8D32C8A086B77F5
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: e25e0d4c-0001-0049-5ff0-5ea774000000
x-ms-version: 2009-09-19
x-ms-lease-status: unlocked
x-ms-blob-type: BlockBlob
Date: Thu, 04 Feb 2016 02:08:52 GMT
The response headers for the working link have a Content-Type of image/jpeg.
HTTP/1.1 200 OK
Content-Length: 1689160
Content-Type: image/jpeg
Content-MD5: iAhgwODEpi7EaTAyUCMY1Q==
Last-Modified: Mon, 01 Feb 2016 08:18:24 GMT
ETag: 0x8D32AE0413D02B6
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: 9792c6d9-0001-0045-2cf0-5e4985000000
x-ms-version: 2009-09-19
x-ms-lease-status: unlocked
x-ms-blob-type: BlockBlob
Date: Thu, 04 Feb 2016 02:06:50 GMT
If you want to update the Content-Type of a lot of files at once, you can look at the example in the answer to this SO question: Set Content-type of media files stored on Blob
If you're wondering about the difference between jpg and jpeg, see this SO question: JPG vs. JPEG image formats
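For a single blob, here is a sketch using the current az CLI (assuming an authenticated session; the account, container, and blob names are taken from the failing URL above, and the Azure tooling at the time of the question looked different):

# Assumes you are logged in and have access to the storage account.
az storage blob update \
  --account-name fungogo \
  --container-name asdf0 \
  --name 18263359_e0d9199e-b2d3-11e5-b71b-46c19c40c550.jpg \
  --content-type "image/jpeg"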

Cached content expiring too frequently on Fastly CDN

I'm using Fastly as a CDN in front of my Heroku application, and am seeing many requests that I expect to be served from cache make it through to the origin.
An example of this behavior is two requests to the URL:
https://nuu-acceptance-herokuapp-com.global.ssl.fastly.net/attachments/f092ff0398b3bace19fae21b17a22320c3da5428/store/fit/240/160/28515a2fa2e47b59f13b2044ea5b9a7c8c9587ceca7d7dfadb28f08730f7/file.jpg
Here are the two responses, which occurred fifteen minutes apart:
RESPONSE 1:
----------
Strict-Transport-Security: max-age=31536000
Content-Encoding: gzip
X-Content-Type-Options: nosniff
Age: 0
Transfer-Encoding: chunked
X-Cache: MISS
X-Cache-Hits: 0
Content-Disposition: inline; filename="file.jpg"
Connection: keep-alive
Via: 1.1 vegur
Via: 1.1 varnish
X-Request-Id: bc766069-c2ca-4a66-ba88-a8d76da72e2d
X-Served-By: cache-sjc3124-SJC
X-Runtime: 3.711698
Last-Modified: Tue, 23 Jun 2015 18:44:27 GMT
Server: Cowboy
X-Timer: S1435085062.909546,VS0,VE4437
Date: Tue, 23 Jun 2015 18:44:27 GMT
Vary: Accept-Encoding
Content-Type: image/jpeg
Access-Control-Allow-Origin: *
Cache-Control: public, must-revalidate, max-age=31536000
Set-Cookie: __profilin=; path=/; max-age=0; expires=Thu, 01 Jan 1970 00:00:00 -0000; secure
Accept-Ranges: bytes
Expires: Wed, 22 Jun 2016 18:44:27 GMT
----------
RESPONSE 2:
----------
Strict-Transport-Security: max-age=31536000
Content-Encoding: gzip
X-Content-Type-Options: nosniff
Age: 0
Transfer-Encoding: chunked
X-Cache: MISS
X-Cache-Hits: 0
Content-Disposition: inline; filename="file.jpg"
Connection: keep-alive
Via: 1.1 vegur
Via: 1.1 varnish
X-Request-Id: 60ee54b0-9509-42c5-9b03-c0f5854c5524
X-Served-By: cache-sjc3135-SJC
X-Runtime: 0.251021
Last-Modified: Tue, 23 Jun 2015 18:57:44 GMT
Server: Cowboy
X-Timer: S1435085863.749442,VS0,VE560
Date: Tue, 23 Jun 2015 18:57:44 GMT
Vary: Accept-Encoding
Content-Type: image/jpeg
Access-Control-Allow-Origin: *
Cache-Control: public, must-revalidate, max-age=31536000
Accept-Ranges: bytes
Expires: Wed, 22 Jun 2016 18:57:44 GMT
Both are cache misses, even though I expect this content to be cached for a year. It also appears that the same Fastly cluster handled both requests. Can anyone point me to what I might be doing wrong? I'm seeing this behavior across many files served by Fastly: caching works intermittently, but there are cache misses much more often than I expect.
I'd appreciate any help that anyone could give me with this - thanks!
If you look at the HTTP headers from your responses, you will see that they both contain a Set-Cookie header. By default, Fastly will not cache responses that set cookies. You can, however, remove the header in your app or within your Fastly configuration.
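A quick way to confirm this, and to verify the fix, is to request the asset from the question and watch the relevant headers; once Set-Cookie is gone, a repeated request should report X-Cache: HIT instead of MISS:

ASSET_URL="https://nuu-acceptance-herokuapp-com.global.ssl.fastly.net/attachments/f092ff0398b3bace19fae21b17a22320c3da5428/store/fit/240/160/28515a2fa2e47b59f13b2044ea5b9a7c8c9587ceca7d7dfadb28f08730f7/file.jpg"
# Before the fix: Set-Cookie is present and X-Cache stays MISS.
curl -s -o /dev/null -D - "$ASSET_URL" | grep -iE 'set-cookie|x-cache'
# After removing the cookie, run this twice; the second run should show X-Cache: HIT.
curl -s -o /dev/null -D - "$ASSET_URL" | grep -iE 'set-cookie|x-cache'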

GZIP for CSS and JS doesn't work in Google Chrome, but does in Firefox

I have recently figured out how to enable GZIP (or Deflate) on the WAMP server that I will be using to serve my intranet application.
However, when testing in Google Chrome I see that the PHP file is compressed but the JavaScript and CSS files are not. The response headers show that they are not compressed, and Google PageSpeed confirms this.
Firefox, on the other hand, receives all files compressed without a problem.
Here are the headers for the main CSS file as an example:
Google Chrome
Date: Wed, 18 Jul 2012 14:48:43 GMT
Content-Length: 6533
Last-Modified: Wed, 18 Jul 2012 00:42:33 GMT
Server: Apache/2.2.21 (Win64) PHP/5.3.10
ETag: "a00000001509b-1985-4c50ff04b26ef"
Vary: Accept-Encoding
Content-Type: text/css
Accept-Ranges: bytes
200 OK
Firefox
Date: Wed, 18 Jul 2012 14:33:14 GMT
Server: Apache/2.2.21 (Win64) PHP/5.3.10
Last-Modified: Wed, 18 Jul 2012 00:42:33 GMT
Etag: "a00000001509b-1985-4c50ff04b26ef"
Accept-Ranges: bytes
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 1273
Content-Type: text/css
200 OK
Is this a problem with my WAMP setup, code, or is it just Google Chrome?
Thank you.
Google Chrome does support gzip for JS/CSS.
Request URL:http://d1o7y22ifnbryp.cloudfront.net/static/7793/css/all.min.css
Request Method:GET
Status Code:200 OK
Request Headers
Accept:text/css,*/*;q=0.1
Accept-Charset:GBK,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:zh-CN,zh;q=0.8
Connection:keep-alive
Host:d1o7y22ifnbryp.cloudfront.net
User-Agent:Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5
Response Headers
Accept-Ranges:bytes
Cache-Control:max-age=604800
Connection:keep-alive
Content-Encoding:gzip
Content-Length:17933
Content-Type:text/css
Date:Wed, 18 Jul 2012 15:43:46 GMT
ETag:"805dc-19c87-4c4a305735340"
Expires:Wed, 25 Jul 2012 15:43:46 GMT
Last-Modified:Thu, 12 Jul 2012 14:45:57 GMT
Server:Apache/2.2.22 (Amazon)
Vary:Accept-Encoding
I think the problem is in your Apache configuration.
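One way to narrow it down is to replay Chrome's request with curl and see whether Apache compresses the CSS when gzip is offered (the host and path below are placeholders for your intranet server). If Content-Encoding: gzip is missing here as well, the problem is the WAMP/Apache configuration rather than Chrome:

# Placeholder URL; use the real address of your CSS file.
curl -s -o /dev/null -D - \
  -H "Accept-Encoding: gzip,deflate,sdch" \
  -A "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5" \
  "http://intranet.local/css/main.css" \
  | grep -iE 'content-encoding|content-length|vary'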
