An application was developed to support only HTTP. What configuration should we apply in Google App Engine to force it to support HTTPS? We can add an entry (for a handler) in app.yaml, but that only handles redirection. I just want to know whether there is anything else we can do to enforce this (in short, the app should work over HTTPS only). We could probably do something with an ingress/load balancer/SSL certificate etc., but that looks like a paid option and I don't want that currently.
You just have to set secure: always in app.yaml for your route handlers. Any HTTP call to your app will automatically be redirected to HTTPS.
I was having trouble putting a whole site behind HTTPS, and after searching I found a solution that worked for me:
https://cloud.google.com/appengine/docs/standard/python3/config/appref
What it says is that you can change your app.yaml file and place this in the handlers:
- url: /.*
  secure: always
  redirect_http_response_code: 301
  script: auto
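If you also want the application itself to refuse plain HTTP (rather than rely on the redirect alone), you can check the forwarded scheme in your handler code. A minimal sketch, assuming a Python 3 Flask app on App Engine standard (App Engine populates the X-Forwarded-Proto header on forwarded requests):

from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Behind App Engine's front end, the original scheme arrives in
    # X-Forwarded-Proto; redirect anything that came in over plain HTTP.
    if request.headers.get('X-Forwarded-Proto', 'https') == 'http':
        return redirect(request.url.replace('http://', 'https://', 1), code=301)

@app.route('/')
def index():
    return 'Served over HTTPS only.'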
I'm using Imagine via the LIIPImagineBundle for Symfony2 to create cached versions of images stored in S3.
Cached images are stored in a web-enabled S3 bucket served by CloudFront. However, the default LIIPImagineBundle implementation for S3 is far too slow for me (it checks whether the file exists on S3, then creates a URL either to the cached file or to the resolve functionality), so I've worked out my own workflow:
Pass the client the CloudFront URL where the cached image should exist.
The client requests the image via the CloudFront URL; if it does not exist, the S3 bucket has a redirect rule that 302-redirects the client to an Imagine webserver path, which generates the cached version of the file and saves it to the appropriate location on S3.
The webserver 301-redirects the client back to the CloudFront URL, where the image is now stored, and the client is served the image.
This works fine as long as I don't use CloudFront. The problem appears to be that CloudFront caches the 302 redirect response (even though the HTTP spec states that it shouldn't). Thus, if I use CloudFront, the client is sent in an endless redirect loop back and forth between the webserver and CloudFront, and every subsequent request for the file still redirects to the webserver even after the file has been generated.
If I use S3 directly instead of CloudFront, there are no issues and this solution is solid.
According to Amazon's documentation, S3 redirect rules don't allow me to specify custom headers (to set Cache-Control headers or the like), and I don't believe that CloudFront allows me to control the caching of redirects (if it does, it's well hidden). CloudFront's invalidation options are so limited that I don't think they will work (you can only invalidate 3 objects at any time). I could pass an argument back to CloudFront on the first redirect (from the Imagine webserver) to break the endless loop (e.g. image.jpg?1), but subsequent requests to the same object would still 302 to the webserver and then 301 back to CloudFront even though the file exists. I feel like there should be an elegant solution to this problem, but it's eluding me. Any help would be appreciated!
I'm solving this same issue by setting the "Default TTL" in the CloudFront "Cache Behavior" settings to 0, while still allowing my resized images to be cached by setting the CacheControl metadata on the S3 file with max-age=12313213.
This way redirects are not cached (the default TTL behavior) but my resized images are (via the CacheControl max-age on an S3 cache hit).
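For illustration, the CacheControl metadata can be set when the resized image is uploaded. A minimal sketch using boto3 (the bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')

# Upload the resized image with its own Cache-Control header, so CloudFront
# caches the image itself even though the distribution's Default TTL is 0
# (which keeps the 302 redirects uncached).
with open('image.jpg', 'rb') as body:
    s3.put_object(
        Bucket='example-bucket',      # placeholder bucket name
        Key='cache/media/image.jpg',  # placeholder cache path
        Body=body,
        ContentType='image/jpeg',
        CacheControl='max-age=31536000',
    )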
If you really need to use CloudFront here, the only thing I can think of is to not directly subject the user to the 302/301 dance. Could you introduce some sort of proxy script / page to front S3 and that whole process (or does that then defeat the point)? A minimal sketch of such a proxy follows the list below.
So a cache miss would look like this:
Visitor requests the proxy page through CloudFront.
The proxy page requests the image from S3.
The proxy page receives a 302 from S3 and follows it to the Imagine webserver.
Ideally it just returns the image from there (while letting it update S3), or follows the 301 back to S3.
The proxy page returns the image to the visitor.
The image is cached by CloudFront.
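Here is a minimal sketch of such a proxy, assuming Flask and the requests library; the S3 website URL and route are placeholders. The proxy follows the 302/301 chain server-side, so CloudFront only ever sees (and caches) the final image response:

from flask import Flask, Response
import requests

app = Flask(__name__)
S3_WEBSITE = 'http://example-bucket.s3-website-us-east-1.amazonaws.com'  # placeholder

@app.route('/proxy/<path:key>')
def proxy_image(key):
    # requests follows the S3 302 -> Imagine webserver -> 301 -> S3 chain itself.
    upstream = requests.get(f'{S3_WEBSITE}/{key}', allow_redirects=True, timeout=30)
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get('Content-Type', 'application/octet-stream'),
        headers={'Cache-Control': 'public, max-age=31536000'},  # let CloudFront cache it
    )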
TL;DR: Make use of Lambda@Edge.
We faced the same problem using LiipImagineBundle.
For development, NGINX serves the content from the local filesystem and resolves images that are not yet stored using a simple proxy_pass:
location ~ ^/files/cache/media/ {
    try_files $uri @public_cache_fallback;
}

location @public_cache_fallback {
    rewrite ^/files/cache/media/(.*)$ /media/image-filter/$1 break;
    proxy_set_header X-Original-Host $http_host;
    proxy_set_header X-Original-Scheme $scheme;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://0.0.0.0:80$uri;
}
As soon as you want to integrate CloudFront, things get more complicated due to caching. While you can easily add S3 (as a static website, see below) as an origin, CloudFront itself will not follow the resulting redirects but will return them to the client. In the default configuration CloudFront will then cache this redirect and NOT the desired image (see https://stackoverflow.com/a/41293603/6669161 for a workaround with S3).
The best way would be to use a proxy as described here. However, this adds another layer, which might be undesirable. Another solution is to use Lambda@Edge functions (see here). In our case, we use S3 as a normal origin and make use of the "Origin Response" event (you can edit this in the "Behaviors" tab of your distribution). Our Lambda function just checks whether the request to S3 was successful. If it was, we can simply forward the response. If it was not, we assume that the desired object has not yet been created. The Lambda function then calls our application, which generates the object and stores it in S3. For simplicity, the application replies with a redirect (to CloudFront again) too, so we can just forward that to the client. A drawback is that the client itself will see one redirect. Also make sure to set the cache headers so that CloudFront does not cache the Lambda redirect.
Here is an example Lambda function. This one just redirects the client to the resolve URL (which then redirects to CloudFront again). Keep in mind that this results in more round trips for the client (which is not ideal). However, it reduces the execution time of your Lambda function. Make sure to add the base Lambda@Edge policy (related tutorial).
env = {
    'Protocol': 'http',
    'HostName': 'localhost:8000',
    'HttpErrorCodeReturnedEquals': '404',
    'HttpRedirectCode': '307',
    'KeyPrefixEquals': '/cache/media/',
    'ReplaceKeyPrefixWith': '/media/resolve-image-filter/'
}

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    if int(response['status']) == int(env['HttpErrorCodeReturnedEquals']):
        request = event['Records'][0]['cf']['request']
        original_path = request['uri']
        if original_path.startswith(env['KeyPrefixEquals']):
            new_path = env['ReplaceKeyPrefixWith'] + original_path[len(env['KeyPrefixEquals']):]
        else:
            new_path = original_path
        location = '{}://{}{}'.format(env['Protocol'], env['HostName'], new_path)
        response['status'] = env['HttpRedirectCode']
        response['statusDescription'] = 'Resolve Image'
        response['headers']['location'] = [{
            'key': 'Location',
            'value': location
        }]
        response['headers']['cache-control'] = [{
            'key': 'Cache-Control',
            'value': 'no-cache'  # also make sure your minimum TTL is set to 0 (for the distribution)
        }]
    return response
If you just want to use S3 as a cache (without CloudFront), static website hosting and a redirect rule will redirect clients to the resolve URL in case of missing cache files (you will need to rewrite the S3 cache resolver URLs to the website version, though):
<RoutingRules>
  <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
      <KeyPrefixEquals>cache/media/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <Protocol>http</Protocol>
      <HostName>localhost</HostName>
      <ReplaceKeyPrefixWith>media/image-filter/</ReplaceKeyPrefixWith>
      <HttpRedirectCode>307</HttpRedirectCode>
    </Redirect>
  </RoutingRule>
</RoutingRules>
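If you prefer to manage this rule in code, the same configuration can be applied with boto3; a sketch with a placeholder bucket name:

import boto3

s3 = boto3.client('s3')

# Mirror of the XML routing rule above: on a 403 for a missing cache file,
# redirect the client to the resolve URL with a 307.
s3.put_bucket_website(
    Bucket='example-bucket',  # placeholder
    WebsiteConfiguration={
        'IndexDocument': {'Suffix': 'index.html'},
        'RoutingRules': [{
            'Condition': {
                'HttpErrorCodeReturnedEquals': '403',
                'KeyPrefixEquals': 'cache/media/',
            },
            'Redirect': {
                'Protocol': 'http',
                'HostName': 'localhost',
                'ReplaceKeyPrefixWith': 'media/image-filter/',
                'HttpRedirectCode': '307',
            },
        }],
    },
)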
Either I do not quite get how I can have Google or any social network use PhantomJS to render my site, or I have done something wrong.
I am using Nginx, AngularJS with HTML5-mode URLs, and PhantomJS.
Furthermore, I use steeve/angular-seo to handle the PhantomJS part. For this I have included angular-seo.js and added
$scope.htmlReady();
at the end of every controller.
My Nginx config for my domain dev.example.com includes this at the end of the server block:
if ($args ~ "_escaped_fragment_=(.*)") {
    rewrite ^ /snapshot${uri};
}

location /snapshot {
    proxy_pass http://dev.example.com:8888;
    proxy_connect_timeout 60s;
}
My index.html includes this meta tag to let Google see that it is an AJAX site:
<meta name="fragment" content="!" />
Then I run
phantomjs --disk-cache=no angular-seo-server.js 8888 http://dev.example.com
in the server console.
Now I have tried to visit my site manually at http://dev.example.com/?_escaped_fragment_=/,
but the page that comes back is not displayed correctly, and the snapshot is also not saved to the snapshot folder.
I am using the Grails Resources plugin. On the client I am using require.js to fetch JS.
My require.js config:
baseUrl: '/js/lib',
With the Resources plugin enabled:
the browser would make a request for /js/lib/abc.js, wasting ~300 ms;
on reaching the server, it would be redirected to /static/2432yi4h32kh4232h4k2h34ll.js;
the browser would find this file in its cache and serve it.
So I disabled the cached-resources plugin using:
grails.resources.mappers.hashandcache.excludes = ['**/*.js']
and my new require.js config:
baseUrl: '/static/js/lib',
urlArgs: "bust=" + application_version,
Removing cached-resources solved the redirect issue, but it also removed the Expires header that was being set for JS files, causing browsers not to cache the JS files at all.
How can I disable only the name hashing in cached-resources while keeping the Expires headers it sets?
Otherwise, are there any Grails plugins I can use to set these headers that work well with the Resources plugin?
I am using Tomcat and HAProxy to serve content.
I think the best solution is to put the hashed JS file name in the require definition, not the original clear name.
You can echo the hashed name using the Resources external tag:
<r:external uri="js/custom.js"/>
<script type="text/javascript">
var urlOfCSSToLoadInJSCode = '${r.external(uri:"css/custom.css").encodeAsJavaScript()}';
</script>
<r:external uri="icons/favicon.ico"/>
I am attempting to configure an ownCloud server that rewrites all incoming requests and ships them back out at the exact same domain and request URI, but changes the scheme from http to https.
This failed miserably. I tried:
redirect 301 https://$hostname/$request_uri
and
rewrite ^ https://$hostname/$request_uri
Anyway, after removing that, just to make sure the basic nginx configuration would work as it had prior to adding the SSL redirects/rewrites, it will NOT stop changing the scheme to https.
Is there a cached list somewhere in nginx's configuration that keeps hold of redirect/rewrite protocols? I cleared my browser cache completely and it will not stop.
AH HA!
In config/config.php there was this line:
'forcessl' => true,
That stupid line got switched on when the server received a request on port 443.
With it turned off, standard HTTP ownCloud works, and neither Apache nor nginx redirects to SSL.
Phew.