Allowing AJAX requests to a service behind an nginx proxy based on request_method

I'm new to nginx, so pardon me if I'm being obtuse. I have a service sitting behind an nginx proxy, and I am using auth_basic to prevent anyone from hitting it for now. I want to allow an Angular app to hit this service, but I want to control who is allowed to perform particular request methods (GET, POST, DELETE, OPTIONS). Using auth_basic doesn't seem like the best bet, since I don't want to hardcode the login/password into the JS (duh). The only way I can figure out how to do this is to say:
if ($request_method = OPTIONS) {
    proxy_pass_header Access-Control-Allow-Origin *;
    proxy_pass_header Access-Control-Allow-Methods GET, OPTIONS;
    # etc...
}
At this point, I'd allow GET and OPTIONS requests from anyone, but I want to restrict POST and DELETE to only certain locations (such as internally, or from a trusted IP). Currently, though, if I put proxy_pass_header into that block, nginx says the directive is not allowed there. I've seen other examples where people use add_header inside an if block like that, so I'm confused about why this isn't working.
So first of all, is this the best way of doing things, and if not, does someone have another recommendation? If it is the best way of handling things, what am I doing wrong here?
Any help would be appreciated. I find the nginx documentation to be very confusing.

You could use the allow/deny directives from ngx_http_access_module:
location / {
    deny  192.168.1.1;
    allow 192.168.1.0/24;
    allow 10.1.1.0/16;
    allow 2001:0db8::/32;
    deny  all;
}
http://nginx.org/en/docs/http/ngx_http_access_module.html
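Since the question also asks about leaving GET/OPTIONS open while restricting POST/DELETE to trusted addresses, allow/deny can be combined with the limit_except directive; a sketch, where the address range and the "backend" upstream name are placeholders:

```nginx
location / {
    # GET (which implies HEAD) and OPTIONS stay open to everyone;
    # all other methods (POST, DELETE, ...) fall under the rules below.
    limit_except GET OPTIONS {
        allow 10.1.1.0/16;       # trusted internal range (placeholder)
        deny  all;
    }
    proxy_pass http://backend;   # placeholder upstream
}
```

This avoids the if block entirely, which sidesteps the "directive is not allowed" problem.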

Related

Laravel forcing HTTP for assets

This is a little strange, because most of the questions here ask how to force HTTPS.
While learning AWS Elastic Beanstalk, I am hosting a Laravel site there. Everything is fine, except that none of my JavaScript and CSS files are being loaded.
I have referenced them in the Blade view as:
<script src="{{asset('assets/backend/plugins/jquery/jquery.min.js')}}"></script>
The first thing I tried was looking into the file/folder permissions in the root of my project by SSHing into the EC2 instance. That didn't work, even when I set the permissions on the public folder to 777.
Later I found out that the site's main page URL was HTTP while all the asset URLs were HTTPS.
I don't want to get into SSL certificates just yet, if possible.
Is there any way I can force my asset URLs to be HTTP only?
Please forgive my naivety. Any help would be appreciated.
This usually happens when your site is behind a reverse proxy: the URL helper facade trusts your local instance behind the proxy, which might not use SSL, so the generated scheme can be misleading/wrong.
That is probably the case on an EC2 instance, as SSL termination happens at the load balancers/HA proxies.
I usually add the following to my AppServiceProvider.php:
// requires: use Illuminate\Support\Str; at the top of the file
public function boot()
{
    if (Str::startsWith(config('app.url'), 'https')) {
        \URL::forceScheme('https');
    } else {
        \URL::forceScheme('http');
    }
}
Of course this requires that you've set app.url / APP_URL. If you are not using it, you can just get rid of the if statement, but that is a little less elegant and prevents you from developing over non-HTTPS.

How to enable CORS on Sonatype Nexus?

I want to develop a monitoring web app for different things, with AngularJS as the frontend. One of the core elements is showing an overview of Nexus artifacts/repositories.
When I request the REST API, I get the following error back:
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://localhost:9090' is therefore not allowed access.
To fix this error, I need to modify the response headers to enable CORS.
It would be great if anyone is familiar with that type of problem and could give me an answer!
The CORS headers are present in the response of the system you are trying to invoke. (They are checked on the client side, i.e. in the browser; you could implement the calls on your own backend and ignore those headers there, but that could become quite hard to maintain.) To change them you'll need a proxy, so your application will not call the URL directly, like:
fetch("http://localhost:9090/api/sometest")
There are at least two ways. One is to add a proxy directly in front of the Nexus server and modify the headers for everyone; I do not really recommend this, for security reasons. :)
The other, more maintainable solution is to go through the local domain of the monitoring web app, as follows:
fetch("/proxy/nexus/api/sometest")
To achieve this you need to setup a proxy where your application is running. This could map the different services which you depend on, and modify the headers if necessary.
I do not know which HTTP server you are going to use for your application, but here are some proxy configuration documents on the topic:
For Apache HTTPD mod_proxy you could use a configuration similar to this:
ProxyPass "/proxy/nexus/" "http://localhost:9090/"
ProxyPassReverse "/proxy/nexus/" "http://localhost:9090/"
It may also be necessary to handle cookies, so you may need to take a look at the following directives:
ProxyPassReverseCookiePath
ProxyPassReverseCookieDomain
For an nginx location block you could use something like this:
location /proxy/nexus/ {
    proxy_pass http://localhost:9090/;
}
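nginx has analogues of the Apache cookie directives mentioned above, proxy_cookie_path and proxy_cookie_domain, in case the upstream sets cookies scoped to its own path or domain; a sketch (the rewritten paths are illustrative):

```nginx
location /proxy/nexus/ {
    proxy_pass http://localhost:9090/;
    # Rewrite the Path/Domain attributes of upstream Set-Cookie headers
    # so cookies stay valid under the /proxy/nexus/ prefix:
    proxy_cookie_path   /         /proxy/nexus/;
    proxy_cookie_domain localhost $host;
}
```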
For node.js see documentation: https://github.com/nodejitsu/node-http-proxy
// Assumes `npm install http-proxy`; `streamify(req.rawBody)` is a helper
// (not shown in the original) that turns the buffered raw body back into
// a stream so it can be replayed to the target.
const httpProxy = require('http-proxy');
const proxy = httpProxy.createProxyServer({});

module.exports = (req, res, next) => {
  proxy.web(req, res, {
    target: 'http://localhost:4003/',
    buffer: streamify(req.rawBody)
  }, next);
};

How to tell CloudFront not to cache 302 responses from S3 redirects, or how else to work around this image caching generation issue

I'm using Imagine via the LIIPImagineBundle for Symfony2 to create cached versions of images stored in S3.
Cached images are stored in an S3 web enabled bucket served by CloudFront. However, the default LIIPImagineBundle implementation of S3 is far too slow for me (checking if the file exists on S3 then creating a URL either to the cached file or to the resolve functionality), so I've worked out my own workflow:
Pass the client the CloudFront URL where the cached image should exist.
The client requests the image via the CloudFront URL; if it does not exist, the S3 bucket has a redirect rule which 302-redirects the user to an Imagine webserver path, which generates the cached version of the file and saves it to the appropriate location on S3.
The webserver then 301-redirects the user back to the CloudFront URL, where the image is now stored, and the client is served the image.
This works fine as long as I don't use CloudFront. The problem appears to be that CloudFront caches the 302 redirect response (even though the HTTP spec states that it shouldn't). Thus, if I use CloudFront, the client is sent into an endless redirect loop back and forth between the webserver and CloudFront, and every subsequent request for the file still redirects to the webserver even after the file has been generated.
If I use S3 directly instead of cloudfront there are no issues and this solution is solid.
According to Amazon's documentation, S3 redirect rules don't allow me to specify custom headers (to set Cache-Control headers or the like), and I don't believe CloudFront lets me control the caching of redirects (if it does, it's well hidden). CloudFront's invalidation options are so limited that I don't think they will work (you can only invalidate 3 objects at any time). I could pass an argument back to CloudFront on the first redirect (from the Imagine webserver) to break the endless loop (e.g. image.jpg?1), but subsequent requests for the same object will still 302 to the webserver and then 301 back to CloudFront even though it exists. I feel like there should be an elegant solution to this problem, but it's eluding me. Any help would be appreciated!
I'm solving this same issue by setting the "Default TTL" in CloudFront "Cache Behavior" settings to 0, but still allowing my resized images to be cached by setting the CacheControl MetaData on the S3 file with max-age=12313213.
This way redirects will not be cached (default TTL behavior) but my resized images will be (CacheControl max-age on s3 cache hit).
If you really need to use CloudFront here, the only thing I can think of is to not directly subject the user to the 302/301 dance. Could you introduce some sort of proxy script/page to front S3 and that whole process? (Or does that defeat the point?)
So a cache miss would look like this:
Visitor requests proxy page through CloudFront.
Proxy page requests image from S3.
Proxy page receives 302 from S3 and follows it to the Imagine web server.
Ideally the proxy just returns the image from here (while letting it update S3), or follows the 301 back to S3.
Proxy page returns image to visitor.
Image is cached by CloudFront.
TL;DR: Make use of Lambda@Edge.
We face the same problem using LiipImagineBundle.
For development, an NGINX server serves the content from the local filesystem and resolves images that are not yet stored using a simple proxy_pass:
location ~ ^/files/cache/media/ {
    try_files $uri @public_cache_fallback;
}

location @public_cache_fallback {
    rewrite ^/files/cache/media/(.*)$ /media/image-filter/$1 break;
    proxy_set_header X-Original-Host $http_host;
    proxy_set_header X-Original-Scheme $scheme;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://0.0.0.0:80/$uri;
}
As soon as you want to integrate CloudFront, things get more complicated due to caching. While you can easily add S3 (static website hosting, see below) as an origin, CloudFront itself will not follow the resulting redirects but will return them to the client. In the default configuration CloudFront will then cache this redirect and NOT the desired image (see https://stackoverflow.com/a/41293603/6669161 for a workaround with S3).
The best way would be to use a proxy as described here. However, this adds another layer, which might be undesirable. Another solution is to use Lambda@Edge functions (see here). In our case, we use S3 as a normal origin and make use of the "Origin Response" event (you can edit the triggers in the "Behaviors" tab of your distribution). Our Lambda function just checks whether the request to S3 was successful. If it was, we can simply forward the response. If it was not, we assume that the desired object has not yet been created. The Lambda function then calls our application, which generates the object and stores it in S3. For simplicity, the application replies with a redirect (to CloudFront again), too, so we can just forward that to the client. A drawback is that the client will see one extra redirect. Also make sure to set the cache headers so that CloudFront does not cache the Lambda redirect.
Here is an example Lambda function. This one just redirects the client to the resolve URL (which then redirects to CloudFront again). Keep in mind that this results in more round trips for the client (which is not perfect), but it reduces the execution time of your Lambda function. Make sure to add the base Lambda@Edge policy (related tutorial).
env = {
    'Protocol': 'http',
    'HostName': 'localhost:8000',
    'HttpErrorCodeReturnedEquals': '404',
    'HttpRedirectCode': '307',
    'KeyPrefixEquals': '/cache/media/',
    'ReplaceKeyPrefixWith': '/media/resolve-image-filter/'
}

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    if int(response['status']) == int(env['HttpErrorCodeReturnedEquals']):
        request = event['Records'][0]['cf']['request']
        original_path = request['uri']
        if original_path.startswith(env['KeyPrefixEquals']):
            new_path = env['ReplaceKeyPrefixWith'] + original_path[len(env['KeyPrefixEquals']):]
        else:
            new_path = original_path
        location = '{}://{}{}'.format(env['Protocol'], env['HostName'], new_path)
        response['status'] = env['HttpRedirectCode']
        response['statusDescription'] = 'Resolve Image'
        response['headers']['location'] = [{
            'key': 'Location',
            'value': location
        }]
        response['headers']['cache-control'] = [{
            'key': 'Cache-Control',
            'value': 'no-cache'  # Also make sure your minimum TTL is set to 0 (for the distribution)
        }]
    return response
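The core of the handler is the key-prefix rewrite that mirrors S3's KeyPrefixEquals/ReplaceKeyPrefixWith routing rule. Isolated as a small standalone function (a sketch; the default prefixes match the env dict in the Lambda example, not anything mandated by CloudFront), it is just:

```python
def replace_key_prefix(path, key_prefix='/cache/media/',
                       replacement='/media/resolve-image-filter/'):
    """Rewrite a cache-miss URI to the application's resolve URL,
    mimicking S3's ReplaceKeyPrefixWith behavior."""
    if path.startswith(key_prefix):
        return replacement + path[len(key_prefix):]
    # Paths outside the cache prefix pass through unchanged.
    return path

print(replace_key_prefix('/cache/media/foo/bar.jpg'))
# -> /media/resolve-image-filter/foo/bar.jpg
print(replace_key_prefix('/other/baz.png'))
# -> /other/baz.png
```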
If you just want to use S3 as a cache (without CloudFront), static website hosting with a redirect rule will redirect clients to the resolve URL when a cache file is missing (you will need to rewrite S3 cache-resolver URLs to the website version, though):
<RoutingRules>
  <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
      <KeyPrefixEquals>cache/media/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <Protocol>http</Protocol>
      <HostName>localhost</HostName>
      <ReplaceKeyPrefixWith>media/image-filter/</ReplaceKeyPrefixWith>
      <HttpRedirectCode>307</HttpRedirectCode>
    </Redirect>
  </RoutingRule>
</RoutingRules>

nginx - can proxy caching be configured so files are saved without HTTP headers or otherwise in a more "human friendly" format?

I'm curious if nginx can be configured so the cache is saved out in some manner that would make the data user-friendly? While all my options might fall short of anything anyone would consider "human friendly" I'm in general interested in how people configure it to meet their specific needs. The documentation may be complete but I am very much a learn by example type guy.
My current configuration is from an example I ran across; as it stands it is not much more than proof to me that nginx correctly proxy-caches the data:
http {
    # unrelated stuff...
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
    proxy_temp_path /var/www/cache/tmp;

    server {
        server_name g.sente.cc;

        location /stu/ {
            proxy_pass http://sente.cc;
            proxy_cache my-cache;
            proxy_cache_valid 200 302 60m;
            proxy_cache_valid 404 1m;
        }
    }
}
Nginx has two methods to cache content:
proxy_store is when nginx builds a mirror: it stores the file, preserving the same path, while proxying from the upstream. After that, nginx serves the mirrored file for all subsequent requests to the same URI. The downside is that nginx does not control expiration; however, you are able to remove (and add) files at will.
proxy_cache is when nginx manages a cache, checking expiration, cache size, etc.
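For the "human friendly" layout the question asks about, proxy_store is the closer fit: response bodies land on disk under their URL path, with no cache-key hashing and no stored HTTP headers. A minimal sketch, reusing the upstream from the question's config; the mirror path is a placeholder:

```nginx
location /stu/ {
    root /var/www/mirror;        # served as plain files: /var/www/mirror/stu/...
    error_page 404 = @fetch;     # on a miss, fall through to the upstream
}

location @fetch {
    proxy_pass http://sente.cc;
    proxy_store on;              # save the plain response body to disk
    proxy_store_access user:rw group:rw all:r;
    root /var/www/mirror;
}
```

Note the trade-off stated above: nothing expires these files, so you would prune the mirror yourself (e.g. with a cron job).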

NGINX : Allow users to password protect directories themselves?

To my knowledge, nginx can only password-protect directories from within the configuration file(s). That works nicely, but it is not a real option for end-users who a) cannot edit the configs and b) would break the configs if they could.
Right now I am thinking about a web-based representation of the directory structure where they can point and click, rewriting the configs and re-kill-HUP-ing nginx...
But somehow the whole idea feels like I am about to rewrite cPanel v0.0.1 ;-)
Anybody here had the same problem and came up with an elegant and maintainable solution?
I have full control over the server.
Thanks!
You don't really want users to change the configs, do you?
For password-protection, a htpasswd-file is sufficient, if the realm always stays the same.
And nginx itself can check for a file's existence.
So, this is what could do the job:
location ~ ^/([^/]*)/(.*) {
    if (-f $document_root/$1/.htpasswd) {
        error_page 599 = @auth;
        return 599;
    }
}

location @auth {
    auth_basic "Password-protected";
    auth_basic_user_file $document_root/$1/.htpasswd;
}
Works for me with nginx-0.7.65.
0.6.x and earlier releases are probably a no-go.
