Cross domain session with Sinatra and AngularJS - ruby

I am using Sinatra as a web service and AngularJS to make the calls:
post '/loginUser' do
  session[:cui] = user['cui']
end

get '/cui' do
  return session[:cui].to_s
end
But it doesn't seem to work: the '/cui' call returns an empty string. Any help would be greatly appreciated.
UPDATE:
Setting headers['Access-Control-Allow-Credentials'] = 'true' in Sinatra allows the session to be sent, but it seems like the $http service is not using the browser's cookies.

On the Sinatra app:

before do
  headers['Access-Control-Allow-Methods'] = 'GET, POST, PUT, DELETE, OPTIONS'
  headers['Access-Control-Allow-Origin'] = 'http://localhost:4567'
  headers['Access-Control-Allow-Headers'] = 'accept, authorization, origin'
  headers['Access-Control-Allow-Credentials'] = 'true'
end
On the AngularJS app:

host = 'http://127.0.0.1:5445/'

@viewController = ($scope, $http) ->
  $scope.getCui = ->
    $http.get(host + 'cui', { withCredentials: true }).success (data) ->
      $scope.cui = data
      console.log data
Explanation:
AngularJS uses its own cookie handling, so we need to tell it explicitly to pass the cookies through the $http.get call using the {withCredentials: true} configuration object. Sinatra needs to accept the cross-domain cookies, so we need the headers mentioned above.
Note: the 'Access-Control-Allow-Origin' header cannot be the wildcard '*' when credentials are sent; it must name the exact origin.
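For reference, a minimal sketch of the Sinatra side with sessions switched on (the enable :sessions line is an assumption; it is not shown in the snippets above, but session[] does not persist between requests without it):

require 'sinatra'

enable :sessions  # without this, session[:cui] is never persisted

before do
  # the origin must match the Angular app's exact origin (see the note above)
  headers['Access-Control-Allow-Origin'] = 'http://localhost:4567'
  headers['Access-Control-Allow-Credentials'] = 'true'
end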

One option around this would be to configure an HTTP server with a proxy pass, so you could hit the same domain without incurring a cross-origin error. That way you can continue to properly maintain your abstractions as two separate apps.
Here is a brief example with nginx:
upstream angular_app {
  server localhost:3003;
}

upstream sinatra_app {
  server localhost:3004;
}

server {
  listen 80;
  server_name local.angular_app.com;
  root /Users/username/source/angular_app/;

  # static Angular assets are served from the root above
  location / {
    proxy_set_header Host $http_host;
    proxy_redirect off;
  }

  # anything under /api/ is proxied to the Sinatra app
  location ~ ^/api/(.*)$ {
    proxy_set_header Host $http_host;
    proxy_read_timeout 1200;
    proxy_pass http://sinatra_app/;
  }
}
By routing at the server level, you can successfully bypass domain restrictions AND you can keep the applications separate.
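With such a proxy in place, the Angular call can target the same origin; a minimal sketch under the config above (the /api/cui path is an assumption), needing neither withCredentials nor the CORS headers:

$scope.getCui = ->
  # same-origin request: the browser attaches the session cookie automatically
  $http.get('/api/cui').success (data) ->
    $scope.cui = data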

Related

index_not_found_exception - Elasticsearch

In image #1, as you can see, I am getting a valid ES response when firing a GET request. However, if I try doing the same thing through the NGINX reverse proxy that I have created and hit myip/elasticsearch, it returns the error shown in image #2. Can someone help me with this?
server {
  listen 80;
  server_name myip;

  location /elasticsearch/ {
    proxy_pass http://127.0.0.1:9200;
  }

  location /kibana/ {
    proxy_pass http://127.0.0.1:5601;
  }
}
The right way is to specify both of those slashes. The slash after 127.0.0.1:9200 is essential: without it, your request /elasticsearch/some/route would be passed upstream as-is, while with that slash it is passed as /some/route.

In nginx terms, that slash means you have specified a URI after the backend name. The URI prefix given in the location directive (/elasticsearch/) is stripped from the original URI (leaving some/route at this stage) and the URI specified after the backend name (/) is prepended to it, resulting in / + some/route = /some/route. You can specify any path in a proxy_pass directive; for example, with proxy_pass http://127.0.0.1:9200/prefix/ the request would be passed to the backend as /prefix/some/route.

Given all of the above, you can see that specifying location /elasticsearch { ... } instead of location /elasticsearch/ { ... } would give you //some/route instead of /some/route. I'm not sure this is exactly the cause of your problem, but configurations like

location /elasticsearch/ {
  proxy_pass http://127.0.0.1:9200/;
}

are more correct.
Now may I ask what you get with exactly this configuration in response to curl -i http://localhost:9200/ and curl -i http://localhost/? I want to see all the headers (of course, except those containing private information).
The problem is the path: nginx is passing it to the upstream unmodified. Add a trailing slash to the proxy_pass URLs.
server {
  listen 80;
  server_name myip;

  location /elasticsearch/ {
    proxy_pass http://127.0.0.1:9200/;
  }

  location /kibana/ {
    proxy_pass http://127.0.0.1:5601/;
  }
}
From the documentation:
Note that in the first example above, the address of the proxied server is followed by a URI, /link/. If the URI is specified along with the address, it replaces the part of the request URI that matches the location parameter. For example, here the request with the /some/path/page.html URI will be proxied to http://www.example.com/link/page.html. If the address is specified without a URI, or it is not possible to determine the part of URI to be replaced, the full request URI is passed (possibly, modified).
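To make the substitution concrete, here is how a request travels through the corrected block (the /_cluster/health path is just an example):

location /elasticsearch/ {
  proxy_pass http://127.0.0.1:9200/;
}
# incoming request:       GET /elasticsearch/_cluster/health
# the location prefix /elasticsearch/ is stripped, the trailing / is prepended
# passed to the backend:  GET /_cluster/health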

404 error for static assets when browser caching is implemented with nginx/web2py

I have a web2py configuration, operating on top of nginx, which is producing a 404 error when browser caching is implemented for certain static files. The problem is described here, and I'm now asking this question within a web2py context, because that may be relevant to the issue, or because there may be some web2py-specific workaround or solution.
nginx.conf looks like this:
worker_processes 3;

events {
  worker_connections 1024;
}

http {
  access_log [/...];
  error_log [/...] crit;
  include mime.types;
  sendfile on;

  server {
    server_name [...] [...];
    return 301 [...] $request_uri;
  }

  server {
    listen 127.0.0.1:[...];
    root [/...];

    location / {
      include uwsgi_params;
      uwsgi_pass [.../uwsgi.sock];
    }
  }
}
Adding the following block either before or after the "location" clause above causes the server to stop serving the static files that match the pattern in question:
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
  expires 1d;
}
It was suggested in the previous thread that this may be a uWSGI issue, although it's possible that the problem is caused by something else. How can I implement browser caching without causing the 404 issue?
It seems to me that you are serving only dynamic content. Note that nginx selects a single location block to process a request, and that block needs to be complete on its own.
In your case, the uwsgi configuration from the location / block needs to be replicated in any new dynamic locations you add. For example:
server {
  ...
  include uwsgi_params;

  location / {
    uwsgi_pass [.../uwsgi.sock];
  }

  location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 1d;
    uwsgi_pass [.../uwsgi.sock];
  }
}
Note that the include statement can live in the outer block, as shown, and its statements will be inherited by both locations (assuming it only contains uwsgi_param statements).
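An alternative worth sketching (an assumption on my part, not from the answer above): have nginx serve web2py's static files directly from disk, which sidesteps uwsgi for those requests and makes the expires header easy to apply. The filesystem path is hypothetical:

# /myapp/static/css/base.css resolves on disk to
# /path/to/web2py/applications/myapp/static/css/base.css
location ~* ^/(\w+)/static/ {
  root /path/to/web2py/applications/;  # hypothetical web2py install path
  expires 1d;
}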

Nginx Joomla Internationalization URL rewriting

I'm using Joomla in combination with Nginx, and I'm currently trying to achieve some URL rewriting for a website that supports several languages (Italian, French, Chinese, and German).
The URLs have the language code after the domain name, like so:
http://www.example.com/fr/test/test.html
or
http://www.example.com/de/test/test.html
I'm looking to rewrite the URLs so the language code is part of the subdomain:
so
http://www.example.com/fr/test/test.html
becomes
http://fr.example.com/test/test.html
Is there a way to achieve this with Nginx, or should I look into a third-party extension for Joomla (not my favorite choice)?
Thanks!
Update:
I wasn't clear enough: I wanted the redirection from the rewritten URL to be transparent. Here is what I came up with, thanks to VBart's help:
server {
  server_name ~^(?<lang>.+)\.example\.com$;

  location / {
    rewrite /(.*)$ /$lang/$1 break;
    proxy_pass http://www.example.com;
    proxy_redirect http://www.example.com http://$lang.example.com/$request_uri;
  }
}
Now, is there a way for Nginx to modify links on the fly in the served content? I.e., I want all the links in the generated page to look like http://fr... instead of http://.../fr/...
server {
  server_name ~^(?<lang>.+)\.example\.com$;
  ...
}

server {
  server_name www.example.com;
  rewrite ^/(?<lang>[a-z]+)(?<rest>.+)$ http://$lang.example.com$rest? permanent;
}

And the opposite example:

server {
  server_name ~^(?<lang>.+)\.example\.com$;
  return 301 http://www.example.com/$lang$request_uri;
}

server {
  server_name www.example.com;
  ...
}
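As for the follow-up question about rewriting links inside the served pages: one possibility, not covered in the answers above, is the sub_filter directive from ngx_http_sub_module (variables in both sub_filter arguments require nginx 1.9.4+). A sketch using the example hostnames:

location / {
  rewrite /(.*)$ /$lang/$1 break;
  proxy_pass http://www.example.com;

  # rewrite language-prefixed links in the response body on the fly
  sub_filter 'http://www.example.com/$lang/' 'http://$lang.example.com/';
  sub_filter_once off;

  # sub_filter cannot operate on compressed upstream responses
  proxy_set_header Accept-Encoding "";
}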

ExpressJS/Node: 404 images

I've set up a basic node/express server which serves public static javascript and css files fine, but returns a 404 error when attempting to serve images.
The strangest part is that everything works fine when run locally. When run on my remote server (Linode), the image problem arises.
It's really got me scratching my head... What might be the problem?
Here's the server:
/**
 * Module dependencies.
 */
var express = require('express')
  , routes = require('./routes');

var app = module.exports = express.createServer();

// Configuration
app.configure(function(){
  app.set('views', __dirname + '/views');
  app.set('view engine', 'jade');
  app.use(express.bodyParser());
  app.use(express.methodOverride());
  app.use(express.compiler({ src: __dirname + '/public', enable: ['less'] }));
  app.use(express.static(__dirname + '/public'));
  app.use(app.router);
});

app.configure('development', function(){
  app.use(express.errorHandler({ dumpExceptions: true, showStack: true }));
});

app.configure('production', function(){
  app.use(express.errorHandler());
});

// Globals
app.set('view options', {
  sitename: 'Site Name',
  myname: 'My Name'
});

// Routes
app.get('/', routes.index);
app.get('/*', routes.fourohfour);

app.listen(3000);
console.log("Express server listening on port %d in %s mode", app.address().port, app.settings.env);
If it works fine locally, maybe it's a case-sensitivity issue. Do your file names have capitals, etc.?
I had this issue, but it ended up being that I had caps in the trailing .JPG extension and was calling .jpg in the HTML. Windows is not case-sensitive with file names; CentOS is...
Alrighty, I got around the issue by renaming my images folder from "/public/images" to "/public/image". I don't know why the naming would cause an issue, but I'm glad that's all that was needed.
I had this exact issue. All user-generated images uploaded to /static/uploads were not being served by Express. The strange thing is that everything in static/images, static/js, and static/css was being served fine. I ensured it wasn't a permissions issue but was still getting a 404. Finally I configured NGINX to serve all of my static files (which is probably faster anyway) and it worked!
I'd still love to know why Express wasn't serving my images, though.
Here's my NGINX conf if anyone is having this issue:
server {
  # listen for connections on all hostnames/IPs at TCP port 80
  listen *:80;

  # name-based virtual hosting
  server_name staging.mysite.com;

  # error and access output
  error_log /var/log/nginx/error.log;
  access_log /var/log/nginx/access.log;

  location / {
    # redefine and add some request header lines which will be transferred to the proxied server
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;

    # set the location of the web root
    root /var/www/mysite.com;

    # set the address of the proxied node server; the port should be the one you set in express
    proxy_pass http://127.0.0.1:9001;

    # forbid all proxy_redirect directives at this level
    proxy_redirect off;
  }

  # do a case-insensitive regular expression match for any files ending in the list of extensions
  location ~* ^.+\.(html|htm|png|jpeg|jpg|gif|pdf|ico|css|js|txt|rtf|flv|swf)$ {
    # location of the web root for all static files
    root /var/www/mysite.com/static;

    # clear all access_log directives for the current level
    #access_log off;

    # set the Expires header to 31 December 2037 23:59:59 GMT, and the Cache-Control max-age to 10 years
    #expires max;
  }
}

Nginx proxy_pass to a password-protected upstream

I want to pass a request to an upstream server. The original URL is not password-protected, but the upstream server is. I need to inject a Basic auth username/password into the request, but I get errors when doing:
upstream supportbackend {
  server username:password@support.yadayada.com;
}

and

upstream supportbackend {
  server support.yadayada.com;
}

location /deleteuser {
  proxy_pass http://username:password@supportbackend;
}
You need to add proxy_set_header Authorization "Basic ...."; where the .... is the Base64 encoding of username:password.
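A hedged sketch of the whole thing (the credentials are placeholders; generate the Base64 value from your real username:password pair):

# $ echo -n 'username:password' | base64
# dXNlcm5hbWU6cGFzc3dvcmQ=

upstream supportbackend {
  server support.yadayada.com;
}

location /deleteuser {
  # inject the Basic auth header before proxying upstream
  proxy_set_header Authorization "Basic dXNlcm5hbWU6cGFzc3dvcmQ=";
  proxy_pass http://supportbackend;
}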
