Passenger not logging Ruby STDOUT/STDERR?

On a server I built within the last six months or so, I've noticed I haven't been able to get Passenger to log any Ruby output, nor does it print friendly error pages, even when specifying passenger_friendly_error_pages on in both the http and server blocks of the Nginx config file.
I'm using Nginx built with Passenger, and everything else has been working perfectly fine. Whenever I get an application error, it's almost impossible to get any clues from the production environment: there are no error logs, no stack trace, nothing except the usual "Incomplete response received from server", which is what I see in both the browser and the Passenger log.
Here's a sample of what my configuration looks like:
http {
    passenger_root /usr/local/rvm/gems/ruby-2.6.3/gems/passenger-6.0.2;
    passenger_ruby /usr/local/rvm/gems/ruby-2.6.3/wrappers/ruby;
    passenger_log_level 2;
    passenger_max_pool_size 6;
    passenger_min_instances 2;
    passenger_pool_idle_time 60;
    passenger_spawn_method direct;
    passenger_friendly_error_pages on;
    passenger_log_file /opt/services/nginx-1.15.8/logs/passenger.log;

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 120;
    client_max_body_size 200M;
    ...

    server {
        listen 443 ssl;
        server_name app.domain.local;
        root /opt/production/app/public;
        passenger_enabled on;
        access_log logs/myapp.log;
        passenger_friendly_error_pages on;
        rack_env production;
        ssl_certificate ssl/example.crt;
        ssl_certificate_key ssl/example.key;

        location /scripts/ {
            expires 30d;
            add_header Pragma public;
            add_header Cache-Control "public";
        }
    }
}
Any ideas what could be going on?

So it turns out I had to set passenger_log_level to 3. The documentation says this:
0 (crit): Show only critical errors which would cause Passenger to abort.
1 (error): Also show non-critical errors – errors that do not cause Passenger to abort.
2 (warn): Also show warnings. These are not errors, and Passenger continues to operate correctly, but they might be an indication that something is wrong with the system.
3 (notice): Also show important informational messages. These give you a high-level overview of what Passenger is doing.
4 (info): Also show less important informational messages. These messages show more details about what Passenger is doing. They're high-level enough to be readable by users.
5 (debug): Also show the most important debugging information. Reading this information requires some system or programming knowledge, but the information shown is typically high-level enough to be understood by experienced system administrators.
6 (debug2): Show more debugging information. This is typically only useful for developers.
7 (debug3): Show even more debugging information.
One would assume anything written to STDERR by Ruby would be considered at least a warning (log level 2). Odd behaviour.
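For reference, here is a minimal sketch of the change that made the Ruby output show up for me; the paths are the ones from the config above, and only the log level differs:
http {
    ...
    passenger_log_file /opt/services/nginx-1.15.8/logs/passenger.log;
    # Level 3 (notice) or higher is what finally surfaced the Ruby STDOUT/STDERR output
    passenger_log_level 3;
    ...
}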

Related

Pass data from Nginx to Sinatra app

We have a Jekyll app that serves a static web page, which includes a contact form.
The contact form request then gets POSTed to our Sinatra app at http://sub.example.com/send_email, which delivers an email to inform us about the new contact.
So far so good; everything works fine if we run the Sinatra app using
bundle exec rackup config.ru
However, when running the app under Nginx and Passenger (passenger_ruby), our Sinatra app does not receive any data whatsoever.
We have set up the server like this:
server {
    listen 80;
    server_name sub.example.com;
    error_log /home/example/error.log warn;
    access_log /home/example/access.log;
    passenger_enabled on;
    passenger_ruby /home/example/.rbenv/shims/ruby;
    root /home/example/evil_contact/public;
}
Now, when accessing http://sub.example.com, neither access.log nor error.log gets any entries. It is as if the request never arrives at Nginx, or is not being passed through.
Let me know if there is anything else you need.
Maybe we are just missing something obvious here.
Thank you in advance.
Alrighty,
after a dozen coffees, the solution was not to change the Nginx settings but to check the DNS records.
After all, the DNS entry still had the wrong IP assigned, so the requests were going somewhere else entirely.
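In hindsight, one way to confirm whether requests were reaching this Nginx at all would have been a temporary catch-all server that logs everything it receives. This is only a sketch of that idea and was not part of the original setup; the log path is made up for illustration:
server {
    listen 80 default_server;
    server_name _;
    # Requests for sub.example.com landing here (or not showing up at all)
    # would point to DNS / Host header problems rather than the vhost config.
    access_log /home/example/catchall_access.log;
    return 444;
}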

Nginx using etags with gzipped content when proxying

I want to use Nginx as a proxy between clients and a set of my apps. The problem I encountered is:
I have an app (on a different machine than Nginx) that serves static content (images, CSS, etc.)
That static content is gzipped
I configured nginx as follows:
location ^~ /static/ {
    etag on;
    expires 1d;
    more_set_headers "Vary: Accept-Encoding";

    location ~* \.(css|js)$ {
        gzip_static on;
        expires -1;
        proxy_pass http://my_upstream;
    }

    proxy_pass http://my_upstream;
}
I was expecting to have ETags working for things like JS and CSS, but they are not. I'm supposing it's because the JS and CSS files are not on the same machine as Nginx, which is the problem gzip_static on is supposed to fix.
So basically my question is: is it possible to have it working that way? And if it is, how do I do it? :)
According to this forum post, etag on is not compatible with a proxy_pass that may modify the response (as gzip_static does):
The ETag header used in its strong form means that it must be changed whenever bits of an entity are changed. This basically means that ETag headers have to be removed by filters which change a response content.
As suggested in that same post, passing through Last-Modified will usually be enough for clients to make conditional requests.
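As a rough illustration of that advice, the question's location could be reduced to something like the following sketch (not a tested setup):
location ^~ /static/ {
    expires 1d;
    more_set_headers "Vary: Accept-Encoding";

    location ~* \.(css|js)$ {
        # Ask clients to revalidate CSS/JS on every request; the upstream's
        # Last-Modified (and ETag, if it sends one) is passed through and is
        # enough for those conditional requests.
        expires -1;
        proxy_pass http://my_upstream;
    }

    # etag on and gzip_static on are dropped: etag only applies to files Nginx
    # serves from disk, and gzip_static only looks for local precompressed
    # files, so neither has an effect on proxied content.
    proxy_pass http://my_upstream;
}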

Firefox "Unable to connect" without www?

I wasn't sure whether to put this on Server Fault or on Stack Overflow; it doesn't seem to be a server issue, so I thought here would be best.
I am currently working on a university website, and for some reason Firefox refuses to load the site unless you use www (e.g. www.university.edu). Every other browser accepts university.edu and simply redirects to www.university.edu, as Nginx is set up to do. My Nginx config:
server {
    listen 80;
    server_name university.edu www;
    rewritei ^http://www.university.edu$request_uri? permanent;
}
server {
    listen 80;
    server_name www.university.edu static.university.edu m.university.edu www.university.com;
    .
    .
    .
}
So what should happen is: when a request comes in for www.university.edu, the second block catches it and everything runs normally, but if a request comes in for university.edu, the first block catches it and redirects it to the second block. For some reason Firefox is not doing this.
Any ideas what could be causing this issue?
Update 1:
rewritei is not misspelled. The university's Nginx was patched before it was compiled to enable regex case insensitivity, exposed as the directive "rewritei". Also, after playing around with the site, I figured out that if you visit the site at www.university.edu first and then try university.edu, it will load; but if you clear the cache and try to visit university.edu, it will not load until you visit www.university.edu.
You have a typo: it should be "rewrite". Also try removing the bare "www" from server_name and using a plain 301 redirect:
server {
    listen 80;
    server_name university.edu;
    return 301 http://www.university.edu$request_uri;
}
Also take a look at the pitfalls on rewrite - http://wiki.nginx.org/Pitfalls#Taxing_Rewrites
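Putting that together with the question's second block, the intended pair of server blocks would look roughly like this; it is only a sketch using the names from the question, with the suggested return 301 in place of the custom rewritei:
server {
    listen 80;
    server_name university.edu;
    # Bare domain: permanently redirect everything to the www host
    return 301 http://www.university.edu$request_uri;
}
server {
    listen 80;
    server_name www.university.edu static.university.edu m.university.edu www.university.com;
    # ... normal site configuration ...
}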

Debugging in Nginx yields 1 line and infinite frustration - How to Debug the Debugger

Yay for experimental projects! I decided to try setting up my blog with Facebook's new hhvm-fastcgi and WordPress. I followed the instructions and am using the following Nginx configuration:
server {
    listen *:80 default;
    server_name _;
    access_log /home/blogs/logs/nginx/access.log;
    error_log /home/blogs/logs/nginx/error.log debug;

    location / {
        deny all;
    }
}

server {
    listen *:80;
    server_name www.site.com;
    root /home/blogs/wordpress/;
    index index.html index.php index.htm;
    access_log /home/blogs/logs/nginx/site/access.log main;
    error_log /home/blogs/logs/nginx/site/error.log debug;
    # proxy_redirect off;

    set $myuri $request_uri;
    rewrite ^/wp-admin\/a$ /wp-admin/index.php;
    if (!-e $request_filename) {
        rewrite /wp-admin$ $scheme://$host$uri/ permanent;
        rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last;
        rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last;
    }

    # Try MemCached First
    location / {
        set $memcached_key "$uri?$args";
        memcached_pass 127.0.0.1:11211;
        error_page 404 405 502 504 = @fallback;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /home/blogs/wordpress$fastcgi_script_name;
        include fastcgi_params;
    }

    location @fallback {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}
It would be all too simple if it worked. Hitting the site just causes my browser to hang and ultimately give up, but the debug log (from /home/blogs/logs/nginx/error.log, as /home/blogs/logs/nginx/site/error.log is just blank) yields only one line:
2014/01/03 19:20:35 [debug] 8536#0: epoll add event: fd:11 op:1 ev:00000001
I'm guessing the weak link is Nginx.
Trying to hit the site from a restricted domain produces the 403 as expected, and the debug log actually works.
My question is less how to make my setup work than why the setup isn't producing debug output. A simple fuser tells me HipHop is running on port 9000. I feel like I could make some headway if I knew what was wrong.
I'm super self-conscious of my questions on Stack Overflow; I've seen people be ripped apart and it's frankly quite scary. I realize there's another similar, very recent question: HHVM with Nginx fastcgi not working properly. But given that our configurations are not quite the same, and my question is more about the debug log (albeit a very short one), I thought my situation warranted another question.
NOTE:
Tag by rights should be hhvm-fastcgi, but I don't have the rep to create it as a tag.
Wow. Just wow.
After struggling with this for far too long, it turns out my firewall was blocking port 80. Why was I able to invoke a 403? I occasionally run a proxy through the server, so the other domain I tested with was seen as an internal request.
I'm guessing this proxy mix-up is what led to anything being in the error logs at all.
As deeply embarrassing as this mix-up has been, I'm going to leave this question up, because I've taken something away from this experience.
First of all, don't point fingers:
I immediately assumed that, because nothing else could have been invoked, it was Nginx's fault. The strange debug log egged on my doubts.
Secondly, look higher:
There's no point looking at the middle of the stack. I should have looked for requests and made sure that calls were even being made.
Thirdly, keep track of what you're doing:
My obvious confusion stemmed from the fact that I had some weird proxy mojo going on. Had I taken the time to remember everything out of the ordinary I was doing with my server, I might have thought to check my firewall settings earlier.
There go my chances at a tumbleweed badge.

Can Nginx proxy caching be configured so files are saved without HTTP headers, or otherwise in a more "human friendly" format?

I'm curious whether Nginx can be configured so the cache is saved out in some manner that would make the data user-friendly. While all my options might fall short of anything anyone would consider "human friendly", I'm generally interested in how people configure it to meet their specific needs. The documentation may be complete, but I am very much a learn-by-example type of guy.
My current configuration is from an example I ran across and, as it stands, is not much more than proof to me that Nginx correctly proxy-caches the data:
http {
    # unrelated stuff...
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
    proxy_temp_path /var/www/cache/tmp;

    server {
        server_name g.sente.cc;
        location /stu/ {
            proxy_pass http://sente.cc;
            proxy_cache my-cache;
            proxy_cache_valid 200 302 60m;
            proxy_cache_valid 404 1m;
        }
    }
}
Nginx has two methods to cache content:
proxy_store is when Nginx builds a mirror. That is, it stores the file, preserving the same path, while proxying from the upstream. After that, Nginx serves the mirrored file for all subsequent requests to the same URI. The downside is that Nginx does not control expiration; however, you are able to remove (and add) files at will.
proxy_cache is when Nginx manages a cache, checking expiration, cache size, and so on.
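Of the two, proxy_store is the closer fit to "human friendly": the stored objects are plain files under their original paths, with no cache headers or metadata prepended. A rough sketch, reusing the upstream and host from the question (the mirror directory and access mask are assumptions):
server {
    server_name g.sente.cc;
    root /var/www/mirror;

    location /stu/ {
        # Serve the mirrored file if it already exists on disk...
        try_files $uri @fetch;
    }

    location @fetch {
        # ...otherwise proxy to the upstream and keep a plain copy at
        # /var/www/mirror/stu/<path>, readable like any ordinary file.
        proxy_pass http://sente.cc;
        proxy_store on;
        proxy_store_access user:rw group:rw all:r;
    }
}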
