Passenger Unknown Reason-Phrase - ruby

When I try to access my site through the main domain (example.com), I get the message below. It doesn't happen when I access the site through a subdomain. I'm using Passenger with Nginx. Any ideas on how I can fix this? Thanks!
HTTP/1.1 16797828 Unknown Reason-Phrase
Status: 16797828 Unknown Reason-Phrase
Content-Type: text/html;charset=utf-8
Content-Length: 0
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Powered-By: Phusion Passenger 4.0.20
Date: Mon, 21 Oct 2013 09:06:42 GMT

It's because your app returned an invalid HTTP status code (namely '16797828'). You should fix your app so that it doesn't do that.
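If you want to see where the bad status is coming from, one quick check is to compare the raw status line returned by the main domain and by a subdomain (the hostnames below are hypothetical, matching the example.com placeholder above):
curl -sI http://example.com/ | head -n 1
curl -sI http://sub.example.com/ | head -n 1
The subdomain should show something like HTTP/1.1 200 OK, while the main domain reproduces the bogus 16797828 status line the app is emitting.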

Related

Certbot unauthorized and connection errors

I have a Spring Boot application on Google Cloud, CentOS 7. I want to install an SSL certificate via Let's Encrypt and Certbot. When I run the certbot --apache -d mydomain.zone command, I receive an error:
My domain is registered on Namecheap. My A records on Google Cloud:
I also set the Google Cloud nameservers in Namecheap, following this tutorial: https://www.wpmentor.com/setup-domain-google-cloud-platform/
Can you tell me where the issue is? I also wonder whether there is an issue with the Java code in my app. For example, sometimes while accessing the index page, error_page is called. When I have this method in my controller:
@RequestMapping(value = "/error_page", method = RequestMethod.GET)
public String homeError(Model model) {
    return "/error_page";
}
I get a different Certbot error:
but when I comment out or delete the controller method for the error page, I receive this error:
Could it be an application bug, or an issue with Apache?
EDIT:
I tried to turn off Tomcat. Now I receive this error:
Note: my Apache setup forwards port 80 to 8080; I don't know whether that could cause an issue:
iptables -A PREROUTING -t nat -p tcp --dport 80 -j REDIRECT --to-port 8080
After curl -I -L http://mydomain/.well-known/acme-challenge/zySNHSFB-qL95Ubx4jcIvuHPiiNbwkphE55kFuqP8jM:
HTTP/1.1 302
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Location: /error_page
Content-Language: en-US
Content-Length: 0
Date: Tue, 15 Feb 2022 20:01:50 GMT
HTTP/1.1 302
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Location: /error_page
Content-Language: en-US
Content-Length: 0
Date: Tue, 15 Feb 2022 20:01:50 GMT
curl: (47) Maximum (50) redirects followed
I needed to turn off the Apache web server to free port 80. I also deleted the iptables rule that forwards traffic from port 80 to port 8080. Now Certbot works.
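As a rough sketch, the cleanup amounts to something like the following (assuming Apache runs as the httpd service on CentOS 7 and the NAT rule was added exactly as shown above; certonly --standalone is one way to rerun Certbot once port 80 is free):
# stop the web server that is holding port 80 (the service may be named apache2 on other distros)
sudo systemctl stop httpd
# delete the NAT rule that was redirecting port 80 to Tomcat on 8080
sudo iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
# let Certbot run its own temporary server on the now-free port 80
sudo certbot certonly --standalone -d mydomain.zone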

wget gives 403 on accessible files

First-time poster with a bizarre issue. I usually install software through conda, but from one moment to the next I stopped being able to use conda install because of a 403 error conda gets when trying to access some configuration files. When I try to download those files with wget --spider --debug https://conda.anaconda.org/anaconda/noarch/current_repodata.json, I get the same 403 error:
DEBUG output created by Wget 1.19.4 on linux-gnu.
Reading HSTS entries from /home/jsequeira/.wget-hsts
URI encoding = ‘UTF-8’
Converted file name 'current_repodata.json' (UTF-8) -> 'current_repodata.json' (UTF-8)
Spider mode enabled. Check if remote file exists.
--2020-07-30 11:25:59-- https://conda.anaconda.org/anaconda/noarch/current_repodata.json
Resolving conda.anaconda.org (conda.anaconda.org)... 104.17.92.24, 104.17.93.24, 2606:4700::6811:5d18, ...
Caching conda.anaconda.org => 104.17.92.24 104.17.93.24 2606:4700::6811:5d18 2606:4700::6811:5c18
Connecting to conda.anaconda.org (conda.anaconda.org)|104.17.92.24|:443... connected.
Created socket 5.
Releasing 0x000056545deb1850 (new refcount 1).
Initiating SSL handshake.
Handshake successful; connected socket 5 to SSL handle 0x000056545deb2700
certificate:
subject: CN=anaconda.org,O=Cloudflare\\, Inc.,L=San Francisco,ST=CA,C=US
issuer: CN=Cloudflare Inc ECC CA-3,O=Cloudflare\\, Inc.,C=US
X509 certificate successfully verified and matches host conda.anaconda.org
---request begin---
HEAD /anaconda/noarch/current_repodata.json HTTP/1.1
User-Agent: Wget/1.19.4 (linux-gnu)
Accept: */*
Accept-Encoding: identity
Host: conda.anaconda.org
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 403 Forbidden
Date: Thu, 30 Jul 2020 11:25:59 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
CF-Chl-Bypass: 1
Set-Cookie: __cfduid=d3cd3a67d3926551371d8ffe5a840b04f1596108359; expires=Sat, 29-Aug-20 11:25:59 GMT; path=/; domain=.anaconda.org; HttpOnly; SameSite=Lax
Cache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Expires: Thu, 01 Jan 1970 00:00:01 GMT
X-Frame-Options: SAMEORIGIN
cf-request-id: 044111dd9600005d4732b73200000001
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Vary: Accept-Encoding
Server: cloudflare
CF-RAY: 5baeb8dc2ba65d47-LIS
---response end---
403 Forbidden
cdm: 1
Stored cookie anaconda.org -1 (ANY) / <permanent> <insecure> [expiry 2020-08-29 11:25:59] __cfduid d3cd3a67d3926551371d8ffe5a840b04f1596108359
URI content encoding = ‘UTF-8’
Closed 5/SSL 0x000056545deb2700
Remote file does not exist -- broken link!!!
These files are accessible through the browser, and were always accessible with wget and conda until yesterday, when I was installing some tools not related to these network accesses. How can wget fail to download them?
So this was fixed by reinstalling apt-get. Some configuration file there must have been messed up.

How to do a proper handshake with the Coinbase Websocket for Market Data?

What is the first HTTP request I should send for a proper websocket handshake?
I am sending GET / HTTP/1.1 with some other standard WebSocket headers, but I get a 400 Bad Request. See below:
$ telnet localhost 8889
Trying 127.0.0.1...
Connected to localhost (127.0.0.1).
Escape character is '^]'.
GET / HTTP/1.1
Host: ws-feed.exchange.coinbase.com
Connection: Upgrade
Pragma: no-cache
Cache-Control: no-cache
Upgrade: websocket
Origin: http://www.test.com
HTTP/1.1 400 Bad Request
Server: cloudflare-nginx
Date: Thu, 10 Sep 2015 19:25:54 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
Set-Cookie: __cfduid=d3fe870c84fc991b0f2f6fc2c936820471441913154; expires=Fri, 09-Sep-16 19:25:54 GMT; path=/; domain=.coinbase.com; HttpOnly
Strict-Transport-Security: max-age=15552000; includeSubDomains; preload
X-Content-Type-Options: nosniff
CF-RAY: 223d857c6bae01ee-EWR
0
The Sec-WebSocket-Version and Sec-WebSocket-Key headers are missing.
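As a quick sanity check, you can let curl send a handshake that includes the two missing headers (the key below is the example nonce from RFC 6455; a real client must send a fresh, random 16-byte value, base64-encoded):
curl -i --http1.1 --max-time 5 \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  https://ws-feed.exchange.coinbase.com/
If the handshake is accepted, you should see HTTP/1.1 101 Switching Protocols with a Sec-WebSocket-Accept header instead of the 400 above; curl then times out on its own because it does not speak the WebSocket framing protocol.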

Cached content expiring too frequently on fastly CDN

I'm using Fastly as a CDN in front of my Heroku application, and I'm seeing many requests that I expect to be cached make it through to the origin.
An example of this behavior is two requests to the URL:
https://nuu-acceptance-herokuapp-com.global.ssl.fastly.net/attachments/f092ff0398b3bace19fae21b17a22320c3da5428/store/fit/240/160/28515a2fa2e47b59f13b2044ea5b9a7c8c9587ceca7d7dfadb28f08730f7/file.jpg. Here are two responses from the requests, which occurred fifteen minutes apart:
RESPONSE 1:
----------
Strict-Transport-Security: max-age=31536000
Content-Encoding: gzip
X-Content-Type-Options: nosniff
Age: 0
Transfer-Encoding: chunked
X-Cache: MISS
X-Cache-Hits: 0
Content-Disposition: inline; filename="file.jpg"
Connection: keep-alive
Via: 1.1 vegur
Via: 1.1 varnish
X-Request-Id: bc766069-c2ca-4a66-ba88-a8d76da72e2d
X-Served-By: cache-sjc3124-SJC
X-Runtime: 3.711698
Last-Modified: Tue, 23 Jun 2015 18:44:27 GMT
Server: Cowboy
X-Timer: S1435085062.909546,VS0,VE4437
Date: Tue, 23 Jun 2015 18:44:27 GMT
Vary: Accept-Encoding
Content-Type: image/jpeg
Access-Control-Allow-Origin: *
Cache-Control: public, must-revalidate, max-age=31536000
Set-Cookie: __profilin=; path=/; max-age=0; expires=Thu, 01 Jan 1970 00:00:00 -0000; secure
Accept-Ranges: bytes
Expires: Wed, 22 Jun 2016 18:44:27 GMT
----------
RESPONSE 2:
----------
Strict-Transport-Security: max-age=31536000
Content-Encoding: gzip
X-Content-Type-Options: nosniff
Age: 0
Transfer-Encoding: chunked
X-Cache: MISS
X-Cache-Hits: 0
Content-Disposition: inline; filename="file.jpg"
Connection: keep-alive
Via: 1.1 vegur
Via: 1.1 varnish
X-Request-Id: 60ee54b0-9509-42c5-9b03-c0f5854c5524
X-Served-By: cache-sjc3135-SJC
X-Runtime: 0.251021
Last-Modified: Tue, 23 Jun 2015 18:57:44 GMT
Server: Cowboy
X-Timer: S1435085863.749442,VS0,VE560
Date: Tue, 23 Jun 2015 18:57:44 GMT
Vary: Accept-Encoding
Content-Type: image/jpeg
Access-Control-Allow-Origin: *
Cache-Control: public, must-revalidate, max-age=31536000
Accept-Ranges: bytes
Expires: Wed, 22 Jun 2016 18:57:44 GMT
Both are cache misses, even though I expect this content to be cached for a year. It also appears that the same Fastly cluster handled both requests. Can anyone point me to what I might be doing wrong? I'm seeing this behavior across many files served by Fastly: cached copies are served intermittently, but there are cache misses much more often than I expect.
I'd appreciate any help that anyone could give me with this - thanks!
If you look at the HTTP headers of your responses, you will see a Set-Cookie header (the __profilin cookie in the first response above). Fastly will not cache responses that carry cookies, so those responses come back as misses. You can remove the cookie, however, either in your app or within your Fastly configuration.
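One way to confirm the cookie is what is blocking caching is to fetch the same object twice and watch the relevant headers (using the attachment URL from the question; once Set-Cookie no longer appears, the second request should report X-Cache: HIT):
URL="https://nuu-acceptance-herokuapp-com.global.ssl.fastly.net/attachments/f092ff0398b3bace19fae21b17a22320c3da5428/store/fit/240/160/28515a2fa2e47b59f13b2044ea5b9a7c8c9587ceca7d7dfadb28f08730f7/file.jpg"
curl -sI "$URL" | grep -iE '^(x-cache|x-cache-hits|age|set-cookie):'
curl -sI "$URL" | grep -iE '^(x-cache|x-cache-hits|age|set-cookie):'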

Drive Realtime API no longer returns realtime document on localhost

I have been calling the following gapi javascript function with great success for a few months:
gapi.drive.realtime.load(fileId,
    successHandler,
    initializer,
    errorHandler);
Suddenly, at 1:30 PM CDT today, that call stopped working when run in javascript on localhost. I can deploy the exact same code to my server and it works perfectly!
Frustratingly, none of the callbacks are invoked: neither successHandler nor errorHandler.
I have localhost:3000 set as an allowed JavaScript origin in my Google API Console project, and in any case I haven't changed any settings there since this was working. I am correctly authorized and can make REST calls to the Drive API without issue.
Has anyone else seen this behavior suddenly? Can anyone from the Google team make a suggestion?
Update: the request inspector shows a GET to
https://drive.google.com/otservice/gs?access_token=[omitted-for-stackoverflow]&id=[also-omitted]
with the response
)]}'
["17AKDsTY8kHESKfQavrHeh3YybD5k4b6ty8CQ78MHtyc","724b79b808d48070",false,1,[1,""],[0,[28,"724b79b808d48070","110581799581534438628",false,true,"REL DEV","#58B442","https://lh3.googleusercontent.com/-XdUIqdMkCWA/AAAAAAAAAAI/AAAAAAAAAAA/4252rscbv5M/s128/photo.jpg"]]]
The headers are
HTTP/1.1 200 OK
status: 200 OK
version: HTTP/1.1
access-control-allow-origin: *
access-control-expose-headers: Content-Length,Content-Type,X-Restart
alternate-protocol: 443:quic
cache-control: no-cache, no-store, max-age=0, must-revalidate
content-disposition: attachment; filename="json.txt"; filename*=UTF-8''json.txt
content-encoding: gzip
content-type: application/json; charset=utf-8
date: Fri, 04 Apr 2014 22:15:40 GMT
expires: Fri, 01 Jan 1990 00:00:00 GMT
pragma: no-cache
server: GSE
vary: Origin
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-restart:
x-xss-protection: 1; mode=block
There are no other requests after that.
