How do I fix php7.1-fpm.sock failure on Laravel

I am trying to solve my white screen problem. Currently, I'm unable to log in to one of my staging servers. I'm running Laravel 5.2 / PHP 7.1-FPM. After doing a repository update (via Forge), I hit the white screen view.
Based on my domain error log, I think that fixing the php7.1-fpm.sock failure will solve my white screen error.
I've tried just about every suggestion from various forums and from several similar Stack Overflow threads.
I still haven't found a solution. Can anyone offer suggestions?
Below are the error.log contents:
2018/08/12 21:16:09 [crit] 949#949: *28 SSL_do_handshake() failed (SSL: error:1417D102:SSL routines:tls_process_client_hello:unsupported protocol) while SSL handshaking, client: 23.27.154.95, server: 0.0.0.0:443
2018/08/13 00:43:46 [crit] 949#949: *47 SSL_do_handshake() failed (SSL: error:1417D102:SSL routines:tls_process_client_hello:unsupported protocol) while SSL handshaking, client: 220.181.132.198, server: 0.0.0.0:443
2018/08/13 00:43:46 [crit] 949#949: *48 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 220.181.132.198, server: 0.0.0.0:443
2018/08/13 00:43:47 [crit] 949#949: *49 SSL_do_handshake() failed (SSL: error:1417D102:SSL routines:tls_process_client_hello:unsupported protocol) while SSL handshaking, client: 220.181.132.198, server: 0.0.0.0:443
Below are my site's error log contents:
2018/08/11 17:48:31 [crit] 934#934: *1 connect() to unix:/var/run/php/php7.1-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 73.106.222.129, server: blabs.goteachersintouch.com, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/var/run/php/php7.1-fpm.sock:", host: "blabs.goteachersintouch.com"
2018/08/11 17:48:35 [crit] 1266#1266: *1 connect() to unix:/var/run/php/php7.1-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 73.106.222.129, server: blabs.goteachersintouch.com, request: "GET /img/body-bg.png HTTP/2.0", upstream: "fastcgi://unix:/var/run/php/php7.1-fpm.sock:", host: "blabs.goteachersintouch.com", referrer: "https://blabs.goteachersintouch.com/build/css/app-5aaeb8644f.css"
Before I did the Repository Uninstall with Laravel Forge, my build was working fine. Apparently setting my server back to its original PHP state is not in alignment with some configuration (there's something I'm missing, I just can't see it).
My codebase is still running on the Laravel 5.2 library. Even so, can anyone offer suggestions to fix this white screen issue?
I have tried adjusting permissions; that hasn't worked.
Thank you!

It turns out I was missing a Bower package: angular-sanitize#1.6.1.
I ran "bower install angular-sanitize#1.6.1" and was up and running again.

Related

grpc-go over https: failed rpc error: code = Unavailable desc = transport is closing:

Note: This is running in containers in Kubernetes.
I have successfully followed this very short guide: https://knative.dev/docs/serving/samples/grpc-ping-go/
Success:
2019/05/08 13:43:56 Ping got hello - pong
2019/05/08 13:43:56 Got pong 2019-05-08 13:43:57.646935391 +0000 UTC m=+1.661567121
But if I go through port 443 from a Knative gateway set up for HTTPS, it does not work:
docker run -ti --entrypoint=/client docker.io/{username}/grpc-ping-go \
-server_addr="${SERVICE_IP}:443" \
-server_host_override="${SERVICE_HOST}" \
-insecure
2019/05/08 13:50:28 &{0xc00012e000}.Ping failed rpc error: code = Unavailable desc = transport is closing:
The client code from the sample, and the server code.
The server is not listening for TLS, but the connection to the server is over HTTPS.
Just to make sure: I know the HTTPS side is working, from a simple hello-go text reply.
In your server code, you are not listening on port 443, so this is most likely the reason your example isn't working.
If you want to keep using http and not https, then your code is working just fine.
If you want to get it working with TLS, this overview is a pretty good one.
To get port 80 to redirect to port 443 (I highly recommend it if you are using https), see this SO post.
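Before touching the Go code, it can help to confirm whether the gateway really terminates TLS on port 443 for your host. A quick probe (a sketch, assuming SERVICE_IP and SERVICE_HOST are the same values used in the docker run above):
# A successful handshake plus negotiated ALPN "h2" suggests TLS (and HTTP/2 for
# gRPC) is being terminated at the gateway, not inside your pod
openssl s_client -connect "${SERVICE_IP}:443" -servername "${SERVICE_HOST}" -alpn h2 </dev/null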

FileZilla - can't access folder when connecting from another computer using IP address, but works on localhost

When I connect using localhost on the computer the FileZilla server runs on, it works perfectly fine, but when I connect with the IP address (it is port-forwarded correctly, I'm 100% sure of that), this happens:
Status: Connecting to **.**.**.**:800...
Status: Connection established, waiting for welcome message...
Status: Insecure server, it does not support FTP over TLS.
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is current directory.
Command: TYPE I
Response: 200 Type set to I
Command: PASV
Response: 227 Entering Passive Mode (**,**,**,**,***,***)
Command: MLSD
Error: The data connection could not be established: ECONNREFUSED -
Connection refused by server
Response: 425 Can't open data connection for transfer of "/"
Error: Failed to retrieve directory listing
When this happens, it's usually a firewall configuration problem.
Besides a control connection, FTP also uses a data connection on a different port that needs to be assigned before data transfers.
This means that you must open ports on your firewall to allow data transfers and, of course, you should make FileZilla Server aware of that.
For passive mode transfers, you should set a range of ports in FileZilla Server's passive mode settings window.
Of course those ports should be open at the firewall too. A longer discussion can be found here.
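Purely as a sketch, assuming the FileZilla Server host were a Linux box using ufw and that 50000-51000 is a hypothetical passive range configured in FileZilla Server; on a Windows host, allow the same range in Windows Firewall instead:
# Allow the control port from the log above and the passive data-port range
sudo ufw allow 800/tcp
sudo ufw allow 50000:51000/tcp
sudo ufw status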

Connection to FTP server sometimes works and sometimes doesn't

I have an Ubuntu server (on Azure) running proftpd. When I try to connect to that server using FileZilla, sometimes it works and sometimes it doesn't (usually it doesn't work at first, and I need to keep trying several random times before it works; once it does, it works for good). This is the error I receive in the FileZilla logs:
Status: Resolving address of ftp.myserver.com
Status: Connecting to xx.xx.xx.xx:21...
Status: Connection established, waiting for welcome message...
Status: Insecure server, it does not support FTP over TLS.
Command: USER my_user
Response: 331 Password required for my_user
Command: PASS *******
Error: Connection timed out after 20 seconds of inactivity
Error: Could not connect to server
Status: Waiting to retry...
Status: Resolving address of ftp.myserver.com
Status: Connecting to xx.xx.xx.xx:21...
Status: Connection established, waiting for welcome message...
Response: 220 ProFTPD 1.3.5a Server (Debian) [xx.xx.xx.xx]
Command: AUTH TLS
Response: 500 AUTH not understood
Command: AUTH SSL
Response: 500 AUTH not understood
Status: Insecure server, it does not support FTP over TLS.
Command: USER my_user
Response: 331 Password required for my_user
Command: PASS *******
Error: Connection timed out after 20 seconds of inactivity
Error: Could not connect to server
and this is what I see in the proftpd logs:
2016-08-09 10:26:37,263 FTP proftpd[33961] 10.0.0.6 (yy.yy.yy.yy[yy.yy.yy.yy]): USER my_user: Login successful.
2016-08-09 10:26:37,264 FTP proftpd[33961] 10.0.0.6 (yy.yy.yy.yy[yy.yy.yy.yy]): FTP session closed.
2016-08-09 10:26:37,468 FTP proftpd[33970] 10.0.0.6 (yy.yy.yy.yy[yy.yy.yy.yy]): FTP session opened.
I don't know why the server closes and reopens the connection after the login but I am no FTP expert...
Any thoughts on how to fix this?
Edit:
This is the content of the proftpd.conf file
There are multiple possible causes for a delay at login time with ProFTPD. The most common causes are the mod_delay module (see its FAQ), or IdentLookups or UseReverseDNS.
However, since your delay happens after the PASS command has been sent, that rules out the IdentLookups or UseReverseDNS directives, as those pertain to the initial connection establishment, before any commands are sent.
Per discussion with the reporter, any latency added by mod_delay was ruled out. That leaves PAM, which, depending on the configuration (e.g. in /etc/pam.d/ftp) and the modules used, can add its own latency (over which ProFTPD has little control). To disable ProFTPD's use of PAM, you would use the following in the config:
<IfModule mod_auth_pam.c>
AuthPAM off
</IfModule>
The reporter mentioned that disabling the use of PAM did indeed remove the delay, which indicates that one of the PAM modules was the root cause.
Hope this helps!
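A rough way to compare the login delay before and after setting AuthPAM off (a sketch, assuming curl is available on a client machine and the placeholder credentials below are replaced with real ones):
# Apply the config change, then restart ProFTPD
sudo systemctl restart proftpd
# Time a scripted login plus directory listing; run this before and after the change
time curl --user my_user:'password' --list-only ftp://ftp.myserver.com/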

NGINX caching proxy fails with SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure

NGINX acting as a caching proxy encounters problems when fetching content from a CloudFront server over HTTPS.
This is an extract from NGINX's error log:
2014/08/14 16:08:26 [error] 27534#0: *11560993 SSL_do_handshake() failed (SSL: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure) while SSL handshaking to upstream, client: 82.33.49.135, server: localhost, request: "GET /static/images/media-logos/best.png HTTP/1.1", upstream: "https://x.x.x.x:443/static/images/media-logos/best.png",
I tried different proxy settings like proxy_ssl_protocols and proxy_ssl_ciphers, but no combination worked.
Any ideas?
I had the exact same problem and spent a couple of hours on it...
I guess you are using an older version of nginx (lower than 1.7)?
In nginx 1.7 you can use this directive:
proxy_ssl_server_name on;
This will force nginx to use SNI.
Also, you should set the SSL protocols:
proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
For earlier versions, you may be able to use this patch (but I can't verify that it works):
http://trac.nginx.org/nginx/ticket/229
2019 Update: You should avoid TLSv1 and TLSv1.1 and disable them if possible. I'll leave them in the answer as they are still valid for SNI.
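The handshake can also be reproduced outside nginx to confirm the upstream insists on SNI. In this sketch, x.x.x.x and your-distribution.cloudfront.net are placeholders for your own upstream IP and hostname:
# Without SNI: CloudFront typically rejects the handshake, mirroring nginx's error
openssl s_client -connect x.x.x.x:443 </dev/null
# With SNI: the handshake should complete once -servername is sent,
# which is what proxy_ssl_server_name on; makes nginx do
openssl s_client -connect x.x.x.x:443 -servername your-distribution.cloudfront.net </dev/null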

SFTP Connection Issue "Connection reset by peer"

I am unable to connect to a secured FTP server using FileZilla, and with PSFTP too.
While connecting, a popup message about the certificate appears, and then I get this error.
Error message:
Status: Connecting to idx.XYZ.com...
Response: fzSftp started
Command: open "abc_mnp#idx.XYZ.com" 22
Command: Pass: ****
Error: Network error: Connection reset by peer
Error: Could not connect to server
Any ideas, guys?
I feel this is an issue with the server.
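One way to narrow down a "Connection reset by peer" like this is to retry with OpenSSH's command-line sftp client in verbose mode, which usually shows whether the reset happens at the TCP level, during key exchange, or after authentication. A sketch, with the username and host guessed from the log above:
# Verbose SFTP connection attempt; -v can be repeated (-vvv) for more detail
sftp -v -P 22 abc_mnp@idx.XYZ.com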
