Unable to connect to remote server - Laravel 4.1

I want to execute some commands on a remote server using SSH.
This is my configuration:
'connections' => [
    'production' => [
        'host'     => '172.55.81.20',
        'username' => 'user',
        'password' => '',
        'key'      => 'C:\cygwin64\home\oqannouf\.ssh\id_rsa',
    ],
],
I tried SSH::into('production')->run(['ls']);
but I get the following error:
Unable to connect to remote server
with no additional information explaining why it doesn't work.
Note:
The RSA key is correct and I can use it to log in to the server using Cygwin.
Even with a correct password it doesn't work.
I have all the permissions on the folder C:\cygwin64\home\oqannouf\.ssh\
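For reference, the remote component in Laravel 4.1 also accepts a callback that receives the command output line by line, which can help narrow down what happens once a connection is established. A minimal sketch (the commands are illustrative):

// run a set of commands and stream the remote output
SSH::into('production')->run(
    ['cd /var/www', 'ls -la'],
    function ($line) {
        echo $line.PHP_EOL;
    }
);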

Related

How to use the `laravel-backup` package with Laravel Vapor

We are using Laravel Vapor to manage our Laravel application and are planning to use the laravel-backup package to create automated database backups for our production environment.
I tested the implementation and managed to get it working (with version 7.3.3) on my Windows machine.
I set the mail configuration to get notified when a backup runs (successful or not) and set the path to mysqldump like this:
'dump' => [
    'dump_binary_path' => 'C:\xampp\mysql\bin',
    'use_single_transaction',
    'timeout' => 60 * 5,
],
To get this up and running with Vapor, I changed the destination disk config from local to s3, with s3 configured as:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    'endpoint' => env('AWS_ENDPOINT'),
],
I removed the dump_binary_path because I didn't know where to point it in the context of Vapor. So I hoped the binary would be at a default location, as mentioned in the docs of the laravel-backup package:
mysqldump is used to backup MySQL databases. pg_dump is used to dump PostgreSQL databases. If these binaries are not installed in a default location, you can add a key named dump.dump_binary_path in Laravel's own database.php config file.
I included the backup commands in the console Kernel file:
$schedule->command('backup:clean')->daily()->at('01:00');
$schedule->command('backup:run --only-db')->daily()->at('01:30');
and deployed it with Vapor.
Unfortunately it isn't working: I didn't receive an email (neither success nor failure) and nothing was created in our S3 bucket.
Has anyone used laravel-backup with Vapor before and knows how to fix this? What am I missing?
Thanks in advance!
Laravel Vapor support replied:
"Our native runtime doesn't have mysqldump installed. You will need to run the Docker runtime and install yourself if this is a requirement. It's also worth noting the maximum execution time on Lambda is 15 mins so if your backup is large, you could run into issues."

Laravel Horizon - Redis - HAProxy - Error while reading line from the server

Sorry for the title, which might sound like an "already answered" topic, but I believe my case is unique.
Also, this is my first post, so I apologize if I am not on the proper channel; I am not sure whether my problem is on the server administration side or on Laravel's configuration side.
I am trying to get some fresh ideas on how to resolve an issue with Horizon / Predis / HAProxy which I thought was fixed but is showing up again.
Some details on the environment
2x Apache servers: PHP version 7.2.29-1+ubuntu18.04.1+deb.sury.org+1
thread safety is disabled and we use FPM
2x Redis servers using a simple master-slave setup (no high availability, no Sentinel): Redis version 4.0.9
load balancing with HAProxy version 1.9
Libraries
laravel/framework: 6.14.0
laravel/horizon: 3.7.2
redis/predis: 1.1.1
Horizon configuration
The Horizon daemon is managed through Supervisor.
This is the Redis client configuration in config/database.php:
'redis' => [
    'client' => 'predis',
    'options' => [
        'prefix' => strtoupper(env('APP_NAME') . ':'),
    ],
    'default' => [
        'host' => env('REDIS_HOST'),
        'password' => env('REDIS_PASSWORD'),
        'port' => env('REDIS_PORT'),
        'database' => env('REDIS_DB'),
        'read_write_timeout' => -1,
    ],
    ...
This is the Redis connection configuration in config/queue.php:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 110,
],
'redis-long-run' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'long-running-queue'),
    'retry_after' => 3620,
],
As you can see there are two connections defined for the same physical Redis server.
The application uses queues for 2 different types of jobs:
Fast / short processes like broadcasting, notifications or some Artisan command calls.
These use the first connection configuration with its low timeout setting.
Long-running processes which essentially query large amounts of data on a Snowflake DB (a cloud-based SQL-like DB) and/or update / insert documents on a Solr server.
These processes use the second connection configuration, as they can take quite some time to complete (usually around 20 minutes for the one combining reads from Snowflake and writes to Solr).
The application is a business webapp meant for private use by my company and the load is rather small (around 200 jobs queued per day), but the long-running processes are critical to the business: job failure or double runs are not acceptable.
This is the config/horizon.php file:
'environments' => [
    'production' => [
        'supervisor-default' => [
            'connection' => 'redis',
            'queue' => ['live-rules', 'solr-cmd', 'default'],
            'balance' => 'simple',
            'processes' => 3,
            // must be lower than /config/queue.php > 'connections.redis'
            'timeout' => 90,
            'tries' => 3,
        ],
        'supervisor-long-run' => [
            'connection' => 'redis-long-run',
            'queue' => ['long-running-queue', 'solr-sync'],
            'balance' => 'simple',
            'processes' => 5,
            // must be lower than /config/queue.php > 'connections.redis-long-run'
            'timeout' => 3600,
            'tries' => 10,
        ],
    ],
    'staging' => [
        ...
Initial problem (solved)
When we went live at the beginning of the year we immediately hit a problem with the jobs running on the long-running-queue connection:
Error while reading line from the server. [tcp://redis_host:6379] errors started popping up left and right.
These translated into jobs being stuck in the pending state until they finally ended up being marked as failed, although the tasks had in reality succeeded.
At the time, the application's long-running processes were limited to the Snowflake SELECT queries.
After going through the numerous posts about it on Laravel Horizon's GitHub issues as well as SO topics, and testing the suggestions without luck, we finally figured out that the culprit was our load balancer closing the connection after 90 seconds.
Redis has a tcp-keepalive default config parameter of 300 secs, so we tweaked HAProxy's configuration to close at 310 secs and - poof! - everything worked fine for a while.
This is HAProxy's configuration for the application nowadays:
listen PROD-redis
    bind 0.0.0.0:6379
    mode tcp
    option tcplog
    option tcp-check
    balance leastconn
    timeout connect 10s
    timeout client 310s
    timeout server 310s
    server 1 192.168.12.34:6379 check inter 5s rise 2 fall 3
    server 2 192.168.43.21:6379 check inter 5s rise 2 fall 3 backup
New problem (the initial one reborn?)
Coming back a few months later, the application has evolved and we now have a job which reads and yields from Snowflake in batches to build Solr update queries, roughly as sketched below. The Solr client is solarium/solarium and we use the addBuffered plugin.
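(A sketch of the job body; $client is a configured Solarium\Client and fetchRowsFromSnowflake() is a hypothetical generator yielding one document's fields per row.)

$buffer = $client->getPlugin('bufferedadd');
$buffer->setBufferSize(500); // auto-flush an update request to Solr every 500 documents

foreach (fetchRowsFromSnowflake() as $row) {
    $buffer->createDocument($row); // buffer one Solr document per Snowflake row
}

$buffer->commit(); // flush the remainder and commit the index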
This worked flawlessly on our pre-production environment, which doesn't have load balancing.
So next we moved to the production environment, and the Redis connection issues rose again unexpectedly, except this time we have the HAProxy set up properly.
Monitoring the keys in Redis, we can see that these jobs do indeed get reserved but end up in the delayed state after some time, waiting to be tried again once the job's timeout is reached.
This is a real problem: we end up going through the job's max tries count until it eventually gets marked as failed, running it several times because it never gets the complete flag, putting unnecessary stress on the environment and consuming resources when in fact the job DID succeed on the first try.
This is what we get from HAProxy's logs:
Jun 26 11:35:43 apache_host haproxy[215280]: 127.0.0.1:42660 [26/Jun/2020:11:29:02.454] PROD-redis PROD-redis/redis_host 1/0/401323 61 cD 27/16/15/15/0 0/0
Jun 26 11:37:18 apache_host haproxy[215280]: 127.0.0.1:54352 [26/Jun/2020:11:28:23.409] PROD-redis PROD-redis/redis_host 1/0/535191 3875 cD 24/15/14/14/0 0/0
The cD part is the interesting information, as per HAProxy's documentation:
c : the client-side timeout expired while waiting for the client to send or receive data.
D : the session was in the DATA phase.
There are more logs like this and there is no obvious pattern in the delay between the moment the connection is established and the moment it closes, as you can see from the dates.
Before getting here we have:
switched to a Redis version 5.0.3 server: same issue.
removed HAProxy from the equation and established a direct connection between the client and Redis: works flawlessly.
I'm a bit at a loss as to how to figure out and fix the issue for good.
Going back to the HAProxy log concerning the client-side timeout, I wonder what could possibly be wrong about the client configuration and what I should try next.
Maybe someone here will come up with a suggestion? Thank you for reading.
According to the Laravel documentation, it is better to use the PhpRedis client instead of Predis:
Predis has been abandoned by the package's original author and may be removed from Laravel in a future release.
In short, PhpRedis is a PHP module written in C, while Predis is a library written in PHP. There is a huge performance difference, described here.
BTW, we have a similar stack: Laravel + Horizon -> HAProxy -> Redis server. We have 3 Redis servers (1 master, 2 slaves) and Sentinel to keep track of the actual master.
And we had similar problems with Redis until we migrated from Predis to PhpRedis. When researching the problems, the best answer was to use PhpRedis.
PS: We just changed REDIS_CLIENT in .env from predis to phpredis and everything kept working.
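Assuming config/database.php reads the client from the environment, as in the config shown in the next answer, the switch is a one-line change:

# .env
REDIS_CLIENT=phpredis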
Error
Error while reading line from the server. [tls://private-XXX-do-user-XXX-0.b.db.ondigitalocean.com:25061]
Reason
DigitalOcean Managed Redis is not compatible with Predis.
Solution
Remove Predis and use phpredis. Phpredis is an extension for PHP.
Check if you have installed phpredis:
# php -i | grep async_redis
async_redis => enabled
If you don't have phpredis, just install it: pecl install redis (more information about php-redis installation).
Remove predis from your composer.json and add this line (good practice):
"require": {
"php": "^8.0",
/* ... */
"ext-redis": "*",
/* ... */
Change your code or .env REDIS_CLIENT value:
'redis' => [
    'client' => env('REDIS_CLIENT', 'phpredis'),
    'default' => [
        'scheme' => env('DATA_REDIS_SCHEME', 'tcp'),
        'host' => env('DATA_REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
        // 'read_write_timeout' => 0, // was required by DO in the past, but just in case...
    ],
    'cache' => [
    ],
],

How to resolve Peer certificate not matching expected for a PDO connection?

I am trying to set up an SSL connection between the application (Laravel) server and a MySQL 8 server. I have looked around for a solution, and the most suggested one is to set verify server cert to false, which I have in my database config below.
But I still seem to be getting this error. Is there another corresponding setting somewhere else I need to check?
PDOException::("PDO::__construct(): Peer certificate CN=MySQL_Server_8.0.16_Auto_Generated_Server_Certificate did not matc
h expected CN=`IPADDRESS'")
'options' => array_filter([
    PDO::MYSQL_ATTR_SSL_CA => 'mysql_client_ssql/ca.pem',
    PDO::MYSQL_ATTR_SSL_CERT => 'mysql_client_ssql/client-cert.pem',
    PDO::MYSQL_ATTR_SSL_KEY => 'mysql_client_ssql/client-key.pem',
    PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT => false,
]),
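To rule Laravel out, it can help to reproduce the connection with PDO directly; if this standalone script throws the same exception, the problem is at the PDO/OpenSSL level rather than in Laravel's config. A sketch (host, credentials and paths are placeholders):

// standalone test — placeholder credentials and absolute paths
$pdo = new PDO('mysql:host=IPADDRESS;dbname=mydb', 'user', 'secret', [
    PDO::MYSQL_ATTR_SSL_CA   => '/full/path/to/mysql_client_ssql/ca.pem',
    PDO::MYSQL_ATTR_SSL_CERT => '/full/path/to/mysql_client_ssql/client-cert.pem',
    PDO::MYSQL_ATTR_SSL_KEY  => '/full/path/to/mysql_client_ssql/client-key.pem',
    // this constant requires PHP 7.0.18+ / 7.1.4+ with mysqlnd
    PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT => false,
]);
var_dump($pdo->query('SELECT 1')->fetchColumn());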

Using Gmail SMTP via Laravel: Connection could not be established with host smtp.gmail.com [Connection timed out #110]

When I try to use Gmail SMTP for sending email via Laravel, I encounter the following error:
Swift_TransportException
Connection could not be established with host smtp.gmail.com [Connection timed out #110]
This is the trace of the error:
...
}
$this->_stream = @stream_socket_client($host.':'.$this->_params['port'], $errno, $errstr, $timeout, STREAM_CLIENT_CONNECT, stream_context_create($options));
if (false === $this->_stream) {
    throw new Swift_TransportException(
        'Connection could not be established with host ' . $this->_params['host'] .
        ' [' . $errstr . ' #' . $errno . ']'...
and here is my mail configuration:
'driver' => 'smtp',
'host' => 'smtp.gmail.com',
'port' => 587,
'from' => array('address' => 'some@example.ir', 'name' => 'some'),
'encryption' => 'tls',
'username' => 'myemail@gmail.com',
'password' => 'mypassword',
'sendmail' => '/usr/sbin/sendmail -bs',
'pretend' => false,
I use a shared host and port 587 on localhost is open.
I had the same problem and I resolved it this way:
'driver' => 'sendmail',
You need to change only that line.
After doing a lot of research I found this helpful:
https://www.google.com/settings/security/lesssecureapps
Open the above link.
Click on Enable and save it.
Then try to send email again.
For me it worked.
Solved mine by changing my .env file as follows:
MAIL_DRIVER=sendmail
Try
'encryption' => 'ssl',
'port' => 465,
The problem is that smtp.gmail.com resolves to an IPv6 address and that Google service only listens on IPv4. What you need to do is set the source IP to ensure the domain resolves as IPv4 and not IPv6.
The important method:
->setSourceIp('0.0.0.0')
How you might use it in code:
$this->_transport = Swift_SmtpTransport::newInstance(
    'smtp.gmail.com',
    465,
    'ssl'
)
->setUsername('username')
->setSourceIp('0.0.0.0')
->setPassword('password');
Works for me with the same settings except encryption and port. Change to:
'encryption' => 'ssl',
'port' => 465,
Since this is only for localhost, the encryption line should also be environment specific. So instead of the above I did the following:
'encryption' => env('MAIL_ENCRYPTION', 'tls'),
Now you can set this in the .env file, which is environment specific and should be in .gitignore.
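For example, with the values from this answer (adjust per environment):

# .env — local development
MAIL_ENCRYPTION=ssl
MAIL_PORT=465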
I got the same problem using Laravel Forge + DigitalOcean.
I found this when I tried telnet smtp.gmail.com 465:
telnet smtp.gmail.com 465
Trying 2404:6800:4003:c00::6d... # more than 30 sec
Trying 74.125.200.108... # less than 1 sec
Connected to smtp.gmail.com.
Maybe it is the IPv6 connection that times out.
So, I changed gai.conf to prioritize IPv4 over IPv6:
vi /etc/gai.conf
#For sites which prefer IPv4 connections change the last line to
precedence ::ffff:0:0/96 100
...
# For sites which use site-local IPv4 addresses behind NAT there is
# the problem that even if IPv4 addresses are preferred they do not
# have the same scope and are therefore not sorted first. To change
# this use only these rules:
#
scopev4 ::ffff:169.254.0.0/112 2
scopev4 ::ffff:127.0.0.0/104 2
scopev4 ::ffff:0.0.0.0/96 14
Open your .env file and change this:
MAIL_DRIVER=smtp
to:
MAIL_DRIVER=sendmail
This fixed my problem; hope it will help you as well.
The error might be due to 2-step verification being enabled. In that case you need to create a Gmail app password and use that as your password.
For me, after trying all the solutions above, the only thing that worked was
disabling my firewall and antivirus temporarily.
If it is a non-Google Apps account, definitely enable access from less secure apps as suggested. If you don't do that, it won't work.
If it is a Google Apps account (i.e. a business account) then there is an admin panel that governs access. You will need to make sure it specifies access by authentication only and not by IP, since if it is by IP, your IP presumably is not on the list.
The final thing to try is using the IPv4 address of smtp.gmail.com in place of the domain name in your code. I found that mine would not connect using the domain (because it resolved to an IPv6 address) but would connect when I used the raw IP in its place.
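One way to do that without hard-coding an address (a sketch; PHP's gethostbyname() only ever returns IPv4 records, and note that connecting by raw IP can trip TLS peer verification):

// Resolve an IPv4 address for smtp.gmail.com at runtime instead of hard-coding it.
$ipv4Host = gethostbyname('smtp.gmail.com');

// Use it wherever the transport expects a host, e.g.:
$transport = Swift_SmtpTransport::newInstance($ipv4Host, 465, 'ssl');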
I got the same problem using SwiftMailer.
Anyway, a quick dirty hack you should not use would be to edit swiftmailer\swiftmailer\lib\classes\Swift\Transport\StreamBuffer.php. In _establishSocketConnection(), line 253, replace:
$options = array();
with something like this:
$options = array('ssl' => array('allow_self_signed' => true, 'verify_peer' => false));
This will change the ssl options of stream_context_create() (a few lines below $options):
$this->_stream = @stream_socket_client($host.':'.$this->_params['port'], $errno, $errstr, $timeout, STREAM_CLIENT_CONNECT, stream_context_create($options));
There is a dirty hack for this; you can find it here.
Also, my .env is set to:
MAIL_DRIVER=smtp
MAIL_HOST=mail.mydomain.com
MAIL_PORT=587
MAIL_USERNAME=noreply@mydomain.com
MAIL_PASSWORD=mypassword
In your terminal use this command:
sudo ufw allow in "Postfix Submission"
This enables port 587 for SMTP.
You need to enable two-factor authentication and create an app password in your Google account. Also, don't forget to add an app password for every new host you are using.
As a temporary fix you can resolve the issue by updating the config to 'driver' => 'sendmail'.
Check your free disk space. In my case, it was 100% used.
I am using MAMP on macOS. I faced this issue; here are the steps to follow to solve this problem on macOS.
SMTP mail on macOS:
Create a file to store our credentials:
sudo vim /etc/postfix/sasl_passwd
Add something like this:
smtp.gmail.com:587 username@gmail.com:password
Now run:
sudo postmap /etc/postfix/sasl_passwd
Prepare the postfix main config file:
sudo vim /etc/postfix/main.cf
Add/update these lines
relayhost=smtp.gmail.com:587
smtp_sasl_auth_enable=yes
smtp_sasl_password_maps=hash:/etc/postfix/sasl_passwd
smtp_use_tls=yes
smtp_tls_security_level=encrypt
tls_random_source=dev:/dev/urandom
smtp_sasl_security_options = noanonymous
smtp_always_send_ehlo = yes
smtp_sasl_mechanism_filter = plain
Stop/Start the service
sudo postfix stop
sudo postfix start
Check the queue for any errors
mailq
Your .env configuration should look like this
MAIL_MAILER=smtp
MAIL_HOST=smtp.gmail.com
MAIL_PORT=587
MAIL_USERNAME="username#gmail.com"
MAIL_PASSWORD="password"
MAIL_ENCRYPTION=tls
MAIL_FROM_ADDRESS="username#gmail.com"
MAIL_FROM_NAME="${APP_NAME}"

ci-merchant purchase() not working

I'm using the ci-merchant library in my PyroCMS module locally on my development WAMP server (all working fine).
When I upload to my Linux test server, the purchase() function being called does not seem to work.
When it executes it polls for 5 minutes, then I get the response "Could not connect to host".
PHP
$params = array(
    'amount' => 20,
    'currency' => 'USD',
    'return_url' => 'http://someurl.com/return/',
    'cancel_url' => 'http://someurl.com/cancel/',
);
$settings = array(
    'test_mode' => TRUE,
    'username' => 'PAYPAL_TEST_USERNAME',
    'password' => 'MY_PAYPAL_TEST_PASS',
    'signature' => 'MY_PAYPAL_TEST_SIG',
);
$this->load->library('merchant');
$this->merchant->load('paypal_express');
$this->merchant->initialize($settings);
// this is where I'm having the issue
$response = $this->merchant->purchase($params);
$message = $response->message();
When I echo $message:
echo $message; // Outputs: couldn't connect to host
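To see the underlying cURL error code outside the library, it can help to hit the PayPal sandbox endpoint directly with raw cURL (a sketch; the NVP endpoint URL is an assumption for PayPal Express Checkout):

// Reproduce the failing request to get the cURL errno behind "couldn't connect to host".
$ch = curl_init('https://api-3t.sandbox.paypal.com/nvp');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
$result = curl_exec($ch);

if ($result === false) {
    // errno 7 = couldn't connect, errno 6 = couldn't resolve host, etc.
    echo curl_errno($ch) . ': ' . curl_error($ch) . PHP_EOL;
}
curl_close($ch);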
cURL - Server Settings
Below is a list of the differences in the cURL settings on the two servers. Perhaps this is the issue. I don't think these settings can be changed without having to recompile cURL, but I'm not sure.
Development Server (WAMP server - status: Working)
AsynchDNS : Yes
CurlInfo : 7.21.7
GSS Neg : Yes
IDN : No
SSPI : Yes
libSSH : libssh2/1.2.7
Test Server (Linux server - status: Not working)
AsynchDNS : No
CurlInfo : 7.24.0
GSS Neg : No
IDN : Yes
SSPI : No
libSSH : <<not listed>>
After much trial and error, and some advice from friends, I found this to be a missing libSSH module.
Since then I have moved my site from a shared hosting company to more reliable VPS hosting.
I installed the appropriate libraries and everything is working fine.
I would recommend anyone hosting their sites to move away from "shared" hosting companies. I only encountered very delayed support, and VPS hosting wasn't really that much more than what I was paying for shared hosting.
But you will need to know how to manage a server before you do.
