Clients try to connect to redis with old password - laravel

I have a Debian 10 server with Laravel 5.8.
This is the redis configuration in config/database.php:
'redis' => [
    'client' => 'predis',
    'default' => [
        'host' => env('REDIS_HOST', 'localhost'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
        'read_write_timeout' => 60,
    ],
    'cache' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_CACHE_DB', 1),
    ],
],
In the .env file I only have the REDIS_PASSWORD param, which matches the password in the redis-server configuration.
I noticed that some emails remain in the queue for a long time, for no apparent reason.
Watching Redis with the MONITOR command, I can see that several clients still try to connect with old test passwords that used to be in the .env file.
php artisan config:cache and similar were not helpful.
The test passwords are no longer present in the .env file or in any other file; how and where could they be stored? How do I get rid of them permanently?
Thanks

You need to restart your queue workers, as they keep your entire booted application in memory, including the config.
Quote from docs:
Remember, queue workers are long-lived processes and store the booted
application state in memory. As a result, they will not notice changes
in your code base after they have been started. So, during your
deployment process, be sure to restart your queue workers. In
addition, remember that any static state created or modified by your
application will not be automatically reset between jobs.
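In practice, after changing .env values, re-caching the config and restarting the workers looks roughly like this (a sketch: the supervisorctl line only applies if your workers run under Supervisor, and laravel-worker is a placeholder for whatever program name you gave the worker there):
php artisan config:cache
php artisan queue:restart
sudo supervisorctl restart laravel-worker:*
queue:restart signals every worker to exit gracefully after finishing its current job, so the process manager brings them back up with the fresh configuration.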

Related

unsupported driver [https], laravel when deployed to heroku

I am trying to deploy a Laravel application to Heroku and connect it to a database that has already been deployed to Azure.
But I am getting the error "unsupported driver [https]".
My database.php:
<?php
use Illuminate\Support\Str;
return [
    'default' => env('DB_CONNECTION', 'mysql'),
    'mysql' => [
        'driver' => 'mysql',
        'url' => env('DATABASE_URL', 'https://firstsqlaap.scm.azurewebsites.net/phpMyAdmin/db_structure.php?server=1&db=localdb&token=51b0b3471e798a712e129bcd1ebe5b01'),
        'host' => env('DB_HOST', '127.0.0.1'),
        'port' => env('DB_PORT', '53082'),
        'database' => env('DB_DATABASE', 'localdb'),
        'username' => env('DB_USERNAME', 'user'),
        'password' => env('DB_PASSWORD', 'pass'),
        'charset' => 'utf8mb4',
        'collation' => 'utf8mb4_unicode_ci',
        'options' => extension_loaded('pdo_mysql') ? array_filter([
            PDO::MYSQL_ATTR_SSL_CA => env('MYSQL_ATTR_SSL_CA'),
        ]) : [],
    ],
];
My SESSION_DRIVER is set to database because when it was set to file I was getting a 419 error. I don't have any migration files because my database is already deployed on Azure.
How can I resolve this issue?
This certainly isn't the right URL to use:
https://firstsqlaap.scm.azurewebsites.net/phpMyAdmin/db_structure.php
You appear to be pointing to an instance of phpMyAdmin. phpMyAdmin isn't a database server, it's a database client: a tool that you might use to interact with your database. You need to provide the URL of your actual database.
Your database URL should look more like this:
driver://username:password@host:port/database?options
For MySQL, driver:// is likely mysql://.
I don't have any MySQL databases running on Azure, but it looks like a real URL might be something like
mysql://user:password@your-database-instance.mysql.database.azure.com/your-database-name
Go into the Azure portal and navigate to your database instance. Then, in the left navigation panel, click on "Connection strings". The information you need should be there, though not in URL format. You can either build your own URL by plugging the right values in or use the individual settings in your config/database.php file.
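For example, with placeholder values (not taken from the question; copy the real ones from the Connection strings blade), the individual .env settings might look roughly like:
DB_CONNECTION=mysql
DB_HOST=your-server-name.mysql.database.azure.com
DB_PORT=3306
DB_DATABASE=your-database-name
DB_USERNAME=your-admin-user
DB_PASSWORD=your-password
or, as a single URL, DATABASE_URL=mysql://your-admin-user:your-password@your-server-name.mysql.database.azure.com:3306/your-database-name.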
I commented out the url line and it worked for me.

Laravel Echo Server: "Error sending authentication request" on production using HTTPS (Private/Presence channel subscription_error)

I created a realtime chat app using Laravel 5.8, Laravel Echo, Redis, Socket.io and Vue.js.
It works fine on my local machine (localhost). But when I tried to move the app to the production server (which uses HTTPS), it stopped working and shows the following error in the laravel-echo-server.log file:
I've also changed laravel-echo-server.json (the echo server config file) like so:
You can also see how the window.Echo config looks (in app.js):
import Echo from 'laravel-echo'
window.io = require('socket.io-client');
window.Echo = new Echo({
broadcaster: 'socket.io',
host: window.location.hostname + ':6001'
});
Maybe I have some bugs in the configs mentioned above?
Also, I think this might be helpful; check the screenshot below of what I noticed in one of the client's requests:
I tried to find more information about the error and why it could appear, but with no results. The Laravel log file was empty, and the nginx error.log file was empty too; the only place where I saw the error was the laravel-echo-server.log file (I showed its content above).
For additional information, here are the settings I'm using for the Redis server in my .env file (I cleared the cache when I changed .env):
...
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
...
Also, that's what I have in my config/database.php file:
...
'redis' => [
    'client' => env('REDIS_CLIENT', 'predis'),
    'options' => [
        'cluster' => env('REDIS_CLUSTER', 'predis'),
    ],
    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_DB', 0),
    ],
    'cache' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_CACHE_DB', 1),
    ],
],
...
Thank you guys so much for any help. I have been struggling with this problem for the last few days and can't figure it out, so I will be grateful for any tips!
Run this command in a terminal:
sudo nano /etc/hosts
Add your project's host to the hosts file:
127.0.0.1 naturalselection.ie
Save the file, then run:
laravel-echo-server start
And enjoy your live notifications with Laravel Echo.
I sorted it out.
To fix this kind of issue, open /etc/hosts:
sudo nano /etc/hosts
and add the following line:
127.0.0.1 your_domain_name.com
That's all. Hope it will help somebody :)

Laravel 5 Multiple Database Connections

Do I need to have my DB connection info in 2 places? If I do the below, I connect just fine. If I remove either file's data, I can't connect.
config/database.php file:
'blah_1' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '1.1.1.1'),
    'port' => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'someDB_1'),
    'username' => env('DB_USERNAME', 'someUser_1'),
    'password' => env('DB_PASSWORD', 'somePass_1'),
],
'blah_2' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST_SECOND', '2.2.2.2'),
    'port' => env('DB_PORT_SECOND', '3306'),
    'database' => env('DB_DATABASE_SECOND', 'someDB_2'),
    'username' => env('DB_USERNAME_SECOND', 'someUser_2'),
    'password' => env('DB_PASSWORD_SECOND', 'somePass_2'),
],
.env file:
DB_CONNECTION=blah_1
DB_HOST=1.1.1.1
DB_PORT=3306
DB_DATABASE=someDB_1
DB_USERNAME=someUser_1
DB_PASSWORD=somePass_1
DB_CONNECTION_SECOND=blah_2
DB_HOST_SECOND=2.2.2.2
DB_PORT_SECOND=3306
DB_DATABASE_SECOND=someDB_2
DB_USERNAME_SECOND=someUser_2
DB_PASSWORD_SECOND=somePass_2
The short answer is no. You only need it in your config/database.php. The .env file is there to override those settings per environment without touching the configuration file.
For example, in your local environment your credentials are most likely different from those in your production environment. You wouldn't want to edit config/database.php locally and then have to remind yourself not to push the file.
However, the connections will still work even if you remove the values from the .env file; env() will simply fall back to the default passed as its second parameter.
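For completeness, a short sketch of how you would actually use the two connections (connection names taken from the question; the users table and model name are just illustrations):
use Illuminate\Support\Facades\DB;
use Illuminate\Database\Eloquent\Model;

// Query builder on a specific connection
$users = DB::connection('blah_2')->table('users')->get();

// Or pin an Eloquent model to one connection
class SomeModel extends Model
{
    protected $connection = 'blah_2';
}
Anything that doesn't specify a connection falls back to DB_CONNECTION (blah_1 here).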

Why can't I find my Laravel session keys in the Redis container?

I have set up my Laravel application with Docker: one container is dedicated to the app and one to Redis.
I have set up Laravel to use Redis for sessions and caching.
Everything works fine, but if I enter my Redis container and try to list all keys, like:
$ redis-cli
> KEYS *
it returns only the key values used for caching, not the session keys.
The above is a double-check, because from the Laravel application I actually set session keys and then dump them like:
<?php dump(session()->all()); dump(Session::getDefaultDriver()); ?>
and from the dump everything looks fine.
I can see my session keys and their value data structures.
Session::getDefaultDriver() // returns "redis"
So, since I can see cache key:value pairs inside the Redis container, I assume there is no connection/Docker container issue: Laravel is writing to the correct place, and the Redis default connection is shared by Cache and Session.
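A fuller check, since KEYS * only lists keys in the currently selected logical database (0 by default), would be something like:
redis-cli -n 0 keys '*'
redis-cli -n 1 keys '*'
redis-cli info keyspace
(info keyspace reports which logical databases actually contain keys.)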
In database.php I have:
'redis' => [
    'client' => 'predis',
    'default' => [
        'host' => env('REDIS_HOST', 'redis'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
],
In session.php I have:
'driver' => env('SESSION_DRIVER', 'redis'),
...
'lifetime' => env('SESSION_LIFETIME', 120),
Since the dumps return correct values in the Laravel web application, I'm assuming the session is working properly and points to Redis.
What am I missing?

AWS ElastiCache Redis can't connect from Laravel or from redis-cli

I'm having a problem connecting to ElastiCache Redis from a Laravel application installed on an EC2 instance, or even using redis-cli from the EC2 instance.
Laravel
I tried to use predis with this configuration in database.php:
'redis' => [
    'client' => 'predis',
    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
        'read_write_timeout' => -1,
        'timeout' => 0,
    ],
],
and got 'Error while reading line from the server. [tcp:server here]'
I tried the phpredis extension with the same configuration, only changing 'client' => 'phpredis', and got the error: read error on connection {"exception":"[object] (RedisException(code: 0): read error on connection at vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php:69)
Redis cli
Using the Redis CLI (redis-cli -h host_here -p 6379 -a password_here) I see a prompt like host:6379>, but typing any command throws the error Error: Connection reset by peer.
ElastiCache Redis configurations
My EC2 instance and ElastiCache cluster are in the same VPC, and using telnet I can connect to the Redis instance:
~$ telnet host 6379
Trying 172.31.23.113...
Connected to host.
Escape character is '^]'.
Thanks for any help!
I know this is pretty old, but I was having the same issue myself. If anyone encounters this issue, see the solution here and here.
It seems that when you enable encryption in transit in AWS ElastiCache, it prevents you from using redis-cli, as redis-cli doesn't support TLS connections. Switching to another client should work. This answer has a list of TLS-enabled clients.
Edit:
Did some more digging and found that using stunnel you can wrap your redis-cli connection with SSL. Here is a guide for doing it.
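A minimal sketch of such an stunnel client config (the endpoint is a placeholder and the local port 6380 is an arbitrary choice):
[redis-cli]
client = yes
accept = 127.0.0.1:6380
connect = your-cluster.xxxxxx.use1.cache.amazonaws.com:6379
Start stunnel with that config file, then point redis-cli at the local end, e.g. redis-cli -h 127.0.0.1 -p 6380 -a password_here.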
Related: Laravel + Redis Cache via SSL?
To which I've answered here: https://stackoverflow.com/a/48876398/663058
Relevant details below:
Since you have clustering and TLS, you'll need a different config entirely:
'redis' => [
    'client' => 'predis',
    'cluster' => env('REDIS_CLUSTER', false),
    // Note! for single redis nodes, the default is defined here.
    // keeping it here for clusters will actually prevent the cluster config
    // from being used, it'll assume single node only.
    //'default' => [
    //    ...
    //],
    // #pro-tip, you can use the Cluster config even for single instances!
    'clusters' => [
        'default' => [
            [
                'scheme' => env('REDIS_SCHEME', 'tcp'),
                'host' => env('REDIS_HOST', 'localhost'),
                'password' => env('REDIS_PASSWORD', null),
                'port' => env('REDIS_PORT', 6379),
                'database' => env('REDIS_DATABASE', 0),
            ],
        ],
        'options' => [ // Clustering specific options
            'cluster' => 'redis', // This tells the Redis client lib to follow redirects (from the cluster)
        ],
    ],
    'options' => [
        'parameters' => [ // Parameters provide defaults for the Connection Factory
            'password' => env('REDIS_PASSWORD', null), // Redirects need the password for the other nodes
            'scheme' => env('REDIS_SCHEME', 'tcp'), // Redirects also must match scheme
        ],
        'ssl' => ['verify_peer' => false], // Since we don't have a TLS cert to verify
    ],
]
Explaining the above:
'client' => 'predis': This specifies the PHP Library Redis driver to use (predis).
'cluster' => 'redis': This tells Predis to assume server-side clustering, which just means "follow redirects" (e.g. -MOVED responses). When running with a cluster, a node will respond with a -MOVED to the node that you must ask for a specific key.
If you don't have this enabled with Redis Clusters, Laravel will throw a -MOVED exception 1/n times, n being the number of nodes in the Redis cluster (it'll get lucky and ask the right node every once in a while).
'clusters' => [...]: Specifies a list of nodes, but setting just a 'default' and pointing it to the AWS 'Configuration endpoint' will let it find any/all other nodes dynamically (recommended for Elasticache, because you don't know when nodes are comin' or goin').
'options': For Laravel, these can be specified at the top level, the cluster level, and the node level (they get combined in Illuminate before being passed off to Predis).
'parameters': These 'override' the default connection settings/assumptions that Predis uses for new connections. Since we set them explicitly for the 'default' connection, they aren't used there. But for a cluster setup, they are critical: a 'master' node may send back a redirect (-MOVED), and unless the parameters are set for password and scheme, Predis will assume defaults and that new connection to the new node will fail.
If you are using predis as the client, you can change the Redis connection in config/database.php:
'redis' => [
    'client' => 'predis',
    'default' => [
        'scheme' => 'tls',
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
],
You can add 'scheme' to it.
I am using Laravel 5.8, and with this everything works great within Laravel.
But yes, since redis-cli doesn't support TLS connections, it will still not work.
