Docker containers running for a long time cause cache connection timeout - caching

I built my app with docker-compose: one container is a database using the mariadb image, one runs PHP for Laravel (I installed the php-memcached and php-redis extensions for my app), and one cache container is built on the redis Docker image.
At first everything goes well, but after running for 2 or 3 days I get this PHP exception: Connection timed out [tcp://redis:6379]
I monitor CPU, memory, and network usage with Zabbix, which I installed myself on the host server. Here is what I captured around the time of the error:
[screenshots: CPU and memory monitoring graphs]
I changed the cache container to memcached, and after 2 or 3 days the same thing happened.
The only way I have found to solve this problem is to restart the system, after which it runs for another 2 or 3 days before the same error returns. As you know, it's not possible to restart the system in production, so can anyone suggest where to look for a fix other than restarting the system?
Thanks!

I think you are facing a problem with the Redis Docker container. This type of error occurs when memory is exhausted. You need to set the maxmemory parameter of the Redis server.
Advice: please also try another Redis image.
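As a minimal sketch of that advice (the service name and values are illustrative, not from the question), the memory cap and an eviction policy can be set directly on the container command in docker-compose.yml:

    # docker-compose.yml sketch -- service name and values are illustrative
    cache:
      image: redis:alpine
      # Cap memory so Redis evicts old keys instead of stalling, and evict
      # least-recently-used keys once the cap is reached.
      command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
      restart: unless-stopped

With allkeys-lru, Redis behaves as an LRU cache when full rather than exhausting memory over days of uptime; running INFO memory inside the container confirms the limit took effect.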

Related

Queued jobs are somehow being cached with Laravel Horizon using Supervisor

I have a really strange thing happening with my application that I am struggling to debug, and I was wondering if anyone has any ideas or similar experiences.
I have an application running on Laravel v5.8 which uses Horizon to run the queued jobs on an Ubuntu 16.04 server. I have a feature that archives an account, which is passed off to the queue.
I noticed that it didn't seem to be working, despite working locally and having had the tests passing for the feature.
My last attempt to debug was commenting out the entire handle method and adding Log::info('wtf?!'); to see if even that would work. It didn't; in fact, it was still trying to run the commented-out code. I decided to restart Supervisor and tried again. At last, I managed to get 'wtf?!' written to my logs.
I have since been unable to deploy my code without having to restart Supervisor for it to recognise the 'new' code.
Does Horizon cache the jobs in any way? I can't see anything in the documentation.
Has anyone experienced anything like this?
Any ideas on how I can stop having to restart supervisor every time?
Thanks
As stated in the documentation here:
Remember, queue workers are long-lived processes and store the booted application state in memory. As a result, they will not notice changes in your code base after they have been started. So, during your deployment process, be sure to restart your queue workers.
Alternatively, you may run the queue:listen command. When using the queue:listen command, you don't have to manually restart the worker after your code is changed; however, this command is not as efficient as queue:work:
And as stated here in the Horizon documentation:
If you are deploying Horizon to a live server, you should configure a process monitor to monitor the php artisan horizon command and restart it if it quits unexpectedly. When deploying fresh code to your server, you will need to instruct the master Horizon process to terminate so it can be restarted by your process monitor and receive your code changes.
When you restart Supervisor, you are essentially restarting the horizon command and loading the new code, so the behaviour you are seeing is exactly as expected.
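As a sketch of that documented deployment step (the artisan commands are from the Laravel/Horizon docs; where you run them is up to your deploy script), you signal the workers after each deploy instead of restarting Supervisor by hand:

    # Run after deploying new code, e.g. at the end of your deploy script.
    # Horizon: gracefully terminate the master process; Supervisor then
    # restarts it with the freshly deployed code.
    php artisan horizon:terminate

    # For plain queue:work workers, the equivalent signal is:
    php artisan queue:restart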

DSX local - Models not working

I have installed a DSX 3 node cluster on RHEL 7.4, all notebooks and r-studio code work fine. However, model creation gives this error:
Load Data
Error: The provided kernel id was not found. Verify the input spark service credentials
All kubernetes pods seem to be up and running. Any ideas on how to fix this?
If you are on the September release, I suggest stopping the kernels and restarting them. There was a limit of 10 kernels in that release. You will see the active green button across notebooks/models with an option to stop.

How many Docker containers can I run simultaneously on a single host?

I am new to LXC and Docker. Does the maximum container count depend solely on CPU and RAM, or are there other factors associated with running multiple containers simultaneously?
As mentioned in the comments to your question, it will largely depend on the requirements of the applications inside the containers.
What follows is anecdotal data I collected for this answer (this is on a MacBook Pro with 8 cores and 16GB RAM, with Docker running in a VirtualBox boot2docker VM given 2GB RAM and 2 of the MBP's cores):
I was able to launch 242 (idle) redis containers before getting:
2014/06/30 08:07:58 Error: Cannot start container c4b49372111c45ae30bb4e7edb322dbffad8b47c5fa6eafad890e8df4b347ffa: pipe2: too many open files
After that, top inside the VM reports CPU use around 30%-55% user and 10%-12% system (each redis process seems to use about 0.2%). I also get timeouts when trying to connect to a redis server.
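The pipe2: too many open files error is a file-descriptor limit rather than CPU or RAM. A hedged sketch of raising it (values illustrative; the systemd variant applies to modern Linux hosts rather than the boot2docker VM above):

    # Check the open-file limit in the shell that starts the Docker daemon:
    ulimit -n

    # Raise it for that shell before starting the daemon (value illustrative):
    ulimit -n 65536

    # On a systemd host, the equivalent is a docker.service override
    # (create with: systemctl edit docker):
    #   [Service]
    #   LimitNOFILE=65536

So beyond CPU and RAM, kernel limits like the number of open file descriptors also bound how many containers a single host can run.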

GCE Highmem Instance Slow

I am connecting to a CentOS instance booted from a persistent disk created from here. It is an n1-standard instance, but I'm having some serious performance issues.
I can SSH into the instance just fine; however, commands take a long time to run. I tried to run a simple ping to google.com and it took 2 minutes and 36 seconds for the command to even start. CPU usage is next to nothing, as there isn't really anything going on within the instance except an Apache installation that isn't doing anything yet. My internet connection is fine, and other instances work fine. I've even deleted the instance and started over from scratch.
Is there something wrong with the image the CentOS instance is being created from, or is this just a problem I'm having? What steps can I take to narrow down the problem?
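Not from this thread, but a hedged way to narrow it down: a long pause before a command even starts often means slow name resolution rather than a slow CPU, so timing the lookup separately from the command itself is a quick first test:

    # Time DNS resolution on its own; if this is the slow part, look at
    # the resolver configuration rather than the instance.
    time getent hosts google.com

    # Compare with a ping by IP address, which skips DNS entirely:
    time ping -c 1 8.8.8.8

    # Inspect which resolvers the instance is actually using:
    cat /etc/resolv.conf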

Liferay performance issue on Linux

I have Liferay 6 with Tomcat set up on two machines:
Machine 1:
Windows 2003 Server
2GB RAM, 2GHz CPU
Mysql Ver 14.14 Distrib 5.1.49
Liferay 6.0.6 with Tomcat 6
Machine 2:
Linux CentOS 5.5
4GB RAM, 2GHz CPU
Mysql Ver 14.14 Distrib 5.5.10
Liferay 6.0.6 with Tomcat 6
Both Liferay systems have identical startup parameters and MySQL configurations.
The liferay system contains a custom theme and a servlet filter hook checking each URL access.
We have written a Grinder script to load test the system, starting with 50 concurrent users.
The test script does the following things:
Open home page
Login with username/password
Enter security key (custom portlet)
Move to a private community
Logout
On the Windows system the response time is as expected (nearly 40 seconds mean time for each test in Grinder).
However, on the Linux system the response time is far higher (nearly 4 minutes) for the same operations.
We tried tuning MySQL, Tomcat, the connection pool and a few other parameters, but the results were the same. We also tested each Liferay against the other machine's MySQL (machine 1 Liferay -> machine 2 MySQL).
We are facing the same issue on Linux machines in our test environment and also at our client's end.
This looks like a duplicate question. I suspect your issue is related to memory/JVM configuration, and specifically garbage collection. High CPU utilization under small loads tends to point in that direction.
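As a hedged illustration (flag values are examples for a Tomcat 6 era JVM, not settings from this thread), pinning the heap and enabling GC logging in Tomcat's setenv.sh makes it easy to see whether garbage collection is where the time goes:

    # $CATALINA_HOME/bin/setenv.sh -- sizes are illustrative; tune to the box.
    # A fixed heap avoids resize pauses; the GC log shows pause frequency
    # and duration, which you can compare between the two machines.
    CATALINA_OPTS="$CATALINA_OPTS -Xms1024m -Xmx1024m \
      -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:$CATALINA_HOME/logs/gc.log"
    export CATALINA_OPTS

If the Linux box shows long or frequent GC pauses that the Windows box does not, the JVM sizing is the place to dig.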
In your Grinder script, have you set each of the steps as a separate transaction? This will allow you to see how much time each step is taking. It might be useful to know if everything is slower across the board, or if it's just one transaction type that's slowing you down.
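For reference, a minimal Grinder 3 (Jython) sketch of that idea, with one Test object per step so each step gets its own timing statistics (URLs, test numbers and credentials are illustrative):

    # Grinder 3 script sketch (Jython) -- URLs and credentials illustrative.
    from net.grinder.script import Test
    from net.grinder.plugin.http import HTTPRequest
    from HTTPClient import NVPair

    # A separate Test per step makes each step a separate transaction
    # in the Grinder report.
    homePage = Test(1, "Open home page").wrap(HTTPRequest())
    login    = Test(2, "Login").wrap(HTTPRequest())

    class TestRunner:
        def __call__(self):
            homePage.GET("http://liferay-host:8080/")
            login.POST("http://liferay-host:8080/c/portal/login",
                       (NVPair("login", "user@example.com"),
                        NVPair("password", "secret")))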
Also, is there anything in the Tomcat logs on the Linux box you aren't seeing on windows? Unexpected java stack traces, etc?
Finally, are the databases on each machine identical? Do they have the same amount of data? Do they have the same indexes?
Edit: Is it one transaction that's taking up all the extra time, or is each transaction slower? When you run 'top' on your Linux box, is it the Tomcat Java process that's eating all your CPU, or some other process?
