Docker time is not the same as on my Windows 10 laptop

I have my container running in Docker, but when I look at some lines that I printed with the corresponding time, I see that it is consistently 2 hours behind.
What I've done:
I did a software update
I removed every container and created them again
I added these lines as volumes in my docker-compose.yml file:
/etc/timezone:/etc/timezone:ro
/etc/localtime:/etc/localtime:ro
But none of this worked.
What causes this problem?
How can I fix this?
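For reference, a common workaround when mounting the host files doesn't help is to set the container's timezone explicitly through the TZ environment variable; a minimal sketch, assuming an image that ships tzdata and Europe/Brussels as the desired zone:
# run a throwaway container with the zone set explicitly and print its local time
docker run --rm -e TZ=Europe/Brussels debian:stable date
# the docker-compose equivalent is an environment entry on the service:
#   environment:
#     - TZ=Europe/Brussels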

Related

Docker uses memory without any image

I am trying to create a Docker image from a Dockerfile on Windows 10. Being new to it, I had the build crash multiple times due to syntax errors in the Dockerfile. I tried to clear all the images by using docker system prune --all and got some disk space cleared up (if I am right, "system" here means HDD rather than RAM?). Anyway, I see that Docker.Service seems to be using 6+ GB of memory.
My question is, is there a way to clear the memory used by Docker.Service? Why is it using so much memory when no image is in use? I know that it can be cleared by exiting Docker or force-closing it.
Update
By the way, I am using Linux containers; there is an option for this when right-clicking the Docker icon in the tray.
Update 2
I tried all the commands from the documentation page (https://docs.docker.com/config/pruning/) with no effect.
Update 3
Doesn't seem to clear even when the image is created and saved. Looks like a bug?
Docker creates an image layer for each command in the Dockerfile and stores the layers in its cache, so whenever your Dockerfile build is interrupted, those intermediate layers remain in the cache. Hence the consumption you are seeing.
Try the following command, which will remove all the images:
docker rmi $(docker images -a -q)
Docker runs Linux containers in a VM on Windows. The 6 GB of RAM is likely what you have assigned to that VM. Use docker stats to see the resource usage of containers running inside that VM.
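For reference, two built-in commands make it easier to tell VM disk usage apart from container memory usage; a minimal sketch, nothing here is specific to this setup:
# disk usage inside the Docker VM, broken down by images, containers and volumes
docker system df
# one-shot snapshot of CPU and memory usage of running containers
docker stats --no-stream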

Raspberry Pi 3 freezes - triggered by rsyslogd?

My Raspberry Pi keeps freezing after a few hours of activity. It often happens at exactly 17 minutes past the hour, so I suspect this might be related to a cron job. When I look into /var/log/syslog after rebooting, I see that each freeze comes right after a flurry of activity that always starts with this line (you can see the entire log output following that line, all the way to the point of freezing, at https://pastebin.com/m1dmferU):
Aug 13 13:17:05 raspberrypi rsyslogd: [origin software="rsyslogd" swVersion="8.4.2" x-pid="486" x-info="http://www.rsyslog.com"] start
So I'm wondering if something related to rsyslogd might be causing this.
FYI: After the freeze the monitor is dark and unresponsive.
I'll be grateful for any help!
marc.
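A quick way to test the cron-job hypothesis: Debian's stock /etc/crontab kicks off the hourly jobs at 17 minutes past the hour, so listing what is scheduled then can narrow this down; a minimal sketch on a stock Raspbian install:
# show any crontab entries that fire at minute 17 (the Debian default for cron.hourly)
grep -n '^17' /etc/crontab /etc/cron.d/* 2>/dev/null
# and list what actually runs hourly
ls -l /etc/cron.hourly/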

Kibana stops showing data after a while. Logs too big?

I'm running PALallax, a custom version of Kibana/Elasticsearch for Palo Alto firewalls. I have it installed on CentOS 7 with more than enough resources (4 processors, 16 GB of RAM). It works fine; however, almost every day, about halfway through, Kibana stops showing results and ends up with the dreaded "no results found". I know the pipeline still works, though: the log file continues to grow (and it is big, by the way, about 11 GB halfway through the day). No matter what I do, I can't get any information to display until I delete the log and index files on the server and reboot; then it starts working again.
I've looked through logs all around the system and can't figure out what is going on. I'm not a Linux expert, so unfortunately I've run out of ideas and have nothing else to try. I've spent countless days googling different things and haven't been able to isolate any specific problem in the logs.
Any suggestions on where to look? Are my logs too big? I can see that I'm not running out of RAM while this is happening. I always have it set to the 'last hour' of data, set to auto-refresh every 5 minutes.
Monitor the free disk space and set up automatic deletion of old indices to avoid running out of disk space.
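A minimal sketch of that housekeeping, assuming Elasticsearch listens on localhost:9200; the index name in the delete is only a placeholder:
# list each index with its on-disk size, and check remaining disk space
curl -s 'localhost:9200/_cat/indices?v&h=index,store.size'
df -h
# delete one old index by name (substitute a real name from the listing above)
curl -XDELETE 'localhost:9200/pan-traffic-2017.01.01'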

docker on OSX slow volumes

I'm trying to use Docker beta on OS X, mainly for Symfony development, but the mounted volumes are incredibly slow. Even for a vanilla Symfony project I get a 6 s page load time. That's unbearable! Has anyone found a solution to this issue? I'm trying to move away from Vagrant, but I just can't find any reasonable way to work with Docker instead.
Okay, the user Spiil gave a solution, but I wanted to elaborate on the exact steps to take, since I spent 12 hours trying to figure it out; once you know how, it's super easy and fixes all the slowdown issues!
The key here is to understand that this solution creates NFS (Network File System) mounts as the means of communication from the Docker containers to your Mac, instead of the standard OS X file sharing, which is currently very slow, either due to bugs or the way it works.
Follow these steps exactly.
1.) Clone this repo (https://github.com/IFSight/d4m-nfs) into your home directory. To do this, open up Terminal and type cd ~
Then type git clone https://github.com/IFSight/d4m-nfs
Alternatively, you can also do this in a one-liner: git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
2.) Next, go into the d4m-nfs folder and create a new file in its etc folder titled d4m-nfs-mounts.txt
3.) Add the following line to this file:
/Users/yourusername:/Users/yourusername:0:0
What the above does is allow you to still use relative folders with docker-compose; the trailing 0:0 is the uid:gid mapping for the NFS export (root in this case).
EDIT
Do not put /Volumes here!!
4.) Go to your docker preferences and do the following
Make sure only /tmp is showing and NOTHING ELSE. I mean nothing else; it won't work if there is anything else, since it will create conflicts with the NFS exports the script will create for you later. Restart Docker, and docker-compose down any running containers as well.
5.) Finally, navigate to the d4m-nfs directory we cloned in step 1 and type the following command: /bin/bash d4m-nfs.sh
edit: The correct way to type the command above, as another user from GitHub (if-kenn) pointed out, is ./d4m-nfs.sh, which uses the shebang to pick the shell that should run it.
If done correctly there should be no errors and this should work. Please note: DO NOT run it as sh d4m-nfs.sh; this will create errors and you will have to delete your exports file to start over. In fact, anytime you make any changes you will have to clear your exports file.
This is what my /etc/exports file looks like (screenshot omitted).
EDIT: IMPORTANT -- Remove the /private and /Volumes entries! It should contain only the /Users/username export now!
If you see anything other than this, you were not running the script with bash. If you make any errors, you can quickly get to the exports file on a Mac and just clear it out to start over.
Just select Go to Folder in Finder
and then type /etc/exports
This is a nice shortcut to quickly get to it and clear it out in your favorite text editor.
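If you prefer the terminal over Finder, a minimal sketch that backs up and empties the exports file (standard macOS tools only):
# keep a backup, then truncate /etc/exports so the script can regenerate it
sudo cp /etc/exports /etc/exports.bak
sudo sh -c ': > /etc/exports'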
Also make sure no containers are running, or you will get the ........ loop of death. If this loop of death continues, make sure you upgrade Docker and then restart your computer. Yes, restart... it seemed to be the only way to get it to work on my friend's computer. Refer to this: https://github.com/IFSight/d4m-nfs/issues/3
Note on the .... loop: I recently found another solution. Make sure you are NOT logged in as root, and make sure you pulled the git repo into your user's ~ folder, not root's ~ folder. In other words, it should be under /Users/username.
Also, make sure the /tmp folder has full write permissions, since the script needs to write there or this won't work either: chmod -R 777 /tmp
6.) If you did it right, the script will run through with no errors (screenshot of the output omitted).
Then simply run docker-compose up -d as usual in your Symfony project folder (or whatever project you are using with Docker) and everything should work... except with NO MORE slowdowns!
You will need to run this script anytime you restart your computer or Docker.
Also note, if you get mounting errors, you probably don't have your project stored in your /Users/username directory. Remember, that is what we mounted. If your project is somewhere else, you will need to modify the d4m-nfs-mounts.txt file accordingly.
Other Info:
For people reading this now, maybe it's better to wait for Docker to fix this issue. A pull request has already been accepted to improve performance (https://github.com/docker/docker/pull/31047).
This will be released sometime in April 2017 and should be a big improvement.
I've tried some workarounds for Docker for Mac, but all of them had some pretty big disadvantages, mostly in usability. A good source for alternatives to OSXFS can be found at https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync. Credits to Eugen Mayer for setting this up.
EDIT:
The first improvement has been implemented in the edge release; https://github.com/docker/for-mac/issues/77 has more info on this.
There's a long thread with explanations from the Docker team and various workarounds.
Currently, the issue is being tracked on GitHub.
While some workarounds may be better than others, I'm afraid the ideal option for now is to switch to Linux.
I spent a lot of time searching for a viable solution, and I found one:
d4m-nfs
It allows you to use Docker volumes via NFS.
In my case it improved performance 16 times (1.8 s vs ~30 s)!
d4m-nfs also has quite an intricate manual, so here is another link with a detailed example: https://github.com/laradock/laradock/issues/353#issuecomment-262897619
I'll just leave this here for other googlers.
Normally, volumes should be fast.
But you cannot change anything to make them faster if you don't want to change the format of your disk.
Maybe the bottleneck is the CPU or RAM, though.
You can check that with the command docker stats. These are by default set to 2 cores and 2 GB of RAM; you can change this in the Docker for Mac GUI.
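For reference, a quick sketch of both checks from the terminal; the grep pattern is just a convenience:
# what the Docker for Mac VM is currently allocated
docker info | grep -iE 'cpus|total memory'
# what the running containers are actually consuming
docker stats --no-stream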
I had exactly the same thing. For me, using docker-bg-sync (see it on GitHub) made a dramatic improvement in speed and CPU usage.
It's not as nice as just mounting the volume, since you have to start a new container for every sync, but it does the job.
In the latest Docker, 17.06.0-ce-mac18, volumes mounted with :cached seem to run quite decently.
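A minimal sketch of the :cached flag, with the paths and image purely as an example; :cached relaxes host-to-container consistency, which is what speeds up reads:
# bind-mount the current project with the cached consistency flag
docker run --rm -v "$(pwd)":/var/www/app:cached php:7.1-apache ls /var/www/app
# docker-compose equivalent:
#   volumes:
#     - .:/var/www/app:cached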
I've found that creating a CoreOS VM under Parallels, then using the Docker that is inside CoreOS, is far faster than Docker for Mac (currently running Version 17.12.0-ce-mac49 (21995)).
I'm doing Linux code builds using CMake/Ninja/GCC and it's almost twice as fast as the exact same build in Docker for Mac.
In my case, I have a ton of library sources that are part of the container (e.g. Boost, OpenSSL), and a decent amount of C++ code that I keep local to my Mac.
This seems to be a recent development. Docker for Mac has become much slower than I remember it being a month or two ago. Maybe it's just me...
We overcame this issue by synchronizing the local and the Docker for Mac filesystems using Syncthing. We built an open-source tool that follows this approach, in case it helps:
https://github.com/okteto/cnd

Docker Run Time Statistics (Benchmarks)

I know there are lots of Docker experts around, but I spent considerable time trying to find some facts and figures about Docker's runtime performance and unfortunately could not get any concrete answers. Let me start by telling you my system's configuration:
(a) Running CentOS 6.5 on a machine with 48 GB RAM, a 1 TB disk and 12 CPU cores.
(b) I built a Docker image that is almost 6.5 GB in size.
Below are my questions, if someone can answer them for the benefit of readers:
(a) With the given configuration, how many containers can I run in parallel without breaking any functionality?
(b) Assume I have two images, each 3.5 GB in size; is it better to run multiple small images, or do we get better performance with one big image?
(c) What is the best filesystem option to use with Docker?
EDIT: more information
(d) Actually, I'm trying to put many compilers inside a container and give users a facility to compile their languages online. This tool is under development and will replace my existing website compileonlone.com. Things are going fine; I built two images with a few compilers in each. I'm able to run around 250 containers successfully, and after that I start getting "too many open files" errors. After 250 containers, my RAM usage reaches somewhere around 40 GB and CPU utilization is around 50%.
The main problem I'm facing is removal of the old containers. Because a user will come, compile his code and then go away, I need to remove those containers after a certain period of time, but when I try to remove such stopped containers using docker rm -v, it slows down the main Docker process and it almost hangs. I mean the docker -d daemon which is listening at /var/run/docker.sock. I'm not sure if there is another way to clean up these containers or whether I have hit a bug. Here are the details of Docker:
# docker info
Containers: 1016
Images: 41
Storage Driver: devicemapper
Pool Name: docker-0:20-258-pool
Pool Blocksize: 64 Kb
Data file: /var/lib/docker/devicemapper/devicemapper/data
Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
Data Space Used: 17820.7 Mb
Data Space Total: 102400.0 Mb
Metadata Space Used: 102.4 Mb
Metadata Space Total: 2048.0 Mb
Execution Driver: native-0.2
Kernel Version: 3.17.2-1.el6.elrepo.x86_64
Operating System: <unknown>
WARNING: No swap limit support
If someone can help me with how to delete old containers in the fastest way, it would be great. Simple shell scripts and the like are not working. I have already tried something like
# docker rm -v $(docker ps -a | grep Exited | awk '{print $1}')
but it completely slows down the main Docker process, and it is unable to create new containers while this removal is running.
Thanks for taking the time to answer these questions; it will help me, as well as many others, in going ahead with Docker.
a): A container is like a process. This question is like asking "how many processes can I run in parallel". It is not answerable without knowing what the processes are doing. Please add this information to your question.
b) Both 3.5GB and 6.5GB are very large for a Docker image. Best practice is to put one application in one container: if you have an application that size, then great. If not, maybe you have put your application's data into the image. This is not a good idea because the layered filesystem is slower than a regular filesystem, and you won't be wanting any of the features of layering or snapshotting on your transactional data.
The documentation on managing data explains how to mount regular disk so it is accessible from your containers.
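A minimal sketch of keeping data out of the image and on a regular host directory instead; the paths and image name are only placeholders:
# bind-mount a host directory for transactional data instead of baking it into the image
docker run -d -v /srv/appdata:/var/lib/app myapp:latest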
Edit, after more information was supplied
d) Using up RAM implies the containers are still running. If there is some way within the logic of your site to know when a container is no longer needed you can docker kill it, then docker rm to remove the disk storage. Or docker rm -f does those two operations in one.
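A minimal sketch of that cleanup which also avoids flooding the daemon, assuming a Docker version that supports the status filter; the batch size of 10 is arbitrary:
# remove only exited containers, a few at a time, so the daemon isn't hammered
docker ps -aq -f status=exited | xargs -n 10 docker rm -v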
After a lot of R&D and discussion with many experts, I found a solution to delete containers with lightning speed. It's simple: run your Docker daemon with the dm.blkdiscard=false option, as follows.
docker -d --storage-opt dm.blkdiscard=false
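For what it's worth, on newer Docker releases (1.12+, an assumption beyond the setup in this thread) the same option can live in the daemon configuration file instead of the command line:
# write /etc/docker/daemon.json and restart the daemon (service name may differ per distro)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "storage-driver": "devicemapper",
  "storage-opts": ["dm.blkdiscard=false"]
}
EOF
sudo service docker restart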
By the way, here is what I have developed; it is where I need to create and delete containers at high speed:
http://codingground.tutorialspoint.com
Hope this will help many others.
