docker-machine time is 4 hours ahead of my MacBook: container - UTC, MacBook - EDT

I am running docker-machine (--driver amazonec2) on a Mac, installed via Homebrew. Inside the container,
date
returns a date 4 hours ahead (UTC instead of EDT). How do I fix this? I would like the date to be the same as on my local machine, i.e. both set to EDT. I have tried restarting docker-machine (which failed with an error) and setting environment variables. So far, all I have managed to do is set $TZ to America/New_York; however, this isn't helpful, as the date still shows UTC.
Adding
RUN echo "America/New_York" > /etc/timezone
to my Dockerfile and mounting the volumes
volumes:
  - "/etc/timezone:/etc/timezone:ro"
  - "/etc/localtime:/etc/localtime:ro"
in docker-compose has not fixed the issue.
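For reference, TZ only affects how the clock is rendered, and it only takes effect when the image contains timezone data. On a system with tzdata installed, a quick sketch:

```shell
# TZ changes how `date` renders the (unchanged, UTC-based) system clock
TZ=UTC date +%Z               # prints UTC
TZ=America/New_York date +%Z  # prints EDT or EST, depending on the season
```

Alpine-based images don't ship tzdata by default, which is one common reason setting TZ appears to do nothing.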

Instead of
FROM node:13.12.0-alpine as build
I used:
FROM node:13.12.0 as build
This, in conjunction with setting
ENV TZ America/New_York
in my Dockerfile, solved the problem. Note: both of these modifications were necessary to fix the issue.
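Putting both changes together, a minimal Dockerfile sketch (the Alpine alternative is an assumption on my part: Alpine images omit the tzdata package, so installing it should let TZ take effect there too):

```dockerfile
# Debian-based image: zoneinfo files are present, so TZ alone is enough
FROM node:13.12.0 as build
ENV TZ America/New_York

# Assumed alternative if you want to stay on Alpine:
# FROM node:13.12.0-alpine as build
# RUN apk add --no-cache tzdata
# ENV TZ America/New_York
```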

Related

How to solve exit code 139 when running the image cloudera/quickstart in Docker with WSL2 Ubuntu?

I'm using WSL2 with the Ubuntu 20.04 distribution, and I was trying to create a container in Docker with the following command:
docker run --hostname=quickstart.cloudera --privileged=true -it -v $PWD:/src --publish-all=true -p 8888:8888 -p 8080:8080 -p 7180:7180 cloudera/quickstart /usr/bin/docker-quickstart
When I ran this command, a download of about 4.4 GB started (I think because it was the first time I ran this container). When the download was over, I used docker ps -a to check the containers, and the status for the container is Exited (139) 6 minutes ago. When I check my image list:
REPOSITORY            TAG      IMAGE ID       CREATED        SIZE
uracilo/hadoop        latest   902e5bb989ad   8 months ago   727MB
cloudera/quickstart   latest   4239cd2958c6   4 years ago    6.34GB
I think the image was created successfully, but when I try to run the first command, I keep getting Exited (139) in the status and I can't use the container.
Apparently exit code 139 refers to some problem with the system or the hardware (139 = 128 + 11, i.e. the process was killed by a segmentation fault), maybe the RAM, but I'm not sure, and I don't know whether the problem is because I'm using WSL or because my 8 GB of RAM is not enough to run the image.
Is there any way to run this image successfully?
You need to create a file named .wslconfig under the %userprofile% folder on Windows and copy the following lines into it:
[wsl2]
kernelCommandLine = vsyscall=emulate
Then just restart your Docker engine.
I fixed this by switching the Docker engine from the WSL2 back-end to Hyper-V.
https://community.cloudera.com/t5/Support-Questions/docker-exited-with-139-when-running-cloudera-quickstart/td-p/298586

phpMyAdmin connection timeout: no "Login cookie validity" option under Settings

This is my first Stack Overflow post ever. Hurrah for me :)
phpMyAdmin 4.9.1. How can I change the connection timeout? For now it's 1440 seconds.
Settings / Features / General doesn't show the option "Login cookie validity".
OS: MacOS Catalina 10.15.1 (19B88)
Google Chrome Version 78.0.3904.97 (Official Build) (64-bit)
Thanks for your help.
In phpmyadmin\libraries\config.default.php, change
$cfg['LoginCookieValidity'] = 1440;
to
$cfg['LoginCookieValidity'] = 7200;
to make the session log out after 2 hours.
In phpmyadmin\libraries\config.default.php, change
$cfg['ExecTimeLimit'] = 1440;
to
$cfg['ExecTimeLimit'] = 0;
and restart; this will remove the limit.
Try adding the line $cfg['LoginCookieValidity'] = 7200; to the config.inc.php file in the phpMyAdmin directory.
This sets it to two hours.
If you're using phpMyAdmin in a DOCKER container:
Go to your terminal and list your Docker containers to find the phpMyAdmin one:
$ docker ps
# EXAMPLE OF OUTPUT:
CONTAINER ID   IMAGE                   COMMAND                  CREATED       STATUS       PORTS                                      NAMES
93dfa1f49775   php:7.4.1-fpm           "docker-php-entrypoi…"   2 weeks ago   Up 2 weeks   9000/tcp                                   docker_localhost_app
36299ca6ce83   nginx:alpine            "/docker-entrypoint.…"   2 weeks ago   Up 2 weeks   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   docker_localhost_nginx
c1d8e6ffd28c   phpmyadmin/phpmyadmin   "/docker-entrypoint.…"   2 weeks ago   Up 2 weeks   0.0.0.0:8080->80/tcp                       docker_localhost_myadmin
d75778f88cc6   mysql:5.6               "docker-entrypoint.s…"   2 weeks ago   Up 2 weeks   0.0.0.0:3306->3306/tcp                     docker_localhost_db
Execute bash in the chosen container:
$ docker exec -it docker_localhost_myadmin bash
root@c1d8e6ffd28c:/var/www/html#
In this case docker_localhost_myadmin is the name of my container.
Install the vim editor so you can change the file that holds the timeout variable. Execute:
$ apt update
$ apt upgrade
$ apt install vim
Edit the file config.default.php:
$ vim /var/www/html/libraries/config.default.php
Find and change the variable $cfg['LoginCookieValidity'] from 1440 to 28800 (8 hours).
To find the variable in vim, press / and type LoginCookieValidity.
Before:
$cfg['LoginCookieValidity'] = 1440;
After:
$cfg['LoginCookieValidity'] = 28800;
NOTE 1: DO NOT set it to 0 (zero), as that will make phpMyAdmin log you out immediately.
NOTE 2: You may face a message on your phpMyAdmin like:
Your PHP parameter session.gc_maxlifetime is lower than cookie validity configured in phpMyAdmin, because of this, your login might expire sooner than configured in phpMyAdmin.
In this case, set the session.gc_maxlifetime environment variable in your docker-compose.yml to - session.gc_maxlifetime=28800 or bigger.
Reload Apache inside the container (or restart the container):
$ /etc/init.d/apache2 reload
Log out of and back into phpMyAdmin to see the results.
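Instead of installing vim in the container, the same edit can be scripted with sed. The sketch below runs against a local stand-in copy of the file; inside the container the real path is /var/www/html/libraries/config.default.php, and you would run the sed line via docker exec:

```shell
# Stand-in copy of the relevant line from config.default.php
printf "%s\n" "\$cfg['LoginCookieValidity'] = 1440;" > config.default.php

# Raise the login cookie validity from 24 minutes to 8 hours
sed -i "s/\('LoginCookieValidity'\] = \)1440/\128800/" config.default.php

grep LoginCookieValidity config.default.php   # → $cfg['LoginCookieValidity'] = 28800;
```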

Time isn't synchronized with the server, causing a cert error when cloning

I'm setting up a honeypot for my boss, and I'm running into an issue getting the time to synchronize with my workstation's time. Before following the steps in the link below, I had the NOOBS Raspbian OS installed, which had the same issue with not being able to clone; after running sudo apt-get install ntp, I was able to clone the files into the system with no problems. But because the link below calls for the "Raspbian Stretch Lite OS", I had to redo the process, and now I can't seem to get the time to sync anymore.
https://github.com/DShield-ISC/dshield
So when I attempt the following command from the steps:
git clone https://github.com/DShield-ISC/dshield.git
fatal: unable to access 'https://github.com/DShield-ISC/dshield.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
I've tried the following methods with no luck:
sudo /etc/init.d/ntp stop
sudo raspi-config (setting timezone)
sudo /etc/init.d/ntp start
The timedatectl settings are as follows:
Local time: Mon 2016-02-04 12:04:52 PST
Universal time: Mon 2016-02-04 20:04:52 UTC
RTC time: n/a
Time zone: America/Los_Angeles (PST, -0800)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
Also, I've tried:
sudo ntpd -q -g
I've noticed that with this command I get a ton of output and the process never finishes; if this is vital, I can re-run the command and report what kind of information comes back.
And yes, I set the time as close as possible to the actual clock before attempting any of these. I've noticed that regardless, it's always a minute or a few seconds off after rebooting; I'm assuming that's because it isn't synchronized, even though it states that it is.
The cert error was due to my Raspberry Pi being hardwired rather than using Wi-Fi: after going into sudo raspi-config and setting up a Wi-Fi connection, I was able to successfully clone the GitHub repo.

Vagrant CoreOS box missing fleetctl

I am following the CoreOS in Action book (and also CoreOS online instruction) to bring up a 3-node cluster using Vagrant and VirtualBox on MacOS.
It all goes fine: the machines come up running and I can SSH into one of them, but the boxes that come up appear to be missing fleetctl (which makes no sense, as it's such a core component of CoreOS):
$ vagrant ssh core-01 -- -A
Last login: Thu Mar 1 21:28:58 UTC 2018 from 10.0.2.2 on pts/0
Container Linux by CoreOS alpha (1702.0.0)
core@core-01 ~ $ fleetctl list-machines
-bash: fleetctl: command not found
core@core-01 ~ $ which fleetctl
which: no fleetctl in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin)
What am I doing wrong?
I have changed the number of instances to 3, created a new "discovery token URL", and updated the user-data file; Googling around, I seem to be the one and only person having this problem.
Thanks in advance for any suggestions you may have!
PS: yes, I have tried (several times!) to vagrant destroy and rebuild the cluster; I even nuked the repo and re-cloned it. Same issue every time.
The answer is going to make you a bit sad, here it is:
CoreOS no longer supports fleet. It's gone. Ciao :(
https://coreos.com/blog/migrating-from-fleet-to-kubernetes.html
To this end, CoreOS will remove fleet from Container Linux on February 1, 2018, and support for fleet will end at that time. fleet has already been in maintenance mode for some time, receiving only security and bugfix updates, and this move reflects our focus on Kubernetes and Tectonic for cluster orchestration and management.
You are using Container Linux 1702.0.0; fleet has been removed since release 1675.0.1: https://coreos.com/releases/
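If you still want to work through the book's fleet examples, one possible workaround (an assumption on my part, using the variable names from the coreos-vagrant repo's config.rb.sample) is to pin an image version older than the removal:

```ruby
# config.rb -- pin a Container Linux release that predates fleet's removal
# in 1675.0.1 ("1632.3.0" is an assumed example of such a release)
$update_channel = "stable"
$image_version  = "1632.3.0"
$num_instances  = 3
```

A longer-term fix is to follow the linked migration guide and move the exercises to Kubernetes.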

`docker-compose up` times out with UnixHTTPConnectionPool

On our Jenkins agents we run around 20 tests whose setup involves running docker-compose up for a "big" number of services/containers (around 14).
From time to time, I'll get the following error:
ERROR: for testdb-data UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
I haven't been able to reproduce this consistently, and I'm still trying to figure out whether there is a correlation with our agents' resources being fully utilized.
docker -v is 1.10.1 and docker-compose -v is 1.13.1.
Any ideas about what this may be related to?
Restarting the Docker service:
sudo systemctl restart docker
and setting DOCKER_CLIENT_TIMEOUT and COMPOSE_HTTP_TIMEOUT environment variables:
export DOCKER_CLIENT_TIMEOUT=120
export COMPOSE_HTTP_TIMEOUT=120
are two workarounds for now, but the issues are still open on the Docker Compose GitHub:
https://github.com/docker/compose/issues/3927
https://github.com/docker/compose/issues/4486
https://github.com/docker/compose/issues/3834
I had the same problem. It was solved by changing the max-file value from a number to a string.
Wrong config:
logging:
  options:
    max-file: 10
    max-size: 10m
Correct config:
logging:
  options:
    max-file: "10"
    max-size: 10m
docker-compose down
Running docker-compose down and then docker-compose up --build may work. I work in VS Code, and when I encountered a similar problem while building, this solution worked for me.
Before running the command above, it's worth checking what docker-compose down actually does (it stops and removes the containers and networks created by up).
docker-compose restart
This fixed my issue. It restarts all stopped and running services.
docker-compose down
export DOCKER_CLIENT_TIMEOUT=120
export COMPOSE_HTTP_TIMEOUT=120
docker-compose up -d
The following steps worked for me; sometimes the cause is Docker's internal cache:
1. docker-compose down -v --rmi all (removes all images, volumes, and cached layers)
2. docker-compose up --build
