Unable to start the MGT Development Environment - magento

I'm trying to set up the MGT Development Environment as per the instructions on the site. I'm running Ubuntu 16.04 with native Docker.
I did a fresh pull of the image before trying any of this. After starting the container, the browser at 127.0.0.1:3333 just shows a generic HTTP 500 error. Running docker logs on the container shows the following entries:
docker logs 7b1f04c29bf2
/usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2017-03-28 14:03:53,908 CRIT Supervisor running as root (no user in config file)
2017-03-28 14:03:53,908 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2017-03-28 14:03:53,916 INFO RPC interface 'supervisor' initialized
2017-03-28 14:03:53,917 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2017-03-28 14:03:53,917 INFO supervisord started with pid 1
2017-03-28 14:03:54,919 INFO spawned: 'sshd' with pid 9
2017-03-28 14:03:54,920 INFO spawned: 'postfix' with pid 10
2017-03-28 14:03:54,922 INFO spawned: 'php-fpm' with pid 11
2017-03-28 14:03:54,928 INFO spawned: 'redis' with pid 13
2017-03-28 14:03:54,930 INFO spawned: 'varnish' with pid 16
2017-03-28 14:03:54,932 INFO spawned: 'cron' with pid 18
2017-03-28 14:03:54,934 INFO spawned: 'nginx' with pid 19
2017-03-28 14:03:54,935 INFO spawned: 'clp-server' with pid 20
2017-03-28 14:03:54,937 INFO spawned: 'clp5-fpm' with pid 23
2017-03-28 14:03:54,938 INFO spawned: 'mysql' with pid 24
2017-03-28 14:03:54,940 INFO spawned: 'memcached' with pid 26
2017-03-28 14:03:54,940 INFO exited: redis (exit status 0; not expected)
2017-03-28 14:03:54,941 INFO success: postfix entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2017-03-28 14:03:55,011 INFO exited: mysql (exit status 0; not expected)
2017-03-28 14:03:55,102 INFO exited: postfix (exit status 0; expected)
2017-03-28 14:03:55,255 INFO exited: varnish (exit status 0; not expected)
2017-03-28 14:03:56,256 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,257 INFO success: php-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,259 INFO spawned: 'redis' with pid 382
2017-03-28 14:03:56,262 INFO spawned: 'varnish' with pid 383
2017-03-28 14:03:56,263 INFO success: cron entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,263 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,263 INFO success: clp-server entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,263 INFO success: clp5-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,266 INFO spawned: 'mysql' with pid 384
2017-03-28 14:03:56,266 INFO success: memcached entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,279 INFO exited: redis (exit status 0; not expected)
2017-03-28 14:03:56,279 CRIT reaped unknown pid 385)
2017-03-28 14:03:56,306 INFO exited: mysql (exit status 0; not expected)
2017-03-28 14:03:56,585 INFO exited: varnish (exit status 2; not expected)
2017-03-28 14:03:58,588 INFO spawned: 'redis' with pid 396
2017-03-28 14:03:58,589 INFO spawned: 'varnish' with pid 397
2017-03-28 14:03:58,590 INFO spawned: 'mysql' with pid 398
2017-03-28 14:03:58,599 INFO exited: redis (exit status 0; not expected)
2017-03-28 14:03:58,605 CRIT reaped unknown pid 399)
2017-03-28 14:03:58,632 INFO exited: mysql (exit status 0; not expected)
2017-03-28 14:03:58,913 INFO exited: varnish (exit status 2; not expected)
2017-03-28 14:04:01,919 INFO spawned: 'redis' with pid 410
2017-03-28 14:04:01,921 INFO spawned: 'varnish' with pid 411
2017-03-28 14:04:01,923 INFO spawned: 'mysql' with pid 412
2017-03-28 14:04:01,930 INFO exited: redis (exit status 0; not expected)
2017-03-28 14:04:01,930 INFO gave up: redis entered FATAL state, too many start retries too quickly
2017-03-28 14:04:01,930 CRIT reaped unknown pid 413)
2017-03-28 14:04:01,969 INFO exited: mysql (exit status 0; not expected)
2017-03-28 14:04:02,238 INFO gave up: mysql entered FATAL state, too many start retries too quickly
2017-03-28 14:04:02,238 INFO exited: varnish (exit status 2; not expected)
2017-03-28 14:04:03,240 INFO gave up: varnish entered FATAL state, too many start retries too quickly
If I log on to the container via docker exec -it 7b1f04c29bf2 bash, it shows the following running processes:
root@mgt-dev-70:/# ps -aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 48144 16348 ? Ss+ 14:03 0:00 /usr/bin/python /usr/bin/supervisord
root 9 0.0 0.0 55600 5268 ? S 14:03 0:00 /usr/sbin/sshd -D
root 11 0.0 0.3 819816 49984 ? S 14:03 0:00 php-fpm: master process (/etc/php/7.0/fpm/php-fpm.conf)
root 18 0.0 0.0 25904 2236 ? S 14:03 0:00 /usr/sbin/cron -f
root 19 0.0 0.1 64660 23456 ? S 14:03 0:00 nginx: master process /usr/sbin/nginx -g daemon off;
root 20 0.0 0.0 93752 8432 ? S 14:03 0:00 nginx: master process /usr/sbin/clp-server -g daemon off;
root 23 0.0 0.2 854428 39528 ? S 14:03 0:00 php-fpm: master process (/etc/clp5/fpm/php-fpm.conf)
root 25 0.1 0.0 37256 8876 ? Ssl 14:03 0:00 /usr/bin/redis-server 127.0.0.1:6379
memcache 26 0.0 0.0 327452 2724 ? Sl 14:03 0:00 /usr/bin/memcached -p 11211 -u memcache -m 256 -c 1024
root 40 0.0 0.1 65564 21516 ? S 14:03 0:00 nginx: worker process
root 102 0.0 0.0 94588 4304 ? S 14:03 0:00 nginx: worker process
root 156 0.0 0.0 36620 3948 ? Ss 14:03 0:00 /usr/lib/postfix/master
postfix 157 0.0 0.0 38684 3780 ? S 14:03 0:00 pickup -l -t unix -u -c
postfix 158 0.0 0.0 38732 3892 ? S 14:03 0:00 qmgr -l -t unix -u
varnish 164 0.0 0.0 126924 7172 ? Ss 14:03 0:00 /usr/sbin/varnishd -a :6081 -T :6082 -f /etc/varnish/default.vcl -s malloc,256m
vcache 165 0.0 0.7 314848 123484 ? Sl 14:03 0:00 /usr/sbin/varnishd -a :6081 -T :6082 -f /etc/varnish/default.vcl -s malloc,256m
root 495 0.0 0.0 20244 2984 ? Ss 14:12 0:00 bash
root 501 0.0 0.0 17500 2036 ? R+ 14:12 0:00 ps -aux
That's really as much as I know. Any guidance on getting this resolved would be appreciated, as it looks like a great, quick and easy way to get going on Magento 2. Thanks.
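To see at a glance which supervised services are failing, you can filter the supervisord output for programs that entered FATAL state. The sketch below inlines an excerpt of the log above so it runs standalone; against the real container you would pipe `docker logs <container-id>` into the same filter:

```shell
# Inline excerpt of the supervisord log above, for illustration.
cat > supervisord.log <<'EOF'
2017-03-28 14:04:01,930 INFO gave up: redis entered FATAL state, too many start retries too quickly
2017-03-28 14:04:02,238 INFO gave up: mysql entered FATAL state, too many start retries too quickly
2017-03-28 14:04:03,240 INFO gave up: varnish entered FATAL state, too many start retries too quickly
EOF
# In practice: docker logs 7b1f04c29bf2 2>&1 | grep 'entered FATAL state' | awk '{print $6}'
grep 'entered FATAL state' supervisord.log | awk '{print $6}' | sort -u
```

That narrows the HTTP 500 down to redis, mysql, and varnish never staying up; the next step would be checking each service's own log inside the container for the actual startup error.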

Related

.bashrc somehow looping and sourcing itself (fork bomb)

I'm using a web host with an Apache terminal, using it to host a NodeJS application. For the most part everything runs smoothly; however, when I open the terminal I often get bash: fork: retry: no child processes and bash: fork: retry: resource temporarily unavailable.
I've narrowed down the cause of the problem to my .bashrc file, as when using top I could see that the many excess processes being created were bash instances:
top - 13:41:13 up 71 days, 20:57, 0 users, load average: 1.82, 1.81, 1.72
Tasks: 14 total, 1 running, 2 sleeping, 11 stopped, 0 zombie
%Cpu(s): 11.7 us, 2.7 sy, 0.1 ni, 85.5 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 41034544 total, 2903992 free, 6525792 used, 31604760 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 28583704 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1001511 xxxxxxxx 20 0 11880 3692 1384 S 0.0 0.0 0:00.02 bash
1001578 xxxxxxxx 20 0 11880 2840 524 T 0.0 0.0 0:00.00 bash
1001598 xxxxxxxx 20 0 11880 2672 348 T 0.0 0.0 0:00.00 bash
1001599 xxxxxxxx 20 0 11880 2896 524 T 0.0 0.0 0:00.00 bash
1001600 xxxxxxxx 20 0 11880 2720 396 T 0.0 0.0 0:00.00 bash
1001607 xxxxxxxx 20 0 11880 2928 532 T 0.0 0.0 0:00.00 bash
1001613 xxxxxxxx 20 0 11880 2964 532 T 0.0 0.0 0:00.00 bash
1001618 xxxxxxxx 20 0 11880 2780 348 T 0.0 0.0 0:00.00 bash
1001619 xxxxxxxx 20 0 12012 3024 544 T 0.0 0.0 0:00.00 bash
1001620 xxxxxxxx 20 0 11880 2804 372 T 0.0 0.0 0:00.00 bash
1001651 xxxxxxxx 20 0 12012 2836 352 T 0.0 0.0 0:00.00 bash
1001653 xxxxxxxx 20 0 12016 3392 896 T 0.0 0.0 0:00.00 bash
1004463 xxxxxxxx 20 0 9904 1840 1444 S 0.0 0.0 0:00.00 bash
1005200 xxxxxxxx 20 0 56364 1928 1412 R 0.0 0.0 0:00.00 top
~/.bashrc consists of only:
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
If I comment out the last 3 lines like so:
#export NVM_DIR="$HOME/.nvm"
#[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
#[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
Then the terminal functions as expected and no excess processes are created. However, I obviously can't use nvm/npm commands while those lines are disabled, as nvm isn't loaded.
I'm relatively inexperienced with bash and can't seem to figure out why this is happening. It seems that bash is somehow calling itself every time it opens, which creates the loop/fork bomb once the terminal is opened.
How can I prevent this while still being able to use nvm/npm?
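One likely mechanism is a sourcing loop: if /etc/bashrc (or something nvm.sh pulls in) ends up invoking bash or sourcing ~/.bashrc again, every terminal open recurses until the process limit is hit. A re-entrancy guard at the top of ~/.bashrc breaks the cycle wherever it originates. This is a sketch; the BASHRC_SOURCED variable name is my own choice:

```shell
# Guard at the very top of ~/.bashrc: if this file runs a second time in the
# same shell (or in a child it spawned), bail out instead of re-running
# everything below.
if [ -n "$BASHRC_SOURCED" ]; then
  return 0 2>/dev/null || exit 0   # return when sourced; exit if run directly
fi
export BASHRC_SOURCED=1

# Source global definitions, then nvm, exactly once.
if [ -f /etc/bashrc ]; then
  . /etc/bashrc
fi
export NVM_DIR="$HOME/.nvm"
if [ -s "$NVM_DIR/nvm.sh" ]; then
  . "$NVM_DIR/nvm.sh"
fi
```

Note the export: the guard is inherited by child shells, which stops a loop that spawns new bash processes (matching the pile of bash instances in your top output); the trade-off is that deliberately nested shells also skip the nvm setup.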

Jump to the top parent shell from any arbitrary depth of subshell

I created multiple subshells:
$ ps -f
UID PID PPID C STIME TTY TIME CMD
501 2659 2657 0 8:22AM ttys000 0:00.15 -bash
501 2776 2659 0 8:23AM ttys000 0:00.02 bash
501 2778 2776 0 8:23AM ttys000 0:00.09 bash
501 3314 2778 0 9:13AM ttys000 0:00.26 bash
501 8884 3314 0 4:41PM ttys000 0:00.03 /bin/bash
501 8891 8884 0 4:41PM ttys000 0:00.01 /bin/bash
501 8899 8891 0 4:41PM ttys000 0:00.02 /bin/bash
501 423 408 0 7:16AM ttys001 0:00.22 -bash
501 8095 423 0 3:52PM ttys001 0:00.15 ssh root@www.****.com
501 8307 8303 0 4:05PM ttys002 0:00.17 -bash
I'd like to jump back to the topmost one, but I have to exit one level at a time:
$ ps -f
UID PID PPID C STIME TTY TIME CMD
501 2659 2657 0 8:22AM ttys000 0:00.17 -bash
501 423 408 0 7:16AM ttys001 0:00.22 -bash
501 8095 423 0 3:52PM ttys001 0:00.15 ssh root@***.com
501 8307 8303 0 4:05PM ttys002 0:00.17 -bash
I checked and there are 3 bash processes left, so I continue:
$ exit
logout
Saving session...completed.
[Process completed]
Sadly, this is what happens in most cases. How can I jump to the top shell from an arbitrary depth of subshells?

Bad health due to com.cloudera.cmon.agent.DnsTest timeout

Problem:
More and more data nodes go into bad health in Cloudera Manager.
Clue 1:
No tasks or jobs are running; this is just an idle data node:
-bash-4.1$ top
top - 18:27:22 up 4:59, 3 users, load average: 4.55, 3.52, 3.18
Tasks: 139 total, 1 running, 137 sleeping, 1 stopped, 0 zombie
Cpu(s): 14.8%us, 85.2%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 7932720k total, 1243372k used, 6689348k free, 52244k buffers
Swap: 6160376k total, 0k used, 6160376k free, 267228k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13766 root 20 0 2664m 21m 7048 S 85.4 0.3 190:34.75 java
17688 root 20 0 2664m 19m 7048 S 75.5 0.3 1:05.97 java
12765 root 20 0 2859m 21m 7140 S 36.9 0.3 133:25.46 java
2909 mapred 20 0 1894m 113m 14m S 1.0 1.5 2:55.26 java
1850 root 20 0 1469m 62m 4436 S 0.7 0.8 2:54.53 python
1332 root 20 0 50000 3000 2424 S 0.3 0.0 0:12.04 vmtoolsd
2683 hbase 20 0 1927m 152m 18m S 0.3 2.0 0:36.64 java
Clue 2:
-bash-4.1$ ps -ef|grep 13766
root 13766 1850 99 16:01 ? 03:12:54 java -classpath /usr/share/cmf/lib/agent-4.6.3.jar com.cloudera.cmon.agent.DnsTest
Clue 3:
In cloudera-scm-agent.log:
[30/Aug/2013 16:01:58 +0000] 1850 Monitor-HostMonitor throttling_logger ERROR Timeout with args ['java', '-classpath', '/usr/share/cmf/lib/agent-4.6.3.jar', 'com.cloudera.cmon.agent.DnsTest']
None
[30/Aug/2013 16:01:58 +0000] 1850 Monitor-HostMonitor throttling_logger ERROR Failed to collect java-based DNS names
Traceback (most recent call last):
File "/usr/lib64/cmf/agent/src/cmf/monitor/host/dns_names.py", line 53, in collect
result, stdout, stderr = self._subprocess_with_timeout(args, self._poll_timeout)
File "/usr/lib64/cmf/agent/src/cmf/monitor/host/dns_names.py", line 42, in _subprocess_with_timeout
return SubprocessTimeout().subprocess_with_timeout(args, timeout)
File "/usr/lib64/cmf/agent/src/cmf/monitor/host/subprocess_timeout.py", line 70, in subprocess_with_timeout
raise Exception("timeout with args %s" % args)
Exception: timeout with args ['java', '-classpath', '/usr/share/cmf/lib/agent-4.6.3.jar', 'com.cloudera.cmon.agent.DnsTest']
"cloudera-scm-agent.log" line 30357 of 30357 --100%-- col 1
Background:
If I restart all nodes, everything is OK, but after half an hour or more the nodes start going into bad health one by one.
Version: Cloudera Standard 4.6.3 (#192 built by jenkins on 20130812-1221 git: fa61cf8559fbefeb5af7f223fd02164d1a0adfdb)
I added all nodes to /etc/hosts.
The installed CDH is 4.3.1.
In fact, these nodes are VMs with fixed IP addresses.
Any suggestions?
BTW, where can I download source code of com.cloudera.cmon.agent.DnsTest?
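Since DnsTest is timing out, it is worth checking the same lookup path by hand on an affected node. The snippet below is a sketch of what the Java test presumably does (a forward lookup of the local hostname); getent goes through the same glibc resolver, so a hang here points at /etc/hosts, /etc/nsswitch.conf, or the DNS server rather than at the agent:

```shell
# Resolve the node's own hostname the way most resolver-based tools do.
host=$(hostname -f 2>/dev/null || hostname)
getent hosts "$host" || echo "no entry for $host - check /etc/hosts and DNS"
# localhost should always resolve instantly; if even this is slow, the
# resolver configuration itself is broken.
getent hosts localhost
```

If the first lookup hangs for seconds on a VM, a misordered /etc/nsswitch.conf or an unreachable nameserver in /etc/resolv.conf is the usual culprit, which would also explain why the symptom returns gradually after a restart.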

How do I put an already running CHILD process under nohup

My question is very similar to that posted in: How do I put an already-running process under nohup?
Say I execute foo.sh from my command line, and it in turn executes another shell script, and so on. For example:
foo.sh
\_ bar.sh
\_ baz.sh
Now I press Ctrl+Z to suspend "foo.sh". It is listed in my "jobs -l".
How do I disown baz.sh so that it is no longer a grandchild of foo.sh? If I type "disown", then only foo.sh is disowned from its parent, which isn't exactly what I want. I'd like to kill off the foo.sh and bar.sh processes and be left with only baz.sh.
My current workaround is to kill -18 (SIGCONT) baz.sh to resume it and go on with my work, but I would prefer to kill the aforementioned processes. Thanks.
Use ps to get the PID of bar.sh, and kill it.
imac:barmar $ ps -l -t p0 -ww
UID PID PPID F CPU PRI NI SZ RSS WCHAN S ADDR TTY TIME CMD
501 3041 3037 4006 0 31 0 2435548 760 - Ss 8c6da80 ttyp0 0:00.74 /bin/bash --noediting -i
501 68228 3041 4006 0 31 0 2435544 664 - S 7cbc2a0 ttyp0 0:00.00 /bin/bash ./foo.sh
501 68231 68228 4006 0 31 0 2435544 660 - S c135a80 ttyp0 0:00.00 /bin/bash ./bar.sh
501 68232 68231 4006 0 31 0 2435544 660 - S a64b7e0 ttyp0 0:00.00 /bin/bash ./baz.sh
501 68233 68232 4006 0 31 0 2426644 312 - S f9a1540 ttyp0 0:00.00 sleep 100
0 68243 3041 4106 0 31 0 2434868 480 - R+ a20ad20 ttyp0 0:00.00 ps -l -t p0 -ww
imac:barmar $ kill 68231
./foo.sh: line 3: 68231 Terminated ./bar.sh
[1]+ Exit 143 ./foo.sh
imac:barmar $ ps -l -t p0 -ww
UID PID PPID F CPU PRI NI SZ RSS WCHAN S ADDR TTY TIME CMD
501 3041 3037 4006 0 31 0 2435548 760 - Ss 8c6da80 ttyp0 0:00.74 /bin/bash --noediting -i
501 68232 1 4006 0 31 0 2435544 660 - S a64b7e0 ttyp0 0:00.00 /bin/bash ./baz.sh
501 68233 68232 4006 0 31 0 2426644 312 - S f9a1540 ttyp0 0:00.00 sleep 100
0 68248 3041 4106 0 31 0 2434868 480 - R+ 82782a0 ttyp0 0:00.00 ps -l -t p0 -ww
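The kill -18 in the question relies on SIGCONT (using signal names rather than numbers is more portable). Here is a self-contained sketch of the stop/resume mechanics with a throwaway sleep process instead of the real script tree; in ps output, state T means stopped and S means sleeping:

```shell
# Spawn a disposable background process to play the role of baz.sh.
sleep 100 &
pid=$!
kill -STOP "$pid"            # same effect Ctrl+Z has on a foreground job
sleep 1
state_stopped=$(ps -o stat= -p "$pid" | tr -d ' ')
kill -CONT "$pid"            # equivalent to the questioner's `kill -18`
sleep 1
state_running=$(ps -o stat= -p "$pid" | tr -d ' ')
kill "$pid" 2>/dev/null      # clean up
echo "stopped=$state_stopped resumed=$state_running"
```

After killing bar.sh as shown above, baz.sh is reparented to init (PPID 1), so resuming it with SIGCONT leaves it running independently, which is effectively what the questioner wants.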

My VPS server is slow. Much slower than my shared hosting

I have recently upgraded from shared hosting to VPS hosting, but the VPS is much slower than the shared hosting was. Any advice?
My website is www.sgyuan.com.
top - 08:59:55 up 2 days, 15:10, 3 users, load average: 0.52, 0.40, 0.36
Tasks: 28 total, 1 running, 27 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1048576k total, 499848k used, 548728k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13782 mysql 15 0 157m 44m 6420 S 22 4.3 107:26.02 mysqld
19902 www-data 15 0 66396 12m 6916 S 1 1.2 0:06.69 apache2
19924 www-data 15 0 65928 12m 7120 S 1 1.2 0:07.39 apache2
1 root 18 0 2604 1464 1212 S 0 0.1 0:02.46 init
1155 root 15 0 2216 884 704 S 0 0.1 0:00.34 cron
1203 syslog 15 0 2020 724 572 S 0 0.1 0:02.38 syslogd
1264 root 15 0 5600 2156 1736 S 0 0.2 0:03.12 sshd
7555 root 15 0 8536 2884 2276 S 0 0.3 0:01.83 sshd
7567 root 15 0 3104 1760 1412 S 0 0.2 0:00.02 bash
7735 root 15 0 8548 2888 2268 S 0 0.3 0:01.86 sshd
7751 root 18 0 3176 1848 1428 S 0 0.2 0:00.21 bash
18341 memcache 18 0 43924 1104 808 S 0 0.1 0:00.02 memcached
19549 root 18 0 63972 8824 4960 S 0 0.8 0:00.13 apache2
19897 www-data 16 0 65652 12m 7008 S 0 1.2 0:06.78 apache2
19898 www-data 15 0 65896 12m 7328 S 0 1.2 0:07.16 apache2
19899 www-data 16 0 65932 12m 7328 S 0 1.2 0:07.29 apache2
19900 www-data 15 0 65640 12m 7320 S 0 1.2 0:07.60 apache2
19901 www-data 15 0 65676 12m 7048 S 0 1.2 0:10.32 apache2
19903 www-data 15 0 65672 11m 6568 S 0 1.2 0:06.38 apache2
19904 www-data 15 0 65640 12m 6876 S 0 1.2 0:06.32 apache2
19905 www-data 15 0 65928 12m 6800 S 0 1.2 0:06.66 apache2
20452 bind 18 0 105m 16m 2304 S 0 1.7 0:00.10 named
21720 root 15 0 17592 13m 1712 S 0 1.3 0:12.25 miniserv.pl
21991 root 18 0 2180 996 832 S 0 0.1 0:00.00 xinetd
22378 root 15 0 2452 1128 920 R 0 0.1 0:00.06 top
23834 root 15 0 8536 2920 2272 S 0 0.3 0:23.63 sshd
23850 root 15 0 3184 1868 1436 S 0 0.2 0:00.44 bash
29812 root 15 0 3820 1064 836 S 0 0.1 0:00.24 vsftpd
Is the web server config identical for the VPS and the shared hosting? That would be the first place I'd look, because it's not trivial to tune Apache to perform well. I'm assuming that with the VPS it is 100% your responsibility to configure the web server, and you have to make the decisions about the number of clients, the process model, opcode caches, etc.
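With only 1 GB of RAM, an oversized MaxClients is the classic way a small VPS ends up slower than shared hosting: excess Apache workers push the box into swap under load. A back-of-envelope sizing sketch using the figures from the top output above (the 200 MB headroom for the OS and everything else is an assumption):

```shell
# Rough MaxClients estimate:
#   (total RAM - mysqld - OS headroom) / RAM per Apache worker
# All figures in kB, taken from the `top` output above.
total_kb=1048576              # total memory on the VPS
mysqld_kb=$((44 * 1024))      # mysqld RES
per_worker_kb=$((12 * 1024))  # typical apache2 worker RES
headroom_kb=$((200 * 1024))   # OS + everything else (assumed)
maxclients=$(( (total_kb - mysqld_kb - headroom_kb) / per_worker_kb ))
echo "MaxClients ~ $maxclients"
```

Anything much above that and a traffic burst makes the machine swap, which would match "much slower than shared hosting" even though top shows the box idle right now.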
