DDEV on VMware Ubuntu 18 VM - ddev

ddev will not start on a VMware Ubuntu 18 VM; it fails with the following error messages.
Failed to start drupaltraining: ddev-router failed to become ready:
logOutput=nginx: the configuration file /etc/nginx/nginx.conf syntax
is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
nginx config: OK ddev-router healthcheck endpoint not responding
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
, err=container /ddev-router unhealthy: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
nginx config: OK ddev-router healthcheck endpoint not responding
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
I was able to get it working on a physical Ubuntu 18 box.
Thanks for any help.
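In case it helps with debugging: since the healthcheck is a curl against 127.0.0.1:80 and the connection is refused, two things worth checking on the VM are whether another service already holds ports 80/443 on the host and what the router container itself reports (a diagnostic sketch; ddev-router is the container name from the error above):
sudo netstat -tlnp | grep -E ':80 |:443 '    # anything else bound to the web ports?
docker ps -a | grep ddev-router              # is the router container up?
docker logs ddev-router                      # its own logs often show the real failure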

Related

How to fix "unable to check revocation for the certificate" when downloading a remote file in Vagrant

As part of my Vagrantfile I have
config.vm.box = "hashicorp/bionic64"
config.vm.provision "shell", path: "https://get.docker.com", name: "dockers"
I'm behind a corporate proxy. I appended my corporate certificate to
C:\HashiCorp\Vagrant\embedded\cacert.pem. I also set the environment variables CURL_CA_BUNDLE and SSL_CERT_FILE to C:\HashiCorp\Vagrant\embedded\cacert.pem, which contains the certificate.
But vagrant up still fails with the following message:
schannel: next InitializeSecurityContext failed: Unknown error (0x80092012) - The revocation function was unable to check revocation for the certificate.
INFO interface: Machine: error-exit ["Vagrant::Errors::DownloaderError", "An error occurred while downloading the remote file. The error\nmessage, if any, is reproduced below. Please fix this error and try\nagain.\n\nschannel: next InitializeSecurityContext failed: Unknown error (0x80092012) - The revocation function was unable to check revocation for the certificate.\r"]
My guess is that Ruby (used by Vagrant) cannot find the cert, or the call to fetch the revocation list is blocked. Any ideas what the exact issue is here and how to fix it?
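For reference, the variables were set roughly like this (Windows cmd, shown only to illustrate; a new shell is needed afterwards so vagrant up sees them):
setx CURL_CA_BUNDLE "C:\HashiCorp\Vagrant\embedded\cacert.pem"
setx SSL_CERT_FILE "C:\HashiCorp\Vagrant\embedded\cacert.pem"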
Update
In debug mode it appears that curl (possibly called from Ruby?) is trying to download the file:
INFO downloader: Downloader starting download:
INFO downloader: -- Source: https://get.docker.com
INFO downloader: -- Destination: C:/Users/John/.vagrant.d/tmp/12288a08-a7ba-3d92-96ff-8bf28e739099-remote-script
INFO subprocess: Starting process: ["C:\\HashiCorp\\Vagrant\\embedded\\bin/curl.EXE", "-q", "--fail", "--location", "--max-redirs", "10", "--verbose", "--user-agent",
"Vagrant/2.2.16 (+https://www.vagrantup.com; ruby2.6.7) ", "--output", "C:/Users/John/.vagrant.d/tmp/12288a08-a7ba-3d92-96ff-8bf28e739099-remote-script", "https://get.docker.com"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stderr: % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 99.84.174.91:443...
* Connected to get.docker.com (99.84.174.91) port 443 (#0)
* schannel: ALPN, offering http/1.1
* schannel: next InitializeSecurityContext failed: Unknown error (0x80092012) - The revocation function was unable to check revocation for the certificate.
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
* Closing connection 0
* schannel: shutting down SSL/TLS connection with get.docker.com port 443
curl: (35) schannel: next InitializeSecurityContext failed: Unknown error (0x80092012) - The revocation function was unable to check revocation for the certificate.
Have a look at this SO answer.
As mentioned there, disabling the antivirus software until vagrant init had finished solved the problem.
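One way to confirm that before touching anything is to re-run the download with Vagrant's bundled curl directly; if it only succeeds when revocation checking is skipped, the revocation lookup itself is being blocked (--ssl-no-revoke is curl's schannel flag for that; the path matches the debug output above):
"C:\HashiCorp\Vagrant\embedded\bin\curl.exe" --verbose --output NUL https://get.docker.com
"C:\HashiCorp\Vagrant\embedded\bin\curl.exe" --verbose --ssl-no-revoke --output NUL https://get.docker.com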

Datanode and Namenode run but are not reflected in the UI

I have a small setback configuring my master and slave in Hadoop: both the namenode and datanode are up and running on the master and the slave.
However, the Live Nodes count in the web UI does not reflect them, even though the nodes are running.
I have already tried disabling the firewall and formatting the nodes, but I am unable to resolve this.
Any help would be highly appreciated!
Here are the snippets:
Master:
jps command output :
5088 Jps
4446 NameNode
4681 SecondaryNameNode
Slave :
jps command output:
2478 Jps
2410 DataNode
ubuntu#hadoop-master:/usr/local/hadoop/etc/hadoop$ $HADOOP_HOME/bin/hdfs dfsadmin -refreshNodes
16/04/28 02:22:37 WARN ipc.Client: Address change detected. Old: hadoop-master/52.200.230.29:50077 New: hadoop-master/127.0.0.1:50077
refreshNodes: Call From hadoop-master/127.0.0.1 to hadoop-master:50077 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Log file of hadoop-slave-1:
2016-04-28 21:23:07,248 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop-master/52.200.230.29:9000
2016-04-28 21:23:12,257 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop-master/52.200.230.29:9000
2016-04-28 21:23:17,265 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop-master/52.200.230.29:9000
2016-04-28 21:23:22,273 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop-master/52.200.230.29:9000
2016-04-28 21:23:27,282 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop-master/52.200.230.29:9000
2016-04-28 21:23:32,291 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop-master/52.200.230.29:9000
Log File of Hadoop-master:
2016-04-28 21:21:04,002 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2016-04-28 21:21:04,002 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2016-04-28 21:21:04,002 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 407
2016-04-28 21:21:04,002 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 22
2016-04-28 21:21:04,003 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 23
2016-04-28 21:21:04,003 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/hadoop/hadoop_data/hdfs/namenode/current/edits_inprogress_0000000000000000407 -> /usr/local/hadoop/hadoop_data/hdfs/namenode/current/edits_0000000000000000407-0000000000000000408
2016-04-28 21:21:04,004 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 409
netstat -pant command on my master:
ubuntu#hadoop-master:/usr/local/hadoop/etc/hadoop$ netstat -pant
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 21491/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:50077 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:50078 0.0.0.0:* LISTEN 21491/java
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN 21726/java
tcp 0 0 172.31.63.189:50070 128.235.8.68:57225 ESTABLISHED 21491/java
tcp 0 0 127.0.0.1:41471 127.0.0.1:50078 TIME_WAIT -
tcp 0 124 172.31.63.189:22 128.235.8.68:56950 ESTABLISHED -
tcp 0 0 172.31.63.189:50070 128.235.8.68:57224 ESTABLISHED 21491/java
tcp 0 0 172.31.63.189:50070 128.235.8.68:57223 ESTABLISHED 21491/java
tcp 0 0 172.31.63.189:22 128.235.8.68:57084 ESTABLISHED -
tcp 0 0 172.31.63.189:22 58.218.204.215:39402 ESTABLISHED -
tcp 0 0 172.31.63.189:50070 128.235.8.68:57227 ESTABLISHED 21491/java
tcp 0 0 172.31.63.189:50070 128.235.8.68:57228 ESTABLISHED 21491/java
tcp 0 0 172.31.63.189:50070 128.235.8.68:57226 ESTABLISHED 21491/java
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::50077 :::* LISTEN -
tcp6 0 0 :::9000 :::* LISTEN -
Connection refused
I can see this error in your post. I guess you need to do three things:
1. Make sure port 50077 is being listened on by a process, and that the process is your Hadoop process.
2. Make sure the port is reachable using a tool such as telnet.
3. Besides the firewall, SELinux can also block access, so shut it down, restart your service and try again.
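Concretely, something along these lines on the master and slave (a sketch; the host and ports are taken from the logs above, adjust to your setup):
sudo netstat -pant | grep -E '50077|9000'    # is a Hadoop java process listening?
telnet hadoop-master 50077                   # run from the slave: is the port reachable?
telnet hadoop-master 9000                    # the port the datanode log is trying to reach
sudo setenforce 0                            # temporarily set SELinux to permissive, then restart the daemons and retry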

Nutch 2.3.1 on Cassandra won't start

I'm trying to run Nutch 2.3.1 with Cassandra. I followed the steps on http://wiki.apache.org/nutch/Nutch2Cassandra. Finally, when I try to start Nutch with the command:
bin/crawl urls/ test http://localhost:8983/solr/ 2
I got the following exception:
GeneratorJob: starting
GeneratorJob: filtering: false
GeneratorJob: normalizing: false
GeneratorJob: topN: 50000
GeneratorJob: java.lang.RuntimeException: job failed: name=[test]generate: 1454483370-31180, jobid=job_local1380148534_0001
at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:120)
at org.apache.nutch.crawl.GeneratorJob.run(GeneratorJob.java:227)
at org.apache.nutch.crawl.GeneratorJob.generate(GeneratorJob.java:256)
at org.apache.nutch.crawl.GeneratorJob.run(GeneratorJob.java:322)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.GeneratorJob.main(GeneratorJob.java:330)
Error running:
/home/user/apache-nutch-2.3.1/runtime/local/bin/nutch generate -D mapred.reduce.tasks=2 -D mapred.child.java.opts=-Xmx1000m -D mapred.reduce.tasks.speculative.execution=false -D mapred.map.tasks.speculative.execution=false -D mapred.compress.map.output=true -topN 50000 -noNorm -noFilter -adddays 0 - crawlId webmd -batchId 1454483370-31180
Failed with exit value 255.
When I check logs/hadoop.log, here's the error message:
2016-02-03 15:18:14,741 ERROR connection.HConnectionManager - Could not start connection pool for host localhost(127.0.0.1):9160
...
2016-02-03 15:18:15,185 ERROR store.CassandraStore - All host pools marked down. Retry burden pushed out to client.
me.prettyprint.hector.api.exceptions.HectorException: All host pools marked down. Retry burden pushed out to client.
at me.prettyprint.cassandra.connection.HConnectionManager.getClientFromLBPolicy(HConnectionManager.java:390)
But my Cassandra server is up:
runtime/local$ netstat -l |grep 9160
tcp 0 0 172.16.230.130:9160 *:* LISTEN
Can anyone help with this issue? Thanks.
Cassandra is not listening on localhost; it is listening on 172.16.230.130. That is why Nutch cannot connect to the Cassandra store.
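Depending on which side you prefer to change, either bind Cassandra's Thrift interface to localhost or point Nutch at the address Cassandra is actually listening on. A sketch (the gora property name assumes the Hector-based gora-cassandra backend that Nutch 2.3.1 uses by default):
# Option 1: in cassandra.yaml, have Thrift listen on localhost
rpc_address: 127.0.0.1
# Option 2: in runtime/local/conf/gora.properties, point Nutch at the real address
gora.cassandrastore.servers=172.16.230.130:9160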
Hope this helps,
Le Quoc Do

OpenShift: my local gears often time out and go down, which makes response times very long

My application is a scalable Tomcat application with MySQL. If I do not access it for a while, the response time is very long when I access it again. Checking haproxy.log:
[WARNING] 131/134600 (449836) : Server express/local-gear is DOWN, reason: Layer7 timeout, check duration: 10002ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 131/134644 (449836) : Server express/local-gear is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 4068ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING] 131/154052 (449836) : Server express/gear-5370cea0500446741d00058b-ibrainext is DOWN, reason: Layer7 timeout, check duration: 10004ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 131/154323 (449836) : Server express/gear-5370cea0500446741d00058b-ibrainext is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 501ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING] 131/154550 (449836) : Server express/gear-5370cea0500446741d00058b-ibrainext is DOWN, reason: Layer7 timeout, check duration: 10003ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 131/154643 (449836) : Server express/gear-5370cea0500446741d00058b-ibrainext is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 7ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING] 131/182346 (449836) : Server express/gear-5370cea0500446741d00058b-ibrainext is DOWN, reason: Layer7 timeout, check duration: 10003ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 131/182512 (449836) : Server express/gear-5370cea0500446741d00058b-ibrainext is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 11ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING] 131/194433 (449836) : Server express/gear-5370cea0500446741d00058b-ibrainext is DOWN, reason: Layer7 timeout, check duration: 10004ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 131/194439 (449836) : Server express/gear-5370cea0500446741d00058b-ibrainext is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 109ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING] 131/194615 (449836) : Server express/local-gear is DOWN, reason: Layer7 timeout, check duration: 10002ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 131/194735 (449836) : Server express/local-gear is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
It looks like the timeout value is 10 seconds. How can I keep my application running without it going down?
If you do not access your application for 24 hours it will get idled (if this is a free account), and depending on how large your application is, it could take a while to spin back up when you do access it again.
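A common workaround for free-tier idling is to request the application on a schedule from somewhere that is always on, so the gear is never unused long enough to be idled. A sketch (the URL is a placeholder for your app's public address):
# crontab entry on any always-on machine: hit the app every 30 minutes
*/30 * * * * curl -s -o /dev/null http://yourapp-yournamespace.rhcloud.com/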

PHP processing speed apache 2.4 mpm-prefork mod_php 5.4 vs nginx 1.2.x PHP-FPM 5.4

I've been looking for days to see if someone has done a good, documented, PHP processing speed comparison between
apache-mpm-prefork 2.4 with mod_php 5.4
and
nginx 1.2.x + PHP-FPM 5.4
Why I'm looking: the only tests I have seen are benchmarks serving full pages or Hello, World-like tests, without proper documentation of what exactly was tested. I don't care about the requests/second or the hardware, but I do need to see what PHP script was tested and with what exact configuration.
Why these two: mod_php was known to be the fastest at processing PHP (no static files, no request/response measuring, just processing the PHP itself), but a lot has changed since then, including the Apache version. Nginx and PHP-FPM eat a lot less memory, which would be a good reason to change the architecture, but if they're not fast enough in this case, the change would be irrelevant.
I know that if I can't find one I'll have to do it myself, but I can't believe no one has done a test like this so far :)
I have completed this test on CentOS 6.3 using nginx 1.2.7, Apache 2.4.3 and PHP 5.4.12, all compiled with no changes to the defaults:
./configure
make && make install
With the exception of PHP, where I enabled PHP-FPM:
./configure --enable-fpm
All servers have a 100% default config except as noted below. All testing was done on a test server, with no load and a reboot between tests. The server has an Intel(R) Xeon(R) CPU E3-1230, 1 GB RAM and 2 x 60 GB SSDs in RAID 1. Tests were run using ab -n 50000 -c 500 http://127.0.0.1/test.php
Test PHP script:
<?php
$testing = 0;
for ($i = 0; $i < 1000; $i++) {
    $testing++;
}
echo $testing;
I also had to enable PHP in nginx.conf, as it's not enabled by default:
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
    include fastcgi_params;
}
Nginx - PHP-FPM on 127.0.0.1:9000
Concurrency Level: 500
Time taken for tests: 10.932 seconds
Complete requests: 50000
Failed requests: 336
(Connect: 0, Receive: 0, Length: 336, Exceptions: 0)
Write errors: 0
Non-2xx responses: 336
Total transferred: 7837824 bytes
HTML transferred: 379088 bytes
Requests per second: 4573.83 [#/sec] (mean)
Time per request: 109.317 [ms] (mean)
Time per request: 0.219 [ms] (mean, across all concurrent requests)
Transfer rate: 700.17 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 34 338.5 0 7000
Processing: 0 34 166.5 23 8120
Waiting: 0 34 166.5 23 8120
Total: 13 68 409.2 23 9846
Percentage of the requests served within a certain time (ms)
50% 23
66% 28
75% 32
80% 33
90% 34
95% 46
98% 61
99% 1030
100% 9846 (longest request)
Nginx - PHP-FPM via socket (config change to fastcgi_pass)
fastcgi_pass unix:/var/lib/php/php.sock;
Concurrency Level: 500
Time taken for tests: 7.054 seconds
Complete requests: 50000
Failed requests: 351
(Connect: 0, Receive: 0, Length: 351, Exceptions: 0)
Write errors: 0
Non-2xx responses: 351
Total transferred: 7846209 bytes
HTML transferred: 387083 bytes
Requests per second: 7087.70 [#/sec] (mean)
Time per request: 70.545 [ms] (mean)
Time per request: 0.141 [ms] (mean, across all concurrent requests)
Transfer rate: 1086.16 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 26 252.5 0 7001
Processing: 0 24 112.9 17 3683
Waiting: 0 24 112.9 17 3683
Total: 7 50 306.4 17 7001
Percentage of the requests served within a certain time (ms)
50% 17
66% 19
75% 20
80% 21
90% 23
95% 31
98% 55
99% 1019
100% 7001 (longest request)
Apache - mod_php
Concurrency Level: 500
Time taken for tests: 10.979 seconds
Complete requests: 50000
Failed requests: 0
Write errors: 0
Total transferred: 9800000 bytes
HTML transferred: 200000 bytes
Requests per second: 4554.02 [#/sec] (mean)
Time per request: 109.793 [ms] (mean)
Time per request: 0.220 [ms] (mean, across all concurrent requests)
Transfer rate: 871.67 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 22 230.2 1 7006
Processing: 0 58 426.0 18 9612
Waiting: 0 58 425.9 18 9611
Total: 5 80 523.8 19 10613
Percentage of the requests served within a certain time (ms)
50% 19
66% 23
75% 25
80% 26
90% 31
95% 36
98% 1012
99% 1889
100% 10613 (longest request)
I'll be more than happy to tune Apache further, but it seems Apache just can't keep up. The clear winner is nginx with PHP-FPM via a socket.
It seems you are comparing apples with oranges or, to put it more accurately, you are confounding the results by adjusting two variables. Surely it would be more sensible to compare Apache + FastCGI + PHP-FPM against nginx + PHP-FPM? You'd expect the PHP-FPM part to be the same, so you would then be measuring Apache + FastCGI against nginx.
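For what it's worth, a minimal sketch of the Apache side of such a comparison: proxy .php requests to the same PHP-FPM pool used in the nginx tests via mod_proxy_fcgi (the document root and FPM address below are assumed to match the test setup above):
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
# hand PHP requests to the PHP-FPM pool on 127.0.0.1:9000
ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9000/var/www/html/$1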
