My application is built with Apache2 and Tomcat on Red Hat in an AWS EC2 instance. It worked before, but after I restarted the EC2 instance I can only access it through SSH; I can't connect in the browser, and it shows 'ERR_CONNECTION_TIMED_OUT'.
Any idea what I did wrong, or which log I should check?
@Dusan Bajic the httpd status seems normal; running sudo systemctl status httpd shows:
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2017-12-29 16:34:06 +08; 2s ago
Docs: man:httpd(8)
man:apachectl(8)
Process: 10891 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
Main PID: 10896 (httpd)
Status: "Processing requests..."
CGroup: /system.slice/httpd.service
├─10896 /usr/sbin/httpd -DFOREGROUND
├─10897 /usr/sbin/httpd -DFOREGROUND
├─10898 /usr/sbin/httpd -DFOREGROUND
├─10899 /usr/sbin/httpd -DFOREGROUND
├─10900 /usr/sbin/httpd -DFOREGROUND
└─10901 /usr/sbin/httpd -DFOREGROUND
Dec 29 16:34:06 ip-172-31-21-170.ap-southeast-1.compute.internal systemd[1]: ...
Dec 29 16:34:06 ip-172-31-21-170.ap-southeast-1.compute.internal systemd[1]: ...
Hint: Some lines were ellipsized, use -l to show in full.
Please check the following:
Make sure that you are using the correct public IP address or Public DNS (IPv4) in your browser.
Make sure that the Apache configuration file contains the correct Listen IP:80 and ServerName domain:80 directives.
Ensure Apache is listening on port 80, for example with the commands shown below.
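A minimal way to verify the last two points from the instance itself, assuming the default Red Hat layout under /etc/httpd (adjust the paths if your build differs):
sudo ss -tlnp | grep ':80'
grep -RniE '^(Listen|ServerName)' /etc/httpd/conf /etc/httpd/conf.d
curl -I http://localhost/
If curl gets a response locally but the browser still times out, the public IP or DNS name you are using has most likely changed after the restart.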
root@vultr:~# systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-07-28 02:16:44 UTC; 23min ago
Docs: man:nginx(8)
Main PID: 12999 (nginx)
Tasks: 2 (limit: 1148)
Memory: 8.2M
CGroup: /system.slice/nginx.service
├─12999 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─13000 nginx: worker process
Jul 28 02:16:44 vultr.guest systemd[1]: Starting A high performance web server and a reverse proxy server...
Jul 28 02:16:44 vultr.guest systemd[1]: nginx.service: Failed to parse PID from file /run/nginx.pid: Invalid argument
Jul 28 02:16:44 vultr.guest systemd[1]: Started A high performance web server and a reverse proxy server.
The nginx service is in good status.
I want to create and write certificate.crt with acme.sh:
sudo su -l -s /bin/bash acme
curl https://get.acme.sh | sh
export CF_Key="xxxx"
export CF_Email="yyyy@yahoo.com"
CF_Key is my global API key in Cloudflare, and CF_Email is the email registered to log in to Cloudflare.
acme@vultr:~$ acme.sh --issue --dns dns_cf -d domain.com --debug 2
The output is too long to post here, so I uploaded it to termbin.com; the link is below:
https://termbin.com/taxl
Please open that page to see the whole output and check what caused the error. There are two main issues:
1. My nginx server is in good status, but acme.sh can't detect it.
2. How can I set the config file?
[Wed Jul 28 03:04:38 UTC 2021] config file is empty, can not read CA_EAB_KEY_ID
[Wed Jul 28 03:04:38 UTC 2021] config file is empty, can not read CA_EAB_HMAC_KEY
[Wed Jul 28 03:04:38 UTC 2021] config file is empty, can not read CA_EMAIL
To write the key into the specified directory:
acme.sh --install-cert -d domain.com \
--key-file /usr/local/etc/certfiles/private.key \
--fullchain-file /usr/local/etc/certfiles/certificate.crt
It encounters a problem:
[Tue Jul 27 01:12:15 UTC 2021] Installing key to:/usr/local/etc/certfiles/private.key
cat: /home/acme/.acme.sh/domain.com/domain.com.key: No such file or directory
To check the files in /usr/local/etc/certfiles/:
ls /usr/local/etc/certfiles/
private.key
There is no certificate.crt in /usr/local/etc/certfiles/.
How can I fix this?
As of acme.sh v3.0.0, acme.sh uses ZeroSSL as its default CA, so you must register an account first (one time) before you can issue new certificates.
Here is how ZeroSSL compares with LetsEncrypt.
With ZeroSSL as CA
You must register at ZeroSSL before issuing a certificate. To register, run the command below (assuming yyyy@yahoo.com is the email you want to register with):
acme.sh --register-account -m yyyy@yahoo.com
Now you can issue a new certificate (assuming you have set CF_Key & CF_Email, or CF_Token & CF_Account_ID):
acme.sh --issue --dns dns_cf -d domain.com
Without ZeroSSL as CA
If you don't want to use ZeroSSL and would rather use Let's Encrypt instead, you can pass the --server option when issuing a certificate:
acme.sh --issue --dns dns_cf -d domain.com --server letsencrypt
Here are more options for the CA server.
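If you prefer not to pass --server on every call, recent acme.sh releases (v3.0.0 and later) also allow setting the default CA once:
acme.sh --set-default-ca --server letsencrypt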
Hello, in preparation for using an RP4 (running Ubuntu Server), I am trying to have a bash script that is kicked off on boot and relaunched if killed. I have included the steps below along with the content of the file. Any clue about the error code, or why it is not working, would be greatly appreciated.
Any idea on the exit code with a status of 2?
Thank you.
ubuntu@ubuntu:/etc/systemd/system$ cat prysmbeacon_altona.service
[Unit]
Description=PrysmBeacon--Altona
Wants=network.target
After=network.target
[Service]
Type=simple
DynamicUser=yes
ExecStart=/home/ubuntu/Desktop/prysm/prysm.sh beacon-chain --altona --datadir=/home/ubuntu/.eth2
WorkingDirectory=/home/ubuntu/Desktop/prysm
Restart=always
RestartSec=3
[Install]
WantedBy=multi-user.target
ubuntu@ubuntu:/etc/systemd/system$ systemctl daemon-reload
==== AUTHENTICATING FOR org.freedesktop.systemd1.reload-daemon ===
Authentication is required to reload the systemd state.
Authenticating as: Ubuntu (ubuntu)
Password:
==== AUTHENTICATION COMPLETE ===
ubuntu@ubuntu:/etc/systemd/system$ systemctl start prysmbeacon_altona
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to start 'prysmbeacon_altona.service'.
Authenticating as: Ubuntu (ubuntu)
Password:
==== AUTHENTICATION COMPLETE ===
ubuntu@ubuntu:/etc/systemd/system$ systemctl status prysmbeacon_altona.service
● prysmbeacon_altona.service - PrysmBeacon--Altona
Loaded: loaded (/etc/systemd/system/prysmbeacon_altona.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Thu 2020-07-23 15:51:48 CEST; 111ms ago
Process: 3407 ExecStart=/home/ubuntu/Desktop/prysm/prysm.sh beacon-chain --altona --datadir=/home/ubuntu/.eth2 (code=exited, status=2)
Main PID: 3407 (code=exited, status=2)
ubuntu@ubuntu:/etc/systemd/system$
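To narrow down a status=2 exit like this, it usually helps to read the service's own log output and to run the ExecStart line by hand (paths taken from the unit file above):
journalctl -u prysmbeacon_altona.service -n 50 --no-pager
ls -l /home/ubuntu/Desktop/prysm/prysm.sh
/home/ubuntu/Desktop/prysm/prysm.sh beacon-chain --altona --datadir=/home/ubuntu/.eth2
Exit status 2 typically comes from the script itself (for example a usage or shell syntax error). Note also that DynamicUser=yes implies ProtectHome=read-only, so the service may not be able to write to a --datadir under /home/ubuntu.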
Without any major system update of my Ubuntu (4.4.0-142-generic #168-Ubuntu SMP), Kibana 7.2.0 stopped working. I am still able to start the service with sudo systemctl start kibana.service and the corresponding status looks fine. There is only a warning and no error, this does not seem to be the issue:
# sudo systemctl status kibana.service
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-07-10 09:43:49 CEST; 22min ago
Main PID: 14856 (node)
Tasks: 21
Memory: 583.2M
CPU: 1min 30.067s
CGroup: /system.slice/kibana.service
└─14856 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
Jul 10 09:56:36 srv003 kibana[14856]: {"type":"log","@timestamp":"2019-07-10T07:56:36Z","tags":["warning","task_manager"],"pid":14856,"message":"The task maps_telemetry \"Maps-maps_telemetry\" is not cancellable."}
Nevertheless, when I visit http://srv003:5601/ on my client machine, I keep seeing only (even after waiting 20 minutes):
Kibana server is not ready yet
On the server srv003 itself, I see
me@srv003:# curl -XGET http://localhost:5601/status -I
curl: (7) Failed to connect to localhost port 5601: Connection refused
This is strange, since Kibana seems to really be listening on that port, and the firewall is disabled for testing purposes:
root@srv003# sudo lsof -nP -i | grep 5601
node 14856 kibana 18u IPv4 115911041 0t0 TCP 10.0.0.72:5601 (LISTEN)
root@srv003# sudo ufw status verbose
Status: inactive
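One detail worth noting from the lsof output above: Kibana is bound to 10.0.0.72:5601 rather than 127.0.0.1 (this is governed by the server.host setting in /etc/kibana/kibana.yml), so a refused connection on localhost is expected. Checking the bound address directly separates a binding problem from a genuine "not ready" state:
curl -I http://10.0.0.72:5601/status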
There is nothing suspicious in the log of kibana.service either:
root@srv003:/var/log# journalctl -u kibana.service | grep -A 99 "Jul 10 10:09:14"
Jul 10 10:09:14 srv003 systemd[1]: Started Kibana.
Jul 10 10:09:38 srv003 kibana[14856]: {"type":"log","@timestamp":"2019-07-10T08:09:38Z","tags":["warning","task_manager"],"pid":14856,"message":"The task maps_telemetry \"Maps-maps_telemetry\" is not cancellable."}
My Elasticsearch is still up and running. There is nothing interesting in the corresponding log files about Kibana:
root@srv003:/var/log# cat elasticsearch/elasticsearch.log | grep kibana
[2019-07-10T09:46:25,158][INFO ][o.e.c.m.MetaDataIndexTemplateService] [srv003] adding template [.kibana_task_manager] for index patterns [.kibana_task_manager]
[2019-07-10T09:47:32,955][INFO ][o.e.c.m.MetaDataCreateIndexService] [srv003] [.monitoring-kibana-7-2019.07.10] creating index, cause [auto(bulk api)], templates [.monitoring-kibana], shards [1]/[0], mappings [_doc]
Now I am running a bit out of options, and I hope somebody can give me another hint.
Edit: I do not have any Kibana plugins installed.
Consulted sources:
How to fix "Kibana server is not ready yet" error when using AKS
Kibana service is running but can not access via browser to console
Why won't Kibana Node server start up?
https://discuss.elastic.co/t/failed-to-start-kibana-7-0-1/180259/3 - most promising thread, but nobody ever answered
https://discuss.elastic.co/t/kibana-server-is-not-ready-yet-issue-after-upgrade-to-6-5-0/157021
https://discuss.elastic.co/t/kibana-server-not-ready/162075
It looks like once Kibana enters the described undefined state, a simple reboot of the computer is necessary. This is of course not acceptable for a (virtual or physical) machine on which other services are running.
I am playing with hortonworks sandbox, but I am not able to get Apache Ambari to work.
When accessing the welcome page of the Hortonworks sandbox, I get a message saying:
Service disabled by default. To enable the service you need to log in as an ambari admin.
The ambari admin password can be set by ssh'ing into the vm as root as mentioned in the section "Secure Shell (SSH) Client". Once logged in as root user, execute ambari-admin-password-reset and follow the prompt
I did that, but when I access 127.0.0.1:8080 it's still not working. I checked that the ambari-server is running:
[root@sandbox ~]# service ambari-server status
Using python /usr/bin/python2
Ambari-server status
Ambari Server running
Found Ambari Server PID: 1497 at: /var/run/ambari-server/ambari-server.pid
I checked within the Hortonworks sandbox to confirm that Ambari Server is listening on port 8080:
[root@sandbox ~]# netstat -anop | grep 8080
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 6320/java off (0.00/0/0)
[root@sandbox ~]# ps aux | grep 6320
root 6320 9.0 4.9 4596612 398396 pts/0 Sl 05:28 3:43 /usr/lib/jvm/java/bin/java -server -XX:NewRatio=3 -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -XX:CMSInitiatingOccupancyFraction=60 -Dsun.zip.disableMemoryMapping=true -Xms512m -Xmx2048m -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -Xms512m -Xmx2048m -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -cp /etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/share/java/postgresql-jdbc.jar org.apache.ambari.server.controller.AmbariServer
root 8750 0.0 0.0 8452 908 pts/0 S+ 06:09 0:00 grep 6320
[root@sandbox ~]#
The iptables firewall is not running:
#service iptables status
iptables: Firewall is not running.
The port forwarding from guest to host is set correctly.
How can I resolve this?
Check if the firewall in your sandbox is preventing it.
[..]# service iptables status
And then try accessing it after stopping iptables.
[..]# service iptables stop
If that too doesn't help, check the port-forwarding settings of your VirtualBox (I assume you are using VirtualBox).
I set the guest IP address in the port-forwarding settings and restarted the VM; now it's working.
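For reference, the same kind of rule can also be created from the command line with VBoxManage while the VM is powered off (the VM name "Hortonworks Sandbox" and guest IP 10.0.2.15 below are placeholders; adjust them to your setup):
VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "ambari,tcp,127.0.0.1,8080,10.0.2.15,8080"
The rule maps port 8080 on the host's 127.0.0.1 to port 8080 on the guest IP.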
I imported the Hortonworks sandbox into VMware Player. Now how should I assign the host IP address and port number (8080) for Apache Ambari?
If anyone is looking for Ambari's admin password, it is:
User: admin
Pass: 4o12t0n
I'm trying to run Vagrant using libvirt as my provider. Using rsync is unbearable since I'm working with a huge shared directory, but vagrant does succeed when the nfs setting is commented out and the standard rsync config is set.
config.vm.synced_folder ".", "/vagrant", mount_options: ['dmode=777','fmode=777']
Vagrant hangs forever on this step after running vagrant up:
==> default: Mounting NFS shared folders...
In my Vagrantfile I have this uncommented and the rsync config commented out, which turns NFS on.
config.vm.synced_folder ".", "/vagrant", type: "nfs"
While Vagrant is running, it echoes this to the terminal:
Redirecting to /bin/systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Redirecting to /bin/systemctl start nfs-server.service
Job for nfs-server.service failed. See "systemctl status nfs-server.service" and "journalctl -xe" for details.
Results of systemctl status nfs-server.service
dillon@localhost ~ $ systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2015-05-29 22:24:47 PDT; 22s ago
Process: 3044 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=1/FAILURE)
Process: 3040 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 3044 (code=exited, status=1/FAILURE)
May 29 22:24:47 localhost.sulfur systemd[1]: Starting NFS server and services...
May 29 22:24:47 localhost.sulfur rpc.nfsd[3044]: rpc.nfsd: writing fd to kernel failed: errno 111 (Connection refused)
May 29 22:24:47 localhost.sulfur rpc.nfsd[3044]: rpc.nfsd: unable to set any sockets for nfsd
May 29 22:24:47 localhost.sulfur systemd[1]: nfs-server.service: main process exited, code=exited, status=1/FAILURE
May 29 22:24:47 localhost.sulfur systemd[1]: Failed to start NFS server and services.
May 29 22:24:47 localhost.sulfur systemd[1]: Unit nfs-server.service entered failed state.
May 29 22:24:47 localhost.sulfur systemd[1]: nfs-server.service failed.
The journalctl -xe log has a ton of stuff in it, so I won't post all of it here, but there are some lines in bold red:
May 29 22:24:47 localhost.sulfur rpc.mountd[3024]: Could not bind socket: (98) Address already in use
May 29 22:24:47 localhost.sulfur rpc.mountd[3024]: Could not bind socket: (98) Address already in use
May 29 22:24:47 localhost.sulfur rpc.statd[3028]: failed to create RPC listeners, exiting
May 29 22:24:47 localhost.sulfur systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
Before I ran vagrant up, I looked with netstat -tulpn to see whether any process was binding to port 98 and did not see anything; in fact, while Vagrant was hanging I ran netstat -tulpn again to see what was binding to port 98 and still didn't see anything (checked as both the current user and root).
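A note on the numbers above: (98) and (111) are errno values (EADDRINUSE and ECONNREFUSED), not TCP ports, which is why netstat shows nothing on "port 98". Looking at the RPC side directly tends to be more informative, e.g.:
systemctl status rpcbind
rpcinfo -p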
UPDATE: I haven't gotten any responses, and I wasn't able to figure out the issue. I tried using lxc instead, but it gets stuck on booting. I'd also prefer not to use VirtualBox, since the issue seems to lie with NFS rather than the hypervisor. I'm going to try the rsync-auto feature Vagrant provides, but I'd prefer to get NFS working.
Looks like when using libvirt the user is given control over nfs and rpcbind, and Vagrant doesn't even try to touch those things like I had assumed it did. Running these solved my issue:
service rpcbind start
service nfs stop
service nfs start
The systemd unit dependencies of nfs-server.service contain rpcbind.target but not rpcbind.service.
One simple solution is to create a file /etc/systemd/system/nfs-server.service containing:
.include /usr/lib/systemd/system/nfs-server.service
[Unit]
Requires=rpcbind.service
After=rpcbind.service
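After creating the override file, systemd has to re-read the unit and the service needs a restart for the new dependency to take effect:
sudo systemctl daemon-reload
sudo systemctl restart nfs-server.service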
On CentOS 7, all I needed to do was install the missing rpcbind, like this:
yum -y install rpcbind
systemctl enable rpcbind
systemctl start rpcbind
systemctl restart nfs-server
Took me over an hour to find out and try this though :)
Michel
I've had issues with NFS mounts using both the libvirt and the VirtualBox provider on Fedora 22. After a lot of gnashing of teeth, I managed to figure out that it was a firewall issue. Fedora seems to ship with a firewalld service by default. Stopping that service - sudo systemctl stop firewalld - did the trick for me.
Of course, ideally you would configure this firewall rather than disable it entirely, but I don't know how to do that.
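For completeness, here is a sketch of how the firewall could be opened for NFS instead of being stopped entirely (the service names assume a stock firewalld installation):
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --reload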