Kibana is ending up with error code 503 - kibana-6

When I use curl to check the Kibana status, it says:
curl -I http://localhost:5601/status
HTTP/1.1 503 Service Unavailable
retry-after: 30
content-type: text/html; charset=utf-8
cache-control: no-cache
content-length: 30
Date: Sat, 04 May 2019 12:50:18 GMT
Connection: keep-alive
kibana.yml file:
cat /etc/kibana/kibana.yml
elasticsearch.url: "http://10.0.1.41:9200"
server.port: 5601
server.host: "localhost"
server.ssl.enabled: false
logging.dest: /var/log/kibana/kibana.log
Can someone help me solve the 503 error?
I tried changing kibana.yml, but no luck.
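Since a 503 from Kibana usually means it cannot reach (or is still waiting on) Elasticsearch, a quick first check (just a sketch, using the elasticsearch.url and logging.dest values from the config above) is whether the Kibana host can reach that endpoint at all, and what the Kibana log says:
# Can the Kibana host reach the configured Elasticsearch endpoint?
curl -s http://10.0.1.41:9200
# Look for connection errors in the Kibana log configured above
tail -n 50 /var/log/kibana/kibana.log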

The issue was with the Amazon Linux AMI; it has been resolved now.

Related

Certbot unauthorized and connection errors

I have a Spring Boot application on Google Cloud, CentOS 7. I wish to install an SSL certificate via Let's Encrypt and Certbot. When I run the certbot --apache -d mydomain.zone command, I receive an error:
My domain is registered on Namecheap. My A records are on Google Cloud:
I also set the Google Cloud nameservers in Namecheap, as in this tutorial: https://www.wpmentor.com/setup-domain-google-cloud-platform/
Can you tell me where the issue is? I also wonder whether there is an issue with the Java code in my app. For example, sometimes when accessing the index page, error_page is called. When I have this method in my controller:
@RequestMapping(value = "/error_page", method = RequestMethod.GET)
public String homeError(Model model) {
    return "/error_page";
}
I get a different Certbot error:
but when I comment out or remove my controller method for the error page, I receive this error:
Could it be an application bug, or an issue with Apache?
EDIT:
I tried to turn off Tomcat. Now I receive this error:
Note: my Apache forwards to 8080; I don't know whether that could cause an issue.
iptables -A PREROUTING -t nat -p tcp --dport 80 -j REDIRECT --to-port 8080
After curl -I -L http://mydomain/.well-known/acme-challenge/zySNHSFB-qL95Ubx4jcIvuHPiiNbwkphE55kFuqP8jM:
HTTP/1.1 302
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Location: /error_page
Content-Language: en-US
Content-Length: 0
Date: Tue, 15 Feb 2022 20:01:50 GMT
HTTP/1.1 302
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Location: /error_page
Content-Language: en-US
Content-Length: 0
Date: Tue, 15 Feb 2022 20:01:50 GMT
curl: (47) Maximum (50) redirects followed
I needed to turn off the Apache web server to free port 80. I also deleted the iptables rule that forwards traffic from port 80 to port 8080. Now Certbot works.
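In command form, the fix roughly amounts to the following (a sketch, assuming the CentOS 7 httpd service name and Certbot's standalone authenticator, since port 80 has to be free for it):
# Stop Apache so port 80 is free (service name assumed to be httpd on CentOS 7)
sudo systemctl stop httpd
# Delete the NAT rule that was redirecting port 80 to 8080 (mirrors the -A rule above)
sudo iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
# Let Certbot bind port 80 itself and complete the HTTP-01 challenge
sudo certbot certonly --standalone -d mydomain.zone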

wget gives 403 on accessible files

First-time poster with a bizarre issue. I usually install software through conda, but from one moment to the next I stopped being able to use conda install because of a 403 that conda gets when trying to access some configuration files. When I try to download those files with wget --spider --debug https://conda.anaconda.org/anaconda/noarch/current_repodata.json, I get the same 403 error.
DEBUG output created by Wget 1.19.4 on linux-gnu.
Reading HSTS entries from /home/jsequeira/.wget-hsts
URI encoding = ‘UTF-8’
Converted file name 'current_repodata.json' (UTF-8) -> 'current_repodata.json' (UTF-8)
Spider mode enabled. Check if remote file exists.
--2020-07-30 11:25:59-- https://conda.anaconda.org/anaconda/noarch/current_repodata.json
Resolving conda.anaconda.org (conda.anaconda.org)... 104.17.92.24, 104.17.93.24, 2606:4700::6811:5d18, ...
Caching conda.anaconda.org => 104.17.92.24 104.17.93.24 2606:4700::6811:5d18 2606:4700::6811:5c18
Connecting to conda.anaconda.org (conda.anaconda.org)|104.17.92.24|:443... connected.
Created socket 5.
Releasing 0x000056545deb1850 (new refcount 1).
Initiating SSL handshake.
Handshake successful; connected socket 5 to SSL handle 0x000056545deb2700
certificate:
subject: CN=anaconda.org,O=Cloudflare\\, Inc.,L=San Francisco,ST=CA,C=US
issuer: CN=Cloudflare Inc ECC CA-3,O=Cloudflare\\, Inc.,C=US
X509 certificate successfully verified and matches host conda.anaconda.org
---request begin---
HEAD /anaconda/noarch/current_repodata.json HTTP/1.1
User-Agent: Wget/1.19.4 (linux-gnu)
Accept: */*
Accept-Encoding: identity
Host: conda.anaconda.org
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 403 Forbidden
Date: Thu, 30 Jul 2020 11:25:59 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
CF-Chl-Bypass: 1
Set-Cookie: __cfduid=d3cd3a67d3926551371d8ffe5a840b04f1596108359; expires=Sat, 29-Aug-20 11:25:59 GMT; path=/; domain=.anaconda.org; HttpOnly; SameSite=Lax
Cache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Expires: Thu, 01 Jan 1970 00:00:01 GMT
X-Frame-Options: SAMEORIGIN
cf-request-id: 044111dd9600005d4732b73200000001
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Vary: Accept-Encoding
Server: cloudflare
CF-RAY: 5baeb8dc2ba65d47-LIS
---response end---
403 Forbidden
cdm: 1
Stored cookie anaconda.org -1 (ANY) / <permanent> <insecure> [expiry 2020-08-29 11:25:59] __cfduid d3cd3a67d3926551371d8ffe5a840b04f1596108359
URI content encoding = ‘UTF-8’
Closed 5/SSL 0x000056545deb2700
Remote file does not exist -- broken link!!!
These files are accessible through the browser, and were always accessible with wget and conda until yesterday, when I was installing some tools unrelated to these network accesses. How can wget fail to download them?
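One quick check, given the CF-Chl-Bypass: 1 header in the response above, is whether the 403 is a Cloudflare bot challenge that depends on the client (a sketch; the User-Agent string is only an example):
# Plain HEAD request, similar to what wget sends
curl -sI https://conda.anaconda.org/anaconda/noarch/current_repodata.json
# Same request with a browser-like User-Agent, to see whether the 403 depends on it
curl -sI -A "Mozilla/5.0 (X11; Linux x86_64)" https://conda.anaconda.org/anaconda/noarch/current_repodata.json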
So this was fixed by reinstalling apt-get. Some configuration file there must have been messed up.

webhdfs is redirecting to localhost:50075

I am trying to create a file on a remote HDFS from a non-Hadoop environment.
For this purpose, I am using the pywebhdfs API and running the commands with curl.
https://pythonhosted.org/pywebhdfs/
I used this documentation as a reference; I am able to execute all the other methods except create_file().
When using create_file(), I get an error like 'Couldn't connect to host'.
Command: curl -i -X PUT -L "http://xxx.xxx.xxx.xxx:50070/webhdfs/v1/test1/?op=CREATE" -T sample.txt
Response:
HTTP/1.1 100 Continue
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Tue, 30 Oct 2018 12:04:04 GMT
Date: Tue, 30 Oct 2018 12:04:04 GMT
Pragma: no-cache
Expires: Tue, 30 Oct 2018 12:04:04 GMT
Date: Tue, 30 Oct 2018 12:04:04 GMT
Pragma: no-cache
Content-Type: application/octet-stream
Location: http://localhost:50075/webhdfs/v1/test1/?op=CREATE&namenoderpcaddress=xxx.xxx.xxx.xxx:9000&overwrite=false
Content-Length: 0
Server: Jetty(6.1.26)
curl: (7) couldn't connect to host
The Location header shows localhost here. I took reference from this earlier post:
webhdfs always redirect to localhost:50075
but I had no success.
I tried changing the IP in hdfs-site.xml and the /etc/hosts file, but with no success at all.
Can anyone tell me how to fix this?
Thanks in advance.
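One way to see exactly where things go wrong (a sketch; the path and datanode host are placeholders) is to do the WebHDFS two-step create manually instead of letting curl -L follow the redirect, and fix the host in the Location header by hand:
# Step 1: ask the namenode where to write; no data is sent yet
curl -i -X PUT "http://xxx.xxx.xxx.xxx:50070/webhdfs/v1/test1/sample.txt?op=CREATE&overwrite=false"
# Step 2: send the data to the address from the Location header of step 1,
# replacing "localhost" with the datanode's real hostname or IP
curl -i -X PUT -T sample.txt "http://<datanode-host>:50075/webhdfs/v1/test1/sample.txt?op=CREATE&namenoderpcaddress=xxx.xxx.xxx.xxx:9000&overwrite=false"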

webhdfs always redirect to localhost:50075

I have an HDFS cluster (Hadoop 2.7.1) with one namenode, one secondary namenode, and three datanodes.
When I enable WebHDFS and test it, I find it always redirects to "localhost:50075", which is not configured as a datanode.
csrd#secondarynamenode:~/lybica-hdfs-viewer$ curl -i -L "http://10.56.219.30:50070/webhdfs/v1/demo.zip?op=OPEN"
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Tue, 01 Dec 2015 03:29:21 GMT
Date: Tue, 01 Dec 2015 03:29:21 GMT
Pragma: no-cache
Expires: Tue, 01 Dec 2015 03:29:21 GMT
Date: Tue, 01 Dec 2015 03:29:21 GMT
Pragma: no-cache
Location: http://localhost:50075/webhdfs/v1/demo.zip?op=OPEN&namenoderpcaddress=10.56.219.30:9000&offset=0
Content-Type: application/octet-stream
Content-Length: 0
Server: Jetty(6.1.26)
curl: (7) Failed to connect to localhost port 50075: Connection refused
The etc/hadoop/slaves file is configured as:
10.56.219.32
10.56.219.33
10.56.219.34
Is there any configuration for this?
Thanks!
Well, it was an /etc/hosts mistake.
The /etc/hosts on the datanodes was:
127.0.0.1 localhost datanode-1
Changing it to:
127.0.0.1 datanode-1 localhost
fixed the problem.
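A quick way to confirm the change on each datanode (a sketch; datanode-1 is the hostname from the example above) is to check what the node now resolves for itself:
# The canonical name for 127.0.0.1 should now be datanode-1 rather than localhost
getent hosts 127.0.0.1
# And the fully qualified name the host reports for itself
hostname -f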
You need to have this entry in hdfs-site.xml:
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:50075</value>
</property>
The value should be 0.0.0.0 on a cluster. You need to restart the cluster after updating the hdfs-site.xml file and deploying it to all nodes in the cluster.
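After the restart, the redirect can be re-tested end to end (a sketch, reusing the namenode address from the question; hdfs getconf prints the configured value from the local config files):
# On a datanode: print the configured datanode HTTP address
hdfs getconf -confKey dfs.datanode.http.address
# From the client: the Location header should now point at the datanode's hostname, not localhost
curl -i "http://10.56.219.30:50070/webhdfs/v1/demo.zip?op=OPEN"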

NGINX + HHVM + Magento

I've been working on a development box deploying an Nginx + Magento + HHVM stack. The site loads nicely and I don't see any clear errors. However, a good number of my images are not loading properly; all I get are the image dimensions. The version of HHVM I'm using is:
HipHop VM 3.3.0-dev (rel)
Compiler: heads/master-0-g39d07abf36f7cad883d6be2a384a77c0c8aac040
Repo schema: 248cb6ce2d336c890fb26fd16690bf1b53b46994
Extension API: 20140829
The version of Nginx is:
nginx version: nginx/1.7.4
Some images load, and others don't. We're using timthumb for images, and I do see one error:
Warning: Is a directory in /home/user/shared/timthumb/index.php on line 90
If I run curl from the command line, I get the following result for an image, which means it's being served as far as I can tell:
HTTP/1.1 200 OK
Content-Type: image/gif
Connection: keep-alive
Vary: Accept-Encoding
X-Powered-By: HHVM/3.3.0-dev
CF-Cache-Status: HIT
CF-RAY: 16513733df33085c-IAD
Server: cloudflare-nginx
Date: Fri, 05 Sep 2014 08:56:47 GMT
Expires: Sun, 05 Oct 2014 08:56:47 GMT
Cache-Control: public, max-age=2592000
Set-Cookie: __cfduid=d6c188df6595a443200995301f0b707981409907407975; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.placehold.it; HttpOnly
Last-Modified: Fri, 19 Nov 2010 07:00:00 GMT
Cache-Control: max-age=31536000, public, must-revalidate, proxy-revalidate
I have a separate test server with the same content/database running Nginx/PHP-FPM, and the only difference I can see in the curl output is the Content-Type: when I run curl against the dev server running PHP-FPM, the result is Content-Type: image/jpeg, but when I run it against HHVM I get Content-Type: image/gif.
I would think this is related to rewrite rules, but I'm not sure. I also just discovered this error in exception.log:
2014-09-05T12:04:06+00:00 ERR (3): Warning: open(tcp://127.0.0.1:11211?persistent=1&weight=2&timeout=10&retry_interval=10/sess_dd885307c2ba602e5faa7020c9c7c038, O_RDWR) failed: No such file or directory (2) in on line 0
Any advice would be greatly appreciated.
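Not directly about the images, but the exception.log entry above suggests PHP's memcache session handler cannot open tcp://127.0.0.1:11211. A couple of quick checks (a sketch; the Magento config path assumed is the standard app/etc/local.xml):
# Is anything listening on the memcache endpoint from the log entry? (nc flags vary by netcat flavor)
nc -zv 127.0.0.1 11211
# Which session backend is Magento configured to use?
grep -A 2 session_save app/etc/local.xml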
