kibana.dev.yml is not applied in kibana development mode - elasticsearch

I would appreciate it if someone could help me out with this issue.
I am starting development of a Kibana plugin and have installed all the necessary packages.
My environment is as follows:
Kibana 5.0.0 alpha5 (cloned from the Git repository)
I want the development server to listen on an address other than 127.0.0.1:5601,
so I have created config/kibana.dev.yml as below:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# This setting specifies the IP address of the back end server.
server.host: "0.0.0.0"
However, this does not seem to be applied when I start the Kibana server with npm start; it keeps listening on 127.0.0.1:5601.
Do I need any other setting for config/kibana.dev.yml to be read?
Thanks,
Yu Watanabe

When started in dev mode, SSL is on by default. In that configuration, if no custom certificates have been specified, the server.host setting has no effect and is forced to localhost (to match the host name in the default provided certificates), as can be seen in the cli/serve/serve.js file:
if (opts.dev) {
  set('env', 'development');
  set('optimize.lazy', true);
  if (opts.ssl && !has('server.ssl.cert') && !has('server.ssl.key')) {
    set('server.host', 'localhost');
    set('server.ssl.cert', fromRoot('test/dev_certs/server.crt'));
    set('server.ssl.key', fromRoot('test/dev_certs/server.key'));
  }
}
You can start Kibana with the --no-ssl switch so that the server.host setting is taken into account:
sh ./bin/kibana --dev --no-ssl
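Alternatively, the condition in the serve.js snippet above suggests that supplying your own certificates should keep SSL on without forcing localhost, since the override only fires when no server.ssl.cert/server.ssl.key are set. A sketch of config/kibana.dev.yml under that reading; the certificate paths are hypothetical placeholders:
# config/kibana.dev.yml -- keep SSL on but bind to all interfaces
server.host: "0.0.0.0"
server.ssl.cert: /etc/ssl/certs/my-kibana.crt   # hypothetical path
server.ssl.key: /etc/ssl/private/my-kibana.key  # hypothetical path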

Related

SonarQube server not showing in browser

I have a Linux VM running with a Jenkins, Nexus and SonarQube server on it. The IP for the VM is 192.168.56.2 and I have no trouble accessing both Jenkins and Nexus on ports 8080 and 8081 respectively. However, when I try to access 192.168.56.2:9000 for SonarQube it just says 192.168.56.2 refused to connect.
When I run systemctl status sonar in the terminal it shows that SonarQube is active and running. I have opened the firewall to port 9000 and I have not changed any of the default settings. Does anyone have any idea what might be the issue?
SonarQube will only be listening on the loopback interface rather than on all inbound IP addresses. In your server's sonar.properties file, you'll need to set the web settings in order to access the server remotely, specifically the following values:
sonar.web.host: 192.168.56.2
sonar.web.port: 80 # if you want to use a port other than 9000
Also, in the web UI's Settings, under the "General" section, set the "Server base URL" value so that links and redirects issued by SonarQube target the correct location.
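To confirm which address SonarQube is actually bound to after restarting it, a quick check (a sketch; assumes a host with the ss utility, as on the CentOS/RHEL-style setups discussed here):
sudo ss -tlnp | grep 9000
If this shows 127.0.0.1:9000 rather than 192.168.56.2:9000 or 0.0.0.0:9000, the property change has not taken effect.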

Configuring Elasticsearch not to be localhost

After installing Elasticsearch 5.6.3 and setting the node name to the server name, I tried to browse to Elasticsearch using IP:9200, but it didn't work. If I browse to localhost:9200, it works. Where do I go to change the default behaviour of localhost? I want to open this up to other external servers, so the loopback address is no good.
After installing Kibana 5.6.3, the same is true there as well: starting the Kibana server with the IP fails, but with localhost it works.
At this point I have no indexes, I just want to prove Elasticsearch can be reached beyond localhost.
Thanks
Bill
You can configure the bind address with the "network.host" setting in 'elasticsearch.yml'; for Kibana, the equivalent setting in 'kibana.yml' is server.host. Both files live in the respective config directories.
Here are some links to the Elasticsearch docs to configure yours :)
Configuring Elasticsearch
Important Settings
For a quick-start development configuration, the following settings can be placed into 'elasticsearch.yml':
network.host e.g.
network.host: 192.168.178.49
cluster.initial_master_nodes e.g.
cluster.initial_master_nodes: ["node_1"]
You can also define a cluster name:
cluster.name: my-application
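Putting these together, a minimal development elasticsearch.yml might look like this (just the example values from above; not suitable for production):
# quick-start development settings
network.host: 192.168.178.49
cluster.name: my-application
cluster.initial_master_nodes: ["node_1"]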
Start it with the node name (example for Windows)
C:\InstallFolder\elasticsearch-7.10.0>C:\InstallFolder\elasticsearch-7.10.0\bin\elasticsearch.bat -Enode.name=node_1
Go to your browser and open http://192.168.178.49:9200 (replace with your IP). It shows a JSON result. localhost:9200 will no longer work, because the node now binds only to the configured address.
This config should not be used for production environments. See the official docs.
In general, when Elasticsearch is started from a command prompt, it prints any errors directly when something fails. These are very helpful.

Unable to get MariaDB 10.1 (on centos 7) to listen only on IPv4

How can I get MariaDB 10.1 to listen only on IPv4? Strange but true: the very first time I installed MariaDB and started it, I saw that it was correctly listening on IPv4, as shown in the example below.
But strangely, after reinstalling MariaDB for various reasons and rebooting my CentOS 7 installation, it seems to listen only on IPv6, and hence I cannot get the Galera Cluster to work (which was working fine when it was listening on IPv4). So how do I get MariaDB to listen only on IPv4? Below is the output from my machine:
[root@dataqry-0001 ~]# netstat -ntpl | grep sql
tcp6 0 0 :::3306 :::* LISTEN 14323/mysqld
Contents of /etc/my.cnf.d/server.cnf (please note that I also tried uncommenting the bind-address; strangely, it is still the same):
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#
# * Galera-related settings
#
[galera]
# Mandatory settings
#wsrep_on=ON
#wsrep_provider=
#wsrep_cluster_address=
#binlog_format=row
#default_storage_engine=InnoDB
#innodb_autoinc_lock_mode=2
#
# Allow server to accept connections on all interfaces.
#
#bind-address=0.0.0.0
#
# Optional setting
#wsrep_slave_threads=1
#innodb_flush_log_at_trx_commit=0
# this is only for embedded server
[embedded]
# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
# This group is only read by MariaDB-10.1 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mariadb-10.1]
I should add that I am quite confused by MariaDB/MySQL settings littered all over the place. The above bind-address is for Galera, I guess. It's my first time with MariaDB on CentOS 7, so apologies. I even tried disabling IPv6 earlier, but it still doesn't show it listening on IPv4.
Thanks
M.M
Preface
Although the information in the official MariaDB bug tracker seems to suggest that this is not possible unless the mysql software is used instead, I can confirm that setting the following configuration option in e.g. /etc/my.cnf, at least while using version 10.1.21-MariaDB, does work as expected and as outlined in @Hackerman's comment.
bind-address=0.0.0.0
The misunderstood/misleading/irrelevant official bug trackers I alluded to:
MDEV-6536 (Open)
MDEV-4379 (Closed)
Answer
To answer the question as it pertains to your specific scenario, however: pay attention to the section under which that setting appears; namely, you have it written under the [galera] section, rather than the server-wide [mysqld] section.
[mysqld]
#
# * Galera-related settings
#
[galera]
# Mandatory settings
#wsrep_on=ON
#wsrep_provider=
#wsrep_cluster_address=
#binlog_format=row
#default_storage_engine=InnoDB
#innodb_autoinc_lock_mode=2
#
# Allow server to accept connections on all interfaces.
#
#bind-address=0.0.0.0
Make sure bind-address is specified in the [mysqld] section.
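For example, the relevant part of a corrected /etc/my.cnf.d/server.cnf would look like this (a sketch based on the excerpt above; the commented wsrep_* lines are left untouched):
[mysqld]
# Allow the server to accept IPv4 connections on all interfaces.
bind-address=0.0.0.0

[galera]
# Mandatory settings
#wsrep_on=ON
#wsrep_provider=
#wsrep_cluster_address=
After restarting MariaDB, netstat -ntpl should show a tcp listener on 0.0.0.0:3306 instead of the tcp6 :::3306 entry.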

Elasticsearch: Failed to connect to localhost port 9200 - Connection refused

When I try connecting to Elasticsearch using
curl http://localhost:9200, it works fine.
But when I run curl http://IpAddress:9200, it throws an error saying
Failed to connect to localhost port 9200: Connection refused
How do I resolve this error?
Edit /etc/elasticsearch/elasticsearch.yml and add the following line:
network.host: 0.0.0.0
This will "unset" this parameter and will allow connections from other IPs.
By default it should bind to all local addresses. So, assuming you don't have a network-layer issue with firewalls, the only ES setting I can think to check is network.bind_host: make sure it is either not set or is set to 0.0.0.0, ::0, or the correct IP address for your network.
Update: per the comments, in ES 2.3 you should set network.host instead.
In my case Elasticsearch was started,
but I still had
curl: (7) Failed to connect to localhost port 9200: Connection refused
The following command was unsuccessful:
sudo service elasticsearch restart
To make it work, I had to run instead:
sudo systemctl restart elasticsearch
Then it all went fine.
I tried everything on this page, and only the instructions from here helped.
In /etc/default/elasticsearch, make sure these are uncommented:
START_DAEMON=true
ES_USER=elasticsearch
ES_GROUP=elasticsearch
LOG_DIR=/var/log/elasticsearch
DATA_DIR=/var/lib/elasticsearch
WORK_DIR=/tmp/elasticsearch
CONF_DIR=/etc/elasticsearch
CONF_FILE=/etc/elasticsearch/elasticsearch.yml
RESTART_ON_UPGRADE=true
Make sure /var/lib/elasticsearch is owned by the elasticsearch user:
chown -R elasticsearch:elasticsearch /var/lib/elasticsearch/
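To double-check the ownership afterwards, a quick sketch using GNU stat:
stat -c '%U:%G %n' /var/lib/elasticsearch
# expected: elasticsearch:elasticsearch /var/lib/elasticsearch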
Why don't you start with this command line:
$ sudo service elasticsearch status
I did, and got:
"There is insufficient memory for the Java Runtime..."
Then I edited /etc/elasticsearch/jvm.options file:
...
################################################################
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
#-Xms2g
#-Xmx2g
-Xms512m
-Xmx512m
################################################################
...
This worked like a charm.
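Once the service starts, you can confirm the heap the JVM actually received via the nodes info API (a quick check, not part of the original answer; assumes the default port):
curl http://localhost:9200/_nodes/jvm?pretty
# look for jvm.mem.heap_max_in_bytes in the response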
None of the proposed solutions here worked for me, but what eventually got it working was adding the following to elasticsearch.yml
network:
  host: 0.0.0.0
http:
  port: 9200
After that, I restarted the service and now I can curl it from both within the VM and externally. For some odd reason, I had to try a few different variants of a curl call inside the VM before it worked:
curl localhost:9200
curl http://localhost:9200
curl 127.0.0.1:9200
Note: I'm using Elasticsearch 5.5 on Ubuntu 14.04
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000085330000, 2060255232, 0) failed; error='Cannot allocate memory' (errno=12)
Be sure that the server has started. I've seen this problem when my virtual machine had too little RAM and ES could not start.
sudo systemctl status elasticsearch
The above will show you whether ES is indeed running.
Edit elasticsearch.yml and add the following line:
http.host: 0.0.0.0
network.host: 0.0.0.0 didn't work for me.
For this problem, I had to use:
sudo /usr/share/elasticsearch/bin/elasticsearch start
to be able to get something on ports 9200/9300 (sudo netstat -ntlp) and a response to:
curl -XGET http://localhost:9200
I experienced a similar issue.
Here's how I solved it
Run the service command below to start ElasticSearch
sudo service elasticsearch start
OR
sudo systemctl start elasticsearch
If you still get the error
curl: (7) Failed to connect to localhost port 9200: Connection refused
Run the service command below to check the status of ElasticSearch
sudo service elasticsearch status
OR
sudo systemctl status elasticsearch
If you get a response (Active: active (running)) like the one below, then your Elasticsearch is active and running:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enabled)
Active: active (running) since Sat 2019-09-21 11:22:21 WAT; 3s ago
You can then test that your Elasticsearch node is running by sending an HTTP request to port 9200 on localhost using the command below:
curl http://localhost:9200
Otherwise, if you get a different response, you may have to debug further to fix it, but running the command below will help you detect what is keeping the Elasticsearch service from starting.
sudo service elasticsearch status
OR
sudo systemctl status elasticsearch
If you want to stop the ElasticSearch service, simply run the service command below;
sudo service elasticsearch stop
OR
sudo systemctl stop elasticsearch
N.B.: You may have to run sudo service elasticsearch status or sudo systemctl status elasticsearch each time you encounter the error, in order to tell the state of the Elasticsearch service.
This also applies to Kibana: run sudo service kibana status or sudo systemctl status kibana each time you encounter the error, in order to tell the state of the Kibana service.
That's all.
I hope this helps.
I had the same problem with connections being refused on port 9200.
Check the Elasticsearch service status with the command sudo service elasticsearch status. If it shows an error and you read anything related to Java, the problem is probably your JVM memory. You can edit it in /etc/elasticsearch/jvm.options. For a 1 GB RAM machine in an Amazon environment, I kept my configuration at:
-Xms128m
-Xmx128m
After setting that and restarting the elasticsearch service, it worked like a charm. Checking with nmap and UFW (if you use a local firewall) should also be useful.
Open your Dockerfile under the elasticsearch folder and update "network.host=0.0.0.0" to "network.host=127.0.0.1". Then restart the container and check your connection with curl:
$ curl http://docker-machine-ip:9200
{
  "name" : "vI6Zq_D",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "hhyB_Wa4QwSX6zZd1F894Q",
  "version" : {
    "number" : "5.2.0",
    "build_hash" : "24e05b9",
    "build_date" : "2017-01-24T19:52:35.800Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.0"
  },
  "tagline" : "You Know, for Search"
}
For versions higher than 6.8 (7.x) you need two things.
1. Change the network host to listen on the public interface.
In the configuration file elasticsearch.yml (for Debian and derivatives -> /etc/elasticsearch/elasticsearch.yml),
set network.host or network.bind_host to:
...
network.host: 0.0.0.0
...
or to the address of the interface that must be reachable.
2. Before going to production, it's necessary to set important discovery and cluster formation settings.
According to elastic.co:
v6.8 -> discovery settings that should be set, e.g.:
...
# roughly means the same as 1
discovery.zen.minimum_master_nodes: -1
...
v7.x -> discovery settings that should be set.
For one single node:
discovery.type: single-node
# OR set discovery.seed_hosts: 127.0.0.1:9300 (the transport port, not 9200)
At least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured.
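Putting this together for a single development node on 7.x, a minimal elasticsearch.yml sketch (not for production):
# listen on all interfaces, keep the default HTTP port
network.host: 0.0.0.0
http.port: 9200
# satisfy the cluster-formation check with single-node discovery
discovery.type: single-node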
In this case, first of all you need to check the Java version using the command below:
java -version
After running this command you get something like this:
java version "1.7.0_51"
OpenJDK Runtime Environment (rhel-2.4.5.5.el7-x86_64 u51-b31)
OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode)
Then use this command:
update-alternatives --config java
and select the appropriate version:
*+ 1 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.51-2.4.5.5.el7.x86_64/jre/bin/java
2 /usr/java/jdk1.8.0_73/jre/bin/java
Enter to keep the current selection[+], or type selection number: 2
curl -XGET http://127.0.0.1:9200
My 2 cents:
I just followed the install procedure on DigitalOcean. Apparently the package available in the repos is not up to date, so I deleted everything, followed the install procedure straight from Elastic, and everything is working now. Basically, the out-of-the-box behaviour is to listen on localhost, pointing to 9200. The same issue occurred with Kibana; the solution for me was, again, to remove everything and just follow their procedure. Hope this saves someone two hours (the time I spent figuring out how to set up ELK!).
Update your JDK to at least the minimum version required by your Elasticsearch.
Change network.host to 0.0.0.0 and http.port to 9200. The bind address 0.0.0.0 means all IPv4 addresses on the local machine. If a host has two IP addresses, 192.168.1.1 and 10.1.2.1, and a server running on the host listens on 0.0.0.0, it will be reachable at both of those IPs.
If you encounter the Connection refused error, simply run the command below to check the status of the Elasticsearch service:
sudo service elasticsearch status
This will help you decipher the state of the Elasticsearch service and what to do about it.
For those of you installing ELK on a virtual machine in GCP (Google Cloud Platform), make sure that you created a firewall rule of Ingress type (i.e. for traffic incoming to the VM). You can specify multiple ports at a time in the rule by separating them with commas: 5000,5044,5601,9200,9300,9600.
In that rule you may want to specify a tag (pick the tag's name as you like, for example docker-elk) that will target your VM (the Targets column).
Then, on the VM's settings page, assign that tag to your VM.
After doing that I was able to access Elasticsearch in my browser via port 9200. And I didn't have to edit the elasticsearch.yml file whatsoever.
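The same setup can be scripted with the gcloud CLI (a sketch; the rule name, VM name, and zone are hypothetical, and the tag name follows the example above):
# create an ingress rule for the ELK ports, targeting instances tagged docker-elk
gcloud compute firewall-rules create allow-elk --allow tcp:5000,tcp:5044,tcp:5601,tcp:9200,tcp:9300,tcp:9600 --target-tags docker-elk
# attach the tag to the VM (replace my-elk-vm and the zone)
gcloud compute instances add-tags my-elk-vm --tags docker-elk --zone us-central1-a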
I have run across this problem every time I install or upgrade ES (7.0+), and the solution was ALWAYS to just wait for ES to fully start. It takes about a minute for the REST API to become responsive, no matter what the service status says.
service elasticsearch start
* started
* wait for at least a minute
curl now works and returns responses on port 9200
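If you script this, a simple poll avoids guessing at the delay (a sketch; adjust the host and the sleep interval to taste):
# wait until the REST API answers before proceeding
until curl -s http://localhost:9200 >/dev/null; do
  sleep 5
done
echo "Elasticsearch is up"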
After utilizing some of the answers above, don't forget that after an apt install, a total reboot might be in order.
Just to add to this: I've come across many docs through Google that said to set network.host to localhost.
Doing so gave me the infamous connection refused. You must use an IP address (127.0.0.1), not a FQDN.
Jeff
Make sure that port 9200 is open. In my case it was an Amazon instance, so when I opened it in my security group, the curl command worked.
Disabling SELinux worked for me, although I don't suggest it; I did that just for a PoC.
My problem was that I could not use localhost; I needed to set it to localhost's IP address:
network.bind_host: 127.0.0.1
In my case, the problem was the Java version: I had installed OpenJDK 11 previously, and that was creating the issue while starting the service. I changed it to OpenJDK 8 and it started working.
I experienced this on CentOS 7, and the issue was that /etc/hosts had the following:
127.0.0.1 localhost.localdomain
which I updated to include localhost as follows:
127.0.0.1 localhost localhost.localdomain
after that, no issues.
You have to edit /etc/elasticsearch/elasticsearch.yml.
By default all configuration is commented out; add the following configuration:
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: [0.0.0.0]
Then restart the service.
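For example, on a systemd-based distribution (matching the commands used in other answers here):
sudo systemctl restart elasticsearch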
I ran into a related situation recently.
Here's my take on the subject: Accessing Elastic 5.5 in vagrant guest from host through a private network
TL;DR
The settings:
network.host: 0.0.0.0
http.port: 9200
work fine. One just needs to wait long enough for ES to complete its initialization procedure, bind to the network interface, and start listening on the port.
Now, from within the guest, curl http://localhost:9200 works and from the host, curl http://192.168.54.2:9200 works as well.
For Windows users, try
https://localhost:9200/
It worked for me.

Logstash not sending data to Elasticsearch when run as a service

This is my config file stored at /etc/logstash/conf
input {
  file {
    path => ["PATH_OF_FILE"]
  }
}
output {
  elasticsearch {
    host => "172.29.86.35"
    index => "new"
  }
}
and this is the content of my elasticsearch.yml file for the network and http sections:
# Set the bind address specifically (IPv4 or IPv6):
#network.bind_host: 172.29.86.35
# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#network.publish_host: 192.168.0.1
# Set both 'bind_host' and 'publish_host':
network.host: 172.29.86.35
# Set a custom port for the node to node communication (9300 by default):
#transport.tcp.port: 9300
# Enable compression for all communication between nodes (disabled by default):
#transport.tcp.compress: true
# Set a custom port to listen for HTTP traffic:
#http.port: 9200
I am running Elasticsearch and Logstash as services. The problem is that when I start Logstash as a service, it does not send any data to Elasticsearch. However, if I use the same config in the Logstash conf file and run Logstash from the CLI, it works perfectly fine. The logs do not even show any error.
The versions I am running are 1.4.3 for ES and 1.4.2 for LS.
The system env is RHEL 7
I have also encountered this issue.
When I execute the command with the -f option it works normally, but when I start the service, nothing happens and the log file under /etc/logstash is never updated.
What I did as a temporary countermeasure was to execute the command below (with the & option):
logstash -f conffile.conf &
With this, it keeps working even if I log out from the server.
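A slightly more robust variant of the same workaround is to detach the process with nohup so it also survives the terminal closing (a sketch; the /opt/logstash path is an assumption for a 1.4.x package install, and the real fix is still to debug the service script):
nohup /opt/logstash/bin/logstash -f conffile.conf > /tmp/logstash-manual.log 2>&1 &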
