Start Elasticsearch service with 2 nodes - elasticsearch

I am trying to start Elasticsearch as a cluster with 2 nodes.
I run the command:
service elasticsearch start
Then I run 2 instances of elasticsearch so that they join the cluster, with:
/bin/elasticsearch
But when I check the head plugin at localhost:9200/_plugin/head/ I get a cluster health status of yellow, and the nodes didn't join the cluster.
How can I configure the two nodes to make them join the cluster?
Thanks.
EDIT:
This is what I get:
root@vmi17663:~# curl -XGET 'http://localhost:9200/_cluster/nodes?pretty=true'
{
  "ok" : true,
  "cluster_name" : "nearCluster",
  "nodes" : {
    "aHUjm3SjQa6MbRoWCnL4pQ" : {
      "name" : "Primary node",
      "transport_address" : "inet[/ip#dress:9300]",
      "hostname" : "HOSTNAME",
      "version" : "0.90.5",
      "http_address" : "inet[/ip#dress:9200]"
    }
  }
}
root@vmi17663:~# curl -XGET 'http://localhost:9201/_cluster/nodes?pretty=true'
{
  "ok" : true,
  "cluster_name" : "nearCluster",
  "nodes" : {
    "pz7dfIABSbKRc92xYCbtgQ" : {
      "name" : "Second Node",
      "transport_address" : "inet[/ip#dress:9301]",
      "hostname" : "HOSTNAME",
      "version" : "0.90.5",
      "http_address" : "inet[/ip#dress:9201]"
    }
  }
}

I made it work!
As expected, it was an iptables problem. I added this rule:
-A INPUT -m pkttype --pkt-type multicast -j ACCEPT
and everything went smoothly.
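For reference, the nodes were discovering each other via zen multicast, which the rule above unblocks. If multicast can't be allowed, an alternative on 0.90.x is unicast discovery; a minimal sketch for each node's elasticsearch.yml, assuming two local nodes on ports 9300/9301:
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300", "127.0.0.1:9301"]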

Make sure you have different elasticsearch.yml files for each node.
Make sure each is configured to join the same cluster via cluster.name: "mycluster"
You can start additional nodes (each a new JVM process) off the same code install like this:
<es home>/bin/elasticsearch -d -Des.config=<wherever>/elasticsearch-1/config/elasticsearch.yml
<es home>/bin/elasticsearch -d -Des.config=<wherever>/elasticsearch-2/config/elasticsearch.yml
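A minimal sketch of what each node's file might contain (node names and paths here are assumptions; the key point is that cluster.name matches across nodes while node.name and the data/log paths differ):
# <wherever>/elasticsearch-1/config/elasticsearch.yml
cluster.name: mycluster
node.name: node-1
path.data: <wherever>/elasticsearch-1/data
path.logs: <wherever>/elasticsearch-1/logs
# <wherever>/elasticsearch-2/config/elasticsearch.yml
cluster.name: mycluster
node.name: node-2
path.data: <wherever>/elasticsearch-2/data
path.logs: <wherever>/elasticsearch-2/logs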
My setup looks like this:
elasticsearch-1.0.0.RC1
├── LICENSE.txt
├── NOTICE.txt
├── README.textile
├── bin
├── config
├── data
├── lib
├── logs
└── plugins
elasticsearch-2
├── config
├── data
├── logs
├── run
└── work
elasticsearch-3
├── config
├── data
├── logs
├── run
└── work
elasticsearch-1
├── config
├── data
├── logs
├── run
└── work
I start all three with aliases like this:
alias startes1='/usr/local/elasticsearch-1.0.0.RC1/bin/elasticsearch -d -Des.config=/usr/local/elasticsearch-1/config/elasticsearch.yml'
alias startes2='/usr/local/elasticsearch-1.0.0.RC1/bin/elasticsearch -d -Des.config=/usr/local/elasticsearch-2/config/elasticsearch.yml'
alias startes3='/usr/local/elasticsearch-1.0.0.RC1/bin/elasticsearch -d -Des.config=/usr/local/elasticsearch-3/config/elasticsearch.yml'

If your nodes don't join, then you need to check your cluster.name setting, and make sure that each node can communicate with the others via port 9300 (9200 is for incoming HTTP/REST traffic, and 9300 is for node-to-node transport traffic).
So as @mcolin mentioned, make sure your cluster name is the same for each node. To do so, open up your /etc/elasticsearch/elasticsearch.yml file on your 1st server, find the line that says "cluster.name", and note what it is set to. Then go to your other servers and make sure they are set to the exact same thing.
To do this, you could run this command:
sudo vim /etc/elasticsearch/elasticsearch.yml
and set the following line to be something like:
cluster.name: my_cluster_name
Additionally, your nodes might not be able to talk to each other. My nodes are running on AWS, so I went to my EC2 panel and made sure my instances were in the same security group. Then I set my security group to allow all instances within it to talk to each other by creating a rule like this:
Custom TCP Rule TCP 9300 dev-elasticsearch
(or to be wild and dangerous, set this:)
All traffic All All dev-elasticsearch
Within a minute of setting this I checked my cluster status and all was well:
curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty=true'
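If the nodes have joined, a healthy response looks roughly like this (illustrative values only; your cluster_name and shard counts will differ):
{
  "cluster_name" : "mycluster",
  "status" : "green",
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 5,
  "active_shards" : 10,
  "unassigned_shards" : 0
}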

How to resolve "OpenSearch Unreachable: [https://127.0.0.1:9200/] ... unable to find valid certification path to requested target"?

Hi, I am trying to ingest data from Logstash (OSS) into OpenSearch, but it seems I can't connect to OpenSearch from Logstash.
The error log:
[avs@localhost pipeline]$ ./bin/logstash -f config/pipeline/ipv4.conf
-bash: ./bin/logstash: No such file or directory
[avs@localhost pipeline]$ cd ..
[avs@localhost config]$ cd ..
[avs@localhost logstash-7.16.2]$ ./bin/logstash -f config/pipeline/ipv4.conf
Using bundled JDK: /oss/bin/logstash-7.16.2/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /oss/data_files/logs/logstash which is now configured via log4j2.properties
[2022-01-27T11:36:13,302][INFO ][logstash.runner ] Log4j configuration path used is: /oss/bin/logstash-7.16.2/config/log4j2.properties
[2022-01-27T11:36:13,313][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.16.2", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.13+8 on 11.0.13+8 +indy +jit [linux-x86_64]"}
[2022-01-27T11:36:13,813][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-01-27T11:36:14,823][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-01-27T11:36:16,149][INFO ][org.reflections.Reflections] Reflections took 152 ms to scan 1 urls, producing 119 keys and 417 values
[2022-01-27T11:36:17,683][INFO ][logstash.outputs.opensearch][main] New OpenSearch output {:class=>"LogStash::Outputs::OpenSearch", :hosts=>["https://127.0.0.1:9200"]}
[2022-01-27T11:36:18,093][INFO ][logstash.outputs.opensearch][main] OpenSearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://127.0.0.1:9200/]}}
[2022-01-27T11:36:18,498][WARN ][logstash.outputs.opensearch][main] Attempted to resurrect connection to dead OpenSearch instance, but got an error {:url=>"https://127.0.0.1:9200/", :exception=>LogStash::Outputs::OpenSearch::HttpClient::Pool::HostUnreachableError, :message=>"OpenSearch Unreachable: [https://127.0.0.1:9200/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}
The logstash pipeline file:
input {
  file {
    path => "/home/avs/avs_dump.csv"
    start_position => "beginning"
  }
}
output {
  opensearch {
    hosts => ["https://127.0.0.1:9200"]
    auth_type => {
      type => 'basic'
      user => 'admin'
      password => 'admin'
    }
    index => "cassandra"
  }
  file {
    path => "/oss/data_files/data/logstash/zonos_ipv4.out"
  }
}
and here is the opensearch.yml file:
# ======================== OpenSearch Configuration =========================
#
# NOTE: OpenSearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.opensearch.org
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: avs-subhsaree
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /oss/data_files/data/logstash
#
# Path to log files:
#
path.logs: /oss/data_files/logs/logstash
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# OpenSearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["127.0.0.1"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
######## Start OpenSearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
plugins.security.ssl.transport.pemcert_filepath: esnode.pem
plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: esnode.pem
plugins.security.ssl.http.pemkey_filepath: esnode-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
- CN=kirk,OU=client,O=client,L=test, C=de
plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]
node.max_local_storage_nodes: 3
######## End OpenSearch Security Demo Configuration ########
Here are the files in the OpenSearch config directory, which contains some PEM files:
.
├── esnode-key.pem
├── esnode.pem
├── jvm.options
├── jvm.options.d
├── kirk-key.pem
├── kirk.pem
├── log4j2.properties
├── opensearch.keystore
├── opensearch-observability
│   └── observability.yml
├── opensearch-reports-scheduler
│   └── reports-scheduler.yml
├── opensearch.yml
└── root-ca.pem
3 directories, 11 files
It seems Logstash can't connect to OpenSearch because the auth type should be SSL instead of basic, but the problem is I do not know how to obtain the required files, or from where.
If anyone could point me in the right direction, or to a document for this, it would be really helpful.
Thanks
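For reference, a minimal sketch of one way this is commonly handled: keep basic auth, but tell Logstash to trust the cluster's certificate by pointing the output at a CA file (for the demo security config above, that could be the root-ca.pem listed in the config directory, copied somewhere Logstash can read). The ssl and cacert settings shown here are assumptions based on the plugin's Elasticsearch-output lineage; check the logstash-output-opensearch documentation for your version, and the path is a placeholder:
output {
  opensearch {
    hosts => ["https://127.0.0.1:9200"]
    auth_type => {
      type => 'basic'
      user => 'admin'
      password => 'admin'
    }
    ssl => true
    cacert => "/oss/root-ca.pem"   # assumption: root-ca.pem copied from the OpenSearch config dir
    index => "cassandra"
  }
}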

Proxy $PORT value from create-react-app in a child directory

I'm working on a repo which is serving a create-react-app from a node endpoint. So, the react app is nested as a child directory:
.
├── Procfile
├── frontend
│ ├── README.md
│ ├── build
│ ├── package.json <---- "proxy": "http://localhost:$PORT"
│ ├── public
│ ├── src
│ │ ├── App.css
│ │ ├── App.js
│ │ └── // etc...
│ └── .env <----- frontend env file, removed PORT value from here
├── package.json
├── src
│ ├── app.js
│ ├── server.js
│ └── // etc...
├── .env <--- backend env file, PORT=9000 for node
├── static.json
└── yarn.lock
With the port value removed from the .env file, CRA runs on port 3000. If I hardcode port 9000 instead of $PORT, then the proxy works properly in development.
However, when deploying to production, I want the frontend to proxy Heroku's dynamic port number. Here is one example:
Heroku seems to ignore the port value even if I explicitly define it as 9000 in the environment settings on their website.
My question is: how do I define the proxy on the frontend without having CRA start at that port number, e.g. apply PORT=9000 in the frontend .env but have CRA load at port 3000?
I've tried defining the port number in the script, while making sure that I've defined PORT=9000 in the frontend env:
"scripts": {
"start": "export PORT=3000 && react-scripts start",
CRA will load at 3000, but I get a proxy error.
Heroku doesn't let you choose your port, but rather allocates a port for your app to use as an environment variable. Read more here:
Each web process simply binds to a port, and listens for requests coming in on that port. The port to bind to is assigned by Heroku as the PORT environment variable.
Remove all hardcoded PORT variables
It's not ideal to use $PORT in your package.json file, as you cannot add logic to it. In your Node.js app, read the port variable like so:
const PORT = process.env.PORT || 3000
This will set the port variable to whatever is in the environment variable PORT and, if it is not set, will default to 3000.
It is not efficient to serve a production app with CRA
Don't run two servers for React and Node.js; instead, use your Node.js app to serve the production build of the React app:
const express = require('express')
const path = require('path')
const app = express()
const PORT = process.env.PORT || 3000 // Heroku injects PORT at runtime; 3000 is a local fallback
// All your other API routes go here
app.use('/', express.static(path.join(__dirname, 'client/build'))) // this must be the last one
app.listen(PORT, () => console.log(`Listening on port ${PORT}`))
NOTE: This is assuming your react app is built inside client/build relative to your project root
The proxy setting is only for development convenience and will not work if the app is not served by CRA.
Make Heroku build your React app during buildtime with:
npm --prefix client run build # or if you use yarn
yarn --cwd client build
in your outer package.json file's build script
Your start script is going to run your Node.js server:
"scripts": {
"start": "node src/server.js",
"build": "npm --prefix client run build"
}
Don't commit your .env files to Heroku; instead, set environment variables directly using heroku config:set KEY=VALUE if you have the Heroku CLI, or use the dashboard settings.
NOTE: Do this before pushing your code to have these variables accessible during buildtime of the react app

dns01 validation: Certificate issuance in progress. Temporary certificate issued

Following this
Setup:
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-19T22:12:47Z", GoVersion:"go1.12.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.7-gke.10", GitCommit:"8d9b8641e72cf7c96efa61421e87f96387242ba1", GitTreeState:"clean", BuildDate:"2019-04-12T22:59:24Z", GoVersion:"go1.10.8b4", Compiler:"gc", Platform:"linux/amd64"}
knative-serving & Istio are v0.5.2
cert-manager is v0.7, applied with --validate=false as k8s is 1.12.7
Cert-manager ClusterIssuer status:
status:
  conditions:
  - lastTransitionTime: "2019-04-29T21:29:40Z"
    message: Certificate issuance in progress. Temporary certificate issued.
    reason: TemporaryCertificate
    status: "False"
    type: Ready
I have done as in the documentation, but setting up Google DNS is not described.
I have manually created a DNS zone in the Google Cloud DNS console.
My domain is pointing at the nameservers and I can ping the right server IP address.
When creating the DNS I added a record set:
*.mydomain.com. A 300 x.x.x.x
Note: also tried without " * "
I have seen here that they talk about setting a TXT record?
Do you know how to make this (cert-manager & TLS) work?
First, look at the logs emitted by the cert-manager pod: kubectl logs -n <namespace> pod/<podname>.
cert-manager will tell you why the challenge is failing.
One common reason is rate limiting by Let's Encrypt, in which case you have to wait for 7 days.
You can also view this same issue on github https://github.com/jetstack/cert-manager/issues/1745
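As a rough sketch of where else to look (assuming cert-manager's usual custom resources are installed; the resource and namespace names are placeholders):
kubectl get certificates -n <namespace>
kubectl describe certificate <cert-name> -n <namespace>   # shows the certificate's conditions and events
kubectl describe order -n <namespace>                     # shows the state of the ACME order
kubectl describe challenge -n <namespace>                 # shows why a dns01 challenge is still pending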

How to access an Elasticsearch stored in a Docker container from outside?

I'm currently running Elasticsearch (ES) 5.5 inside a Docker container (see below).
curl -XGET 'localhost:9200'
{
  "name" : "THbbezM",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "CtYdgNUzQrS5YRTRT7xNJw",
  "version" : {
    "number" : "5.5.0",
    "build_hash" : "260387d",
    "build_date" : "2017-06-30T23:16:05.735Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
I've changed the elasticsearch.yml file to look like this:
http.host: 0.0.0.0
# Uncomment the following lines for a production cluster deployment
#transport.host: 0.0.0.0
#discovery.zen.minimum_master_nodes: 1
network.host: 0.0.0.0
http.port: 9200
I can currently get my indexes through curl -XGET commands. The thing here is that I want to be able to make HTTP requests to this ES instance from my machine (Mac OS X) using its IP address instead of the 'localhost:9200' setting.
So, what I've tried already:
1) I've tried doing it in Postman, getting the following response:
Could not get any response
There was an error connecting to X.X.X.X:9200/.
Why this might have happened:
The server couldn't send a response:
Ensure that the backend is working properly
Self-signed SSL certificates are being blocked:
Fix this by turning off 'SSL certificate verification' in Settings > General
Client certificates are required for this server:
Fix this by adding client certificates in Settings > Certificates
Request timeout:
Change request timeout in Settings > General
2) I also tried in Sense (Plugin for Chrome):
Request failed to get to the server (status code: 0):
3) Running a curl from my machine's terminal won't do it either.
What am I missing here?
Docker for Mac provides a DNS name you can use:
docker.for.mac.localhost
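For example, from inside another container, this should reach the host-published port (assuming the Elasticsearch container's port 9200 is published to the host):
curl http://docker.for.mac.localhost:9200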
You should use the value specified under container name in the YML file to connect to your cluster. Example:
services:
  elasticsearch:
    container_name: 'example_elasticsearch'
    image: 'docker.elastic.co/elasticsearch/elasticsearch:6.6.1'
In this case, Elasticsearch is located at http://example_elasticsearch:9200. Note that example_elasticsearch is the name of the container and may be used the same way as a machine name or hostname.
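As a quick check (assuming another container is attached to the same Docker network and has curl installed):
curl http://example_elasticsearch:9200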

Share one Vagrant instance between different directories

I have a few directories with different Mercurial histories that I am working on in parallel. They all have the same Vagrantfile, so it would be natural to use just one instance for all of them.
But when I run "vagrant up" in a new directory, it starts from scratch: importing the box, setting up the environment, and so on.
How do I share the Vagrant instance between different directories?
UPDATE: my directory structure:
\
  Vagrantfile
  puppet
    *.pp
  support
    nginx.conf
    uwsgi.development.ini
  other_repo_related_files_and_dirs
Well, if you want to share several directories with the same Vagrant instance, you can configure that in the Vagrantfile.
This is an example with two VMs (app and web), using the same box (ubuntu-12.04) and the same Vagrantfile. Each instance has its own synced folder (one folder per VM).
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define 'app' do |app_config|
    app_config.vm.box = 'ubuntu-12.04'
    app_config.vm.host_name = 'app'
    app_config.vm.network "private_network", ip: "192.168.33.33"
    app_config.vm.synced_folder "app_config", "/app_config"
  end

  config.vm.define 'web' do |web_config|
    web_config.vm.box = 'ubuntu-12.04'
    web_config.vm.host_name = 'web'
    web_config.vm.network "private_network", ip: "192.168.33.34"
    web_config.vm.synced_folder "web_config", "/web_config"
  end
end
The app machine has an app_config folder and the web machine has a web_config folder (these folders are at the same level as the Vagrantfile).
When you enter each VM with the vagrant ssh command, you can see its folder.
This is inside the app machine:
roberto@rcisla-pc:~/Desktop/multiple$ vagrant ssh app
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic-pae i686)
* Documentation: https://help.ubuntu.com/
Welcome to your Vagrant-built virtual machine.
Last login: Mon Jan 27 13:46:36 2014 from 10.0.2.2
vagrant@app:~$ cd /app_config/
vagrant@app:/app_config$ ls
app_config_file
This is inside the web machine:
roberto@rcisla-pc:~/Desktop/multiple$ vagrant ssh web
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic-pae i686)
* Documentation: https://help.ubuntu.com/
Welcome to your Vagrant-built virtual machine.
Last login: Mon Jan 27 13:47:12 2014 from 10.0.2.2
vagrant@web:~$ cd /web_config/
vagrant@web:/web_config$ ls
web_config_file
vagrant@web:/web_config$
And this is the structure of my directory:
.
├── **app_config**
│   └── *app_config_file*
├── attributes
├── Berksfile
├── Berksfile.lock
├── chefignore
├── definitions
├── files
│   └── default
├── Gemfile
├── libraries
├── LICENSE
├── metadata.rb
├── providers
├── README.md
├── recipes
│   └── default.rb
├── resources
├── templates
│   └── default
├── test
│   └── integration
│       └── default
├── Thorfile
├── Vagrantfile
├── Vagrantfile~
└── **web_config**
└── *web_config_file*
I hope this helps you.
Just thinking out loud here. Not sure if it's a solution that meets your demands.
If you set up a directory structure like this:
/Main
  /projects
    /mercurial_history_1
    /mercurial_history_2
    /mercurial_history_3
  /puppet
    /modules
    /manifests
      default.pp
  Vagrantfile
I'm not sure what kind of projects you are running, but if you are running an Apache web server, for example, you could specify a separate vhost for every Mercurial project inside the VM and point each DocumentRoot at the specific Mercurial project (see the sketch at the end of this answer).
For this solution you have to add the following line to the Vagrantfile:
config.vm.network "private_network", ip: "22.22.22.11" <- Just an example IP
Then on your host machine you can update the hosts file with the IP and the corresponding vhost ServerName. It's a little bit more work, but you can add vhosts using a provisioner to make life easier ;)
This way you only have one VM running that serves all your Mercurial projects.
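A rough sketch of what that could look like, assuming Apache inside the VM; the project names, hostnames, and synced-folder paths below are all placeholders to adapt:
# On the host, in /etc/hosts
22.22.22.11  project1.local project2.local
# Inside the VM, one vhost per Mercurial checkout
<VirtualHost *:80>
  ServerName project1.local
  DocumentRoot /vagrant/mercurial_history_1
</VirtualHost>
<VirtualHost *:80>
  ServerName project2.local
  DocumentRoot /vagrant/mercurial_history_2
</VirtualHost>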
