Elasticsearch: my existing index is deleted

I have a self-hosted Elasticsearch instance on my server, integrated with my code. After a certain number of days my existing data is deleted; only new entries created after that are stored in the index.
I'm using an Amazon AWS free-tier server, which also runs RabbitMQ. I know Elasticsearch needs more RAM, but I need to know the reason for the data loss.
This is my server log link

Elasticsearch never deletes an index on its own. It means that something or someone is probably running a DELETE index query from somewhere.
If you look at your logs, you should see something like this.
Data is stored on disk in the data dir.
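For instance, on a package-based install you can check both the logs and the data path like this (the paths below are the Debian/RPM defaults, so adjust them if your install differs):
grep -i "deleting index" /var/log/elasticsearch/*.log
grep path.data /etc/elasticsearch/elasticsearch.yml   # defaults to /var/lib/elasticsearch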

You have been attacked by a bot. If it happens again, try listing your indices with GET /_cat/indices. If you see something like 'meow' or 'warn' there, then it is a known bot attack.
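For example, a quick way to see what is currently in the cluster (assuming Elasticsearch listens on localhost:9200):
curl -XGET 'http://localhost:9200/_cat/indices?v'
If your own indices are gone and an unfamiliar index with a random name has appeared instead, that matches the pattern of these automated attacks.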
I faced the same issue and did some investigation. Port 9200 was open on my server, despite iptables rules that restrict everything except 443, 8443 and 22. The cause was Docker: it added one more rule after all of mine.
sudo iptables -S | grep 9200
-A DOCKER -d 172.18.0.2/32 ! -i br-cab97908df43 -o br-cab97908df43 -p tcp -m tcp --dport 9200 -j ACCEPT
How is that possible? Why does Docker do this crazy thing? The reason is the default docker-compose.yml, which I took from the Elasticsearch website.
Change
ports:
- 9200:9200
to
ports:
- 127.0.0.1:9200:9200
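After recreating the container you can confirm the port is no longer published on all interfaces; a quick check (docker-proxy is normally the process listening on a published port):
sudo ss -tlnp | grep 9200
The output should show 127.0.0.1:9200 rather than 0.0.0.0:9200 or *:9200.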

The same thing happened to me once: I was working on some queries when all my indices suddenly got deleted.
I had been meaning to secure the whole thing but kept postponing it. It only took an hour to install X-Pack and work through the errors that came up because of it, and there have been no suddenly deleted indices ever since :D.
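For reference, a minimal sketch of what securing it amounts to on 6.x/7.x (exact settings vary by version and licence):
# elasticsearch.yml
xpack.security.enabled: true
# then initialise the built-in user passwords
bin/elasticsearch-setup-passwords interactive
After that, every request, including a bot's DELETE, needs credentials.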

Related

`ddev get --list` doesn't work (lookup api.github.com: i/o timeout)

I need to add Solr to a DDEV project but am encountering errors when attempting to gather information about available services.
I'm following guidance here:
https://ddev.readthedocs.io/en/stable/users/extend/additional-services/
When I attempt to list all available services: ddev get --list, I receive this response after approx 30 seconds:
Failed to list available add-ons: Unable to get list of available services: Get "https://api.github.com/search/repositories?q=topic:ddev-get+fork:true+org:drud": dial tcp: lookup api.github.com: i/o timeout
I'm not sure what the problem is. If I curl the URL from the error message, i.e. curl https://api.github.com/search/repositories?q=topic:ddev-get+fork:true+org:drud, I receive a JSON response from GitHub with information about the repository.
This has been happening for over two days now. I may be overlooking something but am not sure what, exactly. I'm able to run DDEV projects using the standard installation (mariadb, nginx, nodejs, mailhog) but continue to run into errors when listing add-ons.
I have DDEV v1.21.4 installed.
I'm using an M1 Mac on macOS 13.1.
Thank you.
Your system is unable to do a DNS lookup of the hostname api.github.com, and this is happening on your macOS host. Are you able to ping api.github.com? Have you tried rebooting?
You may want to temporarily disable firewall, VPN, virus checker to see if that changes things. But you'll want to be able to get to where you can ping api.github.com.
There is an obscure golang problem on macOS affecting situations where people have more than one DNS server, so that could be it if you're in that category. You also might want to consider changing the DNS server for your system to 1.1.1.1, as this can sometimes be a problem with your local DNS server (but of course the fact that you can curl the URL argues against that).
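For example, a couple of quick checks on the macOS host ("Wi-Fi" is an assumed network service name; substitute your own):
ping -c 3 api.github.com
dig api.github.com
# temporarily switch the system resolver to 1.1.1.1
networksetup -setdnsservers Wi-Fi 1.1.1.1
# revert to the DHCP-provided DNS later with:
networksetup -setdnsservers Wi-Fi empty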

Docker-Compose ELK stack

I'd like to get an ELK stack running on my virtual machine. I read up on the topic a while ago but only recently figured out how to use docker-compose. I found a complete compose file on GitHub (https://github.com/deviantony/docker-elk) which is capable of composing the whole stack; however, I ran into two questions:
Kibana is waiting for input from Elasticsearch on the URL "http://elasticsearch:9200". That URL obviously doesn't resolve to anything, so how is that supposed to work?
I changed it so Kibana tries to connect to localhost:9200 instead, but Elasticsearch refuses the connection. I checked the running containers and saw that Elasticsearch is indeed running, but "lsof -i :9200" (or 9300) did not show anything. Does that mean it is not listening on the port, or would it not show up because it is running in Docker?
Thanks in advance.
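One way to check, assuming the compose service is named elasticsearch: a container-internal listener only shows up in lsof on the host if the port is actually published, so ask Docker directly.
docker-compose ps                        # lists containers and their published ports
docker-compose port elasticsearch 9200   # prints host_ip:port if 9200 is published
As for the hostname, http://elasticsearch:9200 resolves through Docker's internal DNS, but only from other containers on the same compose network, not from the host.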

Kibana web interface not loading

Despite ElasticSearch and Kibana both running on my production server, I'm unable to visit the GUI over the public IP: http://52.4.153.19:5601/
Localhost curls return 200 but console errors on the browser report timeouts after a few images are retrieved.
I've successfully installed, run, and accessed Kibana on my local (Windows 10) and on my staging AWS EC2 Ubuntu 14.04 environment. I'm able to access both over port 5601 on localhost and the staging environment is accessible over the public IP address and all domains addressed accordingly. The reverse proxy also works and all status indicators are green on the dashboard.
I'm running Kibana 4.5, ElasticSearch 2.3.1, Apache 2.4.12
I've used the same exact volume from the working environment to attach to the production instance, so everything is identical on the two volumes, except that the staging environment's apache vhost uses a subdomain while the production environment's servername is the base domain. Both are configured for SSL wildcards. Both are in separate availability zones at Amazon. I've tried altering the server block to use a subdomain on the production server, just to see if the domain was impactful but the error remains.
I also tried running one instance individually, in case EC2 had some kind of networking error with 0.0.0.0 but I'm unable to come to a resolution. All logs and configurations are identical between the two servers for ElasticSearch and Kibana.
I've tried deleting and re-creating the kibana index, tried alternate settings for the host and the elasticsearch URL, extended the max ping, timeout, and retries, raised the Apache limits, and set http.cors to allow different origins. I've tried other ports, but both servers indicate that 5601 is listening in the same way.
I also had the same problem on a completely different volume that was previously attached to this instance.
The only difference I can see is that the working version pings fine while the non-working version has a 100% packet loss when pinging the IP, although I can't imagine why that would be, as I'm able to reach the website on 80, just fine. I can also access various other tools running on other ports. I assume there might be some kind of networking conflict. Any ideas?
Maybe port 5601 is blocked by a firewall.
Allow incoming connections to port 5601 by:
sudo iptables -I INPUT -p tcp --dport 5601 -j ACCEPT
For security:
Modify the above command to accept connections only from a specific address (see man iptables),
or use the Shield plugin for Elasticsearch.
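For example, to allow Kibana only from a single trusted address (203.0.113.10 is a placeholder; use your own IP):
sudo iptables -I INPUT -p tcp -s 203.0.113.10 --dport 5601 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 5601 -j DROP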
Sorry, forgot to update this question. The answer turned out to be that I simply needed to deploy a new instance. By creating a clone of the instance, I was able to resolve the issue. I've had networking problems at AWS before, with their internal DNS/IP conflicts, and have had to do this in the past; it turned out to be the quickest and cleanest solution, albeit one that doesn't provide any definitive insight into the cause.

elasticsearch on Ec2 cannot hit public IP(timeout)

I have Elasticsearch running on EC2.
I can hit it from the local IP address (e.g. curl -XGET localhost:9200),
but I cannot hit it from the public IP address, whether on the same machine or from our network; it always times out.
iptables is allowing the traffic,
the port is open (to itself as well as the private network),
Elasticsearch http.cors is enabled and allows "*".
Aside from iptables, the Amazon security config, and the Elasticsearch config, could there be anything I am overlooking? (We can access 443 and get Kibana up; it just times out on the Elasticsearch AJAX call or if I try to access 9200 directly.)
I've been working on this for over a day, so I humbly come to you all!
Thank you.
I had exactly the same issue.
I managed to solve it as follows:
Do what TJ said in his comment, plus restart the instance. I wasn't sure if this was necessary, but I did it for good measure.
I made sure that the following is set in the elasticsearch.yml file:
a. http.enabled: true
b. http.cors.enabled: true
c. http.cors.allow-origin: "*"
Restarted elasticsearch (service elasticsearch restart)
Then when I tried to access elasticsearch from the public IP it worked - http://[PUBLIC IP OF INSTANCE]:9200
Hope this helps.
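For reference, the resulting elasticsearch.yml fragment would look like this (http.enabled has been removed in newer Elasticsearch versions, so omit it there):
http.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"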
I just spent lots of time trying to get this working and just succeeded.
Setup: Elasticsearch 6.2.4, running on a Windows Server 2012 EC2 instance.
I also installed the discovery-ec2 plugin. I'm not sure now if it is required; my assumption is that it is, although some of the settings it enables were not necessary to get this working.
Config (.yml): I tried tons of different .yml config settings which in the end did not help. I think the main setting is:
network.host: 0.0.0.0
I tried setting the network.host to ec2:privateIpv4 and ec2:publicIpv4 (plugin settings) but they didn't help.
I had added the required Custom TCP Rules (allowing 9200 and 9300...not sure if 9300 is needed).
Either it failed to start (usually with a binding to 9300 error) or started but was not publicly accessible.
The fix: what got it working in the end is that you must also open the port in Windows Firewall. As soon as I added the inbound rule, boom, it connected :)
I then stripped out all the extra configs I had been trying, restarted Elasticsearch... and it still worked!
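For anyone scripting it, the inbound rule can also be added from an elevated command prompt, roughly like this (the rule name is arbitrary; add a similar rule for 9300 if you need external transport traffic):
netsh advfirewall firewall add rule name="Elasticsearch 9200" dir=in action=allow protocol=TCP localport=9200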

Performance boost with DNS caching?

IN SHORT:
How would one create a local DNS cache on a Linux system (Ubuntu) so that common queries can run faster, and is it then possible to purge it?
The cache should be populated upon first-queries, not created by hand.
BACKGROUND:
There's a web server up in the cloud which makes connections to itself since the database is currently on the same (virtual) machine. To make it easier for future expansion where the database will be on another server, I've simply pointed the webserver at an address like database.example.com and set the DNS record to 127.0.0.1. The plan is that I can then simply change the DNS record once everything's migrated over. This might seem overkill with just web and database, but there will be other types of servers too (redis, node.js, etc.)
The problem is that when I use the hostname version, it is going very slow (5-10 seconds for session_start). When I use the IP address (i.e. 127.0.0.1), it is very fast (a couple milliseconds).
It seems clear to me that the problem is in DNS, and I believe local caching is a fine solution since it will allow me to manage it all in one place, rather than having to step through the different parts of the system and change configuration.
Install dnsmasq
apt-get install dnsmasq
Lock it down to localhost only by adding the following to /etc/dnsmasq.conf:
listen-address=127.0.0.1
Start the service and verify that it is running:
service dnsmasq start
dig www.google.com @127.0.0.1   # query the local dnsmasq directly
Edit /etc/resolv.conf and add the following as your first line:
nameserver 127.0.0.1
And remove options rotate if present.
Note that you may have scripts automatically rewriting /etc/resolv.conf, in which case you'll have to change those as well (e.g. dhclient or /etc/network/interfaces).
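To see the cache at work, and to answer the purge question: run the same lookup twice and compare the query times, then clear the cache by sending dnsmasq a SIGHUP (restarting the service also works):
dig database.example.com | grep "Query time"   # first lookup: real resolution
dig database.example.com | grep "Query time"   # second lookup: answered from cache, ~0 msec
sudo killall -HUP dnsmasq                      # SIGHUP makes dnsmasq clear its cache
# or: sudo service dnsmasq restart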
