Restoring an Elasticsearch snapshot from Windows to Debian - elasticsearch

I'm trying to restore a snapshot I created on one computer, but I need to restore it onto another. I successfully created a repo and a first snapshot; I assume so because I get results from these queries:
GET /_snapshot/_all
GET /_snapshot/my_repo_1/snapshot_1
My elasticsearch instance is configured to save the snapshots in:
path.repo: D:\BACKUP
So I copied the full folder (with a dir called "my_repo_1" and five other files) and moved it to another computer with a clean instance of elasticsearch (running a different operating system). I configured the elasticsearch.yml file to match the location of the copied folder:
path.repo: /home/BACKUP
And I restarted the service, but when running:
curl 'localhost:9200/_snapshot/_all'
I got an empty response.
Any idea why it isn't working?
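For reference, a filesystem repository is normally registered through the snapshot API rather than discovered on disk, so on the new machine a request along these lines is typically needed before GET /_snapshot/_all returns anything. The repository name is taken from the question; the location is an assumption and must point at the directory that actually contains the copied repository files, inside path.repo:
curl -X PUT 'localhost:9200/_snapshot/my_repo_1' -H 'Content-Type: application/json' -d '{
  "type": "fs",
  "settings": {
    "location": "/home/BACKUP/my_repo_1"
  }
}'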

Related

kubectl not working on my Windows 10 machine

When I try to run any kubectl command, including kubectl version, I get a pop-up saying "This app can't run on your PC. To find a version for your PC, check with the software publisher." When the pop-up is closed, the terminal shows "access denied".
The weird thing is, when I run the "kubectl version" command in the directory where I have downloaded kubectl.exe, it works fine.
I have even added this path to my PATH variable.
Thank you for the answer, #rally
Apparently, on my machine, it was an issue of administrative rights during installation. My workplace's IT added the permission and it worked for me.
Adding this answer here so that if anyone else comes across this problem they can try this solution as well.
Not knowing what exactly you downloaded, I would suggest deleting everything in the folder and following the instructions for installing kubectl for Windows from here:
https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/
Note: downloading the .exe is not enough. You need a kubeconfig file "config", which contains the configuration to access your cluster.
kubectl looks for this file in a hidden folder under your user profile directory: C:\Users\<me>\.kube.
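If it helps to verify which configuration kubectl is actually picking up, something like this should work from a Windows terminal; kubectl config view prints the merged configuration in use, and the KUBECONFIG variable (shown here set to the default path, purely as an example) can point kubectl at a specific file:
kubectl config view
set KUBECONFIG=%USERPROFILE%\.kube\config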
Just so you can try it out, I would suggest activating Kubernetes in your Docker Desktop installation. I guess you have this installed; if not, install it from the Docker site: https://www.docker.com/products/docker-desktop/
Activating Kubernetes inside Docker Desktop will also install kubectl and save the config in the .kube folder.
After the installation has finished, run this in a new terminal:
kubectl get node
You should see the single node of the Docker Desktop Kubernetes cluster.
Now, if you want to access another cluster, you need the kubeconfig file for that cluster. If you have it, just rename the config in the .kube folder (so you don't lose it) and put the other config in its place.
If the new config file is correct you should be able to access that cluster.
The config file can be structured to hold more than one cluster configuration, and you can switch between them using a so-called context.
Here you can find information on how to do that, according to your needs:
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
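For example, listing and switching contexts usually looks like this (context names come from your kubeconfig; docker-desktop is the one Docker Desktop creates):
kubectl config get-contexts
kubectl config use-context docker-desktop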
Hope this helps you get started with Kubernetes.

Elasticsearch path.logs is not working correctly

When I set path.logs in elasticsearch.yml, I see the following behaviour: some logs go to the defined folder, but some files are also always created in the elasticsearch root folder.
So in the logs directory under the elasticsearch root folder I find the PID file, the GC log, and the stderr and stdout files...
When I remove the folder, it's always recreated on startup.
How can I prevent ES from splitting the logs across two folders?
The path.logs setting in elasticsearch.yml changes the path for the elasticsearch logs only.
Logs related to the JVM, like the GC logs, are configured in the jvm.options file, and the PID file location is set when starting up elasticsearch using the -p option.
If you installed elasticsearch using a package manager like yum or apt, you will need to edit the systemd elasticsearch.service and change the PID_DIR variable.
If you are starting elasticsearch from the command line, you will need to pass the PID file location using the -p option, something like -p /path/to/elasticsearch.pid
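As a rough sketch of those two places (the paths are examples, and the exact GC logging flag in jvm.options depends on your Elasticsearch and JDK versions):
# command line: daemonize and choose the PID file location yourself
./bin/elasticsearch -d -p /var/run/elasticsearch/elasticsearch.pid
# jvm.options: the GC log target is configured here, e.g. with JDK 9+ unified logging
-Xlog:gc*:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m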

How do I force a rebuild of a log's data in Filebeat 5

I have Filebeat 5.x shipping logs to Logstash.
How do I reset the “file pointer” in Filebeat?
This is a similar problem to
How to force Logstash to reparse a file?
https://discuss.elastic.co/t/how-do-i-reset-the-file-pointer-in-filebeats/49440
I cleaned all of elasticsearch's data and deleted /var/lib/filebeat/registry, but filebeat only ships the new lines.
Changing registry_file doesn't help either; the file's offset just gets saved to the new registry file (and deleting the file leads to the same problem):
filebeat.registry_file: registry
Stop the filebeat service.
Rename the registry file - usually found in /var/lib/filebeat/registry
Start the filebeat service.
sudo service filebeat stop
mv /var/lib/filebeat/registry /var/lib/filebeat/registry.old
sudo service filebeat start
The Filebeat agent stores all of its state in the registry file. The location of the registry file should be set inside of your configuration file using the filebeat.registry_file configuration option.
I recommend specifying an absolute path in this option so that you know exactly where the file will be located. If you use a relative path then the value is interpreted relative to the ${path.data} directory. On Linux installations, when started as a service or started using the filebeat.sh wrapper, path.data is set to /var/lib/filebeat.
After deleting this registry file, Filebeat will begin reading all files from the beginning (unless you have configured a prospector with tail_files: true).
If you continue to have problems, I recommend looking at the Filebeat log file which will contain a line stating where the registry file is located. For example:
2017/01/18 18:51:31.418587 registrar.go:85: INFO Registry file set to: /var/lib/filebeat/registry
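For example, pinning the registry to an absolute path in filebeat.yml (the option name is the Filebeat 5.x one used earlier in this question; the path is just the usual Linux default):
# filebeat.yml - use an absolute path so the registry location is unambiguous
filebeat.registry_file: /var/lib/filebeat/registry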
As already mentioned here, stopping the filebeat service, deleting the registry file(s) and restarting the service is correct.
I just wanted to add for Windows users: if you haven't specified a unique location for filebeat.registry_file, it will likely default to ${path.data}/registry, which, somewhat confusingly, is the C:\ProgramData\filebeat directory mentioned by the folks at Elastic.
In my case I had to show hidden files before it was displayed.
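If you prefer the command line, PowerShell can list that hidden directory directly (the path assumes the default location mentioned above):
Get-ChildItem -Force C:\ProgramData\filebeat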

Unable to find TeamCity 9.1.x data directory

This is really weird.
I am trying a clean Teamcity 9.1.1 install but the Data Directory is nowhere to be found.
If I access the Global Settings tab under Administration, it lists "C:\Windows\System32\config\systemprofile\.BuildServer" - a folder that doesn't exist.
If I try to browse to that folder, it shows me a range of files; uploading a specific file there instead uploads it to C:\Windows\SysWOW64\config\systemprofile\.BuildServer.
There is no teamcity-startup.properties file anywhere - I am unable to customize the location of the data directory.
When I restore a backup, the backup files are instead restored to C:\Users\[user name]\.BuildServer rather than to the correct data directory.
Does anyone have any suggestions on how to regain control of the situation? How can I tell TeamCity which data folder to use?
I resolved the situation by:
stopping TC services;
creating a teamcity-startup.properties in [install folder]\conf with the following content:
teamcity.data.path=D:\\[install folder]\\config
restarting TC services;
restoring my backup.
This restored the 9.1.1 install and stabilized the location of the data directory. After this was done, the subsequent installation of 9.1.7 prompted me to uninstall 9.1.1 first (which it hadn't done the first time around) and the upgrade succeeded.
I believe the system was already compromised at the beginning, unknown to me, due to the data folder being all over the place. Once that was resolved, everything else fell into place.

ElasticSearch installed, but installing Kibana on localhost?

I'd like to view my machine's syslogs more beautifully on an Ubuntu desktop. I notice that all the Kibana documentation is oriented towards remote servers (which makes sense). However, how would I securely view the same information about my local machine?
Here are some things I've read that were not helpful because they were designed for remote access:
https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-logs-on-centos-7
Kibana deployment issue on server . . . client not able to access GUI
http://www.elasticsearch.org/overview/kibana/installation/ which has the following problems:
There is no config.js to open in an editor per step 2; you can see this very plainly on their GitHub page: https://github.com/elasticsearch/kibana
running
~/kibana/src/server/bin$ bash kibana.sh
The Kibana Backend is starting up... be patient
Error: Unable to access jarfile ./../lib/kibana.jar
How do I install kibana locally?
Not sure if you're still looking for an answer, but for future searchers:
What you can do is download elasticsearch - http://www.elasticsearch.org/overview/elkdownloads/
Extract it and create a plugins subdirectory. Then, within the /plugins directory, create a /kibana/_site subdirectory.
Then download kibana using the above-mentioned link. Extract the archive, then edit config.js to point to localhost as the elasticsearch host:
elasticsearch: "http://localhost:9200",
Copy all of the contents of the folder you extracted kibana into to the /kibana/_site directory you created inside the elasticsearch folder.
Then start elasticsearch:
within the elasticsearch directory -
bin/elasticsearch
Kibana will now run off of the same 'server' as elasticsearch, on your local host.
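Putting those steps together as a quick sketch (run from inside the extracted elasticsearch directory; the path to the extracted kibana files is a placeholder):
# create the site-plugin directory and copy the extracted kibana files into it
mkdir -p plugins/kibana/_site
cp -r /path/to/extracted-kibana/. plugins/kibana/_site/
# start elasticsearch, which serves _site plugins over its HTTP port
bin/elasticsearch
The dashboard should then be reachable at http://localhost:9200/_plugin/kibana/ (assuming the default port).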
UPDATE: Kibana 4 comes bundled with a web server now: see the docs
