I tried to export data from "Discover" to CSV, with no luck.
I read on Google that version 6 supports exporting, but the only examples I found used either ES 6.5 or the "Visualize" screen.
I also tried to export the data using "Dev Tools", with no luck.
Currently I have access to the ES instance (port 9200). Is there any way to export data from the command line?
My goal is to copy some data from one index to another on the same ES cluster.
Thanks!
As @val mentioned in the comments, you can use the Reindex API. In that API you can specify a query with filters that match only the data you want to move to the new index.
See full info here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html
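A minimal sketch of such a reindex call with curl; the index names "source_index"/"dest_index" and the "status" field are placeholders for your own:

```shell
# Copy only the documents matching the query from one index to another
# on the same cluster. Adjust the query to select the data you want.
curl -X POST 'localhost:9200/_reindex' -H 'Content-Type: application/json' -d'
{
  "source": {
    "index": "source_index",
    "query": { "term": { "status": "published" } }
  },
  "dest": { "index": "dest_index" }
}'
```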
More reference on exporting:
Export to csv/excel from kibana
Related
I am new to Elasticsearch and need to move the data for an index from my local environment to the acceptance environment.
How can I do this? Using the ES APIs?
Is there a way I can export the data and then import it?
Your two options are:
snapshot and restore - https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-restore.html
remote reindex - https://www.elastic.co/guide/en/elasticsearch/reference/7.15/reindex-upgrade-remote.html
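A hedged sketch of the remote-reindex option, run with curl against the acceptance cluster; the host names and index name are placeholders, and the source host must also be whitelisted via `reindex.remote.whitelist` in the destination's elasticsearch.yml:

```shell
# Pull documents from the remote (local-environment) cluster into the
# cluster this request is sent to.
curl -X POST 'localhost:9200/_reindex' -H 'Content-Type: application/json' -d'
{
  "source": {
    "remote": { "host": "http://local-env-host:9200" },
    "index": "my_index"
  },
  "dest": { "index": "my_index" }
}'
```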
I am new to Elasticsearch. This question might sound weird, but is it possible to index a file automatically (i.e. given a file path, Elasticsearch indexes its contents automatically)? I have found open-source tools like elasticdump and tried using them for this purpose, but I would prefer an Elasticsearch plugin that supports almost all Elasticsearch versions. Can anyone suggest one?
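For reference, elasticdump (mentioned above) is a command-line tool, typically invoked like this; the URLs, index name, and file name below are placeholders:

```shell
# Dump an index's documents to a newline-delimited JSON file...
elasticdump --input=http://localhost:9200/my_index --output=my_index_data.json --type=data

# ...then load that file into an index on another cluster.
# (Use --type=mapping first if you also want to copy the mappings.)
elasticdump --input=my_index_data.json --output=http://other-host:9200/my_index --type=data
```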
I am trying to use Elasticsearch with my Neo4j database for fast querying. I tried many sites, but they are all old articles, so I didn't get a clear idea. Steps I followed until now:
Installed Neo4j
Installed Elasticsearch
Copied the Elasticsearch plugins into the Neo4j plugins folder
Added these lines to the neo4j.properties file:
elasticsearch.host_name=http://localhost:9200
elasticsearch.index_spec=people:Person(first_name,last_name), places:Place(name)
My question is:
How are Elasticsearch and Neo4j integrated? Please clarify this for me.
I followed this ,
Link
You have to install the APOC procedures plugin (https://github.com/neo4j-contrib/neo4j-apoc-procedures). The documentation about the ES integration is here: ES Integration with Apoc procedures
[edit]
download apoc.jar and drop it into Neo4j's plugins directory, matching the targeted Neo4j version
restart Neo4j
in the Neo4j web browser, launch the following Cypher query to list all ES procedures:
CALL apoc.help("apoc.es")
Sample query for logs:
CALL apoc.es.getRaw("localhost","_search?q=level:ERROR",null)
YIELD value
UNWIND value.hits.hits as hits
RETURN hits LIMIT 100
The recommended way is to store the ES host in neo4j.conf by adding a key (then restart Neo4j):
apoc.es.myKey.url=localhost
Then the query looks like:
CALL apoc.es.getRaw("myKey","_search?q=level:ERROR",null)
YIELD value
UNWIND value.hits.hits as hits
RETURN hits LIMIT 100
For those of you who already have the APOC plugin installed and accessible but don't have access to the neo4j.properties file (or are more comfortable working with ES through curl), you can do this without apoc.es.getRaw and instead use the JSON returned by apoc.load.json:
WITH "http://myelasticurl:9200/my_index/_search?q=level:ERROR" as search_url
CALL apoc.load.json(search_url) YIELD value
UNWIND value.hits.hits as hit
WITH hit._source as source
...
// do work
...
I'm new to Elasticsearch and am still trying to set it up. I have installed Elasticsearch 5.5.1 with default values, and I have also installed Kibana 5.5.1 with default values. I've also installed the ingest-attachment plugin along with the latest X-Pack plugin. Elasticsearch is running as a service, and I have Kibana open in my browser.

On the Kibana dashboard I have an error stating that it is unable to fetch mappings. I guess this is because I haven't set up any indices or pipelines yet. This is where I need some guidance; all the documentation I've found online so far isn't particularly clear.

I have a directory with a mixture of document types, such as PDF and DOC files. My ultimate goal is to be able to search these documents with values that a user will enter via an app. I'm guessing the next step is to use the Dev Tools console in Kibana with a PUT command to create a pipeline, but I'm unsure how to point it at my directory of documents. Can anybody provide an example for this version, please?
If I understand you correctly, let's first establish a basic understanding of Elasticsearch:
Elasticsearch, in its simplest definition, is a "search engine": you store some data, and Elasticsearch helps you search it using search criteria and retrieves the relevant data.
You need a "container" to save your data to, and like any database engine, Elasticsearch has one, but the terms are somewhat different. For example, a "database" in SQL-like systems is called an "index", and what you know as a "table" is called a "type" in Elasticsearch.
From my understanding, you will need to create your index (with or without mappings) as a starting point. I recommend starting without mappings just to get things working, but later on it is highly recommended to define mappings where applicable, because Elasticsearch is smart, but it cannot know more about your data than you do.
Kibana complained because it failed to find a proper index to start with; it asks you to provide either an index-name pattern or a specific index name so it can infer the mappings and give you the nice features of querying, displaying charts, etc. of your data. Once you create your index, provide its name on Kibana's start page and you will be ready to go.
Let me know if you need something more specific to your needs :)
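To make this concrete, here is a minimal sketch of the first steps with curl, assuming ES 5.x conventions (a type name such as "doc" in the document path); the index name "documents", pipeline id "attachment", and the file path are placeholders. Note that ES does not read files from a directory on its own: you encode each file to base64 and push it yourself.

```shell
# 1) Create an ingest pipeline that runs the ingest-attachment processor
#    on the base64-encoded file content stored in the "data" field.
curl -X PUT 'localhost:9200/_ingest/pipeline/attachment' -H 'Content-Type: application/json' -d'
{
  "description": "Extract text from attached files",
  "processors": [
    { "attachment": { "field": "data" } }
  ]
}'

# 2) Create the index (no explicit mappings; ES infers them dynamically).
curl -X PUT 'localhost:9200/documents'

# 3) Index one document through the pipeline; "data" holds the base64 bytes
#    of the file (base64 -w0 is GNU coreutils; on macOS drop the -w0 flag).
curl -X PUT 'localhost:9200/documents/doc/1?pipeline=attachment' -H 'Content-Type: application/json' -d'
{ "data": "'"$(base64 -w0 /path/to/file.pdf)"'" }'
```

A loop over the files in your directory would repeat step 3 once per file.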
I am trying to get up to speed on an Elasticsearch implementation in a project. How can I see the data that is on the cluster? Is there a command-line tool that gives me information on the schema?
To get schema:
curl -XGET 'http://loadtest-appserver1:9200/myIndex/_mapping'
See Elasticsearch Api Doc
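To see what data is on the cluster (not just the schema), the _cat and _search APIs work from the command line too; "myIndex" is a placeholder:

```shell
# List all indices with doc counts and on-disk sizes (?v adds a header row).
curl -XGET 'http://localhost:9200/_cat/indices?v'

# Peek at a few documents from one index.
curl -XGET 'http://localhost:9200/myIndex/_search?size=5&pretty'
```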
Try using ElasticSearch Head
http://mobz.github.io/elasticsearch-head/
It's a great tool for peeking into your index and its metadata (such as the schema) to find out what's going on.
It's also HTML5/REST based, so you can watch in your browser the commands it sends to your cluster and reuse them with command-line curl if needed.