How to do _cat/indices/<index_name_with_reg_ex> with the Java API? - elasticsearch

I have some indices named test-1-in, test-2-in, and test-3-in. I want to do _cat/indices/test-*-in from the Java API. How can I do this?
I tried using the IndicesAdminClient but had no luck.

Given an Elasticsearch Client object (note that the indices argument takes wildcard patterns such as test-*-in, not full regular expressions):
client.admin().indices()
    .getIndex(new GetIndexRequest().indices("test-*-in"))
    .actionGet().getIndices();

In addition to Mario's answer, use the following to retrieve the indices with the Elasticsearch 6.4.0 high level REST client:
GetIndexRequest request = new GetIndexRequest().indices("*");
GetIndexResponse response = client.indices().get(request, RequestOptions.DEFAULT);
String[] indices = response.getIndices();

I have a solution:
final ClusterStateRequest clusterStateRequest = new ClusterStateRequest();
clusterStateRequest.clear().metaData(true);
final IndicesOptions strictExpandIndicesOptions = IndicesOptions.strictExpand();
clusterStateRequest.indicesOptions(strictExpandIndicesOptions);
ClusterStateResponse clusterStateResponse = client.admin().cluster().state(clusterStateRequest).get();
clusterStateResponse.getState().getMetaData().getIndices();
This returns all indices; the wildcard matching then has to be done manually. This is what the _cat implementation itself does in the Elasticsearch source code.
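That manual matching step can be sketched on its own, independent of the client call. Here is a minimal sketch in Python (standing in for the equivalent wildcard check in Java), where `fnmatch.translate` turns the shell-style wildcard into a regular expression:

```python
import fnmatch
import re

def filter_indices(all_indices, pattern):
    """Keep only index names matching a _cat-style wildcard pattern,
    e.g. "test-*-in". fnmatch.translate turns the wildcard into a
    regular expression, which is then applied to every name."""
    regex = re.compile(fnmatch.translate(pattern))
    return [name for name in all_indices if regex.match(name)]

indices = ["test-1-in", "test-2-in", "test-3-in", "other-index"]
print(filter_indices(indices, "test-*-in"))
# → ['test-1-in', 'test-2-in', 'test-3-in']
```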

In case you want to cat indices with the ?v option:
IndicesStatsRequestBuilder indicesStatsRequestBuilder =
    new IndicesStatsRequestBuilder(client, IndicesStatsAction.INSTANCE);
IndicesStatsResponse response = indicesStatsRequestBuilder.execute().actionGet();
for (Map.Entry<String, IndexStats> m : response.getIndices().entrySet()) {
    System.out.println(m);
}
Each of the entries contains the document count, storage usage, etc. You can run this for all indices or filter it to specific ones.
P.S.: tested with version 5.6.0.
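The ?v flag in _cat/indices?v only adds a header row; producing the same table-style output from the stats above is just formatting. A sketch in Python for brevity (the index/docs/bytes keys are illustrative placeholders, not the actual IndexStats accessors):

```python
def format_cat_indices(stats, verbose=True):
    """Render per-index stats as a _cat/indices-style table.
    `verbose` mimics the ?v flag: it controls the header row."""
    headers = ["index", "docs.count", "store.size"]
    rows = [[s["index"], str(s["docs"]), str(s["bytes"])] for s in stats]
    table = ([headers] if verbose else []) + rows
    # Pad each column to the widest cell so columns line up.
    widths = [max(len(row[i]) for row in table) for i in range(len(headers))]
    return "\n".join(
        " ".join(cell.ljust(w) for cell, w in zip(row, widths))
        for row in table
    )

print(format_cat_indices([{"index": "test-1-in", "docs": 120, "bytes": 4096}]))
```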


Elastic-Cloud Not Receiving Data from Serilog Sink

I set up Elastic Cloud to offload my local Elasticsearch config (as one does), but for reasons unknown to me, I can't get it to show any logs in Elastic Cloud, despite it working fine locally.
The code I have (modified for privacy reasons):
//var uri = new Uri("http://localhost:9200"); // old one
var uri = new Uri("https://my-server.kb.eastus2.azure.elastic-cloud.com:9243");
var sinkOptions = new ElasticsearchSinkOptions(uri)
{
    AutoRegisterTemplate = true,
    ModifyConnectionSettings = x => x.BasicAuthentication("elastic", "the password I was given"),
    IndexFormat = $"test-logs-{env.EnvironmentName?.ToLower().Replace('.', '-')}-{DateTime.Now:yyyy-MM}",
};
Log.Logger = new LoggerConfiguration()
    .ReadFrom.Configuration(config)
    .Enrich.FromLogContext()
    .Enrich.WithMachineName()
    .WriteTo.Console()
    .WriteTo.Elasticsearch(sinkOptions)
    .Enrich.WithProperty("Environment", env.EnvironmentName)
    .CreateLogger();
There are two possible reasons I can think of that might be the cause of this not working:
The credentials are wrong
The Uri is wrong
Every solution I've been given so far has presented the data in this fashion, and nowhere does it say what the URI I'm supposed to use should look like.
I get no errors.
I get no warnings.
I get no logs.
What am I doing wrong here?
The issue was an incorrect URI. I wrote
my-server.kb.eastus2.azure.elastic-cloud.com:9243 rather than
my-server.es.eastus2.azure.elastic-cloud.com:9243.
Note the very small difference: kb (Kibana) vs es (Elasticsearch) in the URL.
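Since Elastic Cloud hands out separate endpoints per product, a small sanity check on the host name can catch this mistake before logs are silently dropped. A sketch of such a check (this helper is hypothetical, not part of the Serilog sink), shown in Python for brevity:

```python
from urllib.parse import urlparse

def looks_like_elasticsearch_endpoint(url):
    """Heuristic check: Elastic Cloud Elasticsearch hosts contain
    ".es.", while Kibana hosts contain ".kb.". Only the former
    accepts writes from a log sink."""
    host = urlparse(url).hostname or ""
    return ".es." in host

print(looks_like_elasticsearch_endpoint(
    "https://my-server.kb.eastus2.azure.elastic-cloud.com:9243"))  # → False
print(looks_like_elasticsearch_endpoint(
    "https://my-server.es.eastus2.azure.elastic-cloud.com:9243"))  # → True
```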

Can't figure out serialization error in Elasticsearch Python API

My code is displayed below.
client = Elasticsearch(url, http_auth = (username, password), verify_certs = False, read_timeout=50, terminate_after=25000)
examplename = 'GEOS.fp.asm.inst1_2d_smp_Nx.20180508_1700.V01.nc4'
s = Search(using = client, index = [set_index]).source(['metadata.Filename'])\
.query('match', Filename={examplename})
total = s.count()
The error message is:
elasticsearch.exceptions.SerializationError: ({'query': {'match': {'Filename': set(['GEOS.fp.asm.inst1_2d_smp_Nx.20180508_1700.V01.nc4'])}}}, TypeError("Unable to serialize set(['GEOS.fp.asm.inst1_2d_smp_Nx.20180508_1700.V01.nc4']) (type: <type 'set'>)",))
In general, I don't need my search term to match the whole document. So for example if the document is named GEOS.fp.asm.inst1_2d_smp_Nx.20180508_1700.V01.nc4, I want that document to be returned if I query for 20180508.
In .query('match', Filename={examplename}) you are passing a set ({examplename} is a Python set literal), which is not JSON serializable. It should have been just .query('match', Filename=examplename). Hope this helps!
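The root cause is reproducible with the standard json module alone, independent of Elasticsearch: {examplename} builds a one-element set, and sets are not JSON serializable, while the bare string is:

```python
import json

examplename = 'GEOS.fp.asm.inst1_2d_smp_Nx.20180508_1700.V01.nc4'

# {examplename} builds a one-element set - this is what broke the query:
try:
    json.dumps({'match': {'Filename': {examplename}}})
except TypeError as err:
    print('not serializable:', err)

# Passing the plain string yields a valid query body:
body = json.dumps({'match': {'Filename': examplename}})
```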

Ignite 2.4.0 - SqlQuery results do not match with results of query from H2 console

We implemented a caching solution using ignite 2.0.0 version for data structure that looks like this.
public class EntityPO {
    @QuerySqlField(index = true)
    private Integer accessZone;

    @QuerySqlField(index = true)
    private Integer appArea;

    @QuerySqlField(index = true)
    private Integer parentNodeId;

    private Integer dbId;
}
List<EntityPO> nodes = new ArrayList<>();
SqlQuery<String, EntityPO> sql =
    new SqlQuery<>(EntityPO.class, "accessZone = ? and appArea = ? and parentNodeId is not null");
sql.setArgs(accessZoneId, appArea);

CacheConfiguration<String, EntityPO> cacheconfig = new CacheConfiguration<>(cacheName);
cacheconfig.setCacheMode(CacheMode.PARTITIONED);
cacheconfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheconfig.setIndexedTypes(String.class, EntityPO.class);
cacheconfig.setOnheapCacheEnabled(true);
cacheconfig.setBackups(numberOfBackUpCopies);
cacheconfig.setName(cacheName);
cacheconfig.setQueryParallelism(1);
cache = ignite.getOrCreateCache(cacheconfig);
We have a method that looks for nodes in a particular accessZone and appArea. This method works fine in 2.0.0. We upgraded to the latest version, 2.4.0, and the method no longer returns anything (zero records). We enabled the H2 debug console, ran the same query there, and we do see the same records, at least 3k of them. Downgrading the library back to 2.0.0 makes the code work again. Please let me know if you need more information to help with this question.
Results from H2 console.
If you use persistence, please check the baseline topology for your cluster.
Baseline Topology is a major feature introduced in version 2.4.
Briefly, the baseline topology is the set of server nodes that can store data. Most probably the cause of your issue is that one or several server nodes are not in the baseline.

How to save BigQuery query results to another table?

I want to save query results into a new table.
With the BigQuery online editor like bigquery.cloud.google I can easily do it with the micro-solution from Felipe Hoffa.
Results with ~150,000,000 rows are inserted within several seconds.
But how do I run a query with "Destination Table" parameters via the BigQuery API?
By using the Jobs.insert API call.
For example, in Java:
[...]
TableReference tableRef = new TableReference();
tableRef.setProjectId("<project>");
tableRef.setDatasetId("<dataset>");
tableRef.setTableId("<name>");

JobConfigurationQuery queryConfig = new JobConfigurationQuery();
queryConfig.setDestinationTable(tableRef);
queryConfig.setAllowLargeResults(true);
queryConfig.setQuery("some sql");
// Dispositions are plain strings in this API:
queryConfig.setCreateDisposition("CREATE_IF_NEEDED");
queryConfig.setWriteDisposition("WRITE_APPEND");

JobConfiguration config = new JobConfiguration().setQuery(queryConfig);
Job job = new Job();
job.setConfiguration(config);

Bigquery.Jobs.Insert insert = bigquery.jobs().insert("<projectid>", job);
JobReference jobId = insert.execute().getJobReference();
[...]
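For comparison, here is what that job configuration looks like as the raw Jobs.insert request body, i.e. the JSON the Java client assembles under the hood (the project, dataset, and table ids are placeholders):

```python
import json

# Request body for POST .../bigquery/v2/projects/<projectid>/jobs
job = {
    "configuration": {
        "query": {
            "query": "some sql",
            "allowLargeResults": True,
            "destinationTable": {
                "projectId": "<project>",
                "datasetId": "<dataset>",
                "tableId": "<name>",
            },
            "createDisposition": "CREATE_IF_NEEDED",
            "writeDisposition": "WRITE_APPEND",
        }
    }
}
print(json.dumps(job, indent=2))
```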

Importing binary data to parse.com

I'm trying to import data to parse.com so I can test my application (I'm new to Parse and I've never used JSON before).
Can you please give me an example of a JSON file that I can use to import binary files (images)?
NB: I'm trying to upload my data in bulk directly from the Data Browser. Here is a screencap: i.stack.imgur.com/bw9b4.png
In the Parse docs, I think two sections could help you out, depending on whether you want to use the REST API or the Android SDK.
REST API - see the section on POST, covering files that can be uploaded to Parse using a REST POST.
SDK - see the section on "Files".
The code for the REST approach includes the following:
Use an HttpClient implementation that has a ByteArrayEntity class (or something similar), map your image into the ByteArrayEntity, and POST it with the correct MIME-type headers in HttpClient:
case POST:
    HttpPost httpPost = new HttpPost(url); // url ends with "audio" or "pic"
    httpPost.setProtocolVersion(new ProtocolVersion("HTTP", 1, 1));
    httpPost.setConfig(this.config);
    if (mfile.canRead()) {
        FileInputStream fis = new FileInputStream(mfile);
        // Get the file's size and map it into memory
        FileChannel fc = fis.getChannel();
        int sz = (int) fc.size();
        MappedByteBuffer bb = fc.map(FileChannel.MapMode.READ_ONLY, 0, sz);
        byte[] data2 = new byte[bb.remaining()];
        bb.get(data2);
        ByteArrayEntity reqEntity = new ByteArrayEntity(data2);
        httpPost.setEntity(reqEntity);
        fis.close();
    }
    // ...
    httpPost.addHeader("Content-Type", "image/*");
    // then post a Runnable to execute the HTTP request
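Stripped of the HttpClient details, the REST upload boils down to a POST of the raw bytes to the files endpoint with the Parse auth headers. A sketch that only builds (does not send) such a request, shown in Python for brevity; the key values are placeholders:

```python
def build_parse_file_upload(app_id, rest_key, filename, content_type):
    """Build the URL and headers for uploading a file via the Parse
    REST API: POST /1/files/<filename> with the raw bytes as body."""
    url = "https://api.parse.com/1/files/" + filename
    headers = {
        "X-Parse-Application-Id": app_id,
        "X-Parse-REST-API-Key": rest_key,
        "Content-Type": content_type,
    }
    return url, headers

url, headers = build_parse_file_upload(
    "<app-id>", "<rest-key>", "pic.jpg", "image/jpeg")
```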
The only binary data that can be uploaded to parse.com is images. For other cases, such as files or streams, the most suitable solution is to store in Parse a link to the binary data, kept in separate storage dedicated to that type of information.
