I want to get the information that is returned by this query from SolrNet:
http://localhost:8983/solr/terms?terms.fl=Title&terms.sort=index&terms.limit=10000&terms.mincount=10&version=2.2
SolrNet has support for specifying the terms parameters, but I don't know how to retrieve the terms information. Executing a query goes through the select query handler, which throws an error. Here is what I have so far:
var solr = ServiceLocator.Current.GetInstance<ISolrOperations<ProductAdvertisementWidget>>();

int termLimit = 10000;
int minCount = 10;

var termParameters = new TermsParameters("Title")
{
    MinCount = minCount,
    Limit = termLimit,
    Sort = TermsSort.Index
};

var options = new QueryOptions
{
    Terms = termParameters,
    Rows = 0
};

IEnumerable<KeyValuePair<string, int>> terms =
    solr.Query(new SolrQuery(""), options).Terms
        .FirstOrDefault(x => x.Field == SolrFieldMappings.Name).Terms;
SolrNet translates that into this query, which fails:
http://localhost:8983/solr/select?q=&terms.fl=Title&terms.sort=index&terms.limit=10000&terms.mincount=10&version=2.2
The error message is:
HTTP ERROR 500
Problem accessing /solr/select. Reason:
    null

java.lang.NullPointerException
    at java.io.StringReader.<init>(Unknown Source)
    at org.apache.lucene.queryParser.QueryParser.parse(QueryParser.java:203)
    at org.apache.solr.search.LuceneQParser.parse(LuceneQParserPlugin.java:80)
    at org.apache.solr.search.QParser.getQuery(QParser.java:142)
    at org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:81)
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:173)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:1368)
    at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
This NullPointerException is a bug in Lucene; please report it. Lucene should not throw such a generic NullPointerException without any other information to help the user find the cause. Consider submitting a patch.
That said, you should not pass an empty query to Solr. Use SolrQuery.All with 0 rows instead if you only want terms information.
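A minimal sketch of that approach, reusing the solr instance and termParameters from the question (SolrQuery.All is the match-all query):

// Match all documents but return no rows; only the terms section is of interest.
var options = new QueryOptions
{
    Terms = termParameters,
    Rows = 0
};
var results = solr.Query(SolrQuery.All, options);
var titleTerms = results.Terms.FirstOrDefault(t => t.Field == "Title");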
Also make sure you have the TermsComponent correctly configured in the default Solr request handler.
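For reference, a TermsComponent registration in solrconfig.xml typically looks roughly like this (a sketch; component names and handler layout may differ in your config):

<!-- sketch: register the component and a dedicated /terms handler -->
<searchComponent name="terms" class="solr.TermsComponent"/>

<requestHandler name="/terms" class="solr.SearchHandler">
  <lst name="defaults">
    <bool name="terms">true</bool>
  </lst>
  <arr name="components">
    <str name="terms"/>
  </arr>
</requestHandler>

If the terms component should run through the default /select handler instead, list it under that handler's last-components.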
I am using the following code with RestHighLevelClient in Elasticsearch.
// conf, newListOfMap, indexName and client are defined elsewhere in the class.
import org.apache.http.HttpHost
import org.apache.http.auth.{AuthScope, UsernamePasswordCredentials}
import org.apache.http.client.config.RequestConfig
import org.apache.http.impl.client.BasicCredentialsProvider
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder
import org.elasticsearch.action.bulk.BulkRequest
import org.elasticsearch.action.index.IndexRequest
import org.elasticsearch.client.RestClientBuilder.HttpClientConfigCallback
import org.elasticsearch.client.{HttpAsyncResponseConsumerFactory, RequestOptions, RestClient, RestClientBuilder, RestHighLevelClient}
import scala.collection.JavaConverters._

val credentialsProvider = new BasicCredentialsProvider
credentialsProvider.setCredentials(AuthScope.ANY,
  new UsernamePasswordCredentials(conf.value.getString("elkUserName"), conf.value.getString("elkPassword")))

val builder = RestClient.builder(new HttpHost(conf.value.getString("elkIp"), Integer.valueOf(conf.value.getString("elkPort"))))
  .setRequestConfigCallback(new RestClientBuilder.RequestConfigCallback() {
    // Set connect and socket timeouts.
    override def customizeRequestConfig(requestConfigBuilder: RequestConfig.Builder): RequestConfig.Builder =
      requestConfigBuilder
        .setConnectTimeout(Integer.valueOf(conf.value.getString("elkWriteTimeOut")))
        .setSocketTimeout(Integer.valueOf(conf.value.getString("elkWriteTimeOut")))
  })
  .setHttpClientConfigCallback(new HttpClientConfigCallback() {
    override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder =
      httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
  })
client = new RestHighLevelClient(builder)

// Allow a large (1 GB) heap buffer for the bulk response.
val requestBuilder = RequestOptions.DEFAULT.toBuilder
requestBuilder.setHttpAsyncResponseConsumerFactory(
  new HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory(1024 * 1024 * 1024))

var request = new BulkRequest()
request.setRefreshPolicy("wait_for")

// Add one index request per document map.
newListOfMap.foreach { vals =>
  val newMap = vals.asJava
  request.add(new IndexRequest(indexName).source(newMap))
}
client.bulk(request, requestBuilder.build)
But I am getting the following exception:
java.lang.NoSuchMethodError: org.apache.http.ConnectionClosedException: method <init>()V not found
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.endOfInput(HttpAsyncRequestExecutor.java:356)
at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:261)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
at java.lang.Thread.run(Thread.java:748)
org.apache.http.ConnectionClosedException: Connection closed unexpectedly
at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:778)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:218)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:205)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1454)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1424)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1394)
at org.elasticsearch.client.RestHighLevelClient.bulk(RestHighLevelClient.java:492)
at Utils.ELKUtil$.postDataToELK(ELKUtil.scala:59)
NOTE: The above code works for smaller requests; I only get the above error when posting larger requests. Please suggest.
If your project uses both the httpcore and httpcore-nio dependencies, ensure that the two versions are either both <= 4.4.10 or both > 4.4.10.
The version mismatch Harshit's suggestion points at is what caused this issue in my case. On closer inspection of the logs, it is evident that HttpAsyncRequestExecutor calls the default (no-argument) constructor of ConnectionClosedException; however, that constructor does not exist in the version of httpcore on the classpath.
HttpAsyncRequestExecutor is a class in the httpcore-nio package; ConnectionClosedException is a class in the httpcore package. The incompatibility starts after v4.4.10: from that point on, ConnectionClosedException has a default constructor, which HttpAsyncRequestExecutor calls. In versions <= 4.4.10 the default constructor is not present, and HttpAsyncRequestExecutor instead calls the constructor that takes a message parameter. Thus, when the two libraries are used together, both versions should be on the same side of v4.4.10.
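For example, with Maven the two artifacts can be pinned to the same release (4.4.12 below is only an illustrative version; any pair on the same side of the 4.4.10 boundary should work):

<!-- keep httpcore and httpcore-nio on the same side of 4.4.10 -->
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpcore</artifactId>
  <version>4.4.12</version>
</dependency>
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpcore-nio</artifactId>
  <version>4.4.12</version>
</dependency>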
We had a similar issue: the fix was that our service had httpcore-4.4.3, whereas the Elasticsearch client requires httpcore-4.4.12. So we updated all HTTP dependencies to the versions needed by elasticsearch-rest-high-level-client:
https://mvnrepository.com/artifact/org.elasticsearch.client/elasticsearch-rest-client/7.5.2
I am using the Elasticsearch SDK 2.3 and Jest 2.0.0 to create a client. I am trying to implement an average aggregation (and other aggregations) that retrieves results from a certain time period.
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();

// Query builder
searchSourceBuilder.query(QueryBuilders.matchAllQuery());

GregorianCalendar gC = new GregorianCalendar();
gC.set(2016, Calendar.OCTOBER, 18, 0, 0, 0);
long from = gC.getTimeInMillis();
gC.add(Calendar.MINUTE, 15);
long to = gC.getTimeInMillis();
searchSourceBuilder.postFilter(QueryBuilders.rangeQuery("timestamp").from(from).to(to));

JestClient client = getJestClient();

AvgBuilder aggregation2 = AggregationBuilders
        .avg(AvgAggregation.TYPE)
        .field("backend_processing_time");

AggregationBuilder ag = AggregationBuilders.dateRange(aggregation2.getName())
        .addRange("timestamp", from, to)
        .subAggregation(aggregation2);

searchSourceBuilder.aggregation(ag);

String query = searchSourceBuilder.toString();
Search search = new Search.Builder(query)
        .addIndex(INDEX)
        .addType(TYPE)
        // .addSort(new Sort("code"))
        .setParameter(Parameters.SIZE, 5)
        // .setParameter(Parameters.SCROLL, "5m")
        .build();

SearchResult result = client.execute(search);
System.out.println("ES Response with aggregation:\n" + result.getJsonString());
And the error that I am getting is the following:
{"error":{"root_cause":[{"type":"aggregation_execution_exception","reason":"could not find the appropriate value context to perform aggregation [avg]"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"elbaccesslogs_2016_10","node":"5ttEmYcTTsie-z23OpKY0A","reason":{"type":"aggregation_execution_exception","reason":"could not find the appropriate value context to perform aggregation [avg]"}}]},"status":500}
It looks like the reason is 'could not find the appropriate value context to perform aggregation [avg]', but I don't really know what is going on. Can anyone suggest anything? If you need more info before responding, please let me know.
Thanks,
Khurram
I have found the solution myself, so I am explaining it below and closing the issue.
Basically, the line

    searchSourceBuilder.postFilter(QueryBuilders.rangeQuery("timestamp").from(from).to(to));

does not work with aggregations, because a post filter is applied after aggregations are computed. We need to remove the postFilter call, as well as the following aggregation code:
AggregationBuilder ag = AggregationBuilders.dateRange(aggregation2.getName())
        .addRange("timestamp", from, to)
        .subAggregation(aggregation2);
And change the following code too:
searchSourceBuilder.query(QueryBuilders.matchAllQuery());
to
searchSourceBuilder.query(
        QueryBuilders.boolQuery()
                .must(QueryBuilders.matchAllQuery())
                .filter(QueryBuilders.rangeQuery("timestamp").from(from).to(to))
);
So here is the entire code again:
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();

GregorianCalendar gC = new GregorianCalendar();
gC.set(2016, Calendar.OCTOBER, 18, 0, 0, 0);
long from = gC.getTimeInMillis();
gC.add(Calendar.MINUTE, 15);
long to = gC.getTimeInMillis();

// Put the range in the bool query's filter clause, so aggregations only see the time window.
searchSourceBuilder.query(
        QueryBuilders.boolQuery()
                .must(QueryBuilders.matchAllQuery())
                .filter(QueryBuilders.rangeQuery("timestamp").from(from).to(to))
);

JestClient client = getJestClient(); // just a private method creating a client the way a Jest client is created

AvgBuilder avg_agg = AggregationBuilders
        .avg(AvgAggregation.TYPE)
        .field("backend_processing_time");

searchSourceBuilder.aggregation(avg_agg);

String query = searchSourceBuilder.toString();
Search search = new Search.Builder(query)
        .addIndex(INDEX)
        .addType(TYPE)
        .setParameter(Parameters.SIZE, 10)
        .build();

SearchResult result = client.execute(search);
System.out.println("ES Response with aggregation:\n" + result.getJsonString());
I have a strange error. When I call SearchMailboxes, I get this error:
Unhandled Exception: Microsoft.Exchange.WebServices.Data.ServiceResponseException: The request "http://schemas.microsoft.com/exchange/services/2006/types" has invalid child element 'ExtendedAttributes'
The problem is that I only get this error on some PCs. With Fiddler I could see that my PC sends the request without the ExtendedAttributes node, and there it works; it is the ExtendedAttributes node that produces the error.
The code:
List<MailboxSearchScope> scopeList = new List<MailboxSearchScope>();
foreach (SearchableMailbox mb in searchableMailboxes)
{
MailboxSearchScope scope = new MailboxSearchScope(mb.ReferenceId, MailboxSearchLocation.All);
scopeList.Add(scope);
}
MailboxQuery query = new MailboxQuery(searchQuery, scopeList.ToArray());
MailboxQuery[] mbQueryList = new MailboxQuery[] { query };
SearchMailboxesParameters p = new SearchMailboxesParameters
{
SearchQueries = mbQueryList,
ResultType = SearchResultType.PreviewOnly
};
ServiceResponseCollection<SearchMailboxesResponse> res = _service.SearchMailboxes(p);
ExtendedAttributes is a new element that was introduced in Exchange 2013 SP1 and is intended for internal use only.
http://msdn.microsoft.com/en-us/library/office/dn627392(v=exchg.150).aspx
I don't see anywhere in your code where you are trying to use this element, so I would suggest specifying ExchangeVersion.Exchange2013 when you instantiate your ExchangeService object.
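For example (a sketch; the rest of your service setup stays as you have it):

// Pinning the requested schema version to Exchange2013 keeps the
// SearchMailboxes request from emitting the internal-only
// ExtendedAttributes element introduced in Exchange 2013 SP1.
ExchangeService _service = new ExchangeService(ExchangeVersion.Exchange2013);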
I am trying to run the Hive Thrift code below against HiveServer2 on CDH 4.3 and am getting the error below. I can open a Hive JDBC connection to the same server successfully; it is just Thrift that is not working. Here is my code:
import java.util.List;

import org.apache.hive.service.cli.thrift.*;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;

public class HiveJDBCServer1 {
    public static void main(String[] args) throws Exception {
        TSocket transport = new TSocket("my.org.hiveserver2.com", 10000);
        transport.setTimeout(999999999);

        TBinaryProtocol protocol = new TBinaryProtocol(transport);
        TCLIService.Client client = new TCLIService.Client(protocol);
        transport.open();

        TOpenSessionReq openReq = new TOpenSessionReq();
        TOpenSessionResp openResp = client.OpenSession(openReq);
        TSessionHandle sessHandle = openResp.getSessionHandle();

        TExecuteStatementReq execReq = new TExecuteStatementReq(sessHandle, "SELECT * FROM testhivedrivertable");
        TExecuteStatementResp execResp = client.ExecuteStatement(execReq);
        TOperationHandle stmtHandle = execResp.getOperationHandle();

        TFetchResultsReq fetchReq = new TFetchResultsReq(stmtHandle, TFetchOrientation.FETCH_FIRST, 1);
        TFetchResultsResp resultsResp = client.FetchResults(fetchReq);
        TRowSet resultsSet = resultsResp.getResults();
        List<TRow> resultRows = resultsSet.getRows();
        for (TRow resultRow : resultRows) {
            System.out.println(resultRow); // print the row (the original code discarded the toString() result)
        }

        TCloseOperationReq closeReq = new TCloseOperationReq();
        closeReq.setOperationHandle(stmtHandle);
        client.CloseOperation(closeReq);

        TCloseSessionReq closeConnectionReq = new TCloseSessionReq(sessHandle);
        client.CloseSession(closeConnectionReq);
        transport.close();
    }
}
Here is the error log:
Exception in thread "main" org.apache.thrift.protocol.TProtocolException: Required field 'operationHandle' is unset! Struct:TFetchResultsReq(operationHandle:null, orientation:FETCH_FIRST, maxRows:1)
at org.apache.hive.service.cli.thrift.TFetchResultsReq.validate(TFetchResultsReq.java:465)
at org.apache.hive.service.cli.thrift.TCLIService$FetchResults_args.validate(TCLIService.java:12607)
at org.apache.hive.service.cli.thrift.TCLIService$FetchResults_args$FetchResults_argsStandardScheme.write(TCLIService.java:12664)
at org.apache.hive.service.cli.thrift.TCLIService$FetchResults_args$FetchResults_argsStandardScheme.write(TCLIService.java:12633)
at org.apache.hive.service.cli.thrift.TCLIService$FetchResults_args.write(TCLIService.java:12584)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:63)
at org.apache.hive.service.cli.thrift.TCLIService$Client.send_FetchResults(TCLIService.java:487)
at org.apache.hive.service.cli.thrift.TCLIService$Client.FetchResults(TCLIService.java:479)
at HiveJDBCServer1.main(HiveJDBCServer1.java:26)
Are you really sure you set the operationHandle field to a valid value? The Thrift error indicates exactly what it says: the API expects a certain field (operationHandle in your case) to be set, but it has not been assigned a value. And your stack trace confirms this:

    Struct:TFetchResultsReq(operationHandle:null, orientation:FETCH_FIRST, maxRows:1)
In case anyone finds this, like I did, by googling that error message: I had a similar problem with a PHP Thrift library for HiveServer2. At least in my case, execResp.getOperationHandle() returned NULL because there was an error in the executed request that generated execResp. For some reason this didn't throw an exception, and I had to examine execResp in detail, and specifically check its status, before attempting to get an operation handle.
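In the Java client from the question, the equivalent guard would look roughly like this (a sketch; TStatus and TStatusCode are from the generated TCLIService types):

// Verify the statement actually succeeded before touching the handle;
// a failed ExecuteStatement can come back with a null operationHandle.
TExecuteStatementResp execResp = client.ExecuteStatement(execReq);
TStatus status = execResp.getStatus();
if (status.getStatusCode() != TStatusCode.SUCCESS_STATUS
        && status.getStatusCode() != TStatusCode.SUCCESS_WITH_INFO_STATUS) {
    throw new RuntimeException("Statement failed: " + status.getErrorMessage());
}
TOperationHandle stmtHandle = execResp.getOperationHandle();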
I am using JDBC to get information from a Sybase database. If I try to get large results from the server, I get an error message: (CS library error: 33816852/1/0: cs converter: cslib user api layer: common library error: The conversion/operation resulted in overflow.)
Maybe this is helpful:
DataBaseHelper dbConnection = new DataBaseHelper();
Statement stmt = dbConnection.getConnection().createStatement();
ResultSet rs = stmt.executeQuery("Select * From fak");
int i = 0;
while (rs.next()) {
    i++;
}