SolrNet Error - Unable to read data from the transport connection: The connection was closed

I'm trying to search a Solr server from a web service using SolrNet. I set up the connection in Global.asax:
Startup.Init<ApartmentDoc>("http://192.168.0.100:8080/solr/");
I'm trying to query the server in a class file via:
var solr = ServiceLocator.Current.GetInstance<ISolrOperations<ApartmentDoc>>();
var apartments = solr.Query(SolrQuery.All, new QueryOptions
{
    ExtraParams = new Dictionary<string, string> {
        { "defType", "edismax" },
        { "fl", "*,score,_dist_:geodist()" },
        { "bf", "recip(geodist(),1,1000,1000)" },
        { "fq", string.Format("{{!geofilt d={0}}}", radius * 1.609344) },
        { "sfield", "Location" },
        { "pt", string.Format("{0},{1}", centerLat, centerLong) }
    }
});
return apartments;
The error I'm getting is: Unable to read data from the transport connection: The connection was closed.
I've checked the logs in Tomcat; the request is going through, and the results appear to have been returned.
Any ideas why I'm not getting the results back?
Thanks,
Drew

Since the 'rows' parameter isn't defined in your code, the request is likely timing out while trying to retrieve a very large number of documents. As the SolrNet documentation explains, you should always define pagination parameters, as in the sketch below.
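A minimal sketch of the same query with paging added (Start and Rows are standard QueryOptions properties; the page size of 20 is an arbitrary example value):
var apartments = solr.Query(SolrQuery.All, new QueryOptions
{
    Start = 0,  // offset of the first result to return
    Rows = 20,  // page size; prevents pulling the entire result set in one response
    ExtraParams = new Dictionary<string, string> {
        // ... same geodist parameters as above ...
    }
});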


grpc error: packets.SubmitCustomerRequest.details: object expected

I just created a proto file like this (the import is required for google.protobuf.Any):
import "google/protobuf/any.proto";

service Customer {
  rpc SubmitCustomer(SubmitCustomerRequest) returns (SubmitCustomerResponse) {}
}

message SubmitCustomerRequest {
  string name = 1;
  map<string, google.protobuf.Any> details = 2;
}

message SubmitCustomerResponse {
  int64 id = 1;
}
The code itself works when I call it from the client, but I'm having trouble testing it directly from BloomRPC or Postman.
{
  "name": "great name",
  "details": {
    "some_details": "detail value",
    "some_int": 123
  }
}
It throws this error when I try to hit it:
.packets.SubmitCustomerRequest.details: object expected
I'm aware that the problem is with the format of details when I hit it from Postman, but I'm not sure what the correct format is supposed to be. I've tried every other format I could think of, and none of them works either.
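For what it's worth, the proto3 JSON mapping encodes each google.protobuf.Any value as an object carrying a @type field with the fully qualified type URL of the packed message. A sketch of what the payload might look like if the values were packed as well-known wrapper types (the choice of StringValue/Int32Value here is an assumption about how the values were packed):
{
  "name": "great name",
  "details": {
    "some_details": {
      "@type": "type.googleapis.com/google.protobuf.StringValue",
      "value": "detail value"
    },
    "some_int": {
      "@type": "type.googleapis.com/google.protobuf.Int32Value",
      "value": 123
    }
  }
}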

Chat app list last messages of each peer using parse server

I am building a chat app using Parse Server. Everything is great, but I want to list just the last message for each remote peer. I couldn't find a way to limit the query so it returns only one message per remote peer. How can I do this?
Query limitation with Parse SDK
To limit the number of objects you get back from a query, use limit.
Here is a little example:
const Messages = Parse.Object.extend("Messages");
const query = new Parse.Query(Messages);
query.descending("createdAt");
query.limit(1); // Get only one result
Get the first object of a query with Parse SDK
In your case, since you really want only one result, you can use Query.first.
Like Query.find, the Query.first method runs a query, but it returns only the first result.
Here is an example:
const Messages = Parse.Object.extend("Messages");
const query = new Parse.Query(Messages);
query.descending("createdAt");
const message = await query.first();
I hope my answer helps you 😊
If you want to do this using a single query, you will have to use aggregate:
https://docs.parseplatform.org/js/guide/#aggregate
Try something like this:
var query = new Parse.Query("Messages");
var pipeline = [
    { match: { local: '_User$' + userID } },
    { sort: { createdAt: 1 } },
    { group: { remote: '$remote', lastMessage: { $last: '$body' } } },
];
query.aggregate(pipeline)
    .then(function(results) {
        // results contains the last message for each remote peer
    })
    .catch(function(error) {
        // There was an error.
    });

More Like This Query Not Getting Serialized - NEST

I am trying to create an Elasticsearch MLT (more_like_this) query using NEST's object initializer syntax. However, when the final query is serialized, the MLT part is the ONLY part missing; every other part of the query is present.
When inspecting the query object, the MLT is present. It's just not getting serialized.
I wonder what I may be doing wrong.
I also noticed that when I add Fields it works. But I don't believe Fields is meant to be a mandatory property here, such that leaving it unset causes the MLT query to be ignored.
The MLT query is initialized like this:
new MoreLikeThisQuery
{
    Like = new[]
    {
        new Like(new MLTDocProvider
        {
            Id = parameters.Id
        }),
    }
}
MLTDocProvider implements the ILikeDocument interface.
I expect the serialized query to contain the MLT part, but it is the only part that is missing.
This looks like a bug in the conditionless behaviour of the more_like_this query in NEST; I've opened an issue to address it. In the meantime, you can get the desired behaviour by marking the MoreLikeThisQuery as verbatim, which overrides NEST's conditionless behaviour:
var client = new ElasticClient();
var parameters = new
{
    Id = 1
};
var searchRequest = new SearchRequest<Document>
{
    Query = new MoreLikeThisQuery
    {
        Like = new[]
        {
            new Like(new MLTDocProvider
            {
                Id = parameters.Id
            }),
        },
        IsVerbatim = true
    }
};
var searchResponse = client.Search<Document>(searchRequest);
which serializes as
{
  "query": {
    "more_like_this": {
      "like": [
        {
          "_id": 1
        }
      ]
    }
  }
}

Elasticsearch .NET only allows me to bulk upload 80 times

I am using Elasticsearch.NET (5.6) in an ASP.NET API (.NET 4.6) on Windows, trying to publish to Elasticsearch hosted on AWS (I have tried both 5.1.1 and 6; both show the same behaviour).
I have the following code, which bulk-indexes the documents to Elasticsearch. Imagine calling the code block below many times:
var node = new System.Uri(restEndPoint);
var settings = new ConnectionSettings(node);
var lowlevelClient = new ElasticLowLevelClient(settings);
var index = indexStart + indexSuffix;
var items = new List<object>(list.Count() * 2);
foreach (var conn in list)
{
    items.Add(new { index = new { _index = index, _type = "doc", _id = getId(conn) } });
    items.Add(conn);
}
try
{
    var indexResponse = lowlevelClient.Bulk<Stream>(items);
    if (indexResponse.HttpStatusCode != 200)
    {
        throw new Exception(indexResponse.DebugInformation);
    }
    return indexResponse.HttpStatusCode;
}
catch (Exception ex)
{
    ExceptionManager.LogException(ex, "Cannot publish to ES");
    return null;
}
It runs fine and publishes documents to Elasticsearch, but it can only run 80 times; after 80 times, it always gets this exception:
# OriginalException: System.Net.WebException: The operation has timed out
at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
at System.Net.HttpWebRequest.GetRequestStream()
at Elasticsearch.Net.HttpConnection.Request[TReturn](RequestData requestData) in C:\Users\russ\source\elasticsearch-net-5.x\src\Elasticsearch.Net\Connection\HttpConnection.cs:line 148
The most interesting part: I have tried changing the bulk size to 200 and to 30, and the totals turned out to be 16,000 and 2,400 documents respectively, meaning both end up at 80 requests. (Each document is very similar in size.)
Any ideas? Thanks
There is a connection limit (also refer to the comments from @RussCam under the question). So the real issue is that the Stream in the response is holding the connections open.
The fix is therefore either to dispose the response stream via indexResponse.Body.Dispose (I haven't tried this one) or to use VoidResponse, which does not require the response stream: reportClient.BulkAsync<VoidResponse>(items);. I've tried the second and it works; see the sketch below.
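A minimal sketch of the second fix applied to the code above (same variables as in the question; VoidResponse tells the low-level client to discard the response body, so no undisposed Stream pins the connection):
var indexResponse = lowlevelClient.Bulk<VoidResponse>(items);
if (indexResponse.HttpStatusCode != 200)
{
    throw new Exception(indexResponse.DebugInformation);
}
return indexResponse.HttpStatusCode;
It may also help to create the ConnectionSettings and ElasticLowLevelClient once and reuse them across calls, rather than constructing them on every request as the code above does, so that pooled connections are reused instead of exhausted.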

Listing all Vertices of specific class in OrientDB

I've recently started exploring Graph databases (in particular Neo4j and OrientDB) and have come across a problem I can't seem to put my finger on.
I'm running a local installation of OrientDB (OrientDB Server v2.0-M3 is active.).
I'm using Tinkerpops to connect to, and run queries against, the graph.
I'm using Java and Spring on a local Tomcat 7 server.
Testing my API I'm using Postman on Chrome.
Here's my faulty GET method:
#RequestMapping(value = "/articles", method = RequestMethod.GET)
public
#ResponseBody
Vector<Article> list() {
OrientGraph graph = new OrientGraph("remote:/local/path/to/orientdb/databases/mydb", "user", "password");
FramedGraphFactory factory = new FramedGraphFactory();
FramedGraph manager = factory.create(graph);
Vector<Article> articles = new Vector<>();
try {
Iterable<Vertex> vertices = graph.getVerticesOfClass("Article", false);
Iterator<Vertex> it = vertices.iterator();
if (it.hasNext()) {
do {
Article a = (Article) manager.frame(it.next(), Article.class);
articles.add(a);
} while (it.hasNext());
}
} catch (Exception e) {
e.printStackTrace();
} finally {
graph.shutdown();
}
return articles;
}
This generates the following error:
{
"timestamp": 1418562889304,
"status": 500,
"error": "Internal Server Error",
"exception": "org.springframework.http.converter.HttpMessageNotWritableException",
"message": "Could not write JSON: Database instance is not set in current thread. Assure to set it with: ODatabaseRecordThreadLocal.INSTANCE.set(db); (through reference chain: java.util.Vector[0]->$Proxy43[\"name\"]); nested exception is com.fasterxml.jackson.databind.JsonMappingException: Database instance is not set in current thread. Assure to set it with: ODatabaseRecordThreadLocal.INSTANCE.set(db); (through reference chain: java.util.Vector[0]->$Proxy43[\"name\"])",
"path": "/articles"
}
I've been trying to figure this out, including trying the "fix" that the error suggests. I've also tried using TransactionalGraph instead of OrientGraph.
Here's the catch: I'm also using a similar method for getting a single resource. That method only works if I keep the System.out.println; otherwise it fails with the same error.
#RequestMapping(value = "/article", method = RequestMethod.GET)
public
#ResponseBody
Article get(
#RequestParam(value = "number", required = true) long number
) {
TransactionalGraph graph = new OrientGraph("remote:/path/to/local/orientdb/orientdb/databases/mydb", "user", "password");
FramedGraphFactory factory = new FramedGraphFactory();
FramedGraph manager = factory.create(graph);
Article article = null;
try {
Iterable<Article> articles = manager.getVertices("number", number, Article.class);
article = articles.iterator().next();
System.out.println(article.getName());
} catch (Exception e) {
e.printStackTrace();
} finally {
graph.shutdown();
}
return article;
}
Any help appreciated!
You should leave the graph (i.e. the connection) open while you're using the result. Can you move the graph.shutdown() to after you've finished browsing your result set?
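In list(), the Vector is filled before shutdown, but the framed Article objects are lazy proxies: Jackson only reads their properties during JSON serialization, which happens after graph.shutdown(). One workaround is to copy the values into plain objects while the graph is still open. A sketch (ArticleDto is a hypothetical plain class you would define, the method's return type becomes List<ArticleDto>, and getNumber()/getName() are assumed from the Article interface):
// Inside the try block, before graph.shutdown():
List<ArticleDto> articles = new ArrayList<>();
for (Vertex v : graph.getVerticesOfClass("Article", false)) {
    Article a = manager.frame(v, Article.class);
    // Reading the properties here forces the proxy to hit the database
    // while the connection is still open.
    articles.add(new ArticleDto(a.getNumber(), a.getName()));
}
// After the loop the DTOs hold plain values, so shutting the graph
// down in the finally block no longer breaks serialization.
return articles;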
