Has ElasticClient.TryConnect been removed from NEST?

Here's a code snippet we've used in the past to ping an Elasticsearch node, just to check if it's there:
Nest.ElasticClient client; // has been initialized
ConnectionStatus connStatus;
client.TryConnect(out connStatus);
var isHealthy = connStatus.Success;
It looks like ElasticClient.TryConnect has been removed in NEST 0.11.5. Is it completely gone, or has it just been moved somewhere else, like MapRaw/CreateIndexRaw?
In case it's been removed, here's what I'm planning to do instead:
Nest.ElasticClient client; // has been initialized
var connectionStatus = client.Connection.GetSync("/");
var isHealthy = connectionStatus.Success;
Looks like this works - or is there a better way to replace TryConnect?

Yes, it's completely gone. See the release notes:
https://github.com/Mpdreamz/NEST/releases/tag/0.11.5.0
Excerpt from the release notes:
Removed IsValid and TryConnect()
The first 2 features of ElasticClient I wrote nearly three years ago which seemed like a good idea at the time. TryConnect() and .IsValid() are two confusing ways to check if your node is up, RootNodeInfo() now returns a mapped response of the info elasticsearch returns when you hit a node at the root (version, lucene_version etc), or you can call client.Raw.MainGet() or perhaps even better client.Raw.MainHead() or even client.Connection.HeadSync("/").
You catch my drift: with so many ways of querying the root, .IsValid and TryConnect() are just fluff that only introduces confusion.
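For example, the TryConnect snippet from the question maps naturally onto the alternatives named above. A minimal sketch, assuming the same ConnectionStatus.Success flag the question already uses with GetSync:
Nest.ElasticClient client; // has been initialized
// HEAD request against the root of the node; a successful
// response means the node is up.
var connStatus = client.Connection.HeadSync("/");
var isHealthy = connStatus.Success;
If you also want the node metadata (version, lucene_version, etc.), RootNodeInfo() returns it as a mapped response.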

Raku cloned object not sunk

class A {
    has $.n;
    # If this method is uncommented then the clone won't be sunk
    # method clone {
    #     my $clone = callwith(|%_);
    #     return $clone;
    # }
    method sink(--> Nil) { say "sinking...$!n" }
}
sub ccc(A:D $a) { $a.clone(n => 2) }
ccc(A.new(n => 1));
say 'Done';
Above prints:
sinking...2
Done
However, if the custom clone method is used, the returned clone from ccc won't be sunk for some reason. It works if I sink it explicitly at the call site, or if I change the my $clone = callwith(|%_) line to my $clone := callwith(|%_). Is this expected? What's the reason it works this way?
Thanks!
There are lots of filed-and-still-open-years-later sink bugs (and, I suspect, loads that have not been filed).
As touched on in my answer to Last element of a block thrown in sink context:
someone needs to clean the kitchen sink, i.e. pick up where Zoffix left off with his Flaws in implied sinkage / &unwanted helper issue.
Zoffix's conclusion was:
So given there are so many issues with the system, I'm just wondering if there isn't a better one that can be used to indicate whether something is wanted or not.
Fast forward 2 years, and a better system is hopefully going to land. In a recent grant report jnthn writes:
The current code doing this work in Rakudo is difficult to follow and not terribly efficient. ... Since RakuAST models the language at a higher level and defers producing QAST until much later, a far cleaner solution to the sink analysis problem is possible. ... I'm optimistic that the model I've created will be flexible enough to handle all the sinking-related requirements
I'm not sure what is going on yet, but removing the return statement makes the cloned object call the right sink method.
I've created an issue for it: https://github.com/rakudo/rakudo/issues/3855
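For reference, here's the binding workaround from the question spelled out; a sketch only, since the underlying sink analysis is the bug being tracked:
class A {
    has $.n;
    # Binding with := instead of assigning with = preserves the
    # clone's sink context, so the custom clone no longer swallows it:
    method clone {
        my $clone := callwith(|%_);
        return $clone;
    }
    method sink(--> Nil) { say "sinking...$!n" }
}
sub ccc(A:D $a) { $a.clone(n => 2) }
ccc(A.new(n => 1));   # prints "sinking...2"
say 'Done';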

RethinkDB: Realtime Changes On Array, return only newly appended

Brief: I'm using RethinkDB's change feeds to watch changes on a specific document (not the entire table). Each record looks like this:
{
    "feedback": [ ],
    "id": "bd808f27-bf20-4286-b287-e2816f46d434",
    "projectID": "7cec5dd0-bf28-4858-ac0f-8a022ba6a57e",
    "timestamp": Tue Aug 25 2015 19:48:18 GMT+00:00
}
I have one process that is appending items to the feedback array, and another process that needs to watch for changes on the feedback array... and then do something (specifically, broadcast only the last item appended to feedback via websockets). I've got it wired up so that it will monitor updates to the entire document - however, it requires receiving the complete document, then getting just the last item in the feedback array. This feels overly heavy, when all I need to get back is the last thing added.
Current code used to update the document:
r.table('myTable').get(uuid)
    .update({feedback: r.row('feedback').append('message 1')})
    .run(conn, callback)
^ That will run several times over the course of a minute or so, adding the latest message to 'feedback'.
Watching changes:
r.table('myTable').get(uuid)
    .changes()
    .run(conn, function(err, cursor) {
        cursor.each(function(err, row) {
            var indexLast = row.old_val ? row.old_val.feedback.length : 0,
                nextItem = row.new_val.feedback[indexLast];
            // ... do something with nextItem
        });
    });
Finally, here's the question (2 parts really):
1: When I'm updating the document (adding to feedback), do I have to run an update on the entire document (as in my code above), or is it possible to simply append to the feedback array and be done with it?
2: Is the way I'm doing it above (receiving the entire document and plucking the last element from feedback array) the only way to do this? Or can I do something like:
r.table('myTable').get(uuid)
    .changes()
    .pluck('feedback').slice(8) // <- kicking my ass
    .run(conn, function(err, cursor) {
        cursor.each(function(err, row) {
            var indexLast = row.old_val ? row.old_val.feedback.length : 0,
                nextItem = row.new_val.feedback[indexLast];
            // ... do something with nextItem
        });
    });
Let's go over your questions:
1: When I'm updating the document (adding to feedback),
do I have to run an update on the entire document (as in my code above),
No, you don't. As written, your update only touches the feedback field, not the entire document.
or is it possible to simply append to the feedback array and be done with it?
It's possible, and you're already doing it.
The way it's written makes it look like the client driver has to fetch the content of the feedback array, append a new element, and write the whole array back. But that's not the case here. The whole query r.row('feedback').append('message 1') is serialized as a JSON string and passed to RethinkDB, which runs it atomically on the server. The content of feedback is never fetched by the client, and the append doesn't happen client-side.
If you run tcpdump like this:
tcpdump -nl -w - -i lo0 -c 500 port 28015|strings
you can see that this JSON string is sent to the RethinkDB server when you run your query:
[1,[53,[[16,[[15,["myTable"]],1]],[69,[[2,[3]],{"feedback":[29,[[170,[[10,[3]],"feedback"]],"doc2"]]}]]]],{}]
Yes, only that single JSON query is transmitted over the network, not the whole document. Hope it makes sense. More information on that JSON wire format can be found at http://rethinkdb.com/docs/writing-drivers/ and https://github.com/neumino/rethinkdbdash/blob/master/lib/protodef.js#L84
2: Is the way I'm doing it above (receiving the entire document and plucking the last element from feedback array) the only way to do this? Or can I do something like:
Ideally we would use bracket notation to get a single field of the document and listen for changes on just that field. Unfortunately that doesn't work yet, so we have to use map to transform the feed. Again, this runs on the server, and only the last element is transmitted to the client, not the whole document:
r.table('myTable').get(1).changes().map(function(doc) {
    return doc('new_val')('feedback').default([]).nth(-1)
})
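For completeness, a sketch of consuming that feed in the same callback style as the question (assuming conn is an open connection; each row of the cursor is now just the newest feedback element):
r.table('myTable').get(uuid).changes().map(function(doc) {
    return doc('new_val')('feedback').default([]).nth(-1)
})
.run(conn, function(err, cursor) {
    if (err) throw err;
    cursor.each(function(err, lastItem) {
        // lastItem is only the last element of feedback,
        // ready to broadcast over websockets
    });
});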

Why is ReactiveList with ChangeTrackingEnabled slow when I Clear inside SuppressChangeNotifications?

With 10,000 entries it takes about 2 seconds for the Clear method to return.
Shouldn't SuppressChangeNotifications bypass the change tracking code?
Or how can I improve the performance here?
ReactiveList<Person> _personList = new ReactiveList<Person> { ChangeTrackingEnabled = true };

using (_personList.SuppressChangeNotifications())
{
    _personList.Clear();
}
Thanks a lot.
The change tracking code is bypassed, but ReactiveList still needs to clean up its internal property-change watchers when you clear the list, and the method used to do so is extremely inefficient (O(n²)), as detailed in this SO answer.
The Clear implementation with change tracking enabled can definitely be improved, I'll send a PR to RxUI if I get the chance.
E.g. replacing that code with foreach (var foo in _propertyChangeWatchers.Values.ToList()) foo.Release(); makes the Clear immediate, without altering the behavior.
EDIT:
You can work around this performance issue by writing instead:
using (_personList.SuppressChangeNotifications())
    _personList.RemoveRange(0, _personList.Count);
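Putting it together, a minimal sketch of the workaround in context (Person stands in for your own type):
// Assumes ReactiveUI's ReactiveList<T> with change tracking on.
var personList = new ReactiveList<Person> { ChangeTrackingEnabled = true };
for (var i = 0; i < 10000; i++)
    personList.Add(new Person());

using (personList.SuppressChangeNotifications())
{
    // RemoveRange avoids the O(n^2) watcher cleanup that Clear()
    // currently triggers, so this returns immediately.
    personList.RemoveRange(0, personList.Count);
}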

Can't make MVC4 WebApi include null fields in JSON

I'm trying to serialize objects as JSON with MVC4 WebAPI (RTM - just installed VS2012 RTM today but was having this problem yesterday in the RC) and I'd like for all nulls to be rendered in the JSON output.
Like this:
[{"Id": 1, "PropertyThatMightBeNull": null},{"Id":2, "PropertyThatMightBeNull": null}]
But what I'm getting is:
[{"Id":1},{"Id":2}]
I've found this Q/A, WebApi doesn't serialize null fields, but the answer either doesn't work for me or I'm failing to grasp where to put it.
Here's what I've tried:
In Global.asax.cs's Application_Start, I added:
var json = GlobalConfiguration.Configuration.Formatters.JsonFormatter;
json.SerializerSettings.NullValueHandling = Newtonsoft.Json.NullValueHandling.Include;
json.SerializerSettings.DefaultValueHandling = Newtonsoft.Json.DefaultValueHandling.Include;
This doesn't (seem to) error and seems to actually execute based on looking at the next thing I tried.
In a controller method (in a subclass of ApiController), added:
base.Configuration.Formatters.JsonFormatter.SerializerSettings.NullValueHandling = Newtonsoft.Json.NullValueHandling.Include;
base.Configuration.Formatters.JsonFormatter.SerializerSettings.DefaultValueHandling = Newtonsoft.Json.DefaultValueHandling.Include;
I say #1 executed because, as I stepped through, both values in #2 were already set before those lines ran.
In a desperation move (because I REALLY don't want to decorate every property of every object) I tried adding this attribute to a property that was null and absent:
[JsonProperty(DefaultValueHandling = DefaultValueHandling.Include,
NullValueHandling = NullValueHandling.Include)]
All three produce the same JSON with null properties omitted.
Additional notes:
Running locally in IIS (tried built in too), Windows 7, VS2012 RTM.
Controller methods return List<T> -- tried IEnumerable<T> too
The objects I'm trying to serialize are pocos.
This won't work:
var json = GlobalConfiguration.Configuration.Formatters.JsonFormatter;
json.SerializerSettings.NullValueHandling = Newtonsoft.Json.NullValueHandling.Include;
But this does:
GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings = new Newtonsoft.Json.JsonSerializerSettings()
{
    NullValueHandling = Newtonsoft.Json.NullValueHandling.Include
};
For some odd reason the JsonFormatter ignores assignments to the properties of its existing SerializerSettings. To make your settings work, assign a new instance of JsonSerializerSettings as shown below (config here is your HttpConfiguration, e.g. GlobalConfiguration.Configuration):
config.Formatters.JsonFormatter.SerializerSettings = new Newtonsoft.Json.JsonSerializerSettings
{
    DefaultValueHandling = Newtonsoft.Json.DefaultValueHandling.Include,
    NullValueHandling = Newtonsoft.Json.NullValueHandling.Include,
};
I finally came across this http://forums.asp.net/t/1824580.aspx/1?Serializing+to+JSON+Nullable+Date+gets+ommitted+using+Json+NET+and+Web+API+despite+specifying+NullValueHandling which describes what I was experiencing as a bug in the beta that was fixed for the RTM.
Though I had installed VS2012 RTM, my project was still using all the nuget packages that the beta came with. So I nugetted (nugot?) updates for everything and all is now well (using #1 from my question). Though I'm feeling silly for having burned half a day.
When I saw this answer I was upset because I was already doing this and yet my problem still existed. It traced back to the fact that my object implemented an interface that included a nullable type: I had a contract stating "if you want to implement me you have to have one of these", and a serializer saying "if one of those is null, don't include it". BOOM!

Make ZVAL persistent across the SAPI?

A ZVAL is typically created with emalloc so it is destroyed at the end of a page request. Is there a way to take an existing ZVAL and make it persist in the SAPI (equivalent of pemalloc)? What about creating a ZVAL with pemalloc?
Ideally what I'd like to do (in PHP code) is this:
class Object
{
    public $foo;
}

if (!($object = persist("object")))
{
    $object = persist("object", new Object());
}

$object->foo[] = "bar";
print count($object->foo);
On each request, count would return one more than it did on the previous request (assuming the same PHP "worker" process is used every time - I'm using PHP-FPM).
You're basically describing http://lxr.php.net/opengrok/xref/PHP_5_3/ext/sysvshm/sysvshm.c#242
The closest you can get without duplicating functionality which is already in shm is https://github.com/flavius/php-persist. The catch: in a prefork/multiprocess SAPI (like apache's), different requests from the same client may end up in different processes, and as such, you'll see different data (try it on Linux + Firefox with a hard refresh every time in the browser).
Note: It's a work-in-progress, currently it persists only an integer. Adding an array should be trivial. Patches welcome. It still needs the deserialization part, and to actually use persist()'s first parameter.
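If you just need the behavior from the question's PHP sketch, the shm functionality mentioned above can be used directly from PHP via ext/sysvshm. A sketch only: the key and segment size are arbitrary, error handling is omitted, and the same multi-process caveat applies:
<?php
$shm = shm_attach(ftok(__FILE__, 'p'), 1024 * 1024);

if (shm_has_var($shm, 1)) {
    $object = shm_get_var($shm, 1);     // unserialized from the segment
} else {
    $object = new Object();             // the class from the question
    $object->foo = array();
}

$object->foo[] = "bar";
shm_put_var($shm, 1, $object);          // serialized back into the segment

print count($object->foo);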
zend_object can't be persistent, therefore you can't do this. Extensions like APC serialize the object into memory.
