Differences between StringGetAsync and StringGet with the StackExchange.Redis multiplexer connection - async-await

I was wondering whether there are any performance differences between
StringGetAsync and StringGet when using the StackExchange.Redis multiplexer connection.
The StackExchange.Redis documentation explains it like this:
When used concurrently by different callers, it automatically pipelines the separate requests, so regardless of whether the requests use blocking or asynchronous access, the work is all pipelined.
That explanation is confusing to me. Which one produces the more performant code?
for (int i = 0; i < 5000; i++)
{
    taskList.Add(Task.Run(async () =>
    {
        var value = await db.StringGetAsync("Localization:ECommerceBackend:tr-TR:bo_IsThereAnyStoreStock");
    }));
}
or
for (int i = 0; i < 5000; i++)
{
    taskList.Add(Task.Run(() =>
    {
        var value = db.StringGet("Localization:ECommerceBackend:tr-TR:bo_IsThereAnyStoreStock");
    }));
}
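For what it's worth, below is a minimal, self-contained sketch of how the two access patterns are often compared side by side. The connection string, the Stopwatch timing, and the Parallel.For loop for the blocking variant are my own assumptions for illustration; they are not prescribed by StackExchange.Redis.

// Rough sketch, not a rigorous benchmark. Assumes a reachable Redis at
// "localhost" and the same key as in the question.
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using StackExchange.Redis;

class PipelineSketch
{
    const string Key = "Localization:ECommerceBackend:tr-TR:bo_IsThereAnyStoreStock";

    static async Task Main()
    {
        var redis = await ConnectionMultiplexer.ConnectAsync("localhost");
        var db = redis.GetDatabase();

        // Async access: issue all 5000 GETs without blocking; the multiplexer
        // pipelines them and the awaits complete as the replies come back.
        var sw = Stopwatch.StartNew();
        var tasks = Enumerable.Range(0, 5000).Select(_ => db.StringGetAsync(Key));
        await Task.WhenAll(tasks);
        Console.WriteLine($"StringGetAsync + Task.WhenAll: {sw.ElapsedMilliseconds} ms");

        // Blocking access: each StringGet call holds its thread until the reply
        // arrives, so concurrency (and therefore pipelining) has to come from
        // running many calls on parallel threads.
        sw.Restart();
        Parallel.For(0, 5000, _ => db.StringGet(Key));
        Console.WriteLine($"StringGet on parallel threads: {sw.ElapsedMilliseconds} ms");
    }
}

The docs' point is that the pipelining happens either way; the practical difference is that the asynchronous version does not tie up a thread per pending request, which is usually what matters under load.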

Related

UWP: calculate the expected size of a screenshot with a multiple-monitor setup

While parsing the clipboard, I am trying to detect whether a bitmap stored there might be the result of a screenshot the user took.
Everything works fine as long as the user has only one monitor. Things become a bit more involved with two or more.
I am using the routine below to grab all the displays in use. Since I have no idea how they are configured to hang together, I do not know how to calculate, from that information, the size of the screenshot that Windows would produce.
I explicitly do not want to take a screenshot myself to compare; that is a privacy promise my app makes.
Any ideas?
Here is the code for the size extractor, run in the UI thread.
public static async Task<Windows.Graphics.SizeInt32[]> GetMonitorSizesAsync()
{
    Windows.Graphics.SizeInt32[] result = null;
    var selector = DisplayMonitor.GetDeviceSelector();
    var devices = await DeviceInformation.FindAllAsync(selector);
    if (devices?.Count > 0)
    {
        result = new Windows.Graphics.SizeInt32[devices.Count];
        int i = 0;
        foreach (var device in devices)
        {
            var monitor = await DisplayMonitor.FromInterfaceIdAsync(device.Id);
            result[i++] = monitor.NativeResolutionInRawPixels;
        }
    }
    return result;
}
Based on Raymond's comment, here is my current solution in release code.
IN_UI_THREAD is a convenience property that indicates whether the code is executing on the UI thread.
public static Windows.Foundation.Size GetDesktopSize()
{
    Int32 desktopWidth = 0;
    Int32 desktopHeight = 0;
    if (IN_UI_THREAD)
    {
        var regions = Windows.UI.ViewManagement.ApplicationView.GetForCurrentView()?.WindowingEnvironment?.GetDisplayRegions()?.Where(r => r.IsVisible);
        if (regions?.Count() > 0)
        {
            // Grab the left-most and top-most points.
            var MostLeft = regions.Min(r => r.WorkAreaOffset.X);
            var MostTop = regions.Min(r => r.WorkAreaOffset.Y);
            // The width is the distance from the left-most point to the right-most edge.
            desktopWidth = (int)regions.Max(r => r.WorkAreaOffset.X + r.WorkAreaSize.Width - MostLeft);
            // Same for the height.
            desktopHeight = (int)regions.Max(r => r.WorkAreaOffset.Y + r.WorkAreaSize.Height - MostTop);
        }
    }
    return new Windows.Foundation.Size(desktopWidth, desktopHeight);
}
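For completeness, here is a sketch of how the computed size might then be checked against a clipboard bitmap without ever looking at the pixels. The helper name and the comparison itself are my own illustration; only GetDesktopSize comes from the code above.

// Hypothetical helper: reads only the dimensions of a bitmap on the clipboard
// and compares them with the desktop size computed by GetDesktopSize above.
// The pixel data itself is never decoded or inspected.
using System.Threading.Tasks;
using Windows.ApplicationModel.DataTransfer;
using Windows.Graphics.Imaging;

public static async Task<bool> ClipboardBitmapMatchesDesktopAsync()
{
    var view = Clipboard.GetContent();
    if (!view.Contains(StandardDataFormats.Bitmap))
        return false;

    var streamRef = await view.GetBitmapAsync();
    using (var stream = await streamRef.OpenReadAsync())
    {
        // BitmapDecoder only needs the image header to report the size.
        var decoder = await BitmapDecoder.CreateAsync(stream);
        var desktop = GetDesktopSize();
        return decoder.PixelWidth == (uint)desktop.Width
            && decoder.PixelHeight == (uint)desktop.Height;
    }
}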

What are the performance differences between Promise.join and Promise.all?

I was combing through the Bluebird docs, and they recommend using Promise.join over Promise.all for concurrent discrete promises.
The documentation says
Promise.join is much easier (and more performant) to use when you have a fixed amount of discrete promises that you want to coordinate concurrently.
However, there is no explanation for the performance claim.
The only difference I see is that .all does the extra work of unpacking the array, which seems like a stretch to call "more performant", so maybe there's something else going on under the hood?
Any explanation would be helpful, thanks!
I have not seen any major performance impact. Check out the fiddle here for a benchmark:
https://jsfiddle.net/msmta/3936tnca/
// spread
function startSpreadTest() {
    var startTime = new Date()
    var spreadArray = []
    for (var i = 0; i < promiseCount; i++)
        spreadArray.push(Promise.delay(promiseDelay))
    return Promise.all(spreadArray).spread(function () {
        return getStopTime(startTime)
    })
}
// join
function startJoinTest() {
    var startTime = new Date()
    var args = []
    for (var i = 0; i < promiseCount; i++)
        args.push(Promise.delay(promiseDelay))
    args.push(function () {
        return getStopTime(startTime)
    })
    return Promise.join.apply(null, args)
}

Immutable object pattern

I keep hearing that using immutable data structures and immutable objects is a good pattern for thread safety and for preventing race conditions without needing semaphores, but I still can't think of a way to use them, even for the simplest scenarios. For example:
int a = 0;
Semaphore s = new Semaphore();

void thread1() {
    s.wait();
    if (a == 2) {
        // do something
    }
    a = 1;
    s.signal();
}

void thread2() {
    s.wait();
    if (a == 1) {
        // do something
    }
    a = 2;
    s.signal();
}
How can I change this code to use an immutable object for a?
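Not a complete answer, but a minimal sketch (in C#, since the snippet looks C#-like) of the usual shape this takes: the shared value lives in an immutable object, and each writer swaps in a new instance atomically instead of mutating shared state under a semaphore. The class and member names below are mine, purely for illustration.

using System.Threading;

// Immutable snapshot of the shared state: once constructed, it never changes.
sealed class State
{
    public int A { get; }
    public State(int a) { A = a; }
}

static class Shared
{
    static State _current = new State(0);

    public static void Thread1()
    {
        State seen, next;
        do
        {
            seen = Volatile.Read(ref _current);   // consistent snapshot
            next = new State(1);                  // build a new object, never mutate
        }
        while (Interlocked.CompareExchange(ref _current, next, seen) != seen);

        if (seen.A == 2)
        {
            // do something, based on the snapshot that was actually replaced
        }
    }

    public static void Thread2()
    {
        State seen, next;
        do
        {
            seen = Volatile.Read(ref _current);
            next = new State(2);
        }
        while (Interlocked.CompareExchange(ref _current, next, seen) != seen);

        if (seen.A == 1)
        {
            // do something
        }
    }
}

Readers always see a complete, self-consistent State instance, and contention is handled by retrying the swap rather than by blocking on a semaphore. With a single int the gain is invisible, but the pattern pays off once the state has several fields that must stay consistent with each other.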

node.js Number.prototype isn't behaving as expected

I'm trying to write a little math library for my app; this, however, is throwing an odd error.
TypeError: Object 25 has no method 'permutation'
function permutate(p) {
    var states = new Number(p.length)
    chat( states.permutation(states) )
}

Number.prototype.factorial = function() {
    var n = 1  // start the product at 1
    for (var i = 2; i <= this; i++)
        n *= i
    return n
}

Number.prototype.permutation = function(r) {
    return (this.factorial() / (this - r).factorial())
}
In addition to hopefully fixing my code, I'm really curious why the object's type is being interpreted as a number primitive (or whatever is really going on here).

Facing performance issues with the knockout mapping plugin

I have a decently large data set of around 1100 records. This data set is mapped to an observable array, which is then bound to a view. Since these records are updated frequently, the observable array is updated every time using the ko.mapping.fromJS helper.
This particular call takes around 40 seconds to process all the rows, and the user interface locks up for that entire period.
Here is the code:
var transactionList = ko.mapping.fromJS([]);
// Getting the latest transactions, which are around 1100 in number
var data = storage.transactions();
// Mapping the data to the observable array, which takes around 40s
ko.mapping.fromJS(data, transactionList);
Is there a workaround for this, or should I just opt for web workers to improve performance?
Knockout.viewmodel is a replacement for knockout.mapping that is significantly faster at creating viewmodels for large object arrays like this. You should notice a significant performance increase.
http://coderenaissance.github.com/knockout.viewmodel/
I have also thought of a workaround, as follows; it uses less code:
var transactionList = ko.mapping.fromJS([]);
// Getting the latest transactions, which are around 1100 in number
var data = storage.transactions();
// Mapping the data in one go with ko.mapping.fromJS(data, transactionList)
// takes around 40s, so instead:
var i = 0;
// Clear the list completely first.
transactionList.destroyAll();
// Set an interval of 0 and keep pushing the content to the list one item at a time.
var interval = setInterval(function () {
    if (i == data.length - 1) {
        clearInterval(interval);
    }
    transactionList.push(ko.mapping.fromJS(data[i++]));
}, 0);
I had the same problem with the mapping plugin. The Knockout team says the mapping plugin is not intended to work with large arrays. If you have to load that much data onto the page, the design of the system is likely flawed.
The best way to fix this is to use server-side pagination instead of loading all the data on page load. If you don't want to change the design of your application, there are some workarounds that may help you:
Map your array manually:
var data = storage.transactions();
var mappedData = ko.utils.arrayMap(data, function (item) {
    return ko.mapping.fromJS(item);
});
var transactionList = ko.observableArray(mappedData);
Map the array asynchronously. I have written a function that processes the array in portions (via setTimeout, so the UI stays responsive) and reports progress to the user:
function processArrayAsync(array, itemFunc, afterStepFunc, finishFunc) {
    var itemsPerStep = 20;
    var processor = new function () {
        var self = this;
        self.array = array;
        self.processedCount = 0;
        self.itemFunc = itemFunc;
        self.afterStepFunc = afterStepFunc;
        self.finishFunc = finishFunc;
        self.step = function () {
            // Process the next chunk of up to itemsPerStep items.
            var tillCount = Math.min(self.processedCount + itemsPerStep, self.array.length);
            for (; self.processedCount < tillCount; self.processedCount++) {
                self.itemFunc(self.array[self.processedCount], self.processedCount);
            }
            self.afterStepFunc(self.processedCount);
            // Schedule the next chunk if anything is left, otherwise finish.
            if (self.processedCount < self.array.length)
                setTimeout(self.step, 1);
            else
                self.finishFunc();
        };
    };
    processor.step();
}
Your code:
var data = storage.transactions();
var transactionList = ko.observableArray([]);

processArrayAsync(data,
    function (item) { // Step function
        var transaction = ko.mapping.fromJS(item);
        transactionList().push(transaction);
    },
    function (processedCount) { // After-step function
        var percent = Math.ceil(processedCount * 100 / data.length);
        // Show progress to the user.
        ShowMessage(percent);
    },
    function () { // Final function
        // Fires when all data are mapped. Do some work here (e.g. apply bindings).
    });
Also, you can try an alternative mapping library: knockout.wrap. It should be faster than the mapping plugin.
I chose the second option.
Mapping is not magic. In most cases, this simple recursive function can be sufficient:
function MyMapJS(a_what, a_path)
{
    a_path = a_path || [];

    if (a_what != null && a_what.constructor == Object)
    {
        var result = {};
        for (var key in a_what)
            result[key] = MyMapJS(a_what[key], a_path.concat(key));
        return result;
    }

    if (a_what != null && a_what.constructor == Array)
    {
        var result = ko.observableArray();
        for (var index in a_what)
            result.push(MyMapJS(a_what[index], a_path.concat(index)));
        return result;
    }

    // Write your condition here:
    switch (a_path[a_path.length - 1])
    {
        case 'mapThisProperty':
        case 'andAlsoThisOne':
            result = ko.observable(a_what);
            break;
        default:
            result = a_what;
            break;
    }
    return result;
}
The code above makes observables from the mapThisProperty and andAlsoThisOne properties at any level of the object hierarchy; other properties are left constant. You can express more complex conditions using a_path.length for the level (depth) the value is at, or using more elements of a_path. For example:
if (a_path.length >= 2
    && a_path[a_path.length - 1] == 'mapThisProperty'
    && a_path[a_path.length - 2] == 'insideThisProperty')
    result = ko.observable(a_what);
You can use typeof a_what in the condition, e.g. to make all strings observable.
You can ignore some properties, and insert new ones at certain levels.
Or, you can even omit a_path. Etc.
The advantages are:
Customizable (more easily than knockout.mapping).
Short enough to copy and paste, and to write individual mappings for different objects if needed.
Smaller footprint: knockout.mapping-latest.js does not need to be included in your page.
Should be faster, as it does only what is absolutely necessary.
