javascript setInterval not firing - ajax

I have a long running database update that I want to report on while it is happening.
I prime the interval timer to fire at one-second intervals.
I have seen lots of examples of problems with setInterval, and it looks simple, but I cannot get it to fire with this code. The checkSaveStatus code pulls back a status report that tells how many rows have been saved. I will enhance it to display the results once I can get the basic functionality working.
I am beginning to suspect that this may have something to do with the way I am handling the Ajax call.
var checkTimer;

function checkSaveStatus() {
    var saveStatus = ajaxCall("WebServices/FLSAService.asmx/CheckFLSASaveStatus");
    if (saveStatus == null) {
        clearInterval(checkTimer);
        return;
    }
    log('saveStatus: ' + saveStatus.InputCount + '/' + saveStatus.ResultCount + ':' + saveStatus.SaveComplete, 'FLSA');
    if (saveStatus.SaveComplete) {
        reportSaveComplete = true;
        clearInterval(checkTimer);
    }
}

checkTimer = window.setInterval(checkSaveStatus, 1000);
... make an asynchronous Ajax call that takes a long time to complete
... the checkSaveStatus code doesn't fire until the Ajax call completes

As far as I know, you cannot return a value from your Ajax call, because it is asynchronous. The way that is handled is through callbacks. So this section of code:
var saveStatus = ajaxCall("WebServices/FLSAService.asmx/CheckFLSASaveStatus");
if (saveStatus == null) {
    clearInterval(checkTimer);
    return;
}
is more than likely not operating properly. It is hard to say without seeing the ajaxCall function, but you are probably not getting the result you intended. That left-hand-side assignment will not reflect anything asynchronous: saveStatus will be either whatever default value ajaxCall returns or undefined, and undefined compared with == null is true, which in turn clears your interval and returns.
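For illustration, here is a minimal sketch of the callback style, assuming ajaxCall can accept a success callback as its second argument (the real signature of ajaxCall isn't shown in the question):

var checkTimer;

function checkSaveStatus() {
    // Fire the request and return immediately; the interval keeps ticking.
    // (Hypothetical callback-style ajaxCall.)
    ajaxCall("WebServices/FLSAService.asmx/CheckFLSASaveStatus", function (saveStatus) {
        // This runs later, when the response actually arrives.
        if (saveStatus == null || saveStatus.SaveComplete) {
            clearInterval(checkTimer);
        }
    });
}

checkTimer = window.setInterval(checkSaveStatus, 1000);

Note also that timers share the page's single JavaScript thread: if the long-running save is issued as a synchronous XMLHttpRequest (async: false), nothing else, including the interval callback, can run until it returns, which would match the symptom that checkSaveStatus doesn't fire until the Ajax call completes.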


Why is there consistent variation in execution time on my timed trigger?

I have a timed trigger that runs every 15 minutes. A simplified partial version of the script is shown below. The script compiles data from about 50 other spreadsheets and records a row for each spreadsheet, then writes that summary data to the active spreadsheet.
I noticed that in the logs, there is an alternating pattern in the execution times for this script: half the executions take 200-400 seconds, and the other half typically take 700-900 seconds. It's a pretty significant difference, and the pattern persists over the past several days of logs.
There's nothing in the script itself that changes from one execution to the next, so I'm curious if anyone can suggest a reason this would happen (even better if it's a documented reason). For example, is there some sort of caching of the spreadsheet reads so that the next execution gets those values faster?
// The triggered function.
function updateRankings()
{
  var rankingSheet = SS.getSheetByName(RANKING_SHEET_NAME) // SS is the active spreadsheet
  // Read the ids of the target spreadsheets, which are stored on an external spreadsheet
  var gyms = getRowsData(SpreadsheetApp.openById(ADMIN_PANEL_ID).getSheetByName(ADMIN_PANEL_SHEET_NAME))
  // Iterate over gyms
  gyms.forEach(getGymStats)
  // Write the compiled data back to the active sheet
  setRowsData(rankingSheet, gyms)
}
function getGymStats(gym)
{
  var gymSpreadsheet = SpreadsheetApp.openById(gym.spreadsheetId)
  // Force spreadsheet formulas to calculate before reading values
  SpreadsheetApp.flush()
  var metricsSheet = gymSpreadsheet.getSheetByName('Detailed Metrics')
  var statsColumn = metricsSheet.getRange('E:E').getValues()
  var roasColumn = metricsSheet.getRange('J:J').getValues()
  // Get stats
  var gymStats = {
    facebookAdSpend: getFacebookAdSpend(gymSpreadsheet),
    scheduling: statsColumn[8][0],
    showup: statsColumn[9][0],
    closing: statsColumn[10][0],
    costPerLead: statsColumn[25][0],
    costPerAppointment: statsColumn[26][0],
    costPerShow: statsColumn[27][0],
    costPerAcquisition: statsColumn[28][0],
    leadCount: statsColumn[13][0],
    frontEndRoas: (roasColumn[21][0] / statsColumn[5][0]) || 0,
    totalRoas: (roasColumn[35][0] / statsColumn[5][0]) || 0,
    totalProjectedRoas: (roasColumn[36][0] / statsColumn[5][0]) || 0,
    conversionRate: (gym.currency ?
      '=IFS(ISBLANK(INDIRECT("R[0]C[-4]", FALSE)),,ISBLANK(INDIRECT("R[0]C[-2]", FALSE)), 1,TRUE, IFERROR(GOOGLEFINANCE("Currency:"&INDIRECT("R[0]C[-2]", FALSE)&"USD")))' :
      1)
  }
  Object.assign(gym, gymStats)
}

function getFacebookAdSpend(spreadsheet)
{
  var range = spreadsheet.getRangeByName('FacebookAdSpend')
  if (!range) return ''
  return range.getValue()
}
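One way to narrow down where the extra time goes (a diagnostic sketch, not part of the original script) is to log the duration of each openById/read cycle and compare a fast run against a slow one:

// Hypothetical wrapper around the existing getGymStats(); logs per-gym timing.
function getGymStatsTimed(gym)
{
  var start = Date.now();
  getGymStats(gym);
  Logger.log(gym.spreadsheetId + ': ' + (Date.now() - start) + ' ms');
}

// In updateRankings(), swap gyms.forEach(getGymStats) for:
// gyms.forEach(getGymStatsTimed)

If the slow executions show the extra time spread fairly evenly across all ~50 opens, the variation is more likely on the service side than anything the script does differently from run to run.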

Keep delaying HTTP request until new params are arriving

Suppose we have a function getIds() which takes an array of some ids
like this:
getIds([4, 1, 32]);
This function will delay the HTTP call for 100 ms. But if, during those 100 ms, the
same function is called again:
getIds([1, 8, 5]);
It will reset the 100 ms timer and keep merging the passed ids. It will
send the HTTP request only if it has not been called again for more than 100 ms.
I am new to RxJS, and here is my attempt to solve this problem, but I have
a feeling there could be a better solution.
https://jsfiddle.net/iFadey/v3v3L0yd/2/
function getIds(ids) {
    let observable = getIds._observable,
        subject = getIds._subject;

    if (!observable) {
        subject = getIds._subject = new Rx.ReplaySubject();
        observable = getIds._observable = subject
            .distinct()
            .reduce((arr, id) => {
                arr.push(id);
                return arr;
            }, [])
            // Some HTTP GET request will go here
            // whose results may get flatMapped here
            .publish()
            .refCount();
    }

    ids.forEach((id) => {
        console.log(id);
        subject.next(id);
    });

    clearTimeout(getIds._timer);
    getIds._timer = setTimeout(() => {
        getIds._observable = null;
        getIds._subject = null;
        subject.complete();
    }, 100);

    return observable;
}

getIds([1, 2, 3])
    .subscribe((ids) => {
        console.log(ids);
    });

getIds([3, 4, 5])
    .subscribe((ids) => {
        console.log(ids);
    });
edit:
I am looking for an operator which behaves like debounce but without dropping previous values. Instead it must queue them.
I am not certain I have captured exactly which of the following you are looking for, so I will simply describe both. In my experience, there are two "time based patterns" most often suited to this type of problem:
debounce
rxmarbles url: http://rxmarbles.com/#debounce ; github doc
As it says in its documentation, it
Emits an item from the source Observable after a particular timespan
has passed without the Observable emitting any other items.
throttle
rxmarbles url: none yet ; github doc
Returns an Observable that emits only the first item emitted by the
source Observable during sequential time windows of a specified
duration.
Basically, if you would like to wait until the inputs have quieted for a certain period of time before taking action, you want to debounce. If you do not want to wait at all, but do not wish to make more than 1 query within a specific amount of time, you will want to throttle.
Hope it makes sense.
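For the behaviour described in the edit — debounce-like timing that queues values rather than dropping them — a common trick is to buffer the source stream and use a debounced copy of the same stream as the buffer's closing notifier. A minimal sketch, assuming RxJS with pipeable operators (the newer import style, not the chained style used in the question) and a hypothetical fetchIds(ids) HTTP helper:

import { Subject } from 'rxjs';
import { buffer, debounceTime, map } from 'rxjs/operators';

const idSource = new Subject();

// Collect every id pushed in; flush the whole batch once the stream
// has been quiet for 100 ms.
const batches = idSource.pipe(
    buffer(idSource.pipe(debounceTime(100))),
    map(ids => [...new Set(ids)]) // merge duplicate ids across calls
    // mergeMap(ids => fetchIds(ids)) // the HTTP request would go here
);

batches.subscribe(ids => console.log('requesting:', ids));

function getIds(ids) {
    ids.forEach(id => idSource.next(id));
}

getIds([4, 1, 32]);
getIds([1, 8, 5]); // arrives within 100 ms, so it merges into the same batch

Because both the buffer and its notifier subscribe to the same hot Subject, every value is collected, and the debounced notifier fires only once the 100 ms quiet period has elapsed — debounce timing without the dropped values.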

group.all() call required for data to populate correctly

So I've encountered a weird issue when making groups based on a variable index while the crossfilter runs over arrays instead of literal indices.
I currently have an output array of a date followed by 4 values, which I map into a composite graph. The problem is that the number of values can fluctuate depending on the input given to the page: based on what it receives, I can have 3 values or 10, and there's no way to know in advance. They're placed into an array which is then given to a crossfilter. While testing, I was accessing values using
dimension.group().reduceSum(function(d) { return d[0]; });
where 0 was changed to whatever index I needed. But I've finished testing, for the most part, and began to adapt it into a dynamic system where the value count can change, though there are always at least the first two. To do this I created an integer that keeps track of what index I'm at and is increased after each group has been created. The following code is being used:
var range = crossfilter(results);
var dLen = 0;
var curIndex = 0;
var dateDimension = range.dimension(function(d) { dLen = d.length; return d[curIndex]; });
curIndex++;
var aGroup = dateDimension.group().reduceSum(function(d) { return d[curIndex]; });
curIndex++;
var bGroup = dateDimension.group().reduceSum(function(d) { return d[curIndex]; });
curIndex++;
var otherGroups = [];
for (var h = 0; h < dLen - 3; h++) {
    otherGroups[h] = dateDimension.group().reduceSum(function(d) { return d[curIndex]; });
    curIndex++;
}
var charts = [];
for (var x = 0; x < dLen - 3; x++) {
    charts[x] = dc.barChart(dataGraph)
        .group(otherGroups[x], "Extra Group " + (x + 1))
        .hidableStacks(true);
}
charts[charts.length] = dc.lineChart(dataGraph)
    .group(aGroup, "Group A")
    .hidableStacks(true);
charts[charts.length] = dc.lineChart(dataGraph)
    .group(bGroup, "Group B")
    .hidableStacks(true);
The issue is this:
The graph gets built empty. I checked the curIndex variable multiple times, and it was always correct. I finally decided to check the actual group's resulting data using the .all() method.
The weird thing is that AFTER I used .all(), the data works. Without a .all() call, the graph cannot determine the data and outputs absolutely nothing; however, if I call .all() immediately after the group has been created, it populates correctly.
Each group needs to have .all() called on it, or only the ones that do will work. For example, when I was first debugging, I used .all() only on aGroup, and only aGroup populated into the graph. When I added it to bGroup, both aGroup and bGroup populated. So in the current build, every group has .all() called directly after it is created.
Technically there's no issue, but I'm really confused about why this is required. I have absolutely no idea what the cause is, and I was wondering if anyone has insight into it. There was no issue when I was using literals; it only happens when I use a variable to create the groups. When I tried to read the groups later instead, I received NaN for all the values. I'm not really sure why .all() fixes the values, especially since it only works if I call it immediately after the group has been created.
Below is a screenshot of the graph. The top is when everything has a .all() call after being created, while the bottom is when the Extra Groups (the ones defined in the for loop) no longer have the .all() call. The data is just not there at all, and I'm not really sure why. Any thoughts would be great.
http://i.stack.imgur.com/0j1ey.jpg
It looks like you may have run into the classic "generating lambdas from loops" JavaScript problem.
You are creating a whole bunch of functions that reference curIndex, but unless you call those functions immediately, they will all refer to the same instance of curIndex in the global environment. So if you call them after initialization, they will probably all try to use a value which is past the end of the array.
Instead, you might create a function which generates your lambdas, like so:
function accessor(curIndex) {
    return function(d) { return d[curIndex]; };
}
And then each time call .reduceSum(accessor(curIndex))
This will cause the value of curIndex to get copied each time you call the accessor function (or you can think of each generated function as having its own environment with its own curIndex).
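Applied to the loop from the question, that would look something like this (same variables as in the question's code):

var otherGroups = [];
for (var h = 0; h < dLen - 3; h++) {
    // accessor(curIndex) copies the current value of curIndex into its own
    // closure, so each group keeps reading the column it was built for.
    otherGroups[h] = dateDimension.group().reduceSum(accessor(curIndex));
    curIndex++;
}

This likely also explains the .all() observation: crossfilter computes group reductions lazily, so calling .all() immediately forces the reduce functions to run while curIndex still holds the intended value, whereas waiting until the chart renders runs them after curIndex has moved past the end of the array, producing the NaN values you saw.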

Angular.js - Data from AJAX request as a ng-repeat collection

In my web app I'm receiving data every 3-4 seconds from an AJAX call to an API, like this:
$http.get('api/invoice/collecting').success(function(data) {
    $scope.invoices = data;
});
Then I display the data like this: http://jsfiddle.net/geUe2/1/
The problem is that every time I do $scope.invoices = data, ng-repeat rebuilds the DOM area presented in the jsfiddle, and I lose all the <input> values.
I've tried to do:
angular.extend()
deep version of jQuery.extend
some other merging/extending/deep-copying functions
but they can't handle a situation like this:
On my client I have [invoice1, invoice2, invoice3] and the server sends me [invoice1, invoice3]. So I need invoice2 to be deleted from the view.
What are the ways to solve this problem?
Check the ng-repeat docs.
You could use track by option:
variable in expression track by tracking_expression – You can also provide an optional tracking function which can be used to associate the objects in the collection with the DOM elements. If no tracking function is specified the ng-repeat associates elements by identity in the collection. It is an error to have more than one tracking function to resolve to the same key. (This would mean that two distinct objects are mapped to the same DOM element, which is not possible.) Filters should be applied to the expression, before specifying a tracking expression.
For example: item in items track by item.id is a typical pattern when the items come from the database. In this case the object identity does not matter. Two objects are considered equivalent as long as their id property is same.
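As a sketch (assuming each invoice object carries an id property), the repeated element in the fiddle would become:

<div ng-repeat="invoice in invoices track by invoice.id">
    <input type="text"> <!-- the DOM node is reused across updates, so typed text survives -->
</div>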
You need to collect data from the DOM when an update from the server arrives. Save whatever data is relevant (it could be only the input values), and don't forget to include the identifier for the data object, such as data._id. All of this should be saved in a temporary object such as $scope.oldInvoices.
Then, after collecting it from the DOM, re-update the DOM with the new data (the way you are doing right now): $scope.invoices = data.
Now, use underscore.js's _.findWhere to check whether your data._id is present in the new update, and if so, re-assign (you can use angular.extend here) the saved data value to the relevant invoice.
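A rough sketch of that approach (the note field is a hypothetical stand-in for whatever the inputs actually hold):

$http.get('api/invoice/collecting').success(function(data) {
    // Keep the previous objects so typed values can be carried over
    $scope.oldInvoices = $scope.invoices || [];
    data.forEach(function(invoice) {
        var old = _.findWhere($scope.oldInvoices, { _id: invoice._id });
        if (old) {
            invoice.note = old.note; // restore the user's input value
        }
    });
    $scope.invoices = data; // invoice2 disappears if the server omitted it
});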
It turned out that @luacassus's answer about the track by option of the ng-repeat directive was very helpful, but it didn't solve my problem: track by was adding new invoices coming from the server, but there was still a problem with clearing inactive invoices.
So, this my solution of the problem:
function change(scope, newData) {
    if (!scope.invoices) {
        scope.invoices = [];
        jQuery.extend(true, scope.invoices, newData);
    }
    // Search scope.invoices and update the invoices the server sent again
    for (var i = 0; i < scope.invoices.length; i++) {
        var isInvoiceFound = false;
        for (var j = 0; j < newData.length; j++) {
            if (scope.invoices[i] && scope.invoices[i].id && scope.invoices[i].id == newData[j].id) {
                isInvoiceFound = true;
                jQuery.extend(true, scope.invoices[i], newData[j]);
            }
        }
        if (!isInvoiceFound) {
            scope.invoices.splice(i, 1);
            i--; // step back so the element that shifted into slot i is not skipped
        }
    }
    // Search and add invoices that came from the server but are not present in scope.invoices
    for (var j = 0; j < newData.length; j++) {
        var isInvoiceFound = false;
        for (var i = 0; i < scope.invoices.length; i++) {
            if (scope.invoices[i] && scope.invoices[i].id && scope.invoices[i].id == newData[j].id) {
                isInvoiceFound = true;
            }
        }
        if (!isInvoiceFound) scope.invoices.push(newData[j]);
    }
}
In my web app I'm using jQuery's .extend(). There's a good alternative in the lodash library.

When is LINQ (to objects) Overused?

My career started as a hard-core functional-paradigm developer (LISP), and now I'm a hard-core .net/C# developer. Of course I'm enamored with LINQ. However, I also believe in (1) using the right tool for the job and (2) preserving the KISS principle: of the 60+ engineers I work with, perhaps only 20% have hours of LINQ / functional paradigm experience, and 5% have 6 to 12 months of such experience. In short, I feel compelled to stay away from LINQ unless I'm hampered in achieving a goal without it (wherein replacing 3 lines of O-O code with one line of LINQ is not a "goal").
But now one of the engineers, having 12 months LINQ / functional-paradigm experience, is using LINQ to objects, or at least lambda expressions anyway, in every conceivable location in production code. My various appeals to the KISS principle have not yielded any results. Therefore...
What published studies can I next appeal to? What "coding standard" guideline have others concocted with some success? Are there published LINQ performance issues I could point out? In short, I'm trying to achieve my first goal - KISS - by indirect persuasion.
Of course this problem could be extended to countless other areas (such as overuse of extension methods). Perhaps there is an "uber" guide, highly regarded (e.g. published studies, etc), that takes a broader swing at this. Anything?
LATE EDIT: Wow! I got schooled! I agree I'm coming at this entirely wrong-headed. But as a clarification, please take a look below at sample code I'm actually seeing. Originally it compiled and worked, but its purpose is now irrelevant. Just go with the "feel" of it. Now that I'm revisiting this sample a half year later, I'm getting a very different picture of what is actually bothering me. But I'd like to have better eyes than mine make the comments.
//This looks like it was meant to become an extension method...
public class ExtensionOfThreadPool
{
    public static bool QueueUserWorkItem(Action callback)
    {
        return ThreadPool.QueueUserWorkItem((o) => callback());
    }
}

public class LoadBalancer
{
    //other methods and state variables have been stripped...
    void ThreadWorker()
    {
        // The following callbacks give us an easy way to control whether
        // we add additional headers around outbound WCF calls.
        Action<Action> WorkRunner = null;

        // This callback adds headers to each WCF call it scopes
        Action<Action> WorkRunnerAddHeaders = (Action action) =>
        {
            // Add the header to all outbound requests.
            HttpRequestMessageProperty httpRequestMessage = new HttpRequestMessageProperty();
            httpRequestMessage.Headers.Add("user-agent", "Endpoint Service");

            // Open an operation scope - any WCF calls in this scope will add the
            // headers above.
            using (OperationContextScope scope = new OperationContextScope(_edsProxy.InnerChannel))
            {
                // Seed the agent id header
                OperationContext.Current.OutgoingMessageProperties[HttpRequestMessageProperty.Name] = httpRequestMessage;
                // Activate
                action();
            }
        };

        // This callback does not add any headers to each WCF call
        Action<Action> WorkRunnerNoHeaders = (Action action) =>
        {
            action();
        };

        // Assign the work runner we want based on the userWCFHeaders flag.
        WorkRunner = _userWCFHeaders ? WorkRunnerAddHeaders : WorkRunnerNoHeaders;

        // This outer try/catch exists simply to dispose of the client connection
        try
        {
            Action Exercise = () =>
            {
                // This worker thread polls a work list
                Action Driver = null;
                Driver = () =>
                {
                    LoadRunnerModel currentModel = null;
                    try
                    {
                        // random starting value, it matters little
                        int minSleepPeriod = 10;
                        int sleepPeriod = minSleepPeriod;

                        // Loop infinitely or until stop signals
                        while (!_workerStopSig)
                        {
                            // Sleep the minimum period of time to service the next element
                            Thread.Sleep(sleepPeriod);

                            // Grab a safe copy of the element list
                            LoadRunnerModel[] elements = null;
                            _pointModelsLock.Read(() => elements = _endpoints);

                            DateTime now = DateTime.Now;
                            var pointsReadyToSend = elements.Where
                            (
                                point => point.InterlockedRead(() => point.Live && (point.GoLive <= now))
                            ).ToArray();

                            // Get a list of all the points that are not ready to send
                            var pointsNotReadyToSend = elements.Except(pointsReadyToSend).ToArray();

                            // Walk each model - we touch each one inside a lock
                            // since there can be other threads operating on the model
                            // including timeouts and returning WCF calls.
                            pointsReadyToSend.ForEach
                            (
                                model =>
                                {
                                    model.Write
                                    (
                                        () =>
                                        {
                                            // Keep a record of the current model in case
                                            // it throws an exception while we're staging it
                                            currentModel = model;

                                            // Lower the live flag (if we crash calling
                                            // BeginXXX the catch code will re-start us)
                                            model.Live = false;

                                            // Get the step for this model
                                            ScenarioStep step = model.Scenario.Steps.Current;

                                            // This helper enables the scenario watchdog if a
                                            // scenario is just starting
                                            Action StartScenario = () =>
                                            {
                                                if (step.IsFirstStep && !model.Scenario.EnableWatchdog)
                                                {
                                                    model.ScenarioStarted = now;
                                                    model.Scenario.EnableWatchdog = true;
                                                }
                                            };

                                            // make a connection (if needed)
                                            if (step.UseHook && !model.HookAttached)
                                            {
                                                BeginReceiveEventWindow(model, step.HookMode == ScenarioStep.HookType.Polled);
                                                step.RecordHistory("LoadRunner: Staged Harpoon");
                                                StartScenario();
                                            }

                                            // Send/Receive (if needed)
                                            if (step.ReadyToSend)
                                            {
                                                BeginSendLoop(model);
                                                step.RecordHistory("LoadRunner: Staged SendLoop");
                                                StartScenario();
                                            }
                                        }
                                    );
                                }
                                , () => _workerStopSig
                            );

                            // Sleep until the next point goes active. Figure out
                            // the shortest sleep period we have - that's how long
                            // we'll sleep.
                            if (pointsNotReadyToSend.Count() > 0)
                            {
                                var smallest = pointsNotReadyToSend.Min(ping => ping.GoLive);
                                sleepPeriod = (smallest > now) ? (int)(smallest - now).TotalMilliseconds : minSleepPeriod;
                                sleepPeriod = sleepPeriod < 0 ? minSleepPeriod : sleepPeriod;
                            }
                            else
                                sleepPeriod = minSleepPeriod;
                        }
                    }
                    catch (Exception eWorker)
                    {
                        // Don't recover if we're shutting down anyway
                        if (_workerStopSig)
                            return;

                        Action RebootDriver = () =>
                        {
                            // Reset the point SendLoop that barfed
                            Stagepoint(true, currentModel);
                            // Re-boot this thread
                            ExtensionOfThreadPool.QueueUserWorkItem(Driver);
                        };

                        // This means SendLoop barfed
                        if (eWorker is BeginSendLoopException)
                        {
                            Interlocked.Increment(ref _beginHookErrors);
                            currentModel.Write(() => currentModel.HookAttached = false);
                            RebootDriver();
                        }
                        // This means BeginSendAndReceive barfed
                        else if (eWorker is BeginSendLoopException)
                        {
                            Interlocked.Increment(ref _beginSendLoopErrors);
                            RebootDriver();
                        }
                        // The only kind of exceptions we expect are the
                        // BeginXXX type. If we made it here something else bad
                        // happened so allow the worker to die completely.
                        else
                            throw;
                    }
                };

                // Start the driver thread. This thread will poll the point list
                // and keep shoveling them out
                ExtensionOfThreadPool.QueueUserWorkItem(Driver);

                // Wait for the stop signal
                _workerStop.WaitOne();
            };

            // Start
            WorkRunner(Exercise);
        }
        catch (Exception ex) { /* not shown */ }
    }
}
Well, it sounds to me like you're the one wanting to make the code more complicated - because you believe your colleagues aren't up to the genuinely simple approach. In many, many cases I find LINQ to Objects makes the code simpler - and yes that does include changing just a few lines to one:
int count = 0;
foreach (Foo f in GenerateFoos())
{
    count++;
}
becoming
int count = GenerateFoos().Count();
for example.
Where it isn't making the code simpler, it's fine to try to steer him away from LINQ - but the above is an example where you certainly aren't significantly hampered by avoiding LINQ, but the "KISS" code is clearly the LINQ code.
It sounds like your company could benefit from training up its engineers to take advantage of LINQ to Objects, rather than trying to always appeal to the lowest common denominator.
You seem to be equating Linq to objects with greater complexity, because you assume that unnecessary use of it violates "keep it simple, stupid".
All my experience has been the opposite: it makes complex algorithms much simpler to write and read.
On the contrary, I now regard imperative, statement-based, state-mutational programming as the "risky" option to be used only when really necessary.
So I'd suggest that you put effort into getting more of your colleagues to understand the benefit. It's a false economy to try to limit your approaches to those that you (and others) already understand, because in this industry it pays huge dividends to stay in touch with "new" practises (of course, this stuff is hardly new, but as you point out, it's new to many from a Java or C# 1.x background).
As for trying to pin some charge of "performance issues" on it, I don't think you're going to have much luck. The overhead involved in Linq-to-objects itself is minuscule.
