Why isn't Laravel timing out?

I'm using Laravel 8.x on Windows 10 and am developing a CRUD app that uses Vuetify in a Vue component to display a small array of objects - just 11 items with four fields each - in a v-data-table. Most of the functionality is working now and I'm just doing the last bits of the code.
Everything has been going fine for several days but something has gone very wrong tonight. I changed a line or two of code, let npm run watch recompile the module I changed and hit refresh on my browser to see if the changes are working better than the original code. The refresh happened an hour and a half ago and I'm STILL waiting for it to render something!
It shows no sign of ending and I can't find any error messages anywhere. Apparently something is seriously wrong even though I've only changed a couple of lines of code. I'm completely baffled. I've never touched the default timeout settings and, honestly, don't even know where they are.
I have had timeouts before; in fact, I manage to have two or three a day. For no reason I can identify, the first refresh after I've been away from the computer for an hour or two tends to time out, while subsequent ones go very quickly and let me get back to coding. When I do get a timeout, the message says the timeout happened after 60 seconds. Of course that is not elapsed time - elapsed time is more like 5 to 10 minutes - but surely my current process should have timed out long ago.
Why am I getting timeouts when I get them? Why am I NOT getting one despite my last refresh having been over 90 minutes ago now? I should mention that I've rebooted the entire computer and even that did not help.
How can I get Laravel working better?
Update
As per Maarten's suggestion, I've reverted my code. Well, not exactly reverted it, but what I did solved the problem. I had this:
deleteRow() {
    this.deleteRowFlag = false;
    this.deleteMessage = "Task deleted";
    console.log(this.name + ".deleteRow() - deleteID: " + this.deleteID);
    console.log(this.name + ".deleteRow() - deleteDescription: " + this.deleteDescription);
    var index = -1;
    for (let todo of this.todos) {
        if (todo.id == deleteID) index = todo.index;
    }
    // const index = this.todos.indexOf(this.deleteDescription);
    console.log(this.name + ".deleteRow() - index: " + index);
    var deletedItemsArray = this.todos.splice(index, 1);
    console.log(this.name + ".deleteRow() - deletedItemsArray.length: " + deletedItemsArray.length);
    console.log(this.name + ".deleteRow() - deletedItemsArray[0].id: " + deletedItemsArray[0].id);
    console.log(this.name + ".deleteRow() - deletedItemsArray[0].description: " + deletedItemsArray[0].description);
    //FIX: Isn't actually deleting the row!
    this.showDeleteSnackbar = true;
},
I couldn't imagine any of that being a problem and the little for loop was the last thing I added so I simply commented it out like so:
deleteRow() {
    this.deleteRowFlag = false;
    this.deleteMessage = "Task deleted";
    console.log(this.name + ".deleteRow() - deleteID: " + this.deleteID);
    console.log(this.name + ".deleteRow() - deleteDescription: " + this.deleteDescription);
    // var index = -1;
    // for (let todo of this.todos) {
    //     if (todo.id == deleteID) index = todo.index;
    // }
    // const index = this.todos.indexOf(this.deleteDescription);
    console.log(this.name + ".deleteRow() - index: " + index);
    var deletedItemsArray = this.todos.splice(index, 1);
    console.log(this.name + ".deleteRow() - deletedItemsArray.length: " + deletedItemsArray.length);
    console.log(this.name + ".deleteRow() - deletedItemsArray[0].id: " + deletedItemsArray[0].id);
    console.log(this.name + ".deleteRow() - deletedItemsArray[0].description: " + deletedItemsArray[0].description);
    //FIX: Isn't actually deleting the row!
    this.showDeleteSnackbar = true;
},
I gave npm run watch a moment to compile the code, cloned the browser window, and refreshed. This time, the page rendered immediately. It looks as if I created an infinite loop and Laravel apparently failed to detect it and shut it down! I am astonished. I thought professional development software had been monitoring for that kind of thing, at compile time and at execution time, for decades. I've been coding for multiple decades and I haven't seen a runaway loop since very early in my coding days.
I can see immediately that I need a this in front of deleteID in the loop. I'm surprised that the compiler didn't jump on that! (Actually, I think the logic still wouldn't work with that change, so I'm going to take a few minutes to relearn how to manipulate arrays of objects in JavaScript.)
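For what it's worth, I believe something like this is what I was reaching for - an untested sketch using Array.prototype.findIndex in place of my hand-rolled loop:

deleteRow() {
    this.deleteRowFlag = false;
    this.deleteMessage = "Task deleted";
    // findIndex returns the position of the first element whose id matches,
    // or -1 if none does; note the this. prefix on deleteID this time
    const index = this.todos.findIndex(todo => todo.id == this.deleteID);
    if (index !== -1) {
        this.todos.splice(index, 1); // splice mutates in place, which Vue's reactivity tracks
    }
    this.showDeleteSnackbar = true;
},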
As for the network tab in the browser, there was nothing in it. There was nothing in most of the tabs. And there was nothing in the npm run watch terminal window after I'd written the bad code.

Related

Can I use variables across all the threads in the thread groups in jmeter?

I'm trying to create a test plan for rate-limiting behavior.
I set a rule that blocks after X requests per minute, and I want to check that I get response code 200 until I reach X requests, and from then on, 429. I created a counter shared between all the threads, but it seems to be a mess because it's not thread-safe.
This is my Beanshell script in the "Once Only Controller":
String props_pre_fix = ${section_id} + "-" + ${START.HMS};
props.remove("props_pre_fix" + ${section_id}, props_pre_fix);
props.put("props_pre_fix" + ${section_id}, props_pre_fix);
props.put(props_pre_fix + "_last_response_code", "200");
props.put(props_pre_fix + "_my_counter", "0");
and this is the Beanshell Assertion:
String props_pre_fix = props.get("props_pre_fix" + ${section_id});
//log.info("props_pre_fix " + props_pre_fix);
//extract my counter from props
int my_counter = Integer.parseInt(props.get(props_pre_fix + "_my_counter"));
//extract last response code
String last_response_code = props.get(props_pre_fix + "_last_response_code");
log.info("last_response_code " + last_response_code);
//a 429 followed by a 200 means we have moved into a new minute - reset the counter
if (last_response_code.equals("429") && ResponseCode.equals("200")) {
    log.info("we moved to a new minute - my_counter should be zero");
    my_counter = 0;
}
//increase counter
my_counter++;
log.info("set counter with value: " + my_counter);
//save counter
props.put(props_pre_fix + "_my_counter", my_counter + "");
log.info("counter has been set with value: " + my_counter);
if (ResponseCode.equals("200")) {
    props.put(props_pre_fix + "_last_response_code", "200");
    if (my_counter <= ${current_limit}) {
        Failure = false;
    }
    else {
        Failure = true;
        FailureMessage = "leakage of " + (my_counter - ${current_limit}) + " requests";
    }
}
else if (ResponseCode.equals("429")) {
    props.put(props_pre_fix + "_last_response_code", "429");
    if (my_counter > ${current_limit}) {
        Failure = false;
    }
}
I'm using props to share the counter, but this clearly isn't the right way to do it.
Can you suggest how to do this properly?
I don't think it is possible to automatically test this requirement using JMeter Assertions, because you don't have access to the current throughput. I would rather recommend cross-checking the Response Codes per Second and Transactions per Second charts (both can be installed using the JMeter Plugins Manager).
All the 200 and 429 responses can be marked as successful using a Response Assertion configured to accept both response codes.
If for some reason you still want to do this programmatically, you might want to take a look at the Summariser class source, which is used for displaying current throughput on STDOUT.
Also be aware that, starting from JMeter 3.1, you should be using JSR223 Test Elements and the Groovy language for scripting.
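If you do still go the programmatic route, a rough JSR223 Assertion equivalent of the Beanshell code above might look like this in Groovy (an untested sketch; prev, vars, props and AssertionResult are the standard JSR223 bindings, and note the read-increment-write on the counter is still not atomic across threads):

String prefix = props.get("props_pre_fix" + vars.get("section_id"))
String code = prev.getResponseCode()
int myCounter = (props.get(prefix + "_my_counter") ?: "0") as int
String lastCode = props.get(prefix + "_last_response_code")

// a 429 followed by a 200 means the rate-limit window rolled over
if (lastCode == "429" && code == "200") {
    myCounter = 0
}
myCounter++
props.put(prefix + "_my_counter", myCounter as String)
props.put(prefix + "_last_response_code", code)

int limit = vars.get("current_limit") as int
if (code == "200" && myCounter > limit) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage("leakage of " + (myCounter - limit) + " requests")
}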

How to purge old content in firebase realtime database

I am using the Firebase Realtime Database and, over time, a lot of stale data has accumulated in it, so I have written a script to delete the stale content.
My Node structure looks something like this:
store
- {store_name}
- products
- {product_name}
- data
- {date} e.g. 01_Sep_2017
- some_event
Scale of the data
#Stores: ~110K
#Products: ~25
Context
I want to clean up all data that is more than 30 months old. I tried the following approach:
For each store, traverse all the products and, for each date, delete the node.
I ran ~30 threads/script instances, and each thread is responsible for deleting a particular date of data in that month. The whole script runs for ~12 hours to delete one month of data with the above structure.
I have placed a limit/cap on the number of pending calls in each script, and it is evident from logging that each script reaches the limit very quickly and that delete calls are fired much faster than Firebase completes the deletions, so Firebase becomes the bottleneck.
It is also pretty evident that I am running the purge script client-side; to gain performance, the script should be executed close to the data to save network round-trip time.
Questions
Q1. How can I delete old Firebase nodes efficiently?
Q2. Is there a way to set a TTL on each node so that it cleans up automatically?
Q3. I have confirmed on multiple nodes that the data has been deleted, but the Firebase console is not showing a decrease in data size. I also took a backup and it still contains data that is not there when I check the nodes manually. I want to know the reason behind this inconsistency.
Does Firebase do soft deletions? That is, when we take backups, is the data actually still there but hidden from the Firebase SDK and console because they understand soft deletes while backups don't?
Q4. For the whole duration my script is running, I see a continuous rise in the bandwidth section. With the script below I am only firing delete calls and not reading any data, yet I still see consistent database reads. Have a look at this screenshot.
Is this because of callbacks for the deleted nodes?
Code
var stores = [];
var storeIndex = 0;
var products = [];
var productIndex = -1;

const month = 'Oct';
const year = 2017;

if (process.argv.length < 4) {
    console.log("Usage: node purge.js $beginDate $endDate i.e. node purge 1 2 | Exiting..");
    process.exit();
}

var beginDate = process.argv[2];
var endDate = process.argv[3];

var numPendingCalls = 0;
const maxPendingCalls = 500;

/**
 * Url Pattern: /store/{domain}/products/{product_name}/data/{date}
 * date Pattern: 01_Jan_2017
 */
function deleteNode() {
    var storeName = stores[storeIndex],
        productName = products[productIndex],
        date = (beginDate < 10 ? '0' + beginDate : beginDate) + '_' + month + '_' + year;
    numPendingCalls++;
    db.ref('store')
        .child(storeName)
        .child('products')
        .child(productName)
        .child('data')
        .child(date)
        .remove(function() {
            numPendingCalls--;
        });
}

function deleteData() {
    productIndex++;
    // When all products for a particular store are complete, start on the next store for the given date
    if (productIndex === products.length) {
        if (storeIndex % 1000 === 0) {
            console.log('Script: ' + beginDate, 'PendingCalls: ' + numPendingCalls, 'StoreIndex: ' + storeIndex, 'Store: ' + stores[storeIndex], 'Time: ' + (new Date()).toString());
        }
        productIndex = 0;
        storeIndex++;
    }
    // When all stores have been completed, start deleting for the next date
    if (storeIndex === stores.length) {
        console.log('Script: ' + beginDate, 'Successfully deleted data for date: ' + beginDate + '_' + month + '_' + year + '. Time: ' + (new Date()).toString());
        beginDate++;
        storeIndex = 0;
    }
    // When you have reached endDate, all data has been deleted; stop
    if (beginDate > endDate) {
        console.log('Script: ' + beginDate, 'Deletion script finished successfully at: ' + (new Date()).toString());
        process.exit();
        return;
    }
    deleteNode();
}

function init() {
    console.log('Script: ' + beginDate, 'Deletion script started at: ' + (new Date()).toString());
    getStoreNames(function() {
        getProductNames(function() {
            setInterval(function() {
                if (numPendingCalls < maxPendingCalls) {
                    deleteData();
                }
            }, 0);
        });
    });
}
PS: This is not the exact structure I have, but it is very similar to what we have (I have changed the node names and tried to make the example realistic).
Whether the deletes can be done more efficiently depends on how you do them now. Since you didn't share the minimal code that reproduces your current behavior, it's hard to say how to improve it.
There is no support for a time-to-live property on nodes. Typically developers do the clean-up in an administrative program/script that runs periodically. The more frequently you run the cleanup script, the less work it has to do, and thus the faster it will be.
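As a sketch of what such a periodic cleanup can look like with the Admin SDK - note this assumes each node stores a timestamp child in epoch milliseconds, which the structure in the question doesn't have, so treat it as illustrative only:

const admin = require('firebase-admin');
admin.initializeApp(); // assumes default credentials are available

// Delete children older than the cutoff, in batches of 500.
const cutoff = Date.now() - 30 * 30 * 24 * 60 * 60 * 1000; // ~30 months
const ref = admin.database().ref('store');

ref.orderByChild('timestamp')
    .endAt(cutoff)
    .limitToFirst(500)
    .once('value')
    .then(snapshot => {
        const updates = {};
        snapshot.forEach(child => { updates[child.key] = null; });
        // A single multi-location update with null values removes all
        // matched children in one round trip.
        return ref.update(updates);
    });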
Also see:
Delete firebase data older than 2 hours
How to delete firebase data after "n" days
Firebase actually deletes the data from disk when you tell it to. There is no way through the API to retrieve it, since it is really gone. But if you have a backup from a previous day, the data will of course still be there.

Understanding the output log's auto layout data

I'm debugging a crash that I believe is auto layout related. When the crash occurs, I get an enormous dump of information on the output area that begins like this:
2015-06-04 13:23:44.158 SpeedySend[22084:861374] Objective: {objective
0x7f99e06b3730: <500:242.5, 250:18443.5> +
<500:1>*0x7f99e061e570.negError{id: 4899} +
<500:1>*0x7f99e061e570.posErrorMarker{id: 4898} + <500:1,
250:-1>*0x7f99e061f940.negError{id: 4913} + <500:1,
250:1>*0x7f99e061f940.posErrorMarker{id: 4912} + <500:1,
250:-1>*0x7f99e061fb40.negError{id: 4915} + <500:1,
250:1>*0x7f99e061fb40.posErrorMarker{id: 4914} + <500:1,
250:-2>*0x7f99e0620890.negError{id: 4807} + <500:1,
250:2>*0x7f99e0620890.posErrorMarker{id: 4806} +
<500:2>*0x7f99e06496f0.posErrorMarker{id: 4916} +
<500:2>*0x7f99e0649f40.posErrorMarker{id: 4920} + <50 ...
and then runs on for a very long time and ends like this:
... 250:-1>*0x7f99e1d77ec0.negError{id: 5023} + <800:1,
250:1>*0x7f99e1d77ec0.posErrorMarker{id: 5022} + <500:1,
250:-1>*0x7f99e1d78150.negError{id: 5025} + <500:1,
250:1>*0x7f99e1d78150.posErrorMarker{id: 5024} +
<500:1>*0x7f99e1d78310.negError{id: 5027} +
<500:1>*0x7f99e1d78310.posErrorMarker{id: 5026} +
<500:1>*0x7f99e1d78620.negError{id: 5045} +
<500:1>*0x7f99e1d78620.posErrorMarker{id: 5044} +
<500:1>*0x7f99e1d788c0.negError{id: 5031} +
<500:1>*0x7f99e1d788c0.posErrorMarker{id: 5030} +
<500:1>*0x7f99e1d78d30.negError{id: 5033} +
<500:1>*0x7f99e1d78d30.posErrorMarker{id: 5032} +
<500:1>*0x7f99e1d790a0.negError{id: 5035} +
<500:1>*0x7f99e1d790a0.posErrorMarker{id: 5034} +
<500:1>*0x7f99e1d79460.negError{id: 5037} +
<500:1>*0x7f99e1d79460.posErrorMarker{id: 5036} +
<500:1>*0x7f99e1d79840.negError{id: 5039} +
<500:1>*0x7f99e1d79840.posErrorMarker{id: 5038} +
<500:1>*0x7f99e1d79c50.negError{id: 5041} +
<500:1>*0x7f99e1d79c50.posErrorMarker{id: 5040} +
<500:1>*0x7f99e1d7a080.negError{id: 5043} +
<500:1>*0x7f99e1d7a080.posErrorMarker{id: 5042} +
<500:1>*0x7f99e1d7aa60.negError{id: 5047} +
<500:1>*0x7f99e1d7aa60.posErrorMarker{id: 5046} +
<500:-7.45058e-08>*0x7f99e1f7ae60.negError{id: 3600}}
I would like to understand this data better as an aid to debugging my problem.
Is there a document or a posting that I can access that explains the format and meaning of this data?
Like, for instance, what does something like <500:1,250:-1> represent?
What is a negError?
And, most importantly, can something like {id: 3600} be tied back to a specific control that auto layout is laying out for me?
I'm particularly interested in the last questions because I've read here that very small numbers, when seen in these dumps, can indicate a crash due to accumulated loss of floating point precision in the auto layout engine.
You'll note that I have such a number on the very last line of my output data. So, if I can relate {id: 3600} back to one of my controls, I hope that will put me close to the origin of the problem.

Alternatives to macros for accessing data objects

I'm about to begin implementing a new version of an email marketing program for my company. The old version of the program depended heavily on macros and has about 2000 lines of them to prepare the data for an email campaign run. But I have read somewhere that macros are not the best solution for such heavy tasks and that it's better to keep them for simple things.
I'm quite new to QV and I'm the kind of person who likes to learn as I go rather than finish a big reference book before starting a project. I'm good at C# and Java, but I realized QlikView macros are written in either VBScript or JScript. I have no experience with them whatsoever, but they don't look very complicated at first glance.
What I was wondering is whether there is a better way of handling data in QlikView. Can I use another programming language, or do you suggest I stick to the script languages provided by QV? One big problem I've seen is that as macros get larger they become very hard to debug.
In the old version of our program, developed by a colleague who has since left the company, as soon as there was an error in preparing the data, all we got was the macro window with no clue about where the error had taken place. Since I would like to implement this project incrementally, little by little, I would like a good mechanism for troubleshooting rather than combing through a 2000-line script to find where the problem comes from.
Your suggestions about how to bring this project to a safe shore are very welcome, as are any good plugins or 3rd-party apps to monitor the data and facilitate my implementation.
We are an outlier here: we use the OCX and connect to it from WinForms.
We then have standard C# code with everything debuggable, and it makes everyone here very happy indeed after spending endless amounts of time debugging JavaScript.
The users use QV to select data; we then use the selection event in the OCX, pull the selected data from QV for postprocessing, and tag the QV data with a dynamic SQL update.
I do not necessarily recommend this method, but it has dramatically increased our development output when using QV for data mining and then processing the selected data.
For our next project we are not going to use the OCX. Instead, all the business logic and postprocessing is in a COM-visible C# DLL that we access through a VBScript macro.
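To make the DLL approach concrete, here is a minimal sketch; it assumes the assembly is registered for COM interop (e.g. with regasm /codebase), and the CampaignLib / CampaignProcessor names are invented for illustration:

using System.Runtime.InteropServices;

namespace CampaignLib
{
    // Exposed to COM so a QlikView macro can create it via
    // CreateObject("CampaignLib.CampaignProcessor").
    [ComVisible(true)]
    [ClassInterface(ClassInterfaceType.AutoDual)]
    public class CampaignProcessor
    {
        // All the heavy lifting happens here, fully debuggable in Visual Studio.
        public string PrepareCampaign(string pipeDelimitedIds)
        {
            return "Prepared " + pipeDelimitedIds.Split('|').Length + " recipients";
        }
    }
}

The VBScript macro side then stays tiny:

set proc = CreateObject("CampaignLib.CampaignProcessor")
msgbox proc.PrepareCampaign("1|2|3")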
EDIT. More details
This is the current setup communicating with the document through the OCX.
Change a selection:
axQlikMainApp.ActiveDocument.Fields("%UnitID").Clear();
var selSuccess = axQlikMainApp.ActiveDocument.Fields(cls.QlikView.QvEvalStr.Fields.UnitId).Select("(%UnitID)");
Reset a sheet object:
axQlikMainApp.ActiveDocument.ClearCache();
axQlikMainApp.ActiveDocument.GetSheetObject("Document\\MySheetObjectName").Restore();
Get a string from QV:
string res = axQlikMainApp.ActiveDocument.Evaluate("=concat(Distinct myField1 &'|' & MyField2,'*')");
And it can get horribly complicated:
string res = axQlikMainApp.ActiveDocument.Evaluate(
    "=MaxString({1 <%UnitID= {" + sUnitIds + @"}>}'<b>' & UnitName & '</b> \r\n bla bla bla:' & UnitNotesPlanning) & " +
    "'\n title1: ' & Count({1 <%UnitID= {" + sUnitIds + @"},%ISODate={'" + qlickViewIsoDate + "'},Need = {'Ja'}" + MinusCalc + ">}Distinct %CivicRegNo) & " +
    "'\n Title2: ' & Count({1 <%UnitID= {" + sUnitIds + @"},%ISODate={'" + qlickViewIsoDate + "'} " + recallMinusCalc + ">}DISTINCT RevGUID) & " +
    "'\n Title3: ' & Count({1 <%UnitID= {" + sUnitIds + @"},%ISODate={'" + qlickViewIsoDate + "'},Need2 = {'Ja'}>}Distinct %CivicRegNo) & '" +
    "\n Title4:' & MinString({1 <%UnitID= {" + sUnitIds + @"},FutureBooking = {1}>} Date(BookingStart) & ' Beh: ' & If(IsNull(ResourceDisplayedName),'_',ResourceDisplayedName)) &'" +
    "\n Title5:' & MaxString({1 <%UnitID= {" + sUnitIds + @"},FutureBooking = {0}>} Date(BookingStart) & ' Beh: ' & If(IsNull(ResourceDisplayedName),'_',ResourceDisplayedName)) &''" +
    " & MaxString({1 <%UnitID= {" + sUnitIds + @"}>}if(UnitGeo_isRelocatedOnSameGeo=1,'\nOBS! Multiple geo addresses. Zoom!',''))" +
    ""
);

Entity Framework SaveChanges() first call is very slow

I appreciate that this issue has been raised a couple of times before, but I can't find a definitive answer (maybe there isn't one!).
Anyway the title tells it all really. Create a new context, add a new entity, SaveChanges() takes 20 seconds. Add second entity in same context, SaveChanges() instant.
Any thoughts on this? :-)
============ UPDATE =============
I've created a very simple app running against my existing model to show the issue...
public void Go()
{
    ModelContainer context = new ModelContainer(DbHelper.GenerateConnectionString());
    for (int i = 1; i <= 5; i++)
    {
        DateTime start = DateTime.Now;
        Order order = context.Orders.Single(c => c.Reference == "AA05056");
        DateTime end = DateTime.Now;
        double millisecs = (end - start).TotalMilliseconds;
        Console.WriteLine("Query " + i + " = " + millisecs + "ms (" + millisecs / 1000 + "s)");

        start = DateTime.Now;
        order.Note = start.ToLongTimeString();
        context.SaveChanges();
        end = DateTime.Now;
        millisecs = (end - start).TotalMilliseconds;
        Console.WriteLine("SaveChanges " + i + " = " + millisecs + "ms (" + millisecs / 1000 + "s)");

        Thread.Sleep(1000);
    }
    Console.ReadKey();
}
Please do not comment on my code - unless it is an invalid test ;)
The results are:
Query 1 = 3999.2288ms (3.9992288s)
SaveChanges 1 = 3391.194ms (3.391194s)
Query 2 = 18.001ms (0.018001s)
SaveChanges 2 = 4.0002ms (0.0040002s)
Query 3 = 14.0008ms (0.0140008s)
SaveChanges 3 = 3.0002ms (0.0030002s)
Query 4 = 13.0008ms (0.0130008s)
SaveChanges 4 = 3.0002ms (0.0030002s)
Query 5 = 10.0005ms (0.0100005s)
SaveChanges 5 = 3.0002ms (0.0030002s)
The first query takes time, which I assume is the view generation? Or the db connection?
The first save takes nearly 4 seconds; the more complex save in my real app takes over 20 seconds, which is not acceptable.
Not sure where to go with this now :-(
UPDATE...
SQL Profiler shows the first query and update are fast on the SQL side and no different from the later ones, so I know the delay is in Entity Framework, as I suspected.
It might not be the SaveChanges call - the first time you make any call to the database in EF, it has to do some initial code generation from the metadata. You can pre-generate this though at compile-time: http://msdn.microsoft.com/en-us/library/bb896240.aspx
I would be surprised if that's the only problem, but it might help.
Also have a look here: http://msdn.microsoft.com/en-us/library/cc853327.aspx
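In the meantime, a common workaround is to pay that start-up cost before the user notices it. Here is a sketch that warms EF up on a background thread at application start, reusing the ModelContainer and DbHelper types from the question, so adjust to your model:

using System.Linq;
using System.Threading.Tasks;

public static class EfWarmUp
{
    public static void Start()
    {
        // Touching the context once forces EF's metadata loading and view
        // generation, so the first real query/SaveChanges issued later no
        // longer pays the multi-second first-call penalty.
        Task.Factory.StartNew(() =>
        {
            using (var context = new ModelContainer(DbHelper.GenerateConnectionString()))
            {
                context.Orders.Any();
            }
        });
    }
}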
I would run the following code on app start-up, see how long it takes, and check whether the first SaveChanges is fast after that.
public static void UpdateDatabase()
{
    //Note: Using SetInitializer is recommended by Ladislav Mrnka (reputation 275k):
    //http://stackoverflow.com/questions/9281423/entity-framework-4-3-run-migrations-at-application-start
    Database.SetInitializer<DAL.MyDbContext>(
        new MigrateDatabaseToLatestVersion<DAL.MyDbContext,
            Migrations.MyDbContext.Configuration>());

    using (var db = new DAL.MyDbContext()) {
        db.Database.Initialize(false); //Execute the migrations now, not at the first access
    }
}
