Swift 2: How do I force a session data task to execute .resume() before continuing?

I'm going crazy trying to figure out how to prevent the rest of my code from executing before a session dataTaskWithRequest is finished. No matter what I do, the data task won't .resume() until everything else finishes. Nothing works. Not using a while loop. Not putting the task inside a separate thread (dispatch_async). I'm at my wits' end. To make matters worse, I feel like I'm overlooking something simple, because it seems nobody else is asking this question.

NSURLSession is an asynchronous loading system. When you tell it to start, it begins fetching data in its own thread, and your code continues to run. If you want to have code that executes after the request returns data, the easiest way is to put that code in a completion handler block (by calling dataTaskWithRequest:completionHandler:).

Related

How to terminate long running function after a timeout

So I am attempting to shut down a long-running function when something takes too long. This may just be a solution that treats the symptoms rather than the cause, but in any case, for my situation it didn't really work out.
I did it like this:
func foo(abort <-chan struct{}) {
    for {
        select {
        case <-abort:
            return
        default:
            // long-running code
        }
    }
}
And in a separate function I have code which, after some time, closes the passed channel, which it does; if I cut out the body, the function returns. However, if there is some long-running code, closing the channel does not affect the outcome: the function simply continues the work as if nothing had happened.
I am pretty new to Go, and it feels like it should work, but it does not. Is there anything I am missing? After all, router frameworks have timeout functions, after which whatever is running is terminated. So maybe this is just out of curiosity, but I would really like to know how to do it.
Your code only checks whether the channel was closed once per iteration, before executing the long-running code. There's no opportunity to check the abort channel after the long-running code starts, so it will run to completion.
You need to occasionally check whether to exit early in the body of the long-running code, and this is more idiomatically accomplished using context.Context with WithTimeout; see, for example: https://pkg.go.dev/context#example-WithTimeout
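For illustration, here is a minimal runnable sketch of that approach; doWork is a hypothetical stand-in for one bounded chunk of the long-running work:

package main

import (
    "context"
    "fmt"
    "time"
)

// doWork stands in for one bounded chunk of the long-running code.
func doWork() { time.Sleep(100 * time.Millisecond) }

func foo(ctx context.Context) error {
    for {
        select {
        case <-ctx.Done():
            return ctx.Err() // context.DeadlineExceeded on timeout
        default:
            doWork() // one chunk; the check above runs between chunks
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()
    fmt.Println(foo(ctx)) // prints "context deadline exceeded"
}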
In your "long running code" you have to periodically check that abort channel.
The usual approach to implement that "periodically" is to split the code into chunks each of which completes in a reasonably short time frame (given that the system the process runs on is not overloaded).
After executing each such chunk, you check whether the termination condition holds, and terminate execution if it does.
The idiomatic approach to perform such a check is "select with default":
select {
case <-channel:
    // terminate processing
default:
}
Here, the default no-op branch is immediately taken if channel is not ready to be received from (or closed).
Some algorithms make such chunking easier because they employ a loop where each iteration takes roughly the same time to execute.
If your algorithm is not like this, you'd have to chunk it manually; in this case, it's best to create a separate function (or a method) for each chunk.
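A sketch of such manual chunking, with hypothetical stageA/stageB/stageC functions standing in for the pieces of the original algorithm, could look like this:

package main

import "time"

// Hypothetical chunks of the original algorithm; each completes quickly.
func stageA() { time.Sleep(50 * time.Millisecond) }
func stageB() { time.Sleep(50 * time.Millisecond) }
func stageC() { time.Sleep(50 * time.Millisecond) }

// process runs the chunks in order, checking the abort channel between them.
func process(abort <-chan struct{}) {
    for _, stage := range []func(){stageA, stageB, stageC} {
        select {
        case <-abort: // a closed channel is always ready to receive
            return
        default:
        }
        stage()
    }
}

func main() {
    abort := make(chan struct{})
    go func() { time.Sleep(75 * time.Millisecond); close(abort) }()
    process(abort) // stops before the next chunk once abort is closed
}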
Further points.
Consider using contexts: they provide a useful framework to solve the style of problems like the one you're solving.
Better still, the fact that they can "inherit" from one another allows one to easily implement two neat things:
You can combine various ways to cancel contexts: say, it's possible to create a context which is cancelled either when some timeout passes or explicitly by some other code.
They make it possible to create "cancellation trees" — when cancelling the root context propagates this signal to all the inheriting contexts — making them cancel what other goroutines are doing.
Sometimes, when people say "long-running code" they do not mean code actually crunching numbers on a CPU all that time, but rather code which performs requests to slow entities — such as databases, HTTP servers, etc. — in which case the code is not actually running but sleeping on the I/O, waiting for data to be delivered for processing.
If this is your case, note that all well-written Go packages (this of course includes all the packages of the Go standard library which deal with networked services) accept contexts in those API functions which actually make calls to such slow entities. This means that if you make your function accept a context, you can (and actually should) pass this context down the call stack where applicable — so that all the code you call can be cancelled in the same way as yours.
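For example, here is a minimal sketch of accepting a context and passing it down to a standard-library networked call (the URL is just a placeholder):

package main

import (
    "context"
    "fmt"
    "net/http"
    "time"
)

// fetch accepts a context and passes it down to the HTTP layer, so the
// request is cancelled together with everything else derived from ctx.
func fetch(ctx context.Context, url string) error {
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    if err != nil {
        return err
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err // includes context cancellation / timeout errors
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status)
    return nil
}

func main() {
    // The timeout context "inherits" from Background; cancelling a parent
    // context propagates down the whole tree of derived contexts.
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()
    fmt.Println(fetch(ctx, "https://example.com")) // placeholder URL
}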
Further reading:
https://go.dev/blog/pipelines
https://blog.golang.org/advanced-go-concurrency-patterns

How to test for infinite loop/recursion with ginkgo/gomega?

I have a golang function which recursively steps through a json string and replaces custom references with the json document they are referencing. I just noticed that I forgot to handle cyclic references, whose occurrence will lead to endless recursion.
Before I fix this, I'd like to write a test for this case (using ginkgo/gomega), so I can verify that my solution works and I will notice if I ever break it and run into this problem again.
But how do I do something like if this function call does not return within <timeout>, abort it and fail the test?
Gomega's Eventually has a timeout, but it doesn't abort the function if it is already running, so it will block forever in this case.
I found this example for how to check for a timeout using select and channels, but from what I understood, it is not possible to terminate a goroutine from outside - so my function will continue to run in the background, eating up resources?
What is the best way to check for infinite recursion?
You can't abort a running function. The function has to support cancellation, idiomatically through a context.Context or a channel. If you want to support timeouts or aborting, you have to change / refactor your function, and the function itself has to support this: e.g. it has to monitor the context.Context and return early if cancellation was requested. For details and an example, see Terminating function execution if a context is cancelled
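One way to express the timeout check in a test, under the caveat above (the goroutine cannot be killed, only observed), is to run the function in the background and assert on a done channel. Here resolveRefs is a hypothetical stand-in for the recursive function under test:

package mypkg_test

import (
    "testing"
    "time"

    . "github.com/onsi/gomega"
)

// resolveRefs is a hypothetical stand-in for the recursive function under
// test; the real one would take the JSON document as input.
func resolveRefs() {}

func TestNoInfiniteRecursion(t *testing.T) {
    g := NewWithT(t)

    done := make(chan struct{})
    go func() {
        resolveRefs() // run the function in the background...
        close(done)   // ...and signal when it returns
    }()

    // Fail the test if the function has not returned within one second.
    // On failure the goroutine keeps running until the test binary exits;
    // a goroutine cannot be aborted from the outside.
    g.Eventually(done, time.Second).Should(BeClosed())
}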
See related:
cancel a blocking operation in Go
Cancelling user specific goroutines

TOraDataSet blocking my program even with NonBlocking set to true

I was trying to make a little splash screen so my program could open queries without blocking my application.
The code I wrote is this:
procedure TOpenThread.OpenTable;
begin
  FActiveTable.NonBlocking := true;
  FActiveTable.Open;
end;

procedure TOpenThread.AbrirTablas;
begin
  FActiveTable := FOwnerPlan.TablasEdicion.Tabla;
  Synchronize(OpenTable);
  while FActiveTable.Executing do
  begin
    if Terminated then CancelExecution;
    Sleep(10);
  end;
  FActiveTable.NonBlocking := false;
end;
This code executes in a thread, and it keeps running while the main thread gets stuck.
I'm using Delphi 2007.
This code is executing in a thread
No, it does not. Your code is:
Synchronize(OpenTable);
This explicitly means the OpenTable procedure is executed within the main VCL thread, not inside your background auxiliary TOpenThread.
More details on Synchronize that you may try to learn from are at https://stackoverflow.com/a/44162039/976391
All in all, there are just no simple solutions to complex problems.
If you want to offload DB interactions into a separate thread, you would have to make that thread the exclusive owner and user of all DB components, starting from the very DB connection and up to every transaction and every query.
Then you would have to create means to ASYNCHRONOUSLY post data requests from the main VCL thread to the DB helper thread, and ASYNCHRONOUSLY receive data packets from it. Something like OmniThreadLibrary does with data streams - read their tutorials to get a gist of the internal program structure when using multithreading.
You may TRY to modify your application according to the following rules of thumb.
It would not be the fastest multithreading, but maybe the easiest (a sketch of the overall pattern follows after these rules).
all database components' work is done exclusively inside the TOpenThread.Execute context, and those components are local member variables of the TOpenThread class. Connection and disconnection happen only within TOpenThread.Execute; TOpenThread.Execute waits for commands from the main thread in an almost-infinite (until the thread gets terminated) and throttled loop.
specific database requests are made as anonymous procedures and are added to some TThreadedQueue<T> public member of the TOpenThread object. The loop inside .Execute tries to fetch an action from that queue and execute it, if one exists, or throttles (Yield()) if the queue was empty. Neither Synchronize nor Queue wrappers are allowed around database operations. The main VCL thread only posts the requests, but NEVER waits for them to be actually executed.
those anonymous procedures, after being executed, pass the database results back to the main thread. Like http://www.uweraabe.de/Blog/2011/01/30/synchronize-and-queue-with-parameters/ or like Sending data from TThread to main VCL Thread, or by any other back-into-main-thread way.
TOpenThread.Execute only exits the loop if the Terminated flag is set and the queue is empty. If Terminated is set, then an immediate exit would lose the actions still waiting in the queue unprocessed.
Seems boring and tedious but easy? Not at all: add that you would have to intercept exceptions and process all the errors in an async way, and you would "lose any hope entering this realm".
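Purely to illustrate the shape of this exclusive-owner pattern (not Delphi, but the structure translates), here is a minimal sketch in Go, where a channel of closures plays the role of the TThreadedQueue<T> of anonymous procedures and a goroutine plays the role of TOpenThread:

package main

import "fmt"

// request is the analogue of a queued anonymous procedure: it performs the
// "database" work, and its result is posted back asynchronously.
type request func() string

func main() {
    requests := make(chan request, 16) // analogue of TThreadedQueue<T>
    results := make(chan string, 16)   // back-into-main-thread channel

    // The worker is the exclusive owner of the (hypothetical) database
    // connection; nothing outside this goroutine touches the DB.
    go func() {
        for req := range requests { // wait for commands in a loop
            results <- req() // execute the request, post the result back
        }
        close(results)
    }()

    // The "main thread" only posts requests; it never waits for them here.
    requests <- func() string { return "result of a hypothetical query" }
    close(requests)

    for r := range results {
        fmt.Println("got:", r)
    }
}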
PS. Last but not least, about the "this code is executing in a thread, and keeps doing it while the main thread gets stuck" supposition: frankly, I guess you are wrong here, and I think BOTH your threads are stuck on one another.
Without fully understanding how thread-to-thread locking is designed to work in this specific component, carpet-bombing the code with calls to Synchronize and other inter-thread locking tools, you have quite a chance of chasing all your threads into a state of mutual lock, a deadlock. See http://stackoverflow.com/questions/34512/

Windows: Child Process with Redirected Input and Output

I'm trying to create a Child Process with Redirected Input and Output (as described here - http://msdn.microsoft.com/en-us/library/ms682499(VS.85).aspx).
For the people that don't want to bother reading the source code on that page, the author is using anonymous pipes to redirect the child's input and output. The parent process writes to the child process's input and reads from the child process's output.
In that code however, the program is closing the pipes after reading and writing (in WriteToPipe and ReadFromPipe), so actually the program just reads a file, dumps it on the child process input stream and then reads the child process response.
Now, what I'm looking for is a code where we will not close the pipes, but we will continuously post requests and read the child process response (in contrast to making just 1 request).
I've tried several modifications to the source code given at the link posted above, but no matter what I try, the program always hangs when calling ReadFile() in the ReadFromPipe() function (it probably waits for the child to quit - but as I said, I'd like to get the child's response and then send further requests to it).
Any ideas on how I can get over this?
Update:
Can anyone at least tell me whether using the .NET Process class with RedirectStandardInput and RedirectStandardOutput is a good option?
Had exactly the same problem, and solved it by using PeekNamedPipe (which according to MSDN is also fine for anonymous read pipes) to check for available data before each call to ReadFile. That removed the blocking issues I was getting, and allowed my GetExitCodeProcess() call to see that the process had exited and to clean up the pipes.
Yes - the .Net Process class redirects the standard input / output of the child process with anonymous pipes in a very similar way to the linked sample if you use RedirectStandardInput and RedirectStandardOutput, so this is probably a fairly good option.
As for why ReadFile is hanging - it sounds like this function is waiting for the child process to either send some data back, or to close the pipe. If you want to continuously post requests to the child process then you need to either:
Know exactly when it is appropriate to read so that you are not left waiting / blocked for the child process (so for example you only read immediately after a request has been sent). This strategy is very risky as there is always a chance that the child process doesn't behave as expected and you are left waiting on the child process indefinitely.
Have a dedicated thread for reading - if you have a dedicated thread for reading, then it doesn't matter that the thread may wait indefinitely on the child process, as your other threads are still able to send requests and operate as normal. This method is more complex than the first option, but when done properly it is far more robust (a sketch of this approach follows after this list). The only other drawback is that it requires an additional read thread for each child process, meaning that it doesn't scale very well if you need to communicate with a large number of child processes.
Use asynchronous IO - It is possible to call the ReadFile function in a way such that it always returns immediately, and notifies you when a read has completed (I'm a little fuzzy on the exact details of how this works, as I'm more used to C#). This is known as asynchronous IO and is the most versatile of these 3 methods, as it allows you to communicate with many child processes without needing a dedicated thread for each one. The tradeoff, however, is that it is also the most complex to do correctly (at least in my opinion).
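As an illustration of the dedicated-reader approach (option 2), here is a small sketch in Go rather than Win32 C, using cat as a stand-in line-echoing child process (substitute any line-oriented program); note that the pipes stay open across multiple requests:

package main

import (
    "bufio"
    "fmt"
    "os/exec"
)

func main() {
    // "cat" is a stand-in child that echoes each input line back.
    cmd := exec.Command("cat")
    stdin, err := cmd.StdinPipe() // child's redirected input
    if err != nil {
        panic(err)
    }
    stdout, err := cmd.StdoutPipe() // child's redirected output
    if err != nil {
        panic(err)
    }
    if err := cmd.Start(); err != nil {
        panic(err)
    }

    // Dedicated reader: it may block on the child indefinitely without
    // stalling the code that posts requests.
    lines := make(chan string)
    go func() {
        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            lines <- scanner.Text()
        }
        close(lines)
    }()

    // Post several requests and read each response; the pipes are not
    // closed between requests.
    for i := 0; i < 3; i++ {
        fmt.Fprintf(stdin, "request %d\n", i)
        fmt.Println("child answered:", <-lines)
    }
    stdin.Close() // only now does the child see EOF and exit
    cmd.Wait()
}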

Functions too fast? So they get skipped?

From a function, a div popover gets called and filled with dynamic data using Ajax, PHP, MySQL and some HTML/CSS. All goes fine.
When I want to delete an entry in the list that just popped over, it functions as it should. When I send an update request for my list, it also goes the way I want it. But when I call delete(); update(); right after each other, my first function gets skipped somehow.
When I place alert()s in both functions, I see both functions getting executed, and the scripts walk fine through my Ajax function and PHP Ajax handler and return the result back to the user; with the alerts on, all goes well too!
So my question is: are my functions too fast? Or is there something I'm missing here which is causing the non-delete?
Solution: I've moved the update call to the line after the xmlHttp.responseText in the delete function. That way, the second function call gets executed after the first function is done. Thanks all!
My guess would be that you haven't thought about the A in AJAX. It stands for asynchronous. That means that when you perform an XmlHttpRequest call, it will be executed in the background. I.e. after you've called delete(); the script will immediately continue and execute update();.
JavaScript will just execute the next statement while an Ajax call is going on. Most ways of using Ajax have an on-complete callback, so that code you want executed after the Ajax call runs only once it has finished.
I've not worked with PHP, but it may be worth looking into that.
It sounds like the two methods are executed at the same time (asynchronously), since it's AJAX.
You want them to be executed synchronously.
See this patterns page for more information... Ajax patterns
