ReadLn with TIdCmdTCPClient - Indy

I have a TIdCmdTCPClient which receives LF-terminated commands from a TCP server (written in C) in its command handlers and updates a UI accordingly via TIdNotify. All is fine, except that sometimes I need to talk to the server in the traditional way using WriteLn and ReadLn. When I try to do that, there are problems: the UI freezes, subsequent commands arrive late, and so on.
Is there a specific way to make the WriteLn/ReadLn pair work with TIdCmdTCPClient the same way it works with TIdTCPClient?

Please provide more information about the protocol you are implementing. You can certainly issue additional WriteLn() and ReadLn() calls while you are inside a command handler event, as long as that is what the server is expecting you to do. But if you need to call ReadLn() out-of-band, then you are going to conflict with TIdCmdTCPClient's internal reading.

Automatic reconnect in case of network failures

I am testing the .NET version of ZeroMQ to understand how to handle network failures. I put the server (PUB socket) on an external machine and am debugging the client (SUB socket). If I stop my local Wi-Fi connection for a few seconds, ZeroMQ automatically recovers and I even get the remaining values. However, if I disable Wi-Fi for a longer time, like a minute, it just gets stuck waiting on a frame. How can I configure the period within which ZeroMQ is still able to recover? And how can I reconnect manually after, say, several minutes? How can I tell that the socket is stuck and that I need to kill it and open it again?
Q :" How can I configure this ... ?"
A :Use the .NET versions of zmq_setsockopt() detailed parameter settings - family of link-management parameters alike ZMQ_RECONNECT_IVL, ZMQ_RCVTIMEO and the likes.
All other questions depend on your code.
If using blocking-forms of the .recv()-methods, you can easily throw yourself into unsalvageable deadlocks, best never block your own code ( why one would ever deliberately lose one's own code domain-of-control ).
If in a need to indeed understand low-level internal link-management details, do not hesitate to use zmq_socket_monitor() instrumentation ( if not available in .NET binding, still may use another language to see details the monitor-instance reports about link-state and related events ).
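For reference, here is roughly what setting those link-management options looks like. This sketch uses the pebbe/zmq4 Go binding rather than the .NET binding the question is about, and the setter names are that binding's wrappers around the underlying libzmq options, so treat it as an illustration of ZMQ_RECONNECT_IVL / ZMQ_RCVTIMEO, not as the NetMQ API:
```
package main

import (
	"log"
	"time"

	zmq "github.com/pebbe/zmq4"
)

func main() {
	sub, err := zmq.NewSocket(zmq.SUB)
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Close()

	// ZMQ_RECONNECT_IVL / ZMQ_RECONNECT_IVL_MAX: how aggressively libzmq
	// retries the underlying transport after a link failure.
	sub.SetReconnectIvl(500 * time.Millisecond)
	sub.SetReconnectIvlMax(30 * time.Second)

	// ZMQ_RCVTIMEO: make Recv() return an error instead of blocking forever,
	// so the application itself can decide the link is dead.
	sub.SetRcvtimeo(5 * time.Second)

	sub.SetSubscribe("")
	if err := sub.Connect("tcp://publisher-host:5556"); err != nil { // hypothetical endpoint
		log.Fatal(err)
	}

	for {
		msg, err := sub.Recv(0)
		if err != nil {
			// Timed out: no data arrived within ZMQ_RCVTIMEO. This is the
			// point at which you can tear the socket down and rebuild it
			// manually instead of waiting indefinitely.
			log.Println("no data:", err)
			continue
		}
		log.Println("got:", msg)
	}
}
```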
I was able to find an answer on their GitHub: https://github.com/zeromq/netmq/issues/845. It seems the behaviour is by design, as I got the same result with the native zmq library via a .NET binding.

Synchronous VB6 apparently behaving asynchronously! Crash

We have a legacy VB6 app which has started to hang from time to time. We thought it might be related to a shift to Citrix, but we can now replicate the behaviour on a thick client on Win10. We don't think we have seen this before on earlier Windows versions, but we are still checking logs to confirm that.
We experience the behaviour when tabbing into a text box and then tabbing out. As we pass through it, we make a simple ADO call to look up/validate some data in the text box. As part of the correct program flow we log:
“Opening Dataset: SELECT ... FROM ... ”
“Opened Dataset”
Between these two log statements is simple ADO data-retrieval code with which we have had no problems previously. It is in an ActiveX DLL and runs synchronously. Most importantly, between these two log statements there is no DoEvents or API call which would yield control. As far as we can see, it should be a purely synchronous operation.
When the system crashes, which happens sporadically, we can see other logging statements appear between these two. They are either resource status (e.g. how much memory or how many GDI/USER objects - which would usually appear because a timer has fired in the main form) or focus-type events - which aren't timer driven, at least in our codebase.
“Opening Dataset: SELECT ... FROM ... ”
“Resource Status: ...”
“Opened Dataset”
or
“Opening Dataset: SELECT ... FROM ... ”
“TextItem.OnLostFocus Item1 ...”
“TextItem.Validate ...”
“TextItem.OnGotFocus Item2 ...”
“Opened Dataset”
So my initial question is: in what scenario can what should be a synchronous operation be interrupted and appear to act asynchronously?
For example (and we aren't doing this), I could imagine writing some unsafe code whereby, by using a multimedia timer (on another thread) and supplying an AddressOf parameter pointing at a function in one of our modules, that timer initiates execution of our code outside the correct control flow. Other than something like that, I just can't see how synchronous VB6 code could be interrupted in this way.
I'd be really grateful for any thoughts, suggestions or advice. I'm sorry if this is so vague; it perhaps reflects how I'm struggling to get my head around this problem.
Just to say, we tracked this down to Windows 10 plus an old (out-of-support) socket component we are using. It looks like it pumps the message queue "at the wrong time", and hence we see UI events appear in the middle of a synchronous process. We don't see this behaviour on earlier Windows versions.
I don't know what may have changed in Win10 which would result in this, but we obviously need to upgrade.
In our case we had a few long-running timers pulling status/changes from the DB, which caused this. We are using ADO with SQL Native Client and MARS, which worked great up until Windows 10, where intermittent lock-ups occurred. Logging and WinDbg confirmed this was happening when two requests were hitting the ADO connection at the same time. The error from ADO was "Unable to open a logical session", error number -2147467259, and it actually caused SQL Server 2014 (running on another machine) to block all other client queries from multiple different applications and machines until the locked-up app was killed. I could not replicate this in the IDE, as apparently that forces timers to work the way they always did. The fix was to make our ADO implementation asynchronous and put a connection manager on top of the SQL connections to force requestors to wait their turn (basically taking the Win10 async'd timer behaviour back out). The only performance impact was the few extra milliseconds of delay to a timer-fired SQL query when it collided with another query.
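The fix described above boils down to a gatekeeper that makes callers take turns on one non-reentrant connection. A minimal sketch of that idea, in Go rather than VB6/ADO and purely to illustrate the shape of the "connection manager" (all names are invented):
```
package main

import (
	"fmt"
	"sync"
	"time"
)

// ConnManager serializes access to a single, non-reentrant connection so a
// timer-driven query can never overlap an interactive one.
type ConnManager struct {
	mu sync.Mutex
}

// Query makes each caller wait its turn before touching the connection.
func (c *ConnManager) Query(sql string) string {
	c.mu.Lock() // later callers block here until the current one finishes
	defer c.mu.Unlock()
	time.Sleep(50 * time.Millisecond) // stand-in for the real round trip
	return "rows for: " + sql
}

func main() {
	var cm ConnManager
	var wg sync.WaitGroup
	// Two "colliding" callers, e.g. a timer-fired status query and a
	// validation query triggered from the UI.
	for _, q := range []string{"SELECT status", "SELECT validation"} {
		wg.Add(1)
		go func(q string) {
			defer wg.Done()
			fmt.Println(cm.Query(q))
		}(q)
	}
	wg.Wait()
}
```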

Best practice for updating Go web application

I am wondering what the best practice would be for deploying updates to an (MVC) Go web application. Imagine the following scenario:
1) Code and test some changes for my Go web application.
2) Deploy the update without interrupting anyone currently using the previous version.
I don't know how to make sure point 2) is covered: if somebody sends a request to the server and I rebuild/restart it at just that moment, they get an error - even if the request only uses a part of the code I did not touch or that is backwards compatible, or if I just added a new request handler.
Maybe I'm missing something trivial or a well-known pattern, as I am just in the process of learning Go, and my previous web applications were ASP.NET or PHP applications, where this was not an issue because I did not need to restart the web server on code changes.
This is not just an issue with Go; in general, we can divide the problem into two separate ones:
Making sure current requests do not get terminated and affect user experience.
Making sure there is no down-time in which new requests cannot be handled.
The first one is easier to tackle: you just don't kill your server violently, but tell it to exit, triggering a "drain phase" in which it accepts no new requests, finishes the currently running ones, and then exits. This can be done by listening for signals, for example, and putting the app into a special state.
It's not trivial with Go, as the default HTTP server doesn't support shutting itself down, but you can start a server with a net.Listener, keep a reference to it, and close it when the time is due.
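A minimal sketch of that drain phase, assuming a Go release of 1.8 or later where http.Server gained a Shutdown method (on older versions you would close the net.Listener yourself, as described above):
```
package main

import (
	"context"
	"log"
	"net"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello\n"))
	})

	// Keep a reference to the listener, as suggested above.
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	srv := &http.Server{Handler: mux}

	drained := make(chan struct{})
	go func() {
		sig := make(chan os.Signal, 1)
		signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
		<-sig // enter the "drain phase" on SIGINT/SIGTERM

		// Shutdown stops accepting new connections and waits for in-flight
		// requests to finish (bounded by the context deadline).
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		if err := srv.Shutdown(ctx); err != nil {
			log.Printf("shutdown: %v", err)
		}
		close(drained)
	}()

	// Serve returns http.ErrServerClosed as soon as Shutdown is called.
	if err := srv.Serve(ln); err != nil && err != http.ErrServerClosed {
		log.Fatal(err)
	}
	<-drained // wait until running requests have completed
	log.Println("drained, exiting")
}
```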
Now, doing only step one and then starting the service again means new requests will not be accepted while this is going on, and we all know that can take a number of seconds in extreme cases.
So what we need is another instance of the server, already running the new code, the instant the old one stops responding to new requests, right? That can be done in several ways:
Having more than one server, with a load balancer on top of them, allowing one (or more) servers to take the load while we restart another. That's the simplest way, and the way most people do it. If you need N servers to take the load of your users, just keep N+1 and restart one at a time.
Using socket-sharing tricks. On newer Linux kernels, many processes can listen and accept on the same port. What you do is simply start the new instance and then tell the old one to finish and exit; this way there is no pause. This is done by setting SO_REUSEPORT on the listening socket (a sketch follows at the end of this answer).
The above can be automated with ready-to-ship solutions, like Einhorn, that deal with all the details for you; see https://github.com/stripe/einhorn
Another approach is documented in this blog post: http://blog.nella.org/?p=879
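For the SO_REUSEPORT approach mentioned above, here is a minimal sketch. It assumes Linux, Go 1.11+ (for net.ListenConfig) and the golang.org/x/sys/unix package; with this in place, the old and the new binary can both accept on port 8080 while the old one drains and exits:
```
package main

import (
	"context"
	"log"
	"net"
	"net/http"
	"syscall"

	"golang.org/x/sys/unix"
)

// reusePortListener opens a TCP listener with SO_REUSEPORT set, so two
// instances of the binary can accept on the same port at the same time.
func reusePortListener(addr string) (net.Listener, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			err := c.Control(func(fd uintptr) {
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			})
			if err != nil {
				return err
			}
			return sockErr
		},
	}
	return lc.Listen(context.Background(), "tcp", addr)
}

func main() {
	ln, err := reusePortListener(":8080")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("served by one of the processes sharing the port\n"))
	})))
}
```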

How do I implement CloudPort-side logic? JScript or VBScript produce an ActiveX control fault (even when left as a dummy, with empty-body functions)

The team I am working with has bought a CloudPort license (from CrossCheck Networks) and we are currently facing the problem of not being able to implement any sort of logic in the service Mocks (to control response selection). It would be something as simple as:
if (requestCounter++ == 1)
then
    response = $fn:Global(MyFirstXmlString)$    // <-- this is CloudPort syntax for variables
else
    response = $fn:Global(MySecondXmlString)$
We did not find any sample for using the DLL plugin, and neither the JScript nor the VBScript task works (i.e., our client machine does not get back the desired MySecondXmlString response but instead a fault with
<faultstring>
ActiveX control '0e59f1d5-1fbe-11d0-8ff2-00a0d10038bc' cannot be instantiated
because the current thread is not in a single-threaded apartment.
</faultstring>.
Believe it or not, the fault above is returned even if the JScript or VBScript task is left empty! It's hard for us to believe that all the logic functionality advertised in the CloudPort UI is fake and that nothing can help one implement the kind of logic described above.
Any help would be appreciated!
Thanks,
Pompi
PS: A bit more detail on why the kind of logic described above is needed: we use SoapSonar in our testing framework to fire requests at a BizTalk orchestration application. The CloudPort mocks are needed to simulate the environment of that BizTalk orchestration. We cannot control individual mocked responses via SoapSonar requests: the client requests coming into CloudPort are made by production code and cannot be altered or controlled by our SoapSonar client. The only Tasks functionality that has worked for us is a DB table acting as an offline channel between SoapSonar and CloudPort (SoapSonar writes to it and CloudPort reads from it). CloudPort's reading of, say, response XMLs from the DB works fine, but we cannot find a way to implement further behaviour-controlling logic on the CloudPort side. Hence this Stack Overflow post. And thanks for having the patience to read this whole shenanigan :).
I don't think you can control this from the script.
The threading model should be controlled by the host, which I suppose uses Windows's "vbscript.dll" for the actual execution.
So if you cannot find any settings under the tool's options or in the help :), you should look in the registry keys for the threading options of that ActiveX control or of "vbscript.dll".
That is the "ThreadingModel" value; try changing its value (you will also have to search the net for the possible values - I don't know them by heart).
There is also a chance that some other application (an antivirus?) has changed the path to the DLL that the COM interface should actually point to (see http://social.technet.microsoft.com/Forums/en-US/ieitpropriorver/thread/ac10bd5f-6d91-4aac-857c-0ed5758088ec).
Hope it helps.

Reverse AJAX? Can data changes be 'PUSHED' to script?

I have noticed that some of my ajax-heavy sites (ones I visit, not ones I have built), have certain auto-refresh features. For example, in GMail, if I get a new message, I see the new message without a page reload. It's the same with the Facebook browser-based IM client. From what I can tell, there aren't any java applets handling the server-browser binding, so I'm left to assume it's being done by AJAX and perhaps some element I'm unaware of. So by my best guess, it's done in one of two ways:
The JavaScript does a steady "ping" to a server-side script, checking for any updates that might be available (which would explain why some of these pages bring other heavy-duty pages to a crawl); or
The JavaScript sits idly by and a server-side script actually "pushes" any updates to the browser. But I'm not sure if this is possible. I'd imagine there is some kind of AJAX function that still pings, but all it asks is "any updates?", and the server-side script has a simple boolean that says "nope" or "I'm glad you asked." But if this is the case, any data change would need to call the script directly so that it has the changes ready and flips that boolean.
So is that possible/feasible/how it works? I imagine something like:
Someone sends an email/IM/DB update to the server; the server calls the script using the script's URL plus some relevant GET variable; the script notes the change and updates the "updates available" variable; the AJAX gets the response that there are in fact updates; the AJAX runs its normal "update page" functions, which execute the normal update scripts and output them to the browser.
I ask because it seems really inefficient that the js is just doing a constant check which requires a) the server to do work every 1.5 seconds, and b) my browser to do work every 1.5 seconds just so that on my end I can say "Oh boy, I got an IM! just like a real IM client!"
Read about Comet
I've actually been working on a small .NET web app that uses the AJAX long-polling technique described.
Depending on what technology you're using, you could use thread-signaling mechanisms to hold your request until an update is retrieved.
With ASP.NET I'm running my server on a single machine, so I store a reference to my Producer object (which contains a thread that processes the data). To initiate the data pull, my service's Subscribe method is called, which creates a Consumer object that is registered with the Producer. If the Consumer is in long-polling mode, it has an AutoResetEvent which is signaled whenever it receives new data; whenever the web client makes a request for data, the Consumer first waits on the reset event and then returns the data.
But you're mentioning something about PHP - as far as I know, persistence there is maintained through serialization rather than by actually keeping the object in memory, so I don't know how you could reference a Producer object using $_CACHE[] or $_SESSION[]. When I developed in PHP I never really knew anything about multithreading, so I didn't play around with it, but I guess you can look into that.
Using infinite loops is going to consume a lot of your processing power - I would exhaust all other options first.
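The same long-polling idea as in the ASP.NET description above, sketched in Go (a channel plays the role of the AutoResetEvent; the handler parks the request until data arrives or a timeout elapses; all names here are illustrative):
```
package main

import (
	"fmt"
	"net/http"
	"time"
)

// updates is the hand-off point between the producer and waiting clients,
// roughly what the AutoResetEvent does in the ASP.NET version.
var updates = make(chan string)

func pollHandler(w http.ResponseWriter, r *http.Request) {
	select {
	case msg := <-updates:
		fmt.Fprintln(w, "update:", msg) // data arrived while we were parked
	case <-time.After(30 * time.Second):
		w.WriteHeader(http.StatusNoContent) // nothing new; client polls again
	case <-r.Context().Done():
		// client gave up; just return
	}
}

func main() {
	// Simulated producer: publish something every 45 seconds.
	go func() {
		for {
			time.Sleep(45 * time.Second)
			select {
			case updates <- "new mail":
			default: // no client parked right now; drop it
			}
		}
	}()

	http.HandleFunc("/poll", pollHandler)
	http.ListenAndServe(":8080", nil)
}
```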
