Print from a Windows app - data from an Oracle DB on Linux

We have an Oracle DB running on Linux.
When data is ready to report, a value is placed in a table in the DB.
Presently an app is scheduled to run every 10 seconds to check for the value, and if it's there it prints out the report. Not pretty.
How can I make this pretty?
I sort of envision the Oracle DB somehow triggering the Windows server to print (TCP/IP? A small service listening on the Windows box) so that the Windows app only fires up when it's time to do work.
How would you get the Linux/Oracle system to "signal" the Windows box?

Actually, polling is a quick, cheap, resilient, and, best of all, already-implemented method of achieving your goal.
Remote DB-initiated triggers are wonderfully sexy, and it could be as simple as using CUPS with Samba and printing directly from the Linux box. Or it could be as complicated as a full two-way RPC with error checking.
Either way, don't fix things that aren't broken.
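That said, if you do decide to go the signalling route, the Windows side really can be tiny. Here is a rough sketch of such a listener, in Python purely for illustration; the port number, the one-line message format, and the report_printer.exe name are all assumptions, and the Oracle side would need something like UTL_TCP (or any other outbound TCP mechanism) to send the signal.

import socketserver
import subprocess

def print_report(report_id):
    # Hypothetical: launch the existing Windows reporting app for this report id.
    subprocess.run(["report_printer.exe", report_id], check=False)

class SignalHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # The Oracle side writes one report id per line and closes the connection.
        report_id = self.rfile.readline().decode("utf-8", "replace").strip()
        if report_id:
            print_report(report_id)

if __name__ == "__main__":
    # Listen on an arbitrary port (5151 here) for "time to print" signals.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 5151), SignalHandler) as server:
        server.serve_forever()

Whether that is worth replacing a working 10-second poll is, as noted above, debatable.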

Related

Synchronous VB6 apparently behaving asynchronously! Crash

We have a legacy VB6 app which has started, from time to time, to hang. We thought it might be related to a shift to Citrix, but we can now replicate the behaviour on a thick client on Win10. We don't think we have seen this before on earlier Windows versions, but we are still checking logs to confirm that.
We experience the behaviour when tabbing into a text box and then tabbing out. As we pass through it, we make a simple ADO call to look up/validate some data in the text box. As part of the correct program flow we log:
“Opening Dataset: SELECT ... FROM ... ”
“Opened Dataset”
Between these two log statements is simple ADO data retrieval code with which we have had no problems previously. It is in an ActiveX DLL and is running synchronously. Most importantly, between these two log statements there is no DoEvents or API call which would yield control. As far as we can see, it should be a purely synchronous operation.
When the system crashes, which happens sporadically, we can see other logging statements appear between these two. They can be either resource status (e.g. how much memory, GDI/User objects - which would usually appear because a timer has fired in the main form) or focus-type events - which aren't timer driven, at least in our codebase.
“Opening Dataset: SELECT ... FROM ... ”
“Resource Status: ...”
“Opened Dataset”
or
“Opening Dataset: SELECT ... FROM ... ”
“TextItem.OnLostFocus Item1 ...”
“TextItem.Validate ...”
“TextItem.OnGotFocus Item2 ...”
“Opened Dataset”
So my initial question is: in what scenario can what should be a synchronous operation be interrupted and appear to act asynchronously?
For example (and we aren't doing this), I could imagine writing some unsafe code whereby a multimedia timer (on another thread), given an AddressOf pointer to a function in one of our modules, initiates execution of our code outside the correct control flow. Other than something like that, I just can't see how synchronous VB6 code could be interrupted in this way.
I'd be really grateful for any thoughts, suggestions or advice. I'm sorry if this is so vague; it perhaps reflects how I'm struggling to get my head round this problem.
Just to say, we tracked this down to Windows 10 plus an old (out of support) socket component we are using. It looks like it is pumping the message queue "at the wrong time" and hence we are seeing UI events appear in the middle of a synchronous process. We don't see this behaviour on earlier Windows versions.
I don't know what may have changed in Win10 which would result in this, but we obviously need to upgrade.
In our case we had a few long-running timers pulling status/changes from the DB, which caused this. We are using ADO with SQL Native Client and MARS, which worked great up until Windows 10, where intermittent lock-ups occurred. Logging and WinDbg confirmed this was happening when two requests were hitting the ADO connection at the same time. The error from ADO was "Unable to open a logical session", error number -2147467259, and it actually caused SQL Server 2014 (running on another machine) to block all other client queries from multiple different applications and machines until the locked-up app was killed. I could not replicate this in the IDE, as apparently that forces timers to work the way they always did.
The fix was to async our ADO implementation and put a connection manager on top of the SQL connections to force requestors to wait their turn (basically taking the Win10 async'd timer behaviour back out). The only performance impact was the additional few milliseconds of delay when a timer-fired SQL query collided with another query.
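For what it's worth, the "wait your turn" connection manager amounts to serializing all DB work through a single queue. A rough sketch of the pattern, in Python purely to illustrate the idea (ConnectionManager, submit, and the execute call are made-up names, not the actual VB6/ADO code):

import queue
import threading

class ConnectionManager:
    """Serializes all DB requests so no two ever hit the shared connection at once."""

    def __init__(self, connection):
        self._connection = connection          # the single shared DB connection
        self._requests = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, sql, callback):
        # Timer handlers and UI code queue their work instead of touching the connection.
        self._requests.put((sql, callback))

    def _worker(self):
        while True:
            sql, callback = self._requests.get()
            result = self._connection.execute(sql)   # one query at a time
            callback(result)

The few milliseconds of queuing delay mentioned above is exactly the cost of this serialization when two requests collide.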

Why are my ADODB queries not being persisted to the SQL server?

My VB6 program uses ADODB to do a lot of SQL (2000) CRUD.
Sometimes the network connection between the remote clients and the data center somehow "drops", making it impossible to establish new connections (so users launching the program can't use it).
The issue is the following:
Anyone who is using the program at the moment of the "drop" can continue using it with no issues whatsoever: they can perform every operation, update data, read data, and everything seems to be working normally.
The user then fires up a "sum-up" report which lists everything that was done (before or after the "drop").
If we then check the database, none of the data for anything done after the network drop is there. The user goes back into the program and everything is as it was before the network drop.
It seems like all queries were somehow performed in-memory? I'm at a loss about how to even approach the issue (I'm familiar enough with VB6 to work with the source code, but I don't know a lot about ADODB).
I haven't yet tried to replicate the behavior due to the customer's limited availability (the development environment is housed in their offices); I'll try starting up the program from the IDE and then ripping out the network cable.
Provided I can replicate the issue, how do I fix this? Is there some setting I'm not aware of?
On a side note: the issue is sporadic (it happened a handful of times during the last year, and the software is used heavily and on a daily basis by multiple concurrent users).
After reading up on Disconnected Recordsets, it seems that's what's behind this odd behavior I'm experiencing.
This is not something that can be simply "turned off".

Querying Windows Event Logs Remotely (Linux)

Right now I have a Java application querying event logs via WMI. This is painfully slow due to the nature of WMI (OK, not painfully, but it may not be able to keep up with our domain controller), and dllhost.exe gets hammered serving up WMI requests and remote DCOM objects - pretty unnecessary just to read logs.
The next bit of exploration is in Windows RPC calls, but I'm confused... is the Windows RPC implementation (sorta kinda not wonderfully documented for event logs) just another name for WMI? Or will I be receiving raw event log information?
Other than these two methods, does anyone know of any other ways to hook into the event log creation event so I can have the servers automatically push their logs to me? It would be nice if it was something that could be fairly easily implemented in Linux, but I can tamper with WINE and Mono if I have to...
Or would it probably be best to write and deploy scripts on all the servers and have them push it to my program on the Linux box (though now I have to worry about the uptime of all those scripts)?
Or better yet... should I just write a Java service that can plug into the event logs natively and install those on the various windows machines and have it hand off the logs to my central Linux box that way?
Jarapac looks promising; I'm going to dig into this a little and see if Windows RPC performance is up to par and how hard it is to implement. If it's fairly straightforward: Yay! Windows RPC on Linux.
If not, it's simply not possible without your own implementation. :(
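For the deploy-a-script-on-each-server-and-push option mentioned in the question, a rough sketch of what such an agent might look like (Python with pywin32, which runs on the Windows servers only; the collector hostname, port, event count, and the choice of fields forwarded are all assumptions):

import socket
import win32evtlog  # pywin32

COLLECTOR = ("linux-collector.example.com", 5140)  # hypothetical Linux box and port

def read_recent_events(server="localhost", log_type="Security", limit=100):
    # Read the newest events from the given log.
    handle = win32evtlog.OpenEventLog(server, log_type)
    flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ
    events, batch = [], win32evtlog.ReadEventLog(handle, flags, 0)
    while batch and len(events) < limit:
        events.extend(batch)
        batch = win32evtlog.ReadEventLog(handle, flags, 0)
    win32evtlog.CloseEventLog(handle)
    return events[:limit]

def push(events):
    # Forward a few basic fields as plain text lines to the central collector.
    with socket.create_connection(COLLECTOR) as sock:
        for ev in events:
            line = f"{ev.TimeGenerated} {ev.SourceName} {ev.EventID}\n"
            sock.sendall(line.encode("utf-8"))

if __name__ == "__main__":
    push(read_recent_events())

You would still have to track which records have already been sent and keep the script itself alive, which is the uptime worry raised above.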

How can I make my database records automatic

Is there any way I can make my records in the database automatic? E.g. I want a message to be sent to the helpdesk if a requested service is not attended to within 24 hours, without anyone clicking anything.
Technically it depends on the database you are using. If the database supports it, you could set up a scheduled job to scan the records, identify late services, and email the helpdesk.
If the database doesn't support scheduled tasks, then you could set up a client job on a timer to do the same thing.
This is what application software is for.
When the application saves to the database, the application also sends an email.
The traditional approach to this is to schedule a job (there are too many ways[1] to do that for me to go into details without knowing your server operating system, DBMS, and how much control you have to install or schedule programs on the server).
Your scheduled job would regularly check the database for records that have not been attended, and then take the appropriate action such as emailing the support team.
[1] Just so that this is not left completely unanswered: some DBMSs (e.g. SQL Server) have built-in job scheduling facilities. You could run a Windows service on the server to do this. If not, you might consider running a Windows service on one of your own servers to access the website (a great way to waste bandwidth).
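To make the scheduled-job idea concrete, here is a rough sketch of what such a job could look like, run hourly from cron, Task Scheduler, or the DBMS's own scheduler. The table, column, SMTP server, and email addresses are all made up for illustration, and sqlite3 is just a stand-in for whatever DB driver you actually use:

import smtplib
import sqlite3  # stand-in for your real DB driver
from email.message import EmailMessage

def find_overdue(conn):
    # Requests still unattended 24 hours after being opened (hypothetical schema).
    return conn.execute(
        "SELECT id, description FROM service_requests "
        "WHERE attended = 0 AND opened_at <= datetime('now', '-1 day')"
    ).fetchall()

def notify_helpdesk(rows):
    msg = EmailMessage()
    msg["Subject"] = f"{len(rows)} service request(s) overdue"
    msg["From"] = "noreply@example.com"
    msg["To"] = "helpdesk@example.com"
    msg.set_content("\n".join(f"#{rid}: {desc}" for rid, desc in rows))
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    with sqlite3.connect("requests.db") as conn:
        overdue = find_overdue(conn)
        if overdue:
            notify_helpdesk(overdue)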
Use a scheduler like this one, found on the rufus site. You could program it to run, for instance, every hour, and have it do the job without human interaction.
I am a Java shop myself and I've been using Quartz. It is quite good and usable if you can adjust to JRuby.
I've never liked database or operating system based solutions, since you might not control them and often get asked to run on different environments.
Here's a very simple background job handler for Ruby:
codeforpeople.rubyforge.org/svn/bj/trunk/README
Easy to install and use. Fairly lightweight. It uses a SQL backend for managing concurrency. Runs on multiple machines simultaneously if you need it to.

How to identify users who are connected to a Windows server via Remote Desktop

At my workplace, we have lab machines that we use to do our testing.
The standard procedure to reserve a machine for testing was to walk around the office to make sure that no one was using the machine.
This is highly inefficient and time consuming.
At first, I set up a web page where people could reserve the lab machine, but nobody was keeping the page updated, so that turned out to be useless.
I finally found a solution using Microsoft Log Parser and wanted to share it with the Stack Overflow community.
It is a batch file that runs on the machine so the user can identify the last users who used the machine and easily IM them to ask if the machine is free.
Is there a better solution to do this?
Use the built-in command qwinsta (Query Win Station) to figure out what sessions (including console) are active or inactive (disconnected) and then act on the given information (creds to krusty.ar btw for linking this already).
If you feel people are abusing the machine in question, refer to rwinsta to nuke their sessions into oblivion...
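If you'd rather script it than eyeball the console output, here is a rough sketch of wrapping qwinsta, in Python purely for illustration; the parsing is deliberately loose and may need adjusting for your OS version and locale:

import subprocess

def sessions():
    # Run qwinsta and keep only rows that look like active or disconnected sessions.
    out = subprocess.run(
        ["qwinsta"], capture_output=True, text=True, check=True
    ).stdout.splitlines()
    found = []
    for line in out[1:]:  # skip the header row
        parts = line.split()
        if parts and ("Active" in parts or "Disc" in parts):
            found.append(line.strip())
    return found

if __name__ == "__main__":
    for s in sessions():
        print(s)

The same output could be appended to a log or posted to the reservation page instead of printed.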
You will need to install the Microsoft Log Parser
Then create the following 2 files
TSLoginsDetails.sql
-- Pull Terminal Services logon events (EventID 682: session reconnected to a
-- winstation) from the Security log, newest first.
SELECT
timegenerated,
EXTRACT_TOKEN(Strings,1,'|') AS Domain,
EXTRACT_TOKEN(Strings,0,'|') AS User,
EXTRACT_TOKEN(Strings,3,'|') AS SessionName,
EXTRACT_TOKEN(Strings,4,'|') AS ClientName,
EXTRACT_TOKEN(Strings,5,'|') AS ClientAddress,
EventID
FROM Security
WHERE EventID=682
ORDER BY timegenerated DESC
TSLogins.bat
@echo off
rem Run the Log Parser query above and display the results in a grid.
cls
c:
cd "c:\Program Files\Log Parser 2.2\"
logparser.exe file:TSLoginsDetails.sql -o:DATAGRID
Now, by placing this batch file on the desktop, the user can see who the last people to log in were and contact them by IM to verify whether they are done.
How about also posting the information from the log file to the website, so it tells who is currently using the machine?
Check and notify when they log in.
Update the "who is using the machine" page you made earlier.
Run an AT job that checks every couple of hours who is on it.
Totally out of the box:
You can install the Software Testing Automation Framework (STAF) on your servers and desktops to manage your tests. It's written in Java, so you can use it on Windows and Unix/Linux desktops and servers.
Using STAF, you can create a resource pool of test servers on which you conduct tests, then write STAX jobs (STAX is a STAF execution framework) to conduct the tests. The job can grab the first available server from the resource pool, run the test, monitor the test status, log results, notify the submitter, then release the server back into the pool when done. If you have multiple people submitting jobs for tests, STAF will manage the queue of requests and satisfy them as they come in. Users can either monitor the job from their desktop, or you can set up email alerts to notify them when the test is complete.
I'm not sure if I understand you, but there are a set of command line tools to deal with terminal server sessions, and there's also a Windows API to do the same if you need to do this from a program.
Since it sounds like you're a Microsoft shop, you could set up the machines as resources in Outlook/Exchange and reserve them that way.
