VFP networking issues with Windows 10 1803 - visual-foxpro

We are experiencing some huge issues with multi-user network file sharing on version 1803 of Windows and VFP9 SP2. Here are a few of the issues we see:
Blank records written to the database. A system will write a complete record with values in all fields, but the record is blank in the table.
Records that are written but don't appear for other users until the table is closed. If session A opens a table and adds 5 records, session B will see that the extra 5 records are there, but they will either be blank or will appear to have data from a previous record in them. Once session A closes the table, the data appears for other sessions.
Records will be appended to a table, and will end up just creating a duplicate of a previous record instead.
These all appear to be issues with caching or delayed writes of some sort.
I've seen various combinations of these problems very consistently across dozens of installations in the last couple of days. The only solution has been to have the users roll back to the previous build of Windows.
We've tried disabling oplocks on the client and server machines and verifying that offline files are not enabled, but haven't found a solution.
Has anyone else seen anything similar? Suggestions? This could be a disaster if we don't figure it out.

So here is what we have found. The problems seem to be specifically caused by the KB4103721 update to Windows 1803. We were able to resolve the problem by removing that update as a temporary solution.
We have now found that the problem can also be resolved by disabling some of the SMB client caching parameters.
Open an administrative PowerShell prompt (right-click the Start button).
Execute the following two commands:
Set-SmbClientConfiguration -DirectoryCacheLifetime 0
Set-SmbClientConfiguration -FileInfoCacheLifetime 0
You can then run
Get-SmbClientConfiguration
to verify that the values are set.

I solved this problem by locking the table before writing, and if the table cannot be locked, waiting until it becomes available. Sometimes this slows the process down, but no data is lost. The code is below:
t2 = [INSERT INTO table (fields_list_here) VALUES (fields_value_here)]
IF FLOCK()  && or use RLOCK() for a record-level lock
   &t2
   UNLOCK
ELSE
   DO WHILE .T.
      IF NOT FLOCK()  && or RLOCK()
         WAIT WINDOW "Attempting to lock. Please wait ..." NOWAIT
         IF INKEY(0.1) = 27  && brief pause so the loop isn't too fast to catch Esc
            WAIT WINDOW "Aborting lock operation." NOWAIT
            EXIT
         ENDIF
      ELSE
         &t2
         UNLOCK
         EXIT
      ENDIF
   ENDDO
ENDIF

Related

SAS Error Handling - Check after a data step to see if error was thrown

Hi, I have a SAS job that runs successfully 90% of the time. But one of the steps relies on reading an Oracle table that is occasionally being updated at the same time as I'm trying to read it. I implemented a check to see if it exists before querying it, but since the pull takes ~15 minutes, it will sometimes exist at the start of the pull but not by the end, which results in a SAS error.
What I want to do is gracefully catch this error, sleep for x time, and then attempt to re-run the same pull without the SAS job failing. Is there a way to do this in SAS? All the things I've searched rely on checking pre-conditions before the pull, but what can I do when those can change during the pull leading to an error?
Thanks.
You can do this a bunch of different ways, but I think the old school method is probably best.
Assuming you're running this in batch mode - split your oracle pull into its own program, and call that program with its own call to SAS.exe. Have it put out a value (touch a file, say, or write the date or something to a file) and have the batch program look for that file/value. When that file/value is updated, then the batch program moves onto the rest of the process; if it's not updated, then sleep and re-call that program.
If you're doing this in Enterprise Guide, it's a bit easier as you can have a condition that does more or less the same thing (but you can actually check for error conditions via macro variables). You would need to not have SAS set to ABEND on an error, though.
One other approach that might be worth trying: if your oracle database will allow you to lock tables via SAS, try running a lock statement immediately before the data step. You can then check the result of the lock attempt via the &SYSLCKRC automatic macro variable, wait, and try again.
E.g.
%macro wait_for_lock(DATASET);
    %let MINUTES_WAITED = 0;
    %do %until(&SYSLCKRC = 0 or &MINUTES_WAITED > 60);
        lock &DATASET;
        %if &SYSLCKRC ne 0 %then %do;
            data _null_;
                sleep = sleep(60, 1); /* pause 60 seconds before retrying */
            run;
        %end;
        %let MINUTES_WAITED = %eval(&MINUTES_WAITED + 1);
    %end;
%mend;
%wait_for_lock(oraclelib.mytable);
You can also use the FILELOCKWAIT system option to accomplish the same thing in more recent versions of SAS than the ancient one that I'm used to.

An opened document which isn't found in Word.Application.Documents but still locked out?

I do some Word automation filling in the blanks in some Word documents which are used as templates.
One template is used more often than the others, and this is what causes the error: the file stays locked, and Word is unable to open it, even though I want to open it read-only.
Opening the document
do until lole_word.Documents.Count = 0
    lole_word.Documents[1].Close(lole_word.SaveOptions.wdDoNotSaveChanges)
loop

boolean lb_readOnly
lb_readOnly = true
lole_word.Documents.Open(as_fileIn, lb_readOnly)
The problem is that the template document opens once with no issues of any kind. But when the same template has to be reused, even though lole_word.Documents.Count always returns 0, the previously used template is still locked, and Word ends up asking me whether I want to open it in read-only mode.
I wish to avoid this annoyance and simply open the file in read only mode, as it shall be saved elsewhere once it is filled in.
My problem is that even though I specify read-only mode by setting the second parameter to true, Word doesn't seem to see it that way and still pops up its File Already in Use by Another User dialog; my application then loses control over Word and crashes.
We had a similar problem and I wish that I could remember how we solved it. We may have used the Quit command. I know that we also did an attempted FileOpen in exclusive mode (with no intention of using the file) and immediately closed it. If we got a file-locked return code, we prompted the user to close out of Excel first, because there were times they would have the program open outside of OLE. I know this isn't exactly what you were looking for, but I hope it leads you somewhere. I recall this being an intermittent problem, and there were some cases where users had to open Task Manager and kill the extraneous Excel process.
I vaguely remember the locking being caused by the file system and not Word, as we were opening in read only as well.
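For illustration only, here is a minimal Win32 sketch (C++) of that exclusive-open check. The function name is my own, and what you do when the file turns out to be locked (prompt the user, retry, etc.) is up to the caller:

#include <windows.h>

// Returns true if someone still has the document open (sharing violation on an
// exclusive open), false if we could open it ourselves with no sharing allowed.
bool IsFileLocked(const wchar_t* path)
{
    HANDLE h = CreateFileW(path, GENERIC_READ,
                           0,                       // dwShareMode = 0: exclusive
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h != INVALID_HANDLE_VALUE)
    {
        CloseHandle(h);                             // we only wanted the check
        return false;
    }
    return GetLastError() == ERROR_SHARING_VIOLATION;
}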

possible locking issue with a Talend job

I'm parsing data from one table and writing it back to another one. Input are characteristics, written as text. Output is a boolean field that needs to be updated. For example a characteristic would be "has 4 wheel drive" and I want to set a boolean has_4weeldrive to true.
I'm going through all the characteristics that belong to a car and set it to true if found, else to null. The filter after the tmap_1 filters the rows for which the attribute is true, and then updates that in a table. I want to do that for all different characteristics (around 10).
If I do it for one characteristic the job runs fine, as soon as I have more than 1 it only loads 1 record and waits indefinitely. I can of course make 10 jobs and it will run, but I need to touch all the characteristics 10 times, that doesn't feel right. Is this a locking issue? Is there a better way to do this? Target and source db is Postgresql if that makes a difference.
Shared connections could cause problems like this.
Also make sure you're committing after each update. Talend uses one thread for execution (except in the enterprise version), so multiple shared outputs could cause problems.
Setting the commit to 1 should eliminate the problem.

How to detect programmatically when the OS has done loading all of its applications\services?

First off, I'll try to explain the problem:
My process is trying to show a Deskband programmatically using ITrayDeskBand::ShowDeskBand.
It works great at any time except when the OS is loading all of its processes (after a restart or logout).
After Windows boots and starts loading the various applications/services, the mouse cursor is set to wait for a couple of seconds (depending on how many applications are running and how fast everything is).
If the mouse cursor is set to wait and my process makes the call during that time, the call will fail.
However, if my process waits a few seconds (after which the cursor becomes regular again) and then invokes the call, everything works great.
This behavior was reproduced both on Windows 7 and Windows Vista.
So basically what I'm asking is :
1) Just for basic knowledge, what is the OS doing while the cursor is set to busy?
2) The more important question: how can I detect programmatically when this process is over?
At first, I thought that Explorer hadn't loaded properly, so I used WaitForInputIdle, but that wasn't it.
Later I thought that the busy cursor indicates that the CPU is busy, so I created my process using IDLE_PRIORITY_CLASS, but idle times were reported while the cursor was busy.
Windows never stops loading applications and/or services!
As a matter of fact, applications come and go, some of them interactively, some of them without any user interaction. Even services are loaded at different points in time (depending on their settings and on external conditions - e.g. the Smart Card Resource Manager service might start only when the OS detects that a smart card device has been connected). Applications can (but need not) stop automatically, and so do some services.
One never knows when Windows has stopped loading ALL applications and/or services.
If ITrayDeskBand::ShowDeskBand fails, then wait for the TaskbarCreated message and then try again. (This is the same technique used by notification icons.)
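A minimal sketch of that retry-on-TaskbarCreated technique might look like this (C++ for illustration; TryShowDeskBand() is a hypothetical wrapper around your ITrayDeskBand::ShowDeskBand call):

#include <windows.h>

bool TryShowDeskBand();   // hypothetical wrapper around ITrayDeskBand::ShowDeskBand

// "TaskbarCreated" is broadcast to top-level windows whenever Explorer
// (re)creates the taskbar, so register it once and watch for it.
static const UINT g_taskbarCreated = RegisterWindowMessageW(L"TaskbarCreated");

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == g_taskbarCreated)
    {
        TryShowDeskBand();    // the taskbar exists now; retry the failed call
        return 0;
    }
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}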
The obvious approach would be to check whether ShowDeskband worked or not, and if not, retry it after a few seconds. I'm assuming you've already considered and rejected this option.
Since you seem to have narrowed down the criteria to which cursor is being displayed, how about waiting for the particular cursor you want? You can find out which cursor is being shown like this:
CURSORINFO cinfo;
ICONINFOEX info;
cinfo.cbSize = sizeof(cinfo);
if (!GetCursorInfo(&cinfo)) fail();
info.cbSize = sizeof(info);
if (!GetIconInfoEx(cinfo.hCursor, &info)) fail();
printf("szModName = %ws\n", info.szModName);
printf("wResID = %u\n", info.wResID);
Most of the simple cursors are in module USER32. The relevant resource IDs are listed in the article on GetIconInfo.
You apparently want to wait for the standard arrow cursor to appear. This is in module USER32, and the resource ID is 32512 (IDC_ARROW).
I suggest that you check the cursor type ten times a second. When you see the arrow cursor ten times in a row (i.e., for a full second) it is likely that Explorer has finished starting up.
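A rough sketch of that polling loop, building on the snippet above (the 32512/IDC_ARROW check and the ten-samples-in-a-row rule come straight from the suggestion; error handling is kept minimal):

#include <windows.h>

// True when the current cursor is the standard arrow (resource 32512, IDC_ARROW).
bool IsArrowCursor()
{
    CURSORINFO cinfo = { sizeof(cinfo) };
    if (!GetCursorInfo(&cinfo)) return false;

    ICONINFOEXW info = { sizeof(info) };
    if (!GetIconInfoExW(cinfo.hCursor, &info)) return false;

    // GetIconInfoEx hands back bitmaps we own; free them to avoid leaks.
    if (info.hbmMask)  DeleteObject(info.hbmMask);
    if (info.hbmColor) DeleteObject(info.hbmColor);

    return info.wResID == 32512;
}

// Poll ~10 times a second until the arrow has been seen ten times in a row.
void WaitForArrowCursor()
{
    int consecutive = 0;
    while (consecutive < 10)
    {
        consecutive = IsArrowCursor() ? consecutive + 1 : 0;
        Sleep(100);
    }
}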

Getting previous exit code of an application on Windows

Is there any way to find out what the last exit code of an application was the last time it ran?
I want to check whether the application did not exit with a zero exit code the last time it ran (which means abnormal termination in my case), and if so, do some checking and maybe fix/clean up previously generated data.
Since some applications do this (they give a warning and ask if you want to run in Safe Mode this time) I think maybe Windows can tell me this.
And if not, what is the best practice for doing this? Setting a flag in a file or somewhere when the application terminates correctly, and checking it the next time it runs?
No, there's no permanent record of the exit code. It exists only as long as a handle to the process is kept open; it is returned by GetExitCodeProcess(), which needs that handle. As soon as the last handle is closed, the exit code is gone for good. One technique is a little bootstrapper app that starts the process and keeps the handle. It can then also do other handy things like send alerts, keep a log, clean up partial files or record minidumps of crashes. Use WaitForSingleObject() to detect the process exit.
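A bare-bones sketch of such a bootstrapper might look like this (the command line is a placeholder; logging, alerts and cleanup are left out):

#include <windows.h>

int main()
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"MyApp.exe";                 // placeholder command line

    if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return 1;

    WaitForSingleObject(pi.hProcess, INFINITE);   // block until the app exits

    DWORD exitCode = 0;
    GetExitCodeProcess(pi.hProcess, &exitCode);   // the handle is still open here

    if (exitCode != 0)
    {
        // Non-zero here means a crash or forced termination:
        // log it, alert someone, clean up partial files, etc.
    }

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}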
Btw, you definitely want the exit code numbers to mean the opposite thing. A zero is always the "normal exit" value. This helps you detect hard crashes: the exit code is always non-zero when Windows terminates the app forcibly, set to the exception code.
There are other ways; you can indeed create a file or registry key that indicates the process is running and check for it when the program starts back up. The only real complication is that you need to do something meaningful when the user starts the program twice, which is a hard problem to solve; such apps are usually single-instance apps. You use a named mutex to detect that an instance of the program is already running. Imprinting the evidence with the process ID and start time is workable.
There is no standard way to do this on the Windows Platform.
The easiest way to handle this case is to put a value in the registry and clear it when the program exits.
If the value is still present when the program starts, then it terminated unexpectedly.
Put a value in the HKCU/Software// to be sure you have sufficient rights (the value will be per user in this case).
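As an illustration, a minimal sketch of that registry-flag approach could look like the following (the key name is a placeholder I made up; error handling is kept to a minimum):

#include <windows.h>

static const wchar_t kKey[] = L"Software\\MyCompany\\MyApp";   // placeholder key

// Returns true if the previous run left the flag behind (i.e. did not exit
// cleanly), then marks the current run as active.
bool PreviousRunCrashed()
{
    HKEY hKey;
    if (RegCreateKeyExW(HKEY_CURRENT_USER, kKey, 0, NULL, 0,
                        KEY_READ | KEY_WRITE, NULL, &hKey, NULL) != ERROR_SUCCESS)
        return false;

    bool crashed = (RegQueryValueExW(hKey, L"Running",
                                     NULL, NULL, NULL, NULL) == ERROR_SUCCESS);

    DWORD one = 1;
    RegSetValueExW(hKey, L"Running", 0, REG_DWORD,
                   (const BYTE*)&one, sizeof(one));             // mark this run
    RegCloseKey(hKey);
    return crashed;
}

// Call on normal shutdown so the next start does not see a stale flag.
void MarkCleanExit()
{
    HKEY hKey;
    if (RegOpenKeyExW(HKEY_CURRENT_USER, kKey, 0, KEY_SET_VALUE, &hKey) == ERROR_SUCCESS)
    {
        RegDeleteValueW(hKey, L"Running");
        RegCloseKey(hKey);
    }
}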