Is it possible to cut users' connections to a SAS server programmatically from the server? I know I can kill individual SAS processes by using standard Windows tools such as Task Manager when I have a remote desktop connection to the server, but can this be done programmatically, and moreover, can we prevent any users from connecting?
The situation is as follows:
We have a SAS 9.4 installation on a Windows Server 2008 R2 server. There are a bunch of folders on the server with SAS tables in them, and a bunch of end users who use these SAS tables through SAS Enterprise Guide (installed on their desktops). There is also a large SAS batch run that we run every day to update all the SAS tables. We would like to make sure that no users have any SAS tables open through EG while the batch run is running; otherwise it might fail because a table is locked. Of course, the batch run usually happens at night, but since its schedule depends on many other things, we cannot be 100% certain it has completed before people come to work.
What we want is some kind of a script or SAS setting that would allow us to automatically cut all users' SAS connections before the batch run starts, keep them out during the run, and then allow them to reconnect when the batch run is complete.
Any hints would be appreciated!
Edit: Would it be possible to write a cmd script using taskkill that stops all sas.exe processes running under any user other than the current one? Does SAS even create a sas.exe process on the server for each user in this case?
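Something like this, perhaps (untested, and I'm only guessing that sas.exe is the image name to look for):

    rem Kill every sas.exe that is NOT running as the account executing this script
    taskkill /F /FI "IMAGENAME eq sas.exe" /FI "USERNAME ne %USERNAME%"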
You can use the SYSTASK or X statements to run operating-system commands from SAS (on Windows or UNIX). My extract, transform and load (ETL) process runs in a UNIX environment, and SYSTASK is a big help for exactly the reason being asked about: I create the data sets with SAS, then use the UNIX mv command to replace the existing data set. The file system will override the SAS lock.
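For illustration, a minimal sketch of that pattern; the librefs and paths are made up:

    libname stage "/data/stage";   /* staging area, path is made up */

    /* Build the new version of the table in the staging directory. */
    data stage.customers;
       set work.customers_new;
    run;

    /* Swap it into production at the file-system level; mv succeeds */
    /* even while a reader holds a SAS lock on the old file.         */
    systask command "mv /data/stage/customers.sas7bdat /data/prod/customers.sas7bdat"
       wait status=mvrc;
    %put NOTE: mv finished with status &mvrc.;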
As you are on 9.4, I suggest converting your library to a read-only, metadata-bound library. This should prevent table locks and remove the need to close user sessions (which could hurt their productivity through loss of WORK tables, macro variables, etc.).
Look into SAS/SHARE. If that's not an option, then two ugly ideas are:
If the users are connecting with LIBNAME statements to a Windows share, you could simply disable the share and re-enable it afterwards. This can easily be done with command-line instructions and the X command.
If they are connecting through a SAS service, stop the service and restart it. Again, this can be done via the command line and X.
After doing either (or both) of the above, make sure the first thing that happens is that you issue a lock <tablename>; statement, so that no one can use those tables again until your jobs are done. You can then restart the service / re-enable the shares.
Once you are finished, make sure the SAS session that issued the LOCK statement is either closed or clears the lock.
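A rough sketch of that lock-and-release pattern, with made-up library and table names:

    libname prod "D:\sasdata\prod";

    /* First thing once the share/service is back: take the locks. */
    lock prod.orders;
    lock prod.customers;

    /* ... run the nightly update here ... */

    /* Release the locks when the batch is done (closing the */
    /* session that issued them has the same effect).        */
    lock prod.orders clear;
    lock prod.customers clear;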
I'm trying to write values to the Run/RunOnce registry keys using PowerShell. The commands I'm using successfully write these values where they need to be. However, after writing them I need to power-cycle the system (we're skipping the shutdown process for various reasons), which means the values I've written to the registry are never flushed to disk (flushing is part of the shutdown process). So once the system powers back up, the Run/RunOnce commands are no longer in the registry, halting my automation process.
My question: is there any way, through a PowerShell command, to force the flushing of a registry value?
You can use the Sysinternals Sync utility.
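For example, something along these lines (a sketch only; the value being written and the sync.exe location are made up, and sync.exe must first be downloaded from Sysinternals):

    # Write the RunOnce value, then force the buffers out to disk so it
    # survives the hard power cycle.
    Set-ItemProperty -Path 'HKLM:\Software\Microsoft\Windows\CurrentVersion\RunOnce' `
        -Name 'MyAutomationStep' -Value 'C:\automation\next-step.cmd'
    & 'C:\Tools\sync.exe'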
I need to back up some large files that are being written to disk by a process. The process is perpetually running, and occasionally dumps large files that need to be moved over the network. Having the process do this itself is not an option, as the process locks out users whilst it is doing file dumps.
So, this runs on a Windows machine, and as a primarily Linux user, I am not entirely certain how to do this...
Under Linux I would simply use a cron job watching the folder (I know the glob that will match the output files), then check lsof to ensure that the file is not being written to, so that I don't try to copy a partially complete file. Data integrity is critical, so I would normally md5 the files before and after the copy.
So I guess my question is: how does one do this sort of thing under Windows? I feel like I am kneecapped from the start -- I can use Python, but I can't emulate lsof, nor cron for the task scheduling.
I tried looking at "handle" -- but it needs admin privileges at execution time, which is also not an option. I can't run the backup process as an admin; it has to run with user privileges.
Thanks..
Edit: I just realised I could keep the Python instance running with a sleep, so task scheduling is not a problem :)
To replace cron, you can use the Task Scheduler in Windows to start your script every few minutes (or at specific times).
For lsof, the question was discussed here: How can I determine whether a specific file is open in Windows?
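If it helps, here is a rough Python sketch of the whole loop under those constraints. The rename trick is a heuristic stand-in for lsof: on Windows, renaming a file normally fails while another process still has it open (unless that process opened it with FILE_SHARE_DELETE). Paths and the glob are made up:

    import glob
    import hashlib
    import os
    import shutil
    import time

    SRC_GLOB = r"C:\dumps\*.dat"         # made-up pattern
    DEST_DIR = r"\\backupserver\dumps"   # made-up destination

    def md5sum(path):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def writer_is_done(path):
        """Heuristic lsof replacement: rename fails while the dumping
        process still holds the file open."""
        probe = path + ".probe"
        try:
            os.rename(path, probe)
            os.rename(probe, path)
            return True
        except OSError:
            return False

    while True:
        for path in glob.glob(SRC_GLOB):
            if not writer_is_done(path):
                continue
            checksum = md5sum(path)
            dest = os.path.join(DEST_DIR, os.path.basename(path))
            shutil.copy2(path, dest)
            if md5sum(dest) == checksum:   # integrity check, then tidy up
                os.remove(path)
        time.sleep(60)   # poor man's cron, as per the edit above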
I am writing some testing software that receives source code, compiles it on the server, executes it, feeds it input from the database, captures the output, and compares it with the expected output in the database to see if it is correct. The problem is that the source code can be anything (it is written in C/C++ and compiled with the Visual Studio cl compiler), so I somehow need to guard against malicious users. I already kill those processes automatically if they run longer than some time limit or use more memory than allowed.
The question is: can I restrict those processes to reading and writing only the standard input/output streams, denying all other access rights, on Windows?
Please excuse my English.
Thank you in advance.
Job Objects can probably help you (see http://msdn.microsoft.com/en-us/library/ms684161(VS.85).aspx). This very powerful feature is not well known.
Working with jobs is very easy. You create a job with CreateJobObject and set a number of restrictions, such as time and memory limits. Then you create the process with the suspended flag, assign it to the job, and resume it. You then have full control over the created process and the whole tree of child processes it may create. The job feature has existed since Windows 2000.
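A minimal C sketch of that sequence (untested; "target.exe" and the limit values are placeholders):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Create an anonymous job object. */
        HANDLE hJob = CreateJobObject(NULL, NULL);

        /* Illustrative limits: 10 s of user-mode CPU time and 64 MB
           of committed memory per process in the job. */
        JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {0};
        limits.BasicLimitInformation.LimitFlags =
            JOB_OBJECT_LIMIT_PROCESS_TIME | JOB_OBJECT_LIMIT_PROCESS_MEMORY;
        limits.BasicLimitInformation.PerProcessUserTimeLimit.QuadPart =
            10LL * 10000000LL;                 /* 100-ns units */
        limits.ProcessMemoryLimit = 64 * 1024 * 1024;
        SetInformationJobObject(hJob, JobObjectExtendedLimitInformation,
                                &limits, sizeof(limits));

        /* Start the target suspended so it cannot run before the
           restrictions apply, then assign it and let it go. */
        STARTUPINFO si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        if (!CreateProcess(NULL, "target.exe", NULL, NULL, FALSE,
                           CREATE_SUSPENDED, NULL, NULL, &si, &pi)) {
            fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
            return 1;
        }
        AssignProcessToJobObject(hJob, pi.hProcess);
        ResumeThread(pi.hThread);

        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        CloseHandle(hJob);
        return 0;
    }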
Another, more modern way is User Interface Privilege Isolation (UIPI) (see http://msdn.microsoft.com/en-us/library/bb625963.aspx), or running the code as a low-integrity process, introduced with Vista. See http://msdn.microsoft.com/en-us/library/Bb250462#dse_stlip for how to create a process with low integrity.
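And a condensed C sketch of the low-integrity route described in that second link ("target.exe" is again a placeholder; error handling omitted for brevity):

    #include <windows.h>
    #include <sddl.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE hToken, hNewToken;
        PSID pLowSid;

        /* Duplicate our own token... */
        OpenProcessToken(GetCurrentProcess(),
                         TOKEN_DUPLICATE | TOKEN_QUERY |
                         TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY,
                         &hToken);
        DuplicateTokenEx(hToken, 0, NULL, SecurityImpersonation,
                         TokenPrimary, &hNewToken);

        /* ...stamp it with the well-known low-integrity SID... */
        ConvertStringSidToSid("S-1-16-4096", &pLowSid);
        TOKEN_MANDATORY_LABEL tml = {0};
        tml.Label.Attributes = SE_GROUP_INTEGRITY;
        tml.Label.Sid = pLowSid;
        SetTokenInformation(hNewToken, TokenIntegrityLevel, &tml,
                            sizeof(tml) + GetLengthSid(pLowSid));

        /* ...and launch the untrusted code with that token. */
        STARTUPINFO si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        CreateProcessAsUser(hNewToken, NULL, "target.exe", NULL, NULL,
                            FALSE, 0, NULL, NULL, &si, &pi);

        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        LocalFree(pLowSid);
        CloseHandle(hNewToken);
        CloseHandle(hToken);
        return 0;
    }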
The problem:
We use a program written by our biggest customer to receive orders, book transports, and do other order-related work. We have no choice but to use the program, and the customer is very unsupportive when it comes to problems with it. We just have to live with it.
Now, this program is extremely slow most of the time when two or more users are working with it, so I tried to look behind the curtain and find the source of the problem.
Some things I have found out about the program so far:
It's written in VB 6.0
It uses a password-protected Access database (Access 2000 MDB) that is located in a folder on one user's machine.
That folder is shared over the network and used by all other users.
It uses msjet40.dll version 4.00.9704 to communicate with Access. I guess it's ADO?
I also used Process Monitor to monitor file access and found out why the program is so slow: it is doing thousands of read operations on the mdb-file, even when the program is idle. Over the network this is of course tremendously slow:
(Screenshot of the Process Monitor trace: http://img217.imageshack.us/img217/1456/screenshothw5.png)
The real question:
Is there any way to monitor the queries that are responsible for the read activity? Is there a trace flag I can set? Hooking the Jet DLLs? I guess the program is doing some expensive queries that cause Jet to read lots of data in the process.
PS: I already tried putting the MDB on our company's file server, only to find that accessing it was even slower than over the local share. I also tried changing the locking mechanism (opportunistic locking) on the client, with no success.
I want to know what's going on and need some hard facts and suggestions to help our customer's developer make the program faster.
To get your grubby hands on exactly what Access is doing query-wise behind the scenes, there's an undocumented feature called JETSHOWPLAN: when switched on in the registry, it creates a showplan.out text file. The details are in this TechRepublic article, summarized here:
The ShowPlan option was added to Jet 3.0, and produces a text file
that contains the query's plan. (ShowPlan doesn't support subqueries.)
You must enable it by adding a Debug key to the registry like so:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Debug
Under the new Debug key, add a string data type named JETSHOWPLAN
(you must use all uppercase letters). Then, add the key value ON to
enable the feature. If Access has been running in the background, you
must close it and relaunch it for the function to work.
When ShowPlan is enabled, Jet creates a text file named SHOWPLAN.OUT
(which might end up in your My Documents folder or the current
default folder, depending on the version of Jet you're using) every
time Jet compiles a query. You can then view this text file for clues
to how Jet is running your queries.
We recommend that you disable this feature by changing the key's value
to OFF unless you're specifically using it. Jet appends the plan to
an existing file and eventually, the process actually slows things
down. Turn on the feature only when you need to review a specific
query plan. Open the database, run the query, and then disable the
feature.
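The registry fiddling can also be scripted; a sketch (note that on 64-bit Windows the Jet 4.0 key may live under Wow6432Node instead):

    rem Create the Debug key and switch ShowPlan on
    reg add "HKLM\SOFTWARE\Microsoft\Jet\4.0\Engines\Debug" /v JETSHOWPLAN /t REG_SZ /d ON /f

    rem Switch it off again once you have the plans you need
    reg add "HKLM\SOFTWARE\Microsoft\Jet\4.0\Engines\Debug" /v JETSHOWPLAN /t REG_SZ /d OFF /f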
For tracking down nightmare problems it's unbeatable - it's the sort of thing you get on your big expensive industrial databases - this feature is cool - it's lovely and fluffy - it's my friend… ;-)
Could you not throw a packet sniffer (like Wireshark) on the network and watch the traffic between one user and the host machine?
If it uses an ODBC connection, you can enable tracing for that:
1. Start the ODBC Data Source Administrator.
2. Select the Tracing tab.
3. Select the Start Tracing Now button.
4. Select Apply or OK.
5. Run the app for a while.
6. Return to the ODBC Administrator.
7. Select the Tracing tab.
8. Select the Stop Tracing Now button.
The trace can be viewed in the location you initially specified in the Log File Path box.
First question: Do you have a copy of MS Access 2000 or better?
If so:
When you say the MDB is "password protected", do you mean that when you try to open it using MS Access you get a prompt for a password only, or does it prompt you for a user name and password? (Or give you an error message that says, "You do not have the necessary permissions to use the foo.mdb object."?)
If it's the latter (user-level security), look for a corresponding .MDW file that goes along with the MDB. If you find it, this is the "workgroup information file" that is used as a "key" for opening the MDB. Try making a desktop shortcut with a target like:
"Path to MSACCESS.EXE" "Path To foo.mdb" /wrkgrp "Path to foo.mdw"
MS Access should then prompt you for your user name and password which is (hopefully) the same as what the VB6 app asks you for. This would at least allow you to open the MDB file and look at the table structure to see if there are any obvious design flaws.
Beyond that, as far as I know, Eduardo is correct that you pretty much need to be able to run a debugger on the developer's source code to find out exactly what the real-time queries are doing...
It is not possible without the help of the developers. Sorry.
UNIX file locking is dead easy: the operating system assumes that you know what you are doing and lets you do what you want.
For example, if you try to delete a file which another process has opened, the operating system will usually let you do it. The original process keeps its file handles until it terminates, at which point the file system quietly recycles the disk resources. No fuss; that's the way I like it.
How different things are on Windows: if I try to delete a file which another process is using, I get an operating-system error. The file is untouchable until the original process releases its lock on it. That was great back in the single-user days of MS-DOS, when any locking process was likely to be on the same computer that contained the files, but on a network it's a nightmare:
Consider what happens when a process hangs while writing to a shared file on a Windows file-server. Before the file can be deleted we have to locate the computer and ID the process on that computer which originally opened the file. Only then can we kill the process and delete our unwanted file.
What a nuisance!
Is there a way to make this better? What I want is for file locking on Windows to behave like file locking in UNIX. I want the operating system to just let me do what I want, because I'm in charge and I know what I'm doing...
...so can it be done?
No. Windows is designed for the "average user", that is, people who don't understand anything about a computer. Therefore, the OS tries to be smart to avoid PEBKACs. To quote Bill Gates: "There are no issues with Windows that any number of people want to be fixed." Of course, he knows that 99.9999% of all Windows users can't tell whether the program just did something odd because of them or the guy who wrote it.
Unix was designed when the world was more simple and anyone close enough to a computer to touch it, probably knew how to assemble it from dirty sand. Therefore, the OS usually lets you do what you want because it assumes that you know better (and if you didn't, you will next time).
Technical answer: Unix allocates an i-node when you create a file, and i-nodes are independent of file names. If one process has a file open and another process deletes that path and creates a new file with the same name, you end up with two i-nodes: existing handles keep pointing at the old one. This is by design. It allows for a fancy security feature: you can create files which no one can open but yourself:
1. Open a file.
2. Delete it (but keep the file handle).
3. Use the file any way you like.
4. Close the file.
After step #2, the only process in the universe who can access the file is the one who created it (unless you want to read the hard disk block by block). The OS will keep the data alive until you either close the file or your process dies (at which time Unix will clean up after you).
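In POSIX C the trick looks like this (the path is illustrative):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Step 1: create and open a scratch file. */
        int fd = open("/tmp/secret.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd < 0) { perror("open"); return 1; }

        /* Step 2: remove the directory entry; the i-node lives on
           for as long as this descriptor stays open. */
        unlink("/tmp/secret.tmp");

        /* Step 3: use the file; nobody else can open it by name now. */
        write(fd, "top secret", 10);
        lseek(fd, 0, SEEK_SET);
        char buf[16];
        printf("read back %zd bytes\n", read(fd, buf, sizeof buf));

        /* Step 4: close it; the kernel reclaims the blocks. */
        close(fd);
        return 0;
    }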
This design is the foundation of all Unix file systems. The Windows file system NTFS works much the same way, but the high-level API is different. Many applications open files in exclusive mode, which prevents anyone, even backup programs, from reading the file. This is even true for applications which just display information, like PDF viewers.
That means you'll have to fix all the Windows applications to achieve the desired effect. If you have access to the source, you can open the file in a shared mode. That allows other processes to access it at the same time, but then you have to check before every read/write whether the file still exists, whether someone has made changes, and so on.
According to MSDN, you can pass the sharing flag FILE_SHARE_DELETE in CreateFile()'s third parameter (dwShareMode), which:
Enables subsequent open operations on a file or device to request delete access.
Otherwise, other processes cannot open the file or device if they request delete access.
If this flag is not specified, but the file or device has been opened for delete access, the function fails.
Note Delete access allows both delete and rename operations.
http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx
So if you can control your applications, you can use this flag.
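A minimal sketch, assuming you control the application that opens the file (the path is made up):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open the file, but let other processes read, write and,
           crucially, delete or rename it while our handle is open. */
        HANDLE h = CreateFile("C:\\temp\\shared.dat",
                              GENERIC_READ | GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE |
                              FILE_SHARE_DELETE,
                              NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL,
                              NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        /* Another process may now call DeleteFile() on the path; the
           file is marked delete-pending and disappears once the last
           handle closes -- close to the Unix behaviour described above. */
        DWORD written;
        WriteFile(h, "hello", 5, &written, NULL);
        CloseHandle(h);
        return 0;
    }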
Note that Process Explorer allows force-closing of file handles (for processes local to the box on which you are running it) via Handle -> Close Handle.
Unlocker purports to do a lot more, and provides a helpful list of other tools.
Deleting on reboot is also an option (though that sounds like it's not what you want).
That doesn't really help if the hung process still has the handle open: it won't release the resources until it releases the handle. But anyway, in Windows it is possible to force-close a file out from under a process that's using it. Process Explorer from sysinternals.com will let you look at and close the handles that a process has open.