Search Entire Domain for File through Command Prompt - Windows

Is there a way to search an entire domain for a specific file through the command prompt?
I am using dir /s Example.txt, but of course that only searches one specific computer.

You could get a list of computer objects from the domain using an LDAP query in a VBScript, iterate through the list of computers with a For Each loop, execute the dir /s command on each computer in turn, and parse the output to see whether you get a hit.
It wouldn't be pretty but it'd work.
EDIT
It'd use the credentials of whoever is running the executable at the time. Using WinNT is OK, but if you want to do it properly, use the DirectoryServices namespace (I know it's C#, but you get the drift from there and can convert).
Once you have your list of computers, you need to iterate through them and run your command/process against each computer.
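As a rough illustration of the directory-query step, here is a hedged C++ sketch using ADSI's IDirectorySearch to list the computer objects in a domain. The LDAP path is a placeholder for your own domain, and error handling is trimmed, so treat it as a starting point rather than production code:

#include <windows.h>
#include <activeds.h>
#include <iostream>
#pragma comment(lib, "activeds.lib")
#pragma comment(lib, "adsiid.lib")

int main() {
    CoInitialize(NULL);
    IDirectorySearch* search = NULL;
    // "LDAP://DC=example,DC=com" is a placeholder for your domain's DN.
    if (SUCCEEDED(ADsGetObject(L"LDAP://DC=example,DC=com",
                               IID_IDirectorySearch, (void**)&search))) {
        LPWSTR attrs[] = { (LPWSTR)L"cn" };
        ADS_SEARCH_HANDLE h;
        if (SUCCEEDED(search->ExecuteSearch((LPWSTR)L"(objectCategory=computer)",
                                            attrs, 1, &h))) {
            ADS_SEARCH_COLUMN col;
            while (search->GetNextRow(h) == S_OK) {   // one row per computer
                if (SUCCEEDED(search->GetColumn(h, attrs[0], &col))) {
                    std::wcout << col.pADsValues->CaseIgnoreString << L"\n";
                    search->FreeColumn(&col);
                }
            }
            search->CloseSearchHandle(h);
        }
        search->Release();
    }
    CoUninitialize();
    return 0;
}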

One easy way would be to get a computer list and then access each machine using its UNC path and the administrative shares ([driveletter]$). You would have to do this from an account with administrative access on all of the computers.
\\computer01\c$\windows
That would give you the Windows folder on computer01. Throw that in a for-each loop and do the searching as normal.
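A minimal C++17 sketch of that loop, assuming the running account has administrative rights on every target machine (the computer names here are placeholders):

#include <filesystem>
#include <iostream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

int main() {
    // In practice this list would come from the LDAP query described above.
    std::vector<std::string> computers = { "computer01", "computer02" };
    const std::string target = "Example.txt";

    for (const auto& pc : computers) {
        fs::path root = "\\\\" + pc + "\\c$";   // administrative share
        std::error_code ec;
        for (auto it = fs::recursive_directory_iterator(
                 root, fs::directory_options::skip_permission_denied, ec);
             it != fs::recursive_directory_iterator(); it.increment(ec)) {
            if (!ec && it->path().filename() == target)
                std::cout << it->path().string() << "\n";
        }
    }
    return 0;
}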
Also depending on the number of computers you are looking at, and the network conditions, you might be able to speed things up if you spawned a few worker threads.

Related

Make system calls (Windows Command Prompt) answer in English

I'm working on a Perl script that needs to make a few system calls to obtain some system data. In order to parse the output of those calls reliably on any computer, I need to be sure the output is in English.
The problem I'm facing is that, on my PC for example, I get localized output from those commands. My Windows is set up in Spanish, so calls like systeminfo return data in Spanish.
Is there a command (or something else) I can run in a command call to make all system calls act as if the system were always in English, without having to modify anything in the system configuration?
Thanks in advance for your comments.
NOTES for bounty: The answer to this problem must not interfere with the system in any way. It should be a way to obtain English answers from system calls/commands that works on any machine without modifying its configuration, registry, or anything else.
This solution allows you to make the Command Prompt act in English. It does alter some registry keys, but it also changes them back if you want. You can run the same commands that they put in the .bat files to switch the system to English and then go back to the localized language.
If you're trying to run commands that require administrator privileges, you can include these calls in your program without problems.
HTH
I think the WMIC command is your best bet. It has been a standard feature of Windows since Windows XP.
WMIC has full access to the Windows system (subject to user permissions, etc.), and has a locale option that selects the locale in effect (for the command) from the installed language packs.
The locale is selected from Microsoft's list of locale identifiers (e.g., MS_409 for US English).
To get the current username using US English (if it's available) you'd use wmic /locale:ms_409 netlogin get name
Through the WMI interface, you may not even need to localise the results (i.e.: with sufficient care, you may just get the raw data).
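For instance, here is a hedged C++ sketch of capturing WMIC output forced to US English from a program, along the lines of what the Perl script in the question would do; the query shown is just an example:

#include <cstdio>
#include <iostream>
#include <string>

int main() {
    // MS_409 = US English; this only works if that language pack is available.
    FILE* pipe = _popen("wmic /locale:ms_409 os get caption", "r");
    if (!pipe) return 1;
    std::string output;
    char buf[256];
    while (fgets(buf, sizeof(buf), pipe))
        output += buf;
    _pclose(pipe);
    std::cout << output;   // parse this instead of localized systeminfo output
    return 0;
}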

File Access Times in Windows + NTFS

I am trying to figure out when and how does Windows update File Access Times on files.
First of all, most Windows installs come with file access times disabled for performance reasons, so before wrapping your head around this, here is what you need to do to activate last access times on NTFS file systems: under the key [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem], set the DWORD value NtfsDisableLastAccessUpdate to 0 (it is usually set to 1). If it doesn't exist, just create it.
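A minimal C++ sketch of that registry change (it needs administrator rights, and a reboot for the setting to take effect):

#include <windows.h>

int main() {
    HKEY key;
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                      L"SYSTEM\\CurrentControlSet\\Control\\FileSystem",
                      0, KEY_SET_VALUE, &key) != ERROR_SUCCESS)
        return 1;
    DWORD value = 0;   // 0 = update last-access times, 1 = don't
    RegSetValueExW(key, L"NtfsDisableLastAccessUpdate", 0, REG_DWORD,
                   (const BYTE*)&value, sizeof(value));
    RegCloseKey(key);
    return 0;
}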
Upon reading the File Times article on MSDN, I am still in doubt as to how Windows updates access times.
My questions are as follows:
Do access times update upon issuing a WinAPI CreateFile() with FILE_READ_ATTRIBUTES? In my case, doing it programmatically, they don't. Opening the File Properties dialog of that file through the Explorer shell does update the access time.
Do access times update upon issuing a WinAPI ExtractIconEx() to read an icon from a file?
In my case, doing it programmatically, they don't. Again, opening the File Properties dialog of that file through the Explorer shell does update the access time.
If you ask me, both of those cases should update the file access times, but it seems that direct WinAPI calls don't update them, or the Windows/NTFS driver really lags behind, while operating on files from Windows Explorer updates them pretty reliably. What do you think is, or could be, the issue here?
As a side note, I did call CloseHandle(), as per:
"The only guarantee about a file timestamp is that the file time is correctly reflected when the handle that makes the change is closed."
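For reference, a small C++ sketch of the kind of experiment described above: sample the last-access time, actually read the file's data, close the handle, then sample again. The path is a placeholder, and note that NTFS is documented to delay last-access updates by up to an hour, which can reproduce the lag observed:

#include <windows.h>
#include <stdio.h>

static FILETIME LastAccess(const wchar_t* path) {
    WIN32_FILE_ATTRIBUTE_DATA fad = {};
    GetFileAttributesExW(path, GetFileExInfoStandard, &fad);
    return fad.ftLastAccessTime;
}

int main() {
    const wchar_t* path = L"C:\\temp\\example.txt";   // placeholder
    FILETIME before = LastAccess(path);

    HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;
    char buf[16];
    DWORD got;
    ReadFile(h, buf, sizeof(buf), &got, NULL);   // touch the actual data
    CloseHandle(h);                              // timestamp settles on close

    FILETIME after = LastAccess(path);
    printf("access time changed: %s\n",
           CompareFileTime(&before, &after) ? "yes" : "no");
    return 0;
}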
My conclusion is that the opinions lying around the web are indeed true: Windows updates file access times in a seemingly random fashion, and thus one really shouldn't depend on them in any way.
Off-topic rant: Sorry, forensics guys, you'll have to prove access times using another method, or you could have your case invalidated in seconds. :P
No, accessing the metadata of the file (name, attributes, timestamps) isn't going to change the last access time. That wouldn't work well in practice; just looking at the directory with Explorer would change it. You have to actually open the file. ExtractIconEx() would normally be an excellent candidate, except that Windows can play tricks with it: a hidden desktop.ini file can redirect the icon to another file.
Using the last access time is pretty worthless for forensics. You'd need a file system filter driver, similar to the one embedded in SysInternals' ProcMon utility. It might be using ETW, by the way; that got pretty powerful around Vista. Nevertheless, your project just got ten times more complicated.

Finding a set of file names quickly on NTFS volumes, ideally via its MFT

I am in the middle of writing a tool that finds lost files of an iTunes library, for both Mac and Windows. On the Mac, I can quickly find files by name using the wonderful "CatalogSearch" function.
On Windows, however, there seems to be no OS API for searching by file name (or is there?).
After some googling, I learned that there are tools (like TFind, Everything) that read the NTFS directory directly and scan it to find files by name.
I would like to do the same, but without having to start from scratch (although I've written quite a few disk tools in the past, I've never had the energy to dig into NTFS).
I wonder if there are ready-made libs around, possibly as a .dll, that would give me this search feature: Pass in a file name, get back its path.
Alternatively, what about the Windows indexing service? At least when I tried this on a recently installed XP Home system, the Search operation under the Start menu would actually scan all directories, which suggests that it has no complete database. As I'm not a Windows user at all, I wonder why this isn't working.
In the end, the complete solution I need is: I have a list of file names to find, and I need code that searches the entire disk (or uses a database for it) to get me all results in one go. That is, the search should not start a new full scan for every file I look up. That's why I think the MFT way would be optimal, as it could quickly iterate over all names, comparing each to my list.
The best way to solve your problem seems to be to use the Windows Change Journal.
Problem: if it is not enabled for a volume, or the volume is not NTFS, you need a fallback (or enable the Change Journal if it is NTFS). You also need administrator rights to access the Change Journal.
You get the files by using FSCTL_ENUM_USN_DATA and DeviceIoControl with LowUsn=0. This directly accesses the MFT and writes all file names into the supplied buffer. Because it accesses the MFT sequentially, it is faster than the FindFirstFile API.
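A minimal C++ sketch of that call sequence (error handling trimmed; the input structure is named MFT_ENUM_DATA in older SDKs and MFT_ENUM_DATA_V0 in newer ones, and the program must run elevated):

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main() {
    // Opening the raw volume requires administrator rights.
    HANDLE vol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
        FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, 0, NULL);
    if (vol == INVALID_HANDLE_VALUE) return 1;

    MFT_ENUM_DATA_V0 in = {};
    in.StartFileReferenceNumber = 0;   // start at the first MFT record
    in.LowUsn = 0;
    in.HighUsn = MAXLONGLONG;

    BYTE buf[64 * 1024];
    DWORD got;
    while (DeviceIoControl(vol, FSCTL_ENUM_USN_DATA, &in, sizeof(in),
                           buf, sizeof(buf), &got, NULL)) {
        // The first 8 bytes are the file reference number to resume from;
        // the rest of the buffer is a run of variable-length USN_RECORDs.
        BYTE* p = buf + sizeof(DWORDLONG);
        while (p < buf + got) {
            USN_RECORD* rec = (USN_RECORD*)p;
            wprintf(L"%.*s\n", (int)(rec->FileNameLength / sizeof(WCHAR)),
                    (WCHAR*)((BYTE*)rec + rec->FileNameOffset));
            p += rec->RecordLength;
        }
        in.StartFileReferenceNumber = *(DWORDLONG*)buf;
    }
    CloseHandle(vol);
    return 0;
}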

How do I make Windows file-locking more like UNIX file-locking?

UNIX file-locking is dead easy: the operating system assumes that you know what you are doing and lets you do what you want:
For example, if you try to delete a file which another process has opened, the operating system will usually let you do it. The original process still keeps its file handles until it terminates, at which point the file system will quietly recycle the disk resources. No fuss, that's the way I like it.
How different things are on Windows: if I try to delete a file which another process is using, I get an operating-system error. The file is untouchable until the original process releases its lock on the file. That was great back in the single-user days of MS-DOS, when any locking process was likely to be on the same computer that contained the files; on a network, however, it's a nightmare:
Consider what happens when a process hangs while writing to a shared file on a Windows file-server. Before the file can be deleted we have to locate the computer and ID the process on that computer which originally opened the file. Only then can we kill the process and delete our unwanted file.
What a nuisance!
Is there a way to make this better? What I want is for file-locking on Windows to behave like file-locking in UNIX. I want the operating system to just let me do what I want, because I'm in charge and I know what I'm doing...
...so can it be done?
No. Windows is designed for the "average user", that is, people who don't understand anything about a computer. Therefore, the OS tries to be smart to avoid PEBKACs. To quote Bill Gates: "There are no significant bugs in our released software that any significant number of customers want fixed." Of course, he knows that 99.9999% of all Windows users can't tell whether the program just did something odd because of them or because of the guy who wrote it.
Unix was designed when the world was simpler and anyone close enough to a computer to touch it probably knew how to assemble it from dirty sand. Therefore, the OS usually lets you do what you want, because it assumes that you know better (and if you didn't, you will next time).
Technical answer: Unix allocates an inode when you create a file. A directory entry is just a name that points to the inode, and open file handles reference the inode directly, not the name, so deleting the name doesn't destroy the data while a handle is still open. This is by design. It allows for a fancy security feature: you can create files which no one can open but yourself:
Open a file
Delete it (but keep the file handle)
Use the file any way you like
Close the file
After step #2, the only process in the universe that can access the file is the one that created it (unless you want to read the hard disk block by block). The OS will keep the data alive until you either close the file or your process dies (at which time Unix will clean up after you).
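A minimal POSIX sketch of that open-then-delete idiom (C++; the path is a placeholder):

#include <fcntl.h>
#include <unistd.h>

int main() {
    int fd = open("/tmp/private.dat", O_CREAT | O_EXCL | O_RDWR, 0600);
    if (fd < 0) return 1;
    unlink("/tmp/private.dat");   // step 2: the name is gone, the inode lives on
    write(fd, "secret", 6);       // step 3: the file is still fully usable
    lseek(fd, 0, SEEK_SET);
    char buf[6];
    read(fd, buf, sizeof(buf));
    close(fd);                    // step 4: now the kernel reclaims the blocks
    return 0;
}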
This design is the foundation of all Unix file systems. The Windows file system NTFS works much the same way, but the high-level API is different. Many applications open files in exclusive mode, which prevents anyone, even backup programs, from reading the file. This is even true for applications which just display information, like PDF viewers.
That means you'd have to fix all the Windows applications to achieve the desired effect. If you have access to the source, you can open files in a shared mode. That would allow other processes to access them at the same time, but then you'd have to check before every read/write whether the file still exists, whether someone has made changes, etc.
According to MSDN, you can pass the sharing flag FILE_SHARE_DELETE in CreateFile()'s third parameter (dwShareMode), which:
Enables subsequent open operations on a file or device to request delete access.
Otherwise, other processes cannot open the file or device if they request delete access.
If this flag is not specified, but the file or device has been opened for delete access, the function fails.
Note Delete access allows both delete and rename operations.
http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx
So if you can control your applications, you can use this flag.
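A short C++ sketch of the difference this flag makes (the file name is a placeholder; under classic NTFS semantics the deleted name lingers in a delete-pending state until the last handle is closed):

#include <windows.h>
#include <stdio.h>

int main() {
    HANDLE h = CreateFileW(L"shared.tmp", GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    // While we wait here, "del shared.tmp" from another window succeeds
    // instead of failing with a sharing violation.
    printf("Handle open - try deleting shared.tmp from another process.\n");
    getchar();
    CloseHandle(h);
    return 0;
}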
Note that Process Explorer allows force-closing of file handles (for processes local to the box on which you are running it) via Handle -> Close Handle.
Unlocker purports to do a lot more, and provides a helpful list of other tools.
Deleting on reboot is also an option (though this sounds like it's not what you want).
That doesn't really help if the hung process still has the handle open; it won't release the resources until that hung process releases the handle. But anyway, on Windows it is possible to force-close a file out from under a process that's using it. Process Explorer from sysinternals.com will let you look at and close the handles that a process has open.
