Under Linux, my program would do something like this:
Process 1 opens a file (e.g. by mapping it into memory). Let's call this file#1.
Process 2 unlinks the file, and creates a new file with the same name. Let's call this file#2.
Process 1 continues to work with file#1. When it is closed, it is deleted (since it has no links). Process 1 continues to work with the content in file#1, and does not see content from file#2.
When both processes have exited, file#2 remains on disk.
I want to achieve the same semantics in Windows. After reading this question, I think FILE_SHARE_DELETE does basically this. Is opening the file with FILE_SHARE_DELETE enough, or do I need to consider something more?
The above execution flow is just an example, I know there are other ways of solving that exact problem in Windows, but I want to understand how to make such code portable between Windows and Linux.
Edit, to clarify: the use cases are (1) reusing a filename for different, unrelated files while letting existing processes keep their data (think transactional update of a config file, for example), and (2) making a file anonymous (unnamed) while continuing to use it like an anonymous memory map. Again, I know both are possible on Windows through other means, but I am trying to find an approach that is portable across platforms.
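For reference, the Linux flow I described looks roughly like this (a minimal sketch, error handling omitted; "config.txt" is just a placeholder name):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Process 1: open and map file#1. */
    int fd = open("config.txt", O_RDWR);
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* Process 2 would now do: unlink("config.txt"); and create a new
     * "config.txt" (file#2). Process 1 keeps seeing file#1's contents
     * through p, and file#1's storage is released only when the last
     * reference (mapping or descriptor) goes away. */

    munmap(p, 4096);
    close(fd);
    return 0;
}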
You can achieve this with a combination of CreateFile, CreateFileMapping and MapViewOfFile calls. MapViewOfFile gives you a pointer to a memory-mapped view of the file, backed by the file on disk.
The following code, when executed from different processes, writes the process ID of the last process to close the file into C:\Temp\temp.txt:
#include <windows.h>
#include <stdio.h>
#include <wchar.h>

#define BUF_SIZE 256

int main(void)
{
    WCHAR szMsg[256];
    HANDLE hMapFile;
    LPWSTR pBuf;

    HANDLE hFile = CreateFileW(
        L"C:\\Temp\\temp.txt",
        GENERIC_WRITE | GENERIC_READ,
        FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE,
        NULL,
        CREATE_ALWAYS,
        FILE_ATTRIBUTE_NORMAL,
        NULL);
    if (hFile == INVALID_HANDLE_VALUE)
    {
        printf("Could not open file (%lu).\n", GetLastError());
        return 1;
    }

    hMapFile = CreateFileMappingW(
        hFile,               // handle of file opened with read/write access
        NULL,                // default security
        PAGE_READWRITE,      // read/write access
        0,                   // maximum object size (high-order DWORD)
        BUF_SIZE,            // maximum object size (low-order DWORD)
        NULL);               // unnamed mapping: sharing happens through the file itself
    if (hMapFile == NULL)
    {
        printf("Could not create file mapping object (%lu).\n", GetLastError());
        CloseHandle(hFile);
        return 1;
    }

    pBuf = (LPWSTR) MapViewOfFile(
        hMapFile,            // handle to map object
        FILE_MAP_ALL_ACCESS, // read/write permission
        0,
        0,
        BUF_SIZE);
    if (pBuf == NULL)
    {
        printf("Could not map view of file (%lu).\n", GetLastError());
        CloseHandle(hMapFile);
        CloseHandle(hFile);
        return 1;
    }

    wsprintfW(szMsg, L"This is process id %lu", GetCurrentProcessId());
    CopyMemory(pBuf, szMsg, (wcslen(szMsg) + 1) * sizeof(WCHAR));
    MessageBoxW(NULL, szMsg, L"Check", MB_OK);

    UnmapViewOfFile(pBuf);
    CloseHandle(hMapFile);
    CloseHandle(hFile);
    return 0;
}
Make sure you open the file with GENERIC_READ|GENERIC_WRITE access and allow FILE_SHARE_READ|FILE_SHARE_WRITE|FILE_SHARE_DELETE sharing, so that subsequent opens by other processes succeed.
Also note the use of CREATE_ALWAYS in CreateFile, which overwrites the old file every time CreateFile is called. This approximates the 'unlink' effect you talk about, but it is not identical: the existing file is truncated in place rather than replaced, so it keeps the same identity, and the overwrite can fail (typically with ERROR_USER_MAPPED_FILE) while another process still holds a mapped view of it. For true unlink semantics you need to actually delete the old name, which FILE_SHARE_DELETE permits.
Code inspired by the Creating Named Shared Memory example on MSDN.
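For the unlink-style replacement itself (as opposed to the shared mapping above), a minimal sketch of what I believe the closest equivalent looks like; the file name and the caveat in the comments are mine, not from the MSDN sample:

/* Process 1: open the original file, allowing other processes to delete it. */
HANDLE hOld = CreateFileW(L"C:\\Temp\\config.txt",
                          GENERIC_READ,
                          FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                          NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

/* Process 2: delete the name and create a replacement under it. */
DeleteFileW(L"C:\\Temp\\config.txt");  /* allowed because of FILE_SHARE_DELETE */
/* Caveat: on many Windows versions the deleted name stays reserved until the
 * last handle to the old file is closed, so CREATE_NEW can fail with
 * ERROR_ACCESS_DENIED until then. Recent Windows 10 releases use POSIX delete
 * semantics on NTFS, which release the name immediately. */
HANDLE hNew = CreateFileW(L"C:\\Temp\\config.txt",
                          GENERIC_WRITE,
                          FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                          NULL, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL);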
Related
I know lots of similar questions on this topic have been asked before, but so far I have been unable to find a solution that actually works. I want to start a console program from my program and capture its output. My implementation needs to be compatible with WaitForMultipleObjects(), i.e. I want to be notified whenever there is new data to read in the pipe.
My implementation is based on this example from MSDN. However, I had to modify it a little because I need overlapped I/O in order to be able to wait for ReadFile() to finish. So I'm using named pipes created using Dave Hart's MyCreatePipeEx() function from here.
This is my actual code. I have removed error checks for readability reasons.
HANDLE hReadEvent;
HANDLE hStdIn_Rd, hStdIn_Wr;
HANDLE hStdOut_Rd, hStdOut_Wr;
SECURITY_ATTRIBUTES saAttr;
PROCESS_INFORMATION piProcInfo;
STARTUPINFO siStartInfo;
OVERLAPPED ovl;
HANDLE hEvt[2];
DWORD mask, gotbytes;
BYTE buf[4097];

saAttr.nLength = sizeof(SECURITY_ATTRIBUTES);
saAttr.bInheritHandle = TRUE;
saAttr.lpSecurityDescriptor = NULL;

MyCreatePipeEx(&hStdOut_Rd, &hStdOut_Wr, &saAttr, 0, FILE_FLAG_OVERLAPPED, FILE_FLAG_OVERLAPPED);
MyCreatePipeEx(&hStdIn_Rd, &hStdIn_Wr, &saAttr, 0, FILE_FLAG_OVERLAPPED, FILE_FLAG_OVERLAPPED);
SetHandleInformation(hStdOut_Rd, HANDLE_FLAG_INHERIT, 0);
SetHandleInformation(hStdIn_Wr, HANDLE_FLAG_INHERIT, 0);

memset(&piProcInfo, 0, sizeof(PROCESS_INFORMATION));
memset(&siStartInfo, 0, sizeof(STARTUPINFO));
siStartInfo.cb = sizeof(STARTUPINFO);
siStartInfo.hStdError = hStdOut_Wr;
siStartInfo.hStdOutput = hStdOut_Wr;
siStartInfo.hStdInput = hStdIn_Rd;
siStartInfo.dwFlags |= STARTF_USESTDHANDLES;

CreateProcess(NULL, "test.exe", NULL, NULL, TRUE, 0, NULL, NULL, &siStartInfo, &piProcInfo);

hReadEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

for(;;) {
    int i = 0;
    hEvt[i++] = piProcInfo.hProcess;

    memset(&ovl, 0, sizeof(OVERLAPPED));
    ovl.hEvent = hReadEvent;
    if(!ReadFile(hStdOut_Rd, buf, 4096, &gotbytes, &ovl)) {
        if(GetLastError() == ERROR_IO_PENDING) hEvt[i++] = hReadEvent;
    } else {
        buf[gotbytes] = 0;
        printf("%s", buf);
    }

    mask = WaitForMultipleObjects(i, hEvt, FALSE, INFINITE);
    if(mask == WAIT_OBJECT_0 + 1) {
        if(GetOverlappedResult(hStdOut_Rd, &ovl, &gotbytes, FALSE)) {
            buf[gotbytes] = 0;
            printf("%s", buf);
        }
    } else if(mask == WAIT_OBJECT_0) {
        break;
    }
}
The problem with this code is the following: as you can see, I'm reading in chunks of 4 KB using ReadFile(), because I obviously don't know in advance how much data the external program test.exe will output. Doing it this way was suggested here:
To read a variable amount of data from the client process, just issue read requests of whatever size you find convenient and be prepared to handle read events that are shorter than you requested. Don't interpret a short but non-zero length as EOF. Keep issuing read requests until you get a zero-length read or an error.
However, this doesn't work. The event object passed to ReadFile() as part of the OVERLAPPED structure only gets signaled once there are 4 KB in the buffer. If the external program just prints "Hello", the event won't trigger at all. There have to be 4 KB in the pipe for hReadEvent to actually get signaled.
So I thought I should read byte by byte instead and modified my program to use ReadFile() like this:
if(!ReadFile(hStdOut_Rd, buf, 1, &gotbytes, &ovl)) {
However, this doesn't work either. If I do it like this, the read event is not triggered at all, which really confuses me. When reading 4096 bytes, the event does trigger as soon as there are 4096 bytes in the pipe, but when reading 1 byte it doesn't work at all.
So how am I supposed to solve this? I'm pretty much out of ideas here. Is there no way to have the ReadFile() event trigger whenever there is some new data in the pipe? Can't be that difficult, can it?
Just for the record: while there are some problems with my code (see the discussion in the comments below the OP), the general problem is that it's not really possible to capture the output of arbitrary external programs, because they will typically use block buffering when their output is redirected to a pipe. Output then only arrives at the capturing program once that buffer is flushed, so real-time capturing is not really possible.
Some workarounds have been suggested though:
1) (Windows) Here is a workaround that reads the console screen buffer to capture the output of arbitrary console programs, but it currently only supports one console page of output.
2) (Linux) On Linux, it's apparently possible to use pseudo-terminals to force the external program to use unbuffered output. Here is an example.
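To illustrate the pseudo-terminal approach, a minimal sketch (my own, not the linked example), assuming Linux with forkpty() from <pty.h> and linking against -lutil; "./test" stands in for the external program:

#include <pty.h>      /* forkpty(); link with -lutil */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int master;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);
    if (pid == 0) {
        /* Child: stdout is now a tty, so the program line-buffers
         * its output instead of block-buffering it. */
        execlp("./test", "./test", (char *)NULL);
        _exit(127);
    }

    /* Parent: read the child's output as it appears. */
    char buf[4096];
    ssize_t n;
    while ((n = read(master, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    return 0;
}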
I am trying to create a program that any normal user can run on Windows to generate a list of all processes, including each executable's location. I have used CreateToolhelp32Snapshot() to get all process names, PIDs and parent PIDs, but I'm having trouble getting the image path: almost everything I try results in Access Denied.
I have tried ZwQueryInformationProcess, GetProcessImageFileName, etc., and also used OpenProcess to get a handle to each process. I can get a handle with PROCESS_QUERY_LIMITED_INFORMATION, but any other option doesn't work. I am lost and have been at this for a few days. Can anyone point me in the right direction?
This code works for a non-admin user on Windows. The szExeFile member of PROCESSENTRY32 gives you the executable name; note that it contains only the file name, not the full path (see the note after the code for retrieving the full path):
#include <windows.h>
#include <tlhelp32.h>

void ListProcesses(void)
{
    HANDLE hProcessSnap = NULL;
    PROCESSENTRY32 pe32;

    // Take a snapshot of all processes in the system.
    hProcessSnap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (hProcessSnap == INVALID_HANDLE_VALUE)
    {
        return;
    }

    // Set the size of the structure before using it.
    pe32.dwSize = sizeof(PROCESSENTRY32);

    // Retrieve information about the first process,
    // and exit if unsuccessful.
    if (!Process32First(hProcessSnap, &pe32))
    {
        CloseHandle(hProcessSnap); // clean up the snapshot object
        return;
    }

    // Now walk the snapshot of processes, and
    // display information about each process in turn.
    do
    {
        // Do something with the pe32 struct.
        // pe32.szExeFile -> name of the executable (file name only, no path)
        // pe32.th32ProcessID / pe32.th32ParentProcessID -> PID / parent PID
    } while (Process32Next(hProcessSnap, &pe32));

    CloseHandle(hProcessSnap);
}
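To get the full image path as a normal user, you can combine this with the PROCESS_QUERY_LIMITED_INFORMATION handle you already have. A minimal sketch for the body of the loop above (QueryFullProcessImageName requires Vista or later, and a few system processes may still refuse to open; wprintf needs <stdio.h>):

HANDLE hProcess = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION,
                              FALSE, pe32.th32ProcessID);
if (hProcess != NULL)
{
    WCHAR path[MAX_PATH];
    DWORD size = MAX_PATH;
    if (QueryFullProcessImageNameW(hProcess, 0, path, &size))
    {
        wprintf(L"%lu: %ls\n", (unsigned long)pe32.th32ProcessID, path);
    }
    CloseHandle(hProcess);
}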
I've written a program that reads text from one file and copies it to a new file. Using a while loop and the ReadFile/WriteFile functions, my program works... but it won't stop running unless I force-stop it. I'm guessing that I'm not closing my handles properly or that my while loop is set up wrong. Once I force-stop the program, the file has been successfully copied over to the new location with a new name.
int n = 0;
while(n=ReadFile(hFileSource, buffer, 23, &dwBytesRead, NULL)){
WriteFile(hFileNew, buffer, dwBytesRead, &dwBytesWritten, NULL);
}
CloseHandle(hFileSource);
CloseHandle(hFileNew);
return 0;
You're not correctly testing for end-of-file. ReadFile doesn't return failure at EOF; it returns success with 0 bytes read. To correctly check for EOF:
while (ReadFile(hFileSource, buffer, 23, &dwBytesRead, NULL))
{
if (dwBytesRead == 0)
break;
// write data etc
}
Is there any reason you're only reading/writing 23 bytes at a time? This will be rather inefficient.
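Putting it together, a minimal sketch of the corrected copy loop with a larger buffer (assuming hFileSource and hFileNew were opened with CreateFile as in your program):

BYTE buffer[64 * 1024];
DWORD dwBytesRead, dwBytesWritten;

while (ReadFile(hFileSource, buffer, sizeof(buffer), &dwBytesRead, NULL))
{
    if (dwBytesRead == 0)
        break; // EOF: ReadFile succeeded but read 0 bytes

    if (!WriteFile(hFileNew, buffer, dwBytesRead, &dwBytesWritten, NULL) ||
        dwBytesWritten != dwBytesRead)
    {
        break; // handle the write error
    }
}

CloseHandle(hFileSource);
CloseHandle(hFileNew);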
I am trying to write to a file while it is open as a mapped file in another process, and it fails.
Please look at these fragments of code:
access = GENERIC_READ | GENERIC_WRITE;
share = FILE_SHARE_READ | FILE_SHARE_WRITE;
disposition = OPEN_EXISTING;
HANDLE fileHandle = CreateFileA(fileName.c_str(), access, share, 0, disposition, 0);
//...
unsigned long valProtect = 0;
//...
valProtect = PAGE_READWRITE;
//...
const HANDLE mappingHandle = CreateFileMapping(fileHandle, 0, valProtect, 0, 0, 0);
//...
this->m_access = FILE_MAP_ALL_ACCESS;
//...
this->m_startAddress = (uint8_t*)MapViewOfFile(mappingHandle, this->m_access, 0, 0, 0);
//...
CloseHandle(fileHandle);
At this point the file handle is closed, but the file is still mapped into the address space.
I open this file in Notepad++, modify it and try to save it, but I see the message:
"Please check if this file is opened in another program."
So I cannot overwrite it from another process; it seems the file is locked against writing.
Even if I unmap the view:
UnmapViewOfFile(this->m_startAddress);
I still cannot overwrite the file.
What did I do wrong?
Notepad++ is likely trying to obtain exclusive access to the file when writing to it, which will fail while the mapping (or anything else using the file) is still open. Many applications request exclusive access when writing files, to prevent other processes from reading the data until it has been completely written. You are sharing your file, but Notepad++ is simply asking for more access than your sharing mode allows. There is nothing you can do about that in your code.
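As an aside (based only on the fragments shown): the code never closes mappingHandle, so the file object stays referenced even after UnmapViewOfFile. Releasing the file completely takes all of these steps:

UnmapViewOfFile(this->m_startAddress); // release the mapped view
CloseHandle(mappingHandle);            // release the file-mapping (section) object
CloseHandle(fileHandle);               // already done above, shown for completeness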
I am trying to use a combination of the functions CreateFileMapping, MapViewOfFile and FlushViewOfFile.
The total buffer is larger than the mapped view: for example, the buffer is 50 KB and the mapped view is 2 KB. In this scenario, I want to write the whole buffer to a physical file using the functions above.
I am able to write the first part to the file, but how do I write the remaining part? That is, how do I move to the next page and write the next chunk of the data?
#define MEM_UNIT_SIZE 100

First module (memory-map creator):

GetTempPath(256, szTmpFile);
GetTempFileName(szTmpFile, pName, 0, szMMFile);
hFile = CreateFile(szMMFile, GENERIC_WRITE | GENERIC_READ, FILE_SHARE_WRITE,
                   NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_TEMPORARY, NULL);
HANDLE hFileMMF = CreateFileMapping(hFile, NULL, PAGE_READWRITE, 0,
                                    MEM_UNIT_SIZE, pName);

Second module (memory writer):

long lBinarySize = 1000;
long lPageSize = MEM_UNIT_SIZE;
HANDLE hFileMMF = OpenFileMapping(FILE_MAP_WRITE, FALSE, pMemName);
LPVOID pViewMMFFile = MapViewOfFile(hFileMMF, FILE_MAP_WRITE, 0, 0, lPageSize);
CMutex mutex(FALSE, _T("Writer"));
mutex.Lock();
try
{
    ASSERT(FALSE);
    CopyMemory(pViewMMFFile, pBinary, lPageSize); // write
    FlushViewOfFile(pViewMMFFile, lPageSize);
    // first 100 bytes flushed to file.
    // how to move to the next location and write the next 900 bytes? <--- ??
}
catch (CException e)
{
    ...
}
Please share any suggestions you have.
Thanks in advance,
haranadh
Repeat your call to MapViewOfFile with a different range.
As described in the MapViewOfFile documentation at http://msdn.microsoft.com/en-us/library/windows/desktop/aa366761(v=VS.85).aspx, check the "allocation granularity": the file offset you pass in dwFileOffsetLow/dwFileOffsetHigh must be a multiple of the system's allocation granularity.
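A minimal sketch of the idea (my own, with assumed names: hFileMMF is the mapping from the question, created with the total 50 KB size rather than MEM_UNIT_SIZE, and pBuffer is a const BYTE* to the 50 KB source buffer). Note that the allocation granularity is typically 64 KB, so a 50 KB buffer would in practice fit into a single view; the loop just illustrates the mechanics:

SYSTEM_INFO si;
GetSystemInfo(&si);
DWORD granularity = si.dwAllocationGranularity; // view offsets must be multiples of this

DWORD totalSize = 50 * 1024; // total buffer size
DWORD offset = 0;
while (offset < totalSize)
{
    DWORD chunk = min(granularity, totalSize - offset);
    LPVOID view = MapViewOfFile(hFileMMF, FILE_MAP_WRITE,
                                0,      // file offset, high-order DWORD
                                offset, // file offset, low-order DWORD (granularity-aligned)
                                chunk);
    if (view == NULL)
        break;

    CopyMemory(view, pBuffer + offset, chunk);
    FlushViewOfFile(view, chunk);
    UnmapViewOfFile(view);

    offset += chunk;
}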