Receiving order of socket data - Windows

I am using a socket to send data from a local machine to a remote one over TCP (stream mode).
The code on the local side is:
// ----------- Local
send(sd, pData, iSize, 0); // send data
The size of the data is about 1 MB, so the socket might split it into several packets.
While receiving the data on the remote side, I have to receive the pieces separately and then combine them together.
The code on the remote side is:
// ----------- Remote : Receiving data
int iSizeThis(0);         // size of a single received piece
static int iSizeAcc(0);   // size of the total data received so far
static int iDataSize(0);  // size of the original data
// Get size
if (iDataSize <= 0)
{
    if ((iSizeThis = recv(cli_sd, (char*)&iDataSize, 4, MSG_PEEK)) == 0) {
        ....
    } else if (iSizeThis == SOCKET_ERROR) {
        ....
    } else {
        // Allocate memory
        if (iDataSize > 0)
            pData = realloc(pData, iDataSize);
    }
} else if (iSizeAcc < iDataSize) {
    // Get data.
    // The size of the data is about 1 MB, so the socket will split it into several packets.
    // I have to receive the pieces separately, and then combine them together.
    iSizeThis = recv(cli_sd, ((char*)pData) + iSizeAcc, iDataSize - iSizeAcc, 0);
    iSizeAcc += iSizeThis;
    //{// If I uncomment this block, the receiving order will be reversed. Why?????
    //    static int i(0);
    //    std::ostringstream oss;
    //    oss << i++ << "\n\n";
    //    oss << "iSizeThis : " << iSizeThis << "\n";
    //    oss << "iSizeAcc : " << iSizeAcc << "\n";
    //    oss << "iDataSize : " << iDataSize << "\n";
    //    ::MessageBoxA(this->GetSafeHwnd(), oss.str().c_str(), "---", 0);
    //}
    // If all the fragments have been combined into pData, then save it to a file.
    if (iSizeAcc >= iDataSize) {
        // Save to file
        FILE* pFile = fopen("CCC.dat", "wb");
        if (pFile != NULL) {
            fwrite(((char*)pData) + 4, 1, iDataSize - 4, pFile);
            fclose(pFile);
        }
        iSizeAcc = 0;
        iDataSize = 0;
    }
}
The odd thing is: if I uncomment the message-box block on the remote side, the receiving order is reversed.
Thus, the data assembled on the remote side ends up in the wrong order.
Why? (And how can I get the fragments in the correct order?)
Thanks in advance.

While the MessageBoxA function is executing, it runs a modal message loop that pumps messages to your window. Whether or not your thread was expecting them, MessageBoxA dispatches them to you, so your receive handler can be re-entered before the previous call has finished, which is why the fragments appear out of order.

Calling MessageBoxA (a blocking, modal dialog) inside a receive loop is a fundamentally flawed idea. If you want to see the values, inspect them in a debugger, print them to a dialog (e.g. a text field), write them to the console, or dump them to a file.
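For example, here is a minimal sketch of a non-intrusive alternative (assuming it runs inside the same receive handler as above; LogProgress is a made-up helper): appending to a log file or calling OutputDebugStringA does not run a modal message loop, so it does not re-enter your handler.

#include <fstream>
#include <sstream>
#include <windows.h>

// Minimal logging sketch: neither std::ofstream nor OutputDebugStringA pumps
// window messages, so the order of recv() calls is not disturbed.
static void LogProgress(int iSizeThis, int iSizeAcc, int iDataSize)
{
    std::ostringstream oss;
    oss << "iSizeThis: " << iSizeThis
        << " iSizeAcc: " << iSizeAcc
        << " iDataSize: " << iDataSize << "\n";

    // Option 1: append to a log file.
    std::ofstream log("recv_log.txt", std::ios::app);
    log << oss.str();

    // Option 2: show it in the debugger's output window.
    ::OutputDebugStringA(oss.str().c_str());
}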

Non-blocking reads/writes to stdin/stdout in C on Linux or Mac

I have two programs communicating via named pipes (on a Mac), but the buffer size of named pipes is too small. Program 1 writes 50K bytes to pipe 1 before reading pipe 2. Named pipes are 8K (on my system), so program 1 blocks until the data is consumed. Program 2 reads 20K bytes from pipe 1 and then writes 20K bytes to pipe 2. Pipe 2 can't hold 20K, so program 2 now blocks. It will only be released when program 1 does its reads, but program 1 is blocked waiting for program 2. Deadlock.
I thought I could fix the problem by creating a gasket program that reads stdin non-blocking and writes stdout non-blocking, temporarily storing the data in a large buffer. I tested the program using cat data | ./gasket 0 | ./gasket 1 > out, expecting out to be a copy of data. However, while the first invocation of gasket works as expected, the read in the second program returns 0 before all the data is consumed and never returns anything other than 0 in follow-on calls.
I tried the code below both on a Mac and on Linux. Both behave the same. I've added logging so that I can see that the fread in the second invocation of gasket starts getting no data even though it has not read all the data written by the first invocation.
#include <stdio.h>
#include <fcntl.h>
#include <time.h>
#include <stdlib.h>
#include <unistd.h>

#define BUFFER_SIZE 100000
char buffer[BUFFER_SIZE];
int elements = 0;

int main(int argc, char **argv)
{
    int total_read = 0, total_write = 0;
    FILE *logfile = fopen(argv[1], "w");

    int flags = fcntl(fileno(stdin), F_GETFL, 0);
    fcntl(fileno(stdin), F_SETFL, flags | O_NONBLOCK);
    flags = fcntl(fileno(stdout), F_GETFL, 0);
    fcntl(fileno(stdout), F_SETFL, flags | O_NONBLOCK);

    while (1) {
        int num_read = 0;
        if (elements < (BUFFER_SIZE - 1024)) { // space in buffer
            num_read = fread(&buffer[elements], sizeof(char), 1024, stdin);
            elements += num_read;
            total_read += num_read;
            fprintf(logfile, "read %d (%d) elements \n", num_read, total_read); fflush(logfile);
        }
        if (elements > 0) { // something in buffer that we can write
            int num_written = fwrite(&buffer[0], sizeof(char), elements, stdout); fflush(stdout);
            total_write += num_written;
            fprintf(logfile, "wrote %d (%d) elements \n", num_written, total_write); fflush(logfile);
            if (num_written > 0) { // copy data to top of buffer
                for (int i = 0; i < (elements - num_written); i++) {
                    buffer[i] = buffer[i + num_written];
                }
                elements -= num_written;
            }
        }
    }
}
I guess I could make the gasket multi-threaded and use blocking reads in one thread and blocking writes in the other, but I would like to understand why non-blocking IO seems to break for me.
Thanks!
My general solution to any IPC project is to make the client and server use non-blocking I/O. That requires queuing data on both the writing and the reading sides, to handle the cases where the OS can't read/write at all, or can only read/write a portion of your message.
The code below will probably seem like EXTREME overkill, but if you get it working, you can use it for the rest of your career, whether for named pipes, sockets, networking, you name it.
In pseudo-code:
typedef struct {
    const char* pcData, * pcToFree;  // pcData may no longer point to malloc'd region
    int iToSend;
} DataToSend_T;

queue of DataToSend_T qdts;

// Caller will use malloc() to allocate storage, and create the message in
// that buffer. MyWrite() will free it now, or WritableCB() will free it
// later. Either way, the app must NOT free it, and must not even refer to
// it again.
MyWrite( const char* pcData, int iToSend ) {
    iSent = 0;
    // Normally the OS will tell select() if the socket is writable, but if we're hugely
    // compute-bound, then it won't have a chance to. So let's call WritableCB() to
    // send anything in our queue that is now sendable. We have to send the data in
    // order, of course, so we can't send the new data until the entire queue is done.
    WritableCB();
    if ( qdts has no entries ) {
        iSent = write( pcData, iToSend );
        // TODO: check error
        // Did we send it all? We're done.
        if ( iSent == iToSend ) {
            free( pcData );
            return;
        }
    }
    // OK, either 1) we had stuff queued already, meaning we can't send yet, or 2)
    // we tried to send but couldn't send it all.
    add to queue qdts the DataToSend ( pcData + iSent, pcData, iToSend - iSent );
}
WritableCB() {
    while ( qdts has entries ) {
        DataToSend_T* pdts = qdts head;
        int iSent = write( pdts->pcData, pdts->iToSend );
        // TODO: check error
        if ( iSent == pdts->iToSend ) {
            free( pdts->pcToFree );
            pop the front node off qdts
        } else {
            pdts->pcData += iSent;
            pdts->iToSend -= iSent;
            return;
        }
    }
}
// Off-subject, but I like a TINY buffer as the initial size: it will always
// exercise the "buffer growth" code for almost all usage, so we're sure it works.
// If the initial buffer size is like 1M and almost never grows, then the grow code
// may be buggy and we won't know until there's a crash years later.
int iBufSize = 1, iEnd = 0;  // iEnd is the first byte NOT yet part of a message
char* pcBuf = malloc( iBufSize );
ReadableCB() {
    // Keep reading the socket until there's no more data. Grow the buffer if necessary.
    while (1) {
        int iSpace = iBufSize - iEnd;
        int iRead = read( pcBuf + iEnd, iSpace );
        // TODO: check error
        iEnd += iRead;
        // If we read less than we had space for, then read returned because this is
        // all the available data, not because the buffer was too small.
        if ( iRead < iSpace )
            break;
        // Otherwise, double the buffer and try reading some more.
        iBufSize *= 2;
        pcBuf = realloc( pcBuf, iBufSize );
    }
    iStart = 0;
    while (1) {
        if ( pcBuf[ iStart ] until iEnd-1 is less than a message ) {
            // If our partial message isn't at the front of the buffer, move it there.
            if ( iStart ) {
                memmove( pcBuf, pcBuf + iStart, iEnd - iStart );
                iEnd -= iStart;
            }
            return;
        }
        // process a message, and advance iStart by the size of that message.
    }
}
main() {
    // Do your initial processing, and call MyWrite() to send and/or queue data.
    while (1) {
        select()  // see man page
        if ( the file handle is readable )
            ReadableCB();
        if ( the file handle is writable )
            WritableCB();
        if ( the file handle is in error )
            // handle it
        if ( application is finished )
            exit( EXIT_SUCCESS );
    }
}
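To make that main() loop concrete for the stdin/stdout gasket in the question, here is a minimal C++ sketch. ReadableCB/WritableCB stand for the callbacks sketched above, and have_queued_output is a placeholder flag meaning "the write queue is non-empty"; all three are assumptions, not part of any library.

#include <sys/select.h>
#include <unistd.h>
#include <cstdio>
#include <algorithm>

void ReadableCB();               // as sketched above
void WritableCB();               // as sketched above
extern bool have_queued_output;  // placeholder: true while the write queue is non-empty

void gasket_loop() {
    int fd_in = fileno(stdin), fd_out = fileno(stdout);
    for (;;) {
        fd_set rfds, wfds;
        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        FD_SET(fd_in, &rfds);               // always willing to read more input
        if (have_queued_output)
            FD_SET(fd_out, &wfds);          // only ask about writability when there is data to send
        int n = select(std::max(fd_in, fd_out) + 1, &rfds, &wfds, nullptr, nullptr);
        if (n <= 0)
            continue;                       // interrupted or error; real code should check errno
        if (FD_ISSET(fd_in, &rfds))
            ReadableCB();
        if (FD_ISSET(fd_out, &wfds))
            WritableCB();
    }
}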

Reading Data from a Physical Hard Drive

I am trying to develop a program that finds two connected unformatted physical drives and reads bytes from them. The program currently runs in administrator mode, since I guess that's the only way the program can see unformatted hard drives. I am using Visual Studio 2015 and it runs on a Windows 7 machine.
The problem is that it can only read multiples of 512 (512 is the sector size). Currently the unformatted hard drives are in the disk 2 and disk 3 slots (they are both SSDs). The program first reads 512 bytes (this works without an issue) and does no more reads if it's a formatted hard drive. If it's an unformatted hard drive, it goes ahead and reads more bytes. If it's hard drive A, it then reads the next 1024 bytes and it works (read_amount = 1024). If it's hard drive B, it then reads the next 1025 bytes and it doesn't work (read_amount = 0). I am not sure why it can't read sizes that are not multiples of 512/the sector size. My understanding is that when you call CreateFile() with dwFlagsAndAttributes = FILE_ATTRIBUTE_NORMAL, I should be able to read sizes that are not multiples of the sector size (if you use FILE_FLAG_NO_BUFFERING then you can only read multiples of 512, and I am NOT using that flag). See my code below.
// Hard_Drive_Read.cpp : Defines the entry point for the console application.
// This program assumes you have EXACTLY TWO unformatted hard drives connected to your computer.
#include <Windows.h>
#include <io.h>
#include <fcntl.h>
#include <fstream>
#include <iostream>
#include <iomanip>

using namespace std;

int main(int argc, char *argv[])
{
    if (argc != 3)
    {
        cout << "Need to enter 2 arguments" << endl;
        exit(0);
    }
    int frames_to_process = atoi(argv[2]);
    if (frames_to_process < 1)
    {
        cout << "invalid argument 2" << endl;
        exit(0);
    }
    //HANDLE hDisk_A;
    //HANDLE hDisk_B;
    LPCTSTR dsksrc = L"\\\\.\\PhysicalDrive";
    wchar_t dsk[512] = L"";
    bool channel_A_found = false;
    bool channel_B_found = false;
    char frame_header_A[1024];
    char frame_header_B[1025];
    HANDLE hDisk;
    char buff_read[512];
    DWORD read_amount = 0;
    for (int i = 0; i < 4; i++)
    {
        swprintf(dsk, 511, L"%s%d", dsksrc, i);
        hDisk = CreateFile(dsk, GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (hDisk == INVALID_HANDLE_VALUE)
        {
            printf("%s%d%s", "couldn't open the drive ", i, "\n");
            CloseHandle(hDisk);
        }
        else
        {
            printf("%s%d%s", "successfully opened the drive ", i, "\n");
            BOOL read_success_1 = ReadFile(hDisk, buff_read, 512, &read_amount, NULL);
            cout << "read amount 1 - " << read_amount << endl;
            if ((read_success_1 == TRUE) && (read_amount == 512))
            {
                if ((buff_read[510] == (char)0x55) && (buff_read[511] == (char)0xAA)) // test for a formatted drive; are there other identifiers?
                {
                    cout << i << " is a formatted drive" << endl;
                }
                else
                {
                    cout << "Not a formatted drive, trying to find sync " << endl;
                    ofstream writeBinary_Test;
                    if (i == 2)
                    {
                        writeBinary_Test.open("file_A_test.bin", ofstream::out | ofstream::binary);
                        ReadFile(hDisk, frame_header_A, 1024, &read_amount, NULL);
                        cout << "read amount " << read_amount << endl;
                        writeBinary_Test.write(frame_header_A, 1024);
                        writeBinary_Test.close();
                    }
                    else if (i == 3)
                    {
                        writeBinary_Test.open("file_B_test.bin", ofstream::out | ofstream::binary);
                        ReadFile(hDisk, frame_header_B, 1025, &read_amount, NULL);
                        cout << "read amount " << read_amount << endl;
                        writeBinary_Test.write(frame_header_B, 1025);
                        writeBinary_Test.close();
                    }
                    LARGE_INTEGER distanceToMove;
                    distanceToMove.QuadPart = 0; // rewind to the start of the drive
                    SetFilePointerEx(hDisk, distanceToMove, NULL, FILE_BEGIN);
                }
            }
            else
            {
            }
        }
        if (channel_A_found && channel_B_found)
        {
            cout << "both drives found" << endl;
            break;
        }
    }
    if ((channel_A_found == false) || (channel_B_found == false))
    {
        cout << "Couldn't Find Hard Drive A or Drive B or Both" << endl;
        cout << "Exiting the program" << endl;
        exit(0);
    }
    CloseHandle(hDisk);
    return 0;
}
Eventually, I want to use SetFilePointerEx() to move around the hard drive, and the program has to work with any data size (not just multiples of 512). Therefore, it's imperative that I can read sizes that are not multiples of 512. Any ideas of how to fix this program? Am I using my flags properly?
Any help is much appreciated!
The documentation for CreateFile says:
Volume handles can be opened as noncached at the discretion of the particular file system, even when the noncached option is not specified in CreateFile. You should assume that all Microsoft file systems open volume handles as noncached. The restrictions on noncached I/O for files also apply to volumes.
Although it doesn't spell it out explicitly, this applies to drives as well as to volumes.
In practice, this isn't a problem. It is straightforward to write a helper function that returns an arbitrary amount of data from an arbitrary offset, while performing only aligned reads.
It's imperative that I can read sizes that are not multiples of 512.
That is not possible. For direct access of a disk, you can only read and write multiples of the sector size. Furthermore, you must align your read and write operations: the file pointer must be at a multiple of the sector size.
If you want to present an interface that allows arbitrary seeking, reading and writing, then you will need to implement your own buffering on top of the aligned raw disk access.
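To make that concrete, here is a minimal sketch of such a helper (ReadUnaligned is a made-up name; it assumes a raw disk handle opened as in the question and a 512-byte sector size): it rounds the requested range out to sector boundaries, reads the covering sectors into a temporary buffer, and copies out just the bytes the caller asked for.

#include <windows.h>
#include <vector>
#include <cstring>

// Hypothetical helper: read 'count' bytes starting at absolute byte 'offset'
// from a raw disk handle, using only sector-aligned seeks and sector-sized reads.
static bool ReadUnaligned(HANDLE hDisk, unsigned long long offset,
                          void* dest, DWORD count, DWORD sectorSize = 512)
{
    // Round the start down and the end up to sector boundaries.
    unsigned long long alignedStart = (offset / sectorSize) * sectorSize;
    unsigned long long end = offset + count;
    unsigned long long alignedEnd = ((end + sectorSize - 1) / sectorSize) * sectorSize;
    DWORD alignedBytes = static_cast<DWORD>(alignedEnd - alignedStart);

    std::vector<char> temp(alignedBytes);

    LARGE_INTEGER pos;
    pos.QuadPart = static_cast<LONGLONG>(alignedStart);
    if (!SetFilePointerEx(hDisk, pos, NULL, FILE_BEGIN))
        return false;

    DWORD bytesRead = 0;
    if (!ReadFile(hDisk, temp.data(), alignedBytes, &bytesRead, NULL) || bytesRead < alignedBytes)
        return false;

    // Copy out only the bytes the caller actually wanted.
    std::memcpy(dest, temp.data() + (offset - alignedStart), count);
    return true;
}

With something like this, the 1025-byte read from drive B would become ReadUnaligned(hDisk, 512, frame_header_B, 1025), and the underlying ReadFile calls stay sector-aligned.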

How to make ZeroMQ PUB/SUB drop old messages instead of new ones (for realtime feeds)?

Say I have a PUB server that zmq_send()'s realtime messages to a SUB client. If the client is busy and cannot zmq_recv() messages quickly enough, then messages will be buffered in the client (and/or the server).
If the buffer grows too large (the high water mark), then NEW messages will be dropped. For realtime messages this is the opposite of what one wants: OLD messages should be dropped to make room for NEW ones.
Is there some way to do this?
Ideally I would like the SUB client's receive queue to be either empty or contain the most recent message only. When a new message is received it would replace the old one. ( I guess the problem here would be that the client would block on zmq_recv() when the queue is empty, wasting time doing so. )
So how are realtime feeds usually implemented in ZeroMQ?
I'll answer my own question here. The ZMQ_CONFLATE setting ("Keep only last message") seemed promising, but it doesn't work with subscription filters. It only ever keeps one message in the queue, so if you have more than one filter, both old and new messages for the other filter types get thrown away.
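For reference, a rough sketch of what was tried, assuming cppzmq as used in the code further down (the endpoint and filter are placeholders; the option has to be set before connecting):

#include <zmq.hpp>

void subscribe_conflated() {
    zmq::context_t ctx(1);
    zmq::socket_t sub(ctx, ZMQ_SUB);
    int conflate = 1;
    // Must be set before connect; keeps only the most recent message in the queue.
    sub.setsockopt(ZMQ_CONFLATE, &conflate, sizeof(conflate));
    sub.connect("tcp://localhost:5556");      // placeholder endpoint
    sub.setsockopt(ZMQ_SUBSCRIBE, "1", 1);    // placeholder filter "1"
    // With more than one filter, conflation keeps one message total,
    // not one per filter, which is why it did not work for this use case.
}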
The ZeroMQ guide likewise recommends simply killing slow subscribers, but that doesn't seem like a realistic solution. Having subscribers with different read speeds subscribed to the same fast publisher should be a normal use case: some of these subscribers might live on slow computers, others on fast ones, etc. ZeroMQ should be able to handle that somehow.
http://zguide.zeromq.org/page:all#Slow-Subscriber-Detection-Suicidal-Snail-Pattern
I ended up manually dropping old queued-up messages on the client side, and it seems to work fine. That way the client gets subscribed messages that are less than 3 ms old (over TCP on localhost). It works even in cases where there are five thousand 10-second-old messages in the queue in front of the few realtime messages at the back. This is good enough for me.
I can't help but think this is something that should be provided by the library; it could probably do a better job of it.
Anyway, here is the client-side code that drops old messages:
bool Empty(zmq::socket_t& socket) {
    bool ret = true;
    zmq::pollitem_t poll_item = { socket, 0, ZMQ_POLLIN, 0 };
    zmq::poll(&poll_item, 1, 0); // 0 = no wait
    if (poll_item.revents & ZMQ_POLLIN) {
        ret = false;
    }
    return ret;
}

std::vector<std::string> GetRealtimeSubscribedMessageVec(zmq::socket_t& socket_sub, int timeout_ms)
{
    std::vector<std::string> ret;
    struct MessageTmp {
        int id_ = 0;
        std::string data_;
        boost::posix_time::ptime timestamp_;
    };
    std::map<int, MessageTmp> msg_map;
    int read_msg_count = 0;
    int time_in_loop = 0;
    auto start_of_loop = boost::posix_time::microsec_clock::universal_time();
    do {
        read_msg_count++;
        // msg format sent by publisher is: filter, timestamp, data
        MessageTmp msg;
        msg.id_ = boost::lexical_cast<int>(s_recv(socket_sub));
        msg.timestamp_ = boost::posix_time::time_from_string(s_recv(socket_sub));
        msg.data_ = s_recv(socket_sub);
        msg_map[msg.id_] = msg;

        auto now = boost::posix_time::microsec_clock::universal_time();
        time_in_loop = (now - start_of_loop).total_milliseconds();
        if (time_in_loop > timeout_ms) {
            std::cerr << "Timeout reached. Publisher is probably sending messages quicker than we can drop them." << std::endl;
            break;
        }
    } while (Empty(socket_sub) == false);

    if (read_msg_count > 1) {
        std::cout << "num of old queued up messages dropped: " << (read_msg_count - 1) << std::endl;
    }

    for (const auto& pair : msg_map) {
        const auto& msg_tmp = pair.second;
        auto now = boost::posix_time::microsec_clock::universal_time();
        auto message_age_ms = (now - msg_tmp.timestamp_).total_milliseconds();
        if (message_age_ms > timeout_ms) {
            std::cerr << "[SUB] Newest message too old. f:" << msg_tmp.id_ << ", age: " << message_age_ms << "ms, s:" << msg_tmp.data_.size() << std::endl;
        }
        else {
            std::cout << "[SUB] f:" << msg_tmp.id_ << ", age: " << message_age_ms << "ms, s:" << msg_tmp.data_.size() << std::endl;
            ret.push_back(msg_tmp.data_);
        }
    }
    return ret;
}
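For completeness, a hypothetical caller-side sketch (the socket setup, endpoint, and the 100 ms budget are assumptions; s_recv comes from zhelpers.hpp as in the code above):

#include <zmq.hpp>
#include <string>
#include <vector>

std::vector<std::string> GetRealtimeSubscribedMessageVec(zmq::socket_t&, int); // from above

void client_loop() {
    zmq::context_t ctx(1);
    zmq::socket_t sub(ctx, ZMQ_SUB);
    sub.connect("tcp://localhost:5556");   // placeholder endpoint
    sub.setsockopt(ZMQ_SUBSCRIBE, "", 0);  // subscribe to all filters
    while (true) {
        // Drain the queue, dropping stale messages; keep only those fresher than 100 ms per filter.
        auto messages = GetRealtimeSubscribedMessageVec(sub, 100);
        for (const auto& payload : messages) {
            // ... process the freshest payload for each filter ...
        }
    }
}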

How to prevent flushing to disk of a memory map opened on a windows temporary delete-on-close file

UPDATE 2 / TL;DR
Is there some way to prevent dirty pages of a windows FILE_FLAG_DELETE_ON_CLOSE temporary file from being flushed as a result of closing memory maps opened on these files?
Yes. If you do not need to do anything with the files themselves after their initial creation and you implement some naming conventions, this is possible through the strategy explained in this answer.
Note: I am still quite interested in finding out why there is so much difference in behavior depending on how the maps are created and on the order of disposal/unmapping.
I have been looking into some strategies for an inter-process shared-memory data structure that allows growing and shrinking its committed capacity on Windows by using a chain of "memory chunks."
One possible way is to use pagefile backed named memory maps as the chunk memory. An advantage of this strategy is the possibility to use SEC_RESERVE to reserve a big chunk of memory address space and incrementally allocate it using VirtualAlloc with MEM_COMMIT. Disadvantages appear to be (a) the requirement to have SeCreateGlobalPrivilege permissions to allow using a shareable name in the Global\ namespace and (b) the fact that all committed memory contributes to the system commit charge.
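For context, a minimal sketch of that pagefile-backed approach (the name and sizes are placeholders; a Local\ name sidesteps the Global\ privilege requirement at the cost of per-session scope): reserve a large section with SEC_RESERVE and commit pages incrementally with VirtualAlloc.

#include <windows.h>

void reserve_and_commit_sketch() {
    const SIZE_T kReserveSize = 1ull << 30;   // reserve 1 GiB of address space up front
    // Backed by the system paging file (INVALID_HANDLE_VALUE), reserved but not yet committed.
    HANDLE hMap = ::CreateFileMappingA(
        INVALID_HANDLE_VALUE, nullptr,
        PAGE_READWRITE | SEC_RESERVE,
        static_cast<DWORD>(kReserveSize >> 32),
        static_cast<DWORD>(kReserveSize & 0xFFFFFFFFu),
        "Local\\my_shared_arena");            // placeholder name
    char* base = static_cast<char*>(::MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, kReserveSize));

    // Commit the first chunk on demand; only committed pages count against the commit charge.
    const SIZE_T kChunkSize = 4 * 1024 * 1024;
    ::VirtualAlloc(base, kChunkSize, MEM_COMMIT, PAGE_READWRITE);
}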
To circumvent these disadvantages, I started investigating the use of temporary-file-backed memory maps, i.e. memory maps over files created using the FILE_FLAG_DELETE_ON_CLOSE | FILE_ATTRIBUTE_TEMPORARY flag combination. This appears to be a recommended strategy that, according to e.g. this blog post, should prevent flushing the mapped memory to disk (unless memory pressure causes dirty mapped pages to be paged out).
I am, however, observing that closing the map/file handle before the owning process exits causes dirty pages to be flushed to disk. This occurs even if the view/file handle is not the one through which the dirty pages were created, and even when these views/file handles were opened after the pages were 'dirtied' in a different view.
It appears that changing the order of disposal (i.e. unmapping the view first or closing the file handle first) has some impact on when the disk flush is initiated, but not on the fact that flushing takes place.
So my questions are:
Is there some way to use temporary file backed memory maps and prevent them from flushing dirty pages when the map/file is closed, taking into account that multiple threads within a process/multiple processes may have open handles/views to such a file?
If not, what is/could be the reason for the observed behavior?
Is there an alternative strategy that I may have overlooked?
UPDATE
Some additional info: When running the "arena1" and "arena2" parts of the sample code below in two separate (independent) processes, with "arena1" being the process that creates the shared memory regions and "arena2" the one that opens them, the following behavior is observed for maps/chunks that have dirty pages:
If the view is closed before the file handle in the "arena1" process, each of these chunks is flushed to disk in what seems to be a (partially) synchronous operation (i.e. it blocks the disposing thread for several seconds), independent of whether or not the "arena2" process was started.
If the file handle is closed before the view, disk flushes only occur for those maps/chunks that are closed in the "arena1" process while the "arena2" process still has an open handle to them, and they appear to be 'asynchronous', i.e. they do not block the application thread.
Refer to the (C++) sample code below, which reproduces the problem on my system (x64, Win7):
static uint64_t start_ts;

static uint64_t elapsed() {
    return ::GetTickCount64() - start_ts;
}

class PageArena {
public:
    typedef uint8_t* pointer;

    PageArena(int id, const char* base_name, size_t page_sz, size_t chunk_sz, size_t n_chunks, bool dispose_handle_first) :
        id_(id), base_name_(base_name), pg_sz_(page_sz), dispose_handle_first_(dispose_handle_first) {
        for (size_t i = 0; i < n_chunks; i++)
            chunks_.push_back(new Chunk(i, base_name_, chunk_sz, dispose_handle_first_));
    }

    ~PageArena() {
        for (auto i = 0; i < chunks_.size(); ++i) {
            if (chunks_[i])
                release_chunk(i);
        }
        std::cout << "[" << ::elapsed() << "] arena " << id_ << " destructed" << std::endl;
    }

    pointer alloc() {
        auto ptr = chunks_.back()->alloc(pg_sz_);
        if (!ptr) {
            chunks_.push_back(new Chunk(chunks_.size(), base_name_, chunks_.back()->capacity(), dispose_handle_first_));
            ptr = chunks_.back()->alloc(pg_sz_);
        }
        return ptr;
    }

    size_t num_chunks() {
        return chunks_.size();
    }

    void release_chunk(size_t ndx) {
        delete chunks_[ndx];
        chunks_[ndx] = nullptr;
        std::cout << "[" << ::elapsed() << "] chunk " << ndx << " released from arena " << id_ << std::endl;
    }

private:
private:
struct Chunk {
public:
Chunk(size_t ndx, const std::string& base_name, size_t size, bool dispose_handle_first) :
map_ptr_(nullptr), tail_(nullptr),
handle_(INVALID_HANDLE_VALUE), size_(0),
dispose_handle_first_(dispose_handle_first) {
name_ = name_for(base_name, ndx);
if ((handle_ = create_temp_file(name_, size)) == INVALID_HANDLE_VALUE)
handle_ = open_temp_file(name_, size);
if (handle_ != INVALID_HANDLE_VALUE) {
size_ = size;
auto map_handle = ::CreateFileMappingA(handle_, nullptr, PAGE_READWRITE, 0, 0, nullptr);
tail_ = map_ptr_ = (pointer)::MapViewOfFile(map_handle, FILE_MAP_ALL_ACCESS, 0, 0, size);
::CloseHandle(map_handle); // no longer needed.
}
}
~Chunk() {
if (dispose_handle_first_) {
close_file();
unmap_view();
} else {
unmap_view();
close_file();
}
}
size_t capacity() const {
return size_;
}
pointer alloc(size_t sz) {
pointer result = nullptr;
if (tail_ + sz <= map_ptr_ + size_) {
result = tail_;
tail_ += sz;
}
return result;
}
private:
static const DWORD kReadWrite = GENERIC_READ | GENERIC_WRITE;
static const DWORD kFileSharing = FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE;
static const DWORD kTempFlags = FILE_ATTRIBUTE_NOT_CONTENT_INDEXED | FILE_FLAG_DELETE_ON_CLOSE | FILE_ATTRIBUTE_TEMPORARY;
static std::string name_for(const std::string& base_file_path, size_t ndx) {
std::stringstream ss;
ss << base_file_path << "." << ndx << ".chunk";
return ss.str();
}
static HANDLE create_temp_file(const std::string& name, size_t& size) {
auto h = CreateFileA(name.c_str(), kReadWrite, kFileSharing, nullptr, CREATE_NEW, kTempFlags, 0);
if (h != INVALID_HANDLE_VALUE) {
LARGE_INTEGER newpos;
newpos.QuadPart = size;
::SetFilePointerEx(h, newpos, 0, FILE_BEGIN);
::SetEndOfFile(h);
}
return h;
}
static HANDLE open_temp_file(const std::string& name, size_t& size) {
auto h = CreateFileA(name.c_str(), kReadWrite, kFileSharing, nullptr, OPEN_EXISTING, kTempFlags, 0);
if (h != INVALID_HANDLE_VALUE) {
LARGE_INTEGER sz;
::GetFileSizeEx(h, &sz);
size = sz.QuadPart;
}
return h;
}
void close_file() {
if (handle_ != INVALID_HANDLE_VALUE) {
std::cout << "[" << ::elapsed() << "] " << name_ << " file handle closing" << std::endl;
::CloseHandle(handle_);
std::cout << "[" << ::elapsed() << "] " << name_ << " file handle closed" << std::endl;
}
}
void unmap_view() {
if (map_ptr_) {
std::cout << "[" << ::elapsed() << "] " << name_ << " view closing" << std::endl;
::UnmapViewOfFile(map_ptr_);
std::cout << "[" << ::elapsed() << "] " << name_ << " view closed" << std::endl;
}
}
HANDLE handle_;
std::string name_;
pointer map_ptr_;
size_t size_;
pointer tail_;
bool dispose_handle_first_;
};
int id_;
size_t pg_sz_;
std::string base_name_;
std::vector<Chunk*> chunks_;
bool dispose_handle_first_;
};
static void TempFileMapping(bool dispose_handle_first) {
    const size_t chunk_size = 256 * 1024 * 1024;
    const size_t pg_size = 8192;
    const size_t n_pages = 100 * 1000;
    const char* base_path = "data/page_pool";
    start_ts = ::GetTickCount64();
    if (dispose_handle_first)
        std::cout << "Mapping with 2 arenas and closing file handles before unmapping views." << std::endl;
    else
        std::cout << "Mapping with 2 arenas and unmapping views before closing file handles." << std::endl;
    {
        std::cout << "[" << ::elapsed() << "] " << "allocating " << n_pages << " pages through arena 1." << std::endl;
        PageArena arena1(1, base_path, pg_size, chunk_size, 1, dispose_handle_first);
        for (size_t i = 0; i < n_pages; i++) {
            auto ptr = arena1.alloc();
            memset(ptr, (i + 1) % 256, pg_size); // ensure pages are dirty.
        }
        std::cout << "[" << elapsed() << "] " << arena1.num_chunks() << " chunks created." << std::endl;
        {
            PageArena arena2(2, base_path, pg_size, chunk_size, arena1.num_chunks(), dispose_handle_first);
            std::cout << "[" << ::elapsed() << "] arena 2 loaded, going to release chunks 1 and 2 from arena 1" << std::endl;
            arena1.release_chunk(1);
            arena1.release_chunk(2);
        }
    }
}
Please refer to this gist that contains the output of running the above code and links to screen captures of system free memory and disk activity when running TempFileMapping(false) and TempFileMapping(true) respectively.
After the bounty period expired without any answers that provided more insight or solved the mentioned problem, I decided to dig a little deeper and experiment some more with several combinations and sequences of operations.
As a result, I believe I have found a way to achieve memory maps shared between processes over temporary, delete-on-close files that are not flushed to disk when they are closed.
The basic idea involves creating the memory map when a temp file is newly created with a map name that can be used in a call to OpenFileMapping:
// build a unique map name from the file name.
auto map_name = make_map_name(file_name);

// Open or create the mapped file.
auto mh = ::OpenFileMappingA(FILE_MAP_ALL_ACCESS, false, map_name.c_str());
if (mh == 0 || mh == INVALID_HANDLE_VALUE) {
    // existing map could not be opened, create the file.
    auto fh = ::CreateFileA(name.c_str(), kReadWrite, kFileSharing, nullptr, CREATE_NEW, kTempFlags, 0);
    if (fh != INVALID_HANDLE_VALUE) {
        // set its size.
        LARGE_INTEGER newpos;
        newpos.QuadPart = desired_size;
        ::SetFilePointerEx(fh, newpos, 0, FILE_BEGIN);
        ::SetEndOfFile(fh);
        // create the map over the file handle.
        mh = ::CreateFileMappingA(fh, nullptr, PAGE_READWRITE, 0, 0, map_name.c_str());
        // close the file handle;
        // from now on there will be no accesses using file handles.
        ::CloseHandle(fh);
    }
}
Thus, the file handle is only used when the file is newly created, and closed immediately after the map is created, while the map handle itself remains open, to allow opening the mapping without requiring access to a file handle. Note that a race condition exists here, that we would need to deal with in any "real code" (as well as adding decent error checking and handling).
So if we got a valid map handle, we can create the view:
auto map_ptr = MapViewOfFile(mh, FILE_MAP_ALL_ACCESS, 0, 0, 0);
if (map_ptr) {
    // determine its size.
    MEMORY_BASIC_INFORMATION mbi;
    if (::VirtualQuery(map_ptr, &mbi, sizeof(MEMORY_BASIC_INFORMATION)) > 0)
        map_size = mbi.RegionSize;
}
When closing a mapped file some time later, close the map handle before unmapping the view:
if (mh != 0 && mh != INVALID_HANDLE_VALUE) {
    ::CloseHandle(mh);
    mh = INVALID_HANDLE_VALUE;
}
if (map_ptr) {
    ::UnmapViewOfFile(map_ptr);
    map_ptr = 0;
    map_size = 0;
}
And, according to the tests I have performed so far, this does not cause dirty pages to be flushed to disk on close. Problem solved. Well, partially anyway: there may still be a cross-session map name sharing issue.
If I understand correctly, commenting out the arena2 part of the code should reproduce the issue without the need for a second process. I have tried this:
I edited base_path as follows for convenience:
char base_path[MAX_PATH];
GetTempPathA(MAX_PATH, base_path);
strcat_s(base_path, MAX_PATH, "page_pool");
I edited n_pages = 1536 * 128 to bring the used memory to 1.5 GB, compared to your ~800 MB.
I have tested TempFileMapping(false) and TempFileMapping(true), one at a time, with the same results.
I have tested with the arena2 part commented out and intact, with the same results.
I have tested on Win8.1 x64 and Win7 x64, with results the same to within ±10%.
In my tests, the code runs in 2400 ms ±10%, with only 500 ms ±10% spent on deallocating. That's clearly not enough time for a flush of 1.5 GB to the slow-spinning, quiet HDDs I have there.
So, the question is, what are you observing? I'd suggest that you:
Provide your times for comparison.
Use a different computer for tests, paying attention to excluding software issues such as "same antivirus".
Verify that you're not experiencing a RAM shortage.
Use xperf to see what's happening during the freeze.
Update
I have tested on yet another Win7 x64 machine, and the times are 890 ms in total, with 430 ms spent on dealloc. I have looked into your results, and what is VERY suspicious is that almost exactly 4000 ms is spent in the freeze each time on your machine. That can't be a mere coincidence, I believe. Also, it's rather obvious now that the problem is somehow bound to the specific machine you're using. So my suggestions are:
As stated above, test on another computer yourself.
As stated above, use xperf; it will let you see exactly what happens in user mode and kernel mode during the freeze (I really suspect some non-standard driver in the middle).
Play with the number of pages and see how it affects the freeze length.
Try storing the files on a different disk drive on the same computer where you tested initially.

Accessing a value returned by pointer outside the function

I have the following function
std::tuple<int, val*> Socket::recv(val* values) // const
{
    char buf[MAXRECV + 1];
    memset(buf, 0, MAXRECV + 1);
    int status = ::recv(m_sock, buf, MAXRECV, 0);
    if (status == -1)
    {
        std::cout << "status == -1 errno == " << errno << " in Socket::recv\n";
        // return std::make_tuple(0, NULL); // this is not working
    }
    else if (status == 0)
    {
        // return std::make_tuple(0, NULL); // this is not working
    }
    else
    {
        struct val* values = (struct val*)buf;
        if (!std::isnan(values->val1) &&
            !std::isnan(values->val2) &&
            !std::isnan(values->val3) &&
            !std::isnan(values->val4) &&
            !std::isnan(values->val5) &&
            !std::isnan(values->val6))
            printf("received:%f %f %f %f %f %f\n", values->val1, values->val2,
                   values->val3, values->val4, values->val5, values->val6);
        return std::make_tuple(status, values);
    }
}
The received values are printed to standard output correctly within the function.
But when I try to access these received values outside the function by calling it as follows (after creating the Socket object rcvd), all I get is 0's.
Would you tell me how to access these values outside the function?
1.
std::cout << std::get<1>(rcvd.recv(&values)->val1)
<< std::get<1>(rcvd.recv(&values)->val2)
<< std::get<1>(rcvd.recv(&values)->val3)
<< std::get<1>(rcvd.recv(&values)->val4)
<< std::get<1>(rcvd.recv(&values)->val5)
<< std::get<1>(rcvd.recv(&values)->val6)
<< std::endl;
2.
std::cout << std::get<1>(rcvd.recv(&values).val1)
<< std::get<1>(rcvd.recv(&values).val2)
<< std::get<1>(rcvd.recv(&values).val3)
<< std::get<1>(rcvd.recv(&values).val4)
<< std::get<1>(rcvd.recv(&values).val5)
<< std::get<1>(rcvd.recv(&values).val6)
<< std::endl;
3.
std::cout << std::get<1>(rcvd.recv(&values)[0])
<< std::get<1>(rcvd.recv(&values)[1])
<< std::get<1>(rcvd.recv(&values)[2])
<< std::get<1>(rcvd.recv(&values)[3])
<< std::get<1>(rcvd.recv(&values)[4])
<< std::get<1>(rcvd.recv(&values)[5])
<< std::endl;
where "values" comes from
struct val {
    double val1; // member types were not shown in the question; double is assumed from the %f format
    double val2;
    double val3;
    double val4;
    double val5;
    double val6;
} values;
None of the three ways of calling the function and accessing struct val worked for me.
Would you tell me:
1. how to access these received values outside the function?
2. how to return a null struct pointer (NULL is not working) when status is 0 or -1?
Try
return std::make_tuple<int, val*>(0, nullptr);
The type of the tuple is deduced from the arguments, so by using 0, NULL you are actually using the null constant, which evaluates to 0, and hence the deduced type is <int, int>.
By the way, I see no reason to use NULL in C++11; if you really need it for some reason, then cast NULL to val*:
static_cast<val*>(NULL);
EDIT:
Other viable alternatives are
val* nullval = nullptr;
return std::make_tuple(0, nullval);
Or
return std::make_tuple(0, static_cast<val*>(nullptr));
Or (as a comment suggests)
return {0, nullptr};
Choose the one that seems more clear to you.
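As a side note, here is a sketch of how the caller could consume the returned tuple, calling recv once and unpacking the result instead of calling it once per field as in the attempts above (it assumes the rcvd and values objects from the question):

// Call recv() once, keep the result, then check status before touching the pointer.
auto result = rcvd.recv(&values);
int status = std::get<0>(result);
val* pv = std::get<1>(result);
if (status > 0 && pv != nullptr) {
    std::cout << pv->val1 << " " << pv->val2 << " " << pv->val3 << " "
              << pv->val4 << " " << pv->val5 << " " << pv->val6 << std::endl;
}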
You are lucky that the calling code is printing zeroes; it might just as well have dumped core on you :)
What you are doing is accessing a buffer that was created on the stack after that stack frame was released (once the function finished executing). That is HIGHLY UNSAFE and, pretty much, illegal.
Instead, what you should do is allocate your data buffer in "free memory", using functions like malloc (in C) or operator new/new[] (in C++).
The quick fix is to replace the line
char buf [ MAXRECV + 1 ];
with
char * buf = new char [ MAXRECV + 1 ];
And when you do a type cast on the line
struct val* values = (struct val*) buf;
you really ought to be sure that what you are doing is correct. If the sizeof() of your struct val is more than sizeof(char[MAXRECV + 1]), you'll get into memory access trouble.
After you are done using the returned data buffer, don't forget to release it with a call to free (in C) or delete/delete[] (in C++). Otherwise you'll have what is called a memory leak.
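A minimal sketch of that fix, assuming the caller takes ownership of the heap buffer (names follow the question; this is illustrative, not a drop-in patch):

// Inside Socket::recv: allocate the buffer on the heap so it survives the return.
char* buf = new char[MAXRECV + 1];
memset(buf, 0, MAXRECV + 1);
int status = ::recv(m_sock, buf, MAXRECV, 0);
if (status <= 0) {
    delete[] buf;  // nothing usable was received
    return std::make_tuple(status, static_cast<val*>(nullptr));
}
return std::make_tuple(status, reinterpret_cast<val*>(buf));

The caller then owns the buffer and must eventually release it with delete[] reinterpret_cast<char*>(ptr), or the program leaks memory on every call.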
