std::queue dumping into file - c++11

This code should dump the queue's contents into a file, one record per line.
Books vDY;
std::queue<Books> qDY;
std::stringstream tmp_ss;

tmp_ss.str("");
while( !qDY.empty() )
{
    vDY = qDY.front();
    tmp_ss << "[" << (vDY.dKey) << ":" << (vDY.dValue) << "]";
    //--Write record into log file
    Log_Line( tmp_ss.str() );
    tmp_ss.str(std::string()); //reset stringstream
    //---
    qDY.pop();
}
But I'm getting this output:
[1:"Book1"]
[1:"Book1"][2:"Book2"]
[1:"Book1"][2:"Book2"][3:"Book3"]

Related

Reading binary data

I am trying to read data from a binary file. One block of data is 76 bytes long (this varies with the number of the 2-byte "main data items" in the middle of the block). The first datum is 4 bytes, the second is 4 bytes, then there are a number of 2-byte main data items, and at the end are 2 more 2-byte pieces of data.
Based on this Delphi sample I've learned how to read the file with the code below:
short AShortInt; // 16 bits
int AInteger;    // 32 bits
try
{
    infile = new TFileStream(myfile, fmOpenRead); // myfile is binary
    BR = new TBinaryReader(infile, TEncoding::Unicode, false);
    for (int rows = 0; rows < 5; rows++) { // just read the first 5 blocks of data for testing
        AInteger = BR->ReadInt32(); // read first two 4-byte integers for this block
        AInteger = BR->ReadInt32();
        for (int i = 0; i < 32; i++) { // now read the 32 2-byte integers from this block
            AShortInt = BR->ReadInt16();
        }
        AShortInt = BR->ReadInt16(); // read next to last 2-byte int
        AShortInt = BR->ReadInt16(); // read the last 2-byte int
    }
    delete infile;
    delete BR;
    Close();
}
catch(...)
{
    delete infile; // closes the file, doesn't delete it
    delete BR;
    ShowMessage("Can't open file!");
    Close();
}
But what I would like to do is use a 76-byte wide buffer to read the entire block, and then pick the various data out of that buffer. I put together the following code based on this question, and I can read a whole block of data into the buffer.
UnicodeString myfile = System::Ioutils::TPath::Combine(System::Ioutils::TPath::GetDocumentsPath(), "binaryCOM.dat");
TFileStream *infile = 0;
try
{
    infile = new TFileStream(myfile, fmOpenRead);
    const int bufsize = 76;
    char *buf = new char[bufsize];
    int a = 0;
    while (int bytesread = infile->Read(buf, bufsize)) {
        a++; // just a place to break on Run to Cursor
    }
    delete[] buf;
}
catch(...)
{
    delete infile;
    ShowMessage("Can't open file!");
    Close();
}
But I can't figure out how to piece together subsets of the bytes in the buffer. Is there a way to concatenate bytes, so that I could read a block of data into a 76-byte buffer and then do something like this?
unsigned int FirstDatum = buf[0]+buf[1]+buf[2]+buf[3]; // concatenate the 4 bytes for the first piece of data
This will be an FMX app for Win32, iOS, and Android built in C++Builder 10.3.2.
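On the "concatenate bytes" point, here is a minimal sketch (assuming the file stores the first 4-byte value least-significant byte first, i.e. little-endian) of how the bytes in buf could be assembled with shifts; std::memcpy from the right offset also works whenever the in-file and in-memory byte orders match. The offset used for the 2-byte item follows the layout described above (two 4-byte values first) but is only illustrative:

// assumes buf holds one complete 76-byte block, as read by infile->Read(buf, bufsize)
unsigned char *ub = reinterpret_cast<unsigned char*>(buf);   // unsigned view avoids sign extension

unsigned int FirstDatum =  static_cast<unsigned int>(ub[0])
                        | (static_cast<unsigned int>(ub[1]) << 8)
                        | (static_cast<unsigned int>(ub[2]) << 16)
                        | (static_cast<unsigned int>(ub[3]) << 24);

short MainItem;                                              // e.g. the first 2-byte item, at offset 8
std::memcpy(&MainItem, buf + 8, sizeof(MainItem));           // needs <cstring>; assumes matching endianness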
Here is my modified code using Remy's suggestion of TMemoryStream.
UnicodeString myfile = System::Ioutils::TPath::Combine(System::Ioutils::TPath::GetDocumentsPath(), "binaryCOM.dat");
TMemoryStream *MS = 0;
TBinaryReader *BR = 0;
std::vector<short> myArray;
short AShortInt;
int AInteger;
int NumDatums = 32; // the variable number of 2-byte main datums
try
{
    MS = new TMemoryStream();
    MS->LoadFromFile(myfile);
    BR = new TBinaryReader(MS, TEncoding::Unicode, false);
    for (int rows = 0; rows < 5; rows++) { // testing with first 5 blocks of data
        AInteger = BR->ReadInt32(); // read first two 4-byte integers
        AInteger = BR->ReadInt32(); // here
        for (int i = 0; i < NumDatums; i++) { // read the main 2-byte data
            AShortInt = BR->ReadInt16();
            myArray.push_back(AShortInt); // push it into vector
        }
        AShortInt = BR->ReadInt16(); // read next to last 2-byte int
        AShortInt = BR->ReadInt16(); // read the last 2-byte int
        // code here to do something with this block of data just read from file
    }
    delete MS;
    delete BR;
}
catch(...)
{
    delete MS;
    delete BR;
    ShowMessage("Can't open file.");
}

Non-blocking reads/writes to stdin/stdout in C on Linux or Mac

I have two programs communicating via named pipes (on a Mac), but the buffer size of named pipes is too small. Program 1 writes 50K bytes to pipe 1 before reading pipe 2. Named pipes are 8K (on my system), so program 1 blocks until the data is consumed. Program 2 reads 20K bytes from pipe 1 and then writes 20K bytes to pipe 2. Pipe 2 can't hold 20K, so program 2 now blocks. It will only be released when program 1 does its reads, but program 1 is blocked waiting for program 2. Deadlock.
I thought I could fix the problem by creating a gasket program that reads stdin non-blocking and writes stdout non-blocking, temporarily storing the data in a large buffer. I tested the program using cat data | ./gasket 0 | ./gasket 1 > out, expecting out to be a copy of data. However, while the first invocation of gasket works as expected, the read in the second program returns 0 before all the data is consumed and never returns anything other than 0 in follow-on calls.
I tried the code below on both a Mac and Linux; both behave the same. I've added logging so that I can see that the fread in the second invocation of gasket starts getting no data even though it has not read all the data written by the first invocation.
#include <stdio.h>
#include <fcntl.h>
#include <time.h>
#include <stdlib.h>
#include <unistd.h>

#define BUFFER_SIZE 100000
char buffer[BUFFER_SIZE];
int elements = 0;

int main(int argc, char **argv)
{
    int total_read = 0, total_write = 0;
    FILE *logfile = fopen(argv[1], "w");

    int flags = fcntl(fileno(stdin), F_GETFL, 0);
    fcntl(fileno(stdin), F_SETFL, flags | O_NONBLOCK);
    flags = fcntl(fileno(stdout), F_GETFL, 0);
    fcntl(fileno(stdout), F_SETFL, flags | O_NONBLOCK);

    while (1) {
        int num_read = 0;
        if (elements < (BUFFER_SIZE - 1024)) { // space in buffer
            num_read = fread(&buffer[elements], sizeof(char), 1024, stdin);
            elements += num_read;
            total_read += num_read;
            fprintf(logfile, "read %d (%d) elements \n", num_read, total_read); fflush(logfile);
        }
        if (elements > 0) { // something in buffer that we can write
            int num_written = fwrite(&buffer[0], sizeof(char), elements, stdout); fflush(stdout);
            total_write += num_written;
            fprintf(logfile, "wrote %d (%d) elements \n", num_written, total_write); fflush(logfile);
            if (num_written > 0) { // copy remaining data to top of buffer
                for (int i = 0; i < (elements - num_written); i++) {
                    buffer[i] = buffer[i + num_written];
                }
                elements -= num_written;
            }
        }
    }
}
I guess I could make the gasket multi-threaded and use blocking reads in one thread and blocking writes in the other, but I would like to understand why non-blocking IO seems to break for me.
Thanks!
My general solution to any IPC project is to make the client and server use non-blocking I/O. Doing so requires queuing data on both the write and read sides, to handle cases where the OS can't read/write, or can only read/write a portion of your message.
The code below will probably seem like EXTREME overkill, but if you get it working, you can use it for the rest of your career, whether for named pipes, sockets, networking, you name it.
In pseudo-code:
typedef struct {
    const char* pcData, * pcToFree;  // pcData may no longer point to malloc'd region
    int iToSend;
} DataToSend_T;

queue of DataToSend_T qdts;

// Caller will use malloc() to allocate storage, and create the message in
// that buffer. MyWrite() will free it now, or WritableCB() will free it
// later. Either way, the app must NOT free it, and must not even refer to
// it again.
MyWrite( const char* pcData, int iToSend ) {
    iSent = 0;
    // Normally the OS will tell select() if the socket is writable, but if we're hugely
    // compute-bound, then it won't have a chance to. So let's call WritableCB() to
    // send anything in our queue that is now sendable. We have to send the data in
    // order, of course, so we can't send the new data until the entire queue is done.
    WritableCB();
    if ( qdts has no entries ) {
        iSent = write( pcData, iToSend );
        // TODO: check error
        // Did we send it all? We're done.
        if ( iSent == iToSend ) {
            free( pcData );
            return;
        }
    }
    // OK, either 1) we had stuff queued already, meaning we can't send, or 2)
    // we tried to send but couldn't send it all.
    add to queue qdts the DataToSend ( pcData + iSent, pcData, iToSend - iSent );
}

WritableCB() {
    while ( qdts has entries ) {
        DataToSend_T* pdts = qdts head;
        int iSent = write( pdts->pcData, pdts->iToSend );
        // TODO: check error
        if ( iSent == pdts->iToSend ) {
            free( pdts->pcToFree );
            pop the front node off qdts
        } else {
            pdts->pcData += iSent;
            pdts->iToSend -= iSent;
            return;
        }
    }
}

// Off-subject, but I like a TINY buffer as an initial value, so that the "buffer growth"
// code is always exercised in almost all usage, and we're sure it works.
// If the initial buffer size is like 1M and almost never grows, then the grow code
// may be buggy and we won't know until there's a crash years later.
int iBufSize = 1, iEnd = 0;  // iEnd is the first byte NOT in a message
char* pcBuf = malloc( iBufSize );

ReadableCB() {
    // Keep reading the socket until there's no more data. Grow the buffer if necessary.
    while (1) {
        int iRead = read( pcBuf + iEnd, iBufSize - iEnd );
        // TODO: check error
        iEnd += iRead;
        // If we read less than we had space for, then read returned because this is
        // all the available data, not because the buffer was too small.
        if ( iRead < iBufSize - iEnd )
            break;
        // Otherwise, double the buffer and try reading some more.
        iBufSize *= 2;
        pcBuf = realloc( pcBuf, iBufSize );
    }
    iStart = 0;
    while (1) {
        if ( pcBuf[ iStart ] until iEnd-1 is less than a message ) {
            // If our partial message isn't at the front of the buffer, move it there.
            if ( iStart ) {
                memmove( pcBuf, pcBuf + iStart, iEnd - iStart );
                iEnd -= iStart;
            }
            return;
        }
        // process a message, and advance iStart by the size of that message.
    }
}

main() {
    // Do your initial processing, and call MyWrite() to send and/or queue data.
    while (1) {
        select();  // see man page
        if ( the file handle is readable )
            ReadableCB();
        if ( the file handle is writable )
            WritableCB();
        if ( the file handle is in error )
            // handle it
        if ( application is finished )
            exit( EXIT_SUCCESS );
    }
}
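For concreteness, here is a minimal compilable sketch of the same idea for the stdin-to-stdout "gasket" case: one select() call per iteration, non-blocking descriptors, and a single fixed-size in-memory queue (the growable buffer and message framing from the pseudo-code are left out to keep it short):

#include <cerrno>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <sys/select.h>
#include <unistd.h>

int main()
{
    // make both ends non-blocking
    fcntl(STDIN_FILENO,  F_SETFL, fcntl(STDIN_FILENO,  F_GETFL, 0) | O_NONBLOCK);
    fcntl(STDOUT_FILENO, F_SETFL, fcntl(STDOUT_FILENO, F_GETFL, 0) | O_NONBLOCK);

    char buf[100000];
    size_t used = 0;              // bytes read but not yet written
    bool eof_in = false;

    while (!eof_in || used > 0) {
        fd_set rset, wset;
        FD_ZERO(&rset);
        FD_ZERO(&wset);
        if (!eof_in && used < sizeof(buf))
            FD_SET(STDIN_FILENO, &rset);      // there is room to read more
        if (used > 0)
            FD_SET(STDOUT_FILENO, &wset);     // there is data waiting to be written

        if (select(STDOUT_FILENO + 1, &rset, &wset, nullptr, nullptr) < 0) {
            if (errno == EINTR) continue;
            perror("select");
            return 1;
        }

        if (FD_ISSET(STDIN_FILENO, &rset)) {
            ssize_t n = read(STDIN_FILENO, buf + used, sizeof(buf) - used);
            if (n > 0)       used += (size_t)n;
            else if (n == 0) eof_in = true;   // real end of input
            else if (errno != EAGAIN && errno != EWOULDBLOCK) { perror("read"); return 1; }
        }

        if (FD_ISSET(STDOUT_FILENO, &wset) && used > 0) {
            ssize_t n = write(STDOUT_FILENO, buf, used);
            if (n > 0) {
                memmove(buf, buf + n, used - (size_t)n);   // keep the unwritten tail
                used -= (size_t)n;
            } else if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK) {
                perror("write");
                return 1;
            }
        }
    }
    return 0;
}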

Not able to read a text file in VS2010 using C (I am new to VS, please help me)

Hi, this is my code. I kept my text file in exactly the same place where the .exe exists, but it's still not working.
int main(int argc, _TCHAR* argv[])
{
    int result = 0;
    char ca, file_name[25];
    FILE *fp;

    //printf("Enter the name of file you wish to see\n");
    gets(file_name);

    fp = fopen("sample.txt", "r"); // read mode
    if( fp == NULL )
    {
        perror("Error while opening the file.\n");
        //exit(EXIT_FAILURE);
    }
    if( fgets(str, 60, fp) != NULL )
    {
        /* writing content to stdout */
        puts(str);
    }
    fclose(fp);
}
Try this. I basically work in C & C++, and I use this code to perform file operations.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    char filename[16];               // big enough for the name plus ".txt"
    char extension[5] = ".txt";
    printf("Enter the name of file you wish to see\n");
    gets(filename);
    fflush(stdin);
    filename[10] = '\0';             // truncate overly long names
    strcat(filename, extension);
    puts(filename);

    FILE *p;
    char acline[80];
    p = fopen(filename, "r");
    if (p == NULL)
    {
        printf("%s file is missing\n", filename);
        system("pause");
        return 1;                    // don't try to read from a NULL stream
    }
    fseek(p, 0, SEEK_SET);           // set file pointer to the beginning of the file
    while (!feof(p))                 // detect end of file
    {
        fgets(acline, 80, p);
        puts(acline);
    }
    printf("\n File end\n");
    system("pause");
    return 0;
}
But while(!feof()) has certain issues; see this.
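As a hedged illustration of that point (reusing filename from the code above), the usual fix is to let fgets() itself control the loop, since it returns NULL exactly when there is no more data or an error occurs, so the last line is not processed twice:

char acline[80];
FILE *p = fopen(filename, "r");
if (p == NULL)
{
    printf("%s file is missing\n", filename);
    return 1;
}
while (fgets(acline, sizeof(acline), p) != NULL)   // stops cleanly at end of file
{
    fputs(acline, stdout);                         // fgets keeps the '\n', so no extra newline
}
fclose(p);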

In C# VS2013 how do you read a resource txt file one line at a time?

static void Starter(ref int[,] grid)
{
    StreamReader reader = new StreamReader(Assembly.GetExecutingAssembly().GetManifestResourceStream(Resources.Sudoku));
    string line = reader.ReadLine();
    Console.Write(line);
    Console.ReadLine();
}
I know this isn't right, but it gets my point across.
I would like to be able to read in the resource file one line at a time.
Like so:
System.IO.StreamReader StringFromTxt
= new System.IO.StreamReader(path);
string line = StringFromTxt.ReadLine();
I do not necessarily have to read from the resource, but I am not sure of any other way to open a text file without knowing the directory every time or hard-coding it. I can't have the user pick files.
StreamReader sr = new StreamReader("D:\\CountryCodew.txt");
while (!sr.EndOfStream)
{
    string line = sr.ReadLine();
}
MSDN lists the following as the way to read in one line at a time:
https://msdn.microsoft.com/en-us/library/aa287535(v=vs.71).aspx
int counter = 0; // keep track of #lines read
string line;

// Read the file and display it line by line.
System.IO.StreamReader file =
    new System.IO.StreamReader("c:\\test.txt");
while ((line = file.ReadLine()) != null)
{
    Console.WriteLine(line);
    counter++;
}
file.Close();

// Suspend the screen.
Console.ReadLine();
Additional examples for getline:
https://msdn.microsoft.com/en-us/library/2whx1zkx.aspx

Boost GraphML reader and yEd

I am trying to read a .graphml file that yEd generates. I am able to read simple, manually generated .graphml files, but the yEd files contain several additional properties that need to be defined. Does anyone have a running example that shows how to deal with such yEd files?
The yEd file must be filtered to remove all the yEd-specific content that boost::read_graphml does not recognize. If all you want is the vertices and edges, this is simple enough. However, if you do want some of the attributes, then it becomes more complex and will depend on what you need.
Here is some code that filters out all the yEd content, except the text of the node labels, which is converted to the simplest possible node label attribute that boost::read_graphml can parse and store in a bundled property.
/**
    Check for a yEd file

    @param[in] n the filename
    @return true if the file was written by yEd

    The input file is copied to a new file graphex_processed.graphml.
    If the input file was NOT produced by yEd, then the copy is identical.
    If the input was produced by yEd, then the copy is filtered so that it can be
    read by boost::read_graphml.
    Most of the yEd content is discarded, except for the node labels,
    the text of which is copied to a simple node attribute "label".
*/
bool cGraph::IsGraphMLbyYED(const std::wstring& n)
{
    bool yEd = false;

    // open the input file
    std::ifstream fin;
    fin.open(n.c_str(), std::ifstream::in);
    if( ! fin.is_open() ) {
        return false;
    }

    // open the output file
    std::ofstream fout;
    fout.open("graphex_processed.graphml", std::ifstream::out );
    if( ! fout.is_open() ) {
        return false;
    }

    // loop over input file lines
    fin.clear();
    char buf[1000];
    while( fin.good() ) {
        fin.getline( buf, 999 );
        std::string l( buf );

        // check for file produced by yEd
        if( l.find("<!--Created by yFiles") != std::string::npos ) {
            yEd = true;
            // convert NodeLabel text to simple label attribute
            fout << "<key id=\"key0\" for=\"node\" attr.name=\"label\" attr.type=\"string\" />\n";
        }

        // check for file already identified as yEd
        if( yEd ) {
            // filter out yEd attributes
            if( l.find("<key") != std::string::npos ) {
                continue;
            }
            // convert NodeLabel text
            if( l.find("<y:NodeLabel") != std::string::npos ) {
                int p = l.find(">") + 1;
                int q = l.find("<", p);
                std::string label = l.substr( p, q - p );
                fout << "<data key=\"key0\">" << label << "</data>\n";
                continue;
            }
            // filter out other yEd stuff
            if( l.find("<y:") != std::string::npos ) {
                continue;
            }
            if( l.find("</y:") != std::string::npos ) {
                continue;
            }
            if( l.find("<data") != std::string::npos ) {
                continue;
            }
            if( l.find("</data") != std::string::npos ) {
                continue;
            }
        }

        // copy input line to output
        fout << buf << std::endl;
    }

    // close files
    fin.close();
    fout.close();

    // return true if yEd file
    return yEd;
}
Here is some code to read the filtered file:
void cGraph::ReadGraphML(const std::wstring& n)
{
    // check if file was produced by yEd
    IsGraphMLbyYED( n );

    boost::dynamic_properties dp;
    dp.property("label", boost::get(&cVertex::myName, myGraph));

    myGraph.clear();

    std::ifstream fin;
    fin.open("graphex_processed.graphml", std::ifstream::in);
    if( ! fin.is_open() ) {
        return;
    }
    boost::read_graphml( fin, myGraph, dp );
}
If you want to see an example of this running in an application, take a look at Graphex, a GUI for the BGL, which can read yEd files using this code.
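For completeness, here is a minimal self-contained sketch of the graph type the code above assumes: an adjacency_list with a bundled vertex struct whose member (called cVertex::myName here, to match the code above) receives the "label" attribute from the filtered file. Note that boost::read_graphml is implemented in the compiled Boost.Graph library, so the program must be linked against it:

#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/graphml.hpp>
#include <fstream>
#include <string>

struct cVertex {
    std::string myName;          // filled from the "label" GraphML attribute
};

typedef boost::adjacency_list<boost::vecS, boost::vecS,
                              boost::directedS, cVertex> graph_t;

int main()
{
    graph_t g;
    boost::dynamic_properties dp;
    dp.property("label", boost::get(&cVertex::myName, g));   // map the attribute onto the bundle

    std::ifstream fin("graphex_processed.graphml");
    boost::read_graphml(fin, g, dp);
    return 0;
}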
Try this workaround:
https://stackoverflow.com/a/55807107/4761831
I just inherited a class and removed some code that caused the exception.
