I couldn't find a sufficient example of how to use Apache Thrift for IPC communication via shared memory. My goal is to serialize an existing class with the help of Thrift, then send it via shared memory to a different process, where I deserialize it again with the help of Thrift. Right now I'm using TMemoryBuffer and TBinaryProtocol to serialize the data. Although this works, I have no idea how to write it to shared memory.
Here is my code so far:
#include "test_types.h"
#include "test_constants.h"
#include "thrift/protocol/TBinaryProtocol.h"
#include "thrift/transport/TBufferTransports.h"
int main(int argc, char** argv)
{
int shID;
char* myPtr;
Person* dieter = new Person("Dieter", "Neuer");
//Person* johann = new Person("Johann", "Liebert");
//Car* ford = new Car("KLENW", 4, 4);
PersonThrift dieterThrift;
dieterThrift.nachName = dieter->getNachname();
dieterThrift.vorName = dieter->getVorname();
boost::shared_ptr<apache::thrift::transport::TMemoryBuffer> transport(new apache::thrift::transport::TMemoryBuffer);
boost::shared_ptr<apache::thrift::protocol::TBinaryProtocol> protocol(new apache::thrift::protocol::TBinaryProtocol(transport));
test thriftTest;
thriftTest.personSet.insert(dieterThrift);
uint32_t size = thriftTest.write(protocol.get());
std::cout << transport.get()->getBufferAsString();
shID = shmget(1000, 100, IPC_CREAT | 0666);
if (shID >= 0)
{
myPtr = (char*)shmat(shID, 0, 0);
if (myPtr==(char *)-1)
{
perror("shmat");
}
else
{
//myPtr = protocol.get();
}
}
getchar();
shmdt(myPtr);
}
The main problem is the part
//myPtr = protocol.get();
How do I use Thrift to write my serialized data into myPtr (and thus into shared memory)? I guess TMemoryBuffer might already be a bad idea. As you may see, I'm not really experienced with this.
Kind regards and thanks in advance
Michael
After reading the question again and having a closer look at the code ... you were almost there. The mistake you made is to look at the protocol, which gives you no data. Instead, you have to ask the transport, as you already did with
std::cout << transport.get()->getBufferAsString();
The way to get at the raw data is quite similar: just use getBuffer(&pbuf, &sz) instead. With that, we get something like this:
// query buffer pointer and data size
uint8_t* pbuf;
uint32_t sz;
transport.get()->getBuffer(&pbuf, &sz);
// alloc shmem block of adequate size
shID = shmget(1000, sz, IPC_CREAT | 0666);
if (shID >= 0)
{
myPtr = (char*)shmat(shID, 0, 0);
if (myPtr==(char *)-1)
{
perror("shmat");
}
else
{
// copy serialized data into shared memory
memcpy( myPtr, pbuf, sz);
}
}
Since shmget() may give you a larger block than requested, it seems to be a good idea to additionally use the framed transport, which automatically carries the real data size in the serialized data. Some sample code for the latter can be found in the Test Client or server code.
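Not part of the original answer, but for completeness: a rough, untested sketch of the reading side could look like the following. It assumes the reader attaches to the same key (1000) and somehow knows the serialized size sz, e.g. from the framed transport mentioned above or from a size field you store at the start of the segment yourself.
// reader process (sketch, untested)
int shID = shmget(1000, 100, 0666);
char* myPtr = (char*)shmat(shID, 0, 0);
// wrap the shared memory directly in a TMemoryBuffer and deserialize from it
boost::shared_ptr<apache::thrift::transport::TMemoryBuffer> transport(
    new apache::thrift::transport::TMemoryBuffer((uint8_t*)myPtr, sz));
boost::shared_ptr<apache::thrift::protocol::TBinaryProtocol> protocol(
    new apache::thrift::protocol::TBinaryProtocol(transport));
test thriftTest;
thriftTest.read(protocol.get()); // deserialize back into the generated struct
shmdt(myPtr);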
Related
I'm trying to forward an output stream from Xcode (v12.4) to Processing (https://processing.org/).
My goal is to draw a simple object in Processing according to data from my Xcode project.
I need to see the value of my variable in Processing.
int main(int argc, const char * argv[]) {
// insert code here...
for (int i=0; i<10; i++)
std::cout << "How to send value of i to the Processing!\n";
return 0;
}
Finally I found a way. Hope it helps someone, so I'm sharing it.
Xcode app ->(127.0.0.1:UDP)-> Processing sketch
Source Links:
Sending string over UDP in C++
https://discourse.processing.org/t/receive-udp-packets/19832
Xcode app (C++):
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <iostream>
#include <string>
int main(int argc, char const *argv[])
{
std::string hostname{"127.0.0.1"};
uint16_t port = 6000;
int sock = ::socket(AF_INET, SOCK_DGRAM, 0);
sockaddr_in destination;
destination.sin_family = AF_INET;
destination.sin_port = htons(port);
destination.sin_addr.s_addr = inet_addr(hostname.c_str());
std::string msg = "Hello world!";
for(int i=0; i<5; i++){
long n_bytes = ::sendto(sock, msg.c_str(), msg.length(), 0, reinterpret_cast<sockaddr*>(&destination), sizeof(destination));
std::cout << n_bytes << " bytes sent" << std::endl;
}
::close(sock);
return 0;
}
Processing code:
import java.net.*;
import java.io.*;
import java.util.Arrays;
DatagramSocket socket;
DatagramPacket packet;
byte[] buf = new byte[12]; //Set your buffer size as desired
void setup() {
try {
socket = new DatagramSocket(6000); // Set your port here
}
catch (Exception e) {
e.printStackTrace();
println(e.getMessage());
}
}
void draw() {
try {
DatagramPacket packet = new DatagramPacket(buf, buf.length);
socket.receive(packet);
InetAddress address = packet.getAddress();
int port = packet.getPort();
packet = new DatagramPacket(buf, buf.length, address, port);
//Received as bytes:
println(Arrays.toString(buf));
//If you wish to receive as String:
String received = new String(packet.getData(), 0, packet.getLength());
println(received);
}
catch (IOException e) {
e.printStackTrace();
println(e.getMessage());
}
}
The assumption is that you're using C++ in Xcode (and not Objective-C or Swift).
Every Processing sketch inherits the args property (very similar to main's const char * argv[] in a C++ program). You can make use of that to initialise a Processing sketch with options from C++.
You could have something like:
int main(int argc, const char * argv[]) {
system("/path/to/processing-java --sketch-path=/path/to/your/processing/sketch/folder --run 0,1,2,3,4,5,6,7,8,9");
return 0;
}
(This is oversimplified; you'd have your for loop accumulate ints into a string with a separator character, and maybe set up variables for the paths to processing-java and the Processing sketch.)
To clarify, processing-java is a command-line utility that ships with Processing. (You can find it inside the Processing.app folder (via Show Package Contents), alongside the processing executable, and install it via the Tools menu inside Processing.) It allows you to easily run a sketch from the command line. Alternatively, you can export an application; however, if you're prototyping, the processing-java option might be more practical.
In Processing you'd check if the sketch was launched with arguments, and if so, parse those arguments.
void setup(){
if(args != null){
printArray(args);
}
}
You can use split() to split 0,1,2,3,4,5,6,7,8,9 into individual numbers that can be parsed (via int() for example).
If you have more complex data, you can consider formatting your c++ output as JSON, then using parseJSONObject() / parseJSONArray().
(If you don't want to split individual values, you can just use spaces with command-line arguments: /path/to/processing-java --sketch-path=/path/to/your/processing/sketch/folder --run 0 1 2 3 4 5 6 7 8 9. If you want to send a JSON-formatted string from C++, be aware you may need to escape " (e.g. system("/path/to/processing-java --sketch-path=/path/to/your/processing/sketch/folder --run {\"myCppData\":[0,1,2]}");).)
This would work if you need to launch the Processing sketch once and initialise it with values from your C++ program at startup. Outside the scope of your question: if you need to continuously send values from C++ to Processing, you can look at opening a local socket connection (TCP or UDP) to establish communication between the two programs. One easy-to-use protocol is OSC (via UDP). You can use oscpack in raw C++ and oscP5 in Processing. (Optionally, depending on your setup, you can consider openFrameworks, which already has oscpack integrated as ofxOsc and ships with send/receive examples: its ofApp is similar to Processing's PApplet (e.g. setup()/draw()/mousePressed(), etc.).)
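If you go the OSC route, a minimal oscpack sender could look roughly like this (untested sketch; the port 12000 and the address "/fromCpp" are my own choices, and the Processing side would use oscP5 listening on the same port):
#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"

int main() {
    // send one int per loop iteration to an oscP5 listener on port 12000 (assumed)
    UdpTransmitSocket socket(IpEndpointName("127.0.0.1", 12000));
    char buffer[1024];
    for (int i = 0; i < 10; i++) {
        osc::OutboundPacketStream p(buffer, 1024);
        p << osc::BeginMessage("/fromCpp") << i << osc::EndMessage;
        socket.Send(p.Data(), p.Size());
    }
    return 0;
}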
I am starting with the ESP32 and FreeRTOS, and I am having problems sending a struct array across a queue. I have already sent other kinds of variables, but never an array of structs, and I am getting an exception.
The sender and the receiver are in different source files, and I am starting to think that maybe that is the problem (or at least part of it).
My simplified code looks like this:
common.h
struct dailyWeather {
// Day of the week starting in Monday (1)
int dayOfWeek;
// Min and Max daily temperature
float minTemperature;
float maxTemperature;
int weather;
};
file1.h
#pragma once
#ifndef _FILE1_
#define _FILE1_
// Queue
extern QueueHandle_t weatherQueue;
#endif
file1.cpp
#include "common.h"
#include "file1.h"
// Queue
QueueHandle_t weatherQueue = xQueueCreate( 2, sizeof(dailyWeather *) ); // also tried "dailyWeather" without pointer and "Struct dailyWeather"
void task1(void *pvParameters) {
for (;;) {
dailyWeather weatherDATA[8] = {};
// Code to fill the array of structs with data
if (xQueueSend( weatherQueue, &weatherDATA, ( TickType_t ) 0 ) == pdTRUE) {
// The message was sent successfully
}
}
}
file2.cpp
#include "common.h"
#include "file1.h"
void task2(void *pvParameters) {
for (;;) {
dailyWeather *weatherDATA_P; // Also tried without pointer and as an array of Structs
if( xQueueReceive(weatherQueue, &( weatherDATA_P ), ( TickType_t ) 0 ) ) {
Serial.println("Received");
dailyWeather weatherDATA = *weatherDATA_P;
Serial.println(weatherDATA.dayOfWeek);
}
}
}
When I run this code on my ESP32 it works until I try to print the data with Serial.println. The "Received" message is printed, but it crashes on the next Serial.println with this error.
Guru Meditation Error: Core 1 panic'ed (LoadProhibited). Exception was unhandled.
I am stuck on this problem and not able to find a way to fix it, so any help will be much appreciated.
EDIT:
I am thinking that maybe a solution would be to add an order field to the struct, make the queue bigger (in item count) and send all the structs to the queue separately, then use that order in the reader to reassemble the array.
Anyway, it would be nice to learn what I am doing wrong with the above code.
FreeRTOS queues operate by using the buffer and item size you specify during initialization, when you call xQueueCreate(), to make copies of the data you want to send and receive.
When you call xQueueSend(), which is equivalent to xQueueSendToBack(), it makes a copy into that buffer.
If another task is waiting on the queue in a call to xQueueReceive(), then at the moment it becomes ready to run, xQueueReceive() will copy the item at the front of the queue's buffer into the destination buffer you specify in your call to xQueueReceive().
If the data you want to send is of pointer/array type, dailyWeather * in your case, then you need to make sure the memory pointed to by the pointer does not go out of scope before being read by the task that receives the pointer by calling xQueueReceive(). Local variables are created on the calling task's stack and will certainly go out of scope, and very likely be overwritten, after the function returns.
IMO, the best solution if you really need to pass pointers is to allocate the structure array in the task that generates the data and deallocate it in the task that consumes the data.
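For illustration only (my own sketch, not tested on a real board), that could look like this, keeping the queue created with sizeof(dailyWeather *) as in your code:
// file1.cpp: the producer allocates the array on the heap, so it survives the loop iteration
void task1(void *pvParameters) {
    for (;;) {
        dailyWeather *weatherDATA = new dailyWeather[8]();
        // ... fill weatherDATA[0..7] with data ...
        if (xQueueSend(weatherQueue, &weatherDATA, (TickType_t)0) != pdTRUE) {
            delete[] weatherDATA;   // nobody will ever receive it, so free it here
        }
    }
}

// file2.cpp: the consumer frees the array once it is done with it
void task2(void *pvParameters) {
    for (;;) {
        dailyWeather *weatherDATA_P;
        if (xQueueReceive(weatherQueue, &weatherDATA_P, (TickType_t)0)) {
            Serial.println(weatherDATA_P[0].dayOfWeek);
            delete[] weatherDATA_P;
        }
    }
}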
For many scenarios it is highly desirable not to abuse dynamic memory allocation, so in several communication stacks you will find the use of buffer pools, which in the end are also queues that are initialized during application startup (a sketch follows the list below). Operation is approximately as follows:
Initialization:
Initialize the buffer pool queues (simple queues of pointers).
Fill the buffer pools with dynamically allocated buffers of appropriated sizes.
Initialize the queues for inter-task communication.
Task that provides the data:
Get (Receive) a buffer pointer from one buffer pool.
Fill the buffer with data.
Send the buffer pointer to the communications queue.
Task that receives the data:
Get (Receive) the data buffer pointer from the communications queue.
Use the data.
Return (Send) the buffer pointer to the buffer pool.
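A very rough sketch of that buffer-pool pattern (my own names and sizes, not tested on a real board):
// Initialization (before the tasks start): a "pool" queue holding free buffer pointers.
QueueHandle_t bufferPool;    // free buffers
QueueHandle_t weatherQueue;  // filled buffers travelling from task1 to task2

void initQueues() {
    bufferPool   = xQueueCreate(4, sizeof(dailyWeather *));
    weatherQueue = xQueueCreate(4, sizeof(dailyWeather *));
    for (int i = 0; i < 4; i++) {
        dailyWeather *buf = new dailyWeather[8];
        xQueueSend(bufferPool, &buf, 0);
    }
}

// Producer: borrow a buffer from the pool, fill it, hand it over.
void task1(void *pvParameters) {
    for (;;) {
        dailyWeather *buf;
        if (xQueueReceive(bufferPool, &buf, portMAX_DELAY)) {
            // ... fill buf[0..7] with data ...
            xQueueSend(weatherQueue, &buf, portMAX_DELAY);
        }
    }
}

// Consumer: use the data, then return the buffer to the pool.
void task2(void *pvParameters) {
    for (;;) {
        dailyWeather *buf;
        if (xQueueReceive(weatherQueue, &buf, portMAX_DELAY)) {
            Serial.println(buf[0].dayOfWeek);
            xQueueSend(bufferPool, &buf, portMAX_DELAY);
        }
    }
}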
In case your structures are small, so the copy-then-copy overhead is more or less constrained, it makes more sense to create the queue so that you work directly with structure instances and structure copies instead of buffer pointers.
Firstly, it's not a good idea to create the queue at global scope like you do. A global queue handle is OK, but run xQueueCreate() in the same function that creates task1 and task2 (the queue must be created before the tasks), something like this:
QueueHandle_t weatherQueue = NULL;
void main() {
weatherQueue = xQueueCreate(32, sizeof(struct dailyWeather));
if (!weatherQueue) {
// Handle error
}
if (xTaskCreate(task1, ...) != pdPASS) {
// Handle error
}
if (xTaskCreate(task2, ...) != pdPASS) {
// Handle error
}
}
Secondly, the code in task1() does the following in a loop:
Create a new array of 8 dailyWeather structs on the stack (in the scope of a single loop iteration)
Copy a pointer to the first item of weatherDATA[] into the queue (task2 will receive it a bit later, when it's time to switch tasks)
Release the array of 8 dailyWeather structs (because we're exiting the loop scope)
A bit later task2() executes and tries to read through the pointer to the first item of weatherDATA[]. However, this memory has probably been released already; you can't dereference it.
So you're passing pointers to invalid memory over the queue.
It's much, much easier to work with a queue if you just pass the data you want to send instead of a pointer. Your structure is small and consists of elementary data types, so it's a good idea to pass it over the queue in its entirety, one at a time (you can pass an entire array if you want, but this way is simpler).
Something like this:
void task1(void *pvParameters) {
for (;;) {
dailyWeather weatherDATA[8] = {};
// Code to fill the array of structs with data
for (int i = 0; i < 8; i++) {
// Copy the structs to queue, one at a time
if (xQueueSend( weatherQueue, &(weatherDATA[i]), ( TickType_t ) 0 ) == pdTRUE) {
// The message was sent successfully
}
}
}
}
On the receiver side:
void task2(void *pvParameters) {
for (;;) {
dailyWeather weatherDATA;
if( xQueueReceive(weatherQueue, &( weatherDATA ), ( TickType_t ) 0 ) ) {
Serial.println("Received");
Serial.println(weatherDATA.dayOfWeek);
}
}
}
I cannot recommend the official FreeRTOS book enough, it's a great resource for beginners.
Thanks to all for the answers.
In the end I added a variable to track the item position and passed all the data through the queue to the destination task, then put all those structs back into another array.
common.h
#include <stdint.h>
struct dailyWeather {
// Day of the week starting in Monday (1)
uint8_t dayOfWeek;
// Min and Max daily temperature
float minTemperature;
float maxTemperature;
uint8_t weather;
uint8_t itemOrder;
};
file1.h
// Queue
extern QueueHandle_t weatherQueue;
file1.cpp
#include "common.h"
#include "file1.h"
// Queue
QueueHandle_t weatherQueue = xQueueCreate( 2 * 8, sizeof(struct dailyWeather) );
void task1(void *pvParameters) {
for (;;) {
dailyWeather weatherDATA[8];
// Code to fill the array of structs with data
for (uint8_t i = 0; i < 8; i++) {
weatherDATA[i].itemOrder = i;
if (xQueueSend( weatherQueue, &weatherDATA[i], ( TickType_t ) 0 ) == pdTRUE) {
// The message was sent successfully
}
}
}
}
file2.cpp
#include "common.h"
#include "file1.h"
void task2(void *pvParameters) {
dailyWeather weatherDATA_D[8];
for (;;) {
dailyWeather weatherDATA;
if( xQueueReceive(weatherQueue, &( weatherDATA ), ( TickType_t ) 0 ) ) {
Serial.println("Received");
weatherDATA_D[weatherDATA.itemOrder] = weatherDATA;
}
}
}
Best regards.
First of all, I admit I'm a newbie at C++ addons for Node.js.
I'm writing my first addon and I have reached a good result: the addon does what I want. I copied from various examples I found on the internet to exchange complex data between the two languages, but I understood almost nothing of what I wrote.
The first thing scaring me is that I wrote nothing that seems to free any memory; another thing seriously worrying me is that I don't know whether what I wrote helps or confuses the V8 garbage collector; besides, I don't know if there are better ways to do what I did (iterating over JS Object keys in C++, creating JS Objects in C++, creating Strings in C++ to be used as properties of JS Objects, and whatever else you can find wrong in my code).
So, before going on with the job of writing the real math of my addon, I would like to share the nan and V8 part of it with the community, to ask if you see anything wrong or anything that can be done in a better way.
Thank you everybody for your help,
iCC
#include <map>
#include <nan.h>
using v8::Array;
using v8::Function;
using v8::FunctionTemplate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::Value;
using v8::String;
using Nan::AsyncQueueWorker;
using Nan::AsyncWorker;
using Nan::Callback;
using Nan::GetFunction;
using Nan::HandleScope;
using Nan::New;
using Nan::Null;
using Nan::Set;
using Nan::To;
using namespace std;
class Data {
public:
int dt1;
int dt2;
int dt3;
int dt4;
};
class Result {
public:
int x1;
int x2;
};
class Stats {
public:
int stat1;
int stat2;
};
typedef map<int, Data> DataSet;
typedef map<int, DataSet> DataMap;
typedef map<float, Result> ResultSet;
typedef map<int, ResultSet> ResultMap;
class MyAddOn: public AsyncWorker {
private:
DataMap *datas;
ResultMap results;
Stats stats;
public:
MyAddOn(Callback *callback, DataMap *set): AsyncWorker(callback), datas(set) {}
~MyAddOn() { delete datas; }
void Execute () {
for(DataMap::iterator i = datas->begin(); i != datas->end(); ++i) {
int res = i->first;
DataSet *datas = &i->second;
for(DataSet::iterator l = datas->begin(); l != datas->end(); ++l) {
int dt4 = l->first;
Data *data = &l->second;
// TODO: real population of stats and result
}
// test result population
results[res][res].x1 = res;
results[res][res].x2 = res;
}
// test stats population
stats.stat1 = 23;
stats.stat2 = 42;
}
void HandleOKCallback () {
Local<Object> obj;
Local<Object> res = New<Object>();
Local<Array> rslt = New<Array>();
Local<Object> sts = New<Object>();
Local<String> x1K = New<String>("x1").ToLocalChecked();
Local<String> x2K = New<String>("x2").ToLocalChecked();
uint32_t idx = 0;
for(ResultMap::iterator i = results.begin(); i != results.end(); ++i) {
ResultSet *set = &i->second;
for(ResultSet::iterator l = set->begin(); l != set->end(); ++l) {
Result *result = &l->second;
// is it ok to declare obj just once outside the cycles?
obj = New<Object>();
// is it ok to use same x1K and x2K instances for all objects?
Set(obj, x1K, New<Number>(result->x1));
Set(obj, x2K, New<Number>(result->x2));
Set(rslt, idx++, obj);
}
}
Set(sts, New<String>("stat1").ToLocalChecked(), New<Number>(stats.stat1));
Set(sts, New<String>("stat2").ToLocalChecked(), New<Number>(stats.stat2));
Set(res, New<String>("result").ToLocalChecked(), rslt);
Set(res, New<String>("stats" ).ToLocalChecked(), sts);
Local<Value> argv[] = { Null(), res };
callback->Call(2, argv);
}
};
NAN_METHOD(AddOn) {
Local<Object> datas = info[0].As<Object>();
Callback *callback = new Callback(info[1].As<Function>());
Local<Array> props = datas->GetOwnPropertyNames();
Local<String> dt1K = Nan::New("dt1").ToLocalChecked();
Local<String> dt2K = Nan::New("dt2").ToLocalChecked();
Local<String> dt3K = Nan::New("dt3").ToLocalChecked();
Local<Array> props2;
Local<Value> key;
Local<Object> value;
Local<Object> data;
DataMap *set = new DataMap();
int res;
int dt4;
DataSet *dts;
Data *dt;
for(uint32_t i = 0; i < props->Length(); i++) {
// is it ok to declare key, value, props2 and res just once outside the cycle?
key = props->Get(i);
value = datas->Get(key)->ToObject();
props2 = value->GetOwnPropertyNames();
res = To<int>(key).FromJust();
dts = &((*set)[res]);
for(uint32_t l = 0; l < props2->Length(); l++) {
// is it ok to declare key, data and dt4 just once outside the cycles?
key = props2->Get(l);
data = value->Get(key)->ToObject();
dt4 = To<int>(key).FromJust();
dt = &((*dts)[dt4]);
int dt1 = To<int>(data->Get(dt1K)).FromJust();
int dt2 = To<int>(data->Get(dt2K)).FromJust();
int dt3 = To<int>(data->Get(dt3K)).FromJust();
dt->dt1 = dt1;
dt->dt2 = dt2;
dt->dt3 = dt3;
dt->dt4 = dt4;
}
}
AsyncQueueWorker(new MyAddOn(callback, set));
}
NAN_MODULE_INIT(Init) {
Set(target, New<String>("myaddon").ToLocalChecked(), GetFunction(New<FunctionTemplate>(AddOn)).ToLocalChecked());
}
NODE_MODULE(myaddon, Init)
One and a half years later...
If somebody is interested: my server has been up and running since my question, and the amount of memory it requires is stable.
I can't say whether the code I wrote really has no memory leaks or whether lost memory is freed at the end of each thread execution, but if you are as afraid as I was, I can say that using the same structure and calls does not cause any real problem.
You do actually free up some of the memory you use, with the line of code:
~MyAddOn() { delete datas; }
In essence, C++ memory management boils down to always calling delete for every object created by new. There are also many additional architecture-specific and legacy 'C' memory management functions, but it is not strictly necessary to use these when you do not require the performance benefits.
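As an aside (not something your code has to change): in modern C++ you can often avoid writing the destructor at all by letting a smart pointer own the map; it is deleted automatically when the worker is destroyed. A minimal sketch, reusing the types from your code:
#include <memory>

class MyAddOn : public AsyncWorker {
  private:
    // std::unique_ptr deletes the DataMap for us, so no ~MyAddOn() is required
    std::unique_ptr<DataMap> datas;
    ResultMap results;
    Stats stats;
  public:
    MyAddOn(Callback *callback, DataMap *set)
      : AsyncWorker(callback), datas(set) {}
};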
As an example of what could potentially be a memory leak: You're passing the object held in the *callback pointer to the function AsyncQueueWorker. Yet nowhere in your code is this pointer freed, so unless the Queue worker frees it for you, there is a memory leak here.
You can use a memory tool such as valgrind to test your program further. It will spot most memory problems for you and comes highly recommended.
One thing I've observed is that you often ask (paraphrased):
Is it okay to declare X outside my loop?
To which the answer actually is that declaring variables inside your loops is better, whenever you can do it. Declare variables as deep inside as you can, unless you have to re-use them. Variables are restricted in scope to the innermost set of {} braces enclosing their declaration. You can read more about this in this question.
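A tiny standalone illustration of that scoping rule:
#include <iostream>

int main() {
    for (int i = 0; i < 3; ++i) {
        int squared = i * i;          // declared in the innermost scope that needs it
        std::cout << squared << "\n";
    }
    // 'squared' no longer exists here; each iteration started with a fresh one,
    // which is exactly what you want unless a value must survive the loop.
    return 0;
}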
is it ok to use same x1K and x2K instances for all objects?
In essence, when you do this, if one of these objects modifies its 'x1K' string, then it will change for all of them. The advantage is that you free up memory. If the string is the same for all these objects anyway, instead of having to store say 1,000,000 copies of it, your computer will only keep a single one in memory and have 1,000,000 pointers to it instead. If the string is 9 ASCII characters long or longer under amd64, then that amounts to significant memory savings.
By the way, if you don't intend to modify a variable after its declaration, you can declare it as const, a keyword short for constant which forces the compiler to check that your variable is not modified after declaration. You may have to deal with quite a few compiler errors about functions accepting only non-const versions of things they don't modify, some of which may not be your own code, in which case you're out of luck. Being as conservative as possible with non-const variables can help spot problems.
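A one-line illustration of const:
int main() {
    const int answer = 42;   // promise: never modified after initialization
    // answer = 43;          // would not compile: assignment of read-only variable
    return answer;           // process exit code 42, just to use the value
}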
I'm trying to implement a simple algorithm in preparation for a more complex one.
I want to call a kernel several times, and it shall increment each value within an array by, let's say, 5 on each call.
So when I initially have the array [1,2,3,4], I want [6,7,8,9] after the first call, [11,12,13,14] after the second call, and so on. But I don't understand how to configure my buffers and how to enqueue my buffer in that case. I tried to orient myself on this tutorial:
http://www.browndeertechnology.com/docs/BDT_OpenCL_Tutorial_NBody-rev3.html
(this is the algorithm I want to implement in the end with some modifications)
but the library used there hides the most important aspects.
At the moment I create my buffer with:
pos2g_buf = clCreateBuffer(
context,
CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
sizeof(cl_float) * (nparticle*4),
pos2g,
&status);
The calls to the kernel are placed within a for-loop
for(int i=0; i
And there I set the kernel arguments and call it by:
status = clEnqueueNDRangeKernel(
oclm->commandQueue,
kernel,
NDRangeDimension,
NULL,
globalThreads,
localThreads,
0,
NULL,
&events[0]); //
Can someone please help me and give the correct (pseudo-)code for how to create my simple iterative program?
Many thanks in advance!
Michael
On Host side:
cl_mem buffer = clCreateBuffer(..., CL_MEM_READ_WRITE, ...);
cl_kernel kernel = clCreateKernel(...);
clSetKernelArg(.., kernel, buffer, ...);
for(int i=0; i<num_laps; i++){
clEnqueueNDRangeKernel(..., kernel, ...);
}
void *host_mem = malloc(...);
clEnqueueReadBuffer(..., buffer, ..., host_mem, ...);
On Device side:
__kernel void my(__global int* mem)
{
    mem[get_global_id(0)] += 5;
}
Don't forget to check return codes and release resources.
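A rough sketch of what that checking and cleanup might look like for the host-side code above (names like num_laps, size_in_bytes and host_mem are placeholders, not from your code):
cl_int status;
for (int i = 0; i < num_laps; i++) {
    status = clEnqueueNDRangeKernel(commandQueue, kernel, 1, NULL,
                                    globalThreads, localThreads, 0, NULL, NULL);
    if (status != CL_SUCCESS) {
        fprintf(stderr, "clEnqueueNDRangeKernel failed: %d\n", status);
        break;
    }
}
clFinish(commandQueue);   // wait until all enqueued iterations have completed
status = clEnqueueReadBuffer(commandQueue, buffer, CL_TRUE, 0,
                             size_in_bytes, host_mem, 0, NULL, NULL);
// release resources in roughly the reverse order of creation
clReleaseMemObject(buffer);
clReleaseKernel(kernel);
clReleaseCommandQueue(commandQueue);
clReleaseContext(context);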
I serialize multiple objects into a binary archive with Boost.
When reading back those objects from a binary_iarchive, is there a way to know how many objects are in the archive, or simply a way to detect the end of the archive?
The only way I found is to use a try-catch to detect the stream exception.
Thanks in advance.
I can think of a number of approaches:
Serialize STL containers to/from your archive (see documentation). The archive will automatically keep track of how many objects there are in the containers (a small sketch follows this list of approaches).
Serialize a count variable before serializing your objects. When reading back your objects, you'll know beforehand how many objects you expect to read back.
You could have the last object have a special value that acts as a kind of sentinel that indicates the end of the list of objects. Perhaps you could add an isLast member function to the object.
This is not very pretty, but you could have a separate "index file" alongside your archive that stores the number of objects in the archive.
Use the tellg position of the underlying stream object to detect whether you're at the end of the file:
Example (just a sketch, not tested):
std::streampos archiveOffset = stream.tellg();
std::streampos streamEnd = stream.seekg(0, std::ios_base::end).tellg();
stream.seekg(archiveOffset);
while (stream.tellg() < streamEnd)
{
// Deserialize objects
}
This might not work with XML archives.
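To make the first approach (STL containers) concrete, here is a small self-contained sketch; boost/serialization/vector.hpp is what teaches the archive about std::vector, and the file name ff.ar is just an example:
#include <fstream>
#include <iostream>
#include <vector>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/archive/binary_iarchive.hpp>
#include <boost/serialization/vector.hpp>

struct A {
    int a, b;
    template <class Archive>
    void serialize(Archive &ar, const unsigned int) { ar & a; ar & b; }
};

int main() {
    {   // write: the vector's size is stored as part of the archive
        const std::vector<A> v{ {1, 2}, {3, 4}, {5, 6} };
        std::ofstream ofs("ff.ar", std::ios::binary);
        boost::archive::binary_oarchive oa(ofs);
        oa << v;
    }
    {   // read: the vector comes back with exactly as many elements as were written
        std::vector<A> v;
        std::ifstream ifs("ff.ar", std::ios::binary);
        boost::archive::binary_iarchive ia(ifs);
        ia >> v;
        std::cout << v.size() << " objects read back\n";
    }
    return 0;
}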
Do you have all your objects when you begin serializing? If not, you are "abusing" boost serialization - it is not meant to be used that way. However, I am using it that way, using try catch to find the end of the file, and it works for me. Just hide it away somewhere in the implementation. Beware though, if using it this way, you need to either not serialize pointers, or disable pointer tracking.
If you do have all the objects already, see Emile's answer. They are all valid approaches.
std::istream* stream_;
boost::iostreams::filtering_streambuf<boost::iostreams::input>* filtering_streambuf_;
...
stream_ = new std::istream(memoryBuffer_);
if (stream_) {
filtering_streambuf_ = new boost::iostreams::filtering_streambuf<boost::iostreams::input>();
if (filtering_streambuf_) {
filtering_streambuf_->push(boost::iostreams::gzip_decompressor());
filtering_streambuf_->push(*stream_);
archive_ = new eos::portable_iarchive(*filtering_streambuf_);
}
}
I am using gzip compression when reading data from the archives, and filtering_streambuf has the method
std::streamsize std::streambuf::in_avail()
which returns the number of characters available to read,
so I check for the end of the archive as
bool IArchiveContainer::eof() const {
if (filtering_streambuf_) {
return filtering_streambuf_->in_avail() == 0;
}
return false;
}
It does not tell you how many objects are left in the archive, but it helps to detect the end of them.
(I'm using the eof test only in the unit tests for serialization/deserialization of my classes/structures, to make sure that I'm reading everything I'm writing.)
Sample code which I used to debug a similar issue (based on Emile's answer):
#include <fstream>
#include <iostream>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/archive/binary_iarchive.hpp>
struct A{
int a,b;
template <typename T>
void serialize(T &ar, int ){
ar & a;
ar & b;
}
};
int main(){
{
std::ofstream ofs( "ff.ar" );
boost::archive::binary_oarchive ar( ofs );
for(int i=0;i<3;++i){
A a {2,3};
ar << a;
}
ofs.close();
}
{
std::ifstream ifs( "ff.ar" );
ifs.seekg (0, ifs.end);
int length = ifs.tellg();
ifs.seekg (0, ifs.beg);
boost::archive::binary_iarchive ar( ifs );
while(ifs.tellg() < length){
A a;
ar >> a;
std::cout << "a.a-> "<< a.a << " and a.b->"<< a.b << "\n";
}
}
return 0;
}
You could also just read a byte from the file; if you did not reach the end, seek back one byte and let the archive read it.
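A tiny standalone sketch of that idea (my own code, using std::ifstream's get()/unget()):
#include <fstream>

// Read one byte; if we are not at the end yet, put it back for the deserializer.
bool moreDataAvailable(std::ifstream &ifs) {
    char c;
    if (ifs.get(c)) {        // managed to read a byte -> not at the end
        ifs.unget();         // step back so the archive can still read it
        return true;
    }
    ifs.clear();             // reading failed (EOF); clear the fail state if needed
    return false;
}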