I'm trying to forward an output stream from Xcode (v12.4) to Processing (https://processing.org/).
My goal is to draw a simple object in Processing based on data from my Xcode project.
I need to see the value of my variable in Processing.
#include <iostream>

int main(int argc, const char * argv[]) {
    // insert code here...
    for (int i = 0; i < 10; i++)
        std::cout << "How to send the value of i to Processing?\n";
    return 0;
}
Finally I found a way. Hope it helps someone, so I'm sharing it here.
Xcode app ->(127.0.0.1:UDP)-> Processing sketch
Source Links:
Sending string over UDP in C++
https://discourse.processing.org/t/receive-udp-packets/19832
Xcode app (C++):
#include <cstdint>
#include <iostream>
#include <string>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char const *argv[])
{
    std::string hostname{"127.0.0.1"};
    uint16_t port = 6000;

    // Create a UDP socket and fill in the destination address.
    int sock = ::socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in destination;
    destination.sin_family = AF_INET;
    destination.sin_port = htons(port);
    destination.sin_addr.s_addr = inet_addr(hostname.c_str());

    std::string msg = "Hello world!";
    for (int i = 0; i < 5; i++) {
        long n_bytes = ::sendto(sock, msg.c_str(), msg.length(), 0,
                                reinterpret_cast<sockaddr*>(&destination), sizeof(destination));
        std::cout << n_bytes << " bytes sent" << std::endl;
    }

    ::close(sock);
    return 0;
}
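To tie this back to the original question of sending the value of i: the UDP payload is just bytes, so you can format i as text and send that instead of a fixed message. Here is a minimal sketch along the lines of the sender above (same localhost address and port 6000):

#include <iostream>
#include <string>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    // Same destination as above: the Processing sketch listening on 127.0.0.1:6000.
    int sock = ::socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in destination{};
    destination.sin_family = AF_INET;
    destination.sin_port = htons(6000);
    destination.sin_addr.s_addr = inet_addr("127.0.0.1");

    // Send the loop counter itself, formatted as text.
    for (int i = 0; i < 10; i++) {
        std::string msg = std::to_string(i);
        ::sendto(sock, msg.c_str(), msg.length(), 0,
                 reinterpret_cast<sockaddr*>(&destination), sizeof(destination));
        std::cout << "sent " << msg << std::endl;
    }

    ::close(sock);
    return 0;
}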
Processing code:
import java.net.*;
import java.io.*;
import java.util.Arrays;

DatagramSocket socket;
DatagramPacket packet;
byte[] buf = new byte[12]; // Set your buffer size as desired

void setup() {
  try {
    socket = new DatagramSocket(6000); // Set your port here
  }
  catch (Exception e) {
    e.printStackTrace();
    println(e.getMessage());
  }
}

void draw() {
  try {
    DatagramPacket packet = new DatagramPacket(buf, buf.length);
    socket.receive(packet);
    InetAddress address = packet.getAddress();
    int port = packet.getPort();
    packet = new DatagramPacket(buf, buf.length, address, port);
    // Received as bytes:
    println(Arrays.toString(buf));
    // If you wish to receive as a String:
    String received = new String(packet.getData(), 0, packet.getLength());
    println(received);
  }
  catch (IOException e) {
    e.printStackTrace();
    println(e.getMessage());
  }
}
The assumption is that you're using C++ in Xcode (and not Objective-C or Swift).
Every Processing sketch inherits the args property (very similar to main's const char * argv[] in a C++ program). You can make use of that to initialise a Processing sketch with options from C++.
You could have something like:
int main(int argc, const char * argv[]) {
    system("/path/to/processing-java --sketch-path=/path/to/your/processing/sketch/folder --run 0,1,2,3,4,5,6,7,8,9");
    return 0;
}
(This is oversimplified: you'd have your for loop accumulate ints into a string with a separator character, and perhaps set up variables for the paths to processing-java and the Processing sketch.)
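As a concrete illustration, the accumulation could look like the sketch below; the two paths are placeholders you'd replace with your own:

#include <cstdlib>
#include <sstream>
#include <string>

int main(int argc, const char* argv[]) {
    // Placeholder paths: point these at your processing-java binary and sketch folder.
    const std::string processingJava = "/path/to/processing-java";
    const std::string sketchPath = "/path/to/your/processing/sketch/folder";

    // Accumulate the values into one comma-separated argument.
    std::ostringstream values;
    for (int i = 0; i < 10; i++) {
        if (i > 0) values << ",";
        values << i;
    }

    const std::string cmd = processingJava + " --sketch-path=" + sketchPath +
                            " --run " + values.str();
    return std::system(cmd.c_str());
}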
To clarify, processing-java is a command-line utility that ships with Processing. (You can find it inside the Processing.app folder (via Show Package Contents), alongside the Processing executable, and install it via the Tools menu inside Processing.) It allows you to easily run a sketch from the command line. Alternatively, you can export an application; however, if you're prototyping, the processing-java option might be more practical.
In Processing you'd check if the sketch was launched with arguments, and if so, parse those arguments.
void setup() {
  if (args != null) {
    printArray(args);
  }
}
You can use split() to split 0,1,2,3,4,5,6,7,8,9 into individual numbers that can be parsed (via int() for example).
If you have more complex data, you can consider formatting your c++ output as JSON, then using parseJSONObject() / parseJSONArray().
(If you don't want to split individual values, you can just use spaces between command-line arguments: /path/to/processing-java --sketch-path=/path/to/your/processing/sketch/folder --run 0 1 2 3 4 5 6 7 8 9. If you want to send a JSON-formatted string from C++, be aware that you may need to escape the " characters, e.g. system("/path/to/processing-java --sketch-path=/path/to/your/processing/sketch/folder --run {\"myCppData\":[0,1,2]}");)
This would work if you need to launch the Processing sketch once and initialise it with values from your C++ program at startup. Outside of the scope of your question, if you need to continuously send values from C++ to Processing, you can look at opening a local socket connection (TCP or UDP) to establish communication between the two programs. One easy-to-use protocol is OSC (via UDP). You can use oscpack in raw C++ and oscP5 in Processing. (Optionally, depending on your setup, you can consider openFrameworks, which already has oscpack integrated as ofxOsc and ships with send/receive examples: its ofApp is similar to Processing's PApplet (e.g. setup()/draw()/mousePressed(), etc.).)
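As a rough illustration of the OSC route, a sender based on oscpack's bundled SimpleSend example might look like the sketch below (the address pattern and port are arbitrary and must match the oscP5 listener on the Processing side; check the headers that ship with your copy of oscpack):

#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"

int main() {
    const char* address = "127.0.0.1";
    const int port = 7000;  // must match the oscP5 listener in the Processing sketch
    char buffer[1024];

    UdpTransmitSocket transmitSocket(IpEndpointName(address, port));
    for (int i = 0; i < 10; i++) {
        // Build one OSC message per value and send it over UDP.
        osc::OutboundPacketStream p(buffer, sizeof(buffer));
        p << osc::BeginMessage("/fromCpp") << i << osc::EndMessage;
        transmitSocket.Send(p.Data(), p.Size());
    }
    return 0;
}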
I have a function which needs to parse some arguments and several if clauses inside it need to perform similar actions. In order to reduce typing and help keep the code readable, I thought I'd use a lambda to encapsulate the recurring actions, but I'm having trouble finding sufficient info to determine whether I'm mistakenly invoking undefined behavior or what I need to do to actualize my approach.
Below is a simplified code snippet of what I have currently:
#include <iostream>
#include <sstream>
#include <string>

// myInt and myOtherInt stand in for the values parsed from the arguments.
int myInt = 0;
int myOtherInt = 0;

int foo(int argc, char* argv[])
{
    using ss = std::istringstream;
    auto sf = [&](ss&& stream) -> ss& {
        stream.exceptions(ss::failbit);
        return stream;
    };
    int retVal = 0;
    bool valA = false;
    bool valB = false;
    try
    {
        for (int i = 1; i < argc; i++)
        {
            std::string arg( argv[i] );
            if( !valA )
            {
                valA = true;
                sf( ss(arg) ) >> myInt;
            }
            else if( !valB )
            {
                valB = true;
                sf( ss(arg) ) >> std::hex >> myOtherInt;
            }
        }
    }
    catch( std::exception& err )
    {
        retVal = -1;
        std::cerr << err.what() << std::endl;
    }
    return retVal;
}
First, based on what I've read, I don't think that specifying the lambda argument as an rvalue reference (ss&&) is doing quite what I want it to do. However, trying to compile with it declared as a normal reference (ss&) failed with the error cannot bind non-const lvalue reference of type 'ss&'. Changing ss& to ss&& got rid of the error and did not produce any warnings, but I'm not convinced that I'm using that construct correctly.
I've tried reading up on the various definitions for each, but the wording is a bit confusing.
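For what it's worth, the binding behavior in question can be reduced to a minimal, self-contained sketch (the names and the "42" literal below are made up for illustration):

#include <iostream>
#include <sstream>

using ss = std::istringstream;

// A temporary can't bind to a non-const lvalue reference (ss&),
// but it can bind to an rvalue reference (ss&&).
ss& configure(ss&& stream) {
    stream.exceptions(ss::failbit);
    return stream;  // an lvalue reference to the same (temporary) object
}

int main() {
    int value = 0;
    // The temporary ss("42") lives until the end of the full expression,
    // so extracting through the returned reference is well-defined here.
    configure(ss("42")) >> value;
    std::cout << value << '\n';  // prints 42
}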
I guess ultimately my questions are:
Can I expect the lifetime of my temporary ss(arg) object to extend through the entire extraction expression?
What is the correct way to define a lambda such that I can use the lambda in the way I demonstrate above, assuming that such a thing is actually possible?
I am a novice trying to use Google protobuf for a work project. I want to find out the difference between protobuf messages and am therefore trying to use the MessageDifferencer APIs. I get a SEGV while running the code below; commenting out the line reporter->ReportModified(*Obj1, *Obj2, field_path); results in no SEGV.
Any help with usage of the differencer is appreciated!
// Obj1 and Obj2 are pointers to protobuf messages created elsewhere.
google::protobuf::util::MessageDifferencer diff;
diff.set_report_matches(false);
diff.set_report_moves(false);

std::string reportDiff;
google::protobuf::io::StringOutputStream* opstream = new google::protobuf::io::StringOutputStream(&reportDiff);
google::protobuf::util::MessageDifferencer::StreamReporter* reporter = new google::protobuf::util::MessageDifferencer::StreamReporter(opstream);
diff.ReportDifferencesTo(reporter);

std::vector<google::protobuf::util::MessageDifferencer::SpecificField> field_path;
try
{
    reporter->ReportModified(*Obj1, *Obj2, field_path);
}
catch (const std::exception& e)
{
    std::cout << e.what() << "\n";
}
std::cout << __func__ << " Report added " << field_path.size();

// Cleanup objects
delete Obj1;
delete Obj2;
delete reporter;
Thanks,
Maddy
You shouldn't be calling the ReportModified method directly; the MessageDifferencer class calls it when it finds a difference.
MessageDifferencer::Compare is the correct method to call, according to the docs. Assuming all else is correct, I believe changing your code inside the try block to call that should work.
Moving your code to a function, you could have something like
#include <string>
#include <google/protobuf/message.h>
#include <google/protobuf/io/zero_copy_stream_impl_lite.h>
#include <google/protobuf/util/message_differencer.h>

std::string CompareMessages(
    const google::protobuf::Message& m1,
    const google::protobuf::Message& m2) {
  using google::protobuf::util::MessageDifferencer;

  MessageDifferencer diff;
  diff.set_report_matches(false);
  diff.set_report_moves(false);

  std::string reportDiff;
  {
    google::protobuf::io::StringOutputStream opstream(&reportDiff);
    MessageDifferencer::StreamReporter reporter(&opstream);
    diff.ReportDifferencesTo(&reporter);
    diff.Compare(m1, m2);
  }
  return reportDiff;  // no std::move needed; this allows NRVO
}
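A hypothetical usage sketch follows; MyMessage and set_name() are placeholders for a message type and field generated from your own .proto file:

#include <iostream>

int main() {
    // MyMessage is a placeholder for a type generated from your .proto;
    // set_name() stands in for one of its field setters.
    MyMessage a, b;
    a.set_name("first");
    b.set_name("second");

    std::cout << CompareMessages(a, b) << std::endl;  // prints the differences
    return 0;
}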
UPDATE 2 / TL;DR
Is there some way to prevent dirty pages of a Windows FILE_FLAG_DELETE_ON_CLOSE temporary file from being flushed as a result of closing memory maps opened on such files?
Yes. If you do not need to do anything with the files themselves after their initial creation and you implement some naming conventions, this is possible through the strategy explained in this answer.
Note: I am still quite interested in finding out the reasons why there is so much difference in the behavior depending on how maps are created and the order of disposal/unmapping.
I have been looking into some strategies for an inter-process shared memory data structure that allows growing and shrinking its committed capacity on Windows by using a chain of "memory chunks".
One possible way is to use pagefile-backed named memory maps as the chunk memory. An advantage of this strategy is the possibility to use SEC_RESERVE to reserve a big chunk of memory address space and incrementally commit it using VirtualAlloc with MEM_COMMIT. Disadvantages appear to be (a) the requirement to have the SeCreateGlobalPrivilege permission to allow using a shareable name in the Global\ namespace and (b) the fact that all committed memory contributes to the system commit charge.
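For reference, the pagefile-backed variant described here boils down to something like the following sketch (mapping name and sizes are illustrative only):

// Pagefile-backed, named, SEC_RESERVE mapping (name and sizes are placeholders).
HANDLE mh = ::CreateFileMappingA(
    INVALID_HANDLE_VALUE, nullptr,   // backed by the pagefile, not a real file
    PAGE_READWRITE | SEC_RESERVE,    // reserve only; commit later
    0, 1u << 30,                     // 1 GB of reserved address space
    "Global\\MySharedArena");        // a Global\ name needs SeCreateGlobalPrivilege
void* base = ::MapViewOfFile(mh, FILE_MAP_ALL_ACCESS, 0, 0, 0);
// Commit the first 64 KB on demand; committed pages count towards the system commit charge.
::VirtualAlloc(base, 64 * 1024, MEM_COMMIT, PAGE_READWRITE);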
To circumvent these disadvantages, I started investigating the use of temporary file backed memory maps. I.e. memory maps over files created using the FILE_FLAG_DELETE_ON_CLOSE | FILE_ATTRIBUTE_TEMPORARY flags combination. This appears to be a recommended strategy that according to e.g. this blog post should prevent flushing the mapped memory to disk (unless memory pressure causes dirty mapped pages to be paged out).
I am however observing that closing the map/file handle before the owning process exits, causes dirty pages to be flushed to disk. This occurs even if the view/file handle is not the one through which the dirty pages were created and when these views/file handles were opened after the pages were 'dirtied' in a different view.
It appears that changing the order of disposal (i.e. unmapping the view first or closing the file handle first) has some impact on when the disk flush is initiated, but not on the fact that flushing takes place.
So my questions are:
Is there some way to use temporary file backed memory maps and prevent them from flushing dirty pages when the map/file is closed, taking into account that multiple threads within a process/multiple processes may have open handles/views to such a file?
If not, what is/could be the reason for the observed behavior?
Is there an alternative strategy that I may have overlooked?
UPDATE
Some additional info: When running the "arena1" and "arena2" parts of the sample code below in two separate (independent) processes, with "arena1" being the process that creates the shared memory regions and "arena2" the one that opens them, the following behavior is observed for maps/chunks that have dirty pages:
If closing the view before the file handle in the "arena1" process, it flushes each of these chunks to disk in what seems a (partially) synchronous process (i.e. it blocks the disposing thread for several seconds), independent of whether or not the "arena2" process was started.
If closing the file handle before the view, disk flushes only occur for those maps/chunks that are closed in the "arena1" process while the "arena2" process still has an open handle to those chunks, and they appear to be 'asynchronous', i.e. not blocking the application thread.
Refer to the (C++) sample code below, which reproduces the problem on my system (x64, Win7):
#include <windows.h>

#include <cstdint>
#include <cstring>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

static uint64_t start_ts;

static uint64_t elapsed() {
    return ::GetTickCount64() - start_ts;
}
class PageArena {
public:
    typedef uint8_t* pointer;

    PageArena(int id, const char* base_name, size_t page_sz, size_t chunk_sz, size_t n_chunks, bool dispose_handle_first) :
        id_(id), base_name_(base_name), pg_sz_(page_sz), dispose_handle_first_(dispose_handle_first) {
        for (size_t i = 0; i < n_chunks; i++)
            chunks_.push_back(new Chunk(i, base_name_, chunk_sz, dispose_handle_first_));
    }

    ~PageArena() {
        for (auto i = 0; i < chunks_.size(); ++i) {
            if (chunks_[i])
                release_chunk(i);
        }
        std::cout << "[" << ::elapsed() << "] arena " << id_ << " destructed" << std::endl;
    }

    pointer alloc() {
        auto ptr = chunks_.back()->alloc(pg_sz_);
        if (!ptr) {
            chunks_.push_back(new Chunk(chunks_.size(), base_name_, chunks_.back()->capacity(), dispose_handle_first_));
            ptr = chunks_.back()->alloc(pg_sz_);
        }
        return ptr;
    }

    size_t num_chunks() {
        return chunks_.size();
    }

    void release_chunk(size_t ndx) {
        delete chunks_[ndx];
        chunks_[ndx] = nullptr;
        std::cout << "[" << ::elapsed() << "] chunk " << ndx << " released from arena " << id_ << std::endl;
    }

private:
    struct Chunk {
    public:
        Chunk(size_t ndx, const std::string& base_name, size_t size, bool dispose_handle_first) :
            map_ptr_(nullptr), tail_(nullptr),
            handle_(INVALID_HANDLE_VALUE), size_(0),
            dispose_handle_first_(dispose_handle_first) {
            name_ = name_for(base_name, ndx);
            if ((handle_ = create_temp_file(name_, size)) == INVALID_HANDLE_VALUE)
                handle_ = open_temp_file(name_, size);
            if (handle_ != INVALID_HANDLE_VALUE) {
                size_ = size;
                auto map_handle = ::CreateFileMappingA(handle_, nullptr, PAGE_READWRITE, 0, 0, nullptr);
                tail_ = map_ptr_ = (pointer)::MapViewOfFile(map_handle, FILE_MAP_ALL_ACCESS, 0, 0, size);
                ::CloseHandle(map_handle); // no longer needed.
            }
        }

        ~Chunk() {
            if (dispose_handle_first_) {
                close_file();
                unmap_view();
            } else {
                unmap_view();
                close_file();
            }
        }

        size_t capacity() const {
            return size_;
        }

        pointer alloc(size_t sz) {
            pointer result = nullptr;
            if (tail_ + sz <= map_ptr_ + size_) {
                result = tail_;
                tail_ += sz;
            }
            return result;
        }

    private:
        static const DWORD kReadWrite = GENERIC_READ | GENERIC_WRITE;
        static const DWORD kFileSharing = FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE;
        static const DWORD kTempFlags = FILE_ATTRIBUTE_NOT_CONTENT_INDEXED | FILE_FLAG_DELETE_ON_CLOSE | FILE_ATTRIBUTE_TEMPORARY;

        static std::string name_for(const std::string& base_file_path, size_t ndx) {
            std::stringstream ss;
            ss << base_file_path << "." << ndx << ".chunk";
            return ss.str();
        }

        static HANDLE create_temp_file(const std::string& name, size_t& size) {
            auto h = CreateFileA(name.c_str(), kReadWrite, kFileSharing, nullptr, CREATE_NEW, kTempFlags, 0);
            if (h != INVALID_HANDLE_VALUE) {
                LARGE_INTEGER newpos;
                newpos.QuadPart = size;
                ::SetFilePointerEx(h, newpos, 0, FILE_BEGIN);
                ::SetEndOfFile(h);
            }
            return h;
        }

        static HANDLE open_temp_file(const std::string& name, size_t& size) {
            auto h = CreateFileA(name.c_str(), kReadWrite, kFileSharing, nullptr, OPEN_EXISTING, kTempFlags, 0);
            if (h != INVALID_HANDLE_VALUE) {
                LARGE_INTEGER sz;
                ::GetFileSizeEx(h, &sz);
                size = sz.QuadPart;
            }
            return h;
        }

        void close_file() {
            if (handle_ != INVALID_HANDLE_VALUE) {
                std::cout << "[" << ::elapsed() << "] " << name_ << " file handle closing" << std::endl;
                ::CloseHandle(handle_);
                std::cout << "[" << ::elapsed() << "] " << name_ << " file handle closed" << std::endl;
            }
        }

        void unmap_view() {
            if (map_ptr_) {
                std::cout << "[" << ::elapsed() << "] " << name_ << " view closing" << std::endl;
                ::UnmapViewOfFile(map_ptr_);
                std::cout << "[" << ::elapsed() << "] " << name_ << " view closed" << std::endl;
            }
        }

        HANDLE handle_;
        std::string name_;
        pointer map_ptr_;
        size_t size_;
        pointer tail_;
        bool dispose_handle_first_;
    };

    int id_;
    size_t pg_sz_;
    std::string base_name_;
    std::vector<Chunk*> chunks_;
    bool dispose_handle_first_;
};
static void TempFileMapping(bool dispose_handle_first) {
    const size_t chunk_size = 256 * 1024 * 1024;
    const size_t pg_size = 8192;
    const size_t n_pages = 100 * 1000;
    const char* base_path = "data/page_pool";
    start_ts = ::GetTickCount64();
    if (dispose_handle_first)
        std::cout << "Mapping with 2 arenas and closing file handles before unmapping views." << std::endl;
    else
        std::cout << "Mapping with 2 arenas and unmapping views before closing file handles." << std::endl;
    {
        std::cout << "[" << ::elapsed() << "] " << "allocating " << n_pages << " pages through arena 1." << std::endl;
        PageArena arena1(1, base_path, pg_size, chunk_size, 1, dispose_handle_first);
        for (size_t i = 0; i < n_pages; i++) {
            auto ptr = arena1.alloc();
            memset(ptr, (i + 1) % 256, pg_size); // ensure pages are dirty.
        }
        std::cout << "[" << elapsed() << "] " << arena1.num_chunks() << " chunks created." << std::endl;
        {
            PageArena arena2(2, base_path, pg_size, chunk_size, arena1.num_chunks(), dispose_handle_first);
            std::cout << "[" << ::elapsed() << "] arena 2 loaded, going to release chunks 1 and 2 from arena 1" << std::endl;
            arena1.release_chunk(1);
            arena1.release_chunk(2);
        }
    }
}
Please refer to this gist that contains the output of running the above code and links to screen captures of system free memory and disk activity when running TempFileMapping(false) and TempFileMapping(true) respectively.
After the bounty period expired without any answers that provided more insight or solved the mentioned problem, I decided to dig a little deeper and experiment some more with several combinations and sequences of operations.
As a result, I believe I have found a way to achieve memory maps shared between processes over temporary, delete-on-close files, that are not being flushed to disk when they are closed.
The basic idea is to create the memory map at the point where the temp file is newly created, using a map name that can later be passed to OpenFileMapping:
// build a unique map name from the file name.
auto map_name = make_map_name(file_name);

// Open or create the mapped file.
auto mh = ::OpenFileMappingA(FILE_MAP_ALL_ACCESS, false, map_name.c_str());
if (mh == 0 || mh == INVALID_HANDLE_VALUE) {
    // existing map could not be opened, create the file.
    auto fh = ::CreateFileA(name.c_str(), kReadWrite, kFileSharing, nullptr, CREATE_NEW, kTempFlags, 0);
    if (fh != INVALID_HANDLE_VALUE) {
        // set its size.
        LARGE_INTEGER newpos;
        newpos.QuadPart = desired_size;
        ::SetFilePointerEx(fh, newpos, 0, FILE_BEGIN);
        ::SetEndOfFile(fh);
        // create the map over the file handle.
        mh = ::CreateFileMappingA(fh, nullptr, PAGE_READWRITE, 0, 0, map_name.c_str());
        // close the file handle;
        // from now on there will be no accesses using file handles.
        ::CloseHandle(fh);
    }
}
Thus, the file handle is only used when the file is newly created, and closed immediately after the map is created, while the map handle itself remains open, to allow opening the mapping without requiring access to a file handle. Note that a race condition exists here, that we would need to deal with in any "real code" (as well as adding decent error checking and handling).
So if we got a valid map handle, we can create the view:
auto map_ptr = MapViewOfFile(mh, FILE_MAP_ALL_ACCESS, 0, 0, 0);
if (map_ptr) {
    // determine its size.
    MEMORY_BASIC_INFORMATION mbi;
    if (::VirtualQuery(map_ptr, &mbi, sizeof(MEMORY_BASIC_INFORMATION)) > 0)
        map_size = mbi.RegionSize;
}
When closing a mapped file some time later, close the map handle before unmapping the view:
if (mh != 0 && mh != INVALID_HANDLE_VALUE) {
    ::CloseHandle(mh);
    mh = INVALID_HANDLE_VALUE;
}
if (map_ptr) {
    ::UnmapViewOfFile(map_ptr);
    map_ptr = 0;
    map_size = 0;
}
And, according to the tests I have performed so far, this does not cause dirty pages to be flushed to disk on close. Problem solved. Well, partially anyway: there may still be a cross-session map name sharing issue.
If I understand correctly, commenting out the Arena2 part of the code should reproduce the issue without the need for a second process. I have tried this:
I edited base_path as follows for convenience:
char base_path[MAX_PATH];
GetTempPathA(MAX_PATH, base_path);
strcat_s(base_path, MAX_PATH, "page_pool");
I edited n_pages = 1536 * 128 to bring the used memory to 1.5 GB, compared to your ~800 MB.
I have tested TempFileMapping(false) and TempFileMapping(true), one at a time, with the same results.
I have tested with Arena2 commented out and intact, with the same results.
I have tested on Win8.1 x64 and Win7 x64, with the same results (±10%).
In my tests, the code runs in 2400 ms ±10%, with only 500 ms ±10% spent on deallocation. That's clearly not enough time for a flush of 1.5 GB to the low-RPM, quiet HDDs I have there.
So, the question is, what are you observing? I'd suggest that you:
Provide your times for comparison
Use a different computer for tests, paying attention to excluding software issues such as "same antivirus"
Verify that you're not experiencing a RAM shortage.
Use xperf to see what's happening during the freeze.
Update
I have tested on yet another Win7 x64 machine, and the times are 890 ms in total, with 430 ms spent on deallocation. I have looked into your results, and what is VERY suspicious is that almost exactly 4000 ms is spent in the freeze each time on your machine. That can't be a mere coincidence, I believe. Also, it's rather obvious now that the problem is somehow bound to the specific machine you're using. So my suggestions are:
As stated above, test on another computer yourself
As stated above, use xperf; it will allow you to see what exactly happens in user mode and kernel mode during the freeze (I really suspect some non-standard driver in the middle)
Play with the number of pages and see how it affects the freeze length.
Try to store files on a different disk drive on the same computer where you have tested initially.
I recently attended a coding interview and was asked a question I didn't know the answer to. After searching for the answer on the internet for a few days, I've come here to ask for help.
The question was as follows: propose an approach to record runtime information about the functions in your program, for example, the number of times a function is called, and so on.
By the way, you are not allowed to modify these functions. You might want to define a global variable and update it inside these functions to record the calls, but that is not allowed.
OK! That's all about the question I met in the coding interview.
This is the best I could come up with using C++ macros. I don't know whether it conforms to the requirements.
A very basic version just recording the count. The macro replaces all existing calls to the function with the contents of the macro, which records the stats and then calls the function. It can easily be extended to record more details. It assumes there's only one function with that name, or that you want one count for all of them, and it requires a macro for each function.
#include <iostream>
using namespace std;

// here's our function
void func()
{ /* some stuff */ }

// this was added
int funcCount = 0;
#define func(...) do { funcCount++; func(__VA_ARGS__); } while(0)

int main()
{
    // call the function
    func();

    // print stats
    cout << funcCount << endl;
    return 0;
}
Prints 1.
A more generic version. Requires changes to how the function is called.
#include <iostream>
#include <map>
#include <string>
using namespace std;

// here are our functions
void someFunc()
{ /* some stuff */ }

void someOtherFunc()
{ /* some stuff */ }

// this was added
map<string, int> funcCounts;
#define call(func, ...) do { funcCounts[ #func ]++; func(__VA_ARGS__); } while(0)

int main()
{
    // call the functions
    // needed to change these from the 'someFunc();' format
    call(someFunc);
    call(someOtherFunc);
    call(someFunc);

    // print stats
    for (map<string, int>::iterator i = funcCounts.begin(); i != funcCounts.end(); i++)
        cout << i->first << " - " << i->second << endl;
    return 0;
}
Prints:
someFunc - 2
someOtherFunc - 1
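One possible extension (not part of the original answer): have the map record both the call count and the accumulated wall-clock time per function, for example via <chrono>:

#include <chrono>
#include <iostream>
#include <map>
#include <string>
using namespace std;

struct FuncStats { int calls = 0; chrono::nanoseconds total{0}; };
map<string, FuncStats> funcStats;

// Same idea as the 'call' macro above, but also timing each invocation.
#define call(func, ...) do { \
    auto t0 = chrono::steady_clock::now(); \
    func(__VA_ARGS__); \
    funcStats[ #func ].calls++; \
    funcStats[ #func ].total += chrono::steady_clock::now() - t0; \
} while(0)

void someFunc() { /* some stuff */ }

int main()
{
    call(someFunc);
    call(someFunc);

    // print stats: call count and total time per function
    for (map<string, FuncStats>::iterator i = funcStats.begin(); i != funcStats.end(); i++)
        cout << i->first << " - " << i->second.calls << " calls, "
             << chrono::duration_cast<chrono::microseconds>(i->second.total).count() << " us" << endl;
    return 0;
}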