Boost Beast HTTP - C++11

I am working on an HTTP parser, and it looks like Boost.Beast is a nice fit. However, I still have some questions:
Assume the HTTP POST request data has already been received via a Boost.Asio socket and is stored inside a std::string buffer.
Is there a good sample showing how to extract the HTTP header fields and their values (one after another)? I assume it involves an iterator, but I tried several ways and none of them worked.
How do I extract the HTTP body?
Thank you very much.

Starting from a simple example: https://www.boost.org/doc/libs/develop/libs/beast/example/http/client/sync/http_client_sync.cpp
// Declare a container to hold the response
http::response<http::dynamic_body> res;
// Receive the HTTP response
http::read(socket, buffer, res);
Extract The Headers
The response object already contains all the goods:
for(auto const& field : res)
std::cout << field.name() << " = " << field.value() << "\n";
std::cout << "Server: " << res[http::field::server] << "\n";
You can also just stream the entire response object:
std::cout << res << std::endl;
Extract The Body
std::cout << "Body size is " << res.body().size() << "\n";
To actually use the "dynamic_body", use standard Asio buffer manipulation:
#include <boost/asio/buffers_iterator.hpp>
#include <iomanip> // for std::quoted
std::string body { boost::asio::buffers_begin(res.body().data()),
boost::asio::buffers_end(res.body().data()) };
std::cout << "Body: " << std::quoted(body) << "\n";
Alternatively, see beast::buffers_to_string
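For example (assuming Boost 1.66 or later, where this helper lives in <boost/beast/core/buffers_to_string.hpp>):
std::string body = boost::beast::buffers_to_string(res.body().data());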
Obviously, things become more straightforward when using a string_body:
std::cout << "Body: " << std::quoted(res.body()) << "\n";

WinSock: How to properly time out receive using overlapped I/O

Problem criteria:
my service is Windows-only, so portability is not a constraint for me
my service uses threadpools with overlapped I/O
my service needs to open a connection to a remote service, ask a question and receive a reply
the remote service may refuse to answer (root cause is not important)
The solution is trivial to describe: set a timeout on the read.
The implementation of said solution has been elusive.
I think I may have finally tracked down something viable, but I am so weary from false starts that I would like a sanity check from someone who has done this sort of thing before I move ahead with it.
The idea is to call GetOverlappedResultEx with a non-zero timeout:
https://learn.microsoft.com/en-us/windows/win32/api/ioapiset/nf-ioapiset-getoverlappedresultex
If dwMilliseconds is nonzero, and an I/O completion routine or APC is queued, GetLastError returns WAIT_IO_COMPLETION.
If dwMilliseconds is nonzero and the specified timeout interval elapses, GetLastError returns WAIT_TIMEOUT.
Thus, I can sit and wait until IO has been alerted or the timeout exceeded and react accordingly:
WAIT_TIMEOUT: CancelIoEx on the overlapped structure from the WSARecv, which will trigger my IO complete callback and allow me to do something meaningful (e.g. force the socket closed).
WAIT_IO_COMPLETION: Do nothing. Timeout need not be enforced.
Is it really that simple, though? I have yet to find any questions or example code that closely resemble what I have going on here (which is largely based on a codebase I inherited), and as a consequence I have not found anything to confirm that this approach is appropriate.
Demo program: https://github.com/rguilbault-mt/rguilbault-mt/blob/main/WinSock.cpp
to run:
-p -d -t -gor
Make the read delay > timeout to force the timeout condition.
Relevant bits for this question:
StartThreadpoolIo(gIoTp[s]);
if (WSARecv(s, bufs, 1, &readBytes, &dwFlags, &ioData->ol, NULL) == SOCKET_ERROR)
{
std::lock_guard<std::mutex> log(gIoMtx);
switch (WSAGetLastError())
{
case WSA_IO_PENDING:
std::cout << preamble(__func__) << "asynchronous" << std::endl;
break;
default:
std::cerr << preamble(__func__) << "WSARecv() failed: " << WSAGetLastError() << std::endl;
CancelThreadpoolIo(gIoTp[s]);
return false;
}
}
else
{
std::lock_guard<std::mutex> log(gIoMtx);
std::cout << preamble(__func__) << "synchronous - " << readBytes << " read" << std::endl;
}
if (gGetOverlappedResult)
{
{
std::lock_guard<std::mutex> log(gIoMtx);
std::cout << preamble(__func__) << "wait until I/O occurs or we timeout..." << std::endl;
}
DWORD bytesTransferred = 0;
if (!GetOverlappedResultEx((HANDLE)s, &ioData->ol, &bytesTransferred, gTimeout, true))
{
DWORD e = GetLastError();
std::lock_guard<std::mutex> log(gIoMtx);
switch (e)
{
case WAIT_IO_COMPLETION:
std::cout << preamble(__func__) << "read activity is forthcoming" << std::endl;
break;
case WAIT_TIMEOUT:
// we hit our timeout, cancel the I/O
CancelIoEx((HANDLE)s, &ioData->ol);
break;
default:
std::cerr << preamble(__func__) << "GetOverlappedResult error is unhandled: " << e << std::endl;
}
}
else
{
std::lock_guard<std::mutex> log(gIoMtx);
std::cerr << preamble(__func__) << "GetOverlappedResult success: " << bytesTransferred << std::endl;
}
}
Confirmation/other suggestions welcomed/appreciated.
I was debating what the proper protocol was and decided to answer my own question for the benefit of anyone who bumps into similar criteria/issues, even though I would have preferred that @HansPassant get credit for the answer.
Anyway, with his suggestion, using the wait mechanism provided by Microsoft lets me pull off what I need without orchestrating any thread-based monitoring of my own. Here are the relevant bits:
after calling WSARecv, register a wait callback:
else if (gRegisterWait)
{
if (!RegisterWaitForSingleObject(&ioData->waiter, (HANDLE)s, waitOrTimerCallback, ioData, gTimeout, WT_EXECUTEONLYONCE))
{
std::lock_guard<std::mutex> log(gIoMtx);
std::cerr << preamble(__func__) << "RegisterWaitForSingleObject failed: " << GetLastError() << std::endl;
}
else
{
std::lock_guard<std::mutex> log(gIoMtx);
std::cout << preamble(__func__) << "RegisterWaitForSingleObject success: " << ioData->waiter << std::endl;
}
}
when the wait callback is invoked, use the second parameter to decide if the callback was called because of a timeout (true) or other signal (false):
VOID CALLBACK waitOrTimerCallback(
PVOID lpParameter,
BOOLEAN TimedOut
)
{
IoData* ioData = (IoData*)lpParameter;
{
std::lock_guard<std::mutex> log(gIoMtx);
std::cout << preamble(__func__) << (TimedOut ? "true" : "false") << std::endl;
std::cout << "\tSocket: " << ioData->socket << std::endl;
}
if (!TimedOut)
{
std::lock_guard<std::mutex> log(gIoMtx);
std::cout << preamble(__func__) << "read activity is forthcoming" << std::endl;
}
else
{
// we hit our timeout, cancel the I/O
CancelIoEx((HANDLE)ioData->socket, &ioData->ol);
std::lock_guard<std::mutex> log(gIoMtx);
std::cout << preamble(__func__) << "timeout reached, cancelling I/O" << std::endl;
}
// need to unregister the waiter but not supposed to do it in the callback
if (!TrySubmitThreadpoolCallback(unregisterWaiter, &ioData->waiter, NULL))
{
std::lock_guard<std::mutex> log(gIoMtx);
std::cerr << preamble(__func__) << "failed to unregister waiter...does this mean I have a memory leak?" << std::endl;
}
}
per the recommendations of the API:
https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-registerwaitforsingleobject
When the wait is completed, you must call the UnregisterWait or UnregisterWaitEx function to cancel the wait operation. (Even wait operations that use WT_EXECUTEONLYONCE must be canceled.) Do not make a blocking call to either of these functions from within the callback function.
submit the unregistering of the waiter to the threadpool to be dealt with outside of the callback:
VOID CALLBACK unregisterWaiter(
PTP_CALLBACK_INSTANCE Instance,
PVOID Context
)
{
PHANDLE pWaitHandle = (PHANDLE)Context;
{
std::lock_guard<std::mutex> log(gIoMtx);
std::cout << preamble(__func__) << std::endl;
std::cout << "\Handle: " << (HANDLE)*pWaitHandle << std::endl;
}
if (!UnregisterWait(*pWaitHandle))
{
std::lock_guard<std::mutex> log(gIoMtx);
std::cerr << preamble(__func__) << "UnregisterWait failed: " << GetLastError() << std::endl;
}
}
The lifetime of the wait handle needs to be accounted for, but you can tuck it into the structure that wraps the overlapped I/O and pass a pointer to that wrapper around; that seems to work fine. The documentation gives no indication of whether I am on the hook for freeing anything, which I assume is why we are required to call UnregisterWait even for waits that execute only once. That detail can be considered outside the scope of the question.
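For illustration, a sketch of what such a per-operation wrapper might look like (requires <winsock2.h>/<windows.h>; the field names are illustrative and not taken verbatim from the demo program):
struct IoData
{
    OVERLAPPED ol{};        // must outlive the WSARecv until it completes or is cancelled
    SOCKET     socket{};    // the socket the overlapped WSARecv was issued on
    HANDLE     waiter{};    // filled in by RegisterWaitForSingleObject, released via UnregisterWait
    WSABUF     buf{};       // describes the receive buffer below
    char       storage[4096]{};
};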
Note, for others' benefit, I've updated the github link from my question with the latest version of the code.

Boost Asio synchronous HTTPS call - JSON response has unintended characters

We are migrating from HTTP to HTTPS with synchronous Boost.Asio calls. I am using the code below to make a synchronous HTTPS call with SSL certificate validation, and the response comes back split across multiple lines. For now I remove the line-feed characters (\r\n) (the upstream system says it sends the response on a single line without any extra characters, as described below) and then parse the response, but sometimes we get extra characters inside the key/value pairs, as shown below:
try{
fast_ostringstream oss;
boost::asio::streambuf request_;
boost::asio::streambuf response_;
boost::system::error_code ec;
boost::asio::ssl::context ctx(boost::asio::ssl::context::sslv23);
ctx.set_verify_mode(boost::asio::ssl::verify_peer);
ctx.set_default_verify_paths(ec);
if (ec)
{
fast_ostringstream oss;
oss << "Issue in settign the default path:" << ec.message();
PBLOG_INFO(oss.str());
}
oss << ec.message();
ctx.add_verify_path("/home/test/pemcert/");
ctx.set_options(boost::asio::ssl::context::default_workarounds |
boost::asio::ssl::context::no_sslv2 |
boost::asio::ssl::context::no_sslv3);
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> socket(io_service,ctx);
std::ostream request_stream(&request_);
request_stream << "POST " << server_endpoint << " HTTP/1.1\r\n";
request_stream << "Host: " << hostname << "\r\n";
request_stream << "Accept: */*\r\n";
request_stream << authorization_token << "\r\n";
request_stream << client_name << "\r\n";
request_stream << "Content-Length: " << req_str.length() << "\r\n";
request_stream << "Content-Type: application/x-www-form-urlencoded \r\n";
request_stream << "Connection: close\r\n\r\n";
request_stream << req_str << "\r\n";
tcp::resolver resolver(io_service);
tcp::resolver::query query(hostname, port_no);
tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);
tcp::resolver::iterator end;
boost::system::error_code error = boost::asio::error::host_not_found;
boost::asio::connect(socket.lowest_layer(), endpoint_iterator, error);
boost::system::error_code echs;
socket.handshake(boost::asio::ssl::stream_base::client, echs);
boost::asio::write(socket, request_);
PBLOG_INFO("Trac Request successfully sent");
// Read the response status line.
boost::asio::read_until(socket, response_, "\r\n");
string res=make_string(response_);
// Check that response is OK.
std::istream response_stream(&response_);
std::string http_version;
response_stream >> http_version;
unsigned int status_code;
response_stream >> status_code;
std::string status_message;
std::getline(response_stream, status_message);
if (!response_stream || http_version.substr(0, 5) != "HTTP/")
{
PBLOG_WARN("Invalid response\n");
}
if (status_code != 200)
{
fast_ostringstream oss;
oss << "Response returned with status code: " << status_code << "\n";
PBLOG_WARN(oss.str());
}
boost::asio::read(socket, response_, boost::asio::transfer_all(), error);
if (error.value() != 335544539 && strcmp(error.category().name(),"asio.ssl") != 0 )
{
fast_ostringstream oss;
oss << "Error : " << error.message() << "Value:" << error.value() << "Category Name:" << error.category().name();
PBLOG_WARN(oss.str());
return false;
}
else
{
string message = make_string(response_);
size_t pos = message.find( "header" );
if( pos != std::string::npos)
{
pos = pos - 2;
string msg = message.substr(pos, message.length());
msg.erase(std::remove(msg.begin(),msg.end(),'\n'),msg.end());
msg.erase(std::remove(msg.begin(),msg.end(),'\r'),msg.end());
msg.erase(msg.size()-1); //to ignore the short read error
response = msg;
}
else
{
fast_ostringstream oss;
oss << "Invalid Response: " << message;
PBLOG_WARN(oss.str());
return false;
}
socket.lowest_layer().shutdown(tcp::socket::shutdown_both);
}
}
Json response:
I couldn't paste the full response for security reasons, but a small part showing where the extra characters (here 21f0) get appended to the response is shown below:
"SGEType":{"decisionKey"21f0:"SGMtype","decisionValue":null,"decisionGroup":"partyTranslations","ruleName":"Party Details Translations"}
Please let me know whether the way I am reading from the socket is accurate or needs modification.
I couldn't paste the full response for security reasons, but a small part showing where the extra characters (here 21f0) get appended to the response is shown below:
We have no way of knowing. How big is the response? My wild stab at things: it might be using chunked transfer encoding, which you are mishandling because you manually "non-parse" the HTTP response; "21f0" is exactly what a hexadecimal chunk-size line from that encoding would look like, and it does not belong to the JSON.
In that case, my earlier answer might be kind of prophetic:
Luckily, you can also refer to that live example to see how to use Boost Beast to read the response correctly.
Live On Coliru
I'll also repeat the summary because the lesson is an important one:
Side note: Just reading until EOF would probably have worked for HTTP/1.0. But the server might rightfully reject that version or choose to respond with HTTP/1.1 anyways.
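For reference, a minimal sketch of what the read side looks like with Beast (assuming Boost 1.66 or later, and that the ssl::stream from the question is already connected and handshaken; the function name is illustrative):
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/ssl.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <string>

namespace http = boost::beast::http;

std::string read_json_response(
    boost::asio::ssl::stream<boost::asio::ip::tcp::socket>& socket)
{
    boost::beast::flat_buffer buffer;      // holds raw bytes read from the stream
    http::response<http::string_body> res;
    http::read(socket, buffer, res);       // parses the status line and headers, and
                                           // strips chunk-size markers such as "21f0"
    return res.body();                     // the JSON payload, nothing else
}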

std::unordered_map iterator returns erased keys, why and how to skip them

I am very new to C++ programming and have stumbled across a behaviour that confuses me and makes my coding harder. I have searched for an answer a bit and could not find anything - I have also scrolled through the C++ reference pages and that did not help either (please don't crucify me if the answer is in there - the page isn't a role model for explaining things). Maybe I am missing something really obvious.
Could someone explain, the following behaviour of std::unordered_map ?
std::unordered_map<std::string, std::string> test_map;
test_map["test_key_1"] = "test_value_1";
test_map["test_key_2"] = "test_value_2";
std::cout << "'test_key_1' value: " << test_map["test_key_1"] << std::endl; // This returns "test_value_1"
std::cout << "test_map size before erase: " << test_map.size() << std::endl; // This returns 2
test_map.erase("test_key_1");
std::cout << "test_map size after erase: " << test_map.size() << std::endl; // This returns 1
std::cout << "'test_key_1' value after erase: " << test_map["test_key_1"] << std::endl; // This returns empty string
std::cout << "'non_existing_key' value: " << test_map["non_existing_key"] << std::endl; // This returns empty string
test_map.rehash(test_map.size()); // I am doing this because of vague hints from the internet; the code
// behaves the same way without it.
for (std::unordered_map<std::string, std::string>::iterator it = test_map.begin();
it != test_map.end(); ++it)
{
std::cout << "Key: " << it->first << std::endl;
}
// The above loop prints both 'test_key_1' and 'test_key_2'.
// WHY!?
Why is the iterator returning items that were already erased? How can I make the iterator return only the items that are actually present in the map?
I will be grateful for any help, as I am really lost.
You are using operator[] to access previously erased elements, which:
Returns a reference to the value that is mapped to a key equivalent to key, performing an insertion if such key does not already exist.
So the lines like test_map["test_key_1"] after the erase silently re-insert the key with an empty value. If you just need to search for a given key, use the find method, which returns map.end() if the element was not found.
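For example, a minimal self-contained sketch contrasting find() with operator[], mirroring the map from the question:
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    std::unordered_map<std::string, std::string> test_map;
    test_map["test_key_2"] = "test_value_2";

    // find() never inserts; it returns end() when the key is absent.
    auto it = test_map.find("test_key_1");
    if (it != test_map.end())
        std::cout << "found: " << it->second << "\n";
    else
        std::cout << "'test_key_1' is not in the map\n";    // this branch runs

    std::cout << "size is still " << test_map.size() << "\n"; // prints 1
}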

Boost Interprocess Send giving error: boost::interprocess_exception::library_error

I am using boost message queue to communicate among different processes. I am transmitting an object of type Packet. To do this, I am using serialization and deserialization in send and receive functions.
However, when I try to send the data, I am getting this error:
boost::interprocess_exception::library_error
No other information is given.
This is how I create message queues.
for(i = 0; i< PROC_MAX_E ; i++){
std::string mqName = std::string("mq") + std::to_string(i);
std::cout << " Size of Packet is " << sizeof(Packet) << std::endl;
message_queue mq(open_or_create, mqName.c_str(), MAX_QUEUE_SIZE_E, 100*sizeof(Packet)); // size of packet later
}
This is my Packet :
class Packet{
public :
Packet();
Packet(uint32_t aType, uint32_t aProcId);
~Packet();
uint32_t getType();
union{
uint32_t mFuncId;
//uint8_t mResult8;
uint32_t mResult32;
//uint64_t mResult64;
//bool mResult;
//uint8_t* mAddr8;
//uint32_t* mAddr32;
//uint64_t* mAddr64;
//char mData[MAX_PACKET_SIZE]; // This will be used to store serialized data
};
friend class boost::serialization::access;
template <class Archive>
void serialize(Archive & ar, const unsigned int version){
ar & _mType;
ar & _mProcId;
//ar & mData;
ar & mFuncId;
//ar & mResult32;
}
private :
uint32_t _mType;
uint32_t _mProcId;
}; // end class
} // end namespace
This is my serialize and deserialize functions:
std::string IPC::_serialize(Packet aPacket){
std::stringstream oss;
boost::archive::text_oarchive oa(oss);
oa << aPacket;
std::string serialized_string (oss.str());
return serialized_string;
}
Packet IPC::_deserialize(std::string aData){
Packet p;
std::stringstream iss;
iss << aData;
boost::archive::text_iarchive ia(iss);
ia >> p;
return p;
}
And this is my send and receive functions:
bool IPC::send(uint32_t aProcId, Packet aPacket){
try{
_mLogFile << "<-- Sending Data to Process : " << aProcId << std::endl;
//uint32_t data = aPacket;
std::string mqName = std::string("mq") + std::to_string(aProcId);
message_queue mq(open_only, mqName.c_str());
//serialize Packet
std::cout << "Serializing \n";
std::string data = _serialize(aPacket);
std::cout << " Serialized data =" << data.data() << "Size = " << data.size()<< std::endl;
mq.send(data.data(), data.size(), 0);
//mq.send(&data, sizeof(uint32_t), 0);
}catch(interprocess_exception &ex){
_mLogFile << "***ERROR*** in IPC Send to process : " << aProcId << " " << ex.what() << std::endl;
std::cout << "***ERROR*** in IPC Send to process : " << aProcId << " " << ex.what() << std::endl;
_ipc_exit();
}
}
I am getting the exception during mq.send.
When I transmit only integers it works fine; only with serialization and deserialization do I get this error.
Any help is greatly appreciated. I am a little stuck, as the exception message is also not clear.
I am using Boost 1.57.0.
Rgds
Sapan
Try closing or flushing the string stream before using the string.
std::string IPC::_serialize(Packet aPacket){
std::stringstream oss;
{
boost::archive::text_oarchive oa(oss);
oa << aPacket;
}
return oss.str();
}
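Boost.Serialization archives may not flush everything until they are destroyed, so scoping the archive ensures the stream holds the complete text before you call oss.str(). For completeness, a rough sketch of a matching receive side (assuming the same queue naming scheme and Packet type from the question; the function name and buffer handling are illustrative, and <vector> plus the Boost.Interprocess headers are assumed):
Packet IPC::receive(uint32_t aProcId){
    using namespace boost::interprocess;
    std::string mqName = std::string("mq") + std::to_string(aProcId);
    message_queue mq(open_only, mqName.c_str());

    // The receive buffer must be at least get_max_msg_size() bytes.
    std::vector<char> buffer(mq.get_max_msg_size());
    message_queue::size_type received = 0;
    unsigned int priority = 0;
    mq.receive(buffer.data(), buffer.size(), received, priority);

    return _deserialize(std::string(buffer.data(), received));
}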

Using std::unique_ptr and lambdas to advance a state of an object

When advancing the state of an object, std::swap works well for simple objects and pointer swaps. For other in-place actions, Boost.ScopeExit works rather well, but it's not terribly elegant if you want to share exit handlers across functions. Is there a C++11-native way to accomplish something similar to Boost.ScopeExit but allow for better code reuse?
(Ab)use std::unique_ptr's custom Deleters as a ScopeExitVisitor or Post Condition. Scroll down to ~7th line of main() to see how this is actually used at the call site. The following example allows for either std::function or lambdas for Deleter/ScopeExitVisitor's that don't require any parameters, and a nested class if you do need to pass a parameter to the Deleter/ScopeExitVisitor.
#include <functional> // for std::function, used by ScopeExitVisitorFunc
#include <iostream>
#include <memory>
class A {
public:
using Type = A;
using Ptr = Type*;
using ScopeExitVisitorFunc = std::function<void(Ptr)>;
using ScopeExitVisitor = std::unique_ptr<Type, ScopeExitVisitorFunc>;
// Deleters that can change A's private members. Note: Even though these
// are used as std::unique_ptr<> Deleters, these Deleters don't delete
// since they are merely visitors and the unique_ptr calling this Deleter
// doesn't actually own the object (hence the label ScopeExitVisitor).
static void ScopeExitVisitorVar1(Ptr aPtr) {
std::cout << "Mutating " << aPtr << ".var1. Before: " << aPtr->var1;
++aPtr->var1;
std::cout << ", after: " << aPtr->var1 << "\n";
}
// ScopeExitVisitor accessing var2_, a private member.
static void ScopeExitVisitorVar2(Ptr aPtr) {
std::cout << "Mutating " << aPtr << ".var2. Before: " << aPtr->var2_;
++aPtr->var2_;
std::cout << ", after: " << aPtr->var2_ << "\n";
}
int var1 = 10;
int var2() const { return var2_; }
// Forward declare a class used as a closure to forward Deleter parameters
class ScopeExitVisitorParamVar2;
private:
int var2_ = 20;
};
// Define ScopeExitVisitor closure. Note: closures nested inside of class A
// still have access to private variables contained inside of A.
class A::ScopeExitVisitorParamVar2 {
public:
ScopeExitVisitorParamVar2(int incr) : incr_{incr} {}
void operator()(Ptr aPtr) {
std::cout << "Mutating " << aPtr << ".var2 by " << incr_ << ". Before: " << aPtr->var2_;
aPtr->var2_ += incr_;
std::cout << ", after: " << aPtr->var2_ << "\n";
}
private:
int incr_ = 0;
};
// Can also use lambdas, but in this case, you can't access private
// variables.
//
static auto changeStateVar1Handler = [](A::Ptr aPtr) {
std::cout << "Mutating " << aPtr << ".var1 " << aPtr->var1 << " before\n";
aPtr->var1 += 2;
};
int main() {
A a;
std::cout << "a: " << &a << "\n";
std::cout << "a.var1: " << a.var1 << "\n";
std::cout << "a.var2: " << a.var2() << "\n";
{ // Limit scope of the unique_ptr handlers. The stack is unwound in
// reverse order (i.e. Deleter var2 is executed before var1's Deleter).
A::ScopeExitVisitor scopeExitVisitorVar1(nullptr, A::ScopeExitVisitorVar1);
A::ScopeExitVisitor scopeExitVisitorVar1Lambda(&a, changeStateVar1Handler);
A::ScopeExitVisitor scopeExitVisitorVar2(&a, A::ScopeExitVisitorVar2);
A::ScopeExitVisitor scopeExitVisitorVar2Param(nullptr, A::ScopeExitVisitorParamVar2(5));
// Based on the control of a function and required set of ScopeExitVisitors that
// need to fire use release() or reset() to control which visitors are used.
// Imagine unwinding a failed but complex API call.
scopeExitVisitorVar1.reset(&a);
scopeExitVisitorVar2.release(); // Initialized in ctor. Use release() before reset().
scopeExitVisitorVar2.reset(&a);
scopeExitVisitorVar2Param.reset(&a);
std::cout << "a.var1: " << a.var1 << "\n";
std::cout << "a.var2: " << a.var2() << "\n";
std::cout << "a.var2: " << a.var2() << "\n";
}
std::cout << "a.var1: " << a.var1 << "\n";
std::cout << "a.var2: " << a.var2() << "\n";
}
Which produces:
a: 0x7fff5ebfc280
a.var1: 10
a.var2: 20
a.var1: 10
a.var2: 20
a.var2: 20
Mutating 0x7fff5ebfc280.var2 by 5. Before: 20, after: 25
Mutating 0x7fff5ebfc280.var2. Before: 25, after: 26
Mutating 0x7fff5ebfc280.var1 10 before
Mutating 0x7fff5ebfc280.var1. Before: 12, after: 13
a.var1: 13
a.var2: 26
On the plus side, this trick is nice because:
Code used in the Deleters can access private variables
Deleter code is able to be centralized
Using lambdas is still possible, though they can only access public members.
Parameters can be passed to the Deleter via nested classes acting as closures
Not all std::unique_ptr instances need to have an object assigned to them (e.g. it's perfectly acceptable to leave unneeded Deleters set to nullptr)
Changing behavior at runtime is simply a matter of calling reset() or release()
Based on the way you build your stack, it's possible at compile time to change the safety guarantees applied to an object when the std::unique_ptr(s) go out of scope
Lastly, using Boost.ScopeExit you can forward calls to a helper function or use a conditional similar to what the Boost.ScopeExit docs suggest with bool commit = ...;. Something similar to:
#include <iostream>
#include <boost/scope_exit.hpp>
int main() {
bool commitVar1 = false;
bool commitVar2 = false;
BOOST_SCOPE_EXIT_ALL(&) {
if (commitVar1)
std::cout << "Committing var1\n";
if (commitVar2)
std::cout << "Committing var2\n";
};
commitVar1 = true;
}
and there's nothing wrong with that, but as the original question asked, how do you share code without proxying the call somewhere else? Use std::unique_ptr's Deleters as ScopeExitVisitors.
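To make the core trick concrete in isolation, here is a distilled sketch (the types and names are illustrative):
#include <iostream>
#include <memory>

struct Connection { bool open = true; };

int main() {
    Connection conn;
    auto closeOnExit = [](Connection* c) { c->open = false; std::cout << "closed\n"; };

    // The unique_ptr does not own conn; its Deleter is just a scope-exit visitor.
    std::unique_ptr<Connection, decltype(closeOnExit)> guard(&conn, closeOnExit);

    // ... do work; if it succeeds and the connection should stay open,
    // disarm the guard instead of letting the Deleter fire:
    // guard.release();
}   // closeOnExit runs here unless guard.release() was called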
