Introduction
I'm trying to create a port-forwarding sample with TCP connections, so I need to map a client identification to its socket. When a client requests port-forwarding, I have to know who owns the socket.
To do that, I created the following code:
std::map<std::string, tcp::socket> box_map;
std::map<std::string, tcp::socket>::iterator it;

it = box_map.find(id);
if (it != box_map.end())
    return;
else {
    box_map.insert(std::pair<std::string, tcp::socket>(id, s));
    return;
}
Problem
But I got the following error:
error: use of deleted function ‘boost::asio::basic_stream_socket<boost::asio::ip::tcp>::basic_stream_socket(const boost::asio::basic_stream_socket<boost::asio::ip::tcp>&)’
tcp::socket is not copy-constructible, so you have to construct the new pair in place, moving your socket with emplace:
box_map.emplace(id, std::move(s));
Alternatively, you can still use insert and simply move the socket into the pair you're constructing:
box_map.insert(std::make_pair(id, std::move(s)));
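For context, a minimal sketch of the whole lookup-or-insert path, using the names from the question (id is a std::string, s is an already-connected tcp::socket):

#include <map>
#include <string>
#include <utility>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

std::map<std::string, tcp::socket> box_map;

// Store the socket under the client's id unless one is already registered.
void register_client(const std::string& id, tcp::socket s)
{
    auto it = box_map.find(id);
    if (it != box_map.end())
        return;                         // this id already owns a socket

    // tcp::socket is move-only, so move it into the map instead of copying.
    box_map.emplace(id, std::move(s));
}

Since C++17, box_map.try_emplace(id, std::move(s)) collapses the lookup and the insert into one call and leaves s untouched when the key already exists.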
Related
I just solved a latency issue in our infrastructure that was caused by this code snippet triggering a call to getaddrinfo on every run:
sock = UDPSocket.open
sock.send("#{key}|#{value}", 0,
          GRAPHITE_SERVER,
          STATSD_PORT)
sock.close
Because we use statsd and graphite for high-volume event and stats monitoring, we were effectively triggering numerous calls to getaddrinfo on every API call, and potentially tens of thousands every minute.
I modified this code to use the internal IP address, not the DNS name, of our graphite server, and was able to resolve the latency issue (presumably because the internal AWS VPC DNS server was not equipped to handle such a high volume of requests).
Now that my issue is resolved, I would love to know why the UDP implementation in Ruby does not use a cached IP address value (presumably based on the TTL of the domain name entry). Here are the relevant line and the function in full; you can see the call to rsock_addrinfo near the end:
static VALUE
udp_send(int argc, VALUE *argv, VALUE sock)
{
    VALUE flags, host, port;
    struct udp_send_arg arg;
    VALUE ret;

    if (argc == 2 || argc == 3) {
        return rsock_bsock_send(argc, argv, sock);
    }
    rb_scan_args(argc, argv, "4", &arg.sarg.mesg, &flags, &host, &port);

    StringValue(arg.sarg.mesg);
    GetOpenFile(sock, arg.fptr);
    arg.sarg.fd = arg.fptr->fd;
    arg.sarg.flags = NUM2INT(flags);
    arg.res = rsock_addrinfo(host, port, rsock_fd_family(arg.fptr->fd), SOCK_DGRAM, 0);
    ret = rb_ensure(udp_send_internal, (VALUE)&arg,
                    rsock_freeaddrinfo, (VALUE)arg.res);
    if (!ret) rsock_sys_fail_host_port("sendto(2)", host, port);
    return ret;
}
I assume this decision is intentional and would love to learn more about the reasons why.
getaddrinfo does not return any data about the TTL... because it may not have it at all, in fact, as the resolution is not necessarily done over DNS (it could come from the hosts file, LDAP, etc.; see /etc/nsswitch.conf).
From its manual, here are the prototype and the structure returned:
int getaddrinfo(const char *hostname, const char *servname,
                const struct addrinfo *hints, struct addrinfo **res);

struct addrinfo {
    int              ai_flags;      /* input flags */
    int              ai_family;     /* protocol family for socket */
    int              ai_socktype;   /* socket type */
    int              ai_protocol;   /* protocol for socket */
    socklen_t        ai_addrlen;    /* length of socket-address */
    struct sockaddr *ai_addr;       /* socket-address for socket */
    char            *ai_canonname;  /* canonical name for service location */
    struct addrinfo *ai_next;       /* pointer to next in list */
};
After a successful call to getaddrinfo(), *res is a pointer to a linked list of one or more addrinfo structures.
So it is up to whatever sits "behind" getaddrinfo to do some caching or not, because getaddrinfo may or may not have used the DNS to retrieve the data.
Some DNS-specific APIs, like getdnsapi, will give the caller information about the TTL; see https://getdnsapi.net/documentation/spec/ and example 6.2:
6.2 Get IPv4 and IPv6 Addresses for a Domain Name
This example is similar to the previous one, except that it retrieves more information than just the addresses, so it traverses the replies_tree. In this case, it gets both the addresses and their TTLs.
Without any cache layer anywhere, since UDP is stateless, any new send must trigger a resolution in one form or another.
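To make the caching point concrete, here is a C++-style sketch of the application-level workaround using the same POSIX calls: resolve the host once with getaddrinfo, keep the result, and reuse it for every datagram. The host and port values mirror the names in the Ruby snippet and are placeholders:

#include <string>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    const char* GRAPHITE_SERVER = "graphite.internal.example";  // hypothetical host
    const char* STATSD_PORT     = "8125";

    addrinfo hints{};
    hints.ai_family   = AF_UNSPEC;
    hints.ai_socktype = SOCK_DGRAM;

    addrinfo* res = nullptr;
    if (getaddrinfo(GRAPHITE_SERVER, STATSD_PORT, &hints, &res) != 0)
        return 1;                        // resolution failed

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);

    std::string payload = "key|value";
    for (int i = 0; i < 1000; ++i) {
        // No further getaddrinfo calls: the cached sockaddr is reused.
        sendto(fd, payload.data(), payload.size(), 0,
               res->ai_addr, res->ai_addrlen);
    }

    close(fd);
    freeaddrinfo(res);
}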
You said:
"modified this code to use the internal IP address, not the DNS name"
You should instead install a local (on-the-box) recursive caching nameserver, such as unbound. All your local applications will benefit from it and from faster DNS resolution (depending also on how /etc/nsswitch.conf, /etc/resolv.conf and /etc/hosts are set up).
For the associated bug report hinted at by @Casper, it seems at its core to be more an issue of IPv6 vs IPv4. That could be solved either by adjusting /etc/gai.conf (or equivalent), or by doing some cleverer programming around opening the connection with the so-called "happy eyeballs" algorithm: you resolve both A and AAAA at the same time, which means two parallel DNS queries (because you cannot combine them into one per the protocol), and use whichever answer comes back fastest, with a slight preference for AAAA if you want to be in the modern camp. In that case you would fire the A query only a given number of milliseconds after the AAAA, to catch the case where you get no reply at all for AAAA, or a negative one. See RFC 6555 for details.
I have the following PULL / publisher ZMQ scheme on an Amazon EC2 machine:
I am working with the public IP address of my Amazon EC2 machine.
I am trying to send data via a ZMQ PUSH socket on the client side to a ZMQ PULL socket on the server side, which is this:
import zmq
from zmq.log.handlers import PUBHandler
import logging
# from zmq.asyncio import Context

def main():
    ctx = zmq.Context()

    publisher = ctx.socket(zmq.PUB)
    # publisher.bind("tcp://*:5557")
    publisher.bind("tcp://54.89.25.43:5557")

    handler = PUBHandler(publisher)
    logger = logging.getLogger()
    logger.addHandler(handler)

    print("Network Manager CNVSS Broker listening")

    collector = ctx.socket(zmq.PULL)
    # collector.bind("tcp://*:5558")
    collector.bind("tcp://54.89.25.43:5558")

    while True:
        message = collector.recv()
        print("Publishing update %s" % message)
        publisher.send(message)

if __name__ == '__main__':
    main()
But when I execute this script, I get this error:
(cnvss_nm) ubuntu#ip-172-31-55-72:~/cnvss_nm$ python pull_pub-nm.py
Traceback (most recent call last):
File "pull_pub-nm.py", line 28, in <module>
main()
File "pull_pub-nm.py", line 10, in main
publisher.bind("tcp://54.89.25.43:5557")
File "zmq/backend/cython/socket.pyx", line 547, in zmq.backend.cython.socket.Socket.bind
File "zmq/backend/cython/checkrc.pxd", line 25, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Cannot assign requested address
(cnvss_nm) ubuntu#ip-172-31-55-72:~/cnvss_nm$
I've changed the addresses to publisher.bind("tcp://*:5557") and collector.bind("tcp://*:5558") on the server side, and now my script runs:
(cnvss_nm) ubuntu#ip-x-x-x-x:~/cnvss_nm$ python pull_pub-nm.py
Network Manager CNVSS Broker listening
But from my client-side code (added recently), no data is sent.
#include <zmq.hpp>
#include <zmq.h>
#include <iostream>
#include "zhelpers.hpp"

using namespace std;

int main(int argc, char *argv[])
{
    zmq::context_t context(1);

    /* std::cout << "Sending message to NM Server…\n" << std::endl; */

    zmq::socket_t subscriber(context, ZMQ_SUB);
    subscriber.connect("tcp://localhost:5557");
    subscriber.setsockopt(ZMQ_SUBSCRIBE, "", 0);

    zmq::socket_t sender(context, ZMQ_PUSH);
    sender.connect("tcp://localhost:5558");

    string firstMessage = "Hola, soy el cliente 1";

    while (1)
    {
        // Wait for next request from client
        std::string string = s_recv(subscriber);
        std::cout << "Received request: " << string << std::endl;

        // Do some 'work'
        // sleep(1);

        // Send reply back to client
        // zmq::message_t message(firstMessage.size() + 1);
        // Either of the two approaches below works
        // memcpy(message.data(), firstMessage.c_str(), firstMessage.size() + 1);
        // s_send(sender, "Hola soy un responder 1");
        // sender.send(message);
    }
}
I think the problem is in my EC2 machine's network configuration or in the way I set up the server's IP address.
When I test the clients and the server locally, everything works perfectly.
Is there any possibility of performing some forwarding or NAT operation on my EC2 machine?
My clients do not reach the server.
I have security group rules allowing the above-mentioned ports 5557 and 5558.
How can I solve this?
I had a similar situation where I was using ZMQ on EC2 and getting "Cannot assign requested address." I was also using Elastic IP as suggested in the answer, but it did not work for me. It turned out that on EC2, the sending side (ZMQ.PUSH) needs to bind to the private IP rather than to the public, while the receiving side needs to bind to the public IP, so trying to bind the server to Elastic IP caused the error. After I changed it to bind the server ZMQ.PUSH side to Private IP and the client ZMQ.PULL to Elastic IP (on the same port), it worked.
How can I solve this?
1) If in doubt about the EC2 addresses, first try to test the reversed .bind() / .connect(), so that the EC2-side localhost address assignments are out of the game and your connectivity proof towards a known IP address does not depend on the EC2-side settings.
2) Next, given there are no details about the client-side part of the MCVE, I may have got the scenario wrong, so bear with me: these are the only compatible ZeroMQ Scalable Formal Communication Archetype socket matches available, up to API v4.2.x in 2018/Q2:
{ PUB:  [ SUB,
          XSUB,
          None
        ],
  PULL: [ PUSH,
          None
        ],
  ...
}
3) It is good engineering practice not to let unhandled exceptions happen, all the more so since a Context()-instance may still hold possession of an IP:PORT# (b)locked resource (sometimes even beyond the Python process termination; I have had many incidents with my own naive, and this way deadlocked, experiments in my past dark history :o) ).
Each step in the infrastructure setup ought to be wrapped in an error-handling clause, ideally including a finally: section, where the resources created so far get dismantled gracefully whenever an exception springs out. This way your code prevents forever-hanging orphans that leave you no option but to reboot the platform to get rid of these otherwise unsalvageable hostages. (A C++ flavour of the same idea is sketched just below.)
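A minimal sketch in the language of the client code above, assuming the cppzmq wrapper used there; the RAII destructors of zmq::socket_t and zmq::context_t play the role that the finally: block plays in Python:

#include <zmq.hpp>
#include <iostream>

int main()
{
    try
    {
        zmq::context_t context(1);
        zmq::socket_t collector(context, ZMQ_PULL);

        // Binding to an address the host does not own throws zmq::error_t
        // ("Cannot assign requested address") instead of silently failing.
        collector.bind("tcp://*:5558");

        // ... receive loop ...
    }
    catch (const zmq::error_t& e)
    {
        std::cerr << "ZeroMQ setup failed: " << e.what() << std::endl;
        return 1;
    }
    // Sockets and the context are destroyed here even if an exception was
    // thrown, so no port is left held by an orphaned process state.
    return 0;
}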
Problem solved, a final summary:
The initially indicated problem (diagnosed at the .bind() / .connect() phase) was, as depicted earlier, related to the Amazon EC2 instance's IP-address mapping, i.e. to the address term (localhost:port#) needed for any transport-class endpoint setup.
camdebu on Nov 1, 2012 5:07 PM explained all the steps needed: set up an Elastic IP for your EC2 instance. You will then have a static IP address. There's no cost for the Elastic IP as long as you have it pointed to an EC2 instance.
You should then have no problem connecting to your new IP address and port as long as your security group is set up correctly.
-Cam-
Check your Security Group rule. Make sure you allow the port to communicate from outside the instance (enable All TCP and check). [added by Yesu Jeya Bensh.P]
The recently posted client code, however, shows another issue: a mutual block, generated by a non-cooperating zmq::socket_t sender(context, ZMQ_PUSH) which actually never sends a single message.
Given the client goes into the while(1) loop as posted above, the associated peer will inadvertently get into an unsalvageable blocked state inside the Python-made main(), since:
def main():
    ...
    collector = ctx.socket( zmq.PULL )
    #collector.bind( "tcp://*:5558" )
    collector.bind( "tcp://54.89.25.43:5558" )
    while True:
        message = collector.recv() # THIS SLOC WILL BLOCK FOREVER HERE,
        ...                        # GIVEN <sender> NEVER SENDS...
So more care has to be taken to make the flow of events robust enough never to fall into this or a similar unsalvageable mutual block.
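For illustration only, here is a minimal sketch of the missing client-side send path, using the same old-style cppzmq API as the client code above; the server address is a placeholder to be replaced with the EC2 instance's reachable address:

#include <zmq.hpp>
#include <cstring>
#include <string>

int main()
{
    zmq::context_t context(1);

    zmq::socket_t sender(context, ZMQ_PUSH);
    sender.connect("tcp://<ec2-address>:5558");   // placeholder endpoint

    std::string firstMessage = "Hola, soy el cliente 1";

    // Actually push one message, so the server-side PULL socket has
    // something to receive and its recv() no longer blocks forever.
    zmq::message_t message(firstMessage.size());
    memcpy(message.data(), firstMessage.data(), firstMessage.size());
    sender.send(message);

    return 0;
}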
I have developed a client-server application with Casablanca (cpprestsdk).
Every 5 minutes a client sends information from its task manager (processes, CPU usage, etc.) to the server via a POST method.
The project should be able to manage about 100 clients.
Every time the server receives a POST request, it opens an output file stream ("uploaded.txt"), extracts some initial info from the client (login, password), processes it, saves everything in a file named after the client (for example client1.txt, client2.txt) in append mode, and finally replies to the client with a status code.
This is basically my POST handle code from server side:
void Server::handle_post(http_request request)
{
    auto fileBuffer =
        std::make_shared<Concurrency::streams::basic_ostream<uint8_t>>();

    try
    {
        auto stream = concurrency::streams::fstream::open_ostream(
            U("uploaded.txt"),
            std::ios_base::out | std::ios_base::binary)
        .then([request, fileBuffer](pplx::task<Concurrency::streams::basic_ostream<unsigned char>> Previous_task)
        {
            *fileBuffer = Previous_task.get();
            try
            {
                request.body().read_to_end(fileBuffer->streambuf()).get();
            }
            catch (const exception&)
            {
                wcout << L"<exception>" << std::endl;
                //return pplx::task_from_result();
            }
            //Previous_task.get().close();
        })
        .then([=](pplx::task<void> Previous_task)
        {
            fileBuffer->close();
            //Previous_task.get();
        })
        .then([](task<void> previousTask)
        {
            // This continuation is run because it is value-based.
            try
            {
                // The call to task::get rethrows the exception.
                previousTask.get();
            }
            catch (const exception& e)
            {
                wcout << e.what() << endl;
            }
        });

        //stream.get().close();
    }
    catch (const exception& e)
    {
        wcout << e.what() << endl;
    }

    ManageClient();

    request.reply(status_codes::OK, U("Hello, World!")).then([](pplx::task<void> t) { handle_error(t); });
    return;
}
Basically it works, but if I try to send info from two clients at the same time, sometimes it works and sometimes it doesn't.
Obviously the problem is when I open the "uploaded.txt" file stream.
Questions:
1) Is Casablanca's http_listener really multitasking? How many tasks is it able to handle?
2) I didn't find an example similar to mine in the documentation; the only one close to it is the "Casalence120" project, but it uses the Concurrency::Reader_writer_lock class (it seems to be a mutex-like approach).
What can I do in order to manage multiple POSTs?
3) Is it possible to read some client info before opening uploaded.txt?
I could open an output file stream directly with the name of the client.
4) If I lock access to the uploaded.txt file via a mutex, the server becomes sequential, and I think this is not a good way to use cpprestsdk.
I'm still getting familiar with cpprestsdk, so any suggestions would be helpful.
Yes, the REST SDK processes every request on a different thread.
I confirm there are not many examples using the listener.
The official sample using the listener can be found here:
https://github.com/Microsoft/cpprestsdk/blob/master/Release/samples/CasaLens/casalens.cpp
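For orientation, here is a minimal listener sketch of my own (not taken from that sample; the address and path are placeholders) showing where a handler like handle_post is plugged in and why several instances of it can run at the same time:

#include <cpprest/http_listener.h>
#include <iostream>
#include <string>

using namespace web::http;
using namespace web::http::experimental::listener;

int main()
{
    // Placeholder address; requests accepted here are dispatched to the
    // handler on threads from the library's shared thread pool.
    http_listener listener(U("http://localhost:8080/upload"));

    listener.support(methods::POST, [](http_request request)
    {
        // Several of these handlers can be running concurrently, which is
        // why the shared "uploaded.txt" stream needs coordination.
        request.reply(status_codes::OK, U("received"));
    });

    listener.open().wait();
    std::string line;
    std::getline(std::cin, line);   // keep the process alive until Enter
    listener.close().wait();
}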
I see you are working with VS. I would strongly suggest moving to VC++ 2015 or, better, VC++ 2017, because the most recent compiler supports coroutines.
Using co_await dramatically simplifies the readability of the code.
Essentially, every time you co_await a function, the compiler refactors the code into a "continuation", avoiding the penalty of freezing the thread executing the function itself. This way you get rid of the .then statements.
The file problem is a different story from the REST SDK. Accessing the file system concurrently is something you should test in a separate project. You can probably cache the first read and share the content with the other threads instead of accessing the disk every time.
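One possible way to handle the shared-file part, sketched in plain standard C++ (not cpprestsdk-specific; the function name and per-client file naming are illustrative only): serialize just the short append under a mutex, so the rest of the request handling stays concurrent.

#include <fstream>
#include <mutex>
#include <string>

// One mutex guarding the append; handlers running on different threads
// serialize only this short write, not the whole request.
std::mutex g_upload_mutex;

void append_client_record(const std::string& client_name, const std::string& record)
{
    std::lock_guard<std::mutex> lock(g_upload_mutex);

    // Append mode, one file per client (client1.txt, client2.txt, ...),
    // along the lines of question 3) above.
    std::ofstream out(client_name + ".txt", std::ios::app);
    out << record << '\n';
}

With one file per client, a per-client lock (or none at all, if each client only ever has one request in flight) would be finer-grained than a single global mutex.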
Instead of using a logger or database server I'd like to append information to one file from possibly many verticle instances.
There are versions of methods for writing asynchronously to a file.
Can I assume that Vert.x handles the synchronisation between the writes, so that they don't interfere, when using those versions of the methods marked as "async"?
There seems to be a rule that one can rely on Vert.x providing all isolation between concurrent processing out of the box. But is that true in the case of file write access?
Could you please include a code snippet in the answer that shows how to open and write to one file from many verticle instances with the finest possible granularity, e.g. for logging requests?
I wouldn't recommend writing to a single file with many different "writers". For concurrent logging I would stick to the single-writer principle.
Create a verticle which subscribes to the event bus and listens for messages to be logged. Let's call this verticle Logger; it listens on the address system.logger.
EventBus eb = vertx.eventBus();

eb.consumer("system.logger", message -> {
    // write to file
});
Verticles which want to log something send a message to the Logger verticle:
eventBus.send("system.logger", "foobar");
Appending to an existing file works something like this (didn't test):
vertx.fileSystem().open("file.log", new OpenOptions().setAppend(true), result -> {
    if (result.succeeded()) {
        Buffer buff = Buffer.buffer(message); // message body from the consumer above
        AsyncFile file = result.result();
        // The file was opened in append mode, so each write goes to the end of the file.
        file.write(buff, ar -> {
            if (ar.succeeded()) {
                System.out.println("done");
            } else {
                System.err.println("write failed: " + ar.cause());
            }
        });
    } else {
        System.err.println("open file failed " + result.cause());
    }
});
I am trying to create a Processing application connected with an Arduino.
I want the connection between the two to be established automatically, meaning that I do not specify the name of the port; instead I use Serial.list() to get the names of the available ports and then, with a for loop, check which one is printing the correct string.
The problem is that when I first access the /dev/cu.* ports, all the /dev/tty.* ports become busy, and vice versa. This is quite strange, and I do not want it to happen.
You should be able to use one (/dev/tty.*) or the other (/dev/cu.*), but not both at the same time, as they might point to the same resource in different ways.
I recommend listing the ports, checking the port prefix (against, let's say, /dev/tty.*, but not /dev/cu.*), initialising the Serial port, then exiting the loop that traverses the listed serial ports:
import processing.serial.*;

Serial arduino;
final int BAUD_RATE = 9600;

void setup(){
  String[] ports = Serial.list();
  for(int i = 0; i < ports.length; i++){               // go through each port
    if(ports[i].contains("tty.usbmodem")){             // find one that looks like an Arduino on OSX
      try{
        arduino = new Serial(this, ports[i], BAUD_RATE); // initialize the connection
        i = ports.length;                               // exit the loop; break should also work
        println("Arduino connection successfully initialized");
      }catch(Exception e){
        System.err.println("Error opening Serial port!\nPlease check USB connection and ensure the port is not already open in another application.");
        e.printStackTrace();
      }
    }
  }
  if(arduino == null) System.err.println("Serial connection to Arduino failed!");
}