Block incoming IPs in TIdHTTPServer - Indy

I want to block certain clients from connecting to my server, but I am not sure which event is best to use and how to find the remote IP...

In your app's code, using the OnConnect event is the simplest choice. You can get the client's IP from the Binding.PeerIP property of the provided AContext parameter, e.g. (FBlacklist below is a stand-in for however you store your banned IPs, such as a TStringList):
procedure TMyForm.IdHTTPServer1Connect(AContext: TIdContext);
begin
  // FBlacklist: assumed list of banned IPs, maintained elsewhere in the app
  if FBlacklist.IndexOf(AContext.Binding.PeerIP) <> -1 then
    AContext.Connection.Disconnect; // or raise an Exception...
end;
However, a better choice is to put your server app behind a firewall that blocks connections from the unwanted IPs so they never reach TIdHTTPServer in the first place.

Indy's IdSimpleServer reconnection

I want to be able to re-connect to a TIdSimpleServer after one client connects to it and then disconnects. The first client can connect and disconnect no problem, but the next client can't. I wrote a simple test procedure to demonstrate my problem.
procedure Tfrmmain.btnBlockingClick(Sender: TObject);
begin
  Server1.BeginListen;
  Server1.Listen;
  CodeSite.Send(csmLevel2, 'Listen');
  CodeSite.Send(csmLevel2, 'Server1.IOHandler.Connected', Server1.IOHandler.Connected);
  try
    while (Server1.Connected) do
    begin
      while Server1.IOHandler.CheckForDataOnSource() do
      begin
        CodeSite.Send(csmLevel3, 'InputBufferAsString', Server1.IOHandler.InputBufferAsString);
        Server1.IOHandler.WriteLn('0006CANPDD');
      end;
    end;
  finally
    Server1.Disconnect;
    CodeSite.Send(csmLevel4, 'Finally');
  end;
end;
This gives the following results in my CodeSite log:
Listen
Server1.IOHandler.Connected = True
Finally
Listen
Server1.IOHandler.Connected = False
Finally
Notice that the second connection doesn't seem to bind the IOHandler properly. I'm not sure where I should be looking. Any ideas?
Thanks
Steve
The problem is that you are reusing the same TIdSimpleServer object each time.
After the first disconnect, the same IOHandler is reused for the next connection, but the IOHandler.ClosedGracefully property remains true from the earlier connection because it is not being reset each time. The ClosedGracefully property is reset only by the IOHandler.Open() method, which TIdSimpleServer calls only when it creates a new IOHandler. Disconnect() does not free the IOHandler, but it does call IOHandler.Close().
The missing call to Open() on subsequent connections looks like a bug to me, so I have checked in a fix for it to Indy's SVN (rev 5103).
You can either upgrade to the latest SVN revision, or else you will have to destroy the TIdSimpleServer's IOHandler (or the TIdSimpleServer itself) in between connections.
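If you stay on an older build, a minimal sketch of that workaround (ResetServerIOHandler is a hypothetical helper, not code from the original answer) would be to free the IOHandler once a session ends, so that TIdSimpleServer creates, and therefore Open()s, a fresh one on the next Listen():
procedure Tfrmmain.ResetServerIOHandler;
var
  LIO: TIdIOHandler;
begin
  Server1.Disconnect;
  LIO := Server1.IOHandler;
  Server1.IOHandler := nil; // detach first so the connection forgets the old handler
  LIO.Free;                 // then free it; the next Listen() creates and Open()s a new one
end;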

Boost::asio UDP Broadcast with ephemeral port

I'm having trouble with UDP broadcast transactions under boost::asio, related to the following code snippet. Since I'm trying to broadcast in this instance, deviceIP = "255.255.255.255". devicePort is a specified management port for my device. I want to use an ephemeral local port, so I would prefer, if at all possible, not to have to socket.bind(); the code supports this for unicast by setting localPort = 0.
boost::asio::ip::address_v4 targetIP = boost::asio::ip::address_v4::from_string(deviceIP);
m_targetEndPoint = boost::asio::ip::udp::endpoint(targetIP, devicePort);

m_ioServicePtr = boost::shared_ptr<boost::asio::io_service>(new boost::asio::io_service);
m_socketPtr = boost::shared_ptr<boost::asio::ip::udp::socket>(new boost::asio::ip::udp::socket(*m_ioServicePtr));

m_socketPtr->open(m_targetEndPoint.protocol());
m_socketPtr->set_option(boost::asio::socket_base::broadcast(true));

// If no local port is specified, the default parameter is 0.
// If a local port is specified, bind to that port.
if (localPort != 0)
{
    boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), localPort);
    m_socketPtr->bind(localEndpoint);
}

if (m_forceConnect)
    m_socketPtr->connect(m_targetEndPoint);

this->AsyncReceive(); // Register async receive callback and buffer

// Start a thread running the io_service event loop
m_socketThread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&MyNetworkBase::RunSocketThread, this)));
No matter what I do with the following settings, the transmit works fine, and I can use Wireshark to see the response packets coming back from the device as expected. These response packets are also broadcasts, as the device may be on a different subnet to the PC searching for it.
The issues are extremely strange to my mind, but are as follows:
If I specify the local port and set m_forceConnect = false, everything works fine, and my receive callback fires appropriately.
If I set m_forceConnect = true in the constructor, but pass in a local port of 0, the transmit works fine, but my receive callback never fires. I would assume this is because the 'target' (m_targetEndpoint) is 255.255.255.255, and since the device has a real IP, the response packet gets filtered out.
(What I actually want:) If m_forceConnect = false (and data is transmitted using a send_to call) and localPort = 0, therefore taking an ephemeral port, my RX callback immediately fires with error code 10022, which I believe is an "Invalid Argument" (WSAEINVAL) socket error.
Can anyone suggest why I can't use the connection in this manner (not explicitly bound and not explicitly connected)? I obviously don't want to use socket.connect() in this case, as I want to respond to anything I receive. I also don't want to use a predefined port, as I want the user to be able to construct multiple copies of this object without port conflicts.
As some people may have noticed, the overall aim of this is to use the same network-interface base class to handle both the unicast and broadcast cases. Obviously for the unicast version, I can perfectly happily call m_socket->connect(), as I know the device's IP, and I receive the responses since they're from the connected IP address; therefore I set m_forceConnect = true, and it all just works.
As all my transmits use send_to, I have also tried socket.connect(endpoint(ip::address_v4::any(), devicePort)), but I get a 'The requested address is not valid in its context' exception when I try it.
I've tried a pretty serious hack:
boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), m_socketPtr->local_endpoint().port());
m_socketPtr->bind(localEndpoint);
where I extract the initial ephemeral port number and attempt to bind to it, but funnily enough that throws an Invalid Argument exception when I try and bind.
OK, I found a solution to this issue. Under Linux it's not necessary, but under Windows I discovered that if you are neither binding nor connecting, you must have transmitted something before you make the call to async_receive_from(), which is issued inside my this->AsyncReceive() method.
My solution: make a dummy transmission of an empty string immediately before making the async receive call under Windows, so the modified code becomes:
m_socketPtr->set_option(boost::asio::socket_base::broadcast(true));

// If no local port is specified, the default parameter is 0.
// If a local port is specified, bind to that port.
if (localPort != 0)
{
    boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), localPort);
    m_socketPtr->bind(localEndpoint);
}

if (m_forceConnect)
    m_socketPtr->connect(m_targetEndPoint);

// A dummy TX is required for the socket to acquire the local port properly under windoze.
// Transmitting an empty string works fine for this, but the TX must take place BEFORE
// the first call to async_receive_from(...).
#ifdef WIN32
m_socketPtr->send_to(boost::asio::buffer("", 0), m_targetEndPoint);
#endif

this->AsyncReceive(); // Register async receive callback and buffer
m_socketThread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&MyNetworkBase::RunSocketThread, this)));
It's a bit of a hack in my book, but it is a lot better than implementing all the machinery needed to defer the call to the async receive until after the first transmission.

Elegant way to stop socket read operation from outside

I implemented a small client/server application in Ruby and I have the following problem: the server starts a new client session in a new thread for each connecting client, but it should be possible to shut down the server and stop all client sessions in a 'polite' way from the outside, without just killing a thread whose current state I don't know.
So I decided that the client session object gets a 'stop' flag which can be set from outside and is checked before each action. The problem is that shutdown should not have to wait for a client whose session is just blocked waiting for a request. I have the following temporary solution:
def read_client
  loop do
    begin
      timeout(1) { return @client.gets }
    rescue Timeout::Error
      if @stop
        stop # Notifies the client and closes the connection
        return nil
      end
    end
  end
end
But that sucks and looks terrible; intuitively, this should be such a normal thing that there has to be a 'normal' solution to it. I don't even know whether it is safe, or whether it could happen that the gets operation reads part of the client request, but not all of it.
Another side question: is setting/getting a boolean flag an atomic operation in Ruby, or do I need an additional Mutex for the flag?
A thread-per-client approach is usually a disaster for server design, and blocking I/O is difficult to interrupt without OS-specific tricks. Check out non-blocking sockets; see, for example, the answers to this question.
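One common non-blocking pattern (a sketch of mine, not from the answer above) is to wait in IO.select on both the client socket and an internal wake-up pipe; the shutdown code writes one byte to the pipe, which wakes the select immediately instead of polling with one-second timeouts:
# Assumes the session's initializer sets up the pipe:
#   @stop_r, @stop_w = IO.pipe
def read_client
  ready, = IO.select([@client, @stop_r]) # blocks until data arrives or shutdown is requested
  if ready.include?(@stop_r)
    stop       # notifies the client and closes the connection
    return nil
  end
  @client.gets # note: can still block briefly if only a partial line has arrived
end

# Called from outside to request a polite shutdown:
def request_stop
  @stop = true
  @stop_w.write('x') # wakes any session blocked in IO.select
end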

How can I tell if my Ruby server script is being overloaded?

I have a daemonized Ruby script running on my server that looks like this:
@server = TCPServer.open(61101)
loop do
  @thr = Thread.new(@server.accept) do |sock|
    Thread.current[:myArrayOfHashes] = [] # hashes containing attributes of myObject
    SystemTimer.timeout_after(5) do
      Thread.current[:string] = sock.gets
      sock.close
      # parse the string and load the data into myArrayOfHashes
      Myobject.transaction do # update the myObjects table
        Thread.current[:myArrayOfHashes].each do |h|
          Thread.current[:newMyObject] = Myobject.new
          # load up the new object with data
          Thread.current[:newMyObject].save
        end
      end
    end
  end
  @thr.join
end
This server receives and manages data for my Rails application, which is all running on Mac OS 10.6. The clients call the server every 15 minutes, on the 15, and while I currently only have 16 or so clients doing so, I'm wondering about the following:
If two clients call at close enough to the same time, will one client's connection attempt fail?
How can I figure out how many client connections my server can accommodate at the same time?
How can I monitor how much memory my server is using?
Also, is there an article you can point me toward that discusses the best way to implement this kind of server? For instance, can I have multiple instances of the server listening on the same port? Would that even help?
I am using Bluepill to monitor my server daemons.
1 and 2
The answer is no, two clients connecting close together will not make a connection fail (although with enough simultaneous clients it may; see below).
The reason is that the operating system gives every server socket a so-called listen queue (backlog). Even if you are not calling accept fast enough in your program, the OS will keep buffering incoming connections for you, for as long as the listen queue does not fill up.
Now what is the size of this queue then?
In most cases the default size is 5. The size is set when you call listen on the socket after creating it (see the man page for listen here).
In Ruby, TCPServer automatically calls listen for you, and if you look at the C source code you will find that it indeed sets the size to 5:
https://github.com/ruby/ruby/blob/trunk/ext/socket/ipsocket.c#L108
SOMAXCONN is defined as 5 here:
https://github.com/ruby/ruby/blob/trunk/ext/socket/mkconstants.rb#L693
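As an aside (not part of the original answer): if you expect bursts of simultaneous connections, you can ask the OS for a deeper queue yourself with TCPServer#listen, right after opening the server socket:
@server = TCPServer.open(61101)
@server.listen(128) # request a larger pending-connection backlog from the OS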
Now what happens if you don't call accept fast enough and the queue gets filled?
The answer is found in the man page of listen:
The backlog argument defines the maximum length to which the queue of pending connections for sockfd may grow. If a connection request arrives when the queue is full, the client may receive an error with an indication of ECONNREFUSED or, if the underlying protocol supports retransmission, the request may be ignored so that a later reattempt at connection succeeds.
In your code, however, there is one problem which can make the queue fill up if more than 5 clients try to connect at the same time: you're calling @thr.join at the end of the loop.
What effectively happens when you do this is that your server will not accept any new incoming connections until everything inside your accept-thread has finished executing.
So if the database work and the other things you are doing inside the accept-thread take a long time, the listen queue may fill up in the meantime. It depends on how long your processing takes and how many clients could potentially be connecting at the exact same time. A sketch of the fix is shown below.
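Here is a minimal sketch of that change (handle_client is a hypothetical helper holding all the per-connection work from your original loop):
@server = TCPServer.open(61101)
loop do
  Thread.new(@server.accept) do |sock|
    handle_client(sock) # all of the per-connection work from above
  end
  # No @thr.join here: the loop returns to accept immediately,
  # so the listen queue is drained as fast as clients arrive.
end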
3
On Linux/OS X (you mentioned Mac OS 10.6), the easiest way to monitor memory is to just run top in your console. For more advanced memory monitoring options you might want to check these out:
ruby/ruby on rails memory leak detection
track application memory usage on heroku

Problem with Boost Asio asynchronous connection using C++ in Windows

Using MS Visual Studio 2008 C++ for 32-bit Windows (XP), I am trying to construct a POP3 client managed from a modeless dialog box.
The first step is to create a persistent object, say pop3, with all the Boost.Asio machinery to do asynchronous connections, in the WM_INITDIALOG message of the dialog box procedure. Something like:
case WM_INITDIALOG:
    return (iniPop3Dlg (hDlg, lParam));
Here we assume that iniPop3Dlg() creates the pop3 object on the heap, say pointed to by pop3p, then connects to the remote server, and a session is initiated with the client's id and password (USER and PASS commands). At this point the server is in the TRANSACTION state.
Then, in response to user input, the dialog box procedure calls the appropriate function. Say:
case IDS_TOTAL:  // get how many emails in the server
    total (pop3p);
    return FALSE;
case IDS_DETAIL: // get date, sender and subject for each email in the server
    detail (pop3p);
    return FALSE;
Note that total() uses the POP3 STAT command to get how many emails are on the server, while detail() uses two commands consecutively: first STAT to get the total, and then a loop with the GET command to retrieve the content of each message.
As an aside: detail() and total() share the same subroutines (the STAT handling routine), and when finished, both leave the session as-is, that is, without closing the connection; the socket remains open and the server stays in the TRANSACTION state.
When either option is selected for the first time, things run as expected and the desired results are obtained. But on the second attempt, the connection hangs.
A closer inspection shows that the first time the statement
    socket_.get_io_service().run();
is used, it never ends.
Note that all asynchronous write and read routines use the same io_service, and each routine calls socket_.get_io_service().reset() prior to any run().
Note also that all R/W operations use the same timer, which is reset to a zero wait after each operation completes:
    dTimer_.expires_from_now (boost::posix_time::seconds(0));
I suspect that the problem is in the io_service or in the timer, and in the fact that subsequent executions occur in a different invocation of the routine.
As a first approach to my problem, I hope that someone can shed some light on it, prior to a more detailed exposition of the (very few and simple) routines involved.
Have you looked at the Asio examples and studied them? There are several asynchronous examples that should help you understand the basic control flow. Pay particular attention to the main event loop started by invoking io_service::run: it's important to understand that control is not expected to return to the caller until the io_service has no more work to do.
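For a dialog-driven client like this, a common idiom (a sketch with my own names, not code from the question) is to keep one background thread inside run() for the entire session by giving the io_service a work object, rather than calling reset()/run() around every operation:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

static void run_service(boost::asio::io_service* io)
{
    io->run(); // blocks until the io_service is stopped or runs out of work
}

int main()
{
    boost::asio::io_service io;

    // The work object keeps run() from returning between operations, so a
    // single long-lived thread can service every async read/write of the
    // POP3 session, with no reset()/run() pair per command.
    boost::asio::io_service::work work(io);
    boost::thread runner(boost::bind(&run_service, &io));

    // ... post async_connect / async_read / async_write operations here;
    // their completion handlers execute on the runner thread ...

    io.stop();     // at shutdown, make run() return
    runner.join();
    return 0;
}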
