I am simply trying to verify that an IP address and port are available. According to my reading, if TIdTCPClient.Connect(HOST, PORT) succeeds, that should verify the IP address and port are reachable. It doesn't: as soon as Connect is executed, my application is abruptly terminated.
Using Lazarus 2.0.0RC3 on a Parallels Mojave VM. The Connect is in a DYLIB called from a simple test application. In the DYLIB, before I attempt a REST call, I verify the IP address and port (that's the only thing I am trying to do). If I skip the call to TIdTCPClient.Connect, the REST call succeeds (I know the IP address and port are correct; the server is sitting next to me). BUT before putting this code into the wild, I wanted a way to verify that the server (IP address/port) was available and pass back an error if not.
I have tried using an IP address, a valid host name (like Google.com), and a simple port like port 50. If I do not provide a port I get an error: 'A Port is Required'.
I have set the IP address and port using the TIdTCPClient properties, and have also passed them as parameters to the Connect method, all with the same (catastrophic) result.
function IsInternetConnected: boolean;
var
  IdTCPClient1: TIdTCPClient;
begin
  (*
  Result := True;
  Exit;
  *)
  // Verify internet available
  Writelog('DEBUG', 'Enter IsInternetConnected');
  Result := False;
  // Create before the try/finally so the finally never frees an unassigned reference
  IdTCPClient1 := TIdTCPClient.Create;
  try
    try
      IdTCPClient1.ReadTimeout := 2000;
      IdTCPClient1.ConnectTimeout := 2000;
      IdTCPClient1.Host := 'xxx.xxx.xx.xx'; // This is a valid IP address
      IdTCPClient1.Port := xxxx; // This is a valid port number
      ShowMessage('B4 Connect');
      //IdTCPClient1.Connect(HOST_NAME, HOST_PORT); // SAME RESULT AS SETTING PROPERTIES
      // THE FOLLOWING LINE CAUSES AN AV, KILLS THE APPLICATION
      IdTCPClient1.Connect;
      ShowMessage('After Connect'); // NEVER GETS HERE
      IdTCPClient1.Disconnect;
      Result := True;
    except
      on E: Exception do
      begin
        ShowMessage('ERROR Failed to verify internet connection: ' + E.Message);
        Writelog('ERROR', 'Failed to verify internet connection: ' + E.Message);
        Result := False;
      end;
    end;
  finally
    FreeAndNil(IdTCPClient1);
    Writelog('DEBUG', 'Exit IsInternetConnected');
  end;
end;
I should get a Result of True if the IP/port combination is reachable, or False if not; that's the point of the exercise.
What actually gets logged in the host application's log is that an error (an AV) is returned from the DYLIB; the host then executes the finally block of the calling routine and immediately runs the finalisation section of the host program, i.e. it terminates abruptly.
It should simply get back a response to say the call was unsuccessful and why (no internet connection).
I want to block certain client "OnConnect" to my Server, but I am not sure which event is best to use and how to find the remote IP...
In your app's code, using the OnConnect event is the simplest choice. You can get the client's IP from the Binding.PeerIP property of the provided AContext parameter, eg:
procedure TMyForm.IdHTTPServer1Connect(AContext: TIdContext);
begin
  // IsBlacklisted() stands in for whatever lookup you use
  if IsBlacklisted(AContext.Binding.PeerIP) then
    AContext.Connection.Disconnect; // or raise an Exception...
end;
However, a better choice is to put your server app behind a firewall that blocks connections from the unwanted IPs before they reach TIdHTTPServer in the first place.
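For instance, on a Linux host an iptables rule along these lines drops a blacklisted client before it ever reaches the server (203.0.113.5 is a placeholder address, and port 80 is assumed):
iptables -A INPUT -s 203.0.113.5 -p tcp --dport 80 -j DROP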
I've been scouring the internet and can't find much at all about posting forms in golang tests. This is my attempt at it. I get the error "dial tcp: too many colons in address ::1" though. If I change the address to "http://localhost:8080/" I get "dial tcp 127.0.0.1:8080: connection refused".
I've read that putting the (IPv6) address in brackets will fix the problem, but then I get the error "unrecognized protocol".
var addr = "http://::1/"
h := handlers.GetHandler()
server := httptest.NewServer(h)
server.URL = addr
req, err := http.PostForm(addr+"login",
	url.Values{"username": {"lemonparty"}, "password": {"bluewaffle"}})
if err != nil {
	log.Fatal(err)
}
tl;dr: the Listener in httptest.Server doesn't use httptest.Server.URL as the URL to listen on. It doesn't care what that value is; it listens on the local host's lowest open port number.
The URL property on httptest.Server is not really doing anything. Change it all you want, just don't send your requests there. Check out this example program: https://play.golang.org/p/BsH38WLkrJ
Basically, if I change the server's URL and then send the request to the value I set it to, it doesn't work; but if I send the request to the default value, it does.
Also check out the source, http://golang.org/src/net/http/httptest/server.go?s=415:1018#L65, as well as the certs at the bottom of the file; the listener is clearly hard-coded to the lowest open port on the local host. If you want the server to listen at another URL, you have to supply the Listener yourself.
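To make that concrete, here is a minimal sketch of the working pattern: post to server.URL, the address the test server actually listens on, rather than overwriting it. handlers.GetHandler() is carried over from the question (its import path is the asker's own); everything else is the standard library.

package handlers_test

import (
	"net/http"
	"net/http/httptest"
	"net/url"
	"testing"
	// plus the asker's handlers package; import path unknown
)

func TestLogin(t *testing.T) {
	h := handlers.GetHandler() // the handler under test, from the question
	server := httptest.NewServer(h)
	defer server.Close()

	// server.URL looks like "http://127.0.0.1:<port>"; use it as-is.
	resp, err := http.PostForm(server.URL+"/login",
		url.Values{"username": {"lemonparty"}, "password": {"bluewaffle"}})
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()
}

If the server really must listen on a fixed address, create it with httptest.NewUnstartedServer, close its default Listener, swap in your own net.Listener, and then call Start().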
I'm having trouble with UDP broadcast transactions under boost::asio, related to the following code snippet. Since I'm trying to broadcast in this instance, deviceIP = "255.255.255.255". devicePort is a specified management port for my device. I want to use an ephemeral local port, so I would prefer, if at all possible, not to have to socket.bind() after the connection; the code supports this for unicast by setting localPort = 0.
boost::asio::ip::address_v4 targetIP = boost::asio::ip::address_v4::from_string(deviceIP);
m_targetEndPoint = boost::asio::ip::udp::endpoint(targetIP, devicePort);
m_ioServicePtr = boost::shared_ptr<boost::asio::io_service>(new boost::asio::io_service);
m_socketPtr = boost::shared_ptr<boost::asio::ip::udp::socket>(new boost::asio::ip::udp::socket(*m_ioServicePtr));
m_socketPtr->open(m_targetEndPoint.protocol());
m_socketPtr->set_option(boost::asio::socket_base::broadcast(true));
// If no local port is specified, default parameter is 0
// If local port is specified, bind to that port.
if(localPort != 0)
{
    boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), localPort);
    m_socketPtr->bind(localEndpoint);
}
if(m_forceConnect)
    m_socketPtr->connect(m_targetEndPoint);
this->AsyncReceive(); // Register async receive callback and buffer
m_socketThread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&MyNetworkBase::RunSocketThread, this))); // Start thread running io_service process
No matter what I do in terms of the following settings, the transmit is working fine, and I can use Wireshark to see the response packets coming back from the device as expected. These response packets are also broadcasts, as the device may be on a different subnet to the pc searching for it.
The issues are extremely strange to my mind, but are as follows:
If I specify the local port and set m_forceConnect = false, everything works fine, and my receive callback fires appropriately.
If I set m_forceConnect = true in the constructor, but pass in a local port of 0, the transmit works fine, but my receive callback never fires. I would assume this is because the 'target' (m_targetEndpoint) is 255.255.255.255, and since the device has a real IP, the response packet gets filtered out.
(what I actually want) If m_forceConnect = false (and data is transmitted using a send_to call), and local port = 0, therefore taking an ephemeral port, my RX callback immediately fires with an error code 10022, which I believe is an "Invalid Argument" socket error.
Can anyone suggest why I can't use the connection in this manner (not explicitly bound and not explicitly connected)? I obviously don't want to use socket.connect() in this case, as I want to respond to anything I receive. I also don't want to use a predefined port, as I want the user to be able to construct multiple copies of this object without port conflicts.
As some people may have noticed, the overall aim of this is to use the same network-interface base-class to handle both the unicast and broadcast cases. Obviously for the unicast version, I can perfectly happily m_socket->connect() as I know the device's IP, and I receive the responses since they're from the connected IP address, therefore I set m_forceConnect = true, and it all just works.
As all my transmits use send_to, I have also tried socket.connect(endpoint(ip::address_v4::any(), devicePort)), but I get a 'The requested address is not valid in its context' exception when I try it.
I've tried a pretty serious hack:
boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), m_socketPtr->local_endpoint().port());
m_socketPtr->bind(localEndpoint);
where I extract the initial ephemeral port number and attempt to bind to it, but funnily enough that throws an Invalid Argument exception when I try and bind.
OK, I found a solution to this issue. Under Linux it's not necessary, but under Windows I discovered that if you are neither binding nor connecting, you must have transmitted something before you make the call to async_receive_from(), which is made inside my this->AsyncReceive() method.
My solution: make a dummy transmission of an empty string immediately before making the AsyncReceive call under Windows, so the modified code becomes:
m_socketPtr->set_option(boost::asio::socket_base::broadcast(true));
// If no local port is specified, default parameter is 0
// If local port is specified, bind to that port.
if(localPort != 0)
{
    boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), localPort);
    m_socketPtr->bind(localEndpoint);
}
if(m_forceConnect)
    m_socketPtr->connect(m_targetEndPoint);
// A dummy TX is required for the socket to acquire the local port properly under windoze
// Transmitting an empty string works fine for this, but the TX must take place BEFORE the first call to async_receive_from(...)
#ifdef WIN32
m_socketPtr->send_to(boost::asio::buffer("", 0), m_targetEndPoint);
#endif
this->AsyncReceive(); // Register async receive callback and buffer
m_socketThread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&MyNetworkBase::RunSocketThread, this)));
It's a bit of a hack in my book, but it is a lot better than implementing all the machinery required to defer the call to the async receive until after the first transmission.
I'm trying to write a program that receives DHCP discoveries (UDP) and forwards them on to a given IP address using a different source IP address depending on the content of a specific field (GIADDR) in the DHCP packet.
I got the receiving and sending bits working, but I'm having an issue with using, as the source IP address, anything that is not an IP address configured on the local machine.
I believe this can only be done using raw sockets; is that true?
Are there any examples out there of how to do that in Go?
I've spent a couple of days looking around but could not find much.
Cheers,
Sal
There are a number of hurdles to jump with what you propose:
Security
In general, being able to set the source IP address for a packet could be a very dangerous thing security wise. Under linux, in order to forge your own raw DHCP packets with custom headers, you will need to run your application as root or from an application with the CAP_NET_RAW capability (see setcap).
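For example, a built binary can be granted just that capability instead of full root (the binary name here is hypothetical):
sudo setcap cap_net_raw+ep ./dhcp-forwarder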
Raw Sockets in Go
The standard net library does not provide raw socket capability because it is very specialized and the API may be subject to change as people begin to use it in anger.
The go.net subrepository provides an ipv4 and an ipv6 package, the former of which should suit your needs:
http://godoc.org/code.google.com/p/go.net/ipv4#NewRawConn
Header Spoofing
You will need to use ipv4.RawConn's ReadFrom method to read your source packet. You should then be able to use most of those fields, along with your GIADDR logic, to set up the headers for the WriteTo call. It will probably look something like:
for {
	hdr, payload, _, err := conn.ReadFrom(buf)
	if err != nil { ... }
	hdr.ID = 0
	hdr.Checksum = 0
	hdr.Src = ...
	hdr.Dst = ...
	if err := conn.WriteTo(hdr, payload, nil); err != nil { ... }
}
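For context, here is a rough sketch of the setup that loop assumes, using the go.net ipv4 package linked above; conn and buf match the names in the loop, error handling is kept minimal, and (as noted) the process needs root or CAP_NET_RAW:

// Open a raw IPv4 socket restricted to UDP payloads.
c, err := net.ListenPacket("ip4:udp", "0.0.0.0")
if err != nil {
	log.Fatal(err)
}
// Wrap it so we can read and write the IPv4 headers ourselves.
conn, err := ipv4.NewRawConn(c)
if err != nil {
	log.Fatal(err)
}
buf := make([]byte, 1500) // room for a typical DHCP message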
A previous question asked whether changing one line of code implemented persistent SSL connections. After seeing that question's responses, and given the dearth of SSL documentation, the following appear to be true:
for the server, a persistent connection is simply doing repeated requests/responses between SSL_accept() and SSL_set_shutdown().
according to this page, the client has to indicate how many requests there will be by sending the appropriate "Content-length:" header or using an agreed-upon terminating request.
However, there's no guarantee the client will send what it's supposed to. Therefore, it would seem a server using blocking sockets can hang indefinitely on a SSL_read() while waiting for additional requests that never arrive. (SSL_CTX_set_timeout() doesn't appear to cause a subsequent SSL_read() to exit early, so it's not clear how to do timed-out connections as described at this Wikipedia page if sockets are blocking.)
Apparently, a server can indicate it won't do keep-alive by returning a "Connection: Close" header with a response, so I've ended up with the following code, which at least should always correctly do a single request/response per connection:
while TRUE do
begin // wait for incoming TCP connection
  if notzero(listen(listen_socket, 100)) then continue; // listen failed
  client_len := SizeOf(sa_cli);
  sock := accept(listen_socket, @sa_cli, @client_len); // create socket for connection
  if sock = INVALID_SOCKET then continue; // accept failed
  ssl := SSL_new(ctx); // TCP connection ready, create ssl structure
  if assigned(ssl) then
  begin
    SSL_set_fd(ssl, sock); // assign socket to ssl structure
    if SSL_accept(ssl) = 1 then // handshake worked
    begin
      request := '';
      repeat // gather request
        bytesin := SSL_read(ssl, buffer, sizeof(buffer)-1);
        if bytesin > 0 then
        begin
          buffer[bytesin] := #0;
          request := request + buffer;
        end;
      until SSL_pending(ssl) <= 0;
      if notempty(request) then
      begin // decide on response, avoid keep-alive
        response := 'HTTP/1.0 200 OK'#13#10'Connection: Close'#13#10 + etc;
        SSL_write(ssl, pchar(response)^, length(response));
      end; // else read empty or failed
    end; // else handshake failed
    SSL_set_shutdown(ssl, SSL_SENT_SHUTDOWN or SSL_RECEIVED_SHUTDOWN);
    CloseSocket(sock);
    SSL_free(ssl);
  end; // else ssl creation failed
end; // infinite while
Two questions:
(1) Since SSL_accept() must succeed before SSL_read() is reached, is it true that SSL_read() can never hang waiting for the first request?
(2) How should this code be modified to do timed-out persistent/keep alive SSL connections with blocking sockets (if that's even possible)?
To quote this letter, "The only way to ensure that indefinite blocking is avoided is to use nonblocking I/O." So I guess I'll give up trying to time out blocked SSL_read()s.
(1) If the client connects but does not send a request (a DoS attack, for instance), then SSL_read() will hang.
(2) Try calling setsockopt(SO_RCVTIMEO) on the accepted SOCKET to set a read timeout on it.