Reusing socket handle - vb6

We have a legacy VB6 automation application that communicates over sockets on an as-needed basis.
But opening and establishing a connection to the remote port (only when required) frequently takes too long.
So I am planning to write another application (say, a socket server) that opens the required sockets and keeps the connections alive. This application will write the connected socket handle values to a file or database.
Is it possible in VB6 to create a socket object using the socket handle of an already opened socket owned by another process (the socket server application in this case)?

This is exactly the type of situation that WSADuplicateSocket() is intended for.
Your "server" can create a socket and use WSADuplicateSocket() to fill a WSAPROTOCOL_INFO record that describes the socket. The "server" can then expose the WSAPROTOCOL_INFO to your VB app using any IPC mechanism you want. The VB app can pass the WSAPROTOCOL_INFO to WSASocket() to access the socket and use it as needed.

No, Windows sockets cannot be shared cross-process, not even through handle inheritance (this is because, although a socket is usually a kernel handle, an LSP might return something that is not a handle and thus cannot be inherited). You should have one process open and maintain the connection, and have the others talk to that process to communicate with the server.

Related

Alternative ways to connect to an embedded etcd server

I use the embedded etcd server for testing and was wondering if there is a more direct, in-memory way of connecting to it, as an alternative to using a socket. The socket isn't a big deal, but it feels silly leaving the process only to connect back into the same process via a socket; it would be interesting to do that with channels or something.

ZeroMQ connect to physically non connected socket

I'm trying to understand whether ZeroMQ can connect a PUB or SUB socket to an IP address that does not exist yet. Will it automatically connect when this IP address appears in the future?
Or should I check for its existence before connecting?
Is the behaviour the same for PUB and SUB sockets?
The answer is buried somewhat in the manual, here:
for most transports and socket types the connection is not performed immediately but as needed by ØMQ. Thus a successful call to zmq_connect() does not mean that the connection was or could actually be established. Because of this, for most transports and socket types the order in which a server socket is bound and a client socket is connected to it does not matter. The ZMQ_PAIR sockets are an exception, as they do not automatically reconnect to endpoints.
As that quote says, the order of binding and connecting does not matter. This is extremely useful, as you don't then have to worry about start-up order; the client will be quite happy waiting for a server to come online, able to run other things without blocking on the connect.
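A minimal pyzmq sketch of that ordering (the endpoint is just a placeholder): the SUB end connects first, nothing is listening yet, and the PUB end binds afterwards.

```python
# Connect-before-bind with ZeroMQ (pyzmq). zmq_connect() succeeds even
# though no peer exists yet; the library establishes the TCP connection
# later, once something binds to the endpoint.

import time
import zmq

ctx = zmq.Context()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")        # nothing is listening here yet
sub.setsockopt_string(zmq.SUBSCRIBE, "")   # subscribe to everything

pub = ctx.socket(zmq.PUB)                  # normally this lives in another process
pub.bind("tcp://127.0.0.1:5556")           # bound *after* the connect

time.sleep(0.5)                            # give the late joiner a moment to settle
pub.send_string("hello")
print(sub.recv_string())                   # prints "hello"
```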
Other Things That Are Useful
The direction of bind/connect is independent of the pattern used on top; thus a PUB socket can be connected to a SUB socket that has been bound to an interface (whereas the other way round might feel more natural).
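For instance, a rough sketch with the roles flipped (the endpoint is a placeholder):

```python
# Direction of bind/connect is independent of the PUB/SUB pattern:
# here the SUB side binds to an interface and the PUB side connects to it.

import zmq

ctx = zmq.Context()

sub = ctx.socket(zmq.SUB)
sub.bind("tcp://*:5558")                   # the subscriber is the stable endpoint
sub.setsockopt_string(zmq.SUBSCRIBE, "")

pub = ctx.socket(zmq.PUB)
pub.connect("tcp://127.0.0.1:5558")        # the publisher dials in
```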
The other thing that I think a lot of people don't realise is that you can bind (or connect) sockets more than once, to different transports. So a PUB socket can quite happily send to SUB clients that are local in-process threads, to other processes on the same machine via ipc, and to clients on remote machines via tcp; see the sketch below.
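A hedged pyzmq sketch of that, with placeholder endpoints:

```python
# One PUB socket bound to several transports at once (pyzmq).
# Subscribers in the same process (inproc), in other processes on the same
# machine (ipc, POSIX only), and on remote machines (tcp) all receive the
# same messages.

import zmq

ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)
pub.bind("inproc://updates")        # other threads in this process
pub.bind("ipc:///tmp/updates")      # other processes on this machine
pub.bind("tcp://*:5557")            # remote machines

local_sub = ctx.socket(zmq.SUB)
local_sub.connect("inproc://updates")
local_sub.setsockopt_string(zmq.SUBSCRIBE, "")

remote_sub = ctx.socket(zmq.SUB)
remote_sub.connect("tcp://127.0.0.1:5557")
remote_sub.setsockopt_string(zmq.SUBSCRIBE, "")

pub.send_string("tick")             # both subscribers receive this
```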
There are other things that you can do. If you use the ZMQ_FD option from here, you can get ZMQ_EVENT notifications in some way or other (I can't remember the detail) which will tell you when the underlying connection has been successfully made. Using the file descriptor allows you to include it in a zmq_poll() (or some other reactor like epoll() or select()). You can also exploit the heartbeat functionality that a socket can have, which will tell you if the connection dies for some reason or other (e.g. a crashed process at the other end, or a network cable falling out). Use of a reactor like zmq_poll(), epoll() or select() means that you can have a pure actor-model, event-driven system, with no need to routinely check up on status flags, etc.
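For example, a small pyzmq sketch of such a receive loop built on zmq_poll() (the endpoint is a placeholder); the ZMQ_FD value is available as sock.get(zmq.FD) if you would rather feed an external select()/epoll() reactor instead:

```python
# Event-driven receive loop using zmq_poll() via pyzmq's Poller.

import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "")

poller = zmq.Poller()
poller.register(sub, zmq.POLLIN)

while True:
    events = dict(poller.poll(timeout=1000))   # milliseconds
    if events.get(sub) == zmq.POLLIN:
        print("got:", sub.recv_string())
    else:
        pass  # timeout: a place for housekeeping instead of blocking forever
```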
Using these facilities in ZMQ allows for the making of very robust distributed applications/systems that know when various bits of themselves have died, come back to life, dropped off the network for a while, etc. For example, just knowing that a link is dead perhaps means that a node in your distributed app changes its behaviour somehow to adapt to that.

Can you poll a TCP socket created by another program?

Is it possible on Windows 7 to write a C++ or .NET program that finds out whether an existing, connected TCP socket created by another program has any data in its send or receive buffer?
Use case: There's a 16-bit legacy application doing TCP communication with some .NET applications. To work around a concurrency issue in the legacy app, it would be helpful if we could inspect either of two sockets that are connected to each other and tell whether there's some data sent on one end but not yet received on the other end.
The connection is TCP and the sockets are on the loopback interface (127.0.0.1).
Approach: WSADuplicateSocket() + WSAPoll() could be the solution but I don't know how to get a hold of the socket handle programmatically because the socket is created by another program.

Does websocket only broadcasts the data to all clients connected instead of sending to a particular client?

I am new to WebSockets. While reading about WebSockets, I have not been able to find answers to some of my doubts. I would appreciate it if someone could clarify them.
Does WebSocket only broadcast data to all connected clients instead of sending to a particular client? Every example I tried (mainly chat apps) sends data to all the clients. Is it possible to alter this?
How does it work for clients located behind NAT (behind a router)?
Since the client-server connection will always remain open, how will it affect server performance for a large number of connections?
Since I want all my clients to get real-time updates, all of them need to be connected to the server, so how should I handle the client connection limit?
NOTE: My client is not a web browser but a desktop application.
No, WebSocket is not only for broadcasting. You send messages to specific clients; when you broadcast, you just send the same message to all connected clients, but you can send different messages to different clients, for example in a game session.
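As a rough illustration (using a recent version of Python's websockets package; the "target:text" framing and the registration step are invented for this example), the server just keeps its own map from client identity to connection and calls send() on whichever one it wants:

```python
# Sending to a particular client rather than broadcasting. Any WebSocket
# server library works the same way: keep a map of connections and pick one.

import asyncio
import websockets

clients = {}  # name -> connection

async def handler(ws):
    name = await ws.recv()            # first message: the client's chosen name
    clients[name] = ws
    try:
        async for message in ws:
            # "alice:hello" -> deliver only to the client registered as "alice"
            target, _, text = message.partition(":")
            if target in clients:
                await clients[target].send(text)        # one specific client
            else:
                for peer in clients.values():           # fall back to broadcast
                    await peer.send(message)
    finally:
        clients.pop(name, None)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()        # run forever

asyncio.run(main())
```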
The clients connect to the server and initialise the connections, so NAT is not a problem.
It's good to use a scalable server, e.g. an event-driven server (such as Node.js) that doesn't use a separate thread for each connection, or an Erlang server with lightweight processes (a good choice for a game server).
This should not be a problem if you use a good server OS (e.g. Linux), but may be a limitation if your server uses a desktop version of Windows (e.g. may be limited to 200 connections).

Suggestions on keeping connections alive with FTP file listing via AJAX?

I have a multi-user Ruby on Rails web application that can interact with an FTP server via AJAX. The application allows the user to browse an FTP site. Javascript makes an AJAX call which communicates with a server-script that returns a list of files and directories within a given directory.
This works fine. However, each time a directory listing is requested, the server must re-establish a connection with the FTP server, which takes a lot of time. I'm looking for a way to leave the FTP connection open until some number of timeout seconds has passed.
I could probably do this using threads (though, I'm completely open to other ideas) or some fancy connection-pooling scheme (perhaps via a daemon that manages this).
What are some ways I could persist and regain reference to connections in my ruby source?
Someone suggested using a "Connection: Keep-Alive" header, but I don't see how that would help in this case.
Not a complete answer, but if you did have some sort of daemon or something managing the connection, you could use TCP keepalives to keep the control connection alive for an extended period of time.
FTP uses two connections. A control connection is established client-to-server, and data connections are established server-to-client for each request. So each directory listing or GET would prompt another data connection to be opened for the duration of the request.
You shouldn't worry about keeping lots of listening sockets open because the data connections are negotiated over the control connection just prior to being established. (Also the data connections could be made client-to-server instead of server-to-client by using passive mode if you want, but it's still a separate connection.)
Either way, I think the source of sluggishness is more to do with closing and reopening the control connection (and authenticating) for each request. I think if you have some process that keeps the control connection open using TCP keepalives (SO_KEEPALIVE socket option), you'll see a big improvement.
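To make the idea concrete (sketched with Python's ftplib rather than your Ruby stack; host and credentials are placeholders), the long-lived process would hold one logged-in control connection with SO_KEEPALIVE enabled and reuse it for every listing:

```python
# Keep one FTP control connection alive and reuse it for directory listings,
# instead of reconnecting and re-authenticating on every AJAX request.
# In the Rails app this would live in a small daemon/worker that the web
# requests talk to.

import socket
from ftplib import FTP

ftp = FTP()
ftp.connect("ftp.example.com", 21)      # placeholder host
ftp.login("user", "password")           # placeholder credentials

# Enable TCP keepalives on the control connection so idle periods between
# requests don't let NATs/firewalls (or the server) silently drop it.
ftp.sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

def list_directory(path):
    # Each listing still opens a short-lived data connection, but the
    # expensive control-connection setup and login are not repeated.
    return ftp.nlst(path)

print(list_directory("/"))
```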
