I'm trying to send video from Max/Jitter using [jit.net.send] to a Processing sketch. The sketch is then supposed to redraw the image on screen. However, I can't seem to receive anything sent with [jit.net.send] in Processing.
On the Jitter side the IP is 127.0.0.1 and the port is 7474 (these are also the defaults; I can receive the frames fine using [jit.net.recv]). This is the Processing sketch:
import processing.net.*;

Client myClient;

void setup() {
  size(200, 200);
  myClient = new Client(this, "127.0.0.1", 7474);
}

void draw() {
  if (myClient.available() > 0) {
    println(myClient.read());
  }
}
When I run the sketch, Processing says:
java.net.ConnectException: Connection refused: connect
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:529)
at java.net.Socket.connect(Socket.java:478)
at java.net.Socket.<init>(Socket.java:375)
at java.net.Socket.<init>(Socket.java:189)
at processing.net.Client.<init>(Client.java:77)
at sketch_140123a.setup(sketch_140123a.java:24)
at processing.core.PApplet.handleDraw(PApplet.java:2241)
at processing.core.PGraphicsJava2D.requestDraw(PGraphicsJava2D.java:243)
at processing.core.PApplet.run(PApplet.java:2140)
at java.lang.Thread.run(Thread.java:662)
Is processing.net.Client not suitable for this? I'm on Windows 7 32-bit and the firewall is off.
I solved it in the end in a hackish way: I saved a continuous stream of bitmap images from Jitter to disk, in a location served by a web server running on the same machine as Jitter. On the Processing side I made individual requests for those images. I can provide the code if anyone reading this is interested.
I am experimenting with ZeroMQ, and I find it really interesting that it does not matter whether connect or bind happens first. I tried looking into the ZeroMQ source code, but it was too big for me to find anything.
The code is as follows:
# client side
import zmq
ctx = zmq.Context()
socket = ctx.socket(zmq.PAIR)
socket.connect('tcp://*:2345') # line [1]
# make it wait here
# server side
import zmq
ctx = zmq.Context()
socket = ctx.socket(zmq.PAIR)
socket.bind('tcp://localhost:2345')
# make it wait here
If I start the client side first, the server has not been started yet, but magically the code is not blocked at line [1]. At this point, I checked with ss and made sure that the client is not listening on any port, nor does it have any open connection. Then I start the server. Now the server is listening on port 2345, and magically the client is connected to it. My question is: how does the client know the server is now online?
The best place to ask your question is the ZMQ mailing list, as many of the developers (and founders!) of the library are active there and can answer your question directly, but I'll give it a try. I'll admit that I'm not a C developer so my understanding of the source is limited, but here's what I gather, mostly from src/tcp_connector.cpp (other transports are covered in their respective files and may behave differently).
Line 214 starts the open() method, and that looks to be the meat of what's going on.
To answer your question about why the code is not blocked at Line [1], see line 258. It's specifically calling a method to make the socket behave asynchronously (for specifics on how unblock_socket() works you'll have to talk to someone more versed in C, it's defined here).
On line 278, it attempts to make the connection to the remote peer. If it's successful immediately, you're good, the bound socket was there and we've connected. If it wasn't, on line 294 it sets the error code to EINPROGRESS and fails.
To see what happens then, we go back to the start_connecting() method on line 161. This is where the open() method is called from, and where the EINPROGRESS error is used. My best understanding of what's happening here is that if at first it does not succeed, it tries again, asynchronously, until it finds its peer.
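To make that behaviour concrete, here's a minimal sketch in Python with pyzmq (threads stand in for the two processes, and the addresses are my own choice rather than the ones in the question) showing that a connect issued before the bind simply returns, and the queued message is delivered once the binder appears:

import threading
import time
import zmq

ctx = zmq.Context()

def late_binder():
    time.sleep(2)                        # the binder shows up 2 seconds late
    server = ctx.socket(zmq.PAIR)
    server.bind('tcp://127.0.0.1:2345')
    print('server received:', server.recv())
    server.close()

t = threading.Thread(target=late_binder)
t.start()

client = ctx.socket(zmq.PAIR)
client.connect('tcp://127.0.0.1:2345')   # returns immediately, no peer yet
client.send(b'hello')                    # queued; delivered once the bind happens
t.join()
client.close()
ctx.term()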
I think the best answer is in the ZeroMQ wiki:
When should I use bind and when connect?
As very general advice: use bind on the most stable points in your architecture, and connect from the more volatile endpoints. For request/reply, the service provider might be the point where you bind and the client uses connect, like plain old TCP.
If you can't figure out which parts are more stable (i.e. peer-to-peer), think about a stable device in the middle that both sides can connect to.
The question of bind or connect is often overemphasized. It's really just a matter of what the endpoints do and if they live long — or not. And this depends on your architecture. So build your architecture to fit your problem, not to fit the tool.
And
Why do I see different behavior when I bind a socket versus connect a socket?
ZeroMQ creates queues per underlying connection, e.g. if your socket is connected to 3 peer sockets there are 3 messages queues.
With bind, you allow peers to connect to you, thus you don't know how many peers there will be in the future and you cannot create the queues in advance. Instead, queues are created as individual peers connect to the bound socket.
With connect, ZeroMQ knows that there's going to be at least a single peer, and thus it can create a single queue immediately. This applies to all socket types except ROUTER, where queues are only created after the peer we connect to has acknowledged our connection.
Consequently, when sending a message to a bound socket with no peers, or a ROUTER with no live connections, there's no queue to store the message.
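A small pyzmq illustration of that difference (this is mine, not part of the quoted wiki text; the PUSH sockets and port numbers are just picked for the demo): a connected socket gets its queue immediately, while a freshly bound socket with no peers has nowhere to put a message.

import zmq

ctx = zmq.Context()

connected = ctx.socket(zmq.PUSH)
connected.connect('tcp://127.0.0.1:5555')   # no peer exists yet
connected.send(b'queued', zmq.NOBLOCK)      # succeeds: the queue was created on connect

bound = ctx.socket(zmq.PUSH)
bound.bind('tcp://127.0.0.1:5556')          # nobody has connected yet
try:
    bound.send(b'lost', zmq.NOBLOCK)        # no peer queue to store the message
except zmq.Again:
    print('bound socket has no queue yet')

connected.close(linger=0)
bound.close(linger=0)
ctx.term()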
When you call socket.connect('tcp://*:2345') or socket.bind('tcp://localhost:2345') you are not calling these methods directly on an underlying TCP socket. All of ZMQ's IO, including connecting/binding the underlying TCP sockets, happens in threads that are abstracted away from the user.
When these methods are called on a ZMQ socket, it essentially queues these events within the IO threads. Once the IO threads begin to process them, they will not return an error unless the event is truly impossible; otherwise they will continually attempt to connect/reconnect.
This means that a ZMQ socket may return without an error even if socket.connect was not successful. In your example it would likely fail silently at first, but then quickly reattempt and succeed once you ran the server side of the script.
It may also allow you to send messages while in this state (depending on the state of the queue in this situation, rather than the state of the network), and it will then attempt to transmit the queued messages once the IO threads are able to successfully connect. This also applies if a working TCP connection is later lost: the queues may continue to accept messages for the unconnected socket while the IO threads attempt to automatically re-establish the lost connection in the background. If the endpoint takes a while to come back online, it should still receive its messages.
To better explain, here's another example:
<?php
$pid = pcntl_fork();
if ($pid)
{
    $context = new ZMQContext();
    $client = new ZMQSocket($context, ZMQ::SOCKET_REQ);
    try
    {
        $client->connect("tcp://0.0.0.0:9000");
    } catch (ZMQSocketException $e)
    {
        var_dump($e);
    }
    $client->send("request");
    $msg = $client->recv();
    var_dump($msg);
}
else
{
    // in spawned process
    echo "waiting 2 seconds\n";
    sleep(2);
    $context = new ZMQContext();
    $server = new ZMQSocket($context, ZMQ::SOCKET_REP);
    try
    {
        $server->bind("tcp://0.0.0.0:9000");
    } catch (ZMQSocketException $e)
    {
        var_dump($e);
    }
    $msg = $server->recv();
    $server->send("response");
    var_dump($msg);
}
The binding process does not begin until 2 seconds after the connecting process. But once the child process wakes and successfully binds, the req/rep transaction takes place without error:
jason@jason-VirtualBox:~/php-dev$ php play.php
waiting 2 seconds
string(7) "request"
string(8) "response"
If I were to replace tcp://0.0.0.0:9000 on the binding socket with tcp://0.0.0.0:2345, it would hang because the client is still trying to connect to tcp://0.0.0.0:9000, yet still without error.
But if I replace both with tcp://localhost:2345, I get an error on my system, because it can't bind on localhost, making the call truly impossible:
object(ZMQSocketException)#3 (7) {
["message":protected]=>
string(38) "Failed to bind the ZMQ: No such device"
["string":"Exception":private]=>
string(0) ""
["code":protected]=>
int(19)
["file":protected]=>
string(28) "/home/jason/php-dev/play.php"
["line":protected]=>
int(40)
["trace":"Exception":private]=>
array(1) {
[0]=>
array(6) {
["file"]=>
string(28) "/home/jason/php-dev/play.php"
["line"]=>
int(40)
["function"]=>
string(4) "bind"
["class"]=>
string(9) "ZMQSocket"
["type"]=>
string(2) "->"
["args"]=>
array(1) {
[0]=>
string(20) "tcp://localhost:2345"
}
}
}
["previous":"Exception":private]=>
NULL
}
If you need real-time information about the state of the underlying sockets, you should look into socket monitors. Using socket monitors along with ZMQ poll allows you to poll for both socket events and queue events.
Keep in mind that polling a monitor socket using ZMQ poll is not the same as polling a ZMQ_FD resource via select, epoll, etc. The ZMQ_FD is edge-triggered and therefore doesn't behave the way you would expect when polling network resources, whereas a monitor socket within ZMQ poll is level-triggered. Also, monitor sockets are very lightweight, and the latency between the system event and the resulting monitor event is typically sub-microsecond.
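As a rough sketch of that approach (using pyzmq's monitor helpers; the SUB socket and the endpoint are made up for the example), you can register the monitor socket in the same poller as the data socket:

import zmq
from zmq.utils.monitor import recv_monitor_message

ctx = zmq.Context()
sock = ctx.socket(zmq.SUB)
sock.setsockopt(zmq.SUBSCRIBE, b'')
monitor = sock.get_monitor_socket()         # inproc PAIR socket carrying connection events
sock.connect('tcp://127.0.0.1:5555')        # the peer may or may not exist yet

poller = zmq.Poller()
poller.register(sock, zmq.POLLIN)
poller.register(monitor, zmq.POLLIN)

for _ in range(10):                         # poll a few times, then give up
    ready = dict(poller.poll(1000))
    if monitor in ready:
        evt = recv_monitor_message(monitor) # dict with 'event', 'value', 'endpoint'
        if evt['event'] == zmq.EVENT_CONNECTED:
            print('connected to', evt['endpoint'])
        elif evt['event'] == zmq.EVENT_CONNECT_RETRIED:
            print('still retrying', evt['endpoint'])
    if sock in ready:
        print('message:', sock.recv())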
I read the Netty proxy example (https://github.com/netty/netty/tree/master/example/src/main/java/io/netty/example/proxy) and I have two requirements.
1. I want to use a fixed number of proxy->server connections. In the proxy example, the proxy->server connection count equals the client->proxy connection count, which may be too many.
2. When a client->proxy connection ends, the proxy->server connection has to be kept alive, and when a new client->proxy connection is established, the existing proxy->server connections should be reused.
How can this be implemented?
The first requirement can be realized rather easily by using a DefaultChannelGroup to store your channels. Assuming that the ChannelHandler which accepts incoming connections is a singleton, you can use the following code:
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.group.ChannelGroup;
import io.netty.channel.group.DefaultChannelGroup;
import io.netty.util.concurrent.GlobalEventExecutor;

// initialize the ChannelGroup in your singleton handler
ChannelGroup ALL_CONNECTIONS = new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);
...
@Override
public synchronized void channelActive(ChannelHandlerContext ctx) throws Exception
{
    if (ALL_CONNECTIONS.size() > 100) {
        ctx.channel().close(); // don't accept further connections
    } else {
        ALL_CONNECTIONS.add(ctx.channel()); // closed channels are removed from the group automatically
        // do whatever logic.
    }
}
I think you are thinking of "connection pooling" for the second requirement. If so, I don't think it's a great idea, since when a new client "connects" to your server it is always a new connection, because it is coming from outside your network. However, I am not sure of this, and someone with more knowledge may be able to answer.
What you need for both requirements, I think, is a client with a connection pool.
Both HttpComponents and AsyncHttpClient support pooling; you could have a look at the code in AsyncHttpClient, which also has a Netty-based implementation.
I'm using the ZeroMQ 3.2.0 C++ library. I use zmq_connect to connect to a port before zmq_bind has been called on it, but the function returns success. How can I tell that the connect failed? My code is:
#include <zmq.h>

void *ctx = zmq_ctx_new();  // zmq_ctx_new() takes no arguments
void *skt = zmq_socket(ctx, ZMQ_SUB);
int ret = zmq_connect(skt, "tcp://192.168.9.97:5561"); // 192.168.9.97:5561 is not bound by anyone
// zmq_connect returns zero anyway
This is actually a feature of ZeroMQ: connection status and so on are abstracted away from you. As far as I know, there is no exposed information you can check to see whether you're connected or not. This means that you can connect even if the server is temporarily down, and ZeroMQ will handle everything when the server becomes available later. This can be both a blessing and a curse.
What most people end up doing if they need to know the connection status is to implement some sort of heartbeat, e.g. a REQ/REP ping/pong.
Have a look at the Lazy Pirate pattern for an example of how to ensure reliability from the client's perspective.
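For example, a bare-bones pyzmq version of that ping/pong check (the endpoint, the timeout, and the assumption that a REP peer answers the ping are all mine; this is only a sketch, not the full Lazy Pirate pattern) might look like this:

import zmq

ENDPOINT = 'tcp://127.0.0.1:5561'   # hypothetical server address
TIMEOUT_MS = 1000

ctx = zmq.Context()

def peer_is_alive():
    req = ctx.socket(zmq.REQ)
    req.setsockopt(zmq.LINGER, 0)   # don't block on close if the ping never left
    req.connect(ENDPOINT)           # always "succeeds", as explained above
    req.send(b'ping')
    alive = req.poll(TIMEOUT_MS, zmq.POLLIN) != 0   # did a reply arrive in time?
    if alive:
        req.recv()
    req.close()                     # throw the REQ away and recreate it per ping
    return alive

print('server reachable:', peer_is_alive())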
I am writing an application to send a small file (~2 kB) from a Netty server to a client through a WebSocket.
To test whether the file was sent successfully, I ran the following test:
1. A client connects to the server.
2. The client machine is set to drop all packets coming from the server.
3. The server sends a file to the client.
4. The result of the ChannelFuture is checked on the server.
In this test I got true from future.isSuccess() and future.isDone() immediately when sending a ~2 kB file, even though the client cannot receive it.
I repeated the test with larger files and found that if the file is larger than ~7 kB, the ChannelFuture waits for feedback from the transmission. This is the result I expected.
I am using Netty 3.6.1 and my application is built based on org.jboss.netty.example.http.websocketx.server.
Here is part of my code:
ChannelBuffer cb = ChannelBuffers.copiedBuffer(myfile_byteArray);
ChannelFuture result = ctx.getChannel().write(new BinaryWebSocketFrame(cb));
result.addListener(new ChannelFutureListener() {
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            System.err.println("future.isSuccess()");
        }
        if (future.isDone()) {
            System.err.println("future.isDone()");
        }
        if (future.isCancelled()) {
            System.err.println("future.isCancelled()");
        }
    }
});
Does anyone know how I could get the ChannelFuture to behave correctly for files with a small size?
Many thanks in advance!
The ChannelFuture will only be notified as successful if the data could be written out to the remote peer. So if it is notified, the other peer received the data without a problem. This is true for all sizes of data.
I have a really weird problem that is driving me crazy.
I have a Ruby server and a Flash client (Action Script 3). It's a multiplayer game.
The problem is that everything works perfectly and then, suddenly, a random player stops receiving data. When the server closes the connection because of inactivity, about 20-60 seconds later, the client receives all the buffered data.
The client uses XMLSocket for retrieving data, so the way the client receives data is not the problem.
socket.addEventListener(Event.CONNECT, connectHandler);

function connectHandler(event)
{
    sendData(sess);
}

function sendData(dat)
{
    trace("SEND: " + dat);
    addDebugData("SEND: " + dat)
    if (socket.connected) {
        socket.send(dat);
    } else {
        addDebugData("SOCKET NOT CONNECTED")
    }
}

socket.addEventListener(DataEvent.DATA, dataHandler);

function dataHandler(e:DataEvent) {
    var data:String = e.data;
    workData(data);
}
The server flushes data after every write, so it's not a flushing problem:
sock.write(data + DATAEOF)
sock.flush()
DATAEOF is a null char, so the client can parse the string.
When the server accepts a new socket, it sets sync to true, to autoflush, and TCP_NODELAY to true too:
newsock = serverSocket.accept
newsock.sync = true
newsock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, true)
This is my research:
Info: I was dumping netstat data to a file each second.
When the client stops receiving data, netstat shows that the socket status is still ESTABLISHED.
Some seconds after that, the send queue grows in line with the data sent.
tcpflow shows that packets are sent twice.
When the server closes the socket, the socket status changes to FIN_WAIT1, as expected. Then tcpflow shows that all the buffered data is sent to the client, but the client doesn't receive it. Some seconds after that, the connection disappears from netstat and tcpflow shows the same data being sent again; this time the client receives it, starts sending data to the server, and the server receives it. But it's too late... the server has already closed the connection.
I don't think it's an OS/network problem, because I've moved from a VPS located in Spain to Amazon EC2 in Ireland and the problem remains.
I don't think it's a client network problem either, because this occurs dozens of times per day; the average number of online users is about 45-55, with about 400 unique users per day, so the ratio is extremely high.
EDIT:
I've done more research. I've changed the server to C++.
When a client stops sending data, after a while the server receives a "Connection reset by peer" error. At that moment, tcpdump shows me that the client sent an RST packet. This could be because the client closed the connection and the server tried to read, but... why did the client close the connection? I think the answer is that the client is not the one closing the connection; it's the kernel. Here is some info: http://scie.nti.st/2008/3/14/amazon-s3-and-connection-reset-by-peer
Basically, as I understand it, Linux kernels 2.6.17+ increased the maximum size of the TCP window/buffer, and this started to cause other gear to wig out, if it couldn’t handle sufficiently large TCP windows. The gear would reset the connection, and we see this as a “Connection reset by peer” message.
I've followed the steps and now it seems that the server closes connections only when the client loses its connection to the internet.
I'm going to add this as an answer so people know a bit more about this.
I think the answer is that the kernel is the one closing the connection. Here is some info: http://scie.nti.st/2008/3/14/amazon-s3-and-connection-reset-by-peer
Basically, as I understand it, Linux kernels 2.6.17+ increased the maximum size of the TCP window/buffer, and this started to cause other gear to wig out, if it couldn’t handle sufficiently large TCP windows. The gear would reset the connection, and we see this as a “Connection reset by peer” message.
I've followed the steps and now it seems that the server closes connections only when the client loses its connection to the internet.