I'm looking into ZeroMQ for its PGM support.
Running on Windows (in a VirtualBox VM with macOS as the host, in case that matters), using the NetMQ library.
The test I want to do is very simple: send messages from A to B as fast as possible...
First I used TCP as the transport; this easily reached >150 000 messages per second, with two receivers keeping pace.
Then I wanted to test PGM; all I did was to replace the address "tcp://*:5556" with "pgm://239.0.0.1:5557" on both sides.
Now, the PGM tests give very strange results: the sender easily gets to >200 000 messages/s; the receiver, though, manages to process only about 500 messages/s!?
So, I don't understand what is happening.
After slowing down the sender (sleeping 10 ms after each message, since otherwise it's practically impossible to investigate the flow), it appears to me that the receiver tries to keep up: initially it sees every message passing by, then it chokes and misses a range of messages, then it tries to catch up again...
I played with the HWM and Recovery Interval settings, but that didn't seem to make much difference (?!).
Can anyone explain what's going on?
Many thanks,
Frederik
Note: Not sure if it matters: as far as I understand, I don't use OpenPGM - I just downloaded the ZeroMQ setup and enabled 'Multicasting Support' in Windows.
This is the Sender code:
using System;
using System.Text;
using System.Timers;
using NetMQ;

class MassSender
{
    private const string TOPIC_PREFIX = "Hello:";
    private static int messageCounter = 0;
    private static int timerCounter = 0;

    public static void Main(string[] args)
    {
        Timer timer = new Timer(1000); // report throughput once per second
        timer.Elapsed += timer_Elapsed;
        SendMessages_0MQ_NetMQ(timer);
    }

    private static void SendMessages_0MQ_NetMQ(Timer timer)
    {
        using (NetMQContext context = NetMQContext.Create())
        {
            using (NetMQSocket publisher = context.CreateSocket(ZmqSocketType.Pub))
            {
                //publisher.Bind("tcp://*:5556");
                publisher.Bind("pgm://239.0.0.1:5557"); // No interface IP specified, so the default interface is used.
                timer.Start();
                while (true)
                {
                    string message = GetMessage();
                    byte[] body = Encoding.UTF8.GetBytes(message);
                    publisher.Send(body);
                }
            }
        }
    }

    private static string GetMessage()
    {
        return TOPIC_PREFIX + "Message " + (++messageCounter).ToString();
    }

    static void timer_Elapsed(object sender, ElapsedEventArgs e)
    {
        Console.WriteLine("=== SENT {0} MESSAGES SO FAR - TOTAL AVERAGE IS {1}/s ===", messageCounter, messageCounter / ++timerCounter);
    }
}
and the Receiver:
using System;
using System.Text;
using System.Timers;
using NetMQ;

class MassReceiver
{
    private const string TOPIC_PREFIX = "Hello:";
    private static int messageCounter = 0;
    private static int timerCounter = 0;
    private static string lastMessage = String.Empty;

    static void Main(string[] args)
    {
        // Assume that sender and receiver are started simultaneously.
        Timer timer = new Timer(1000);
        timer.Elapsed += timer_Elapsed;
        ReceiveMessages_0MQ_NetMQ(timer);
    }

    private static void ReceiveMessages_0MQ_NetMQ(Timer timer)
    {
        using (NetMQContext context = NetMQContext.Create())
        {
            using (NetMQSocket subscriber = context.CreateSocket(ZmqSocketType.Sub))
            {
                subscriber.Subscribe(""); // Subscribe to everything.
                //subscriber.Connect("tcp://localhost:5556");
                subscriber.Connect("pgm://239.0.0.1:5557"); // No interface IP specified, so the default interface is used.
                timer.Start();
                while (true)
                {
                    byte[] body = subscriber.Receive();
                    messageCounter++; // Count only after a message has actually arrived.
                    string message = Encoding.UTF8.GetString(body);
                    lastMessage = message; // Only show the message when the timer elapses, otherwise throughput drops dramatically.
                }
            }
        }
    }

    static void timer_Elapsed(object sender, ElapsedEventArgs e)
    {
        Console.WriteLine("=== RECEIVED {0} MESSAGES SO FAR - TOTAL AVERAGE IS {1}/s === (Last: {2})", messageCounter, messageCounter / ++timerCounter, lastMessage);
    }
}
What is the size of each message?
You are not using OpenPGM; you are using what is called ms-pgm (Microsoft's implementation of PGM).
Anyway, you might have to change the MulticastRate of the socket (it defaults to 100 kbit/s).
Also, what kind of network are you using?
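For example, with NetMQ the rate can be raised through the socket options before binding. A minimal sketch, assuming this NetMQ version exposes MulticastRate (in kbit/s) and MulticastRecoveryInterval on the socket's Options:

using (NetMQContext context = NetMQContext.Create())
using (NetMQSocket publisher = context.CreateSocket(ZmqSocketType.Pub))
{
    // Assumed option names; they map to ZMQ_RATE and ZMQ_RECOVERY_IVL.
    publisher.Options.MulticastRate = 1000000; // 1 Gbit/s instead of the 100 kbit/s default
    publisher.Options.MulticastRecoveryInterval = TimeSpan.FromSeconds(10);
    publisher.Bind("pgm://239.0.0.1:5557");
    // ... send as before ...
}

The subscriber side would set the same options before Connect, since the rate limit applies in both directions.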
I ran into the same issue: the sender could send thousands of messages per second, but my receiver could only receive about two hundred messages per second.
I suspected the sending or receiving rate was being limited, so I checked
ZMQ_RATE: Set multicast data rate in http://api.zeromq.org/3-0:zmq-setsockopt
The default rate is just 100 kbit/s.
When I increased it to 1 Gbit/s, everything was OK.
const int rate = 1000000; // ZMQ_RATE is in kbit/s, so this is a TX and RX rate of 1 Gbit/s
m_socket.setsockopt(ZMQ_RATE, &rate, sizeof(rate));
I'm learning Rx and would like to use Console.ReadLine as a source for observable sequences.
I know that I can create an IEnumerable using yield return, but for my concrete use case I've decided to create a C# event, so that potentially many observers will be able to share the same keyboard input.
Here is my code:
using System;
using System.Reactive.Disposables;
using System.Reactive.Linq;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    private delegate void OnNewInputLineHandler(string line);
    private static event OnNewInputLineHandler OnNewInputLineEvent = _ => { };

    static void Main(string[] args)
    {
        Task.Run((Action) GetInput);
        var input = ConsoleInput();
        input.Subscribe(s => Console.WriteLine("1: " + s));
        Thread.Sleep(30000);
    }

    private static void GetInput()
    {
        while (true)
            OnNewInputLineEvent(Console.ReadLine());
    }

    private static IObservable<string> ConsoleInput()
    {
        return Observable.Create<string>(
            (IObserver<string> observer) =>
            {
                OnNewInputLineHandler h = observer.OnNext;
                OnNewInputLineEvent += h;
                return Disposable.Create(() => { OnNewInputLineEvent -= h; });
            });
    }
}
My problem: when I run the GetInput method as shown above, the very first input line is not sent to the sequence (but it is sent to the event handler).
However, if I replace it with the following version, everything works as expected:
private static void GetInput()
{
    while (true)
    {
        var s = Console.ReadLine();
        OnNewInputLineEvent(s);
    }
}
Could someone shed some light on why this might happen?
You're trying to make life difficult for yourself. There is almost always a way to make things simple with Rx. It's just a matter of learning to think more functionally rather than procedurally.
This is all you need:
using System;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using System.Threading;

class Program
{
    static void Main(string[] args)
    {
        var subscription = ConsoleInput().Subscribe(s => Console.WriteLine("1: " + s));
        Thread.Sleep(30000);
        subscription.Dispose();
    }

    private static IObservable<string> ConsoleInput()
    {
        return
            Observable
                .FromAsync(() => Console.In.ReadLineAsync())
                .Repeat()
                .Publish()
                .RefCount()
                .SubscribeOn(Scheduler.Default);
    }
}
This lets multiple subscribers share the one input through the .Publish().RefCount(). And the .SubscribeOn(Scheduler.Default) pushes the subscription out to a new thread - without it you block on a subscription.
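For example, a second subscriber can share the same input without triggering a second ReadLine loop; a small sketch reusing the ConsoleInput() from above:

static void Main(string[] args)
{
    var input = ConsoleInput();
    var sub1 = input.Subscribe(s => Console.WriteLine("1: " + s));
    var sub2 = input.Subscribe(s => Console.WriteLine("2: " + s)); // same lines, same underlying ReadLine loop
    Thread.Sleep(30000);
    sub1.Dispose();
    sub2.Dispose(); // RefCount tears down the shared source when the last subscriber goes away
}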
If you move Task.Run((Action) GetInput); to after the subscription, your code will work as desired. This is because in your original version, the first call of OnNewInputLineEvent(Console.ReadLine()) runs before you've hooked OnNewInputLineEvent to the observer.OnNext.
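A minimal reordering of the original Main illustrates the point:

static void Main(string[] args)
{
    var input = ConsoleInput();
    input.Subscribe(s => Console.WriteLine("1: " + s));
    // Start pumping Console.ReadLine only after the observer is hooked up,
    // so the first line can no longer be raised before anyone listens.
    Task.Run((Action) GetInput);
    Thread.Sleep(30000);
}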
I am using PushStreamContent to keep a persistent connection to each client. Pushing short heartbeat messages to each client stream every 20 seconds works great with 100 clients, but at about 200 clients the heartbeats first arrive a few seconds late, and then stop showing up at all.
My controller code is:
// Based loosely on https://aspnetwebstack.codeplex.com/discussions/359056
// and http://blogs.msdn.com/b/henrikn/archive/2012/04/23/using-cookies-with-asp-net-web-api.aspx
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Web;
using System.Web.Http;

public class LiveController : ApiController
{
    public HttpResponseMessage Get(HttpRequestMessage request)
    {
        if (_timer == null)
        {
            // 20-second timer
            _timer = new Timer(TimerCallback, this, 20000, 20000);
        }

        // Get '?clientid=xxx'
        HttpResponseMessage response = request.CreateResponse();
        var kvp = request.GetQueryNameValuePairs().Where(q => q.Key.ToLower() == "clientid").FirstOrDefault();
        string clientId = kvp.Value;

        HttpContext.Current.Response.ClientDisconnectedToken.Register(
            delegate(object obj)
            {
                // Client has cleanly disconnected
                var disconnectedClientId = (string)obj;
                CloseStreamFor(disconnectedClientId);
            }
            , clientId);

        response.Content = new PushStreamContent(
            delegate(Stream stream, HttpContent content, TransportContext context)
            {
                SaveStreamFor(clientId, stream);
            }
            , "text/event-stream");

        return response;
    }

    private static void CloseStreamFor(string clientId)
    {
        Stream oldStream;
        _streams.TryRemove(clientId, out oldStream);
        if (oldStream != null)
            oldStream.Close();
    }

    private static void SaveStreamFor(string clientId, Stream stream)
    {
        _streams.TryAdd(clientId, stream);
    }

    private static void TimerCallback(object obj)
    {
        DateTime start = DateTime.Now;
        // Disable the timer while sending so callbacks don't overlap
        _timer.Change(Timeout.Infinite, Timeout.Infinite);

        // Every 20 seconds, send a heartbeat to each client
        var recipients = _streams.ToArray();
        foreach (var kvp in recipients)
        {
            string clientId = kvp.Key;
            var stream = kvp.Value;
            try
            {
                // ***
                // Adding this Trace statement and running in the debugger caused
                // heartbeats to be reliably flushed!
                // ***
                Trace.WriteLine(string.Format("** {0}: Timercallback: {1}", DateTime.Now.ToString("G"), clientId));
                WriteHeartBeat(stream);
            }
            catch (Exception)
            {
                // Writing failed, so the client is presumed gone
                CloseStreamFor(clientId);
            }
        }

        // Trace... (this trace statement had no effect)
        _timer.Change(20000, 20000); // re-enable the timer
    }

    private static void WriteHeartBeat(Stream stream)
    {
        // Server-sent events format: "event:" and "data:" lines; a blank line terminates the event
        WriteStream(stream, "event:heartbeat\ndata:-\n\n");
    }

    private static void WriteStream(Stream stream, string data)
    {
        byte[] arr = Encoding.ASCII.GetBytes(data);
        stream.Write(arr, 0, arr.Length);
        stream.Flush();
    }

    private static readonly ConcurrentDictionary<string, Stream> _streams = new ConcurrentDictionary<string, Stream>();
    private static Timer _timer;
}
Could there be some ASP.NET or IIS setting that affects this? I am running on Windows Server 2008 R2.
UPDATE:
Heartbeats are reliably sent if 1) the Trace.WriteLine statement is added, and 2) the Visual Studio 2013 debugger is attached, debugging, and capturing the Trace.WriteLines.
Both conditions are necessary: if the Trace.WriteLine is removed, running under the debugger has no effect; and if the Trace.WriteLine is there but the program is not running under the debugger (with SysInternals' DbgView showing the trace messages instead), the heartbeats are unreliable.
UPDATE 2:
Two support incidents with Microsoft later, here are the conclusions:
1) The delays with 200 clients were resolved by using a business-class Internet connection instead of a home connection;
2) Whether the debugger is attached or not really doesn't make any difference;
3) The following two additions to web.config are required to ensure heartbeats are sent in a timely manner, and that failed heartbeats caused by a client disconnecting "uncleanly" (e.g. by unplugging the computer, rather than closing the program normally, which cleanly issues a TCP RST) trigger a timely ClientDisconnected callback as well:
<httpRuntime executionTimeout="5" />
<serverRuntime appConcurrentRequestLimit="50000" uploadReadAheadSize="1" frequentHitThreshold="2147483647" />
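For context, here is where those two lines live in web.config; the surrounding elements follow the standard ASP.NET/IIS schema, and everything except the two settings above is scaffolding:

<configuration>
  <system.web>
    <!-- executionTimeout is in seconds -->
    <httpRuntime executionTimeout="5" />
  </system.web>
  <system.webServer>
    <serverRuntime appConcurrentRequestLimit="50000"
                   uploadReadAheadSize="1"
                   frequentHitThreshold="2147483647" />
  </system.webServer>
</configuration>

Note that the <serverRuntime> section may be locked at the server level by default, in which case it has to be unlocked in applicationHost.config before a site-level web.config can set it.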
I'm trying to use the push/pull pattern with jeromq (0.3.2). At the beginning it works well, but after a period of time the push side stops sending messages and blocks. I don't know why. I set the send-timeout parameter and printed the ZMQ socket error number: it is 35. Is there something I haven't noticed? Or any other suggestions?
Thanks!
The push side code:
ZMQ.Context context = ZMQ.context(1);
ZMQ.Socket push4Topic = context.socket(ZMQ.PUSH);

private void init() {
    push4Topic.setTCPKeepAlive(1);
    push4Topic.setSendTimeOut(30000); // send() returns false after 30 s instead of blocking forever
    push4Topic.bind(bindUrl);
}

public boolean send(String msg) {
    return push4Topic.send(msg);
}

private void destroy() {
    if (push4Topic != null) {
        push4Topic.close();
    }
    if (context != null) {
        context.term();
    }
    logger.info("destroy() socket destroyed");
}
UPDATE:
I added a monitor thread watching the push side, and I see a ZMQ_EVENT_DISCONNECTED event. What does that mean? Does my pull side code have a problem?
I need to send a server request about once per minute, to get a fresh products list (in case it was changed via the web).
So I'm using DispatcherTimer:
public static void Start()
{
    if (timer != null) return;
    timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(0.1) };
    timer.Tick += Run;
    timer.Start();
}
private static async void Run(object sender, EventArgs e)
{
    timer.Interval = TimeSpan.FromSeconds(60); // TODO: add dynamic changes here
    timer.Stop();
    // ... do stuff ...
    timer.Start();
}
However, sometimes I need to force an update. Is it correct to run:
public static void ForceUpdate()
{
    Run(null, null);
}
EDIT: I mean, if "do stuff" takes long enough, wouldn't Run be called a second time via the timer? Or maybe I should use something else for this kind of job?
EDIT: Insert a variable that stores the last update time, and check whether an update has already been done within a certain interval.
Ah, well, it is quite simple
public static void ForceUpdate()
{
    timer.Stop();
    timer.Interval = TimeSpan.FromMilliseconds(10);
    timer.Start();
}
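Put together, the whole pattern looks like the sketch below. Because Run stops the timer before doing the work and restarts it afterwards, the regular tick cannot re-enter a slow update; ForceUpdate merely makes the next tick fire almost immediately, and Run itself restores the 60-second interval. (This is a minimal sketch; DoStuffAsync is a hypothetical stand-in for the actual server request.)

using System;
using System.Threading.Tasks;
using System.Windows.Threading;

static class Updater
{
    private static DispatcherTimer timer;

    public static void Start()
    {
        if (timer != null) return;
        timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(0.1) };
        timer.Tick += Run;
        timer.Start();
    }

    private static async void Run(object sender, EventArgs e)
    {
        timer.Interval = TimeSpan.FromSeconds(60);
        timer.Stop(); // no further ticks until the work below has finished
        await DoStuffAsync();
        timer.Start();
    }

    public static void ForceUpdate()
    {
        // Let the timer itself invoke Run on the dispatcher thread,
        // almost immediately, instead of calling Run(null, null) directly.
        timer.Stop();
        timer.Interval = TimeSpan.FromMilliseconds(10);
        timer.Start();
    }

    private static Task DoStuffAsync()
    {
        return Task.Delay(100); // hypothetical placeholder for the real update
    }
}

If ForceUpdate can be called while an update is still running, an extra busy flag (or storing the last update time, as suggested in the edit) would be needed to avoid overlapping updates.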
For the last couple of days I've been trying to find out why my GWT application is leaking on IE 9.
I want to share one of my findings with you, and maybe someone can give me a clue about what is going on here...
I wrote this small test:
import java.util.Date;

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.Timer;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.FlowPanel;
import com.google.gwt.user.client.ui.Image;
import com.google.gwt.user.client.ui.RootPanel;

public class Memory implements EntryPoint
{
    FlowPanel mainPanel = new FlowPanel();
    FlowPanel buttonsPanel = new FlowPanel();
    FlowPanel contentPanel = new FlowPanel();
    Timer timer;
    Date startDate;

    public void onModuleLoad()
    {
        mainPanel.setWidth("100%");
        mainPanel.setHeight("100%");
        RootPanel.get().add(mainPanel);

        Button startBtn = new Button("start test");
        startBtn.addClickHandler(new ClickHandler() {
            @Override
            public void onClick(ClickEvent event)
            {
                startDate = new Date();
                System.out.println("Started at " + startDate);
                timer = new Timer()
                {
                    public void run()
                    {
                        Date now = new Date();
                        if (isWithin5Minutes(startDate, now))
                        {
                            manageContent();
                        }
                        else
                        {
                            System.out.println("Complete at " + new Date());
                            timer.cancel();
                            contentPanel.clear();
                        }
                    }
                };
                timer.scheduleRepeating(50);
            }
        });

        buttonsPanel.add(startBtn);
        mainPanel.add(buttonsPanel);
        mainPanel.add(contentPanel);
    }

    private void manageContent()
    {
        if (contentPanel.getWidgetCount() > 0)
        {
            contentPanel.clear();
        }
        else
        {
            for (int i = 0; i < 20; i++)
            {
                Image image = new Image();
                image.setUrl("/images/test.png");
                contentPanel.add(image);
            }
        }
    }

    private boolean isWithin5Minutes(Date start, Date now)
    {
        // true if 'now' is within 5 minutes of 'start' date
        return now.getTime() - start.getTime() < 5 * 60 * 1000;
    }
}
So, I have this Timer that runs every 50 ms (for around 5 minutes) and executes the following:
- if the panel has content, clear it;
- if the panel has no content, add 20 PNG images (30x30, with transparency) to it.
Using Process Explorer from Sysinternals I got the following results:
IE 9: (memory usage screenshot omitted)
Firefox 21.0: (memory usage screenshot omitted)
I ran the same program with some changes (.jpg images instead of .png, creating the images only once and keeping them as member variables, creating the images using a ClientBundle), but the result was the same. I also ran the application in production mode.
Is there something wrong with my code that could cause this behavior in IE?
Shouldn't the Garbage Collector (GC) free some of the used memory at least when the timer ends?
Have any of you come across this problem before?
The garbage collector in IE is quite a strange thing. E.g. you can force it to run simply by minimizing the browser window. I guess the leaks in your case are the images that weren't removed properly by the browser when you cleared the container. Try to remove them using the JS "delete" operation, like this:

private native void utilizeElement(Element element) /*-{
    delete element;
}-*/;
Then change your manageContent a little:
if (contentPanel.getWidgetCount() > 0)
{
    for (Iterator<Widget> it = contentPanel.iterator(); it.hasNext();)
        utilizeElement(it.next().getElement());
    contentPanel.clear();
}
Hope this helps.