I am using the following class to accept incoming connections from client applications. Using the send function, I want to write the same UTFBytes to each client at the same time. Is this possible? If not, what would be the fastest way to write to them sequentially?
public class ProjectorClients
{
private var _serverSocket:ServerSocket;
private var _clients:Vector.<Socket> = new Vector.<Socket>;
public function ProjectorClients()
{
_serverSocket = new ServerSocket();
_serverSocket.addEventListener(ServerSocketConnectEvent.CONNECT, onConnect)
_serverSocket.bind(888);
_serverSocket.listen();
}
private function onConnect(e:ServerSocketConnectEvent):void
{
trace("Client is connected");
e.socket.addEventListener(ProgressEvent.SOCKET_DATA, onData);
e.socket.addEventListener(Event.CLOSE, onConnectionClosed);
_clients.push(e.socket);
trace("Number of connected clients: " + _clients.length);
}
public function send(command:String):void
{
for each(var clientSocket:Socket in _clients)
{
if (clientSocket.connected)
{
clientSocket.writeUTFBytes(command);
clientSocket.flush();
}
}
}
private function onData(e:ProgressEvent):void
{
trace("data received");
}
private function onConnectionClosed(e:Event):void
{
trace("Client Socket is Closed");
for (var i:int = 0; i < _clients.length; i++)
{
if (_clients[i] == e.currentTarget)
{
_clients.splice(i,1);
break;
}
}
trace("Number of connected clients: " + _clients.length);
}
}
As mentioned by @eSniff, you need a publish/subscribe mechanism here. Redis would be a good option, as it requires minimal setup. Each incoming connection subscribes to a channel, and you publish the data once so that all the clients receive it at the same time. Please refer to the link below for a better understanding.
http://redis.io/topics/pubsub
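To illustrate the idea only (the question itself is ActionScript, but the pattern is the same from any Redis client), here is a minimal publish/subscribe sketch in C# using the StackExchange.Redis package; the channel name "projector-commands" and the connection string are made up for the example:
using System;
using StackExchange.Redis;
class PubSubSketch
{
    static void Main()
    {
        // One connection per process; Redis fans published messages out to every subscriber.
        var redis = ConnectionMultiplexer.Connect("localhost:6379");
        var sub = redis.GetSubscriber();
        // Each client process subscribes to the channel once.
        sub.Subscribe("projector-commands", (channel, value) =>
        {
            Console.WriteLine("Received command: " + value);
        });
        // The controlling application publishes a single message; every subscriber receives it.
        sub.Publish("projector-commands", "PLAY");
        Console.ReadLine(); // keep the process alive long enough to receive the message
    }
}
Each projector client would run the subscriber part, and the controlling application would then need only a single Publish call instead of looping over sockets.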
Related
I'm making a 2-stage "reservation management" program that other software interacts with via WCF messages. Clients can claim a reservation, and release a reservation they previously laid claim to. I'd like all this reservation info to be viewable on a WinForm for the host.
What I'm not sure how to accomplish is getting data from the service into my WinForm. I'm still very green when it comes to WCF; I followed the MSDN guide to get this skeleton, but it didn't go into much detail about tying it to GUIs or having any stored info.
My Service:
namespace ReservationServiceLib
{
public class reservation
{
public string location;
public bool claimed;
public string currentHolder;
public reservation(string l)
{
location = l;
claimed = false;
currentHolder = "host";
}
}
public class ReservationService : IReservationService
{
public bool reservationManagerLocked= false;
public List<reservation> keys = new List<reservation>();
private static Mutex managerLock = new Mutex();
public bool GetAccess(string n1)
{
//get acceess lock for using the reservationManager
return true;
}
public bool ClaimReservation(string rez, string clientN)
{
//Client requests/retrieves key
if(reservationManagerLocked!=true)
{
int i = keys.FindIndex(x => x.location == rez);
if ((i >= 0) && (keys[i].claimed == false))
{
keys[i].claimed = true;
keys[i].currentHolder = clientN;
return true;
}
}
return false;
}
public bool ReleaseReservation(string n1)
{
//Client relinquishes access to key
return true;
}
And my "host" Winform:
namespace ReservationManager
{
public partial class ReservationManager : Form
{
public void populateComboBox()
{
for (int x = 0; x < dataGridView_IZone.Rows.Count-1; x++)
{
//comboBox_keyNames.Items.Add(dataGridView_IZone.Rows[x].Cells[0].Value);
}
}
public ReservationManager()
{
//create Keys and add to service's store of keys
//setup GUI
InitializeComponent();
// Step 1: Create a URI to serve as the base address.
Uri baseAddress = new Uri("http://localhost:8000/ReservationServiceLib/");
// Step 2: Create a ServiceHost instance.
ServiceHost selfHost = new ServiceHost(typeof(ReservationService), baseAddress);
try
{
// Step 3: Add a service endpoint.
selfHost.AddServiceEndpoint(typeof(IReservationService), new WSHttpBinding(), "ReservationService");
// Step 4: Enable metadata exchange.
ServiceMetadataBehavior smb = new ServiceMetadataBehavior();
smb.HttpGetEnabled = true;
selfHost.Description.Behaviors.Add(smb);
// Step 5: Start the service.
selfHost.Open();
Console.WriteLine("The service is ready.");
}
catch (CommunicationException ce)
{
Console.WriteLine("An exception occurred: {0}", ce.Message);
selfHost.Abort();
}
}
}
}
I have a Shared Project where I have changed the database to Realm instead of SQLite.
My problem is that if I close the Realm in my DatabaseManager, the result is removed. Therefore I created a static singleton instance of the Realm, which all my DatabaseManagers use. Now my app crashes after a short time due to memory, and if I remove all my database functions, it works.
I create my Realm-instance here:
public class RealmDatabase
{
private Realm mRealmDB;
public Realm RealmDB
{
get
{
if (mRealmDB == null || mRealmDB.IsClosed)
{
SetRealm ();
}
return mRealmDB;
}
}
static RealmDatabase cCurrentInstance;
public static RealmDatabase Current
{
get
{
if (cCurrentInstance == null)
cCurrentInstance = new RealmDatabase ();
return cCurrentInstance;
}
}
public RealmDatabase ()
{
}
private void SetRealm ()
{
var config = new RealmConfiguration ("DBName.realm", true);
mRealmDB = Realm.GetInstance (config);
}
public Transaction BeginTransaction ()
{
return RealmDB.BeginWrite ();
}
}
Then I have my DatabaseManager, which looks like this:
public class NewFreeUserManager
{
internal Realm RealmDB = RealmDatabase.Current.RealmDB;
static NewFreeUserManager cCurrentInstance;
public static NewFreeUserManager Current
{
get
{
if (cCurrentInstance == null)
cCurrentInstance = new NewFreeUserManager ();
return cCurrentInstance;
}
}
private NewFreeUserManager ()
{
}
internal bool Save (FreeUser freeuser)
{
try
{
using (var trans = RealmDB.BeginWrite ())
{
RealmDB.RemoveAll<FreeUser> ();
var fu = RealmDB.CreateObject<FreeUser> ();
fu = freeuser;
trans.Commit ();
}
return true;
}
catch (Exception e)
{
Console.WriteLine ("FreeUser save: " + e.ToString ());
return false;
}
}
internal FreeUser Get ()
{
return RealmDB.All<FreeUser> ().FirstOrDefault ();
}
}
Can anyone help me?
There are a few issues with your current setup that prevent you from persisting objects properly.
The first and very important one is that Realm instances are not thread-safe. That is, using them as singletons is strongly discouraged, unless you are certain that you'll never access them from another thread.
The second is more subtle, but in your save method you are calling:
var fu = RealmDB.CreateObject<FreeUser>();
fu = freeuser;
What this does, effectively, is create an object in the Realm and then point the variable at another object. It will not assign freeuser's properties to fu; it just replaces one reference with another. What you're looking for is Realm.Manage, so your code should look like this:
using (var trans = RealmDB.BeginWrite())
{
RealmDB.Manage(freeuser);
trans.Commit();
}
Once you fix the second bug, you should be able to go back to closing Realm instances when you don't need them anymore.
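For illustration, here is a sketch of what Save could look like if each call opens and disposes its own Realm instance (on the calling thread) instead of sharing the singleton, combined with the Manage call above. The FreeUser type and the "DBName.realm" configuration are taken from the question; treat the exact API shape as an assumption for the Realm version you are using (Realm implements IDisposable in the .NET SDK):
internal bool Save (FreeUser freeuser)
{
    try
    {
        // Open a Realm just for this operation and this thread; do not share it across threads.
        var config = new RealmConfiguration ("DBName.realm", true);
        using (var realm = Realm.GetInstance (config))
        using (var trans = realm.BeginWrite ())
        {
            realm.RemoveAll<FreeUser> ();
            // Manage attaches the standalone freeuser object to the Realm and persists its properties.
            realm.Manage (freeuser);
            trans.Commit ();
        }
        return true;
    }
    catch (Exception e)
    {
        Console.WriteLine ("FreeUser save: " + e);
        return false;
    }
}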
I'm wondering if there is a tool or lib that can move messages between queues?
Currently, I'm doing something like this:
public static void ProcessQueueMessage([QueueTrigger("myqueue-poison")] string message, TextWriter log)
{
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connString);
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("myqueue");
queue.CreateIfNotExists();
var messageData = JsonConvert.SerializeObject(data, new JsonSerializerSettings { ContractResolver = new CamelCasePropertyNamesContractResolver() });
queue.AddMessage(new CloudQueueMessage(messageData));
}
As of 2018-09-11, version 1.4.1 of the Microsoft Azure Storage Explorer doesn't have the ability to move messages from one Azure queue to another.
I blogged a simple solution to transfer poison messages back to the originating queue and thought it might save someone a few minutes. Obviously, you'll need to have fixed the error that caused the messages to end up in the poison message queue!
You'll need to add a NuGet package reference to WindowsAzure.Storage:
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
void Main()
{
const string queuename = "MyQueueName";
string storageAccountString = "xxxxxx";
RetryPoisonMesssages(storageAccountString, queuename);
}
private static int RetryPoisonMesssages(string storageAccountString, string queuename)
{
CloudQueue targetqueue = GetCloudQueueRef(storageAccountString, queuename);
CloudQueue poisonqueue = GetCloudQueueRef(storageAccountString, queuename + "-poison");
int count = 0;
while (true)
{
var msg = poisonqueue.GetMessage();
if (msg == null)
break;
poisonqueue.DeleteMessage(msg);
targetqueue.AddMessage(msg);
count++;
}
return count;
}
private static CloudQueue GetCloudQueueRef(string storageAccountString, string queuename)
{
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(storageAccountString);
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference(queuename);
return queue;
}
Azure Storage Explorer version 1.15.0 can now do this as of 2020. https://github.com/microsoft/AzureStorageExplorer/issues/1064
Essentially Azure Storage doesn't support moving messages from one queue to another. You would need to do this on your own.
One way to implement moving messages from one queue to another is to dequeue the messages from the source queue (by calling GetMessages), read the contents of each message, and then create a new message in the target queue. You can do this using the Storage Client Library.
One tool that comes to mind for moving messages is Cerebrata Azure Management Studio (a paid product with a 15-day free trial). It has this functionality.
As of 2018-09-11, version 1.4.1 of the Microsoft Azure Storage Explorer doesn't support moving queue messages.
Here's an updated version of Mitch's answer, using the latest Microsoft.Azure.Storage.Queue package. Simply create a new .NET Console application, add the above-mentioned package to it, and replace the contents of Program.cs with the following:
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Queue;
using System.Threading.Tasks;
namespace PoisonMessageDequeuer
{
class Program
{
static async Task Main(string[] args)
{
const string queuename = "MyQueueName";
string storageAccountString = "xxx";
await RetryPoisonMesssages(storageAccountString, queuename);
}
private static async Task<int> RetryPoisonMesssages(string storageAccountString, string queuename)
{
var targetqueue = GetCloudQueueRef(storageAccountString, queuename);
var poisonqueue = GetCloudQueueRef(storageAccountString, queuename + "-poison");
var count = 0;
while (true)
{
var msg = await poisonqueue.GetMessageAsync();
if (msg == null)
break;
await poisonqueue.DeleteMessageAsync(msg);
await targetqueue.AddMessageAsync(msg);
count++;
}
return count;
}
private static CloudQueue GetCloudQueueRef(string storageAccountString, string queuename)
{
var storageAccount = CloudStorageAccount.Parse(storageAccountString);
var queueClient = storageAccount.CreateCloudQueueClient();
var queue = queueClient.GetQueueReference(queuename);
return queue;
}
}
}
It's still pretty slow if you're working with >1000 messages though, so I'd recommend looking into batch APIs for higher quantities.
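As a sketch of that idea (reusing the Microsoft.Azure.Storage.Queue usings and the GetCloudQueueRef helper from the program above; 32 is the maximum batch size per request), the loop could pull messages in batches and copy each one into a fresh message before deleting the original:
private static async Task<int> RetryPoisonMesssagesInBatches(string storageAccountString, string queuename)
{
    var targetqueue = GetCloudQueueRef(storageAccountString, queuename);
    var poisonqueue = GetCloudQueueRef(storageAccountString, queuename + "-poison");
    var count = 0;
    while (true)
    {
        // Fetch up to 32 messages per round trip instead of one at a time.
        var batch = await poisonqueue.GetMessagesAsync(32);
        var any = false;
        foreach (var msg in batch)
        {
            any = true;
            // Copy the content into a new message so the original's pop receipt stays valid for the delete.
            await targetqueue.AddMessageAsync(new CloudQueueMessage(msg.AsString));
            await poisonqueue.DeleteMessageAsync(msg);
            count++;
        }
        if (!any)
            break;
    }
    return count;
}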
Here's a Python script you may find useful. You'll need to install azure-storage-queue:
from azure.storage.queue import QueueService

queueService = QueueService(connection_string="YOUR CONNECTION STRING")
for queue in queueService.list_queues():
    if "poison" in queue.name:
        print(queue.name)
        targetQueueName = queue.name.replace("-poison", "")
        while queueService.peek_messages(queue.name):
            for message in queueService.get_messages(queue.name, 32):
                print(".", end="", flush=True)
                queueService.put_message(targetQueueName, message.content)
                queueService.delete_message(queue.name, message.id, message.pop_receipt)
I just had to do this again and took the time to update my snippet to the new storage SDKs. See the post at https://www.bokio.se/engineering-blog/how-to-re-run-the-poison-queue-in-azure-webjobs/ for more info.
Here is the code I used:
using Azure.Storage.Queues;
using System;
using System.Threading;
using System.Threading.Tasks;
namespace AzureQueueTransfer
{
internal class Program
{
// Need Read, Update & Process (full url, can create in storage explorer)
private const string sourceQueueSAS = "";
// Need Add (full url, can create in storage explorer)
private const string targetQueueSAS = "";
private static async Task Main(string[] args)
{
var sourceQueue = new QueueClient(new Uri(sourceQueueSAS));
var targetQueue = new QueueClient(new Uri(targetQueueSAS));
var queuedAny = true;
while (queuedAny)
{
Thread.Sleep(30000); // Sleep so we don't build up too much backlog, letting new messages be processed at higher priority than old ones
queuedAny = false;
foreach (var message in sourceQueue.ReceiveMessages(maxMessages: 32).Value)
{
queuedAny = true;
var res = await targetQueue.SendMessageAsync(message.Body);
Console.WriteLine($"Transfered: {message.MessageId}");
await sourceQueue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
}
Console.WriteLine($"Finished batch");
}
}
}
}
To anyone coming here looking for a Node equivalent of @MitchWheat's answer, using an Azure Function:
import AzureStorage from 'azure-storage'
import { Context, HttpRequest } from '@azure/functions'
import util from 'util'
const queueService = AzureStorage.createQueueService()
queueService.messageEncoder = new AzureStorage.QueueMessageEncoder.TextBase64QueueMessageEncoder()
const deleteMessage = util.promisify(queueService.deleteMessage).bind(queueService)
const createMessage = util.promisify(queueService.createMessage).bind(queueService)
const getMessage = util.promisify(queueService.getMessage).bind(queueService)
export async function run (context: Context, req: HttpRequest): Promise<void> {
try {
const poisonQueue = (req.query.queue || (req.body && req.body.queue));
const targetQueue = poisonQueue.split('-')[0]
let count = 0
while (true) {
const message = await getMessage(poisonQueue)
if (!message) { break; }
if (message.messageText && message.messageId && message.popReceipt) {
await createMessage(targetQueue, message.messageText)
await deleteMessage(poisonQueue, message.messageId, message.popReceipt)
}
count++
}
context.res = {
body: `Replayed ${count} messages from ${poisonQueue} on ${targetQueue}`
};
} catch (e) {
context.res = { status: 500 }
}
}
To use the function you need to provide connection information for the storage account used for your storage queues. This is provided as environment variables: either you provide AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_ACCESS_KEY, or AZURE_STORAGE_CONNECTION_STRING. More on this is available in the Azure Storage SDK docs.
I also wrote a few lines about it in this Medium article.
Updated python based on Jon Canning's answer:
from azure.storage.queue import QueueServiceClient
queueService = QueueServiceClient.from_connection_string(conn_str="DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net")
for queue in queueService.list_queues():
    if "poison" in queue.name:
        print(queue.name)
        targetQueueName = queue.name.replace("-poison", "")
        queue = queueService.get_queue_client(queue=queue.name)
        targetQueue = queueService.get_queue_client(queue=targetQueueName)
        while queue.peek_messages():
            messages = queue.receive_messages()
            for msg in messages:
                targetQueue.send_message(msg.content)
                queue.delete_message(msg)
As Mikael Eliasson noted, the code in IGx89's answer is broken because AddMessageAsync will overwrite some info on the message, after which DeleteMessageAsync will give a 404. The better solution is to copy the values into a new message for AddMessageAsync.
Please see an enhanced version of RetryPoisonMesssages below, with the ability to specify a list of message IDs (instead of processing the whole queue) and to copy messages instead of moving them.
It also logs success/failure for each message.
/// <param name="storageAccountString"></param>
/// <param name="queuename"></param>
/// <param name="idsToMove">If not null, only messages with listed IDs will be moved/copied</param>
/// <param name="deleteFromPoisonQueue">if false, messages will be copied; if true, they will be moved
///Warning: if queue is big, keeping deleteFromPoisonQueue=false can cause the same row
///from poisonqueue to be copied more than once(the reason is not found yet)</param>
/// <returns></returns>
private static async Task<int> RetryPoisonMesssages(string storageAccountString, string queuename, string[] idsToMove=null, bool deleteFromPoisonQueue=false)
{
var targetqueue = GetCloudQueueRef(storageAccountString, queuename);
var poisonQueueName = queuename + "-poison";
var poisonqueue = GetCloudQueueRef(storageAccountString, poisonQueueName);
var count = 0;
while (true)
{
var msg = await poisonqueue.GetMessageAsync();
if (msg == null)
{
Console.WriteLine("No more messages in a queue " + poisonQueueName);
break;
}
string action = "";
try
{
if (idsToMove == null || idsToMove.Contains(msg.Id))
{
var msgToAdd = msg;
if (deleteFromPoisonQueue)
{
//The reason is that AddMessageAsync will overwrite some info on the message and then DeleteMessageAsync will give a 404.
//The better solution is to copy the values into a new message for AddMessageAsync
msgToAdd = new CloudQueueMessage(msg.AsBytes);
}
action = "adding";
await targetqueue.AddMessageAsync(msgToAdd);
Console.WriteLine(action + " message ID " + msg.Id);
if (deleteFromPoisonQueue)
{
action = "deleting";
await poisonqueue.DeleteMessageAsync(msg);
}
Console.WriteLine(action + " message ID " + msg.Id);
}
}
catch (Exception ex)
{
Console.WriteLine("Error encountered when "+ action + " " + ex.Message + " at message ID " + msg.Id);
}
count++;
}
return count;
}
How can I implement a distributed priority queue without using ZooKeeper?
If you know how to communicate between client and server (e.g. with TCP sockets), it should be straightforward. The server contains a thread-safe implementation of the priority queue, hence providing an "interface". Clients connect to the server and use this "interface".
Server
The server must provide a priority queue interface (i.e. supporting add, peek, poll, ...). It is important that these methods are thread-safe! So we will use PriorityBlockingQueue (which is synchronized) instead of PriorityQueue.
import java.io.IOException;
import java.net.ServerSocket;
import java.util.concurrent.PriorityBlockingQueue;
public class Server {
private static ServerSocket server_skt;
public PriorityBlockingQueue<Integer> pq;
// Constructor
Server(int port, int pq_size) throws IOException {
server_skt = new ServerSocket(port);
this.pq = new PriorityBlockingQueue<Integer>(pq_size);
}
public static void main(String argv[]) throws IOException {
Server server = new Server(5555, 20); // Make server instance
while(true) {
// Always wait for new clients to connect
try {
System.out.println("Waiting for a client to connect...");
// Spawn new thread for communication with client
new CommunicationThread(server_skt.accept(), server.pq).start();
} catch(IOException e) {
System.out.println("Exception occured :" + e.getStackTrace());
}
}
}
}
And this is what the CommunicationThread class would look like:
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.PriorityBlockingQueue;
public class CommunicationThread extends Thread {
private Socket client_socket;
private InputStream client_in;
private OutputStream client_out;
private PriorityBlockingQueue<Integer> pq;
public CommunicationThread(Socket socket, PriorityBlockingQueue<Integer> pq) {
try {
this.client_socket = socket;
this.client_in = client_socket.getInputStream();
this.client_out = client_socket.getOutputStream();
this.pq = pq;
System.out.println("Client connected : " + client_socket.getInetAddress().toString());
} catch(IOException e) {
System.out.println("Could not initialize communication properly. -- CommunicationThread.\n");
}
}
@Override
public void run() {
try {
boolean active = true;
while(active) {
int message_number = client_in.read(); // Listen for next integer --> dispatch to correct method
switch(message_number) {
case -1: case 0:
// Will stop the communication between client and server
active = false;
break;
case 1:
// Add
int element_to_add = client_in.read(); // read element to add to the priority queue
pq.add(element_to_add); // Note that a real implementation would send the answer back to the client
break;
case 2:
// Poll (no extra argument to read)
int res = pq.poll();
// Write result to client
client_out.write(res);
client_out.flush();
break;
/*
* IMPLEMENT REST OF INTERFACE (don't worry about synchronization, PriorityBlockingQueue methods are already thread safe)
*/
}
}
client_in.close();
client_out.close();
} catch(IOException e) {
System.out.println("Communication with client failed : " + e.getMessage());
}
}
}
This class listens to what the client sends.
Based on what the client sent, the server knows what to do, so there is a mini protocol. That protocol is: when the client wants to invoke a method of the distributed priority queue, it sends an integer (e.g. 2 = poll()). The server reads that integer and knows which method to invoke.
Note that sometimes sending one integer is enough (see the poll() example), but not always. Think for example of add(), which has to specify an argument. The server will receive 1 from the client (i.e. add()) and will then read a second integer (or any other object that has to be stored in the distributed priority queue).
Client
Based on the protocol, the server is offering the client an interface (e.g. 0 = stop communication, 1 = add(), ...). The client only has to connect to the server and send messages to it (respecting the protocol!).
A client example :
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
public class PQ_Client {
private static Socket skt;
private InputStream in;
private OutputStream out;
private final int _STOP_ = 0, _ADD_ = 1, _POLL_ = 2; // By convention (protocol)
PQ_Client(String ip, int port) {
try {
this.skt = new Socket(ip, port);
this.in = skt.getInputStream();
this.out = skt.getOutputStream();
System.out.println("Connected to distributed priority queue.");
} catch(IOException e) {
System.out.println("Could not connect with the distributed priority queue : " + e.getStackTrace());
}
}
// Sort of stub functions
public void stop() throws IOException {
out.write(_STOP_);
out.flush();
out.close();
}
public void add(Integer el) throws IOException {
out.write(_ADD_); // Send wanted operation
out.write(el); // Send argument element
// Real implementation would listen for result here
out.flush();
}
public int poll() throws IOException {
out.write(_POLL_);
out.flush();
// Listen for answer
return in.read();
}
/*
* Rest of implementation
*/
}
Note that thanks to these self-made "stub functions" we can make a PQ_Client object and use it as if it were a priority queue (the client/server communication is hidden behind the stubs).
String ip = "...";
int port = 5555;
PQ_Client pq = new PQ_Client(ip , port);
pq.add(5);
pq.add(2);
pq.add(4);
int res = pq.poll();
Note that by using RPC (Remote Procedure Call) this could be made easier (stub functions generated automatically, ...).
In fact, what we implemented above is a little RPC-like mechanism: it does nothing other than send a message to call a procedure (e.g. add()) on the server, serialize the result (not needed for integers), and send it back to the client.
I am working on a Spring MVC application in which I have implemented chat functionality using CometD. As a feature, I would like to know whether CometD has support, or there is some other way, for showing which user is typing. Of course, I can retrieve the user information. Here is my chat code. Thanks.
ChatServiceImpl :
@Named
@Singleton
@Service
public class ChatServiceImpl {
@Inject
private BayeuxServer bayeux;
@Session
private ServerSession serverSession;
@Listener(value = "/service/person/{id}")
public void privateChat(ServerSession remote, ServerMessage.Mutable message, @Param("id") String id) {
System.out.println("wassup");
Person sender = this.personService.getCurrentlyAuthenticatedUser();
String senderName = sender.getFirstName();
Map<String, Object> input = message.getDataAsMap();
String data = (String) input.get("name");
String timestamp = (String) input.get("timestamp");
String temp = message.getChannel();
String temp1 = temp;
temp = temp.replace("/service/person/", "");
String channelName = temp1.replace("/service","");
final int conversationId = Integer.valueOf(temp);
Replies replies = new Replies();
replies.setReplyingPersonName(senderName);
replies.setReplyText(data);
replies.setReplyTimeStamp(timestamp);
replies.setReplyingPersonId(sender.getId());
replies.setRead(false);
Long replyId = this.repliesService.addReply(replies, conversationId, sender);
Map<String, Object> output = new HashMap<String, Object>();
output.put("text", data);
output.put("firstname", senderName);
output.put("channelname", channelName);
output.put("timestamp", timestamp);
output.put("id",sender.getId());
output.put("read","true");
output.put("replyid",replyId);
ServerChannel serverChannel = bayeux.createChannelIfAbsent("/person/" + id).getReference();
serverChannel.setPersistent(true);
serverChannel.publish(serverSession, output);
}
Application.js (please note, I am using parts of this file in another JS file):
(function($)
{
var cometd = $.cometd;
$(document).ready(function()
{
function _connectionEstablished()
{
$('#body').append('<div>CometD Connection Established</div>');
}
function _connectionBroken()
{
$('#body').append('<div>CometD Connection Broken</div>');
}
function _connectionClosed()
{
$('#body').append('<div>CometD Connection Closed</div>');
}
var _connected = false;
function _metaConnect(message)
{
if (cometd.isDisconnected())
{
_connected = false;
_connectionClosed();
return;
}
var wasConnected = _connected;
_connected = message.successful === true;
if (!wasConnected && _connected)
{
_connectionEstablished();
}
else if (wasConnected && !_connected)
{
_connectionBroken();
}
}
// Function invoked when first contacting the server and
// when the server has lost the state of this client
function _metaHandshake(handshake)
{
if (handshake.successful === true)
{
cometd.batch(function()
{
cometd.subscribe('/chat/1306', function(message)
{
var data = message.data;
$('#body').append('<div>Server Says: ' + data.firstname + '/' + data.accountid + data.time1+'</div>');
});
});
}
}
// Disconnect when the page unloads
$(window).unload(function()
{
cometd.disconnect(true);
});
$(document).on('click', '#sender', function()
{
cometd.publish('/service/chat/1306', { name: 'hello_' + Date.now() });
});
var cometURL = location.protocol + "//" + location.host + config.contextPath + "/cometd";
cometd.configure({
url: cometURL,
logLevel: 'debug'
});
cometd.websocketEnabled = false;
cometd.addListener('/meta/handshake', _metaHandshake);
cometd.addListener('/meta/connect', _metaConnect);
cometd.handshake();
});
})(jQuery);
Kindly let me know how I can achieve this, as I cannot find many references for this. Thanks a lot. :-)
This is easily achieved by detecting the typing start/stop on the client side (in a smart way, to avoid sending too many messages to the server), and then sending a CometD service message to the server.
The server can then just broadcast a message to a special channel (say /chat/typing) with the nickname of the user that is typing.
The client application will subscribe to /chat/typing and receive these messages, then display in the UI who is typing, possibly coalescing multiple users into a single UI notification.
The CometD part is trivial; detecting the start/stop of typing in a smart way is probably most of the work.
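As a rough sketch of the client side, building on the Application.js above: the broadcast channel /chat/typing comes from the description here, while the /service/chat/typing service channel, the #message input selector, the nickname field, and the one-second idle timeout are made up for illustration.
var typingTimeout = null;
$(document).on('keyup', '#message', function()
{
    if (typingTimeout === null)
    {
        // First keystroke after a pause: tell the server this user started typing.
        cometd.publish('/service/chat/typing', { user: 'myNickname', typing: true });
    }
    else
    {
        clearTimeout(typingTimeout);
    }
    // No keystroke for one second counts as "stopped typing".
    typingTimeout = setTimeout(function()
    {
        cometd.publish('/service/chat/typing', { user: 'myNickname', typing: false });
        typingTimeout = null;
    }, 1000);
});
// Every client subscribes to the broadcast channel and updates the UI.
cometd.subscribe('/chat/typing', function(message)
{
    var data = message.data;
    $('#typing').text(data.typing ? data.user + ' is typing...' : '');
});
On the server, a @Listener on /service/chat/typing (analogous to privateChat above) would simply republish the incoming data to /chat/typing, for example via bayeux.createChannelIfAbsent("/chat/typing").getReference().publish(serverSession, data).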