I know that out of the box GridGain discovers other nodes through multicast, but is there a way to configure GridGain to accept connections from outside the local network? Also, is there a way to enable encryption for the communication?
The Discovery SPI and Communication SPI allow you to plug in alternative discovery and communication mechanisms.
For more detail, refer to the comprehensive API documentation (GridGain 3).
This is necessary on Amazon EC2, which doesn't support multicast. Here's an article discussing this setup.
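As a rough sketch of plugging in a non-multicast discovery mechanism with a static list of addresses: note that the class names below come from the newer Apache Ignite / GridGain line and will differ in GridGain 3, and the addresses and port are placeholders, so treat this as an illustration of the approach rather than a drop-in config.

```java
import java.util.Arrays;

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class StaticDiscoverySketch {
    public static void main(String[] args) {
        // Use a static list of known public addresses instead of multicast discovery.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("203.0.113.10:47500", "203.0.113.11:47500"));

        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discovery);

        // Start the node; it joins the grid via the addresses listed above.
        Ignition.start(cfg);
    }
}
```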
Multicast only works well within a certain network segment (and in some cases it isn't even allowed, for security reasons). So if you want to connect nodes to your grid that are outside your local network, you have to resort to other transports such as JMS or mail (if performance is an issue, you might get away with unicast/static IPs and JGroups).
I think that encryption is possible with both the JMS and mail transports, depending on your message broker and mail setup.
Besides federation (talking to other XMPP servers), what's the role of an XMPP server in the communication between two peers?
Wikipedia says that "The XMPP network uses a client–server architecture; clients do not talk directly to one another."
In that case, clients must talk through the server, so every message goes through the server, correct?
Does its role change if we're using XMPP over WebSockets, BOSH, or bare TCP?
For instance, if we use XMPP over WebSockets, is there one WebSocket between client1 and the server, and another WebSocket between client2 and the server?
An XMPP server provides basic messaging, presence, and XML routing features; there is a range of Jabber/XMPP server software you can use to run your own XMPP service, either over the Internet or on a local area network. Wikipedia is also right: XMPP uses a client–server architecture, i.e. a system composed of two logical parts: a server, which provides information or services, and clients, which request them. On a network, for example, users access server resources from their personal computers using client software. The client–server model is widely used for communication and also in DBMSs.
Actually, XMPP is a communication protocol for message-oriented middleware (it sits between software and applications) based on XML (Extensible Markup Language). It enables the near-real-time exchange of structured yet extensible data between any two or more network entities.
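To make the routing point concrete: with XMPP over WebSockets (RFC 7395), client1 and client2 each open their own WebSocket to the server, and every stanza between them is relayed by that server. Below is a minimal sketch of one such client-side connection using Java's built-in WebSocket client; the host and the /xmpp-websocket path are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class XmppWebSocketSketch {
    public static void main(String[] args) {
        HttpClient http = HttpClient.newHttpClient();

        // Each client opens its own WebSocket to the XMPP server.
        WebSocket ws = http.newWebSocketBuilder()
                .subprotocols("xmpp") // RFC 7395 subprotocol
                .buildAsync(URI.create("wss://xmpp.example.com/xmpp-websocket"),
                        new WebSocket.Listener() {
                            @Override
                            public CompletionStage<?> onText(WebSocket webSocket,
                                                             CharSequence data, boolean last) {
                                System.out.println("<< " + data);
                                return WebSocket.Listener.super.onText(webSocket, data, last);
                            }
                        })
                .join();

        // Open the XMPP stream; all stanzas addressed to the other user are
        // routed by the server, never sent peer-to-peer.
        ws.sendText("<open xmlns='urn:ietf:params:xml:ns:xmpp-framing' "
                + "to='example.com' version='1.0'/>", true);
    }
}
```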
I have a Concox GT06 device from which I want to send tracking data to my AWS server.
The coding protocol manual that comes with it only explains the data structure and protocol.
How does my server receive the GPS data collected by my tracker?
Verify that your server allows you to open sockets, which most low-cost solutions do NOT allow for security reasons (I recommend using an Amazon EC2 virtual machine as your platform).
Choose a port on which your application will listen for incoming data, verify that it is open (if not, open it), and code your application (I use C++) to listen on that port.
Compile and run your application on the server (and make sure that it stays alive).
Configure your tracker (usually by sending an SMS to it) to send data to your server's IP and to the port on which your application is listening.
If you are, as I suspect, just beginning, consider that you will invest 2 to 3 weeks to develop this solution from scratch. You might also consider looking for a pre-developed tracking platform, which may or may not be acceptable in terms of data security.
You can find examples and tutorials online. I am usually very open with my code and would gladly send a copy of the socket server, but in this case, for security reasons, I cannot do so.
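For what a bare-bones listener can look like, here is a minimal sketch in Java (the structure is the same in C++): a server socket that accepts tracker connections on an arbitrary port and reads raw frames. The port number is a placeholder, and the actual GT06 parsing and acknowledgements must follow the protocol manual.

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class TrackerListener {
    public static void main(String[] args) throws Exception {
        // Arbitrary port; it must be opened in the server's firewall/security group.
        try (ServerSocket server = new ServerSocket(5023)) {
            System.out.println("Listening for tracker connections on port 5023...");
            while (true) {
                Socket tracker = server.accept();
                // One thread per device connection; fine for a handful of trackers.
                new Thread(() -> handle(tracker)).start();
            }
        }
    }

    private static void handle(Socket tracker) {
        try (InputStream in = tracker.getInputStream()) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                // TODO: parse the GT06 frame (start bytes, length, protocol number,
                // payload, checksum) as described in the manual and send back the
                // acknowledgement the device expects.
                System.out.printf("Received %d bytes from %s%n",
                        n, tracker.getRemoteSocketAddress());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```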
Instead of parsing TCP or UDP packets directly, you can use a simpler solution and put a middleware backend specialized in data parsing in between, e.g. flespi.
In this approach you can use an HTTP REST API to fetch each new portion of data the trackers send to your dedicated IP:port (called a channel), or even send standardized commands to connected devices over the same REST API.
At the same time it is possible to open an MQTT connection using standard libraries and receive the device messages, already converted into JSON, in real time, which is even better than REST thanks to almost zero latency.
If you are using Python, you can take a look at the open-source flespi_receiver library. With this approach, about 10 lines of code on your EC2 instance get you the Concox GT06 messages fully parsed into JSON.
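For the MQTT variant of this approach, a minimal sketch with the Eclipse Paho Java client is below. The broker URL, the token-as-username scheme, and the topic filter are assumptions about the flespi setup, so check the flespi documentation for the exact values.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

public class FlespiMqttSketch {
    public static void main(String[] args) throws Exception {
        MqttClient client =
                new MqttClient("ssl://mqtt.flespi.io:8883", MqttClient.generateClientId());

        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setUserName("FLESPI_TOKEN"); // placeholder: your flespi token
        opts.setPassword("".toCharArray());

        client.connect(opts);

        // Placeholder topic filter: subscribe to the channel receiving the GT06 data
        // and print each message, already converted to JSON by the middleware.
        client.subscribe("flespi/message/gw/channels/+/+", 1, (topic, msg) ->
                System.out.println("JSON message: " + new String(msg.getPayload())));
    }
}
```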
We have been trying - without success - to get transactional message queues working between local servers and our cloud servers up in Amazon EC2.
We're using NServiceBus, and have got the pub/sub examples and various other trivial apps working locally between here and EC2, but trying to spin up the components of our actual application is proving... vexatious.
As far as I can work out, to allow a local server (DYLAN-PC) to send a message transactionally via a queue on an Amazon EC2 instance, I will need to:
Enable NETBIOS name resolution (e.g. via the /etc/lmhosts file) at both ends
Allow RPC connections to be initiated from either end (so open port 135 for RPC plus various other ports)
Configure MSDTC on both systems, enabling remote connections and inbound/outbound connections
Have I missed something? In particular, the requirement to allow NetBIOS in an age where everything (including Active Directory!) runs on DNS seems particularly archaic. Are we doing something stupid trying to use MSMQ between sites like this? This is the first big project where we've tried this kind of distributed architecture, and the deployment/configuration is starting to hurt so much I'm convinced we've taken a wrong turn somewhere... a little perspective or advice would be gratefully received!
If you're looking to build a geographically distributed system where you can't arrange a VPN between the sites, you should be using the gateway capabilities of NServiceBus to communicate over alternative transports (like HTTP) between those sites.
RPC is required for reading from remote queues.
If you push to remote queues and pull from local queues, you won't be using RPC.
I need to scale up my ActiveMQ solution so I have defined a network of brokers.
I'm trying to figure out how to connect my producers and consumers to the cluster.
Does each producer have to be connected to a single broker (with the failover URI for availability)? In that case, how can I guarantee the distribution of traffic across the brokers? Do I need to configure each producer to connect to a different broker?
Should I apply the same scheme to the consumers?
This makes the application aware of the cluster topology, which I hope can be avoided by a decent cluster.
Tx
Tomer
I strongly suggest you carefully read through the documentation from activemq.apache.org on clustering ActiveMQ. There are a lot of very helpful tips.
From what you have written, I suggest you pay special attention to this. At the bottom of the page it details how you can control, from the server side, the failover/failback configuration for your producers.
For example:
updateClusterClients - if true pass information to connected clients about changes in the topology of the broker cluster
rebalanceClusterClients - if true, connected clients will be asked to rebalance across a cluster of brokers when a new broker joins the network of brokers
updateURIsURL - A URL (or path to a local file) to a text file containing a comma separated list of URIs to use for reconnect in the case of failure
In a production system, I would think that making use of updateURIsURL would make scaling out a lot less painful.
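On the client side, the usual companion to those broker options is to give producers and consumers a failover URI listing several brokers, so connections can be spread across and rebalanced over the cluster. A minimal sketch (the broker host names are placeholders):

```java
import javax.jms.Connection;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ClusterClientSketch {
    public static void main(String[] args) throws Exception {
        // failover: reconnects automatically; randomize=true spreads the initial
        // connections of many clients across the listed brokers.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)"
                        + "?randomize=true");

        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // ... create producers/consumers on this session as usual ...

        session.close();
        connection.close();
    }
}
```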
I want to implement messaging over the Internet, but I don't have a public IP yet.
So I want to ask anyone here: can I send messages to ActiveMQ using JMS over the Internet?
Can it be done?
Yes, ActiveMQ exposes a normal TCP-based endpoint (by default on port 61616). However, this would not be a recommended deployment model; a better model is to expose an HTTP-based endpoint using a servlet container which internally hands the message over to the ActiveMQ broker.
There are a lot of good solutions that can do this:
Spring Integration, Apache Camel
Exposing a web service endpoint using, say, Apache CXF (which gives you a standards-based interface) that internally hands the message over to ActiveMQ.
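As a rough sketch of that HTTP front-door idea with Apache Camel: an HTTP endpoint whose incoming requests are handed over to an ActiveMQ queue. The endpoint URI, queue name, and broker URL are placeholders, and the ActiveMQComponent package name differs between Camel versions.

```java
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class HttpToActiveMqSketch {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Accept HTTP POSTs and push each request body onto a queue.
                from("jetty:http://0.0.0.0:8080/messages")
                        .to("activemq:queue:inbound");
            }
        });

        context.start();
        Thread.sleep(Long.MAX_VALUE); // keep the route running
    }
}
```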
Yes, it can be done. We are currently running a little under a thousand "consumers" which connect to our brokers over the Internet.
As to the insecurity of traffic over the Internet, I do not agree completely:
Exposing a web service is just as risky as exposing the broker. In the end, you are never 100% sure that your own code or the underlying applications (Apache CXF, web server, application server, database server, message broker) are free of flaws that could be a security risk. Second to that, HTTP is just as much TCP traffic as ActiveMQ's Stomp or OpenWire protocols are.
That being said, you can take all measures to make the risk as small as possible.
We have done the following:
A user and password are required to connect to the broker (ActiveMQ supports a wide range of authentication solutions, and you can roll your own if required)
Switched the port to a different number so detection is more difficult
If you have control over the consumers as well, apply IP filters in the firewall to restrict which IPs can connect to the broker (unfortunately, this was not possible in our case)
Encrypt your messages
We have also added application-level authentication using a token. This way, every message is authenticated in our own application
If all of these are implemented, I think you are pretty safe, and as a bonus you do not need the extra layer of web services (if the application needs to scale, you would have to scale your web services along with your brokers).
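To illustrate a couple of those measures on the client side (SSL transport plus broker credentials), here is a minimal sketch; the broker address, trust store path, queue name, and credentials are placeholders.

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQSslConnectionFactory;

public class SecureConsumerSketch {
    public static void main(String[] args) throws Exception {
        // SSL endpoint of the broker (a non-default port makes casual detection harder).
        ActiveMQSslConnectionFactory factory =
                new ActiveMQSslConnectionFactory("ssl://broker.example.com:61617");
        factory.setTrustStore("/path/to/client.ts"); // trust the broker's certificate
        factory.setTrustStorePassword("changeit");

        // Broker-level authentication with user and password.
        Connection connection = factory.createConnection("consumerUser", "secretPassword");
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("orders"));

        // An application-level token check on each message would go here, e.g. by
        // validating a custom message property before processing the payload.
        System.out.println("Received: " + consumer.receive(5000));

        connection.close();
    }
}
```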
Plain connections (OpenWire) should be fine. It's much simpler to stick with the standard setup than to try setting up web services and whatnot. Just make sure to encrypt the channels with SSL. If you use plain passwords, they can possibly be picked up over public networks (unlikely, but anyway) - that's why I prefer SSL.
Actually, ActiveMQ is a very good way to do communication over the Internet since it supports transactions and persistence, making it cope well with network stability issues.
However, you need a public IP (or some NAT/port forwarding solution from a public IP) on the machine running the ActiveMQ server for this to work.