I can't find my Nomad clients, but I can see all the servers, and I can also find the clients in Consul.
Nomad config
data_dir = "/opt/nomad/data"
server {
enabled = true
bootstrap_expect = 3
retry_join = ["provider=aws tag_key=Function tag_value=consul_client"]
}
client {
enabled = true
}
Clients in Consul UI:
Servers in Nomad UI:
Clients in Nomad UI:
So why do I see only one Client in the last screenshot?
Figured it out: I should make separate config files for the server and the client instead of combining them.
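For reference, a minimal split might look like the following (the file paths are assumptions; adjust them to your deployment, and keep your retry_join settings with the server config):

```hcl
# /etc/nomad.d/server.hcl — only on the three server nodes
data_dir = "/opt/nomad/data"

server {
  enabled          = true
  bootstrap_expect = 3
}
```

```hcl
# /etc/nomad.d/client.hcl — on every client node
data_dir = "/opt/nomad/data"

client {
  enabled = true
}
```

With the combined config, every node was running as both a server and a client, which is why the server and client views didn't match up.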
I developed a video chat application using simple-peer and socket.io. But when I tried hosting the application, the peers could not connect because of a firewall issue. I am aware that STUN and TURN servers are used for this purpose. Is it possible to connect to those servers using simple-peer?
If so, how?
Any explanation or reference articles would be helpful.
You can add the iceServers configuration, just as in plain WebRTC, to the simple-peer config like so:
{
initiator: false,
config: { iceServers: [{ urls: 'stun:stun.l.google.com:19302' }, { urls: 'stun:global.stun.twilio.com:3478?transport=udp' }] },
}
You can add STUN servers and/or TURN servers.
If you read the source code of the simple-peer npm package, you will see that it currently uses
urls: [
'stun:stun.l.google.com:19302',
'stun:global.stun.twilio.com:3478'
]
for its public IP discovery needs.
Your app fails behind a firewall because a STUN server alone is insufficient in that case.
Besides a STUN server, you need a TURN server.
TURN is the fallback in case STUN fails to deliver.
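Concretely, a TURN server goes into the same iceServers array as the STUN entries shown above. The TURN URL and credentials below are placeholders, not a real service — substitute your own (e.g. a hosted TURN provider or your own coturn instance):

```javascript
// Options object to pass to new SimplePeer(peerOptions).
// The TURN entry is a placeholder; unlike STUN, TURN requires credentials.
const peerOptions = {
  initiator: false,
  config: {
    iceServers: [
      { urls: 'stun:stun.l.google.com:19302' },
      {
        urls: 'turn:turn.example.com:3478',   // hypothetical TURN server
        username: 'your-username',
        credential: 'your-password'
      }
    ]
  }
};
```

TURN relays the media through the server when no direct path can be established, which is why it needs credentials and why providers meter its bandwidth.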
I'm trying to set up 3 server nodes on my local machine. Each node starts but is unable to join the cluster.
I can see the message below being logged in the log file for each server node.
Topology snapshot [ver=1, servers=1, clients=0, CPUs=4, heap=0.1GB]
Here is my code to start the server.
import java.util.Arrays;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

IgniteConfiguration config = new IgniteConfiguration();
TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
ipFinder.setAddresses(Arrays.asList("192.168.0.3", "192.168.0.3:47100..47120"));
spi.setIpFinder(ipFinder);
config.setDiscoverySpi(spi);
// Start Ignite node.
ignite = Ignition.start(config);
Can anyone please suggest what I am missing here?
Try removing the address without the port and leave only the one that specifies the range:
ipFinder.setAddresses(Arrays.asList("192.168.0.3:47100..47120"));
Struggled with this for hours and could only solve it by doing the following.
Ensure that you've set the local interface and port range on which your servers are listening:
TcpDiscoverySpi spi = new TcpDiscoverySpi();
//This address should be accessible to other nodes
spi.setLocalAddress("192.168.0.1");
spi.setLocalPort(48500);
spi.setLocalPortRange(20);
Configure your IP finder accordingly, supposing that the node is to find peers on the same machine, configured similarly (i.e., as per the snippet above):
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("192.168.0.1:48500..48520"));
spi.setIpFinder(ipFinder);
As instances start up in turn, they'll bind to ports within the configured range, and within that range they'll discover one another via TCP discovery.
This is the only way I managed to connect 2+ server nodes on the same machine without using multicast discovery.
My cluster is not on the same network, and there is a tunnel between my PC and the server.
I got this error:
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available
If you are using Elasticsearch's default multicast mechanism to discover nodes in the cluster, all cluster nodes must be located on the same subnet (this is true up to version 2.0).
In order to have your node discover the other nodes in the cluster, you may configure the setting discovery.zen.ping.unicast.hosts in [elasticsearch home]/config/elasticsearch.yml, as described here
(https://www.elastic.co/guide/en/elasticsearch/reference/2.x/modules-network.html):
discovery.zen.ping.unicast.hosts
In order to join a cluster, a node needs to know the hostname or IP address of at least some of the other nodes in the cluster. This setting provides the initial list of other nodes that this node will try to contact. Accepts IP addresses or hostnames.
Defaults to ["127.0.0.1", "[::1]"].
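For example, a unicast setup in elasticsearch.yml might look like the following (the host names are placeholders for your actual nodes):

```yaml
# elasticsearch.yml — disable multicast and list known nodes explicitly
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-node1.example.com", "es-node2.example.com:9300"]
```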
Hope it helps.
I have tried to recreate your configuration in my environment and managed to get Elasticsearch working (I created an index). Here is how it goes:
Configure PuTTY tunneling for Elasticsearch's 9300 and 9200 ports.
After configuring, you'll need to open the SSH connection and make sure it is connected.
You may look at the SSH event log; here is a link on how to do it.
The code looks like this
public class App
{
public static void main( String[] args ) throws Exception
{
// Requires the Elasticsearch 1.x client jars:
// import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
// import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;
// import org.elasticsearch.client.transport.TransportClient;
// import org.elasticsearch.common.settings.ImmutableSettings;
// import org.elasticsearch.common.settings.Settings;
// import org.elasticsearch.common.transport.InetSocketTransportAddress;
Settings settings = ImmutableSettings.settingsBuilder().
put("cluster.name", "my-cluster").build();
TransportClient client = new TransportClient(settings)
.addTransportAddress(
new InetSocketTransportAddress(
"localhost", 9093));
CreateIndexResponse rs = client.admin().indices().create(new CreateIndexRequest("tunnelingindex")).actionGet();
System.out.println(rs.isAcknowledged());
client.close();
}
}
The code creates an index named tunnelingindex.
If it still does not work for you, I think you may have an issue that is not related to tunneling or Elasticsearch.
Hope I have managed to help.
You must set:
transport.publish_host: localhost
details here:
Elasticsearch basic client connection
Is it possible to proxy websocket connections within the webpack dev server? I know how to proxy regular HTTP requests to another backend but it's not working for websockets, presumably because the target in the proxy configuration starts with http://...
Version 1.15.0 of the webpack-dev-server supports proxying websocket connections. Add the following to your configuration:
devServer: {
proxy: {
'/api': {
target: 'ws://[address]:[port]',
ws: true
},
},
}
Webpack dev server does not support proxying ws connections yet.
Until then, you can implement proxying manually by adding an additional http-proxy to the webpack server:
Add new dependency to package.json:
"http-proxy": "^1.11.2"
Proxy websocket connections manually by listening for upgrade events:
// existing webpack-dev-server setup
// ...
var server = new WebpackDevServer(...);
var proxy = require('http-proxy').createProxyServer();
server.listeningApp.on('upgrade', function(req, socket, head) {
if (req.url.match('/socket_url_to_match')) {
console.log('proxying ws', req.url);
proxy.ws(req, socket, head, {'target': 'ws://localhost:4000/'});
}
});
//start listening
server.listen(...)
NOTE (after using this for some time)
There is an issue with proxying websockets, because socket.io is used by WebpackDevServer to notify the browser of code changes. socket.io may conflict with proxied websockets; in my case, connections were being dropped before the handshake was returned from my server unless it responded very quickly.
At that point, I just ditched WebpackDevServer and used a custom implementation based on react-hot-boilerplate.
Mr. Spice's answer is correct, but it can be further simplified: check http-proxy-middleware. It can be set up as follows, i.e., just add ws: true and keep the other settings as usual.
// proxy middleware options
var options = {
target: 'http://www.example.org', // target host
changeOrigin: true, // needed for virtual hosted sites
ws: true, // proxy websockets
  // ...
};
There might be an existing post covering what I am looking for. I have very limited time and got this requirement at the last moment; I need to push the code to QA and set up Elasticsearch with the admin team. Please respond as soon as possible, or share a link to a similar post.
I have a scenario wherein I will have multiple Elasticsearch servers: one hosted in the USA, another in the UK, and one more in India, all within the same (company) network and sharing the same cluster name. I can set multicast to false and use unicast to provide host and IP address information to form a topology.
Now, in my application, I know that I have to use the TransportClient as follows:
Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", "myClusterName").build();
Client client = new TransportClient(settings)
.addTransportAddress(new InetSocketTransportAddress("host1", 9300))
.addTransportAddress(new InetSocketTransportAddress("host2", 9300));
Following are my concerns,
1) As per the above information, the admin team will just provide a single IP address, that of the load balancer, and the load balancer will manage request and response handling; I mean, the load balancer is responsible for redirecting to the respective Elasticsearch server. My question is: is it okay to use the TransportClient to connect to that host and port number as follows?
new TransportClient(settings)
.addTransportAddress(new InetSocketTransportAddress("loadbalancer-ip-address", loadbalancerPortNumber)); // port must be an int
If the load balancer redirects requests to the Elasticsearch servers, what should the load balancer's configuration be? Do we need to provide all the Elasticsearch host or IP address details to it, so that at any given point in time, if the master Elasticsearch server fails, it will pick another master?
2) What is the best configuration for 4 nodes or Elasticsearch servers, e.g., shards, replicas, etc.?
Should each node have one primary shard and 1 replica, which can be configured in elasticsearch.yml?
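For what it's worth, default shard and replica counts can indeed be set in elasticsearch.yml; the values below are just the Elasticsearch defaults, not a recommendation for a particular four-node topology:

```yaml
# elasticsearch.yml — defaults applied to newly created indices
index.number_of_shards: 5
index.number_of_replicas: 1
```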
Please reply as soon as possible.
Thanks in advance.