BLE Mesh multiple provisioners and subnet - esp32

I have multiple ESP32 BLE Mesh provisioners close to each other and several nodes, too. Is it possible for one provisioner to send/receive data only to/from some particular group of nodes? With success, I tried to set a different netkey to each provisioner (subnet)(esp_ble_mesh_provisioner_update_local_net_key()), but I had no success when changing the device key on nodes (esp_ble_mesh_node_add_local_net_key() only can be called after node is provisioned). Another alternative seems to be use different Appkeys, but I don't know how to set an AppKey to a node (as in the first case, this is Ok for the provisioner). In other words, I want to get isolation between the groups/apps.

Related

Elasticsearch - two out of three nodes instant shutdown case

We have a small Elasticsearch cluster for 3 nodes: two in one datacenter and one in another for disaster recovery reasons. However, if the first two nodes fail simultaneously, the third one won't work either - it will just throw "master not discovered or elected yet".
I understand that this is intended - this is how Elasticsearch cluster should work. But is there some additional special configuration that I don't know to keep the third single node working, even if in the read-only mode?
Nope, there's not. As you mentioned, it's designed that way.
You're probably not doing yourselves a lot of favours by running things across datacentres like that. Network issues are not kind to Elasticsearch due to its distributed nature.
Elasticsearch runs in distributed mode by default. Nodes assume that they are or will be part of a cluster, and during setup nodes try to automatically join the cluster.
If you want your Elasticsearch to be available on only one node, without the need to communicate with other Elasticsearch nodes, it works similarly to a standalone server. To do this we can tell Elasticsearch to work in local-only mode (disable the network).
open your elasticsearch/config/elasticsearch.yml and set:
node.local: true
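The first answer's "it's designed that way" comes down to majority voting: a master needs a strict majority of the voting configuration, and a lone node out of three is not one. As an added illustration (my code, not from the thread or from Elasticsearch), the Python below models that rule and checks which whole-site losses a given placement survives, for the two-site layout from the question and for the same three nodes spread over three sites. It assumes the voting configuration is the full set of master-eligible nodes, which is what you have when two nodes disappear at once; node and site names are made up.

```python
"""Model master election as majority voting over the voting configuration.

Added illustration only. Assumption: the voting configuration is the full set of
master-eligible nodes (nothing was removed from it before the failure).
"""


def quorum(voting_config):
    """Smallest number of votes that is a strict majority of the voting configuration."""
    return len(voting_config) // 2 + 1


def can_elect_master(survivors, voting_config):
    """True if the surviving members of the voting configuration still form a majority."""
    return len(set(survivors) & set(voting_config)) >= quorum(voting_config)


def survivable_site_losses(placement):
    """placement maps site name -> master-eligible node names at that site.

    Returns {site: bool}: whether the remaining sites can still elect a master
    after that whole site (all of its nodes at once) becomes unreachable.
    """
    voting_config = [node for nodes in placement.values() for node in nodes]
    outcome = {}
    for lost in placement:
        survivors = [node for site, nodes in placement.items() if site != lost for node in nodes]
        outcome[lost] = can_elect_master(survivors, voting_config)
    return outcome


def isolated_site_keeps_master(placement):
    """The same check seen from the cut-off side: can a site that is isolated by a
    network partition elect a master on its own? At most one side ever can, which
    is exactly the split-brain protection the question runs into."""
    voting_config = [node for nodes in placement.values() for node in nodes]
    return {site: can_elect_master(nodes, voting_config) for site, nodes in placement.items()}


# Layout from the question: two nodes in one datacenter, one node at the DR site.
TWO_SITES = {"dc1": ["es1", "es2"], "dr": ["es3"]}
# Same three nodes over three sites: any single site can be lost.
THREE_SITES = {"dc1": ["es1"], "dc2": ["es2"], "dr": ["es3"]}

if __name__ == "__main__":
    print("quorum of three nodes:", quorum(["es1", "es2", "es3"]))
    print("two sites, lose one:", survivable_site_losses(TWO_SITES))
    print("two sites, isolated:", isolated_site_keeps_master(TWO_SITES))
    print("three sites, lose one:", survivable_site_losses(THREE_SITES))
```

Losing the two-node datacenter leaves one vote of three, so the DR node cannot elect a master; with three sites every single-site loss leaves two of three. Whether the extra cross-site latency is acceptable is the second point the first answer raises, and this model says nothing about that.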

Filebeat - Monitoring a Jump server

I am using Elastic/Filebeat/Kibana and want to monitor users who ssh into a Jump Box specifically
What IPs are they ssh'ng to
Which users are connecting to those IP's
What are the most connected to machines
Which user is creating the most outbound connections
I have the system module enabled, and all I can see is "related.user" to tell me who connects to the server via ssh, but that's it.
You need to adjust your configuration in order to see all the information that you want.
What IPs are they ssh'ng to?
You are missing the destination.ip; you can easily just pick it up from there. Chances are you want to write some code, and you can also extract it from the ssh command itself: you can see the user, other arguments, and the destination IP in the command as well, but you will need to parse that list (process.parent.args). Additionally, you can get the list count and get the last element, which is usually the IP, but I think it is easier to use the destination.ip itself.
Which users are connecting to those IP's?
For this, once you have the source and destination details, you need to create the Kibana report; you can run several aggregations and add different panels. A simple aggregation by IP will show you this; it is a matter of preference how you want it displayed.
What are the most connected to machines?
The same, you first run a count on the sources, or destinations (or both), then run a max on them.
Which user is creating the most outbound connections?
Here you can do all the users at once by running a count and grouping by user, then you list in descending order.
You can see a full list of properties here (ecs fields)
Summary:
You need some extra fields, destination.ip and source.ip, and eventually to parse your arguments; then for reporting you need to count them and aggregate them, but once you have that data you can easily pull them and run the aggregations on them. I think related.user is a good one since it is the only one shown in the event itself, but what if user A actually uses an account B to connect over SSH? In that case you need to parse the arguments from process.parent.args.
Cheers.
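The parsing-and-aggregation step the answer describes, as an added Python example (my code, not from the thread or from Elastic) over a handful of constructed ECS-style events. It prefers destination.ip when the event carries it and otherwise walks the ssh argument list the way ssh does, so a trailing remote command ("ssh host uptime") or a -l login name is not mistaken for the target, which is the risk of simply taking the last element; it also flags the case the answer ends on, a local user logging in under a different remote account. The events, users and IPs are made up, and I read the argument list from process.args; in your pipeline it may sit under process.parent.args, as the answer says.

```python
"""Extract and aggregate outbound ssh activity from ECS-style Filebeat events.

Constructed sample events below, not real Filebeat output; field names follow ECS
(related.user, process.args, destination.ip) as discussed in the thread.
"""
from collections import Counter

SAMPLE_EVENTS = [
    {"related": {"user": ["alice"]}, "process": {"name": "ssh", "args": ["ssh", "bob@10.0.0.5"]}},
    {"related": {"user": ["alice"]}, "process": {"name": "ssh", "args": ["ssh", "-p", "2222", "-l", "deploy", "10.0.0.6"]}},
    {"related": {"user": ["alice"]}, "process": {"name": "ssh", "args": ["ssh", "10.0.0.5", "uptime"]}},
    {"related": {"user": ["carol"]}, "process": {"name": "ssh", "args": ["ssh", "-i", "/home/carol/.ssh/id_ed25519", "alice@10.0.0.7"]}},
    {"related": {"user": ["dave"]}, "process": {"name": "ssh", "args": ["ssh", "-o", "StrictHostKeyChecking=no", "10.0.0.5"]}},
    {"related": {"user": ["dave"]}, "destination": {"ip": "10.0.0.8"},
     "process": {"name": "ssh", "args": ["ssh", "10.0.0.8"]}},
    {"related": {"user": ["erin"]}, "process": {"name": "bash", "args": ["bash", "-c", "ls"]}},  # not ssh, ignored
]

# ssh(1) options that consume a following argument; skipping their value keeps the
# parser from mistaking e.g. the port in "-p 2222" for the destination.
OPTS_WITH_ARG = set("BbcDEeFIiJLlmOopQRSWw")


def parse_ssh_args(args):
    """Return (login_user, destination) from an ssh argv; either may be None.

    Walks the option list instead of taking the last element, because a trailing
    remote command would otherwise be reported as the target. A user@host
    destination overrides an earlier -l, as ssh itself does.
    """
    login_user = None
    i = 1
    while i < len(args):
        a = args[i]
        if a.startswith("-") and len(a) > 1:
            flag = a[1]
            if flag in OPTS_WITH_ARG:
                if len(a) > 2:              # attached value: -p2222
                    value = a[2:]
                else:                       # separate value: -p 2222
                    i += 1
                    value = args[i] if i < len(args) else None
                if flag == "l":
                    login_user = value
        else:
            # First non-option word is the destination; anything after it is the remote command.
            if "@" in a:
                login_user, _, a = a.rpartition("@")
            return login_user, a
        i += 1
    return login_user, None


def summarize(events):
    """Answer the four questions from the thread over a list of events."""
    per_ip = Counter()
    users_per_ip = {}
    per_user = Counter()
    account_mismatch = []          # local user differs from the login name used remotely
    for ev in events:
        proc = ev.get("process", {})
        if proc.get("name") != "ssh":
            continue
        local_user = (ev.get("related", {}).get("user") or [None])[0]
        login_user, parsed_dest = parse_ssh_args(proc.get("args", []))
        dest = ev.get("destination", {}).get("ip") or parsed_dest   # prefer the ECS field
        if dest is None:
            continue
        per_ip[dest] += 1
        users_per_ip.setdefault(dest, set()).add(local_user)
        per_user[local_user] += 1
        if login_user and login_user != local_user:
            account_mismatch.append((local_user, login_user, dest))
    return {
        "connections_per_ip": per_ip,
        "users_per_ip": users_per_ip,
        "connections_per_user": per_user,
        "top_destination": per_ip.most_common(1)[0] if per_ip else None,
        "top_user": per_user.most_common(1)[0] if per_user else None,
        "account_mismatch": account_mismatch,
    }


if __name__ == "__main__":
    for key, value in summarize(SAMPLE_EVENTS).items():
        print(f"{key}: {value}")
```

The same counters are what the Kibana terms aggregations in the answer compute; doing it once in code is a quick way to confirm the parsed destination agrees with destination.ip before building the dashboard. The parser does not handle the ssh://user@host:port URI form.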

Specify which node should turn off

I'm trying to simulate a sensor network in Castalia, where each radio works with a different duty cycle. I'm controlling the radio by the application, through the commands toRadioLayer(createRadioCommand(SET_STATE,SLEEP)) to turn off and toNetworkLayer(createRadioCommand(SET_STATE,RX)) to turn on. However, as each radio has its own schedule, I need to send this command to a specific radio. Is it possible to define for which node these commands, or another if it exists, are executed?
Every node has its own application module. So when the application sends the commands you describe, these go to the radio module of the same node. So if you need different nodes to use different duty cycles, then you'd have to build it in the application to behave differently according to whatever conditions you have in mind. One very simple way is to choose randomly the duty cycle (so each application module will have a different duty cycle).
If you want application modules to communicate across nodes, then there is no magic way for this. You'll have to establish communication via data packets.

Detecting and recovering failed H2 cluster nodes

After going through the H2 developer guide I still don't understand how I can find out what cluster node(s) was/were failing and which database needs to be recovered in the event of a temporary network failure.
Let's consider the following scenario:
H2 cluster started with N active nodes (is it actually true that H2 can support N>2, i.e. more than 2 cluster nodes?)
(lots DB updates, reads...)
Network connection with one (or several) cluster nodes gets down and node becomes invisible to the rest of the cluster
(lots of DB updates, reads...)
Network link with previously disconnected node(s) restored
It is discovered that cluster node was probably missing (as far as I can see SELECT VALUE FROM INFORMATION_SCHEMA.SETTINGS WHERE NAME='CLUSTER' starts responding with empty string if one node in cluster fails)
After this point it is unclear how to find out which nodes were failing.
Obviously, I can do some basic check like comparing DB size, but it is unreliable.
What is the recommended procedure to find out what node was missing in the cluster, esp. if query above responds with empty string?
Another question - why urlTarget doesn't support multiple parameters?
How I am supposed to use CreateCluster tool if multiple nodes in the cluster failed and I want to recover more than one?
Also I don't understand how CreateCluster works if I had to stop the cluster and I don't want to actually recover any nodes? What's not clear to me is what I need to pass to CreateCluster tool if I don't actually need to copy database.
That is partially right: SELECT VALUE FROM INFORMATION_SCHEMA.SETTINGS WHERE NAME='CLUSTER' will return an empty string when queried in standard mode.
However, you can get the list of servers by using Connection.getClientInfo() as well, but it is a two-step process. Paraphrased from h2database.com:
The list of properties returned by getClientInfo() includes a numServers property that returns the number of servers that are in the connection list. getClientInfo() also has properties server0..serverN, where N is the number of servers - 1. So to get the 2nd server from the list you use getClientInfo('server1').
Note: The serverX property only returns IP addresses and ports and not hostnames.
And before you say simple replication, yes that is default operation, but you can do more advanced things that are outside the scope of your question in clustered H2.
Here's the quote for what you're talking about:
Clustering can only be used in the server mode (the embedded mode does not support clustering). The cluster can be re-created using the CreateCluster tool without stopping the remaining server. Applications that are still connected are automatically disconnected, however when appending ;AUTO_RECONNECT=TRUE, they will recover from that.
So yes, if the cluster stops, auto_reconnect is not enabled, and you stick with the basic query, you are stuck and it is difficult to find information. While most people will tell you to look through the API and/or manual, they haven't had to look through this one, so my sympathies.
I find it way more useful to track through the error codes, because you get a real good idea of what you can do when you see how the failure is planned for ... here you go.

Is ElasticSearch safe to allow every node to join cluster?

ElasticSearch opens port 9300 for node-to-node communication, and every machine on the same network with the same cluster.name can auto-join this cluster?
I doubt it is safe to allow every node to join.
If not, do I need to set network.host to a fixed ip address? Or is there a better way?
It really depends on the networking stack of your nodes and how you interact with your cluster. If they are all running on a local network, inaccessible from the outside, then in general allowing other nodes to join freely is OK, since it means someone from inside your network is trying to join.
However, if your nodes have a public IP address, it's a good idea to change the default ports used, disable Zen multicast discovery, and give each node a list of the other nodes that are allowed to communicate with it.
Straight from the elasticsearch.yml file :
# 1. Disable multicast discovery (enabled by default):
#
discovery.zen.ping.multicast.enabled: false
#
# 2. Configure an initial list of master nodes in the cluster
# to perform discovery when new nodes (master or data) are started:
#
discovery.zen.ping.unicast.hosts: ["enter_ip_here","enter_other_ip:port","etc..."]
Note that these settings need to be the same on all nodes (except for the list of hosts, obviously) and a restart of the node is required for these to be taken into account.
Also, you can indeed set the network.host to a fixed IP. This IP should be the one appearing in the list of discovery.zen.ping.unicast.hosts.
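An added check for the three settings the answer lists (my code, not from the thread or from Elastic): a minimal audit that reads the flat keys out of an elasticsearch.yml and reports the weak cases, multicast discovery left enabled, no unicast host list, and network.host unset, bound to every interface, or absent from the unicast list. The setting names are the discovery.zen ones quoted in the answer, so the check only applies to versions that still have them; the WEAK and HARDENED configs and their addresses are constructed, and the parser handles only flat "key: value" lines and simple inline lists.

```python
"""Audit the discovery settings of an elasticsearch.yml for the weaknesses discussed above.

Added example. Setting names are the discovery.zen ones quoted in the answer;
WEAK and HARDENED are constructed configs, not files from any real deployment.
"""

WEAK = """
cluster.name: prod
# defaults left in place: multicast discovery on, no explicit peer list
network.host: 0.0.0.0
"""

HARDENED = """
cluster.name: prod
network.host: 10.0.0.11
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.11", "10.0.0.12:9300", "10.0.0.13:9300"]
"""

# Values of network.host that bind the transport to every interface rather than one IP.
ALL_INTERFACES = {"0.0.0.0", "::", "_global_"}


def parse_es_yml(text):
    """Return {key: value} for flat settings; comments and blank lines are skipped.

    Not a YAML parser: only 'key: value' lines and inline ["a", "b"] lists are understood.
    """
    settings = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")   # first colon ends the key; hosts may contain ':port'
        key, value = key.strip(), value.strip()
        if value.startswith("["):             # inline list of hosts
            value = [item.strip().strip("\"'") for item in value.strip("[]").split(",") if item.strip()]
        settings[key] = value
    return settings


def audit_discovery(settings):
    """Findings (strings) for a node that may be reachable from outside its trust boundary."""
    findings = []
    multicast = str(settings.get("discovery.zen.ping.multicast.enabled", "true")).lower()
    if multicast != "false":
        findings.append("multicast discovery enabled: any host on the segment with the same cluster.name can join")
    hosts = settings.get("discovery.zen.ping.unicast.hosts") or []
    if isinstance(hosts, str):
        hosts = [hosts]
    if not hosts:
        findings.append("discovery.zen.ping.unicast.hosts is empty: no explicit peer list")
    host = settings.get("network.host")
    if host is None or host in ALL_INTERFACES:
        findings.append(f"network.host={host}: not pinned to a fixed internal IP")
    elif hosts and not any(h.rsplit(":", 1)[0] == host for h in hosts):
        findings.append(f"network.host={host} does not appear in the unicast host list")
    return findings


if __name__ == "__main__":
    for name, text in (("weak", WEAK), ("hardened", HARDENED)):
        print(name, audit_discovery(parse_es_yml(text)) or ["ok"])
```

The last check is the answer's closing point: the address in network.host should be one of the addresses the other nodes are told to contact. Hosts given as bracketed IPv6 literals are not handled by the simple port split.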
