I am using DataStax Community Edition on two Windows PCs (64-bit and 32-bit respectively). After setting the initial configuration in cassandra.yaml, the OpsCenter web interface shows "1 of 2 agents connected" and recommends installing the OpsCenter agent.
Node 1 (ip: X.X.X.X) configuration:
cluster_name: Test Center
seeds: Y.Y.Y.Y
listen_address:
rpc_address: 0.0.0.0
endpoint_snitch: SimpleSnitch
num_tokens: 256
Node 2 (ip: Y.Y.Y.Y) configuration:
cluster_name: Test Center
seeds: X.X.X.X
listen_address:
rpc_address: 0.0.0.0
endpoint_snitch: SimpleSnitch
num_tokens: 256
The auto_bootstrap attribute was absent by default, so I didn't add it. As per the instructions, I first stopped the services, changed these settings, and then started them again.
Q1. Are there any settings I'm missing?
Thanks for your kind help.
Edited: From the X.X.X.X node, the status of the Y.Y.Y.Y node
You need to configure the datastax-agents so they know what machine OpsCenter is running on.
To do this you will need to edit the following line in address.yaml located in C:\Program Files\DataStax Community\opscenter\agent\conf.
stomp_interface:
If X.X.X.X is your opscenterd machine:
set stomp_interface: X.X.X.X for all nodes.
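As a concrete sketch, assuming X.X.X.X is the OpsCenter machine, address.yaml on every node would contain just this line (then restart the DataStax agent service so it picks up the change; the exact service name is whatever shows up in services.msc on your install):
stomp_interface: X.X.X.X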
You have made a mistake with the seeds. If these two nodes are part of the same cluster (and you've indicated that they both have the same cluster name, "Test Center"), then the seeds should be the same, not different. Set seeds: Y.Y.Y.Y on both nodes. Shut down both nodes, start Node 1, and once it is up start Node 2. Node 2 will get its settings from the seed (Node 1).
listen_address shouldn't be blank. Set it to the IP address of the interface that the node will be listening on. I am assuming these are physical machines.
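Putting both fixes together, here is a hedged sketch of the relevant cassandra.yaml values using the question's placeholder IPs (in the real file the seeds entry lives inside the seed_provider parameters):
Node 1 (X.X.X.X):
cluster_name: 'Test Center'
seeds: "Y.Y.Y.Y"
listen_address: X.X.X.X
rpc_address: 0.0.0.0
Node 2 (Y.Y.Y.Y):
cluster_name: 'Test Center'
seeds: "Y.Y.Y.Y"
listen_address: Y.Y.Y.Y
rpc_address: 0.0.0.0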
I have an issue with Elasticsearch: when I run the command php artisan index:ambassadors inside Docker, it gives me this exception.
**Exception : No alive nodes found in your cluster**
Here is my output.
Exception : No alive nodes found in your cluster
412/4119 [▓▓░░░░░░░░░░░░░░░░░░░░░░░░░░] 10%Exception : No alive nodes found in your cluster
824/4119 [▓▓▓▓▓░░░░░░░░░░░░░░░░░░░░░░░] 20%Exception : No alive nodes found in your cluster
1236/4119 [▓▓▓▓▓▓▓▓░░░░░░░░░░░░░░░░░░░░] 30%Exception : No alive nodes found in your cluster
1648/4119 [▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░░░░░░░░░] 40%Exception : No alive nodes found in your cluster
2472/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░░░░] 60%Exception : No alive nodes found in your cluster
2884/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░] 70%Exception : No alive nodes found in your cluster
3296/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░] 80%Exception : No alive nodes found in your cluster
3997/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░] 97%Exception : No alive nodes found in your cluster
4119/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓] 100%Exception : No alive nodes found in your cluster
Also I have an error message in my elasticsearch container logs.
Some logging configurations have %marker but don't have %node_name. We will automatically add %node_name to the pattern to ease the migration for users who customize log4j2.properties but will stop this behavior in 7.0. You should manually replace `%node_name` with `[%node_name]%marker ` in these locations:
/usr/share/elasticsearch/config/log4j2.properties.
Has anyone faced this issue before?
I already have one SonarQube instance running on port 9000 and can access it at localhost:9000.
Now I would like to run another SonarQube instance for my new project on port 10000. I changed these values in the sonar.properties file:
sonar.web.port: 10000
sonar.web.context: /
However, when I run C:\SonarMAP\bin\windows-x86-64\StartSonar.bat, I got the ERROR message:
wrapper | ERROR: Another instance of the SonarQube application is already running.
Press any key to continue . . .
I did some research on this problem but couldn't find any helpful information.
Any suggestions? Thanks!
UPDATE
The instance 1 configuration:
sonar.jdbc.username=username
sonar.jdbc.password=password
sonar.jdbc.url=jdbc:postgresql://server15/sonarQube
sonar.jdbc.driverClassName: org.postgresql.Driver
sonar.jdbc.validationQuery: select 1
sonar.jdbc.maxActive=20
sonar.jdbc.maxIdle=5
sonar.jdbc.minIdle=2
sonar.jdbc.maxWait=5000
sonar.jdbc.minEvictableIdleTimeMillis=600000
sonar.jdbc.timeBetweenEvictionRunsMillis=30000
The instance 2 configuration:
sonar.jdbc.username=username
sonar.jdbc.password=password
sonar.jdbc.url: jdbc:postgresql://localhost/sonarMAP
sonar.jdbc.driverClassName: org.postgresql.Driver
sonar.jdbc.validationQuery: select 1
sonar.jdbc.maxActive: 20
sonar.jdbc.maxIdle: 5
sonar.jdbc.minIdle: 2
sonar.jdbc.maxWait: 5000
sonar.jdbc.minEvictableIdleTimeMillis: 600000
sonar.jdbc.timeBetweenEvictionRunsMillis: 30000
sonar.web.port: 9100
sonar.web.context: /
sonar.search.port=9101
sonar.notifications.delay: 60
Apparently you can't run multiple instances on Windows because of wrapper.single_invocation=true in conf/wrapper.conf.
Setting it to false seems to allow this (you'll still have to use different ports, as Fabrice explained in his answer), but this is getting into a grey area: a non-recommended and untested setup.
You need to change other settings inside the conf/sonar.properties file, namely:
sonar.search.port: the port used by the embedded Elasticsearch
sonar.search.httpPort: if you enabled it on the first instance, you have to change it as well
And obviously the two instances can't connect to the same schema of the same database.
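For example, here is a hedged sketch of the settings that would have to differ on the second instance (the port numbers are purely illustrative, and the JDBC URL is the second schema from the question; each instance also needs its own installation directory so data/ and temp/ don't clash):
# conf/wrapper.conf (Windows service wrapper)
wrapper.single_invocation=false
# conf/sonar.properties
sonar.web.port=10000
sonar.search.port=10001
# sonar.search.httpPort=10002   (only needed if it was enabled on the first instance)
sonar.jdbc.url=jdbc:postgresql://localhost/sonarMAP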
I use ES 2.2.0 and have a cluster of nodes. I would like to know which node or nodes are the actual masters. How can I do that?
I tried the following ways:
curl http://my_computer:9200/_cluster/state?pretty
curl http://my_computer:9200/_nodes?pretty
but I was unable to determine which node is the master.
There is only ever one single master in a cluster, chosen among the set of master-eligible nodes.
You can either run the /_cat/master command or the /_cat/nodes command.
The former will yield something like this
% curl 'localhost:9200/_cat/master?v'
id ip node
Ntgn2DcuTjGuXlhKDUD4vA 192.168.56.30 Solarr
and the latter command will yield the list of nodes with the master column (m for short). Nodes with m are master-eligible nodes and the one with the * is the current master.
% curl '192.168.56.10:9200/_cat/nodes?v&h=id,ip,port,v,m'
id ip port version m
pLSN 192.168.56.30 9300 2.2.0 m
k0zy 192.168.56.10 9300 2.2.0 m
6Tyi 192.168.56.20 9300 2.2.0 *
It isn't nodes that are primary, but shards. Check out https://www.elastic.co/guide/en/elasticsearch/reference/2.2/cat-shards.html
You can try something like: http://my_computer:9200/_cat/shards?v
With respect to Elasticsearch 6.6, this is how you can get the id of the master_node
curl -X GET "192.168.0.1:9200/_cluster/state/master_node?pretty"
{
"cluster_name" : "logbox",
"compressed_size_in_bytes" : 11150,
"cluster_uuid" : "eSpyTgXbTJirTjWtPW_HYQ",
"master_node" : "R8Gn9Km0T92H9D7TXGpX4k"
}
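If you also want the human-readable name behind that id, one hedged option (same illustrative IP as above) is to request the nodes metric of the cluster state alongside master_node and look the id up in the nodes section of the response:
curl -X GET "192.168.0.1:9200/_cluster/state/master_node,nodes?pretty"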
I set everything up according to the tutorial at http://funkload.nuxeo.org/monitoring.html, started the monitor server, ran a bench test, and built the report. But the report contains no monitoring graphs... Any idea? I am using the credential server as well, and that was and still is working correctly... it's just that after I added the monitoring configuration, nothing seems to change...
monitor.conf
[server]
host = localhost
port = 8008
interval = .5
interface = eth0
[client]
host = localhost
port = 8008
my_test.conf:
[main]
title= some title
description= some descr
url=http://localhost:8000
... some other not important lines here
[monitor]
hosts=localhost
[localhost]
port=8008
description=The benching machine
Use
sudo easy_install -f http://funkload.nuxeo.org/snapshots/ -U funkload
instead of just
pip install funkload
It looks like pip has an old, broken version of funkload.
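If funkload was already installed with pip, a hedged cleanup sequence (assuming pip and easy_install target the same Python environment) is to remove the old package first and then pull the snapshot build before re-running the bench and rebuilding the report:
pip uninstall funkload
sudo easy_install -f http://funkload.nuxeo.org/snapshots/ -U funkload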
I have installed net-snmp 5.7.2 on my system. I have written an app_agent.conf for my application containing
agentXSocket udp:X.X.X.X:1610
and exported SNMPCONFIGPATH=path_to_app_agent.conf.
I have also written snmpd.conf in /usr/etc/snmp/snmp.conf with:
trap2sink X.X.X.Y
agentXSocket udp:X.X.X.X:1610
I have two more snmpd.conf present in my /etc/snmp/ and /var/net-snmp/
Config from /etc/snmp:
com2sec notConfigUser default public
com2sec notConfigUser v1 notConfigUser
com2sec notConfigUser v1 notConfigUser
view systemview included .1.3.6.1.2.1.1
view systemview included .1.3.6.1.2.1.25.1.1
access notConfigGroup "" any noauth exact systemview none none
pass .1.3.6.1.4.1.4413.4.1 /usr/bin/ucd5820stat
Config from /var/net-snmp:
setserialno 1322276014
ifXTable .1 14:0 18:0x $
ifXTable .2 14:0 18:0x $
ifXTable .3 14:0 18:0x $
engineBoots 14
oldEngineID 0x80001f888000e17f6964b28450
I have started snmpd and snmptrapd. Now in my code I am calling
netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_ROLE, 1);
init_agent("app_agent");
init_snmp("app_agent");
init_snmp is throwing a warning
Warning: Failed to connect to the agentx master agent ([NIL]):
I have no idea why. Thanks in advance for any help.
This is basically saying that the sub-agent you wrote failed to connect to the Net-SNMP master agent, as the message suggests. On Linux, AgentX will by default attempt to make the connection via the Unix socket /var/agentx/master. The following hints might help:
Run your sub-agent with privileges that give it access to the socket, e.g. via sudo.
Check the socket settings in your snmpd.conf (its location varies) if they are not already specified, such as agentXSocket /var/agentx/master and agentXPerms 777 777.
Restart Net-SNMP for any change to take effect with sudo service snmpd restart; or, alternatively, stop the service with sudo service snmpd stop and run an instance in debugging mode with snmpd -f -Lo -Dagentx, which will most likely output useful information about the sub-agent connection.
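For reference, a hedged sketch of the snmpd.conf lines that enable the AgentX master side over the default Unix socket (the file's location varies by distribution, and the wide-open permissions are only meant for testing):
master agentx
agentXSocket /var/agentx/master
agentXPerms 777 777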
I ran into this problem just now with quagga and ospfd. After running strace -f -p PID on the process, I noticed this in the output:
connect(14, {sa_family=AF_FILE, path="/var/agentx/master"}, 110) = -1 EACCES (Permission denied)
so I:
$ ls -al /var/agentx/
total 8
drwx------ 2 root root 4096 Sep 12 20:50 .
drwxr-xr-x. 27 root root 4096 Sep 12 20:13 ..
srwxrwxrwx 1 root root 0 Sep 12 20:50 master
and then I:
$ chmod 755 /var/agentx/
and immediately zebra and ospfd had their AgentX subagents connect.
$ tail -10f /var/log/quagga/zebra.log
2014/09/12 20:52:59 ZEBRA: snmp[info]: NET-SNMP version 5.5 AgentX subagent connected
$ tail -10f /var/log/quagga/ospfd.log
2014/09/12 20:52:59 OSPF: snmp[info]: NET-SNMP version 5.5 AgentX subagent connected
This is running quagga-0.99.23-2014062401 on RHEL6. Hope this helps.
I had a similar problem; whether it was with the Unix socket or tcp:localhost:750, I was still getting the same error message:
/var/log/quagga/ospfd.log: warning, failed to connect to Master AgentX [nill] or [tcp:localhost:750].
I resolved the issue by disabling SELinux.
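For completeness, a hedged sketch of how to confirm SELinux is the culprit on a RHEL-style system and relax it temporarily (setenforce 0 only lasts until reboot; a permanent change means editing /etc/selinux/config, and a targeted policy fix is usually preferable to disabling SELinux outright):
getenforce           # prints Enforcing / Permissive / Disabled
sudo setenforce 0    # switch to permissive mode until the next reboot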
This is not the answer to your problem, but I too got the "Warning: Failed to connect to the agentx master agent ([NIL]):" message when my snmpd service didn't start up properly or went down. For my SNMP sub-agent I used the example they provide, example-demon.c, and found that I got this message non-stop (about every second) when calling agent_check_and_process(0) on every loop iteration.
while (true) {
    agent_check_and_process(0); /* 0 == don't block */
}
This is how I fixed it.
bool snmpAgentDown = false;   // tracks whether the master agent (snmpd) is currently down
int i = 0;                    // loop counter from the original example
netsnmp_transport *snmpTransport;

while( true ) {
    // Check to see if snmpd is still running
    snmpTransport = netsnmp_transport_open_client("agentx", NULL);
    if (snmpTransport == NULL)
    {
        // Just went down?
        if (snmpAgentDown == false)
        {
            snmp_log( LOG_INFO, "Net-SNMP Agent is down\n" );
            snmpAgentDown = true;
        }
        Sleep(5000); // Sleep for 5 sec
    } else
    {
        if (snmpAgentDown)
        {
            snmp_log( LOG_INFO, "Net-SNMP Agent is back up\n" );
            snmpAgentDown = false;
        }
        // Close the test connection
        snmpTransport->f_close(snmpTransport); // This burned me when omitted; it's needed
        netsnmp_transport_free(snmpTransport);
        // Process SNMP requests and notifications
        agent_check_and_process( 0 ); // 0 == don't block, 1 == block
        Sleep(1); // Sleep for 1 ms; need to yield the thread, but keep the sub-agent responsive
    }
    i++;
}
Now if snmpd goes down, my app can detect it and skip agent_check_and_process(), stopping the "Warning: Failed to connect to the agentx master agent ([NIL]):" message from ever appearing. If snmpd comes back up, processing resumes.
Final note: I based that code on the subagent_open_master_session() function in the subagent.c file of the net-snmp-5.7.2 package. snmpTransport->f_close(snmpTransport) is also needed; I determined that by following what snmp_close() does at the end of subagent_open_master_session().
As the Net-SNMP sub-agent is sometimes unable to read the address of the master agent from the configuration file, you can also try setting it explicitly:
/* set the location of master agent */
netsnmp_ds_set_string(NETSNMP_DS_APPLICATION_ID,
NETSNMP_DS_AGENT_X_SOCKET, "udp:X.X.X.X:1610");
Put these lines in the AgentX sub-agent code before calling init_agent().
I solved this problem on Ubuntu 17.07 with the following steps.
Change the view configuration (add a line):
view systemview included .1.3.6.1.2.1.1
view systemview included .1.3.6.1.2.1.2
view systemview included .1.3.6.1.2.1.25.1.1
instead of
view systemview included .1.3.6.1.2.1.1
view systemview included .1.3.6.1.2.1.25.1.1
Add a new line, master agentx, to /etc/snmpd.conf.
Restart the snmpd daemon:
sudo /etc/init.d/snmpd restart or sudo service snmpd restart