I have a problem with Centreon when I try to add a poller. I haven't touched the gorgone file on the central server.
Error message:
I reinstalled the poller and still have the same problem; I think it comes from the server.
The gorgoned service is running on both machines.
You might also have your firewall or SELinux still up and blocking traffic.
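If that is the case, a quick way to check on a RHEL-family system looks something like this (a sketch; TCP port 5556 is an assumption based on the default gorgone ZMQ setup):
# check whether firewalld is running and open the gorgone port
systemctl status firewalld
firewall-cmd --permanent --add-port=5556/tcp && firewall-cmd --reload
# check SELinux; permissive mode is only for testing
getenforce
setenforce 0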
Did your poller install complete or is it stuck at the part where you have to copy the gorgone config to the CLI?
Do you already have the gorgone config file /etc/centreon-gorgone/config.d/40-gorgoned.yaml on your poller?
I configured the Logstash service following the instructions at https://www.elastic.co/guide/en/logstash/current/running-logstash-windows.html (Logstash as a service using NSSM), but I noticed that the service stops running when I disconnect from the remote server where I installed it.
Is there a way to fix this problem?
thanks,
g
The same thing also happens when running Logstash manually (I mean, running the appropriate .bat file in a command prompt).
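For reference, the NSSM setup from that guide boils down to something like the following (a sketch; the install paths and pipeline config file are assumptions, not taken from the original post):
nssm install logstash "C:\logstash\bin\logstash.bat" "-f C:\logstash\config\pipeline.conf"
nssm set logstash AppDirectory "C:\logstash\bin"
nssm start logstash
A service installed this way runs under the service control manager rather than your interactive session, so it should survive disconnecting from the server; if it still stops, checking the service's log-on account in services.msc would be a next step.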
I have a virtual machine that is supposed to be the host, which can receive and send data. The first picture is the error that I'm getting on my main machine (the one I'm trying to send data from). The second picture is the Mosquitto log on my virtual machine. Also, I'm using the default config, which as far as I know can't cause these problems, at least from what I have seen in other examples. I have very little understanding of how all of this works, so any help is appreciated.
What I have tried on the host machine:
Disabling Windows defender
Adding firewall rules for "mosquitto.exe"
Installing mosquitto on a linux machine
Starting with the release of Mosquitto version 2.0.0 (you are running v2.0.2), the default config will only bind to localhost, as a move to a more secure default posture.
If you want to be able to access the broker from other machines you will need to explicitly edit the config files to either add a new listener that binds to the external IP address (or 0.0.0.0) or add a bind entry for the default listener.
By default it will also only allow anonymous connections (without username/password) from localhost; to allow anonymous connections from remote clients, add:
allow_anonymous true
More details can be found in the Mosquitto 2.0 release notes.
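Putting the two changes together, a minimal config for remote access might look like this (binding to 0.0.0.0 is just an example; a specific interface address also works):
listener 1883 0.0.0.0
allow_anonymous true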
You have to run with
mosquitto -c mosquitto.conf
mosquitto.conf, which lives in the same folder as the executable (C:\Program Files\mosquitto etc.), has to include the following line:
listener 1883 ip_address_of_the_machine (192.168.1.1 etc.)
By default, the Mosquitto broker will only accept connections from clients on the local machine (the server hosting the broker).
Therefore, a custom configuration needs to be used with your instance of Mosquitto in order to accept connections from remote clients.
On your Windows machine, run a text editor as administrator and paste the following text:
listener 1883
allow_anonymous true
This creates a listener on port 1883 and allows anonymous connections. By default the number of connections is infinite. Save the file to "C:\Program Files\Mosquitto" using a file name with the ".conf" extension such as "your_conf_file.conf".
Open a terminal window and navigate to the mosquitto directory. Run the following command:
mosquitto -v -c your_conf_file.conf
where
-c : specify the broker config file.
-v : verbose mode - enable all logging types. This overrides
any logging options given in the config file.
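Once the broker is running with that config, you can check remote access from another machine using the bundled client tools (a sketch; the broker address is an example):
mosquitto_sub -h 192.168.1.1 -t test -v
mosquitto_pub -h 192.168.1.1 -t test -m "hello"
If the subscriber prints the message, the listener and anonymous access are working.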
I found I had to add not only bind_address ip_address but also allow_anonymous true before devices could connect successfully to MQTT. Of course, I understand that a better option would be to set a user and password on each device, but that's a next step after everything actually works in the minimum configuration.
For those who use mosquitto with homebrew on Mac.
Adding these two lines to /opt/homebrew/Cellar/mosquitto/2.0.15/etc/mosquitto/mosquitto.conf fixed my issue.
allow_anonymous true
listener 1883
You can run it with the included 'no-auth' config file like so:
mosquitto -c /mosquitto-no-auth.conf
I had the same problem while running it inside a Docker container (generated with docker-compose).
In the docker-compose.yml file this is done with:
command: mosquitto -c /mosquitto-no-auth.conf
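For context, a minimal docker-compose.yml along those lines might look like this (the image tag and port mapping are assumptions):
services:
  mosquitto:
    image: eclipse-mosquitto:2
    command: mosquitto -c /mosquitto-no-auth.conf
    ports:
      - "1883:1883"
The /mosquitto-no-auth.conf file ships inside the eclipse-mosquitto image, so no extra volume mount is needed for this test setup.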
I'm trying to get an SFTP connection working in PhpStorm. It works fine in FileZilla.
In the SSH config section of the SFTP config, I enter the host, username, and auth type (password) and click Test Connection. It connects fine.
If I click OK, go a dialog level back, and click Test Connection on the main SFTP config, I get "Connection to dev.the-server.net failed. EOF while reading packet". Likewise, when I close the SFTP config dialog, there is an "EOF while reading packet" error where a directory listing should be.
If I use the same credentials and connect by FTPS, I can get a remote directory listing and download files, but I get the end-of-file error when trying to upload.
This all seems to be a PhpStorm issue, because I can upload and download fine with FileZilla. For workflow reasons, I really need PhpStorm to connect.
Any thoughts on where to start?
Images of the SFTP dialog:
Main SFTP config
SSH section of SFTP config
Restarting PhpStorm helped to solve the problem.
For me, it failed because sftp-server was configured with the wrong path in sshd_config, and this link saved me. So:
Find the correct path of sftp-server (the whereis sftp-server command may help), e.g. /usr/libexec/sftp-server.
Set the correct path in sshd_config (most likely in /etc/ssh/sshd_config), e.g. Subsystem sftp /usr/libexec/sftp-server.
Restart the sshd server (possibly /etc/init.d/sshd restart or /usr/sbin/sshd restart); see the combined sketch after these steps.
Restart your IDE.
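Combined, the server-side fix might look like this (a sketch; the sftp-server path varies by distribution, so check the whereis output first):
whereis sftp-server
# edit /etc/ssh/sshd_config so the Subsystem line points at the reported path, e.g.:
#   Subsystem sftp /usr/libexec/sftp-server
sudo vi /etc/ssh/sshd_config
sudo systemctl restart sshd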
In my case, I changed the SSH settings, restarted the SSH service, and restarted PhpStorm, but it didn't help. When I restarted the whole server and tried again, it started working again.
In my case, openssh-server wasn't installed on the server. You can try these commands:
sudo apt-get install openssh-server
sudo systemctl enable ssh
sudo systemctl start ssh
I have just upgraded to Kafka 1.0 and ZooKeeper 3.4.10. At first, it all started fine. A standalone producer and consumer worked as expected. After running my code for about 10 minutes, Kafka fails with this error:
[2017-11-07 16:48:01,304] INFO Stopping serving logs in dir C:\Kafka\kafka_2.12-1.0.0\kafka-logs (kafka.log.LogManager)
[2017-11-07 16:48:01,320] FATAL Shutdown broker because all log dirs in C:\Kafka\kafka_2.12-1.0.0\kafka-logs have failed (kafka.log.LogManager)
I have reinstalled and reconfigured Kafka 1.0, and the same thing happened. If I try to restart, the same error occurs.
Deleting the log files helps Kafka start, but it fails again after a short run.
I had been running version 0.10.2 for a long while and never encountered anything like this; it was very stable over long periods of time.
I have tried to find a solution and followed instructions in the documentation.
This is not yet a production environment, it is fairly simple setup, one producer, one consumer reading from one topic.
I am not sure if this could have anything to do with zookeeper.
**Update:** the issue has been posted on the Apache JIRA board.
The consensus so far seems to be that it is a Windows issue.
Ran into this issue as well, and only clearing the kafka-logs did not work. You'll also have to clear ZooKeeper's data.
Steps to resolve:
Make sure to stop zookeeper.
Take a look at your server.properties file and locate the logs directory under the following entry.
Example:
log.dirs=/tmp/kafka-logs/
Delete the log directory and its contents. Kafka will recreate the directory once it's started again.
Take a look at the zookeeper.properties file and locate the data directory under the following entry.
Example:
dataDir=/tmp/zookeeper
Delete the data directory and its contents. Zookeeper will recreate the directory once it's started again.
Start zookeeper.
<KAFKA_HOME>/bin/zookeeper-server-start.sh -daemon <KAFKA_HOME>/config/zookeeper.properties
Start the Kafka broker.
<KAFKA_HOME>/bin/kafka-server-start.sh -daemon <KAFKA_HOME>/config/server.properties
Verify the broker has started with no issues by looking at the logs/kafkaServer.out log file.
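To verify end to end, you can create and list a test topic (a sketch; the --zookeeper flag matches Kafka 1.0, where topic management still went through ZooKeeper):
<KAFKA_HOME>/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
<KAFKA_HOME>/bin/kafka-topics.sh --list --zookeeper localhost:2181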
I've tried all the solutions, like:
Clearing the Kafka logs and ZooKeeper data (the issue reoccurred after creating a new topic).
Changing the log.dirs path from forward slash "/" to backslash "\" (like log.dirs=C:\kafka_2.12-2.1.1\data\kafka): a folder named C:\kafka_2.12-2.1.1\kafka_2.12-2.1.1datakafka was created (backslashes act as escape characters in properties files, hence the mangled name), the issue did stop, and the problem was resolved.
Finally I found this link; you'll get it if you google "kafka log.dirs windows".
Just clean the logs in C:\Kafka\kafka_2.12-1.0.0\kafka-logs and restart kafka
If you are trying to run on a Windows machine, try changing the log.dirs parameter to a Windows-style path (like log.dirs=C:\some_path\some_path_kafLogs) in server.properties in the /config folder.
By default, this path is written the Unix way (like /unix/path/).
This worked for me on a Windows machine.
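For reference, a server.properties sketch for Windows; because backslashes act as escape characters in Java properties files (which is what produces the mangled folder described above), they either need to be doubled, as another answer below notes, or replaced with forward slashes, which Java also accepts on Windows:
# either escape the backslashes...
log.dirs=C:\\kafka\\kafka-logs
# ...or use forward slashes
log.dirs=C:/kafka/kafka-logs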
So this seems to be a Windows issue.
https://issues.apache.org/jira/browse/KAFKA-6188
The JIRA is resolved, and there is an unmerged patch attached to it.
https://github.com/apache/kafka/pull/6403
So your options are:
get it running on Windows by building it with the patch
run it on a Unix-style filesystem (Linux or Mac)
perhaps running it in Docker on Windows is worth a shot
The problem is concurrent access to Kafka's log files: external changes to the log files have to be delayed until all Kafka threads are done with them.
Topic configuration can help:
import java.util.HashMap;
import java.util.Map;

import static org.apache.kafka.common.config.TopicConfig.*;

Map<String, String> config = new HashMap<>();
config.put(CLEANUP_POLICY_CONFIG, CLEANUP_POLICY_COMPACT); // compact the log instead of deleting segments
config.put(FILE_DELETE_DELAY_MS_CONFIG, "3600000");        // wait 1 hour before deleting a file
config.put(DELETE_RETENTION_MS_CONFIG, "864000000");       // keep delete tombstones for 10 days
config.put(RETENTION_MS_CONFIG, "86400000");               // retain data for 1 day
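These settings would typically be applied when the topic is created, for instance via the AdminClient (a sketch; the topic name, partition and replication counts, and adminProps are placeholders):
import java.util.Collections;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

// hypothetical usage: create a topic carrying the config above
NewTopic topic = new NewTopic("my-topic", 1, (short) 1).configs(config);
try (AdminClient admin = AdminClient.create(adminProps)) {
    admin.createTopics(Collections.singletonList(topic)).all().get();
}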
What worked for me was deleting both the Kafka and ZooKeeper log directories, then changing the log directory paths in Kafka's server.properties and ZooKeeper's zookeeper.properties (both found in the kafka/config folder) from the usual slash '/' to a backslash '\'.
On Windows, changing the path separators resolved the issue; each separator required a double backslash, e.g. C:\\path\\logs.
Simply delete all the logs from :
C:\tmp\kafka-logs
and restart ZooKeeper and the Kafka server.
I have the following issue: I deleted the guest user and now I basically can't log in to RabbitMQ using guest/guest.
I have tried the following things:
1) I uninstalled the service and installed it again. It didn't work. It worked for some other colleagues, but not for me. I have tried to uninstall/install nearly 10 times. I can see the service disappearing and being re-added. Still no success.
2) Config file. The config file resides in the following location :
C:\Program Files (x86)\RabbitMQ Server\rabbitmq_server-3.5.1\etc\rabbitmq.
I removed the comment from the following setting:
It was like that
%% {loopback_users, []},
and now it is
{loopback_users, []},
Still no success. I restart the service every time I modify the config file, but still no luck. It was working before I deleted the guest user. Does anyone know what else I could do, or what I am doing wrong?
Thanks!!
Try again but with the config file found in:
C:\Users\<<Username>>\AppData\Roaming\RabbitMQ
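For reference, in the classic Erlang-term rabbitmq.config format used by 3.5.x, the uncommented entry has to sit inside the rabbit section for the file to parse, roughly like this (a minimal sketch):
[
  {rabbit, [
    {loopback_users, []}
  ]}
].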
For environment changes to take effect on Windows, the service must be re-installed. It is not sufficient to restart the service.
You can re-install from Start Menu > RabbitMQ Server > RabbitMQ Service - re(install)
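Another possible angle, since the root cause was deleting the guest user: rabbitmqctl authenticates via the Erlang cookie rather than a RabbitMQ user account, so it can recreate the user even when you cannot log in (a sketch; run from the RabbitMQ sbin directory, rabbitmqctl.bat on Windows):
rabbitmqctl add_user guest guest
rabbitmqctl set_user_tags guest administrator
rabbitmqctl set_permissions -p / guest ".*" ".*" ".*"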