Gearman is not writing anything to DB - amazon-ec2

I'm new to Gearman and cannot figure out why it isn't writing anything to the DB.
I've created new EC2 and RDS instances for Gearman. The RDS engine version is MySQL 5.7.19.
On the EC2 instance I ran:
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && yum install gearmand -y
Then I created the config file:
vi /etc/sysconfig/gearmand
Which contains:
### Settings for gearmand
OPTIONS="--port=4730 --queue-type=MySQL --mysql-host=path_to_amazon_RDS_instance --mysql-port=3306 --mysql-user=root --mysql-password='dbpass' --mysql-db=db_prod --mysql-table=queue_dev --verbose DEBUG --log-file=/var/log/gearmand.log"
After I started the gearmand service and connected to the MySQL database on RDS, I could see that Gearman had created the MySQL table queue_dev, so I assume there is no error in the connection or access.
The log file doesn't show any ERROR-level messages.
Can anyone help me, or hint at what else must be done so that Gearman writes messages to the DB, or how I can send a test message to the DB?

gearmand does not persist non-background jobs at all.
Only requests of the following types will be persisted:
SUBMIT_JOB_BG
SUBMIT_JOB_HIGH_BG
SUBMIT_JOB_LOW_BG
See the Persistent Queues documentation:
Persistent queues were added to allow background jobs to be stored in an external durable queue so they may live between server restarts and crashes.
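As a quick sanity check, you could submit a background job from the EC2 host with the gearman command-line client and then look at the queue table; the function name test_fn and the payload below are just placeholders:
# -b submits a background job, which is the only kind gearmand persists
gearman -h 127.0.0.1 -p 4730 -f test_fn -b "hello persistent queue"
# with no worker registered for test_fn, the job stays queued, so the row should remain visible
mysql -h path_to_amazon_RDS_instance -u root -p db_prod -e "SELECT * FROM queue_dev;"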

Related

Clickhouse server error - org.freedesktop.PolicyKit1

I am getting this error when I try to restart my ClickHouse server:
Failed to start clickhouse-server.service: The name org.freedesktop.PolicyKit1 was not provided by any .service files
See system logs and 'systemctl status clickhouse-server.service' for details.
Upon further inspection of the server, we noticed that the log directory was full. After flushing the logs, the ClickHouse server restarted normally. But the error message did not cite the actual problem, so what is this error actually pointing to? Please enlighten me.
org.freedesktop.PolicyKit1 is like sudo, but for systemd; it needs to be available for systemd operations to work. I resolved it by switching to the superuser on the EC2 instance:
sudo su
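As a rough sketch of the full recovery path described above, assuming the default log location /var/log/clickhouse-server (adjust to your install):
# as root (after sudo su), check whether the volume holding the logs is full
df -h /var/log
du -sh /var/log/clickhouse-server
# after freeing space in the log directory, restart and verify
systemctl restart clickhouse-server
systemctl status clickhouse-server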

How to safely fix an AWOL ambari system user?

I'm a student working on a test cluster of around 25 hosts. We installed using Ambari and have FreeIPA running on one host as a DNS and LDAP server; the rest are typical Hadoop infrastructure. Hive was failing, and I wondered whether the DB connection parameters used during the Ambari installation were incorrect, so I tried to find a way to re-run the DB connection setup. I didn't get anywhere and it was late, so I left it with the Ambari interface still working.
The next morning, the Ambari web UI seems to be down. I thought that maybe the web server needed to be restarted, so I tried the following:
[akidd@dw ~]$ sudo ambari-server start
Using python /usr/bin/python
Starting ambari-server
ERROR: Exiting with exit code 1.
REASON: Unable to detect a system user for Ambari Server.
- If this is a new setup, then run the "ambari-server setup" command to create the user
- If this is an upgrade of an existing setup, run the "ambari-server upgrade" command.
Refer to the Ambari documentation for more information on setup and upgrade.
Can anyone help me to understand what could have happened?
If I run ambari-server setup, will the existing cluster be OK, assuming I configure everything like for like with how it was originally?
Thanks for your help!
@user3535074 You should try to start it with the user that installed it.
If you do run ambari-server setup as the current user, remember to choose No for the following options:
Customize user account for ambari-server daemon [y/n] (n)? n
Do you want to change Oracle JDK [y/n] (n)? n
Enter advanced database configuration [y/n] (n)? n
More info in the following post, including how to back up the Ambari database before running setup again:
https://community.cloudera.com/t5/Support-Questions/Ambari-server-failed-to-start-after-system-reboot-Below-is/td-p/203806
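For example, on a default install that uses the embedded PostgreSQL database (the ambari database and user names below are the usual defaults and may differ on your cluster), the backup-then-setup flow could look like this:
# back up the Ambari database before re-running setup (password will be prompted)
pg_dump -U ambari ambari > /tmp/ambari_backup.sql
# re-run setup, answering "n" to the prompts listed above
sudo ambari-server setup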

Cannot produce events to Confluent Kafka deployed on AWS EC2 from local machine

I'm trying to connect from an external client (my laptop) to a broker in a Kafka cluster that I have running on EC2 machines. When I try to connect from my local machine, I get the following error:
$ ./kafka-console-producer --broker-list AWS.PRIV.ATE.IP:9092 --topic test
>hi
>[2018-09-20 13:28:53,952] ERROR Error when sending message to topic test with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1519 ms has passed since batch creation plus linger time
The topic exists, because if I run the following (from my local machine) it shows up in the list:
$ ./kafka-topics --list --zookeeper AWS.PRIV.ATE.IP:2181
__confluent.support.metrics
__consumer_offsets
_schemas
connect-configs
connect-offsets
connect-status
test
The cluster configuration is from Confluent's AWS quickstart template: https://github.com/aws-quickstart/quickstart-confluent-kafka/blob/master/templates/confluent-kafka.template and I'm running the open source version.
The three broker ec2 instances are visible to my local machine, which I verified by stopping the Kafka broker, starting a simple HTTP server on port 9092, and successfully curling that server using the internal IP address of the ec2 instance.
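For reference, that reachability check was roughly the following (the IP is the same placeholder used above):
# on the broker instance, with Kafka stopped, listen on 9092
python -m SimpleHTTPServer 9092   # or: python3 -m http.server 9092
# from the local machine
curl http://AWS.PRIV.ATE.IP:9092/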
If I ssh into one of the broker instances I can successfully produce and consume messages across the cluster. The only update I've made to the out-of-the-box configuration provided by the template is changing listeners=PLAINTEXT://ec2-AWS-PUB-LIC-IP.compute-1.amazonaws.com:9092 in server.properties on each machine and then restarting the Kafka server.
I can provide more configuration or debugging info if necessary. I believe the issue is something to do with IP address discoverability/visibility, but I'm not entirely sure what.
You need to set advertised.listeners too.
See https://rmoff.net/2018/08/02/kafka-listeners-explained/ for details.
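A minimal sketch of the relevant server.properties entries for this setup, assuming a single PLAINTEXT listener and the public hostname placeholder from the question:
# bind locally on all interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# hostname returned to clients in metadata; must be resolvable and reachable from the client machine
advertised.listeners=PLAINTEXT://ec2-AWS-PUB-LIC-IP.compute-1.amazonaws.com:9092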

How to install Redis Sentinel as a Windows service?

I am trying to set up Redis Sentinel as a Windows service on an Azure VM (IaaS).
I am using the MS OpenTech port of Redis for Windows and running the following command...
redis-server --service-install --service-name rdsent redis.sentinel.conf --sentinel
This command installs the service on my system, but when I try to start it, either through the Services control panel or with the following command...
redis-server --service-run --service-name rdsent redis.sentinel.conf --sentinel
Then the service fails to start with the following error...
HandleServiceCommands: system error caught. error code=1063, message = StartServiceCtrlDispatcherA failed: unknown error
Am I missing something here?
Can someone please help me get this service to start and work properly?
I had the same problem, and mine was related to my sentinel config. A number of articles I have found have some incorrect examples, so my service install would not work until the configuration was correct. Anyway, here is what you need at a minimum for your sentinel config (for Windows Redis 2.8.17):
sentinel monitor <name of redis cache> <server IP> <port> 2
sentinel down-after-milliseconds <name of redis cache> 4000
sentinel failover-timeout <name of redis cache> 180000
sentinel parallel-syncs <name of redis cache> 1
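For example, with illustrative values (a master registered as mymaster on 10.0.0.4:6379 and a quorum of 2):
sentinel monitor mymaster 10.0.0.4 6379 2
sentinel down-after-milliseconds mymaster 4000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1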
Once you have that setup, the original Redis service command above will work.
According to MSOpenTech, the following command should install Redis Sentinel as a service:
redis-server --service-install --service-name Sentinel1 sentinel.1.conf --sentinel
But when I used that command the installed service wouldn't start: it would immediately fail with error 1067, "The process terminated unexpectedly." Looking at the service entry, I'm guessing the problem is that the --service-name parameter isn't being filtered out and ends up as part of the service executable path.
What I did find to work is installing the service manually with the SC command:
SC CREATE Sentinel1 binpath= "\"C:\Program Files\Redis\redis-server.exe\" --service-run sentinel.1.conf --sentinel"
Don't forget the required space after "binpath=", and obviously that path will have to reflect where you've installed redis-server.exe. Also, after the service was installed, I edited the service entry so Redis Sentinel would run under the Network Service account.
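If you prefer the command line over the services console, switching the account to Network Service can be sketched like this (the service name Sentinel1 is assumed from above):
SC CONFIG Sentinel1 obj= "NT AUTHORITY\NetworkService"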
I am using v3.0.501 and ran into the two issues below. While present, they caused the service to fail on start without an error being written to either the file log or the Event Log.
The configuration file must be the last parameter on the command line. If another parameter was last, such as --service-name, it would run fine when invoked from the command line but would consistently fail when started as a service.
Since the service installs as Network Service by default, ensure that the account has access to the directory where the log file will be written (see the sketch after this answer).
Once these two items were accounted for, Redis as a service ran smooth as silk.
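For the second point, a sketch of granting the Network Service account modify rights on the log directory (the path is just an example):
icacls "C:\Redis\logs" /grant "NT AUTHORITY\NETWORK SERVICE":(OI)(CI)M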
Recently, I found a way to set up Windows services for Redis and Sentinel.
During my setup, I encountered a similar problem. I finally figured it out: it was caused by the configuration file path.
I have put all my configuration into my github project: https://github.com/dingyuliang/Windows-Redis-Sentinel-Config

Informatica : Not able to connect to Integration Service

I am new to Informatica and looking for assistance from the experts here.
I am able to log in to the admin console (http://localhost:6008/administrator/#admin), where I can see that my node, my Repository Service, and my Integration Service are all available.
Through the PowerCenter Designer tool, I am able to view my mappings, and I am also able to connect to the PowerCenter Workflow Manager. However, when trying to execute my workflow, it says that it cannot connect to the Integration Service.
I am getting the following error in the log:
CCM_10322
The following error occurred while logging to Log Service: [[DOM_10022] The master gateway node for the domain is not available.
Electing another master gateway. Wait for the election of the master gateway node to complete.
If the problem persists, verify that the master gateway node is running.].
Thanks,
Manish.
Prerequisites:
1) Once the repository files have been restored, make sure all services (Repository and Integration Services) are up.
2) Make sure the INFA_DOMAIN_FILE environment variable has been created and has the right values, and that the file's location has been added to PATH on both the server and client machines.
Solution: As a last resort, update the domain info on the server and client machines as shown below, then restart infaservices and the server.
Update the domain info:
On the server:
cd $INFA_HOME/isp/bin
sh infacmd.sh updateGatewayInfo -dn Domain_name -dg Servername.net:6005
On the client:
cd e:\Informatica\9.0.1\clients\PowercenterClient\CommandLineUtilities\Pc\server\bin\
infacmd updateGatewayInfo -dn Domain_name -dg servername.net:6005
Restart the server:
cd $INFA_HOME/tomcat/bin
sh infaservices.sh shutdown
sh startserver.sh shutdown
sh startserver.sh startup
sh infaservices.sh startup
Done. Now check!!!
You must define one node as the master gateway using infacmd or infasetup; then you can run all services from the admin console.
