Why does HAWQ initialization fail with "sync hawq-site.xml failed"?

I built apache-hawq-src-2.3.0.0 from source, and the install was successful. But when I init HAWQ with "bin/hawq init master", I get:
20180830:13:44:15:084023 hawq_init:office-hadoop02:web-[INFO]:-Check: hawq_master_address_host is set
20180830:13:44:15:084023 hawq_init:office-hadoop02:web-[INFO]:-Check: hawq_master_address_port is set
20180830:13:44:15:084023 hawq_init:office-hadoop02:web-[INFO]:-Check: hawq_master_directory is set
20180830:13:44:15:084023 hawq_init:office-hadoop02:web-[INFO]:-Check: hawq_segment_directory is set
20180830:13:44:15:084023 hawq_init:office-hadoop02:web-[INFO]:-Check: hawq_segment_address_port is set
20180830:13:44:15:084023 hawq_init:office-hadoop02:web-[INFO]:-Check: hawq_dfs_url is set
20180830:13:44:15:084023 hawq_init:office-hadoop02:web-[INFO]:-Check: hawq_master_temp_directory is set
20180830:13:44:15:084023 hawq_init:office-hadoop02:web-[INFO]:-Check: hawq_segment_temp_directory is set
20180830:13:44:15:084023 hawq_init:office-hadoop02:web-[INFO]:-No standby host configured, skip it
20180830:13:44:15:084023 hawq_init:office-hadoop02:web-[INFO]:-Check if hdfs path is available
20180830:13:44:15:084023 hawq_init:office-hadoop02:web-[INFO]:-1 segment hosts defined
20180830:13:44:15:084023 hawq_init:office-hadoop02:web-[INFO]:-Set default_hash_table_bucket_number as: 6
20180830:13:44:17:084023 hawq_init:office-hadoop02:web-[ERROR]:-sync hawq-site.xml failed.
20180830:13:44:17:084023 hawq_init:office-hadoop02:web-[ERROR]:-Set default_hash_table_bucket_number failed

This is most likely because password-less SSH is not set up; please check the settings described here:
https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install
Besides the build steps, you need to set up password-less SSH on the systems.
Exchange SSH keys between the hosts host1, host2, and host3:
hawq ssh-exkeys -h host1 -h host2 -h host3
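If you want to verify or set this up manually before running hawq ssh-exkeys, here is a minimal sketch for a single-host setup (the hostname office-hadoop02 is taken from the log above):
# generate a key pair if one does not exist yet
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# authorize the key for the current user; use ssh-copy-id for remote segment hosts
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# verify: this must print the date without prompting for a password
ssh office-hadoop02 date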

Related

How to access a Docker MariaDB container from outside?

I followed the official guide at:
https://mariadb.com/kb/en/installing-and-using-mariadb-via-docker/
However, I haven't found any entry with bind-address in my my.cnf file; it looks like this:
# The MariaDB configuration file
#
# The MariaDB/MySQL tools read configuration files in the following order:
# 0. "/etc/mysql/my.cnf" symlinks to this file, reason why all the rest is read.
# 1. "/etc/mysql/mariadb.cnf" (this file) to set global defaults,
# 2. "/etc/mysql/conf.d/*.cnf" to set global options.
# 3. "/etc/mysql/mariadb.conf.d/*.cnf" to set MariaDB-only options.
# 4. "~/.my.cnf" to set user-specific options.
#
# If the same option is defined multiple times, the last one will apply.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# If you are new to MariaDB, check out https://mariadb.com/kb/en/basic-mariadb-articles/
#
# This group is read both by the client and the server
# use it for options that affect everything
#
[client-server]
# Port or socket location where to connect
# port = 3306
socket = /run/mysqld/mysqld.sock
# Import all .cnf files from configuration directory
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/
When I try to connect to it from outside, that is, from the host computer, I get the following:
Creating a session to 'root@172.17.0.2'
MySQL Error 2003 (HY000): Can't connect to MySQL server on '172.17.0.2' (60)
What should I do to be able to connect to the server from outside? The server does run, as I can connect from within the Docker container.
I'm using macOS.
You can't do this trick mysql -h 172.17.0.2 -u root -p on a Mac.
There is no docker0 bridge on macOS:
Because of the way networking is implemented in Docker Desktop for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
I cannot ping my containers
Docker Desktop for Mac can’t route traffic to containers.
Please see the official Docker documentation for Mac.
I suggest you expose the container port to the host with -p 127.0.0.1:3306:3306 and then connect to your DB as if it were local: mysql -h 127.0.0.1 -p -uroot.
docker run --name mariadbtest \
-p 127.0.0.1:3306:3306 \
-e MYSQL_ROOT_PASSWORD=mypass \
-d mariadb/server:10.3 \
--log-bin \
--binlog-format=MIXED
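Once the container is up, connecting from the host should work (a usage sketch, assuming a MySQL/MariaDB client is installed on the host; the password is the one set via MYSQL_ROOT_PASSWORD above):
mysql -h 127.0.0.1 -P 3306 -u root -pmypass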
Your configuration uses a socket for connections, as you have commented out port:
# port = 3306
socket = /run/mysqld/mysqld.sock
So you should uncomment port above (and remove / comment out the socket configuration). This will cause the database to listen on port 3306.
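The relevant part of the file shown above would then look like this (a sketch with the two lines swapped):
[client-server]
# Port or socket location where to connect
port = 3306
# socket = /run/mysqld/mysqld.sock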
For local usage you'll want to port-map that port to localhost afterward, for example running your container with -p so you can connect via localhost:3306:
docker run -d -p 127.0.0.1:3306:3306 [..] example/mariadb

Nodetool command from one node to another node is not working

nodetool -h 10.16.252.129 -p 9042 -u cassandra -pw cassandra status
gives this error:
nodetool: Failed to connect to '10.16.252.129:9042' -
ConnectIOException: 'non-JRMP server at remote endpoint'.
This is in the cassandra.yaml file:
rpc_address: 10.16.252.129
rpc_port: 9160
You have to use port 7199 here for the nodetool command. However, you need to check whether that port is open; if it is not, you should open/allow it on the firewall.
You can find the JMX port configuration in cassandra-env.sh.
Then try running the command below:
nodetool -h Hostname/IP -p 7199 -u username -pw password status
You can find more details about nodetool syntax and usage at the link below.
http://cassandra.apache.org/doc/latest/tools/nodetool/compactionhistory.html
First of all, port 9042 is for the native binary protocol CQL client connections. Port 9160 is for legacy (deprecated) Thrift protocol client connections. Inter-node nodetool commands use the JMX (Java Management eXtensions) protocol over port 7199.
Do note that in order for remote JMX to work, port 7199 will need to be open (firewall), and cassandra-env.sh has configuration lines for:
JMX_PORT="7199"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=$HOST_IP"
You may also want to enable JMX password authentication:
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"
Also, you shouldn't need to send the port or credentials. The cassandra/cassandra creds are the default for database auth, not JMX. If you enabled JMX password auth, then you'll need to send whatever username and password you defined in the password file. But otherwise, this should work (as long as both the current and target nodes have remote JMX enabled):
nodetool -h 10.16.252.148 status
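If the command still fails, it can help to first confirm that the JMX port is reachable from the source node (a sketch using netcat, assuming it is installed):
# check TCP connectivity to the target node's JMX port
nc -zv 10.16.252.148 7199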

How to make an SSH connection using Pageant in Terraform for provisioning files?

How do I make an SSH connection via Pageant in Terraform? I'm trying to provision files with the file provisioner running over an SSH connection. According to the docs, on Windows the only supported SSH agent is Pageant, but they do not explain how to configure it.
https://www.terraform.io/docs/provisioners/connection.html
Even after adding the PuTTY directory (which is included in GitExtension) to the PATH environment variable, Terraform does not seem to detect it and keeps failing to make the SSH connection.
Connecting via plink.exe works, so my SSH key is correctly added to Pageant:
plink core@<ip-address-of-host>
The file provisioner works when I pass the content of private_key directly like this, but that's not what I want:
connection {
  type        = "ssh"
  host        = aws_instance.instance.public_ip
  user        = "core"
  agent       = false
  private_key = file(var.private_key_path)
}
You have to set the agent parameter to true:
agent - Set to false to disable using ssh-agent to authenticate. On Windows the only supported SSH authentication agent is Pageant.
agent = true
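Applied to the connection block from the question, that would look like the following sketch (private_key is dropped because the key now comes from Pageant; make sure Pageant is running with the key loaded before terraform apply):
connection {
  type  = "ssh"
  host  = aws_instance.instance.public_ip
  user  = "core"
  agent = true
}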

run: open server: open service: listen tcp :8086: bind: address already in use on starting influxdb

I am setting up InfluxDB (shell version v1.7.6). I have made changes in the configuration file, but when I start the service, it gives me an error that bind port 8086 is already in use, and the graphite service does not start.
# Change this option to true to disable reporting.
reporting-disabled = false
hostname=""
join=""
# Bind address to use for the RPC service for backup and restore.
bind-address = ":8088"
###
### [meta]
###
### Controls the parameters for the Raft consensus group that stores metadata
### about the InfluxDB cluster.
###
[meta]
# Where the metadata/raft database is stored
dir = "/usr/local/var/influxdb/meta"
# Automatically create a default retention policy when creating a database.
retention-autocreate = true
# If log messages are printed for the meta service
logging-enabled = true
[[graphite]]
# Determines whether the graphite endpoint is enabled.
enabled = true
database = "jmeter"
retention-policy = ""
bind-address = ":2003"
protocol = "tcp"
consistency-level = "one"
Above are my influxdb properties. I have restarted the service after the configuration changes.
It is because another process is using port 8086. You can find the process using the following commands:
netstat -a | grep 8086
If you have root permission:
lsof -i:8086
Identify the other process id and kill it using
kill -9 <process id>
Or configure InfluxDB to use another port.
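For InfluxDB 1.x, the HTTP API port lives in the [http] section of the configuration file (a sketch, assuming the default 1.x config layout; port 8087 is just an example replacement):
[http]
  # move the HTTP API off the contested port
  bind-address = ":8087"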
Restarting influxdb helped me (the service unit name depends on your install):
sudo systemctl restart influxd.service
sudo systemctl restart influxdb.service

Hadoop setup issue: "ssh: Could not resolve hostname now.: No address associated with hostname"

When I build a Hadoop cluster based on VMware and use the sbin/start-dfs.sh command, I meet a problem with SSH. It says:
ssh: Could not resolve hostname now.: No address associated with hostname
I have used vi /etc/hosts to check the hostname and IP address, and also checked /etc/profile. I am sure there is no mistake there.
A few suggestions:
Check if the hostnames in hdfs-site.xml are set correctly. If you are running a single-host setup and you set the namenode host as localhost, you need to make sure localhost is mapped to 127.0.0.1 in your /etc/hosts. If you are setting up multiple nodes, make sure you use the FQDN of each host in your configuration, and make sure each FQDN is mapped to the correct IP address in /etc/hosts.
Set up passwordless SSH. Note that start-dfs.sh requires passwordless SSH from the host where you run the command to the rest of the cluster nodes. Verify this with ssh hostx date; it should not ask for a password (see the sketch after the hosts example below).
Check the hostname in the error message (maybe you did not paste the complete log); for the problematic hostname, run the SSH command manually to make sure it can be resolved. If not, check /etc/hosts. A common /etc/hosts setup looks like:
127.0.0.1 localhost localhost.localdomain
::1 localhost localhost.localdomain
172.16.151.224 host1.test.com host1
172.16.152.238 host2.test.com host2
172.16.153.108 host3.test.com host3
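A quick way to verify both name resolution and passwordless SSH for each host (a sketch; host1.test.com is taken from the example above):
# confirm the name resolves via /etc/hosts or DNS
getent hosts host1.test.com
# confirm passwordless SSH works; this should print the date without a prompt
ssh host1.test.com date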
