Installing Postgres-XL on Linux in a distributed environment

I am very new to Postgres-XL and am planning to use it for my application. There is no proper documentation on downloading and installing it in distributed mode. Please guide me: where to download it from, how to install and configure it, which dependent packages CentOS 6 needs to support Postgres-XL, what configuration changes a distributed environment requires, and which services need to be started in a distributed environment and how to start them. Thanks!

The following are the key points for installing Postgres-XL.
For detailed information, see https://ruihaijiang.wordpress.com/2015/09/17/postgres-xl-installation-example-on-linux/
1. Plan your hosts, IPs, ports, etc. For example:
GTM:
hostname=host1
nodename=gtm
IP=192.168.187.130
port=6666
Coordinator:
hostname=host2
nodename=coord1
IP=192.168.187.131
pooler_port=6668,port=5432
Datanode1:
hostname=host3
nodename=datanode1
IP=192.168.187.132
pooler_port=6669, port=15432
Datanode2:
hostname=host4
nodename=datanode2
IP=192.168.187.133
pooler_port=6670, port=15433
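If the hostnames above are not resolvable through DNS, a minimal /etc/hosts sketch for each of the four machines could look like the following (the addresses are the example ones from this plan; adjust to your own network):
# /etc/hosts entries on host1..host4 (assumes the example IPs above)
192.168.187.130   host1
192.168.187.131   host2
192.168.187.132   host3
192.168.187.133   host4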
2. Write your pgxc_ctl.conf
#user and path
pgxcOwner=postgres
pgxcUser=$pgxcOwner
pgxcInstallDir=/usr/local/pgsql
#gtm and gtmproxy
gtmMasterDir=$HOME/pgxc/nodes/gtm
gtmMasterPort=6666
gtmMasterServer=192.168.187.130
gtmSlave=n
#gtm proxy
gtmProxy=n
#coordinator
coordMasterDir=$HOME/pgxc/nodes/coord
coordNames=(coord1)
coordPorts=(5432)
poolerPorts=(6668)
coordPgHbaEntries=(192.168.187.0/24)
coordMasterServers=(192.168.187.131)
coordMasterDirs=($coordMasterDir/coord1)
coordMaxWALSender=0
coordMaxWALSenders=($coordMaxWALSender)
coordSlave=n
coordSpecificExtraConfig=(none)
coordSpecificExtraPgHba=(none)
#datanode
datanodeNames=(datanode1 datanode2)
datanodePorts=(15432 15433)
datanodePoolerPorts=(6669 6670)
datanodePgHbaEntries=(192.168.187.0/24)
datanodeMasterServers=(192.168.187.132 192.168.187.133)
datanodeMasterDir=$HOME/pgxc/nodes/dn_master
datanodeMasterDirs=($datanodeMasterDir/datanode1 $datanodeMasterDir/datanode2)
datanodeMaxWalSender=0
datanodeMaxWALSenders=($datanodeMaxWalSender $datanodeMaxWalSender)
datanodeSlave=n
primaryDatanode=datanode1
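By default pgxc_ctl reads its configuration from $HOME/pgxc_ctl/pgxc_ctl.conf on the machine where you run it (host1 here); it can also be pointed at another file. A short sketch, assuming you run it as the postgres user (file locations are assumptions, adjust as needed):
# as the postgres user on host1
mkdir -p $HOME/pgxc_ctl
cp pgxc_ctl.conf $HOME/pgxc_ctl/pgxc_ctl.conf
# or pass the configuration file explicitly
pgxc_ctl -c $HOME/pgxc_ctl/pgxc_ctl.conf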
3. Configure SSH key authentication so that pgxc_ctl does not prompt for passwords.
Getting this right took me a few days.
On host1, generate the authentication key file,
ssh-keygen -t rsa (Just press ENTER for all input values)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
On host1, copy the authorized_keys file to host2, host3 and host4, as follows:
scp ~/.ssh/authorized_keys postgres@192.168.187.131:~/.ssh/
scp ~/.ssh/authorized_keys postgres@192.168.187.132:~/.ssh/
scp ~/.ssh/authorized_keys postgres@192.168.187.133:~/.ssh/
On every host, run following commands,
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
On host1, try to connect to host2, host3 and host4 and make sure no password is needed:
ssh postgres@192.168.187.131
ssh postgres@192.168.187.132
ssh postgres@192.168.187.133
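As an alternative to copying authorized_keys with scp (which replaces the remote file), ssh-copy-id appends the key instead; a sketch using the same example hosts, assuming ssh-copy-id is installed on host1:
ssh-copy-id postgres@192.168.187.131
ssh-copy-id postgres@192.168.187.132
ssh-copy-id postgres@192.168.187.133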
4. Run pgxc_ctl to configure and start the cluster
On host1, run the following command:
pgxc_ctl init all
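Once initialization completes, you can check that the cluster is up and that the nodes registered with each other. A sketch using the hosts and ports from step 1 (the pgxc_node catalog and the monitor command are standard in Postgres-XL):
# show the status of all configured nodes
pgxc_ctl monitor all
# connect to the coordinator and list the registered nodes
psql -h 192.168.187.131 -p 5432 -U postgres -c "SELECT node_name, node_type, node_host, node_port FROM pgxc_node;"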

Related

SSH to WP Engine works in the Windows command terminal but not in Git Bash

Recently I had my hard drive replaced on my work machine and had to reconfigure everything, which included reinstalling Git Bash. Before, I was able to SSH into WP Engine fine; now I cannot.
I am able to connect fine via the regular Windows terminal, but when I try with Git Bash I get the "Permission denied (publickey)." error for the exact same command.
I have tried all the options suggested by WP Engine and in the related questions on other SE sites, and nothing is working.
I am using a Windows 10 machine.
Here are the things I have tried:
Regenerating the key, adding it to my user's public keys again, and then waiting 24 hours.
Adding the config details to the ssh_config file in C:\Program Files\Git\etc\ssh.
Adding a config file to my /User/username/.ssh/ folder.
I have also tried using the following link and adding the wpengine rsa file: https://gist.github.com/jherax/979d052ad5759845028e6742d4e2343b
Any and all help would be appreciated.
My guess is there is some kind of permissions issue on the local machine?
Why would the request from git bash terminal to wpengine look different from windows command terminal?
I did solve my issue; if it helps you, please use it!
When I connected to the SSH host verbosely with: ssh -v user@environment.wpengine.ssh.net info
I got back this among the debug errors:
debug1: Offering public key: /c/Users/USERNAME/.ssh/KEY_FILENAME RSA XXXXXXXXXXXXXXXXXXXXX explicit
debug1: send_pubkey_test: no mutual signature algorithm
debug1: No more authentication methods to try.
user@environment.ssh.wpengine.net: Permission denied (publickey).
After finding this page:
https://transang.me/ssh-handshake-is-rejected-with-no-mutual-signature-algorithm-error/
I was able to solve the issue by adding this line to my ssh config file:
PubkeyAcceptedAlgorithms +ssh-rsa
Honestly I am not even 10% certain on WHY this worked, however, it solved my problem.
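For reference, a sketch of what the relevant ~/.ssh/config entry might look like with that fix applied (the host pattern and key filename here are assumptions based on this thread; PubkeyAcceptedAlgorithms needs a reasonably recent OpenSSH):
# ~/.ssh/config (sketch; KEY_FILENAME is whatever key you registered with WP Engine)
Host *.ssh.wpengine.net
    IdentityFile ~/.ssh/KEY_FILENAME
    IdentitiesOnly yes
    PubkeyAcceptedAlgorithms +ssh-rsa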
How To Connect with SSH In WPEngine
If you are having trouble connecting over SSH to WP Engine, the following are the commands I used:
ssh-keygen -t rsa -b 4096 -f c:/users//.ssh/wpengine_rsa
Add Fingerprint in WPEngine My Profile – SSH
Add a config file (~/.ssh/config):
Host *.ssh.wpengine.net
IdentityFile ~/.ssh/wpengine_rsa
IdentitiesOnly yes
Connect to your WordPress website (Windows command prompt):
ssh environment@environment.ssh.wpengine.net

Hadoop "Permission denied (publickey,password,keyboard-interactive)" warning

I am following this tutorial to install Hadoop on my computer. After finishing the installation, when I try to launch Hadoop using ./start-dfs.sh, it returns the following:
U:sbin U$ ./start-dfs.sh
Starting namenodes on [localhost]
localhost: U@localhost: Permission denied (publickey,password,keyboard-interactive).
Starting datanodes
localhost: U@localhost: Permission denied (publickey,password,keyboard-interactive).
Starting secondary namenodes [U.local]
U.local: U@pc.local: Permission denied (publickey,password,keyboard-interactive).
2018-02-25 14:52:15,505 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
I tried uninstalling and reinstalling several times to re-check whether I missed something, but I keep getting this error at the end. After looking in some online forums I found that the last warning, WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform, is not a big deal, because it appears when Hadoop runs on a 64-bit machine. Will you please let me know what the other two errors mean and how to fix them? I have tried many solutions posted on the internet.
The problem is that when you SSH to a server (in this case localhost), it tries to authenticate you with your credentials. Here passwordless authentication is not configured, so every time you try to SSH it asks for your password, which is a problem when machines need to communicate with each other over SSH. To set up passwordless SSH, the user machine's public key must be added to the server machine's ~/.ssh/authorized_keys file. In this case, both systems are the same machine.
So, long story short, run the following command:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Proceed with the following steps:
Generate a new key pair:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Register the public key:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
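Before retrying start-dfs.sh, it is worth confirming that passwordless login to localhost actually works; a quick check (assumes the default key location used above):
chmod 600 ~/.ssh/authorized_keys
ssh localhost    # should log in without prompting for a password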
I did the following 3 steps to create the passwordless login:
ssh-keygen -t rsa (Press enter for each line)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod og-wx ~/.ssh/authorized_keys
cd hadoop/etc/hadoop
nano hadoop-env.sh
And paste this line in hadoop-env.sh
export HADOOP_SSH_OPTS="-p 22"
For those who are still struggling with this error, my answer could help.
If you have done everything right and have already added the keys to authorized_keys as well, then all you need to do is remove your id_rsa and id_rsa.pub (or whatever names you used for the key pair files) and empty your authorized_keys; in short, just roll back, because you might have given a passphrase while generating the RSA key.
So just do one thing: create the RSA key again with:
ssh-keygen -t rsa
Give the name of the file (when prompted): < your_filename >
Then do not give it a passphrase; just hit Enter, i.e. leave it blank, and you will not see the permission denied error again (it worked in my case).
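In other words, a sketch of the rollback described above (assumes the default id_rsa file names; note it wipes authorized_keys, so only do this on a box where that file contains nothing else you need):
rm -f ~/.ssh/id_rsa ~/.ssh/id_rsa.pub
> ~/.ssh/authorized_keys
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa    # empty passphrase, no prompt
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys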

How secure is Local Hadoop Installation without password?

I want to install Hadoop 2.6 in pseudo-distributed mode on my Mac, following the instructions found in the blog post http://zhongyaonan.com/hadoop-tutorial/setting-up-hadoop-2-6-on-mac-osx-yosemite.html
The blogger suggests to execute the commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
to allow SSH connections to localhost without a password. I don't know anything about SSH, so sorry for the following very basic concern. Can anyone please tell me:
Is it secure to run these commands? Or am I granting some kind of public remote access to my PC? (I told you it was a very basic question.)
How can I undo the authorisation I previously granted with these commands?
First and foremost, no Hadoop cluster is secure without Kerberos, but that is not closely related to what you are doing by generating SSH keys.
In any case, SSH keys require you to have both a public and private key. No one can access the cluster without the generated private key. And no one can access the cluster if their key isn't in the authorized file.
To put it simply, the commands are only as secure as the computer you're running them on. For example, some bad actor could be remotely copying all generated SSH keys on the system.
These passwordless SSH keys are for the Hadoop services to communicate with each other within the cluster, and each process should be run with limited system access anyway, not elevated / root privileges.
You undo the operation by ultimately destroying the key, but you can prevent access by just removing the entry from the authorized_keys file.
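A sketch of that undo, assuming the id_dsa key pair from the question and the default authorized_keys location:
# remove the matching public key entry from authorized_keys (or delete the line by hand)
grep -v -F "$(cat ~/.ssh/id_dsa.pub)" ~/.ssh/authorized_keys > ~/.ssh/authorized_keys.tmp
mv ~/.ssh/authorized_keys.tmp ~/.ssh/authorized_keys
# and/or destroy the key pair entirely
rm ~/.ssh/id_dsa ~/.ssh/id_dsa.pub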

Confirm host fails for a single-node cluster while setting up the cluster on Ambari

I am trying to setup Ambari on single node cluster.
Ambari setup was done as root user
I tried all the posts related to this, changed the permissions, and set up passwordless SSH as described in
http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_set_up_password-less_ssh.html
cd ~/.ssh
rm -rf /root/.ssh
ssh-keygen -t dsa
cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys
cat /root/.ssh/authorized_keys
Copied the key from the above line into Ambari during the cluster setup step.
ambari-server restart
When I try to Register and Confirm in the Install Options step, I get the error below.
However, I am able to do ssh root@hadoop.maxsjohn.com without giving the password.
==========================
Creating target directory...
==========================
Command start time 2017-03-13 03:35:43
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
SSH command execution finished
host=hadoop.maxsjohn.com, exitcode=255
Command end time 2017-03-13 03:35:43
ERROR: Bootstrap of host hadoop.maxsjohn.com fails because previous action finished with non-zero exit code (255)
ERROR MESSAGE: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
STDOUT:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
So, coming in a year later I got a very similar error but with a multiple host cluster. In case it helps, I found this error happens for the host running Ambari Server when the private key file chosen on the 'Install Options' page in the 'Cluster Install Wizard' is incorrect (in my case I re-created the keys but neglected to update Ambari). From the host OS perspective the passwordless SSH works just fine but Ambari fails to install the host until the corresponding SSH Private Key file is chosen.
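One way to check that the private key file you hand to Ambari actually matches what is in the host's authorized_keys is to derive its public key and compare; a sketch, assuming the root id_dsa key generated in the question:
ssh-keygen -y -f /root/.ssh/id_dsa      # prints the public key for this private key
cat /root/.ssh/authorized_keys          # the same key material should appear here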
I suspect the password cannot be blank; you need to set a password. If this is for your learning, I would suggest taking a copy of the VM from the Hortonworks site and using it, so you don't have to go through the pain of installing and configuring. Here is the link.

How to make permanent changes to sshd_config file?

I am trying to configure a Hadoop MapReduce environment on my Ubuntu system. I created a new user called hduser and put it under a new group hadoop. I created a ssh certificate and added it to the authorized keys. But whenever I tried to connect to the localhost, I ran into trouble since it kept on asking for password rather than using the key authentication.
I got over this by adding the user hduser to the AllowUsers list in /etc/ssh/sshd_config. I was able to connect to the localhost and get the HDFS system running.
Now the problem is that the entry I made for hduser in the sshd_config file gets removed every time I shut down the Hadoop servers. So, each time, before starting the Hadoop processes, I have to make the entry again in the sshd_config file and reload ssh. Is there any way to make the changes permanent so that I don't have to do this every time?
I also tried commenting out the AllowUsers field, but it gets automatically uncommented each time.
Thanks,
TM
Edit: I talked to the system admins, and it seems that the system-wide configuration management application is updating the config files every now and then. I got my Hadoop users added to their list and now things work fine.
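For anyone hitting the same symptom without a config-management system in the way, the permanent change is just an AllowUsers line in /etc/ssh/sshd_config followed by a reload; a sketch (the user names are the ones from the question, adjust to yours):
# /etc/ssh/sshd_config
AllowUsers hduser youruser
# then reload the daemon (the service is named "ssh" on Ubuntu)
sudo service ssh reload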
Did you perform these steps:
$ ssh-keygen -t rsa -P ''
...
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
...
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh localhost
These lines are from this tutorial
Create a key with ssh-keygen and copy it to localhost with ssh-copy-id. Could you run the commands below and then try Hadoop again? It should work.
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub $USER@hostname
(Replace hostname with your host name, e.g. localhost.) Then it will work.