Presto installation error - hortonworks-data-platform

I'm trying to install Presto on a cluster, but when I try to deploy/install the Presto server across the several nodes it gives an error on every node:
Fatal error: [host1] Needed to prompt for a connection or sudo password (host: host1), but input would be ambiguous in parallel mode.
Does anyone know where the problem came from?

That error comes when you don't have SSH connectivity between the node running presto-admin and the nodes in the cluster. The default user for the connection is root, but that can be changed by modifying config.json. Either set up passwordless SSH or specify the SSH password via --password (or -I for an interactive prompt). See the docs for more on this: https://teradata.github.io/presto/docs/current/presto-admin/ssh-configuration.html.
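For reference, a minimal config.json for presto-admin looks roughly like the following (the host names below are placeholders for your coordinator and worker nodes, and "username" is the SSH user presto-admin connects as):
{
    "username": "root",
    "port": 22,
    "coordinator": "master-node",
    "workers": ["worker1", "worker2"]
}
With that in place, either run presto-admin with --password, or set up key-based SSH from the presto-admin node to every host listed.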

Related

Error while connecting to AWS EMR cluster from mac

I'm trying to create a 3-node AWS EMR cluster. I have also created a key to connect to the cluster from macOS with the command:
ssh -i ~/Downloads/BigdataKey.pem hadoop@ec2-xx-xx-xx-xx.ap-south-1.compute.amazonaws.com
But it's giving an error:
192:Downloads nageshsinghchauhan$ ssh -i ~/Downloads/BigdataKey.pem hadoop@ec2-xx-xx-xx-xx.ap-south-1.compute.amazonaws.com
ssh: connect to host ec2-xx-xx-xx-xx.ap-south-1.compute.amazonaws.com port 22: Operation timed out
Can anyone please help me out? I'm trying this for the first time on macOS.
The solution I found is:
Go to EC2 security groups and open "ElasticMapReduce-master".
Under the Inbound tab, click Edit.
Add a rule with Type = All TCP, port range = 0-65535, source = My IP.
Now go to the terminal and set permissions on the key: chmod 400 my-key-pair.pem
Last step: SSH to your cluster with your key from the Mac, as shown below.
It's Done :)
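Putting it together, the terminal commands look like this (same key file and host as in the question; note the user and host are joined with @):
chmod 400 ~/Downloads/BigdataKey.pem
ssh -i ~/Downloads/BigdataKey.pem hadoop@ec2-xx-xx-xx-xx.ap-south-1.compute.amazonaws.com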

Connecting to Hive Database with DBeaver

I have a Hortonworks Hadoop cluster where the data nodes are on a separate network off of the master/head node. The only way to access the data nodes is through the master node or an edge node. From the edge node, I execute the hive command to connect into my hive database.
I cannot connect to the hive database from my desktop with DBeaver (4.3.0, 64-bit Windows) or the hive command line interface. Through DBeaver, I tried creating an SSH tunnel to my edge node and continually receive "Could not open client transport with JDBC Uri: jdbc:hive2://127.0.0.1:[port#]/[database]".
Configuration for Hive/Apache Hive driver:
General Tab:
Host: dataNodeName
Port: 10000
Database/Schema: databaseName
User name: myUID
SSH Tunnel Tab (Network page):
Checked Use SSH Tunnel
Host/IP: edgeNodeServerName
Port: 22
User Name: myUID
Authentication Method: Password
Password: myPWD
Advanced
Local port: 0
Keep-Alive interval (ms): 0
When I select "Test Connection" with local port set to "0", I receive the above error message with random port numbers. If I set the local port to "10000", I receive the above error with port number "10000".
It looks like DBeaver is ignoring the generic JDBC connection settings--the host name in the created JDBC string is 127.0.0.1 instead of the data node name.
What am I missing? How do I setup DBeaver to access a Hive database located on a "hidden" network?
Is your hostname configured with the IP address mentioned in the jdbc connect syntax (127.0.0.1)?
Are you able to connect to beeline from your Unix shell?
Syntax to connect to beeline(hiveserver2):
beeline -u jdbc:hive2://<hostname>:<hive listener port>/<database> -n <username> -p <password>
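With the values from the question, that would be something like (password assumed):
beeline -u jdbc:hive2://dataNodeName:10000/databaseName -n myUID -p myPWD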
If you're able to connect to beeline, you should be able to connect to hive using same port number and host from DBeaver.
The Hive listener port is configured as 10000 by default, but your admin may have changed the port number. Check the port number in hive-site.xml, or get it from your admin.
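If you have shell access to the edge node, one way to check it is to grep hive-site.xml; the path below is the usual HDP location and may differ on your cluster:
grep -A1 "hive.server2.thrift.port" /etc/hive/conf/hive-site.xml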
Could you please uncheck the SSH tunnel and try?
This link covers the setup from scratch; please check whether you have missed any step.
https://www.linkedin.com/pulse/query-hive-hiveserver2-from-windows-using-universal-database-nimmala
Not sure if your environment is Kerberized or not but assuming it is -
Following is what worked for me while connecting to Cloudera -
Fetch the krb5.conf or krb5.ini from your admins and place it in some directory. I normally put the file in a location where I put my keytabs.
Create a jaas.conf file and place it at the same location (or the location of your choice).
jaas.conf must look like the below (copy-paste):
Client {
com.sun.security.auth.module.Krb5LoginModule required
debug=true
doNotPrompt=true
useKeyTab=true
keyTab="C:\Users\{user}\krb5cc_{user}"
useTicketCache=true
renewTGT=true
principal="{user}@DOMAIN.ORG" ;
};
Edit your dbeaver.ini file and add references to both of these files (append the following lines to the existing dbeaver.ini). Make sure you back up dbeaver.ini: with reinstalls or upgrades to a newer version, dbeaver.ini may get replaced, in which case you can copy the lines below from your backed-up dbeaver.ini file:
-Djavax.security.auth.useSubjectCredsOnly=false
-Djava.security.krb5.debug=true
-Dsun.security.krb5.debug=true
-Djava.security.krb5.conf=C:\Users\{User}\Documents\Keytabs\krb5.conf
-Djava.security.auth.login.config=C:\Users\{User}\Documents\Keytabs\jaas.conf
Last step (you may or may not need this):
I init my keytab before connecting, so I use Shell Commands.
Press F4 after creating the connection.
Make sure that in the user field you put only the user name for which you are initializing the keytab and nothing else. It should not be {user}@domain.org.
Use the shell commands to init the keytab, roughly as in the sketch below.
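As a rough sketch, the shell command I run before connecting looks like this (keytab path and file name are placeholders; it assumes an MIT Kerberos kinit is on the PATH and that the default realm in krb5.conf is DOMAIN.ORG):
kinit -k -t "C:\Users\{user}\Documents\Keytabs\{user}.keytab" {user}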
I was also having trouble connecting DBeaver to Hive; my solution was to use Cloudera's ODBC driver. It worked a lot better than the JDBC drivers (auto-complete working, quicker, no need to run kinit), and I could automate its creation.
The only problem is that you must be an admin to install it.

Confirm host fails for Single node Cluster while setting up cluster on Ambari

I am trying to set up Ambari on a single-node cluster.
The Ambari setup was done as the root user.
I tried all the posts related to this, changed the permissions, and set up passwordless SSH as described here:
http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_set_up_password-less_ssh.html
cd ~/.ssh
rm -rf /root/.ssh
ssh-keygen -t dsa
cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys
cat /root/.ssh/authorized_keys
Copied the key from the above line into Ambari during the cluster setup step.
ambari-server restart
When I try to Register and Confirm in Install Options, I get the below error.
However, I am able to do ssh root@hadoop.maxsjohn.com without giving the password.
==========================
Creating target directory...
==========================
Command start time 2017-03-13 03:35:43
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
SSH command execution finished
host=hadoop.maxsjohn.com, exitcode=255
Command end time 2017-03-13 03:35:43
ERROR: Bootstrap of host hadoop.maxsjohn.com fails because previous action finished with non-zero exit code (255)
ERROR MESSAGE: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
STDOUT:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
So, coming in a year later, I got a very similar error but with a multi-host cluster. In case it helps, I found this error happens for the host running Ambari Server when the private key file chosen on the 'Install Options' page in the 'Cluster Install Wizard' is incorrect (in my case I re-created the keys but neglected to update Ambari). From the host OS perspective the passwordless SSH works just fine, but Ambari fails to install the host until the correct SSH private key file is chosen.
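One quick sanity check, assuming the keys live in the default location from the Ambari docs: print the public key derived from the private key you uploaded and compare it with authorized_keys on the target host:
ssh-keygen -y -f /root/.ssh/id_dsa
cat /root/.ssh/authorized_keys
If the two don't match, re-upload the current private key file on the Install Options page.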
I suspect the password cannot be blank; you need to set a password. If this is for your learning, I would suggest taking a copy of the VM from the Hortonworks site and using it. You don't have to go through the pain of installing and configuring. Here is the link

psql: could not connect to server

The error in its entirety reads:
psql: could not connect to server: No such file or directory. Is the
server running locally and accepting connections on Unix domain socket
"/tmp/.s.PGSQL.5432"?
This is my second time setting up Postgresql via Homebrew on my Mac, and I have no clue what is going on. Previously, it had been working. At some point, I must've entered a command that messed things up. I'm not sure. Now, whenever I enter a SQL command from the command line, I receive the above message. I've run a command to check whether the server is running, and it apparently is not. If I attempt to start the server using
$ postgres -D /usr/local/pgsql/data
I receive the following error:
postgres cannot access the server configuration file
"/usr/local/pgsql/data/postgresql.conf": No such file or directory
I've uninstalled and reinstalled Postgresql via Homebrew, but the problem persists. I'm completely at a loss as to how to get this working. Any help would be appreciated.
Your data directory is most likely wrong.
Issue a sudo find / -name "postgresql.conf" in your terminal to see where your postgresql.conf file resides. Then do an ls on that directory to confirm it is the data directory, and use it in the -D option when starting postgres, roughly as sketched below.
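For a Homebrew install the data directory is usually /usr/local/var/postgres rather than /usr/local/pgsql/data, so the sequence might look like this (verify the path that find reports before using it):
sudo find / -name "postgresql.conf" 2>/dev/null
ls /usr/local/var/postgres
postgres -D /usr/local/var/postgres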

SSH Key authentication failing when connecting Mac Hudson slave to Linux master

Ok, so I have Hudson (v1.393) running in an Ubuntu VM and everything's working fine.
However I'm trying to add a Mac slave to the Ubuntu master and I've run in to a few problems.
I have set up SSH keys so that from the command line, the Ubuntu VM can ssh using the key into a user called hudson on the Mac.
In the Hudson slave configuration, I have "Launch slave agents on Unix machines via SSH" selected and have entered the host IP, username of the user on the slave and the location of my private key file on the master (which has been added to the authorised keys file on the slave).
However, the master fails to connect to the slave.
Looking at the log (below), it's trying to authenticate using a password.
Is this a fall back for a failed key based SSH attempt?
Is Hudson only trying to authenticate using a password, and I need to change something else to get it to use the key file which is defined in the configuration?
Is it just not possible to launch slave agents via ssh on a Mac? (I know the name of this type of slave launch method explicitly states Unix, but I was thinking (read: hoping) that it would work with OS X too.)
Log
[01/14/11 10:38:07] [SSH] Opening SSH connection to 10.0.1.188:22.
[01/14/11 10:38:07] [SSH] Authenticating as hudson/******.
java.io.IOException: Password authentication failed.
at com.trilead.ssh2.auth.AuthenticationManager.authenticatePassword(AuthenticationManager.java:319)
at com.trilead.ssh2.Connection.authenticateWithPassword(Connection.java:314)
at hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:565)
at hudson.plugins.sshslaves.SSHLauncher.launch(SSHLauncher.java:179)
at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:184)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)
Caused by: java.io.IOException: Authentication method password not supported by the server at this stage.
at com.trilead.ssh2.auth.AuthenticationManager.authenticatePassword(AuthenticationManager.java:289)
... 9 more
[01/14/11 10:38:07] [SSH] Connection closed.
If anyone has managed to conquer this type of set up before, or has any tips or ideas, I'd be very grateful!
Thanks
I've recently run into the same problem, trying to launch an agent on a Mac OS X 10.6 machine using SSH.
To get password authentication to work you'll need to edit /etc/sshd_config on the client node, setting PasswordAuthentication yes
In the Hudson dashboard take the node offline, make sure the configuration has a valid username and password, and launch the agent. Also make sure that the Remote FS root directory is owned by the build user you're connecting as.
For password-less ssh authentication, first check which user the Hudson master is running as. Let's assume that this is tomcat55. Generate a public/private SSH key pair (with an empty passphrase), then verify that the Hudson user can connect.
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/tomcat55/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/tomcat55/.ssh/id_rsa.
Your public key has been saved in /home/tomcat55/.ssh/id_rsa.pub.
$ # authorize the hudson master on the hudson node
$ scp /home/tomcat55/.ssh/id_rsa.pub hudson@macnode:~/.ssh/authorized_keys
$ # test the connection
$ ssh -i /home/tomcat55/.ssh/id_rsa hudson@macnode
On the Hudson mac node, the /etc/sshd_config needs to allow for password-less access.
Protocol 2
PubkeyAuthentication yes
In the node configuration clear the password field, and set the private key field (in this example it is /home/tomcat55/.ssh/id_rsa). You should now be able to launch the agent:
[01/19/11 22:38:44] [SSH] Opening SSH connection to macnode:22.
[01/19/11 22:38:44] [SSH] Authenticating as hudson with /home/tomcat55/.ssh/id_rsa.
[01/19/11 22:38:45] [SSH] Authentication successful.
Check the /var/log/auth.log file on the Ubuntu machine. I'm betting you need to chmod 700 the .ssh directory of the hudson user.
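A minimal sketch of that fix on the Mac node, assuming the hudson user's home directory is /Users/hudson (tightening authorized_keys as well is common practice):
chmod 700 /Users/hudson/.ssh
chmod 600 /Users/hudson/.ssh/authorized_keys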
I think the first answer (the selected one) is an awesome answer, but I did find a case where it is not the only solution.
In my case I have a Mac OS slave that was working and then I took that Mac down and brought up a new one. I thought I could just tweak the settings for the existing node's configuration to point it at the new Mac. It didn't work and I had all the same errors and problems described throughout this message thread.
Then I went in and deleted the node and recreated it with exactly the same settings and it worked. I suspect that SSH key fingerprint changed and by deleting the node and recreating it I was able to get it working. Whatever it is, the key component that caused it to fail is not a configuration option.
