DNS inconsistent - Hadoop

I am trying to create a MapR cluster, but I get a DNS inconsistent warning. I have edited my /etc/hosts file as:
10.0.0.10 master.aptus.com
10.0.0.20 slave1.aptus.com
10.0.0.30 slave2.aptus.com
These systems make up the cluster. When I execute:
host 10.0.0.10
the output is:
10.0.0.10.in-addr.arpa has no PTR record
And when I execute:
host master.aptus.com
I get the following output:
master.aptus.com has address 128.199.41.186
I tried to run the installation despite the warning, but the MapR installation fails.

Update the DNS entry on the domain server
Update the DNS server (AD server) with the new IP address for the cluster node that is being migrated. Once it is updated, all the nodes in the cluster will be able to resolve it.
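For illustration, in a BIND-style reverse zone the PTR records for the hosts above would look like the sketch below (an AD admin would add the equivalent records through the DNS console):
; reverse zone 0.0.10.in-addr.arpa (a sketch based on the /etc/hosts entries above)
10   IN   PTR   master.aptus.com.
20   IN   PTR   slave1.aptus.com.
30   IN   PTR   slave2.aptus.com.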
Check the reverse lookup zone and make sure it matches the forward entries, using a DNS tool or ssh.
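To verify that both directions resolve consistently (using the names from the question):
host master.aptus.com      # forward lookup, should return 10.0.0.10
host 10.0.0.10             # reverse lookup, should return master.aptus.com
dig +short -x 10.0.0.10    # alternative reverse lookup with dig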
Try it and let me know if it helps.

Related

Connecting to Hive Database with DBeaver

I have a Hortonworks Hadoop cluster where the data nodes are on a separate network off of the master/head node. The only way to access the data nodes is through the master node or an edge node. From the edge node, I execute the hive command to connect into my hive database.
I cannot connect to the hive database from my desktop with DBeaver (4.3.0, 64-bit Windows) or the hive command line interface. Through DBeaver, I tried creating an SSH tunnel to my edge node and continually receive "Could not open client transport with JDBC Uri: jdbc:hive2://127.0.0.1:[port#]/[database]".
Configuration for Hive/Apache Hive driver:
General Tab:
Host: dataNodeName
Port: 10000
Database/Schema: databaseName
User name: myUID
SSH Tunnel Tab (Network page):
Checked Use SSH Tunnel
Host/IP: edgeNodeServerName
Port: 22
User Name: myUID
Authentication Method: Password
Password: myPWD
Advanced
Local port: 0
Keep-Alive interval (ms): 0
When I select "Test Connection" with local port set to "0", I receive the above error message with random port numbers. If I set the local port to "10000", I receive the above error with port number "10000".
It looks like DBeaver is ignoring the generic JDBC connection settings: the host name in the created JDBC string is 127.0.0.1 instead of the data node name.
What am I missing? How do I set up DBeaver to access a Hive database located on a "hidden" network?
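(For context: with an SSH tunnel, a 127.0.0.1 host in the generated JDBC string is expected, since the tunnel's local endpoint sits on the desktop. The manual equivalent of the tunnel described above, using the names from this question, would be roughly:
ssh -L 10000:dataNodeName:10000 myUID@edgeNodeServerName
after which the client connects to jdbc:hive2://127.0.0.1:10000/databaseName.)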
Is your hostname configured with the IP address mentioned in the jdbc connect syntax (127.0.0.1)?
Are you able to connect to beeline from your Unix shell?
Syntax to connect with beeline (HiveServer2):
beeline -u jdbc:hive2://<hostname>:<hive listener port>/<database> -n <username> -p <password>
If you're able to connect with beeline, you should be able to connect to Hive from DBeaver using the same host and port number.
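For example, filled in with the names from your configuration (host and port are assumptions):
beeline -u jdbc:hive2://dataNodeName:10000/databaseName -n myUID -p myPWD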
The Hive listener port is configured as 10000 by default, but there is a possibility that your admin changed the port number. Check the port number in hive-site.xml, or get it from your admin.
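A quick way to check it (the config path is an assumption; it varies by distribution):
grep -A1 "hive.server2.thrift.port" /etc/hive/conf/hive-site.xml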
Could you please uncheck the SSH tunnel and try?
This link covers the whole setup from scratch; please check whether you have missed any step.
https://www.linkedin.com/pulse/query-hive-hiveserver2-from-windows-using-universal-database-nimmala
Not sure if your environment is Kerberized or not, but assuming it is, the following is what worked for me while connecting to Cloudera:
Fetch the krb5.conf or krb5.ini from your admins and place it in some directory. I normally put the file in a location where I put my keytabs.
Create a jaas.conf file and place it in the same location (or a location of your choice).
jaas.conf must look like the below (copy-paste):
Client {
com.sun.security.auth.module.Krb5LoginModule required
debug=true
doNotPrompt=true
useKeyTab=true
keyTab="C:\Users\{user}\krb5cc_{user}"
useTicketCache=true
renewTGT=true
principal="{user}@DOMAIN.ORG";
};
Edit your dbeaver.ini file and add references to both of these files (append the following lines to the existing dbeaver.ini). Make sure you back up dbeaver.ini: reinstalling or upgrading to a newer version may replace dbeaver.ini, in which case you can copy the lines below back from your backup dbeaver.ini file:
-Djavax.security.auth.useSubjectCredsOnly=false
-Djava.security.krb5.debug=true
-Dsun.security.krb5.debug=true
-Djava.security.krb5.conf=C:\Users\{User}\Documents\Keytabs\krb5.conf
-Djava.security.auth.login.config=C:\Users\{User}\Documents\Keytabs\jaas.conf
Last step (you may or may not need this):
I init my keytab before connecting, so I use shell commands:
Press F4 after creating the connection
Make sure that in the user field you put just the user name for which you are initializing the keytab and nothing else; it should not be {user}@domain.org.
Use the shell commands to init the keytab
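For reference, the init command typically looks like the following (assuming MIT Kerberos for Windows and a hypothetical keytab path):
kinit -k -t "C:\Users\{user}\Keytabs\{user}.keytab" {user}@DOMAIN.ORG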
I was also having trouble connecting DBeaver to Hive; my solution was to use Cloudera's ODBC driver. It worked a lot better than the JDBC drivers (auto-complete works, it is quicker, and there is no need to run kinit), and I could automate its creation.
The only problem is that you must be an admin to install it.

Cassandra: target machine actively refused it

I am trying to run Cassandra (CQL shell) and am receiving the following error. I have tried all the Google results for existing questions; nothing has fixed it so far.
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(10061, "Tried connecting to [('127.0.0.1', 9042)]. Last error: No connection could be made because the target machine actively refused it")})
Before installing Apache Cassandra, a JDK must be installed.
Can you make sure the IP address is set correctly in the rpc_address setting of the cassandra.yaml file on your Cassandra server?
Also, you need to make sure port 9042 is open and available for incoming traffic (if your IT department is setting up servers, it is possible this port is blocked, unless otherwise specified...)
Hope it helps.
I also faced the same issue, but maybe the two options below can help:
Option 1:
In my case I hadn't started the Cassandra server and was trying to connect to it directly.
(a) First start the Cassandra server from cmd: \bin>cassandra.bat -f
and then
(b) try to connect to its node: \bin>cqlsh.bat -u cassandra
Option 2:
Try changing the rpc_address in your cassandra.yaml file to either 127.0.0.1 instead of localhost, or 0.0.0.0 instead of localhost,
and then start the server again from a new cmd window.
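A minimal sketch of the relevant cassandra.yaml settings (note that when rpc_address is 0.0.0.0, Cassandra requires broadcast_rpc_address to be set as well):
rpc_address: 0.0.0.0
broadcast_rpc_address: 127.0.0.1
native_transport_port: 9042
Before retrying cqlsh, you can confirm the server is actually listening with netstat -an | findstr 9042 on Windows.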

Adding Elastic IP causes shell login to fail

After associating an Elastic IP with a cloud server instance, I cannot log in anymore:
ssh -i "ec2.pem" ubuntu@1.2.3.4
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is...
Please contact your system administrator.
How can I assign a static IP (Elastic IP) to my EC2 cloud server and still be able to log in with the system / console?
This is merely a warning that you are connecting to a system whose SSH fingerprint differs from the one stored in your local .ssh/known_hosts file. If you know things are okay, just delete the appropriate entry from that file and you can connect again.
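ssh-keygen can remove the stale entry for you, for example:
ssh-keygen -R 1.2.3.4
where 1.2.3.4 is the Elastic IP from the command above; the next connection will prompt you to accept the new host key.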

Websphere Node Federating issue

I have created a DMGR profile on my host, which succeeded. On the same host I have created a managed node. Now when I try to federate the node to the DMGR, I get the error below:
ADMU006E : Exception creating Deployment Manager Connection: com.ibm.websphere.management.exception.ConnectorException: ADMC0016E: The system cannot create a SOAP connector to connect to host **** at port 8879
Now, to verify the SOAP connection, I have run the command below:
wsadmin.sh -conntype SOAP -port <port> -host <host>
Running this script from the bin directory of the dmgr profile, it connects fine, but it fails with the same exception when run from the bin directory of the managed node profile.
To verify further, I did the same setup on a different host by creating a dmgr and a node. It was all fine: the DMGR profile was created and the node federated in one go.
Not sure what exactly the issue is here.
I have verified that my port and host details are all correct, and that my dmgr is running.
Thanks Michal and ephonk for your inputs.
Found the solution to the above issue.
The issue was with the java.security file in the WAS_HOME/java/jre/lib/security location.
In this file the SSL values were as below:
ssl.SocketFactory.provider=com.ibm.jsse2.SSLSocketFactoryImpl
ssl.ServerSocketFactory.provider=com.ibm.jsse2.SSLServerSocketFactoryImpl
We changed them to the values below and were then successfully able to federate the node to the DMGR:
ssl.SocketFactory.provider=com.ibm.websphere.ssl.protocol.SSLSocketFactory
ssl.ServerSocketFactory.provider=com.ibm.websphere.ssl.protocol.SSLServerSocketFactory
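If you want to script the change, something like the following should work (a sketch; WAS_HOME is your WebSphere install root, and the file is backed up first):
cd $WAS_HOME/java/jre/lib/security
cp java.security java.security.bak
sed -i 's|=com.ibm.jsse2.SSLSocketFactoryImpl|=com.ibm.websphere.ssl.protocol.SSLSocketFactory|; s|=com.ibm.jsse2.SSLServerSocketFactoryImpl|=com.ibm.websphere.ssl.protocol.SSLServerSocketFactory|' java.security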
May or may not be relevant, but I've seen this when the SOAP port isn't open in the firewall. Are you sure traffic on that port is allowed?
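A quick way to check from the node being federated (the dmgr host name is a placeholder):
nc -vz <dmgr-host> 8879
If the connection is refused or times out, a firewall rule or a stopped dmgr is the likely cause.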

RSH connection refused while running MPI program

I'm trying to run MPI programs on 8 machines, but I get the error:
connect to address 127.0.0.1 port 544: Connection refused
Trying krb4 rsh...
connect to address 127.0.0.1 port 544: Connection refused
trying normal rsh (/usr/bin/rsh)
lagrid02: Connection refused
When I run it with a machinefile option, I get the error lagrid03: No route to host, where lagrid03 is the neighbouring node connected to the master node.
How should I rectify this?
Regarding your first error, is rsh running on (all) the machine(s)? You'll need rsh or password-less ssh configured (and you'll need to tell your MPI job launcher to use ssh) before you can start jobs on different machines.
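A minimal password-less ssh setup from the master to each node would look like this (the user name is a placeholder; hostnames are from the question):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # generate a key with no passphrase
ssh-copy-id user@lagrid02                  # repeat for each of the 8 machines
ssh user@lagrid02 hostname                 # should print the hostname without a password prompt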
The second error indicates that there is no way to reach the machine lagrid03 with the current network config. I guess you have an /etc/hosts entry with the IP address for lagrid03, but you do not have an interface configured in that network. For a more detailed answer you'll need to post details about your network configuration.
The issue is with authentication. If you go into the /etc/pam.d/rsh file and move the rlogin and rsh entries to the top so that it looks like this, it will work just fine.
# For root login to succeed here with pam_securetty, "rsh" must be listed in /etc/securetty.
auth required pam_nologin.so
auth required pam_securetty.so
auth required pam_env.so
auth required pam_rhosts_auth.so
account include system-auth
session optional pam_keyinit.so force revoke
session include system-auth
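Once pam.d is updated, a quick test (hostname from the error output above):
rsh lagrid02 hostname
If it prints the remote hostname without prompting, rsh authentication is working.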
