Unable to connect to mysql on ec2-instance from outside - amazon-ec2

I am unable to connect to the MySQL database on EC2 from my current server.
Using telnet I tried to check whether the port is open, but I get the following error:
"telnet: Unable to connect to remote host: Connection refused"
I checked the security group as well, and it has MySQL (port 3306) added.
The root device for the instance is EBS.
Any clue where I might be going wrong?
Abhishek

Check the /etc/my.cnf or where your MySQL config is stored. It probably contains something like bind_address=127.0.0.1 and/or skip-networking.
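If that is the case, a minimal fix looks something like this (a sketch, assuming the config lives in /etc/my.cnf and you want MySQL to listen on all interfaces; adjust the path and address for your setup):
# /etc/my.cnf -- relevant lines under the [mysqld] section
[mysqld]
bind-address = 0.0.0.0      # listen on all interfaces instead of 127.0.0.1 only
# skip-networking           # make sure this is commented out or removed
Then restart MySQL (e.g. sudo service mysqld restart) and remember that the MySQL account you connect with must also be allowed to connect from a remote host (e.g. 'user'@'%' rather than 'user'@'localhost').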

Related

clickhouse-client get error "Timeout exceeded while reading from socket"

I'm a newbie to ClickHouse. I'm trying to create a ClickHouse database on my Ubuntu 18.04 remote server. I followed the instructions to install ClickHouse from the DEB package at this link: https://clickhouse.tech/docs/en/getting_started/install/#from-sources
After that, when I run the command clickhouse-client, it shows something like this:
root@busmap-api-test:~# clickhouse-client
ClickHouse client version 20.3.5.21 (official build)
Connecting to localhost:9000 as user default.
Code: 209. DB::NetException: Timeout exceeded while reading from socket (127.0.0.1:9000)
Can someone help me figure out what the problem is and how I can solve it?
Thanks,
Follow these steps to resolve the issue:
Check that the clickhouse-server service has started:
service clickhouse-server status
Check the server logs to find the possible reason:
cat /var/log/clickhouse-server/clickhouse-server.err.log
If the error 'Address already in use' occurred:
{} <Error> Application: Net Exception: Address already in use: [::1]:9000
{} <Error> Application: Net Exception: Address already in use: 127.0.0.1:9000
switch the CH server to another port by editing the tcp_port param in the /etc/clickhouse-server/config.xml file:
..
<tcp_port>9032</tcp_port>
..
Restart the CH server service:
service clickhouse-server restart
and connect this way:
clickhouse-client --port 9032
I actually had this problem too, but I got it working with the default port.
The settings should look like this if you want to connect remotely and still be able to use the loopback from localhost.
<listen_host>::1</listen_host>
<listen_host>0.0.0.0</listen_host>
This allows the loopback method to work (i.e. clickhouse-client with no args) on localhost, connecting through the IPv6 route, and the remote connection (i.e. clickhouse-client -h <hostname>) through the IPv4 connection.
My original problem was that I only used <listen_host>0.0.0.0</listen_host> in my config, which meant the clickhouse-client with no args would not work on localhost. And I could not get both to work by adding <listen_host>127.0.0.1</listen_host>.
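After editing /etc/clickhouse-server/config.xml with both listen_host lines, a quick way to verify both paths is something like this (a sketch; <hostname> is a placeholder for your server's name or IP):
service clickhouse-server restart      # pick up the new listen_host settings
clickhouse-client                      # no args: loopback connection via ::1
clickhouse-client -h <hostname>        # remote-style connection via 0.0.0.0 (IPv4)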

DbFit Connection for Remote Oracle Server via SSH

Please help me connect to a remote Oracle server.
Oracle 11g is hosted on a Unix server, which we access by:
Creating an SSH session. See attachment 'Remote Host Connection.jpg'
Creating an SSH tunnel. See attachment 'Tunneling.jpg'
After the tunnel is established (2nd method), I thought we should be able to establish a connection from DbFit to the DB server, but it's not working. A connection timeout error is shown.
Please help me get the connection established:
Either by directly creating an SSH tunnel from DbFit, if possible,
or by connecting to the DB server using the SSH tunnel already created.
Below is the code used for testing
!path lib/*.jar
!|dbfit.OracleTest|
!|Connect|localhost:5000|<DB user name>|<password>|dbfit|
!|Query|select 'test' as x|
|x |
|test |
Please find attached a screenshot with the error displayed.
The issue was resolved.
It was a problem with tunneling to the DB server and not with FitNesse (see the tunnel sketch below).
Regards,
Kabilan
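For anyone hitting the same wall, a local port forward matching the !|Connect|localhost:5000|... line above can be created like this (a sketch; the user and host names are placeholders, and 1521 is the default Oracle listener port, which may differ in your environment):
ssh -L 5000:localhost:1521 <unix-user>@<unix-server>
# keep this session open; DbFit then connects to localhost:5000,
# which is forwarded to the Oracle listener on the remote host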

Connecting to Hive Database with DBeaver

I have a Hortonworks Hadoop cluster where the data nodes are on a separate network off of the master/head node. The only way to access the data nodes is through the master node or an edge node. From the edge node, I execute the hive command to connect into my hive database.
I cannot connect to the hive database from my desktop with DBeaver (4.3.0, 64-bit Windows) or the hive command line interface. Through DBeaver, I tried creating an SSH tunnel to my edge node and continually receive "Could not open client transport with JDBC Uri: jdbc:hive2://127.0.0.1:[port#]/[database]".
Configuration for Hive/Apache Hive driver:
General Tab:
Host: dataNodeName
Port: 10000
Database/Schema: databaseName
User name: myUID
SSH Tunnel Tab (Network page):
Checked Use SSH Tunnel
Host/IP: edgeNodeServerName
Port: 22
User Name: myUID
Authentication Method: Password
Password: myPWD
Advanced
Local port: 0
Keep-Alive interval (ms): 0
When I select "Test Connection" with local port set to "0", I receive the above error message with random port numbers. If I set the local port to "10000", I receive the above error with port number "10000".
It looks like DBeaver is ignoring the generic JDBC connection settings--the host name in the created JDBC string is 127.0.0.1 instead of the data node name.
What am I missing? How do I setup DBeaver to access a Hive database located on a "hidden" network?
Is your hostname configured with the IP address mentioned in the JDBC connect string (127.0.0.1)?
Are you able to connect with beeline from your Unix shell?
Syntax to connect to beeline (HiveServer2):
beeline -u jdbc:hive2://<hostname>:<hive listener port>/<database> -n <username> -p <password>
If you're able to connect with beeline, you should be able to connect to Hive from DBeaver using the same host and port number (see the filled-in example after this answer).
The Hive listener port is configured as 10000 by default, but there's a possibility that your admin changed the port number. Check the port number in hive-site.xml, or get it from your admin.
Could you please uncheck the SSH tunnel and try?
This link has all the setup from scratch; please check whether you have missed any step.
https://www.linkedin.com/pulse/query-hive-hiveserver2-from-windows-using-universal-database-nimmala
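For example, with the values from the question above, the beeline check from the edge node would look something like this (a sketch; it assumes the data node really is listening on the default port 10000):
beeline -u jdbc:hive2://dataNodeName:10000/databaseName -n myUID -p myPWD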
Not sure if your environment is Kerberized or not, but assuming it is,
the following is what worked for me while connecting to Cloudera.
Fetch the krb5.conf (or krb5.ini) from your admins and place it in some directory. I normally put the file in the location where I keep my keytabs.
Create a jaas.conf file and place it at the same location (or a location of your choice).
jaas.conf must look like the below (copy/paste):
Client {
com.sun.security.auth.module.Krb5LoginModule required
debug=true
doNotPrompt=true
useKeyTab=true
keyTab="C:\Users{user}\krb5cc_{user}"
useTicketCache=true
renewTGT=true
principal="{user}#DOMAIN.ORG" ;
};
Edit your dbeaver.ini file and provide references to both of these files (append the following lines to the existing dbeaver.ini). Make sure you back up dbeaver.ini: with reinstallations or upgrades to a newer version, dbeaver.ini may get replaced, in which case you can copy the lines below from your backup dbeaver.ini file.
-Djavax.security.auth.useSubjectCredsOnly=false
-Djava.security.krb5.debug=true
-Dsun.security.krb5.debug=true
-Djava.security.krb5.conf=C:\Users\{User}\Documents\Keytabs\krb5.conf
-Djava.security.auth.login.config=C:\Users\{User}\Documents\Keytabs\jaas.conf
Last step (you may or may not need this):
I init my keytab before connecting, so I use shell commands.
Press F4 after creating the connection.
Make sure that in the user field you put just the user name for which you are initializing the keytab and nothing else. It should not be {user}@domain.org.
Use the shell commands to init the keytab, as sketched below.
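A typical init command would look something like this (a sketch; the keytab path and principal are placeholders matching the jaas.conf above, and the exact syntax depends on which Kerberos client provides kinit on your machine):
kinit -kt C:\Users\{user}\Documents\Keytabs\{user}.keytab {user}@DOMAIN.ORG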
I was also having trouble configuring DBeaver for Hive; my solution was to use Cloudera's ODBC driver. It worked a lot better than the JDBC drivers (auto-complete works, it is quicker, and there is no need to run kinit), and I could automate its creation.
The only problem is that you must be an admin to install it.

Cassandra: target machine actively refused it

I am trying to run Cassandra (CQL shell) and I am receiving the following error. I have tried all the Google responses to existing questions; nothing has fixed it so far.
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(10061, "Tried connecting to [('127.0.0.1', 9042)]. Last error: No connection could be made because the target machine actively refused it")})
Before installing Apache Cassandra, JDK must be installed.
Can you make sure the IP address is set correctly in the rpc_address setting of the cassandra.yaml file on your Cassandra server?
Also, you need to make sure port 9042 is open and available for incoming traffic (if your IT department sets up the servers, it is possible this port is blocked unless otherwise specified; see the quick check below).
Hope it helps.
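A quick way to check whether anything is listening on 9042 (a sketch; the 10061 error above is a Winsock code, so this assumes a Windows machine):
netstat -an | findstr 9042
If nothing shows up, the Cassandra server is not listening yet; if a line does show up, check rpc_address and the firewall rules.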
I also faced the same issue, but maybe the two options below can help:
Option 1:
In my case I hadn't started the Cassandra server and was directly trying to connect to Cassandra.
(a) First start the Cassandra server via cmd --> \bin>cassandra.bat -f
and then
(b) try to connect to its node --> \bin>cqlsh.bat -u cassandra
Option 2:
Try changing the rpc_address in your cassandra.yaml file to either 127.0.0.1 instead of localhost
or to 0.0.0.0 instead of localhost (see the sketch below),
and then start the server again from a new CMD.
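For reference, the relevant cassandra.yaml lines would look roughly like this (a sketch; pick the address that matches your setup):
# cassandra.yaml
rpc_address: 127.0.0.1        # or 0.0.0.0 to listen on all interfaces
                              # (with 0.0.0.0, broadcast_rpc_address must also be set)
native_transport_port: 9042   # the port cqlsh connects to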

Postgresql is not allowed to be connected remotely

Could someone help take a look at this weird problem? I'm still not able to connect remotely to my PostgreSQL.
My Steps:
1. Download and install the latest PostgreSQL on my local machine
2. Set up PostgreSQL
3. Create a DB
4. Modify "pg_hba", add the row "host all all 0.0.0.0/0 md5"
5. Modify "postgresql.conf", make sure "listen_addresses = '*'"
6. Restart the PostgreSQL service
7. Open local PgAdmin, and connect to the DB <-- Success!
8. From a remote desktop, do the same thing as #7 <-- Failed!
Error Message:
"Server doesn't listen"
"Could not connect to server......accepting TCP/IP connections on port 5432?"
I found "TCP 0.0.0.0:5432 Listening" when I type "netstat -a"
I checked firewall, it's not enabled
......
Can someone please help? Does anyone encounter this situation?
P.S, my os is Winserver 2008
Thanks in advance~
If you're connecting to the local machine via RDP, then you'll be connecting via localhost, and no firewall or LAN/WAN/NAT settings should affect pgAdmin.
When you edit the pg_hba and postgresql.conf files, Server 2008 doesn't usually let you edit them directly in place. I usually copy them out, edit them, and then paste them back in. You'll need to authorise the paste from an admin account.
I usually have a separate rule in "pg_hba" with "host all all 127.0.0.1/32 md5" for local connections. Also ensure when you restart the service that it is running under the user "postgres" and not as some other user.
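Putting the pieces from the question and this answer together, the relevant config lines would look roughly like this (a sketch; the exact file locations depend on your PostgreSQL version and data directory):
# postgresql.conf
listen_addresses = '*'                         # listen on all interfaces, not just localhost

# pg_hba.conf
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       all   127.0.0.1/32   md5     # local connections
host    all       all   0.0.0.0/0      md5     # remote connections
Then restart the PostgreSQL service and re-test with "netstat -a" and pgAdmin from the remote machine.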
