I'm trying to connect to my MySQL server with Logstash on our Elastic Cloud cluster. The problem is that we use an SSH tunnel to reach the SQL server. Is there a way, using the Logstash pipeline creation interface on Elastic Cloud, to connect to a MySQL server through an SSH tunnel?
The interface is as follows; there aren't that many parameters.
No, I'm afraid that's not supported by Logstash's JDBC input plugin (which handles the connection to MySQL). Can you set up the SSH tunnel between your Logstash server and MySQL manually?
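For example, on a self-managed Logstash host that can reach an SSH bastion in front of MySQL, a local port forward along these lines would do it (host names and ports are placeholders):

ssh -N -f -L 3307:mysql.internal:3306 tunneluser@bastion.example.com

The JDBC input's connection string would then point at jdbc:mysql://127.0.0.1:3307/yourdb, and the tunnel itself lives entirely outside of Logstash.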
Related
I'm trying to connect to an Oracle DB from Entity Framework, but the DB rejects direct connections from the PC. I need to connect through a jump server / gateway.
Here's what's working:
From the gateway server I can connect to the DB using both sqlplus and EF. The gateway has an SSH server.
From the local PC I can start an SSH session on the gateway server (and from that session I can connect to the DB using sqlplus).
This question is similar, but its solution doesn't work for me (I get timeouts):
How can I connect to Oracle Database 11g server through ssh tunnel chain (double tunnel, server in company network)?
In particular, when I run plink -N -L localport:dbserver:dbport yourname@connectionserver locally on the PC, I get "session granted", but when connecting with EF from the PC, the connection request times out. In that case I use the gateway server as the DataSource.
I've checked firewall settings, and port 1521 is open on both PC and gateway server.
UPDATED INFO:
When, instead of using SSH port forwarding on the gateway, I use socat tcp-listen:1521,reuseaddr,fork tcp:<db_server:port> as the port forwarder (on the gateway), I get a response from tnsping <gateway_server> on the local PC.
I need to fetch data from a MySQL server through SSH tunneling.
I am using Apache Beam 2.19.0 Java JdbcIO on Google Dataflow to connect to the database.
But as the database is inside a private network, I need to reach it through an intermediate SSH server, i.e. via SSH tunneling.
Is this achievable using Apache Beam's JdbcIO?
This functionality isn't built into Apache Beam, but there are several options. JdbcIO uses the standard Java JDBC interface to connect to your database, so it wouldn't be too difficult to wrap the MySQL JDBC driver with your own driver that sets up an SSH tunnel before connecting. A quick Google search turns up a project that wraps an arbitrary JDBC driver with an SSH tunnel using SSHJ: jdbc-sshj (a copy is published to Maven as com.cekrlic:jdbc-sshj:0.1.0). The project looks somewhat unmaintained, but it will do what you want. Add it to your runtime dependencies, then update your configuration to something like this (this example is not secure):
pipeline.apply(JdbcIO.<KV<Integer, String>>read()
    .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
            // jdbc-sshj driver class; in the URL, the part before ";;;" configures
            // the SSH tunnel and the part after it is the MySQL JDBC URL proper
            "com.cekrlic.jdbc.ssh.tunnel.SshJDriver",
            "jdbc:sshj://sshbastion?remote=database:3306&username=sshuser&password=sshpassword&verify_hosts=off;;;jdbc:mysql://localhost:3306/mydb")
        .withUsername("username")
        .withPassword("password"))
    .withQuery("select id,name from Person")
    .withCoder(KvCoder.of(BigEndianIntegerCoder.of(), StringUtf8Coder.of()))
    .withRowMapper(new JdbcIO.RowMapper<KV<Integer, String>>() {
        public KV<Integer, String> mapRow(ResultSet resultSet) throws Exception {
            return KV.of(resultSet.getInt(1), resultSet.getString(2));
        }
    })
);
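For reference, with Maven the runtime dependency quoted above would be declared roughly like this:

<dependency>
    <groupId>com.cekrlic</groupId>
    <artifactId>jdbc-sshj</artifactId>
    <version>0.1.0</version>
    <scope>runtime</scope>
</dependency>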
If you are using Dataflow, you can set up a GCE VM to act as your gateway. On that VM, use SSH port forwarding to expose the database on the VM's external interface (roughly ssh -N -L \*:3306:database:3306 sshbastion, run from the VM), make the port available within the VPC, and then run your Dataflow job in that VPC. If your database is already running in GCP, you can use this approach to run your Dataflow job on the same VPC as the database and drop the SSH step.
I have tried to connect from JMeter to an external MySQL server using the JDBC sampler, but I am getting errors. Connecting to the local MySQL server works. I am confused about how to connect JMeter on my local machine to a database on another machine using a JDBC connection.
Make sure your remote MySQL server is accepting remote connections. Locate the bind-address line in the my.cnf file and set it to listen on all interfaces:
bind-address = 0.0.0.0
See Troubleshooting Problems Connecting to MySQL for more details. You will need to restart the MySQL server in order to pick up any changes made in the my.cnf file.
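For example, on a typical systemd-based Linux host that would be something like the following (the service may be called mysqld depending on the distribution):

sudo systemctl restart mysql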
Make sure the operating system firewall on the MySQL server side allows incoming connections to the MySQL server's TCP port (the default is 3306).
Verify that you are able to reach port 3306 with a telnet client or equivalent.
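For instance, using a placeholder host name:

telnet mysql.example.com 3306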
If you are still experiencing problems, update your question with the JDBC Request sampler output and the jmeter.log file contents. I would also recommend checking out The Real Secret to Building a Database Test Plan With JMeter to learn more about database load testing with JMeter.
If you are able to connect to that server using MySQL Workbench or another tool, just use the configuration below for JMeter.
Just remember that you need to have mysql-connector-java-5.1.40-bin.jar (or another version of it) in the apache-jmeter-3.0\lib folder.
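The key fields of that JDBC Connection Configuration element would be along these lines (host, schema, and credentials are placeholders):

Variable Name: myDatabase
Database URL: jdbc:mysql://your.db.host:3306/your_schema
JDBC Driver class: com.mysql.jdbc.Driver
Username: your_mysql_user
Password: your_mysql_password

The Variable Name is whatever you reference from the corresponding field of your JDBC Request samplers.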
Hope this helps!
I am following this guide on Hadoop/FIWARE-Cosmos and I have a question about the Hive part.
I can access the old cluster’s (cosmos.lab.fiware.org) headnode through SSH, but I cannot do it for the new cluster. I tried both storage.cosmos.lab.fiware.org and computing.cosmos.lab.fiware.org and failed to connect.
My intention in trying to connect via SSH was to test Hive queries on our data through the Hive CLI. After failing to do so, I checked and was able to connect to the 10000 port of computing.cosmos.lab.fiware.org with telnet. I guess Hive is served through that port. Is this the only way we can use Hive in the new cluster?
SSH access is not enabled on the new pair of clusters. This is because users tended to install a lot of stuff (even things not related to Big Data) on the "old" cluster, which had SSH access enabled as you mention. So, the new pair of clusters is intended to be used only through the exposed APIs: WebHDFS for data I/O and Tidoop for MapReduce.
That being said, a Hive server is running as well, and it should be exposing a remote service on port 10000, as you mention. I say "it should be" because it is running an experimental authenticator module based on OAuth2, as WebHDFS and Tidoop do. In theory, connecting to that port from a Hive client is as easy as using your Cosmos username and a valid token (the same one you are using for WebHDFS and/or Tidoop).
And what about a remote Hive client? Well, that is something your application has to implement. Anyway, I have uploaded some implementation examples to the Cosmos repo. For instance:
https://github.com/telefonicaid/fiware-cosmos/tree/develop/resources/java/hiveserver2-client
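For illustration only, a minimal client using the standard HiveServer2 JDBC driver might look like the sketch below. Passing the OAuth2 token as the JDBC password is an assumption here, so check the hiveserver2-client example linked above for the authoritative way to authenticate:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CosmosHiveClient {
    public static void main(String[] args) throws Exception {
        // Standard HiveServer2 JDBC driver (org.apache.hive:hive-jdbc).
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Placeholders: your Cosmos username and a valid OAuth2 token
        // (the same token used for WebHDFS/Tidoop).
        String user = "your_cosmos_username";
        String token = "your_oauth2_token";

        // The token is passed as the password here, which is an assumption.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://computing.cosmos.lab.fiware.org:10000/default", user, token);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}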
I'm not able to connect to my Redshift cluster through ODBC from an EC2 instance. However, I'm able to connect to it from an outside computer (e.g. my MacBook) using the ODBC connector. I have been trying and trying, but in vain. How can I make my EC2 instance connect to Redshift? The error I get is:
Is the Server running on host .................and accepting TCP/IP connections on port 5439?
I'm really confused, as I can connect from outside but not from an EC2 instance.
Thanks for the help.
Add the security group of your EC2 machine to the list of ingress rules of the security group attached to your Redshift cluster.
Basically, you need to allow your EC2 machine to connect to the Redshift cluster.
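Concretely, the inbound rule on the Redshift cluster's security group would look roughly like this (the source group ID is a placeholder for the security group attached to your EC2 instance):

Type: Custom TCP
Protocol: TCP
Port range: 5439
Source: sg-0123456789abcdef0 (the EC2 instance's security group)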
If the instance is in the same VPC, the public hostname of the Redshift cluster might not work.