How to set up telnet on an AWS instance? - amazon-ec2

I got SSH working fine. But I am facing an issue with connecting via telnet.

This works for me after logging in to the EC2 instance:
sudo yum -y install telnet

SSH is recommended over telnet, as telnet is not encrypted and is not installed by default on Amazon instances.
However, if needed, these are the steps for Linux (Amazon Linux or CentOS):
Install the telnet daemon on the instance: install telnet-server using sudo yum install telnet-server. The telnet package provides the client program, for the case where you want to connect out from the instance using a telnet client; it is not needed for this exercise.
Enable the telnet daemon service:
- By default the service is disabled in /etc/xinetd.d/telnet; the disable flag needs to be set to no (a non-interactive one-liner follows the listings below).
service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
disable = yes
}
After the change it should look like this:
service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
disable = no
}
Verify the configuration for any edit-related errors, then enable xinetd at boot:
sudo chkconfig xinetd on
Bring up the telnet service:
Bring up the telnet daemon as root with the sudo service xinetd restart command.
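To confirm the daemon is actually listening on port 23, a quick hedged check (assuming net-tools is installed):
sudo netstat -tlnp | grep :23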
Enable inbound telnet default port (23) on AWS Console:
In AWS Console EC2/Security Groups/<Your Security Group>/Inbound, set a rule
Type: Custom TCP Rule
Protocol: TCP
Port Range: 23
Source: <As per your business requirement>
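Alternatively, the same rule can be added from the AWS CLI; a sketch in which the group id and CIDR are placeholders:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 23 --cidr 203.0.113.0/24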
Test the telnet connection:
Test the telnet connection from any client enabled in the firewall.
>telnet ec2-XX-XX-XXX-XXX.region.compute.amazonaws.com
Connected to ec2-XX-XX-XXX-XXX.region.compute.amazonaws.com.
Escape character is '^]'.
Password:
The steps (tools) will vary slightly for other Linux variants.
PS: Referred to http://aws-certification.blogspot.in/2016/01/install-and-setup-telnet-on-ec2-amazon.html and fixed a few issues in the commands.

Related

How to use the Oracle Cloud Infrastructure MySQL DB and X DevAPI together?

So I've created a MySQL DB on OCI and can connect to it via SSH; I have all the ingress rules set up, the users, etc.
What do I put in the host: "...." field in the JavaScript code (instead of localhost)?
mysqlx
  .getSession({
    user: 'user',
    password: 'password',
    host: 'localhost',
    port: '33060',
  })
Do I have to do anything else in OCI since the connection is set up as SSH or can I set it up on the public subnet settings as a new ingress rule?
Thanks for any help.
The answer in OCI is to use the host name of the compute instance that houses the MySQL DB, and then set up MySQL Router in OCI as follows:
Step 1 - Install and Configure MySQL Router
Assuming your OCI Compute instance is running Oracle Enterprise Linux Server release 7.
SSH into the OCI Compute where MySQL Router will be installed
Install MySQL Router. Run:
sudo yum -y install https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
sudo yum -y install mysql-router
Configure MySQL Router by appending to the file /etc/mysqlrouter/mysqlrouter.conf. For example, assuming the MDS private IP is 10.0.0.6, run:
sudo tee -a /etc/mysqlrouter/mysqlrouter.conf > /dev/null << EOF
[routing:redirect_classic]
bind_address = localhost:3306
destinations = 10.0.0.6:3306
routing_strategy=first-available
[routing:redirect_xprotocol]
bind_address = localhost:33060
destinations = 10.0.0.6:33060
protocol = x
routing_strategy=first-available
EOF
Start MySQL Router and check if the service is active (running). Run:
$ sudo systemctl start mysqlrouter.service
$ sudo systemctl status mysqlrouter.service
Automatically start MySQL Router when the Compute instance reboots
$ sudo systemctl enable mysqlrouter.service
Add the firewalld rules. Run:
$ sudo firewall-cmd --permanent --add-port=3306/tcp
$ sudo firewall-cmd --permanent --add-port=33060/tcp
$ sudo firewall-cmd --reload
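Optionally, sanity-check the router locally before pointing applications at it; a hedged aside (assuming the mysql client is installed; dbuser is a placeholder):
$ mysql -h 127.0.0.1 -P 3306 -u dbuser -p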
Thanks Airton Latori for the assist.
I'm not familiar with OCI specifics, but eventually there should be a hostname or IP address for the MySQL instance (or router) somewhere you can connect to. And, assuming the endpoint "speaks" the X Protocol, that is what you should provide for the host configuration property.
Disclaimer: I'm the lead developer of the MySQL X DevAPI Connector for Node.js
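If you want to confirm that an endpoint actually speaks the X Protocol before wiring it into the connector, one hedged option is a quick check with MySQL Shell (assuming mysqlsh is installed; dbuser and router-host are placeholders):
mysqlsh --mysqlx -u dbuser -h router-host -P 33060 --sql -e "SELECT 1"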

Sesman-Xvnc throws password failed with every user

I have an Ubuntu 16.04 LTS virtual machine that I use for log management. Since I created it, I have used Sesman-Xvnc and it has always been nice and easy to log in. However, after being on it for the last 3 weeks with no issues whatsoever, today I got to the office and it throws this error:
Connecting to sesman ip 127.0.0.1 port 3350
sesman connect ok
sending login info to session manager, please wait...
xrdp_mm_process_login_response: login successful for display
Started connecting
connecting to 127.0.0.1 5912
tcp connected
security level is 2 (1 = none, 2 = standard)
password failed
error - problem connecting
I didn't change my password, the machine was on all the time, and I am able to log in via ssh with my user and password.
I have tried reinstalling the services with:
sudo apt-get remove xrdp vnc4server tightvncserver
sudo apt-get install tightvncserver
sudo apt-get install xrdp
And then restarted the xrdp service with:
service xrdp restart
I have also created a new user but the results are the same; password failed.
Any ideas of how to sort this out?
Thank you very much familia. ;)
I too have the same issue, facing it since today. I have put up the issue here:
XRDP doesn't connect to Azure VM suddenly
I fixed it by allowing the port through which it tries to connect to sesman in ufw.
The moment you see "connecting to sesman ip 127.0.0.1 port 3350" (or any other port) in the RDP output, take that port number and allow it through ufw.
These are the steps I used:
Downgrade your xrdp:
sudo apt-get install xrdp=0.6.1-2
and hold the xrdp package:
sudo apt-mark hold xrdp
sudo ufw enable
sudo ufw allow 3350
sudo ufw allow 3389
NB: you may use this command to see if it's open:
sudo netstat -plnt | grep rdp
Perform these in the SSH window.
This worked for me. Hope it fixes this issue.
We had the same issue and it seems to be caused by an automatic update of 'xrdp'. Have a look at this post:
https://askubuntu.com/questions/1108550/xrdp-failed-problem-connecting-when-package-was-auto-updated
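As a quick aside (not from the original answers): on apt-based systems you can check which xrdp version is installed, and whether an unattended update pulled in a new one, with:
apt-cache policy xrdp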

Multiple Reverse shells using the same public port

I've got a server behind a firewall, and the firewall only allows traffic through port 22. This server has both public and private addresses.
I've also got about 1K clients that I need to reverse shell to this server, and I need to be able to pick one of them by id when I want that reversed SSH tunnel.
My goal is to make the clients connect to the SSH server via port 22, and each of these connections should be forwarded to localhost on the port matching that client's id.
When I connect to the server with my laptop, also via ssh, I would then ssh to localhost on the correct id and get the client's shell.
Can someone show me a good path to achieve this behaviour using bash, ssh, and Linux tools?
Note - I don't want to use client.py and server.py because most of my clients are Android-based, and it could easily become a nightmare to install Python on all of them.
The problem was solved using remote port forwarding:
ssh -R 21:localhost:8888 user@server
In this command, 8888 represents the terminal id. For this to work, I had to add this line to the server's sshd configuration:
GatewayPorts yes
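For the fuller pattern the question describes, a minimal sketch, assuming each client runs sshd on port 22 and uses its numeric id as the server-side bind port (the names and the id 2001 are placeholders):
# on the client with id 2001: keep a reverse tunnel open to the server
ssh -N -R 2001:localhost:22 user@server
# on the server (after ssh-ing in from the laptop): reach that client's shell
ssh -p 2001 clientuser@localhost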

access host's ssh tunnel from docker container

Using Ubuntu Trusty: there is a service running on a remote machine that I can access via port forwarding through an ssh tunnel from localhost:9999.
I have a docker container running. I need to access that remote service via the host's tunnel, from within the container.
I tried tunneling from the container to the host with -L 9000:host-ip:9999, but accessing the service through 127.0.0.1:9000 from within the container fails to connect. To check whether the port mapping was on, I tried
nc -luv -p 9999 # at host
nc -luv -p 9000 # at container
following this (paragraph 2), but there was no perceived communication, even when doing
nc -luv host-ip -p 9000
at the container
I also tried mapping the ports via docker run -p 9999:9000, but this reports that the bind failed because the host port is already in use (from the host tunnel to the remote machine, presumably).
So my questions are
1 - How will I achieve the connection? Do I need to setup an ssh tunnel to the host, or can this be achieved with the docker port mapping alone?
2 - What's a quick way to test that the connection is up? Via bash, preferably.
Thanks.
Using your host's network as the network for your containers, via --net=host or in docker-compose via network_mode: host, is one option, but it has the unwanted side effects that (a) you now expose the container's ports on your host system and (b) you can no longer connect to those containers that are not mapped to your host network.
In your case, a quick and cleaner solution would be to make your ssh tunnel "available" to your docker containers (e.g. by binding ssh to the docker0 bridge) instead of exposing your docker containers in your host environment (as suggested in the accepted answer).
Setting up the tunnel:
For this to work, retrieve the ip your docker0 bridge is using via:
ifconfig
you will see something like this:
docker0 Link encap:Ethernet HWaddr 03:41:4a:26:b7:31
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
Now you need to tell ssh to bind to this ip to listen for traffic directed towards port 9000 via
ssh -L 172.17.0.1:9000:host-ip:9999
Without setting the bind_address, :9000 would only be available to your host's loopback interface and not per se to your docker containers.
Side note: You could also bind your tunnel to 0.0.0.0, which will make ssh listen on all interfaces.
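For example (a sketch; user and remote-host are placeholders):
ssh -L 0.0.0.0:9000:host-ip:9999 user@remote-host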
Setting up your application:
In your containerized application use the same docker0 ip to connect to the server: 172.17.0.1:9000. Now traffic being routed through your docker0 bridge will also reach your ssh tunnel :)
For example, if you have a .NET Core application that needs to connect to a remote db located at :9000, your ConnectionString would contain "server=172.17.0.1,9000;".
Forwarding multiple connections:
When dealing with multiple outgoing connections (e.g. a docker container needs to connect to multiple remote DBs via tunnel), several valid techniques exist, but an easy and straightforward way is to simply create multiple tunnels listening to traffic arriving at different docker0 bridge ports.
Within your ssh tunnel command (ssh -L [bind_address:]port:host:hostport [user@]hostname), the port part of the bind_address does not have to match the hostport of the host and can therefore be freely chosen by you. So within your docker containers just channel the traffic to different ports of your docker0 bridge, and then create several ssh tunnel commands (one for each port you are listening to) that intercept data at these ports and forward it to the different hosts and hostports of your choice, as in the sketch below.
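A minimal sketch of two such tunnels (hedged; the hosts, ports, and user are placeholders):
# tunnel 1: docker0 port 9001 -> first remote DB
ssh -L 172.17.0.1:9001:db1.example.com:5432 user@tunnel-host
# tunnel 2: docker0 port 9002 -> second remote DB
ssh -L 172.17.0.1:9002:db2.example.com:3306 user@tunnel-host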
On macOS (tested with Docker v19.03.2):
1) create a tunnel on the host
ssh -i key.pem username@jump_server -L 3336:mysql_host:3306 -N
2) from the container, you can use host.docker.internal, docker.for.mac.localhost, or docker.for.mac.host.internal to reference the host.
example,
mysql -h host.docker.internal -P 3336 -u admin -p
note from docker-for-mac official doc
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access).
From 18.03 onwards our recommendation is to connect to the special DNS
name host.docker.internal, which resolves to the internal IP address
used by the host. This is for development purpose and will not work in
a production environment outside of Docker Desktop for Mac.
The gateway is also reachable as gateway.docker.internal.
I think you can do it by adding --net=host to your docker run. But see also this question: Forward host port to docker container
I'd like to share my solution to this. My case was as follows: I had a PostgreSQL SSH tunnel on my host and I needed one of my containers from the stack to connect to a database through it.
I spent hours trying to find a solution (Ubuntu + Docker 19.03) and I failed. Instead of doing voodoo magic with iptables or modifying the settings of the Docker engine itself, I came up with a solution and was shocked I hadn't thought of it earlier. The most important thing was that I didn't want to use host mode: security first.
Instead of trying to allow a container to talk to the host, I simply added another service to the stack which creates the tunnel, so the other containers can talk to it easily without any hacks.
After configuring a host inside my ~/.ssh/config:
Host project-postgres-tunnel
HostName remote.server.host
User sshuser
Port 2200
ForwardAgent yes
TCPKeepAlive yes
ConnectTimeout 5
ServerAliveCountMax 10
ServerAliveInterval 15
And adding a service to the stack:
postgres:
  image: cagataygurturk/docker-ssh-tunnel:0.0.1
  volumes:
    - $HOME/.ssh:/root/ssh:ro
  environment:
    TUNNEL_HOST: project-postgres-tunnel
    REMOTE_HOST: localhost
    LOCAL_PORT: 5432
    REMOTE_PORT: 5432
  # uncomment if you wish to access the tunnel on the host
  #ports:
  #  - 5432:5432
The PHP container started talking through the tunnel without any problems:
postgresql://user:password@postgres/db?serverVersion=11&charset=utf8
Just remember to put your public key inside that host if you haven't already:
ssh-copy-id project-postgres-tunnel
I'm pretty sure this will work regardless of the OS used (MacOS / Linux).
I agree with @hlobit that @B12Toaster's answer should be the accepted answer.
In case anyone hits this problem but with a slightly different SSH tunnel setup, here are my findings. In my case, instead of creating a tunnel from the Docker host machine to the remote machine using ssh -L, I was creating a remote forward SSH tunnel from the remote machine to the Docker host machine, i.e. ssh -R.
In this setup, sshd does NOT allow gateway ports by default, i.e. in the file /etc/ssh/sshd_config on the Docker host, the GatewayPorts no entry should be uncommented and set to GatewayPorts yes or GatewayPorts clientspecified. I configured GatewayPorts clientspecified and set up the remote forward SSH tunnel with ssh -R 172.17.0.1:dockerHostPort:localhost:sshClientPort user@dockerHost. Remember to restart sshd after changing /etc/ssh/sshd_config (sudo systemctl restart sshd).
Your Docker container should be able to connect to Docker host on 172.17.0.1:dockerHostPort and this in turn gets tunnelled back to SSH client's localhost:sshClientPort.
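Putting it together, a minimal sketch (hedged; the ports and names are placeholders, and the check assumes the OpenBSD netcat is available in the container):
# on the remote machine, with GatewayPorts clientspecified set on the Docker host
ssh -N -R 172.17.0.1:9000:localhost:5432 user@dockerHost
# inside a container on the Docker host, verify the forwarded port answers
nc -vz 172.17.0.1 9000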
References:
https://www.ssh.com/ssh/tunneling/example
https://docs.docker.com/network/network-tutorial-host/
https://docs.docker.com/network/host/
My 2 cents for Ubuntu 18.04 - a very simple answer: no need for extra tunnels, extra containers, extra docker options, or exposing the host.
Simply, when creating a reverse tunnel, make sure ssh binds to all interfaces, as by default it binds the ports of the reverse tunnel to localhost only. For example, in PuTTY make sure that the option Connection->SSH->Tunnels "Remote ports do the same (SSH-2 only)" is ticked.
This is more or less equivalent to specifying the binding address 0.0.0.0 for the remote part of the tunnel (more details here):
-R [bind_address:]port:host:hostport
However, this did not work for me until I allowed the GatewayPorts option in my sshd server configuration. Many thanks to Stefan Seidel for his great answer.
In short: (1) you bind the reverse tunnel to 0.0.0.0, and (2) you let the sshd server accept such tunnels.
Once this is done I can access my remote server from my docker containers via the docker gateway 172.17.0.1 and port bound to the host.
On my side, running Docker in Windows Subsystem for Linux (WSL v1), I couldn't use the docker0 connection approach. host.docker.internal also doesn't resolve (latest docker version).
However, I found out I could directly use the host IP inside my docker container.
Get your Host IP (Windows cmd: ipconfig), e.g. 192.168.0.5
Bash into your Container and test if you can ping your host ip:
- docker exec -it d6b4be5b20f7 /bin/bash
- apt-get update && apt-get install iputils-ping
- ping 192.168.0.5
PING 192.168.0.5 (192.168.0.5) 56(84) bytes of data.
64 bytes from 192.168.0.5: icmp_seq=1 ttl=37 time=2.17 ms
64 bytes from 192.168.0.5: icmp_seq=2 ttl=37 time=1.44 ms
64 bytes from 192.168.0.5: icmp_seq=3 ttl=37 time=1.68 ms
Apparently, in Windows, you can directly connect from within containers to the host using the official host ip.
In case anyone needs it (like I did), the solution for Windows and WSL is the same as @prayagupd mentioned for macOS:
Create an SSH tunnel to your remote service with whatever tool you prefer to whatever port you prefer, for example 3300.
Then, from Docker container you can connect to, for example, MySQL DB on tunnel port 3300 using following command:
mysql -u user -p -h host.docker.internal -P 3300
An easy example to reproduce the situation and ssh to the host:
Run a container using --network="host":
docker container run --network="host" --interactive --tty --rm ubuntu bash
Now you can access your host using localhost
Now your host machine is a Linux machine that has a public-private key file to ssh into it. So copy the contents of your private key file and recreate the key file inside the container. (However, this is just a demonstration. It is not a good way to copy key files.)
Now ssh into your host. Use localhost to access it.
ssh -i key_file.pem ec2-user#localhost

ssh remote access on bash Windows 10

I'd like to connect remotely to the Ubuntu bash on my Windows 10.
I get an answer on port 22, but when it asks for username and password, it says access denied...
I've already created a user "root" and I've done a "sudo passwd root".
Windows firewall is deactivated (service stopped).
Thanks !
Stop the ssh server and ssh broker services on Windows to avoid an SSH port conflict.
Make the changes below in /etc/ssh/sshd_config:
UsePrivilegeSeparation no
PasswordAuthentication yes
Then restart the ssh server with sudo service ssh restart. If you see a "could not load host key" error, create the host key as below and restart the ssh service:
sudo ssh-keygen -f /etc/ssh/ssh_host_rsa_key -b 4096 -t rsa
First, you need to stop/disable the Windows 10 SSH Server Broker services, or change the OpenSSH port.
After that, modify the /etc/ssh/sshd_config:
UsePrivilegeSeparation no
PubkeyAuthentication no
PasswordAuthentication yes
I started having issues with my Bash on Ubuntu on Windows SSH connection after installing VirtualBox. I stopped the VM and uninstalled it, and still couldn't authenticate. The user 'Nobody' is correct: the best solution is either to disable the SSH Broker for Windows 10, or to just change the port for SSH on the Linux subsystem, which I did, and it works perfectly.
In most cases you must also add an inbound firewall rule to allow traffic on port 22. The default setup only allows inbound traffic for the Windows implementation of ssh, therefore not allowing any traffic for the openssh-server. Just follow the instructions above, then add a rule for port 22 inbound in Windows Firewall, and you should be set.
Since the Windows implementation doesn't provide chroot, you need to modify /etc/ssh/sshd_config:
UsePrivilegeSeparation no
You will also need to create a user, using the useradd command or similar, as sketched below.
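A minimal sketch (the user name sshuser is a placeholder):
# create a user with a home directory and a bash shell, then set a password for it
sudo useradd -m -s /bin/bash sshuser
sudo passwd sshuser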
