I have a Vagrant box with a LAMP stack and the Yii framework.
I need to run some commands that rely on environment variables I set in .bashrc while provisioning the box (.profile shows the same behavior).
The problem is that I want to launch them over ssh, but nothing I have tried works. For example:
vagrant ssh -- -t '[path]/./yiic command action'
Tells me that the environment variables are not set.
Even:
vagrant ssh -- -t 'printenv MYSQL_HOST'
Has this output:
rmessineo:~ rmessineo$ switebox ssh -- -t 'printenv MYSQL_HOST'
Connection to 127.0.0.1 closed.
rmessineo:~ rmessineo$
But of course if I log in and do the same:
rmessineo:~ rmessineo$ vagrant ssh
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Feb 24 11:02:05 2016 from 10.0.2.2
vagrant@debian-jessie:~$ printenv MYSQL_HOST
127.0.0.1
vagrant@debian-jessie:~$
Any ideas?
I added the following to my .profile file in the VM:
export MY_VAR="hello fred"
Then I am able to get the value of the new variable by running
fhenri@machine:~/project/examples/vagrant/ubuntu$ vagrant ssh -- -t 'source ~/.profile && printenv MY_VAR'
hello fred
Connection to 192.168.6.120 closed.
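These tricks are needed because a command passed over ssh runs in a non-interactive shell, which reads neither ~/.profile nor ~/.bashrc. An equivalent sketch, assuming bash is the remote login shell, is to run the command through an explicit login shell:
vagrant ssh -- -t 'bash -lc "printenv MYSQL_HOST"'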
Suppose the Vagrant instance forwards SSH on port 2222; then you can run the command:
ssh -p 2222 vagrant@127.0.0.1 'printenv MYSQL_HOST'
The password is vagrant, and you should get the result.
Adjust the port to match your actual Vagrant instance.
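If you are unsure which port is forwarded, vagrant ssh-config prints the effective SSH settings; a sketch for extracting the port number:
vagrant ssh-config | awk '/^ *Port / {print $2}'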
Related
I use some code that connects to a remote machine using the paramiko library. The connection is established over a tunnelled SSH connection bound to one of the localhost ports. The default shell on the remote machine is tcsh, but my code requires it to run bash. I have tested sshing some simple commands and it works fine.
$ ssh localhost -p 2222 'echo $0'
tcsh
To change the login shell, I have added the following two lines to my .tcshrc file:
setenv SHELL /bin/bash
exec /bin/bash --login
The following thing works:
$ ssh localhost -p 2222
[user@remote ~]$ echo $0
/bin/bash
But not the following:
$ ssh localhost -p 2222 'echo $0'
which gives no response. The same happens for the connections established by the paramiko-using code.
At the moment I am limited to user-level solutions and would rather not touch the paramiko-using code itself. Is there anything else I could try here?
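One user-level workaround might be to guard the exec so it only fires for interactive logins; a sketch for ~/.tcshrc using csh's standard $?prompt interactivity test:
if ( $?prompt ) then
    setenv SHELL /bin/bash
    exec /bin/bash --login
endif
The hang happens because exec /bin/bash --login discards the command tcsh was asked to run, leaving bash reading an empty stdin. With the guard, ssh localhost -p 2222 'echo $0' answers again (under tcsh), while interactive logins still land in bash.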
I am trying to run the needrestart tool by ansible to check for processes with outdated libraries.
When I run needrestart with the command or shell modules from Ansible, it says that I need to restart my ssh daemon. When I run needrestart manually, it says that there are no processes with outdated libraries.
Restarting the ssh daemon does not make a difference. But after rebooting the remote server, the ssh daemon is no longer listed as a service I should restart.
So I really do not understand the difference between the ssh connection from ansible and my manual ssh connection that causes the different behavior of needrestart.
Any help would be appreciated!
Thank you in advance and best regards
Max
My local machine
$ python -V
Python 2.7.13
$ ansible --version
ansible 2.2.0.0
$ cat ansible.cfg
[defaults]
inventory = hosts
ask_vault_pass = True
retry_files_enabled = False
I am using an ssh proxy to connect to the server:
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q user@jumphost.example.com"'
The remote server
$ cat /etc/debian_version
8.6
$ python -V
Python 2.7.9
Using ansible
$ ansible example.com -m command -a 'needrestart -b -l -r l'
Vault password:
example.com | SUCCESS | rc=0 >>
NEEDRESTART-VER: 1.2
NEEDRESTART-SVC: ssh.service
$ ansible example.com -m shell -a 'needrestart -b -l -r l'
Vault password:
example.com | SUCCESS | rc=0 >>
NEEDRESTART-VER: 1.2
NEEDRESTART-SVC: ssh.service
Using SSH
$ ssh example.com 'needrestart -b -l -r l'
NEEDRESTART-VER: 1.2
Killed by signal 1.
It looks like you have an active connection served by an old copy of the ssh process. When sshd restarts, it does not terminate the existing copies that are serving active connections. If it did, then sudo service ssh restart would kill every active connection and leave you with a broken server.
So when you do systemctl restart sshd, you restart only the listening part, which accepts new connections. All existing connections are still served by the old sshd.
Why does Ansible keep the old ssh connection between runs? Because of the ControlMaster feature: it keeps an active ssh connection open between runs to speed up subsequent runs.
What to do? Close the active ssh connections on your machine. Run ps aux | grep ssh and you will see the process acting as the ControlMaster. Kill it, and the outdated connection will be closed.
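Alternatively, you could disable connection sharing so every Ansible run opens a fresh ssh connection; a sketch for ansible.cfg (at the cost of the speed-up ControlMaster provides):
[ssh_connection]
ssh_args = -o ControlMaster=no -o ControlPersist=no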
I am trying to connect to a server via ssh. Once connected, terminal should be cleared.
Due to generated keys, I can connect to the server via ssh usr#svr without being prompted a password. This works.
In order to get rid of
The programs included with the Debian GNU/Linux system are free
software; the exact distribution terms for each program are described
in the individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
I would usually just type clear. However, I would prefer not to type this every time but automate the procedure instead.
ssh usr@svr "clear" fails with "TERM environment variable not set.". I googled several solutions for unset environment variables, but without success.
So instead, I tried ssh -t usr@svr "clear"; this successfully clears the terminal, but also closes the connection right away ("Connection to IP closed."). The computer connects to the server, clears the screen, and closes the connection.
Next attempt was to create a bash script on the server to be run after connecting to it.
#!/bin/bash
clear
## cl.sh, chmod +x
ssh usr@svr ./cl.sh again fails with "TERM environment variable not set.".
Another attempt was to create a bash script connecting to the server and clearing the terminal via ENDSSH.
#!/bin/bash
ssh usr@svr <<'ENDSSH'
clear
ENDSSH
## sc.sh, chmod +x
Running this results in:
> ./sc.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
Linux raspberrypi 3.18.7-v7+ #755 SMP PREEMPT Thu Feb 12 17:20:48 GMT 2015 armv7l
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
TERM environment variable not set.
I am a beginner at this, so please be patient if I have made a very obvious mistake. I tried to be as detailed as possible and researched this before posting, but could not find an answer to my question. For example, commands other than "clear" work (ssh usr@svr ls), but that does not help me.
I have found another easy solution:
ssh -t usr@svr 'clear;bash'
The text your question refers to is part of the message-of-the-day (MOTD).
If you can become root on the server, you can just modify that message in /etc/motd. Note that depending on the server's distribution, this file will usually be generated somehow (overwriting any changes), e.g. on Debian it is generated from /etc/motd.tail at boot, so you might have to change that file instead.
See manpage motd(5).
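If you administer the server, another option is to stop sshd from printing the MOTD at all; a sketch for /etc/ssh/sshd_config (requires root and an sshd restart; on Debian the MOTD may also be printed by pam_motd, which would need disabling separately):
PrintMotd no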
To prevent that message from being printed you can create a file named .hushlogin in your home directory (on the server). SSH to the server and run the command touch ~/.hushlogin. If that file exists then the login shell will no longer print the motd (message of the day) which is what you are seeing.
All startup messages are defined in the motd file (/etc/motd).
However, if you would like your console cleared to increase privacy, add the following line:
[ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q
either to your .bash_profile or to your .bash_logout (to clear on logout), or use the clear command.
Clearing the screen on logout is the default behaviour on the Debian Linux distribution.
I have an Ubuntu guest box setup on my Windows host using Vagrant and VirtualBox. I'm trying to write a shell script that will...
vagrant up
vagrant ssh once vagrant up is complete
cd into a specific project directory in the guest machine once successfully SSHed into the guest machine
Right now my vagrant_shell_script.sh file contains the following:
vagrant up && vagrant ssh && echo 'cd vagrant/rails_tutorial/sample_app'
Everything works fine when I execute it in Git Bash, up to and including connecting via SSH to the guest machine. However, after it successfully connects, the script stops and never executes the final cd command. I presume this is because it is no longer able to communicate directly with my host machine through that particular Bash instance (please correct me if I'm wrong).
Is there any way to have it navigate directly to the target directory once the SSH connection is successful?
Please forgive me if this is a dumb question--relatively new to bash scripting.
This solved it. It's kind of hacky, but running
vagrant up && vagrant ssh -- -t 'cd /vagrant/rails_tutorial/sample_app; /bin/bash'
gets you in. For some reason Vagrant keeps kicking you out if you don't launch the shell.
vagrant ssh -- allows you to pass options through to the SSH client; this is Vagrant's own utility. The next option, -t, is an SSH flag that forces pseudo-terminal allocation, so the remote command runs as if in an interactive session. You put your command after the -t flag, but make sure to end it with <last command>; /bin/bash so that it launches a shell for you and you don't get kicked out.
you can also use a heredoc to run the commands after you ssh, using something similar to this in your script:
# Use a heredoc to send a script over ssh
# (assumes $ssh_cmd holds your ssh invocation, e.g. ssh_cmd="ssh usr@svr")
$ssh_cmd << 'END_DOC'
cd <path>
commands
exit
END_DOC
I use docker on OSX with boot2docker.
I want to get an SSH connection from my terminal into a running container.
But I can't do this :(
I think it's because Docker is running in a virtual machine.
There are several things you must do to enable ssh'ing to a container running in a VM:
install and run sshd in your container (example). sshd is not there by default because containers typically run only one process, though they can run as many as you like.
EXPOSE a port as part of creating the image, typically 22, so that when you run the container, the daemon can map that port inside the container to a port on the outside.
When you run the container, you need to decide how to map that port. You can let Docker do it automatically or be explicit. I'd suggest being explicit: docker run -p 42222:22 ... which maps port 42222 on the VM to port 22 in the container.
Add a portmap to the VM to expose the port to your host. e.g. when your VM is not running, you can add a mapping like this: VBoxManage modifyvm "boot2docker-vm" --natpf1 "containerssh,tcp,,42222,,42222"
Then from your host, you should be able to ssh to port 42222 on the host to reach the container's ssh daemon.
Here's what happens when I perform the above steps:
$ VBoxManage modifyvm "boot2docker-vm" --natpf1 "containerssh,tcp,,42222,,42222"
$ ./boot2docker start
[2014-04-11 12:07:35] Starting boot2docker-vm...
[2014-04-11 12:07:55] Started.
$ docker run -d -p 42222:22 dhrp/sshd
Unable to find image 'dhrp/sshd' (tag: latest) locally
Pulling repository dhrp/sshd
2bbfe079a942: Download complete
c8a2228805bc: Download complete
8dbd9e392a96: Download complete
11d214c1b26a: Download complete
27cf78414709: Download complete
b750fe79269d: Download complete
cf7e766468fc: Download complete
082189640622: Download complete
fa822d12ee30: Download complete
1522e919ec9f: Download complete
fa594d99163a: Download complete
1bd442970c79: Download complete
0fda9de88c63: Download complete
86e22a5fdce6: Download complete
79d05cb13124: Download complete
ac72e4b531bc: Download complete
26e4b94e5a13b4bb924ef57548bb17ba03444ca003128092b5fbe344110f2e4c
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
26e4b94e5a13 dhrp/sshd:latest /usr/sbin/sshd -D 6 seconds ago Up 3 seconds 0.0.0.0:42222->22/tcp loving_einstein
$ ssh root@localhost -p 42222
The authenticity of host '[localhost]:42222 ([127.0.0.1]:42222)' can't be established.
RSA key fingerprint is ....
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:42222' (RSA) to the list of known hosts.
root@localhost's password: screencast
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.12.1-tinycore64 x86_64)
* Documentation: https://help.ubuntu.com/
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
root@26e4b94e5a13:~# exit
logout
So that shows the chain: ssh -> localhost port 42222 -> VM port 42222 -> container port 22.
Docker added the docker exec command in Docker 1.3.0. You can connect to a running container using the following:
docker exec -it <container id> /bin/bash
That will connect to a bash prompt on the running container.
If you just want to get into a running container, you may consider using nsenter. Here is a simple bash script (suggested by Chris Jones) that you can use to enter a docker container. Save it somewhere in your $PATH as docker-enter and chmod +x it:
#!/bin/bash
set -e
# Check for nsenter. If not found, install it
boot2docker ssh '[ -f /var/lib/boot2docker/nsenter ] || docker run --rm -v /var/lib/boot2docker/:/target jpetazzo/nsenter'
# Use bash if no command is specified
args=("$@")
if [[ $# = 1 ]]; then
    args+=(/bin/bash)
fi
boot2docker ssh -t sudo /var/lib/boot2docker/docker-enter "${args[@]}"
Then you can run docker-enter 89af3d (or whatever container ID you want to enter).
A slightly modified variant of Michael's answer that just requires the container you want to enter to be named ($APPNAME):
boot2docker ssh '[ -f /var/lib/boot2docker/nsenter ] || docker run --rm -v /var/lib/boot2docker/:/target jpetazzo/nsenter'
boot2docker ssh -t sudo /var/lib/boot2docker/docker-enter $(docker ps | grep $APPNAME | awk '{ print $1 }')
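A sketch of a tighter way to resolve the container ID, assuming a Docker version that supports the name filter:
boot2docker ssh -t sudo /var/lib/boot2docker/docker-enter "$(docker ps -q -f name=$APPNAME)"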
I've tested this with an Ubuntu 16.04 image running on a host with the same OS and Docker 18.09.2; it should also work for boot2docker with minor modifications.
Build the image.
Run it in a background container (youruser may be root):
$ docker run -ditu <youruser> <imageId>
Attach to it with a shell:
$ docker exec -it <containerId> /bin/bash
Install the openssh-server (sudo is only needed if youruser is not root; the command may differ for boot2docker):
$ sudo apt-get install -y openssh-server
Run it:
$ sudo service ssh start
(The following step is optional; if youruser has a password, you can skip it and provide the password at each ssh connection.)
Create a RSA key on the client host:
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/youruser/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/youruser/.ssh/id_rsa.
Your public key has been saved in /home/youruser/.ssh/id_rsa.pub.
On the docker image, create a directory $HOME/.ssh:
$ cd
$ mkdir .ssh && cd .ssh
$ vi authorized_keys
Copy and paste the content of $HOME/.ssh/id_rsa.pub on the client machine to authorized_keys on the docker image and save the file.
(End of optional step).
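As a shortcut for the optional step, ssh-copy-id on the client can append your public key in one go, once you know the container's IP from the step below (a sketch):
ssh-copy-id youruser@172.17.0.2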
Jot down your container's IP address:
$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 63448863ac39
^^^^^^^^^^ this
Now the connection from the client host should be effective:
$ ssh 172.17.0.2
Enter passphrase for key '/home/youruser/.ssh/id_rsa':
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-46-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Fri Apr 5 09:50:30 2019 from 172.17.0.1
Of course you can apply the above procedure non-interactively in your Dockerfile.
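A minimal sketch of what that could look like (the image tag, key file name, and sshd setup details here are assumptions, not the exact steps above):
# hypothetical Dockerfile: bakes in sshd and a client public key
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd
# id_rsa.pub is assumed to sit in the build context
COPY id_rsa.pub /root/.ssh/authorized_keys
RUN chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys
EXPOSE 22
# run sshd in the foreground as the container's main process
CMD ["/usr/sbin/sshd", "-D"]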