Cygwin NFS server - Windows

I want to set up an NFS server on Windows (desktop) and use Ubuntu (laptop) as the client.
I've installed Cygwin and nfs-server on Windows, but I can't mount anything from Linux.
The /etc/exports file in Cygwin contains:
/mnt/d 192.168.0.100(ro)
On my laptop, I get the following result with showmount:
showmount -e 192.168.0.101
Export list for 192.168.0.101:
/mnt/d 192.168.0.100
If I try to mount, I get this:
sudo mount -t nfs 192.168.0.101:/mnt/d d
mount.nfs: Connection timed out
If I put a * in /etc/exports I get this:
sudo mount -t nfs 192.168.0.101:/mnt/d d
mount.nfs: access denied by server while mounting 192.168.0.101:/mnt/d
Please help :(

HTH:
http://stromberg.dnsalias.org/~strombrg/NFS-troubleshooting-2.html
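Beyond that link, a rough checklist for this particular setup (the IPs are taken from the question; the exports and mount options are standard NFS ones, so treat this as a sketch to try rather than a known fix):
# on the Windows/Cygwin side (/etc/exports): export to the whole LAN while testing,
# then restart the Cygwin portmap/mountd/nfsd services
/mnt/d 192.168.0.0/24(ro)
# a timeout on mount usually means the Windows firewall is blocking the laptop;
# allow TCP and UDP 111 (portmapper) and 2049 (nfs) plus the mountd port
# on the Ubuntu client: force NFSv3 and disable locking while testing
sudo mkdir -p /mnt/d
sudo mount -t nfs -o nfsvers=3,nolock 192.168.0.101:/mnt/d /mnt/d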

Related

mount - samba with CentOS 7

I want to mount a Windows share named Sys64 on CentOS. I installed cifs-utils on CentOS 7 and ran the command:
mount.cifs //ip/Sys64 share -o user=hostname,password=hostname_password
I get the following message:
mount error(112): Host is down
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
cat /var/log/messages
Nov 21 13:51:44 zabbix kernel: CIFS VFS: cifs_mount failed w/return code = -112
I tested with nmap:
[root@titi mnt]# nmap -p 445 ip -P0
Starting Nmap 6.40 ( http://nmap.org ) at 2018-11-21 14:25 CET
Nmap scan report for ip
Host is up (0.069s latency).
PORT    STATE SERVICE
445/tcp open  microsoft-ds
I want to share this directory; do you have any ideas on how I can do this?
I put vers=2.0:
mount -t cifs -o vers=2.0,uid=1010,gid=1011,username=,password= //10.219.56.2/SysWOW64 share
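If the vers=2.0 mount works for you, here is a sketch of the matching /etc/fstab entry to make it permanent (the mount point and the credentials file path are assumptions, not taken from your setup):
//10.219.56.2/SysWOW64  /mnt/share  cifs  vers=2.0,uid=1010,gid=1011,credentials=/root/.smbcred  0  0
with /root/.smbcred holding the username= and password= lines, chmod 600 so the password does not sit in /etc/fstab.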

Secure copy over two IPs on the same network to the local machine [duplicate]

I wonder if there is a way for me to scp the file from the remote2 host directly from my local machine, by going through the remote1 host.
The networks only allow connections to remote2 host from remote1 host. Also, neither remote1 host nor remote2 host can scp to my local machine.
Is there something like:
scp user1@remote1:user2@remote2:file .
First window: ssh remote1, then scp remote2:file .
Second shell: scp remote1:file .
First window: rm file; logout
I could write a script to do all these steps, but if there is a direct way, I would rather use it.
Thanks.
EDIT: I am thinking of something like opening SSH tunnels, but I'm confused about what value to put where.
At the moment, to access remote1, i have the following in $HOME/.ssh/config on my local machine.
Host remote1
User user1
Hostname localhost
Port 45678
Once on remote1, to access remote2, it's the standard local DNS and port 22. What should I put on remote1 and/or change on localhost?
I don't know of any way to copy the file directly with one single command, but if you can concede to running an SSH instance in the background just to keep a port forwarding tunnel open, then you can copy the file in one command.
Like this:
# First, open the tunnel
ssh -L 1234:remote2:22 -p 45678 user1@remote1
# Then, use the tunnel to copy the file directly from remote2
scp -P 1234 user2@localhost:file .
Note that you connect as user2@localhost in the actual scp command, because it is port 1234 on localhost that the first ssh instance is listening on in order to forward connections to remote2. Note also that you don't need to run the first command again for every subsequent file copy; you can simply leave it running.
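If you would rather not keep an interactive window open just for the tunnel, a variant of the same idea (same hosts and ports as above) is to background the tunnel with -f -N and kill it when you are done:
# open the tunnel in the background without running a remote command
ssh -f -N -L 1234:remote2:22 -p 45678 user1@remote1
# copy through the tunnel as before
scp -P 1234 user2@localhost:file .
# close the backgrounded tunnel afterwards
pkill -f 'ssh -f -N -L 1234:remote2:22'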
Double ssh
Even in your complex case, you can handle file transfer using a single command line, simply with ssh ;-)
And this is useful if remote1 cannot connect to localhost:
ssh user1@remote1 'ssh user2@remote2 "cat file"' > file
tar
But you lose the file properties (ownership, permissions, ...).
However, tar is your friend for keeping these file properties:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar c file"' | tar x
You can also compress to reduce network bandwidth:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj file"' | tar xj
And tar also lets you transfer a whole directory recursively over plain ssh:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj ."' | tar xj
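A small usage note: if you want the extracted files to land somewhere other than your current directory, GNU tar's -C option does that (the destination path is just an example):
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj ."' | tar xj -C /local/destination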
ionice
If the file is huge and you do not want to disturb other important network applications, you may miss the network throughput limiting provided by the scp and rsync tools (e.g. scp -l 1024 user@remote:file does not use more than 1 Mbit/second).
But, a workaround is using ionice to keep a single command line:
ionice -c2 -n7 ssh u1@remote1 'ionice -c2 -n7 ssh u2@remote2 "cat file"' > file
Note: ionice may not be available on old distributions.
This will do the trick:
scp -o 'Host remote2' -o 'ProxyCommand ssh user@remote1 nc %h %p' \
    user@remote2:path/to/file .
To SCP the file from the host remote2 directly, add the two options (Host and ProxyCommand) to your ~/.ssh/config file (see also this answer on superuser). Then you can run:
scp user@remote2:path/to/file .
from your local machine without having to think about remote1.
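For reference, this is roughly what those two options look like inside ~/.ssh/config (user and host names as used in this thread; nc has to be installed on remote1 for this ProxyCommand to work):
Host remote2
    User user
    ProxyCommand ssh user@remote1 nc %h %p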
With openssh version 7.3 and up it is easy. Use ProxyJump option in the config file.
# Add to ~/.ssh/config
Host bastion
Hostname bastion.client.com
User userForBastion
IdentityFile ~/.ssh/bastion.pem
Host appMachine
Hostname appMachine.internal.com
User bastion
ProxyJump bastion  # ProxyJump is a new feature in OpenSSH 7.3
IdentityFile ~/.ssh/appMachine.pem  # no need to copy the pem file to the bastion host
Commands to run to login or copy
ssh appMachine # no need to specify any tunnel.
scp helloWorld.txt appMachine:.  # copy without an intermediate copy on the jumphost/bastion host
Of course you can specify the bastion jump host with the -J option to the ssh command if it is not configured in the config file.
Note that scp does not seem to support the -J flag as of now (I could not find it in the man pages; the scp command above works via the config file setting).
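If your scp does not take -J, you can still pass the same setting through -o, since scp forwards -o options to the underlying ssh; this is a sketch that assumes the bastion Host block from the config above:
scp -o ProxyJump=bastion helloWorld.txt appMachine:.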
There is a new option in scp, added recently, that is very convenient for exactly this job: -3.
TL;DR For the current host that has authentication already set up in ssh config files, just do:
scp -3 remote1:file remote2:file
Your scp must be a recent version.
All the other techniques mentioned here require you to set up authentication from remote1 to remote2 or vice versa, which is not always a good idea.
The -3 argument means you want to move files between two remote hosts using the current host as an intermediary, and this host does the authentication to both remote hosts, so they don't need access to each other.
You just have to set up authentication in the ssh config files, which is fairly easy and well documented, and then run the command in the TL;DR.
The source for this answer is https://superuser.com/a/686527/713762
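As a sketch of what that authentication setup could look like on the machine running scp -3 (the host aliases and key paths are assumptions):
# ~/.ssh/config on the local machine
Host remote1
    User user1
    IdentityFile ~/.ssh/remote1_key
Host remote2
    User user2
    IdentityFile ~/.ssh/remote2_key
    # only needed if remote2 is reachable solely through remote1, as in this question
    ProxyJump remote1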
This configuration works nicely for me:
Host jump
User username
Hostname jumphost.yourorg.intranet
Host production
User username
Hostname production.yourorg.intranet
ProxyCommand ssh -q -W %h:%p jump
Then the command
scp myfile production:~
copies myfile to the production machine.
A simpler way:
scp -o 'ProxyJump your.jump.host' /local/dir/myfile.txt remote.internal.host:/remote/dir

needrestart behaves differently when run by ansible instead of a manual ssh connection

I am trying to run the needrestart tool by ansible to check for processes with outdated libraries.
When I run needrestart with the command or shell modules from Ansible, it says that I need to restart my ssh daemon. When I run needrestart manually, it says that there are no processes with outdated libraries.
When I restart the ssh daemon it does not make a difference. But after rebooting the remote server the ssh daemon is not listed as a service I should restart anymore.
So I really do not understand the difference between the ssh connection from ansible and my manual ssh connection that causes the different behavior of needrestart.
Any help would be appreciated!
Thank you in advance and best regards
Max
My local machine
$ python -V
Python 2.7.13
$ ansible --version
ansible 2.2.0.0
$ cat ansible.cfg
[defaults]
inventory = hosts
ask_vault_pass = True
retry_files_enabled = False
I am using an ssh proxy to connect to the server:
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q user@jumphost.example.com"'
The remote server
$ cat /etc/debian_version
8.6
$ python -V
Python 2.7.9
Using ansible
$ ansible example.com -m command -a 'needrestart -b -l -r l'
Vault password:
example.com | SUCCESS | rc=0 >>
NEEDRESTART-VER: 1.2
NEEDRESTART-SVC: ssh.service
$ ansible example.com -m shell -a 'needrestart -b -l -r l'
Vault password:
example.com | SUCCESS | rc=0 >>
NEEDRESTART-VER: 1.2
NEEDRESTART-SVC: ssh.service
Using SSH
$ ssh example.com 'needrestart -b -l -r l'
NEEDRESTART-VER: 1.2
Killed by signal 1.
It looks like you have an active connection that is still served by the old ssh process. When sshd restarts, it does not terminate the existing processes that serve active connections; if it did, running sudo service ssh restart on a server would kill every active connection and leave you with a broken session.
So when you do systemctl restart sshd, you only restart the listening part, which accepts new connections. All existing connections are still served by the old ssh.
Why does Ansible keep the old ssh connection between runs? Because of the ControlMaster feature: it keeps an active ssh connection open between runs to speed up subsequent runs.
What to do? Close the active ssh connections on your machine. Try ps aux | grep ssh and you'll see the process that serves as the ControlMaster. Kill it, and the outdated connection will be closed.
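For example (the socket directory below is Ansible's default control_path location, so adjust it if you have changed it), you can close the cached master connection or turn multiplexing off entirely:
# list the control sockets Ansible keeps around
ls ~/.ansible/cp/
# ask the master for that host to exit
ssh -O exit -o ControlPath=~/.ansible/cp/<socket-file> example.com
or, at the cost of slower runs, disable it in ansible.cfg:
[ssh_connection]
ssh_args = -o ControlMaster=no -o ControlPersist=no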

Copy file from remote to local using ssh

I'm trying to copy a file from an Ubuntu server to my Mac but I keep receiving a No such file or directory error.
After I ssh in, I'm using:
scp -p 8888 me@xx1.xx1.xx1.xx1:/var/www/html/00000001.jpg /Users/myusername/Documents/
But receive the error:
/Users/myusername/Documents/: No such file or directory
Is this error telling me that there is no such file or directory on my local machine? Any advice as to how to fix would be greatly appreciated.
Don't ssh in to your server first. Just execute that scp command from your local machine.
EDIT:
Also, the -p should be capitalized (according to the manpage on my machine), so:
scp -P 8888 your_username@remotehost.edu:/var/www/html/00000001.jpg /Users/myusername/Documents/
Yes, it's talking about your local machine. I'm guessing that you might have just typed something wrong. Try doing it like this instead:
scp -P 8888 me@xx1.xx1.xx1.xx1:/var/www/html/00000001.jpg ~/Documents/
Make sure you're typing this command at your Mac OS X Terminal prompt, not on the actual remote server. xx1.xx1.xx1.xx1 should be the remote Ubuntu machine ("pull" the file down to your machine, don't try to "push" it).
Also, although it's ssh -p, it's scp -P. For scp, -p just preserves modification times, and -P is the port.
Maybe you have multiple ssh connections open.
Try closing all the other connections and rerunning the scp command.

How to get an SSH connection to a Docker container on OS X (boot2docker)

I use Docker on OS X with boot2docker.
I want to get an SSH connection from my terminal into a running container.
But I can't do this :(
I think it's because Docker is running in a virtual machine.
There are several things you must do to enable ssh'ing to a container running in a VM:
install and run sshd in your container (example). sshd is not there by default because containers typically run only one process, though they can run as many as you like.
EXPOSE a port as part of creating the image, typically 22, so that when you run the container, the daemon knows which port inside the container can be published on the outside.
When you run the container, you need to decide how to map that port. You can let Docker do it automatically or be explicit. I'd suggest being explicit: docker run -p 42222:22 ... which maps port 42222 on the VM to port 22 in the container.
Add a portmap to the VM to expose the port to your host. e.g. when your VM is not running, you can add a mapping like this: VBoxManage modifyvm "boot2docker-vm" --natpf1 "containerssh,tcp,,42222,,42222"
Then from your host, you should be able to ssh to port 42222 on the host to reach the container's ssh daemon.
Here's what happens when I perform the above steps:
$ VBoxManage modifyvm "boot2docker-vm" --natpf1 "containerssh,tcp,,42222,,42222"
$ ./boot2docker start
[2014-04-11 12:07:35] Starting boot2docker-vm...
[2014-04-11 12:07:55] Started.
$ docker run -d -p 42222:22 dhrp/sshd
Unable to find image 'dhrp/sshd' (tag: latest) locally
Pulling repository dhrp/sshd
2bbfe079a942: Download complete
c8a2228805bc: Download complete
8dbd9e392a96: Download complete
11d214c1b26a: Download complete
27cf78414709: Download complete
b750fe79269d: Download complete
cf7e766468fc: Download complete
082189640622: Download complete
fa822d12ee30: Download complete
1522e919ec9f: Download complete
fa594d99163a: Download complete
1bd442970c79: Download complete
0fda9de88c63: Download complete
86e22a5fdce6: Download complete
79d05cb13124: Download complete
ac72e4b531bc: Download complete
26e4b94e5a13b4bb924ef57548bb17ba03444ca003128092b5fbe344110f2e4c
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
26e4b94e5a13 dhrp/sshd:latest /usr/sbin/sshd -D 6 seconds ago Up 3 seconds 0.0.0.0:42222->22/tcp loving_einstein
$ ssh root@localhost -p 42222
The authenticity of host '[localhost]:42222 ([127.0.0.1]:42222)' can't be established.
RSA key fingerprint is ....
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:42222' (RSA) to the list of known hosts.
root@localhost's password: screencast
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.12.1-tinycore64 x86_64)
* Documentation: https://help.ubuntu.com/
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
root@26e4b94e5a13:~# exit
logout
So that shows ssh->localhost 42222->VM port 42222->container port 22.
Docker has added the docker exec command to Docker 1.3.0. You can connect to a running container using the following:
docker exec -it <container id> /bin/bash
That will connect to a bash prompt on the running container.
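For example, with the container from the docker ps listing earlier in this thread, either the ID or the name works:
docker exec -it 26e4b94e5a13 /bin/bash
docker exec -it loving_einstein /bin/bash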
If you just want to get into the running container, you may consider using nsenter. Here is a simple bash script (suggested by Chris Jones) that you can use to enter a Docker container. Save it somewhere in your $PATH as docker-enter and chmod +x it.
#!/bin/bash
set -e
# Check for nsenter. If not found, install it
boot2docker ssh '[ -f /var/lib/boot2docker/nsenter ] || docker run --rm -v /var/lib/boot2docker/:/target jpetazzo/nsenter'
# Use bash if no command is specified
args=("$@")
if [[ $# = 1 ]]; then
  args+=(/bin/bash)
fi
boot2docker ssh -t sudo /var/lib/boot2docker/docker-enter "${args[@]}"
Then you can run docker-enter 89af3d (or whatever container ID you want to enter).
A slightly modified variant of Michael's answer that just requires the container you want to enter be named (APPNAME):
boot2docker ssh '[ -f /var/lib/boot2docker/nsenter ] || docker run --rm -v /var/lib/boot2docker/:/target jpetazzo/nsenter'
boot2docker ssh -t sudo /var/lib/boot2docker/docker-enter $(docker ps | grep $APPNAME | awk '{ print $1 }')
I've tested this with an Ubuntu 16.04 image running on a host with the same OS and Docker 18.09.2; it should also work for boot2docker with minor modifications.
Build the image.
Run it in background container (youruser may be root):
$ docker run -ditu <youruser> <imageId>
Attach to it with a shell:
$ docker exec -it <containerId> /bin/bash
Install the openssh-server (sudo only needed if youruser is not root, the command may differ for boot2Docker):
$ sudo apt-get install -y openssh-server
Run it:
$ sudo service ssh start
(The following step is optional, if youruser has a password, you can skip it and provide the password at each ssh connection).
Create a RSA key on the client host:
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/youruser/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/youruser/.ssh/id_rsa.
Your public key has been saved in /home/youruser/.ssh/id_rsa.pub.
On the docker image, create a directory $HOME/.ssh:
$ cd
$ mkdir .ssh && cd .ssh
$ vi authorized_keys
Copy and paste the content of $HOME/.ssh/id_rsa.pub on the client machine to authorized_keys on the docker image and save the file.
(End of optional step).
Jot down your container's IP address:
$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 63448863ac39
^^^^^^^^^^ this
Now the connection from the client host should be effective:
$ ssh 172.17.0.2
Enter passphrase for key '/home/youruser/.ssh/id_rsa':
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-46-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Fri Apr 5 09:50:30 2019 from 172.17.0.1
Of course you can apply the above procedure non-interactively in your Dockerfile.
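As a rough non-interactive sketch of the same idea (the base image, user name, password, key file name and ports here are assumptions, not taken from the question):
# Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd
# create the user and set a password so the account is not locked (change it!)
RUN useradd -m -s /bin/bash youruser && echo 'youruser:changeme' | chpasswd
# authorize the client's public key
COPY id_rsa.pub /home/youruser/.ssh/authorized_keys
RUN chown -R youruser:youruser /home/youruser/.ssh && \
    chmod 700 /home/youruser/.ssh && chmod 600 /home/youruser/.ssh/authorized_keys
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build it, run it with the port published, and connect from the client:
docker build -t ssh-demo .
docker run -d -p 2222:22 ssh-demo
ssh -p 2222 youruser@<docker-host-ip>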
