Why can I not use the command history in minikube ssh - bash

I have a minikube K8s single-node cluster on my Windows 10 PC. I can SSH into this cluster using minikube ssh.
The problem that I am experiencing is that I can't use the arrow keys to bring back the previous command. I did some looking around and diagnostics:
set -o | grep history gave history on
echo $HISTFILE gave /home/docker/.bash_history. This is indeed in the home folder of the user and the file was present after exiting and executing minikube ssh again
echo $HISTSIZE and echo $HISTFILESIZE both gave 500
echo $SHELL gave /bin/bash
All these things tell me that command history should be enabled, but it doesn't seem to be the case.
I tried running minikube ssh from both PowerShell and cmd, with and without Windows Terminal.
Both PowerShell and cmd themselves have a working command history, but once SSHing using minikube, the history in the bash shell doesn't work.
Does anyone know how to get the command history to work after executing minikube ssh?
Edit:
I have tried minikube ssh --native-ssh=false, but this didn't change anything.

It seems to be a problem with the SSH client you are using. You can try with the --native-ssh=false option:
minikube ssh --native-ssh=false
You can also try a different SSH client, for example the ssh that comes with Cygwin.
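If the built-in (native Go) SSH client turns out to be the culprit, one possible workaround is to skip minikube ssh entirely and connect with a standalone OpenSSH client, using the key and IP that minikube exposes. A sketch, assuming OpenSSH is available (for example from Cygwin or Git Bash) and that the VM uses the default docker user:
# connect to the minikube VM with a regular OpenSSH client
ssh -i "$(minikube ssh-key)" docker@"$(minikube ip)"
A full terminal client like this should give you normal readline behaviour (arrow keys, history) inside the VM.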
There is already an unsolved issue related to this. (Feel free to update)
✌️

Related

Why Jenkins shell script hangs when I run sudo pm2 ls

I confess I am total newbie to Jenkins.
I have Jenkins-tls installed on my Mac for experimentation.
I have a remote server that I am testing with.
My Jenkins script is ultra simple.
ssh to the remote machine
sudo pm2 ls
the last command just hangs
I run the same two commands from the command line and it all works perfectly.
FYI, I need sudo for pm2 since I need to be root to run pm2; without sudo, I get access denied.
Any thoughts?
I believe you are making the invalid assumption that Jenkins somehow "types" commands after starting ssh to the remote session's command shell. This is not what happens. Instead, it will wait for the ssh command to finish, and only then execute the next command, sudo pm2 ls. This never happens, because the ssh session never terminates. You observe this as a "hang".
How to solve this?
If there's only a small number of commands, you can use ssh to run them with
ssh user@remote sudo pm2 ls
ssh user@remote command arg1 arg2
If this gets longer, why not place all commands in a remote script and just run it with
ssh user@remote /path/to/script
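Another option, if you want to keep the commands in the Jenkins script itself, is to feed them to a single remote shell through a here-document. A rough sketch (user@remote is a placeholder, and sudo must be configured not to prompt for a password, since there is no TTY):
# run several commands in one non-interactive ssh session
ssh user@remote 'bash -s' <<'EOF'
whoami
sudo pm2 ls
EOF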

Connecting to a Windows shared drive from Kubernetes using Go

I need to connect to a remote Windows server (shared drive) from a Go API hosted in Alpine Linux. I tried using TCP, SSH and FTP; none of them worked. Any suggestions or ideas on how to tackle this?
Before debugging the Go code, you will need to do some "unskilled labour" inside the container to ensure the prerequisites are met:
the Samba client is installed and the daemons are running;
the target name gets resolved;
there are no connectivity issues (routing, firewall rules, etc.);
you have permission to access the share;
mounting the remote volume is allowed for the container.
Connect to the container:
$ docker ps
$ docker exec -it container_id /bin/bash
Samba daemons are running:
$ smbd status
$ nmbd status
You use the right name format in your code and command lines:
UNC notation => \\server_name\share_name
URL notation => smb://server_name/share_name
Target name is resolvable
$ nslookup server_name.domain_name
$ nmblookup netbios_name
$ ping server_name
Samba shares are visible
$ smbclient -L //server [-U user] # list of shares
and accessible (ls, get, put commands provide expected output here)
$ smbclient //server/share
> ls
Try to mount the remote share as suggested by @cwadley (mount could be prohibited by default in a Docker container):
$ sudo mount -t cifs -o username=geeko,password=pass //server/share /mnt/smbshare
For investigation purposes you might use the Samba Docker container available on GitHub, or even deploy your application in it, since it contains the Samba client and helpful command-line tools:
$ sudo docker run -it -p 139:139 -p 445:445 -d dperson/samba
After you get this working at the Docker level, you could easily reproduce this in Kubernetes.
You might do the checks from within the running Pod in Kubernetes:
$ kubectl get deployments --show-labels
$ LABEL=label_value; kubectl get pods -l app=$LABEL -o custom-columns=POD:metadata.name,CONTAINER:spec.containers[*].name
$ kubectl exec pod_name -c container_name -- ping -c1 server_name
Once you have it working on the command line in Docker and Kubernetes, you should be able to get your program code working as well.
Also, there is a really thoughtful discussion on Stack Overflow regarding this Samba topic:
Mount SMB/CIFS share within a Docker container
Windows shares use the SMB protocol. There are a couple of Go libraries for using SMB, but I have never used them so I cannot vouch for their utility. Here is one I Googled:
https://github.com/stacktitan/smb
Another option would be to ensure that the Windows share is mounted on the Linux host filesystem using cifs. Then you could just use the regular Go file utilities:
https://www.thomas-krenn.com/en/wiki/Mounting_a_Windows_Share_in_Linux
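A rough sketch of that approach (server, share, mount point and credentials below are placeholders):
# mount the Windows (SMB/CIFS) share on the Linux host
sudo mkdir -p /mnt/winshare
sudo mount -t cifs //server_name/share_name /mnt/winshare -o username=winuser,password=secret,vers=3.0
Once mounted, the Go program can treat /mnt/winshare like any local directory; in Kubernetes the equivalent is exposing such a mount to the pod as a volume.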
Or, you could install something like Cygwin on the Windows box and run an SSH server. This would allow you to use SCP:
https://godoc.org/github.com/tmc/scp
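From the command line that would look roughly like this (host, user and paths are placeholders; Cygwin exposes drives under /cygdrive):
# copy a file from the Windows box over SSH
scp winuser@windows-host:/cygdrive/c/shared/report.csv /tmp/report.csv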

Accessing/passing OpenShift pod environment variables within scripted "oc rsh" calls

Maybe some of you OpenShift/Docker pros can help me out with this one. Apologies in advance for my formatting; I'm on mobile and don't have access to exact error codes right now. I can supply more detailed input/stderr later if needed.
Some details about the environment:
- a functioning OC pod running a single PostgreSQL v9.6 container
- CentOS 7 host
- CentOS 7 local machine
- bash 4.2 shell (both in the container and on my local box)
My goal is to use a one-liner bash command to rsh into the PostgreSQL container and run the following command to print said container's databases to my local terminal. Something like this:
[root@mybox ~]$ oc rsh pod-name /path/to/command/executable/psql -l
result:
rsh cannot find required library, error code 126
The issue I am hitting is that, when executing this one-liner, the rsh does not see the target pod's environment variables. This means it cannot find the supporting libraries that the psql command needs. If I don't supply the full path as shown in my example, it cannot even find the psql command itself.
Annoyingly, running the following one-liner prints all of the pod's environment variables (including the ones I need for psql) to my local terminal, so they should be accessible somehow.
[root@mybox ~]$ oc rsh pod-name env
Since this is to be executed as part of an automated procedure, the simple, interactive rsh approach (which works, as described below) is not an option.
[root@mybox ~]$ oc rsh pod-name
sh-4.2$ psql -l
(pod happily prints the database info in the remote terminal)
I have tried executing the script which defines the psql environment variable and then chaining the desired command, but I get permission denied when trying to execute the env script.
[root@mybox ~]$ oc rsh pod-name /path/to/env/define/script && psql -l
permission denied, rsh error code 127
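One thing to check in that last one-liner: the && is interpreted by the local shell, so psql -l runs on the local box rather than in the pod. A sketch that keeps both steps inside the pod (the script path is the placeholder from the question, assuming the script can be sourced):
# source the env script and run psql in a single remote shell inside the pod
oc rsh pod-name bash -c '. /path/to/env/define/script && psql -l'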

Cygwin: vagrant ssh, empty command prompt

If I run vagrant ssh from Windows cmd, I get a nice command prompt, like this:
vagrant@homestead:~$ echo foo
vagrant@homestead:~$ foo
But with cygwin and mintty, I have no prompt at all:
echo foo
foo
I see it has to do with "pseudo-tty allocation".
With cygwin and mintty, I can have my prompt with this :
vagrant ssh -- -t -t
How can I change Cygwin and mintty so that I don't have to pass the -t?
About the ssh -t option:
"Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g., when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty."
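If you end up keeping the -t -t workaround, a low-tech option (a sketch for your Cygwin ~/.bashrc, not a mintty setting) is to hide the flags behind an alias; the answers below avoid them entirely by switching the underlying ssh client:
# always force pseudo-tty allocation when running vagrant ssh from mintty
alias vssh='vagrant ssh -- -t -t'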
I had the same problem, and the solution was to set the VAGRANT_PREFER_SYSTEM_BIN environment variable to get Vagrant to use your normal ssh executable.
You can do:
VAGRANT_PREFER_SYSTEM_BIN=1 vagrant ssh
or put this into your .bash_profile:
export VAGRANT_PREFER_SYSTEM_BIN=1
Reference: https://github.com/hashicorp/vagrant/issues/9143#issuecomment-343311263
I ran into the same problem described above, but only on one of three PCs. As a workaround I am doing:
# save the config to a file
vagrant ssh-config > vagrant-ssh
# run ssh with the file.
ssh -F vagrant-ssh default
From an answer to How to ssh to vagrant without actually running "vagrant ssh"?
In this case I get the prompt and, more importantly, history cycling, Ctrl-C, etc. work properly.
Vagrant is a Windows program for managing virtual machines
https://www.vagrantup.com/intro/index.html
and as such it does not interface well with the pseudo-tty structure used by Cygwin programs.
For reference, read about similar issues with a lot of other Windows programs:
https://github.com/mintty/mintty/issues/56
Mintty is a Cygwin program. It expects interactive programs running inside it to use the Cygwin tty functionality for interactive behaviour.
Running Vagrant from Windows cmd gives cmd the terminal control, so Vagrant has no problem with its interactive behaviour.
I do not see the need to run Vagrant inside Cygwin.
Since Vagrant is Windows-based, I use ConEmu instead of Cygwin's terminal (mintty):
choco install conemu via Chocolatey, and it works.
The general solution is to teach Vagrant to use an ssh that is compatible with your preferred terminal, for example Cygwin ssh + mintty.
Modern Vagrant (v2.1.2) has VAGRANT_PREFER_SYSTEM_BIN=1 by default on Windows.
To troubleshoot the issue:
VAGRANT_LOG=info vagrant ssh
In v2.1.2 they broke Cygwin support. See my bug report with a hack to lib/vagrant/util/ssh.rb to make it work.

Boot2Docker searching for docker-bootstrap.sock which does not exist

I am currently trying to set up Kubernetes in a multi-Docker-container setup on a CoreOS stack for AWS. To do this I need to set up etcd for flannel. I am currently using this guide, but I am having problems at the first stage, where I am instructed to run:
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
The problem is the first command
docker -d -H unix:///var/run/docker-bootstrap.sock
from within boot2docker. There is no docker-bootstrap.sock file in this directory and this error is thrown:
FATA[0000] An error occurred trying to connect: Post https:///var/run/docker-bootstrap.sock/v1.18/containers/create: dial unix /var/run/docker-bootstrap.sock: no such file or directory
Clearly the client could not connect to this nonexistent socket.
I will note this is a very similar problem to this ticket and other tickets regarding the FATA[0000] error, though none seem to ask the question the way I currently am.
I am not an expert in unix sockets, but I am assuming there should be a file where there is not. Where can I get this file to solve my issue, or what are the recommended steps to resolve this?
Specs: running OS X Yosemite, but calling all commands from boot2docker.
Docker should create this file for you. Are you running this command on your OS X machine, or are you running it inside the boot2docker VM?
I think you need to:
boot2docker ssh
Then:
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
You need to make sure that command runs on the Linux VM that boot2docker creates, not on your OS X machine.
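Once the daemon is running inside the VM, a quick sanity check (a sketch, assuming the same socket path as above) is to confirm the socket exists and the daemon answers on it:
# verify the bootstrap daemon created its socket and responds
ls -l /var/run/docker-bootstrap.sock
docker -H unix:///var/run/docker-bootstrap.sock info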
Hope that helps!
