No output from bash -c over ssh

How come when I run this on my local machine I get output
$ bash -c 'a=$(date) && echo $a'
Thu Feb 20 23:12:26 MST 2014
but if I try it over ssh (I have a public key on the other box, but no forced commands in authorized_keys):
$ ssh nathan@gnunix bash -c 'a=$(date) && echo $a'
just a blank line is printed?

You probably don't need bash -c at all; this alone will print the date:
ssh nathan@gnunix 'a=$(date) && echo $a'
If you must use bash -c, then escape the $ like this (otherwise, inside the double quotes, $ is expanded by the local shell rather than the remote one):
ssh nathan@gnunix "bash -c 'a=\$(date) && echo \$a'"
Fri Feb 21 01:22:42 EST 2014
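A minimal sketch of why the unescaped form prints only a blank line (host taken from the question; OpenSSH assumed): ssh joins its command-line words with spaces before sending them, and -v logs the string it sends, which the remote login shell then splits at the &&:
# Sketch, assuming OpenSSH; output abbreviated, exact wording may vary by version.
ssh -v nathan@gnunix bash -c 'a=$(date) && echo $a' 2>&1 | grep 'Sending command'
# debug1: Sending command: bash -c a=$(date) && echo $a
# The remote login shell splits that string at '&&':
#   bash -c a=$(date)   -> the remote shell expands $(date) first, so bash -c
#                          only performs an assignment and prints nothing
#   echo $a             -> $a is unset in the remote login shell: a blank line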

Related

Behaviour of /dev/stdout in Cygwin

I have a script (executed with zsh 5.8, but this should not be relevant in this case) in a Cygwin environment, which takes as parameter the name of some output file and writes to this file via redirection in various places, like this:
outfile=$1
: >$outfile # Ensure that the file exists and is empty.
.... do some work
command_x >>$outfile
.... do more work
command_y >>$outfile
... and so on
I would like to modify the behaviour of the script so that if no parameter is supplied, the output of the commands goes to standard output instead. I thought it would be sufficient to modify the script in one line:
outfile=${1:-/dev/stdout}
But nothing is written to stdout. Investigating the case further, I found that instead a regular file named stdout had been created in the /dev directory. It seems that in the Cygwin environment, /dev/stdout does not represent the standard output of the process.
How would I achieve my goal under Cygwin?
UPDATE
As requested by @matzeri, here is a simple test case:
echo x >/dev/stdout
Expected behaviour: Seeing x on stdout
Real behaviour: A regular file /dev/stdout has been created
On a standard Windows installation, the /dev/std* entries are symlinks to /proc/self/fd/*:
ls -l /dev/std*
lrwxrwxrwx 1 Marco Kein 15 Jun 19 2018 /dev/stderr -> /proc/self/fd/2
lrwxrwxrwx 1 Marco Kein 15 Jun 19 2018 /dev/stdin -> /proc/self/fd/0
lrwxrwxrwx 1 Marco Kein 15 Jun 19 2018 /dev/stdout -> /proc/self/fd/1
If for any reason that is no longer true, they can be recreated by the /etc/postinstall/bash.sh.done script:
$ grep self /etc/postinstall/bash.sh.done
/bin/test -h /dev/stdin || ln -sf /proc/self/fd/0 /dev/stdin || result=1
/bin/test -h /dev/stdout || ln -sf /proc/self/fd/1 /dev/stdout || result=1
/bin/test -h /dev/stderr || ln -sf /proc/self/fd/2 /dev/stderr || result=1
/bin/test -h /dev/fd || ln -sf /proc/self/fd /dev/fd || result=1
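If the symlinks really are missing, a hedged sketch for recreating them by hand, from a Cygwin shell with write access to /dev (these simply mirror the postinstall commands quoted above):
# Sketch: recreate Cygwin's standard-stream symlinks manually.
ln -sf /proc/self/fd/0 /dev/stdin
ln -sf /proc/self/fd/1 /dev/stdout
ln -sf /proc/self/fd/2 /dev/stderr
ln -sf /proc/self/fd   /dev/fd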
With those symlinks in place, the command
$ echo x > /dev/stdout
x
produces the expected output on both Bash and Zsh
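With the symlink in place, the one-line change from the question should behave as intended; a minimal sketch (the commands here are stand-ins for the question's command_x and command_y):
#!/bin/zsh
# Sketch: fall back to standard output when no output file is given.
# Relies on /dev/stdout -> /proc/self/fd/1 as discussed above.
outfile=${1:-/dev/stdout}
: >"$outfile"             # truncate the file, or re-open fd 1 when no argument
date        >>"$outfile"  # stand-in for command_x
echo "done" >>"$outfile"  # stand-in for command_y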

Process substitution not working with sudo

From a main bash script run as root, I want to execute a subprocess, using sudo, as the unprivileged user nobody; that subprocess should source a file whose content is provided by the main script.
I am trying to solve this using bash process substitution. But I cannot manage to get this to work.
Can someone tell me why the following script, ...
#! /bin/bash
sudo -u nobody \
bash -c 'source /dev/stdin || ls -l /dev/stdin /proc/self/fd/0 /proc/$$/fd/0; echo "A=$A"' \
< <(echo "A=$(ls /root/.profile)")
... when run as root, produces the following output?
root@raspi:~# ./test3.sh
bash: line 1: /dev/stdin: Permission denied
lrwxrwxrwx 1 root root 15 Mar 20 20:55 /dev/stdin -> /proc/self/fd/0
lr-x------ 1 nobody nogroup 64 Aug 21 14:38 /proc/3243/fd/0 -> 'pipe:[79069]'
lr-x------ 1 nobody nogroup 64 Aug 21 14:38 /proc/self/fd/0 -> 'pipe:[79069]'
A=
I would expect reading from stdin to work because, as indicated by ls -l, read access to stdin is granted to nobody (which makes sense).
So why does this not work? And is there any way to get it to work?
Answers to this question did not help: as the sample above shows, the code in the <(...) block has to access data that only root can read.
To see why you get Permission denied, use ls -lL:
sudo -u nobody \
bash -c 'source /dev/stdin || ls -lL /dev/stdin /proc/self/fd/0 /proc/$$/fd/0; echo "A=$A"' \
< <(echo "A=$(ls /root/.profile)")
To get around the error, use cat |
sudo -u nobody \
bash -c 'cat | { source /dev/stdin || ls -lL /dev/stdin /proc/self/fd/0 /proc/$$/fd/0; echo "A=$A"; }' \
< <(echo "A=$(ls /root/.profile)")

FTP not working UNIX

Hi, I have a script where I am performing sudo and changing to a particular directory, and within that directory editing file names as required. After getting the required file name, I want to FTP the files to a Windows machine, but when the script reaches the FTP commands it says:
-bash: line 19: quote: command not found
-bash: line 20: quote: command not found
-bash: line 21: put: command not found
-bash: line 22: quit: command not found
FTP works if I run it manually, so it is some other problem. The script is below:
#!/usr/bin/
path=/global/u70/glob
echo password | sudo -S -l
sudo /usr/bin/su - glob << 'EOF'
#ls -lrt
cd "$path"
pwd
for entry in $(ls -r)
do
if [ "$entry" = "ADM" ];then
cd "$entry"
FileName=$(ls -t | head -n1)
echo "$FileName"
FileNameIniKey=$(ls -t | head -n1 | cut -c 12-20)
echo "$FileNameIniKey"
echo "$xmlFileName" >> "$xmlFileNameIniKey.ini"
chmod 755 "$FileName"
chmod 755 "$FileNameIniKey.ini"
ftp -n hostname
quote USER ftp
quote PASS
put "$FileName"
quit
rm "$FileNameIniKey.ini"
fi
done
EOF
You can improve your questions and make them easier to answer and more useful for future readers by including a minimal, self-contained example. Here's an example:
#!/bin/bash
ftp -n mirrors.rit.edu
quote user anonymous
quote pass mypass
ls
When executed, you get a manual FTP session instead of a file listing:
$ ./myscript
Trying 2620:8d:8000:15:225:90ff:fefd:344c...
Connected to smoke.rc.rit.edu.
220 Welcome to mirrors.rit.edu.
ftp>
The problem is that you're assuming that a script is a series of strings that are automatically typed into a terminal. This is not true. It's a series of commands that are executed one after another.
Nothing happens with quote user anonymous until AFTER ftp has exited, and then it's run as a shell command instead of being written to the ftp command.
Instead, specify login credentials on the command line and then include commands in a here document:
ftp -n "ftp://anonymous:passwd#mirrors.rit.edu" << end
ls
end
This works as expected:
$ ./myscript
Trying 2620:8d:8000:15:225:90ff:fefd:344c...
Connected to smoke.rc.rit.edu.
220 Welcome to mirrors.rit.edu.
331 Please specify the password.
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
200 Switching to Binary mode.
229 Entering Extended Passive Mode (|||19986|).
150 Here comes the directory listing.
drwxrwxr-x 12 3002 1000 4096 Jul 11 20:00 CPAN
drwxrwsr-x 10 0 1001 4096 Jul 11 21:08 CRAN
drwxr-xr-x 18 1003 1000 4096 Jul 11 18:02 CTAN
drwxrwxr-x 5 89987 546 4096 Jul 10 10:00 FreeBSD
ftp -n "ftp://anonymous:passwd#mirrors.rit.edu" << end
Name or service not known
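For clients without URL support, a hedged sketch that keeps the credentials inside the here document, so every line is read by ftp's own command interpreter rather than by the shell (mirror host reused from the example above):
#!/bin/bash
# Sketch: with -n (no auto-login), log in using ftp's built-in 'user' command;
# everything up to END is fed to ftp, not executed by the shell.
ftp -n mirrors.rit.edu << 'END'
user anonymous mypass
ls
quit
END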

Running script with ssh

I work with machine A and I want to run a script that exists on machine B.
I did the ordinary command:
ssh user@machine_B_address '. script.sh'
The problem is that I used in the script some commands that cannot be interpreted when invoked from machine A, so I get command not found (for example):
ksh: sqlplus: not found
I tried to open an ssh session with:
ssh user@machine_B_address
and then ran the script, and it works!
Assuming that the shell for user@machine_B is bash: in the first example, ssh user@machine_B_address '. script.sh', bash sets up the shell environment differently for interactive and non-interactive sessions.
See man bash about interactive shells
It looks like you can emulate the interactive environment by adding bash -l -c:
$ ssh user@machine_B_address "bash -l -c '. script.sh'"
For a quick test, I added a debug echo to the .bash_profile of the remote user:
$ ssh foouser@jdsrpi1 "bash --login -c '. foo.sh'"
This is file .bash_profile
foouser
SHELL = /bin/bash
PATH = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/games:/usr/games
A similar scenario for ksh:
$ ssh kshuser@jdsdrop1.jimsander.io "date"
Tue Apr 18 11:52:43 EDT 2017
$ ssh kshuser#jdsdrop1 "ksh -l -c date"
This is SHELL(/usr/bin/ksh) file .profile
Tue Apr 18 11:53:24 EDT 2017
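A quick way to see the difference yourself (host and user names are placeholders): compare the PATH a plain non-interactive command gets with the PATH a login shell gets; if sqlplus lives in a directory added only by ~/.profile or ~/.bash_profile, it will show up in the second but not the first:
# Sketch: compare the non-interactive PATH with the login-shell PATH.
ssh user@machine_B_address 'echo "$PATH"'
ssh user@machine_B_address 'bash -l -c "echo \$PATH"'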

Run function on every prompt line with .bash_profile editing

I have the following PS1 command in my .bash_profile:
PS1="$(svn info 2>&1 | grep 'Relative URL' | awk '{print $NF}')"
So that the output of this command is presented in the prompt line.
But it is run only once, when I start the terminal, and just stays there instead of changing as I navigate through my directories.
How can I make it change as I am navigating my directories?
PROMPT_COMMAND
If set, the value is executed as a command prior to issuing each
primary prompt.
$ PROMPT_COMMAND=date
Sun Feb 21 13:35:21 EST 2016
$ echo a
a
Sun Feb 21 13:35:23 EST 2016
$ echo b
b
Sun Feb 21 13:35:24 EST 2016
$ PROMPT_COMMAND='PS1=`date +%H:%M`\ $\ '
13:35 $ sleep 60
13:36 $
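Applied to the question's svn prompt, a minimal sketch (the function name is mine):
# Sketch: rebuild PS1 before every prompt so it reflects the current directory.
svn_ps1() {
    PS1="$(svn info 2>&1 | grep 'Relative URL' | awk '{print $NF}') \$ "
}
PROMPT_COMMAND=svn_ps1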
