No response from [ -f /path/to/file ] - bash

I am trying to get my Capistrano deploy script working, but it is not doing the symlinking it is configured to do, as shown below.
set :linked_files, %w{config/database.yml}
set :linked_dirs, %w{log tmp vendor/bundle public/system}
When it runs the related command, I get the following:
WARN [SKIPPING] No Matching Host for /usr/bin/env [ -f /path/to/shared/config/database.yml ]
If I run this command on the server, either over ssh or by logging onto the server directly, I get no response from the command.
user: ~
$ [ -f /path/to/shared/config/database.yml ]
user: ~
$
The file does exist in the specified location and has the necessary permissions.
user: ~
$ ll /path/to/shared/config/
total 4.0K
drwxrwxr-x 2 user group 33 Nov 30 10:58 .
drwxrwxr-x 7 user group 89 Nov 30 10:58 ..
-rwxrwxr-x 1 user group 805 Nov 30 10:58 database.yml
user: ~
Shouldn't this return a true or a false, instead of nothing? Is there a configuration I may have changed that suppresses the output? I get no response at all whether the file exists or not.

To answer the actual question you ask: test (which is what [ is another name for) does not in fact print any output to stdout. It returns an exit code.
user: ~
$ [ -f /path/to/shared/config/database.yml ] # if the file exists
user: ~
$ echo $?
0
user: ~
$ [ -f /path/to/shared/config/database.yml ] # if the file does not exist
user: ~
$ echo $?
1
test -f /path/to/file (or [ -f /path/to/file ]) yields an exit code of 0 if the file exists or 1 if it does not. If you want to check that a file is there and echo the path to it, try:
[ -f /path/to/file ] && echo "/path/to/file"
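For completeness, here is a minimal sketch (not from the original answer) of branching on that exit status in a script; the path is just a placeholder:

#!/bin/bash
# placeholder path; adapt to your shared directory
if [ -f /path/to/shared/config/database.yml ]; then
    echo "linked file is present"
else
    echo "linked file is missing" >&2
    exit 1
fi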

Bash script not recognizing existing file

I have to write a simple bash script that compiles multiple SQL scripts which can recursively reference other SQL scripts using the Oracle SQL*Plus convention, so that in the end the result is a single SQL file containing all statements from all (possibly also recursively) referenced subscripts.
I have created a simple script called oracle-script-compressor.sh with the following bash code:
#!/bin/bash
#
#
function err () {
    echo $* > "/dev/stderr"
}

function replace_subscripts_placeholders() {
    local PROCESSING_FILENAME=$1
    echo "-- processing file $PROCESSING_FILENAME"
    while read script_line; do
        local script_line_NO_LEAD_SPACE="$(echo -e "${script_line}" | sed -e 's/^[[:space:]]*//')"
        if [[ $script_line_NO_LEAD_SPACE == \#\#* ]] ; then
            local file_name="./${script_line_NO_LEAD_SPACE#\#\#}"
            echo "-- found reference to file $file_name"
            echo "-- starting with processing, pwd=$PWD"
            # for debug purposes:
            ls -la
            # but this returns always false:
            if [ -f "$file_name" ]; then
                # so this part is never started:
                . replace_subscripts_placeholders $file_name
            else
                err_msg="WARNING: Could not find the referenced file $file_name"
                echo "-- $err_msg"
                err $err_msg
            fi
        else
            echo "$script_line"
        fi
    done < $PROCESSING_FILENAME
}

if test -z "$1" ; then
    err "Usage: oracle-script-compressor.sh {file_name_to_process} [> output_file]"
    err "  Be aware, if the referenced files within {file_name_to_process} are not specified by absolute paths, you have to start the script from the corresponding directory."
    err "  If the part [> output_file] is omitted, this script writes to standard output."
    err "  If the part [>> output_file] is used, the result will be appended to output_file if it exists before the processing."
else
    if [ -f "$1" ]; then
        replace_subscripts_placeholders $1
    else
        echo "file $1 does not exist"
    fi
fi
and I am stuck on an interesting problem: it seems that inside the function replace_subscripts_placeholders() the file check
if [ -f "$file_name" ]; then
. replace_subscripts_placeholders $file_name
else
err_msg="WARNING: Could not find the referenced file $file_name"
echo "-- $err_msg"
err $err_msg
fi
never takes the recursion branch, and even if I remove the if statement and really call the function recursively with the correct referenced file name, which exists, the file is still not recognized as found and is not passed into the loop over all lines of the referenced file in the recursive call (a "file not found" error comes instead and the loop cannot be executed).
I have added the debug messages into the script as shown above, but I am still unable to find why the hell bash should not find the file if it is in the same directory. The scripts are placed in
user@mycomputer:/tmp/test$ ls -la
celkem 52
drwxr-xr-x 2 user user 4096 zář 14 21:47 .
drwxrwxrwt 38 root root 36864 zář 14 21:48 ..
-rw-r--r-- 1 user user 51 zář 14 21:45 a.sql
-rw-r--r-- 1 user user 51 zář 14 21:45 b.sql
-rw-r--r-- 1 user user 590 zář 14 21:46 start.sql
user@mycomputer:/tmp/test$
and the content of the file start.sql looks like this:
spool output__UpdSch.log
whenever sqlerror exit sql.sqlcode
--
--
PROMPT a.sql - starting
select to_char(current_timestamp,'YYYY-MM-DD HH24:MI:SS.FF3') as START_TIMESTAMP from dual;
##a.sql
PROMPT a.sql - finished
select to_char(current_timestamp,'YYYY-MM-DD HH24:MI:SS.FF3') as FINISH_TIMESTAMP from dual;
--
--
PROMPT b.sql - starting
select to_char(current_timestamp,'YYYY-MM-DD HH24:MI:SS.FF3') as START_TIMESTAMP from dual;
##b.sql
PROMPT b.sql - finished
select to_char(current_timestamp,'YYYY-MM-DD HH24:MI:SS.FF3') as FINISH_TIMESTAMP from dual;
--
--
spool off
and if I execute the script, it seems to decode the filenames correctly, but the problem in bash remains: the test for the file's existence always returns false:
user@mycomputer:/tmp/test$ ~/tmp/oracle-script-compressor.sh start.sql
-- processing file start.sql
spool output__UpdSch.log
whenever sqlerror exit sql.sqlcode
--
--
PROMPT a.sql - starting
select to_char(current_timestamp,'YYYY-MM-DD HH24:MI:SS.FF3') as START_TIMESTAMP from dual;
-- found reference to file ./a.sql
-- starting with processing, pwd=/tmp/test
celkem 52
drwxr-xr-x 2 user user 4096 zář 14 21:47 .
drwxrwxrwt 38 root root 36864 zář 14 21:48 ..
-rw-r--r-- 1 user user 51 zář 14 21:45 a.sql
-rw-r--r-- 1 user user 51 zář 14 21:45 b.sql
-rw-r--r-- 1 user user 590 zář 14 21:46 start.sql
-- WARNING: Could not find the referenced file ./a.sql
WARNING: Could not find the referenced file ./a.sql
PROMPT a.sql - finished
select to_char(current_timestamp,'YYYY-MM-DD HH24:MI:SS.FF3') as FINISH_TIMESTAMP from dual;
--
--
PROMPT b.sql - starting
select to_char(current_timestamp,'YYYY-MM-DD HH24:MI:SS.FF3') as START_TIMESTAMP from dual;
-- found reference to file ./b.sql
-- starting with processing, pwd=/tmp/test
celkem 52
drwxr-xr-x 2 user user 4096 zář 14 21:47 .
drwxrwxrwt 38 root root 36864 zář 14 21:48 ..
-rw-r--r-- 1 user user 51 zář 14 21:45 a.sql
-rw-r--r-- 1 user user 51 zář 14 21:45 b.sql
-rw-r--r-- 1 user user 590 zář 14 21:46 start.sql
-- WARNING: Could not find the referenced file ./b.sql
WARNING: Could not find the referenced file ./b.sql
PROMPT b.sql - finished
select to_char(current_timestamp,'YYYY-MM-DD HH24:MI:SS.FF3') as FINISH_TIMESTAMP from dual;
--
--
spool off
user@mycomputer:/tmp/test$
The script has read access to the files, everything is in the same folder where I start the script, and it does not matter whether I reference the file with the current-directory prefix "./" or not; it is just never found. Interestingly, the check during the start of the script passes correctly; the only problem is the check within the function. I have also tried calling the function with a preceding "." and without it, and it makes no difference. There are also no trailing spaces in the referenced file names. It also makes no difference whether I declare the variables inside the function as local or not.
So I have no idea what the problem could be. Maybe I have just looked at it too long and cannot see something; it seems it has to be something trivial, like the function being started in some different current directory (but pwd and ls show the directory is always correct...???). Any help or pointers will be appreciated.
Thank you for the comments, they brought me to the solution. I found out that the problem was caused by the fact that the test SQL files were created on Windows, so every line ended with the <CR><LF> characters. In the original bash script I posted, the line
while read script_line
therefore puts into the variable not only the file name from the line, preceded by the characters ##
##a.sql
but also the <CR> character, so the variable actually contains the value
##a.sql<CR>
which is why the file a.sql etc. could never be found. Of course the character is invisible, so it never showed up in any of the debug echoing I did here; I had to wrap the content of $file_name in some other characters before I could see it in the echoed test. I also made some other corrections, and the final working script looks as follows. If somebody needs to join referenced SQL scripts into one, this does it:
#!/bin/bash
#
#
function err () {
    echo $* > "/dev/stderr"
}

function replace_subscripts_placeholders() {
    local PROCESSING_FILENAME=$1
    echo "-- processing file $PROCESSING_FILENAME"
    while read script_line || [ -n "$script_line" ]; do
        # note: the ^M below is a literal carriage return (typed as Ctrl-V Ctrl-M), not caret plus M
        local script_line_NO_LEAD_SPACE="$(echo -e "${script_line}" | sed -e 's/^[[:space:]]*//' | sed -e 's/^M$//')"
        if [[ $script_line_NO_LEAD_SPACE == \#\#* ]] ; then
            local file_name="${script_line_NO_LEAD_SPACE#\#\#}"
            echo "-- found reference to file $file_name"
            if [ -f "$file_name" ]; then
                replace_subscripts_placeholders $file_name
            else
                err_msg="WARNING: Could not find the referenced file $file_name"
                echo "-- $err_msg"
                err $err_msg
            fi
        else
            echo "$script_line"
        fi
    done < $PROCESSING_FILENAME
}

if test -z "$1" ; then
    err "Usage: oracle-script-compressor.sh {file_name_to_process} [> output_file]"
    err "  Be aware, if the referenced files within {file_name_to_process} are not specified by absolute paths, you have to start the script from the corresponding directory."
    err "  If the part [> output_file] is omitted, this script writes to standard output."
    err "  If the part [>> output_file] is used, the result will be appended to output_file if it exists before the processing."
else
    if [ -f "$1" ]; then
        replace_subscripts_placeholders $1
    else
        echo "file $1 does not exist"
    fi
fi
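As an aside (not part of the original answer), the trailing carriage return can also be stripped with bash parameter expansion instead of sed, and invisible characters can be made visible while debugging. A minimal sketch, assuming standard printf/cat and an optionally installed dos2unix:

# strip a trailing carriage return without sed
script_line="${script_line%$'\r'}"

# reveal invisible characters while debugging: cat -A shows a CR as ^M
# and marks the end of each line with $
printf '%s\n' "$file_name" | cat -A

# or convert the input files up front (dos2unix may need to be installed)
# dos2unix start.sql a.sql b.sql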

How can I find the newest directory?

When starting the Tcl script, a directory is created via a bash command. At the end of my script I want to read the name of the latest directory, but my script does not find the newest directory, only the 2nd newest.
bind pub "-|-" !aa pub:aaa

proc pub:aaa {nick host handle channel arg} {
    set home "/home/user"
    set bb [exec bash -c "start.sh"]
    after 3000
    set latest [exec bash -c "ls -td $home/jpg/*/ | head -n1"]
    putnow "PRIVMSG $channel :$latest"
}
Before starting, the directory contains the following folders:
drwxr-xr-x 2 user user 4096 Jun 24 18:30 aaa
drwxr-xr-x 2 user user 4096 Jun 24 18:14 bbb
After starting, the directory contains the following folders:
drwxr-xr-x 2 user user 4096 Jun 24 18:30 aaa
drwxr-xr-x 2 user user 4096 Jun 24 18:14 bbb
drwxr-xr-x 2 user user 4096 Jun 24 18:35 ccc
The output is:
<#testbot> aaa
It should be:
<#testbot> ccc
It only finds the directory that was created while the Tcl script was not yet running. How can I display the newest, newly created directory?
Regards
Instead of trying to exec out to a shell to find the most recently modified directory, I'd do it in pure tcl:
proc latest_directory {path {time mtime}} {
    set dirs {}
    foreach dir [glob -nocomplain -type d $path/*] {
        file stat $dir s
        lappend dirs $s($time) $dir
    }
    if {[llength $dirs] == 0} {
        error "No directories found in $path"
    } else {
        return [lindex [lsort -integer -decreasing -stride 2 $dirs] 1]
    }
}

# Then in pub:aaa
set latest [latest_directory $home/jpg]
As for why you're not getting ccc... hard to say for sure without seeing your start.sh script, but if it ends up running stuff in the background that continues after it exits, maybe it takes more than 3 seconds to create that directory?
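If start.sh does indeed launch its work in the background, one option (a sketch outside the original answer, with placeholder job names) is to make the script wait for its children before exiting, so that the exec call in Tcl only returns once the new directory actually exists:

#!/bin/bash
# start.sh sketch: launch the jobs, then block until they all finish
create_jpg_directory_job &      # placeholder for whatever creates the directory
another_background_job &        # placeholder
wait                            # do not exit until all background children are done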

Adding timestamp and user to bash history

I want to add the timestamp and user, who executed the command to my history command. Is this even possible?
By adding this line to your ~/.bashrc, you can add timestamps to the history command:
HISTTIMEFORMAT="[ %Y/%m/%d %T ] "
This results in something like this by running history:
1841 [ 2016/08/25 10:57:54 ] ls
1842 [ 2016/08/25 10:57:56 ] who
1843 [ 2016/08/25 10:57:59 ] last
1844 [ 2016/08/25 10:58:10 ] uptime
1845 [ 2016/08/25 10:58:13 ] history
Now I want an additional piece of information: which real user executed these commands?
Yes, the history is saved and written for each individual user to $HISTFILE (~/.bash_history), but usually every user has their own login like "firstname-name" or similar.
For example, a user called "max-mustermann" logs in via SSH and switches to the user "root". The user "max-mustermann" is now acting as "root".
I want to log the "real" user (in this case "max-mustermann") and maybe some additional information like the IP address used, to be able to identify who executed the command "uptime" as user "root". I thought I had found a solution, but it didn't really work. Add the following lines to your ~/.bashrc to test it:
TTY_CON=$(tty | cut -d '/' -f 3-);
HIST_WHO_INFO=$(who | grep "${TTY_CON}" | cut -d ' ' -f 1,12);
HISTTIMEFORMAT="[ %Y/%m/%d %T by ${HIST_WHO_INFO} ] "
This returns something like this:
1848 [ 2016/08/25 11:06:47 by max-mustermann (10.50.1.42:S.0) ] ls
1849 [ 2016/08/25 11:06:48 by max-mustermann (10.50.1.42:S.0) ] who
1850 [ 2016/08/25 11:06:49 by max-mustermann (10.50.1.42:S.0) ] last
1851 [ 2016/08/25 11:06:51 by max-mustermann (10.50.1.42:S.0) ] uptime
1852 [ 2016/08/25 11:06:52 by max-mustermann (10.50.1.42:S.0) ] history
That's perfect, but unfortunately it doesn't keep the real user saved with the history. If somebody else logs in, all commands that were executed as user "max-mustermann" in this example get updated to the new user who has logged in:
1848 [ 2016/08/25 11:06:47 by somebody-else (10.50.1.18) ] ls
1849 [ 2016/08/25 11:06:48 by somebody-else (10.50.1.18) ] who
1850 [ 2016/08/25 11:06:49 by somebody-else (10.50.1.18) ] last
1851 [ 2016/08/25 11:06:51 by somebody-else (10.50.1.18) ] uptime
1852 [ 2016/08/25 11:06:52 by somebody-else (10.50.1.18) ] history
1853 [ 2016/08/25 11:08:13 by somebody-else (10.50.1.18) ] history
How can I solve this problem?
This is not always possible, depending on your server configuration and user behaviour, but it can be if you "educate" your users properly. There are two ways (that I know of) to get this information.
1. Let your users use sudo only when they actually need it. Don't "switch" to root and start running commands; run separate commands with a sudo prefix whenever root access is actually needed. That way the history is still saved per user (and a sudo log will be written to the /var/log/auth.log file on most systems).
2. Make them switch to root using sudo su and not sudo su -; the - resets their environment variables. If you omit the -, the root environment should have a SUDO_USER variable holding the user that switched to the account. For example:
oldskool@server:~$ sudo su
root@server:/home/oldskool# echo $SUDO_USER
oldskool
You could use that SUDO_USER in your history timeformat.
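For instance (a sketch, not from the original answer), something along these lines in root's ~/.bashrc falls back to the plain user name when SUDO_USER is unset; note that, as the next answer points out, only the timestamp itself is persisted to the history file:

HISTTIMEFORMAT="[ %Y/%m/%d %T by ${SUDO_USER:-$USER} ] "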
HISTTIMEFORMAT will only save the epoch timestamp to HISTFILE (bash writes it as a #<seconds> comment line before each command; the format string is applied only when history is displayed). You can't add any other text. Instead, I create different history files per user:
echo "export HISTSIZE=10000" >> ~/.bash_profile
echo "export HISTTIMEFORMAT=\"%F %T \"" >> ~/.bash_profile
echo "export HISTFILE=\"/root/.bash_history_\$(who am i | awk '{print $1}')\"" >> ~/.bash_profile
echo "export PROMPT_COMMAND='history -a'" >> ~/.bash_profile

Bash completion for Camel.CaseNames (cd CCN<tab> => cd Camel.CaseName)

I'm looking for a way to implement completion for file/directory names of the kind
Foo.Bar.Baz/
Foo.Bar.QuickBrown.Fox/
Foo.Bar.QuickBrown.Interface/
Foo.Bar.Query.Impl/
where the completion would work like
~ $ cd QI<tab><tab>
Foo.Bar.QuickBrown.Interface/ Foo.Bar.Query.Impl/
~ $ cd QIm<tab><enter>
~/Foo.Bar.Query.Impl $
However, my simplistic approach of building a glob pattern from the input (e.g. QIm -> *Q*I*m) does not exactly work for files / directories sharing the same prefix. In the case above, I get
~ $ cd QI<tab><tab>
Foo.Bar.QuickBrown.Interface/ Foo.Bar.Query.Impl/
~ $ cd Foo.Bar.Qu<tab><tab>
Foo.Bar.QuickBrown.Fox/ Foo.Bar.QuickBrown.Interface/ Foo.Bar.Query.Impl/
i.e. bash replaces the current word with the longest common prefix of the possible completions, which in this case results in a larger completion set.
Here's my completion function:
_camel_case_complete()
{
    local cur pat
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    pat=$(sed -e 's/\([A-Z]\)/*\1*/g' -e 's/\*\+/*/g' <<< "$cur")
    COMPREPLY=( $(compgen -G "${pat}" -- $cur ) )
    return 0
}
Any hints on how to fix this without breaking normal filename / directory completion?
See the following example:
% ls -l
total 20
-rw-r--r-- 1 root root 315 2016-06-02 18:30 compspec
drwxr-xr-x 2 root root 4096 2016-06-02 17:56 Foo.Bar.Baz
drwxr-xr-x 2 root root 4096 2016-06-02 17:56 Foo.Bar.Query.Impl
drwxr-xr-x 2 root root 4096 2016-06-02 17:56 Foo.Bar.QuickBrown.Fox
drwxr-xr-x 2 root root 4096 2016-06-02 17:56 Foo.Bar.QuickBrown.Interface
% cat compspec
_camel_case_complete()
{
    local cur=$2
    local pat
    pat=$(sed -e 's/[A-Z]/*&/g' -e 's/$/*/' -e 's/\*\+/*/g' <<< "$cur")
    COMPREPLY=( $(compgen -G "${pat}" ) )
    if [[ ${#COMPREPLY[@]} -gt 1 ]]; then
        # Append a dummy entry so the matches share no common prefix; this stops
        # readline from replacing the typed word with the longest common prefix
        # of the real matches.
        # Or use " " instead of "__"
        COMPREPLY[${#COMPREPLY[@]}]="__"
    fi
    return 0
}
complete -F _camel_case_complete cd
% . ./compspec
% cd QI<TAB><TAB>
__ Foo.Bar.QuickBrown.Interface
Foo.Bar.Query.Impl
% cd QIm<TAB>
% cd Foo.Bar.Query.Impl<SPACE>

How to decrease TCP connect() system call timeout?

In command below I enable file /dev/tcp/10.10.10.1/80 both for reading and writing and associate it with file descriptor 3:
$ time exec 3<>/dev/tcp/10.10.10.1/80
bash: connect: Operation timed out
bash: /dev/tcp/10.10.10.1/80: Operation timed out
real 1m15.151s
user 0m0.000s
sys 0m0.000s
This automatically tries to perform the TCP three-way handshake. If 10.10.10.1 is not reachable, as in the example above, the connect system call keeps trying for 75 seconds. Is this 75-second timeout determined by bash, or is it the system default? Last but not least, is there a way to decrease this timeout value?
This is not possible in Bash without modifying the source, as already mentioned, but here is a workaround using the timeout command, e.g.:
$ timeout 1 bash -c "</dev/tcp/stackoverflow.com/80" && echo Port open. || echo Port closed.
Port open.
$ timeout 1 bash -c "</dev/tcp/stackoverflow.com/81" && echo Port open. || echo Port closed.
Port closed.
Using this syntax, the timeout command will kill the process after the given time.
See: timeout --help for more options.
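Wrapped up as a reusable function, that might look like the following sketch (the host, port and one-second limit are only examples):

# return success if a TCP connection to host $1, port $2 can be opened within 1 second
port_open() {
    timeout 1 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open stackoverflow.com 80; then
    echo "Port open."
else
    echo "Port closed or filtered."
fi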
It is determined by TCP. It can be decreased on a per-socket basis by application code.
NB The timeout only takes effect if there is no response at all. If there is a connection refusal, the error occurs immediately.
No: there is no way to change the timeout when using /dev/tcp/.
Yes, you could change the default timeout for a TCP connection in any programming language.
But bash is not a programming language!
You could have a look into the source code (see: Bash Homepage); in the lib/sh/netopen.c file you will find the _netopen4 function, which contains:
s = socket(AF_INET, (typ == 't') ? SOCK_STREAM : SOCK_DGRAM, 0);
If you read this file carefully, you will see there is no consideration of a connection timeout.
Without patching the bash sources, there is no way of changing the connection timeout from a bash script.
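In practice the usual workaround is to delegate the connection to a tool that does expose a connect timeout. A sketch, assuming nc and/or curl are installed (option support varies between netcat variants):

# netcat: -w sets the timeout in seconds, -z only scans without sending data
nc -z -w 2 stackoverflow.com 80 && echo "Port open."

# curl: --connect-timeout limits only the connection phase
curl --connect-timeout 2 -s -o /dev/null http://stackoverflow.com/ && echo "Reachable."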
Simple HTTP client using netcat (near pure bash)
There is a little sample HTTP client written in pure bash, but using netcat:
#!/bin/bash
# temp file used as a buffer between nc's output and our reads
tmpfile=$(mktemp -p $HOME .netbash-XXXXXX)
# fd 7 feeds nc's stdin; nc writes the server's answer into the temp file
exec 7> >(nc -w 3 -q 0 stackoverflow.com 80 >$tmpfile)
# fd 6 reads the temp file, which can then be unlinked
exec 6<$tmpfile
rm $tmpfile

printf >&7 "GET %s HTTP/1.0\r\nHost: stackoverflow.com\r\n\r\n" \
    /questions/24317341/how-to-decrease-tcp-connect-system-call-timeout

timeout=100;
# poll until the status line arrives
while ! read -t .001 -u 6 status ; do read -t .001 foo; done
echo STATUS: $status
[ "$status" ] && [ -z "${status//HTTP*200 OK*}" ] || exit 1

echo HEADER:
while read -u 6 -a head && [ "${head//$'\r'}" ]; do
    printf "%-20s : %s\n" ${head%:} "${head[*]:1}"
done

echo TITLE:
sed '/<title>/s/<[^>]*>//gp;d' <&6

exec 7>&-
exec 6<&-
This could render:
STATUS: HTTP/1.1 200 OK
HEADER:
Cache-Control : private
Content-Type : text/html; charset=utf-8
X-Frame-Options : SAMEORIGIN
X-Request-Guid : 46d55dc9-f7fe-425f-a560-fc49d885a5e5
Content-Length : 91642
Accept-Ranges : bytes
Date : Wed, 19 Oct 2016 13:24:35 GMT
Via : 1.1 varnish
Age : 0
Connection : close
X-Served-By : cache-fra1243-FRA
X-Cache : MISS
X-Cache-Hits : 0
X-Timer : S1476883475.343528,VS0,VE100
X-DNS-Prefetch-Control : off
Set-Cookie : prov=ff1129e3-7de5-9375-58ee-5f739eb73449; domain=.stackoverflow.com; expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly
TITLE:
bash - How to decrease TCP connect() system call timeout? - Stack Overflow
Some explanations:
We first create a temporary file (under a private directory, for security reasons), bind it to file descriptors, and delete it before using them.
$ tmpfile=$(mktemp -p $HOME .netbash-XXXXXX)
$ exec 7> >(nc -w 3 -q 0 stackoverflow.com 80 >$tmpfile)
$ exec 6<$tmpfile
$ rm $tmpfile
$ ls $tmpfile
ls: cannot access /home/user/.netbash-rKvpZW: No such file or directory
$ ls -l /proc/self/fd
lrwx------ 1 user user 64 Oct 19 15:20 0 -> /dev/pts/1
lrwx------ 1 user user 64 Oct 19 15:20 1 -> /dev/pts/1
lrwx------ 1 user user 64 Oct 19 15:20 2 -> /dev/pts/1
lr-x------ 1 user user 64 Oct 19 15:20 3 -> /proc/30237/fd
lr-x------ 1 user user 64 Oct 19 15:20 6 -> /home/user/.netbash-rKvpZW (deleted)
l-wx------ 1 user user 64 Oct 19 15:20 7 -> pipe:[2097453]
$ echo GET / HTTP/1.0$'\r\n\r' >&7
$ read -u 6 foo
$ echo $foo
HTTP/1.1 500 Domain Not Found
$ exec 7>&-
$ exec 6>&-
