sed emptying file in script - bash

I have a script that manipulates some log files and then pushes them up to a server, to be loaded into MySQL and analyzed. I have almost all of this process figured out except the automation of collecting the logs. I use sed to strip all the " characters out of the log files, so they can be imported into MySQL more easily. When I run the command below it works fine, but run the same command from a shell script and it creates an empty file. I am not sure why; any help would be greatly appreciated.
sed 's/\"//g' /usr/local/tomcat/logs/localhost_access_log.$yest.txt > /$DIR/iweb$yest.txt
Here is the complete script.
#!/bin/bash
#Script to manage catalina logs on our servers.
#First create the needed variables.
date=$(date +"%m_%d_%y")
adate=$(date +"%Y-%m-%d")
yest=$(date -d 'yesterday' +"%Y-%m-%d")
Customer="iwebsup"
log=/isweb/admin/catmanage/log
DIR=/catmanage
#Make Directory
#mkdir /catmanage/tomcat1/
#Run access logs through sed to remove " from file
echo "Removing quote marks from access log and moving to direcotry" &> $log.$date
sed 's/\"//g' "/usr/local/tomcat/logs/localhost_access_log.$yest.txt" > "/$DIR/iweb$yest.txt" &> $log.$date

Your original question shows redirection with > but your actual script has &>. These are rather different things, and in fact, the latter is probably incompatible with your sh.
The portable, compatible way to include error redirection is
command >file 2>&1
Your shebang line says #!/bin/bash but based on your diagnostic remarks, I am guessing you are running this with sh after all.
By the way, tr -d '"' <file >newfile would be a more efficient way to remove double quotes from a file.
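Note also that in bash, cmd > A &> B sets up stdout twice, and the last redirection wins: A is created but left empty, which matches the symptom you describe. Applied to your script, a portable version of the line might look like this (a sketch keeping your variables, with stdout going to the data file and errors appended to the log):
sed 's/"//g' "/usr/local/tomcat/logs/localhost_access_log.$yest.txt" \
    > "$DIR/iweb$yest.txt" 2>>"$log.$date"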

Launching a shell script starts a new system process. I suspect that within the context of this script's sub-shell, $yest is not set as a shell variable. If you're going to use $yest in a shell script, you should ideally pass its value as an argument to the script, or alternatively export the shell variable as an environment variable (export yest, with no $) so that it is inherited by the sub-shell process (all child processes inherit the environment of their parent process).
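A minimal sketch of both options (process_logs.sh is a placeholder name for your script):
# in the parent shell:
yest=$(date -d 'yesterday' +"%Y-%m-%d")
export yest                # export takes the variable's name, not $yest
./process_logs.sh          # the child process can now read "$yest"
# or, more explicitly, pass it as an argument:
./process_logs.sh "$yest"  # inside the script: yest=$1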
When debugging shell scripts, it's always useful to include set -xvu at the start of the section that you're debugging so that you can see what the script is doing and what values are stored in its variables.
-x Print commands and their arguments as they are executed.
-v Print shell input lines as they are read.
-u Treat unset variables as an error when substituting.
You can turn off this debugging again later by running set +xvu (note the + signs).
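A sketch of how the flags bracket the section being debugged (the file names here are placeholders):
#!/bin/bash
set -xvu       # start tracing; unset variables become fatal errors
yest=$(date -d 'yesterday' +"%Y-%m-%d")
sed 's/"//g' "access_log.$yest.txt" > "iweb$yest.txt"
set +xvu       # back to quiet execution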

Related

Unix Jobs command not listing background jobs

I am trying to create a simple script to zip a list of files, each into its own zip file. The files are big, so I am trying to send them to the background using an ampersand. It works, as I can see the temporary files filling up, and after some time the files are created, but issuing the 'jobs' command does not list the jobs. What am I doing wrong?
#!/bin/ksh
for file in $*;do
bash -c "zip -q $file.zip $file" &
done
NATIVE CSH SOLUTION
As I said earlier, shell scripts execute in a subshell and the parent shell will not be able to list the jobs of a subshell. In order to use jobs, the jobs need to be running in the same shell.
This can be achieved by sourcing the file. Since your default shell is csh, the file should contain these lines in csh syntax:
# not a script. no need for shebang
# sourcing this file **in csh** will
# start quiet zip jobs in the background
# for all files in the working dir (*)
foreach file (*)
zip -q "$file.zip" "$file" &
end
Keeping this file in an easily accessible location and running source /path/to/file will give you what you need.
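For example (a sketch; the file name and the jobs output are illustrative):
% source ~/zipjobs.csh
% jobs
[1]  + Running    zip -q file1.zip file1
[2]    Running    zip -q file2.zip file2
Because the loop ran in the login shell itself, jobs can now see the background zips.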
This is the only way to do it in csh for the following reasons:
it cannot be a shell script, because jobs would then not list anything
csh does not support shell functions
setting an alias is not easy, due to csh's foreach syntax
But also consider a few of these alternatives
A. The organisation allows for changing the login shell
1. Change the shell to one that allows shell functions (e.g. to bash):
chsh -s `which bash` $USER
2. Log out and log in again, or simply execute bash (or your shell of choice) to start a new shell.
3. Check you are in the right shell: echo $0
4. Add a function to your user-level login script (~/.bashrc for bash):
# executing this command appends a bash function named `zipAll` to ~/.bashrc
# modify according to your choice of shell
cat << 'EOF' >> ~/.bashrc
zipAll() {
for file in *; do
zip -q "$file.zip" "$file" &
done
}
EOF
The function zipAll should be available from the next login onwards.
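For example, after the next login (or after running source ~/.bashrc), the jobs started by the function belong to the interactive shell itself, so jobs lists them (output is illustrative):
$ cd /path/to/files
$ zipAll
$ jobs
[1]-  Running    zip -q file1.zip file1 &
[2]+  Running    zip -q file2.zip file2 &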
B. The organisation does not allow changing login shell
1. Simply execute bash (or your shell of choice) to start a new shell.
2. Follow steps A3 to A4.
3. Temporarily switch to a new shell with bash (or your shell of choice) whenever you need this function.
C. B; but you want to use bash (or other shell)
I do not know if this is a good solution; hopefully someone will point out its ill effects. Ideally your organisation simply allows you to change the login shell.
1. Seeing as your default shell is csh, add a line to ~/.cshrc to start bash (or your choice of shell):
echo 'bash --login' >> ~/.cshrc
2. Follow steps A2 to A4.
3. Copy the necessary lines from the existing ~/.cshrc to ~/.bashrc (or the file corresponding to your shell).
Confusion regarding zip usage was an oversight on my part. Apologies.
NB: The syntax zip -q $file $file.zip does not work with my version, but I retain it assuming that it works on the OP's system.
PS: The command that works with my version of zip is zip -q $file.zip $file

How to pass a file which may have a different name using Execute Shell command in Jenkins

I have a Jenkins job in which I want to read a file from a directory using the shell and pass that file in ant test step.
Say the file I want to read is /home/xxx/y.txt. The name of the file always changes but there will be only single file with .txt extension at any given point in that directory.
So, I am trying to pass that file in the "Execute Shell" build action as ant -Dfile=/home/xxx/*.txt but the build is "unable to read the file".
The shell won't expand -Dfile=/home/xxx/*.txt into -Dfile=/home/xxx/y.txt because -Dfile=/home/xxx/y.txt is not a file. However, the shell will expand /home/xxx/*.txt into /home/xxx/y.txt. You can get the result you want using command substitution:
ant -Dfile=`echo /home/xxx/*.txt`
To protect against whitespace in the file path, you can use double quotes around the backticks:
ant -Dfile="`echo /home/xxx/*.txt`"
General tip: If you are having trouble with a shell script running in a Jenkins job, try enabling command tracing and view the console output to help debug. Command tracing can be enabled in one of two ways (take your pick):
Pass -x as an option to the shebang at the beginning of the script. For example, replace #!/bin/sh with #!/bin/sh -x. All commands will be output on standard error before they are executed.
Place set -x somewhere in your script. Commands after this line will be traced.
Consider:
set -- /home/xxx/*.txt
{ [ "$#" -eq 1 ] && [ -e "$1" ]; } || {
echo "ERROR: There should be exactly one file matching /home/xxx/*.txt" >&2
exit 1
}
ant -Dfile="$1"
This has several advantages:
You're actually detecting the unexpected cases instead of letting them pass unnoticed when (not if) an impossible thing happens.
Everything is happening in a single shell -- there's no subshell performance impact.
Your filenames aren't being mangled at all; all the odd corner cases (i.e. names with literal backslashes, which echo is allowed by POSIX to mangle) are fully supported.
It's fully compliant with any POSIX shell.
There's also a caveat:
set -- /home/xxx/*.txt overrides "$#", the argument vector, in the current context. If you need to refer to arguments as "$1", "$2", etc. in the outside script, you might put this code inside a function.
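A sketch of that function wrapping (the function name is illustrative):
run_ant_on_txt() {
    set -- /home/xxx/*.txt          # overrides only the function's own "$@"
    { [ "$#" -eq 1 ] && [ -e "$1" ]; } || {
        echo "ERROR: There should be exactly one file matching /home/xxx/*.txt" >&2
        return 1
    }
    ant -Dfile="$1"
}
run_ant_on_txt || exit 1            # the caller's "$1", "$2", ... are untouched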
In bash you can also expand the glob into an array (note: no backticks, which would try to execute the matched file as a command):
file_name=(/home/xxx/*.txt)
ant -Dfile="${file_name[0]}"

Cannot properly execute bash script because of a redirection in an environment where root is the owner

My script is executable and I run it as sudo. I tried many workarounds and alternatives to the ">>" operator but nothing seemed to work properly.
My script:
#! /bin/bash
if [[ -z "$1" || -z "$2" ]]; then
exit 1
else
root=$1
fileExtension=$2
fi
$(sudo find $root -regex ".*\.${fileExtension}") >> /home/mux/Desktop/AllFilesOf${fileExtension}.txt
I tried tee, sed and dd of=; I also tried running it with bash -c or in sudo -i. Nothing worked: either I get an empty file or a Permission denied error.
I searched thoroughly and read many command manuals, but I can't get it to work.
The $() operator performs command substitution. When the overall command line is expanded, the command within the parentheses is executed, and the whole construct is replaced with the command's output. After all expansions are performed, the resulting line is executed as a command.
Consider, then, this simplified version of your command:
$(find /etc -regex ".*\.conf") >> /home/mux/Desktop/AllFilesOfconf.txt
On my system that will expand to a ginormous command of the form
/etc/rsyslog.conf /etc/pnm2ppa.conf ... /etc/updatedb.conf >> /home/mux/Desktop/AllFilesOfconf.txt
Note at this point that the redirection is separate from, and therefore independent of, the command in the command substitution. Expanding the command substitution therefore does not cause anything to be written to the target file.
But we're not done! That was just the expansion. Bash now tries to execute the result as a command. In particular, in the above example it tries to execute /etc/rsyslog.conf as a command, with all the other file names as arguments, and with output redirected as specified. But /etc/rsyslog.conf is not executable, so that will fail, producing a "permission denied" message. I'm sure you can extrapolate from there what effects different expansions would produce.
I don't think you mean to perform a command substitution at all, but rather just to run the command and redirect its output to the given file. That would simply be this:
sudo find $root -regex ".*\.${fileExtension}" >> /home/mux/Desktop/AllFilesOf${fileExtension}.txt
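Putting it together, a corrected version of the whole script might look like this (a sketch; quoting the expansions also protects paths that contain spaces):
#!/bin/bash
if [[ -z "$1" || -z "$2" ]]; then
    exit 1
fi
root=$1
fileExtension=$2
sudo find "$root" -regex ".*\.${fileExtension}" >> "/home/mux/Desktop/AllFilesOf${fileExtension}.txt"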
Update:
As @CharlesDuffy observed, the redirection in that case is performed with the permissions of the user / process running the script, just as it is in your original example. I have supposed that that is intentional and correct -- i.e. that the script is being run by user 'mux' or by another user that has access to mux's Desktop directory and to any existing file in it that the script might try to create or update. If that is not the case, and you need the redirection, too, to be privileged, then you can achieve it like so:
sudo -s <<END
find $root -regex ".*\.${fileExtension}" >> /home/mux/Desktop/AllFilesOf${fileExtension}.txt
END
That runs a shell via sudo, with its input redirected from the heredoc. The variable expansions are performed in the host shell from which sudo is executed. In this case the redirection is performed with the identity obtained via sudo, which affects access control, as well as ownership of the file if a new one is created. You could add a chown command if you don't want the output files to be owned by root.

Run command from current terminal in bash script

I would like to issue a command that is available in my current shell (the one I am running my bash script from). As it is right now, if I put the uecap command in my bash script as-is, the script fails with 'uecap: command not found'.
If I issue the uecap command directly from my current shell, though, it works fine.
My current shell is a separate process with its own process ID.
Here is my bash script as an example:
#!/bin/sh
while read i
do
cid=`echo "$i" | cut -b1`
rid=`echo "$i" | cut -b9-18`
rm -rf $1_TEMP
uecap -r $rid -c $cid
done < $1ID.log
The way I run the bash script is by issuing this command:
!./bashscript $node.
On the same note, is there a way to run a command using another process from your bash script?
Likely you've just answered your own question. Since uecap is an alias, it's not exported to the processes spawned by your interactive shell, and you can't force the shell to export it: there is no export command for aliases.
What you can do is to create a file containing that alias definition and then source than file from within your script, something like this:
alias uecap > ~/tmp/uecap_alias
and within a shell script
source ~/tmp/uecap_alias
...
uecap -r $rid -c $cid
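One caveat: non-interactive bash does not expand aliases by default, so the script may also need shopt -s expand_aliases before the alias is used:
shopt -s expand_aliases      # bash only: enable alias expansion in scripts
source ~/tmp/uecap_alias
...
uecap -r $rid -c $cid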
you may also try to pass uecap alias definition through environment variables, something like
UECAP_ALIAS="$(alias uecap)" ./your_shell_script
and within the script
eval "$UECAP_ALIAS"
...
uecap ...
but beware that eval "$VAR" is quite fragile with respect to whitespace, quoting and unusual symbols, so it should be used with extra care and only when you're 100% sure about what you're going to eval (think of SQL injection and the like).

Is there a good way to preload or include a script prior to executing another script?

I am looking to execute a script but have it include another script before it executes. The problem is, the included script would be generated and the executed script would be unmodifiable. One solution I came up with, was to actually reverse the include, by having the include script as a wrapper, calling set to set the arguments for the executed script and then dotting/sourcing it. E.g.
#!/bin/bash
# Generated wrapper or include script.
: Performing some setup...
target_script=$1 ; shift
set -- "$#"
. "$target_script"
Where target_script is the script I actually want to run, importing settings from the wrapper.
However, the potential problem I face is that callers of the target script, or even the target script itself, may be expecting $0 to be set to the path of its location on the file system. But because this wrapper approach overrides $0, the value of $0 may be unexpected and could produce undefined behaviour.
Is there another way to perform what is, in effect, an LD_PRELOAD in scripted form through bash, without interfering with its runtime parameters?
I have looked at --init-file or --rcfile, but these only seem to be included for interactive shells.
Forcing interactive mode does seem to allow me to specify --rcfile:
$ bash --rcfile /tmp/x-include.sh -i /tmp/xx.sh
include_script: $0=bash, $BASH_SOURCE=/tmp/x-include.sh
target_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/xx.sh
Content of the x-include.sh script:
#!/bin/bash
echo "include_script: \$0=$0, \$BASH_SOURCE=$BASH_SOURCE"
Content of the xx.sh script:
#!/bin/bash
echo "target_script: \$0=$0, \$BASH_SOURCE=$BASH_SOURCE"
From the bash documentation:
When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
but the value of the PATH variable is not used to search for the file name.
So that settles it then:
BASH_ENV=/tmp/x-include.sh /bin/bash /tmp/xx.sh
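With the two scripts shown above, this preserves $0 for the target script; the output should look something like:
include_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/x-include.sh
target_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/xx.sh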
