I am trying to check the ends of lines of code for semi-colons, as they are causing me some issues on a server I have running. To do this I am using a bash script (as I am more familiar with bash) to read through the lines and return those that don't end with a semi-colon. My bash script is as follows:
while read line
do
if[$line!=*;]
echo $line
fi
done < $1
When I run the script, it reports an error near fi, but I cannot figure it out. I also realize this will return statements like if and while, but that will be fine for my needs.
Given the sample input
use CGI;
print "<html>"
print "<head>";
print "</head>";
print "<body><p> HELLO WORLD </p>";
print "</body>";
print "</html>"
this should be the output
print "<html>"
print "</html>"
I think the easiest way would be with grep. Given an input.txt file like this:
spam
foo;<Space><Space>
sausage
baked;<Tab>
beans
unladen;
You could do
grep -v ';\s*$' input.txt
and obtain
spam
sausage
beans
grep's -v flag means "return all lines not matching this regular expression", so it will skip all lines ending with semi-colons.
If your lines also have spaces after the semi-colons, the \s* ("any sequence of whitespace characters, possibly empty") makes grep skip those lines as well.
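Note that \s is a GNU grep extension; if you need to stay strictly POSIX, the character class below is equivalent (same technique, just a different spelling):
grep -v ';[[:space:]]*$' input.txt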
The reason you have a problem is that your if statement requires a then. You also need some more spaces, and you should quote your variables. Even then it still won't work: the comparison is wrong too, because that is not how [ matches strings against a pattern. You can use bash's [[ instead:
while read line
do
if [[ "$line" != *\; ]]
then
echo "$line"
fi
done < "$1"
But even with all that, what you really should be doing is:
grep -v ';$' "$1"
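If for some reason you have to stay in plain POSIX sh, where [[ is not available, a case statement gives you the same suffix match; a minimal sketch of the loop:
while read -r line
do
    case $line in
        *\;) ;;                          # ends with a semi-colon: skip it
        *) printf '%s\n' "$line" ;;
    esac
done < "$1"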
To list the lines that contain a semi-colon, and the lines that do not end with one, in a file on Linux:
$ grep ';' file_name
$ grep -v ';\s*$' file_name
Related
I have written a script to change file ownerships based on an input list that is read in. My script works fine on directories without spaces in their names. However, it fails on files in directories with spaces in their names. I would also like to capture the output from the chown command to a file. Could anyone help?
Here is my script, in ksh:
#!/usr/bin/ksh
newowner=eg27395
dirname=/home/sas/sastest/
logfile=chowner.log
date > $dir$logfile
command="chown $newowner:$newowner"
for fname in list
do
in="$dirname/$fname"
if [[ -e $in ]]
then
while read line
do
tmp=$(print "$line"|awk '{if (substr($2,1,1) == "/" ) print $2; if (substr($0,1,1) == "/" ) print '})
if [[ -e $tmp ]]
then
eval $command \"$tmp\"
fi
done < $in
else
echo "input file $fname is not present. Check file location in the script."
fi
done
A couple of other errors:
date > $dir$logfile -- no $dir variable defined
to safely read from a file: while IFS= read -r line
But to answer your main concern, don't try to build up the command so dynamically: don't bother with the $command variable, don't use eval, and quote the variable.
chmod "$newowner:$newowner" "$tmp"
The eval is stripping the quotes on this line
command="chown $newowner:$newowner"
In order to get the line to work with spaces you will need to provide backslashed quotes
command="chown \"$newowner:$newowner\""
This way the command that eval actually runs is
chown "$newowner:$newowner"
Also, you probably need quotes around this variable setting, although you'll need to tweak the syntax
tmp="$(print "$line"|awk '{if (substr($2,1,1) == "/" ) print $2; if (substr($0,1,1) == "/" ) print '})"
To capture the output (including errors) you can add > file.out 2>&1, where file.out is the name of the file ... In order to get that working with eval as you are using it, you will need to backslash any special characters, much in the same way you need to backslash the double quotes.
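A quick way to convince yourself of what eval ends up running, with echo standing in for chown (the names here are purely illustrative):
owner=eg27395
file="dir with spaces/file.txt"
command="echo \"$owner:$owner\""
eval $command \"$file\"
# eval re-parses this as: echo "eg27395:eg27395" "dir with spaces/file.txt"
# so echo receives the filename as a single argument, spaces and all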
Your example code suggests that list is a "meta" file: a list of files, each of which itself lists the files to be changed. When you only have one such file you can remove the while loop.
When list is a variable holding filenames, you need echo "${list}" | while ... instead.
It is not completely clear why you sometimes want to start with the third field. It seems that sometimes you have 2 words before the filename and want them to be ignored. Cutting the string on spaces becomes a problem when your filenames have spaces as well. The solution is to look for a space followed by a slash: that space is not part of a filename, and everything up to that space can be deleted.
newowner=eg27395
# The slash on the end is not really part of the dir name, doesn't matter for most commands
dirname=/home/sas/sastest
logfile=chowner.log
# Add braces, quotes and change dir into dirname
date > "${dirname}/${logfile}"
# Line with command not needed
# Is list an input file? It is streamed in with "< list" after the done of the while loop
while IFS= read -r fname; do
in="${dirname}/${fname}"
# Quotes are important
if [[ -e "$in" ]]; then
# get the filenames with a sed construction, and give them to chown with xargs (GNU xargs -d '\n' keeps names with spaces whole)
# The sed construction is made for the situation that in a line with a space followed by a slash
# the filename starts with the slash
# sed is with # to avoid escaping the slashes
# Do not redirect the output here but after the loop.
sed 's#.* /#/#' "${in}" | xargs -d '\n' chown "${newowner}:${newowner}"
else
echo "input file ${fname} is not present. Check file location in the script."
fi
done < list >> "${dirname}/${logfile}"
I'm new to UNIX and have this really simple problem:
I have a text-file (input.txt) containing a string in each line. It looks like this:
House
Monkey
Car
And inside my shell script I need to read this input file line by line to get to a variable like this:
things="House,Monkey,Car"
I know this sounds easy, but I just couldn't find any simple solution for this. My closest attempt so far:
#!/bin/sh
things=""
addToString() {
things="${things},$1"
}
while read line; do addToString $line ;done <input.txt
echo $things
But this won't work. Based on my Google research I thought the while loop would create a new subshell, but I was wrong there (see the comment section). Nevertheless, the variable "things" was still not available in the echo later on. (I cannot just put the echo inside the while loop, because I need to work with that string later on.)
Could you please help me out here? Any help will be appreciated, thank you!
What you proposed works fine! I've only made two changes here: Adding missing quotes, and handling the empty-string case.
things=""
addToString() {
if [ -n "$things" ]; then
things="${things},$1"
else
things="$1"
fi
}
while read -r line; do addToString "$line"; done <input.txt
echo "$things"
If you were piping into while read, this would create a subshell, and that would eat your variables. You aren't piping -- you're doing a <input.txt redirection. No subshell, code works without changes.
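A quick way to see the difference for yourself (demo.txt is just a throwaway file for illustration):
printf '%s\n' House Monkey Car > demo.txt

count=0
cat demo.txt | while read -r line; do count=$((count + 1)); done
echo "$count"    # prints 0: the pipeline ran the loop in a subshell

count=0
while read -r line; do count=$((count + 1)); done < demo.txt
echo "$count"    # prints 3: the redirection keeps the loop in the current shell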
That said, there are better ways to read lists of items into shell variables. On any version of bash after 3.0:
IFS=$'\n' read -r -d '' -a things <input.txt # read into an array
printf -v things_str '%s,' "${things[@]}" # write array to a comma-separated string
echo "${things_str%,}" # print that string w/o trailing comma
...on bash 4, that first line can be:
readarray -t things <input.txt # read into an array
This is not a shell solution, but the truth is that solutions in pure shell are often excessively long and verbose. For string processing like this it is better to use specialised tools that are part of the “default” Unix environment.
sed ':b;N;$!bb;s/\n/,/g' < input.txt
If you want to omit empty lines, then:
sed ':b;N;$!bb;s/\n\n*/,/g' < input.txt
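For this particular join the standard paste utility is arguably even simpler: -s serialises all the input lines into one, joined with the -d delimiter.
paste -sd, input.txt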
Speaking about your solution, it should work, but you should really always use quotes where applicable. E.g. this works for me:
things=""
while read line; do things="$things,$line"; done < input.txt
echo "$things"
(Of course, there is an issue with this code, as it outputs a leading comma. If you want to skip empty lines, just add an if check.)
This may or may not work, depending on the shell you are using. On my Ubuntu 14.04/x64, it works with both bash and dash.
To make it more reliable and independent from the shell's behavior, you can try to put the whole block into a subshell explicitly, using the (). For example:
(
things=""
addToString() {
things="${things},$1"
}
while read line; do addToString "$line"; done
echo "$things"
) < input.txt
P.S. You can use something like this to avoid the initial comma. Without bash extensions (using short-circuit logical operators instead of the if for shortness):
test -z "$things" && things="$1" || things="${things},${1}"
Or with bash extensions:
things="${things}${things:+,}${1}"
P.P.S. How I would have done it:
tr '\n' ',' < input.txt | sed 's!,$!\n!'
You can do this too:
#!/bin/bash
while read -r i
do
[[ $things == "" ]] && things="$i" || things="$things","$i"
done < <(grep . input.txt)
echo "$things"
Output:
House,Monkey,Car
N.B:
I used grep to deal with empty lines and with the possibility that there is no newline at the end of the file. (A normal while read will fail to read the last line if there is no newline at the end of the file.)
Can someone please help with this, because I can't seem to find a solution? I have the following script that works fine:
#!/bin/bash
#Checks the number of lines in the userdomains file
NUM=`awk 'END {print NR}' /etc/userdomains.hristian`;
echo $NUM
#Prints out a particular line from the file (should work with $NUM eventually)
USER=`sed -n 4p /etc/userdomains.hristian`
echo $USER
#Edits the output so that only the username is left
USER2=`echo $USER | awk '{print $NF}'`
echo $USER2
However, when I substitute the 4 on line 12 with the variable $NUM, like this, it doesn't work:
USER=`sed -n $NUMp /etc/userdomains.hristian`
I tried a number of different combinations of quotes and ${}, however none of them seem to work because I'm a BASH newbie. Help please :)
I'm not sure exactly what you've already tried but this works for me:
$ cat out
line 1
line 2
line 3
line 4
line 5
$ num=4
$ a=`sed -n ${num}p out`
$ echo "$a"
line 4
To be clear, the issue here is that you need to separate the expansion of $num from the p in the sed command. That's what the curly braces do.
Note that I'm using lowercase variable names. Uppercase ones should be reserved for use by the shell. I would also recommend using the more modern $() syntax for command substitution:
a=$(sed -n "${num}p" out)
The double quotes around the sed command aren't necessary but they don't do any harm. In general, it's a good idea to use them around expansions.
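Applied to the script in your question, with lower-case names and $( ) substitution throughout, it might look something like this (same /etc/userdomains.hristian file as in your script):
num=$(awk 'END {print NR}' /etc/userdomains.hristian)
echo "$num"

user=$(sed -n "${num}p" /etc/userdomains.hristian)
echo "$user"

user2=$(echo "$user" | awk '{print $NF}')
echo "$user2"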
Presumably the script in your question is a learning exercise, which is why you have done all of the steps separately. For the record, you could do the whole thing in one go like this:
awk 'END { print $NF }' /etc/userdomains.hristian
In the END block, the values from the last line in the file can still be accessed, so you can print the last field directly.
You're trying to evaluate the variable $NUMp rather than $NUM. Try this instead:
USER=`sed -n ${NUM}p /etc/userdomains.hristian`
In shell scripts I usually append a string to a variable with "${variable} end". However, I have a file "file.txt" in which I want "end" appended to all lines. So on the command line I do, for instance, for i in `cat file.txt`; do echo "${i} end"; done. But the word "end" (plus the space) will not be appended but prepended. The same thing happens when I use a while loop. Could anybody tell me what is going on there? I am using GNU bash version 4.2.37 on LinuxMint13 64bit (both Cinnamon and Mate).
Thank you for any help!
You should use a while loop instead of a for loop, as explained here.
while IFS= read -r line
do
echo "$line end"
done < "file.txt"
It may just be your syntax - don't forget do. That is:
for i in `cat file.txt`; do echo "${i} end"; done
If you're asking how to make a new file with "end" appended to each line, try this:
for i in `cat file.txt`; do echo "${i} end" >> some_new_file; done
Is using a loop the only option? If all you want to do is append something to the end of every line, it's probably easier to use sed:
sed -i -e 's/.*/& end/' file.txt
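The & in the replacement stands for the whole matched line, so this appends " end" to every line in place. An equivalent form simply anchors on the end of each line:
sed -i 's/$/ end/' file.txt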
I have noticed for a while that read never actually reads the last line of a file if there is no "newline" character at the end of it. This is understandable if one considers that, as long as there is no "newline" character in a file, it is as if it contained 0 lines (which is quite hard to accept!). See, for example, the following:
$ echo 'foo' > bar ; wc -l bar
1 bar
But...
$ echo -n 'bar' > foo ; wc -l foo
0 foo
The question is then: how can I handle such situations when using read to process files which were not created or modified by me, and about which I don't know whether they actually end with a "newline" character?
read does, in fact, read an unterminated line into the assigned var ($REPLY by default). It also returns false on such a line, which just means ‘end of file’; directly using its return value in the classic while loop thus skips that one last line. If you change the loop logic slightly, you can process files that are not newline-terminated correctly, without the need for prior sanitisation, with read:
while read -r || [[ -n "$REPLY" ]]; do
# your processing of $REPLY here
done < "/path/to/file"
Note this is much faster than solutions relying on externals.
Hat tip to Gordon Davisson for improving the loop logic.
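The same idea works with an explicit variable name instead of the default $REPLY:
while IFS= read -r line || [[ -n "$line" ]]; do
    # your processing of "$line" here
    printf '%s\n' "$line"
done < "/path/to/file"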
POSIX requires that any line in a file have a newline character at the end to count as a line. But this site offers a solution to exactly the scenario you are describing. The final product is this chunklet:
newline='
'
lastline=$(tail -n 1 file; echo x); lastline=${lastline%x}
[ "${lastline#"${lastline%?}"}" != "$newline" ] && echo >> file
# Now file is sane; do our normal processing here...
If you must use read, try this:
awk '{ print $0}' foo | while read line; do
echo "the line is $line"
done
as awk recognizes the last line even without the newline character, and prints it with one added (so read can then pick it up)
This is more or less a combination of the answers given so far.
It does not modify the files in place.
(cat file; tail -c1 file | grep -qx . && echo) | while read line
do
...
done