Bash script output not replacing source file content - bash

I would be grateful for education on my question. The goal is to use the filename of an image to create alternate text in Markdown image references for multiple instances in a large number of Markdown files. (I realize from an accessibility standpoint this is a far-from-optimal practice to create alternate text - this is a temporary solution.) For example, I would like:
![](media/image-dir/image-file-with-hyphens.png)
to become
![image file with hyphens](media/image-dir/image-file-with-hyphens.png)
Current script:
for file in *.md; do
  while read -r line; do
    if [[ $line =~ "![]" ]]; then
      # CREATE ALTERNATIVE TEXT OUT OF IMAGE FILENAME
      # get text after last instance of / in filepath
      processLine=`echo $line | grep -oE "[^\/]+$"`
      # remove image filetypes
      processLine2=`echo $processLine | sed 's/.png)//g'`
      processLine3=`echo $processLine2 | sed 's/.jpg)//g'`
      # remove numbers at end of filename
      processLine4=`echo $processLine3 | sed 's/[0-9+]$//g'`
      # remove hyphens in filename
      processLine5=`echo $processLine4 | sed 's/-/ /g'`
      # PUT ALTERNATIVE TEXT IN IMAGE FILEPATH
      # trim ![ off front of original line
      assembleLine2=`echo $line | sed 's/!\[//g'`
      # string together `![` + filename without hyphens + rest of image filepath
      assembleLine3='!['"$processLine5"''"$assembleLine2"''
    fi
  done < $file > $file.tmp && mv $file.tmp $file
done
As it stands, the file comes out blank.
If I add echo $file before while read -r line, the file maintains its original state, but all image references are as follows:
Text
![](media/image-dir/image-file-with-hyphens.png)
![image file with hyphens](media/image-dir/image-file-with-hyphens.png)
Text
If I remove > $file.tmp && mv $file.tmp $file, the console returns nothing.
I've never encountered this in any other Bash script and, at least from the terms I'm using, am not finding the right help in any resource. If anyone is able to help me understand my errors or point me in the right direction, I would be grateful.

If your aim is to replace
![](media/image-dir/image-file-with-hyphens.png)
with
![image file with hyphens](media/image-dir/image-file-with-hyphens.png),
then you can try this sed:
for file in *.md; do
  sed -E 's/(\S+\[)(\].\S+.)/\1image file with hyphens\2/' "$file" > "$file.tmp"
done

Your loop computes an assembleLine3, but you never run echo "${assembleLine3}", and the non-matching lines aren't printed either, so nothing is written to stdout. Meanwhile the > $file.tmp redirection still creates an empty tmp file, and the mv then replaces your Markdown file with it; that is why the file comes out blank.
When you are debugging (or trying to ask a minimal question), remove the first while-loop and most of the processing. Testing the following code is easy:
while read -r line; do
  if [[ $line =~ "![]" ]]; then
    processLine=`echo $line | grep -oE "[^\/]+$"`
  fi
done < testfile.md
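Putting that together: the loop needs to print every line, rewritten or not, so the redirection has something to capture. A minimal working sketch follows; the sample file, the quoting, and the pattern substitution on line are my additions rather than the asker's original code:

```shell
#!/usr/bin/env bash
# demo setup: one sample Markdown file (invented for illustration)
workdir=$(mktemp -d) && cd "$workdir"
printf 'Text\n![](media/image-dir/image-file-with-hyphens.png)\n' > sample.md

for file in *.md; do
  while IFS= read -r line; do
    if [[ $line == *'![]'* ]]; then
      # build alt text from the filename: basename, then strip the
      # extension, trailing digits, and hyphens
      name=$(printf '%s' "$line" | grep -oE '[^/]+$')
      name=$(printf '%s' "$name" | sed -E 's/\.(png|jpg)\)$//; s/[0-9]+$//; s/-/ /g')
      line=${line/'![]'/"![${name}]"}
    fi
    printf '%s\n' "$line"   # the missing step: write every line back out
  done < "$file" > "$file.tmp" && mv "$file.tmp" "$file"
done

cat sample.md
```

After the loop, sample.md contains the rewritten image reference with the alt text filled in.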


Updating a config file based on the presence of a specific string

I want to be able to comment and uncomment lines which are "managed" using a bash script.
I am trying to write a script which will update all of the config lines which have the word #managed after them and remove the preceding # if it exists.
The rest of the config file needs to be left unchanged. The config file looks like this:
configFile.txt
#config1=abc #managed
#config2=abc #managed
config3=abc #managed
config3=abc
This is the script I have created so far. It iterates the file, finds lines which contain "#managed" and detects if they are currently commented.
I need to then write this back to the file, how do I do that?
manage.sh
#!/bin/bash
while read line; do
  STR='#managed'
  if grep -q "$STR" <<< "$line"; then
    echo "debug - this is managed"
    firstLetter=${line:0:1}
    if [ "$firstLetter" = "#" ]; then
      echo "Remove the initial # from this line"
    fi
  fi
  echo "$line"
done < configFile.txt
Using your approach with grep and sed:
str='#managed$'
file=ConfigFile.txt
grep -q "^#.*$str" "$file" && sed "/^#.*$str/s/^#//" "$file"
Looping through files ending in .txt:
#!/usr/bin/env bash
str='#managed$'
for file in *.txt; do
grep -q "^#.*$str" "$file" &&
sed "/^#.*$str/s/^#//" "$file"
done
In-place editing with sed requires the -i option, but its syntax varies between versions of sed: GNU sed accepts a bare -i, while BSD sed requires a suffix argument (use -i '' for no backup, or -i.bak to keep one).
On a Mac, ed should be installed by default, so just replace the sed part with:
printf '%s\n' "g/^#.*$str/s/^#//" ,p Q | ed -s "$file"
Replace the Q with w to actually write back the changes to the file.
Remove the ,p if no output to stdout is needed/required.
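The grep/sed answer can be checked against the sample config from the question; a quick self-contained sketch (the temporary directory is only for the demo):

```shell
#!/usr/bin/env bash
# demo on the sample config from the question
workdir=$(mktemp -d) && cd "$workdir"
cat > configFile.txt <<'EOF'
#config1=abc #managed
#config2=abc #managed
config3=abc #managed
config3=abc
EOF

str='#managed$'
# uncomment only the commented, managed lines; everything else passes through
sed "/^#.*$str/s/^#//" configFile.txt
```

This prints all four lines, with the leading # removed from the first two; the unmanaged config3=abc line is left untouched.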
On a side note, embedding grep and sed in a shell loop that reads a text file line by line is considered bad practice by experienced shell users/developers/coders. Say the file has 100k lines; grep and sed would then have to run 100k times too!
This sed one-liner should do the trick:
sed -i.orig '/#managed/s/^#//' configFile.txt
It deletes the # character at the beginning of the line if the line contains the string #managed.
I wouldn't do it in bash (because that would be slower than sed or awk, for instance), but if you want to stick with bash:
#! /bin/bash
while IFS= read -r line; do
  if [[ $line = *'#managed'* && ${line:0:1} = '#' ]]; then
    line=${line:1}
  fi
  printf '%s\n' "$line"
done < configFile.txt > configFile.tmp
mv configFile.txt configFile.txt.orig && mv configFile.tmp configFile.txt

For Loop Issues with CAT and tr

I have about 700 text files that consist of config output which uses various special characters. I am using this script to remove the special characters so I can then run a different script referencing an SED file to remove the commands that should be there leaving what should not be in the config.
I got the below from Remove all special characters and case from string in bash but am hitting a wall.
When I run the script, it continues to loop and writes the script into the output file. Ideally, it would just take out the special characters and create a new file with the updated information. I have not gotten to the point of removing the previous text file, since it probably won't be needed. Any insight is greatly appreciated.
for file in *.txt
do
  cat * | tr -cd '[:alnum:]\n\r' | tr '[:upper:]' '[:lower:]' >> "$file" >> "$file".new_file.txt
done
A less-broken version of this might look like:
#!/usr/bin/env bash
for file in *.txt; do
  [[ $file = *.new_file.txt ]] && continue ## skip files created by this same script
  tr -cd '[:alnum:]\n\r' <"$file" \
    | tr '[:upper:]' '[:lower:]' \
    >> "$file".new_file.txt
done
Note:
We're referring to the "$file" variable being set by for.
We aren't using cat. It slows your script down with no compensating benefits whatsoever. Instead, using <"$file" redirects from the specific input file being iterated over at present.
We're skipping files that already have .new_file.txt extensions.
We only have one output redirection (to the new_file.txt version of the file; you can't safely write to the file you're using as input in the same pipeline).
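A quick demo of the corrected loop; the sample config line is invented for illustration:

```shell
#!/usr/bin/env bash
# demo of the fixed loop on one invented config file
workdir=$(mktemp -d) && cd "$workdir"
printf 'Interface GigabitEthernet0/1!\n' > router.txt

for file in *.txt; do
  [[ $file = *.new_file.txt ]] && continue
  tr -cd '[:alnum:]\n\r' <"$file" | tr '[:upper:]' '[:lower:]' >> "$file".new_file.txt
done

cat router.txt.new_file.txt
```

The space, slash, and exclamation mark are deleted and the rest is lowercased, leaving interfacegigabitethernet01.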
Using GNU sed:
sed -i 's/[^[:alnum:]\n\r]//g;s/./\l&/g' *.txt

Read a file in a Bash script

I have a file in my file system that I want to read from a bash script. I don't want to read the whole file, as it is very huge; I only want to read selected values from it. Below is my file format:
Name=TEST
Add=TEST
LOC=TEST
The file will have data like the above. From that I want to get only the Add value into a variable. Could you please suggest how I can do this?
As of now i am doing this to read the file:
file="data.txt"
while IFS= read line
do
  # display $line or do something with $line
  echo "$line"
done < "$file"
Use the right tool for the job: Awk, in this case, to speed things up!
dateValue="$(awk -F"=" '$1=="Add"{print $2; exit}' file)"
printf "%s\n" "$dateValue"
TEST
The idea is to split input lines on = as the delimiter. The awk logic checks whether the first field ($1) equals Add and prints the corresponding value.
The exit after print is optional: it quits processing as soon as the Add line is found, which helps speed things up if the file is huge, as you have indicated.
You could rewrite your loop this way, notice the break after you got your line:
while IFS='=' read -r key value; do
  if [[ $key == "Add" ]]; then
    # your logic, using "$value"
    break
  fi
done < "$file"
If your intention is to just get the very first occurrence of "Add=", then you could use grep this way:
value=$(grep -m 1 '^Add=' "$file" | cut -f2 -d=)
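Both the awk and the grep/cut approaches can be checked against the sample file from the question; a self-contained sketch (the temporary directory is only for the demo):

```shell
#!/usr/bin/env bash
# demo setup: the sample file from the question
workdir=$(mktemp -d) && cd "$workdir"
printf 'Name=TEST\nAdd=TEST\nLOC=TEST\n' > data.txt

# awk: split on '=', print the value whose key is "Add", stop at first match
dateValue="$(awk -F'=' '$1=="Add"{print $2; exit}' data.txt)"
printf '%s\n' "$dateValue"

# grep/cut alternative: first line starting with "Add=", keep what follows '='
value=$(grep -m 1 '^Add=' data.txt | cut -f2 -d=)
printf '%s\n' "$value"
```

Both print TEST.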

How to Create several files from a list in a text file?

Say I have a list in a file which contains filenames (without spaces):
filename
othername
somethingelse
...
Each line is followed by a CR; one "filename" per line.
Is there some kind of touch script I can use to create hundreds of files with titles set by a list?
It would also be incredibly helpful if I could fill each file with content upon creation.
Any bashers have any tips? I'm using Ubuntu 10.04.
Thanks
To simply create the files, assuming that file_full_of_files_names is a text file with whitespace-delimited filenames:
cat file_full_of_files_names | xargs touch
To actually fill them with content first, from a file called initial_content:
cat file_full_of_files_names | tr ' \t' '\n\n' | while read filename; do
  if test -f "$filename"; then
    echo "Skipping \"$filename\", it already exists"
  else
    cp -i initial_content "$filename"
  fi
done
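A self-contained sketch of the fill-with-content variant, assuming one filename per line as the question states; the sample names come from the question, and the template content is invented:

```shell
#!/usr/bin/env bash
# demo setup: a list of names and a template file (contents invented)
workdir=$(mktemp -d) && cd "$workdir"
printf 'filename\nothername\nsomethingelse\n' > file_full_of_files_names
printf 'initial content\n' > initial_content

# create each file from the template, skipping any that already exist
while read -r name; do
  [ -e "$name" ] || cp initial_content "$name"
done < file_full_of_files_names

ls
```

Each listed file now exists and starts out as a copy of the template.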

Renaming files in a UNIX directory - shell scripting

I have been trying to write a script that will take the current working directory, scan every file, and check whether it is a .txt file. Then, for every text file, check whether its name contains an underscore anywhere and, if it does, change each underscore to a hyphen.
I know that this is a tall order, but here is the rough code I have so far:
#!/bin/bash
count=1
while ((count <= $#))
do
  case $count in
    "*.txt") sed 's/_/-' $count
  esac
  ((count++))
done
What I was thinking is that this would take the files in the current working directory as the arguments and check every file(represented by $count or the file at "count"). Then for every file, it would check if it ended in .txt and if it did it would change every underscore to a hyphen using sed. I think one of the main problems I am having is that the script is not reading the files from the current working directory. I tried included the directory after the command to run the script, but I think it took each line instead of each file (since there are 4 or so files on every line).
Anyway, any help would be greatly appreciated! Also, I'm sorry that my code is so bad, I am very new to UNIX.
for fname in ./*_*.txt; do
  new_fname=$(printf '%s' "$fname" | sed 's,_,-,')
  mv "$fname" "$new_fname"
done
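A quick demo of that loop on invented filenames; note I've added the g flag to the sed substitution so every underscore is replaced, since s,_,-, alone only replaces the first one:

```shell
#!/usr/bin/env bash
# demo on invented filenames (temp dir only for the demo)
workdir=$(mktemp -d) && cd "$workdir"
touch my_notes_v2.txt plain.txt

for fname in ./*_*.txt; do
  new_fname=$(printf '%s' "$fname" | sed 's,_,-,g')  # g: replace every underscore
  mv "$fname" "$new_fname"
done

ls
```

Only my_notes_v2.txt matches the ./*_*.txt glob and is renamed to my-notes-v2.txt; plain.txt is left alone.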
why not:
rename 's/_/-/' *.txt
$ ls *.txt | while read -r file; do echo $file |
grep > /dev/null _ && mv $file $(echo $file | tr _ -); done
(untested)
Thanks for all your input guys! All in all, I think the solution I found most appropriate for my skill level was:
ls *.txt | while read -r file; do
  mv $file $(echo $file | sed 's,_,-,')
done
This got what I needed done, and for my purposes I am not too worried about the spaces. But thanks for all your wonderful suggestions, you are all very intelligent!
