Inserting a line with special characters (', `, $, etc.) into a file using bash

Hello, I need help with a bash script.
I need to insert the line below into a file at a specific line number:
su - test -c '$HOME/scripts/auto start' >> /home/test/auto_start.`date +%h_%d_%y`.log 2>&1
In the bigger problem statement, the line to insert, the target line number, and the target file are all dynamic and need to be fetched from a CSV file.
The csv file looks like:
$ cat test_file.csv
TARGET_NODE,TARGET_FILE,TARGET_LINE,NEW_LINE
10.10.10.10,test_file.csv,2,su - test -c '$HOME/scripts/auto start' >> /home/test/auto_start.`date +%h_%d_%y`.log 2>&1
I have tried using sed in many ways, but failed. We can't use Python due to system limitations.
The simple way to test it:
seq 4|sed 's/<LINE_NUMBER>/<NEW_TEXT>/'
So we tried using:
seq 4|sed 's/2/'su - test -c '$HOME/scripts/auto start' >> /home/test/auto_start.`date +%h_%d_%y`.log 2>&1'/'

You didn't say if you wanted the new line printed before, after, or instead of the existing line, so here's all 3:
$ seq 4 | awk -F, 'NR==FNR{line[$3]=$4; next} FNR in line{print line[FNR]} 1' test_file.csv -
1
su - test -c '$HOME/scripts/auto start' >> /home/test/auto_start.`date +%h_%d_%y`.log 2>&1
2
3
4
$ seq 4 | awk -F, 'NR==FNR{line[$3]=$4; next} 1; FNR in line{print line[FNR]}' test_file.csv -
1
2
su - test -c '$HOME/scripts/auto start' >> /home/test/auto_start.`date +%h_%d_%y`.log 2>&1
3
4
$ seq 4 | awk -F, 'NR==FNR{line[$3]=$4; next} FNR in line{$0=line[FNR]} 1' test_file.csv -
1
su - test -c '$HOME/scripts/auto start' >> /home/test/auto_start.`date +%h_%d_%y`.log 2>&1
3
4
The above will work using any awk, in any shell, for any string from that CSV, on every UNIX box.
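To run that against the real target file named in the CSV instead of seq 4, a minimal sketch along the same lines (the temp-file rename is just one way of editing in place, and the first data row is assumed to describe the file to change):
csv=test_file.csv
target=$(awk -F, 'NR==2{print $2}' "$csv")    # TARGET_FILE from the first data row
awk -F, 'NR==FNR{line[$3]=$4; next} FNR in line{$0=line[FNR]} 1' "$csv" "$target" > "$target.tmp" &&
mv "$target.tmp" "$target"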

If you have no system limitations preventing the use of perl:
#!/usr/bin/env bash
seq 4 | perl -pe '
BEGIN {
open CSV, shift; # Open test_file.csv
<CSV>; # Ignore first line
# Get line number and command
($n, $c) = <CSV> =~ /^(?:[^,]+,){2}([^,]+),(.*)/;
}
{ s/.*/$c/ if ($. == $n); } # Replace line with number $n
' test_file.csv

Just quote the line properly and use a different s-command separator, since / appears in the replacement string. Also escape characters that are special in the replacement string, like & or \.
$ seq 4 | sed 's~2~su - test -c '\''$HOME/scripts/auto start'\'' >> /home/test/auto_start.`date +%h_%d_%y`.log 2>\&1~'
1
su - test -c '$HOME/scripts/auto start' >> /home/test/auto_start.`date +%h_%d_%y`.log 2>&1
3
4
with awk:
seq 4 | awk -vvar='su - test -c '\''$HOME/scripts/auto start'\'' >> /home/test/auto_start.`date +%h_%d_%y`.log 2>&1' 'NR==2{$0=var}1'
I discourage the use of backticks (`). Do not use them; use $(...) instead.
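For example, the log file name from the question reads more cleanly with command substitution (same behaviour, but $(...) nests and quotes better):
logfile="/home/test/auto_start.$(date +%h_%d_%y).log"
su - test -c '$HOME/scripts/auto start' >> "$logfile" 2>&1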

Related

Loop Script from Input File

I have a reference file with device names in it, for example WABEL8499IPM101. I'm using this script to take the base name (without the last 3 digits), look at the reference file, and see what is already used. If 101 is used, it will create a file for me with 102 and 103 if I request 2 in total. I'm looking to use an input file to run it multiple times, and I'm also trying to figure out how to start at 101 if no matching name is found in the reference file.
I would like to loop this using an input file instead of manually entering bash test.sh WABEL8499IPM 2 each time: build an input file of all the names that need to be compared and then produce the output. It would also be nice if, when there is no match, it started creating names at WABEL8499IPM101 instead of just WABEL8499IPM1.
Input file example:
ColumnA (BASE NAME) ColumnB (QUANTITY)
WABEL8499IPM 2
Script:
SRCFILE="~/Desktop/deviceinfo.csv"
LOGDIR="~/Desktop/"
LOGFILE="$LOGDIR/DeviceNames.csv"
# base name, such as "WABEL8499IPM"
device_name=$1
# quantity, such as "2"
quantityNum=$2
# the largest in sequence, such as "WABEL8499IPM108"
max_sequence_name=$(cat $SRCFILE | grep -o -e "$device_name[0-9]*" | sort --reverse | head -n 1)
# extract the last 3digit number (such as "108") from max_sequence_name
max_sequence_num=$(echo $max_sequence_name | rev | cut -c 1-3 | rev)
# create new sequence_name
# such as ["WABEL8499IPM109", "WABEL8499IPM110"]
array_new_sequence_name=()
for i in $(seq 1 $quantityNum);
do
cnum=$((max_sequence_num + i))
array_new_sequence_name+=($(echo $device_name$cnum))
done
#CODE FOR CREATING OUTPUT FILE HERE
#for fn in ${array_new_sequence_name[@]}; do touch $fn; done;
# write log
for sqn in ${array_new_sequence_name[@]};
do
echo $sqn >> $LOGFILE
done
Usage:
bash test.sh WABEL8499IPM 2
Result in the log file:
WABEL8499IPM109
WABEL8499IPM110
Just wrap a loop around your code instead of assuming the args come in on the command line.
SRCFILE="~/Desktop/deviceinfo.csv"
LOGDIR="~/Desktop/"
LOGFILE="$LOGDIR/DeviceNames.csv"
while read device_name quantityNum
do max_sequence_name=$( grep -o -e "$device_name[0-9]*" $SRCFILE |
sort --reverse | head -n 1)
max_sequence_num=${max_sequence_name: -3}
array_new_sequence_name=()
for i in $(seq 1 $quantityNum)
do cnum=$((max_sequence_num + i))
array_new_sequence_name+=("$device_name$cnum")
done
for sqn in ${array_new_sequence_name[@]};
do echo $sqn >> $LOGFILE
done
done < input.file
I'd maybe pass the input file as the parameter now.
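The question's other wish, starting at 101 when the base name isn't in the reference file yet, isn't handled above; a small hedged addition inside the loop, right after max_sequence_num is set, could be:
# if grep found nothing, max_sequence_num is empty; seed it with 100
# so the first generated name ends in 101
if [ -z "$max_sequence_num" ]; then
    max_sequence_num=100
fi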

How to properly use the grep command to grab and store integers?

I am currently building a bash script for class, and I am trying to use the grep command to grab the values from a simple calculator program and store them in the variables I assign, but I keep receiving a syntax error message when I try to run the script. Any advice on how to fix it? My script looks like this:
#!/bin/bash
addanwser=$(grep -o "num1 + num2" Lab9 -a 5 2)
echo "addanwser"
subanwser=$(grep -o "num1 - num2" Lab9 -s 10 15)
echo "subanwser"
multianwser=$(grep -o "num1 * num2" Lab9 -m 3 10)
echo "multianwser"
divanwser=$(grep -o "num1 / num2" Lab9 -d 100 4)
echo "divanwser"
modanwser=$(grep -o "num1 % num2" Lab9 -r 300 7)
echo "modawser"`
You want to grep the output of a command.
grep reads from either a file or standard input, so you can use any of these equivalent forms:
grep X file # 1. from a file
... things ... | grep X # 2. from stdin
grep X <<< "content" # 3. using here-strings
For this case, you want to use the last one, so that you execute the program and its output feeds grep directly:
grep <something> <<< "$(Lab9 -s 10 15)"
Which is the same as saying:
Lab9 -s 10 15 | grep <something>
So that grep will act on the output of your program. Since I don't know how Lab9 works, let's use a simple example with seq, which prints the numbers from 5 to 15:
$ grep 5 <<< "$(seq 5 15)"
5
15
grep is usually used for finding matching lines of a text file. To actually grab a part of the matched line, other tools such as awk are used.
Assuming the output looks like "num1 + num2 = 54" (i.e. fields are separated by space), this should do your job:
addanwser=$(Lab9 -a 5 2 | awk '{print $NF}')
echo "$addanwser"
Make sure you don't miss the '$' sign before addanwser when echo'ing it.
$NF selects the last field. You may select nth field using $n.
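Putting that approach back into the original script, a minimal sketch, assuming (as the question implies) that Lab9 takes the -a/-s/-m/-d/-r flags with two numbers and prints a line ending in the result (e.g. "num1 + num2 = 7"):
#!/bin/bash
# pipe the calculator's output to awk and keep only the last field (the result)
addanwser=$(Lab9 -a 5 2 | awk '{print $NF}')
echo "$addanwser"
subanwser=$(Lab9 -s 10 15 | awk '{print $NF}')
echo "$subanwser"
# the -m, -d and -r cases follow the same pattern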

Greping multiple lines from one multiline output Bash

I have a command dumpsys power with this output:
POWER MANAGER (dumpsys power)
Power Manager State: mDirty=0x0
mWakefulness=Awake #
mWakefulnessChanging=false
mIsPowered=false
mPlugType=0
mBatteryLevel=67 #
mBatteryLevelWhenDreamStarted=0
mDockState=0
mStayOn=false #
mProximityPositive=false
mBootCompleted=true #
mSystemReady=true #
mHalAutoSuspendModeEnabled=false
mHalInteractiveModeEnabled=true
mWakeLockSummary=0x0
mUserActivitySummary=0x1
mRequestWaitForNegativeProximity=false
mSandmanScheduled=false
mSandmanSummoned=false
mLowPowerModeEnabled=false #
mBatteryLevelLow=false #
mLastWakeTime=134887327 (59454 ms ago) #
mLastSleepTime=134881809 (64972 ms ago) #
mLastUserActivityTime=134946670 (111 ms ago)
mLastUserActivityTimeNoChangeLights=134794061 (152720 ms ago)
mLastInteractivePowerHintTime=134946670 (111 ms ago)
mLastScreenBrightnessBoostTime=0 (134946781 ms ago)
mScreenBrightnessBoostInProgress=false
mDisplayReady=true #
mHoldingWakeLockSuspendBlocker=false
mHoldingDisplaySuspendBlocker=true
Settings and Configuration:
mDecoupleHalAutoSuspendModeFromDisplayConfig=false
mDecoupleHalInteractiveModeFromDisplayConfig=true
mWakeUpWhenPluggedOrUnpluggedConfig=true
mWakeUpWhenPluggedOrUnpluggedInTheaterModeConfig=false
mTheaterModeEnabled=false
mSuspendWhenScreenOffDueToProximityConfig=false
mDreamsSupportedConfig=true
mDreamsEnabledByDefaultConfig=true
mDreamsActivatedOnSleepByDefaultConfig=false
mDreamsActivatedOnDockByDefaultConfig=true
mDreamsEnabledOnBatteryConfig=false
mDreamsBatteryLevelMinimumWhenPoweredConfig=-1
mDreamsBatteryLevelMinimumWhenNotPoweredConfig=15
mDreamsBatteryLevelDrainCutoffConfig=5
mDreamsEnabledSetting=false
mDreamsActivateOnSleepSetting=false
mDreamsActivateOnDockSetting=true
mDozeAfterScreenOffConfig=true
mLowPowerModeSetting=false
mAutoLowPowerModeConfigured=false
mAutoLowPowerModeSnoozing=false
mMinimumScreenOffTimeoutConfig=10000
mMaximumScreenDimDurationConfig=7000
mMaximumScreenDimRatioConfig=0.20000005
mScreenOffTimeoutSetting=60000 #
mSleepTimeoutSetting=-1
mMaximumScreenOffTimeoutFromDeviceAdmin=2147483647 (enforced=false)
mStayOnWhilePluggedInSetting=0
mScreenBrightnessSetting=102
mScreenAutoBrightnessAdjustmentSetting=-1.0
mScreenBrightnessModeSetting=1
mScreenBrightnessOverrideFromWindowManager=-1
mUserActivityTimeoutOverrideFromWindowManager=-1
mTemporaryScreenBrightnessSettingOverride=-1
mTemporaryScreenAutoBrightnessAdjustmentSettingOverride=NaN
mDozeScreenStateOverrideFromDreamManager=0
mDozeScreenBrightnessOverrideFromDreamManager=-1
mScreenBrightnessSettingMinimum=10
mScreenBrightnessSettingMaximum=255
mScreenBrightnessSettingDefault=102
Sleep timeout: -1 ms
Screen off timeout: 60000 ms
Screen dim duration: 7000 ms
Wake Locks: size=0 Suspend Blockers: size=4
PowerManagerService.WakeLocks: ref count=0
PowerManagerService.Display: ref count=1
PowerManagerService.Broadcasts: ref count=0
PowerManagerService.WirelessChargerDetector: ref count=0
Display Power: state=ON #
I want to get the lines marked with # in a format of:
mScreenOffTimeoutSetting=60000
mDisplayReady=true
***
ScreenOfftimeoutSetting = 60000
DisplayReady = true
The command's output can vary from device to device, and some of the lines might not be there or may be in a different place. So if a searched line isn't there, no errors should be generated.
It's not clear exactly what you want. You can use sed to extract variables from the output and do whatever you want with them. Here's an example:
sed -n -e 's/^mSomeName=\(.*\)/newVariable=\1/p' -e 's/^mOtherName=.*+\(.*\)/newVariable2=\1/p' myFile
Explanation:
-n don't output anything by default
-e an expression follows; it's required since we have multiple expressions in place
s/^mSomeName=\(.*\)/newVariable=\1/p if a line starts (^) with mSomeName=, capture what follows (\(.*\)), replace the line with newVariable=\1, where \1 is what got captured, and print it out (p)
s/^mOtherName=.*+\(.*\)/newVariable2=\1/p similar to the previous expression, but captures whatever comes after a + sign and prints it after newVariable2
This does something like:
$ sed -n -e 's/^mSomeName=\(.*\)/newVariable=\1/p' -e 's/^mOtherName=.*+\(.*\)/newVariable2=\1/p' <<<$'mSomeName=SomeValue\nmOtherName=OtherValue+Somethingelse'
newVariable=SomeValue
newVariable2=Somethingelse
<<<$'...' is a way of passing a string with linebreaks \n directly to the command in bash. You can replace it with a file. This command just outputs a string, nothing will get changed.
If you need them in bash variables use eval:
$ eval $(sed -n -e 's/^mSomeName=\(.*\)/newVariable=\1/p' -e 's/^mOtherName=.*+\(.*\)/newVariable2=\1/p' <<<$'mSomeName=SomeValue\nmOtherName=OtherValue+Somethingelse')
$ echo newVariable=$newVariable - newVariable2=$newVariable2
newVariable=SomeValue - newVariable2=Somethingelse
eval will execute the string which in this case set the variable values:
$ eval a=1
$ echo $a
1
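If the goal is the second format from the question (the m prefix stripped and spaces around =), a hedged sed sketch along the same lines; only a few of the marked names are listed here, and any name missing from the output simply prints nothing:
dumpsys power | sed -n -E 's/^ *m(Wakefulness|BatteryLevel|DisplayReady|ScreenOffTimeoutSetting)=(.*)/\1 = \2/p'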
If you want to use just the grep command, you can use the -A (after) and -B (before) options and pipes.
Here is an example with 2 lines.
File test.txt:
test
aieauieaui
test
caieaieaipe
mSomeName=SomeValue
mOtherName=OtherValue+Somethingelse
nothing
blabla
mSomeName=SomeValue2
mOtherName=OtherValue+Somethingelse2
The command to use:
grep -A 1 'mSomeName' test.txt | grep -B 1 'mOtherName'
The output:
mSomeName=SomeValue
mOtherName=OtherValue+Somethingelse
--
mSomeName=SomeValue2
mOtherName=OtherValue+Somethingelse2
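If the names are known in advance, a plain grep alternation over the same test.txt also works in a single pass:
$ grep -E '^(mSomeName|mOtherName)=' test.txt
mSomeName=SomeValue
mOtherName=OtherValue+Somethingelse
mSomeName=SomeValue2
mOtherName=OtherValue+Somethingelse2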

setting awk variables through inlining

I've got this:
./awktest -v fields=`cat testfile`
which ought to set the fields variable to '1 2 3 4 5', which is all that testfile contains.
It returns:
gawk: ./awktest:9: fatal: cannot open file `2' for reading (No such file or directory)
When I do this it works fine.
./awktest -v fields='1 2 3 4 5'
printing fields at the time of error yields:
1
printing fields in the second instance yields:
1 2 3 4 5
When I try it with 12345 instead of 1 2 3 4 5 it works fine in both cases, so it's a problem with the whitespace. What is the problem, and how do I fix it?
This is most likely not an awk question; the shell is the culprit here.
For example, if awktest is:
#!/bin/bash
i=1
for arg in "$#"; do
printf "%d\t%s\n" $i "$arg"
((i++))
done
Then you get:
$ ./awktest -v fields=`cat testfile`
1 -v
2 fields=1
3 2
4 3
5 4
6 5
You see that the file contents are not being handled as a single word.
Simple solution: use double quotes on the command line:
$ ./awktest -v fields="$(< testfile)"
1 -v
2 fields=1 2 3 4 5
The $(< file) construct is a bash shortcut for `cat file` that does not need to spawn an external process.
Or, read the first line of the file in the awk BEGIN block:
awk '
BEGIN {getline fields < "testfile"}
rest of awk program ...
'
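A self-contained sketch of that idea (input.txt is just a placeholder for the data awk should actually process; testfile holds the 1 2 3 4 5 line):
awk '
BEGIN {
    getline fields < "testfile"   # read the first line of testfile
    n = split(fields, f, " ")     # f[1] .. f[n] now hold the individual values
}
{ print FNR, f[1], f[n] }         # use those values alongside the normal input
' input.txt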
./awktest -v fields="`cat testfile`"
#note that:
#./awktest -v fields='`cat testfile`'
#does not work

How do I pick random unique lines from a text file in shell?

I have a text file with an unknown number of lines. I need to grab some of those lines at random, but I don't want there to be any risk of repeats.
I tried this:
jot -r 3 1 `wc -l<input.txt` | while read n; do
awk -v n=$n 'NR==n' input.txt
done
But this is ugly, and doesn't protect against repeats.
I also tried this:
awk -vmax=3 'rand() > 0.5 {print;count++} count>max {exit}' input.txt
But that obviously isn't the right approach either, as I'm not guaranteed even to get max lines.
I'm stuck. How do I do this?
This might work for you:
shuf -n3 file
shuf is one of GNU coreutils.
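For example, to pull 3 unique random lines and feed them into a loop like the one in the question:
shuf -n 3 input.txt | while IFS= read -r line; do
    printf '%s\n' "$line"
done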
If you have Python accessible (change the 10 to what you'd like):
python -c 'import random, sys; print("".join(random.sample(sys.stdin.readlines(), 10)).rstrip("\n"))' < input.txt
(This will work in Python 2.x and 3.x.)
Also, (again change the 10 to the appropriate value):
sort -R input.txt | head -10
If jot is on your system, then I guess you're running FreeBSD or OSX rather than Linux, so you probably don't have tools like rl or sort -R available.
No worries. I had to do this a while ago. Try this instead:
$ printf 'one\ntwo\nthree\nfour\nfive\n' > input.txt
$ cat rndlines
#!/bin/sh
# default to 3 lines of output
lines="${1:-3}"
# default to "input.txt" as input file
input="${2:-input.txt}"
# First, put a random number at the beginning of each line.
while read line; do
printf '%8d%s\n' $(jot -r 1 1 99999999) "$line"
done < "$input" |
sort -n | # Next, sort by the random number.
sed 's/^.\{8\}//' | # Last, remove the number from the start of each line.
head -n "$lines" # Show our output
$ ./rndlines input.txt
two
one
five
$ ./rndlines input.txt
four
two
three
$
Here's a 1-line example that also inserts the random number a little more cleanly using awk:
$ printf 'one\ntwo\nthree\nfour\nfive\n' | awk 'BEGIN{srand()} {printf("%8d%s\n", rand()*10000000, $0)}' | sort -n | head -n 3 | cut -c9-
Note that different versions of sed (on FreeBSD and OSX) may require the -E option instead of -r to use the ERE dialect instead of BRE in the regular expression, if you want to do that explicitly, though everything I've tested works with escaped bounds in BRE. (Ancient versions of sed (HP/UX, etc.) might not support this notation, but you'd only be using those if you already knew how to do this.)
This should do the trick, at least with bash and assuming your environment has the other commands available:
cat chk.c | while read x; do
echo $RANDOM:$x
done | sort -t: -k1 -n | tail -10 | sed 's/^[0-9]*://'
It basically outputs your file, placing a random number at the start of each line.
Then it sorts on that number, grabs the last 10 lines, and removes that number from them.
Hence, it gives you ten random lines from the file, with no repeats.
For example, here's a transcript of it running three times with that chk.c file:
====
pax$ testprog chk.c
} else {
}
newNode->next = NULL;
colm++;
====
pax$ testprog chk.c
}
arg++;
printf (" [%s] n", currNode->value);
free (tempNode->value);
====
pax$ testprog chk.c
char tagBuff[101];
}
return ERR_OTHER;
#define ERR_MEM 1
===
pax$ _
sort -Ru filename | head -5
will ensure no duplicates. Not all implementations of sort have the -R option.
To get N random lines from FILE with Perl:
perl -MList::Util=shuffle -e 'print shuffle <>' FILE | head -N
Here's an answer using ruby if you don't want to install anything else:
cat filename | ruby -e 'puts ARGF.read.split("\n").uniq.shuffle.join("\n")'
for example, given a file (dups.txt) that looks like:
1 2
1 3
2
1 2
3
4
1 3
5
6
6
7
You might get the following output (or some permutation):
cat dups.txt| ruby -e 'puts ARGF.read.split("\n").uniq.shuffle.join("\n")'
4
6
5
1 2
2
3
7
1 3
Further example from the comments:
printf 'test\ntest1\ntest2\n' | ruby -e 'puts ARGF.read.split("\n").uniq.shuffle.join("\n")'
test1
test
test2
Of course, if you have a file containing only repeated lines of test, you'll get just one line:
printf 'test\ntest\ntest\n' | ruby -e 'puts ARGF.read.split("\n").uniq.shuffle.join("\n")'
test
