I'd like to generate dummy files in bash. The content doesn't matter; random content would be nice, but a file of identical bytes is also acceptable.
My first attempt was the following command:
rm dummy.zip;
touch dummy.zip;
x=0;
while [ $x -lt 100000 ];
do echo a >> dummy.zip;
x=`expr $x + 1`;
done;
The problem was its poor performance. I'm using Git Bash on Windows, so it might be much faster under Linux, but the script is obviously not optimal.
Could you suggest a quicker and nicer way to generate dummy (binary) files of a given size?
You can try the head command:
$ head -c 100000 /dev/urandom >dummy
You may use dd for this purpose:
dd if=/dev/urandom bs=1024 count=5 of=dummy
if = input file
of = output file
bs = block size
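With those parameters the file comes out to 1024 × 5 = 5120 bytes, which can be verified with wc (a quick check, assuming /dev/urandom is available, as it is in Git Bash):

```shell
# create a 5120-byte file of random data and verify its size
dd if=/dev/urandom bs=1024 count=5 of=dummy 2>/dev/null
wc -c dummy    # 5120 dummy
```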
Note that
x=`expr $x + 1`;
isn't the most efficient way to do arithmetic in bash. Do integer arithmetic inside double parentheses:
x=$((x+1))
But for an incremented counter in a loop, the for-loop was invented:
x=0;
while [ $x -lt 100000 ];
do echo a >> dummy.zip;
x=`expr $x + 1`;
done;
in contrast to:
for ((x=0; x<100000; ++x))
do
echo a
done >> dummy.zip
Here are 3 things to note:
unlike with [ ], you don't need spaces inside the parentheses.
you may use prefix (or postfix) increment here: ++x
the redirection to the file is pulled out of the loop. Instead of 100,000 opening and closing steps, the file is only opened once.
But there is a still simpler form of the for-loop (note that {0..100000} would expand to 100,001 values; start at 1 for exactly 100,000 iterations):
for x in {1..100000}
do
echo a
done >> dummy.zip
This will generate a 100,000-byte text file:
yes 123456789 | head -10000 > dummy.file
If your file system is ext4, btrfs, xfs or ocfs2, and if you don't care about the content you can use fallocate. It's the fastest method if you need big files.
fallocate -l 100KB dummy_100KB_file
See "Quickly create a large file on a Linux system?" for more details.
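If fallocate isn't available, the truncate tool from GNU coreutils can set a file to a given apparent size on any filesystem (a sketch; note the file is sparse, so disk blocks aren't actually allocated until written):

```shell
# create a 100 KB (102400-byte) sparse file and verify its size
truncate -s 100K dummy_100KB_sparse
wc -c dummy_100KB_sparse    # 102400 dummy_100KB_sparse
```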
$ openssl rand -out random.tmp 1000000
Possibly
dd if=/dev/zero of=dummy10MBfile bs=1M count=10
echo "To print the words in sequence from the file"
c=1
for w in $(cat file)
do
echo "$c . $w"
c=$((c+1))
done
Easy way:
Create a file test containing the single line "test".
Then execute:
cat test >> test
Ctrl+C after a minute will result in plenty of gigabytes :)
Related
I want to create a file with fixed size which is filled with a repetition of string of my choice.
So far I have tried using dd to create a file as follows:
dd if=/dev/urandom of=foo_200kb bs=1024 count=200
but obviously the file's content will be random. How can I fill the file with a string of my choice?
Note:
The accepted answer works fine; however, someone using the busybox shell (ash)
can run the following command to get similar results:
yes foo | tr -d '\n' | dd of=foo_200kb bs=1024 count=200
You could use process substitution:
dd if=<(yes foo) of=foo_200kb bs=1024 count=200
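One caveat worth noting: when dd reads from a pipe it can receive short reads, so the output may end up smaller than bs × count. GNU dd's iflag=fullblock avoids this (a sketch, assuming GNU dd and bash for the process substitution):

```shell
# iflag=fullblock makes dd accumulate full 1024-byte blocks from the pipe
dd if=<(yes foo) of=foo_200kb bs=1024 count=200 iflag=fullblock 2>/dev/null
wc -c foo_200kb    # 204800 foo_200kb
```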
You can create a bash script to do what you need:
#! /bin/bash
word=$1
size=$2
length=${#word}
(( remainder = size % length ))
if (( remainder )) ; then
echo Warning: Truncating the last word >&2
fi
(( repeat = size / length ))
for (( i=0; i < repeat ; i++ )) ; do
echo -n "$word"
done
echo -n ${word:0:remainder}
Call it as ./fill.sh bar 204800 > foo
You can use brace expansion with printf and head:
$ printf '%.0sbar' {1..100000} | head -c 200000 > foo
$ wc -c foo
200000 foo
I definitely prefer the dd solution though.
The purpose of my code is
To read two values from two separate files. [Working perfectly well]
To convert them into decimal values. [Working fine]
Find their differences. [Working fine]
To make the difference positive if it is a negative value. [Not Working, it's not checking the condition.]
Here is my code. It's written on Ubuntu 11.04.
...
while read line;
do
echo -e "$line";
AllOn=$line
done<Output.log
gcc -Wall -o0 Test.c -o output
time -f "%e" -o BaseFile.log ./output
while read line;
do
echo -e "$line";
AllOff=$line
done<BaseFile.log
#Threshold Value
Threshold=`echo "$AllOff - $AllOn" | bc`;
echo "Threshold is $Threshold"
if [ `echo "$Threshold < 0.00"|bc` ]; then
Threshold=`echo "$Threshold * -1" | bc`;
fi
echo "\nThreshold is $Threshold" >> $Result
Now, irrespective of the value, the if clause gets executed. I think my if condition is not being checked, and that would be the reason for the following output.
Base Time is 2.94
All Techniques Off = 3.09
Threshold is .15
Base Time is 3.07
All Techniques Off = 2.96
Threshold is -.11
UPDATE: This question is not answered completely yet; if anyone could suggest a way to achieve my 4th objective of making the difference positive, it would be really helpful for me. Thank you.
What shell are you using? I'm assuming just plain old 'sh' or 'bash'.
If so, look at line 33 where you have:
if($Threshhold<0) then
Switch that to:
if [ $Threshhold -lt 0 ]; then
You might have other issues, I haven't looked through the code closely to check for them.
To expand further, I knocked up a test script and data (please note I changed 'Threshhold' to 'Threshold'):
# Example test.sh file
#!/bin/bash
while read line;
do
echo "$line";
AllOn=$line
done < Output.log
while read line;
do
echo "$line";
AllOff=$line
done < BaseFile.log
#Threshhold Value
Threshold=`echo "$AllOn - $AllOff" | bc`;
echo "Threshold is $Threshold"
if [ `echo "$Threshold < 0"|bc` -eq 1 ]; then
# snips off the '-' sign, which it looks like you were trying to do
Threshold=${Threshold:1}
fi
echo $Threshold
Result=result.txt
echo "\nThreshold is $Threshold" >> $Result
Then some data files, first Output.log:
# Output.log
1.2
Then BaseFile.log:
# BaseFile.log
1.3
Example output from the above:
./test.sh
1.2
1.3
Threshold is -.1
.1
Bourne shell has no built-in facility for arithmetic. The assignment
Threshhold=$AllOn-$AllOff
simply concatenates the two strings with a minus sign between them.
In Bash, you can use
Threshhold=$(($AllOn-$AllOff))
but that will still not allow the comparison to zero. For portability, I would simply use Awk for the entire task.
#!/bin/sh
gcc -Wall -o0 Test.c -o output
time -f "%e" -o BaseFile.log ./output
awk 'NR==FNR { allon=$0; next }
{ alloff=$0 }
END { sum=allon-alloff;
if (sum < 0) sum *= -1;
print "Threshold is", sum }' Output.log BaseFile.log >>$Result
I am trying to create a bash script that is essentially like a magic 8 ball with 6 different responses (Yes, No, Maybe, Hard to tell, Unlikely, and Unknown). The key is that once a response is given, it should not be given again until all responses have been given.
Here is what I have so far:
#!/bin/bash
echo "Ask and you shall receive your fortune: "
n=$((RANDOM*6/32767))
while [`grep $n temp | wc awk '{print$3}'` -eq 0]; do
n=$((RANDOM*6/32767))
done
grep -v $n temp > temp2
mv temp2 temp
Basically I have the 6 responses all on different lines in the temp file, and I am trying to construct the loops so that once a response is given, it creates a new file without that response (temp2), then copies it back to temp. Then once the temp file is empty it will continue from the beginning.
I'm quite positive that my current inner loop is wrong, and that I need an outer loop, but I'm fairly new to this and I am stuck.
Any help will be greatly appreciated.
Try something like this:
#!/bin/bash
shuffle() {
local i tmp size max rand
# $RANDOM % (i+1) is biased because of the limited range of $RANDOM
# Compensate by using a range which is a multiple of the array size.
size=${#array[*]}
max=$(( 32768 / size * size ))
for ((i=size-1; i>0; i--)); do
while (( (rand=$RANDOM) >= max )); do :; done
rand=$(( rand % (i+1) ))
tmp=${array[i]} array[i]=${array[rand]} array[rand]=$tmp
done
}
array=( 'Yes' 'No' 'Maybe' 'Hard to tell' 'Unknown' 'Unlikely' )
shuffle
for var in "${array[@]}"
do
echo -n "Ask a question: "
read q
echo "${var}"
done
I wrote a script that follows your initial approach (using temp files):
#!/bin/bash
# Make a copy of temp, so you don't have to recreate the file every time you run this script
TEMP_FILE=$(tempfile)
cp temp $TEMP_FILE
# You know this from start, the file contains 6 possible answers, if need to add more in future, change this for the line count of the file
TOTAL_LINES=6
echo "Ask and you shall receive your fortune: "
# Dummy reading of the char, adds a pause to the script and involves the user interaction
read
# Contrary to what you stated, you don't need an extra loop; one is enough
# just change the condition to count the line number of the TEMP file
while [ $TOTAL_LINES -gt 0 ]; do
# You need to add 1 so the answer ranges from 1 to 6 instead of 0 to 5
N=$((RANDOM*$TOTAL_LINES/32767 + 1))
# This prints the answer (grab the first N lines with head then remove anything above the Nth line with tail)
head -n $N < $TEMP_FILE | tail -n 1
# Get a new file deleting the $N line and store it in a temp2 file
TEMP_FILE_2=$(tempfile)
head -n $(( $N - 1 )) < $TEMP_FILE > $TEMP_FILE_2
tail -n $(( $TOTAL_LINES - $N )) < $TEMP_FILE >> $TEMP_FILE_2
mv $TEMP_FILE_2 $TEMP_FILE
echo "Ask and you shall receive your fortune: "
read
# Get the total lines of TEMP (use cut to delete the file name from the wc output, you only need the number)
TOTAL_LINES=$(wc -l $TEMP_FILE | cut -d" " -f1)
done
$ man shuf
SHUF(1) User Commands
NAME
shuf - generate random permutations
SYNOPSIS
shuf [OPTION]... [FILE]
shuf -e [OPTION]... [ARG]...
shuf -i LO-HI [OPTION]...
DESCRIPTION
Write a random permutation of the input lines to standard output.
More stuff follows, you can read it on your own machine :)
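Applied to this question, shuf -e shuffles its arguments directly, so each of the six responses prints exactly once per cycle (a sketch; wrap it in an outer loop with a read to pause for each question):

```shell
# print the six answers once each, in random order
shuf -e 'Yes' 'No' 'Maybe' 'Hard to tell' 'Unlikely' 'Unknown' |
while IFS= read -r answer; do
    echo "$answer"
done
```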
What I am trying to do is run the sed on multiple files in the directory Server_Upload, using variables:
AB${count}
These correspond to some variables I made that look like:
echo " AB1 = 2010-10-09Three "
echo " AB2 = 2009-3-09Foo "
echo " AB3 = Bar "
And these correspond to each line containing a word in master.ta that needs changing in all the text files in Server_Upload.
If you get what I mean, great; I have tried to explain it the best I can, but if you are still miffed I'll give it another go, as I found it really hard to convey what I mean.
cd Server_Upload
for fl in *.UP; do
mv $fl $fl.old
done
count=1
saveIFS="$IFS"
IFS=$'\n'
array=($(<master.ta))
IFS="$saveIFS"
for i in "${array[#]}"
do
sed "s/$i/AB${count}/g" $fl.old > $fl
(( count++ ))
done
It runs, doesn't give me any errors, but it doesn't do what I want, so any ideas?
Your loop should look like this:
while read i
do
sed "s/$i/AB${count}/g" $fl.old > $fl
(( count ++ ))
done < master.ta
I don't see a reason to use an array or something similar. Does this work for you?
It's not exactly clear to me what you are trying to do, but I believe you want something like:
(untested)
do
eval repl=\$AB${count}
...
If you have a variable $AB3, and a variable $count, $AB${count} is the concatenation of $AB and $count (so if $AB is empty, it is the same as $count). You need to use eval to get the value of $AB3.
It looks like your sed command is dependent on $fl from inside the first for loop, even though the sed line is outside the for loop. If you're on a system where sed does in-place editing (the -i option), you might do:
count=1
while read i
do
sed -i'.old' -e "s/$i/AB${count}/g" Server_Upload/*.UP
(( count ++ ))
done < master.ta
(This is the entire script, which incorporates Patrick's answer, as well.) This should substitute the text ABn for every occurrence of the text of the nth line of master.ta in any *.UP file.
Does it help if you move the first done statement from where it is to after the second done?
How can I generate a file filled with random numbers or characters in a shell script? I also want to specify the size of the file.
Use the dd command to read data from /dev/random:
dd if=/dev/random of=random.dat bs=1000000 count=5000
That would read 5000 blocks of 1,000,000 bytes each of random data, a whole 5 gigabytes!
Experiment with the block size argument to get optimal performance.
head -c 10 /dev/random > rand.txt
change 10 to whatever. Read "man random" for differences between /dev/random and /dev/urandom.
Or, for only base64 characters
head -c 10 /dev/random | base64 | head -c 10 > rand.txt
The base64 output might include some characters you're not interested in, but I didn't have time to come up with a better single-line character converter...
(also, we're taking too many bytes from /dev/random. Sorry, entropy pool!)
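Such a character converter can be sketched with tr -dc, which deletes every byte outside the given set, leaving pure alphanumerics drawn from /dev/urandom (the input is oversampled, since most random bytes fall outside the set and get deleted):

```shell
# 10 random alphanumeric characters; read extra input because tr discards
# roughly three quarters of the random bytes (non-alphanumerics)
head -c 1000 /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 10 > rand.txt
wc -c rand.txt    # 10 rand.txt
```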
A good start would be:
http://linuxgazette.net/153/pfeiffer.html
#!/bin/bash
# Created by Ben Okopnik on Wed Jul 16 18:04:33 EDT 2008
######## User settings ############
MAXDIRS=5
MAXDEPTH=2
MAXFILES=10
MAXSIZE=1000
######## End of user settings ############
# How deep in the file system are we now?
TOP=`pwd|tr -cd '/'|wc -c`
populate() {
cd $1
curdir=$PWD
files=$(($RANDOM*$MAXFILES/32767))
for n in `seq $files`
do
f=`mktemp XXXXXX`
size=$(($RANDOM*$MAXSIZE/32767))
head -c $size /dev/urandom > $f
done
depth=`pwd|tr -cd '/'|wc -c`
if [ $(($depth-$TOP)) -ge $MAXDEPTH ]
then
return
fi
unset dirlist
dirs=$(($RANDOM*$MAXDIRS/32767))
for n in `seq $dirs`
do
d=`mktemp -d XXXXXX`
dirlist="$dirlist${dirlist:+ }$PWD/$d"
done
for dir in $dirlist
do
populate "$dir"
done
}
populate $PWD
Create 100 randomly named files of 50MB in size each:
for i in `seq 1 100`; do echo $i; dd if=/dev/urandom bs=1024 count=50000 > `echo $RANDOM`; done
The RANDOM variable will give you a different number each time:
echo $RANDOM
Save it as "script.sh" and run it as ./script.sh SIZE. The printf code was lifted from http://mywiki.wooledge.org/BashFAQ/071. Of course, you could initialize the mychars array by brute force, mychars=("0" "1" ... "A" ... "Z" "a" ... "z"), but that wouldn't be any fun, would it?
#!/bin/bash
declare -a mychars
for (( I=0; I<62; I++ )); do
if [ $I -lt 10 ]; then
mychars[I]=$I
elif [ $I -lt 36 ]; then
D=$((I+55))
mychars[I]=$(printf \\$(($D/64*100+$D%64/8*10+$D%8)))
else
D=$((I+61))
mychars[I]=$(printf \\$(($D/64*100+$D%64/8*10+$D%8)))
fi
done
for (( I=$1; I>0; I-- )); do
echo -n ${mychars[$((RANDOM%62))]}
done
echo