mget to download all files containing ' Q 787 ' in filename - ftp

I'm trying to download all files from an FTP server that have a specific set of characters in the filename. The beginning and ending part of the filenames can be different. There is no file extension that I can see.
Sample filenames:
001247854 Q 787 SFDFDS
014781259 Q 787 UEIJHF
187852584 S 787 KEINJE
785125873 Q 787 IKUSBD
854792547 S 787 KJDIEP
I've been using...
mget *.*
...but I only need to get the files containing ' Q 787 ' in the filename.
Mieche

Use mask "* Q 787 *" (the quotes are needed because of the spaces):
mget "* Q 787 *"

Related

How can I remove blank spaces in a file that are marked with "." without removing the decimal place in my numbers? (Bash shell)

I have a file that looks like this:
18 29 293.434
12 32 9954.343
12 343 .
12 45 9493.545
I want to replace the "." on line 3 with "0" using sed.
I can't figure out how to do this without replacing every decimal place in the file.
Any help would be much appreciated.
With GNU sed:
sed 's/\B\.\B/0/g' file
Output:
18 29 293.434
12 32 9954.343
12 343 0
12 45 9493.545
See: \B: non-word boundary. In 293.434 the dot sits between digits (word characters), so both of its edges are word boundaries and it is left alone; the lone dot on line 3 is surrounded by a space and the end of the line, so \B matches on both sides and it gets replaced.
Using any awk in any shell on every UNIX box and assuming you want to replace . with 0 wherever it occurs alone in any field rather than just if it's in the last field of the 3rd line of input:
awk '{for (i=1; i<=NF; i++) if ($i == ".") $i=0} 1' file
If I understand well, if the dot is at the end, remove spaces and substitute this dot with 0:
sed '/\.$/{s/ //g;s/\.$/0/}' file
The $ in the pattern is an anchor: it means "at the end of the line".
18 29 293.434
12 32 9954.343
123430
12 45 9493.545

I'm trying to cut a string with a number of bytes. What is the problem with this for loop?

Sorry in advance for the beginner question, but I'm quite stuck and keen to learn.
I am trying to echo a string (in hex) and then cut a piece of that with cut command. It looks like this:
for y in "${Offset}"; do
    echo "${entry}" | cut -b 60-$y
done
Where echo ${Offset} results in
75 67 69 129 67 567 69
I would like each entry to be printed, and then cut from the 60th byte until the respective number in $Offset.
So the first entry would be cut 60-75.
However, I get an error:
cut: 67: No such file or directory
cut: 69: No such file or directory
cut: 129: No such file or directory
cut: 67: No such file or directory
cut: 567: No such file or directory
cut: 69: No such file or directory
I tried adding/removing parentheses around each variable but never got the right result.
Any help will be appreciated!
UPDATE: I updated the code with the changes from markp-fuso. However, this code still does not work as intended. I would like to print every entry based on its respective offset, but it goes wrong: it prints every entry seven times, each time cut according to one of the seven different offsets. Any ideas on how to fix this?
#!/bin/bash
MESSAGES=$( sqlite3 -csv file.db 'SELECT quote(data) FROM messages' | tr -d "X'" )
for entry in ${MESSAGES}; do
    Offset='75 67 69 129 67 567 69'
    for y in $Offset; do
        echo "${entry:59:(y-59)}"
    done
done
echo ${MESSAGES}
Results in seven strings with minimal length 80 bytes and max 600.
My output should be:
String one: cut by first offset
String two: cut by second offset
and so on...
In order for the for loop to iterate over each space-separated "word" in $Offset, you need to get rid of the quotes, which make the whole string read as a single word.
for y in ${Offset}; do
    echo "${entry}" | cut -b 60-$y
done
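A quick way to see the difference the quotes make (the trailing comments show what each loop prints):

Offset='75 67 69'
for y in "${Offset}"; do echo "[$y]"; done   # one iteration:    [75 67 69]
for y in ${Offset}; do echo "[$y]"; done     # three iterations: [75] [67] [69]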
To eliminate the sub-process that's going to be invoked due to the | cut ..., we could look at a comparable parameter expansion solution ...
Quick reminder on how to extract a substring from a variable:
${variable:start_position:length}
Keeping in mind that the first character in ${variable} is in position zero/0.
Next, we need to convert each individual offset (y) into a 'length':
length=$((y-60+1))
Rolling these changes into your code (and removing the quotes from around ${Offset}) gives us:
for y in ${Offset}
do
    start=$((60-1))
    length=$((y-60+1))
    echo "${entry:${start}:${length}}"
    #echo "${entry:59:(y-59)}"
done
NOTE: You can also replace the start/length/echo with the single commented-out echo.
Using a smaller data set for demo purposes, and using 3 (instead of 60) as the start of our extraction:
# base-10 character position
#          1         2
# 123456789012345678901234567
$ entry='123456789ABCDEFGHIabcdefghi'
$ echo ${#entry} # length of entry?
27
$ Offset='5 8 10 13 20'
$ for y in ${Offset}
do
    start=$((3-1))
    length=$((y-3+1))
    echo "${entry:${start}:${length}}"
done
345 # 3-5
345678 # 3-8
3456789A # 3-10
3456789ABCD # 3-13
3456789ABCDEFGHIab # 3-20
And consolidating the start/length/echo into a single echo:
$ for y in ${Offset}
do
    echo "${entry:2:(y-2)}"
done
345 # 3-5
345678 # 3-8
3456789A # 3-10
3456789ABCD # 3-13
3456789ABCDEFGHIab # 3-20
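If the intent of the UPDATE is that each message should be cut by its own offset (the first message by 75, the second by 67, and so on) rather than every message by all seven offsets, one hedged sketch is to read the rows into an array and pair the two lists by index. The array names are illustrative and this assumes the rows and offsets correspond one-to-one:

#!/bin/bash
# Read one message per line; tr strips the X and ' characters as in the original script
mapfile -t entries < <(sqlite3 -csv file.db 'SELECT quote(data) FROM messages' | tr -d "X'")
offsets=(75 67 69 129 67 567 69)

for i in "${!entries[@]}"; do
    y=${offsets[i]}
    echo "${entries[i]:59:y-59}"   # bytes 60 through y of the i-th message
done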

replace string in text file with random characters

So what I'm trying to do is this: I've been using keybr.com to sharpen my typing skills, and on this site you can "provide your own custom text." I've been taking chapters out of books to type so it's a little more interesting than just typing groups of letters. Now I also want to insert numbers into the text. Specifically, between each word I want something like "393", and random sets smaller and larger than that example.
So I have saved a chapter of a book into a file in my home folder. Now I just need a command to search for spaces, insert a group of numbers, and add a space, so a sentence would look like this: The 293 dog 328 is 102 black. 334 The... etc.
I have looked up linux commands through search engines and i've found out how to replace strings in text files with:
sed -i 's/original/new/g' file.txt
and how to generate random numbers with:
$ shuf -i MIN-MAX -n COUNT
I just cannot figure out how to put together a one-line command that will insert random numbers between each word. I'm still-a-searching, so thanks to anyone who takes the time to read my problem.
Perl to the rescue!
perl -pe 's/ /" " . (100 + int rand 900) . " "/ge' < input.txt > output.txt
-p reads the input line by line, after reading a line, it runs the code and prints the line to the output
s/// is similar to the substitution you know from sed
/g means global, i.e. it substitutes as many times as possible
/e means the replacement part is code to run. In this case, the code generates a random number (100-999).
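A sample run might look like this (the numbers will of course differ on every run):

$ echo 'The dog is black. The cat is white.' | perl -pe 's/ /" " . (100 + int rand 900) . " "/ge'
The 417 dog 269 is 853 black. 102 The 331 cat 745 is 588 white.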
Given:
$ echo "$txt"
Here is some random words. Please
insert a number a space between each one.
Here is a simple awk to do that:
$ echo "$txt" | awk '{for (i=1;i<=NF;i++) printf "%s %d ", $i, rand()*100; print ""}'
Here 92 is 59 some 30 random 57 words. 74 Please 78
insert 43 a 33 number 77 a 10 space 78 between 83 each 76 one. 49
And here is roughly the same thing in pure Bash:
while read -r line; do
    for word in $line; do
        printf "%s %s " "$word" "$((1 + RANDOM % 100))"
    done
    echo
done < <(echo "$txt")
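Since the question already found shuf -i MIN-MAX -n COUNT, here is a hedged sketch that reuses it; the file names chapter.txt and chapter_numbers.txt are placeholders:

while read -r line; do
    out=""
    for word in $line; do
        out+="$word $(shuf -i 100-999 -n 1) "   # append the word plus a random 3-digit number
    done
    printf '%s\n' "$out"
done < chapter.txt > chapter_numbers.txt

This spawns one shuf process per word, so it is slower than the Perl or awk one-liners on a whole chapter, but it sticks to the commands mentioned in the question.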

In bash, I want to generate a fixed 4-character output for each word in a set of words, and have it always match

I got these words
Frank_Sinatra
Dean_Martin
Ray_Charles
I want to generate 4 characters which will always match with those words and never change.
e.g.:
frk ) Frank_Sinatra
dnm ) Dean_Martin
Ray ) Ray_Charles
and it shall always match these 4 characters when I run it again (not random)
note:
Something like this:
String   32-bit checksum   8-bit checksum
ABC      326  0x146        70   0x46
ACB      410  0x19A        154  0x9A
BAC      350  0x15E        94   0x5E
BCA      450  0x1C2        194  0xC2
CAB      399  0x18F        143  0x8F
CBA      256  0x100        0    0x00
http://www.flounder.com/checksum.htm
Look at this command --->
echo -n Frank_Sinatra | md5sum
d0f7287be11d7bbfe53809088ea3b009 -
but instead of that long string, I wanted just 4 unique characters.
I did it like this:
echo -n "Frank_Sinatra" | md5sum > foo ; sed -i 's/./&\n#/4' foo
grep -v "#" foo > bar
I'm not going to write the entire program for you, but I can share an algorithm that can accomplish this. I can't guarantee that it is the most efficient one.
Problem
Generate a 3-letter identifier for each line in a text file that is unique, such that grep will only match with the intended line.
Assumption
There exists a 3-letter identifier for each line such that grep will only match that line.
Algorithm
For every line in text file
Grab a permutation of the line, run grep on the file using that permutation.
If grep returns more than one line, get a new permutation of the line and go back to the previous step.
If grep returns only one line and that line matches our current line, we found a proper identifier. Store this identifier.
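This is not that grep-based permutation search, but a related hedged sketch in bash: derive the identifier from an md5 hash prefix (as the question already does with md5sum) and lengthen it on collision so every identifier stays unique within the file. The file name words.txt is a placeholder:

#!/bin/bash
# Assign each line a short, stable, hash-derived identifier that is unique within the file.
declare -A seen
while read -r line; do
    hash=$(printf '%s' "$line" | md5sum)
    hash=${hash%% *}               # keep only the 32 hex characters
    len=4
    id=${hash:0:len}
    while [[ -n "${seen[$id]}" ]]; do
        ((len++))                  # on a collision, take one more character of the hash
        id=${hash:0:len}
    done
    seen[$id]=$line
    printf '%s ) %s\n' "$id" "$line"
done < words.txt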

bash script to find specific text when similar lines appear multiple times in a text file

I want to write a bash script to copy one value from a text file. In the text file, I have some repeated lines. Example:
WIN [err]: fe I:35 A Q:24.17 si: 4554 INT:55.90 CA Mn A:61.00 B:44.45 INT:42.06
WIN [err]: fe P:880 A Q:26.89 si: 325 INT:12.12 CA Mn A:57.62 B:44.11 INT:39.56
some text line
some text line
"Line that i want to copy value:" WIN [err]: fe P:870 A Q:26.89 si: 325 INT: 5.5 CA Mn A:57.62 B:44.11 INT:39.06
dec 2000 frs, 30.8 fs, 2029.95 ms/s
Now I want to display the INT value, e.g. 39.06, which is present in the line marked "Line that i want to copy value:". Please consider only lines starting with "WIN [err] ...". I am new to shell scripting. I have since modified my text file, and now the string "INT:" is also present in some other lines.
Can anyone help?
Thanks
I don't think you want "the second to last line". I believe you are asking for the value in the last line that matches WIN (in the example you give, these are equivalent).
awk '/^WIN/ {v=$NF } END {split(v,a,":"); print a[2]}' input
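Assuming the text "Line that i want to copy value:" is an annotation rather than part of the file itself (so that the target line really begins with WIN), a run against the sample might look like this:

$ awk '/^WIN/ {v=$NF } END {split(v,a,":"); print a[2]}' input
39.06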
Probably this can work:
awk -F'[: ]+' '$1=="WIN"{a=$NF} END {print a}' RS='[\r\n]+' file.log
