Copy the contents of a file to another server in a shell script - bash

I have a question.
How can I cat filea.txt and write the output to fileb.txt with a shell script? filea.txt and fileb.txt are on different Linux servers. Essentially I want to do:
cat filea.txt >> fileb.txt
but with fileb.txt on the other server.
Many thanks!

cat filea.txt >> /tmp/fileb.txt
scp /tmp/fileb.txt user@192.168.0.30:~/fileb.txt
Just to let you know, ~/ is the home directory of that user. You'd need to replace the user and the IP, of course.
You can also skip the home directory and give a full file path instead.
scp is really cool, check out its man page.
EDIT: I do see your trouble with the appending.
You can try cat filea.txt | ssh user@192.168.0.30 "cat >> fileb.txt". This works because cat reads from stdin when no file is specified.
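Equivalently, since ssh hands its standard input to the remote command, you can drop the local cat and redirect the file straight into ssh (a sketch, using the same placeholder user and IP as above):
ssh user@192.168.0.30 'cat >> fileb.txt' < filea.txt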

Related

In bash, is there a way to redirect output to a file open for reading? [duplicate]

This question already has answers here:
How can I use a file in a command and redirect output to the same file without truncating it?
(14 answers)
Closed 3 years ago.
If I try to redirect the output of a command into a file that the command itself is reading, I get an empty file.
For example, suppose I have a file named tmp.txt:
ABC
123
Now if I do this:
$ grep --color=auto A tmp.txt > out.txt
$ cat out.txt
ABC
But if I do this:
$ grep --color=auto A tmp.txt > tmp.txt
$ cat tmp.txt
$
I get no output.
I'd like to be able to redirect to a file that I am reading within the same command.
Okay, so I have my answer and would like to share it with you all.
You simply have to use a pipe with tee.
$ grep --color=auto A tmp.txt | tee tmp.txt
ABC
$ cat tmp.txt
ABC
Perhaps someone who understands pipes well can explain why. (The short version: with >, the shell truncates tmp.txt before grep ever opens it, so there is nothing left to match. With the pipe, grep usually finishes reading tmp.txt before tee truncates it, so the data survives; it is still a race, though, and can lose data on larger files.)
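A more robust pattern (not from the answer above, just a sketch; the .new suffix is an arbitrary name) is to write to a temporary file and rename it over the original, which avoids the race entirely:
$ grep --color=auto A tmp.txt > tmp.txt.new && mv tmp.txt.new tmp.txt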

How to edit a file descriptor in place with sed

I succeeded in using a file descriptor with sed and getting the result on standard output. Given a file "file.txt" containing:
$ cat file.txt
foo
Foo
I open a file descriptor to file.txt, open a sub-shell, and give this file descriptor to sed:
$ (sed "/Foo/c\\bar" <&9 ) 9< file.txt
foo
bar
The result is correct.
Now, if I want to use the -i option of sed to change the file in place, I have trouble. I open the file descriptor in read and write mode, then give it to sed as the input file:
$ (sed -i "/Foo/c\\bar" <&9 ) 9<> file.txt
sed: no input file
I do not understand why an input file is missing. Maybe sed needs a filename, and not a file descriptor, when using the -i option?
I tried a workaround which, of course, does not work as expected:
$ (sed "/Foo/c\\bar" <&9 >&9 ) 9<> file.txt
$ cat file.txt
foo
Foo
foo
bar
while I expected :
$ cat file.txt
foo
bar
Thanks in advance for your help!
Dunatotatos
You cannot edit anything "in place" with sed, and this is a great example of why -i is misnamed. GNU sed implements -i by creating a new file, writing the output to it, and then renaming the file. If you don't give sed the original filename, it doesn't know what to rename the new file to.
sed -i expects a filename. You can't pass /dev/stdin (or similar), as sed will attempt to create a temporary file inside /dev.
You can't even save the output of sed into a temporary file and then write the output back to the file descriptor, as you can't rewind a file descriptor in Bash.
What you can do is figure out the original file name from the file descriptor. You can do this by using the link /proc/self/fd/9, like this:
sed -i "/Foo/c\\bar" "$(readlink /proc/self/fd/9)"
However, note that the original file may have been deleted or renamed, in which case this solution won't work. Also, this solution expects /proc to be available, which might not always be the case. /dev/fd/9 may be a good replacement.
Another thing to be aware of is that sed -i works by replacing the file with a new one: after running sed -i, your fd 9 won't refer to the newly created file. To work around this problem:
name="$(readlink /proc/self/fd/9)"
cp "$name" "$name.tmp"
sed "/Foo/c\\bar" "$name.tmp" > "$name"
This way, your fd 9 will still refer to the same file before and after running sed. You might want to use mktemp to create the temporary file, and a trap on EXIT to ensure that it gets deleted.
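Putting those suggestions together, a minimal sketch (assuming Linux with /proc available and GNU sed, and file.txt as in the question):
(
  name="$(readlink /proc/self/fd/9)"   # recover the path behind fd 9
  tmp="$(mktemp)"                       # temporary copy of the original
  trap 'rm -f "$tmp"' EXIT              # delete the copy when the subshell exits
  cp -- "$name" "$tmp"
  sed "/Foo/c\\bar" "$tmp" > "$name"    # rewrite the same inode, so fd 9 stays valid
) 9<> file.txt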

Need to concatenate a string to each line of ls command output in unix

I am a beginner in shell scripting. Below is my requirement in UNIX Korn shell.
Example:
When we list files using the ls command and redirect the output to a file, the file names are stored as below.
$ ls FILE*>FLIST.TXT
$ cat FLIST.TXT
FILE1
FILE2
FILE3
But I need the output as below, with the constant string STR, prefixed to each line:
$ cat FLIST.TXT
STR,FILE1
STR,FILE2
STR,FILE3
Please let me know what the ls command should be to achieve this output.
You can't use ls alone to prepend data to each file name. ls exists to list files.
You will need to use other tools alongside ls.
You can prepend to each line using the sed command:
cat FLIST.TXT | sed 's/^/STR,/'
This will send the changes to stdout.
If you'd like to change the actual file, run sed in place:
sed -i -e 's/^/STR,/' FLIST.TXT
To add the prefix before writing to the file, pipe ls into sed:
ls FILE* | sed 's/^/STR,/' > FLIST.TXT
The following should work:
ls FILE* | xargs -I{} echo "STR,{}" > FLIST.TXT
It takes every one of the file names filtered by ls and adds the "STR," prefix to each one before the result is written to FLIST.TXT.
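If you'd rather not involve sed or xargs at all, a minimal sketch using only the shell (printf works in both ksh and bash; FILE* and FLIST.TXT as in the question):
for f in FILE*; do
  printf 'STR,%s\n' "$f"   # prefix each file name with STR,
done > FLIST.TXT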

Bash script: How to remote to a computer run a command and have output pipe to another computer?

I need to create a Bash script that will ssh into a computer (machine B), run a command, and have the output piped back to a .txt file on machine A. How do I go about doing this? Ultimately it will be a list of computers that I will ssh to and run a command on, but all of the output will append to the same .txt file on machine A.
UPDATE: OK, so I went and followed what That other Guy suggested, and this is what seems to work:
File=/library/logs/file.txt
ssh -n username@<ip> "$(< testscript.sh)" > "$File"
What I need to do now is, instead of manually entering an IP address, have it read from a list of hostnames in a .txt file and place each one in a variable that substitutes for the IP address. An example would be ssh username@Variable, in which "Variable" changes each time a word is read from the file containing hostnames. Any ideas how to go about this?
This should do it
ssh userB@machineB "some command" | ssh userA@machineA "cat - >> file.txt"
With your commands:
ssh userB@machineB <<'END' | ssh userA@machineA "cat - >> file.txt"
echo Hostname=$(hostname) LastChecked=$(date)
ls -l /applications/utilities/Disk\ Utility.app/contents/Plugins/*Partition.dumodule* | awk '{printf "Username=%s DateModified=%s %s %s\n", $3, $6, $7, $8}'
END
You could replace the ls -l | awk pipeline with a single stat call, but it appears that the OS X stat does not have a way to return the user name, only the user ID.
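For the update (reading the hosts from a file instead of hard-coding the IP), a minimal sketch, assuming a hosts.txt with one hostname or IP per line (the file names are placeholders):
File=/library/logs/file.txt
while IFS= read -r host; do
  ssh -n "username@$host" "$(< testscript.sh)"   # -n keeps ssh from eating the rest of the host list on stdin
done < hosts.txt >> "$File"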

Concatenating multiple text files into a single file in Bash

What is the quickest and most pragmatic way to combine all *.txt files in a directory into one large text file?
Currently I'm using Windows with Cygwin, so I have access to Bash.
A Windows shell command would be nice too, but I doubt there is one.
This appends the output to all.txt
cat *.txt >> all.txt
This overwrites all.txt
cat *.txt > all.txt
Just remember, for all the solutions given so far, the shell decides the order in which the files are concatenated. For Bash, IIRC, that's alphabetical order. If the order is important, you should either name the files appropriately (01file.txt, 02file.txt, etc...) or specify each file in the order you want it concatenated.
$ cat file1 file2 file3 file4 file5 file6 > out.txt
The Windows shell command type can do this:
type *.txt > outputfile.txt
The type command also writes file names to stderr, which are not captured by the > redirect operator (but will show up on the console).
You can use Windows shell copy to concatenate files.
C:\> copy *.txt outputfile
From the help:
To append files, specify a single file for destination, but multiple files for source (using wildcards or file1+file2+file3 format).
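For example, a sketch of the file1+file2+file3 form mentioned in the help text (the file names are placeholders):
C:\> copy file1.txt+file2.txt+file3.txt outputfile.txt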
Be careful, because none of these methods work with a large number of files. Personally, I used this line:
for i in $(ls | grep ".txt");do cat $i >> output.txt;done
EDIT: As someone said in the comments, you can replace $(ls | grep ".txt") with $(ls *.txt)
EDIT: thanks to @gnourf_gnourf's expertise, using a glob is the correct way to iterate over files in a directory. Consequently, blasphemous expressions like $(ls | grep ".txt") must be replaced by *.txt (see the article here).
Good solution:
for i in *.txt; do cat "$i" >> output.txt; done
How about this approach?
find . -type f -name '*.txt' -exec cat {} + >> output.txt
The most pragmatic way with the shell is the cat command. Other ways include:
awk '1' *.txt > all.txt
perl -ne 'print;' *.txt > all.txt
type [source folder]\*.[File extension] > [destination folder]\[file name].[File extension]
For Example:
type C:\*.txt > C:\1\all.txt
That will take all the .txt files in the C:\ folder and save them in the C:\1 folder under the name all.txt.
Or
type [source folder]\* > [destination folder]\[file name].[File extension]
For Example:
type C:\* > C:\1\all.txt
That will take all the files that are present in the folder and put their content in C:\1\all.txt.
You can do it like this:
cat [directory_path]/**/*.[h,m] > test.txt
Note that in Bash ** only recurses into subdirectories when globstar is enabled (shopt -s globstar). If you use {} to include the extensions of the files you want to find, there is a sequencing problem.
The most upvoted answers will fail if the file list is too long.
A more portable solution would be using fd
fd -e txt -d 1 -X awk 1 > combined.txt
-d 1 limits the search to the current directory. If you omit this option then it will recursively find all .txt files from the current directory.
-X (otherwise known as --exec-batch) executes a command (awk 1 in this case) for all the search results at once.
Note: fd is not a "standard" Unix program, so you will likely need to install it.
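If installing fd isn't an option, a similar batched invocation can be done with standard find (a sketch; the output file is excluded so it isn't fed back into itself):
find . -maxdepth 1 -name '*.txt' ! -name combined.txt -exec cat {} + > combined.txt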
If you run into the problem where it cats all.txt into all.txt, you can check whether all.txt already exists and remove it if it does, like this:
[ -e "all.txt" ] && rm "all.txt"
All of that is nasty....
ls | grep '\.txt$' | while read -r file; do cat "$file" >> ./output.txt; done
easy stuff.
