I am working on a battery of automatic tests which executes on two Unix virtual machines running ksh. Those VMs are independent and have practically the same .profile file. I would like to study their differences by launching:
tkdiff /usr/system/.profile system@{external_IP}:/usr/system/.profile
on the first VM but it doesn't work.
I suppose that directly accessing a hidden file is not possible. Is there a solution to my problem, or maybe an alternative?
If you want to compare different files on two remote machines, I suggest the following procedure:
1. Compare checksums:
First compare the checksums. Use sum, md5sum or sha256sum to compute a hash of the file. If the hashes are the same, the probability that the files are identical is extremely high. You can increase that confidence further by checking the number of characters, lines and words in the file with wc.
$ file="/usr/system/.profile"
$ md5sum "$file" && wc "$file"
$ ssh user@host "md5sum '$file' && wc '$file'"
2. Run a simple diff:
Run a simple diff using the classic command-line tools. They follow the POSIX convention of treating - as standard input. This way you can do:
$ ssh user@host "cat -- '$file'" | diff "$file" -
Note: with old versions of tkdiff, or new versions of svn/git, this step can be tricky due to bugs in tkdiff. It may throw errors of the form svn [XXXX] file .... is not a working copy or file xxxx is not part of a revision control system if one of the files is under version control, or if you run it from a directory that is under version control. Stick to diff!
You are using the filename convention "user@host:/path/to/file" for the second argument to tkdiff.
That naming convention is not native to ksh; it is understood by some programs such as scp (which can be interactive, e.g. asking for a password for the remote system or other authentication-related questions).
The tkdiff man page does not mention built-in support for the user@host:/path/to/file naming convention, and no such support is built into ksh either.
So you may need two steps: first use scp (or similar) to copy the remote file locally, then run tkdiff with the local file as one argument and the just-copied file as the other. Alternatively, arrange to mount part of the other VM's filesystem locally and then run tkdiff with the appropriate arguments.
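For example (the host below is only a placeholder, and this assumes ssh/scp access to the other VM is already set up):
$ scp system@{external_IP}:/usr/system/.profile /tmp/remote_profile
$ tkdiff /usr/system/.profile /tmp/remote_profile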
Obviously, both files need to be readable by your userid, or by the user specified in user@host:/path/to/file, for this to work.
You can compare directly over ssh, using cat to stream the remote files for display, like this:
tkdiff <(ssh system@{external_IP_1} 'cat /usr/system/.profile') <(ssh system@{external_IP_2} 'cat /usr/system/.profile')
In your case, to compare against the local .profile file:
tkdiff /usr/system/.profile <(ssh system@{external_IP} 'cat /usr/system/.profile')
Have you also tried the plain diff command (with the -b and -B options to ignore whitespace and blank-line differences)?
diff -b -B /usr/system/.profile <(ssh system@{external_IP} 'cat /usr/system/.profile')
Note: I am particularly looking for a coding hack, not for an alternative solution. I am aware that awk, sed etc. can do this inline edit just fine.
$ echo '1' > test
$ cat test > test
$ cat test
$
Is there a way to somehow make the second command output the original contents (1 in this case)? Again, I am looking for a hack which will work without visibly having to use a secondary file (using a secondary file in the background is fine). Another question on this forum focused solely on alternative solutions, which is not what I am looking for.
You can store the content in a shell variable rather than a file.
var=$(<test)
printf "%s\n" "$var" > test
Note that this might only work for text files, not binary files. If you need to deal with them you can use encoding/decoding commands in the pipeline.
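For example, a minimal sketch for binary data, assuming a base64 command is available (the decode flag varies between implementations):
var=$(base64 < test)
printf '%s\n' "$var" | base64 -d > test   # some BSD/macOS versions spell the decode flag -D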
You can't do it without storing the data somewhere. When you redirect output to a file, the shell truncates the file immediately. If you use a pipeline, the commands in the pipeline run concurrently with the shell, and it's unpredictable which will run first -- the shell truncating the file or the command that tries to read from it.
With thanks to the comment made by @Cyrus on the original question:
$ sudo apt install moreutils
$ echo '1' > test
$ cat test | sponge test
$ cat test
1
It does require installing an extra package, and you may want to pre-check for the binary with something like command -v sponge to confirm it is installed, as in the sketch below.
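For example (reusing the test file from the example above):
if command -v sponge >/dev/null 2>&1; then
    cat test | sponge test        # sponge soaks up all input before writing to the file
else
    echo "sponge (from moreutils) is not installed" >&2
fi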
If you happen to use macOS, and the file isn't too gargantuan, you can always follow these steps:
perform the edits
pipe it to the clipboard (or "pasteboard" in mac lingo)
paste it back to original file name
{... edits to file1 ...} | pbcopy; pbpaste > file1
I have a very large number of files with very similar names: row1col1.txt, row1col2.txt, row1col3.txt, row1col4.txt......
I'd like to make copies of them all and change the names to row2col1.txt, row2col2.txt,
row2col3.txt, row2col4.txt......
Using the cp command in shell script, how can I do it efficiently?
How are you going to generate the file names? How are you going to specify the substitution?
One possibility is:
ls row1col*.txt |
sed 's/row1\(.*\)/cp & row2\1/' |
sh -x
This uses ls to generate the list of names, and sed to generate a cp command for each named file, and pipes that to sh so that the copy operations occur. Don't pipe it to sh until you are confident that the rest is right.
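Alternatively, a plain shell loop avoids parsing the output of ls; a rough sketch, assuming every file really does start with the row1 prefix:
for f in row1col*.txt; do
    cp -- "$f" "row2${f#row1}"    # strip the leading row1 and put row2 in its place
done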
If you use the program mcp contained in the package mmv, you can do that like this:
mcp row1\* row2\#1
I have a Bash script that repeatedly copies files every 5 seconds. But this is a touch overkill as usually there is no change.
I know about the Linux command watch but as this script will be used on OS X computers (which don’t have watch, and I don’t want to make everyone install macports) I need to be able to check if a file is modified or not with straight Bash code.
Should I be checking the file modified time? How can I do that?
Edit: I was hoping to expand my script to do more than just copy the file, if it detected a change. So is there a pure-bash way to do this?
I tend to agree with the rsync answer if you have big trees of files
to manage, but you can use the -u (--update) flag to cp to copy the
file(s) over only if the source is newer than the destination.
cp -u
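For example, a minimal sketch of the 5-second loop using cp -u (the paths here are only placeholders):
while true; do
    cp -u /path/to/source.txt /path/to/backup/   # copies only when the source is newer than the destination copy
    sleep 5
done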
Edit
Since you've updated the question to indicate that you'd like to take
some additional actions, you'll want to use the -nt check
in the [ (test) builtin command:
#!/bin/bash
if [ "$1" -nt "$2" ]; then
    echo "File 1 is newer than file 2"
else
    echo "File 1 is not newer than file 2"
fi
From the man page:
file1 -nt file2
True if file1 is newer (according to modification date) than
file2, or if file1 exists and file2 does not.
Hope that helps.
OS X has the stat command. Something like this should give you the modification time of a file:
stat -f '%m' filename
The GNU equivalent would be:
stat --printf '%Y\n' filename
You might find it more reliable to detect changes in the file content by comparing the file size (if the sizes differ, the content does) and the hash of the contents. It probably doesn't matter much which hash you use for this purpose: SHA1 or even MD5 is probably adequate, and you might find that the cksum command is sufficient.
File modification times can change without the content changing (think touch file); conversely, modification times can stay the same even when the content changes (doing this is harder, but you could use touch -r ref-file file to set the modification time of file back to that of ref-file after editing the file).
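Putting that together, here is a rough polling sketch (the path and the action taken are placeholders) that reacts only when the content checksum changes:
#!/bin/bash
file="/path/to/watched-file"
last=$(cksum "$file")
while sleep 5; do
    now=$(cksum "$file")
    if [ "$now" != "$last" ]; then
        # content changed: do the copy (or any other action) here
        cp "$file" /path/to/backup/
        last=$now
    fi
done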
No. You should be using rsync or one of its frontends to copy the files, since it will detect if the files are different and only copy them if they are.
Say I have two programs a and b that I can run with ./a and ./b.
Is it possible to diff their outputs without first writing to temporary files?
Use <(command) to pass one command's output to another program as if it were a file name. Bash sends the program's output to a pipe and passes a file name like /dev/fd/63 to the outer command.
diff <(./a) <(./b)
Similarly you can use >(command) if you want to pipe something into a command.
This is called "Process Substitution" in Bash's man page.
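For instance, a small illustration of the output form (the file names here are arbitrary):
./a | tee >(gzip > a_output.gz) > a_output.txt   # write the output to a plain file and, via >(...), to a compressed copy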
Adding to both the answers, if you want to see a side by side comparison, use vimdiff:
vimdiff <(./a) <(./b)
One option would be to use named pipes (FIFOs), something like this:
mkfifo a_fifo b_fifo
./a > a_fifo &
./b > b_fifo &
diff a_fifo b_fifo
... but John Kugelman's solution is much cleaner.
For anyone curious, this is how you perform process substitution in the Fish shell:
Bash:
diff <(./a) <(./b)
Fish:
diff (./a | psub) (./b | psub)
Unfortunately the implementation in fish is currently deficient; fish will either hang or use a temporary file on disk. You also cannot use psub for output from your command.
Adding a little more to the already good answers (helped me!):
The docker command writes its help text to stderr (i.e. file descriptor 2).
I wanted to see if docker attach and docker attach --help gave the same output
$ docker attach
$ docker attach --help
Having just typed those two commands, I did the following:
$ diff <(!-2 2>&1) <(!! 2>&1)
!! is the same as !-1, which means re-run the previous command (the last command)
!-2 means re-run the command two before this one
2>&1 sends file descriptor 2 (stderr) to the same place as file descriptor 1 (stdout)
Hope this has been of some use.
For zsh, using =(command) automatically creates a temporary file and replaces =(command) with the path of that file. With ordinary command substitution, $(command) is replaced with the output of the command.
This zsh feature is very useful and can be used like so to compare the output of two commands using a diff tool, for example Beyond Compare:
bcomp =(ulimit -Sa | sort) =(ulimit -Ha | sort)
For Beyond Compare, note that you must use bcomp for the above (instead of bcompare), since bcomp launches the comparison and waits for it to complete. If you use bcompare, it launches the comparison and exits immediately, so the temporary files created to hold the command output disappear.
Read more here: http://zsh.sourceforge.net/Intro/intro_7.html
Also notice this:
Note that the shell creates a temporary file, and deletes it when the command is finished.
and the following, which explains the difference between <(...) and =(...):
If you read zsh's man page, you may notice that <(...) is another form of process substitution which is similar to =(...). There is an important difference between the two. In the <(...) case, the shell creates a named pipe (FIFO) instead of a file. This is better, since it does not fill up the file system; but it does not work in all cases. In fact, if we had replaced =(...) with <(...) in the examples above, all of them would have stopped working except for fgrep -f <(...). You can not edit a pipe, or open it as a mail folder; fgrep, however, has no problem with reading a list of words from a pipe. You may wonder why diff <(foo) bar doesn't work, since foo | diff - bar works; this is because diff creates a temporary file if it notices that one of its arguments is -, and then copies its standard input to the temporary file.
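A toy illustration of that difference (assuming zsh):
vim =(ls -l)   # works: =( ) hands vim a real temporary file it can edit
vim <(ls -l)   # problematic: <( ) hands vim a pipe, which cannot be edited in place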
I am writing a little shell script that needs to go through all folders and files on an FTP server (recursively). So far everything works fine using cURL - but it's pretty slow, because cURL starts a new session for every command. So for 500 directories, cURL performs 500 logins.
Does anybody know whether I can stay logged in using cURL (this would be my favourite solution), or how I can use ftp with only one session in a shell script?
I know how to execute a set of ftp commands and retrieve the response, but for the recursive listing, it has to be a little more dynamic...
Thanks for your help!
The command is actually ncftpls -R. It will recursively list all the files in a ftp folder.
Just to summarize what others have said so far: if you are trying to write a portable shell script that works as a batch file, then you need to use the lftp solution, since some FTP servers may not implement ls -R. Simply replace 123.456.789.100 with the actual IP address of the ftp server in the following examples:
$ lftp -c "open 123.456.789.100 && find -l && exit" > listing.txt
See the man page of lftp, go to the find section:
List files in the directory (current directory by default)
recursively. This can help with servers lacking ls -R support. You
can redirect output of this command.
However, if you have a way to figure out whether or not the remote ftp server properly supports ls -lR, then a much better (= faster) solution would be:
$ echo ls -lR | ftp 123.456.789.100 > listing.txt
Just for reference if I execute the first command (lftp+find) it takes 0m55.384s to retrieve the full listing, while if I execute the second one (ftp+ls-R), it takes 0m3.225s.
If possible, try using an lftp script:
# lftp script "myscript.lftp"
open your-ftp-host
user username password
cd directory_with_subdirs_u_want_to_list
find
exit
The next thing you need is a bash script to run this lftp command and write its output to a file:
#!/bin/bash
lftp -f myscript.lftp > myOutputFile
myOutputFile now contains the full dump of directories.
You could connect to the ftp server in a way that lets it accept commands from stdin and write its responses to stdout. Create two named pipes ("fifos", see man mkfifo) and redirect the ftp command's stdin and stdout to one each. Then you can write commands to the stdin-connected fifo and read the results (line by line, with bash's read for example) from the stdout fifo. Then use the results to decide where to send the next listing command (and print it, or whatever else you want to do).
In short: Not something bash scripting is suitable for :) (Until you find a tool that does what you want by itself of course)
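That said, here is a very rough sketch of the fifo approach (the host, credentials and commands sent are placeholders; a real script would need to interleave reads and writes based on the responses):
mkfifo ftp_in ftp_out
ftp -n your-ftp-host < ftp_in > ftp_out &    # one session, driven through the fifos
exec 3> ftp_in                               # hold the write end of the command fifo open
printf 'user username password\nls\nbye\n' >&3
exec 3>&-                                    # closing the fifo lets ftp see end-of-input
cat ftp_out                                  # read back the listing produced by the session
rm ftp_in ftp_out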
If you just want to create a listing of all files and folders, you can use ssh instead. Something like this (but check the documentation for correct usage):
$ ssh user@host "ls -R /path"