tar & split remote files saving output locally remove "tar: Removing leading `/' from member names" message from output - bash

This is a 2 part question.
I've made a bash script that logs into a remote server, makes a list.txt, and saves it locally.
#!/bin/bash
sshpass -p "xxxx" ssh user@pass ls /path/to/files/ | grep "^.*iso" > list.txt
It then starts a for loop using the list.txt
for f in $(cat list.txt); do
The next command splits the target file and saves it locally
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czf - /path/to/files/$f | split -b 10M - "$f.tar.bz2.part"
Question 1
I need help understanding the above command: why is it saving the *part files locally? Even though that is what I intend to do, I would like to understand it better. How would I do this the other way round, i.e. tar and split files saving the output to a remote directory (flip around what is happening in the above command, using the same tools; sshpass is a requirement)?
Question 2
When running the above command even though I have made it not verbose it still prints this message
tar: Removing leading `/' from member names
How do I get rid of it, since I have my own echo output as part of the script? I have tried the following after searching online, but I think piping a few commands together confuses tar and breaks the operation.
I have tried these with no luck
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czfP - /path/to/files/$f | split -b 10M - "$f.tar.bz2.part"
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czf -C /path/to/files/$f | split -b 10M - "$f.tar.bz2.part"
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czf - /path/to/files/$f | split -b 10M - "$f.tar.bz2.part" > /dev/null 2>&1
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czf - /path/to/files/$f > /dev/null 2>&1 | split -b 10M - "$f.tar.bz2.part"
All of the above break the operation and I would like it to not display any messages at all. I suspect it has something to do with regex and how the pipe passes through arguments. Any input is appreciated.
Anyway, this is just part of the script; the other part uploads the processed file after tarring and splitting it, but I've had to break it up into a few commands: a 'tar | split' locally, then uploading via rclone. It would be far more efficient if I could pipe the output of split and save it remotely via ssh.

First and foremost, you must consider the security vulnerabilities when using sshpass.
About question 1:
Using tar with the -f - option creates the archive on the fly and sends it to stdout.
The | separates the commands:
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czf - /path/to/files/$f - runs on the remote machine
split -b 10M - "$f.tar.bz2.part" - runs in the local shell
The second command reads the first command's output (the tar stream) on its stdin and creates the part files locally.
If you want to perform all the operations on the remote machine, you can enclose the rest of the pipeline in quotes like this (read other sources about quoting). Note the outer double quotes: they let your local shell expand $f before the command is sent, while the remote shell runs the whole pipeline:
sshpass -p "xxxx" ssh user@pass "tar --no-same-owner -czf - /path/to/files/$f | split -b 10M - '$f.tar.bz2.part'"
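To make the mechanics concrete, here is a runnable local sketch of the same tar | split pipeline (no remote host needed). The ssh variants in the comments show which side of the pipe moves to the remote machine; host, paths, and file names there are placeholders.

```shell
# Which side of the pipe runs where:
#   remote tar, local split (the command in the question):
#     sshpass -p "xxxx" ssh user@host "tar -czf - /path/to/files/$f" | split -b 10M - "$f.tar.gz.part"
#   local tar, remote split (the flipped direction):
#     tar -czf - "/local/files/$f" | sshpass -p "xxxx" ssh user@host "split -b 10M - '/remote/dir/$f.tar.gz.part'"
set -e
workdir=$(mktemp -d)
cd "$workdir"
mkdir payload
head -c 1048576 /dev/urandom > payload/disk.img   # 1 MiB of sample data

# tar writes the archive to stdout (-f -); split cuts the stream into
# fixed-size pieces named part.aa, part.ab, ...
tar -czf - payload | split -b 262144 - part.

# concatenating the pieces in order restores a valid archive
cat part.* | tar -tzf -
```

The only thing ssh changes is where each end of the pipe executes; the stream itself is identical.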
About question 2.
The message tar: Removing leading `/' from member names is generated by tar, which sends errors/warnings to stderr; in a terminal, stderr defaults to the user's screen.
So you can suppress tar's warnings by adding 2>/dev/null directly after the command that produces them (i.e., after the ssh command, before the pipe):
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czf - /path/to/files/$f 2>/dev/null | split -b 10M - "$f.tar.bz2.part"
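The placement matters because a pipe only carries stdout; stderr bypasses it entirely. A minimal local demo, with a stand-in function playing the role of the remote tar:

```shell
# stand-in producer: one line of real output, one warning on stderr
produce() { echo "data"; echo "a warning" >&2; }

# redirecting on the consumer (attempts 3 and 4 above) either does nothing
# to the warning or throws away the wanted stdout; the redirection has to
# be attached to the producer:
produce 2>/dev/null | cat        # prints only: data
```

This is why the working fix puts 2>/dev/null on the ssh command (the local producer of the forwarded remote stderr) rather than anywhere after the pipe.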

Related

sshpass want to use parameter of sftp

Hi, I created the following script to initialize my storage box so I can use rsync without a password later. Last year it worked, if I remember correctly...
cat .ssh/id_rsa.pub >> .ssh/storagebox_authorized_keys
echo -e "mkdir .ssh \n chmod 700 .ssh \n put $.ssh/storagebox_authorized_keys" \
".ssh/authorized_keys \n chmod 600 .ssh/authorized_keys" | sshpass -p ${storage_password} \
sftp -P ${storage_port} -i .ssh/id_rsa ${storage_user}@${storage_address}
today I get following error:
sshpass: invalid option -- 'i'
but the parameter -i belongs to sftp and not sshpass - is there a possibility to pass the parameters in the correct way?
edit: I switched the position of
-i .ssh/id_rsa ${storage_user}@${storage_address}
and get this error
sshpass: Failed to run command: No such file or directory
edit: it seems like an sftp problem...
after discussion, updating answer to properly support automation
step 1:
create an sftp "batch file" e.g: ~/.ssh/storage-box_setup.sftp
mkdir .ssh
chmod 700 .ssh
put /path/to/authorized_keys_file .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
/path/to/authorized_keys_file is a file containing public keys of ONLY the keys that should have access to your storage box (.ssh/storagebox_authorized_keys)
step 2:
update automation script command to
sshpass -p <password> -- sftp -P <port> -b ~/.ssh/storage-box_setup.sftp user@host
the -b flag was the answer you needed.
refer: man sftp
-b batchfile
Batch mode reads a series of commands from an input batchfile instead of stdin. Since it lacks user interaction it should be used in conjunction with non-interactive authentication.
--
sshpass -p ${storage_password} -- \
sftp -P ${storage_port} -i .ssh/id_rsa \
${storage_user}@${storage_address}
the -- before sftp is a way to tell sshpass (and most other programs) to stop parsing arguments.
everything after -- is treated as the final operands, which in the case of sshpass is the command to be executed (sftp -i .ssh/id_rsa ...)
in case you're wondering, switching the position of -i tells sshpass to execute -i as a program, hence it fails with "No such file or directory"
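The `--` convention is not specific to sshpass; most option parsers honor it. A quick local demo of the same idea, assuming GNU grep:

```shell
printf '%s\n' 'keep calm' '-i is an sftp flag' > demo.txt

# without --, grep would try to parse -i as its own option;
# with --, -i is treated as the search pattern
grep -- -i demo.txt          # prints: -i is an sftp flag
```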

Gunzip a file on remote server without copying

I have a file named abc.tar.gz on server1 and wanted to extract it on server2 using SSH and without copying it to the server2.
Tried like this, but doesn't work:
gunzip -c abc.tar.gz "ssh user@server2" | tar -xvf -
You are mixing things up. Try to understand what you are copying (and maybe also this answer).
Your program needs a few steps:
1- read and decompress the file on server1: gunzip -c abc.tar.gz
2- send the stream to server2: | ssh user@server2
3- and have ssh execute a program there: (still on the ssh command line) tar -xvf -
so: gunzip -c abc.tar.gz | ssh user@server2 tar -xvf -
If server2 is a capable machine (not an old embedded device), it is probably better to just use cat on server1 and do the gunzip on server2: less traffic to send, so probably also faster.
Please try to understand it before you copy and execute it on your machine. There are man pages for all these commands.
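A runnable local sketch of the same idea: cat plays server1's side of the pipe and tar plays server2's side; inserting ssh between them is the only change needed for the remote case.

```shell
# remote form: cat abc.tar.gz | ssh user@server2 'tar -xzf -'
set -e
workdir=$(mktemp -d)
cd "$workdir"
mkdir src && echo "hello" > src/abc.txt
tar -czf abc.tar.gz src               # stand-in for the file on server1

mkdir dest
cat abc.tar.gz | tar -xzf - -C dest   # the stream crosses the "wire" here
cat dest/src/abc.txt                  # prints: hello
```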

Bash script for gathering info from multiple servers [duplicate]

This question already has answers here:
Multiple commands on remote machine using shell script
(3 answers)
Closed 6 years ago.
I've only got a little question for you.
I have made a little shell script that allows me to connect to a server and gather certain files and compress them to another location on another server, which works fine.
It is something in the vein of:
#!/bin/bash
ssh -T user@server1
mkdir /tmp/logs
cd /tmp/logs
tar -czvf ./log1.tgz /somefolder/docs
tar -czvf ./log2.tgz /somefolder/images
tar -czvf ./log3.tgz /somefolder/video
cd ..
tar -czvf logs_all.tgz /tmp/logs
What I would really like to do is:
Login with the root password when connect via ssh
Run the commands
Logout
Login to next server
Repeat until all logs have been compiled.
Also, it is not essential but, if I can display the progress (as a bar perhaps) then that would be cool!!
If anyone can help that would be awesome.
I am in between n00b and novice so please be gentle with me!!
ssh can take a command as argument to run on the remote machine:
ssh -T user@server1 "tar -czf - /somefolder/anotherfolder"
This will perform the tar command on the remote machine, writing the tar's output to stdout which is passed to the local machine by the ssh command. So you can write it locally somewhere (there's no need for that /tmp/logs/ on the remote machine):
ssh -T user@server1 "tar -czf - /somefolder/anotherfolder" > /path/on/local/machine/log1.tgz
If you just want to collect them on the remote server (no wish to transfer them to the local machine), just do the straightforward version:
ssh -T user@server1 "mkdir /tmp/logs/"
ssh -T user@server1 "tar -cvzf /tmp/logs/log1.tgz /somefolder/anotherfolder"
ssh -T user@server1 "tar -cvzf /tmp/logs/log2.tgz /somefolder/anotherfolder"
…
ssh -T user@server1 "tar -czvf /tmp/logs_all.tgz /tmp/logs"
You could send a tar command that writes a compressed archive to standard out and save it locally:
ssh user@server1 'tar -C /somefolder -czvf - anotherfolder' > server1.tgz
ssh user@server2 'tar -C /somefolder -czvf - anotherfolder' > server2.tgz
...
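To cover the "repeat until all logs have been compiled" part, a hedged sketch of the per-host loop. Here run_remote is a local stand-in so the loop structure can be exercised without any servers; for real use, swap in the ssh call shown in the comment (host names and paths are placeholders).

```shell
# for real use, replace the body with:  ssh -T "user@$1" "$2"
run_remote() { sh -c "$2"; }        # local stand-in; ignores the host name

mkdir -p /tmp/demo/somefolder
echo "log line" > /tmp/demo/somefolder/app.log

for host in server1 server2; do
  # one compressed tarball per host, written locally;
  # no /tmp/logs staging directory is needed on the remote side
  run_remote "$host" "tar -C /tmp/demo -czf - somefolder" > "/tmp/${host}_logs.tgz"
done
```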

Download file with curl and pipe it through sha1sum into tar

I would like to download a file with curl, check its checksum with sha1sum (or a similar tool) and pipe the file into tar to unpack it, provided that the exit status of sha1sum was 0.
I know that without the checksum verification it would be a simple curl <link> | tar x, however I'm having a hard time fitting sha1sum in there since its syntax is very foreign to me. I could probably manage to do it if sha1sum were able to receive the checksum as a parameter and read the file from stdin, but as far as I have seen this is not possible. Is there a way to achieve this nonetheless?
set -ex; \
curl -o wordpress.tar.gz -fSL "https://wordpress.org/wordpress-4.7.3.tar.gz"; \
echo "35adcd8162eae00d5bc37f35344fdc06b22ffc98 *wordpress.tar.gz" | sha1sum -c -; \
tar -xzf wordpress.tar.gz -C ./
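On the "read the file from stdin" point: GNU sha1sum can in fact check data arriving on stdin, because a filename of "-" in the checksum list refers to standard input. A local sketch (a pre-made tarball stands in for the curl download so this runs offline). Caveat: if you also pipe the same stream into tar in one pass, extraction finishes before verification does, so the temp-file approach above remains the safer one.

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
echo "payload" > f.txt
tar -czf release.tar.gz f.txt                 # stand-in for the download
sum=$(sha1sum release.tar.gz | cut -d' ' -f1)
printf '%s  -\n' "$sum" > expected.sha1       # "-" means: check stdin

# remote form: curl -fSL <url> | sha1sum -c expected.sha1
cat release.tar.gz | sha1sum -c expected.sha1 # prints: -: OK
```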

Bash scp several files password issue

I am trying to copy several files from a remote server into local drive in Bash using scp.
Here's the part of the code
scp -r -q $USR@$IP:/home/file1.txt $PWD
scp -r -q $USR@$IP:/home/file2.txt $PWD
scp -r -q $USR@$IP:/root/file3.txt $PWD
However, the problem is that EVERY time that it wants to copy a file, it keeps asking for the password of the server, which is the same. I want it to ask only once and then copy all my files.
And please, do not suggest rsync nor making a key authentication file since I do not want to do that.
Are there any other ways...?
Any help would be appreciated
You can use an expect script or sshpass:
sshpass -p 'password' scp ...
#!/usr/bin/expect -f
spawn scp ...
expect "password:"
send "ur_password\r"
expect eof
A disadvantage is that your password is now in plaintext.
I'm assuming that if you can scp files from the remote server, you can also ssh in and create a tarball of the remote files.
The -r flag is for recursively copying entire directories, but you're listing distinct files in your command, so -r is superfluous.
Try this from the bash shell on the remote system:
$ mkdir /home/file_mover
$ cp /home/file1.txt /home/file_mover/
$ cp /home/file2.txt /home/file_mover/
$ cp /root/file3.txt /home/file_mover/
$ tar -cvf /home/myTarball.tar /home/file_mover/
$ scp -q $USR@$IP:/home/myTarball.tar $PWD
Well, in this particular case, you can write...
scp -q $USR@$IP:/home/file[1-3].txt $PWD
