Gunzip a file on remote server without copying - shell

I have a file named abc.tar.gz on server1 and want to extract it on server2 using SSH, without copying it to server2.
I tried this, but it doesn't work:
gunzip -c abc.tar.gz "ssh user@server2" | tar -xvf -

You are mixing things up. Try to understand what you are copying (and maybe also this answer).
Your pipeline needs a few steps:
1. read and decompress the file where it lives (server1): gunzip -c abc.tar.gz
2. send the stream to the other machine: | ssh user@server2
3. and have ssh run a program on server2 that consumes the stream: tar -xvf -
So: gunzip -c abc.tar.gz | ssh user@server2 tar -xvf -
If server2 is a capable machine (not an old embedded device), it is probably better to just use cat on server1 and do the gunzip on server2: the data then crosses the network compressed, so there is less traffic to send and it is probably also faster.
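A minimal sketch of that variant, run from server1 (names as in the question; it assumes the tar on server2 understands -z):
cat abc.tar.gz | ssh user@server2 "gunzip -c | tar -xvf -"
or, letting tar handle the decompression itself:
ssh user@server2 "tar -xzvf -" < abc.tar.gz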
Please try to understand this before copying and executing it on your machine. There are man pages for all of these commands.

tar & split remote files saving output locally; remove "tar: Removing leading `/' from member names" message from output

This is a 2-part question.
I've made a bash script that logs into a remote server, makes a list.txt, and saves it locally.
#!/bin/bash
sshpass -p "xxxx" ssh user@pass ls /path/to/files/ | grep "^.*iso" > list.txt
It then starts a for loop using list.txt:
for f in $(cat list.txt); do
The next command splits the target file and saves it locally:
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czf - /path/to/files/$f | split -b 10M - "$f.tar.bz2.part"
Question 1
I need help understanding the above command: why is it saving the *part files locally? Even though that is what I intend to do, I would like to understand it better. How would I do this the other way round, i.e. tar and split the files but save the output to a remote directory (flip around what is happening in the above command, using the same tools; sshpass is a requirement)?
Question 2
When running the above command, even though I have made it non-verbose, it still prints this message:
tar: Removing leading `/' from member names
How do I get rid of it? I have my own echo output as part of the script. After searching online I tried the following, but I think piping a few commands together confuses tar and breaks the operation.
I have tried these with no luck:
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czfP - /path/to/files/$f | split -b 10M - "$f.tar.bz2.part"
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czf -C /path/to/files/$f | split -b 10M - "$f.tar.bz2.part"
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czf - /path/to/files/$f | split -b 10M - "$f.tar.bz2.part" > /dev/null 2>&1
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czf - /path/to/files/$f > /dev/null 2>&1 | split -b 10M - "$f.tar.bz2.part"
All of the above break the operation, and I would like it to not display any messages at all. I suspect it has something to do with regex and how the pipe passes arguments through. Any input is appreciated.
Anyway, this is just part of the script; the other part uploads the processed file after tarring and splitting it, but I've had to break it into a few commands, a 'tar | split' locally and then an upload via rclone. It would be far more efficient if I could pipe the output of split and save it remotely via ssh.
First and foremost, you must consider the security implications of using sshpass.
About question 1:
Using tar with the -f - option creates the archive on the fly and sends it to stdout.
The | separates the commands:
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czf - /path/to/files/$f - runs on the remote machine
split -b 10M - "$f.tar.bz2.part" - runs in the local shell
The second command reads its stdin from the first command (the tar output) and creates the part files locally.
If you want to perform all the operations on the remote machine, you can enclose the rest of the command in quotes (read up on shell quoting). Note that $f is set by your local loop, so the remote command has to be in double quotes for $f to be expanded locally before it is sent:
sshpass -p "xxxx" ssh user@pass "tar --no-same-owner -czf - /path/to/files/$f | split -b 10M - '$f.tar.bz2.part'"
About question 2:
tar: Removing leading `/' from member names is a warning generated by tar, which writes errors and warnings to STDERR; in a terminal, STDERR defaults to the user's screen.
So you can suppress tar's warnings by adding 2>/dev/null:
sshpass -p "xxxx" ssh user@pass tar --no-same-owner -czf - /path/to/files/$f 2>/dev/null | split -b 10M - "$f.tar.bz2.part"
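Alternatively, you can avoid the warning instead of suppressing it: tar strips the leading / (and prints that message) only when it is given absolute paths, so switching to the directory with -C and archiving a relative path keeps tar silent. A minimal sketch using the placeholders from the question:
sshpass -p "xxxx" ssh user@pass "tar --no-same-owner -C /path/to/files -czf - $f" | split -b 10M - "$f.tar.bz2.part"   # member names are relative, so no warning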

How to use a source file while executing a script on a remote machine over SSH

I am running a bash script on a remote machine over SSH.
ssh -T $DBHOST2 'bash -s' < $DIR/script.sh <arguments>
Within the script I am using a source file that defines the functions used in script.sh.
DIR=`dirname $0` # to get the location where the script is located
echo "Directory is $DIR"
. $DIR/source.bashrc # source file
But since the source file is not present in the remote machine it results in an error.
Directory is .
./source.bashrc: No such file or directory
I can always define the functions in the main script rather than in a separate source file, but I was wondering whether there is any way to keep them separate.
Edit: Neither the source file nor the script is located on the remote machine.
Here are two ways to do this, both requiring only one ssh session.
Option 1: Use tar to copy your scripts to the server
tar cf - $DIR/script.sh $DIR/source.bashrc | ssh $DBHOST2 "tar xf -; bash $DIR/script.sh <arguments>"
This 'copies' your scripts to $DBHOST2 and executes them there.
Option 2: Use bashpp to include all code in one script
If copying files onto $DBHOST2 is not an option, use bashpp.
Replace your . calls with #include and then run it through bashpp:
bashpp $DIR/script.sh | ssh $DBHOST2 bash -s
Alternatively, you can feed the concatenation of the source file and the script to the remote shell:
ssh -T $DBHOST2 'bash -s' <<< "$(cat source_file $DIR/script.sh)"
The following achieves what I am trying to do.
1. Copy the source file to the remote machine:
scp $DIR/source.bashrc $DBHOST2:./
2. Execute the local script with arguments on the remote machine via SSH:
ssh $DBHOST2 "bash -s" -- < $DIR/script.sh <arguments>
3. Copy the remote logfile logfile.log to the local file dbhost2.log and remove the source file and logfile from the remote machine:
ssh $DBHOST2 "cat logfile.log; rm -f source.bashrc logfile.log" > dbhost2.log
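If nothing may be copied to the remote machine at all, the same effect is possible in a single session by concatenating the source file and the script on the fly (a sketch, assuming the now-redundant ". $DIR/source.bashrc" line is removed from script.sh; argument passing as in step 2):
cat $DIR/source.bashrc $DIR/script.sh | ssh $DBHOST2 "bash -s" -- <arguments>
Since the function definitions are prepended to the script text, there is nothing left for the remote shell to source.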

Bash script for gathering info from multiple servers [duplicate]

This question already has answers here:
Multiple commands on remote machine using shell script
(3 answers)
Closed 6 years ago.
I've only got a little question for you.
I have made a little shell script that allows me to connect to a server and gather certain files and compress them to another location on another server, which works fine.
It is something in the vein of:
#!/bin/bash
ssh -T user@server1
mkdir /tmp/logs
cd /tmp/logs
tar -czvf ./log1.tgz /somefolder/docs
tar -czvf ./log2.tgz /somefolder/images
tar -czvf ./log3.tgz /somefolder/video
cd ..
tar -czvf logs_all.tgz /tmp/logs
What I would really like to do is:
Log in with the root password when connecting via ssh
Run the commands
Log out
Log in to the next server
Repeat until all logs have been compiled.
Also, it is not essential, but if I could display the progress (as a bar perhaps) then that would be cool!!
If anyone can help that would be awesome.
I am in between n00b and novice so please be gentle with me!!
ssh can take a command as an argument to run on the remote machine:
ssh -T user@server1 "tar -czf - /somefolder/anotherfolder"
This performs the tar command on the remote machine, writing tar's output to stdout, which the ssh command passes back to the local machine. So you can write it locally somewhere (there's no need for that /tmp/logs/ on the remote machine):
ssh -T user@server1 "tar -czf - /somefolder/anotherfolder" > /path/on/local/machine/log1.tgz
If you just want to collect them on the remote server (no wish to transfer them to the local machine), just do the straightforward version:
ssh -T user@server1 "mkdir /tmp/logs/"
ssh -T user@server1 "tar -cvzf /tmp/logs/log1.tgz /somefolder/anotherfolder"
ssh -T user@server1 "tar -cvzf /tmp/logs/log2.tgz /somefolder/anotherfolder"
…
ssh -T user@server1 "tar -czvf /tmp/logs_all.tgz /tmp/logs"
You could send a tar command that writes a compressed archive to standard out and save it locally:
ssh user@server1 'tar -C /somefolder -czvf - anotherfolder' > server1.tgz
ssh user@server2 'tar -C /somefolder -czvf - anotherfolder' > server2.tgz
...
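To cover several servers, wrap one of these commands in a loop; and if pv happens to be installed locally, inserting it into the pipe gives a rough progress display (bytes transferred and throughput). A sketch, with the host list as a placeholder:
for host in server1 server2 server3; do
    ssh user@$host 'tar -C /somefolder -czf - anotherfolder' | pv > "$host.tgz"   # pv is optional
done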

How to tar a directory from different servers and copy it to the current server on which I am working

Actually I want to tar a directory on several different servers and move each tar to the current server I am working on, so I am writing a shell script for that and have used the following command:
ssh hostname " tar -cvj site1.tar.gz -C /www/logs/test " > site2.tar.gz
I have also tried moving it to a directory on my current server but was not able to achieve that either; the command tried:
ssh hostname " tar -cvj site1.tar.gz -C /www/logs/test " > /www/logs/automate
I tried this and think that it will not work, so is there any other way to achieve this? Please help; it will be appreciated. Thanks.
With GNU tar:
ssh hostname "tar -C /www/logs/test -cjf -" > your.tar.bz2
-f -: write archive to stdout
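Note that the redirection target must be a file name, not a directory, which is why > /www/logs/automate could not work; something along these lines should (the archive name under /www/logs/automate is illustrative):
ssh hostname "tar -C /www/logs/test -cjf -" > /www/logs/automate/site1.tar.bz2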

Bash scp several files password issue

I am trying to copy several files from a remote server to my local drive in Bash using scp.
Here's the relevant part of the code:
scp -r -q $USR@$IP:/home/file1.txt $PWD
scp -r -q $USR@$IP:/home/file2.txt $PWD
scp -r -q $USR@$IP:/root/file3.txt $PWD
However, the problem is that EVERY time it wants to copy a file, it asks for the password of the server, which is the same each time. I want it to ask only once and then copy all my files.
And please do not suggest rsync or setting up key authentication, since I do not want to do that.
Are there any other ways...?
Any help would be appreciated
You can use an expect script or sshpass.
sshpass -p 'password' scp ...
Or with expect (the trailing \r presses Enter, and the final expect eof keeps the script alive until the copy finishes instead of killing scp right after sending the password):
#!/usr/bin/expect -f
spawn scp ...
expect "password:"
send "ur_password\r"
expect eof
A disadvantage is that your password is then stored in plaintext.
I'm assuming that if you can scp files from the remote server, you can also ssh in and create a tarball of the remote files.
The -r flag is recursive, for copying entire directories, but you are listing distinct files in your command, so -r is superfluous.
Try this from the bash shell on the remote system:
$ mkdir /home/file_mover
$ cp /home/file1.txt /home/file_mover/
$ cp /home/file2.txt /home/file_mover/
$ cp /root/file3.txt /home/file_mover/
$ tar -cvf /home/myTarball.tar /home/file_mover/
and then, back on the local machine:
$ scp -q $USR@$IP:/home/myTarball.tar $PWD
Well, in this particular case, you can write...
scp -q $USR@$IP:/home/file[1-2].txt $PWD
(Note that file3.txt lives under /root rather than /home, so a /home glob cannot match it and it still needs its own copy.)
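If your client is OpenSSH, connection multiplexing is one more way to type the password only once without keys or rsync: a master connection authenticates once and later scp calls reuse it. A sketch (the ControlPath location is arbitrary; the options are standard OpenSSH ones):
ssh -o ControlMaster=yes -o ControlPath=~/.ssh/cm-%r@%h-%p -o ControlPersist=5m -fN $USR@$IP   # prompts for the password once
scp -o ControlPath=~/.ssh/cm-%r@%h-%p -q $USR@$IP:/home/file1.txt $PWD
scp -o ControlPath=~/.ssh/cm-%r@%h-%p -q $USR@$IP:/home/file2.txt $PWD
scp -o ControlPath=~/.ssh/cm-%r@%h-%p -q $USR@$IP:/root/file3.txt $PWD
ssh -o ControlPath=~/.ssh/cm-%r@%h-%p -O exit $USR@$IP   # close the master connection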
