BASH 'df' command showing the same numbers for all directories? - bash

I'm trying to get the disk usage of everything within certain directories, which I've been attempting to do with commands like this:
df -h -k /var/www/html/exampledirectory1
df -h -k /var/www/html/exampledirectory2
df -h -k /var/www/html/exampledirectory3
The problem is that every single directory on the server (even if I just run 'df -h' from within a certain directory) is giving me the exact same numbers, down to the kilobyte.
Obviously this can't be correct, but I have no idea what it is I'm doing wrong. Can anyone help me out?
(I'm using BASH version 4.2.25 and I'm running Ubuntu 14.10)

You want to use the du command. df measures disk usage of a whole filesystem (partition), which is why every path on the same filesystem reports the same numbers. Here is an example to determine the disk space used by a directory and all its sub-directories:
du -sh /home/darwin
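As a minimal sketch applied to the paths from the question (the directory names are assumed from the original post), you can loop du over each one:
for d in /var/www/html/exampledirectory1 /var/www/html/exampledirectory2 /var/www/html/exampledirectory3
do
    du -sh "$d"   # per-directory total; df would report the whole filesystem these directories live on
done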

Related

rsync over ssh results in 0 files, but no error message

I'm trying to rsync a large directory of around 200 GB from a server to my local external hard drive. I can ssh onto the server and see the directory fine. I can also cd into the external hard drive fine. When I try to rsync the files across, I don't get an error, but the last line of the rsync output is 'total size is 0 speedup is 0.00', and there are no files in the destination directory.
Here's how I ssh onto the server successfully:
ssh skgtmdf@live.rd.ucl.ac.uk
Here's my rsync command:
rsync -avrt -progress -e "ssh skgtmdf@live.rd.ucl.ac.uk:/mnt/gpfs/live/rd01__/ritd-ag-project-rd012x-smmca23/" "/Volumes/DUAL DRIVE/ONP/2022.08.10_results/"
And here's the rsync output:
sending incremental file list
drwxrwxrwx 65,536 2022/08/10 21:32:06 .
sent 57 bytes received 64 bytes 242.00 bytes/sec
total size is 0 speedup is 0.00
What am I doing wrong?
The way you have it quoted, the source path is part of the remote shell option (-e value) rather than a separate argument as it should be.
rsync -avrt -progress -e "ssh skgtmdf@live.rd.ucl.ac.uk:/mnt/gpfs/live/rd01__/ritd-ag-project-rd012x-smmca23/" "/Volumes/DUAL DRIVE/ONP/2022.08.10_results/"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is all part of the `-e` option value
This means rsync doesn't see that as a sync source at all, but just part of the command it'll use to connect to the remote system. I'm not sure why this doesn't lead to an error. In any case, the fix is simple: don't include ssh with the source path.
As I noticed later (see comments) the --progress option needs a double-dash or it'll be wildly misparsed. Fixing both of these things gives:
rsync -avrt --progress -e ssh "skgtmdf@live.rd.ucl.ac.uk:/mnt/gpfs/live/rd01__/ritd-ag-project-rd012x-smmca23/" "/Volumes/DUAL DRIVE/ONP/2022.08.10_results/"
In fact, since ssh is the default command for making a remote connection, you can leave off -e ssh entirely:
rsync -avrt --progress "skgtmdf#live.rd.ucl.ac.uk:/mnt/gpfs/live/rd01__/ritd-ag-project-rd012x-smmca23/" "/Volumes/DUAL DRIVE/ONP/2022.08.10_results/"
rsync -azve ssh user@host:/src/ target/
Normally you don't need to wrap the -e flag's value in quotes; the extra quoting is probably mangling the connection string.
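As a quick sanity check before running the full transfer, adding -n (--dry-run) to the corrected command makes rsync list what it would copy without transferring anything (paths taken from the question above):
rsync -avrtn --progress "skgtmdf@live.rd.ucl.ac.uk:/mnt/gpfs/live/rd01__/ritd-ag-project-rd012x-smmca23/" "/Volumes/DUAL DRIVE/ONP/2022.08.10_results/"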

Assign value to a variable 'during' command - bash

Apologies if the title isn't worded very well; it's hard to explain exactly what I'm trying to do without an example.
I am running a database backup command that creates a file with a timestamp. In the same command I am then uploading that file to a remote location.
pg_dump -U postgres -W -F t db > $backup_dir/db_backup_$(date +%Y-%m-%d-%H.%M.%S).tar && gsutil cp $backup_dir/db_backup_$(date +%Y-%m-%d-%H.%M.%S).tar $bucket_dir
As you can see, it is creating the timestamp during the pg_dump command. However, in the second half of the command the $(date ...) substitution is evaluated again, so the timestamp will be different and gsutil won't find the file.
I'm looking for a way to 'save' or assign the value of the backup file name from the first half of the command, so that I can then use it in the 2nd half of the command.
Ideally this would be done across two separate commands; however, in this particular use case I'm limited to one.
A variation of the advice already given in the comments:
fn=db_backup_$(date +%Y-%m-%d-%H.%M.%S).tar &&
pg_dump -U postgres -W -F t db > "$backup_dir/$fn" &&
gsutil cp "$backup_dir/$fn" "$bucket_dir"
The $fn var makes the whole thing shorter and more readable, too.
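An equivalent variation, assuming the same $backup_dir and $bucket_dir variables, that stores the full path rather than just the file name:
backup_file="$backup_dir/db_backup_$(date +%Y-%m-%d-%H.%M.%S).tar" &&
pg_dump -U postgres -W -F t db > "$backup_file" &&
gsutil cp "$backup_file" "$bucket_dir"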

using "dd" to capture and restore fails?

I used dd to capture the two local VM partitions like this...
# dd if=/dev/sda1 | gzip >mySda1.gz
# dd if=/dev/sda2 | gzip >mySda2.gz
Then I attached two volumes of sufficient size to an already running instance and mounted them (as /mnt/one and /mnt/two), then copied the .gz files up to the instance and used these commands to restore the partitions
# gunzip -c mySda1.gz | dd of=/dev/xvdk
# gunzip -c mySda2.gz | dd of=/dev/xvdl
The gunzip commands do not show any failure, but when I then go to /mnt/one and run ls -a there is nothing there. Why is this? The .gz files are very large. Why does the mounted partition show as blank even though the gunzip command completed?
Before you can write directly to a partition, you must first ensure that it is unmounted.
Linux will not notice if you write directly to the disk behind its back (and, more importantly, it assumes that this will not happen; it will likely get very confused if you modify a mounted file system).
So, the correct procedure would be as follows:
umount /dev/xvdk
gunzip -c mySda1.gz | dd of=/dev/xvdk
mount /dev/xvdk /mnt/one
and again for /dev/xvdl.
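The same sequence for the second volume, with an optional sanity check using file -s to confirm a filesystem actually landed on the device before mounting (the /mnt/two mount point is assumed from the question):
umount /dev/xvdl
gunzip -c mySda2.gz | dd of=/dev/xvdl
file -s /dev/xvdl   # should report a filesystem (e.g. ext4), not just "data"
mount /dev/xvdl /mnt/two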

Using cURL to send JSON within a BASH script

Alright, here's what I'm trying to do. I'm attempting to write a quick build script in bash that will check out a private repository from GitHub on a remote server. To do this as "hands off" as possible, I want to generate a local RSA key set on the remote server and add the public key as a Deploy Key for that particular repository. I know how to do this using GitHub's API, but I'm having trouble building the JSON payload using Bash.
So far, I have this particular process included below:
#!/bin/bash
ssh-keygen -t rsa -N '' -f ~/.ssh/keyname -q
public_key=`cat ~/.ssh/keyname.pub`
curl -u 'username:password' -d '{"title":"Test Deploy Key", "key":"'$public_key'"}' -i https://api.github.com/repos/username/repository/keys
It's just not properly building the payload. I'm not an expert when it comes to string manipulation in Bash, so I could seriously use some assistance. Thanks!
It's not certain, but it may help to quote where you use public_key, i.e.
curl -u 'username:password' \
-d '{"title":"Test Deploy Key", "key":"'"$public_key"'"}' \
-i https://api.github.com/repos/username/repository/keys
If that doesn't fix it, it will be much easier to debug if you turn on the shell's debugging options by putting set -vx near the top of your bash script.
You'll see each line of code (or block: for, while, etc.) as it appears in your file, and then you'll see each line again with the variables expanded to their values.
If you're still stuck, edit your post to show the expanded values of variables for the problem line in your script. What you have looks reasonable at first glance.
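A minimal sketch combining both suggestions, with the payload built in its own variable so the expanded value is easy to inspect under set -vx (the username, password, key name, and repository path are placeholders carried over from the question):
#!/bin/bash
set -vx   # print each line before and after variable expansion

ssh-keygen -t rsa -N '' -f ~/.ssh/keyname -q
public_key=$(cat ~/.ssh/keyname.pub)

# quoting "$public_key" keeps the key (which contains spaces) intact inside the JSON
payload='{"title":"Test Deploy Key", "key":"'"$public_key"'"}'

curl -u 'username:password' -d "$payload" \
  -i https://api.github.com/repos/username/repository/keys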

cp command fails when run in a script called by Hudson

This one is a puzzler. If I run a command from the command line to copy a file remotely, it works perfectly. If I run that same command inside a script on the server (that hosts Hudson), it runs perfectly as well, as does running the job as the hudson user from the command line. However, if I run that exact command as a function inside a bash script from a Hudson job, it fails with:
cp: cannot stat '/opt/flash_board.tar.gz': No such file or directory
The variable is defined as:
original_tarball=flash_board.tar.gz
and is in scope (variable expansion works correctly in the script).
The original command is:
ssh -n -o stricthostkeychecking=no root@$IP_ADDRESS ssh -n -o stricthostkeychecking=no 169.254.0.2 cp /opt/$original_tarball /opt/$original_tarball.bak
I've also tried it as:
ssh -n -p 1601 -o stricthostkeychecking=no root@$IP_ADDRESS cp /opt/$original_tarball /opt/$original_tarball.bak
which points to the correct port, but fails in exactly the same way.
For reference all the variables have been checked to be valid. I originally thought this was a substitution error, but that doesn't seem to be the case, so then I tried running it with Hudson credentials as:
sudo -u hudson ssh -n -o stricthostkeychecking=no root@$IP_ADDRESS ssh -n -o stricthostkeychecking=no 169.254.0.2 cp /opt/$original_tarball /opt/$original_tarball.bak
I get the exact same results (it works). So it's only when this command is run from a Hudson job that it fails.
Here's the sequence of events:
Hudson job sets parameters & calls a shell script.
A function inside the script tries to copy the files remotely from an embedded Montevista (Linux) board across an SPI bus to a second embedded Arago (Linux) board
Both boards are physically on the same mother board, but there's no way to directly access the Arago board except through a serial console session (which isn't feasible, this is an automation job that runs across the network).
I've tried this using ssh with -p 1601 (the correct port to the Arago side).
Can I use scp to copy a remote file to the same location as the remote file with a different file extension?
Something like:
scp -o stricthostkeychecking=no root@$IP_ADDRESS /opt/$original_tarball /opt/$original_tarball.bak
I had a couple of the devs take a look at this and they were stumped as well. Anyone got any ideas (A) why this fails & (B) how to work around it. I'm pretty sure I can write a script to run locally on the remote machine, but that doesn't seem like it should be necessary.
Oh, and if I run the exact same command on the Montevista board (which means I don't have to go across the SPI bus (169.254.0.2), it works perfectly from the Hudson job.
So, this turned out to be something completely unrelated to the question. I broke the problem down into little pieces with a test Hudson script, adding more and more complexity from the original script until it failed as before.
It turned out to be pilot error. I'd written an if statement to differentiate between the two boards (Arago & Montevista), and then abstracted out the variables passed to the if statement to the point where it was ambiguous which board was being passed in. So the if logic always grabbed the first match (as it should), and the flash script I was trying to copy on the Arago board didn't exist on the Montevista board (well, it has a different name), so the error returned was absolutely correct.
Sorry for the spin up and thanks for all the effort to help.
cp: cannot stat '/opt/flash_board.tar.gz': No such file or directory
This is saying that Hudson cannot see the file. I would do an ls -la /opt in that shell script of yours. This will show you the permissions on the /opt directory, and whether your script can list that file.
While you're at it, run df on the Hudson machine too and see if that /opt directory is a remote mount or something else that could be problematic.
You've already said that you logged in as the user that runs the Hudson task and executed it from the workspace directory.
Right now, I suspect that the directory permission is an issue.
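As a concrete sketch of those checks (hypothetical lines, not from the original answer), the Hudson job's script could log the following before attempting the copy:
ls -la /opt   # permissions on /opt and whether flash_board.tar.gz is visible
df -h /opt    # whether /opt is a remote or otherwise unusual mount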
The obvious way that goes wrong is that somehow it is being run on the wrong machine, possibly due either to a line-length limit or to weird quoting issues.
I'd try changing the command to … uname -a or … hostname -f to see if you get the right machine. Or, alternatively, … cp /proc/cpuinfo /tmp/this-machine and then see which machine gets the file.
edit: I see now that the OP has answered his own question. I'll leave this here in case it helps any future visitors with similar issues. I guess I should add "or not running the command you think you're running" to the reasons why this could happen.
