du command not giving accurate results - terminal

When I try to use the du command to see the size of my folders, like this for example:
du -H --max-depth=1 some-folder/
28M
11M
8.0K
4.2M
260K
896K
86M
7.9M
24K
8.6M
22M
14M
6.0M
60K
912K
365M total
The final size does not show the real sum of the numbers above. Why is the summary size wrong?

The command you're executing shows the folders only, so any files directly inside some-folder are added to the total but not listed individually.
Try this:
du -Hs some-folder/*
But note that hidden files (i.e. files with a dot prefix) won't be listed by this command either, because the shell's * glob does not match them.
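If you want the hidden entries counted as well, one option (a sketch assuming GNU du and a POSIX shell; the extra glob patterns match dot files while skipping . and ..) is to pass them explicitly and ask for a grand total:
$ du -Hsc some-folder/* some-folder/.[!.]* some-folder/..?*
# -c appends a "total" line; if no hidden entries exist, the unmatched globs stay literal and du will complain about them
In bash you can instead run shopt -s dotglob so that * matches hidden names too.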

Related

Discrepancy between the size of the file created and the size displayed by du -sh [duplicate]

I had to create a random file of 10 GB, which I can do using dd or fallocate, but the size shown by du -sh is twice the size I created:
$ dd bs=1MB count=10000 if=/dev/zero of=foo
10000+0 records in
10000+0 records out
10000000000 bytes (10 GB, 9.3 GiB) copied, 4.78419 s, 2.1 GB/s
$ du -sh foo
19G foo
$ ls -sh foo
19G foo
$ fallocate -l 10G bar
$ du -sh bar
20G bar
$ ls -sh bar
20G bar
Can someone please explain this apparent discrepancy to me?
Wikipedia says this about GPFS:
The system stores data on standard block storage volumes, but includes an internal RAID layer that can virtualize those volumes for redundancy and parallel access, much like a RAID block storage system.
I conclude that there is at least one non-visible duplicate of every file, so each file actually uses twice as much space as its content. The underlying RAID imposes the double usage.
That would explain it: I have created a similarly large file for other purposes, also using dd, but on an ext4 filesystem, and there the OS reports a file size matching the dd creation size, as intended (no RAID in effect on that drive).
The fact that you indicate that stat does report the correct file size as written by dd confirms what I put forward above.
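A quick way to see the two numbers side by side (a sketch using GNU coreutils on the foo file created above):
$ stat -c 'size=%s bytes  allocated=%b blocks of %B bytes' foo
$ du -h --apparent-size foo   # the size as written by dd
$ du -h foo                   # the space the filesystem actually allocated
On ext4 the two du figures normally agree apart from block rounding; on the GPFS setup described here the allocated space comes out at roughly twice the apparent size.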

du -ahx shows 15G in a folder but there are only 32M of files in it

I've run into a problem.
I have a folder inside /tmp called schemajs.
This folder belongs to a project in Jenkins.
If I run du -ahx, it shows this:
15G .
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517089398135/schemajs-2.1.1-linux-x86_64
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517089398135
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517087611935/schemajs-2.1.1-linux-x86_64
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517087611935
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517085797988/schemajs-2.1.1-linux-x86_64
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517085797988
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517084059192/schemajs-2.1.1-linux-x86_64
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517084059192
66M ./schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517082197124/schemajs-2.1.1-linux-x86_64
The dot shows 15G, although there aren't enough files in the folder to justify this amount of space.
The command ls -lha shows several entries of 4.0K, with a total of 32M.
If I open the directory in vim, it shows more than 7,500 entries.
What could be happening in this particular case?
Platform: CentOS 6.9
4.0K is the size of a directory entry itself, and from what I can see of the du output, your /tmp directory is full of directories (named, for instance, schemajs-2.1.1-linux-x86_64.tar.bz2-extract-1517089398135). Inside those directories are the files occupying the disk space: it is always the same 66 MB tree, named
schemajs-2.1.1-linux-x86_64, extracted over and over again. A couple of hundred such copies at 66 MB each add up to the 15G that du reports, while ls -lha only shows the 4.0K directory entries at the top level.
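A sketch for confirming (and, once you are sure they are stale, cleaning up) the leftovers, assuming they all follow the *-extract-<timestamp> naming pattern shown above; run it from the directory where du -ahx was run:
$ find . -maxdepth 1 -type d -name '*-extract-*' | wc -l    # how many extract directories there are
$ du -shc ./*-extract-* | tail -1                           # how much space they use in total
$ # destructive! only after checking nothing still needs them:
$ # find . -maxdepth 1 -type d -name '*-extract-*' -exec rm -rf {} +
Since this is a Jenkins extraction cache, the longer-term fix is to have the job clean up after itself or to prune /tmp periodically.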

LFTP: CLS with folder size

When using CLS to list files via LFTP, is there a way to show the calculated size of any folders that appear in the listing? Here's what I'm using:
cls -s -h --sort=date
And here's the result:
3.7G bigfile.mp4
4.0K some.folder/
4.0K another.folder/
1.1G anotherfile.psd
All folders show as only 4.0K, which of course does not reflect the total size of their contents. My Google-fu fails me on this one :/
cls shows the folder size reported by the server. In this case that is the size of the directory entry itself, without the size of the nested files. To see the size of all nested files and directories, use lftp's du -h command instead.
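For example, from the lftp prompt (a sketch, assuming an lftp build whose built-in du supports these options):
lftp user@host:/remote/path> du -h --max-depth=1 .
The totals are computed client-side by recursively listing the remote directories, so it can take a while on large trees.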

Filter int, float and character from a text file in a Shell Script

Suppose I have a text file which contains data like this.
The output below was generated by du -sh /home/*:
1.5G user1
2.5G user2
And so on...
Now I want those sizes stored in an array and compared against 5 GB, so I know when a user is consuming more than 5 GB. What can I do?
The du command shows the usage of each folder in the home directory. There is a long list of users, so it would be tedious to check each user's disk usage by hand; I want to be notified when some user is consuming more than 5 GB. I want a shell script that identifies the usage of each directory in /home, and then I will add a mail function to notify myself when the limit is exceeded.
Note: I don't want to implement quotas, as I just want to monitor the usage.
Use du's -t (--threshold) option to specify that you only want to know about directories containing more than a certain amount of data:
$ du -sh -t 5G /home/*
If you're picky about precisely how big a gigabyte is, note that 5G uses multiples of 1024; you may prefer -t 5GB for multiples of 1000, or even -t 5000M to mix them.
For lots of users, you're probably better off writing that using -d 1 instead of -s to avoid the shell having to expand the * into a very long list:
$ du -h -d 1 -t 5G /home/
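Since the end goal is a mail notification, here is a minimal sketch building on that command (assuming a working mail/mailx command is configured; admin@example.com and the 5G limit are placeholders):
#!/bin/sh
# Mail a report if any directory directly under /home exceeds the threshold.
THRESHOLD=5G
RECIPIENT=admin@example.com
report=$(du -h -d 1 -t "$THRESHOLD" /home | awk '$NF != "/home"')
if [ -n "$report" ]; then
    printf '%s\n' "$report" | mail -s "Disk usage over $THRESHOLD in /home" "$RECIPIENT"
fi
Run it from cron at whatever interval suits you; the awk filter simply drops the grand-total line for /home itself (and assumes no spaces in the directory names).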

Finding the size of a file using ls and du: what is the difference? [duplicate]

There is a file named today.log on my server.
ls -l today.log shows 400 GB.
du -sh today.log shows 240 GB.
What is the difference between ls and du?
du shows how much disk the file uses; ls shows how big the file is. These two values can be different.
Files with holes can take up less space than their size. Most files, on the other hand, do not completely fill the blocks of the filesystem, so they take up more space than their size: a file with a single byte still takes up at least one full block (512 or 1024 bytes, typically).
As an example, consider a file with a single byte at position 183738475 (a randomly typed number). That file can be stored on disk using a single block: whenever the kernel asks the filesystem for bytes other than the single stored byte, the filesystem reports them as zero, so there is no need to store anything. (Not all filesystems work this way.) But the size of the file is 183738475, so that is what ls reports, while du reports how many blocks the filesystem uses. du -h reports the number of blocks used, times the block size, converted to a human-readable format.
Keep in mind that the actual numbers will vary depending on your filesystem. For example:
$ echo > foo; ls -l foo |awk '{print $5}'; du foo; du -h foo
1
8 foo
4.0K foo
This file is one byte in size but consumes 8 blocks on disk, and the block size is 512 so those 8 blocks consume 4k. (My filesystem has been optimized for large files, and small files waste a lot of space.)
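To see the opposite case, the sparse file with a hole described above, here is a sketch (the offset is arbitrary; sparse is a scratch file name):
$ dd if=/dev/zero of=sparse bs=1 count=1 seek=183738474 2>/dev/null
$ ls -l sparse | awk '{print $5}'   # apparent size: 183738475 bytes
$ du -h sparse                      # only the blocks actually allocated, typically a few KB
$ rm sparse
Here ls reports the full 183738475 bytes, while du reports only the handful of blocks the filesystem allocated for the single written byte.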
