Lustre file system: how many OSTs? - parallel-processing

Is there a way, besides bugging the sys admins, to determine how many Lustre OSTs a parallel system has?

The lfs utility will give you this info. I believe lfs osts specifically lists the OSTs.

Also, lfs df and lfs df -i will print the space and inode usage per OST, respectively.
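For example, a rough sketch assuming the Lustre filesystem is mounted at /lustre (substitute your own mount point); neither command needs admin rights:

lfs osts /lustre                  # list the OSTs backing this mount point
lfs df /lustre | grep -c OST      # count the per-OST lines as a quick OST tally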

Related

changing the crtime in bash

I want to change crtime properties in bash.
First, I tried to check the crtime with the following command:
stat test
Next, I changed the timestamp:
touch -t '200001010101.11' test
But I realized that if the crtime is already earlier than the date I gave, it can't be changed.
So I want to know how to set the crtime even when it is already in the past.
Edit:
According to this answer to a similar question, you may be able to use debugfs -w -R 'set_inode_field ...' to change inode fields, though this does require unmounting the filesystem.
man debugfs shows us the following available command:
set_inode_field filespec field value
Modify the inode specified by filespec so that the inode field field has value value. The list of valid inode fields which can be set via this command can be displayed by using the command:
set_inode_field -l
Also available as sif.
You can try the following to verify the inode number and name of the crtime field:
stat -c %i test
debugfs -R 'stat <your-inode-number>' /dev/sdb1
Additionally, use df -Th to find the /dev path of your filesystem (e.g. /dev/sdb1).
Followed by:
umount /dev/sdb1
debugfs -w -R 'set_inode_field <your-inode-number> crtime 200001010101.11' /dev/sdb1
Note: In the above commands, inode numbers must be indicated with <> brackets as shown. Additionally, as described here, it may be necessary to flush the inode cache with echo 2 > /proc/sys/vm/drop_caches.
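Putting those steps together, a rough end-to-end sketch might look like the following (assuming the file is /mnt/data/test on /dev/sdb1 with inode number 12345; all of these values are placeholders for your own system):

stat -c %i /mnt/data/test          # note the inode number (say 12345)
df -Th /mnt/data                   # confirm the backing device (say /dev/sdb1)
umount /dev/sdb1
debugfs -w -R 'set_inode_field <12345> crtime 200001010101.11' /dev/sdb1
echo 2 > /proc/sys/vm/drop_caches  # flush the inode cache so the old crtime isn't served from memory
mount /dev/sdb1 /mnt/data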
Original answer:
You might try birthtime_touch:
birthtime_touch is a simple command line tool that works similar to
touch, but changes a file's creation time (its "birth time") instead
of its access and modification times.
From the birthtime_touch Github page, which also notes why this is not a trivial thing to accomplish:
birthtime_touch currently only runs on Mac OS X. The minimum required
version is Mac OS X 10.6. birthtime_touch is known to work for files
that are stored on HFS+ and MS-DOS filesystems.
The main problem why birthtime_touch does not work on all systems and
for all filesystems, is that not all filesystems store a file's
creation time, and for those that actually do store the creation time
there is no standardized API to access/change that information.
This page has more details about the reasons why we haven't yet seen support for this feature.
Beyond this tool, it might be worth looking at the source on Github to see how it's accomplished and whether it might be portable to Unix/Linux. Beyond that, I imagine it would be necessary to write low-level code to expose the parts of the filesystem where crtime is stored.

shell script to trim error_log files in all accounts in server

Hi folks!
Here's my problem: I have a dedicated server using cPanel with several accounts hosted. About 20 of these accounts are generating huge error_log files daily, sometimes over 7 GB, which uses up all of the account's disk space, not to mention cluttering the server! I don't have the time or knowledge right now to find and fix what's causing the problem in each of these accounts. So I'd like a shell script that would trim/truncate the error_log files in all accounts to a maximum size of 500 kb, so they won't grow so large, and a cronjob to run it on a daily basis.
Can somebody help me with this?
TIA! :)
Use the truncate command to shrink or extend the size of each FILE to the specified size:
truncate -s 0 {filename.txt}
ls -lh filename.txt
truncate -s 0 filename.txt
ls -lh filename.txt
The -s option sets the file to the specified SIZE (zero in this example). See the truncate command man page for more details:
man truncate
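To cover all accounts at once, a minimal sketch of the requested script might look like this (assuming cPanel account homes live under /home and the logs are literally named error_log; both the paths and the 500 kb threshold are assumptions to adapt):

#!/bin/sh
# trim_error_logs.sh - cut any error_log larger than 500 kb back down to 500 kb
find /home -type f -name error_log -size +500k -exec truncate -s 500k {} \;

Save it somewhere like /root/trim_error_logs.sh, make it executable, and add a daily cron entry, for example:

0 3 * * * /root/trim_error_logs.sh

Note that truncating to 500 kb leaves a partial log behind; use truncate -s 0 in the find command instead if you'd rather empty the files completely.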

git svn clone large repo under Windows: out of memory - not a large file issue

I am trying to clone a large svn repository with git svn. The repo has got 100000 revisions. The size is about 9GB (pristine folder). Biggest file in repo is 300 MB.
The branch structure is a total mess in the repo. Lots of wrong and missing merge info, no standard layout. I've tried to fetch the latest revisions with and without branches. The command without branches looks like this:
git svn clone url_to_trunk_in_repo -r100000:HEAD --username=svn_user
HEAD is currently at 101037. The process runs for a while (hours) and fails with something like this:
Out of memory during request for 29040 bytes, total sbrk() is 254959616 bytes!
I have got the latest maintained git revision for Windows (Git-1.9.4-preview20140929) running on Windows 7 x64 with 16 GB RAM.
I've done some searching on this kind of failure. Most postings refer to a problem with large files from some years ago, which is most likely fixed already (I haven't checked). That issue concerns large allocations, indicated by the error message about a "large" request. However, the process fails while adding normal implementation files of small size, so I don't think this is a large-file problem.
I've tried to modify the pack settings in etc/gitconfig, which is common advice. However, this didn't help. I didn't expect it to help at all, because the memory error occurs during the download from the svn server, not during git gc, which is what processes the packs, AFAIK.
Further digging led me to a perl memory limitation of 256 MB. This is most likely the cause, because I always get the error with the sbrk() total at almost 256 MB.
Further investigation into perl memory limitations only brings up OS memory limitations: 2 GB on win32 (3 GB with a special switch) and the RAM limit on 64-bit Windows. I also found some advice for raising Cygwin memory limits, but that doesn't apply here.
The 256 MB limit is ridiculous in my eyes, and I'm desperately searching for a way to get around it.
EDIT:
This is probably a Perl 5.8.8 issue (git uses that version). I have also installed Strawberry Perl 5.16.3 x64.
I've written this test code, which is a modification of the code posted at this Stack Overflow question:
use strict;
use warnings;

my @s;
my $count = 200;
my $alloc = 30000000;

for (my $i = 0; $i < $count; $i++) {
    print "Trying allocation...";
    $s[$i] = "a" x $alloc;    # allocate a ~30 MB string
    print "OK\n\n";
}
With Strawberry Perl, this works perfectly. In git bash, I receive the error described before.
Out of memory during "large" request for 33558528 bytes, total sbrk()
is 235180032 bytes at mem.pl line 9.
EDIT 2:
I've tried Strawberry Perl 5.8.8-1. It allocates properly; however, the program crashes after execution. Hence, this is not a bug in perl 5.8.8 in general but in the version that is shipped with git (msys perl 5.8.8).
The configuration of Strawberry Perl and msys perl differs in many entries. The most noticeable difference for me is usemymalloc=n (strawberry) versus usemymalloc=y (msys perl).
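As a side note, you can ask each interpreter how it was built with the -V switch, which prints individual configuration entries, e.g.:

perl -V:usemymalloc    # prints usemymalloc='y' or usemymalloc='n'
perl -V:archname

Running this in git bash versus a Strawberry Perl shell should show the difference described above.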
I also checked for ulimit in git bash, which doesn't show any abnormality:
$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 2046
cpu time (seconds, -t) unlimited
max user processes (-u) 63
virtual memory (kbytes, -v) 2097152
With Cygwin and Git 2.1.1 I'm able to run git svn on my repo without any memory issues. My test program runs fine as well. I haven't tried 1.x versions of Git on Cygwin, but I guess they'd work, because the problem was a memory limitation of msys perl, which Cygwin replaces.
I won't mark this as the answer since it doesn't solve my original question. It is my current workaround for tests with Git.
I'd like to have a Git for Windows distribution with a properly working Perl. There is an issue for upgrading Perl here; however, this does not seem to be an easy task. The same holds for the SVN version used by git svn on Windows: Howto upgrade SVN

diff -r for windows?

To find differences in files between two directory structures I usually run
diff -r dir1/ dir2/
I'm burdened with Windows - is there any way to do this easily on Windows, or should I just get Cygwin?
I use a tool called FreeFileSync. As the name implies it's free :) It does a great job of visually comparing directories and does not move any files unless you tell it to sync.
It also has a portable version so you do not need to install it.
FreeFileSync
Beyond Compare does a pretty good job. It isn't free though (but it is nagware, if I recall).
You can use Cygwin or MinGW, but these are very slow compared to the Unix variants, I believe due to the (possibly intentionally) crippled POSIX implementation on Windows.
If you have windiff, you can try:
windiff -T dir1 dir2
Also, you can download the GNU utils for Windows to run the traditional Unix diff (no Cygwin required):
http://unxutils.sourceforge.net/

read directory file

We all know that in Linux a directory is a special file containing the file names and inode numbers of its constituent files. I want to read the contents of this directory file using a standard command-line utility.
cat . gives an error saying it cannot open a directory.
However, vim can apparently understand the content of this file, probably using readdir. It displays the contents of the directory file in a formatted manner. I want the raw contents of the file. How is this possible?
As far as I can tell, it cannot be done. I was pretty sure dd would do it, and then I found the following:
‘directory’
Fail unless the file is a directory. Most operating systems do not allow I/O to a directory, so this flag has limited utility.
http://www.gnu.org/software/coreutils/manual/html_node/dd-invocation.html
So I think you have your answer there. dd supports it, as probably do a number of other utilities, but that doesn't mean Linux allows it.
I think stat might be the command you're looking for.
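If you really do want the raw directory entries rather than a formatted view, one option on ext2/3/4 is to drop below the kernel's directory I/O restriction with debugfs. A sketch, assuming the directory is /home/user/dir and the filesystem is /dev/sda1 (run as root, and note that debugfs paths are relative to the root of that filesystem):

debugfs -R 'ls -l /home/user/dir' /dev/sda1    # print the directory entries with inode numbers
debugfs -R 'stat /home/user/dir' /dev/sda1     # dump the directory's own inode

This still isn't cat on the directory file, but it reads the entries straight out of the on-disk structures instead of going through readdir.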
