I am on Windows and using diff to compare two text files. It was working successfully for small files, but when I start comparing a 2GB file with another 2GB file it shows me:
diff: C:/inetpub/wwwroot/webclient/database_sequences/est_mouse_2.txt: Permission denied
My code:
$text_files_path = "C:/inetpub/wwwroot/webclient/database_sequences"; // path taken from the error message above
$OldDatabaseFile = "est_mouse_1";
$NewDatabaseFile = "est_mouse_2";
shell_exec("C:\\cygwin64\\bin\\bash.exe --login -c 'diff $text_files_path/$OldDatabaseFile.txt $text_files_path/$NewDatabaseFile.txt > $text_files_path/TempDiff_$OldDatabaseFile$NewDatabaseFile.txt 2>&1'");
est_mouse_1.txt and est_mouse_2.txt were created by me, and I checked the file and folder permissions: both are set to Full Control. All the other text files I compared are in the same folder and were compared successfully.
Any idea?
You are using Cygwin for this operation. Cygwin's heap is extensible; however, it does start out at a fixed size, and attempts to extend it may run into memory which has been previously allocated by Windows.
Heap memory can be allocated up to the size of the biggest available free block in the process's virtual memory (VM). On 64-bit systems this results in a 4GB VM for a process started from that executable. I think that is why you can't compare two 2GB files. I agree the error message is pretty strange, but it indicates that your access to memory is limited. Please see the Cygwin user guide for more info.
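If you want to experiment with a larger initial heap, the user guide describes a heap_chunk_in_mb registry value. A rough sketch (the key and the regtool usage come from the Cygwin documentation; whether this actually helps with two 2GB inputs on a 64-bit install is an assumption on my part):
regtool -i set /HKLM/SOFTWARE/Cygwin/heap_chunk_in_mb 2048
regtool get /HKLM/SOFTWARE/Cygwin/heap_chunk_in_mb
Restart the Cygwin process afterwards so the new value is picked up.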
I am planning to write an Ansible playbook for file system creation. I am using Logical Volume Manager (LVM).
Can anyone help me identify the new LUN using Ansible modules?
Generally speaking, this is very operating-system specific. Your best path forward is to figure out what steps you would take from the command line to detect that there is a new disk, and then use the shell module, parsing the resulting stdout. For example, on Ubuntu, you might run fdisk -l, look for unpartitioned/unallocated drives, and parse the output, as in the sketch below.
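A minimal sketch of the command you would wrap in a shell task (assuming the new LUN shows up as a plain SCSI disk; the grep pattern is illustrative only):
sudo fdisk -l 2>/dev/null | grep '^Disk /dev/sd'
In the playbook, register the output, compare it with the device list recorded before the LUN was presented, and the entry that appears only in the new run is your candidate disk.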
Good evening everyone! I have been working on this for some time, but can't figure it out. I am simply trying to get the working boot code of a bootloader installed on an attached media. I have tried grub legacy, lilo, and grub2... The host system has its drive listed as /dev/sda* and the target attached media is listed as /dev/sdb* and is mounted at /mnt/target.
With grub legacy, I was attempting to work with another media (/dev/sdc*, /mnt/source) that already had it installed and tried dirty hacks like:
dd if=/mnt/source/boot/grub/stage1 of=/dev/sdb bs=446 count=1
dd if=/mnt/source/boot/grub/stage2 of=/dev/sdb bs=512 seek=1
This will actually boot into a grub interface where you can enter things like:
root (hd0,0)
setup (hd0)
I get no error messages, but grub will boot to garbage on the screen and then stop.
With lilo, I actually had the package installed and tried to set it up (after creating a lilo.conf):
default=Test1
timeout=10
compact
prompt
lba32
backup=/mnt/target/boot/lilo/MBR.hda.990428
map=/mnt/target/boot/lilo/map
install=/mnt/target/boot/lilo/boot.b
image=/mnt/target/boot/vmlinuz
label=Test1
append="quiet ... settime"
initrd=/mnt/target/boot/ramdisks/working.gz
And then from the prompt I executed the following, which produced these messages:
$ lilo -C /mnt/target/boot/lilo/lilo.conf -b /dev/sdb
Warning: /dev/sdb is not on the first disk
Fatal: Sorry, don't know how to handle device 0x0701
With grub2, I tried something like:
grub-mkconfig -o /mnt/target/boot/grub/grub.cfg
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.11.0-12-generic
Found initrd image: /boot/initrd.img-3.11.0-12-generic
Found memtest86+ image: /boot/memtest86+.bin
No volume groups found
done
I couldn't even get the above to generate a grub.cfg correctly or in the right spot so I gave up on this one... The entries listed above are for the host system, not the target system.
I can provide any additional information that you guys need to help resolve this problem.
-UPDATE-
After working with the media a bit longer, I decided to run an 'fdisk -l' and was presented with the following info:
Partition 1 has different physical/logical beginnings (non-Linux?):
phys(0,32,33) logical(0,37,14)
Partition 1 has different physical/logical endings:
phys(62,53,55) logical(336,27,19)
I should also note that when I try to mount the partition I always get a message that states:
EXT4-fs (sdb1): couldn't mount as ext3 due to feature incompatibilities
Not sure if that is just specific to BusyBox, or if it is related to the fdisk output. Anyhow, I don't know if the fdisk info indicates a problem with the disk geometry that could be causing all these bootloaders to fail.
The first-stage boot sector code for grub legacy is in "stage1", for grub(2) in "boot.img". The first-stage code contains the address of the next stage to be loaded on the same disk.
On some other disk, the address of the next stage to be loaded could be (and probably is) different.
I think using chroot and grub-install would be a better way to go; see the sketch below.
See Grub2/Installing.
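A minimal sketch of that approach (assuming the target root filesystem is fully populated under /mnt/target, grub2 is installed inside it, and the attached media is still /dev/sdb; adjust devices and paths to your layout):
mount --bind /dev  /mnt/target/dev
mount --bind /proc /mnt/target/proc
mount --bind /sys  /mnt/target/sys
chroot /mnt/target grub-install /dev/sdb
chroot /mnt/target grub-mkconfig -o /boot/grub/grub.cfg
grub-install writes boot.img/core.img with addresses that match the target disk, which avoids the relocation problem described above.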
As for disk/partition structure:
dd if=/mnt/source/boot/grub/stage2 of=/dev/sdb bs=512 seek=1
may have overwritten the partition table in the MBR of sdb.
I'm using Cloud9 (railstutorial.org) and noticed that the disk space used by my workspace is quickly growing toward the disk quota.
Is there a way to clean up the workspace and thereby reduce the disk space used?
The workspace is currently 817MB (see below, using quota -s). I downloaded it to look at the size of the directories, and I don't understand it. The directory containing my project is only 170 MB in size and the .9 folder is only 3 MB, so together that doesn't come near the 817 MB... And the disk space used keeps growing even though I don't think I'm making any major changes to the content of my project.
Size Used Avail Use%
1.1G 817M 222M 79%
Has it perhaps got to do with the .9 folder? For example, I've manually deleted several sub-projects but in the .9 folder these projects still exist, including their files. I also wonder if perhaps different versions of gems remain installed in the .9 folder... so that if you update a gem, it includes both versions of the gem.
I'm not sure how this folder or Cloud9 storage in general works, but my question is how to clean up disk space (without having to remove anything in my project)? Is there perhaps some clean-up function? I could of course create a new workspace and upload my project there, but perhaps there's an alternative while keeping the current workspace.
The du-c9 command lists all the files contributing to your quota. You can reclaim disk space by deleting files listed by this command.
For a user-friendly interface, you may want to install ncdu to see the size of all your folders. First, free some space for the install. A common way to do this is by removing your tmp folder:
rm -rf /tmp/*
Then install ncdu:
sudo apt-get install ncdu
Then run ncdu and navigate through your folders to see which ones are using up the most space:
ncdu ~
Reference: https://docs.c9.io/discuss/557ecf787eafa719001d1af8
For me the answers above unfortunately did not work: the first produced an incomprehensibly long list, so long that I ran out of scroll space in the shell, and the second one produced a strange list (shown at the end of this answer).
What did work was the following:
1) From this support FAQ article: du -hx / -t 50000000
2) Identify the culprit from the easy-to-read, easy-to-understand list: in my case 1.1G /home/ubuntu/.local/share/heroku/tmp
3) From the examples in this article: rm -r /home/ubuntu/.local/share/heroku/tmp
Strange list:
1 ./.bundle
1 ./.git
1 ./README.md
1 ./Project_5
2 ./.c9
2 ./Project_1
3 ./Project_2
17 ./Project_3
28 ./Project_4
50 .
If you want to dig into more detail about which files are affecting your workspace disk, try this command: sudo du -h -t 50M / --exclude=/nix --exclude=/mnt --exclude=/proc
This will show you everything above 50M on your Linux server, and then you can remove any file with this command:
sudo rm -rf /fileThatNeedsToDelete/*
On AWS Cloud9, this command (df -hT /dev/xvda1) worked for me:
[ec2-user ~]$ df -hT /dev/xvda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/xvda1 xfs 8.0G 1.2G 6.9G 15% /
more info here:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-describing-volumes.html
I am trying to clone a large svn repository with git svn. The repo has 100000 revisions. Its size is about 9GB (pristine folder). The biggest file in the repo is 300 MB.
The branch structure in the repo is a total mess: lots of wrong and missing merge info, and no standard layout. I've tried to fetch the latest revisions with and without branches. The command without branches looks like this:
git svn clone url_to_trunk_in_repo -r100000:HEAD --username=svn_user
HEAD is currently at 101037. The process runs for a while (hours) and fails with something like this:
Out of memory during request for 29040 bytes, total sbrk() is 254959616 bytes!
I have got the latest maintained git revision for Windows (Git-1.9.4-preview20140929) running on Windows 7 x64 with 16 GB RAM.
I've done some searching on this kind of failure. Most postings refer to a problem with large files from some years ago, which is most likely fixed already (I haven't checked that). Anyway, that issue was about large allocations, as indicated by the error message during a "large" request. However, my process fails while adding normal implementation files of small size. Therefore, I don't think this is a large-file problem.
I've tried to modify the pack settings in etc/gitconfig, which is common advice (typical values are shown below for reference). However, this didn't help. I didn't expect it to help at all, because the memory error occurs during the download from the svn server, not during git gc, which is what processes the packs, AFAIK.
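For reference, the commonly suggested pack settings look roughly like this (the values are illustrative, not tuned; as said, they made no difference in my case):
git config --global pack.windowMemory "100m"
git config --global pack.packSizeLimit "100m"
git config --global pack.threads "1"
git config --global pack.deltaCacheSize "50m"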
Further digging led me to a Perl memory limitation of 256MB. This is most likely the case here, because I always get the error at almost 256MB of sbrk().
Further investigation into Perl memory limitations only turns up OS memory limitations: 2GB on win32 (3GB with a special switch), and the RAM limit on 64-bit Windows. I also found some advice for raising Cygwin memory limits, but that doesn't apply here.
The 256MB limit is ridiculous in my eyes, and I am desperately searching for a way to get around it.
EDIT:
This is probably a Perl 5.8.8 issue (git uses that version). I have also installed Strawberry Perl 5.16.3 x64.
I've written this test code, which is a modification of the code posted in this Stack Overflow question:
use strict;
use warnings;

my @s;
my $count = 200;
my $alloc = 30000000;

for (my $i = 0; $i < $count; $i++) {
    print "Trying allocation...";
    $s[$i] = "a" x $alloc;    # allocate a ~30MB string per iteration
    print "OK\n\n";
}
With Strawberry Perl, this works perfectly. In Git Bash, I receive the error described before.
Out of memory during "large" request for 33558528 bytes, total sbrk() is 235180032 bytes at mem.pl line 9.
EDIT 2:
I've tried Strawberry Perl 5.8.8-1. It allocates properly; however, the program crashes after execution. Hence, this is not a bug in Perl 5.8.8 in general, but in the version that ships with Git (msys Perl 5.8.8).
The configuration of Strawberry Perl and msys Perl differs in many entries. The most noticeable difference for me is usemymalloc=n (Strawberry) versus usemymalloc=y (msys Perl).
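To check which allocator a given perl binary was built with, you can query its build configuration, e.g.:
perl -V:usemymalloc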
I also checked for ulimit in git bash, which doesn't show any abnormality:
$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 2046
cpu time (seconds, -t) unlimited
max user processes (-u) 63
virtual memory (kbytes, -v) 2097152
With Cygwin and Git 2.1.1 I'm able to run git svn on my repo without any memory issues. My test program runs fine as well. I haven't tried 1.x versions of Git on Cygwin, but I guess they'd work, because the problem was a memory limitation of msys Perl, which Cygwin replaces.
I won't mark this as answer since it doesn't solve my original question. It is my current workaround for tests with Git.
I'd like to have a Git for Windows distribution with a properly working Perl. There is an issue for upgrading Perl here; however, this seems to be no easy task. The same holds for the SVN version used by git svn on Windows: How to upgrade SVN
Here is what I am trying to do: I need to know whenever a file is read or used by a tool (e.g. a compiler). I use ls to get the last accessed time with the following command:
ls -l --time=access -u --sort=time --time-style=+%H:%M:%S
or
stat "filename"
But my files' access times are not getting updated; I figured it's because of caching! Please correct me if I am wrong. So my next step was to clear the cache, and while researching that I came across some variations of the following command:
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
The thing is, even after I execute this command, my file access times are not updated! My way of testing access time is by opening the file in gedit or calling gcc on my source file.
My setup: Ubuntu 12.04 running on VMware, which is running on Windows 7.
Question: what am I missing or doing wrong that keeps my access times from being updated?
What you're observing is a change in the default mount options starting with kernel 2.6.30, made to improve filesystem performance.
Quoting from man mount:
relatime
Update inode access times relative to modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time. (Similar to noatime, but doesn't break mutt or other applications that need to know if a file has been read since the last time it was modified.)
Since Linux 2.6.30, the kernel defaults to the behavior provided by this option (unless noatime was specified), and the strictatime option is required to obtain traditional semantics. In addition, since Linux 2.6.30, the file's last access time is always updated if it is more than 1 day old.
(Also refer to this and this.) You might be looking for the following mount option:
strictatime
Allows to explicitly requesting full atime updates. This makes it possible for kernel to defaults to relatime or noatime but still allow userspace to override it. For more details about the default system mount options see /proc/mounts.
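So if you need the traditional behavior for this kind of test, one option (a sketch; adjust the mount point to wherever your source files live) is to remount that filesystem with strict atime updates:
sudo mount -o remount,strictatime /
After that, stat "filename" should show the access time advancing each time the file is read, e.g. by gcc.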