How can I find:
1. Disk usage or size of my entire Subversion repository
2. Disk usage or size only for a particular branch on my repo, e.g. https://mysvn/svn/myrepo/myfolder
OS: Windows 2008 server
I have RDP access to the server.
You should be able to find out the size of the repositories by checking their size on disk on the server. For example, if your repositories are stored under C:\Repositories, check the overall size of this directory, or check the size of an individual repository, e.g. C:\Repositories\MyRepository.
Disk usage or size of my entire Subversion repository
Just check the size of the repositories on disk.
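If you prefer a command-line check over Explorer's folder properties, here is a minimal PowerShell sketch that sums the size of every file under a repository folder (the path is just the example used above; adjust it to your layout):

# Sum the size of all files under the repository folder (works in PowerShell 2.0 and later)
$files = Get-ChildItem "C:\Repositories\MyRepository" -Recurse | Where-Object { -not $_.PSIsContainer }
$bytes = ($files | Measure-Object -Property Length -Sum).Sum
"{0:N1} MB" -f ($bytes / 1MB)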
In case you use VisualSVN Server, try the Measure-SvnRepository cmdlet. It will produce the following output:
Name Revisions Size SizeOnDisk
---- --------- ---- ----------
MyRepo 498 3,340 KB 4,529 KB
MyRepo2 479 21,313 KB 22,571 KB
MyRepo3 201 1,032 KB 2,226 KB
MyRepo5 2 71 KB 90 KB
You can also check the SVN repo size in the VisualSVN Server Manager console.
Disk usage or size only for a particular branch on my repo, e.g. https://mysvn/svn/myrepo/myfolder
Normally, a new branch in the repository should take a minimal amount of space (a few kilobytes). A branch or tag in SVN is a cheap copy: when you create a branch or tag, Subversion doesn't actually duplicate any data. Therefore, the repository should not grow in size (unless you commit new, large content to it).
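For context, this is what creating such a cheap copy looks like; the URLs follow the example repository above and the trunk/branches layout is a placeholder:

# Creates a branch as a cheap copy; only a small bookkeeping entry is added to the repository
svn copy https://mysvn/svn/myrepo/trunk https://mysvn/svn/myrepo/branches/mybranch -m "Create mybranch as a cheap copy of trunk"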
Instead of counting the "branch size", check the size of revisions on disk, e.g. under C:\Repositories\MyRepository. You can also view and examine the repository storage statistics with the svnfsfs stats tool. Here is an example:
svnfsfs stats C:\Repositories\MyRepository
Related
I am debugging firmware code and need to use the output .bin file for programming the hardware. In the debug configuration, the binary file size is 158 KB; in the release configuration, it goes down to 120 KB after applying the optimization settings in IAR Embedded Workbench.
I know that the file size can go down to below 50 KB, because there are some old .bin files that the previous developer was able to produce from the same software. But I can't find a way to reduce the file size further.
Does anyone have any idea how the binary file size could be reduced in the release configuration in IAR Embedded Workbench?
Here's the ending lines of my map file:
38 674 bytes of readonly code memory
4 721 bytes of readonly data memory
17 351 bytes of readwrite data memory
Errors: none
Warnings: none
I'm trying to understand how bytes go from write() to the physical disk platter, so I can tune my picture server's performance.
The thing I don't understand is the difference between these two: the commit= mount option and dirty_writeback_centisecs. They look like they are about the same process of writing changes to the storage device, but they are still different.
I can't figure out which one fires first on the way to the disk for my bytes.
Yeah, I just ran into this investigating mount options for an SDCard Ubuntu install on an ARM Chromebook. Here's what I can tell you...
Here's how to see the dirty and writeback amounts:
user@chrubuntu:~$ cat /proc/meminfo | grep "Dirty" -A1
Dirty: 14232 kB
Writeback: 4608 kB
(Edit: these Dirty and Writeback figures are rather high; I had a compile running when I captured this.)
So data waiting to be written out is dirty. Dirty data can still be eliminated (if, say, a temporary file is created, used, and deleted before it goes to writeback, it will never have to be written out). As dirty data is moved into writeback, the kernel tries to combine smaller requests sitting in dirty into single larger I/O requests; this is one reason why dirty_expire_centisecs is usually not set too low. Dirty data is usually put into writeback when a) enough data is cached to reach vm.dirty_background_ratio, or b) the data gets to be vm.dirty_expire_centisecs centiseconds old (the default of 3000 is 30 seconds). Per vm.dirty_writeback_centisecs, a writeback daemon runs by default every 500 centiseconds (5 seconds) to actually flush out anything in writeback.
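To see what these knobs are actually set to on your system, you can read them via sysctl (standard Linux interfaces; the defaults mentioned in the comments are the usual kernel defaults, yours may differ):

# Threshold (as a percentage of memory) after which background writeback kicks in
sysctl vm.dirty_background_ratio
# Age in centiseconds after which dirty data is queued for writeback (default 3000 = 30 s)
sysctl vm.dirty_expire_centisecs
# How often the writeback daemon wakes up (default 500 = 5 s)
sysctl vm.dirty_writeback_centisecs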
fsync will flush out an individual file (force it from dirty into writeback and wait until it's flushed out of writeback), and sync does that for everything. As far as I know, it does this ASAP, bypassing any attempt to balance disk reads and writes: it stalls the device doing 100% writes until the sync completes.
The commit=5 default ext4 mount option actually forces a sync every 5 seconds on that filesystem. This is intended to ensure that writes are not unduly delayed if there's heavy read activity (ideally you lose at most 5 seconds of data if power is cut or whatever). What I found with an Ubuntu install on an SD card (in a Chromebook) is that this just leads to massive filesystem stalls roughly every 5 seconds if you're writing much to the card. ChromeOS uses commit=600, and I applied that on the Ubuntu side to good effect.
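If you want to try the same thing, a minimal sketch looks like this (the mount point, device, and fstab line are placeholders for your own setup):

# Relax the ext4 journal commit interval on the running system (temporary, lasts until remount/reboot)
sudo mount -o remount,commit=600 /
# Or make it permanent by adding commit=600 to the options column in /etc/fstab, e.g.:
# /dev/mmcblk1p2  /  ext4  defaults,commit=600  0  1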
dirty_writeback_centisecs configures the Linux kernel daemons related to virtual memory (that's why the vm. prefix), which are in charge of writing back from RAM to all the storage devices. So if you configure dirty_writeback_centisecs and you have 25 different storage devices mounted on your system, they will all share the same writeback interval.
commit, on the other hand, is set per storage device (per filesystem, to be precise) and is related to the sync process rather than to the virtual-memory daemons.
So you can see it as:
- dirty_writeback_centisecs: writing from RAM to all filesystems
- commit: each filesystem fetches from RAM
I have gone through How does a PE file get mapped into memory?, but this is not what I am asking.
I want to know which sections (data, text, code, ...) of a PE file are always completely loaded into memory by the loader, no matter what the conditions are.
As per my understanding, none of the sections (code, data, resources, text, ...) are always loaded completely; they are loaded as and when needed, page by page. If a few pages of code (in the middle or at the end) are not required to process the user's request, then those pages will not get loaded.
I have tried making exe files with lots of code, with and without resources, none of which are used at all; but every time the exe loads into memory, it takes more memory than the file size. (I might have been looking at the wrong column of Memory in Task Manager.)
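One way to sanity-check that observation, assuming the Visual Studio tools (dumpbin) are on your PATH, is to compare the SizeOfImage value from the PE optional header with the file's size on disk; sections are padded up to the section alignment in memory, so the mapped image is normally larger than the file. MyProg.exe below is a placeholder:

# Show the "size of image" field from the optional header
dumpbin /headers .\MyProg.exe | Select-String "size of image"
# Size of the same file on disk, in bytes
(Get-Item .\MyProg.exe).Length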
Matt Pietrek writes here
It's important to note that PE files are not just mapped into memory as a single memory-mapped file. Instead, the Windows loader looks at the PE file and decides what portions of the file to map in.
and
A module in memory represents all the code, data, and resources from an executable file that is needed by a process. Other parts of a PE file may be read, but not mapped in (for instance, relocations). Some parts may not be mapped in at all, for example, when debug information is placed at the end of the file.
In a nutshell:
1. There is an exe of size 1 MB and the available memory (physical + virtual) is less than 1 MB. Is it guaranteed that the loader will always refuse to load it because the available memory is less than the size of the file?
2. An exe of size 1 MB takes 2 MB of memory when loaded (i.e. when it starts running the first line of user code), while the available memory (physical + virtual) is 1.5 MB. Is it guaranteed that the loader will always refuse to load it because there is not enough memory?
3. There is an exe of size 50 MB (lots of code, data and resources), but it requires only 500 KB to run the first line of user code. Is it guaranteed that this exe will always run the first line of code as long as at least 500 KB of memory (physical + virtual) is available?
I have a TMS server with apache mod_tile, mapnik & renderd. I have 400GB of free space on my cache folder.
I want to pre-render 11 or 12 levels.
I tried the command "render_list -a -z 0 -Z 10 -v -n 4".
But my cache folder doesn't grow beyond 2.6 GB, and render_list says it finished with no error message.
Even when I use my map (OpenLayers), missing tiles are rendered on the fly but not stored in the cache. Before I pre-rendered my tiles, they were stored in the cache.
I searched unsuccessfully, so I ask here: is there any option in mod_tile to manage the cache size and the cache replacement strategy?
Thanks for your answers.
Update: strangely, when I request tiles from level 11, they are properly stored in the cache, and my cache grows. So is there a size limit per level?
In Vista Task Manager, I understand the available page file is listed like this:
Page File inUse M / available M
In XP it's listed as the Commit Charge Limit.
I had thought that:
Available Virtual Memory = Physical Memory Total + Sum of Page Files
But on my machine I've got Physical Memory = 2038M and Page Files = 4096M, yet Page File Available = 6051M. Since 2038 + 4096 = 6134, there are 83M unaccounted for here. What's that used for? I thought it might be something to do with the kernel memory, but the number doesn't seem to match up.
Info I've found so far:
See http://msdn.microsoft.com/en-us/library/aa965225(VS.85).aspx for more info.
Page file size can be found here: Computer Properties, advanced, performance settings, advanced.
I think you are correct in your guess that it has something to do with the kernel: the kernel memory needs some physical backing as well.
However, I have to admit that when I tried to verify this, the numbers still did not match up well, and a significant amount of memory remained unaccounted for.
I have:
Available Virtual Memory = 4 033 552 KB
Physical Memory Total = 2 096 148 KB
Sum of Page Files = 2048 MB
Kernel Non-Paged Memory = 28 264 KB
Kernel Paged Memory = 63 668 KB
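As a rough check of those numbers: 2,096,148 KB of physical memory plus 2,097,152 KB of page file (2048 MB) gives 4,193,300 KB, while the reported Available Virtual Memory is 4,033,552 KB, leaving about 159,748 KB unaccounted for. That is noticeably more than the 91,932 KB of kernel paged plus non-paged memory, which is why the kernel-memory explanation alone doesn't close the gap.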