How does Windows backup work?

As part of my thesis, I am looking for an explanation of how Windows backups work.
Specifically, I noticed that the disk to be backed up must have at least 15% free space (I got backup errors because of this). I would like to know why that requirement exists and how the backup process works. Pointers to sources and sites to consult would also be welcome.
Thanks in advance.
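A plausible explanation (not given in this thread, so treat it as an assumption): Windows backup takes a Volume Shadow Copy snapshot of the source volume before copying files, and that snapshot needs working space on the volume itself, so a nearly full disk makes the snapshot, and therefore the backup, fail. You can inspect and adjust the space reserved for shadow copies with the built-in vssadmin tool from an elevated command prompt, for example:

    rem show how much shadow-copy storage each volume currently uses and allows
    vssadmin list shadowstorage
    rem raise the cap on C: to 20% of the volume (adjust to taste)
    vssadmin resize shadowstorage /for=C: /on=C: /maxsize=20%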

Related

Looking for a useful, free utility to clean up my OS X

Recently, I ran out of disk space. Core dumps in /cores occupied most of the space on my OS X system, and cleaning them out saved a lot of space. But I don't feel like deleting them manually each time by running commands in Terminal. Is there a free tool to help me out?
I also found some apps, such as iTunes, that occupy too much space with caches or unnecessary files. I need an easy way to clean those out as well.
Any recommendation?
The App Store has many free apps that help you clean up unnecessary files.
I recommend the free app "Dr.Clean", which is developed by Trend Micro.
Here is the link
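If you would rather script the cleanup you are already doing in Terminal than install an app, a minimal sketch (assuming the dumps live in /cores and the caches under ~/Library/Caches, as described in the question) could be:

    # remove accumulated core dumps (needs admin rights)
    sudo rm -f /cores/core.*
    # list cache folders by size in KB so you only delete what is actually big
    du -sk ~/Library/Caches/* | sort -n | tail -20

You could drop the first command into a weekly cron or launchd job if the dumps keep coming back.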

Is it possible in Windows to use part of memory as a virtual file

I'm using a command-line tool to do some processing on a file. The thing is that this file should not be stored on disk (for security reasons). So I was wondering whether it's possible in Windows to use part of the memory as a virtual file that is accessible by the command-line tool as if it were a real physical file.
Yes, it's usually possible with things referred to as "RAM disks". "What's the best ramdisk for Windows?" over at superuser.com has some links.
Have you written the command line tool yourself? If so, you can simply allocate a section of memory to your program and use it in your processing. There's little reason to trick the app into thinking it's using a file on a physical disk. The specifics on how to do so depend on what language your app is written in.
If not, you'll need to create a RAM disk and tell the program to use that. Using a RAM disk on Windows requires third-party software; a comprehensive list of options is available here on Super User.
Note, though, that neither using a RAM disk nor storing all of your data in memory will make it more secure. The information stored in RAM is just as accessible to prying eyes and malicious applications as data that is saved on the hard disk, and probably more accessible than data that has been deleted from the hard disk.
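As a concrete illustration of the RAM-disk route, here is a hedged sketch using ImDisk, a free third-party driver (my choice of tool; the answers above do not name one, and the switches below follow ImDisk's documented examples, so verify them against the version you install). Run from an elevated command prompt:

    rem create a 512 MB RAM disk, mount it as R:, and format it as NTFS
    imdisk -a -s 512M -m R: -p "/fs:ntfs /q /y"
    rem ... put the sensitive file on R:\ and point the command-line tool at it ...
    rem detach the RAM disk when you are done (its contents are gone afterwards)
    imdisk -d -m R: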
If you need a ready-to-use application, there are several RAM disk applications (including free ones) on the market, and then your question here is off-topic. If you need to do this in code, then one of our virtual storage products (SolFS, CallbackDisk, Callback File System) will work, and Callback File System has a sample project that stores files in memory.
If you're using .NET, you might look into MemoryStream.
Note Cody Gray's answer though, which is only too true insofar as having something in memory does not guarantee that it can't be compromised. Though opinions differ on this subject. Most people would argue that writing to disk is even less secure, especially in the age of wear-levelling where controlling what is deleted and what is not is practically impossible.
RAM has its own disadvantages, but on the positive side, what's gone is gone :-)

What disk layout works fastest for Visual Studio development?

So my laptop hard drive reported a bad cluster last week, which is never a good sign.
I'm going to be shopping for a hard disk, and I may as well plump for the upgrade to Windows 7, which means a reinstallation of Visual Studio and everything else.
This particular laptop has space for two hard disks, so I was thinking about an SSD in one bay and a larger, fast-ish (7,200 rpm) drive in the other.
Where should Visual Studio best go in this arrangement? And what about "special" folders like %TEMP%? Does it make sense to use a ReadyBoost USB stick when your pagefile is already on an SSD? Should the database server and files live on the hard drive? Should I get concerned about the SSD wearing out?
Thanks all...
Do you have an antivirus with on-access scanning activated? If so, deactivate it for the directory where the compiler is installed and for the source code, and maybe for other areas as well (have a look at the on-access scan statistics during compilation). That was the main slowdown on my laptop.
A bit more RAM might also help.
I had a look at the exorbitant prices of SSDs. I would think twice before investing a large amount of money in something that might not help in the end (that's why you asked the question here, right? ;-) )
If you really need speed, I would buy a desktop and set up a RAID-0 array. Laptops are quite slow. Of course, that only works if you can accept the loss of mobility...
Make sure you have 2GB+ of RAM. The more the better, as Win7 will use any spare RAM as a disk cache which will probably negate most of the advantage of an SSD. (We have a solution that took 6 minutes to load the first time in VS2005, and 20 seconds thereafter, due to the disk cache).
If you have enough RAM, stick your temp & intermediate folders in a RAMdisk.
Then split your remaining files over the two drives (e.g. apps and pagefile on one, source/object files on the other) to spread the I/O load across both drives. If using SSD, try to use it as a read-only device as much as possible.
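If you do go the RAM-disk route for temp files, a minimal sketch of repointing the per-user temp variables (assuming the RAM disk is mounted as R: and R:\Temp already exists; these are standard Windows commands, not something taken from the answers above) is:

    rem redirect the per-user temp folders to the RAM disk (takes effect in newly started processes)
    setx TEMP "R:\Temp"
    setx TMP "R:\Temp"

Intermediate build output can be pointed at the same place per project via the project's intermediate directory setting.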
I would put OS and Apps on the SSD, lots of reading, little writing. And then put data on the other.
Visual Studio uses the disk intensively while compiling. Putting the temp and project folders on a high-performance disk can speed up compilation considerably.
My opinion is based on tests using a RAM disk.

How big can a Sourcesafe DB be before "problems" arise?

We use SourceSafe 6.0d and have a DB that is about 1.6GB. We haven't had any problems yet, and there is no plan to change source control programs right now, but how big can the SourceSafe database be before it becomes an issue?
Thanks
I've had VSS problems start as low as 1.5-2.0 gigs.
The meta-answer is, don't use it. VSS is far inferior to a half-dozen alternatives that you have at your fingertips. Part of source control is supposed to be ensuring the integrity of your repository. If one of the fundamental assumptions of your source control tool is that you never know when it will start degrading data integrity, then you have a tool that invalidates its own purpose.
I have not seen a professional software house using VSS in almost a decade.
1 byte!
:-)
Sorry, dude you set me up.
Do you run the built-in ssarchive utility to make backups? If so, 2GB is the maximum size that can be restored. (http://social.msdn.microsoft.com/Forums/en-US/vssourcecontrol/thread/6e01e116-06fe-4621-abd9-ceb8e349f884/)
NOTE: the ssarchive program won't tell you this; it's just that if you try to restore a DB over 2GB, it will fail. Beware! All the people telling you that they are running fine with a larger DB are either using another archive program, or they haven't tested the restore feature.
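For reference, a round trip with the built-in tools looks roughly like the sketch below. This is a hedged example: the exact switches differ between VSS versions, so check ssarchive -? and ssrestor -? before relying on it, and the server paths are made up.

    rem archive the whole database to a .ssa file, logging in as Admin
    ssarchive -yAdmin,password -s"\\server\VSS" backup.ssa $/
    rem restore it into a different (empty) database to prove the archive is actually usable
    ssrestor -yAdmin,password -s"\\server\NewVSS" backup.ssa $/

The restore into a scratch database is the step most people skip, and it is exactly where the 2GB limit bites.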
I've actually run a VSS DB that was around 40 gigs. I don't recommend it, but it is possible. Really, the larger you let it get, the more you're playing with fire. I've heard of instances where the DB gets corrupted and the items in source control were unrecoverable. I would definitely back it up on a daily basis and start looking to change source control systems. Having been in the position of the guy who gets called when it fails, I can tell you that it will really start to get stressful when you realize that it could just go down and never come back.
Considering the amount of problems SourceSafe can generate on its own, I would say the size has to be in the category "Present on disk" for it to develop problems.
I've administered a VSS DB over twice that size. As long as you are vigilant about running Analyze, you should be OK.
SourceSafe recommends 3-5 GB, with a "don't ever go over 13 GB".
In practice, however, ours is over 20 GB and seems to be running fine.
The larger the database gets, the more problems Analyze will find, including lost files, etc.
EDIT: Here is the official word: http://msdn.microsoft.com/en-us/library/bb509342(VS.80).aspx
I have found that Analyze/Fix starts getting annoyingly slow at around 2G on a reasonably powerful server. We run Analyze once per month on databases that are used by 20 or so developers. The utility finds occasional fixes to perform, but actual use has been basically problem free for years at my workplace.
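For anyone setting up that kind of periodic run, a hedged sketch (assuming the VSS client tools are on the PATH and the database data folder is at \\server\VSS\data, which is a made-up path; check analyze -? for your version's exact switches):

    rem read-only diagnostic pass first, maximum verbosity
    analyze -v4 "\\server\VSS\data"
    rem then a fix pass, run only when every developer is out of the database
    analyze -f -v4 "\\server\VSS\data"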
The main thing, according to Microsoft, is to make sure you never run out of disk space, whatever the size of the database.
http://msdn.microsoft.com/en-us/library/bb509342(VS.80).aspx
Quote:
"Do not allow Visual SourceSafe or the Analyze tool to run out of disk space while running. Running out of disk space in the middle of a complex operation can create serious database corruption."

How to isolate causes of system hang on Unix/OSX

I am on OSX, and my system is becoming unresponsive for a few seconds roughly every 10 minutes. (It gives me the spinning beach ball of death). I was wondering if there was any way I could isolate the problem (I have plenty of RAM, and there are no pageouts/thrashing). Any Unix/OSX tools that could help me monitor and isolate the cause of this behaviour?
Activity Monitor (Cmd+Space, then type "Activity Monitor") should give you an intuitive overview of what's happening on your system. If, as you say, there are no processes hogging the CPU, do take a look at the disk/IO activity. Perhaps your disk is going south.
I have been having problems continually over the years with system hangs. It seems that generally they are a result of filesystem errors, however Apple does not do enough to take care of this issue. System reliability should be a 100% focus and I am certainly sick of these issues. I have started to move a lot of files and all backups over to a ZFS volume on a FreeBSD server and this is helping a bit as it has started to both ease my mind and allow me to recover more quickly from issues. Additionally I've placed my system volume on a large SSD (240GB as I have a lot of support files and am trying to keep things from being too divided up with symlinks) and my Users folders on another drive. This too has helped add to reliability.
Having said this, you should try to explore spindump and stackshot to see if you can catch frozen processes before the system freezes up entirely. It is very likely that either you have an app or two attempting to access bad blocks, which hangs the system, or you have a process blocking all the others with a system call that is stalling I/O.
Apple has used stackshot a few times with me over the last couple years to hunt some nasty buggers down and the following link can shed some light on how to perhaps better hunt this goblin down: http://www.stormacq.com/?p=346
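A hedged example of what capturing such a sample can look like while the beach ball is spinning (run from Terminal with an admin account; spindump's options and report location have changed across OS X releases, so check man spindump on yours):

    # sample all running processes and write a report
    sudo spindump
    # watch filesystem calls live to spot a process stuck on disk I/O
    sudo fs_usage -w -f filesys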
Also try: top -l2 -S > top_output.txt and examine the results for hangs / zombie processes.
The deeper you go into this, you may find it useful to subscribe to the kernel developer list (darwin-kernel#lists.apple.com), as there are some very, very sharp cookies on there who can shed light on some of the most obscure issues and help you understand precisely what panics are saying.
Additionally, you may want to uninstall any VMs you have installed. There is a particular vendor whose hypervisor, I've heard from very reliable sources, is quite faulty, and it would be wise to look into that if you have any installed. It may be time to clean up your kexts altogether as well.
But, all in all, we really quite desperately need a better filesystem, with proactive mechanisms to watch for bad blocks. I praised the day and shouted for joy when I thought we were getting ZFS officially. Sadly, I doubt Lion is that much better on the HFS+ front, and I am certainly considering ZFS for my Users volume and other storage on the workstation due to its ability to scrub for bad blocks and to eliminate issues like these.
They are the bane of our existence on Apple hardware, and having worked in this field for 20 years with thousands of clients, I think hard drive failure should be considered inexcusable at this point. Even if the actual manufacturers can't and won't fix it, the onus falls on OS developers to handle exceptions better and to guard against such failures, in order to hold off silent data loss and nightmares such as these.
I'd run a mixture of 'top' and tail -f /var/log/messages (or wherever your main log file is).
Chances are that right before or after the hang, some error message will come out. From there you can start to weed out your problems.
Activity Monitor is the GUI version of top, and with Leopard you can use the 'Sample Process' function to watch what your culprit tasks are spending most of their time doing. Also in Utilities you'll find Console aka tail -f /var/log/messages.
As a first line of attack, I'd suggest keeping top running in a Terminal window where you can see it, and watching for runaway jobs there.
If the other answers aren't getting you anywhere, I'd run watch uptime and keep notes on the times and uptimes when it locks up. Locking up about every 10 minutes is very different from locking up exactly every 10 minutes; the latter suggests looking in crontab -l for jobs starting with */10.
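For illustration, an entry like the hypothetical one below in the crontab -l output would fire exactly every ten minutes, and a heavy job there would line up suspiciously well with clockwork hangs (the script path is made up):

    # minute hour day month weekday command
    */10 * * * * /usr/local/bin/some-maintenance-script.sh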
Use Apple's Instruments. Honestly, it's helped immensely in finding hangs like these.
Periodic unresponsiveness often happens when swapping is going on. Do you have sufficient memory in your system? Examine the disk I/O to see if there are peaks.
EDIT:
I have seen similar behaviour on my Mac lately, which was caused by a broken filesystem: OS X kept trying to access non-existent blocks on the disk, and even attempting to repair it with Disk Utility ended with being told to reformat and reinstall. Doing that and restoring from Time Machine helped!
If you do this, double-check that journaling is enabled on the HFS+ volume on the hard disk. That helps quite a bit in preventing it from happening again.
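To check whether swapping or disk I/O spikes actually line up with the ten-minute hangs, these stock command-line tools (not mentioned above, but present on any OS X install) can be left running in a Terminal window while you work:

    # page-in/page-out counters every 5 seconds; steadily rising pageouts point at swapping
    vm_stat 5
    # disk throughput every 5 seconds; watch for bursts around the moments the beach ball appears
    iostat -w 5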
