I'm using Redis server for Windows (2.8.4 - MSOpenTech) on Windows 8 64-bit.
It is working great, but even after I run:
I see this (and here are my questions):
When Redis-server.exe is up, I see 3 large files:
When Redis-server.exe is down, I see 2 large files:
Questions:
Didn't I just tell it to erase the whole DB? So why are those 2-3 huge files still there?
How can I completely erase those files (without them being re-generated)?
NB: It seems that it deletes the keys without freeing the occupied space. If so, how can I free this unused space?
From https://github.com/MSOpenTech/redis/issues/83
"Redis uses the fork() UNIX system API to create a point-in-time snapshot of the data store for storage to disk. This impacts several features on Redis: AOF/RDB backup, master-slave synchronization, and clustering. Windows does not have a fork-like API available, so we have had to simulate this behavior by placing the Redis heap in a memory mapped file that can be shared with a child(quasi-forked) process. By default we set the size of this file to be equal to the size of physical memory. In order to control the size of this file we have added a maxheap flag. See the Redis.Windows.conf file in msvs\setups\documentation (also included with the NuGet and Chocolatey distributions) for details on the usage of this flag. "
I know this is an old thread, but I am facing the same issues with the file sizes.
In case you have problems with your C: SSD drive (like me), you can make a directory junction:
1) Stop redis service
2) Move C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis folder to another drive / location.
3) Open a command prompt in C:\Windows\ServiceProfiles\NetworkService\AppData\Local then execute:
mklink /J "C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis" "[newpath]"
PS: [newpath] must be absolute, like "D:\directory junctions\Redis"
4) Start the Redis service. Now the files are on another drive.
Check http://ss64.com/nt/mklink.html if you have doubts about this command.
I faced this same issue on my development machine. I resolved it by stopping the Redis service and using WinDirStat (the tool with which I had detected the issue in the first place) to permanently delete those files in AppData/Local/Redis.
I then started Redis back up and things were working fine.
Before following this same procedure, others may want to ensure that this data isn't needed. In my case it wasn't critical, since this is my development workstation.
When you flush the DB you only flush the keys from memory. I'm not sure why you've got files of different names; it may be an artifact of the way the Windows port of Redis manages files, but Redis itself doesn't delete files when you remove keys. You will need to manage outdated files outside of Redis.
I just learned by accident that since Windows XP there is a special system directory C:\Windows\Prefetch which contains "prefetch files" used to speed up system boot and application start-up performance. I discovered them by searching my disk for the name of a program of mine and seeing a .pf file in that directory.
Naturally, I began to wonder whether it is possible for me, as the creator of an application, to pre-create these prefetch files myself. The intention is that the users of my application could benefit from better startup performance from the very first launch of my application, rather than only from the second one, since as I understand it the prefetch file is generated only when needed.
Is it possible to create these special files ahead of time, package them with an app, and then install them, or do they have to be created on the end user's machine?
I am trying to install Linux on a computer that has Windows 7. The first step was shrinking the disk size, but Windows did not allow any reduction. Thus I followed a number of steps to disable the features that create "unmovable" files:
I disabled the Page File
I disabled hibernation
I disabled System Protection
After that nothing seemed to have changed, so I checked the disk fragmentation: it was 11% fragmented. I have since run at least 4 defrags, and I have also defragged the free space using Defraggler.
As of now the disk looks like this:
Right now, Windows refuses to shrink the partition by any amount (I imagine that the files at the end of the disk are the troublesome ones).
Coming from a Linux background, I am unsure what else needs to be done in order to shrink the partition.
Are you using the Windows Disk Management tool to do the shrink? Here's a link for that method:
https://www.howtogeek.com/howto/windows-vista/resize-a-partition-for-free-in-windows-vista/
Also make sure the recycle bin on that drive is empty.
I finally figured it out.
The easiest way is just to use a live USB with GParted on it, since that will allow you to move Windows-protected files around (the Windows OS is not loaded in the live distro).
If only defragmenting is needed, one can use Hiren's BootCD and the included Defraggler for the same purpose.
I had the same problem on Windows 10. It turned out it was the antivirus software running on the machine that prevented defragmentation from happening properly. I actually had to temporarily uninstall the antivirus. After that, the Disk Management tool was able to correctly shrink the volume.
I have a large file server machine which contains several terabytes of image data that I generally access in chunks. I'm wondering if there is anything special that I can do to hint to the OS that a specific set of documents should be preloaded into memory to improve the access time for that subset of files when they are loaded over a file share.
I can supply a parent directory that contains all of the files that comprise a given chunk before I start to access them.
The first thing that comes to mind is to simply write a service that will iterate through the files in the specified path, load them into process memory, and then free the memory in the hope that the OS filesystem cache holds on to them, but I was wondering if there is a more explicit way to do this.
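The sort of thing I have in mind for that service is roughly the following sketch (plain Win32; the chunk directory path and the buffer size are placeholders, and it doesn't recurse into subdirectories):

#include <windows.h>
#include <stdio.h>

/* Read every file in 'dir' once so the cache manager pulls its contents
   into the system file cache. The data itself is thrown away. */
static void warm_directory(const char *dir)
{
    char pattern[MAX_PATH];
    WIN32_FIND_DATAA fd;
    snprintf(pattern, sizeof(pattern), "%s\\*", dir);
    HANDLE find = FindFirstFileA(pattern, &fd);
    if (find == INVALID_HANDLE_VALUE)
        return;
    do {
        if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
            continue;
        char path[MAX_PATH];
        snprintf(path, sizeof(path), "%s\\%s", dir, fd.cFileName);
        HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
        if (h == INVALID_HANDLE_VALUE)
            continue;
        static char buf[1 << 20];            /* 1 MB scratch buffer */
        DWORD got;
        while (ReadFile(h, buf, sizeof(buf), &got, NULL) && got > 0)
            ;                                /* discard the data */
        CloseHandle(h);
    } while (FindNextFileA(find, &fd));
    FindClose(find);
}

int main(void)
{
    warm_directory("D:\\images\\chunk42");   /* placeholder chunk directory */
    return 0;
}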
It would save a lot of work if I could re-use the existing file share access paradigm rather than requiring the access to these files to go through a memory caching layer.
The files in question will almost always be accessed in a read-only manner.
I'm working on Windows Server 2003/2008.
Two approaches come to mind:
1) Set the server to be optimized for file serving. This used to be in the properties for file & printer sharing, but seems to have gone away in Windows 2008. This is set via the registry in:
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache=1
HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size=3
See http://technet.microsoft.com/en-us/library/cc784562.aspx for reference. (A small sketch for setting these values programmatically follows after option 2.)
2) Ensure that both endpoints are either Windows 2008/Windows 2008, or Windows 2008/Vista. There are significant improvements in SMB 2.0 as well as in the IP stack which improve performance greatly. This may not be an option due to cost, organizational constraints, or procurement lead time, but I thought I'd mention it.
See http://technet.microsoft.com/en-us/library/bb726965.aspx for reference.
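If you'd rather set the option 1 registry values from code than through regedit, a minimal sketch might look like the following (run it elevated; as far as I know a reboot, or at least a restart of the Server service, is still needed for the settings to take effect):

#include <windows.h>

/* Write a REG_DWORD value under HKLM. Returns nonzero on success. */
static int set_hklm_dword(const char *subkey, const char *name, DWORD value)
{
    HKEY key;
    LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE, subkey, 0, KEY_SET_VALUE, &key);
    if (rc != ERROR_SUCCESS)
        return 0;
    rc = RegSetValueExA(key, name, 0, REG_DWORD,
                        (const BYTE *)&value, sizeof(value));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS;
}

int main(void)
{
    /* Favor the system file cache over process working sets. */
    set_hklm_dword("SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
                   "LargeSystemCache", 1);
    /* Tell the file server (LanmanServer) to maximize throughput for file sharing. */
    set_hklm_dword("SYSTEM\\CurrentControlSet\\Services\\LanmanServer\\Parameters",
                   "Size", 3);
    return 0;
}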
What tools or techniques can I use to remove cached file contents to prevent my performance results from being skewed? I believe I need to either completely clear, or selectively remove cached information about file and directory contents.
The application that I'm developing is a specialised compression utility, and is expected to do a lot of work reading and writing files that the operating system hasn't touched recently, and whose disk blocks are unlikely to be cached.
I wish to remove the variability I see in IO time when I repeat the task of profiling different strategies for doing the file processing work.
I'm primarily interested in solutions for Windows XP, as that is my main development machine, but I can also test using linux, and so am interested in answers for that environment too.
I tried Sysinternals CacheSet, but clicking "Clear" doesn't result in a measurable increase (i.e., restoration of the timing seen after a cold boot) in the time to re-read files I've just read a few times.
Use Sysinternals' RAMMap app.
The Empty / Empty Standby List menu option will clear the Windows file cache.
For Windows XP, you should be able to clear the cache for a specific file by opening the file using CreateFile with the FILE_FLAG_NO_BUFFERING option and then closing the handle. This isn't documented, and I don't know if it works on later versions of Windows, but I used this long ago when writing test code to compare file compression libraries. I don't recall whether read or write access affected this trick.
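In case it helps, a minimal sketch of that trick looks something like this (the path is a placeholder; since the behaviour is undocumented, treat it as an experiment and time your reads before and after):

#include <windows.h>
#include <stdio.h>

/* Open and immediately close the file with FILE_FLAG_NO_BUFFERING. The
   (undocumented) effect observed on XP was that the file's cached pages
   were dropped. */
int main(void)
{
    HANDLE h = CreateFileA("C:\\testdata\\sample.bin",       /* placeholder */
                           GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("open failed (error %lu)\n", GetLastError());
        return 1;
    }
    CloseHandle(h);
    return 0;
}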
A command line utility can be found here
From the source:
EmptyStandbyList.exe is a command line tool for Windows (Vista and above) that can empty:
process working sets,
the modified page list,
the standby lists (priorities 0 to 7), or
the priority 0 standby list only.
Usage:
EmptyStandbyList.exe workingsets|modifiedpagelist|standbylist|priority0standbylist
A quick googling gives these options for Linux:
Unmount and mount the partition holding the files
sync && echo 1 > /proc/sys/vm/drop_caches
#include <fcntl.h>
int posix_fadvise(int fd, off_t offset, off_t len, int advice);
with advice option POSIX_FADV_DONTNEED:
The specified data will not be accessed in the near future.
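For example, to drop a single file's pages from the Linux page cache with that call (a sketch; POSIX_FADV_DONTNEED does not discard dirty pages, so they are flushed first):

#define _XOPEN_SOURCE 600   /* for posix_fadvise */
#include <fcntl.h>
#include <unistd.h>

/* Drop the page-cache contents of one file. Returns 0 on success. */
int drop_file_cache(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    fdatasync(fd);            /* write back any dirty pages first */
    int rc = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);  /* len 0 = to end of file */
    close(fd);
    return rc;
}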
I've found one technique (other than rebooting) that seems to work:
Run a few copies of MemAlloc
With each one, allocate large chunks of memory a few times
Use Process Explorer to observe the System Cache size reducing to very low levels
Quit the MemAlloc programs
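If you don't have a MemAlloc-style tool handy, a stand-in is trivial to write; here is a rough sketch (the chunk size and count are arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Allocate and touch a lot of memory so the OS repurposes pages that were
   holding cached file data. Run one or more copies, then let them exit. */
int main(void)
{
    enum { CHUNKS = 8 };
    const size_t chunk = 256u * 1024 * 1024;     /* 256 MB per allocation */
    char *blocks[CHUNKS] = {0};
    for (int i = 0; i < CHUNKS; i++) {
        blocks[i] = malloc(chunk);
        if (!blocks[i])
            break;                               /* out of memory/address space */
        memset(blocks[i], 0xAA, chunk);          /* touch every page */
        printf("allocated %d MB\n", (i + 1) * 256);
    }
    getchar();                                   /* hold the memory until Enter */
    for (int i = 0; i < CHUNKS; i++)
        free(blocks[i]);
    return 0;
}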
It isn't selective, though. Ideally I'd like to be able to clear the specific portions of memory being used for caching the disk blocks of the files that I no longer want cached.
For a much better view of the Windows XP filesystem cache, try ATM by Tim Murgent. It allows you to see both the filesystem cache working set size and the standby list size in a more detailed and accurate view. For Windows XP you need the old version 1 of ATM, which is available for download here, since V2 and V3 require Server 2003, Vista, or higher.
You will observe that although Sysinternals CacheSet will reduce the "Cache WS Min", the actual data still continues to exist in the form of standby lists, from where it can be used until it has been replaced with something else. To then replace it with something else, use a tool such as MemAlloc, flushmem by Chad Austin, or Consume.exe from the Windows Server 2003 Resource Kit Tools.
As the question also asked for Linux, there is a related answer here.
The command line tool vmtouch allows for adding and removing files and directories from the system file cache, amongst other things.
There's a Windows API call, SetSystemFileCacheSize (https://learn.microsoft.com/en-us/windows/desktop/api/memoryapi/nf-memoryapi-setsystemfilecachesize), that can be used to flush the file system cache. It can also be used to limit the cache size to a very small value. It looks perfect for these kinds of tests.
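A minimal sketch of flushing the cache that way follows. My understanding (an assumption worth verifying) is that passing (SIZE_T)-1 for both sizes flushes the cache working set and that the process needs SeIncreaseQuotaPrivilege enabled, so run it elevated:

#include <windows.h>
#include <stdio.h>

/* Enable a named privilege on the current process token. */
static BOOL enable_privilege(const char *name)
{
    HANDLE token;
    TOKEN_PRIVILEGES tp;
    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
        return FALSE;
    LookupPrivilegeValueA(NULL, name, &tp.Privileges[0].Luid);
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    BOOL ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL);
    CloseHandle(token);
    return ok && GetLastError() == ERROR_SUCCESS;
}

int main(void)
{
    enable_privilege("SeIncreaseQuotaPrivilege");
    /* Passing (SIZE_T)-1 for both limits is understood to flush the cache. */
    if (SetSystemFileCacheSize((SIZE_T)-1, (SIZE_T)-1, 0))
        printf("file system cache flushed\n");
    else
        printf("SetSystemFileCacheSize failed (error %lu)\n", GetLastError());
    return 0;
}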
When I defragment my XP machine I notice that there is a block of "Unmovable Files". Is there a file attribute I can use to make my own files unmovable?
Just to clarify, I want a way to programmatically tell Windows that a file that I create should be unmovable. Is this possible, and if so, how can I do it?
Thanks,
Terry
A lot of system files cannot be moved after the system boots, such as the page file and registry database files.
This utility runs before Windows boots to defragment those files. I have it set to run at every boot, and it works well for me on several machines.
Note that the very first time you boot up with this utility set to run, it may take several minutes to defrag. After that first run though, it finishes in just 3 or 4 seconds.
Edit: To respond to your clarification: that link says Windows has marked the page file and registry files as open for exclusive access. So you should be able to do the same thing with the LockFile API call. However, that's not an attribute of the file itself; you'd have to actually run some background program that holds the file open for exclusive access.
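For illustration, the background program could be as small as this sketch (the path is a placeholder; opening with a share mode of zero is what gives exclusive access, and LockFileEx could additionally lock byte ranges):

#include <windows.h>
#include <stdio.h>

/* Hold the file open with no sharing so no other process can open it
   while this program is running. */
int main(void)
{
    HANDLE h = CreateFileA("C:\\data\\keep-in-place.bin",    /* placeholder */
                           GENERIC_READ,
                           0,                /* no sharing = exclusive access */
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("open failed (error %lu)\n", GetLastError());
        return 1;
    }
    printf("holding the file open; press Enter to release\n");
    getchar();
    CloseHandle(h);
    return 0;
}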
There are no file attributes that you can place on your files to mark them as immovable. The only way (I think) to keep a file from being moved during defragmentation is to have some other process hold the file open (for read or write; I'm not even sure whether the file needs to be opened in exclusive mode).
Quite frankly, I cannot think of a reason why you'd want your files not to move, unless you have specific requirements about where on the disk platter your files reside. Defragmentation should generally lead to faster disk access, and that seems to be desirable in all cases :-)
This usually means that the file is in use by some process. If you're defragmenting, you'll likely see this with a lot of system files. If the file should legitimately be movable and is stuck (it's being held by a process that runs at startup but shouldn't be, for example), the most useful way of resolving the problem is to remove all permissions on the file, reboot, restore the permissions, and then get rid of the file/run the program that's trying to use it.
I suppose the ugly way is to have an application run at startup, check every few seconds whether defrag is running, and if so open the file in exclusive mode.
This is really ugly and I don't recommend it unless there is no cleaner solution.
Terry, the answers all mention ways to prevent files from becoming unmovable during defragmentation. From your question it appears that you are in fact wanting to make your personal files unmovable. Can you please clarify what is appealing about making your files unmovable?
I assume you're using the defragger that comes with Windows. Some commercial ones, like Diskeeper, can move some of these files (usually system files). You can try their trial versions.
Contig might serve your purpose: http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx
I'm relatively certain I ran across some methods/attributes you could access programmatically to do exactly what you want. This was back in the NT4 days, though, and my memory isn't that good.
For a somewhat more complete solution, try Raxco's PerfectDisk. While it is a commercial product, it does a very good job and supports boot-time defrag of system files. The first defrag takes longer than, say, Diskeeper, but it's a single-pass defragger and supports defragging with very little free space left on the drive. Overall it's a much smarter defrag program than any other I've seen, and it supports systems of any size.
http://www.raxco.com/
First try to move (or delete) the files in Safe Mode. If you can't, try to move (or delete) the files from Linux.
But be careful: if those are Windows system files, then you will fail to boot your Windows.
Some reasons why the files are unmovable are: the file size is too big, the files are open or in use, insufficient security privileges, the files are being accessed by another computer, and many other things.