I'm cleaning up a Windows 7 machine that I use, which will be reassigned to another co-worker, and I would like to wipe all the deleted files so they are unrecoverable.
I tried using cipher /w:F:\, then I installed Recuva, and I can still see a lot of files that can be recovered.
Then I wrote a little program that creates a file of zeroes the size of the free space on the disk (after creating the file, I can see in Windows Explorer that the disk has only about 100 KB of free space left).
Then I delete the file and run Recuva, and again I see all those files listed as recoverable.
I'm just curious about what's happening under the hood. If I leave only about 100 KB of free space on the disk, why are there still more than 100 KB worth of recoverable files?
To make files unrecoverable, you need to use a "digital file shredder" application. This will write a series of zeroes and ones to the file to be shredded, multiple times. While 3 passes seems sufficient for many users, the US government has set a standard of 7 passes to meet most of its security needs.
There are several free file shredder applications, and even more commercial file shredder tools. Some security suites (such as antivirus packages that include personal security tools) may also provide a file shredder.
For recommendations on digital file shredder applications, please ask for Windows digital file shredder recommendations at https://softwarerecs.stackexchange.com/
As for why "deleted" files are still listed by recovery tools as "recoverable": when a file is deleted, all that normally happens is that a flag is set in the master file index maintained by the file system. The raw data of the file is left on the hard disk as "noise/garbage". If no other files are written into the area occupied by the deleted file, it is trivial to recover the data. If other data has been written over it, recovering the data as it was before it was overwritten becomes a non-trivial, but still possible, exercise. Large-scale recovery vendors are capable of recovering a file even if it has been overwritten a few times. This is why the "security" standards of the US government call for the file area to be overwritten 7 times, as only the most serious (and expensive) recovery operation can recover that data.
To make a file "disappear", the master file index also needs to have its information erased and overwritten ("shredding" the file's metadata so that it is hidden and very hard to recover).
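To make the multi-pass idea concrete, here is a minimal sketch (in Python; the shred_file helper and the pass count are just illustrative, not any particular shredder's algorithm). Note that it only scrubs the file's contents, not its name or other metadata in the file index, which is why dedicated shredders do more than this:

import os

CHUNK = 1024 * 1024  # overwrite in 1 MiB blocks

def shred_file(path, passes=7):
    """Overwrite a file's contents in place several times, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                block = min(CHUNK, remaining)
                f.write(os.urandom(block))  # random data; some tools alternate 0x00/0xFF passes
                remaining -= block
            f.flush()
            os.fsync(f.fileno())  # force this pass to disk before starting the next
    os.remove(path)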
If you are interested in the details and in how to more permanently hide or delete a file, you might want to consider asking at https://security.stackexchange.com/ about how the Windows 7 file system works, and what it takes to truly delete a file or overwrite it thoroughly enough to make recovery impractical.
Some programs write to the macOS TMPDIR, which lives on the boot volume. Unfortunately, some of them write huge files there (as a scratch disk; Lightroom, for example), which depletes the available space. More importantly, the remaining space on the boot volume (especially nowadays with Apple's soldered SSDs) is often not enough for the scratch disk, and the operation fails. I experience this a lot with Lightroom when building panoramas; the temp files can be hundreds of gigabytes. Unfortunately, unlike Photoshop, you can't set the scratch disk location; it always writes to the TMPDIR.
So I would like to move that TMPDIR to another external SSD.
I tried a symbolic link, but unfortunately I don't have permission to overwrite or rename the current temporary folder.
Maybe there's a way to change how the TMPDIR is created so that it ends up on a drive other than the boot drive, or maybe I could get permission to modify the current one.
Given that lots of programs use that location, which is often too small, a method to put the TMPDIR on another drive would be a major boon.
I managed to do it by disabling SIP and then creating a symbolic link to another drive as a replacement for the TMPDIR folder.
The original TMPDIR was the T folder: /var/folders/jc/myw_64vd1vb2zsn9wps4_xnh0000gp/T
More exactly, I created a symbolic link to a folder on my other drive, placed it inside myw_64vd1vb2zsn9wps4_xnh0000gp, and named it A.
Then I renamed the T folder to G, and then renamed the symbolic link A to T. You have to be quick, as the OS recreates T quickly.
Of course, Lightroom must be quit before doing that. But it works.
It works, but of course you have to disable SIP, which is a pain. Also, after that Photoshop doesn't work anymore, and other programs may fail as well.
Now, the real solution would be to tell macOS to create the temp folder on an external drive, but that's another topic. I feel it has to do with the mktemp command; if we could ask it to use an external drive, that would be the perfect solution.
So the solution was to disable SIP, and then you can move the TMPDIR with a symbolic link.
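For reference, the same swap expressed as a small Python sketch (SIP must already be disabled; the per-user folder name is taken from above, while the external target path is just an assumed example):

import os

# Per-user temporary folder from the answer above; adjust to your own machine.
PARENT = "/var/folders/jc/myw_64vd1vb2zsn9wps4_xnh0000gp"
TARGET = "/Volumes/ExternalSSD/tmp"            # hypothetical folder on the external drive

link = os.path.join(PARENT, "A")               # the symlink created next to T
old = os.path.join(PARENT, "T")

os.symlink(TARGET, link)                       # step 1: create the symlink
os.rename(old, os.path.join(PARENT, "G"))      # step 2: move the real T out of the way
os.rename(link, old)                           # step 3: rename the symlink to T right away,
                                               # before the OS recreates the folder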
Have you tried TMPDIR=/your/tmp/dir open -n /Applications/Lightroom.app from terminal?
I'm running a modded Minecraft server with 4 GB of RAM allocated, on a machine with 32 GB.
It's stable with 1-2 players, but when more people join the server, the TPS drops.
I don't think it's a problem with RAM; rather, too many packets are being transferred between the client and the server.
How can I increase the TPS?
TPS drops are primarily caused by what you have going on in your world.
When adding mods or plugins, you should be thinking about the long-term effects of your choices.
For each modded block you add that provides some type of function, the server has to allocate resources to ensure that function is carried out. Now, on its own that one block is of little consequence. But if that block forms an array, as is typically done with solar panels, then the server will need to dedicate more resources to carry out that array's functions. When we break it down, we can get an idea of how much is really going on in the background.
Minecraft does not have any built-in methods for checking RAM usage, but you can check it by installing the Essentials plugin and using the command "/memory". You can take a look at this link for more information. This command can also help you determine the current TPS.
Additionally, you will find some good recommendations in that last link that may help you resolve your problem:
Reduce view distance
Your Minecraft server will run at a view distance of 10 by default. We recommend changing your view distance to 6; this will not make any noticeable difference to players, but it can hugely help your server performance. You can learn how to access your server settings here.
Setup automated restarts
Setting up automatic restarts can help your server run smoother by freeing up your server RAM usage. It can also reclaim RAM that gets used by plugins and mods that have small memory leaks. You can view a tutorial on how to setup automated restarts here.
Run the latest version
We recommend using the latest version of Minecraft, plugins, and mods on your server. Most newer versions of software will include bug fixes and performance improvements that will make your server run faster and more stable.
Remove unnecessary mods and plugins
Plugins and mods that are installed on the server will use up server resources even if they are not actually being used, so it is a good idea to remove any unnecessary ones. If you think you may use some plugins in the future but are not using them right now, you can disable them by renaming the plugin .jar file to end with ".disable", e.g. Essentials.jar.disable. You can remove ".disable" from the plugin name to enable the plugin again (a small sketch of automating this follows below).
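If you have many plugins to toggle, the rename trick is easy to script; here is a rough Python sketch (the plugins directory and helper names are assumptions, and the server should be stopped while renaming):

from pathlib import Path

# Hypothetical plugins directory — adjust to your server's layout.
PLUGINS = Path("plugins")

def disable_plugin(name):
    """Rename <name>.jar to <name>.jar.disable so the server skips loading it."""
    jar = PLUGINS / f"{name}.jar"
    jar.rename(jar.with_name(jar.name + ".disable"))

def enable_plugin(name):
    """Remove the .disable suffix to load the plugin again."""
    disabled = PLUGINS / f"{name}.jar.disable"
    disabled.rename(disabled.with_name(f"{name}.jar"))

# Example: disable_plugin("Essentials")  ->  plugins/Essentials.jar.disable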
I also found this documentation, which explains how to optimize the server's performance; it may help you troubleshoot your issue.
Finally, I recommend reviewing the following guides on asking questions: How do I ask a good question? and How to create a Minimal, Complete, and Verifiable example, in order to provide better context on what you are doing and what you want to achieve.
I have searched a lot to find out how to detect unnecessary files in Windows\Installer.
I am sure a lot of you have faced this issue before and solved it somehow.
Now when I look at my C:\Windows\Installer directory on Windows Server 2008 R2, I can see it already takes up 42 GB out of a total of 126 GB.
What I would like to know is: can I just delete all the files from that Installer directory, or do I have to work out which files can be removed?
Does anyone know a solution for this issue?
How do you define unnecessary?
Specialized system case: You want the minimum footprint and are willing to sacrifice functionality that you don't expect to use.
If all is well, each of the files in C:\Windows\Installer is a local cache of an installed Windows Installer package, patch, transform, etc. They are necessary for uninstallation, auto-repair, or on-demand installation to succeed. If you will never need any of those things on these machines (i.e. if you are bringing them up on demand as VMs, and would rebuild them rather than uninstall something), then unless the app itself invokes Windows Installer APIs, it may be relatively safe to remove files from C:\Windows\Installer. In addition, you could call the Windows Installer API MsiSourceListEnum to find other caches of files that are used for these same purposes. It may be similarly safe (or unsafe) to remove those files.
More usual case: You'd rather not rebuild the system
If you suspect there are unreferenced files in that folder left over from prior upgrades or uninstallations, you can try to use Windows Installer API calls to verify this. At a very low level, you can call MsiEnumProducts (or possibly MsiEnumProductsEx) to find the product codes of all installed products, and MsiGetProductInfo/Ex(szProduct, INSTALLPROPERTY_LOCALPACKAGE, ...) to find each product's cached .msi file, plus INSTALLPROPERTY_TRANSFORMS for a list of its transforms. Then use MsiEnumPatches/Ex to find all patch codes and MsiGetPatchInfo/Ex (again with INSTALLPROPERTY_LOCALPACKAGE and/or INSTALLPROPERTY_TRANSFORMS) to list the .msp and .mst files each patch references. In theory, the full set of files referenced here should match up with the full set of files in C:\Windows\Installer. (Or there are more references to look for...)
(Before you write anything to do this, consider that there are probably apps out there that automate this, or are even smarter about it, such as the one referenced in another answer.)
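That said, here is a rough sketch of the product-enumeration half of that approach (Python via ctypes against msi.dll, Windows only; it lists only each product's cached .msi, not patches or transforms, and "LocalPackage" is the string value of INSTALLPROPERTY_LOCALPACKAGE):

import ctypes
from ctypes import wintypes

msi = ctypes.WinDLL("msi")                      # Windows Installer API lives in msi.dll

ERROR_SUCCESS, ERROR_NO_MORE_ITEMS = 0, 259

def cached_packages():
    """Yield (product_code, local_package_path) for every installed MSI product."""
    index = 0
    while True:
        code = ctypes.create_unicode_buffer(39)  # room for "{GUID}" plus the NUL terminator
        rc = msi.MsiEnumProductsW(index, code)
        if rc == ERROR_NO_MORE_ITEMS:
            break
        if rc != ERROR_SUCCESS:
            raise ctypes.WinError(rc)
        size = wintypes.DWORD(1024)
        value = ctypes.create_unicode_buffer(size.value)
        rc = msi.MsiGetProductInfoW(code.value, "LocalPackage", value, ctypes.byref(size))
        yield code.value, (value.value if rc == ERROR_SUCCESS else None)
        index += 1

if __name__ == "__main__":
    for product, package in cached_packages():
        print(product, package)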
You cannot just delete them all.
There is a good answer about your problem; I tested it in my lab and it works for me.
Note: if possible, you had better copy this folder to another disk (such as E:).
I am having an issue with the Dropbox cache, whereby periodically I find that a particular machine I am syncing to with Dropbox has run out of disk space, and the Dropbox cache is the culprit. This is a problem because the machine Dropbox is installed on is headless (or nearly so), and therefore the only indication that something is wrong is that data which should be available on the machine suddenly isn't.
I have read that it is possible to clear the cache, but this is a pain as this machine is running OS X and there is no command line interface, meaning that I have to VNC into the machine simply to restart Dropbox. This also seems to limit my options for automatically clearing the cache, although having to create a periodic task to clean the Dropbox folder seems kludgy and error prone. (For instance, the disk could fill up before the script runs.)
(Update: It appears that deleting the files in a low-disk condition results in Dropbox starting to sync again without restarting, but I am not sure if there are any undesirable side effects to this; everywhere I have read about the cache says to stop Dropbox during the delete and restart it afterwards.)
In addition, it appears that the reason Dropbox is running out of space so fast is that I have a single large log file (on the order of half a gigabyte) which is append-only, but Dropbox creates a new cached copy of the entire old version every time a change is made. So from a performance standpoint, it is rather undesirable that it keeps creating duplicates of this large file for every tiny addition of a few bytes.
Disk space is rather tight on this machine, so I would rather simply have Dropbox limit how much caching it does. Is there some way to do this? My searches so far have turned up empty.
Update: I tried opening a Dropbox support request, only to get an e-mail reply stating: "Thanks for writing in. While we'd love to answer every question we get, we unfortunately can't respond to your inquiry due to a large volume of support requests." ಠ_ಠ
I just have a command file that I run now and then on my MacBook Air to clear space, which also contains these lines:
rm -rf /Users/MYUSERNAME/Dropbox/".dropbox.cache"/old_files/{*,.*}
osascript -e 'tell application "Terminal" to quit' & exit
Should be easy enough to automate, no?
I have the same issue with the exact same cause (took a while to figure out too): a log file inside a Dropbox folder that is actually not that big (several MB), but it does update every minute with a couple of hundred bytes. My cache is killing me. My total local Dropbox folder has 150 GB of which 50 GB is the cache!
I just cleared it, and my understanding is there are no consequences other than resync, but this is unsustainable.
I see several solutions here:
Dropbox is not suitable for this use case: do not keep frequently updated logs on Dropbox. I think this would be a bummer, because there should be a fairly simple technical solution to this, namely:
Dropbox either has OR SHOULD HAVE a setting for the maximum size of the cache, the way browsers do. This should not be too hard to implement if it does not exist (apparently), otherwise tell us where it is.
A script can be written (talking about Linux here) that periodically (every hour should be enough, though in theory it could run every minute) checks the disk usage of .dropbox.cache and, if it is over some limit, deletes some files. You could delete the 10 oldest ones, or 10% of the files, or if you really wanted to get fancy you could calculate how much you have to delete, starting from the oldest file, to maintain a certain cache size (see the sketch after this list). The issue might be stopping Dropbox, but it seems that simply pausing syncing should be OK and enough.
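A minimal sketch of that third option, written in Python rather than shell (the cache path and size limit are assumptions; pause syncing before running it to be safe):

from pathlib import Path

# Assumed location and limit — adjust to your Dropbox folder and available disk space.
CACHE = Path.home() / "Dropbox" / ".dropbox.cache"
LIMIT = 10 * 1024**3  # keep the cache under roughly 10 GB

def trim_cache(cache=CACHE, limit=LIMIT):
    """Delete the oldest cached files until the cache directory fits under `limit` bytes."""
    files = [p for p in cache.rglob("*") if p.is_file()]
    total = sum(p.stat().st_size for p in files)
    # Oldest first, so the most recently cached versions survive.
    for p in sorted(files, key=lambda p: p.stat().st_mtime):
        if total <= limit:
            break
        total -= p.stat().st_size
        p.unlink()

if __name__ == "__main__":
    trim_cache()  # run from cron/launchd every hour or so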
Number 2 and #3 are really one and the same, it's just a question of who is going to do it. Given that Dropbox isn't an open source platform, it would probably be best for Dropbox to write and maintain this feature. Any third party plugin for this may stop working when something inside Dropbox codebase changes.
Dropbox does have an incentive NOT to provide this feature, because frequent syncing = more bandwidth. But I thought we pay for bandwidth.
Thank you Dropbox, we all love you, especially since you gave us all that extra space for free.
I want to programmatically test a file for viruses.
I'm aware of this thread, which didn't get a satisfactory answer in my opinion, but I'm not looking for an API here. Any kind of workaround to make it possible to test a file would be fine.
Of course, an API would probably be the best solution (I'm using .NET on a Windows platform), but maybe it's possible to drop the file into a folder and then check whether I can still open it or whether it has been quarantined by the antivirus software.
Has someone run into the same sort of situation?
Assuming you wish to integrate with whatever antivirus is already present on the system rather than bundling your own, you should check out the way Firefox 3 does this.
Bugs 103487 and 408153 - Inform anti-virus software when Firefox downloads an executable (using the Windows "IOfficeAntiVirus" and "IAttachmentExecute" APIs).
(Of course, if you wish to bundle your own, check out ClamAV/ClamWin, but then you must deal with updates, etc., and for politeness you should probably first check that there is not already a fully updated solution on the target system.)
Windows? No problem. Check out ClamWin. It is based on ClamAV.
Platform?
Most Windows anti-virus products provide shell integration (right-click a file in Explorer to scan it), which means either running an executable, DDE, or COM, all of which provide an entry point for another program to invoke the same mechanism.
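As one concrete flavour of the "run an executable" route, here is a minimal sketch (Python; clamscan is only an example scanner, and both the executable name and the exit-code convention vary by vendor, so check your product's documentation):

import subprocess

def scan_with_cli(path, scanner="clamscan"):
    """Scan a file by shelling out to a command-line scanner and checking its exit code."""
    result = subprocess.run([scanner, "--no-summary", path],
                            capture_output=True, text=True)
    if result.returncode == 0:
        return "clean"
    if result.returncode == 1:
        return "infected"       # clamscan returns 1 when a signature matches
    raise RuntimeError(f"scanner error: {result.stderr.strip()}")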
Check out ClamAV.
Clam AntiVirus is an open source (GPL) anti-virus toolkit for UNIX, designed especially for e-mail scanning on mail gateways.
Maybe you could use this web service, assuming the file is less than 1MB:
http://www.kaspersky.com/scanforvirus
If you discover a suspicious file on your machine, or suspect that a program you downloaded from the Internet might be malicious, you can check the files here. Indicate the file to be checked; it will automatically be uploaded from your computer to a dedicated server, where it will be scanned using Kaspersky Anti-Virus. Multiple independent tests and publications acknowledge the solution to have exceptional detection rates. Updates every three hours ensure that even the very newest viruses can be detected.
Only one file of up to 1 MB can be checked at any one time. If the file is too large, a window with an error message will be displayed. Type the name of the file in the window at the top of this page, or find the file using 'Browse'. Then click on 'Submit'.
You could use a debugger or a disassembler.
It really depends on which AV program you're going to use. Read the documentation for whatever solution you choose and you'll probably find a command-line interface or some other API you can call. There's no "generic" way of doing this (across AVs).