Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 12 months ago.
Some programs write to the macOS TMPDIR, which is on the boot volume, and unfortunately some write huge files there (as a scratch disk; Lightroom, for example). This depletes the available space, and more importantly, the remaining space on the boot volume (especially nowadays with Apple's soldered SSDs) is not enough for the scratch disk, so it fails. I experience this a lot with Lightroom: when building panoramas, the temp files can be hundreds of gigabytes. Unfortunately, unlike Photoshop, it doesn't let you set the scratch disk location; it writes to the TMPDIR.
So I would like to move that TMPDIR to another external SSD.
I tried a symbolic link, but unfortunately I don't have permission to overwrite or rename the current temporary folder.
Maybe there is a way to change how the TMPDIR is created so it ends up on a drive other than the boot drive, or maybe I could get permission to modify the current one.
Given that lots of programs use that location, which is often too small, a method to put the TMPDIR on another drive would be a major boon.
I managed to do it by disabling SIP and then creating a symbolic link to another drive as a replacement for the TMPDIR folder.
The original TMPDIR was the T folder: /var/folders/jc/myw_64vd1vb2zsn9wps4_xnh0000gp/T
More precisely, I created a symbolic link to a folder on my other drive inside the folder myw_64vd1vb2zsn9wps4_xnh0000gp and named it A.
Then I renamed the T folder to G and renamed the symbolic link A to T. You have to be quick, as the OS recreates T quickly.
Of course, Lightroom must be quit before doing that. But it works.
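As a rough illustration of that swap (assuming SIP is already disabled and Lightroom is quit), something along these lines should do it; the external path /Volumes/Scratch/tmp is just a placeholder for a folder on the external SSD:

    import os

    USER_TMP = "/var/folders/jc/myw_64vd1vb2zsn9wps4_xnh0000gp"
    EXTERNAL = "/Volumes/Scratch/tmp"   # placeholder: folder on the external SSD

    t_dir = os.path.join(USER_TMP, "T")
    g_dir = os.path.join(USER_TMP, "G")
    a_link = os.path.join(USER_TMP, "A")

    os.makedirs(EXTERNAL, exist_ok=True)
    os.symlink(EXTERNAL, a_link)   # A -> external folder
    os.rename(t_dir, g_dir)        # move the real T out of the way...
    os.rename(a_link, t_dir)       # ...and swap the link in before the OS recreates T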
It works, but of course you have to disable SIP, which is a pain. Also, after that, Photoshop doesn't work anymore, and other programs may fail as well.
Now, the real solution would be to tell macOS to create the temp folder on an external drive, but that's another topic. I feel it has to do with the mktemp command; if we could ask it to use an external drive, that would be the perfect solution.
So the solution was to disable SIP, and then you can move the TMPDIR with a symbolic link.
Have you tried TMPDIR=/your/tmp/dir open -n /Applications/Lightroom.app from terminal?
Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed 3 months ago.
So I wanted to set up my new PC and vim was one of the first things.
I need my .vim directory for that. I have vim and it works, too.
I found the indent folder at /usr/share/vim/vim90/indent. This seems to be the right folder, but I have no idea if it is the correct location. The .vimrc I created is at ~/.vimrc.
There are, as far as I could find, two possible "proper" places for .vim: ~/.vim and /etc/vim. It is in neither.
What should be the next step to properly use Vim? Should I leave it there? Should I create .vim in my home directory and move it there? Does it literally not matter, and am I worrying over nothing?
Thanks for any help!
/usr/share/vim/vim90/ is the system-wide "runtime directory". What is in there shouldn't be messed with because…
it needs to be in a certain state for Vim to work as expected,
whatever you do there might be overridden or left behind during a later upgrade,
other users might be negatively impacted by your changes.
The first reason is sad, but yeah, Vim is a fragile beast, the working of which can be compromised very easily by moving stuff around, renaming files or whatnot.
The second reason is, I think, easy to demonstrate: when 9.1 is released, it will ignore /usr/share/vim/vim90/ entirely, and thus whatever changes you might have made there.
The third reason might seem more abstract because you are probably the only actual person to use that particular computer, but Unix-like systems are multi-user by design and, in that context, keeping your changes in your own $HOME is just common sense.
Vim is highly configurable and offers many ways to craft the perfect personal environment… or shoot yourself in the foot. So, for now, you should do your configuration in your own $HOME, as it is simple and predictable:
create a .vim directory under $HOME, $HOME/.vim,
create a .vimrc file under $HOME, $HOME/.vimrc,
and forget that /usr/share/vim/vim90/ ever existed.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
I have searched a lot to find out how to detect unnecessary Windows\Installer files.
I am sure a lot of you have faced this issue before and solved it somehow.
Now when I look at my C:\Windows\Installer directory on Windows Server 2008 R2, I can see it already takes up 42 GB out of a total of 126 GB.
Now what I would like to know is: can I just delete all the files from that Installer directory, or do I have to detect which files can be removed?
Does anyone know a solution for this issue?
How do you define unnecessary?
Specialized system case: You want the minimum footprint and are willing to sacrifice functionality that you don't expect to use.
If all is well, each of the files in C:\Windows\Installer is a local cache of an installed Windows Installer package, patch, transform, etc. They are necessary for uninstallation, auto-repair or on-demand installation to succeed. If you will never need any of those things on these machines (i.e. if you are bringing them up on demand as VMs, and would rebuild them rather than uninstall something), then unless the app invokes Windows Installer APIs itself, it may be relatively safe to remove files from C:\Windows\Installer. In addition, you could call the Windows Installer API MsiSourceListEnum to find other caches of files that are used for these same purposes. It may be similarly safe (or unsafe) to remove those files.
More usual case: You'd rather not rebuild the system
If you suspect there are unreferenced files in that folder left over from prior upgrades or uninstallations, you can try to use Windows Installer API calls to verify this. At a very low level, you can call MsiEnumProducts (or possibly MsiEnumProductsEx) to find the product codes of all installed products, and MsiGetProductInfo/Ex(szProduct, INSTALLPROPERTY_LOCALPACKAGE, ...) to find each product's cached .msi file and INSTALLPROPERTY_TRANSFORMS for a list of its transforms. Then MsiEnumPatches/Ex to find all patch codes and MsiGetPatchInfo/Ex (again with INSTALLPROPERTY_LOCALPACKAGE and/or INSTALLPROPERTY_TRANSFORMS) to list the .msp and .mst files it references. In theory, the full set of all files referenced here should match up with the full set of files in C:\Windows\Installer. (Or there are more references to look for...)
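To sketch what the product half of that enumeration looks like, here is a rough Python/ctypes version (Windows only; the patch enumeration via MsiEnumPatches is left out, error handling is minimal, and the fixed 1024-character buffer is a simplification; a real implementation would handle ERROR_MORE_DATA):

    import ctypes
    from ctypes import wintypes

    msi = ctypes.windll.msi     # the Windows Installer API lives in msi.dll

    ERROR_SUCCESS = 0

    def product_property(product_code, prop):
        # prop is the string value of an INSTALLPROPERTY_* constant,
        # e.g. "LocalPackage" (cached .msi path) or "Transforms".
        buf = ctypes.create_unicode_buffer(1024)
        size = wintypes.DWORD(len(buf))
        rc = msi.MsiGetProductInfoW(product_code, prop, buf, ctypes.byref(size))
        return buf.value if rc == ERROR_SUCCESS else None

    def list_cached_packages():
        index = 0
        code_buf = ctypes.create_unicode_buffer(39)   # a product code GUID is 38 chars + NUL
        while msi.MsiEnumProductsW(index, code_buf) == ERROR_SUCCESS:
            code = code_buf.value
            local = product_property(code, "LocalPackage")   # file under C:\Windows\Installer
            transforms = product_property(code, "Transforms")
            print(code, local, transforms)
            index += 1

    if __name__ == "__main__":
        list_cached_packages()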
(Before you write anything to do this, consider that there are probably apps out there that automate this, or are even smarter about it, such as the one referenced in another answer.)
You should not delete them all.
There is a good answer about your problem; I tested it in my lab and it works for me.
Note: if possible, you had better copy this folder to another disk (such as E:).
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
I'm cleaning a machine with Windows 7 that I use, which will be reassigned to another co-worker, and I would like to clear all the deleted files so they are unrecoverable.
I tried using cipher /w:F:\, then I installed Recuva and I can still see a lot of files that can be recovered.
Then I created a little program that creates a file full of 0's the size of the free space on the disk (after creating the file, I can see in Windows Explorer that the disk has only about 100 KB of free space left).
Then I delete the file and I run Recuva, and again I can see all those files as recoverable.
I'm just curious about what's happening under the hood. If I leave only about 100 KB of free space on the disk, why is there still more than 100 KB worth of recoverable files?
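For reference, a rough Python sketch of the kind of zero-fill program described above (the target path F:\wipe is a placeholder; real tools such as cipher /w do essentially this, plus additional passes):

    import os
    import shutil

    TARGET_DIR = r"F:\wipe"      # placeholder: a folder on the volume to fill
    CHUNK = 64 * 1024 * 1024     # write 64 MB at a time

    def fill_free_space(target_dir=TARGET_DIR):
        os.makedirs(target_dir, exist_ok=True)
        filler = os.path.join(target_dir, "filler.bin")
        zeros = b"\x00" * CHUNK
        with open(filler, "wb") as f:
            # Keep appending zeros until (almost) no free space is left.
            while shutil.disk_usage(target_dir).free > CHUNK:
                f.write(zeros)
                f.flush()
        os.remove(filler)   # frees the space again; old file names/metadata are untouched

    if __name__ == "__main__":
        fill_free_space()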
To make files unrecoverable, you need to use a "digital file shredder" application. This will write a series of zeroes and ones to the file to be shredded, multiple times. While 3 passes seems sufficient for many users, the US government has set a standard of 7 passes to meet most of its security needs.
There are several free file shredder applications, and even more commercial file shredder tools. Some security suite software (such as Antivirus with personal security protection tools) may also provide a file shredder.
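As a rough illustration of the overwrite-then-delete idea (not a substitute for a real shredder, which also has to deal with file system metadata, journaling and SSD wear levelling; the shred() name and the use of random bytes rather than fixed patterns are choices made here for the example):

    import os
    import secrets

    def shred(path, passes=3):
        # Overwrite the file's contents in place several times, then delete it.
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                remaining = size
                while remaining > 0:
                    chunk = min(remaining, 1024 * 1024)
                    f.write(secrets.token_bytes(chunk))   # random data for this pass
                    remaining -= chunk
                f.flush()
                os.fsync(f.fileno())   # force this pass to disk before the next one
        os.remove(path)   # note: the file name and metadata may still be recoverable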
For recommendations on digital file shredder applications, please ask for Windows digital file shredder recommendations at https://softwarerecs.stackexchange.com/
As for why "deleted" files are still listed by recovery tools as "recoverable": when a file is deleted, all that normally happens is that a flag is set in the master file index maintained by the file system. The raw data of the file is left on the hard disk as "noise/garbage". If no other files are written into the area occupied by the deleted file, then it is trivial to recover the data. If other data has been written over it, it becomes a non-trivial, but still possible, exercise to recover the data as it was before it was overwritten. Large-scale recovery vendors are capable of recovering a file even if it has been overwritten a few times. This is why the "security" standards of the US government call for the file area to be overwritten 7 times, as only the most serious (and expensive) recovery operation can recover that data.
To make a file "disappear", the master file index also needs to have the information "erased" and overwritten ("shredding" the file's meta-data to be hidden and very hard to recover).
If you are interested in the details and how to more permanently hide or delete a file, you might want to consider asking at https://security.stackexchange.com/ about how the Windows 7 file system works, and what it takes to truly delete a file or overwrite it sufficiently to make recovery impractical.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 2 years ago.
I am having an issue with the Dropbox cache, whereby periodically I find that a particular machine I am syncing to with Dropbox has run out of disk space and the Dropbox cache is the culprit. This is a problem because the machine Dropbox is installed on is headless (or nearly so), and therefore the only indication that something is wrong is that data that should be available on the machine suddenly isn't.
I have read that it is possible to clear the cache, but this is a pain as this machine is running OS X and there is no command line interface, meaning that I have to VNC into the machine simply to restart Dropbox. This also seems to limit my options for automatically clearing the cache, although having to create a periodic task to clean the Dropbox folder seems kludgy and error prone. (For instance, the disk could fill up before the script runs.)
(Update: It appears that deleting the files in a low-disk condition results in Dropbox starting to sync again without restarting, but I am not sure if there are any undesirable side effects to this; everywhere I have read about the cache says to stop Dropbox during the delete and restart it afterwards.)
In addition, it appears that the reason Dropbox is running out of space so fast is that I have a single large log file (on the order of half a gigabyte) which is append-only, but Dropbox is creating a new cached copy of the entire old version every time a change is made. So from the standpoint of performance, it is kinda undesirable that it keeps creating duplicates of this large file for every tiny addition of a few bytes to the file.
Disk space is rather tight on this machine, so I would rather simply have Dropbox limit how much caching it does. Is there some way to do this? My searches so far have turned up empty.
Update: I tried opening a Dropbox support request, only to get an e-mail reply stating: "Thanks for writing in. While we'd love to answer every question we get, we unfortunately can't respond to your inquiry due to a large volume of support requests." ಠ_ಠ
I just have a command file that I run now and then on my MacBook Air to clear space, which also contains these lines:
rm -rf /Users/MYUSERNAME/Dropbox/".dropbox.cache"/old_files/{*,.*}
osascript -e 'tell application "Terminal" to quit' & exit
Should be easy enough to automate, no?
I have the same issue with the exact same cause (took a while to figure out too): a log file inside a Dropbox folder that is actually not that big (several MB), but it does update every minute with a couple of hundred bytes. My cache is killing me. My total local Dropbox folder has 150 GB of which 50 GB is the cache!
I just cleared it, and my understanding is there are no consequences other than resync, but this is unsustainable.
I see several solutions here:
1. Dropbox is not suitable for this use case; do not keep frequently updated logs on Dropbox. I think this would be a bummer, because there should be a fairly simple technical solution to this, and here they are:
2. Dropbox either has, OR SHOULD HAVE, a setting for the maximum size of the cache, the way browsers do. This should not be too hard to implement if it does not exist (apparently it doesn't); otherwise, tell us where it is.
3. A script can be written (talking about Linux here) that periodically (every hour should be enough, but it could be done every minute in theory) checks the disk size of .dropbox.cache and, if it is over some limit, deletes some files. You could delete the 10 most recent ones, or 10% of the files, or, if you really wanted to get fancy, calculate how much you have to delete, from the oldest file on, to maintain a certain cache size (a rough sketch follows below). The issue might be stopping Dropbox, but it seems that simply pausing syncing should be okay and enough.
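Along the lines of option 3, here is a minimal sketch of such a cleanup job (the cache path and the 10 GB limit are assumptions; run it from cron or a timer, and consider pausing syncing first):

    import os

    CACHE_DIR = os.path.expanduser("~/Dropbox/.dropbox.cache")   # assumed cache location
    LIMIT_BYTES = 10 * 1024 ** 3                                  # assumed 10 GB limit

    def cached_files(root):
        # Yield (path, size, mtime) for every file under the cache directory.
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue
                yield path, st.st_size, st.st_mtime

    def prune_cache():
        files = list(cached_files(CACHE_DIR))
        total = sum(size for _path, size, _mtime in files)
        # Delete the oldest cached files first until we are back under the limit.
        for path, size, _mtime in sorted(files, key=lambda item: item[2]):
            if total <= LIMIT_BYTES:
                break
            try:
                os.remove(path)
                total -= size
            except OSError:
                pass

    if __name__ == "__main__":
        prune_cache()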
Numbers 2 and 3 are really one and the same; it's just a question of who is going to do it. Given that Dropbox isn't an open source platform, it would probably be best for Dropbox to write and maintain this feature. Any third-party plugin for this may stop working when something inside the Dropbox codebase changes.
Dropbox does have an incentive NOT to provide this feature, because frequent syncing = more bandwidth. But I thought we pay for bandwidth.
Thank you Dropbox, we all love you, especially since you gave us all that extra space for free.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 9 years ago.
Can you recommend a good SSH file sync utility for Windows? For example, I have some C++ sources that I need to compile remotely. I need this utility to be simple and most of all responsive, so I can compile my sources instantly after saving, without having to wait for the sync to be triggered.
WinSCP supports synchronization from both the GUI and the command line.
Use rsync. See this. There are even instructions for setting up an automatic backup.
For source code, you could use something like Git or Subversion, paired with an SSH connection using port forwarding.
In all cases you would need to trigger the sync yourself, unless you have a tool that watches the directory you're working on.
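If you want to roll such a watcher yourself, a minimal polling sketch could look like this (it assumes rsync and ssh are available on the Windows machine, e.g. through Cygwin; the host, paths and 2-second interval are placeholders, and a real tool would use file system notifications instead of polling):

    import os
    import subprocess
    import time

    SRC = os.path.expanduser("~/projects/myproject/")   # placeholder: local source tree
    DEST = "user@buildhost:~/myproject/"                 # placeholder: remote target

    def tree_state(root):
        # Snapshot of (mtime, size) for every file, used to detect changes.
        state = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue
                state[path] = (st.st_mtime, st.st_size)
        return state

    def watch_and_sync(interval=2.0):
        last = tree_state(SRC)
        while True:
            time.sleep(interval)
            current = tree_state(SRC)
            if current != last:
                subprocess.run(["rsync", "-az", "-e", "ssh", SRC, DEST], check=False)
                last = current

    if __name__ == "__main__":
        watch_and_sync()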
Try this: SSHSync for Windows
http://code.google.com/p/sshsync/
A command-line application that allows intelligent secure FTP transmissions. SshSync only supports pull-type transfers, but it allows the use of a private key to ensure that authentication is secure. A text file containing a list of files already processed is used to check that only 'new' files are retrieved.
Sounds like a job for a Continuous Integration tool.
Install Cygwin and use rsync over SSH.
It seems to me that one way to solve your problem would be to simply use a network drive. Edit your files from the network drive, and whenever you save, any other systems connected to that drive can also access your changes, including your build server. That's what we do at my office — everyone's home directories are on NFS/CIFS shares, so we edit on our local computers, but run a script to trigger a build on any of several build servers, even multiple platforms at once. We don't have to sync anything before being allowed to compile our latest changes.