I have just noticed that my C: drive is getting full, whereas it still had 30 GB of free space 3 days ago.
Given the last few days' activity, I can't find any reason for this.
Now I realize that my C: free space keeps getting lower and lower, even though there's no current activity on my PC (other than it being turned on).
Every 2 minutes I lose approximately 100 MB of free space, even though I'm not downloading anything.
I ran my antivirus and disconnected my internet connection to see whether the free space would stop decreasing, but it kept shrinking at the same pace.
I checked Task Manager and noticed there was a program running which I think was named "One Drive setup.exe" (during the past weeks I had many pop-up windows saying I had to update OneDrive, but there was a problem with the auto-update, etc. I didn't care, because I don't even know what OneDrive is and I don't think I use it). So I killed this running task.
I thought that had stopped the loss of free space (I even gained 100 MB back), but the decrease started again.
Now I have connected to the internet again.
I got 300 MB of free space back, and it has seemed constant for the last 4 minutes. Maybe these little ups and downs are due to the antivirus scan currently running.
But what can explain the loss of 30 GB during the past 2 or 3 days?
Could it be Windows Update? How can I check this on Windows 10?
Could it be a virus or something else malicious?
Please answer quickly, I only have 17 GB left :-(
Thanks
Which version of Windows OS are you using?
Turn off/disable System Restore; that way you will be able to recover some space. Other than that, use CCleaner (https://www.piriform.com/ccleaner/download) to clean your system.
They released a patch, I believe. But you can also use the built-in Disk Cleanup tool (https://support.microsoft.com/en-us/help/4026616/windows-disk-cleanup-in-windows-10).
Also, uninstall OneDrive/Google Drive unless you actively use those services. OneDrive syncs files with the cloud so that you can use them offline.
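If you first want to see where the space is actually going, something like the commands below (run from an elevated command prompt) should help. These are standard built-in tools, but the output varies from system to system, so double-check before deleting anything.

REM How much space System Restore / shadow copies are currently allowed to use
vssadmin list shadowstorage

REM Open Disk Cleanup (click "Clean up system files" there to include Windows Update cleanup)
cleanmgr

REM Check whether the Windows Update component store (WinSxS) has cleanable data
Dism.exe /Online /Cleanup-Image /AnalyzeComponentStore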
The Mac has decided to freeze and restart several times a day while I'm using it.
panic (cpu 2 caller 0xffffff801a579938): watchdog timeout: no checkins from watchdogd in 93 seconds .....
I used the Disk Utility repair tool multiple times in different recovery modes.
I used every free admin fixing tool, cleanup tool, and error-reporting tool on the App Store.
I launched my Mac in all sorts of different recovery modes. I literally pressed and used every restart keyboard combination you can use with a Mac, several times over in different scenarios.
I spent hours researching every forum and reading every article about similar problems and solutions.
I downloaded the manual updates and installed each one separately.
After a day of frustration I found the solution:
The fan's internal sensor wasn't working any more.
I set up automatic control using the Macs Fan Control app: under Custom I switched to "Sensor-based value", selected "CPU PECI" from the drop-down, set 30 as the temperature at which the fan speed starts to increase, and 90 as the max temperature. The fans now kick in, cool down the processors, and prevent the restarts.
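If you want to sanity-check what the sensors and fans are actually reporting before and after a change like this, one option on an Intel Mac (assuming powermetrics is available, as it is on recent macOS versions) is roughly:

# Print one sample of SMC data, including CPU die temperature and fan RPM
sudo powermetrics --samplers smc -n 1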
I tried to update my Mojave to Catalina in order to update Xcode... but it's impossible. I have 27 GB of free space, but every time I get this error:
an error occurred while installing the selected updates
And after each error I have 8 GB less, so if I keep trying I will end up with no free space at all. I don't know how to delete this failed download from my system storage.
I found that the 8 GB were added in /private/var/folders (never touch these folders manually); just reboot in safe mode, which will erase the temporary files...
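If you want to confirm that this is where the space went, a quick check from the Terminal is something like:

# Show the total size of the per-user temporary/cache folders
sudo du -sh /private/var/folders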
As found here:
A clean install uses up around 20 GB of storage space. In addition, you need to allow for space for your user data, applications, and future updates. As if that weren't enough, you should keep at least 10 to 15 percent of the startup drive free to ensure adequate performance. I normally suggest a good deal more free space than that, but here we're just talking about a minimum to ensure you can install and use macOS Catalina.
So the solution may be to free up around 30 GB of space, just to be sure.
You can try this utility to better identify some files to delete.
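If you'd rather hunt down the big directories from the Terminal instead, a rough starting point (staying on the startup volume only) is something like:

# List the largest top-level directories on the startup volume, sizes in MB
sudo du -xm -d 1 / 2>/dev/null | sort -rn | head -n 20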
I've been having issues with my parity instance freezing up while syncing. Given enough time it'll generally pick up again but sometimes it will stay where it is for hours. Restarting the instance temporarily solves the problem and it will immediately start syncing like normal.
For example, it might sync to the latest block at 15:00:00 and then not pick up anything until 15:30:00 (or a few hours later) and then start syncing until it catches back up to the top.
Peers are generally 20+, and absolutely nothing new shows up in the console until Parity decides to resume (or I restart it).
I'm running on a Windows Server instance, off an HDD (not ideal, I know, but apart from these pauses it has been sufficient, and the fact that restarting Parity fixes the issue makes me believe it's unrelated to hard drive performance).
Running parity 1.10.8 with the cmd:
parity.exe --jsonrpc-apis all --cache-size 1024 --db-compaction hdd --tracing off --pruning fast
I have a large number of wallets that I query regularly with the RPC APIs to check for changes.
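For reference, the kind of RPC call I make looks roughly like this (quoting shown for a Unix-style shell; I'm assuming the default HTTP endpoint on port 8545, and the address below is just a placeholder):

curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x0000000000000000000000000000000000000000","latest"],"id":1}' http://127.0.0.1:8545

Sync progress can be checked the same way via the standard eth_syncing method:

curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' http://127.0.0.1:8545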
Did anyone encounter anything like this?
That is a known bug and it was fixed a while ago; if you keep experiencing this, please upgrade to the latest Parity Ethereum 2.x releases!
https://github.com/paritytech/parity-ethereum/releases/latest
Note: I work for Parity Technologies.
I am working with a SuSE machine (cat /etc/issue: SUSE Linux Enterprise Server 11 SP1 (i586)) running PostgreSQL 8.1.3 and the Slony-I replication system (slon version 1.1.5). We have a working replication setup between two databases on this server, which generates log shipping files to be sent to the remote machines we are tasked to maintain. As of this morning, we ran into a problem with this.
For a while now, we've had strange memory problems on this machine - the oom-killer seems to be striking even when there is plenty of free memory left. That has set the stage for our current issue to occur - we ran a massive update on our system last night, while replication was turned off. Now, as things currently stand, we cannot replicate the changes out - slony is attempting to compile all the changes into a single massive log file, and after about half an hour or so of running, it trips over the oom-killer issue, which appears to restart the replication package. Since it is constantly trying to rebuild that same package, it never gets anywhere.
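For anyone reading along, checking for oom-killer activity and the kernel's overcommit settings looks roughly like this (standard tools on SLES, though the exact log location may differ):

# Recent oom-killer activity in the kernel ring buffer
dmesg | grep -i -A 5 'killed process'
# SLES also logs it here
grep -i 'out of memory' /var/log/messages

# Current overcommit policy (0 = heuristic, 2 = strict accounting)
sysctl vm.overcommit_memory vm.overcommit_ratio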
My first question is this: Is there a way to cap the size of Slony log shipping files, so that it writes out no more than 'X' bytes (or K, or Meg, etc.) and after going over that size, closes the current log shipping file and starts a new one? We've been able to hit about four megs in size before the oom-killer hits with fair regularity, so if I could cap it there, I could at least start generating the smaller files and hopefully eventually get through this.
My second question, I guess, is this: Does anyone have a better solution for this issue than the one I'm asking about? It's quite possible I'm getting tunnel vision looking at the problem, and all I really need is -a- solution, not necessarily -my- solution.
I'm using ReSharper 6 in a VS 2010 Pro environment and am working on some pretty large-scale projects. The development box has 2x quad-core Xeons with 24 GB of RAM. The projects live on a PCI-E x4 SSD drive with 1 GB/s read and write (for real). So I suppose there is not much I can do to give the development machine more power.
The worst project is an Umbraco site with roughly 14,000 files and folders and some pretty nasty CSS. I get everything from second-long freezes to 30-second VS lockups.
I've optimized VS2010 according to every available VS optimization guide. I even enabled the 64-bit memory enhancement, but the problems continue.
I've even added the media library folder to the skip list.
If there are any other magic tricks someone knows of, please let me know!
gorohoroh's comment led me to the solution; the 6.1 nightly from Dec 13 rocks!
Thanks
http://confluence.jetbrains.net/display/ReSharper/ReSharper+6.1+Nightly+Builds
I am using 7.0.1 and I find that it's killing my machine too.
However, it normally happens if I have more than one VS2010 open.
If it happens, the only way of fixing it I have found is to close VS, delete the .DotSettings.user and .suo files, and then reopen.
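If you end up doing that a lot, something along these lines works from the folder containing the .sln (close VS first; the .suo is a hidden file, hence the /A:H, and the wildcards below are just the usual VS2010/ReSharper file-name patterns):

REM delete the hidden solution user-options file
del /A:H *.suo
REM delete the ReSharper per-user settings file
del *.DotSettings.user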
I'm using 6.1, and find that it slows down over time and typing becomes really laggy. I've just discovered that when it starts to chug, if I go to Tools > Options > ReSharper > General and click Suspend, then Resume, it goes back to its initial speed.