I have a Postgres instance building a GIN index. It's looking at about 200,000 rows and it's so far taken about 9 hours. Who knows how long it will take eventually. The problem is that it's using about 2% of CPU when I'd like it to use more like 90%. Is there any way to force it to speed up?
The main bottleneck is probably disk IO and not CPU.
If you're on a Windows machine, you can check disk I/O using Process Explorer (freeware); if on Unix, use iostat, sar, or DTrace (I haven't done the latter in a while, so I'm not 100% sure of the best tool).
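If you'd rather check from a script than from iostat or Process Explorer, here's a rough sketch of the same idea (my own illustration, assuming the third-party psutil package, not something from the answer above): sample the CPU time split and the disk I/O counters over a few seconds. A high iowait / low user split means the index build is waiting on disk rather than on CPU.

    # Rough check of whether the machine is CPU-bound or disk-bound.
    # Assumes the third-party psutil package (pip install psutil).
    import psutil

    before = psutil.disk_io_counters()
    cpu = psutil.cpu_times_percent(interval=5)   # sample over 5 seconds
    after = psutil.disk_io_counters()

    print(f"user {cpu.user:.1f}%  system {cpu.system:.1f}%  idle {cpu.idle:.1f}%")
    if hasattr(cpu, "iowait"):                   # iowait is only reported on Linux
        print(f"iowait {cpu.iowait:.1f}%")
    print(f"read  {(after.read_bytes - before.read_bytes) / 2**20:.1f} MiB in 5 s")
    print(f"write {(after.write_bytes - before.write_bytes) / 2**20:.1f} MiB in 5 s")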
I am using Xcode 9, and SourceKitService is using more than 5 GB of memory. Because of it, my system becomes very slow.
Every time, I have to force quit this service from Activity Monitor to get back to a normal situation. A few minutes later the service appears again and starts using my memory resources.
Any help with this?
Same problem here. AFAIK there's no workaround other than killing the process.
I am writing a server app which I want to efficiently use ALL available physical RAM of the machine when possible. The plan is that it will allocate physical pages using AWE until it detects that 99% of physical memory is in use, stop allocating when only 1% is free, and any time free physical memory drops below 1%, it will free physical pages it doesn't need.
However, when I put this plan into practice, Windows seems to think that any time 99% of RAM is in use it would be a good idea to free up more physical memory, so it starts paging all sorts of stuff out to disk, and my system crashes.
How can I tell Windows it is OK to have 99% of RAM in use, and that it doesn't need to page stuff out to disk until it gets back to whatever its default perceived ideal level of usage is (I guess that will be something like 90%...)?
Note: Raymond says 'Unless you are designing a system where you are the only program running on the computer, this is a bad idea.'
In this server scenario this is basically intended to be the only app running on the computer. But unfortunately there are some OS/background tasks that need to run...
But certainly I don't expect there is any other process on the computer indulging in this 'use all but 1% of RAM' behavior...?
Update: I've done more experimentation and started to wonder if I've been asking the wrong question. My assumption that Windows is being overeager may be wrong. Perhaps the question should instead have been "how can I determine how much physical RAM my process can safely use without compromising overall responsiveness on the machine?"
You can't. The Windows memory manager runs at a lower level than your program and knows nothing about your program (and even if it did, it has no reason to assume your program is the good citizen you claim it to be. What if your program crashes, or has an off-by-one error in a loop that mallocs? What about other programs that need memory while yours is running? What about the thousand other scenarios that the guys who wrote the Windows MM encountered when they were writing it?)
Don't try to be cleverer than Windows. A more productive use of your time would be to consider if your application really needs to allocate 99% physical memory up front.
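For the narrower question in the update (how much physical RAM is currently free, and how loaded the machine is overall), here is a minimal sketch of querying the Win32 memory status. It uses Python's ctypes purely for illustration and is my own example, not anything from the question or answer; GlobalMemoryStatusEx is the documented call behind these numbers, and dwMemoryLoad is the OS's own "percent of physical memory in use" figure.

    # Windows-only sketch: query overall physical memory status via GlobalMemoryStatusEx.
    import ctypes
    from ctypes import wintypes

    class MEMORYSTATUSEX(ctypes.Structure):
        _fields_ = [
            ("dwLength", wintypes.DWORD),
            ("dwMemoryLoad", wintypes.DWORD),
            ("ullTotalPhys", ctypes.c_ulonglong),
            ("ullAvailPhys", ctypes.c_ulonglong),
            ("ullTotalPageFile", ctypes.c_ulonglong),
            ("ullAvailPageFile", ctypes.c_ulonglong),
            ("ullTotalVirtual", ctypes.c_ulonglong),
            ("ullAvailVirtual", ctypes.c_ulonglong),
            ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
        ]

    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

    print(f"memory load: {status.dwMemoryLoad}%")
    print(f"available physical: {status.ullAvailPhys / 2**20:.0f} MiB")

Polling a figure like ullAvailPhys and backing off well before it hits zero is about as far as a user-mode process can sensibly go; the memory manager's own paging decisions remain out of your hands, as the answer above says.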
So, yesterday I opened Task Manager in Windows 8 (64-bit) and noticed that Chrome (32-bit, for some reason) wasn't using all the power my PC has. I was running an AI JavaScript program and noticed that my CPU was running at 1% and memory usage was only 120 MB, and that made me wonder why I should wait 5 minutes for it to run instead of somehow boosting it to at least 60%. As far as I know, Windows automatically distributes hardware usage to programs, so I'm asking what the problem is:
Is it because it's 32-bit?
Is it because I should manually configure Windows to give it more power?
Note: I did search Google, but all I got is that people actually complain about High CPU usage and I've got the opposite.
32-bit doesn't make a difference here. JavaScript is inherently single-threaded, so by default (not counting web workers) it won't use more than a single core on your machine. It just cannot. Also, memory usage doesn't necessarily tell you how hard a program is working; some programs need lots of memory, others only a little.
It's up to programs to use the resources of the machine most efficiently; if they don't, there is nothing you can do with Windows to make them run better or faster.
Lots of times, Xcode (seemingly at random) starts to run extremely slowly: it can take around fifteen seconds to move an object in IB, and compiling after changing one line of code can still take up to ten seconds. I took a look at my Activity Monitor, and this is what I found:
My question is, is this normal?
2 GB of RAM is way more than enough, even for the latest versions of Xcode.
Looks like you've hit one of the bugs in Xcode compilation / indexing / syntax highlighting. Cf.: Xcode 4.3.2 and 100% CPU constantly in the idle time
You only have 2 GB of RAM. With each update to Apple's software, it gets more memory intensive; the same goes for Safari 5.1+. So to answer your question: yes, this is normal for a machine with 2 GB of RAM running Lion alongside other memory-intensive applications, Chrome being another.
As for Interface Builder, I have noticed this too. XIBs are XML files, so I believe that as you move an object, Interface Builder is writing its location on the view as it moves, so that is a very data-heavy task as well.
Xcode usually sits at around 20-30% CPU in the background doing nothing. It's a pig, that's all.
We run a Fortran console program we have been running for years. Recently we purchased identical new HP server-class machines (4 processors, 8 GB RAM, 4 hard drives) for everyone in the office, and we configured them as nearly identically as we know how. We can compile the Fortran program on one machine and pass the executable to the other machines; on two machines it executes painfully slowly, while on two others it has modest performance (but not as good as before we upgraded from the XP machines).
It produces almost no console output (about 40 lines) but writes about 15 MB of files.
We open Task Manager to see what's going on, and we see that on the slow machines it's loading ONE CPU to about 15%. On the fast machines it's loading ALL the CPUs to about 40% (though one of them seems to load more than the others). As I recall, on XP it loaded the CPU to 99% and ran much faster.
These machines are the employees' general purpose machines, and have lots of company bloatware on them. And there is the possibility they have slightly different directory structures. But what seems totally puzzling to me is why Vista is not giving them more CPU time. If the CPUs were loading up, I might blame the performance variation on different directory structures, but not loading up the CPUs just boggles my mind.
David
If there's a bottleneck in IO, the CPU won't be loaded as much because it's mostly waiting for the IO to take place. One could even imagine this causing the one-CPU-vs-many-CPUs difference: there's just no point in kicking in another CPU if there's plenty of idle time while waiting. What if you take an external HD and test whether the differences still show up when you run the same program from that HD on the different machines?
Please go into Windows Task Manager, open the Performance tab, select the [Kernel Times] option in the [View] menu, and look at what's displayed on the bars during program execution.
If it's only at 15% load on a quad-core + hyperthreading box, that basically says OpenMP, MPI (or whatever it uses) isn't working properly: it's running on 1 of 8 logical CPUs => ~15%. Can you run the MPI test command for your specific system on each box to check for errors in multiprocessing? So the question becomes: why does the multiprocessing environment not work?
Regards
rbo
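As an aside to the suggestion above: if it's easier to log this than to watch Task Manager on four machines, here is a small sketch (my own, again assuming the third-party psutil package rather than anything mentioned in the thread) that prints per-core user vs. kernel time, which makes the "one loaded core vs. all cores" pattern easy to spot.

    # Print per-core user/kernel/idle percentages, sampled over 5 seconds.
    # Assumes the third-party psutil package (pip install psutil).
    import psutil

    per_core = psutil.cpu_times_percent(interval=5, percpu=True)
    for i, core in enumerate(per_core):
        print(f"CPU {i}: user {core.user:5.1f}%  kernel {core.system:5.1f}%  idle {core.idle:5.1f}%")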
SWAG, but have you checked your virus scanner configuration? If the scanner isn't set to ignore the type of file you're writing on the slow machine, then each write to those files might be getting intercepted and scanned before being written to the disk. This could lead to the process sitting in I/O wait and not getting scheduled as often.
Vista had a problem with some uncontrollable memory leaks. Perhaps this is your issue: some conflict in the "bloatware" is causing a memory leak, and that is why your Fortran program is running so much slower?
I assume you have tested this with all other programs closed. It seems unlikely that your console program itself is the issue. It sounds like there's definitely a memory conflict going on, though.