Limiting number of CPU cores used for indexing - Xcode

Is there a way to limit how many CPU cores Xcode can use to index code in the background?
I write code in emacs but I run my app from Xcode, because the debugger is pretty great. The problem is that in emacs I use rtags for indexing, which already needs a lot of CPU, and then Xcode wants to do the same. Basically whenever I touch a common header file, my computer is in big trouble...

I like this question, it presents hacky problem-solving :)
Not sure if this would work (I'm not sure how to force Xcode to index), but here are some thoughts that might set you on the right track: there's a tool called cpulimit that you can use to slow down processes (it works by periodically suspending and resuming a given process; I used it when experimenting with mining crypto).
If you can figure out the process ID of the indexing daemon, maybe you can cpulimit it!
I'd start by running ps -A | grep -i xcode before and after indexing occurs to see what's changed (if anything), or using Activity Monitor to see what spikes. (/Applications/Xcode10.1.app/Contents/SharedFrameworks/DVTSourceControl.framework/Versions/A/XPCServices/com.apple.dt.Xcode.sourcecontrol.WorkingCopyScanner.xpc/Contents/MacOS/com.apple.dt.Xcode.sourcecontrol.WorkingCopyScanner looks interesting.)

There is a -i or --include-children param on cpulimit that should take care of this, but I'm not sure how well it works in practice.
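A minimal sketch of that approach, assuming cpulimit is installed (e.g. via Homebrew) and that pgrep can actually find the process doing the indexing; the pgrep pattern below is a guess, since newer Xcode versions run the indexer in a separate service process:
# Guess at the indexing process; adjust the pattern to whatever
# spikes in Activity Monitor on your machine.
PID=$(pgrep -f Xcode | head -n 1)
# Cap it (and its children) at roughly 50% of one core.
cpulimit -p "$PID" -l 50 -i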

I made a script, /usr/local/bin/xthrottle:
#!/bin/ksh
# Grab the first Xcode-related PID and lower its scheduling priority.
PID=$(pgrep -f Xcode | head -n 1)
sudo renice 10 $PID
You can play with the nice level: -20 is least nice, 20 is nicest to your neighbour processes.
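For completeness, a sketch of installing and undoing it (this assumes the process started at the default priority of 0, so renice 0 restores it):
chmod +x /usr/local/bin/xthrottle
xthrottle
# Changed your mind? Put the priority back:
sudo renice 0 $(pgrep -f Xcode | head -n 1)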

Related

Getting each CPU core's usage via terminal in macOS

I need to get each CPU core's usage with one command in the macOS terminal.
I have been searching the web for a few hours, but all that I was able to find were two variants, both of which are not what I am looking for.
The first one uses the htop command. As I understand it, htop prints each core's load on screen, but I was not able to extract this information with a single grep command.
I tried looking in the htop source code, but was not able to understand how it gets the per-core usage information.
Another solution that I found involves the usage of
ps -A -o %cpu | awk '{s+=$1} END {print s "%"}'
The result is one number that represents the overall CPU usage. If I am correct, the output of the macOS ps command used here does not say which core each process is running on, so it is not possible to use that approach for my task.
I hope that it is possible to get such results in macOS.
Nope, this is how you do it on a Mac:
Put Activity Monitor in the Dock.
Right-click on the icon > Monitors.
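If you do want something scriptable, one option worth trying is Apple's bundled powermetrics tool, which can report per-core activity. This is a hedged sketch: it needs sudo, and the exact output lines differ between Intel and Apple Silicon machines, so the grep pattern is an assumption to adapt, not a guarantee:
# Take one 1-second sample and pull out the per-core lines.
sudo powermetrics --samplers cpu_power -i 1000 -n 1 | grep -i "active residency"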

Nice / IOnice which one first? Does it matter? Any other way to reduce server load on a script?

I've been trying the "nicer" way to run a gzip from a bash script on an active server, but it still manages to push the load average above where I would like it.
Which of the following would be softer on the I/O and the CPU?
Is there another way I'm not aware of?
/usr/bin/nice -n 19 /usr/bin/ionice -c2 -n7 gzip -9 -q foo*
or
/usr/bin/ionice -c2 -n7 /usr/bin/nice -n 19 gzip -9 -q foo*
Also, are there other commands, such as ulimit, that would help reduce the load on the server?
I'm not familiar with the ionice thing, but nice just means that if another process wants to use the CPU, then the nice process will be more willing to wait a bit.
The CPU load is unaffected by this since it's just a measure of the length of the "run queue", which will be the same.
I'm guessing it'll be the same with ionice, but affecting disk load.
So, the "niceness" only affects how willing your process is to allow others to go before you in the queue, but in the end the load will be the same because the CPU/disk has to carry out the job.
ANALOGY: Think of a person behind a checkout counter. They still have to process the queue, but the people in the queue may be nice to each other and let others go to the counter first. The "load" is the length of that queue.
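As for the ordering: the two forms should be equivalent, since each wrapper just sets its attribute and then execs the next command in the chain. A quick hedged way to check what a running job actually got (Linux; ionice -p queries the I/O class of an existing PID):
/usr/bin/nice -n 19 /usr/bin/ionice -c2 -n7 gzip -9 -q foo* &
ps -o pid,ni,comm -p $!   # the NI column should show 19
ionice -p $!              # should print something like "best-effort: prio 7"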

Check status of a forked process?

I'm running a process that will take, optimistically, several hours, and in the worst case, probably a couple of days.
I've tried a couple of times to run it and it just never seems to complete (I should add, I didn't write the program, it's just a big dataset). I know my syntax for the command is correct, as I use it all the time on smaller data and it works properly (I'll spare you the details, as they're obscure for SO and I don't think they're relevant to the question).
Consequently, I'd like to leave the program running unattended, forked into the background with &.
Now, I'm not totally sure whether the process is just grinding to a halt or is running but taking much longer than expected.
Is there any way to check the progress of the process other than ps and top (pressing 1 to see per-core CPU use)?
My only other thought was to get the process to output a logfile and periodically check to see if the logfile has grown in size/content.
As a sidebar, is it necessary to also use nohup with a forked command?
I would use screen for this purpose; see its man page for more details.
Brief summary how to use:
screen -S some_session_name - starts a new screen session named some_session_name
Ctrl + a, then d - detaches the session
screen -r some_session_name - returns you to your session
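On the nohup side of the question: a process backgrounded with & can still receive SIGHUP when your shell exits (depending on shell settings such as bash's huponexit), so nohup or disown is a good idea if you're not using screen. A minimal sketch of the logfile idea from the question (program and file names are hypothetical):
# Run unattended, immune to hangup, with all output going to a log.
nohup ./long_job > job.log 2>&1 &
echo $! > job.pid             # remember the PID for later checks
# Later: is it still alive, and is the log still growing?
kill -0 "$(cat job.pid)" && echo "still running"
tail -f job.log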

killall command older-than option

I'd like to ask about experience with the killall program, namely whether anyone has used the -o, --older-than CLI option.
We've recently encountered a problem that processes were killed under the hood by a command: "killall --older-than 1h -r chromedriver"
killall was simply killing everything that matched, regardless of age, even though the killall man page is quite straightforward:
-o, --older-than
    Match only processes that are older (started before) the time specified. The time is specified as a float then a unit. The units are s,m,h,d,w,M,y for seconds, minutes, hours, days, weeks, Months and years respectively.
I wonder if this was a result of some false assumption or killall bug or something else.
Other posts here suggest much more complicated commands involving sed, piping, etc., which do seem to work, though.
Thanks,
Zdenek
I suppose you're referring to the Linux incarnation of killall, coming from the PSmisc package. Looking at the sources, it appears that some conditions for selecting PIDs to kill are AND-ed together, while others are OR-ed. -r is one of the conditions that is OR-ed with the others. I suspect the authors themselves can't really explain their intention there...
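A hedged workaround in the meantime: do the age filtering yourself with ps and its etimes column (elapsed time in seconds; available in procps ps), then kill whatever survives the filter:
# Kill chromedriver processes older than one hour (3600 seconds).
# $2 is etimes, $3 is the command name (comm).
ps -eo pid,etimes,comm | awk '$3 == "chromedriver" && $2 > 3600 {print $1}' | xargs -r kill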

Process Management w/ bash/terminal

Quick bash/terminal question -
I work a lot on the command line, but have never really had a good idea of how to manage running processes with it. I am aware of ps, but it always gives me an exceedingly long and esoteric list of junk, including something like 30 Google Chrome workers, and I always end up going back to Activity Monitor to get a clean look at what's actually going on.
Can anyone offer a bit of advice on how to manage running processes from the command line? Is there a way to get a clean list of what you've got running? I often use killall on process names that I know as a quick way to get rid of something that's freezing up - can I get those names to display in the terminal, rather than the strange long names and numbers that ps shows by default? And can I search for a specific process, or a quick pattern match like '*ome'?
If anyone has the answers to these three questions, that would be amazingly helpful to many people, I'm sure : )
Thanks!!
Yes, grep is good.
I don't know exactly what you want to achieve, but do you know the top command? It gives you a dynamic view of what's going on.
On Linux you have plenty of commands that can help you get what you want in a script, and piping commands together is one of the basics taught when studying IT.
You can also take a look at the man page for jobs, and I would advise reading some articles about process management basics. :)
Good luck.
ps -o command
will give you a list of just the process names (more exactly, the command that invoked the process). Use grep to search, like this:
ps -o command | grep ".*ome"
There may be scripts out there, but for example if you're seeing a lot of Chrome processes that you're not interested in, something as simple as the following would help:
ps aux | grep -v chrome
Other variations could help show each image only once, so you get one chrome, one vim, etc. (search for showing unique rows with perl, python, or sed, for example); see the sketch after this answer.
You could also use ps to specify one username, so you filter out system processes, or handle the case where more than one user is logged in to the machine, etc.
ps is quite versatile with its command-line arguments; a little digging turns up a lot of nice tweaks and flags, in combination with other tools such as perl and sed.
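A hedged sketch of that "one row per image" idea in plain shell, using sort and uniq instead of perl or sed:
# One line per distinct command name, with a count, busiest list first.
ps axo comm= | sort | uniq -c | sort -rn | head -n 20
# Case-insensitive search for a partial name, e.g. anything containing "ome":
ps axo comm= | grep -i ome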
