Block a command in Ubuntu (in shell) - shell

I am working on an Ubuntu server with about 10 users at any point in time. We usually keep our code there and use the server to make builds. A build usually takes 30 to 50 minutes, depending on the concurrency level. The build command is make -jX, where X can be anything from 1 to 24.
My problem starts when several users run make with a high X value at the same time. Is there any way to block such commands or to impose a limit?
For example, if someone runs make -jX with X > 4, I want the command to be overridden to make -j4.
I know one option is an alias, but I have no idea how to inspect the argument value through an alias (defining alias ll='ls -la' in .bashrc is fine, but how would I handle something like ll -lha?).
Also, is there any way to make the alias apply to all users without editing every user's .bashrc?
Thanks in advance.

Although limiting the parameters to a particular command (in this case make) in a way that cannot be circumvented is generally hard, you can configure system-level process limits on each user using /etc/security/limits.conf.
If you open this file on your system, you will see in the comments that you can limit various per-user resources such as the number of processes (nproc) and memory. By tuning these limits you may be able to achieve reasonably fair resource sharing among your developers.
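For example, a sketch of the kind of limits you could set in /etc/security/limits.conf (the group name and the numbers below are made up; tune them to your machine):

# illustrative /etc/security/limits.conf entries
# <domain>      <type>  <item>   <value>
@developers     hard    nproc    100        # max processes per user in this group
@developers     hard    as       8388608    # max address space per process, in KB

As for the alias idea mentioned in the question: an alias cannot inspect its arguments, but a shell function can. A sketch of such a wrapper, which could live in a file like /etc/profile.d/make-jobs.sh so every interactive bash login picks it up (this is only a convenience, not enforcement - anyone can bypass it by running /usr/bin/make directly, and the sketch only handles the -jN form, not -j N or --jobs=N):

# hypothetical wrapper that caps make's -jN at 4
make() {
    local args=() a
    for a in "$@"; do
        case "$a" in
            -j[0-9]*)   # e.g. -j8, -j16
                a="-j$(( ${a#-j} > 4 ? 4 : ${a#-j} ))" ;;
        esac
        args+=("$a")
    done
    command make "${args[@]}"
}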

Related

Vocabulary for a script that is expected to produce the same output no matter where it is run

I'd like some advice on what vocabulary to use to describe the following. Having the right vocabulary will allow me to search for tools and ideas related to the concept.
I'd like to say a script is SomeWord if it is expected to produce the same output no matter where it is run.
For example, the following script is not SomeWord:
#!/bin/bash
ls ~
because of course it depends on where it is executed.
Whereas the following (if it runs without error) is expected to always produce the same output:
#!/bin/bash
echo "hello, world"
A more useful example would be something that loads and runs a Docker or Singularity container in a way that guarantees that one very particular container image is being used, for example by retrieving the Singularity image by its content hash.
The advantages of SomeWord scripts are: (a) they may be safely run on a remote system without worrying about the environment and (b) their outputs may be cached.
The best I can think of would be "deterministic" or some variation on "environment independent" or "reproducible".
Any container should be able to do this, as that is a big part of why the technology was developed in the first place. Environment managers like conda can also do this to a certain extent, but because conda only modifies the host environment, it is possible to be using non-conda binaries without realizing it.
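As a concrete illustration of the container approach (the image digest below is made up), a Docker image can be pinned by its content digest rather than a tag, so the script always runs against exactly the same image:

#!/bin/bash
# hypothetical: pin the image by digest, not by tag
docker run --rm \
  python@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef \
  python -c 'print("hello, world")'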

Make uses multiple cores even without -j argument?

I've noticed on my MacBook Pro (quad-core) that when I run make, it takes the same amount of time as make -j, and sure enough, Activity Monitor shows all four cores getting high usage. Why is this? Is there some default setting that Apple has? I mean, it would make sense for -j to be the default, but from what I've seen on the web, make with no arguments should only be using one thread.
This isn't necessarily a problem, but I'd like to understand the cause nonetheless.
The -j|--jobs flag specifies/limits the number of commands that can be run simultaneously, not the number of threads to allocate to a single command. Think of this option as concurrency instead of parallelism.
For example, I can specify --jobs=2 and have both an ES6 transpiler and a SASS preprocessor running in the background, in the same terminal window, watching for any file changes I may make.
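You can see the difference with make itself using a toy example (run it in an empty scratch directory; the targets are made up). Each recipe just sleeps for two seconds, so with the default single job the build takes about four seconds, while -j2 runs both recipes concurrently and finishes in about two:

cat > Makefile <<'EOF'
a: ; sleep 2
b: ; sleep 2
all: a b
EOF
time make all        # a then b: roughly 4 seconds
time make -j2 all    # a and b together: roughly 2 seconds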

redis: EVAL and the TIME

I like the Lua scripting for Redis, but I have a big problem with TIME.
I store events in a sorted set.
The score is the time, so that in my application I can view all events in a given time window.
redis.call('zadd', myEventsSet, TIME, EventID);
OK, but this is not working - I cannot access TIME (the server time).
Is there any way to get the time from the server without passing it as an argument to my Lua script? Or is passing the time as an argument the best way to do it?
This is explicitly forbidden (as far as I remember). The reasoning behind this is that your Lua functions must be deterministic and depend only on their arguments. What if this Lua call gets replicated to a slave with a different system time?
Edit (by Linus G Thiel): This is correct. From the redis EVAL docs:
Scripts as pure functions
A very important part of scripting is writing scripts that are pure functions. Scripts executed in a Redis instance are replicated on slaves by sending the script -- not the resulting commands.
[...]
In order to enforce this behavior in scripts Redis does the following:
Lua does not export commands to access the system time or other external state.
Redis will block the script with an error if a script calls a Redis command able to alter the data set after a Redis random command like RANDOMKEY, SRANDMEMBER, TIME. This means that if a script is read-only and does not modify the data set it is free to call those commands. Note that a random command does not necessarily mean a command that uses random numbers: any non-deterministic command is considered a random command (the best example in this regard is the TIME command).
There is a wealth of information on why this is, how to deal with this in different scenarios, and what Lua libraries are available to scripts. I recommend you read the whole documentation!
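A common workaround, consistent with the documentation above, is to read the clock on the client and pass it into the script as an argument, so the script itself stays a pure function. A minimal redis-cli sketch (the key and event names are made up):

# the client supplies the timestamp; the script remains deterministic
redis-cli EVAL "return redis.call('ZADD', KEYS[1], ARGV[1], ARGV[2])" \
    1 myEventsSet "$(date +%s)" "event:42"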

Help in understanding this bash file

I am trying to understand the code in this page: https://github.com/corroded/git-achievements/blob/gh-pages/git-achievements
and I'm kind of at a loss as to how it actually works. I do know some bash and shell scripting, but how does this script actually "store" how many times you've used a command (I'm guessing by saving to a text file?), and how does it "sense" that you actually typed a git command? I have a feeling it's line 464 onwards that does it, but I don't seem to quite follow the logic.
Can anyone explain this in a bit more understandable context?
I plan to do some achievements for other commands and I hope to have an idea on HOW to go about it without randomly copying and pasting stuff and voodoo.
Yes, the script proper starts at line 464; everything before that is helper functions. I don't know how it gets installed, but I would assume you have to call this script instead of the normal git command. It just checks whether the first parameter is achievement, and if not, regular git is executed with the remaining parameters. Afterwards it checks whether an error happened (and exits if so). Then it calls log_action and check_for_achievments. log_action just writes the issued command with a date into a text file, while the achievement check scans that log file for certain events. If you want to add another achievement, you have to do it in check_for_achievments.
Just look at how the big case statement handles it (most of the achievements call count_function, which counts the number of usages of the command and matches when a power of 2 is reached).
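A much simplified sketch of that overall pattern (this is not the actual git-achievements code; the log path is made up, and the power-of-two rule is just the one described above):

#!/bin/bash
# hypothetical wrapper, invoked instead of (or aliased to) git
LOG="$HOME/.fake-achievements.log"
sub="$1"

command git "$@" || exit $?                          # run the real git; stop on error

echo "$(date '+%Y-%m-%d %H:%M:%S') $sub" >> "$LOG"   # the logging step

# the achievement-check step: count uses of this subcommand and
# announce an achievement whenever the count reaches a power of two
count=$(awk -v s="$sub" '$NF == s' "$LOG" | wc -l)
if [ "$count" -gt 0 ] && [ $(( count & (count - 1) )) -eq 0 ]; then
    echo "Achievement unlocked: '$sub' used $count times"
fi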

command line wisdom for 2 panel file manager user

I want to upgrade my file management productivity by replacing a two-panel file manager with the command line (bash or Cygwin). Can the command line give the same speed? Please advise on the guru way to, for example, copy a file from directory A to directory B. Is it heavy use of pushd/popd? Or creating links to the most often used directories? What are the best practices and the day-to-day routine of a command-line master for managing files?
Can the command line give the same speed?
My experience is that command-line copying is significantly faster (especially in the Windows environment). Of course the basic laws of physics still apply: a file that is 1000 times bigger than a file that copies in 1 second will still take about 1000 seconds to copy.
...(how to) copy a file from directory A to directory B.
Because I often have 5-10 projects that use similar directory structures, I set up variables for each subdirectory using a naming convention:
project=NewMatch
NM_scripts=${project}/scripts
NM_data=${project}/data
NM_logs=${project}/logs
NM_cfg=${project}/cfg
proj2=AlternateMatch
altM_scripts=${proj2}/scripts
altM_data=${proj2}/data
altM_logs=${proj2}/logs
altM_cfg=${proj2}/cfg
You can make this sort of thing as spartan or baroque as needed to match your theory of living/programming.
Then you can easily copy the cfg files from one project to another:
cp -p $NM_cfg/*.cfg ${altM_cfg}
Is it heavy use of pushd/popd?
Some people seem to really like that. You can try it and see what you think.
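If you do want to try it, the idea is that pushd keeps a stack of directories you can jump back through (the directory names here are made up):

pushd ~/projects/NewMatch/cfg   # go there, remembering where we came from
pushd /var/log                  # jump somewhere else; the stack is now two deep
dirs -v                         # show the directory stack
popd                            # back to ~/projects/NewMatch/cfg
popd                            # back to where we started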
Or creating links to the most often used directories?
Links to directories are, in my experience, used more in software development, where source code expects a certain set of directory names and your installation uses different ones. Then making links to supply the expected directory paths is helpful. For production data, a link is just one more thing that can get messed up or blow up. That's not always true; maybe you'll have a really good reason to use links, but I wouldn't start out that way just because it is possible to do.
What are the best practices and the day-to-day routine of a command-line master for managing files?
(Per the above, use a standardized directory structure for all projects.
Have scripts save any small files to a directory your department keeps in /tmp,
e.g. /tmp/MyDeptsTmpFile, named to fit your local conventions.)
It depends. If you're talking about data and log files, dated file names can save you a lot of time. I recommend date formats like YYYYMMDD, or YYYYMMDD_HHMMSS if you need the extra resolution.
Dated log files are very handy: when a current process seems to be taking a long time, you can look at the log file from a week, a month, or six months ago (as far back as the space you can afford allows) and quantify exactly how long this process took then. Log files should also capture all STDERR messages, so you never have to re-run a failed program just to see what the error message was.
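For example (the script and path names are made up), a dated log file that also captures STDERR could be set up like this:

# one dated log per run, with stderr captured alongside stdout
logfile="/var/log/myteam/nightly_build_$(date +%Y%m%d_%H%M%S).log"
./run_build.sh > "$logfile" 2>&1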
This is Linux/Unix you're using, right? Read the man page for the cp command installed on your machine. I recommend using an alias like alias CP='/bin/cp -pi' so you always copy a file with the same permissions and the original file's timestamp. Then it is easy to use /bin/ls -ltr to see a list of files sorted by time, with the most recent files at the bottom (no need to scroll back to the top when you sort by time, reversed). The -i option will also warn you that you are about to overwrite a file, and this has saved me more than a couple of times.
I hope this helps.
P.S. As you appear to be a new user: if you get an answer that helps you, please remember to mark it as accepted, and/or vote it up (or down) as a useful answer.

Resources