I have an application which runs as user "foo". I would like to limit the CPU and memory that this application can use at run time. I cannot work out how to achieve this with systemd tools like "systemctl" and "set-property", because this application is not a service; it is an application that a user can select and start.
Can someone please provide any guidance?
Resource management happens at the cgroup level. A service, for example, gets its own cgroup. Systemd is just the intermediary between you and the kernel: you tell systemd how many resources your service should get, and systemd propagates this to the kernel by manipulating the cgroup hierarchy.
A few things to check:
1) You can adjust your application's service's resource management. To find the service of your application, I would use systemd-cgls.
2) Make sure you have the necessary control groups enabled on your system, such as the cpu, cpuacct and memory cgroups.
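A quick way to check the second point, as a minimal sketch (the paths depend on whether your system uses cgroup v1 or the unified cgroup v2 hierarchy):

```shell
# List the cgroup controllers the kernel knows about (works on v1 and v2)
cat /proc/cgroups
# On a unified cgroup v2 hierarchy, the controllers active for systemd:
cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null || true
```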
If you have any other specific question, just shoot.
If you wish to start a process from the command line, systemd-run --scope can be used. The process will then run as a scope unit:
:/home/foo# systemd-run --scope dummy_bin &
[1] 201
:/home/foo# Running as unit run-201.scope.
Then systemctl can be used on this scope unit, like:
:/home/foo# systemctl status run-201.scope
Loaded: loaded (/run/systemd/system/run-201.scope; static)
Drop-In: /run/systemd/system/run-201.scope.d
Active: active (running) since Thu 1970-01-01 01:01:01 UTC; 10s ago
CGroup: /system.slice/run-201.scope
`-201 /usr/bin/dummy_bin
Resource management of the scope unit can then be done with systemctl set-property.
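For example, to cap the scope created above (unit name taken from the example; MemoryMax= assumes a cgroup-v2 system, while older cgroup-v1 setups use MemoryLimit= instead):

```shell
# Limit the scope to half a CPU and 512 MiB of memory
systemctl set-property run-201.scope CPUQuota=50% MemoryMax=512M
# Verify the applied properties
systemctl show run-201.scope -p CPUQuotaPerSecUSec -p MemoryMax
```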
If I change CPUAffinity= or CPUQuota= in a systemd unit configuration file such as postgresql@.service, when will the new settings take effect? In particular, will I have to restart the service in order to see the service's processes executing on the intended CPUs, and are they then guaranteed to run there?
According to the testing that I have just done (why, oh why, does the documentation not clarify this!), changing the CPUAffinity requires a reboot.
I tried changing the value and then
restarting processes - no effect
systemctl daemon-reload - no effect
systemctl daemon-reexec - no effect
reboot
Only the reboot effected the change to CPUAffinity.
Tested on CentOS 7.
Also, for those finding the documentation lacking: CPU numbering starts at zero, you can specify ranges (such as 1-3), and multiple values may be given either space- or comma-delimited.
You just need to reload the configuration (systemctl daemon-reload) and then restart the service.
See for example here. There's no need to reboot the system as starfry suggests.
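A sketch of the sequence (the instance name postgresql@main.service is only an example; taskset comes from util-linux):

```shell
# Pick up the edited unit file and restart the instance
sudo systemctl daemon-reload
sudo systemctl restart postgresql@main.service
# Check the effective CPU affinity of the service's main process
taskset -cp "$(systemctl show -p MainPID --value postgresql@main.service)"
```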
Here is the context of my problem:
a gitlab ci yml pipeline
several jobs in the same stage
all jobs run a Gradle task that needs the Gradle cache
all jobs share the same Gradle cache
My problem:
sometimes, when several pipelines run at the same time, I get:
What went wrong:
Could not create service of type FileHasher using GradleUserHomeScopeServices.createCachingFileHasher().
Timeout waiting to lock file hash cache (/cache/.gradle/caches/5.1/fileHashes). It is currently in use by another Gradle instance.
Owner PID: 149
Our PID: 137
Owner Operation:
Our operation:
Lock file: /cache/myshop/reunion/.gradle/caches/5.1/fileHashes/fileHashes.lock
I can't find any documentation about the locking mechanism Gradle uses. I don't understand why locks are taken when the Gradle task doesn't write to the cache dir.
Does anyone know how these locks work? Or can I simply increase the timeout so that concurrent tasks wait their turn long enough instead of failing?
I tried to run Gradle without a daemon; it did not work.
I fixed this by killing all Java processes in Activity Monitor (macOS). Hope it helps.
You typically get this error when trying to share the Gradle cache amongst multiple Gradle processes that run on different hosts. I assume your CI pipelines run on different hosts or they at least run isolated from each other (e.g., as part of different Docker containers).
Unfortunately, such a scenario is currently not supported by Gradle. Gradle developer Stefan Oehme wrote this comment wrt. sharing the Gradle user home:
Gradle processes will hold locks if they are uncontended (to gain performance). Contention is announced through inter-process communication, which does not work when the processes are isolated in Docker containers.
And more clearly he states in a follow-up comment (highlighting by me):
There might be other issues that we haven't yet discovered though, since sharing a user home between machines is not a use case we have designed for.
In other words: sharing a Gradle user home or even just the cache part of it across different machines or otherwise isolated processes is currently not officially supported by Gradle. (See also my related question.)
I guess the only way to solve this for your scenario is to either:
make sure that the Gradle processes in your CI pipelines can communicate with each other (e.g., by running them on the same host), or
don’t directly share the Gradle user home, e.g., by creating copies for all CI pipelines, or
don’t run the CI pipelines in parallel.
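The second option might be sketched in a GitLab CI script step like this (CI_PIPELINE_ID is a predefined GitLab CI variable; the cache path is an example):

```shell
# Give each pipeline its own Gradle user home so lock files are never
# contended between concurrently running pipelines.
export GRADLE_USER_HOME="/cache/gradle-${CI_PIPELINE_ID}"
./gradlew --no-daemon build
```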
Another scenario where this can happen is when some of these Gradle-related files are on a cloud file system, such as OneDrive, that needs re-authentication. In that case:
Re-authenticate to the cloud file system
"Invalidate Caches / Restart" in Android Studio
1. First edit your config file /etc/sysconfig/jenkins and change the user to root: JENKINS_USER="root"
2. Change ownership of /var/lib/jenkins to root: chown -R root:root /var/lib/jenkins
3. Restart the service: service jenkins restart
This worked for me:
rm /cache/myshop/reunion/.gradle/caches/5.1/fileHashes/fileHashes.lock
(Remove the lock file.)
I am translating my upstart scripts to systemd scripts and I am wondering what the best practice is for translating the following:
start on started postgresql
stop on stopping postgresql
Is the Requires= directive the right one for me, or is there a better one?
start on and start on started according to upstart:
6.33 start on
This stanza defines the set of Events that will cause the Job to be automatically started.
...
6.33.2 Start depends on another service
start on started other-service
So in your case, start on started postgresql means it needs to start after postgresql has successfully started because it depends on it.
In systemd that would be:
[Unit]
Requires=postgresql.service
After=postgresql.service
Because according to the systemd.unit man page:
After=,Before= ... If a unit foo.service contains a setting Before=bar.service and both units are being started, bar.service's start-up is delayed until foo.service is started up. [...] After= is the inverse of Before=, i.e. while After= ensures that the configured unit is started after the listed unit finished starting up, Before= ensures the opposite, that the configured unit is fully started up before the listed unit is started.
...
Requires= Configures requirement dependencies on other units. If this unit gets activated, the units listed here will be activated as well. If one of the other units fails to activate, and an ordering dependency After= on the failing unit is set, this unit will not be started. [...] If a unit foo.service requires a unit bar.service as configured with Requires= and no ordering is configured with After= or Before=, then both units will be started simultaneously and without any delay between them if foo.service is activated.
As for stop on and stop on stopped according to upstart:
6.34 stop on
This stanza defines the set of Events that will cause the Job to be automatically stopped if it is already running.
...
6.34.3 Stop after dependent service
stop on stopped other-service
The After=postgresql mentioned above has you covered because again, according to the systemd.unit man page:
After=,Before= [...] Note that when two units with an ordering dependency between them are shut down, the inverse of the start-up order is applied. i.e. if a unit is configured with After= on another unit, the former is stopped before the latter if both are shut down.
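Putting both parts together, a minimal ported unit could look like this (Description= and ExecStart= are placeholders, not taken from the original upstart job):

```ini
[Unit]
Description=My service that depends on PostgreSQL
Requires=postgresql.service
After=postgresql.service

[Service]
ExecStart=/usr/local/bin/myservice

[Install]
WantedBy=multi-user.target
```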
I have a sysv style init file for a service being used in centos 7.1
When the system boots up, systemd generates a service file, and it seems to be enabled for both runlevel 2 and runlevel 3.
I have following questions:
1) Can the service be started twice, once at each run level? How can I prevent that if it can?
2) How can I check at which run level the currently executing service was started?
Thanks
Arvind
This depends on your service. If your service is already active, starting it again will not do anything. You can check whether your service is active by running "systemctl status yourservice.service". If your service exits but should still be treated as active, you can tell systemd to do so with the RemainAfterExit= directive (https://www.freedesktop.org/software/systemd/man/systemd.service.html#RemainAfterExit=).
To find out which run level (target) your service was started by, look at the "systemctl show yourservice.service" output and check what is listed in the WantedBy= and RequiredBy= fields.
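For example (the unit name is a placeholder):

```shell
# Print only the fields that show what pulls the unit in
systemctl show yourservice.service -p WantedBy -p RequiredBy
```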
I want to port some upstart scripts to systemd, but I want full control over the services that get started (and their order). I have stripped down the unit folder and written the units from scratch. Things get started in the desired order, but I can see attempts to load some default units:
[0.791694] systemd[1]: systemd-journald.socket: Failed to load configuration: No such file or directory
[0.792422] systemd[1]: syslog.target: Failed to load configuration: No such file or directory
[0.793083] systemd[1]: var.mount: Failed to load configuration: No such file or directory
[0.793677] systemd[1]: var-run.mount: Failed to load configuration: No such file or directory
How can I get rid of these messages? Can anyone point me to a guide for using systemd on embedded systems? I searched but couldn't find the right one.
I also need control over cgroups and over when filesystems get mounted.
The errors are seen because those units are referred to, under Wants=, by some target unit on your system while their unit files are missing.
Cleaning up these dependencies will get rid of these messages.
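To find where the stale references come from, something like this can help (the unit directories vary by distribution; syslog.target is one of the units from the log above):

```shell
# Search all unit directories for references to a missing unit
grep -r "syslog.target" /etc/systemd/system /lib/systemd/system /usr/lib/systemd/system 2>/dev/null || true
```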
There is no guide specific to embedded systems. Systemd was initially aimed at the desktop but can be used across the board.