When will a new value for CPUAffinity= in systemd come into effect?

If I change CPUAffinity= or CPUQuota= in a systemd unit configuration file such as postgresql#.service, when will the new settings come into effect? In particular, will I have to restart the service in order to see the service's processes executing on the intended CPUs, and will they be guaranteed to run there?

According to the testing that I have just done (why, oh why, does the documentation not clarify this!), changing the CPUAffinity requires a reboot.
I tried changing the value and then
restarting processes - no effect
systemctl daemon-reload - no effect
systemctl daemon-reexec - no effect
reboot
Only the reboot effected the change to CPUAffinity.
Tested on CentOS 7.
Also, for those finding the documentation lacking: CPU numbering starts at zero, you can specify ranges (such as 1-3), and multiple values may be given either space- or comma-delimited, as in the sketch below.
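A hedged illustration of that syntax in a unit file or drop-in (the CPU numbers are examples only):
[Service]
# CPUs are numbered from zero; ranges and lists both work:
CPUAffinity=0 2-3
# the same set, comma-delimited, would be: CPUAffinity=0,2,3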

You just need to reload the configuration (systemctl daemon-reload) then restart the service.
See for example here. There's no need to reboot the system like starfry suggests.
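A minimal sketch of that sequence, assuming a plain postgresql.service unit (adapt the name for templated units) and that the CPUAffinity= change is already in the unit file or a drop-in:
sudo systemctl daemon-reload
sudo systemctl restart postgresql.service
# verify which CPUs the service's main process may run on:
systemctl show -p MainPID postgresql.service    # prints MainPID=<pid>
taskset -cp <pid>                               # prints the current affinity list for that pid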

Mark standalone redis as read-only

I want to mark a standalone Redis server (not a Redis Cluster, not Redis Sentinel) as read-only. I have been googling this for quite some time but I don't seem to find a definite answer (almost all answers point to Clustering or Sentinel). I was looking for some config modification (CONFIG SET something).
NOTE: config set replica-read-only yes does not make the current redis-server read-only, but only its replicas.
My use-case basically is I am doing a migration wherein at some point I want to make the redis-server read-only. My application code can handle failures whenever a write call happens so that's not an issue.
Also, if this is not directly possible from redis server, is there something that I can do in the client code that'll have the same effect (I am using redis-py as the client library)? (Although this is less than ideal)
Things that I've tried
Played around with config set replica-read-only yes and other configs. They don't seem to apply to the current redis-server.
Tried marking a redis-server as a replica of itself (this was illogical, but I just wanted to see if it worked), but it turns out that deleted all keys in my local Redis, so that's not something I can do.
Once the writes are done and you want to switch the node to read-only, there are a couple of ways to do that:
Modify redis.conf to have "min-replicas-to-write 3". Since you don't have 3 replicas, your node will stop accepting writes but will continue to serve reads, as sketched below.
However, please note that after modifying redis.conf, you will have to restart your redis node for the changes to take effect.
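A minimal sketch of this first approach; the key, the value, and the exact error text are illustrative and may vary between Redis versions:
# in redis.conf, then restart the node:
#   min-replicas-to-write 3
$ redis-cli set somekey somevalue
(error) NOREPLICAS Not enough good replicas to write.
$ redis-cli get somekey
"previously-stored-value"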
Another way is, at the moment you want to switch to read-only mode, to create a replica, let it sync with the master, and then kill the master node. The replica will then remain, read-only.
There are several solutions you can try:
You can use the rename-command config to disable write commands. If you only want to disable a small number of commands, that's a good solution. However, since there are many write commands, you would need a lot of configuration entries, and it's easy to miss some of them.
If you're using Redis 6.0, you can use Redis ACL to disable write commands for specific users (see the sketch after this list).
You can setup a read-only Redis replica for your master, and ask clients to read from the replica.
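A hedged sketch of the first two options; the renamed command, the user name and the password are illustrative:
# Option 1, in redis.conf: renaming a write command to the empty string disables it.
#   rename-command SET ""
# Option 2, Redis 6.0+: a user that may read everything but write nothing.
$ redis-cli acl setuser readonly_user on '>s3cret' '~*' +@read -@write
OK
Clients would then authenticate as readonly_user instead of the default user.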

How to use distcc to preprocess and compile everything remotely only?

Background:
I have a 128-core server which I would like to use as a build server.
I have a bunch of client machines, each a not-so-new and not-so-powerful PC. (Can't upgrade! Not in my control.)
What I did:
I followed the distcc documentation.
I installed a virtual machine on the server with exactly the same compiler version and the same distcc version -- basically the same distribution as on the client machines.
After configuring the clients and the servers, I can build remotely. I can verify this using the distccmon-text tool: on the server there are 8 threads started by the distcc daemon, waiting for build jobs to arrive. This was good as a first step.
Second Step: Since the client machines are dual-core machines while the server offers 128-cores, and not all clients will be compiling at the same time, I wanted to offload as much of the build as possible to the build-server.
Problems:
First Problem: No matter how I try to configure it, distcc always tries to distribute the build jobs equally between the client and the server, even though my configuration file looks as shown below:
# --- /etc/distcc/hosts -----------------------
# See the "Hosts Specification" section of
# "man distcc" for the format of this file.
#
# By default, just test that it works in loopback mode.
# 127.0.0.1
172.24.26.208/8,cpp,lzo
localhost/0
As per the distcc documentation, this should give higher priority to the build server and lower priority to localhost, since localhost comes later in the list. It should also give 8 jobs to the build server and 0 jobs to localhost. But no, that doesn't happen: upon typing make -j8, it starts 4 threads on localhost and 4 on the remote host. Not good.
Second Problem: What you would also notice is that the pre-processing is being done on the local machine. For this there is a tool that comes with distcc, called "distcc-pump" or pump mode, which can be used like this:
time pump make CC="distcc gcc" CXX="distcc g++" -j8
Unfortunately, pump mode or not, all the pre-processing still happens on the localhost. Sad.
Note: At no point does the distcc program, with the configurations listed here, throw any errors or warnings, neither on the server nor on the clients.
Versions:
gcc 4.4.5
distcc 3.2rc1.2
(Before someone suggests "upgrade your software!": newer versions are most likely not possible for me. Anyway, this version of distcc offers the features that I need. I could upgrade the server virtual machine, but then there would be a compiler version mismatch between the clients and the server. The clients I cannot upgrade.)
Any suggestions, feedback on how to improve this setup/(fix the problems) are most welcome.
EDIT: these solutions do not work; I am leaving the answer here so that nobody else proposes them again.
Try:
removing the line concerning localhost in /etc/distcc/hosts, cf. https://superuser.com/questions/568133/force-most-compilation-to-a-remote-host-with-distcc
or maybe specifying 127.0.0.1 rather than localhost in /etc/distcc/hosts, cf. another problem solved with that substitution in https://distcc.github.io/faq.html
distcc actually differentiates between remote and local CPUs. But contrary to your interpretation, in the hosts file the IP address 127.0.0.1 is considered as a remote CPU and a distccd server is expected to be running there. Any number of jobs you define in the hosts file is interpreted only for these server nodes.
According to the man page, "localhost" is interpreted specially. This is what seems to not work for you. An alternative syntax is --localslots=<int>. Have you tested this?
Additionally, distcc runs jobs on the local host (the one where you start the driver program). First, all linking is done there. Second, when you specify a certain parallelism with make -jN, all jobs exceeding the available number of remote jobs are run on your local host, too - in addition to the workload distribution part of distcc. The option --localslots limits these. The man page does not mention localhost explicitly here. And then there are those jobs, which fail on the server and are repeated locally.
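For illustration, a hedged sketch of a hosts file that keeps the remote server from the question and caps the local slots; the option placement follows the "Hosts Specification" section of man distcc, and the counts are only examples:
# ~/.distcc/hosts
--localslots=1 --localslots_cpp=2
172.24.26.208/8,cpp,lzo
localhost/0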
For the given 128-core server I would use the number of cores in the hosts file and start only that number of compile jobs:
$ cat ~/.distcc/hosts
172.24.26.208/128,cpp,lzo
$ make -j 128
...
Then I would expect to not see any compile jobs on the local machine.
The man page has some more words regarding recommended job numbers. Search for the section(s) starting with "distcc spreads the jobs across both local and remote CPUs".

Clickhouse server failed to restart because of LowCardinality setting

I tried to play with the LowCardinality setting and got a message saying that this is an experimental feature and that I have to SET allow_experimental_low_cardinality_type = 1 in order to use it.
I executed this command inside clickhouse-client and then I restarted the server. But I got
clickhouse-server.service: Unit entered failed state
Now I am trying to find out how to disable this setting and make my clickhouse-server start again.
Can you help with this, please?
PS: The version I use is 18.12.17 and I installed it on Ubuntu 16.04.
ClickHouse has different layers of settings. If you use SET <setting> = <value>, then you set it for the current session only; you don't need to restart ClickHouse. Please take a look here.
I suppose you ran into a different problem when starting your server; there are a bunch of possible reasons. So, first try to recall what was changed in the configs since the last restart (because restarting the server is what applied those changes).
Digging into the logs is also a good idea. Don't hesitate to check other similar issues on github.com, for example this one.
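Since the startup failure is what needs diagnosing, a hedged sketch of where to look, assuming the default Ubuntu package layout:
$ sudo journalctl -u clickhouse-server -n 50
$ sudo tail -n 100 /var/log/clickhouse-server/clickhouse-server.err.log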

How can I allow a private insecure registry to be used inside a minikube node?

I know there are about a thousand answers to various permutations of this question but none of the fifteen or so that I've tried have worked.
I'm running on Mac OS Sierra and using Minikube 0.17.1 as well as kubectl 1.5.3.
We run our own private Docker registry that is insecure as it is not open to the internet. (This is not my choice or in my control so it's not open for discussion). This is my first foray into Kubernetes and actually container orchestration altogether. I also have a very intermediate level of knowledge about Docker in general so I'm drowning in terminology/platform soup here.
When I execute
kubectl run perf-ui --image=X.X.X.X/performance/perf-ui:master
I see
image pull failed for X.X.X.X/performance/perf-ui:master, this may be because there are no credentials on this request. details: (Error response from daemon: Get https://X.X.X.X/v1/_ping: dial tcp X.X.X.X:443: getsockopt: connection refused)
We have an Ubuntu box that accesses the same registry (not using Kubernetes, just Docker) that works just fine. This is likely due to
DOCKER_OPTS="--insecure-registry X.X.X.X"
being in /etc/default/docker.
I made a similar change using the UI of Docker for Mac. I don't know where this change was persisted in a config file. After this change a docker pull worked on my laptop!!! Again, this is just using Docker, not Kubernetes. The interesting part is that I got the same "connection refused" error (as it tries to access via HTTPS) on my Mac as I get in the Minikube VM, and after the change the pull worked. I feel like I'm on to something there.
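For reference, the Docker for Mac UI setting corresponds to a daemon.json entry roughly like the one below (the address is the placeholder used throughout this question); note that it only configures the Docker Desktop daemon on the Mac, not the daemon inside the Minikube VM:
{
  "insecure-registries": ["X.X.X.X"]
}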
After sshing into minikube (the VM created by minikube start) using
minikube ssh
I added the following content to /var/lib/boot2docker/profile
export EXTRA_ARGS="$EXTRA_ARGS --insecure-registry 10.129.100.3"
export DOCKER_OPTS="$DOCKER_OPTS --insecure-registry 10.129.100.3"
As you can infer, nothing has worked. I know I've tried other things but they too have failed.
I know this isn't the most comprehensive explanation but I've been digging into this for the past 4 hours.
The bottom line is docker pulls work from our Ubuntu box with the config file setup correctly and from my Mac with the setting configured properly.
How can I enable the same setting in my "Linux 2.6" VM that was created by Minikube?
If someone knows the answer I would be forever grateful.
Thank you in advance!
Thank you to Janos for your alternative solution. I'm confident that is the right choice for some use cases.
It turns out that what I needed was a good night sleep and the following command to start Minikube in the first place:
minikube start --insecure-registry="X.X.X.X"
@intelfx says that adding a port won't be necessary. I'm inclined to believe them, but if your registry is on a non-standard port, just keep it in mind in case things still aren't working.
In the end it was, in fact, a matter of telling Docker to use an insecure registry but it was not clear how to tell this to Docker when I was not controlling it directly.
I know that seems simple but after you've tried a hundred things you're almost hallucinating so you're not in a great state to make rational decisions. I'm sorry for the dumb post but I'm willing to bet this will help at least one person one day, which makes it worth it.
Thanks SO!
The flag --insecure-registry doesn't work on an existing cluster on macOS. You need to do minikube delete; it's not enough just to stop the cluster with minikube stop.
I spent plenty of time to figure this out and then I found this comment at https://github.com/kubernetes/minikube/issues/604:
the --insecure-registry flag is ignored if the machine already existed (even if it is stopped). You must first minikube delete if you want new flags to be respected.
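Putting the two answers together, a minimal sketch of the sequence, reusing the placeholders from the question:
$ minikube delete
$ minikube start --insecure-registry="X.X.X.X"
$ kubectl run perf-ui --image=X.X.X.X/performance/perf-ui:master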
You can use kube-registry-proxy (needs some configuration):
https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml
Then you can refer to localhost:5050 as your registry. The trick is that localhost is allowed as an insecure registry by default.

Tomcat as a Service

I need to write a shell script in which I need to bounce the Tomcat server (it could be running on anyone's system). Hence, I wanted to know how I should check whether Tomcat is run as a service with "service tomcat6 start" or with the script "./bin/startup.sh".
If this is for a production server: Assume that it's always started as a service. If you find out that it isn't: Find the person that started from the shell and fire them.
Hard words, but on production systems: hands off, keep them operating according to a standard. If you automate the bouncing (restart): this is what you do.
Dangers when starting through startup.sh: The process will be started as whatever user executes the script - potentially lacking write permissions to the log and temp files, or ruining it for the next start through service tomcat start, when the service can't access those files any more.
Thinking of it: it might be a good idea to check (at least) the identity of the current user in startup.sh (or setenv.sh) and terminate if it's not the expected one, thus effectively forbidding anyone, including root, from ever running startup.sh as the wrong user.
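A hedged sketch of such a guard, assuming the service account is named tomcat (adjust to your installation), placed at the top of setenv.sh:
# refuse to start unless we are the dedicated service user
if [ "$(id -un)" != "tomcat" ]; then
    echo "Refusing to start: use 'service tomcat6 start' so Tomcat runs as the tomcat user." >&2
    exit 1
fi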
