Why does dbus-daemon take 1min:30s to start? - systemd

$ time sudo dbus-daemon --system
real 1m30.111s
user 0m0.017s
sys 0m0.003s
Bare-bones Arch Linux inside Docker on Arch Linux.
D-Bus Message Bus Daemon 1.12.16
I tried dbus-x11 from the AUR; same result, every time.
Edit/Details: the sudo invocation above takes 1:30 to execute, but the actual dbus-daemon process is spawned right away and continues to run during and after the 1:30, successfully (i.e. it works). Why do I need dbus-daemon? For avahi-daemon (more specifically, to be able to run avahi-browse --all and discover stuff on my network).
Edit 2: it seems that even though 'everything works' despite this slowness (avahi, network service discovery, etc.), the container becomes dead slow. Merely running sudo echo 'something' takes 25 seconds (a figure perhaps related to a timeout of 25000 inside /usr/share/dbus-1/system.conf). Just like an infection. For what it's worth, after reading more, it seems the frustration of needing dbus is not restricted to the world of containerization; there are plenty of articles/communities like this and this.
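A quick way to check that 25-second hypothesis yourself (the path is the one given above; the value in the file is in milliseconds):
$ grep -n '25000' /usr/share/dbus-1/system.conf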

I hit this issue with various Docker images, but not always. I dug deeper into this issue and found an interesting comment on the systemd repo.
The images I'm currently working on had systemd configured as an NSS provider for passwd and group:
$ cat /etc/nsswitch.conf
passwd: sss files systemd
group: sss files systemd
Then I removed the systemd provider ($ sed -i 's/ systemd//g' /etc/nsswitch.conf) and the 90s hang was gone when starting with dbus-daemon --system --nofork. This was really a PITA to find out.
I could also verify that exactly this was the issue/difference for another Docker image I'm maintaining.
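A minimal sketch of the fix inside such a container (back up the file first; whether systemd appears in nsswitch.conf at all depends on the image):
$ cp /etc/nsswitch.conf /etc/nsswitch.conf.bak
$ sed -i 's/ systemd//g' /etc/nsswitch.conf
$ time sudo dbus-daemon --system
The real time should now be well under the 1m30s seen in the question.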

Related

Optimizing a systemd service taking too long

I have this udhcpc service in my system:
[Unit]
Description=uDHCP Client Service
After=network.target
Conflicts=systemd-resolved.service
[Service]
Type=forking
ExecStart=/sbin/udhcpc -p /var/run/udhcpc.brg0.pid -i brg0 -R -b
ExecStop=/bin/sh -c 'test -f /var/run/udhcpc.brg0.pid && kill $(cat /var/run/udhcpc.brg0.pid)'
[Install]
WantedBy=multi-user.target
It's been working well, except that systemd-analyze blame shows it adding about 7 seconds to the boot time:
7.388s udhcpc.service
4.946s dev-mmcblk1p2.device
1.303s uim-sysfs.service
959ms dev-mmcblk1p4.device
752ms dev-mmcblk1p3.device
739ms dev-mmcblk1p1.device
718ms systemd-hwdb-update.service
...
And here is the output of systemd-analyze critical-chain:
multi-user.target @15.164s
└─udhcpc.service @7.773s +7.388s
  └─network.target @7.551s
    └─systemd-networkd.service @6.724s +668ms
      └─systemd-udevd.service @1.854s +87ms
        └─systemd-tmpfiles-setup-dev.service @1.662s +70ms
          └─systemd-sysusers.service @1.353s +229ms
            └─systemd-remount-fs.service @1.044s +238ms
              └─systemd-journald.socket @911ms
                └─-.slice @281ms
I suppose the right way to fix this is to avoid using udhcpc and stick to the mechanism built into systemd, but unfortunately that's not my call. I'd like to at least optimize the boot time though. What are some things I can do?
The "problem" is systemd-networkd, which stops the boot until the network is configured -- and this is what you want to replace, not systemd-resolved.
Network autoconfiguration cannot be made any faster, because, when properly implemented, DHCP needs to check that the address isn't already in use, which involves sending a bunch of ARP packets and waiting for the timeout.
Since you insert your service between "network is configured" and "multi-user boot is complete" targets, you introduce a dependency where there was none before.
Network configuration is normally asynchronous, because any service that fails when the network is unconfigured at start would also fail when the network goes down later.
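One way to act on that advice is the following sketch (not a tested drop-in; the device unit name assumes the bridge really is brg0). It runs udhcpc in the foreground with Type=simple and orders it after the device rather than network.target, so the lease wait drops off the critical chain:
[Unit]
Description=uDHCP Client Service
Conflicts=systemd-networkd.service
BindsTo=sys-subsystem-net-devices-brg0.device
After=sys-subsystem-net-devices-brg0.device
[Service]
Type=simple
ExecStart=/sbin/udhcpc -f -p /var/run/udhcpc.brg0.pid -i brg0 -R
[Install]
WantedBy=multi-user.target
With Type=simple, systemd considers the service up as soon as the process is spawned, so the ARP-probe wait described above happens in parallel with the rest of the boot. Note the Conflicts= now points at systemd-networkd, per the first paragraph of this answer.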

slapd command on macOS

I have the following commands on macOS:
$ sl
slapacl slapadd slapauth slapcat slapconfig slapdn
slapindex slappasswd slapschema slaptest sleep slogin
I am following this tutorial on running an LDAP server on macOS:
http://krypted.com/mac-security/starting-openldap-on-mac-os-x-client/
It seems strange that I don't have a slapd command. Does anyone know why?
Since slapd is almost never run "by hand", it's not in one of the binary directories that are in the default PATH. Instead, it's in /usr/libexec, which is the usual place for things that are run automatically rather than manually. So run it with sudo /usr/libexec/slapd instead of just slapd. (BTW, the sudo is needed so it can allocate low-numbered TCP ports and get full access to its database.)
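To confirm where it lives and test it interactively, something like this works (-d is slapd's standard debug flag, useful here because it keeps the daemon in the foreground):
$ ls -l /usr/libexec/slapd
$ sudo /usr/libexec/slapd -d 255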

What are the systemd dependencies of ntpd?

My ntptime is showing error code 5 when the system starts. Restarting the ntpd through systemctl fixes this. Waiting a few minutes also seems to fix this. I have verified that ntpq shows that ntpd is talking to my intended server. This may be caused by another issue, but I think I'll take this time to ask a more general-purpose question.
Does anybody know which systemd dependencies are required for ntpd to work? I would love to see a minimum working example ntpd.service file from a system whose ntptime shows great success on system start.
Check the dependencies of a systemd unit with:
systemctl list-dependencies ntp
That command was found by reviewing man systemctl.
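If list-dependencies doesn't reveal enough, a couple of standard systemctl calls give a closer look (the unit may be named ntp, ntpd, or ntpsec depending on the distribution):
$ systemctl list-dependencies ntpd.service
$ systemctl show -p After,Wants,Requires ntpd.service
$ systemctl cat ntpd.service
show -p prints the resolved ordering and requirement dependencies, and cat prints the unit file itself, which is the closest thing to the "minimum working example" asked for above.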

Starting an application with a graphical interface on boot

I have a small question that I haven't found any answers to.
I run a virtual machine on my CentOS server, and I have made a simple script to start the virtual machine. I would like to run the script on boot so that the virtual machine also starts on boot. So I registered the script with the following
chkconfig --add myscript
and enabled it with the following
chkconfig --level 2345 myscript on
and finally I checked that it is registered and enabled correctly with
$ chkconfig --list | grep myscript
So far, so good, but when I restart my machine to test it, nothing happens.
So now I wonder: why isn't my script running? I had some thoughts that it could be because of a missing argument; myscript requires the argument "start" to run properly, so I think that could be why it's not running. In that case, where should I add the argument?
Note also that my script is OK, or at least I can run it manually.
UPDATE
The script is run during boot and is working as it should. The application I try to start with the script, my virtual machine, has a graphical interface, and it seems that this is what's causing the trouble. Does anyone have experience starting a graphical application from a script on boot, on Unix-based OSes of course? Or are there any other clever ways to achieve this?
Thanks!
Make sure that the proper symlinks get created in /etc/rc?.d/, and that your startup script in /etc/init.d/ contains start and stop methods; the init system invokes the script with those arguments, which answers where your missing "start" comes from.
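For reference, a sketch of the shape chkconfig expects (the chkconfig: header line is what chkconfig --add reads; the VM script path is a hypothetical stand-in for yours):
#!/bin/sh
# chkconfig: 2345 99 01
# description: start my virtual machine at boot

case "$1" in
  start)
    # the init system calls this script with "start" at boot,
    # so the argument your script needs is handled here
    /usr/local/bin/start-my-vm.sh start &
    ;;
  stop)
    /usr/local/bin/start-my-vm.sh stop
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
As for the update: init scripts run before any graphical session exists, so a GUI application has no display to attach to at that point. Running the VM headless, or launching it from the desktop session's autostart instead, sidesteps that.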

Starting MarkLogic server stalled with "Waiting for device mounted to come online : /dev/xvdj"

Using a free 'micro' instance from Amazon to fire up a quick demo of MarkLogic. The rpm installs fine with no errors.
Some information that may be helpful:
[user@aws ~]$ rpm -qa | grep release
redhat-release-server-6Server-6.4.0.4.el6.x86_64
[user@aws ~]$ rpm -qa | grep MarkLogic
MarkLogic-7.0-1.x86_64
Starting the MarkLogic server for the very first time shows this:
[user@aws ~]$ sudo /etc/init.d/MarkLogic start
Initialize Configuration
Region: us-west-2 ML_NAME:
Set configuration: MARKLOGIC_ZONE="us-west-2c"
Instance is not managed
Waiting for device mounted to come online : /dev/xvdj
And here it sits, with no other messages anywhere, including /var/opt/MarkLogic/Logs (which doesn't exist yet).
Even though Micro instances aren't officially supported, you can usually start one up. But reports are that you will quickly wish you hadn't.
That said, see the precise instructions at http://developer.marklogic.com/products/aws and, in particular, the step about mounting a disk at /dev/sdf; the server init script will wait forever to come up if you don't do that.
If the above didn't help: I've dug into the RPM enough to discover some issues on AWS.
For one, it uses some sysconfig scripts to detect whether it's running on AWS. If you're running MarkLogic 6, these sysconfigs have a hardcoded drive and will wait indefinitely, since that drive probably won't exist. Yours is 7, which still has some issues on AWS. To bypass this, you can create a /usr/bin/is-ec2.sh that contains:
#!/bin/bash
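# always report "not EC2" so the init script skips its detection logic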
exit 1
That will prevent it from doing any EC2 detection. More details can be found in my write-up at this github post.
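Putting it together, a sketch of the workaround (marking the script executable is assumed to be necessary for the init script to invoke it):
$ sudo tee /usr/bin/is-ec2.sh <<'EOF'
#!/bin/bash
exit 1
EOF
$ sudo chmod +x /usr/bin/is-ec2.sh
$ sudo /etc/init.d/MarkLogic start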
