Krusader crashes on Mac OS X 10.7.2

I've installed Krusader via MacPorts, but I'm unable to run it:
w-4jesc:web_cms_dms w-4jesc$ /Applications/MacPorts/KDE4/krusader.app/Contents/MacOS/krusader
KGlobal::locale::Warning your global KLocale is being recreated with a valid main component instead of a fake component, this usually means you tried to call i18n related functions before your main component was created. You should not do that since it most likely will not work
krusader(97879)/KSharedDataCache ensureFileAllocated: This system misses support for posix_fallocate() -- ensure this partition has room for at least 10547296 bytes.
krusader(97879)/KSharedDataCache: Unable to find an appropriate lock to guard the shared cache. This *should* be essentially impossible. :(
krusader(97879)/KSharedDataCache: Unable to perform initial setup, this system probably does not really support process-shared pthreads or semaphores, even though it claims otherwise.
krusader(97879)/KSharedDataCache: Unable to unmap shared memory segment 0x1142d2000
Dynamic session lookup supported but failed: launchd did not provide a socket path, verify that org.freedesktop.dbus-session.plist is loaded!
krusader(97879)/kdeui (kdelibs): Session bus not found
To circumvent this problem try the following command (with Linux and bash)
export $(dbus-launch)
KCrash: Application 'krusader' crashing...
KCrash: Attempting to start /opt/local/lib/kde4/libexec/drkonqi.app/Contents/MacOS/drkonqi from kdeinit
sock_file=/Users/w-4jesc/Library/Preferences/KDE/socket-w-4jesc.local/kdeinit4__tmp_launch-URcTjs_org.x_0
Warning: connect() failed: : No such file or directory
KCrash: Attempting to start /opt/local/lib/kde4/libexec/drkonqi.app/Contents/MacOS/drkonqi directly
QProcess: Destroyed while process is still running.
drkonqi(97882)/KSharedDataCache ensureFileAllocated: This system misses support for posix_fallocate() -- ensure this partition has room for at least 10547296 bytes.
drkonqi(97882)/KSharedDataCache: Unable to find an appropriate lock to guard the shared cache. This *should* be essentially impossible. :(
drkonqi(97882)/KSharedDataCache: Unable to perform initial setup, this system probably does not really support process-shared pthreads or semaphores, even though it claims otherwise.
drkonqi(97882)/KSharedDataCache: Unable to unmap shared memory segment 0x1145ea000
Dynamic session lookup supported but failed: launchd did not provide a socket path, verify that org.freedesktop.dbus-session.plist is loaded!
drkonqi(97882)/kdeui (kdelibs): Session bus not found
To circumvent this problem try the following command (with Linux and bash)
export $(dbus-launch)
dbus is already loaded:
sudo launchctl load -w /Library/LaunchAgents/org.freedesktop.dbus-session.plist
org.freedesktop.dbus-session: Already loaded
Does anyone have an idea?

sudo launchctl load -w /Library/LaunchAgents/org.freedesktop.dbus-session.plist

The MacPorts installer specifically says to start dbus without sudo:
# Don't forget that dbus needs to be started as the local
# user (not with sudo) before any KDE programs will launch
# To start it run the following command:
# launchctl load /Library/LaunchAgents/org.freedesktop.dbus-session.plist
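Since the agent was loaded with sudo, it ended up in root's launchd context rather than the user's session, which is why launchd never hands Krusader a socket path. A minimal sketch of the likely fix, assuming the plist path from the installer note: unload it as root, then load it as the local user.
sudo launchctl unload -w /Library/LaunchAgents/org.freedesktop.dbus-session.plist
launchctl load /Library/LaunchAgents/org.freedesktop.dbus-session.plist
After that, launchctl list | grep dbus (run as the normal user) should show the agent, and Krusader should be able to find the session bus.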

Related

Custom-built kernel produces non-bootable initramfs on CentOS 7

I've been building my own kernel (4.19.37) and have no issues during the build (make) or install (make modules_install + make install). Everything seems to go fine until I execute grub2-mkconfig -o /boot/grub2/grub.cfg. When executing this command, grub finds both my existing and new vmlinuz-* kernels in /boot/ as well as their corresponding initramfs-*.img. However, at that point the system hangs indefinitely (> several hours); Ctrl+C does not stop it and I must reboot. I have looked into this issue, and all I have found that could be a problem is the probing of removable disks for bootable OSes, which I have eliminated both by removing them and by adding GRUB_DISABLE_OS_PROBER=true to /etc/default/grub per this SE post. Neither has helped.
Upon reboot, I end up at the grub> command line, presumably because grub2-mkconfig never finished and corrupted the grub configuration file. From here I can load both the old and new kernels without any issue, as well as an initramfs, but when I execute boot I get a kernel panic:
end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
Naturally, my assumption is that something is wrong with the initramfs-4.19.37.img created by my build process. As an experiment, I tested whether I could load the new kernel but use the old initramfs (4.19.10), and indeed it boots into emergency mode. I cannot do the opposite, however: old kernel with new initramfs. So something is fishy with my new initramfs image.
Getting smarter, my last experiment was mounting the old and new initramfs images with mount. They both mount successfully with no errors and seem to have identical file structures. I have also compared the new and old .config files for the kernel builds, and the differences are trivial.
A few other notes/observations:
In the panic output (originally shown as a screenshot), you can see that List of all partitions: produces nothing, so I am wondering if there is an issue with the file system type. My hard drive is xfs; what is the file system for the initramfs? cpio?
At the grub> command line, ls / produces what I expect to see in /boot. It contains all my vmlinuz-* and initramfs-*.img files
My file system is xfs
I've tried various other kernel versions with same results
I have twice had successful builds and installs: once with the existing kernel (4.19.10), which was an upgrade, and a second time with the same kernel with a low-latency preemption model. I can't for the life of me figure out what I did differently then.
So the final question(s) are: What's wrong with the initramfs from these builds? What else can I do to validate its integrity? Are there any .config changes I should make when building the kernel for the xfs file system?
Disclaimer: This is actually a continuation of [this question][3], but I've simplified the problem a bit. Some background info there might be relevant.
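For what it's worth, an initramfs is not a mountable file system in the usual sense but a (typically gzip-compressed) cpio archive. A quick way to sanity-check one on CentOS 7, assuming dracut's lsinitrd tool is installed and using the image name from the question:
file /boot/initramfs-4.19.37.img
lsinitrd /boot/initramfs-4.19.37.img | less
lsinitrd lists the kernel modules and files packed into the image; an image that is much smaller than its predecessor, or that shows no xfs module, would point at the build step.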
After updating the kernel using yum update and rebooting the VM into the new kernel, you get a kernel panic. The following commands fixed this problem:
yum remove kernel
yum update
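If the panic is caused by a bad initramfs rather than a bad kernel package, it may be enough to regenerate the image with dracut instead of removing the kernel; a sketch, assuming the custom build's version string from the question:
dracut --force /boot/initramfs-4.19.37.img 4.19.37
dracut --force overwrites the existing image for that kernel version; re-run grub2-mkconfig -o /boot/grub2/grub.cfg afterwards if the menu entries need refreshing.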

"Operation note permitted" when trying to change kern.sysv.shmmni on Mac OS X

I would like to change the shmmni parameter of my shared memory kernel settings, but when I try to write to it, I get "Operation not permitted".
sysctl -w kern.sysv.shmmni=2048
Output:
kern.sysv.shmmni: 64
sysctl: kern.sysv.shmmni=2048: Operation not permitted
Can this be circumvented in any way? Why is the operation not permitted? shmmni should be a writable parameter... I can set the other shared memory params (shmmax, shmmin, shmall, shmseg)
I can change shmmni by updating /etc/sysctl.conf or by changing the setting via a launchdaemon, but these changes only have effect upon rebooting the system.
I would like to force-set it without rebooting.
Apparently, this specific entry is protected by SIP.
Not sure you're willing to do that, but if you disable SIP, you can then use sysctl -w kern.sysv.shmmni=2048.
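Disabling SIP has to be done from the Recovery OS rather than the running system; a sketch of the usual procedure (note that this weakens the system's security until it is re-enabled):
# Boot into Recovery (hold Cmd-R during startup), open Terminal, then:
csrutil disable
# Reboot into the normal system and retry:
sudo sysctl -w kern.sysv.shmmni=2048
# To restore protection, run csrutil enable from Recovery again.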
I had the same issue setting other kernel params on macOS Sierra.
Once I ran sudo su - root, I was able to execute the command successfully.

Custom Linux kernel build failure in VMware Workstation

While trying to compile, build, and boot a custom kernel inside VMware Workstation, booting the new kernel fails and drops to a shell with the error "failed to find disk by uuid".
I tried this with both Ubuntu and CentOS.
Things I tried that didn't help:
checked the mapping by UUID in the boot entry and its existence in the directory
initramfs-update
replaced root=uuid=<> with /dev/disk/sda3
Is this an issue with VMware Workstation? How can it be rectified?
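As an aside, the Ubuntu tool is update-initramfs rather than initramfs-update, so it is worth re-running the regeneration step under its correct name; a sketch:
sudo update-initramfs -u -k all
On CentOS the equivalent would be dracut --force. Note also that the root device would normally be /dev/sda3, not /dev/disk/sda3.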
I had a similar fault with my own attempts to bootstrap Fedora 22 onto a blank partition using a CentOS install on another partition. I never did solve it completely, but I did find that the problem was in my initrd rather than the kernel.
The problem is that the initrd isn't starting LVM, because dracut didn't tell the initrd that it needs LVM. Therefore, if you start LVM manually, you should be able to boot into your system to fix it.
I believe this is the sequence of commands I ran from the emergency shell to start LVM:
vgscan          # scan all block devices for LVM volume groups
vgchange -ay    # activate every logical volume that was found
lvs             # list logical volumes to confirm they are active
this link helped me remember
Followed by exit to resume normal boot.
You might have to mount your LVM /etc/fstab entries manually; I don't recall whether I did or not.
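If manual mounting is needed, mount -a after the volume group is active should pick up everything listed in /etc/fstab; a sketch from the emergency shell:
mount -a    # mount all fstab entries now that the LVM devices exist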
Try this:
sudo update-grub
Then:
mkinitcpio -p linux
It won't hurt to check your fstab file. There, you should find the UUID of your drive. Make sure you have the proper flags set in the fstab.
Also, there's a setting in grub.cfg that has GRUB use the old style of hexadecimal UUIDs. Check that out as well!
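To cross-check those UUIDs, compare what the device actually reports against what fstab references; a sketch, assuming the root partition is /dev/sda3 as in the question above:
sudo blkid /dev/sda3    # the UUID the device actually carries
grep UUID /etc/fstab    # the UUIDs the system expects at mount time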
The issue is with the creation of the initramfs. After doing a
make oldconfig
and choosing the defaults for the new options, make sure that enough disk space is available for the image to be created.
In my case the created image was not correct, and hence it failed to mount at boot time.
When compared, the new image was considerably smaller than the existing image for the older version, so I added another disk with more than sufficient space, and then
make bzImage
make modules
make modules_install
make install
everything started working like a charm.
I wonder why the image creation completed earlier and produced a corrupt (undersized) image without throwing any error, every single time.
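A quick check to rule out the disk-space failure mode described above before rebuilding:
df -h /boot /    # confirm free space where the kernel and initramfs get written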

AWS Auto Scaling: testing a failed instance

I have an Auto Scaling CloudFormation stack that I think I have set up to replace failed instances based on StatusCheckFailed_Instance. I want to test this. Can I test it by terminating one of the EC2 instances? Thanks!
Instance Status Check might fail for one of the following reasons:
Memory Errors
Out of memory: kill process
ERROR: mmu_update failed (Memory management update failed)
Device Errors
I/O error (Block device failure)
IO ERROR: neither local nor remote disk (Broken distributed block device)
Kernel Errors
request_module: runaway loop modprobe (Looping legacy kernel modprobe on older Linux versions)
"FATAL: kernel too old" and "fsck: No such file or directory while trying to open /dev" (Kernel and AMI mismatch)
"FATAL: Could not load /lib/modules" or "BusyBox" (Missing kernel modules)
ERROR Invalid kernel (EC2 incompatible kernel)
File System Errors
request_module: runaway loop modprobe (Looping legacy kernel modprobe on older Linux versions)
fsck: No such file or directory while trying to open... (File system not found)
General error mounting filesystems (Failed mount)
VFS: Unable to mount root fs on unknown-block (Root filesystem mismatch)
Error: Unable to determine major/minor number of root device... (Root file system/device mismatch)
XENBUS: Device with no driver...
... days without being checked, check forced (File system check required)
fsck died with exit status... (Missing device)
Operating System Errors
GRUB prompt (grubdom>)
Bringing up interface eth0: Device eth0 has different MAC address than expected, ignoring. (Hard-coded MAC address)
Unable to load SELinux Policy. Machine is in enforcing mode. Halting now. (SELinux misconfiguration)
XENBUS: Timeout connecting to devices (Xenbus timeout)
It seems to me that the first category (memory errors) is the easiest to trigger on demand. You can add a web hook, or launch a shell script with a delay, to start some process that will result in an out-of-memory failure, to confirm that your Auto Scaling configuration works as configured.
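As a sketch of that approach: force an out-of-memory condition on a disposable test instance (this assumes the stress-ng package, not mentioned in the question, is installed; do not run it on an instance you care about):
stress-ng --vm 2 --vm-bytes 95% --timeout 600s    # allocate ~95% of RAM until the OOM killer or the status check fires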
Terminating the instance will not help to test your configuration: when you gracefully terminate an instance, it is removed from the pool of available instances and the check will not be performed.
More details on status checks can be found here: Troubleshooting Instances with Failed Status Checks

Installing Membase from source

I am trying to build and install Membase from a source tarball. The steps I followed are:
Un-archive the tarball membase-server_src-1.7.1.1.tar.gz
Issue make (from within the untarred folder)
Once done, enter the directory install/bin and invoke the script membase-server.
This starts up the server with a message:
The maximum number of open files for the membase user is set too low.
It must be at least 10240. Normally this can be increased by adding
the following lines to /etc/security/limits.conf:
I tried updating limits.conf as suggested, but no luck: it continues to pop up the same message and continues booting.
Given that the server has started, I tried accessing memcached over port 11211, but I get a connection refused message. Then I figured out (via netstat) that memcached is listening on 11210 and tried telnetting to port 11210; unfortunately, the connection is closed as soon as I issue the following commands:
stats
set myvar 0 0 5
Note: I am not getting any output from the commands above. (Yes, stats did not show anything, but I still issued set.)
Could somebody help me build and install Membase from source? Also, why is memcached listening on 11210 instead of 11211?
It would be great if somebody could also give me a step-by-step guide that I can follow to build from the Git repository (I have not used autoconf before).
P.S.: I have tried installing from binaries (a Debian package) on the same machines, and I am able to successfully install and telnet. Hence I am not sure why the build from source is not working.
You can increase the number of file descriptors on your machine by using the ulimit command. Try doing (you might need to use sudo as well):
ulimit -n 10240
I personally have this set in my .bashrc so that whenever I start my terminal it is always set for me.
Also, memcached listens on port 11210 by default for Membase. This is because moxi, the memcached proxy server, listens on port 11211. I'm also pretty sure that the memcached version used for Membase only speaks the binary protocol, so you won't be able to telnet to 11210 and have commands work correctly. Telnetting to 11211 (moxi) should work, though.
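A quick way to verify that the text protocol works through moxi, assuming nc (netcat) is available and the server is local:
printf 'version\r\nquit\r\n' | nc localhost 11211    # moxi should answer with a VERSION line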
