How does the Linux kernel's lockdep record lock-class dependencies? - debugging

I got the warning below, which reports an AB-BA deadlock, but I don't understand how the 'existing dependency' was recorded.
It looks as if kvm_read_guest() held (&mm->mmap_sem), then reading userspace memory that was not yet resident triggered page_fault(), and then i915_gem_fault() tried to take the lock class (&dev->struct_mutex), which has only one instance.
But that sequence cannot actually have happened; otherwise a double lock would already have occurred, since intel_vgpu_create_workload() already holds (&dev->struct_mutex):
(&dev->struct_mutex) -> (&mm->mmap_sem) -> (&dev->struct_mutex)
So how could lockdep find such an 'existing dependency'?
My other question: '#0 (&mm->mmap_sem)' is reported as taken by __might_fault()->might_lock_read(), but might_lock_read() only acquires the lock temporarily. Why, then, is there a lock sequence "(&mm->mmap_sem) -> (&dev->struct_mutex)"?
[ 113.934800] ======================================================
[ 113.940963] WARNING: possible circular locking dependency detected
[ 113.947131] 4.16.0-rc5+ #44 Tainted: G U
[ 113.952257] ------------------------------------------------------
[ 113.958424] qemu-system-x86/4298 is trying to acquire lock:
[ 113.963983] (&mm->mmap_sem){++++}, at: [<0000000052776050>] __might_fault+0x36/0x80
[ 113.971716]
but task is already holding lock:
[ 113.977542] (&dev->struct_mutex){+.+.}, at: [<000000002a25c66b>] intel_vgpu_create_workload+0x3d1/0x550 [i915]
[ 113.987643]
which lock already depends on the new lock.
[ 113.995801]
the existing dependency chain (in reverse order) is:
[ 114.003268]
-> #1 (&dev->struct_mutex){+.+.}:
[ 114.009113] i915_mutex_lock_interruptible+0x66/0x170 [i915]
[ 114.015301] i915_gem_fault+0x1d0/0x620 [i915]
[ 114.020273] __do_fault+0x19/0xed
[ 114.024108] __handle_mm_fault+0x9fa/0x1140
[ 114.028806] handle_mm_fault+0x1a7/0x390
[ 114.033240] __do_page_fault+0x286/0x530
[ 114.037677] page_fault+0x45/0x50
[ 114.041503]
-> #0 (&mm->mmap_sem){++++}:
[ 114.046901] __might_fault+0x60/0x80
[ 114.051008] __kvm_read_guest_page+0x3d/0x80 [kvm]
[ 114.056319] kvm_read_guest+0x47/0x80 [kvm]
[ 114.061024] kvmgt_rw_gpa+0x9d/0x110 [kvmgt]
[ 114.065833] copy_gma_to_hva+0x71/0x100 [i915]
[ 114.070818] intel_gvt_scan_and_shadow_ringbuffer+0xc4/0x210 [i915]
[ 114.077617] intel_gvt_scan_and_shadow_workload+0xc7/0x480 [i915]
[ 114.084240] intel_vgpu_create_workload+0x3d9/0x550 [i915]
[ 114.090258] intel_vgpu_submit_execlist+0xc0/0x2a0 [i915]
[ 114.096190] elsp_mmio_write+0xcb/0x140 [i915]
[ 114.101170] intel_vgpu_mmio_reg_rw+0x250/0x4f0 [i915]
[ 114.106845] intel_vgpu_emulate_mmio_write+0xaa/0x240 [i915]
[ 114.113014] intel_vgpu_rw+0x200/0x250 [kvmgt]
[ 114.117972] intel_vgpu_write+0x164/0x1f0 [kvmgt]
[ 114.123188] __vfs_write+0x33/0x170
[ 114.127192] vfs_write+0xc5/0x1c0
[ 114.131022] SyS_pwrite64+0x90/0xb0
[ 114.135025] do_syscall_64+0x70/0x1c0
[ 114.139203] entry_SYSCALL_64_after_hwframe+0x42/0xb7
[ 114.144765]
other info that might help us debug this:
[ 114.152754] Possible unsafe locking scenario:
[ 114.158664]        CPU0                    CPU1
[ 114.163185]        ----                    ----
[ 114.167714]   lock(&dev->struct_mutex);
[ 114.171541]                                lock(&mm->mmap_sem);
[ 114.177449]                                lock(&dev->struct_mutex);
[ 114.183790]   lock(&mm->mmap_sem);
[ 114.187186]
*** DEADLOCK ***
[ 114.193095] 3 locks held by qemu-system-x86/4298:
[ 114.197789] #0: (&gvt->lock){+.+.}, at: [<0000000087b40252>] intel_vgpu_emulate_mmio_write+0x64/0x240 [i915]
[ 114.207792] #1: (&dev->struct_mutex){+.+.}, at: [<000000002a25c66b>] intel_vgpu_create_workload+0x3d1/0x550 [i915]
[ 114.218314] #2: (&kvm->srcu){....}, at: [<00000000e49701e8>] kvmgt_rw_gpa+0x4c/0x110 [kvmgt]
[ 114.226913]
stack backtrace:
[ 114.231263] CPU: 0 PID: 4298 Comm: qemu-system-x86 Tainted: G U 4.16.0-rc5+ #44
[ 114.239771] Hardware name: Dell Inc. OptiPlex 7040/0Y7WYT, BIOS 1.2.8 01/26/2016
[ 114.247155] Call Trace:
[ 114.249600] dump_stack+0x7c/0xbe
[ 114.252909] print_circular_bug.isra.33+0x21b/0x228
[ 114.257781] __lock_acquire+0xf7d/0x1470
[ 114.261698] ? lock_acquire+0xec/0x1e0
[ 114.265440] lock_acquire+0xec/0x1e0
[ 114.269008] ? __might_fault+0x36/0x80
[ 114.272749] __might_fault+0x60/0x80
[ 114.276317] ? __might_fault+0x36/0x80
[ 114.280066] __kvm_read_guest_page+0x3d/0x80 [kvm]
[ 114.284856] kvm_read_guest+0x47/0x80 [kvm]
[ 114.289042] kvmgt_rw_gpa+0x9d/0x110 [kvmgt]
[ 114.293333] copy_gma_to_hva+0x71/0x100 [i915]
[ 114.297797] intel_gvt_scan_and_shadow_ringbuffer+0xc4/0x210 [i915]
[ 114.304091] intel_gvt_scan_and_shadow_workload+0xc7/0x480 [i915]
[ 114.310206] intel_vgpu_create_workload+0x3d9/0x550 [i915]
[ 114.315709] intel_vgpu_submit_execlist+0xc0/0x2a0 [i915]
[ 114.321140] elsp_mmio_write+0xcb/0x140 [i915]
[ 114.325596] intel_vgpu_mmio_reg_rw+0x250/0x4f0 [i915]
[ 114.330746] intel_vgpu_emulate_mmio_write+0xaa/0x240 [i915]
[ 114.336396] intel_vgpu_rw+0x200/0x250 [kvmgt]
[ 114.340833] intel_vgpu_write+0x164/0x1f0 [kvmgt]
[ 114.345533] __vfs_write+0x33/0x170
[ 114.349016] ? common_file_perm+0x68/0x250
[ 114.353104] ? security_file_permission+0x36/0xb0
[ 114.357803] vfs_write+0xc5/0x1c0
[ 114.361113] SyS_pwrite64+0x90/0xb0
[ 114.364597] ? trace_hardirqs_off_thunk+0x1a/0x1c
[ 114.369293] do_syscall_64+0x70/0x1c0
[ 114.372950] entry_SYSCALL_64_after_hwframe+0x42/0xb7
[ 114.377994] RIP: 0033:0x7f0ca8a29da3
[ 114.381562] RSP: 002b:00007f0c98dc96c0 EFLAGS: 00000293 ORIG_RAX: 0000000000000012
[ 114.389121] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f0ca8a29da3
[ 114.396240] RDX: 0000000000000004 RSI: 00007f0c98dc96f0 RDI: 0000000000000019
[ 114.403362] RBP: 00007f0c98dc9710 R08: 0000000000000004 R09: 00000000ffffffff
[ 114.410484] R10: 0000000000002230 R11: 0000000000000293 R12: 0000000000000000
[ 114.417605] R13: 00007ffdd628d02f R14: 00007f0c98dca9c0 R15: 0000000000000000
In function intel_vgpu_create_workload():
    mutex_lock(&dev_priv->drm.struct_mutex);
    ret = intel_gvt_scan_and_shadow_workload(workload);
    mutex_unlock(&dev_priv->drm.struct_mutex);
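For reference, this is roughly what the __might_fault() path does (abridged from what I can see in the 4.16 sources; details differ between kernel versions), which seems to explain the 'temporary' acquire. In the trace above it is reached from __kvm_read_guest_page() before it touches guest memory in userspace:

/* mm/memory.c (abridged): called before a user-memory access that may fault. */
void __might_fault(const char *file, int line)
{
	if (uaccess_kernel() || pagefault_disabled())
		return;
	__might_sleep(file, line, 0);
#if defined(CONFIG_DEBUG_ATOMIC_SLEEP)
	if (current->mm)
		might_lock_read(&current->mm->mmap_sem);
#endif
}

/* include/linux/lockdep.h: a fake acquire/release pair.  Nothing ever
 * blocks here, but lock_acquire() makes lockdep add an edge from every
 * lock currently held by this task (gvt->lock, struct_mutex, kvm->srcu)
 * to the mmap_sem class in its global dependency graph, and then check
 * that graph for cycles.
 */
#define might_lock_read(lock)						\
do {									\
	typecheck(struct lockdep_map *, &(lock)->dep_map);		\
	lock_acquire(&(lock)->dep_map, 0, 0, 1, 1, NULL, _THIS_IP_);	\
	lock_release(&(lock)->dep_map, 0, _THIS_IP_);			\
} while (0)

If I read this right, the ordering is recorded per lock class at lock_acquire() time, whether or not anyone ever blocks, and it lives in a global graph shared by all tasks. So the #1 chain (mmap_sem held, then struct_mutex taken inside i915_gem_fault()) was presumably recorded earlier by a real page fault in some task that was not holding struct_mutex, and lockdep only had to combine that remembered edge with the new struct_mutex -> mmap_sem edge from this call chain to report the cycle. Please correct me if this reading is wrong.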

Related

Can I create a SpiDev device with spi-gpio?

I'm trying to set up a spidev device with the spi-gpio driver, but I can't make it work. I created a spi-gpio node in the device tree as follows:
spi4 {
        compatible = "spi-gpio";
        pinctrl-names = "default";
        //pinctrl-0 = <&pinctrl_ecspi1_1>;
        #address-cells = <0x1>;
        ranges;
        gpio-sck = <&gpio4 6 0>;
        gpio-mosi = <&gpio3 18 0>;
        gpio-miso = <&gpio3 17 0>;
        num-chipselects = <1>;
        status = "ok";

        spidev#0x00 {
                compatible = "spidev";
                spi-max-frequency = <20000000>;
                reg = <0>;
        };
};
But when I try it on my i.MX6Q and run modprobe spi-gpio, I get the following error:
[ 29.830946] WARNING: at drivers/gpio/gpiolib.c:126 gpio_to_desc+0x44/0x4c()
[ 29.830954] invalid GPIO -2
[ 29.830959] Modules linked in: spi_gpio(+) usb_f_acm u_serial g_serial libcomposite xxeng [last unloaded: at25]
[ 29.831005] CPU: 3 PID: 908 Comm: modprobe Not tainted 3.10.17-p16163-Q7+gd2d3122 #1
[ 29.831014] Backtrace:
[ 29.831060] [<800128a4>] (dump_backtrace+0x0/0x110) from [<80012abc>] (show_stack+0x18/0x1c)
[ 29.831070] r6:80267514 r5:00000009 r4:d88dfbd0 r3:00000000
[ 29.831115] [<80012aa4>] (show_stack+0x0/0x1c) from [<80552a40>] (dump_stack+0x24/0x28)
[ 29.831146] [<80552a1c>] (dump_stack+0x0/0x28) from [<80026c80>] (warn_slowpath_common+0x5c/0x74)
[ 29.831166] [<80026c24>] (warn_slowpath_common+0x0/0x74) from [<80026cd0>] (warn_slowpath_fmt+0x38/0x40)
[ 29.831171] r8:00000000 r7:00000000 r6:fffffffe r5:d8436e28 r4:d871f140
r3:00000009
[ 29.831203] [<80026c98>] (warn_slowpath_fmt+0x0/0x40) from [<80267514>] (gpio_to_desc+0x44/0x4c)
[ 29.831210] r3:fffffffe r2:806858b8
[ 29.831234] [<802674d0>] (gpio_to_desc+0x0/0x4c) from [<8026886c>] (gpio_request+0x14/0x20)
[ 29.831270] [<80268858>] (gpio_request+0x0/0x20) from [<7f12f750>] (spi_gpio_setup+0x9c/0xe0 [spi_gpio])
[ 29.831282] r4:d865b000 r3:00000000
[ 29.831325] [<7f12f6b4>] (spi_gpio_setup+0x0/0xe0 [spi_gpio]) from [<80357910>] (spi_setup+0x78/0x178)
[ 29.831334] r7:00000000 r6:d819fa10 r5:d8436c00 r4:d865b000
[ 29.831369] [<80357898>] (spi_setup+0x0/0x178) from [<80357bc4>] (spi_add_device+0xbc/0x17c)
[ 29.831382] r9:d819fa10 r8:00000000 r7:00000000 r6:d819fa10 r5:d8436c00
r4:d865b000
[ 29.831432] [<80357b08>] (spi_add_device+0x0/0x17c) from [<80359178>] (spi_register_master+0x620/0x760)
[ 29.831442] r7:d865b174 r6:d865b000 r5:81e1f754 r4:d8436c00
[ 29.831478] [<80358b58>] (spi_register_master+0x0/0x760) from [<8035aa44>] (spi_bitbang_start+0xa4/0x110)
[ 29.831502] [<8035a9a0>] (spi_bitbang_start+0x0/0x110) from [<7f12fa5c>] (spi_gpio_probe+0x2c8/0x45c [spi_gpio])
[ 29.831511] r4:d8764910 r3:8035a2cc
[ 29.831553] [<7f12f794>] (spi_gpio_probe+0x0/0x45c [spi_gpio]) from [<802d84b8>] (platform_drv_probe+0x20/0x24)
[ 29.831577] [<802d8498>] (platform_drv_probe+0x0/0x24) from [<802d6ea4>] (driver_probe_device+0x140/0x384)
[ 29.831596] [<802d6d64>] (driver_probe_device+0x0/0x384) from [<802d71c8>] (__driver_attach+0x94/0x98)
[ 29.831608] r8:00000000 r7:00000000 r6:d819fa44 r5:7f12fecc r4:d819fa10
[ 29.831646] [<802d7134>] (__driver_attach+0x0/0x98) from [<802d4f34>] (bus_for_each_dev+0x68/0x9c)
[ 29.831659] r6:802d7134 r5:7f12fecc r4:00000000 r3:00000000
[ 29.831696] [<802d4ecc>] (bus_for_each_dev+0x0/0x9c) from [<802d6884>] (driver_attach+0x24/0x28)
[ 29.831709] r6:80763840 r5:d84b1c80 r4:7f12fecc
[ 29.831741] [<802d6860>] (driver_attach+0x0/0x28) from [<802d6458>] (bus_add_driver+0x1d8/0x27c)
[ 29.831753] [<802d6280>] (bus_add_driver+0x0/0x27c) from [<802d7838>] (driver_register+0x80/0x148)
[ 29.831764] r7:d88de000 r6:807870c0 r5:7f12ff18 r4:7f12fecc
[ 29.831795] [<802d77b8>] (driver_register+0x0/0x148) from [<802d8b70>] (platform_driver_register+0x58/0x60)
[ 29.831807] r8:00000000 r7:d88de000 r6:807870c0 r5:7f12ff18 r4:d88dff48
r3:00000000
[ 29.831842] [<802d8b18>] (platform_driver_register+0x0/0x60) from [<7f132014>] (spi_gpio_driver_init+0x14/0x1c [spi_gpio])
[ 29.831857] [<7f132000>] (spi_gpio_driver_init+0x0/0x1c [spi_gpio]) from [<8000867c>] (do_one_initcall+0xe0/0x18c)
[ 29.831885] [<8000859c>] (do_one_initcall+0x0/0x18c) from [<8006fcb0>] (load_module+0x1be8/0x223c)
[ 29.831911] [<8006e0c8>] (load_module+0x0/0x223c) from [<800704a4>] (SyS_finit_module+0x88/0xb8)
[ 29.831938] [<8007041c>] (SyS_finit_module+0x0/0xb8) from [<8000f080>] (ret_fast_syscall+0x0/0x30)
[ 29.831947] r6:00000000 r5:00000000 r4:00000000
[ 29.831966] ---[ end trace 689f08d95c191798 ]---
[ 29.831978] gpiod_request: invalid GPIO
[ 29.832003] spi_gpio spi4.22: can't setup spi32762.0, status -22
[ 29.838036] spi_master spi32762: spi_device register error /spi4/spidev#0x00
[ 29.846522] ------------[ cut here ]------------
[ 29.846559] WARNING: at drivers/gpio/gpiolib.c:126 gpio_to_desc+0x44/0x4c()
[ 29.846567] invalid GPIO -2
[ 29.846572] Modules linked in: spi_gpio(+) usb_f_acm u_serial g_serial libcomposite xxeng [last unloaded: at25]
[ 29.846603] CPU: 3 PID: 908 Comm: modprobe Tainted: G W 3.10.17-p16163-Q7+gd2d3122 #1
[ 29.846610] Backtrace:
[ 29.846634] [<800128a4>] (dump_backtrace+0x0/0x110) from [<80012abc>] (show_stack+0x18/0x1c)
[ 29.846640] r6:80267514 r5:00000009 r4:d88dfbb0 r3:00000000
[ 29.846663] [<80012aa4>] (show_stack+0x0/0x1c) from [<80552a40>] (dump_stack+0x24/0x28)
[ 29.846679] [<80552a1c>] (dump_stack+0x0/0x28) from [<80026c80>] (warn_slowpath_common+0x5c/0x74)
[ 29.846689] [<80026c24>] (warn_slowpath_common+0x0/0x74) from [<80026cd0>] (warn_slowpath_fmt+0x38/0x40)
[ 29.846694] r8:d865b008 r7:d871f140 r6:00000000 r5:d865b000 r4:d865b000
r3:00000009
[ 29.846716] [<80026c98>] (warn_slowpath_fmt+0x0/0x40) from [<80267514>] (gpio_to_desc+0x44/0x4c)
[ 29.846720] r3:fffffffe r2:806858b8
[ 29.846737] [<802674d0>] (gpio_to_desc+0x0/0x4c) from [<80269be8>] (gpio_free+0x10/0x18)
[ 29.846756] [<80269bd8>] (gpio_free+0x0/0x18) from [<7f12f6a8>] (spi_gpio_cleanup+0x30/0x3c [spi_gpio])
[ 29.846776] [<7f12f678>] (spi_gpio_cleanup+0x0/0x3c [spi_gpio]) from [<80357ae8>] (spidev_release+0x24/0x44)
[ 29.846781] r4:d865b000 r3:d8436c00
[ 29.846807] [<80357ac4>] (spidev_release+0x0/0x44) from [<802d2a8c>] (device_release+0x34/0x98)
[ 29.846812] r4:d865b008 r3:80357ac4
[ 29.846831] [<802d2a58>] (device_release+0x0/0x98) from [<80247938>] (kobject_release+0x9c/0x1b4)
[ 29.846836] r6:80763588 r5:d865b024 r4:8077dae0 r3:802d2a58
[ 29.846860] [<8024789c>] (kobject_release+0x0/0x1b4) from [<80247aa0>] (kobject_put+0x50/0x7c)
[ 29.846867] r8:00000000 r7:d865b174 r6:d865b000 r5:81e1f754 r4:d865b008
[ 29.846889] [<80247a50>] (kobject_put+0x0/0x7c) from [<802d2ddc>] (put_device+0x1c/0x20)
[ 29.846894] r4:d8436c00
[ 29.846914] [<802d2dc0>] (put_device+0x0/0x20) from [<8035919c>] (spi_register_master+0x644/0x760)
[ 29.846938] [<80358b58>] (spi_register_master+0x0/0x760) from [<8035aa44>] (spi_bitbang_start+0xa4/0x110)
[ 29.846959] [<8035a9a0>] (spi_bitbang_start+0x0/0x110) from [<7f12fa5c>] (spi_gpio_probe+0x2c8/0x45c [spi_gpio])
[ 29.846971] r4:d8764910 r3:8035a2cc
[ 29.847002] [<7f12f794>] (spi_gpio_probe+0x0/0x45c [spi_gpio]) from [<802d84b8>] (platform_drv_probe+0x20/0x24)
[ 29.847024] [<802d8498>] (platform_drv_probe+0x0/0x24) from [<802d6ea4>] (driver_probe_device+0x140/0x384)
[ 29.847043] [<802d6d64>] (driver_probe_device+0x0/0x384) from [<802d71c8>] (__driver_attach+0x94/0x98)
[ 29.847051] r8:00000000 r7:00000000 r6:d819fa44 r5:7f12fecc r4:d819fa10
[ 29.847082] [<802d7134>] (__driver_attach+0x0/0x98) from [<802d4f34>] (bus_for_each_dev+0x68/0x9c)
[ 29.847092] r6:802d7134 r5:7f12fecc r4:00000000 r3:00000000
[ 29.847125] [<802d4ecc>] (bus_for_each_dev+0x0/0x9c) from [<802d6884>] (driver_attach+0x24/0x28)
[ 29.847135] r6:80763840 r5:d84b1c80 r4:7f12fecc
[ 29.847152] [<802d6860>] (driver_attach+0x0/0x28) from [<802d6458>] (bus_add_driver+0x1d8/0x27c)
[ 29.847164] [<802d6280>] (bus_add_driver+0x0/0x27c) from [<802d7838>] (driver_register+0x80/0x148)
[ 29.847171] r7:d88de000 r6:807870c0 r5:7f12ff18 r4:7f12fecc
[ 29.847193] [<802d77b8>] (driver_register+0x0/0x148) from [<802d8b70>] (platform_driver_register+0x58/0x60)
[ 29.847198] r8:00000000 r7:d88de000 r6:807870c0 r5:7f12ff18 r4:d88dff48
r3:00000000
[ 29.847239] [<802d8b18>] (platform_driver_register+0x0/0x60) from [<7f132014>] (spi_gpio_driver_init+0x14/0x1c [spi_gpio])
[ 29.847260] [<7f132000>] (spi_gpio_driver_init+0x0/0x1c [spi_gpio]) from [<8000867c>] (do_one_initcall+0xe0/0x18c)
[ 29.847281] [<8000859c>] (do_one_initcall+0x0/0x18c) from [<8006fcb0>] (load_module+0x1be8/0x223c)
[ 29.847294] [<8006e0c8>] (load_module+0x0/0x223c) from [<800704a4>] (SyS_finit_module+0x88/0xb8)
[ 29.847311] [<8007041c>] (SyS_finit_module+0x0/0xb8) from [<8000f080>] (ret_fast_syscall+0x0/0x30)
[ 29.847318] r6:00000000 r5:00000000 r4:00000000
[ 29.847334] ---[ end trace 689f08d95c191799 ]---
I understand there is an invalid setting somewhere, so my question is: can I use spidev inside a spi-gpio node? If not, what would be the closest driver that provides a user-space interface for doing this?
Or, if it's just a regular configuration error, how do I debug it?
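One detail from the log: 'invalid GPIO -2' is -ENOENT, and it comes out of the chip-select setup (spi_gpio_setup -> gpio_request in the backtrace), while the node above declares num-chipselects = <1> but no cs-gpios property. I'm not certain that is the only problem, but a variant along these lines might at least get past the gpio_request() failure. The chip-select pin below is just a placeholder, and I have also renamed the child node to the conventional spidev@0 form and added #size-cells:

spi4 {
        compatible = "spi-gpio";
        pinctrl-names = "default";
        //pinctrl-0 = <&pinctrl_ecspi1_1>;
        #address-cells = <1>;
        #size-cells = <0>;
        ranges;

        gpio-sck = <&gpio4 6 0>;
        gpio-mosi = <&gpio3 18 0>;
        gpio-miso = <&gpio3 17 0>;
        cs-gpios = <&gpio4 9 0>;        /* placeholder: pick a free GPIO wired as CS */
        num-chipselects = <1>;
        status = "okay";

        spidev@0 {
                compatible = "spidev";
                spi-max-frequency = <20000000>;
                reg = <0>;
        };
};

As for the broader question: a spidev child node under a spi-gpio master is a normal way to get a user-space /dev/spidevX.Y device, so the approach itself should be fine once the GPIO setup resolves.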

Visual Studio File Nesting for gRPC proto files

I have a load of .proto files that each generate an associated .cs file, and I also have a partial .cs file, all with the same name:
message.proto
message.cs (autogenerated in the obj\debug directory)
message.cs (in the same folder as the .proto file)
I am trying to get them to nest in Visual Studio 2019, but I can't. Pointers on what I am doing wrong would be much appreciated. This is my file nesting configuration:
{
"help": "https://go.microsoft.com/fwlink/?linkid=866610",
"root": true,
"dependentFileProviders": {
"add": {
"addedExtension": {},
"pathSegment": {
"add": {
".*": [
".js",
".css",
".html",
".htm",
".less",
".scss",
".coffee",
".iced",
".config",
".cs",
".vb",
".json",
".proto"
]
}
},
"extensionToExtension": {
"add": {
".proto": [
".cs"
],
".js": [
".coffee",
".iced",
".ts",
".tsx",
".jsx"
],
".css": [
".less",
".scss",
".sass",
".styl"
],
".html": [
".md",
".mdown",
".markdown",
".mdwn"
],
".map": [
".js",
".css"
],
".svgz": [
".svg"
],
".designer.cs": [
".resx"
],
".cs.d.ts": [
".cs"
]
}
},
"fileToFile": {
"add": {
".bowerrc": [
"bower.json"
],
".npmrc": [
"package.json"
],
"npm-shrinkwrap.json": [
"package.json"
],
"yarn.lock": [
"package.json"
],
".yarnclean": [
"package.json"
],
".yarnignore": [
"package.json"
],
".yarn-integrity": [
"package.json"
],
".yarnrc": [
"package.json"
],
"package-lock.json": [
"package.json"
]
}
},
"fileSuffixToExtension": {
"add": {
"-vsdoc.js": [
".js"
]
}
},
"allExtensions": {
"add": {
".*": [
".tt"
]
}
}
}
}
}

Group facts in Prolog by 3 variables

I'm having trouble grouping some facts in Prolog by 3 different properties.
This is my knowledge base (basically a graph):
% entity(Label, Id)
% relationship(Type, Subject, Object)
entity('Person', id_0).
entity('Place', 1468).
relationship('wasIn', id_0, 1468).
entity('Place', 1367).
relationship('wasIn', 1468, 1367).
entity('Person', 1466).
relationship('wasIn', 1466, 1468).
entity('Place', 1478).
relationship('aliasOf', 1478, 1468).
entity('Place', 1052).
relationship('wasIn', id_0, 1052).
entity('Place', 1184).
relationship('wasIn', 1052, 1184).
entity('Person', 1048).
relationship('wasIn', 1048, 1052).
entity('Place', 1069).
relationship('wasIn', id_0, 1069).
entity('Place', 1070).
relationship('wasIn', 1069, 1070).
entity('Person', 1068).
relationship('wasIn', 1068, 1069).
I want to group the relationships by subject id, relationship type, and the object's entity label, so as to get something like:
[
[
[id_0, wasIn, Place],
% because entities 1468, 1052, 1069 are Places
[ relationship(wasIn, id_0, 1468),
relationship(wasIn, id_0, 1052),
relationship(wasIn, id_0, 1069)]
],
[
[id_0, wasIn, Some Other Subject Label],
[relationship(wasIn, id_0, ...),
...]
],
[
[1468, wasIn, Place],
[relationship(wasIn, 1468, ...),
...]
],
...
]
and so forth.
So far I have managed to group by subject and type only, and sadly I'm getting duplicates (which I want to avoid). Nothing else I tried worked, which is why I'm asking here.
These are my current rules:
group_relationships_by_node([[Subject, Type] | [R]]) :-
    entity(_, Subject),
    relationship(Type, Subject, _),
    findall(relationship(Type, Subject, Object), relationship(Type, Subject, Object), R).

group_by_relationships(Result) :-
    findall(X, group_relationships_by_node(X), Result).
This is my current result:
[
[
[id_0, wasIn],
[ relationship(wasIn, id_0, 1468),
relationship(wasIn, id_0, 1052),
relationship(wasIn, id_0, 1069) ]
],
% duplicate
[
[id_0, wasIn],
[ relationship(wasIn, id_0, 1468),
relationship(wasIn, id_0, 1052),
relationship(wasIn, id_0, 1069) ]
],
% duplicate
[
[id_0, wasIn],
[ relationship(wasIn, id_0, 1468),
relationship(wasIn, id_0, 1052),
relationship(wasIn, id_0, 1069) ]
],
[
[ 1468, wasIn ],
[ relationship(wasIn, 1468, 1367) ]
],
[
[ 1466, wasIn ],
[ relationship(wasIn, 1466, 1468) ]
],
[
[ 1478, aliasOf ],
[ relationship(aliasOf, 1478, 1468) ]
],
[
[ 1052, wasIn ],
[ relationship(wasIn, 1052, 1184) ]
],
[
[ 1048, wasIn ],
[ relationship(wasIn, 1048, 1052) ]
],
[
[ 1069, wasIn ],
[ relationship(wasIn, 1069, 1070) ]
],
[
[ 1068, wasIn ],
[ relationship(wasIn, 1068, 1069) ]
]
]
Alas, I don't know Prolog very well myself, so I hope you can suggest a better solution. Thank you very much.
So, this is my attempt. It does work, but please let me know if a better solution exists (it relies on a dedup rule that removes duplicates from a list):
get_subject_ids(Subjects) :-
    findall(Subject, entity(_, Subject), Subjects).

get_relationship_by_type_subject_objectLabel(Type, Subject, ObjectLabel, Result) :-
    entity(ObjectLabel, Object),
    relationship(Type, Subject, Object),
    Result = relationship(Type, Subject, Object).

iterate_labels(_, _, [], []).
iterate_labels(Subject, Type, [ObjectLabel | Rest], [[ObjectLabel, Result] | ResultRest]) :-
    findall(Rel, get_relationship_by_type_subject_objectLabel(Type, Subject, ObjectLabel, Rel), Result),
    iterate_labels(Subject, Type, Rest, ResultRest).

iterate_objects([], []).
iterate_objects([Object | Rest], [ObjectLabel | ResultRest]) :-
    entity(ObjectLabel, Object),
    iterate_objects(Rest, ResultRest).

iterate_types(_, [], []).
iterate_types(Subject, [Type | Rest], [[Type, GroupedRels] | ResultRest]) :-
    % find all relationships with type Type starting from Subject
    % findall(relationship(Type, Subject, Object), relationship(Type, Subject, Object), Rels),
    findall(Object, relationship(Type, Subject, Object), Objects),
    iterate_objects(Objects, MObjects),
    dedup(MObjects, ObjectLabels),
    iterate_labels(Subject, Type, ObjectLabels, GroupedRels),
    iterate_types(Subject, Rest, ResultRest).

iterate_subjects([], []).
iterate_subjects([Subject | Rest], [[Subject, Result] | ResultRest]) :-
    findall(Type, relationship(Type, Subject, _), DTypes),
    dedup(DTypes, Types),
    iterate_types(Subject, Types, Result),
    iterate_subjects(Rest, ResultRest).

main :-
    get_subject_ids(Subjects),
    iterate_subjects(Subjects, Result),
    writeln(Result).
The result is something like:
[
[
id_0,
[
[
wasIn,
[
[
Place,
[
relationship(wasIn,
id_0,
1468),
relationship(wasIn,
id_0,
1052)
]
],
[
Placez,
[
relationship(wasIn,
id_0,
1069)
]
]
]
]
]
],
[
1468,
[
[
wasIn,
[
[
Place,
[
relationship(wasIn,
1468,
1367)
]
]
]
]
]
],
[
1367,
[
]
],
[
1466,
[
[
wasIn,
[
[
Place,
[
relationship(wasIn,
1466,
1468)
]
]
]
]
]
],
[
1478,
[
[
aliasOf,
[
[
Place,
[
relationship(aliasOf,
1478,
1468)
]
]
]
]
]
],
[
1052,
[
[
wasIn,
[
[
Place,
[
relationship(wasIn,
1052,
1184)
]
]
]
]
]
],
[
1184,
[
]
],
[
1048,
[
[
wasIn,
[
[
Place,
[
relationship(wasIn,
1048,
1052)
]
]
]
]
]
],
[
1069,
[
[
wasIn,
[
[
Place,
[
relationship(wasIn,
1069,
1070)
]
]
]
]
]
],
[
1070,
[
]
],
[
1068,
[
[
wasIn,
[
[
Placez,
[
relationship(wasIn,
1068,
1069)
]
]
]
]
]
]
]
which is fine.
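For comparison, here is a more compact sketch that leans on setof/3, which sorts its result and removes duplicates, so no separate dedup rule is needed. The predicate names are mine, and the result is a flat list of [Subject, Type, Label]-Rels pairs rather than the nested lists above (entities with no outgoing relationships simply don't appear), but it groups by the same three properties:

% A sketch assuming the entity/2 and relationship/3 facts above.
% setof/3 backtracks over each distinct (Subject, Type, Label) combination
% and returns the matching objects sorted and de-duplicated.
:- use_module(library(lists)).   % member/2 (autoloaded in SWI-Prolog)

subject_type_label_group(Subject, Type, Label, Rels) :-
    setof(Object,
          ( relationship(Type, Subject, Object),
            entity(Label, Object) ),
          Objects),
    findall(relationship(Type, Subject, O), member(O, Objects), Rels).

group_by_subject_type_label(Groups) :-
    findall([Subject, Type, Label]-Rels,
            subject_type_label_group(Subject, Type, Label, Rels),
            Groups).

With the facts from the question, ?- group_by_subject_type_label(Groups). should produce one pair per (subject, type, label) combination, e.g. [id_0, wasIn, 'Place']-[relationship(wasIn, id_0, 1052), relationship(wasIn, id_0, 1069), relationship(wasIn, id_0, 1468)].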

Unable to resolve 271.0: missing requirement [271.0] package; (&(package =org.apache.cxf.jaxrs.client)(version>=2.7.0)(!(version>=3.0.0)))

I am trying to install my bundle and I get the following error:
org.osgi.framework.BundleException: Unresolved constraint in bundle horizon-util [271]: Unable to resolve 271.0: missing requirement [271.0] package; (&(package
=org.apache.cxf.jaxrs.client)(version>=2.7.0)(!(version>=3.0.0)))
Bundle ID: 271
These are my bundles:
karaf#root> osgi:list
START LEVEL 100 , List Threshold: 50
ID State Blueprint Spring Level Name
[ 45] [Active ] [ ] [ ] [ 50] geronimo-j2ee-management_1.1_spec (1.0.1)
[ 46] [Active ] [ ] [ ] [ 50] Commons Collections (3.2.1)
[ 47] [Active ] [ ] [ ] [ 50] Apache ServiceMix :: Bundles :: jasypt (1.9.0.1)
[ 48] [Active ] [ ] [ ] [ 50] geronimo-jms_1.1_spec (1.1.1)
[ 49] [Active ] [ ] [ ] [ 50] Commons Pool (1.6.0)
[ 50] [Active ] [ ] [ ] [ 50] Apache ServiceMix :: Bundles :: xpp3 (1.1.0.4c_5)
[ 51] [Active ] [ ] [ ] [ 50] Apache ServiceMix Bundles: dom4j-1.6.1 (1.6.1.2)
[ 52] [Active ] [ ] [ ] [ 50] Commons Lang (2.6)
[ 53] [Active ] [ ] [ ] [ 50] Apache ServiceMix Bundles: oro-2.0.8 (2.0.8.3)
[ 54] [Active ] [ ] [ ] [ 50] Apache ServiceMix :: Specs :: Stax API 1.0 (1.9.0)
[ 55] [Active ] [ ] [ ] [ 50] Apache ServiceMix Bundles: xstream-1.3 (1.3.0.3)
[ 56] [Active ] [ ] [ ] [ 50] Apache ServiceMix :: Bundles :: jdom (1.1.0.4)
[ 57] [Active ] [ ] [ ] [ 50] Apache ServiceMix :: Bundles :: velocity (1.7.0.4)
[ 58] [Active ] [ ] [ ] [ 50] Apache Aries Transaction Manager (0.3.0)
[ 59] [Active ] [ ] [ ] [ 50] kahadb (5.7.0)
[ 60] [Active ] [ ] [ ] [ 50] activemq-pool (5.7.0)
[ 61] [Active ] [ ] [ ] [ 50] activemq-console (5.7.0)
[ 62] [Active ] [ ] [ ] [ 50] activemq-ra (5.7.0)
[ 63] [Active ] [Created ] [ ] [ 50] activemq-core (5.7.0)
Fragments: 68
[ 64] [Active ] [Created ] [ ] [ 50] activemq-karaf (5.7.0)
[ 65] [Active ] [Created ] [ ] [ 50] Apache XBean :: OSGI Blueprint Namespace Handler (3.11.1)
[ 66] [Active ] [ ] [ ] [ 50] Commons JEXL (2.0.1)
[ 67] [Active ] [ ] [ ] [ 50] Apache ServiceMix :: Specs :: Scripting API 1.0 (1.9.0)
[ 68] [Resolved ] [ ] [ ] [ 50] activemq-blueprint (5.7.0)
Hosts: 63
[ 69] [Active ] [Created ] [ ] [ 50] activemq-broker.xml (0.0.0)
[ 83] [Active ] [ ] [ ] [ 50] Joda-Time (1.6.2)
[ 84] [Active ] [ ] [ ] [ 50] Apache XBean :: Spring (3.11.1)
[ 85] [Active ] [ ] [ ] [ 50] activemq-spring (5.7.0)
[ 99] [Active ] [Created ] [ ] [ 50] camel-karaf-commands (2.10.7)
[ 100] [Active ] [ ] [ ] [ 50] camel-core (2.10.7)
[ 102] [Active ] [ ] [ ] [ 50] camel-spring (2.10.7)
[ 103] [Active ] [Created ] [ ] [ 50] camel-blueprint (2.10.7)
[ 106] [Active ] [ ] [ ] [ 50] camel-jms (2.10.7)
[ 107] [Active ] [ ] [ ] [ 50] activemq-camel (5.7.0)
[ 172] [Active ] [ ] [ ] [ 50] Apache CXF Compatibility Bundle Jar (2.6.9)
[ 173] [Active ] [Created ] [ ] [ 50] camel-cxf (2.10.7)
[ 174] [Active ] [ ] [ ] [ 50] camel-cxf-transport (2.10.7)
[ 181] [Resolved ] [ ] [ ] [ 80] simple-camel-blueprint.xml (0.0.0)
[ 182] [Active ] [ ] [ ] [ 50] camel-stream (2.10.7)
[ 188] [Installed ] [ ] [ ] [ 80] ERP-blueprint.xml (0.0.0)
[ 199] [Active ] [ ] [ ] [ 50] camel-sql (2.10.7)
[ 204] [Installed ] [ ] [ ] [ 80] horizon-core (0.0.1)
[ 206] [Active ] [ ] [ ] [ 50] Data mapper for Jackson JSON processor (1.9.10)
[ 207] [Active ] [ ] [ ] [ 50] Jackson JSON processor (1.9.10)
[ 208] [Active ] [ ] [ ] [ 50] camel-jackson (2.10.7)
[ 209] [Active ] [ ] [ ] [ 50] MongoDB Java Driver (2.11.2.RELEASE)
[ 259] [Installed ] [ ] [ ] [ 80] Spring Data MongoDB Support (1.3.3.RELEASE)
[ 269] [Installed ] [ ] [ ] [ 80] horizon-util (0.0.1)
Do I need to update Apache CXF to version 2.7.0? And how do I do that?
I tried to update the bundle, but it did not work.
Thanks for any pointers.
Your horizon-util bundle imports the package org.apache.cxf.jaxrs.client with version >= 2.7.0 and < 3.0.0, so you need to install a CXF version in that range to resolve the error.
In my case I installed Apache CXF without Camel, so I will give you the steps for that scenario, but it should work much the same way with Camel.
To replace the current version, first remove the old feature repository:
feature:repo-list
--> lists the installed feature repositories (in my case cxf-2.x.x)
feature:repo-remove cxf-2.x.x
--> removes that repository from Karaf
feature:repo-add mvn:org.apache.cxf.karaf/apache-cxf/2.7.10/xml/features
--> adds the new CXF feature repository (here 2.7.10)
feature:install cxf-jaxrs
--> installs the part of CXF you need (or cxf instead of cxf-jaxrs if you need all of it)
feature:list | grep cxf
--> lists the CXF-related features; [x] means they are installed
I hope this helps.
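Alternatively, if you cannot move to CXF 2.7 and horizon-util in fact compiles and runs against the CXF 2.6.9 API already listed above (bundle 172), you could widen the import range your build generates instead. This is only a sketch: it assumes the bundle is built with the Felix maven-bundle-plugin, and the version range is purely illustrative; only do this if the 2.6 JAX-RS client API really is compatible with your code.

<!-- pom.xml of horizon-util (hypothetical build setup) -->
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- accept the CXF 2.6 line as well as 2.7 for the JAX-RS client package -->
      <Import-Package>
        org.apache.cxf.jaxrs.client;version="[2.6,3)",
        *
      </Import-Package>
    </instructions>
  </configuration>
</plugin>

After rebuilding, check the Import-Package header in the generated MANIFEST.MF before redeploying.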

Karaf blueprint locked on startup with stopping bundles

We have been using Karaf 2.3.3 for some months now on a system that receives data files, translates the data into objects, and persists the objects to a data store.
Recently, we've noticed that when Karaf is stopped and restarted, the bundles get into some kind of locked state for a period of time.
Here is a sequence of events:
1) During the Chef run, bundles are deployed into the deploy directory while Karaf is down
2) When Karaf comes up, all bundles and blueprints resolve correctly
3) When Karaf is restarted, the bundles resolve correctly, but the blueprints get into a locked state where most are up, one is in a Stopping state, and several may be in a Resolved state
4) After 5 minutes (a timeout), the Stopping bundle goes to Resolved, and some other bundle moves into the Stopping state
5) Some of the time (most of the time?), if you wait long enough, all bundles eventually move to the Active state and the system comes fully up
While Karaf is starting, I can use the Karaf client to issue 'list' commands and watch the bundles start up. They cycle from Installed -> Resolved -> Active, while the blueprints cycle from blank -> Creating -> Created, with an occasional GracePeriod thrown in while dependent services are coming up.
After it appears that all bundles are Active and all blueprints are Created, one bundle gets stuck in the Stopping state while others revert to a Resolved state:
[ 136] [Active ] [Created ] [ 80] transformation-services (1.0.3)
[ 137] [Active ] [Created ] [ 80] event-services (0.1.2)
[ 138] [Active ] [Created ] [ 80] ftp-services (0.0.0)
[ 139] [Active ] [Created ] [ 80] ingest-resources (0.0.1)
[ 140] [Active ] [Created ] [ 80] orchestration-app (0.2.3)
[ 141] [Active ] [Created ] [ 80] aws-services (0.4.0)
[ 142] [Resolved ] [ ] [ 80] point-data-service-test (0.2.0)
[ 143] [Active ] [Created ] [ 80] event-consumer-app (1.3.4)
[ 144] [Stopping ] [ ] [ 80] XXXX_no_op_log_transform.xml (0.0.0)
[ 145] [Resolved ] [ ] [ 80] persistence-app (1.3.3)
[ 146] [Active ] [Created ] [ 80] ftp-ingest-endpoint (1.0.2)
[ 147] [Resolved ] [ ] [ 80] secondary_ftp.xml (0.0.0)
[ 148] [Resolved ] [ ] [ 80] event-rest-test (0.0.0)
[ 149] [Resolved ] [ ] [ 80] customer_credentials.xml (0.0.0)
[ 150] [Resolved ] [ ] [ 80] customer1_xml.xml (0.0.0)
[ 151] [Active ] [Created ] [ 80] endpoint-services (0.0.0)
[ 152] [Active ] [Created ] [ 80] scheduler-services (0.1.0)
[ 153] [Active ] [Created ] [ 80] fourhundred_xml.xml (0.0.0)
[ 154] [Active ] [Creating ] [ 80] point-data-service (2.3.3)
[ 155] [Installed ] [ ] [ 80] customer1_csv.xml (0.0.0)
We have around 20 custom bundles that perform a variety of services. Some describe services that run in a scheduled executor. Some expose cxf REST services. Some are simple blueprint files that have been dropped into the karaf deploy directory. We are using the whiteboard pattern to discover, register, and access the services from the blueprint files that are dropped in the hot deploy.
I've experimented with using a feature file and with setting bundle start levels, but I still see the same behavior. There are a few JIRAs that describe this as a blueprint synchronization problem (https://issues.apache.org/jira/browse/KARAF-1724, https://issues.apache.org/jira/browse/ARIES-1051), but they don't offer any real advice.
Has anyone come across this same issue and come up with a reliable way to workaround it?
