Apache Camel AS2 component failed to decrypt inbound message - spring

I adopted the Apache Camel AS2 component to listen for AS2 messages over HTTP. However, I received an encrypted message from a partner, encrypted with our mutually agreed certificates, and failed to decrypt it. The Content-Transfer-Encoding was binary, and the charset was set to ISO-8859-1. The error was dumped as:
Caused by: java.io.EOFException: DEF length 19 object truncated by 10
at org.bouncycastle.asn1.DefiniteLengthInputStream.toByteArray(DefiniteLengthInputStream.java:139) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1InputStream.getBuffer(ASN1InputStream.java:443) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1InputStream.createPrimitiveDERObject(ASN1InputStream.java:529) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1StreamParser.parseImplicitPrimitive(ASN1StreamParser.java:205) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1StreamParser.implParseObject(ASN1StreamParser.java:98) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1StreamParser.readVector(ASN1StreamParser.java:263) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.DLSequenceParser.getLoadedObject(DLSequenceParser.java:41) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1StreamParser.readVector(ASN1StreamParser.java:267) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.DLSetParser.getLoadedObject(DLSetParser.java:41) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1StreamParser.readVector(ASN1StreamParser.java:267) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.DLSequenceParser.getLoadedObject(DLSequenceParser.java:41) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1StreamParser.readVector(ASN1StreamParser.java:267) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.DLSequenceParser.getLoadedObject(DLSequenceParser.java:41) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1StreamParser.readVector(ASN1StreamParser.java:267) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.DLSequenceParser.getLoadedObject(DLSequenceParser.java:41) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1StreamParser.readVector(ASN1StreamParser.java:267) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.DLSetParser.getLoadedObject(DLSetParser.java:41) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1StreamParser.readVector(ASN1StreamParser.java:267) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.BERSequenceParser.parse(BERSequenceParser.java:61) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.BERSequenceParser.getLoadedObject(BERSequenceParser.java:39) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1StreamParser.readVector(ASN1StreamParser.java:267) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1StreamParser.loadTaggedIL(ASN1StreamParser.java:136) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.BERTaggedObjectParser.getLoadedObject(BERTaggedObjectParser.java:84) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1StreamParser.readVector(ASN1StreamParser.java:267) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.BERSequenceParser.parse(BERSequenceParser.java:61) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.asn1.ASN1InputStream.readObject(ASN1InputStream.java:242) ~[bcprov-debug-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.cms.CMSUtils.readContentInfo(Unknown Source) ~[bcpkix-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.cms.CMSUtils.readContentInfo(Unknown Source) ~[bcpkix-jdk15on-1.70.jar!/:1.70.0]
at org.bouncycastle.cms.CMSEnvelopedData.<init>(Unknown Source) ~[bcpkix-jdk15on-1.70.jar!/:1.70.0]
at org.apache.camel.component.as2.api.entity.EntityParser.decryptData(EntityParser.java:233) ~[classes!/:0210]
Any advice on what the potential problems might be and how to debug this would be much appreciated. Thanks a lot.
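One way to narrow this down (a minimal sketch, assuming the raw, already transfer-decoded HTTP body has been dumped to a file; the file name and class name are placeholders) is to try parsing the payload with BouncyCastle directly, outside Camel:

import java.nio.file.Files;
import java.nio.file.Paths;

import org.bouncycastle.cms.CMSEnvelopedData;

public class CmsParseCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical byte-for-byte dump of the raw AS2 request body.
        byte[] raw = Files.readAllBytes(Paths.get("inbound-as2-body.bin"));
        // If the payload is a well-formed CMS EnvelopedData structure, this
        // parses; the same "DEF length ... object truncated" EOFException
        // here would mean the bytes are already damaged before
        // EntityParser.decryptData ever sees them.
        CMSEnvelopedData envelopedData = new CMSEnvelopedData(raw);
        System.out.println("Parsed OK, content encryption algorithm: "
                + envelopedData.getContentEncryptionAlgorithm().getAlgorithm());
    }
}

If that standalone parse also fails, suspicion shifts to the payload being mangled in transit or while being read (binary bodies handled as text are a classic cause), rather than to the certificates.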

Related

Gnuplot: frequency per min

I have a sample log file with 1,000 lines that looks like this:
TIME,STATUS
09:00,OK
09:00,TEMP
09:00,TEMP
09:00,TEMP
09:00,TEMP
09:00,TEMP
09:01,OK
09:01,OK
09:01,OK
09:01,PERM
09:01,TEMP
09:01,TEMP
09:02,OK
09:02,TEMP
09:02,TEMP
09:03,OK
09:03,PERM
09:03,PERM
09:03,TEMP
09:03,TEMP
09:04,OK
09:04,PERM
09:04,PERM
09:04,TEMP
09:04,TEMP
09:04,TEMP
09:05,OK
09:05,OK
09:05,OK
09:05,PERM
09:05,TEMP
09:05,TEMP
09:05,TEMP
09:05,TEMP
09:06,OK
09:06,OK
09:06,PERM
09:06,PERM
09:06,PERM
09:06,PERM
09:06,TEMP
09:06,TEMP
09:06,TEMP
09:06,TEMP
09:06,TEMP
09:07,OK
09:07,OK
09:07,TEMP
09:07,TEMP
09:07,TEMP
09:08,OK
09:08,OK
09:08,OK
09:08,OK
09:08,OK
09:08,OK
09:08,OK
09:08,TEMP
09:08,TEMP
09:08,TEMP
09:08,TEMP
09:09,OK
09:09,OK
09:09,OK
09:09,PERM
09:10,OK
09:10,PERM
09:10,PERM
09:10,TEMP
09:11,OK
09:11,OK
09:11,OK
09:11,OK
09:11,PERM
09:11,PERM
09:11,PERM
09:11,PERM
09:11,TEMP
09:11,TEMP
09:11,TEMP
09:12,PERM
09:12,TEMP
09:12,TEMP
09:13,OK
09:13,OK
09:13,OK
09:13,OK
09:13,OK
09:13,PERM
09:13,PERM
09:13,PERM
09:13,TEMP
09:13,TEMP
09:14,OK
09:14,OK
09:14,OK
09:14,PERM
09:14,PERM
09:14,PERM
09:14,PERM
09:14,TEMP
09:16,OK
09:16,OK
09:16,OK
09:16,PERM
09:16,PERM
09:16,TEMP
09:16,TEMP
09:17,OK
09:17,OK
09:17,PERM
09:17,PERM
09:18,OK
09:18,OK
09:18,OK
09:18,OK
09:18,OK
09:18,PERM
09:18,PERM
09:18,TEMP
09:18,TEMP
09:18,TEMP
09:19,OK
09:19,OK
09:19,OK
09:19,OK
09:19,OK
09:19,PERM
09:20,OK
09:20,OK
09:20,PERM
09:20,PERM
09:20,TEMP
09:20,TEMP
09:21,OK
09:21,OK
09:21,OK
09:21,PERM
09:21,TEMP
09:22,OK
09:22,OK
09:22,PERM
09:22,PERM
09:22,TEMP
09:22,TEMP
09:23,OK
09:23,PERM
09:23,PERM
09:23,PERM
09:23,TEMP
09:23,TEMP
09:23,TEMP
09:24,PERM
09:24,PERM
09:24,PERM
09:25,OK
09:25,OK
09:25,PERM
09:25,TEMP
09:26,OK
09:26,OK
09:26,OK
09:26,OK
09:26,OK
09:26,PERM
09:26,TEMP
09:27,OK
09:27,OK
09:27,OK
09:27,PERM
09:27,PERM
09:27,TEMP
09:27,TEMP
09:27,TEMP
09:28,PERM
09:28,PERM
09:28,PERM
09:28,PERM
09:29,OK
...
while the final file will have 10K lines in the same time frame.
I need to create a graph showing the number of statuses per minute for TEMP, PERM and OK. I would like one line per status (TEMP, PERM and OK), with time plotted on the X axis and frequency of occurrence on the Y axis.
I installed Gnuplot only 2 days ago on my Ubuntu 20.04.4 LTS from the standard repo:
bi#green:bin$ apt list gnuplot* 2>/dev/null | grep installed
gnuplot-data/focal,focal,now 5.2.8+dfsg1-2 all [installed,automatic]
gnuplot-qt/focal,now 5.2.8+dfsg1-2 amd64 [installed,automatic]
gnuplot/focal,focal,now 5.2.8+dfsg1-2 all [installed]
and so far I haven't managed more than this:
#!/bin/bash
x=logoutcol
cat $x
gnuplot -p <<-EOF
#set ytics scale 0
#set yzeroaxis
reset
set format x "%H:%M" time
set xdata time
set yrange [0:*]
set ylabel "Occurences"
set ytics 2
#set margin at screen 0.95
binwidth=60
bin(val) = binwidth * floor(val/binwidth)
set boxwidth binwidth
set datafile separator ","
set term png
set output "$x.png"
plot "$x" using (bin(timecolumn(1,"%H%M"))):(2) smooth freq with boxes
EOF
shotwell $x.png
rm $x.png
which produces this:
Any help will be much appreciated.
I am pretty sure there was an almost identical question here on SO; however, I can't find it, perhaps because I can't hit the right keywords for SO's search function.
The key point is the boolean expression (strcol(2) eq word(myKeys,i)) together with smooth frequency. If the value of the second column is identical to your keyword, the expression evaluates to 1, and to 0 otherwise.
You don't need bins like in creating other histograms because you want a bin of 1 minute (and your time resolution is already 1 minute).
Check the following example as starting point for further optimization.
Script:
### count occurrences of keywords
reset session
$Data <<EOD
# TIME,STATUS
09:00,OK
09:00,TEMP
09:00,TEMP
09:00,TEMP
09:00,TEMP
09:00,TEMP
09:01,OK
09:01,OK
09:01,OK
09:01,PERM
09:01,TEMP
09:01,TEMP
09:02,OK
09:02,TEMP
09:02,TEMP
09:03,OK
09:03,PERM
09:03,PERM
09:03,TEMP
09:03,TEMP
09:04,OK
09:04,PERM
09:04,PERM
09:04,TEMP
09:04,TEMP
09:04,TEMP
09:05,OK
09:05,OK
09:05,OK
09:05,PERM
09:05,TEMP
09:05,TEMP
09:05,TEMP
09:05,TEMP
09:06,OK
09:06,OK
09:06,PERM
09:06,PERM
09:06,PERM
09:06,PERM
09:06,TEMP
09:06,TEMP
09:06,TEMP
09:06,TEMP
09:06,TEMP
09:07,OK
09:07,OK
09:07,TEMP
09:07,TEMP
09:07,TEMP
09:08,OK
09:08,OK
09:08,OK
09:08,OK
09:08,OK
09:08,OK
09:08,OK
09:08,TEMP
09:08,TEMP
09:08,TEMP
09:08,TEMP
09:09,OK
09:09,OK
09:09,OK
09:09,PERM
09:10,OK
09:10,PERM
09:10,PERM
09:10,TEMP
09:11,OK
09:11,OK
09:11,OK
09:11,OK
09:11,PERM
09:11,PERM
09:11,PERM
09:11,PERM
09:11,TEMP
09:11,TEMP
09:11,TEMP
09:12,PERM
09:12,TEMP
09:12,TEMP
09:13,OK
09:13,OK
09:13,OK
09:13,OK
09:13,OK
09:13,PERM
09:13,PERM
09:13,PERM
09:13,TEMP
09:13,TEMP
09:14,OK
09:14,OK
09:14,OK
09:14,PERM
09:14,PERM
09:14,PERM
09:14,PERM
09:14,TEMP
09:16,OK
09:16,OK
09:16,OK
09:16,PERM
09:16,PERM
09:16,TEMP
09:16,TEMP
09:17,OK
09:17,OK
09:17,PERM
09:17,PERM
09:18,OK
09:18,OK
09:18,OK
09:18,OK
09:18,OK
09:18,PERM
09:18,PERM
09:18,TEMP
09:18,TEMP
09:18,TEMP
09:19,OK
09:19,OK
09:19,OK
09:19,OK
09:19,OK
09:19,PERM
09:20,OK
09:20,OK
09:20,PERM
09:20,PERM
09:20,TEMP
09:20,TEMP
09:21,OK
09:21,OK
09:21,OK
09:21,PERM
09:21,TEMP
09:22,OK
09:22,OK
09:22,PERM
09:22,PERM
09:22,TEMP
09:22,TEMP
09:23,OK
09:23,PERM
09:23,PERM
09:23,PERM
09:23,TEMP
09:23,TEMP
09:23,TEMP
09:24,PERM
09:24,PERM
09:24,PERM
09:25,OK
09:25,OK
09:25,PERM
09:25,TEMP
09:26,OK
09:26,OK
09:26,OK
09:26,OK
09:26,OK
09:26,PERM
09:26,TEMP
09:27,OK
09:27,OK
09:27,OK
09:27,PERM
09:27,PERM
09:27,TEMP
09:27,TEMP
09:27,TEMP
09:28,PERM
09:28,PERM
09:28,PERM
09:28,PERM
09:29,OK
EOD
set datafile separator comma
myKeys = "OK TEMP PERM"
myKey(i) = word(myKeys,i)
myTimeFmt = "%H:%M"
set format x myTimeFmt timedate
plot for [i=1:words(myKeys)] $Data u (timecolumn(1,myTimeFmt)):(strcol(2) eq word(myKeys,i)) smooth freq w lp pt 7 ti word(myKeys,i)
### end of script
Result:
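To run the same plot against the original log file instead of the inline datablock (a sketch, assuming the file is still named logoutcol as in the question's script and keeps its TIME,STATUS header line):

set datafile separator comma
myKeys = "OK TEMP PERM"
myTimeFmt = "%H:%M"
set format x myTimeFmt timedate
plot for [i=1:words(myKeys)] 'logoutcol' skip 1 \
    u (timecolumn(1,myTimeFmt)):(strcol(2) eq word(myKeys,i)) smooth freq w lp pt 7 ti word(myKeys,i)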

Application Suspended when calling method GeneXus.SD.Media.Camera.TakePhoto() using GX16 U7 SD IOS Generator

On devices with iOS 13, calling the GeneXus.SD.Media.Camera.TakePhoto() method for the first time takes about 10 to 15 seconds before program execution continues.
Execution of any other option or button is suspended until the camera control has been shown; otherwise the application stops working.
Note: This behavior only happens the first time GeneXus.SD.Media.Camera.TakePhoto() method is called.
The apparent problem is that GeneXus is making a UI call from a background thread without using the following statement:
DispatchQueue.main.async {
    // Do UI code here.
    // Call Google Maps methods.
}
The log that the application shows while it is waiting to present the camera control is the following:
Log in Xcode 11.3
Main Thread Checker: UI API called on a background thread: -[UIApplication userInterfaceLayoutDirection]
PID: 268, TID: 5281, Thread name: (none), Queue name: com.apple.camera.zoom-dial-image-generation, QoS: 21
Backtrace:
4 GXUIApplication 0x000000010c674170 $s15GXUIApplicationAAC28userInterfaceLayoutDirectionSo06UIUsercdE0VvgTo + 212
5 UIKitCore 0x000000019fea34fc AA897CA9-8D15-3DD7-BB4F-8D90F9A28571 + 15566076
6 UIKitCore 0x000000019f6bdc8c AA897CA9-8D15-3DD7-BB4F-8D90F9A28571 + 7285900
7 UIKitCore 0x000000019f6bda18 AA897CA9-8D15-3DD7-BB4F-8D90F9A28571 + 7285272
8 UIKitCore 0x000000019f64a848 AA897CA9-8D15-3DD7-BB4F-8D90F9A28571 + 6813768
9 CameraUI 0x00000001bdfee5a8 91E5E69E-0F28-35E3-86F9-7AA8B1D7F726 + 1369512
10 UIKitCore 0x000000019f63ee94 AA897CA9-8D15-3DD7-BB4F-8D90F9A28571 + 6766228
11 UIKitCore 0x000000019f63ecb0 AA897CA9-8D15-3DD7-BB4F-8D90F9A28571 + 6765744
12 UIKitCore 0x000000019f63c464 AA897CA9-8D15-3DD7-BB4F-8D90F9A28571 + 6755428
13 CameraUI 0x00000001bdfee2c0 91E5E69E-0F28-35E3-86F9-7AA8B1D7F726 + 1368768
14 CameraUI 0x00000001bdfed65c 91E5E69E-0F28-35E3-86F9-7AA8B1D7F726 + 1365596
15 AssetsLibraryServices 0x00000001b0462d3c 31232DEC-0B77-3A8B-B80A-A51A16204F8E + 228668
16 libdispatch.dylib 0x000000010d091e1c _dispatch_call_block_and_release + 32
17 libdispatch.dylib 0x000000010d09327c _dispatch_client_callout + 20
18 libdispatch.dylib 0x000000010d09a90c _dispatch_lane_serial_drain + 720
19 libdispatch.dylib 0x000000010d09b4fc _dispatch_lane_invoke + 408
20 libdispatch.dylib 0x000000010d0a64dc _dispatch_workloop_worker_thread + 1344
21 libsystem_pthread.dylib 0x000000019b62b6d0 _pthread_wqthread + 280
22 libsystem_pthread.dylib 0x000000019b6319e8 start_wqthread + 8
2020-02-03 16:52:16.618996-0500 Routik[268:5281] [reports] Main Thread Checker: UI API called on a background thread: -[UIApplication userInterfaceLayoutDirection]
2020-02-03 16:52:26.217529-0500 Routik[268:5081] [Common] _BSMachError: port fe03; (os/kern) invalid capability (0x14) "Unable to insert COPY_SEND"
Actually, it's a false positive from Apple's Main Thread Checker tool. Let me explain:
Main Thread Checker works by dynamically replacing, at app launch, the implementations of methods that should only be called on the main thread with versions that prepend the check. Methods known to be safe to call on background threads are excluded from this check.
The method called from a background thread is -[UIApplication userInterfaceLayoutDirection].
GeneXus applications use a subclass of UIApplication (GXUIApplication) that overrides this method (userInterfaceLayoutDirection) in order to support the SetLanguage function for right-to-left languages on left-to-right configured devices at runtime (or the other way around). Inside this override, [super userInterfaceLayoutDirection] is called, and this is where Main Thread Checker raises the warning.
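Conceptually, the override looks something like this (a hypothetical reconstruction for illustration only, not the actual GeneXus source):

import UIKit

class GXUIApplication: UIApplication {
    override var userInterfaceLayoutDirection: UIUserInterfaceLayoutDirection {
        // If SetLanguage forced an RTL/LTR direction at runtime, it would be
        // returned here; otherwise fall back to the system value. This super
        // call is what Main Thread Checker flags when it runs off-main.
        return super.userInterfaceLayoutDirection
    }
}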
This method is called in the background internally by Apple's frameworks, as you can see in the backtrace you posted: everything except the GXUIApplication userInterfaceLayoutDirection method itself is not GeneXus code.
The problem is that Main Thread Checker raises the warning for the [UIApplication userInterfaceLayoutDirection] call only when the call is explicit, and ignores it when the method is called internally from another Apple framework. In this case it's considered explicit because the method is overridden in the subclass, even though it's still being called internally from an Apple framework.
You can check this by replacing, in the source file main.m, the line:
return UIApplicationMain(argc, argv, NSStringFromClass([GXUIApplication class]), NSStringFromClass([GXAppDelegate class]));
with:
return UIApplicationMain(argc, argv, NSStringFromClass([UIApplication class]), NSStringFromClass([GXAppDelegate class]));
With this change (not using the subclass with the override), Main Thread Checker won't raise the warning, even though the same method is still being called internally from a background thread.
We will look for a workaround for this Main Thread Checker issue in the upcoming GeneXus upgrades (thanks for the report) and also notify Apple about the issue with Main Thread Checker.
Meanwhile, you can disable Main Thread Checker from Xcode (Edit Scheme > Run > Diagnostics > uncheck Main Thread Checker).
Also, you don't have to worry about this being an issue for your users, as Main Thread Checker is only active when the app is launched from Xcode (with Main Thread Checker diagnostic enabled).

Got a SIGSEGV while executing native code in Xamarin.IOS

The link between the simulator and Xamarin stops after a few seconds (the simulator screen is white) and I'm able to push the play button again. I use Xamarin.Forms for this iOS project. I've tried this in the newest versions of both Xamarin Studio and Visual Studio.
Native stacktrace:
2017-09-13 14:22:34.113 NewsTestOne.iOS[25106:19993674] critical: 0 NewsTestOne.iOS 0x0000000106847184
mono_handle_native_crash + 244
2017-09-13 14:22:34.113 NewsTestOne.iOS[25106:19993674] critical: 1 NewsTestOne.iOS 0x000000010685320b mono_sigsegv_signal_handler + 171
2017-09-13 14:22:34.114 NewsTestOne.iOS[25106:19993674] critical: 2 libsystem_platform.dylib 0x000000010d451b3a _sigtramp + 26
2017-09-13 14:22:34.114 NewsTestOne.iOS[25106:19993674] critical: 3 ??? 0x0003f78b358d56aa 0x0 + 1116602201101994
2017-09-13 14:22:34.114 NewsTestOne.iOS[25106:19993674] critical: 4 CFNetwork 0x000000010bc51e2e _ZN15TCPIOConnection16_startConnectionEv + 530
2017-09-13 14:22:34.114 NewsTestOne.iOS[25106:19993674] critical: 5 CFNetwork 0x000000010bd8e32a ___ZN4Tube23_onqueue_prepConnectionEU13block_pointerFvvEU13block_pointerFviE_block_invoke.67 + 726
2017-09-13 14:22:34.114 NewsTestOne.iOS[25106:19993674] critical: 6 CFNetwork 0x000000010bd8e807 ___ZN4Tube23_onqueue_prepConnectionEU13block_pointerFvvEU13block_pointerFviE_block_invoke_2.83 + 21
2017-09-13 14:22:34.115 NewsTestOne.iOS[25106:19993674] critical: 7 libdispatch.dylib 0x000000010d0ae585 _dispatch_call_block_and_release + 12
2017-09-13 14:22:34.115 NewsTestOne.iOS[25106:19993674] critical: 8 libdispatch.dylib 0x000000010d0cf792 _dispatch_client_callout + 8
2017-09-13 14:22:34.115 NewsTestOne.iOS[25106:19993674] critical: 9 libdispatch.dylib 0x000000010d0b5237 _dispatch_queue_serial_drain + 1022
2017-09-13 14:22:34.115 NewsTestOne.iOS[25106:19993674] critical: 10 libdispatch.dylib 0x000000010d0b598f _dispatch_queue_invoke + 1053
2017-09-13 14:22:34.115 NewsTestOne.iOS[25106:19993674] critical: 11 libdispatch.dylib 0x000000010d0b7899 _dispatch_root_queue_drain + 813
2017-09-13 14:22:34.116 NewsTestOne.iOS[25106:19993674] critical: 12 libdispatch.dylib 0x000000010d0b750d _dispatch_worker_thread3 + 113
2017-09-13 14:22:34.116 NewsTestOne.iOS[25106:19993674] critical: 13 libsystem_pthread.dylib 0x000000010d4635a2 _pthread_wqthread + 1299
2017-09-13 14:22:34.116 NewsTestOne.iOS[25106:19993674] critical: 14 libsystem_pthread.dylib 0x000000010d46307d start_wqthread + 13
2017-09-13 14:22:34.116 NewsTestOne.iOS[25106:19993674] critical:
=================================================================
Got a SIGSEGV while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries
used by your application.
=================================================================
This usually indicates threading errors in your code. I suggest you debug all of your asynchronous server-side calls; it can be plenty of things, like:
A Task.Run method running in a background thread trying to update a UI property.
A Custom Renderer or a Dependency Service which is used within an asynchronous Task.
If you have Timers in your code, take a look at those as well.
With the latest versions of Xamarin the debugger is not able to correctly debug native code, so make sure you clean your solution and try to debug on a real device (also, getting rid of breakpoints sometimes helps).
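For the first case (a background Task updating a UI property), a minimal sketch of the usual fix; NewsPage, NewsList and FetchItemsFromServer are placeholders, not from the original post:

using System.Collections.Generic;
using System.Threading.Tasks;
using Xamarin.Forms;

class NewsPage : ContentPage
{
    readonly ListView NewsList = new ListView();

    async Task LoadNewsAsync()
    {
        // Background work only: no UI access inside Task.Run.
        var items = await Task.Run(() => FetchItemsFromServer());

        // UI-bound members must be updated on the main thread.
        Device.BeginInvokeOnMainThread(() =>
        {
            NewsList.ItemsSource = items;
        });
    }

    // Placeholder for the app's actual (possibly slow) server call.
    static List<string> FetchItemsFromServer() => new List<string> { "headline" };
}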

Neo4j very slow for graph import

I'm using Neo4j to load a graph. It is a CSV file of 11 million rows, and it is taking a long time to load: 2 hours have passed and the graph still hasn't finished loading. Is that normal?
My laptop has an i7 at 2.4 GHz and 8 GB of RAM.
The sample data:
protein1 protein2 combined_score
9615.ENSCAFP00000000001 9615.ENSCAFP00000014827 151
9615.ENSCAFP00000000001 9615.ENSCAFP00000026847 802
9615.ENSCAFP00000000001 9615.ENSCAFP00000015235 900
9615.ENSCAFP00000000001 9615.ENSCAFP00000007210 261
9615.ENSCAFP00000000001 9615.ENSCAFP00000025394 248
9615.ENSCAFP00000000001 9615.ENSCAFP00000038575 900
9615.ENSCAFP00000000001 9615.ENSCAFP00000011457 177
9615.ENSCAFP00000000001 9615.ENSCAFP00000002193 503
9615.ENSCAFP00000000001 9615.ENSCAFP00000042321 900
9615.ENSCAFP00000000001 9615.ENSCAFP00000011541 207
9615.ENSCAFP00000000001 9615.ENSCAFP00000038517 183
9615.ENSCAFP00000000001 9615.ENSCAFP00000003009 151
Query
CREATE CONSTRAINT ON (n:Node) ASSERT n.NodeID IS UNIQUE;
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///linksdog.csv'
AS line
MERGE (n1:Node {NodeID: line.protein1})
MERGE (n2:Node {NodeID: line.protein2})
MERGE (n1)-[:ACTING_WITH {Score: TOFLOAT(line.combined_score)}]->(n2);
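For comparison, a commonly suggested restructuring for imports of this size (a sketch only, not verified against this dataset) creates all nodes in a first pass and then uses CREATE for the relationships in a second pass, so each row avoids a full relationship-pattern MERGE:

// Pass 1: create the distinct nodes; the unique constraint on NodeID
// gives MERGE an index to look up against.
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM 'file:///linksdog.csv' AS line
MERGE (:Node {NodeID: line.protein1})
MERGE (:Node {NodeID: line.protein2});

// Pass 2: match the existing nodes and CREATE the relationships,
// avoiding a property-by-property MERGE on every row.
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM 'file:///linksdog.csv' AS line
MATCH (n1:Node {NodeID: line.protein1})
MATCH (n2:Node {NodeID: line.protein2})
CREATE (n1)-[:ACTING_WITH {Score: toFloat(line.combined_score)}]->(n2);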

Goroutines overhead and performance analysis when subsetting DataFrames (Gota)

I've been working since the beginning of 2016 on a Pandas/R DataFrame implementation for Go: https://github.com/kniren/gota.
Recently, I've been focusing on improving the performance of the library to try to match that of Pandas/dplyr. You can follow the progress so far here: https://github.com/kniren/gota/issues/16
Since one of the most frequently used operations is DataFrame subsetting, I thought it would be a good idea to introduce concurrency to try to increase the performance of the system.
Before:
columns := make([]series.Series, df.ncols)
for i, column := range df.columns {
    s := column.Subset(indexes)
    columns[i] = s
}
After:
columns := make([]series.Series, df.ncols)
var wg sync.WaitGroup
wg.Add(df.ncols)
for i := range df.columns {
    go func(i int) {
        columns[i] = df.columns[i].Subset(indexes)
        wg.Done()
    }(i)
}
wg.Wait()
As far as I understand, creating a goroutine for each of the columns of a DataFrame should not introduce much overhead, so I was expecting at least a 2x speedup over the serial version (at least for large datasets). However, when benchmarking this change with datasets and indexes of different sizes, the results are very disappointing (NROWSxNCOLS_INDEXSIZE-CPUCORES):
benchmark old ns/op new ns/op delta
BenchmarkDataFrame_Subset/1000000x20_100 55230 109349 +97.99%
BenchmarkDataFrame_Subset/1000000x20_100-2 51457 67714 +31.59%
BenchmarkDataFrame_Subset/1000000x20_100-4 49845 70141 +40.72%
BenchmarkDataFrame_Subset/1000000x20_1000 518506 518085 -0.08%
BenchmarkDataFrame_Subset/1000000x20_1000-2 476661 311379 -34.67%
BenchmarkDataFrame_Subset/1000000x20_1000-4 505023 316583 -37.31%
BenchmarkDataFrame_Subset/1000000x20_10000 6621116 6314112 -4.64%
BenchmarkDataFrame_Subset/1000000x20_10000-2 7316062 4509601 -38.36%
BenchmarkDataFrame_Subset/1000000x20_10000-4 6483812 8394113 +29.46%
BenchmarkDataFrame_Subset/1000000x20_100000 105341711 106427967 +1.03%
BenchmarkDataFrame_Subset/1000000x20_100000-2 94567729 56778647 -39.96%
BenchmarkDataFrame_Subset/1000000x20_100000-4 91896690 60971444 -33.65%
BenchmarkDataFrame_Subset/1000000x20_1000000 1538680081 1632044752 +6.07%
BenchmarkDataFrame_Subset/1000000x20_1000000-2 1292113119 1100075806 -14.86%
BenchmarkDataFrame_Subset/1000000x20_1000000-4 1282367864 949615298 -25.95%
BenchmarkDataFrame_Subset/100000x20_100 50286 106850 +112.48%
BenchmarkDataFrame_Subset/100000x20_100-2 54537 70492 +29.26%
BenchmarkDataFrame_Subset/100000x20_100-4 58024 76617 +32.04%
BenchmarkDataFrame_Subset/100000x20_1000 541600 625967 +15.58%
BenchmarkDataFrame_Subset/100000x20_1000-2 493894 362894 -26.52%
BenchmarkDataFrame_Subset/100000x20_1000-4 535373 349211 -34.77%
BenchmarkDataFrame_Subset/100000x20_10000 6298063 7678499 +21.92%
BenchmarkDataFrame_Subset/100000x20_10000-2 5827185 4832560 -17.07%
BenchmarkDataFrame_Subset/100000x20_10000-4 8195048 3660077 -55.34%
BenchmarkDataFrame_Subset/100000x20_100000 105108807 82976477 -21.06%
BenchmarkDataFrame_Subset/100000x20_100000-2 92112736 58317114 -36.69%
BenchmarkDataFrame_Subset/100000x20_100000-4 92044966 63469935 -31.04%
BenchmarkDataFrame_Subset/1000x20_10 9741 53365 +447.84%
BenchmarkDataFrame_Subset/1000x20_10-2 9366 36457 +289.25%
BenchmarkDataFrame_Subset/1000x20_10-4 9463 46682 +393.31%
BenchmarkDataFrame_Subset/1000x20_100 50841 103523 +103.62%
BenchmarkDataFrame_Subset/1000x20_100-2 49972 62344 +24.76%
BenchmarkDataFrame_Subset/1000x20_100-4 72014 81808 +13.60%
BenchmarkDataFrame_Subset/1000x20_1000 457799 571292 +24.79%
BenchmarkDataFrame_Subset/1000x20_1000-2 460551 405116 -12.04%
BenchmarkDataFrame_Subset/1000x20_1000-4 462928 416522 -10.02%
BenchmarkDataFrame_Subset/1000x200_10 90125 688443 +663.88%
BenchmarkDataFrame_Subset/1000x200_10-2 85259 392705 +360.60%
BenchmarkDataFrame_Subset/1000x200_10-4 87412 387509 +343.31%
BenchmarkDataFrame_Subset/1000x200_100 486600 1082901 +122.54%
BenchmarkDataFrame_Subset/1000x200_100-2 471154 732304 +55.43%
BenchmarkDataFrame_Subset/1000x200_100-4 542846 659571 +21.50%
BenchmarkDataFrame_Subset/1000x200_1000 5926086 6686480 +12.83%
BenchmarkDataFrame_Subset/1000x200_1000-2 5364091 3986970 -25.67%
BenchmarkDataFrame_Subset/1000x200_1000-4 5904977 4504084 -23.72%
BenchmarkDataFrame_Subset/1000x2000_10 1187297 7800052 +556.96%
BenchmarkDataFrame_Subset/1000x2000_10-2 1217022 3930742 +222.98%
BenchmarkDataFrame_Subset/1000x2000_10-4 1301666 3617871 +177.94%
BenchmarkDataFrame_Subset/1000x2000_100 6942015 10790196 +55.43%
BenchmarkDataFrame_Subset/1000x2000_100-2 6588351 7592847 +15.25%
BenchmarkDataFrame_Subset/1000x2000_100-4 7067226 14391327 +103.63%
BenchmarkDataFrame_Subset/1000x2000_1000 62392457 69560711 +11.49%
BenchmarkDataFrame_Subset/1000x2000_1000-2 57793006 37416703 -35.26%
BenchmarkDataFrame_Subset/1000x2000_1000-4 59572261 58398203 -1.97%
benchmark old allocs new allocs delta
BenchmarkDataFrame_Subset/1000000x20_100 41 42 +2.44%
BenchmarkDataFrame_Subset/1000000x20_100-2 41 42 +2.44%
BenchmarkDataFrame_Subset/1000000x20_100-4 41 42 +2.44%
BenchmarkDataFrame_Subset/1000000x20_1000 41 42 +2.44%
BenchmarkDataFrame_Subset/1000000x20_1000-2 41 42 +2.44%
BenchmarkDataFrame_Subset/1000000x20_1000-4 41 42 +2.44%
BenchmarkDataFrame_Subset/1000000x20_10000 41 42 +2.44%
BenchmarkDataFrame_Subset/1000000x20_10000-2 41 42 +2.44%
BenchmarkDataFrame_Subset/1000000x20_10000-4 41 42 +2.44%
BenchmarkDataFrame_Subset/1000000x20_100000 41 42 +2.44%
BenchmarkDataFrame_Subset/1000000x20_100000-2 41 42 +2.44%
BenchmarkDataFrame_Subset/1000000x20_100000-4 41 42 +2.44%
BenchmarkDataFrame_Subset/1000000x20_1000000 41 42 +2.44%
BenchmarkDataFrame_Subset/1000000x20_1000000-2 41 43 +4.88%
BenchmarkDataFrame_Subset/1000000x20_1000000-4 41 46 +12.20%
BenchmarkDataFrame_Subset/100000x20_100 41 42 +2.44%
BenchmarkDataFrame_Subset/100000x20_100-2 41 42 +2.44%
BenchmarkDataFrame_Subset/100000x20_100-4 41 42 +2.44%
BenchmarkDataFrame_Subset/100000x20_1000 41 42 +2.44%
BenchmarkDataFrame_Subset/100000x20_1000-2 41 42 +2.44%
BenchmarkDataFrame_Subset/100000x20_1000-4 41 42 +2.44%
BenchmarkDataFrame_Subset/100000x20_10000 41 42 +2.44%
BenchmarkDataFrame_Subset/100000x20_10000-2 41 42 +2.44%
BenchmarkDataFrame_Subset/100000x20_10000-4 41 42 +2.44%
BenchmarkDataFrame_Subset/100000x20_100000 41 42 +2.44%
BenchmarkDataFrame_Subset/100000x20_100000-2 41 42 +2.44%
BenchmarkDataFrame_Subset/100000x20_100000-4 41 42 +2.44%
BenchmarkDataFrame_Subset/1000x20_10 41 42 +2.44%
BenchmarkDataFrame_Subset/1000x20_10-2 41 42 +2.44%
BenchmarkDataFrame_Subset/1000x20_10-4 41 42 +2.44%
BenchmarkDataFrame_Subset/1000x20_100 41 42 +2.44%
BenchmarkDataFrame_Subset/1000x20_100-2 41 42 +2.44%
BenchmarkDataFrame_Subset/1000x20_100-4 41 42 +2.44%
BenchmarkDataFrame_Subset/1000x20_1000 41 42 +2.44%
BenchmarkDataFrame_Subset/1000x20_1000-2 41 42 +2.44%
BenchmarkDataFrame_Subset/1000x20_1000-4 41 42 +2.44%
BenchmarkDataFrame_Subset/1000x200_10 401 402 +0.25%
BenchmarkDataFrame_Subset/1000x200_10-2 401 402 +0.25%
BenchmarkDataFrame_Subset/1000x200_10-4 401 402 +0.25%
BenchmarkDataFrame_Subset/1000x200_100 401 402 +0.25%
BenchmarkDataFrame_Subset/1000x200_100-2 401 402 +0.25%
BenchmarkDataFrame_Subset/1000x200_100-4 401 402 +0.25%
BenchmarkDataFrame_Subset/1000x200_1000 401 402 +0.25%
BenchmarkDataFrame_Subset/1000x200_1000-2 401 402 +0.25%
BenchmarkDataFrame_Subset/1000x200_1000-4 401 402 +0.25%
BenchmarkDataFrame_Subset/1000x2000_10 4001 4002 +0.02%
BenchmarkDataFrame_Subset/1000x2000_10-2 4001 4002 +0.02%
BenchmarkDataFrame_Subset/1000x2000_10-4 4001 4002 +0.02%
BenchmarkDataFrame_Subset/1000x2000_100 4001 4002 +0.02%
BenchmarkDataFrame_Subset/1000x2000_100-2 4001 4002 +0.02%
BenchmarkDataFrame_Subset/1000x2000_100-4 4001 4002 +0.02%
BenchmarkDataFrame_Subset/1000x2000_1000 4001 4002 +0.02%
BenchmarkDataFrame_Subset/1000x2000_1000-2 4001 4010 +0.22%
BenchmarkDataFrame_Subset/1000x2000_1000-4 4001 4003 +0.05%
benchmark old bytes new bytes delta
BenchmarkDataFrame_Subset/1000000x20_100 32400 32416 +0.05%
BenchmarkDataFrame_Subset/1000000x20_100-2 32400 32416 +0.05%
BenchmarkDataFrame_Subset/1000000x20_100-4 32400 32416 +0.05%
BenchmarkDataFrame_Subset/1000000x20_1000 298880 298896 +0.01%
BenchmarkDataFrame_Subset/1000000x20_1000-2 298880 298896 +0.01%
BenchmarkDataFrame_Subset/1000000x20_1000-4 298880 298896 +0.01%
BenchmarkDataFrame_Subset/1000000x20_10000 2971520 2971536 +0.00%
BenchmarkDataFrame_Subset/1000000x20_10000-2 2971520 2971536 +0.00%
BenchmarkDataFrame_Subset/1000000x20_10000-4 2971520 2971536 +0.00%
BenchmarkDataFrame_Subset/1000000x20_100000 29083520 29083536 +0.00%
BenchmarkDataFrame_Subset/1000000x20_100000-2 29083520 29083547 +0.00%
BenchmarkDataFrame_Subset/1000000x20_100000-4 29083542 29083563 +0.00%
BenchmarkDataFrame_Subset/1000000x20_1000000 290121600 290121616 +0.00%
BenchmarkDataFrame_Subset/1000000x20_1000000-2 290121600 290121696 +0.00%
BenchmarkDataFrame_Subset/1000000x20_1000000-4 290121600 290121840 +0.00%
BenchmarkDataFrame_Subset/100000x20_100 32400 32416 +0.05%
BenchmarkDataFrame_Subset/100000x20_100-2 32400 32416 +0.05%
BenchmarkDataFrame_Subset/100000x20_100-4 32400 32416 +0.05%
BenchmarkDataFrame_Subset/100000x20_1000 298880 298896 +0.01%
BenchmarkDataFrame_Subset/100000x20_1000-2 298880 298896 +0.01%
BenchmarkDataFrame_Subset/100000x20_1000-4 298880 298896 +0.01%
BenchmarkDataFrame_Subset/100000x20_10000 2971520 2971536 +0.00%
BenchmarkDataFrame_Subset/100000x20_10000-2 2971520 2971536 +0.00%
BenchmarkDataFrame_Subset/100000x20_10000-4 2971520 2971536 +0.00%
BenchmarkDataFrame_Subset/100000x20_100000 29083520 29083536 +0.00%
BenchmarkDataFrame_Subset/100000x20_100000-2 29083520 29083536 +0.00%
BenchmarkDataFrame_Subset/100000x20_100000-4 29083542 29083553 +0.00%
BenchmarkDataFrame_Subset/1000x20_10 4880 4896 +0.33%
BenchmarkDataFrame_Subset/1000x20_10-2 4880 4896 +0.33%
BenchmarkDataFrame_Subset/1000x20_10-4 4880 4896 +0.33%
BenchmarkDataFrame_Subset/1000x20_100 32400 32416 +0.05%
BenchmarkDataFrame_Subset/1000x20_100-2 32400 32416 +0.05%
BenchmarkDataFrame_Subset/1000x20_100-4 32400 32416 +0.05%
BenchmarkDataFrame_Subset/1000x20_1000 298880 298896 +0.01%
BenchmarkDataFrame_Subset/1000x20_1000-2 298880 298896 +0.01%
BenchmarkDataFrame_Subset/1000x20_1000-4 298880 298896 +0.01%
BenchmarkDataFrame_Subset/1000x200_10 49568 49584 +0.03%
BenchmarkDataFrame_Subset/1000x200_10-2 49568 49584 +0.03%
BenchmarkDataFrame_Subset/1000x200_10-4 49568 49585 +0.03%
BenchmarkDataFrame_Subset/1000x200_100 324768 324784 +0.00%
BenchmarkDataFrame_Subset/1000x200_100-2 324768 324784 +0.00%
BenchmarkDataFrame_Subset/1000x200_100-4 324768 324784 +0.00%
BenchmarkDataFrame_Subset/1000x200_1000 2989568 2989584 +0.00%
BenchmarkDataFrame_Subset/1000x200_1000-2 2989568 2989584 +0.00%
BenchmarkDataFrame_Subset/1000x200_1000-4 2989569 2989588 +0.00%
BenchmarkDataFrame_Subset/1000x2000_10 491072 491088 +0.00%
BenchmarkDataFrame_Subset/1000x2000_10-2 491072 491133 +0.01%
BenchmarkDataFrame_Subset/1000x2000_10-4 491072 491088 +0.00%
BenchmarkDataFrame_Subset/1000x2000_100 3243072 3243088 +0.00%
BenchmarkDataFrame_Subset/1000x2000_100-2 3243074 3243102 +0.00%
BenchmarkDataFrame_Subset/1000x2000_100-4 3243076 3243100 +0.00%
BenchmarkDataFrame_Subset/1000x2000_1000 29891072 29891088 +0.00%
BenchmarkDataFrame_Subset/1000x2000_1000-2 29891086 29891797 +0.00%
BenchmarkDataFrame_Subset/1000x2000_1000-4 29891115 29891167 +0.00%
Running the profiler (cpu/mem) over this benchmark didn't seem to reveal anything significant. The concurrent version seems to spend some time in runtime.match_semaphore_signal, but I guess this is to be expected when waiting for the goroutines to finish.
I've tried limiting the number of goroutines launched to the maximum number of cores as reported by runtime.GOMAXPROCS(0), but the results are, if anything, even worse. Am I doing something horribly wrong here, or is the goroutine overhead so big that it has such a significant effect on the performance?
Goroutines are cheap, but not free.
I didn't read your code, but if you are spawning one goroutine per column every time you subset, that's a very bad practice: each goroutine does very little work relative to the cost of scheduling it.
This can be seen in your benchmark where you have 2k columns and only 1k rows: there you get a very big improvement. But in all the other cases, when the number of columns is much smaller than the number of rows, goroutine spawning becomes the bottleneck.
Instead you should spawn a pool of goroutines (close to your CPU count) and distribute the work between them through channels; that's the canonical way, sketched below. You may want to read https://blog.golang.org/pipelines
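A minimal sketch of that pattern, with stand-in Series/DataFrame types (placeholders, not the gota API) and assuming Subset is safe to call concurrently on distinct columns:

package main

import (
    "fmt"
    "runtime"
    "sync"
)

// Stand-ins for the question's types; the real ones live in the gota packages.
type Series []float64

func (s Series) Subset(indexes []int) Series {
    out := make(Series, len(indexes))
    for j, idx := range indexes {
        out[j] = s[idx]
    }
    return out
}

type DataFrame struct {
    columns []Series
    ncols   int
}

// subsetColumns feeds one job per column to a fixed pool of workers
// (roughly one per core) instead of spawning one goroutine per column.
func subsetColumns(df *DataFrame, indexes []int) []Series {
    columns := make([]Series, df.ncols)
    jobs := make(chan int)
    var wg sync.WaitGroup
    for w := 0; w < runtime.GOMAXPROCS(0); w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := range jobs {
                // Each worker writes to a distinct index, so no locking needed.
                columns[i] = df.columns[i].Subset(indexes)
            }
        }()
    }
    for i := 0; i < df.ncols; i++ {
        jobs <- i
    }
    close(jobs)
    wg.Wait()
    return columns
}

func main() {
    df := &DataFrame{columns: []Series{{1, 2, 3}, {4, 5, 6}}, ncols: 2}
    fmt.Println(subsetColumns(df, []int{0, 2})) // [[1 3] [4 6]]
}

Whether the pool actually wins still depends on the per-column workload being large enough to amortize the channel sends, which is consistent with what the benchmarks above show.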
