Reactor test failed

I'm trying Reactor for the first time, so I cloned and built it (as described at https://github.com/reactor/reactor).
I'm using Windows XP and JDK 8.
However, when I run ./gradlew test, I get the following error:
reactor.queue.PersistentQueueSpec > Java Chronicle-based PersistentQueue is performant FAILED
java.lang.IllegalStateException: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'test': was expecting 'null', 'true', 'false' or NaN
at [Source: reactor.io.Buffer$BufferInputStream@136ec72; line: 1, column: 6]
at reactor.io.encoding.json.JsonCodec$JsonDecoder.apply(JsonCodec.java:112)
at reactor.io.encoding.json.JsonCodec$JsonDecoder.apply(JsonCodec.java:88)
at reactor.queue.IndexedChronicleQueuePersistor.read(IndexedChronicleQueuePersistor.java:189)
at reactor.queue.IndexedChronicleQueuePersistor.access$900(IndexedChronicleQueuePersistor.java:27)
at reactor.queue.IndexedChronicleQueuePersistor$ChronicleRemoveFunction.get(IndexedChronicleQueuePersistor.java:253)
at reactor.queue.IndexedChronicleQueuePersistor$1.next(IndexedChronicleQueuePersistor.java:172)
at reactor.queue.PersistentQueueSpec.Java Chronicle-based PersistentQueue is performant(PersistentQueueSpec.groovy:103)
Caused by:
com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'test': was expecting 'null', 'true', 'false' or NaN
at [Source: reactor.io.Buffer$BufferInputStream@136ec72; line: 1, column: 6]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1524)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:557)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3095)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3073)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._matchToken(UTF8StreamJsonParser.java:2479)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:793)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:698)
at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:3024)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2971)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2137)
at reactor.io.encoding.json.JsonCodec$JsonDecoder.apply(JsonCodec.java:103)
... 6 more
It seems I get an error similar to the one in this post, where it's suggested to @Ignore the test if the feature is not used. But I'm not sure whether I'll use the feature in the future or not.
Does anyone know how to build & test successfully without having to @Ignore the test?

It's not clear why this test fails for some Windows users. It likely has to do with using the Java Chronicle on that platform. The OpenHFT libraries rely on Unsafe to gain their speed for some functions and, in all honesty, I'm not sure how well-supported Java Chronicle is on Windows platforms.
It would be good to have a GitHub issue detailing this failure and including important details about OS, hardware, JVM version, etc... and we'll try to loop in some of the OpenHFT folks and see if there's anything they can point us toward.
Update: It seems that the issue with the test is actually in the cleanup, which can fail on some OSs if the file descriptors aren't correctly released. That is a benign error, and one we'll try to get a good fix for. In the meantime, I'd say it's safe to add @Ignore to the test and not worry that the PersistentQueue functionality isn't working, since it's just the test cleanup that's failing and not the Java Chronicle itself.
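If you'd rather not touch the spec source, an alternative sketch (assuming a standard Gradle test task; the exclude pattern below is inferred from the failing class name in the stack trace, not taken from the Reactor build) is to exclude the spec in build.gradle:

```groovy
// Illustrative build.gradle fragment: skip the spec whose cleanup fails
// on some platforms. The pattern is inferred from the failing test class.
test {
    exclude '**/PersistentQueueSpec*'
}
```

This keeps the spec in the source tree for platforms where it passes, while letting the rest of the suite run on Windows.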

Related

Exception: java.lang.ClassCastException: com.blazemeter.jmeter.threads.DynamicThread cannot be cast to org.apache.jmeter.samplers.SampleResult

I am currently using Blazemeter to run load (performance) tests for a Java application, but I am getting this error. When I run it locally on my machine, this error does not occur.
Under 'scenario definition' my test starts off with a standard JMeter thread group. I did a bit of searching and realised that com.blazemeter.jmeter.threads.DynamicThread comes from the Custom Thread Groups plugin, so I have also uploaded the corresponding plugin jar, jmeter-plugins-casutg-2.9.jar.
Screenshot of scenario definition
Screenshot of error
I'd just like to check whether there is some other jar file I need to upload to solve this error, or whether there is another way to solve this issue. Thank you.
Normally you should raise this kind of question with BlazeMeter Support, as they have a better understanding of their own infrastructure.
With regards to your question itself, most probably you're suffering from a form of Jar Hell:
- BlazeMeter uses Taurus under the hood to kick off JMeter tests, and Taurus automatically downloads JMeter Plugins, so your plugin versions might clash with the plugins on their end.
- You need to remove one of the joda-time libraries: you cannot tell for sure which one will be loaded onto the classpath, and in case of API inconsistency you can get unpredictable errors.
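To see which of two duplicate jars actually wins, you can ask the JVM where it loaded a class from. A minimal sketch (the JarHellCheck class name is mine, and String.class is used as the probe only so the snippet runs standalone; for the joda-time case you'd probe org.joda.time.DateTime):

```java
// JarHellCheck.java - print where a class was loaded from, to spot which
// of two duplicate jars actually won on the classpath.
public class JarHellCheck {
    public static String locationOf(Class<?> c) {
        java.security.CodeSource cs = c.getProtectionDomain().getCodeSource();
        // Classes loaded by the bootstrap loader (JDK classes) have no CodeSource.
        return cs == null ? "bootstrap class loader" : cs.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // For the duplicate joda-time case you would probe, e.g.:
        //   Class.forName("org.joda.time.DateTime")
        // Here we probe a JDK class so the snippet runs standalone.
        System.out.println(locationOf(String.class));
    }
}
```

Running this against the conflicting class on the BlazeMeter side versus locally would show which joda-time jar each environment picked up.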

Why would I suddenly get 'KerberosName$NoMatchingRule: No rules applied to user@REALM' errors?

We've been using Kerberos auth with several (older) Cloudera instances without a problem, but we are now getting 'KerberosName$NoMatchingRule: No rules applied to user@REALM' errors. We've been modifying code to add functionality, but AFAIK nobody has touched either the authentication code or the cluster configuration.
(I can't rule it out - and clearly SOMETHING has changed.)
I've set up a simple unit test and verified this behavior. At the command line I can execute 'kinit -kt user.keytab user' and get the corresponding Kerberos tickets, which verifies that the configuration and keytab file are correct.
However my standalone app fails with the error mentioned.
UPDATE
As I edit this I've been running the test in the debugger so I can track down exactly where it's failing, and it seems to succeed when run in the debugger! Obviously there's something different in the environments; it's not some weird heisenbug that is only triggered when nobody is looking.
I'll update this if I find the cause. Does anyone else have any ideas?
auth_to_local has to have at least one rule.
Make sure you have a "DEFAULT" rule at the very end of auth_to_local.
If none of the rules before it match, at least the DEFAULT rule will kick in.
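On a Hadoop/Cloudera cluster these rules typically live in the hadoop.security.auth_to_local property in core-site.xml. A minimal sketch, where the first rule and the EXAMPLE.COM realm are placeholders and the important part is the trailing DEFAULT:

```xml
<!-- Illustrative core-site.xml fragment; EXAMPLE.COM is a placeholder realm. -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@EXAMPLE\.COM)s/@.*//
    DEFAULT
  </value>
</property>
```

Without the DEFAULT entry, any principal not matched by an earlier rule produces exactly the NoMatchingRule error quoted above.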

Sonarqube Analysis org.joda.convert ERROR

Why do I get this error:
Class not found: org.joda.convert.ToString
None of my code uses this class (maven-based sonar analysis), and it doesn't seem to affect the analysis. However, I get worried whenever there are "[ERROR]" logs in the output. My exact command is:
mvn org.codehaus.mojo:sonar-maven-plugin:2.6:sonar
I switched to this version because someone suggested it in another related answer...
This error message is logged by the analyzer whenever it tries to complete a symbol during semantic analysis and cannot find a .class file.
This will happen whether your classes use the class directly or transitively (via a dependency, or a dependency of a dependency, etc.). Whether this is truly an error in all cases is arguable, but it is important information for users to know, because missing classes can lead to incomplete results (some issues might not be raised because symbols won't be resolved).
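One common workaround, offered here as a hedged suggestion rather than something the answer above prescribes, is to give the analyzer the missing classes by declaring joda-convert in the pom; the version shown is an assumption:

```xml
<!-- Illustrative pom.xml fragment; the version is an assumption. -->
<dependency>
  <groupId>org.joda</groupId>
  <artifactId>joda-convert</artifactId>
  <version>1.8</version>
  <!-- 'provided' keeps it off the runtime classpath of the built artifact -->
  <scope>provided</scope>
</dependency>
```

This silences the [ERROR] log and lets the analyzer fully resolve joda-time symbols without changing what ships in your artifact.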

ACR1222L and Ruby smartcard gem

I'm trying to make an ACR1222L work with this Ruby script that I found on GitHub.
The script is made for the older ACR122U, but from my research the two should be pretty similar.
My problem is when trying to run the script, I get this error:
C:\Users\Emil\Desktop>driver.rb
Calibration Time!
Place and leave a tag on the device
C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/smartcard-0.5.5/lib/smartcard/pcsc/context.rb:112:in `wait_for_status_change': (0x100000006) (Smartcard::PCSC::Exception)
from C:/Users/Emil/Desktop/driver.rb:24:in `<main>'
Could it be that the "smartcard" gem used by the script does not support the ACR1222L, or am I simply missing something?
Hope that someone can help!
The Smartcard::PCSC::Exception error code you get (0x100000006) translates to the Windows API error code INVALID_HANDLE_EXCEPTION (0x00000006). This typically indicates that the context handle used in the API call is invalid. With the smartcard gem, the PC/SC context (SCardEstablishContext) is established through the initializer of Smartcard::PCSC::Context. This operation seems to be successful, otherwise you would get an exception on line 13. The source of the INVALID_HANDLE_EXCEPTION seems to be SCardGetStatusChange (invoked by context.wait_for_status_change).
A possible reason for that call to fail with an INVALID_HANDLE_EXCEPTION could be a mismatch in the handle format, for instance caused by a 32-bit/64-bit mismatch. Thus, I would assume that the smartcard gem is designed for 32-bit only (while your path indicates that you are using a 64-bit version of Ruby).
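If the 32-bit/64-bit mismatch theory is right, you can confirm your Ruby build's bitness directly; a quick check (the host string in the comment is just an example):

```ruby
# Check whether this Ruby build is 32-bit or 64-bit: Integer#size on a
# small integer reports the native word size in bytes (8 on 64-bit builds).
require 'rbconfig'
puts RbConfig::CONFIG['host']  # platform triple, e.g. "x64-mingw32" on 64-bit Windows
puts 1.size >= 8 ? '64-bit build' : '32-bit build'
```

If this reports a 64-bit build, trying the same script under a 32-bit Ruby installation would be a cheap way to test the hypothesis.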

OSGi - Candidate permutation failed due to a conflict between imports

I am in a situation where my Felix OSGi container will not start properly after deploying groovy via:
obr:deploy "Groovy Scripting Languge"@1.7.3
I managed to deploy it and got the Groovy stuff running, until I restarted my OSGi container... then most of the bundles would not start. FWIW, I am pretty sure Groovy is not the cause, even though there's a typo in its bundle name. :-)
After some troubleshooting, and after turning on Felix's wire logging (thank god!), I noticed this (among all the other failed bundles, which had a similar cause):
2011-04-03 16:26:43,108 DEBUG [FelixStartLevel] felix.wire - Candidate permutation failed due to a conflict between imports; will try another if possible. (org.apache.felix.framework.resolver.ResolveException: Unable to resolve module org.apache.felix.http.bundle [36.0] because it is exposed to package 'org.osgi.framework' from org.apache.felix.framework [0] and com.springsource.org.aspectj.tools [47.0] via two dependency chains.
Chain 1:
org.apache.felix.http.bundle [36.0]
import: (&(package=org.osgi.framework)(version>=1.3.0))
|
export: package=org.osgi.framework
org.apache.felix.framework [0]
Chain 2:
org.apache.felix.http.bundle [36.0]
import: (&(package=org.osgi.service.log)(version>=1.3.0))
|
export: package=org.osgi.service.log; uses:=org.osgi.framework
osgi.cmpn [15.0]
import: (&(package=org.osgi.framework)(version>=1.5.0)(!(version>=2.0.0)))
|
export: package=org.osgi.framework
com.springsource.org.aspectj.tools [47.0])
Seems like both o.a.felix.framework and c.s.o.aspectj.tools are exporting o.osgi.framework.
I am able to get things running again by removing bundle id 47 (c.s.o.aspectj.tools), but I have yet to check whether there are other implications. It feels wrong because I removed c.s.o.aspectj.tools even though the OBR repository indicated it as required (or optional) for Groovy; in fact, it was installed via the obr:deploy command for Groovy.
It feels like c.s.o.aspectj.tools should not be exporting o.osgi.framework, but that's just a guess as I do not use the AspectJ tools stuff.
Question: What is the proper way to resolve such issues without resorting to educated guesses?
You are right, c.s.o.aspectj.tools should not be exporting org.osgi.framework, but apparently it does. To be precise, I checked version 1.6.8, and that has the following export statement:
org.osgi.framework;version="1.6.8.RELEASE"
Furthermore, it does not import org.osgi.framework. This is plain wrong, and I'd say it's worth a bug report with the Spring team; if you export, you should usually import as well, and I can't think of a valid reason to tag org.osgi.framework with a different version than it actually has.
How can you get around this for now? The problem with the two resolution chains can be resolved by wiring both http and cmpn to the same framework package; perhaps even a simple osgi:refresh in the shell could help you, since the declared version (1.6.8.RELEASE) is within the import ranges of both http and cmpn, by accident.
If you don't really need the aspectj stuff, I would leave it out.
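For contrast, a well-formed library bundle's manifest would only import the framework package, never export it; an illustrative fragment (the bundle name and version range are assumptions):

```
Bundle-SymbolicName: com.example.library
Import-Package: org.osgi.framework;version="[1.5,2.0)"
```

Only the framework itself should export org.osgi.framework; exporting it from a library bundle, tagged with the bundle's own version as aspectj.tools does here, invites exactly the resolution conflict shown in the log.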
