mvn gluonfx:runagent failure - Unknown attribute 'queryAllDeclaredMethods' .. in definition of class ..HelloController

I'm trying to get my first native build working
(using Windows 10, JDK 17, JavaFX 17, Gluon 1.0.9, and the Gluon GraalVM build graalvm-svm-windows-gluon-21.2.0-dev.zip).
I am able to run mvn gluonfx:run
(and click on the one button I have in my test UI).
However, when I run mvn gluonfx:runagent, I get:
[Wed Nov 17 08:10:41 PST 2021][INFO] [SUB] Error: Error parsing reflection configuration in file:/C:/devel/repos/Gluon-SingleViewProject-jdk17/target/classes/META-INF%5cnative-image%5creflect-config.json:
[Wed Nov 17 08:10:41 PST 2021][INFO] [SUB] Unknown attribute 'queryAllDeclaredMethods' (supported attributes: allDeclaredConstructors, allPublicConstructors, allDeclaredMethods, allPublicMethods, allDeclaredFields, allPublicFields, methods, fields) in defintion of class com.gluonapplication.HelloController
[Wed Nov 17 08:10:41 PST 2021][INFO] [SUB] Verify that the configuration matches the schema described in the -H:PrintFlags=+ output for option ReflectionConfigurationResources.
[Wed Nov 17 08:10:41 PST 2021][INFO] [SUB] com.oracle.svm.core.util.UserError$UserException: Error parsing reflection configuration in file:/C:/devel/repos/Gluon-SingleViewProject-jdk17/target/classes/META-INF%5cnative-image%5creflect-config.json:
[Wed Nov 17 08:10:41 PST 2021][INFO] [SUB] Unknown attribute 'queryAllDeclaredMethods' (supported attributes: allDeclaredConstructors, allPublicConstructors, allDeclaredMethods, allPublicMethods, allDeclaredFields, allPublicFields, methods, fields) in defintion of class com.gluonapplication.HelloController
[Wed Nov 17 08:10:41 PST 2021][INFO] [SUB] Verify that the configuration matches the schema described in the -H:PrintFlags=+ output for option ReflectionConfigurationResources.
The HelloController simply consists of one method at the moment:

import javafx.event.ActionEvent;

public class HelloController {

    // Handler invoked when the button in the test UI is pressed
    public void pressButton(ActionEvent ae) {
        System.out.println("hello, source pressed: " + ae.getSource());
    }
}
Any suggestions/tips greatly appreciated... (based on the error above, it looks like the build process may be calling something unsupported for JDK 17?)
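For reference, the parser only accepts the attribute names listed in the error, so a reflect-config.json entry it would accept looks roughly like this (a minimal sketch using only attributes taken from the error message; the entries the agent should actually record for HelloController may differ):

[
  {
    "name" : "com.gluonapplication.HelloController",
    "allDeclaredConstructors" : true,
    "allDeclaredMethods" : true
  }
]

Attributes such as queryAllDeclaredMethods come from GraalVM releases newer than the 21.2-based Gluon GraalVM named above, so one possibility is that the config file was produced by (or for) a newer agent than the VM that is now parsing it.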

Related

Spring returning 200 with no content and not hitting controller

I have a controller as:

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
@RequestMapping("/v2/**")
public class ReactController {

    @RequestMapping(method = RequestMethod.GET)
    public String reactEntry() {
        return "react-entry";
    }
}
When I log into the app and go through the login page, then navigate to the page that hits this URL, I get the content. However, if I go directly to the URL (it hits the login page but then forwards directly to this URL), Spring returns a status code of 200 with a content length of 0, and my controller is never hit.
In debug logging, in the normal case I see:
[DEBUG] 17 Aug 2018 10:41:50,805 org.springframework.security.web.FilterChainProxy - /v2/ reached end of additional filter chain; proceeding with original chain
[TRACE] 17 Aug 2018 10:41:50,807 org.springframework.web.servlet.DispatcherServlet - Bound request context to thread: SecurityContextHolderAwareRequestWrapper[ org.springframework.security.web.context.HttpSessionSecurityContextRepository Servlet3SaveToSessionRequestWrapper#38f60506]
[DEBUG] 17 Aug 2018 10:41:50,807 org.springframework.web.servlet.DispatcherServlet - DispatcherServlet with name 'cem' processing GET request for [/cems/v2/]
[TRACE] 17 Aug 2018 10:41:50,807 org.springframework.web.servlet.DispatcherServlet - Testing handler map [org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping#746fb8d3] in DispatcherServlet with name 'cem'
[DEBUG] 17 Aug 2018 10:41:50,807 org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping - Looking up handler method for path /v2/
[TRACE] 17 Aug 2018 10:41:50,842 org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping - Found 1 matching mapping(s) for [/v2/] : [{[/v2/**],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}]
[DEBUG] 17 Aug 2018 10:41:50,842 org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping - Returning handler method [public java.lang.String rest.controller.ReactController.reactEntry()]
However, when hit via the second mechanism, I get:
[DEBUG] 17 Aug 2018 10:45:32,702 org.springframework.security.web.FilterChainProxy - /v2/ reached end of additional filter chain; proceeding with original chain
[DEBUG] 17 Aug 2018 10:45:32,779 org.springframework.beans.factory.annotation.InjectionMetadata - Processing injected element of bean 'domain.User': PersistenceElement for transient javax.persistence.EntityManager rest.domain.Personnel.entityManager
Notice the time gap, which suggests that the second line is part of a different request.
It appears that in the second case the request is not being mapped to the cems DispatcherServlet, even though it is the same URL.
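One way to narrow this down (a hedged sketch, not a confirmed fix) is to move the full pattern onto the handler method, so the match no longer depends on how the class-level /v2/** wildcard is combined when the request arrives via a forward:

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
public class ReactController {

    // Hypothetical variant: the wildcard pattern lives on the method,
    // which makes the RequestMappingHandlerMapping lookup seen in the
    // logs above easier to reason about.
    @RequestMapping(value = "/v2/**", method = RequestMethod.GET)
    public String reactEntry() {
        return "react-entry";
    }
}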

Apache Phoenix UPSERT timeouts on large data

Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Mon May 09 20:45:17 PDT 2016, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=214280: row '�.�A`' on tabl..
Any suggestions?
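The callTimeout=60000 in the trace is the default 60-second HBase client timeout, so if the UPSERT legitimately needs longer, raising the client-side timeouts on the Phoenix connection is one avenue. A minimal sketch (the property names are standard Phoenix/HBase client settings; the connection string, timeout values, and table names are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PhoenixTimeoutSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Phoenix statement-level timeout, in milliseconds
        props.setProperty("phoenix.query.timeoutMs", "600000");
        // Underlying HBase RPC timeout; 60000 is the default that shows
        // up as callTimeout=60000 in the exception above
        props.setProperty("hbase.rpc.timeout", "600000");

        // "zookeeper-host" stands in for the real ZooKeeper quorum
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:zookeeper-host", props)) {
            conn.createStatement().executeUpdate(
                    "UPSERT INTO target_table SELECT * FROM source_table");
            conn.commit();
        }
    }
}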

LogStash failed action with response of 500, dropping action

I am trying to configure Logstash to watch a file and send events to an Elasticsearch server.
When I start Logstash with output to stdout, it runs fine:

stdout {
  codec => rubydebug
}

But when I add the elasticsearch output:

elasticsearch {
  cluster => 'myclustername'
  host => 'myip'
  node_name => 'Aragorn'
}
Logstash starts up:
Mar 16, 2015 3:44:24 PM org.elasticsearch.node.internal.InternalNode <init>
INFO: [Aragorn] version[1.4.0], pid[7136], build[bc94bd8/2014-11-05T14:26:12Z]
Mar 16, 2015 3:44:24 PM org.elasticsearch.node.internal.InternalNode <init>
INFO: [Aragorn] initializing ...
Mar 16, 2015 3:44:24 PM org.elasticsearch.plugins.PluginsService <init>
INFO: [Aragorn] loaded [], sites []
Mar 16, 2015 3:44:25 PM org.elasticsearch.node.internal.InternalNode <init>
INFO: [Aragorn] initialized
Mar 16, 2015 3:44:25 PM org.elasticsearch.node.internal.InternalNode start
INFO: [Aragorn] starting ...
Mar 16, 2015 3:44:25 PM org.elasticsearch.transport.TransportService doStart
INFO: [Aragorn] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.98.134.83:9300]}
Mar 16, 2015 3:44:25 PM org.elasticsearch.discovery.DiscoveryService doStart
INFO: [Aragorn] myclustername/RjasP2X0ShKXEl0f2WRxBA
Mar 16, 2015 3:44:30 PM org.elasticsearch.cluster.service.InternalClusterService$UpdateTask run
INFO: [Aragorn] detected_master [Aragorn][0YytUoWlQ2qgw2_0i5V4mQ][SOMEMACHINE][inet[/myip:9300]], added {[Aragorn][0YytUoWlQ2qgw2_0i5V4mQ][SOMEMACHINE][inet[/myip:9300]],}, reason: zen-disco-receive(from master [[Aragorn][0YytUoWlQ2qgw2_0i5V4mQ][SOMEMACHINE][inet[/myip:9300]]])
Mar 16, 2015 3:44:30 PM org.elasticsearch.node.internal.InternalNode start
INFO: [Aragorn] started
But when messages start coming in, nothing is actually sent to Elasticsearch, and warnings like these start to appear in the Logstash output:
WARNING: [Aragorn] Message not fully read (response) for [28] handler org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$6#17b531e, error [true], resetting
Mar 16, 2015 3:44:54 PM org.elasticsearch.transport.netty.MessageChannelHandler messageReceived
WARNING: [Aragorn] Message not fully read (response) for [29] handler org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$6#13082f0, error [true], resetting
and
{:timestamp=>"2015-03-16T15:44:54.377000+0100", :message=>"failed action with response of 500, dropping action...
(the above message is much longer but does not seem to contain any useful diagnostics)
What might be wrong?
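"Message not fully read" warnings from the transport layer are a classic sign of a version mismatch between the Elasticsearch client embedded in Logstash (1.4.0 in the log above) and the cluster it is joining. Assuming a Logstash 1.x-era elasticsearch output, one hedged workaround is to talk to the cluster over HTTP instead of joining it as a node:

output {
  elasticsearch {
    host => 'myip'
    protocol => 'http'   # sidesteps the node/transport version coupling
  }
}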

hadoop too many logs on screen

I started learning Hadoop with Hive recently. As a beginner I am not familiar with all the logs showing up on the screen, so I would prefer to see a clean version with only the important ones. I am learning Hive from Rutherglen's "Programming Hive" book.
I have just started, and I got numerous logs after the first command, whereas in the book the output is just "OK, Time taken: 3.543 seconds".
Does anyone have a solution to reduce these logs?
PS: below are the logs I got from the command "create table x (a int);"
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Sep 28, 2014 12:10:28 AM org.apache.hadoop.hive.conf.HiveConf <clinit>
WARNING: hive-site.xml not found on CLASSPATH
Logging initialized using configuration in jar:file:/Users/admin/Documents/Study/software /Programming/Hive/hive-0.9.0-bin/lib/hive-common-0.9.0.jar!/hive-log4j.properties
Sep 28, 2014 12:10:28 AM SessionState printInfo
INFO: Logging initialized using configuration in jar:file:/Users/admin/Documents/Study/software/Programming/Hive/hive-0.9.0-bin/lib/hive-common-0.9.0.jar!/hive-log4j.properties
Hive history file=/tmp/admin/hive_job_log_admin_201409280010_720612579.txt
Sep 28, 2014 12:10:28 AM hive.ql.exec.HiveHistory printInfo
INFO: Hive history file=/tmp/admin/hive_job_log_admin_201409280010_720612579.txt
hive> CREATE TABLE x (a INT);
Sep 28, 2014 12:10:31 AM org.apache.hadoop.hive.ql.Driver PerfLogBegin
INFO: <PERFLOG method=Driver.run>
Sep 28, 2014 12:10:31 AM org.apache.hadoop.hive.ql.Driver PerfLogBegin
INFO: <PERFLOG method=compile>
Sep 28, 2014 12:10:31 AM hive.ql.parse.ParseDriver parse
INFO: Parsing command: CREATE TABLE x (a INT)
Sep 28, 2014 12:10:31 AM hive.ql.parse.ParseDriver parse
INFO: Parse Completed
Sep 28, 2014 12:10:31 AM org.apache.hadoop.hive.ql.parse.SemanticAnalyzer analyzeInternal
INFO: Starting Semantic Analysis
Sep 28, 2014 12:10:31 AM org.apache.hadoop.hive.ql.parse.SemanticAnalyzer analyzeCreateTable
INFO: Creating table x position=13
Sep 28, 2014 12:10:31 AM org.apache.hadoop.hive.ql.Driver compile
INFO: Semantic Analysis Completed
Sep 28, 2014 12:10:31 AM org.apache.hadoop.hive.ql.Driver getSchema
INFO: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
Sep 28, 2014 12:10:31 AM org.apache.hadoop.hive.ql.Driver PerfLogEnd
INFO: </PERFLOG method=compile start=1411877431127 end=1411877431388 duration=261>
Sep 28, 2014 12:10:31 AM org.apache.hadoop.hive.ql.Driver PerfLogBegin
INFO: <PERFLOG method=Driver.execute>
Sep 28, 2014 12:10:31 AM org.apache.hadoop.hive.ql.Driver execute
INFO: Starting command: CREATE TABLE x (a INT)
Sep 28, 2014 12:10:31 AM hive.ql.exec.DDLTask createTable
INFO: Default to LazySimpleSerDe for table x
Sep 28, 2014 12:10:31 AM hive.log getDDLFromFieldSchema
INFO: DDL: struct x { i32 a}
Sep 28, 2014 12:10:31 AM org.apache.hadoop.hive.metastore.HiveMetaStore newRawStore
INFO: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
Sep 28, 2014 12:10:31 AM org.apache.hadoop.hive.metastore.ObjectStore initialize
INFO: ObjectStore, initialize called
Sep 28, 2014 12:10:32 AM org.apache.hadoop.hive.metastore.ObjectStore getPMF
INFO: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
Sep 28, 2014 12:10:32 AM org.apache.hadoop.hive.metastore.ObjectStore setConf
INFO: Initialized ObjectStore
Sep 28, 2014 12:10:33 AM org.apache.hadoop.hive.metastore.HiveMetaStore logInfo
INFO: 0: create_table: db=default tbl=x
Sep 28, 2014 12:10:34 AM org.apache.hadoop.hive.ql.Driver PerfLogEnd
INFO: </PERFLOG method=Driver.execute start=1411877431389 end=1411877434527 duration=3138>
OK
Sep 28, 2014 12:10:34 AM org.apache.hadoop.hive.ql.Driver printInfo
INFO: OK
Sep 28, 2014 12:10:34 AM org.apache.hadoop.hive.ql.Driver PerfLogBegin
INFO: <PERFLOG method=releaseLocks>
Sep 28, 2014 12:10:34 AM org.apache.hadoop.hive.ql.Driver PerfLogEnd
INFO: </PERFLOG method=releaseLocks start=1411877434529 end=1411877434529 duration=0>
Sep 28, 2014 12:10:34 AM org.apache.hadoop.hive.ql.Driver PerfLogEnd
INFO: </PERFLOG method=Driver.run start=1411877431126 end=1411877434530 duration=3404>
Time taken: 3.407 seconds
Sep 28, 2014 12:10:34 AM CliDriver printInfo
INFO: Time taken: 3.407 seconds
Try starting the Hive shell as follows:

hive --hiveconf hive.root.logger=WARN,console

If you want to make this change persistent, modify the HIVE_CONF_DIR/hive-log4j.properties file. If you don't have this file in your HIVE_CONF_DIR, create it by copying the contents of hive-log4j.default into HIVE_CONF_DIR.
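For the persistent variant, the line to change in HIVE_CONF_DIR/hive-log4j.properties would look something like this (a sketch; the rest of the file's defaults vary by Hive version):

# Raise the root logger threshold so only warnings and errors
# reach the console
hive.root.logger=WARN,console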

Mapreduce job failing even the required class is present

Hi, my job is failing with a runtime exception saying the mapper class was not found.
Below is the exception:
14/06/16 05:52:56 INFO mapred.JobClient: Task Id : attempt_201406071432_1142_m_000028_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.cloudera.sa.omniture.mr.OmnitureRawDataMapper not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1774)
at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:191)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:631)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.lang.ClassNotFoundException: Class com.cloudera.sa.omniture.mr.OmnitureRawDataMapper not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1680)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1772)
... 8 more
But the jar I used to run the job does have the Mapper class present, and I am still facing the issue. Please guide me.
The info below on my jar file confirms the presence of the Mapper class in the jar:
[svtdphpd#d-hcr75y1 testJARs]$ jar tvf MalformedAnalysis.jar | grep Omniture
1405 Mon Jun 16 15:59:34 CDT 2014 com/cloudera/sa/omniture/mr/OmnitureRawDataMapper$HITDATA_PROBLEM.class
3728 Mon Jun 16 15:59:34 CDT 2014 com/cloudera/sa/omniture/mr/OmnitureRawDataMapper.class
5280 Mon Jun 16 16:00:46 CDT 2014 com/cloudera/sa/omniture/mr/OmnitureToRCFileJob.class
6436 Mon Jun 16 16:03:34 CDT 2014 com/cloudera/sa/omniture/mr/OmnitureDataFileRecordReader.class
3642 Mon Jun 16 16:00:16 CDT 2014 com/cloudera/sa/omniture/mr/OmnitureRawDataReducer.class
1792 Mon Jun 16 15:59:34 CDT 2014 com/cloudera/sa/omniture/mr/OmnitureDataFileInputFormat.class
Below is the Job configuration:
// Create job
Job job = Job.getInstance(config, "LoadOmnitureData");
job.setJarByClass(OmnitureToRCFileJob.class);

// Add named output for malformed records
MultipleOutputs.addNamedOutput(job, "Malformed",
        TextOutputFormat.class, NullWritable.class, Text.class);

FileInputFormat.addInputPath(job, new Path(inputPath));
job.setInputFormatClass(OmnitureDataFileInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
job.setOutputKeyClass(NullWritable.class);
job.setOutputValueClass(Text.class);
job.setReducerClass(OmnitureRawDataReducer.class);
job.setMapperClass(OmnitureRawDataMapper.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
TextOutputFormat.setOutputPath(job, new Path(outputPath));
job.setNumReduceTasks(numReduceTasks);
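Since setJarByClass is already in place, one hedged check is whether the jar that is actually submitted is the one containing the mapper; naming the job jar explicitly rules that out (the path below is a placeholder):

// Hypothetical check: point the job at the jar file explicitly so the
// archive containing OmnitureRawDataMapper is the one shipped to the
// task nodes, regardless of what setJarByClass resolved.
job.setJar("/path/to/MalformedAnalysis.jar");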
