First steps using Spring Boot

I tried to follow the Spring Boot Getting Started guide.
I used the STS 4 IDE with Import Spring Getting Started Content and got the project gs-consuming-rest-complete.
Running mvn test failed with a big stack trace followed by these lines:
... <<< FAILURE! - in com.example.consumingrest.ConsumingRestApplicationTest
[ERROR] contextLoads Time elapsed: 0.113 s <<< ERROR!
java.lang.IllegalStateException: Failed to load ApplicationContext
Caused by: java.lang.IllegalStateException: Failed to execute CommandLineRunner
Caused by: org.springframework.web.client.HttpClientErrorException$NotFound:
404 Not Found: [404 Not Found: Requested route ('gturnquist-quoters.cfapps.io') does not exist.
]
To me it looks like the URL https://gturnquist-quoters.cfapps.io/api/random is not up. Hence, I cannot use that example. Am I right here, or did I make a beginner's mistake?
I have posted the full console output here.

To me it looks like the URL https://gturnquist-quoters.cfapps.io/api/random is not up
Yes, you are right: the URL is returning a 404.
As the guide's discussion/comments section confirms, there are currently some issues with that site.
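Since the hosted quote service is down, one workaround (a minimal sketch, not the official guide fix) is to point the guide's RestTemplate at any endpoint that returns the same JSON shape, for example the companion quoters application run locally. The localhost URL below is an assumption, and Quote is the POJO that ships with the guide project:

package com.example.consumingrest;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
public class ConsumingRestApplication {

    private static final Logger log = LoggerFactory.getLogger(ConsumingRestApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(ConsumingRestApplication.class, args);
    }

    @Bean
    public RestTemplate restTemplate(RestTemplateBuilder builder) {
        return builder.build();
    }

    @Bean
    public CommandLineRunner run(RestTemplate restTemplate) {
        return args -> {
            // Assumption: a quote service running locally that serves the same JSON
            // as the original gturnquist-quoters endpoint did.
            Quote quote = restTemplate.getForObject("http://localhost:8080/api/random", Quote.class);
            log.info(quote.toString());
        };
    }
}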

Related

Tomcat error - java.lang.IllegalArgumentException: More than one fragment with the name [org_apache_jasper] was found

My Grails 5.2 application fails to start on Heroku with this error:
Caused by: java.lang.IllegalArgumentException: More than one fragment with the name [org_apache_jasper] was found. This is not legal with relative ordering. See section 8.2.2 2c of the Servlet specification for details. Consider using absolute ordering. Duplicate fragments found in [[jar:file:/app/build/libs/OLRMain-1.3.0-plain.war!/WEB-INF/lib/tomcat-embed-jasper-9.0.68.jar, jar:file:/app/build/libs/OLRMain-1.3.0-plain.war!/WEB-INF/lib/webapp-runner-9.0.70.0.jar, jar:file:/app/build/libs/OLRMain-1.3.0-plain.war!/WEB-INF/lib/tomcat-jasper-9.0.70.jar]].
Here is the error with better formatting:
Caused by: java.lang.IllegalArgumentException: More than one fragment with the name [org_apache_jasper] was found.
Duplicate fragments found in [
[jar:file:/app/build/libs/OLRMain-1.3.0-plain.war!/WEB-INF/lib/tomcat-embed-jasper-9.0.68.jar,
jar:file:/app/build/libs/OLRMain-1.3.0-plain.war!/WEB-INF/lib/webapp-runner-9.0.70.0.jar,
jar:file:/app/build/libs/OLRMain-1.3.0-plain.war!/WEB-INF/lib/tomcat-jasper-9.0.70.jar]]
I understand the duplication error but don't know how to fix it. I am deploying to Heroku.
Below is my Heroku Procfile.
web: cd build; java $JAVA_OPTS -Dgrails.env=prod -jar ../build/server/webapp-runner-9.0.70.0.jar --expand-war --port $PORT libs/*.war
I've seen similar issues on SO, with suggestions such as "clearing" Tomcat or adding an entry to web.xml, but I don't know how to do this with Grails and Heroku.
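One possible direction, sketched under assumptions rather than a verified Grails-on-Heroku recipe: the error message itself suggests absolute ordering, and an empty <absolute-ordering/> element in WEB-INF/web.xml tells the container not to process any web fragments, so the duplicate org_apache_jasper fragments are never compared. Assuming the Gradle WAR build picks up src/main/webapp/WEB-INF/web.xml, the file could look like this (note that skipping fragments may drop configuration your app relies on):

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/web-app_4_0.xsd"
         version="4.0">
    <!-- Empty absolute ordering: no web fragments are processed, so the duplicate
         org_apache_jasper fragment names no longer conflict. -->
    <absolute-ordering/>
</web-app>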

Flink in ECS fails to find shaded ContainerCredentialsProvider

I'm trying to run Flink 1.7.2 on ECS with Fargate. I've set up the state backend for my job to be RocksDB with a path=s3://...
In my Dockerfile the base image is 1.7.2-hadoop27-scala_2.11, and I run the following two commands:
RUN echo "fs.s3a.aws.credentials.provider: org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider" >> "$FLINK_CONF_DIR/flink-conf.yaml"
RUN cp /opt/flink/opt/flink-s3-fs-hadoop-1.7.2.jar /opt/flink/lib/flink-s3-fs-hadoop-1.7.2.jar
This follows the instructions in https://issues.apache.org/jira/browse/FLINK-8439.
However I get the following exception:
Caused by: java.io.IOException: From option fs.s3a.aws.credentials.provider java.lang.ClassNotFoundException: Class org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider not found
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AUtils.loadAWSProviderClasses(S3AUtils.java:592)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:556)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:256)
at org.apache.flink.fs.s3.common.AbstractS3FileSystemFactory.create(AbstractS3FileSystemFactory.java:125)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:395)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:318)
at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStorage.<init>(FsCheckpointStorage.java:58)
at org.apache.flink.runtime.state.filesystem.FsStateBackend.createCheckpointStorage(FsStateBackend.java:444)
at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createCheckpointStorage(RocksDBStateBackend.java:407)
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:249)
... 17 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider not found
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2375)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration.getClasses(Configuration.java:2446)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AUtils.loadAWSProviderClasses(S3AUtils.java:589)
... 28 more
Looking inside flink-s3-fs-hadoop-1.7.2.jar, I see that the package of ContainerCredentialsProvider is actually org.apache.flink.fs.s3base.shaded.com.amazonaws.auth.
I already tried:
Adding the aws-sdk-core jar to lib and setting the credentials provider to just com.amazonaws.auth.ContainerCredentialsProvider (without the shading), but then I get the problem mentioned in the issue linked above.
Setting the credentials provider to org.apache.flink.fs.s3base.shaded.com.amazonaws.auth.ContainerCredentialsProvider, but then the code in S3FileSystemFactory.java prefixes it with org.apache.flink.fs.s3hadoop.shaded.
Any ideas here for finding the class?
The issue was resolved in a later version.
I ran it on a Flink 1.9.0 cluster with the following line:
RUN echo "fs.s3a.aws.credentials.provider: com.amazonaws.auth.ContainerCredentialsProvider" >> "$FLINK_CONF_DIR/flink-conf.yaml"
and the class is found and works.
You can see in https://github.com/apache/flink/blob/master/flink-filesystems/flink-s3-fs-hadoop/src/main/java/org/apache/flink/fs/s3hadoop/S3FileSystemFactory.java that the FLINK_SHADING_PREFIX is now correct.
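For completeness, a minimal job-side sketch of the RocksDB-on-S3 setup the question describes; the bucket path and checkpoint interval are placeholders, and the s3a credentials provider is assumed to be configured in flink-conf.yaml as shown above:

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointToS3Job {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Placeholder bucket/path; the second argument enables incremental checkpoints.
        env.setStateBackend(new RocksDBStateBackend("s3://my-bucket/flink/checkpoints", true));
        env.enableCheckpointing(60_000);
        // Trivial placeholder pipeline so the job has something to run.
        env.fromElements(1, 2, 3).print();
        env.execute("checkpoint-to-s3-sketch");
    }
}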

Batch Process Test_Bib_Import fails on OLE 1.5.2.1 installation

After upgrading to OLE 1.5.2.1, when I try to upload a local MARC .mrc file via a batch process I get the following error:
Batch process Failed for profile :: Test_Bib_Import
The exact same file worked fine in OLE 1.5.0-M2.
catalina.out contains the following error:
[INFO] org.kuali.ole.batch.impl.OLEBatchProcessAdhocStep - Executing Batch process type :: Bib Import
[ERROR] org.kuali.ole.batch.ingest.BatchProcessBibImport - java.lang.NullPointerException
[ERROR] org.kuali.ole.batch.helper.OLEBatchProcessDataHelper - Error while performing batch process for profile :: Test_Bib_Import
java.lang.RuntimeException: java.lang.NullPointerException
at org.kuali.ole.batch.ingest.BatchProcessBibImport.processBatch(BatchProcessBibImport.java:90)
at org.kuali.ole.batch.impl.AbstractBatchProcess.process(AbstractBatchProcess.java:87)
at org.kuali.ole.batch.impl.OLEBatchProcessAdhocStep.executeBatch(OLEBatchProcessAdhocStep.java:50)
at org.kuali.ole.batch.impl.OLEBatchProcessAdhocStep.execute(OLEBatchProcessAdhocStep.java:30)
at org.kuali.ole.sys.batch.Job.runStep(Job.java:175)
at org.kuali.ole.sys.batch.Job.execute(Job.java:121)
at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:549)
Caused by: java.lang.NullPointerException
at org.kuali.ole.batch.helper.BatchBibImportHelper.processBibMarcRecord(BatchBibImportHelper.java:89)
at org.kuali.ole.batch.helper.BatchBibImportHelper.processBatch(BatchBibImportHelper.java:70)
at org.kuali.ole.batch.ingest.BatchProcessBibImport.processBatch(BatchProcessBibImport.java:152)
at org.kuali.ole.batch.ingest.BatchProcessBibImport.processBatch(BatchProcessBibImport.java:83)
... 7 more
[ERROR] org.kuali.ole.batch.impl.OLEBatchProcessAdhocStep - Error while running Batch Process Step::OLEBatchProcessAdhocStep
java.lang.Exception: Batch process Failed
at org.kuali.ole.batch.impl.AbstractBatchProcess.process(AbstractBatchProcess.java:123)
at org.kuali.ole.batch.impl.OLEBatchProcessAdhocStep.executeBatch(OLEBatchProcessAdhocStep.java:50)
at org.kuali.ole.batch.impl.OLEBatchProcessAdhocStep.execute(OLEBatchProcessAdhocStep.java:30)
at org.kuali.ole.sys.batch.Job.runStep(Job.java:175)
at org.kuali.ole.sys.batch.Job.execute(Job.java:121)
at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:549)
Caused by: java.lang.RuntimeException: java.lang.NullPointerException
at org.kuali.ole.batch.ingest.BatchProcessBibImport.processBatch(BatchProcessBibImport.java:90)
at org.kuali.ole.batch.impl.AbstractBatchProcess.process(AbstractBatchProcess.java:87)
... 6 more
Caused by: java.lang.NullPointerException
at org.kuali.ole.batch.helper.BatchBibImportHelper.processBibMarcRecord(BatchBibImportHelper.java:89)
at org.kuali.ole.batch.helper.BatchBibImportHelper.processBatch(BatchBibImportHelper.java:70)
at org.kuali.ole.batch.ingest.BatchProcessBibImport.processBatch(BatchProcessBibImport.java:152)
at org.kuali.ole.batch.ingest.BatchProcessBibImport.processBatch(BatchProcessBibImport.java:83)
... 7 more
The line the error is coming from suggests that the batch profile you are using has not been set up correctly in the database (specifically, it cannot find the matching profile). One definitive indicator would be a stack trace in catalina.out involving the getMatchingProfileObj method of the org.kuali.ole.batch.bo.OLEBatchProcessProfileBo class.
If you migrated your application code on top of an existing database without having fully migrated the data in that database correctly, this problem could result. Given that your question shows you previously used 1.5.0-M2, which is a pre-release milestone, you are better off re-initializing your database and reloading your data before running batch processes with a new version of the OLE codebase.
If you already did that, then this may be fodder for a bug report.
Note that there were a lot of changes to the match/overlay part of the profiles in 1.5.2, so you might want to review how your profile is set up and make sure the choices still apply. Match and overlay support was added for holdings and items and is still being tested, which makes for more complicated choices. You could try setting the profile to no match, just add the bib, and see if that works. As long as your file is UTF-8 encoded rather than MARC-8, it should load.
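If you want to check the encoding before loading, the MARC leader stores the character coding scheme at position 09: 'a' means UCS/Unicode (UTF-8), a blank means MARC-8. A minimal standalone sketch (the file path is a placeholder, and only the first record's leader is inspected):

import java.io.FileInputStream;
import java.io.IOException;

public class MarcEncodingCheck {
    public static void main(String[] args) throws IOException {
        // Placeholder path: point this at the .mrc file you are trying to import.
        try (FileInputStream in = new FileInputStream("records.mrc")) {
            byte[] leader = new byte[24];   // a MARC leader is 24 bytes long
            if (in.read(leader) != 24) {
                System.out.println("File too short to contain a MARC leader");
                return;
            }
            char coding = (char) leader[9]; // leader position 09: character coding scheme
            System.out.println(coding == 'a'
                    ? "UCS/Unicode (UTF-8) record"
                    : "MARC-8 (or other) record; convert to UTF-8 before loading");
        }
    }
}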

BeanDefinitionStoreException with Play framework

I get this exception when trying to open the first page of my website.
Oops: BeanDefinitionStoreException
An unexpected error occured caused by exception BeanDefinitionStoreException: I/O failure during classpath scanning; nested exception is java.io.FileNotFoundException: /Users/mmm/Documents/work/workspace/BCR/precompiled/java/app/config/AppConfig$WebserviceMode.class (No such file or directory)
play.exceptions.UnexpectedException: Unexpected Error
at play.Play.start(Play.java:545)
I'm using Spring 1.0.2 and Play 1.2.4.
Can you please help me with this? I searched the internet but couldn't find a clear answer.
I think you are hitting this issue: https://github.com/pepite/Play--framework-Spring-module/pull/9
Try running play compile before starting your app.
I fixed the problem by removing an extra 'xsi' attribute in my application-context.xml.
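For reference, and only as a generic sketch rather than the asker's actual file, a well-formed root element of a Spring application-context.xml declares the xsi namespace exactly once; a stray or duplicated xsi attribute on this element can be enough to break context loading:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- bean definitions go here -->

</beans>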

emma coverage tool

I am getting the following error while trying to get coverage data using the EMMA ctl tool.
EMMA: processing control command sequence ...
EMMA: executing [coverage.get (C:/FD_DEV3/feddebt_sources/report/emma/coverage.ec,true,true)] ...
[EMMA v2.1, build 5320 (stable)]
emma ctl: coverage.get: RPC failure while executing [coverage.get]
Exception in thread "main" com.vladium.emma.EMMARuntimeException: coverage.get: RPC failure while executing [coverage.get]
at com.vladium.emma.ctl.CtlProcessor._run(CtlProcessor.java:242)
at com.vladium.emma.Processor.run(Processor.java:88)
at com.vladium.emma.ctl.ctlCommand.run(ctlCommand.java:151)
at emma.main(emma.java:50)
Caused by: java.io.StreamCorruptedException: invalid stream header
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:770)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:286)
at com.vladium.emma.rt.rpc.Response.read(Response.java:46)
at com.vladium.emma.rt.RTControllerClientProxy.execute(RTControllerClientProxy.java:100)
at com.vladium.emma.ctl.CtlProcessor._run(CtlProcessor.java:231)
... 3 more
Please help.
Thanks
The reason for it is one of the following:
The jar is not the correct one; make sure you are using the right emma.jar.
Alternatively, you did not place the jar in the java/jre folder.
