I am creating a demo application in Groovy using Spring Boot with Kafka and Elasticsearch.
I used the @EmbeddedKafka annotation in my Spock tests and they work really nicely locally, both on Windows and Ubuntu. They work from within IntelliJ by just running or debugging, no issue. It's the same when running "./gradlew test" in my shell. Everything is good.
As soon as I pushed it to github.com, my GitHub Action failed, even though it calls the same command.
The action definition: https://github.com/besessener/GroovySpringBootKafkaElasticsearchDemo/blob/main/.github/workflows/test.yml
The failing remote test case: https://github.com/besessener/GroovySpringBootKafkaElasticsearchDemo/blob/main/src/test/groovy/me/spring/GroovyDemo/stream/KafkaSendAndReceiveTest.groovy
The action run: https://github.com/besessener/GroovySpringBootKafkaElasticsearchDemo/runs/3019862203?check_suite_focus=true
The only thing in the action's output that looks like an error to me is this:
2021-07-08 14:05:35.896 WARN 2693 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-UserGroup-1, groupId=UserGroup] Error while fetching metadata with correlation id 4 : {topic-user=LEADER_NOT_AVAILABLE}
I have read a lot about not using static ports for Kafka tests, but this is my only Kafka test, so I don't really understand how there could be a conflict. Furthermore, LEADER_NOT_AVAILABLE could point to a non-existing topic, or the consumer may simply not be able to connect to the broker properly. But I don't see any of this.
I still have the feeling it is more related to "localhost:9092" as brokerProperties. Is there an issue with that when using GitHub Actions? Or is there anything else I am missing?
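For reference, the no-static-port setup that is usually recommended looks roughly like this (a sketch, shown JUnit-style; the same annotations apply to a Spock specification, and spring.embedded.kafka.brokers is a property that @EmbeddedKafka populates):

import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.context.EmbeddedKafka;

// Let the embedded broker pick a free port and hand it to Spring Kafka via the
// spring.embedded.kafka.brokers property, instead of hard-coding localhost:9092
// in brokerProperties.
@SpringBootTest(properties = "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}")
@EmbeddedKafka(partitions = 1, topics = "topic-user")
class KafkaSendAndReceiveTest {
    // test methods unchanged: anything built from spring.kafka.bootstrap-servers
    // now follows the broker wherever it binds
}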
I'm just looking for information on whether others have solved this pattern. I want to use Spring Integration and Spring Batch together. Both are Spring Boot applications, and ideally I'd like to keep them and their respective configuration separated, so each is its own executable jar. I'm having problems executing them in their own process space, and I believe I want, unless someone can convince me otherwise, each to run as its own Spring Boot app and initialize itself with its own profiles and properties.

What I'm having trouble with, though, is the invocation of the job in my Spring Batch project from my Spring Integration project. At first I couldn't get the properties loaded from the batch project, so I realized I needed to pass spring.profiles.active as a job parameter, and that seemed to solve that. But there are other things in the Spring Boot Batch application that aren't loading correctly, like the schema-platform.sql file, and the database isn't getting initialized, etc.
On the initial launch of the job I might want the response to go back to Spring Integration for some messaging on job status. There might be times when I want to run a job without Spring Integration kicking it off, but still take advantage of sending statuses back to the Spring Integration project, provided it's listening on a channel or something.
I've reviewed quite a few Spring samples and have yet to find my exact scenario; most have the two dependencies in the same project, so maybe I'm doing something that's not possible, but I'm sure I'm just missing a little something in the Spring configuration.
My questions/issues are:
I don't want the Spring Integration project to know anything about the Spring Batch configuration other than the job it's kicking off. I haven't found a good way to reference the Job bean without loading my entire batch configuration.
Should I keep these two projects separated, or would it be better to combine them, given that I have two-way communication between them?
How should the job be launched from the integration project? We're using the spring-batch-integration project with JobLaunchRequest and JobLauncher. This seems to run it in the same process as the Spring Integration project, and I'm missing a lot of my Spring Boot Batch project's initialization (a sketch of this launch path follows this list).
Should I be using a CommandLineRunner instead to force it into another process?
Is SpringApplication.run(BatchConfiguration.class) the answer?
I'm looking for some general project configuration setup to meet these requirements.
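For context, here's a minimal sketch of the spring-batch-integration launch path I mentioned above (importJob and the "batch" profile name are placeholders); it runs the job in the caller's process, which is exactly the limitation I'm hitting:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.integration.launch.JobLaunchRequest;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.annotation.Transformer;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

@Component
public class JobLaunchRequestTransformer {

    @Autowired
    private Job importJob; // placeholder for the job bean exposed by the batch project

    // Builds the JobLaunchRequest that a JobLaunchingGateway downstream hands
    // to the JobLauncher; the active profile travels as a job parameter, as
    // described above.
    @Transformer
    public JobLaunchRequest toRequest(Message<String> message) {
        JobParameters parameters = new JobParametersBuilder()
                .addString("spring.profiles.active", "batch") // assumed profile name
                .addString("input", message.getPayload())
                .toJobParameters();
        return new JobLaunchRequest(importJob, parameters);
    }
}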
Spring Cloud Data Flow in combination with Spring Cloud Task does exactly what you're asking. It launches Spring Cloud Task applications (which can contain batch jobs) as new processes on the platform of your choice. I'd encourage you to check out the project here: http://cloud.spring.io/spring-cloud-dataflow/
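A minimal sketch of how the batch side might look as a task (assuming the spring-cloud-starter-task and spring-boot-starter-batch dependencies; the class name is hypothetical):

import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;

// The batch project stays a self-contained Spring Boot app; @EnableTask records
// start/end/exit status so the launching side can track it.
@EnableTask
@EnableBatchProcessing
@SpringBootApplication
public class BatchTaskApplication {
    public static void main(String[] args) {
        SpringApplication.run(BatchTaskApplication.class, args);
    }
}

Data Flow then launches this jar as a new process with its own profiles and properties, which keeps the two projects' configuration separated the way you describe.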
I am trying to get the simplest Spring Cloud Stream example to work, so I decided to implement the one from the reference guide.
I want it to work with Kafka, so I made two applications, both with
spring-cloud-starter-stream-kafka
as a dependency. I also tried RabbitMQ by replacing it with
spring-cloud-starter-stream-rabbit
I can't get it to work, however. I don't get any exceptions, and I can see that the source side works; the sink, however, isn't printing the message. I am sure that it connects correctly to Kafka/RabbitMQ, because I don't get any exceptions, and if I don't run Kafka/RabbitMQ I do get exceptions. I am also using the destination parameter when I run the apps (like it says in the guide), so they should be using the same destination.
I am using Spring Boot 1.4.2.RELEASE and Spring Cloud Camden.SR2.
Does anyone know what I missed?
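For reference, the sink side I'm describing looks roughly like this (Camden-era annotations; the class name is mine):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

@SpringBootApplication
@EnableBinding(Sink.class)
public class LoggingSinkApplication {

    // Fires only if the sink's *input* binding points at the same destination
    // as the source's *output* binding.
    @StreamListener(Sink.INPUT)
    public void log(String message) {
        System.out.println("received: " + message);
    }

    public static void main(String[] args) {
        SpringApplication.run(LoggingSinkApplication.class, args);
    }
}

One thing worth double-checking here: the source's binding is named output and the sink's is named input, so the two apps need --spring.cloud.stream.bindings.output.destination=... and --spring.cloud.stream.bindings.input.destination=... respectively, both pointing at the same destination name.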
I would like to stop Tomcat when a WAR deployment fails. Is there some hook or listener which could be used for that?
I know that normally one would not make the container stop when a deployment fails. In my case I would like to implement a fail-fast error model with Tomcat, since there is currently no way to replace the WAR with a fat jar with an embedded servlet engine (e.g., Spring Boot).
In the meantime I have implemented a Tomcat LifecycleListener which shuts down Tomcat when a deployment fails: https://github.com/ascheman/tomcat-lifecyclelistener
Thanks to Thomas Meyer, who gave some hints on Twitter: https://twitter.com/thomasmey/status/752971635825729537.
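For anyone interested, the core idea is roughly this (a sketch, not the linked project verbatim; it assumes the Tomcat 8-era Lifecycle API and is registered inside <Host> in server.xml):

import org.apache.catalina.Container;
import org.apache.catalina.Engine;
import org.apache.catalina.Host;
import org.apache.catalina.Lifecycle;
import org.apache.catalina.LifecycleEvent;
import org.apache.catalina.LifecycleListener;
import org.apache.catalina.LifecycleState;

// Once the Host has started (and its webapps have been deployed), look for any
// Context left in the FAILED state and bring the whole server down.
public class FailFastLifecycleListener implements LifecycleListener {

    @Override
    public void lifecycleEvent(LifecycleEvent event) {
        if (!Lifecycle.AFTER_START_EVENT.equals(event.getType())
                || !(event.getLifecycle() instanceof Host)) {
            return;
        }
        Host host = (Host) event.getLifecycle();
        for (Container child : host.findChildren()) {
            if (child.getState() == LifecycleState.FAILED) {
                try {
                    // walk up Host -> Engine -> Service -> Server and stop it
                    ((Engine) host.getParent()).getService().getServer().stop();
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
                return;
            }
        }
    }
}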
Spring Boot provides a shutdown hook. Stack Overflow has a similar question:
Spring Boot shutdown hook
This can give you an idea of how to implement fail-fast behavior in your app with a hook.
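For illustration, a minimal sketch of that hook on the Spring Boot side (class name hypothetical):

import javax.annotation.PreDestroy;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        // Spring Boot registers a JVM shutdown hook for the context by default,
        // so destruction callbacks run when the process is told to stop.
        SpringApplication.run(DemoApplication.class, args);
    }

    // Runs during context shutdown; a place to flush or clean up before exit.
    @PreDestroy
    public void onShutdown() {
        System.out.println("shutting down");
    }
}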
I created a simple app using Spring Boot and the Spring Cloud Starter Hystrix library.
In my build.gradle:
dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    compile("org.springframework.cloud:spring-cloud-starter-hystrix-dashboard:1.0.0.RC2")
    compile("org.springframework.cloud:spring-cloud-starter-hystrix:1.0.0.RC2")
}
I deployed one app as a Hystrix dashboard using the above libraries and @EnableHystrixDashboard.
I then deployed another app annotated with @EnableHystrix.
I added a component that has a command that I invoke through a controller, just to test things out:
@HystrixCommand(fallbackMethod = "onFailedToSayHello")
public String sayHello(Map<String, String> parameters) {
    if (parameters.get("fail") != null && parameters.get("fail").equals("yes")) {
        throw new RuntimeException("I failed because you told me to");
    }
    return "Hello";
}

private String onFailedToSayHello(Map<String, String> parameters) {
    return "Bye";
}
The Hystrix app runs fine. When I hit the URL I see the stream, the output of which I put in a gist here.
I just see that repeating over and over.
My dashboard is up and running and when I enter the URL of my running hystrix sample app I get a loading screen:
Then, when I check my hystrix app again I see this:
λ curl http://myappurl/hystrix.stream
{"timestamp":1423748238280,"status":503,"error":"Service Unavailable","message":"MaxConcurrentConnections reached: 5","path":"/hystrix.stream"}
I am not sure where to go from here. I tried deploying the prebuilt Hystrix dashboard WAR (downloaded from here) instead of building it myself, but got the same result.
I also noticed some JavaScript errors in the browser console, which I put here in case they are of any use.
And in the server logs I see this repeated over and over:
2015-02-15 20:03:55.324 INFO 9360 --- [nio-8080-exec-9] ashboardConfiguration$ProxyStreamServlet : Proxy opening connection to: http://myappurl/hystrix.stream
I am now going to try to get Turbine running and see if using that somehow magically fixes things. I thought I would post here too, though, on the off chance someone can spot an error on my part based on what I've done so far.
EDIT:
An important point I didn't mention is that I have both the app and the dashboard deployed on PCF. This seems to be important, since the issue doesn't happen when I deploy locally. Still no idea what's causing it, though.
The problem goes away if you build hystrix-dashboard from the latest source, or use the most recently released WAR (version 1.4.3, released 27 March, at the time of writing).
Several things can lead to MaxConcurrentConnections being reached, and one of them is that the application generates no metrics data at all. The Hystrix stream servlets keep looping while waiting for data, which eventually consumes all the connections. There are some very good discussions on the Hystrix GitHub wiki, for example:
hystrix.stream holds connection open if no metrics #85
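One hedged way to check the no-metrics theory: make sure a command actually executes, e.g. once at startup, so the stream has something to emit before the dashboard's proxy connects (HelloService is an assumed name for the bean holding the sayHello command shown in the question):

import java.util.Collections;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

@Component
public class HystrixWarmUpRunner implements CommandLineRunner {

    @Autowired
    private HelloService helloService; // assumed name of the @HystrixCommand bean

    @Override
    public void run(String... args) {
        // Execute the command once at startup so hystrix.stream is not empty.
        helloService.sayHello(Collections.singletonMap("fail", "no"));
    }
}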
I'm developing an application with Spring Integration and RabbitMQ, and I'm wondering how to test it (integration tests).
I think SoapUI could be a great solution, but it doesn't support RabbitMQ. hermesjms.com has support for Qpid, so I thought it would be easy to write a new plugin to support RabbitMQ, but it's proving more difficult than I expected because the project is a little old and has a bunch of dependencies.
So I'm starting to think about doing something myself, like a DSL in Python, something like this:
tests = [
    {
        'name': 'start',
        'routing_key': 'returned',
        'payload': 'xxxxx',
        'timeOut': '10000',
        'expected': '',
        'threads': '1',
    },
    {
        'name': 'second',
        # ...
    },
]
And then use Pika to execute the actions and check the results.
I know it's very simplistic and SoapUI is huge and awesome, but at least it'd allow me to do small tests.
What would you recommend?
RabbitMQ provides you with a web frontend (the so-called Management View).
So: what exactly do you want to test? Let's say you want to verify that an incoming message on requestChannel goes down to the service and back; you could just autowire the channel directly (i.e., @Autowired private MessageChannel requestChannel;) and put a message into it.
However, this only works if you design your architecture right: each step of your process can then be tested using mocks or specially prepared injected dependencies.
In addition to your own components, this testability applies to the Spring components (interfaces). Let's say you have implemented your own router: test and verify its input and output. The same goes for a transformer.
If you try to verify the "big picture", you will have to rebuild the complete scenario. But this should not be too complicated with non-persistent and non-durable queues and messages.
Is there something else you want to test?
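A minimal sketch of that kind of test, assuming an XML-configured flow with a channel named requestChannel (the configuration file name is made up):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.MessageChannel;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/integration-context.xml") // assumed config location
public class RequestFlowTest {

    @Autowired
    private MessageChannel requestChannel;

    @Test
    public void messageTravelsDownTheFlow() {
        // Drop a message in at the front of the flow; assert on a downstream
        // QueueChannel or a mocked service activator to verify the round trip.
        requestChannel.send(MessageBuilder.withPayload("test-payload").build());
    }
}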
For RabbitMQ, my advice is to use a real RabbitMQ instance. This can be done by using Vagrant with Chef for provisioning RabbitMQ, and the Vagrant Maven plugin to start the box before the integration tests and halt it in the post phase of the integration tests:
The Vagrant Maven plugin: http://nicoulaj.github.io/vagrant-maven-plugin/
Vagrant website: http://www.vagrantup.com/
Chef cookbook for RabbitMQ: https://github.com/opscode-cookbooks/rabbitmq
To summarize, you must:
Install Vagrant and create an empty box (CentOS or Ubuntu).
Provision the VM with the RabbitMQ cookbook.
Place the .box file into your home folder (rabbitMQ.box).
Configure your Maven project to start the VM with vagrant up (~/rabbitMQ.box) in the pre-integration-test phase.
Configure your Maven project to stop the VM with vagrant halt (~/rabbitMQ.box) in the post-integration-test phase.
Hope this helps.
RabbitMQ now has an HTTP API, so you could use this instead of JMS:
http://hg.rabbitmq.com/rabbitmq-management/raw-file/rabbitmq_v2_8_4/priv/www/api/index.html
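For example, publishing a test message through the management API could look roughly like this (a sketch; it assumes the management plugin listening on 15672, the default on current releases, with the default guest/guest credentials):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class HttpPublish {
    public static void main(String[] args) throws Exception {
        // %2f is the URL-encoded default vhost "/"; amq.default is how the
        // management API names the default exchange.
        URL url = new URL("http://localhost:15672/api/exchanges/%2f/amq.default/publish");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization",
                "Basic " + Base64.getEncoder().encodeToString("guest:guest".getBytes()));
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        String body = "{\"properties\":{},\"routing_key\":\"returned\","
                + "\"payload\":\"xxxxx\",\"payload_encoding\":\"string\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes());
        }
        // A 200 response with {"routed":true} means a queue received the message.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}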