GenericContainer for Testcontainers throwing 409 saying container is not running

Here is my code snippet:
override val container = GenericContainer("memsql/cluster-in-a-box:latest")
container.start()
container.underlyingUnsafeContainer.setWaitStrategy(Wait.forHealthcheck())
container.underlyingUnsafeContainer.start()
container.underlyingUnsafeContainer.execInContainer("ls", "-al", "/") // throws a 409 here
container.stop()
I saw in the console that the container has started, so I am not sure why I am getting a 409.
Console log: Container memsql/cluster-in-a-box:latest started in PT0.406374S
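For reference, a minimal sketch of the usual ordering with the plain Java Testcontainers API (a sketch, not the poster's code, and it assumes the image defines a Docker healthcheck): the wait strategy is configured before the single start() call, so start() only returns once the healthcheck passes, and execInContainer then runs against a container the daemon still considers running. Docker answers 409 when exec is attempted on a container that has already stopped.
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.utility.DockerImageName;

GenericContainer<?> container =
        new GenericContainer<>(DockerImageName.parse("memsql/cluster-in-a-box:latest"))
                // configure the wait strategy before starting, not after
                .waitingFor(Wait.forHealthcheck());
container.start();                                // single start() call, blocks until healthy
container.execInContainer("ls", "-al", "/");      // container is running at this point
container.stop();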

Related

TestContainer can't start due to error: Timed out waiting for log output matching

I got "ContainerLaunchException: Timed out waiting for log output matching" when starting testcontainer for elasticserach. How should I fix this issue?
container = new ElasticsearchContainer(ELASTICSEARCH_IMAGE)
        .withEnv("discovery.type", "single-node")
        .withExposedPorts(9200);
container.start();
12:16:50.370 [main] ERROR 🐳 [docker.elastic.co/elasticsearch/elasticsearch:7.16.3] - Could not start container
org.testcontainers.containers.ContainerLaunchException: Timed out waiting for log output matching '.*("message":\s?"started".*|] started\n$)'
at org.testcontainers.containers.wait.strategy.LogMessageWaitStrategy.waitUntilReady(LogMessageWaitStrategy.java:49)
at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:51)
Updated:
I looked into the ElasticsearchContainer constructor:
public ElasticsearchContainer(DockerImageName dockerImageName) {
    super(dockerImageName);
    this.caCertAsBytes = Optional.empty();
    dockerImageName.assertCompatibleWith(new DockerImageName[]{DEFAULT_IMAGE_NAME, DEFAULT_OSS_IMAGE_NAME});
    this.isOss = dockerImageName.isCompatibleWith(DEFAULT_OSS_IMAGE_NAME);
    this.logger().info("Starting an elasticsearch container using [{}]", dockerImageName);
    this.withNetworkAliases(new String[]{"elasticsearch-" + Base58.randomString(6)});
    this.withEnv("discovery.type", "single-node");
    this.addExposedPorts(new int[]{9200, 9300});
    this.isAtLeastMajorVersion8 = (new ComparableVersion(dockerImageName.getVersionPart())).isGreaterThanOrEqualTo("8.0.0");
    String regex = ".*(\"message\":\\s?\"started\".*|] started\n$)";
    this.setWaitStrategy((new LogMessageWaitStrategy()).withRegEx(regex));
    if (this.isAtLeastMajorVersion8) {
        this.withPassword("changeme");
    }
}
It uses setWaitStrategy, so I updated my code as below:
container.setWaitStrategy((new LogMessageWaitStrategy()).withRegEx(regex).withTimes(1));
But I still get the same error. Here is how far the log messages go.
Updated again: I realized the above code change doesn't update any default values.
Here is the new change:
container.setWaitStrategy((new LogMessageWaitStrategy())
        .withRegEx(regex)
        .withStartupTimeout(Duration.ofSeconds(180L)));
It works with this new change, but I had to copy the regex from the ElasticsearchContainer constructor. I hope there is a better way to override just the timeout value.
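There may be: GenericContainer also exposes withStartupTimeout(Duration), and in recent Testcontainers releases (an assumption worth verifying against your version) it adjusts the timeout of the wait strategy the container already has, so the default regex from the constructor stays in place. A minimal sketch, reusing the names from the question:
import java.time.Duration;

// withStartupTimeout delegates to the wait strategy configured in the
// constructor, so only the timeout changes and the regex is untouched
container = new ElasticsearchContainer(ELASTICSEARCH_IMAGE)
        .withEnv("discovery.type", "single-node")
        .withExposedPorts(9200)
        .withStartupTimeout(Duration.ofSeconds(180));
container.start();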

okhttp3.internal.framed.StreamResetException: stream was reset: CANCEL

I'm getting a secret from Azure Key Vault through its REST API. On the backend I'm using the azure-keyvault-client, which uses Retrofit and OkHttp3 under the hood. My app had been running well for a long time, but now these exceptions suddenly show up. Each time they happen, I restart the app; then the exceptions are gone and everything looks good. What could be the reason that the "stream was cancelled"?
Below is the full stack trace.
I have tried to remote-debug the app. I found that the exception is thrown when FramedStream.closeInternal() is called: the HttpEngine tries to close, but source.finished = false and sink.finished = true.
It could either be a local cancel of the stream, or one from the server. You can use HTTP/2 frame logging to see if it's coming from the server.
https://square.github.io/okhttp/debug_logging/#http2-frame-logging
Does the problem persist until restart? If so, it's possible that the server expects you or OkHttp to close the connection after the cancel, which OkHttp won't do in that case. So it's best to debug the frames and then discuss with the server team.
https://github.com/square/okhttp/blob/master/okhttp-testing-support/src/main/kotlin/okhttp3/OkHttpDebugLogging.kt
fun enableHttp2() = enable(Http2::class)

fun enableTaskRunner() = enable(TaskRunner::class)

fun enable(loggerClass: String) {
    val logger = Logger.getLogger(loggerClass)
    if (configuredLoggers.add(logger)) {
        logger.addHandler(ConsoleHandler().apply {
            level = Level.FINE
            formatter = object : SimpleFormatter() {
                override fun format(record: LogRecord) =
                    String.format("[%1\$tF %1\$tT] %2\$s %n", record.millis, record.message)
            }
        })
        logger.level = Level.FINEST
    }
}
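The same logger can be raised from Java without the Kotlin helper. A sketch, assuming a recent OkHttp where the HTTP/2 frame class is okhttp3.internal.http2.Http2 (older 3.x builds, like the okhttp3.internal.framed one in this stack trace, used a different package, so adjust the logger name for your version):
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public final class Http2FrameLogging {
    public static void enable() {
        // raising this logger to FINE prints ">> CANCEL" / "<< CANCEL" frames,
        // which shows whether the reset came from the client or the server
        Logger logger = Logger.getLogger("okhttp3.internal.http2.Http2");
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.FINE);
        logger.addHandler(handler);
        logger.setLevel(Level.FINEST);
    }
}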

Unable to create Kinesis Client in Lambda function

I have created a Lambda function which is triggered by a DynamoDB stream. I am trying to process the DynamoDB events and put them into a Kinesis stream after some transformation. The Lambda has full access to both DynamoDB and the Kinesis stream.
I am using CloudWatch to check the logs and can see that the DynamoDB events are successfully processed. But when I try to create the Kinesis client (in a different class), the code fails. I tried logging the error and even printing it, but that did not help. Sometimes the logs simply end with this message
END RequestId: {some request id}
Other times, I get the following error
log4j:WARN No appenders could be found for logger (com.amazonaws.AmazonWebServiceClient).
The code fails at the point where the Kinesis client is created. I can see the log messages / print statements before the creation of the Kinesis client, but the code fails right at that line. I am not sure what the problem is. Can someone please help me out?
Here is the class in which the code fails
private AmazonKinesis kinesisClient;
private AmazonKinesisClientBuilder clientBuilder;
private String streamName;

public TestKinesisPut(String streamName) {
    this.streamName = streamName;
    BasicAWSCredentials awsCreds = new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY");
    System.out.println("aws creds are: " + awsCreds);
    clientBuilder = AmazonKinesisClientBuilder.standard()
            .withRegion(Regions.AP_SOUTH_1)
            .withCredentials(new AWSStaticCredentialsProvider(awsCreds));
    System.out.println("Credentials are set: \n " + clientBuilder);
    try {
        System.out.println("This one is new \n About to build new kinesis client");
        // the code fails after this line
        kinesisClient = clientBuilder.build();
        System.out.println("built kinesis client");
    }
    catch (Exception e) {
        System.out.println("failed to initialize producer: " + e.getMessage());
        kinesisClient = null;
    }
}
Thanks
After a few days of head scratching, I decided to tinker with the configuration of my Lambda function. It looks like the problem was caused by an OutOfMemoryError: I increased the memory of my Lambda function and it started working.
It seems that at the time of creation of the Kinesis client, the JVM was running out of metaspace. I did some research and found this Stack Overflow thread; please refer to the link for a detailed discussion of a similar scenario.
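A related mitigation that often helps in Lambda (a general practice, not part of the original answer): build the SDK client once, in static scope, so warm invocations reuse it instead of paying the class-loading and metaspace cost on every request. In Lambda the execution role normally supplies credentials, so the hardcoded keys can usually be dropped too. A sketch:
import com.amazonaws.regions.Regions;
import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;

public class TestKinesisPut {
    // built once per container and reused across warm invocations;
    // credentials come from the Lambda execution role by default
    private static final AmazonKinesis KINESIS = AmazonKinesisClientBuilder.standard()
            .withRegion(Regions.AP_SOUTH_1)
            .build();

    private final String streamName;

    public TestKinesisPut(String streamName) {
        this.streamName = streamName;
    }
}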

Internal Failures when using Spring-Integration-Kinesis Message Driven Adapter

I have set up this KinesisMessageDrivenChannelAdapter:
@Bean
public KinesisMessageDrivenChannelAdapter kinesisInboundChannel(AmazonKinesis amazonKinesis, MetadataStore store) {
    KinesisMessageDrivenChannelAdapter adapter =
            new KinesisMessageDrivenChannelAdapter(amazonKinesis, config.getStreamName());
    adapter.setCheckpointMode(CheckpointMode.batch);
    adapter.setListenerMode(ListenerMode.batch);
    adapter.setStartTimeout(10000);
    // Idle is set in milliseconds. Max value is 596 (hours) before getting an int overflow.
    adapter.setIdleBetweenPolls(config.getPollHours() * 3_600_000);
    adapter.setShouldTrack(true);
    adapter.setDescribeStreamRetries(5);
    adapter.setConcurrency(50);
    adapter.setCheckpointStore(store);
    adapter.setStreamInitialSequence(KinesisShardOffset.trimHorizon());
    adapter.setOutputChannelName("logMessage.input");
    adapter.setErrorChannel(errorChannel());
    return adapter;
}
Most of the time, it works fine. But from time to time, I get this kind of message:
Exception in thread "kinesisInboundChannel-kinesis-consumer-1" com.amazonaws.services.kinesis.model.AmazonKinesisException: null (Service: AmazonKinesis; Status Code: 500; Error Code: InternalFailure; Request ID: c2f66be9-23f4-b211-9165-ed92383ee673)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:2276)
at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2252)
at com.amazonaws.services.kinesis.AmazonKinesisClient.executeGetRecords(AmazonKinesisClient.java:1062)
at com.amazonaws.services.kinesis.AmazonKinesisClient.getRecords(AmazonKinesisClient.java:1038)
at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer.getRecords(KinesisMessageDrivenChannelAdapter.java:853)
at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer.access$3500(KinesisMessageDrivenChannelAdapter.java:688)
at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ShardConsumer$2.run(KinesisMessageDrivenChannelAdapter.java:816)
at org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter$ConsumerInvoker.run(KinesisMessageDrivenChannelAdapter.java:1003)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
After that, the adapter stops working entirely, though without hanging the app. I specified which error channel to use, and I'd be happy to just restart the application to bring the adapter back online if I need to, but that appears not to be an option.
How do I build error handling into this?
Sounds like a problem that has been fixed here: https://github.com/spring-projects/spring-integration-aws/issues/84
You should consider using the latest version (2.0.0.M2), or even better, 2.0.0.BUILD-SNAPSHOT.

WebFlux/Reactive Spring RabbitMQ message is acknowledged even when the save failed

I've recently started working with Spring WebFlux and RabbitMQ, along with the reactive Cassandra repository. What I've noticed is that the message is acknowledged even when saving to Cassandra didn't succeed for some element. I propagate the exception thrown during saving, but the message is still taken off the queue. I'm wondering what I should do to let RabbitMQ know that this message should be considered failed (I want to reject the message so it is sent to a dead letter queue).
@RabbitListener(queues = Constants.SOME_QUEUE, returnExceptions = "true")
public void receiveMessage(final List<ItemList> itemList) {
    log.info("Received message from queue: {}", Constants.SOME_QUEUE);
    itemService.saveAll(itemList)
            .subscribe(
                    item -> log.info("Saving item with {}", item.getId()),
                    error -> {
                        log.error("Error during saving item", error);
                        throw new AmqpRejectAndDontRequeueException(error.getMessage());
                    },
                    () -> log.info(Constants.SOME_QUEUE +
                            " queue - {} items saved", itemList.size())
            );
}
Reactive is non-blocking; the message will be acked as soon as the listener thread returns to the container. You need to somehow block the listener thread (e.g. with a Future<?>) and wake it up when the Cassandra operation completes, exiting normally if it was successful, or throwing an exception on failure so the message will be redelivered.
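A minimal sketch of that blocking approach, reusing the question's names for illustration and assuming itemService.saveAll returns a Reactor Flux: blockLast() holds the listener thread until the save completes and rethrows any error signal, so the container can reject the message:
@RabbitListener(queues = Constants.SOME_QUEUE)
public void receiveMessage(final List<ItemList> itemList) {
    try {
        // block the listener thread until the reactive save completes;
        // an error signal surfaces here as a RuntimeException
        itemService.saveAll(itemList).blockLast();
        log.info("{} queue - {} items saved", Constants.SOME_QUEUE, itemList.size());
    } catch (RuntimeException error) {
        log.error("Error during saving items", error);
        // reject without requeue so the broker routes the message to the DLQ
        throw new AmqpRejectAndDontRequeueException(error.getMessage(), error);
    }
}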
I solved my problem by explicitly sending an acknowledge/reject to RabbitMQ. It forced me to write a little more code, but now at least it works and I have full control over what's happening.
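The original poster didn't share that code; here is a hypothetical sketch of the manual approach, assuming MANUAL acknowledge mode is configured on the listener. The Channel and delivery tag are injected so the listener can ack on success or reject to the dead letter queue on failure, only after the reactive save has actually finished:
import java.io.IOException;
import com.rabbitmq.client.Channel;
import org.springframework.amqp.support.AmqpHeaders;
import org.springframework.messaging.handler.annotation.Header;

@RabbitListener(queues = Constants.SOME_QUEUE, ackMode = "MANUAL")
public void receiveMessage(final List<ItemList> itemList,
                           Channel channel,
                           @Header(AmqpHeaders.DELIVERY_TAG) long tag) {
    itemService.saveAll(itemList)
            .subscribe(
                    item -> log.info("Saving item with {}", item.getId()),
                    error -> {
                        log.error("Error during saving item", error);
                        reject(channel, tag);          // requeue=false routes to the DLQ
                    },
                    () -> ack(channel, tag));
}

private void ack(Channel channel, long tag) {
    try {
        channel.basicAck(tag, false);
    } catch (IOException e) {
        log.error("Failed to ack message", e);
    }
}

private void reject(Channel channel, long tag) {
    try {
        channel.basicReject(tag, false);
    } catch (IOException e) {
        log.error("Failed to reject message", e);
    }
}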
