Configuring Minio Testcontainers with a different default port - Spring

Quick summary: how can I change the default port of the MinIO client running in my Testcontainer?
I want to use MinIO as a Testcontainer in my application, which already works when I start it locally. Here is the code snippet I use to run the Testcontainer:
public class MinioContainer extends GenericContainer<MinioContainer> {
    private static final int DEFAULT_PORT = 9000;
    private static final String DEFAULT_IMAGE = "minio/minio";
    private static final String DEFAULT_TAG = "latest";
    private static final String MINIO_ACCESS_KEY = "MINIO_ACCESS_KEY";
    private static final String MINIO_SECRET_KEY = "MINIO_SECRET_KEY";
    private static final String DEFAULT_STORAGE_DIRECTORY = "/data";
    private static final String HEALTH_ENDPOINT = "/minio/health/ready";

    public MinioContainer() {
        this(DEFAULT_IMAGE + ":" + DEFAULT_TAG);
    }

    public MinioContainer(String image) {
        super(image == null ? DEFAULT_IMAGE + ":" + DEFAULT_TAG : image);
        Network network = Network.newNetwork();
        withNetwork(network);
        withNetworkAliases("minio-" + Base58.randomString(6));
        addExposedPort(DEFAULT_PORT);
        withEnv(MINIO_ACCESS_KEY, "access_key");
        withEnv(MINIO_SECRET_KEY, "secret_key");
        withCommand("server", DEFAULT_STORAGE_DIRECTORY);
        setWaitStrategy(new HttpWaitStrategy()
                .forPort(DEFAULT_PORT)
                .forPath(HEALTH_ENDPOINT)
                .withStartupTimeout(Duration.ofMinutes(1)));
    }

    public String getHostAddress() {
        return getHost() + ":" + getMappedPort(DEFAULT_PORT);
    }
}
As soon as I deploy this on our cluster, where a MinIO container is already running on port 9000, it shows this error message in the console:
io.minio.errors.ErrorResponseException: The Access Key Id you provided does not exist in our records.
at some.package.MinioTest.setup(MinioTest.java:58)
In my test I am running a SpringBootTest using this container and injecting my MinIO client. I also configured a test application YAML so I can run my test with an active test profile. The error happens in the following code snippet:
private final String BUCKET = "bucket";
....
@BeforeEach
void setup() {
    boolean bucketExists = minioClient.bucketExists(BucketExistsArgs.builder().bucket(BUCKET).build());
    ...
}
Is there a way to change the DEFAULT_PORT on my MinioContainer so it is not the same port as the MinIO container already running on our cluster? I am not able to get my tests running in our pipeline because of this issue, which only happens on our cluster.
As soon as I change the DEFAULT_PORT to something other than 9000 on my MinioContainer, the container stops working because it can no longer find the HEALTH_ENDPOINT, and therefore the whole container just stops.
I hope I explained my problem clearly enough. If not, please tell me so I can try to explain it better. I am already completely frustrated with this issue.
BR
Null

I found the solution to my problem. MinIO supports the following command:
"server /data --address :9100"
I was then able to create my Testcontainer like this:
public static GenericContainer<?> minioContainer = new GenericContainer<>(MINIO_IMAGE)
        .withCommand("server /data --address :9100")
        .withExposedPorts(9100);
Now the MinioContainer in my test runs on port 9100.
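For completeness, here is a sketch of the original MinioContainer class reworked so the port is passed in instead of hard-coded. The two-argument constructor and the port field are my own additions; the --address flag is the one shown above:
import java.time.Duration;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.HttpWaitStrategy;

public class MinioContainer extends GenericContainer<MinioContainer> {
    private static final String HEALTH_ENDPOINT = "/minio/health/ready";
    private final int port;

    public MinioContainer(String image, int port) {
        super(image);
        this.port = port;
        withEnv("MINIO_ACCESS_KEY", "access_key");
        withEnv("MINIO_SECRET_KEY", "secret_key");
        // --address tells MinIO to listen on the given port instead of the default 9000
        withCommand("server", "/data", "--address", ":" + port);
        addExposedPort(port);
        // the health check must probe the same port MinIO now listens on
        setWaitStrategy(new HttpWaitStrategy()
                .forPort(port)
                .forPath(HEALTH_ENDPOINT)
                .withStartupTimeout(Duration.ofMinutes(1)));
    }

    public String getHostAddress() {
        return getHost() + ":" + getMappedPort(port);
    }
}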
Hopefully this helps someone else with the same issue.

Related

Testcontainers FTP and real server: different navigation

I want to test my simple class that connects to a public FTP server and gets a list of file names from there.
And I found a very strange issue - navigation on the real server and in the Testcontainer with the same dirs is not the same.
[screenshot: the dir structure]
Well, if I connect to the real server and use ftp.changeWorkingDirectory("/out/published"), everything is good - the directory changes and I retrieve the file from there.
If I try to do the same in the Testcontainer, ftp.changeWorkingDirectory("/out/published") returns false (it can't find this directory), and if I remove the slash, like ftp.changeWorkingDirectory("out/published"), it works.
And the second issue: if I do ftp.listNames("/out/published/tverskaya_obl/purchaseNotice/daily") on the real server it works fine, but in the Testcontainer it works only if I use ftp.listNames("tverskaya_obl/purchaseNotice/daily").
Is there any way to fix it?
Here is the code to create the Testcontainer and folders:
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
@SpringBootTest(properties = "ftp.host = localhost")
class PurchaseFZ223FtpServiceTest {
    private static final int PORT = 21;
    private static final String USER = "";
    private static final String PASSWORD = "";
    private static final int PASSIVE_MODE_PORT = 21000;

    @Autowired
    FtpService ftpService;

    private static final FixedHostPortGenericContainer<?> ftp = new FixedHostPortGenericContainer<>(
            "delfer/alpine-ftp-server:latest")
            .withFixedExposedPort(PASSIVE_MODE_PORT, PASSIVE_MODE_PORT)
            .withExposedPorts(PORT)
            .withEnv("USERS", USER + "|" + PASSWORD)
            .withEnv("MIN_PORT", String.valueOf(PASSIVE_MODE_PORT))
            .withEnv("MAX_PORT", String.valueOf(PASSIVE_MODE_PORT));

    @BeforeAll
    void init() throws NoSuchFieldException, IllegalAccessException, IOException {
        ftp.start();
        FTPClient client = new FTPClient();
        client.connect("localhost", ftp.getMappedPort(PORT));
        client.enterLocalPassiveMode();
        client.login(USER, PASSWORD);
        client.makeDirectory("out");
        client.makeDirectory("out/published");
        client.makeDirectory("out/published/tverskaya_obl");
        client.makeDirectory("out/published/tverskaya_obl/purchaseNotice");
        client.makeDirectory("out/published/tverskaya_obl/purchaseNotice/daily");
        client.changeWorkingDirectory("out/published");
        client.storeFile("list_regions.txt", new ByteArrayInputStream("tverskaya_obl".getBytes(StandardCharsets.UTF_8)));
        client.changeWorkingDirectory("tverskaya_obl/purchaseNotice/daily");
        client.storeFile("purchaseNotice_Tverskaya_obl_20221209_000000_20221209_235959_daily_001.xml.txt", new ByteArrayInputStream("test".getBytes(StandardCharsets.UTF_8)));
    }
}
Finally I found what was going wrong.
When the Testcontainer creates the FTP server, it just puts everything under a per-user folder like this:
ftp/yourLoginName
So my mistake was asking for the list of file names with an invalid path. The valid path for me to get the list of file names is /ftp/***/out/published/tverskaya_obl/purchaseNotice/daily, and the same goes for changing the working dir.
If you don't know how to check the real path you are at, just do this:
ftp.printWorkingDirectory()
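A minimal sketch of that check, placed in the same init() method as above (which already declares throws IOException; note the fixture's USER is empty, so with a real username the prefix would be /ftp/<user>):
// the image serves each user out of /ftp/<user>, so print where you actually are
System.out.println(client.printWorkingDirectory()); // e.g. /ftp/<user>
// absolute paths therefore need the /ftp/<user> prefix
boolean changed = client.changeWorkingDirectory("/ftp/" + USER + "/out/published");
System.out.println("changed: " + changed); // true once the prefix is included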

Cannot access Docker container app via localhost; access by IP times out

I just got a new PC last week, so I set up my working environment as usual on Windows 10 with the latest Windows Docker Desktop. Then I created a very simple Spring Boot REST service just to say hello and built the image with Spring Boot Buildpacks 3 days ago. It worked fine with the port mapping "docker run -p 8090:8080 davy/myapp". This image still works today: I can access my application at "http://localhost:8090/sayHello".
[screenshot: the working image]
So I started to build my real application and completed some functionality. I wanted to test my app, so I created a new image using Spring Boot Buildpacks.
Now I have a big problem: I can no longer access the application running in the container with the port mapping "docker run -p 8090:8080 davy/myapp" via "http://localhost:8090/sayHello". I get an error page saying "localhost did not send any data".
[screenshot: cannot send data]
Then I got my container IP with "docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 548e29f46ca7", which displayed "172.17.0.2". So I tried http://172.17.0.2:8090/sayHello and, after waiting for some time, got a timeout: "172.17.0.2 took too long to respond".
[screenshot: timeout]
I did not see any difference in the port bindings: both are 0.0.0.0:8090->8080/tcp.
[screenshot: port bindings for the 2 images]
I rebuilt the image several times using Spring Boot Buildpacks, and once with a Dockerfile and docker-compose.yml, but I cannot make the new container behave like the old one any more.
I also tried "docker run -p 8088:8080 davyhu/myapp -m http.server --bind 0.0.0.0", but got the same result: no access via localhost, and the IP times out.
Thanks in advance for the help!
Here is some more information:
Config in pom.xml for Buildpacks (no change between the two versions of the pom.xml):
<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <configuration>
        <image>
          <name>davyhu/${project.artifactId}</name>
        </image>
      </configuration>
    </plugin>
  </plugins>
</build>
PDFController.java
public class PdfGenerationController {
    private static final Logger logger = LoggerFactory.getLogger(PdfGenerationController.class);
    private static final SimpleDateFormat DateFormatterIn1 = new SimpleDateFormat("yyyy-MM-dd");
    //private static final SimpleDateFormat DateFormatterIn2 = new SimpleDateFormat("dd/MM/yyyy");
    private static final SimpleDateFormat DateFormatterOut = new SimpleDateFormat("dd/MM/yyyy");
    private static final SimpleDateFormat DateFormatterIn2 = new SimpleDateFormat("yyyy-MM-dd");
    //private static final SimpleDateFormat DateFormatterOut = new SimpleDateFormat("yyyy-MM-dd");

    @Autowired
    private ResourceBundleMessageSource source;

    @Value("${pdf.title}")
    private static String pdfTitle;

    @Value("${pdf.footerText1}")
    private static String pdfFooterText1;

    @CrossOrigin(origins = "http://localhost:4200")
    @PostMapping("/getPdf")
    public ResponseEntity<byte[]> getPDF(
            @RequestHeader(name = "Accept-Language", required = true) final Locale locale,
            @RequestBody String jsonInput) {
        logger.info("myCustomerDetails() method started");
        logger.info(jsonInput);
        logger.info("locale = {}", locale);
        JSONObject data = new JSONObject(jsonInput);
        byte[] pdfFile = null;
        ResponseEntity<byte[]> response = null;
        HttpHeaders headers = new HttpHeaders();
        try {
            pdfFile = new PdfGenertor().generatePDF(data, locale);
        } catch (Exception e) {
            e.printStackTrace();
            response = new ResponseEntity<>(null, headers, HttpStatus.METHOD_FAILURE);
        }
        headers.setContentType(MediaType.APPLICATION_PDF);
        String filename = "fro_soa_form.pdf";
        headers.setContentDispositionFormData(filename, filename);
        headers.setCacheControl("must-revalidate, post-check=0, pre-check=0");
        response = new ResponseEntity<>(pdfFile, headers, HttpStatus.OK);
        return response;
    }

    private String formatDate(SimpleDateFormat format, String str) {
        try {
            Date date = format.parse(str);
            return DateFormatterOut.format(date);
        } catch (Exception e) {
            return "";
        }
    }

    @GetMapping("/sayHello")
    public String sayHello() {
        return "Hello";
    }
}
The code worked fine in Eclipse with Postman (the PDF is displayed, given JSON input and an Accept-Language header).
The first and second images were both built with "mvn spring-boot:build-image".
If there is anything else I should post, please let me know.
Thanks!
Dockerfile:
FROM openjdk:11-slim as build
MAINTAINER xxxx.ca
COPY target/fro_soa_backend-0.0.1-SNAPSHOT.jar fro_soa_backend-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","/fro_soa_backend-0.0.1-SNAPSHOT.jar"]
Just to tell you: 172.17.0.2 is the container's internal IP, so it can't be reached from outside the Docker network; it exists so the container can be reached by other services or microservices.
It seems that your app isn't properly bound to this port, meaning your application isn't listening on port 8080, which is why you get an empty response. Since you added your new functionality, you need to use your Dockerfile and expose your application on 8080:
FROM <image_name>
EXPOSE 8080
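One way to double-check is to log the port the embedded server actually binds to at startup. A minimal sketch, assuming Spring Boot 2.x (the PortLogger class name is my own, not from the question):
import org.springframework.boot.web.context.WebServerInitializedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

@Component
public class PortLogger implements ApplicationListener<WebServerInitializedEvent> {
    @Override
    public void onApplicationEvent(WebServerInitializedEvent event) {
        // compare this with the container-side port of "docker run -p 8090:8080"
        System.out.println("Embedded server bound to port " + event.getWebServer().getPort());
    }
}
If the logged port is not 8080, either set server.port=8080 or change the -p mapping to match.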
I couldn't figure out what was wrong, so I created a new workspace from scratch, imported the code, and rebuilt the application image with the Dockerfile; then the port mapping worked again.
Now my Angular front end and Spring Boot back end can communicate on localhost.

How to create an Apache Spark standalone cluster for integration testing using Testcontainers?

Does anyone know how to create an Apache Spark cluster for integration testing using Testcontainers (https://www.testcontainers.org/)?
A running example would be appreciated; I am struggling to find one.
I was able to create this kind of integration test using the GenericContainer class and the bitnami/spark image. The code is below (I wrote it for a library that writes a dataframe to AWS SQS).
The idea is to create a Spark container (in this case not a cluster, just the master node), copy all the files needed to run the test (some Python files and all the dependencies), issue the spark-submit command, and check the final state (a message in Localstack's SQS service in another container).
@Testcontainers
public class SparkIntegrationTest {
    private static Network network = Network.newNetwork();

    @Container
    public LocalStackContainer localstack = new LocalStackContainer(DockerImageName.parse("localstack/localstack:0.12.13"))
            .withNetwork(network)
            .withNetworkAliases("localstack")
            .withServices(SQS);

    @Container
    public GenericContainer<?> spark = new GenericContainer<>(DockerImageName.parse("bitnami/spark:3.1.2"))
            .withCopyFileToContainer(MountableFile.forHostPath("build/resources/test/.", 0744), "/home/")
            .withCopyFileToContainer(MountableFile.forHostPath("build/libs/.", 0555), "/home/")
            .withNetwork(network)
            .withEnv("AWS_ACCESS_KEY_ID", "test")
            .withEnv("AWS_SECRET_KEY", "test")
            .withEnv("SPARK_MODE", "master");

    @Test
    public void shouldPutASQSMessageInLocalstackUsingSpark() throws IOException, InterruptedException {
        String expectedBody = "my message body"; // the same value as in resources/sample.txt
        AmazonSQS sqs = AmazonSQSClientBuilder.standard()
                .withEndpointConfiguration(localstack.getEndpointConfiguration(SQS))
                .withCredentials(localstack.getDefaultCredentialsProvider())
                .build();
        sqs.createQueue("my-test");
        org.testcontainers.containers.Container.ExecResult lsResult =
                spark.execInContainer("spark-submit",
                        "--jars", "/home/spark-aws-messaging-0.3.1.jar,/home/deps/aws-java-sdk-core-1.12.12.jar,/home/deps/aws-java-sdk-sqs-1.12.12.jar",
                        "--master", "local",
                        "/home/sqs_write.py",
                        "/home/sample.txt",
                        "http://localstack:4566");
        System.out.println(lsResult.getStdout());
        System.out.println(lsResult.getStderr());
        assertEquals(0, lsResult.getExitCode());
        String queueUrl = sqs.getQueueUrl("my-test").getQueueUrl()
                .replace("localstack", localstack.getContainerIpAddress());
        List<Message> messages = sqs.receiveMessage(queueUrl)
                .getMessages();
        assertEquals(expectedBody, messages.get(0).getBody());
    }
}
There's still a drawback: it's a black box, so I can't measure code coverage.

Security works correctly in unit test but not when deployed in application server (weblogic)

Can anyone tell me why this didn't work? The code works great when I run it from my unit tests: security gets set up perfectly and our service works just as I expect.
However, when I deployed it to our application server (WebLogic), my service failed every time because my tokens were not getting set up. I got it working by setting up the tokens every time my send(final ServiceAPInvoice invoice) method gets called.
My question is: why do the tokens not get set up by my constructor when this is deployed in our WebLogic environment? What causes this issue? OAuthSecurityContextHolder is a static class - is that playing into my issue? Will I still run into issues if I set up the tokens each time my send method is called? I haven't noticed any issues yet, but I have not done any load testing.
I am using Spring's OAuthRestTemplate (1.0) and I have non-expiring tokens that I need to set up.
Here is where the magic happens. I had to rename things slightly to make the code generic, so hopefully I don't have any typos:
public class ServiceRestTemplate {
    private final OAuthRestTemplate serviceOAuthRestTemplate;
    private final String apUri;
    private final String arUri;
    private final String tokenValue;
    private final String tokenSecret;

    public ServiceRestTemplate(final OAuthRestTemplate serviceOAuthRestTemplate,
                               final String apUri,
                               final String arUri,
                               final String tokenValue,
                               final String tokenSecret) {
        this.serviceOAuthRestTemplate = serviceOAuthRestTemplate;
        this.apUri = apUri;
        this.arUri = arUri;
        this.tokenSecret = tokenSecret;
        this.tokenValue = tokenValue;
        setContext(tokenValue, tokenSecret); // I expected this to be enough to set up my tokens one time
    }

    private void setContext(final String tokenValue, final String tokenSecret) {
        final OAuthConsumerToken accessToken = new OAuthConsumerToken();
        accessToken.setAccessToken(true);
        accessToken.setResourceId(serviceOAuthRestTemplate.getResource().getId());
        accessToken.setValue(tokenValue);
        accessToken.setSecret(tokenSecret);
        final OAuthSecurityContextImpl securityContext = new OAuthSecurityContextImpl();
        if (securityContext.getAccessTokens() == null) {
            securityContext.setAccessTokens(new HashMap<String, OAuthConsumerToken>());
        }
        if (!securityContext.getAccessTokens().containsKey(accessToken.getResourceId())) {
            securityContext.getAccessTokens().put(accessToken.getResourceId(), accessToken);
        }
        OAuthSecurityContextHolder.setContext(securityContext);
    }

    @Override
    public ServiceWebResponse send(final ServiceAPInvoice invoice) {
        setContext(this.tokenValue, this.tokenSecret); // this line is the workaround that fixed my issue
        final ServiceWebResponse serviceResponse = serviceOAuthRestTemplate.postForObject(apUri,
                invoice,
                ServiceWebResponse.class);
        return serviceResponse;
    }
}
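A plausible explanation (an assumption on my part, not confirmed by the question): if OAuthSecurityContextHolder stores its context in a ThreadLocal, like Spring's regular SecurityContextHolder does, then a context set in the constructor is only visible to the thread that constructed the bean, while WebLogic serves each request on a different worker thread. A minimal sketch of that effect:
public class ThreadLocalHolderDemo {
    // stands in for a ThreadLocal-backed holder such as OAuthSecurityContextHolder (assumption)
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        CONTEXT.set("token set on the constructing thread");
        Thread worker = new Thread(() ->
                // prints "worker sees: null" - the value set above is invisible here
                System.out.println("worker sees: " + CONTEXT.get()));
        worker.start();
        worker.join();
    }
}
That would also explain why setting the context at the start of send() works: it runs on the same worker thread that performs the request.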

yarn [hadoop 2.2] where is the mapper or reducer log output located?

I want to check the mapper or reducer log output. I cannot find it in the syslog under the container folder. So where is the log output going?
public class SkipStat {
    private static Log log = LogFactory.getLog(SkipStat.class);
    private static BlockWorkerRepository blockWorkerRepository;

    static {
        blockWorkerRepository = new BlockWorkerRepositoryImpl();
    }

    private static class SkipInfoMapper extends Mapper<Object, BSONObject, Text, AssignmentWritable> {
        private final String invalidResult = "^";
        private static final Calendar currentCalendar = Calendar.getInstance();

        static {
            currentCalendar.add(Calendar.HOUR, -24);
        }

        protected void map(Object key, BSONObject value, Context context) throws IOException, InterruptedException {
            String result = (String) value.get("result");
            log.info("lol... get one result " + result); // LOG ...
            if (invalidResult.equals(result)) {
To enable log aggregation, which then lets you run the "yarn logs" command to see the logs for an application id, set these in your yarn-site.xml (see the sketch below):
yarn.log-aggregation-enable
yarn.log-aggregation.retain-seconds
yarn.log-aggregation.retain-check-interval-seconds
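For example, a minimal yarn-site.xml fragment (the retention values here are illustrative, not recommendations):
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <!-- how long to keep aggregated logs; e.g. 7 days -->
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>
<property>
  <!-- how often to check for logs to delete -->
  <name>yarn.log-aggregation.retain-check-interval-seconds</name>
  <value>3600</value>
</property>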
It seemed necessary to restart YARN after making this change.
Note that the command referenced in a comment will work once aggregation has been enabled:
yarn logs -applicationId application_1397033985470_0006
See: http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
Descriptions for these parameters are available in the 2.3.0 documentation: http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
The historyserver collects those logs and keeps them. You can view them through the historyserver web UI or with the yarn logs CLI. See "Simplifying user-logs management and access in YARN". Before they're uploaded, the logs are:
"Logs for all the containers belonging to a single Application and that ran on a given NM are aggregated and written out to a single (possibly compressed) log file at a configured location in the FS"
The ApplicationMaster UI will show the logs of the currently executing application.