Where are the Zalenium log files stored - zalenium

We have max-tests set to 1 so that each test case should get a brand new container. However, we are seeing that used containers are not being destroyed consistently.
So my question is: where can I see any errors from Zalenium itself?
I see there are options for a Logback config, but what is the default location for any log file?

This is a plain vanilla install on Ubuntu.
I'm assuming the Zalenium container is creating logs inside that container; I'm just asking where the default location for these is.
Thanks
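For anyone looking on a plain Docker setup, a general way to check (this is only a sketch, not an answer from this thread; the container name "zalenium" and the find scope are assumptions):

# Zalenium's own console output goes to the container's stdout/stderr
docker logs -f zalenium
# Search the container's filesystem for any log files it writes
docker exec zalenium sh -c 'find / -name "*.log" -not -path "/proc/*" 2>/dev/null'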

Related

How to start Docker Axon server in development mode

I'm new to Axon and Docker and I would like to start Axon Server in Docker in development mode, so that I can clear events; I'm in the process of building a system and my events and commands change often.
I read in the Axon documentation that a certain property, axoniq.axonserver.devmode.enabled (defaults to false), has to be set. I also know that Axon uses Spring Boot, so I guess I would need to somehow access axonserver.properties on Docker, but here is the problem: I don't know how.
I would be thankful if anyone could explain how to change this configuration.
Fortunately, Axon has been publishing blog posts about running Axon Server, and in one of them they explain how to run it on Docker =)
Blog post: https://axoniq.io/blog-overview/running-axon-server-in-docker
The important part, in your case, is here:
A third directory, not marked as a volume in the image, is important for our case: If you put an “axonserver.properties” file in “/config”, it can override the settings above and add new ones:
Which means you can create your axonserver.properties in that directory with the desired property (axoniq.axonserver.devmode.enabled=true) and Axon Server will pick it up from there.
Alternatively, you can set the environment variable AXONIQ_AXONSERVER_DEVMODE_ENABLED to true.
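For illustration, both options could look something like this (the host path, container name, and published ports are assumptions on my part; the /config location is the one from the blog post quoted above):

# Option 1: mount a host directory containing axonserver.properties at /config
docker run -d --name axonserver -p 8024:8024 -p 8124:8124 \
  -v /path/on/host/axonserver-config:/config \
  axoniq/axonserver
# where /path/on/host/axonserver-config/axonserver.properties contains:
#   axoniq.axonserver.devmode.enabled=true

# Option 2: set the property through an environment variable instead
docker run -d --name axonserver -p 8024:8024 -p 8124:8124 \
  -e AXONIQ_AXONSERVER_DEVMODE_ENABLED=true \
  axoniq/axonserver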
Hope it helps.

How to edit internal files without running container

MariaDB 10.3 was installed as a Docker container on Mac, and the collation-server value in the /etc/mysql/my.cnf file was modified.
After the modification I tried to restart the container, but when I entered the 'docker ps -a' command, the status was shown as Exited (1).
So I entered docker logs [container name] and saw the error: the setting parameter had been incorrectly written as 'collection-server=utf8_unicode_ci'.
So the container would not run.
I've looked into several approaches, but I can't find a way to modify the internal files without running the container.
I know that you shouldn't tamper with files inside a Docker container.
My question may amount to 'How do I edit a file inside a computer without turning the computer on?', but I don't think the answer has to be to delete the container and create a new one.
Of course, deleting the container and creating a new one would save time and may be the simplest method, but I wanted to think about it differently.
If a company actually operating this Docker container made the same mistake as me and could no longer run the container, that would be a very serious problem, so I assume there must be a way to fix it.
I would appreciate advice on how to solve this.
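Since no answer is shown here, one hedged sketch of an approach: docker cp also works on stopped containers, so you can copy the broken file out, fix it locally, and copy it back before starting the container again (the container name "mariadb" and the sed invocation below are placeholders):

# copy the broken config out of the stopped container
docker cp mariadb:/etc/mysql/my.cnf ./my.cnf
# fix the typo in the local copy (collection-server -> collation-server), e.g. with an editor or:
sed -i '' 's/collection-server/collation-server/' my.cnf    # BSD sed syntax on macOS
# copy the corrected file back in and start the container again
docker cp ./my.cnf mariadb:/etc/mysql/my.cnf
docker start mariadb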

How to run spark-jobs outside the bin folder of spark-2.1.1-bin-hadoop2.7

I have an existing spark-job; its functionality is to connect to a Kafka server, get the data, and then store the data in Cassandra tables. This spark-job currently runs on the server from inside spark-2.1.1-bin-hadoop2.7/bin, but whenever I try to run this spark-job from another location, it does not run. The spark-job contains some JavaRDD-related code.
Is there any chance I can also run this spark-job from outside, by adding a dependency in the pom or something else?
whenever I try to run this spark-job from another location, it does not run
spark-job is a custom launcher script for a Spark application, perhaps with some additional command-line options and packages. Open it, review the content, and fix the issue.
If it's too hard to figure out what spark-job does and there's no one nearby to help you out, it's likely time to throw it away and replace it with the good ol' spark-submit.
Why don't you use that in the first place?!
Read up on spark-submit in Submitting Applications.
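A minimal spark-submit invocation along those lines might look like the following; the main class, jar path, and the Kafka/Cassandra connector coordinates are placeholders rather than values taken from the question:

# spark-submit can be run from any directory by using its full path
/path/to/spark-2.1.1-bin-hadoop2.7/bin/spark-submit \
  --class com.example.KafkaToCassandraJob \
  --master local[*] \
  --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.1.1,com.datastax.spark:spark-cassandra-connector_2.11:2.0.5 \
  /path/to/your-spark-job.jar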

ClassPath resource not found

I'm trying to deploy my Spring Boot-based application to a CloudControl container.
I've added the mysql.free add-on and configured it through my application.properties:
spring.datasource.driverClassName=com.mysql.jdbc.Driver
spring.datasource.max-active=1
spring.datasource.max-idle=1
spring.datasource.min-idle=1
spring.datasource.initial-size=1
spring.datasource.url=jdbc:mysql://${MYSQLS_HOSTNAME}:${MYSQLS_PORT}/${MYSQLS_DATABASE}
spring.datasource.username=${MYSQLS_USERNAME}
spring.datasource.password=${MYSQLS_PASSWORD}
On my local development system everything works perfectly fine, but on the CloudControl container the app won't start.
I added the stack trace here. I've been trying to solve the problem for days, but I am not able to solve it on my own.
Spring apps consume a lot of memory, and the mysqls.free add-on only allows a limited number of parallel connections, although your stack trace doesn't show either of these problems. It's hard to solve this issue without more context, such as logs or environment settings.
The following commands may help:
cctrlapp app_name/default log error # shows startup log
cctrlapp app_name/default addon.creds # shows DB credentials
I've uploaded some Spring Boot example code at https://github.com/cloudControl/spring-boot-example-app, which I tested on cloudControl today.
Please take a look at the configuration there. If you want to deploy it, make sure your container's memory size is >= 768 MB:
cctrlapp app_name/default deploy --memory 768MB
If you still have issues, please contact cloudControl support for help.

How does one run Spring XD in distributed mode?

I'm looking to start Spring XD in distributed mode (more specifically, deploying it with BOSH). How does the admin component communicate with the module containers?
If it's via TCP/HTTP, surely I'll have to tell the admin component where all the containers are? If it's via Redis, I would have thought that I'll need to tell the containers where the Redis instance is?
Update
I've tried running xd-admin and Redis on one box, and xd-container on another with redis.properties updated to point to the admin box. The container starts without reporting any exceptions.
Running the example stream submission curl -d "time | log" http://{admin IP}:8080/streams/ticktock yields no output on either console, and no output in the logs.
If you are using the xd-container script, redis.properties is expected to be under XD_HOME/config, where XD_HOME points to the base directory containing the bin, config, lib and modules directories of XD.
Communication between the Admin and Container runtime components is via the messaging bus, which by default is Redis.
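So every container (and the admin) needs to reach the same Redis instance; a redis.properties along these lines should do it (the property names and host below are assumptions, adjust them to match the sample file shipped with your XD distribution):

# XD_HOME/config/redis.properties on each container host
redis.hostname=redis-box.example.com
redis.port=6379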
Make sure the environment variable XD_HOME is set as per the documentation; if it is not, you will see a logging message that suggests the properties file has been loaded correctly when in fact it has not:
13/06/24 09:20:35 INFO support.PropertySourcesPlaceholderConfigurer: Loading properties file from URL [file:../config/redis.properties]
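As a quick sanity check, exporting the variable explicitly before launching avoids that misleading relative-path message (the installation path below is a placeholder):

export XD_HOME=/opt/spring-xd
$XD_HOME/bin/xd-container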
