Is there a way of persisting Quarkus dev services databases? Maybe a way of using volumes, but I cannot find any reference. I am thinking of something like a (non-existent) property quarkus.datasource.devservices.volume=some_volume that would reuse the existing volume some_volume with the spun-up Docker container.
Maybe what you can do for now is disable the database startup in Dev Services (see link 3 below), add a QuarkusTestResource to your test class, and start your own Docker image with a volume mounted to your disk.
The next time you start your test, the data should still be available, as long as it points to the same volume mount. Also make sure you don't use @TestTransaction, otherwise the transaction will be rolled back at the end of the test.
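A minimal sketch of such a test resource, assuming Testcontainers is on the classpath (the class name, image tag, and host path are placeholders):

import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;
import org.testcontainers.containers.BindMode;
import org.testcontainers.containers.PostgreSQLContainer;

import java.util.Map;

public class PersistentPostgresResource implements QuarkusTestResourceLifecycleManager {

    // bind a host directory to the Postgres data directory so the data
    // survives across container (and test) restarts
    static final PostgreSQLContainer<?> POSTGRES =
            new PostgreSQLContainer<>("postgres:14")
                    .withFileSystemBind("/path/on/host/pgdata",
                            "/var/lib/postgresql/data", BindMode.READ_WRITE);

    @Override
    public Map<String, String> start() {
        POSTGRES.start();
        // hand the connection details to Quarkus instead of Dev Services
        return Map.of(
                "quarkus.datasource.jdbc.url", POSTGRES.getJdbcUrl(),
                "quarkus.datasource.username", POSTGRES.getUsername(),
                "quarkus.datasource.password", POSTGRES.getPassword());
    }

    @Override
    public void stop() {
        POSTGRES.stop();
    }
}

Then annotate your test class with @QuarkusTestResource(PersistentPostgresResource.class) and set quarkus.datasource.devservices.enabled=false in application.properties so Dev Services doesn't start its own database.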
Maybe these links can help you:
cheat sheet: continuous testing
cheat sheet: dev-services
dev services guide
Related
I'm new to Axon and Docker, and I would like to start Axon Server in Docker using development mode in order to clear events, as I'm in the process of building a system and my events and commands change often.
I read in the Axon documentation that a certain property, axoniq.axonserver.devmode.enabled (defaults to false), has to be set. I also know that Axon uses Spring Boot, so I guess I would need to somehow access axonserver.properties on Docker, but here is the problem: I don't know how.
I would be thankful if anyone could explain how to change this configuration.
Fortunately, Axon has been publishing blog posts about running Axon Server, and in one of them they explain how to run it on Docker =)
Blog post: https://axoniq.io/blog-overview/running-axon-server-in-docker
The important part, in your case, is here:
A third directory, not marked as a volume in the image, is important for our case: If you put an “axonserver.properties” file in “/config”, it can override the settings above and add new ones:
Which means you can create your axonserver.properties in this directory with the desired property (axoniq.axonserver.devmode.enabled=true) and Axon Server will pick it up from there!
On the other hand, you can also set the environment variable AXONIQ_AXONSERVER_DEVMODE_ENABLED to true.
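For example (a sketch; the axoniq/axonserver image name and the host path are assumptions, adjust to your setup):

# option 1: mount a host directory containing axonserver.properties into /config
docker run -d --name axonserver -p 8024:8024 -p 8124:8124 \
  -v /path/on/host/config:/config axoniq/axonserver

# option 2: set the environment variable directly
docker run -d --name axonserver -p 8024:8024 -p 8124:8124 \
  -e AXONIQ_AXONSERVER_DEVMODE_ENABLED=true axoniq/axonserver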
Hope it helps.
How do I set up an external database (MySQL or Postgres; I'm not concerned with which one at this point) for use with the Spring Batch metadata?
At the moment I have Spring Batch writing the results of jobs to MongoDB, and that works fine, but I'm not keeping track of job status, so the jobs are being run from the start every time, even if interrupted halfway through.
There are plenty of examples of how to avoid this, but I can't seem to find a clear answer on what I need to configure to send the metadata somewhere real rather than in-memory.
I attempted adding a properties file, but that had no effect:
# for Postgres:
batch.jdbc.driver=org.postgresql.Driver
batch.jdbc.url=jdbc:postgresql://localhost/postgres
batch.jdbc.user=postgres
batch.jdbc.password=mysecretpassword
batch.database.incrementer.class=org.springframework.jdbc.support.incrementer.PostgreSQLSequenceMaxValueIncrementer
batch.schema.script=classpath:/org/springframework/batch/core/schema-postgresql.sql
batch.drop.script=classpath:/org/springframework/batch/core/schema-drop-postgresql.sql
batch.jdbc.testWhileIdle=false
batch.jdbc.validationQuery=
You need to configure a bean of type DataSource in your batch application context (or extend DefaultBatchConfigurer and set the data source you want to use to store the metadata).
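A minimal sketch of such a configuration with Java config (connection details taken from your properties file; adjust as needed):

import javax.sql.DataSource;

import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
@EnableBatchProcessing
public class BatchConfig {

    // Spring Batch will pick up this DataSource for the JobRepository, so job
    // and step execution metadata ends up in Postgres instead of in-memory
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.postgresql.Driver");
        dataSource.setUrl("jdbc:postgresql://localhost/postgres");
        dataSource.setUsername("postgres");
        dataSource.setPassword("mysecretpassword");
        return dataSource;
    }
}

You also need to create the metadata tables once, e.g. by running the schema-postgresql.sql script that ships with spring-batch-core.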
There are many samples here: https://github.com/spring-projects/spring-batch/tree/master/spring-batch-samples
You can find the data source configuration here: https://github.com/spring-projects/spring-batch/blob/master/spring-batch-samples/src/main/resources/data-source-context.xml
I saw that Neo4j can run as an impermanent DB for unit testing purposes, but I'm not sure if this fits my needs. I have my data stored in Neo4j the usual way (persistent), but, starting from my data, I want to let each user begin an "experimental session": the user adds/deletes nodes and relationships, but NOT in a permanent way, just experimenting with the data (after the session the edits should be lost). The edits shouldn't be saved, and obviously they shouldn't be visible to the others. What's the best way to accomplish that?
Using an impermanent database should work (a sketch of creating one follows this list). Keep in mind that:
you would need to import the data into each new database
spring-data-neo4j is not able to connect to multiple databases (in the current release), so you would need to start multiple instances of your application, e.g. in a Tomcat container
when your application stops (or crashes), you would obviously lose the data
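A minimal sketch of spinning up such a database, assuming the Neo4j test artifact (the neo4j-kernel test-jar) is on the classpath:

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.test.TestGraphDatabaseFactory;

// a throwaway in-memory database; everything is discarded on shutdown
GraphDatabaseService db = new TestGraphDatabaseFactory().newImpermanentDatabase();
try (Transaction tx = db.beginTx()) {
    // import the user's experimental copy of the data here
    tx.success();
}
// ... let the user experiment ...
db.shutdown(); // all edits from the session are lost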
Or you could potentially use only one database, with the base data being public (= visible to everyone), and then add an owner property to all new nodes/relationships.
When querying the data you would check that the property is either public or the current user.
At the end of the session you would just delete all nodes and relationships with the given owner.
If you also want to edit existing data, it gets more complicated: you could create a copy of the node/relationship and somehow handle that, or, if the dataset is not too large, copy the whole dataset.
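A sketch of what those queries could look like in Cypher (the Item label and the property values are made up; DETACH DELETE needs Neo4j 2.3+, on older versions delete the relationships first):

// tag experimental data with its owner on creation
CREATE (n:Item {name: 'test', owner: 'alice'})

// read only data that is public or belongs to the current user
MATCH (n:Item) WHERE n.owner = 'public' OR n.owner = 'alice' RETURN n

// end of session: throw away everything this user created
MATCH (n:Item {owner: 'alice'}) DETACH DELETE n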
You can build a Docker image from the neo4j base image (or build your own) and copy your graph.db into it.
Then you can have every user start a Docker container from said image.
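A minimal Dockerfile sketch of that idea (the image tag and the data path inside the image depend on your Neo4j version, so treat both as assumptions):

FROM neo4j:3.5
# bake a copy of the existing graph into the image; every container started
# from the image then works on its own throwaway copy of the data
COPY graph.db /data/databases/graph.db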
If that doesn't answer your question, more info is needed.
I want to run an arbitrary application inside a Docker container safely, like within a VM. To do so I save the application (which I downloaded from the web and don't trust) in a directory of the host system, create a volume that maps this directory to the home directory of the container, and then run the application inside the container. Are there any security issues with this approach? Are there better solutions to accomplish the same task?
Moreover, to install all the necessary dependencies, I allow an arbitrary script to be executed in a bash shell running inside the container: could this be dangerous?
To add to @Dimitris' answer, there are other things you need to consider.
There are certain things containers do not contain. Docker uses namespaces to alter the process's view of the system, e.g. network, shared memory, etc. But you have to keep in mind that it is not like KVM: unlike a VM, a containerized process talks to the kernel directly, for example through /proc/sys.
So if the arbitrary application tries to access kernel subsystems like cgroups, /proc/sys, /proc/bus, etc., you could be in trouble. I would say it's fine unless it's a multi-tenant system.
As long as you do not give the application sudo access, you should be good to try it out.
Dependencies are better off defined in the Dockerfile in a clear way for others to see. Opting to run a script instead will also do the job, but it's less convenient.
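A sketch of a locked-down invocation along those lines (the host path and image are placeholders; the flags are standard Docker options):

# run as an unprivileged user, drop all capabilities, disable networking,
# and mount the untrusted application's directory from the host
docker run --rm -it \
  --user nobody \
  --cap-drop ALL \
  --network none \
  --security-opt no-new-privileges \
  -v /host/untrusted-app:/home/app \
  ubuntu /bin/bash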
I'm creating an init.d script that will run a couple of tasks when the instance starts up.
it will create a new volume with our code repository and mount it if it doesn't exist already.
it will tag the instance
Completing the tasks above is crucial for our site (i.e. without the code repository mounted, the site won't work). How can I make sure that the server doesn't become publicly visible before they finish? Should I start my init.d script by de-registering the instance from the ELB (I'm not even sure it will be registered at that point), and then register it again when all the tasks have finished successfully?
What is the best practice?
Thanks!
You should have a health check on your ELB, so your server won't receive traffic until it reports as healthy. And it shouldn't report healthy if the boot script errors out.
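For example, with a classic ELB you can point the health check at an endpoint (or port) that only becomes available once your boot script has finished; the load balancer name and target path here are placeholders:

aws elb configure-health-check \
  --load-balancer-name my-load-balancer \
  --health-check Target=HTTP:80/health,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2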
(Also, you should look into using cloud-init. That way you can change the boot script without making a new AMI.)
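A minimal cloud-init user-data sketch of the boot tasks (the volume-mounting helper script is hypothetical; the tagging command reads the instance id from the metadata service):

#cloud-config
runcmd:
  # hypothetical helper that creates/attaches the code volume and mounts it
  - /usr/local/bin/mount-code-volume.sh
  # tag the instance using its own instance-id from the metadata service
  - aws ec2 create-tags --resources "$(curl -s http://169.254.169.254/latest/meta-data/instance-id)" --tags Key=Role,Value=web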
I suggest you use CloudFormation instead. You can bring up a full stack of your system by representing it in a JSON template.
For example, you can create an Auto Scaling group whose instances have unique tags and an additional volume attached (which presumably holds your code).
Here's a sample JSON template attaching an EBS volume to an instance:
https://s3.amazonaws.com/cloudformation-templates-us-east-1/EC2WithEBSSample.template
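The relevant part of such a template looks roughly like this (the AMI parameter, size, and device name are placeholders, following the pattern of the sample above):

"Ec2Instance" : {
  "Type" : "AWS::EC2::Instance",
  "Properties" : {
    "ImageId" : { "Ref" : "AmiId" },
    "Volumes" : [ { "VolumeId" : { "Ref" : "CodeVolume" }, "Device" : "/dev/sdh" } ]
  }
},
"CodeVolume" : {
  "Type" : "AWS::EC2::Volume",
  "Properties" : {
    "Size" : "10",
    "AvailabilityZone" : { "Fn::GetAtt" : [ "Ec2Instance", "AvailabilityZone" ] }
  }
}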
And here many other JSON templates that you can use for your guidance and deploy your specific Stack and Application.
http://aws.amazon.com/cloudformation/aws-cloudformation-templates/
Of course, you can accomplish the same using an init.d script or the rc.local file in your instance, but I believe CloudFormation is a cleaner solution, driven from outside the instance rather than from within it.
You could also write your own script that brings up your stack from the outside, but why reinvent the wheel?
Hope this helps.