I saw that Neo4j can run as an impermanent DB for unit testing purposes, but I'm not sure if this fits my needs. I have my data stored in Neo4j the usual way (persistent) but, starting from my data, I want to let each user start an "experimental session": the users add/delete nodes and relationships, but NOT in a permanent way, just experimenting with the data (after the session the edits should be lost). The edits shouldn't be saved and obviously they shouldn't be visible to others. What's the best way to accomplish that?
Using an impermanent database should work. You would need to import the data into each new database.
spring-data-neo4j is not able to connect to multiple databases (in the current release), so you would need to start multiple instances of your application, e.g. in a Tomcat container.
When your application stops (or crashes) you would obviously lose the data.
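For reference, here is a minimal sketch of such a throwaway database using the Neo4j 3.x embedded test API (an assumption; it requires the Neo4j test artifacts on the classpath, and the label/property names are just illustrative):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.test.TestGraphDatabaseFactory;

public class ImpermanentDemo {
    public static void main(String[] args) {
        // Lives purely in memory; nothing is written to disk
        GraphDatabaseService db = new TestGraphDatabaseFactory().newImpermanentDatabase();
        try (Transaction tx = db.beginTx()) {
            // Import or create the experimental data here
            db.execute("CREATE (:Person {name: 'demo'})");
            tx.success();
        }
        db.shutdown(); // all edits are gone once the database shuts down
    }
}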
Or you could potentially use only one database, with the base data being public (= visible to everyone), and then add an owner property to all new nodes/relationships.
When querying the data you would check that the property is either public or the current user.
At the end of the session you would just delete all nodes and relationships with the given owner.
If you also want to edit existing data then it gets more complicated: you could create a copy of the node/relationship and somehow handle that, or, if the dataset is not too large, copy the whole thing.
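Here is a rough sketch of the owner-property approach using the Neo4j Java driver (4.x); the bolt URL, credentials, label and property names are assumptions for illustration:

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;

import static org.neo4j.driver.Values.parameters;

public class ExperimentalSession {

    private final Driver driver =
            GraphDatabase.driver("bolt://localhost:7687", AuthTokens.basic("neo4j", "secret"));

    // New nodes created during a session are tagged with the session owner
    public void createNode(String owner, String name) {
        try (Session session = driver.session()) {
            session.run("CREATE (n:Item {name: $name, owner: $owner})",
                    parameters("name", name, "owner", owner));
        }
    }

    // Queries see the public base data plus the current user's experimental data
    public void listVisible(String owner) {
        try (Session session = driver.session()) {
            session.run("MATCH (n:Item) WHERE n.owner = 'public' OR n.owner = $owner RETURN n",
                    parameters("owner", owner))
                    .forEachRemaining(record -> System.out.println(record.get("n")));
        }
    }

    // At the end of the session, discard everything the user created
    public void endSession(String owner) {
        try (Session session = driver.session()) {
            session.run("MATCH (n:Item {owner: $owner}) DETACH DELETE n",
                    parameters("owner", owner));
        }
    }
}

DETACH DELETE removes the user's nodes together with any relationships attached to them, so the cleanup at the end of the session is a single query.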
You can build a docker image from the neo4j base image (or build your own) and copy your graph.db into it.
Then you can have every user start a docker container from said image.
If that doesn't answer your question, more info is needed.
Is there a way of persisting Quarkus dev services databases? Maybe by using volumes, but I cannot find any reference. I am thinking of something like a (non-existing) property quarkus.datasource.devservices.volume=some_volume that would reuse the existing volume some_volume with the spun-up Docker container.
Maybe what you can do for now is disable database startup from dev services (see link 3 below), add a QuarkusTestResource to your test class, and start your own Docker image with a volume mount to your disk.
The next time you start your test, the data should be available as long as it points to the same volume mount. Also make sure that you don't use TestTransaction; otherwise the transaction will be rolled back at the end of the test.
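Here is a sketch of what such a test resource could look like with Testcontainers; the image name, host path and config keys are assumptions. You would register it with @QuarkusTestResource on the test class and disable the dev-services database, e.g. with quarkus.datasource.devservices.enabled=false:

import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;
import org.testcontainers.containers.BindMode;
import org.testcontainers.containers.PostgreSQLContainer;

import java.util.Map;

public class PersistentDbResource implements QuarkusTestResourceLifecycleManager {

    private PostgreSQLContainer<?> postgres;

    @Override
    public Map<String, String> start() {
        postgres = new PostgreSQLContainer<>("postgres:15")
                // Mount a host directory as the data directory so state survives between runs
                .withFileSystemBind("/home/me/pgdata", "/var/lib/postgresql/data", BindMode.READ_WRITE);
        postgres.start();
        // Point Quarkus at this container instead of a dev-services database
        return Map.of(
                "quarkus.datasource.jdbc.url", postgres.getJdbcUrl(),
                "quarkus.datasource.username", postgres.getUsername(),
                "quarkus.datasource.password", postgres.getPassword());
    }

    @Override
    public void stop() {
        if (postgres != null) {
            postgres.stop();
        }
    }
}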
Maybe these links can help you:
cheat sheet: continuous testing
cheat sheet: dev services
dev services guide
I am working on a microservice (Spring Boot) that requires storing some static information that changes infrequently (once per quarter). The data (below) is about company reports and looks like:
reportId#1: "frequency":"daily", "to":"some email ids"
reportId#2: "frequency":"weekly", "to":"some email ids"
As you can see, an entry in the data is basically a report id, and the associated attributes are the frequency of the report and the receivers' email ids.
My question is: what is the best place to store this information? I have some thoughts, and here are my views.
a) A NoSQL DB like MongoDB seems to be a good option. I can create a collection, store the data there, and retrieve it once during app startup. But then I wondered: is creating a collection just to store this static info a good choice?
b) Redis seems to be another good option. I can create a template for the above dataset and store it there. I can query Redis by reportId to retrieve the frequency and the recipient list.
c) Store it in a file on the classpath and load it at app startup. The downside is that I will have to redeploy the app with the updated file whenever the report listing changes. I believe externalizing this information to either Mongo or Redis is a better option.
d) The app is running in AWS, so I could even store this in a file in an S3 bucket.
I would like to know your views.
Since the config will only change once a quarter, the overhead of a database is not required. You should consider Apache Commons Configuration. It allows you to reload config changes from files without the need for an application restart.
http://commons.apache.org/proper/commons-configuration/userguide/howto_reloading.html
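For illustration, a minimal sketch of the reloading setup with Commons Configuration 2 (the file name, key layout and one-minute check period are assumptions):

import java.io.File;
import java.util.concurrent.TimeUnit;

import org.apache.commons.configuration2.FileBasedConfiguration;
import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.builder.ReloadingFileBasedConfigurationBuilder;
import org.apache.commons.configuration2.builder.fluent.Parameters;
import org.apache.commons.configuration2.ex.ConfigurationException;
import org.apache.commons.configuration2.reloading.PeriodicReloadingTrigger;

public class ReportConfig {

    private final ReloadingFileBasedConfigurationBuilder<FileBasedConfiguration> builder;

    public ReportConfig() {
        builder = new ReloadingFileBasedConfigurationBuilder<FileBasedConfiguration>(PropertiesConfiguration.class)
                .configure(new Parameters().fileBased().setFile(new File("reports.properties")));
        // Re-check the file for changes at most once per minute
        new PeriodicReloadingTrigger(builder.getReloadingController(), null, 1, TimeUnit.MINUTES).start();
    }

    // e.g. a line in reports.properties: report1.frequency=daily
    public String frequencyOf(String reportId) throws ConfigurationException {
        return builder.getConfiguration().getString(reportId + ".frequency");
    }
}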
How do I set up an external database (MySQL, Postgres; I'm not concerned with which one at this point) for use with Spring Batch metadata?
At the moment I have Spring Batch writing the results of jobs to MongoDB and that works fine, but I'm not keeping track of job status, so the jobs are run from the start every time, even if interrupted halfway through.
There are plenty of examples of how to avoid doing this, but I can't seem to find a clear answer on what I need to configure to send the metadata somewhere real rather than in-memory.
I attempted adding a properties file, but that had no effect:
# for Postgres:
batch.jdbc.driver=org.postgresql.Driver
batch.jdbc.url=jdbc:postgresql://localhost/postgres
batch.jdbc.user=postgres
batch.jdbc.password=mysecretpassword
batch.database.incrementer.class=org.springframework.jdbc.support.incrementer.PostgreSQLSequenceMaxValueIncrementer
batch.schema.script=classpath:/org/springframework/batch/core/schema-postgresql.sql
batch.drop.script=classpath:/org/springframework/batch/core/schema-drop-postgresql.sql
batch.jdbc.testWhileIdle=false
batch.jdbc.validationQuery=
There are plenty of examples of how to avoid doing this, but I can't seem to find a clear answer on what I need to configure to send the metadata somewhere real rather than in-memory.
You need to configure a bean of type DataSource in your batch application context (or extend DefaultBatchConfigurer and set the data source you want to use to store the metadata).
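For example, a minimal sketch of such a DataSource bean, reusing the connection details from the properties file above (if you're on Spring Boot 2.x, the metadata tables can also be created at startup via spring.batch.initialize-schema=always):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class BatchMetadataConfig {

    // Spring Batch will use this DataSource for its JobRepository metadata tables
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl("jdbc:postgresql://localhost/postgres");
        ds.setUsername("postgres");
        ds.setPassword("mysecretpassword");
        return ds;
    }
}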
There are many samples here: https://github.com/spring-projects/spring-batch/tree/master/spring-batch-samples
You can find the data source configuration here: https://github.com/spring-projects/spring-batch/blob/master/spring-batch-samples/src/main/resources/data-source-context.xml
I am trying to figure out how GlobalKTable works, and I noticed that my in-memory key-value store is not refilled after a restart. However, the documentation suggests it should be, since the whole data is duplicated on each client.
When I debug my application I see that there is a file at /tmp/kafka-streams/category-client-1/global/.checkpoint, and it includes an offset for my topic. This may be necessary for stores that persist their data, to speed up restarts; however, since there is an offset in this file, my application skips restoring its state.
How can I be sure that each restart or fresh start includes the whole data of my topic?
Because you are using an in-memory store, I assume that you are hitting this bug: https://issues.apache.org/jira/browse/KAFKA-6711
As a workaround, you can delete the local checkpoint file for the global store; this will trigger the bootstrapping on restart. Or you can switch back to the default RocksDB store.
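A sketch of the first workaround, deleting the checkpoint file before starting the topology (the state directory and application id below simply mirror the path from the question; adjust them to your configuration):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CheckpointCleaner {

    public static void deleteGlobalCheckpoint(String stateDir, String applicationId) throws IOException {
        Path checkpoint = Paths.get(stateDir, applicationId, "global", ".checkpoint");
        // With the checkpoint gone, Kafka Streams re-bootstraps the global store from the topic
        Files.deleteIfExists(checkpoint);
    }

    public static void main(String[] args) throws IOException {
        // Matches /tmp/kafka-streams/category-client-1/global/.checkpoint from the question
        deleteGlobalCheckpoint("/tmp/kafka-streams", "category-client-1");
        // ... then build and start the KafkaStreams instance as usual
    }
}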
What is the best way to save data in session variables in a classic ASP web site?
I am maintaining a classic ASP web site and want to be able to let my users demo all functionality of the site; this means allowing them to delete records.
The closest example I have seen so far is the demos of the Telerik controls, where they save the dataset in session on first load and let the user manipulate the data.
How can I achieve the same in ASP with an MS Access backend?
If you want to persist the state over multiple pages (e.g. to demo your complete application) then it's a bit tricky.
I would suggest copying the MDB file for each session and using the copied version. This ensures that every session uses its own data.
Create a version of your Access DB that will be used as a fresh template for each user.
On session start, copy the template and name it after the user's session ID.
Use that individual MDB for the rest of the session.
Note: the only drawback I can see here is that you need to remove the unused MDB files, as they can pile up after a while. You could do that with a scheduled task, or even on session start before you create a new copy.
I am not sure what you can use to check whether a copy is still in use, but the file's creation date may help, or maybe the LDB lock file can help you as well (if it does not exist = unused).
You can store a connection or even an object in a session variable, as long as you remember what kind of variable you are storing at retrieval time. I have never stored a dataset in a session variable, but I have stored a lot of arrays in session variables, so you can use the ADO GetRows method to load a complete recordset into a session variable.
How big is the Access database? If your database is small enough (relative to the server capacity, expected number of users, and so forth) then I like the idea of using a fresh copy of the database for each user that runs the demo.
With this approach, you simplify your possible code paths. Otherwise this "are we in demo mode or not?" logic will permeate a heck of a lot of your code.
I'd do it like this...
When the user begins the demo, make a copy of the Access DB for that user to use. If your db is foo.mdb, copy it to /tempdb/foo_1234567890.mdb, where 1234567890 is the user's session ID.
Alter the user's connection string to point to the fresh database copy. From this point on, your app can operate as normal with no further modifications.
Have a scheduled task that deletes all files in /tempdb with last-modified times more than __ hours in the past. If you don't have the ability to schedule tasks on the server (perhaps you're in a shared hosting environment, etc.), then you could do this at the same time as step #1.
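For illustration, a minimal sketch of that cleanup task, written in Java only because it runs easily from a scheduler (on a classic ASP stack a small VBScript would do the same job); the /tempdb path and the 24-hour threshold are assumptions:

import java.io.File;
import java.time.Duration;
import java.time.Instant;

public class TempDbCleanup {

    public static void main(String[] args) {
        File tempDb = new File("/tempdb");
        Instant cutoff = Instant.now().minus(Duration.ofHours(24));

        File[] files = tempDb.listFiles();
        if (files == null) {
            return; // directory missing or unreadable
        }
        for (File f : files) {
            // Delete session copies whose last modification is older than the cutoff
            if (f.isFile() && Instant.ofEpochMilli(f.lastModified()).isBefore(cutoff)) {
                f.delete();
            }
        }
    }
}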