Spring Boot @Async in a multi-server environment - spring

We have a requirement to write to the database in @Async mode. We have two servers and one DB. It works fine in the local environment, but in production the request goes to both servers, so a single request inserts the same records into the DB twice.
Please let me know how to fix this issue.

Both servers received the same request? Are you not using nginx for load balancing?
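Regardless of the load-balancing question, one defensive option is to make the write idempotent, so a duplicated request cannot insert twice. A minimal sketch, assuming the request carries a unique identifier and that records.request_id has a UNIQUE constraint (all names are illustrative, and @EnableAsync is assumed to be configured elsewhere):
import org.springframework.dao.DuplicateKeyException;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class AsyncWriter {

    private final JdbcTemplate jdbcTemplate;

    public AsyncWriter(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Async
    public void saveRecord(String requestId, String payload) {
        try {
            // request_id has a UNIQUE constraint, so a second insert of the
            // same request fails with an exception instead of duplicating the row.
            jdbcTemplate.update(
                    "INSERT INTO records (request_id, payload) VALUES (?, ?)",
                    requestId, payload);
        } catch (DuplicateKeyException e) {
            // The other server already stored this request; ignore.
        }
    }
}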

Related

Oracle ORDS: GET request returns old data, then after a period of time the changed data

I am having a problem with Oracle REST Data Services (ORDS for short) and I can't find a solution.
The problem is as follows:
We are using ORDS via a Tomcat web server, and I have two endpoints defined: one to update a dataset and one to get all datasets from this table.
If I update a value via my endpoint, the change is written to the table, but if I then try to get the table with this change, ORDS responds only with the old, unchanged table. After a certain period of time while constantly retrying the GET, it responds with the expected values (after at most 1 minute, sometimes earlier).
Because of this behaviour I suspected some type of caching, but I can't find any such configuration in the Oracle database or on Tomcat.
Another point for this theory: I logged what happens in my GET procedure and found that only the one request with the correct values gets logged, as if the others didn't even happen.
The requests returning the old value come back in the 4-8 ms range, while the request with the correct data takes 100-200 ms.
Ty for your help :)
I tried logging what happens, but found that only the request with the fresh values was logged.
I tried restarting the Tomcat web server to make sure the cache was cleared, but this didn't fix the problem.
I searched for a configuration in ORDS or Oracle where a cache would be defined, but none was ever set.
I tried setting the value via a SQL UPDATE instead of the endpoint, but even then the change shows up only delayed.
Do you have a full overview of the communication path? Maybe there is a proxy in between?
If Tomcat has no caching configuration, and you restarted the web server during your tests and still have the same issue, then there is maybe something more going on...
Kind regards
M-Achilles

Activate Batch on only one Server instance

I have an nginx load balancer in front of two Tomcat instances, each containing a Spring Boot application. Each Spring Boot application executes a batch that writes data to a database.
The batch executes every day at 1am.
The problem is that both instances execute the batch simultaneously, which I don't want.
Is there a way to keep the batch deployed on both instances and tell Tomcat or nginx to start the batch on the master server (while the slave server doesn't run it)?
If one of the servers stops, the second server could start the batch on its behalf.
Is there a tool in nginx or Tomcat (or some other technology) to do that?
Thank you in advance.
Here is a simplistic design approach.
Since you have two scheduled methods in the two VMs triggered at the same time, add a random delay to both. This answer has many options for delaying the trigger by a random duration: Spring @Scheduled annotation random delay.
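For illustration, a minimal sketch of such a random delay (the delay bound is made up, and @EnableScheduling is assumed elsewhere). Note that a random initial delay combines with fixed-delay scheduling, not with a cron trigger, since @Scheduled does not support initialDelay together with cron:
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class StaggeredBatchTrigger {

    // Fires roughly once a day; each VM waits a random 0-60 s before its
    // first run, so the two instances do not start at the same instant.
    @Scheduled(fixedDelayString = "86400000",
            initialDelayString = "#{ T(java.util.concurrent.ThreadLocalRandom).current().nextLong(60000) }")
    public void triggerBatch() {
        // delegate to the guarded batch method shown below
    }
}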
Inside the method, run the job only if it is NOT already started (by the other VM). This can be tracked with a new database table.
Here is that design fleshed out into a runnable sketch (it assumes an injected JdbcTemplate and a batch_lock table with columns job_name and running; both are illustrative):
@Scheduled(cron = "0 0 1 * * *") // every day at 1am
public void batchUpdateMethod() {
    // Claim the job atomically; the UPDATE matches the row on only one instance.
    int claimed = jdbcTemplate.update(
            "UPDATE batch_lock SET running = 1 WHERE job_name = 'dailyBatch' AND running = 0");
    if (claimed == 1) {
        try {
            runBatchJob(); // the actual batch work
        } finally {
            // Mark the job finished so the next run can claim the lock again.
            jdbcTemplate.update(
                    "UPDATE batch_lock SET running = 0 WHERE job_name = 'dailyBatch'");
        }
    }
}
The database, or some common file location, should be used as a lock to synchronize the two runs, since the two VMs are independent of each other.
For a more robust design, consider Spring Batch.
Spring Batch uses a database for its jobs (the JobRepository). By default, an in-memory datasource is used to keep track of running jobs and their status, so in your setup the two instances are (most likely) each using their own in-memory database.
Multiple instances of Spring Batch can coordinate with each other as a cluster, with one running jobs while the other acts as a backup, if the JobRepository database is shared.
For this you need to configure the two instances to use a common datasource.
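A minimal sketch of such a common datasource (assuming Spring Batch 4, where @EnableBatchProcessing builds the JobRepository on the application's DataSource; the driver, host, and credentials below are placeholders):
import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
@EnableBatchProcessing
public class SharedJobRepositoryConfig {

    // Both instances must point at this same external database so the
    // JobRepository can coordinate job executions between them.
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.postgresql.Driver");            // placeholder driver
        ds.setUrl("jdbc:postgresql://shared-db-host:5432/batchdb"); // placeholder URL
        ds.setUsername("batch");
        ds.setPassword("secret");
        return ds;
    }
}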
Here are some docs:
https://docs.spring.io/spring-batch/docs/current/reference/html/index-single.html#jobrepository
https://docs.spring.io/spring-batch/docs/current/reference/html/job.html#configuringJobRepository
If you design two app server instances to run the same job at the same time then, by design, one will succeed in creating the job instance and the other will fail (and this failure can be ignored). See the Javadoc of JobRepository. This is one of the roles of the job repository: to act as a safeguard against duplicate job executions in a clustered environment.
If one of the servers stops, the second server could start the batch on its behalf. Is there a tool in nginx or Tomcat (or some other technology) to do that?
I believe there is no need for such a tool or technology. If one of the servers is down at the time of the schedule, the other will be able to take over and succeed in launching the job.
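A hedged sketch of what that looks like at launch time (job and launcher wiring omitted; the runDate parameter name is an assumption). Identical JobParameters on both VMs map to the same JobInstance, so the shared JobRepository lets only one execution proceed:
import java.time.LocalDate;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.repository.JobExecutionAlreadyRunningException;
import org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException;

public class GuardedLaunch {

    public void launchDaily(JobLauncher jobLauncher, Job dailyJob) throws Exception {
        try {
            jobLauncher.run(dailyJob, new JobParametersBuilder()
                    .addString("runDate", LocalDate.now().toString()) // same value on both VMs
                    .toJobParameters());
        } catch (JobExecutionAlreadyRunningException | JobInstanceAlreadyCompleteException e) {
            // The other instance got there first; this failure can be ignored.
        }
    }
}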
I implemented a simple BCM server functionality where all servers register (create a Server-table entry) with their unique IP. The servers need to re-register within a defined time (e.g. 10 sec). If a server does not re-register in time (last update timestamp > 10 sec), it gets de-registered (its Server-table entry deleted) by the servers that do register.
In the end I have a table of ordered server entries and can assign each task uniquely to one of the registered servers.
The implementation is very simple and works perfectly.
Before that I also had the Spring Batch job-sharing functionality in mind, but I wanted to have a more lightweight and more flexible solution.
Currently I use it in all my projects where I need batch processing.
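For illustration, a rough sketch of that registration scheme (the server_registry table, the MySQL-flavored SQL, and the 5/10-second intervals are assumptions based on the description above):
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;

public class ServerRegistry {

    private final JdbcTemplate jdbcTemplate;
    private final String myIp; // this server's unique IP

    public ServerRegistry(JdbcTemplate jdbcTemplate, String myIp) {
        this.jdbcTemplate = jdbcTemplate;
        this.myIp = myIp;
    }

    // Each server re-registers every 5 seconds.
    @Scheduled(fixedDelay = 5000)
    public void heartbeat() {
        int updated = jdbcTemplate.update(
                "UPDATE server_registry SET last_seen = NOW() WHERE ip = ?", myIp);
        if (updated == 0) {
            jdbcTemplate.update(
                    "INSERT INTO server_registry (ip, last_seen) VALUES (?, NOW())", myIp);
        }
        // De-register servers whose last update is older than 10 seconds.
        jdbcTemplate.update(
                "DELETE FROM server_registry WHERE last_seen < NOW() - INTERVAL 10 SECOND");
    }

    // A task runs only on the first server of the ordered registry.
    public boolean ownsTask() {
        String owner = jdbcTemplate.queryForObject(
                "SELECT ip FROM server_registry ORDER BY ip LIMIT 1", String.class);
        return myIp.equals(owner);
    }
}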

Is it possible to make a runtime db connection and use it in Schema, DB and models without effecting configs?

I want to use dynamic databases at runtime without touching config/database.php, because of concurrent users.
I have a main DB with a table that contains references to several other DBs. At runtime I need to not only connect to those DBs but may also want to run migrations on them.
I am aware that this is possible by having a second connection entry in config.database.connections, but I have a feeling that if two users hit the server at the same time, physical changes to the config file may create a conflict.
I also read (and experimented) that you can edit the second connection at runtime using the code below:
\Config::set('database.connections.mysql2.database', 'somedynamicdb');
DB::purge('mysql2');
But I fear that if it persists the changes across different users, it may conflict for concurrent users. And if it does not persist the changes, it won't work for migrations.
I want to understand/know two things specifically:
What is the scope of the code above (i.e. the Config::set() call)? Does it persist across different user calls to the server?
If I run migrations using Artisan::call('migrate') with a --database=connectionname clause right after changing the DB name in connectionname, will that use the dynamically set database or the physical config value?
UPDATE
Also worth noting that a call to Artisan::call('migrate') with --database=connectionname will make the new connection persist for the rest of your app call.
See here for details:
https://github.com/laravel/framework/issues/28253
Config::set will only apply to the request for which it was set; it won't apply to any other requests and will not persist beyond the request. If you're not processing a request (e.g. a CLI command), it won't affect anything beyond the current PHP process.
As for item #2: if you're invoking from the command line, you can just do DB_CONNECTION=connectionname php artisan migrate. If you need to invoke the artisan command from code, using Config::set is still the right way to go.
We create connections on the fly here all the time and it works very well. We set this up in middleware that runs after authentication; the connection is only valid for the current user's request, based on their login information.

Multiple iDempiere instances in one server

I need to install multiple iDempiere instances on one server. The customized packages differ in their build and in the DB they use. Is there any way to deploy both on one server and access them like localhost:8080/client1 and localhost:8080/client2? Any help appreciated.
When I want to run several application servers, I copy the various installations to different paths
and change the database name and port of each application:
/opt/idempiere-server-production/ (on port 8080, for example) for production
and
/opt/idempiere-server-test/ (on port 8081, for example) for test.
The URL scheme you asked for is not possible, because the iDempiere web app is always reachable at
http://hostname:port/webui
Running multiple instances of iDempiere on a single server is not too difficult.
Here is what you need to take care of:
Install the instances into different directories. The instances do not need to share any common files, so you are fine making a full installation for each instance.
Make sure each instance uses its own database. Use different names for the instance databases.
Make sure the iDempiere server instances use different TCP ports.
If you really need to use a single port to access all of the instances, you could use an HTTP server like Apache or nginx to define virtual hosts. Proxying or rewrite rules will then allow you to do the desired redirections. (I am using subdomains and Apache mod_proxy to do the job.)
There is another benefit to using subdomains for browser access: if all your server instances use the same host name, the client browser will sometimes not be able to keep cookies from different instances apart, which can lead to a blocked session, as discussed here in the iDempiere Google group.
Use different DB user names. The docs advise not to change the default user name Adempiere, and this is OK for a single-instance installation. Still, if you use a single DB user for all of your instances, you will run into trouble once you need to restore a database from a backup file: RUN_DBRestore.sh deletes and recreates the DB user, which is not possible while the user owns more than one DB.
You can run all of your instances as services in parallel. Before installing another instance, rename the service script: sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-theInstance. Of course you will need to do some bookkeeping with the service controller of your OS to ensure that the renamed services are started as desired.
The service controller talks to the iDempiere server via the OSGi console. For this to work without problems in a multi-instance environment, you need to assign a different telnet port number to each instance: in the editor of your choice, open the file /etc/init.d/iDempiere, find the line export TELNET_PORT=12612, and change the port number to something else.
Please note:
The OS-specific descriptions in this guide are for Ubuntu 16/18 or Debian; on another OS you will need to do some research.
I have been using the described approach to host iDempiere versions 5 and 6 for some time now and have not had any problems so far. Still, make sure you do your own thorough tests if you want to go that route.
If you run into any problems (and maybe even manage to solve them), please report back to the community (by giving your own answer to this question or by posting to the iDempiere Google group). Thanks!
You can have as many setups on your server as you like. When you run the setup to create your properties, simply choose different web ports for each installation. You may also need to slightly change the web servers' configuration if they use some default ports.

Randomly raising Internal Server Error 500. How to debug?

Recently I moved my Zend app from simple hosting to Azure, configured to process requests via an nginx -> haproxy -> apache chain; nginx, haproxy and apache run as Docker containers. While loading, the application makes 20-30 AJAX requests to different controllers. Here's the common init method I use in controllers:
public function init()
{
    $this->sa = System_Auth::getInstance();
    $this->data = $this->getRequest()->getParams();
    $this->current = $this->sa->getCurrent();
    $this->data['customer_id'] = $this->current->customer ? $this->current->customer->id : $this->current->id;
    $this->_helper->contextSwitch()
        ->addActionContext('test', 'json')
        ->setAutoJsonSerialization(true)
        ->initContext();
    $this->_helper->layout()->disableLayout();
    $this->_helper->viewRenderer->setNoRender(true);
}
So there is nothing interesting in it. But every time I load the application, about 3 or 4 of the AJAX requests return a 500 error, and each time it is different requests that fail. The Apache and nginx logs are empty, and I can't dump the data passed to the controllers because the failing controllers aren't even called.
Anyone have an idea?
OK guys, finally it was fixed.
Where it happened:
HAProxy and a round-robined MariaDB Master-Master-Master cluster.
What happened:
Some queries randomly failed each time.
Why:
sql_mode=only_full_group_by was set on one of the three MySQL servers.
Earlier I had cleared the sql_mode variable on all servers, but it seems it was later restored when one of the servers was restarted. And these variables aren't replicated; you have to change them manually on each server*.
*Replicating them may be possible, but I don't know how right now.
