Call an external API and switch between two URLs while maintaining only one connection - spring-boot

Using the BasicHttpClientConnectionManager, I noticed that I can't switch between two different URLs: the content remained the same after I deployed.
I thought PoolingHttpClientConnectionManager might make a difference, but I learned that with it the connection simply remains valid, which is not the behavior I am looking for. Is there a way to make the request switch between two URLs, using an SSLContext and an SSLConnectionSocketFactory as components to build the request? Note that the URL is stored in a database, so I look it up each time I call the external API.
The second option inspired me, but in my case I want to switch, and I am looking for a thread-safe, singleton-like solution. I tried to create a @Service that extends PoolingHttpClientConnectionManager, but that caused bean instantiation to fail, and I don't know how to properly close the manager while making sure that any call made in the meantime remains valid.
The last method is supposed to make the switch based on the URIBuilder returned from uriBuilderInit()
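To make the setup clearer, here is a minimal sketch (not my actual code; the externalApiRepository lookup and the /endpoint path are placeholders) of how the client is built once from the SSLContext and SSLConnectionSocketFactory, while the target URI is rebuilt on every call from the URL stored in the database:

SSLContext sslContext = SSLContexts.createDefault();
SSLConnectionSocketFactory socketFactory = new SSLConnectionSocketFactory(sslContext);
Registry<ConnectionSocketFactory> registry = RegistryBuilder.<ConnectionSocketFactory>create()
        .register("http", PlainConnectionSocketFactory.getSocketFactory())
        .register("https", socketFactory)
        .build();
BasicHttpClientConnectionManager connectionManager = new BasicHttpClientConnectionManager(registry);
CloseableHttpClient client = HttpClients.custom()
        .setConnectionManager(connectionManager)
        .build();

// on every call: look up the currently active URL in the database and rebuild the target URI
String activeUrl = externalApiRepository.findActiveUrl(); // placeholder for the database lookup
URI target = new URIBuilder(activeUrl).setPath("/endpoint").build();
try (CloseableHttpResponse response = client.execute(new HttpPost(target))) {
    // handle the response; the single managed connection is released back to the manager here
}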
Update 1
Would a transactional method in which I call either the basic or the pooling HttpClientConnectionManager work?
NB: the fact that there are three URLs that requests can be posted to is also a problem due to the host limit (2), but I will only swap between two at a time, and only one connection would be maintained at any given moment.
Thanks in advance


Batching stores transparently

We are using the following frameworks and versions:
jOOQ 3.11.1
Spring Boot 2.3.1.RELEASE
Spring 5.2.7.RELEASE
I have an issue where some of our business logic is divided into logical units that look as follows:
Request containing a user transaction is received
This request contains various information, such as the type of transaction, which products are part of this transaction, what kind of payments were done, etc.
These attributes are then stored individually in the database.
In code, this looks approximately as follows:
TransactionRecord transaction = transactionRepository.create();
transaction.create(creationCommand);
In Transaction#create (which runs transactionally), something like the following occurs:
storeTransaction();
storePayments();
storeProducts();
// ... other relevant information
A given transaction can have many different types of products and attributes, all of which are stored. Many of these attributes result in UPDATE statements, while some may result in INSERT statements - it is difficult to fully know in advance.
For example, the storeProducts method looks approximately as follows:
products.forEach(product -> {
    ProductRecord record = productRepository.findProductByX(...);
    if (record == null) {
        record = productRepository.create();
        record.setX(...);
        record.store();
    } else {
        // do something else
    }
});
If the products are new, they are INSERTed. Otherwise, other calculations may take place. Depending on the size of the transaction, this single user transaction could obviously result in up to O(n) database calls/roundtrips, and even more depending on what other attributes are present. In transactions where a large number of attributes are present, this may result in upwards of hundreds of database calls for a single request (!). I would like to bring this down as close as possible to O(1) so as to have more predictable load on our database.
Naturally, batch and bulk inserts/updates come to mind here. What I would like to do is to batch all of these statements into a single batch using jOOQ, and execute after successful method invocation prior to commit. I have found several (SO Post, jOOQ API, jOOQ GitHub Feature Request) posts where this topic is implicitly mentioned, and one user groups post that seemed explicitly related to my issue.
Since I am using Spring together with jOOQ, I believe my ideal solution (preferably declarative) would look something like the following:
@Batched(100) // batch size as parameter, potentially
@Transactional
public void createTransaction(CreationCommand creationCommand) {
// all inserts/updates above are added to a batch and executed on successful invocation
}
For this to work, I imagine I'd need to manage a scoped (ThreadLocal/Transactional/Session scope) resource which can keep track of the current batch such that:
Prior to entering the method, an empty batch is created if the method is @Batched,
A custom DSLContext (perhaps extending DefaultDSLContext) that is made available via DI has a ThreadLocal flag which keeps track of whether any current statements should be batched or not, and if so
Intercept the calls and add them to the current batch instead of executing them immediately.
However, step 3 would necessitate rewriting a large portion of our code from the (IMO) relatively readable:
records.forEach(record -> {
    record.setX(...);
    // ...
    record.store();
});
to:
userObjects.forEach(userObject -> {
    dslContext.insertInto(...).values(userObject.getX(), ...).execute();
});
which would defeat the purpose of having this abstraction in the first place, since the second form can also be rewritten using DSLContext#batchStore or DSLContext#batchInsert. IMO however, batching and bulk insertion should not be up to the individual developer and should be able to be handled transparently at a higher level (e.g. by the framework).
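(For reference, a minimal sketch of that explicit form, assuming records is a collection of UpdatableRecords:)

// explicit, per-call batching - exactly what should ideally not be each developer's concern
dslContext.batchStore(records).execute();
// or, for records that are known to be new:
dslContext.batchInsert(records).execute();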
I find the readability of the jOOQ API to be an amazing benefit of using it, however it seems that it does not lend itself (as far as I can tell) to interception/extension very well for cases such as these. Is it possible, with the jOOQ 3.11.1 (or even current) API, to get behaviour similar to the former with transparent batch/bulk handling? What would this entail?
EDIT:
One possible but extremely hacky solution that comes to mind for enabling transparent batching of stores would be something like the following:
Create a RecordListener and add it as a default to the Configuration whenever batching is enabled.
In RecordListener#storeStart, add the query to the current Transaction's batch (e.g. in a ThreadLocal<List>)
The AbstractRecord has a changed flag which is checked (org.jooq.impl.UpdatableRecordImpl#store0, org.jooq.impl.TableRecordImpl#addChangedValues) prior to storing. Resetting this (and saving it for later use) makes the store operation a no-op.
Lastly, upon successful method invocation but prior to commit:
Reset the changed flags of the respective records to the correct values
Invoke org.jooq.UpdatableRecord#store, this time without the RecordListener or while skipping the storeStart method (perhaps using another ThreadLocal flag to check whether batching has already been performed).
As far as I can tell, this approach should work, in theory. Obviously, it's extremely hacky and prone to breaking as the library internals may change at any time if the code depends on Reflection to work.
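For illustration only, a rough sketch of the listener described above (hypothetical names, untested; as said, fragile):

import java.util.LinkedHashMap;
import java.util.Map;
import org.jooq.DSLContext;
import org.jooq.Field;
import org.jooq.RecordContext;
import org.jooq.UpdatableRecord;
import org.jooq.impl.DefaultRecordListener;

public class BatchingRecordListener extends DefaultRecordListener {

    // one pending batch per thread (assumption: one transaction per thread)
    private static final ThreadLocal<Map<UpdatableRecord<?>, Map<Field<?>, Boolean>>> PENDING =
            ThreadLocal.withInitial(LinkedHashMap::new);

    @Override
    public void storeStart(RecordContext ctx) {
        UpdatableRecord<?> record = (UpdatableRecord<?>) ctx.record();

        // remember which fields were changed, then silence this store() by clearing the flags
        Map<Field<?>, Boolean> changed = new LinkedHashMap<>();
        for (Field<?> field : record.fields())
            changed.put(field, record.changed(field));

        PENDING.get().put(record, changed);
        record.changed(false); // the original store() now becomes a no-op
    }

    // to be called just before commit, with a Configuration that does NOT register this listener
    public static void flush(DSLContext ctx) {
        Map<UpdatableRecord<?>, Map<Field<?>, Boolean>> pending = PENDING.get();
        pending.forEach((record, changed) -> changed.forEach(record::changed)); // restore the flags
        ctx.batchStore(pending.keySet()).execute();
        PENDING.remove();
    }
}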
Does anyone know of a better way, using only the public jOOQ API?
jOOQ 3.14 solution
You've already discovered the relevant feature request #3419, which will solve this on the JDBC level starting from jOOQ 3.14. You can either use the BatchedConnection directly, wrapping your own connection to implement the below, or use this API:
ctx.batched(c -> {
    // Make sure all records are attached to c, not ctx, e.g. by fetching from c.dsl()
    records.forEach(record -> {
        record.setX(...);
        // ...
        record.store();
    });
});
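If you prefer wrapping the connection yourself, the direct use looks roughly like this (a sketch; the batch size argument and the dialect are assumptions to be checked against the 3.14 API):

try (BatchedConnection batched = new BatchedConnection(connection, 100)) {
    DSLContext batchedCtx = DSL.using(batched, SQLDialect.POSTGRES);
    // stores made through records attached to batchedCtx are buffered into JDBC batches
}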
jOOQ 3.13 and before solution
For the time being, until #3419 is implemented (it will be, in jOOQ 3.14), you can implement this yourself as a workaround. You'd have to proxy a JDBC Connection and PreparedStatement and ...
... intercept all:
Calls to Connection.prepareStatement(String), returning a cached proxy statement if the SQL string is the same as for the last prepared statement, or batch execute the last prepared statement and create a new one.
Calls to PreparedStatement.executeUpdate() and execute(), and replace those by calls to PreparedStatement.addBatch()
... delegate all:
Calls to other API, such as e.g. Connection.createStatement(), which should flush the above buffered batches, and then call the delegate API instead.
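A rough sketch of the Connection part of such a workaround (declared abstract so the many remaining Connection methods can simply delegate; the PreparedStatement proxy that turns executeUpdate() / execute() into addBatch() is omitted here):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// sketch only: buffers consecutive identical SQL strings into one JDBC batch
abstract class BatchingConnection implements Connection {
    private final Connection delegate;
    private String lastSql;
    private PreparedStatement lastStatement;

    BatchingConnection(Connection delegate) {
        this.delegate = delegate;
    }

    @Override
    public PreparedStatement prepareStatement(String sql) throws SQLException {
        if (sql.equals(lastSql))
            // same SQL as last time: reuse the statement; its proxy should turn
            // executeUpdate() / execute() into addBatch()
            return lastStatement;

        // different SQL: flush whatever has been batched so far, then prepare anew
        flush();
        lastSql = sql;
        lastStatement = delegate.prepareStatement(sql); // wrap in a statement proxy in a full version
        return lastStatement;
    }

    // other Connection methods (createStatement(), commit(), close(), ...) should call
    // flush() first and then delegate to the wrapped connection
    protected void flush() throws SQLException {
        if (lastStatement != null) {
            lastStatement.executeBatch();
            lastStatement.close();
            lastStatement = null;
            lastSql = null;
        }
    }
}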
I wouldn't recommend hacking your way around jOOQ's RecordListener and other SPIs, I think that's the wrong abstraction level to buffer database interactions. Also, you will want to batch other statement types as well.
Do note that by default, jOOQ's UpdatableRecord tries to fetch generated identity values (see Settings.returnIdentityOnUpdatableRecord), which is something that prevents batching. Such store() calls must be executed immediately, because you might expect the identity value to be available.
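A sketch of how to turn that off (the with-method name is derived from the setting mentioned above; please verify it against your jOOQ version):

Settings settings = new Settings()
        .withReturnIdentityOnUpdatableRecord(false); // don't fetch generated identity values on store()
DSLContext ctx = DSL.using(connection, SQLDialect.POSTGRES, settings); // the dialect is just an example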

RESTful CRUD functions which handle multiple actions?

I'm currently developing an app in Laravel. While trying to adhere to REST API guidelines I've come across a scenario that I'm not sure how to handle RESTfully.
I have a Lease resource that handles multiple actions:
Route::get('/lease/create', 'API\LeaseController@create');
Route::get('/lease/{leaseId}', 'API\LeaseController@show');
Route::post('/lease', 'API\LeaseController@store');
Route::patch('/lease/{leaseId}', 'API\LeaseController@update');
Route::delete('/lease/{leaseId}', 'API\LeaseController@destroy');
So far these are a 1:1 between the URI and the controller actions. Now I have additional operations that I need to perform on a Lease and this is where I'm not sure what the best way to handle this is.
1) A Lease can be renewed (clone existing lease with new start and end dates).
2) A Lease can be ended (status changed to Inactive, end date updated).
When I think about doing this RESTfully I look at these two additional operations as a post and a patch to existing endpoints (both would map to the store and update methods on the controller and could use the existing URIs).
Should I continue to think about it that way and map them both to existing endpoints? My concern with that is how would I handle different responses? For example if after a renew operation completes I want to pass a message saying "This lease has been successfully renewed.", how would I differentiate between a renew operation and a regular store operation since they both hit the same end point?
Or should I create two new URI's, something like:
Route::patch('/lease/{leaseId}/end', 'API\LeaseController@updateLeaseEnd');
Route::post('/lease/{leaseId}/renew', 'API\LeaseController@storeLeaseRenew');
And handle the logic in two separate functions, even though that would be somewhat redundant since they really are just additional stores and updates?
It looks like you are trying to fit an RPC-style API into a RESTful API, which is possible but can be confusing. You could do what you were suggesting and use the PATCH method, but then you have overloaded a method that should only do a partial update so that it might also execute an action on the resource. That would be confusing.
One way I've seen this done is by using what is called a verb (not to be confused with the HTTP verb) in the URI. This is essentially what you were suggesting as your last option in the question.
Structure
https://api.domain.com/namespace/resource/_verb
https://api.domain.com/namespace/resource/{id}/_verb
Example
https://api.domain.com/namespace/lease/{id}/_end
https://api.domain.com/namespace/lease/{id}/_renew
The underscore is there to signify that this is not a resource, but rather an execution call.
Another option would be to separate your REST API from your RPC API. You could use the traditional SOAP web service or go with the new gRPC, by Google.

REST API for main page - one JSON or many?

I'm providing a RESTful API to my (JS) client from a (Java Spring) server.
The main site page contains a number of logical blocks (news, latest comments, some trending stuff), each of which has a corresponding entity on the server. Which way is the right one to go: handle one request like
/api/main_page/ ->
{
news: {...}
comments: {...}
...
}
or let the client do a few requests like
/api/news/
/api/comments/
...
I know in general it's better to have one large request/response, but is this an answer to this situation as well?
Ideally, you should have different API calls for fetching individual configurable content blocks of the page from the same API.
This way your content blocks are loosely coupled to each other: you can extend, port (to a new framework), and modify them independently at any time. This becomes extremely useful as the application grows. Switching off a feature is fairly easy in this case, A/B testing is easy, and writing automation is also very easy. Overall it helps in reducing the testing effort.
But if you really want to fetch all of this in one call, you should add an additional parameter to the request; when the server sees that parameter, it adds the corresponding independent JSON blocks to the response by calling its own methods in the business-logic layer.
And if speed is your concern, try caching these calls on the server for some time (how long depends on the type of application).
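A minimal sketch of that idea, assuming Spring MVC on the server (NewsService, CommentService and the blocks parameter are hypothetical names):

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MainPageController {

    private final NewsService newsService;       // hypothetical services, one per content block
    private final CommentService commentService;

    public MainPageController(NewsService newsService, CommentService commentService) {
        this.newsService = newsService;
        this.commentService = commentService;
    }

    // GET /api/main_page?blocks=news,comments
    @GetMapping("/api/main_page")
    public Map<String, Object> mainPage(@RequestParam Set<String> blocks) {
        Map<String, Object> response = new LinkedHashMap<>();
        // each block stays an independent method call, so it can still be tested and toggled on its own
        if (blocks.contains("news"))
            response.put("news", newsService.latest());
        if (blocks.contains("comments"))
            response.put("comments", commentService.latest());
        return response;
    }
}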
I think in general multiple requests can be justified, when the requested resources reflect parts of the system state. (my personal rule of thumb, still WIP).
I.e. if a news item gets displayed in your client application a lot, I would request it once and reuse it wherever I can. If you aggregate here, you would need to request it again later, some items may never actually get displayed, and you have some magic to do if the representation of a news item differs between the aggregation and the /news/{id} resource.
This approach would increase communication if the page gets loaded for the first time, but decrease communication throughout your client application the longer it runs.
The state on the server gets copied request by request to your client or updated when needed (Etags, last-modified, etc.).
In your example it looks like /news and /comments are some sort of latest or since-last-visit collections, not all of them.
If this is true, I would design them as resources as well, like /comments/latest or similar.
But in any case I would have them contain only self-links to /news/{id} or /comments/{id} respectively. Then a request to /comments/latest results in a list of self-links, for each of which I would start a request only if I don't already have that item (or if I want to check whether the cached copy is still up to date).
It is also possible to trigger the request to a /news/{id} only if it actually gets displayed (scrolling, swiping).
Probably the lifespan of a news item or a comment is a criterion for answering this question, meaning that caching on the client is not that vital to the system, as opposed to, say, a book in a book store app.

Blocking message pending 10000 for BLOCKING..using spring websockets

I am facing the following error while using Spring WebSockets:
Use case: in our server-side code we have functionality to search for values in a database. If the values are not present in the database, it hits a servlet and gets the data. The second part, i.e. hitting the servlet and getting the data, is an asynchronous call.
So for one request there are multiple things we have to search for in the database.
Example: in the request we get some parameter, channel: 1.
This channel is mapped to multiple ids, say 1 is mapped to 1,2,3,4,5.
In the WebSocket handler, once the request comes to the server, I extract the channel, get all the mapped ids, and run a loop over the ids as follows:
for (int i = 0; i < ids.length; i++) {
    // SomeObject contains two fields: whether the values exist, and the string values
    SomeObject databaseRespObj = callToDatabase(ids[i]);
    if (!databaseRespObj.valuesExists) {
        asynchronousCallToServlet(ids[i]);
        // once the response is received, the message is sent immediately using the session
    }
}
While executing the above server-side code, I sometimes face the error below.
java.lang.IllegalStateException: Blocking message pending 10000 for BLOCKING
at org.eclipse.jetty.websocket.common.WebSocketRemoteEndpoint.lockMsg(WebSocketRemoteEndpoint.java:130) ~[websocket-common-9.3.8.v20160314.jar:9.3.8.v20160314]
at org.eclipse.jetty.websocket.common.WebSocketRemoteEndpoint.sendString(WebSocketRemoteEndpoint.java:379) ~[websocket-common-9.3.8.v20160314.jar:9.3.8.v20160314]
at org.springframework.web.socket.adapter.jetty.JettyWebSocketSession.sendTextMessage(JettyWebSocketSession.java:188) ~[spring-websocket-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.web.socket.adapter.AbstractWebSocketSession.sendMessage(AbstractWebSocketSession.java:105) ~[spring-websocket-4.2.4.RELEASE.jar:4.2.4.RELEASE]
Sorry if the framing of the question is not clear. Does Spring support sending asynchronous messages the way plain javax.websocket does with Session.getAsyncRemote().sendText(String text)?
What configuration is needed in Spring to send asynchronous messages over a WebSocket session?
From what I understand, you have multiple threads sending messages on the same RemoteEndpoint when the asynchronous technique kicks in.
Seems very similar to this :
WebSocket async send can result in blocked send once queue filled
I don't think you necessarily have to use Futures or the mechanisms described in the above post.
What I don't really get is: why make asynchronous calls to servlets? Of course several of them could end up sending messages on the same RemoteEndpoint.
But can't you simply make synchronous calls to the relevant classes and keep the same request-response flow that you use when records are found in your database? :)
UPDATE
Since you added in comments the fact that you need to focus on speed, and since it seems that your current solution is not applicable, let's maybe have a look at the problem from a different angle.
I'm not a WebSocket expert, but as far as I understand, what you are trying to achieve with the asynchronous servlet calls is not possible.
However, if you change the design/config of your project, this should be achievable.
Personally I use WebSockets to be able to send a message to an arbitrary user who did not necessarily make a request - as long as he is connected, he must get the message.
To do this, I simply use the SimpMessagingTemplate class offered by Spring in its WebSocket support. To send a message to ANY USER THAT I WANT, I do this:
@Autowired
SimpMessagingTemplate smt;
(.......)
smt.convertAndSendToUser(recipient.getUsername(), "/queue/notify", payload);
So in your case, you could, in your loop :
make class instance method calls (instead of calling a servlet: no network transit, you cannot be faster! Just a call to your biz-logic / service / whatever)
every time a method returns data, use the SimpMessagingTemplate like in the snippet here above :)
you can still do it asynchronously if you want ! :)
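For example, the loop from the question could become something like this (channelService and fetchRemoteData are hypothetical names; smt and recipient are as in the snippet above):

for (int i = 0; i < ids.length; i++) {
    SomeObject databaseRespObj = channelService.callToDatabase(ids[i]); // plain method call, no servlet hop
    if (!databaseRespObj.valuesExists) {
        String payload = channelService.fetchRemoteData(ids[i]); // can still be made @Async if really needed
        smt.convertAndSendToUser(recipient.getUsername(), "/queue/notify", payload);
    }
}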
By doing this, you reduce latency (calling servlets adds a lot), and you have a reliable technique.
You can easily and quickly send thousands of messages to one user or to several users, at your own discretion, without stumbling upon the "10000 for BLOCKING" problem, which probably comes from more than 1 servlet "answering the same question" ;)
To obtain a SimpMessagingTemplate from Spring, you will need to use the <websocket:message-broker> XML tag or the equivalent Java config.
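For instance, the Java config equivalent looks roughly like this (a sketch; the endpoint and destination prefixes are placeholders):

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.AbstractWebSocketMessageBrokerConfigurer;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // enables the simple broker so SimpMessagingTemplate can send to /queue/... destinations
        registry.enableSimpleBroker("/queue", "/topic");
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws").withSockJS(); // placeholder endpoint
    }
}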
I suggest checking this doc, which has more info:
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/websocket.html
and the following posts I created on SO where I am using it (I have another problem in those posts, related to Spring configuration and context hierarchies, but at least you have some template code to look at; the code is working):
Spring Websocket : receiving nothing from SimpMessagingTemplate
Spring : how to expose SimpMessagingTemplate bean to root context ?

Spring Cache For Pending Method Calls

Let's say I'm writing a Spring web-service that gets called by an external application. That application requests data that I need to load from an external resource. Furthermore, the design has it that it calls my service more than once with different parameters. In other words, the user sitting in front of the application presses one button, which generates a bunch of requests to my web-service in a very short time frame.
My web-service parses the parameters and comes up with necessary requests to the external resource. The logic has it that it may cause calling the external resource with the same parameters over and over again, which makes this the ideal candidate for caching.
Example:
The user presses that one button in the application
Application initiates ten requests to my web-service
My web-service receives them in parallel
After analysing the parameters of all requests, I'd need to call the external resource 15 times overall, but the parameters are mostly equal, so only three calls would be enough to serve the 15 intended calls.
However, one call to the external resource may take some time.
As far as I understand how Spring does caching, it writes the result of a @Cacheable method into the cache. Apparently this means that before it treats another invocation of that method with the same parameters as a cache hit, it must have the result of a previous invocation. This means that it doesn't provide support for pending method calls.
I need something like "Hey, I just saw a method invocation with the same parameters a second ago, but I'm still waiting for the result of that invocation. While I can't provide a result yet, I will hold that new invocation and reuse the result for it."
What are my options? Can I make Spring do that?
You can't make Spring do that out-of-the-box for very good reasons. The bottom line is that locking and synchronizing is very hard using a specific cache implementation so trying to do that in an abstraction is a bit insane. You can find some rationale and some discussion here
There is a discussion about using Ehcache's BlockingCache in SPR-11540.
Guava also has such a feature, but the cache needs to be accessed in a very specific way (using a callback) that the CacheInterceptor does not really fit. It's still our plan to try to make that work at some point.
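For reference, the callback style mentioned here looks roughly like this when used programmatically, i.e. not via @Cacheable (a sketch; Result and callExternalResource are placeholders). Concurrent callers with the same key block until the first computation completes:

Cache<String, Result> cache = CacheBuilder.newBuilder()
        .expireAfterWrite(30, TimeUnit.SECONDS) // tune to how long a computed result stays useful
        .build();

// get(key, loader) computes at most once per key; other callers wait for that computation
Result result = cache.get(key, () -> callExternalResource(key)); // throws ExecutionException to handle/unwrap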
Do not forget that caching must be transparent (i.e. turning it on and off only leads to a performance change). Trying to parse arguments and compute which call should be made to your web service has a high chance of leading to side effects. Maybe you should cache things at a different place?
