Performance in microservice-to-microservice data transfer - spring-boot

I have a controller like this:

@RestController
@RequestMapping("/stats")
public class StatisticsController {

    @Autowired
    private LeadFeignClient lfc;

    private List<Lead> list;

    @GetMapping("/leads")
    private int getCount(@RequestParam(value = "count", defaultValue = "1") int countType) {
        list = lfc.getLeads(AccessToken.getToken());
        if (countType == 1) {
            return MainEngine.getCount(list);
        } else if (countType == 2) {
            return MainEngine.getCountRejected(list);
        } else if (countType == 3) {
            return MainEngine.getCountPortfolio(list);
        } else if (countType == 4) {
            return MainEngine.getCountInProgress(list);
        } else if (countType == 5) {
            return MainEngine.getCountForgotten(list);
        } else if (countType == 6) {
            return MainEngine.getCountAddedInThisMonth(list);
        } else if (countType == 7) {
            return MainEngine.getCountAddedInThisYear(list);
        } else {
            throw new RuntimeException("Wrong mapping param");
        }
    }

    @GetMapping("/trends")
    private boolean getTrend() {
        return MainEngine.tendencyRising(list);
    }
}
It is basically a microservice that handles statistics based on a list of 'Business Leads'. The Feign client GETs a list of leads trimmed down to the required data. Everything is working properly.
My only concern is performance - all of these statistics (countTypes) are going to be presented on the landing page of the webapp. If I call them one by one, will every call retrieve the lead list again and again, or will the list be stored somewhere in temporary memory? I can imagine that as the list grows, loading it could take a while.
I've tried populating the list outside this method, in a @PostConstruct method at service startup, but that solution has two major problems: authentication cannot be handled by the OAuth token, and the retrieved list is insensitive to leads being added or deleted, because it is loaded only at startup.

The list = lfc.getLeads(AccessToken.getToken()); line will be called on each GET request. Take a look at caching the responses, which is useful when you need to obtain a large volume of data often.
I'd start with Baeldung's Spring cache tutorial, which gives you an idea of how the caching works. Then you can look at an EhCache implementation, or implement your own interceptor that puts/gets entries to/from external storage such as Redis.
Caching is the only way I see to resolve this: since the Feign client is called with a different request (based on the token), the data are not static and need to be cached.
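For illustration, a minimal sketch of that idea with the Spring cache abstraction, assuming a LeadService wrapper around the Feign client (the class name, cache name, and key are made up here, not taken from the question):

@Service
public class LeadService {

    @Autowired
    private LeadFeignClient lfc;

    // One cached lead list per access token; repeated statistics calls reuse it
    @Cacheable(value = "leads", key = "#token")
    public List<Lead> getLeads(String token) {
        return lfc.getLeads(token);
    }
}

With @EnableCaching on a configuration class and a short time-to-live on the leads cache (for example Caffeine's expireAfterWrite), the seven countType calls made while rendering the landing page would hit the Feign client only once, and added or deleted leads would still show up once the cache entry expires.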

You need to implement a caching layer to improve performance. You can have the cache preloaded immediately after the application starts, so the response is already available in the cache. I would suggest going with a Redis cache, but any cache will do the job.
Also, it would be better to move the logic of getCount() into a service class.
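A minimal sketch of that refactoring, assuming a StatisticsService (a made-up name) that the controller delegates to, combined with the cached LeadService sketched above:

@Service
public class StatisticsService {

    @Autowired
    private LeadService leadService;

    public int getCount(int countType, String token) {
        List<Lead> leads = leadService.getLeads(token);
        switch (countType) {
            case 1: return MainEngine.getCount(leads);
            case 2: return MainEngine.getCountRejected(leads);
            case 3: return MainEngine.getCountPortfolio(leads);
            case 4: return MainEngine.getCountInProgress(leads);
            case 5: return MainEngine.getCountForgotten(leads);
            case 6: return MainEngine.getCountAddedInThisMonth(leads);
            case 7: return MainEngine.getCountAddedInThisYear(leads);
            default: throw new IllegalArgumentException("Wrong mapping param");
        }
    }
}

The controller method then shrinks to return statisticsService.getCount(countType, AccessToken.getToken()); and no longer needs to hold the list in a field, which also removes the shared mutable state between /leads and /trends.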

Related

Springboot cache miss when caching an object

While using Spring cache in my project, it results in a cache miss when I am caching a complex Java object (TinkerGraph) but works fine when caching a String. I am invoking these methods over REST through a controller layer. Any ideas why this is the behavior? My code is mostly similar to this tutorial.
For eg:
SomeService.java
@Cacheable(NAME_OF_CACHE_1)
public TinkerGraph getGraph(String graphCode) {
    // do something
    // this method results in a cache miss every time.
}

@Cacheable(NAME_OF_CACHE_2)
public String getGraphAsString(String graphCode) {
    // this is working fine and the string result is getting cached.
    return convertToString(getGraph(graphCode));
}
After some debugging I also tried setting up the caches during startup and could see that the values are cached in the ConcurrentHashMap, but it still causes a cache miss when returning the TinkerGraph instance, while it works perfectly fine when returning the String instance.
CacheService.java
@CachePut("NAME_OF_CACHE_1")
public TinkerGraph getGraph(String graphCode) {
    return (TinkerGraph) someService.getGraph(graphCode);
}

@CachePut("NAME_OF_CACHE_2")
public String getGraphAsString(String graphCode) {
    return someService.getGraphAsString(graphCode);
}

@EventListener(ApplicationReadyEvent.class)
public void initCache() {
    getGraph("G001");
    getGraphAsString("G001");
}

How can I manipulate the REST API GET generated by YANG in Opendaylight?

PUT, DELETE, and POST can be handled as shown below, but I do not know how to do GET.
Please help me.
// PUT & DELETE (mapped to WRITE, DELETE of MD-SAL)
public void onDataTreeChanged(Collection<DataTreeModification<GreetingRegistry>> changes) {
    for (DataTreeModification<GreetingRegistry> change : changes) {
        DataObjectModification<GreetingRegistry> rootNode = change.getRootNode();
        if (rootNode.getModificationType() == WRITE) {
            ...
        } else if (rootNode.getModificationType() == DELETE) {
            ...
        }
    }
}

// POST (mapped to RPC of MD-SAL)
public Future<RpcResult<HelloWorldOutput>> helloWorld(HelloWorldInput input) {
    HelloWorldOutputBuilder helloBuilder = new HelloWorldOutputBuilder();
    helloBuilder.setGreeting("Hello " + input.getName());
    return RpcResultBuilder.success(helloBuilder.build()).buildFuture();
}

// GET (???)
How should I implement it?
You don't actually have to implement anything for GET in your code. When you want to read YANG-modeled MD-SAL data, the GET method is available by default and returns whatever data you ask for in the URL; the important part is pointing to the correct URL.
If you want to do some processing on the data before returning it to the user, you can use RPCs with POST and do the processing in the RPC-backed methods. In your example above, you could put the search keys into HelloWorldInput, do the processing in helloWorld(), and return the results in HelloWorldOutput.
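For illustration only, a RESTCONF read typically looks like the following (the module name greeting-registry and the default port 8181 are assumptions here, not taken from the question; adjust them to your actual YANG module):

GET http://<controller-ip>:8181/restconf/operational/greeting-registry:greeting-registry
GET http://<controller-ip>:8181/restconf/config/greeting-registry:greeting-registry

The first URL reads the operational datastore, the second the config datastore; no Java code is needed for either.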

Spring batch reader - How to avoid returning a list of objects

I have a Spring Batch app that gets a list of ids and then uses read() to fetch 1 to many results for each id. The issue is that I have no control over how many results I get back for each id, meaning that my chunking is spotty at best. Is there a suggested way to avoid spikes in memory/CPU? An example is below:
@Before
public void getIds() {
    *getListOfIds* // Usually around 10,000 or so
}

@Override
public List<AccountObject> read() {
    if (list of ids haven't all been used) {
        List<AccountObject> myAccounts = myService.getAccounts(id);
        return myAccounts; // This could be anywhere from 1 result to 100,000 results.
    } else {
        return null;
    }
}
So the myAccounts object above could be small or huge. This makes chunking basically useless, because at the moment I am chunking by List. I'd really rather chunk by individual AccountObject but don't see an easy way to do this.
Is there a class, strategy, etc. that I am missing here?
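For what it's worth, a minimal sketch of one common way to get per-AccountObject chunking (the buffer field and iterator below are assumptions, not from the question): keep a queue inside the reader, refill it from the next id whenever it runs dry, and return one item per read() call.

private final Deque<AccountObject> buffer = new ArrayDeque<>();
private Iterator<Long> idIterator; // initialized from the preloaded list of ids

@Override
public AccountObject read() {
    // Refill the buffer from the next id(s) until at least one account is available
    while (buffer.isEmpty() && idIterator.hasNext()) {
        buffer.addAll(myService.getAccounts(idIterator.next()));
    }
    // poll() returns null once everything is exhausted, which tells Spring Batch the input is done
    return buffer.poll();
}

The chunk size then counts individual AccountObjects regardless of how many accounts a single id expands to; if one id can really return 100,000 results, the spike just moves into myService.getAccounts(id), so paging that call would also be worth considering.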

grails + singleton service for object saving to database for cloud application

Here I have one service called 'DataSaveService' which I use for saving objects, like:
class DataSaveService {
    static transactional = true

    def saveObject(object) {
        if (object != null) {
            try {
                if (!object.save()) {
                    println(' failed to save ! ')
                    System.err.println(object.errors)
                    return false
                } else {
                    println('saved...')
                    return true
                }
            } catch (Exception e) {
                System.err.println("Exception :" + e.getMessage())
                return false
            }
        } else {
            System.err.println("Object " + object + " is null...")
            return false
        }
    }
}
This service is shared and used by many classes' objects for saving.
When there are multiple requests at the same time, saving becomes very slow, or you could say it gets congested, because of the default scope, i.e. singleton.
So, to reduce that contention, I thought about making this service session-scoped, like:
static scope = 'session'
But when I then access this service and its method in a controller, it throws an exception.
What do I need to do to use a session-scoped service? Any other idea for implementing this scenario?
The main thing is that I need the best performance in the cloud; I need an answer for the cloud.
A singleton (as long as it is not marked as synchronized) can be called from different threads at the same time, in parallel, without any performance loss; it is not a bottleneck.
But if you really need that kind of isolation (meaning you have some shared state that should be used within one method call, or from different parts of the application during one HTTP request, or even across different requests from the same user, and you aren't going to run your app in the cloud), then you can use other scopes, like session or request scope. I'm not sure that's a good architecture, though.
For your current example, there is no benefit in using a non-singleton scope. Also, be aware that having several instances of the same service requires extra memory and CPU resources.

Non-Blocking Endpoint: Returning an operation ID to the caller - Would like to get your opinion on my implementation?

Boot Pros,
I recently started programming in Spring Boot and I stumbled upon a question on which I would like to get your opinion.
What I am trying to achieve:
I created a Controller that exposes a GET endpoint, named nonBlockingEndpoint. This nonBlockingEndpoint executes a pretty long operation that is resource heavy and can run between 20 and 40 seconds (in the attached code it is mocked by a Thread.sleep()).
Whenever the nonBlockingEndpoint is called, the Spring application should register that call and immediately return an operation ID to the caller.
The caller can then use this ID to query the status of that operation on another endpoint, queryOpStatus. At the beginning it will be marked as started, and once the controller is done serving the request it will be set to a code such as SERVICE_OK. The caller then knows that his request was successfully completed on the server.
The solution that I found:
I have the following controller (note that it is explicitly not tagged with @Async)
It uses an APIOperationsManager to register that a new operation was started
I use the CompletableFuture Java construct to run the long running code as a new async task, using CompletableFuture.supplyAsync(() -> {})
I immediately return a response to the caller, telling them that the operation is in progress
Once the async task has finished, I use cf.thenRun() to update the operation status via the APIOperationsManager
Here is the code:
@GetMapping(path = "/nonBlockingEndpoint")
public @ResponseBody ResponseOperation nonBlocking() {

    // Register a new operation
    APIOperationsManager apiOpsManager = APIOperationsManager.getInstance();
    final int operationID = apiOpsManager.registerNewOperation(Constants.OpStatus.PROCESSING);

    ResponseOperation response = new ResponseOperation();
    response.setMessage("Triggered non-blocking call, use the operation id to check status");
    response.setOperationID(operationID);
    response.setOpRes(Constants.OpStatus.PROCESSING);

    CompletableFuture<Boolean> cf = CompletableFuture.supplyAsync(() -> {
        try {
            // Here we will
            Thread.sleep(10000L);
        } catch (InterruptedException e) {}
        // whatever the return value was
        return true;
    });

    cf.thenRun(() -> {
        // We are done with the super long process, so update our Operations Manager
        APIOperationsManager a = APIOperationsManager.getInstance();
        boolean asyncSuccess = false;
        try { asyncSuccess = cf.get(); }
        catch (Exception e) {}
        if (true == asyncSuccess) {
            a.updateOperationStatus(operationID, Constants.OpStatus.OK);
            a.updateOperationMessage(operationID, "success: The long running process has finished and this is your result: SOME RESULT");
        } else {
            a.updateOperationStatus(operationID, Constants.OpStatus.INTERNAL_ERROR);
            a.updateOperationMessage(operationID, "error: The long running process has failed.");
        }
    });

    return response;
}
Here is also the APIOperationsManager.java for completeness:
public class APIOperationsManager {

    private static APIOperationsManager instance = null;
    private Vector<Operation> operations;
    private int currentOperationId;

    private static final Logger log = LoggerFactory.getLogger(Application.class);

    protected APIOperationsManager() {}

    public static APIOperationsManager getInstance() {
        if (instance == null) {
            synchronized (APIOperationsManager.class) {
                if (instance == null) {
                    instance = new APIOperationsManager();
                    instance.operations = new Vector<Operation>();
                    instance.currentOperationId = 1;
                }
            }
        }
        return instance;
    }

    public synchronized int registerNewOperation(OpStatus status) {
        cleanOperationsList();
        currentOperationId = currentOperationId + 1;
        Operation newOperation = new Operation(currentOperationId, status);
        operations.add(newOperation);
        log.info("Registered new Operation to watch: " + newOperation.toString());
        return newOperation.getId();
    }

    public synchronized Operation getOperation(int id) {
        for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                return op;
            }
        }
        Operation notFound = new Operation(-1, OpStatus.INTERNAL_ERROR);
        notFound.setCrated(null);
        return notFound;
    }

    public synchronized void updateOperationStatus(int id, OpStatus newStatus) {
        iteration : for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                op.setStatus(newStatus);
                log.info("Updated Operation status: " + op.toString());
                break iteration;
            }
        }
    }

    public synchronized void updateOperationMessage(int id, String message) {
        iteration : for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                op.setMessage(message);
                log.info("Updated Operation status: " + op.toString());
                break iteration;
            }
        }
    }

    private synchronized void cleanOperationsList() {
        Date now = new Date();
        for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if ((now.getTime() - op.getCrated().getTime()) >= Constants.MIN_HOLD_DURATION_OPERATIONS) {
                log.info("Removed operation from watchlist: " + op.toString());
                iterator.remove();
            }
        }
    }
}
The questions that I have
Is that concept a valid one that also scales? What could be improved?
Will I run into concurrency issues / race conditions?
Is there a better way to achieve the same in boot spring, but I just didn't find that yet? (maybe with the #Async directive?)
I would be very happy to get your feedback.
Thank you so much,
Peter P
It is a valid pattern to submit a long running task with one request, returning an id that allows the client to ask for the result later.
But there are some things I would suggest you reconsider:
Do not use an Integer as the id, as it allows an attacker to guess ids and fetch the results for those ids. Instead, use a random UUID.
If you need to restart your application, all ids and their results will be lost. You should persist them to a database.
Your solution will not work in a cluster with many instances of your application, as each instance would only know its 'own' ids and results. This could also be solved by persisting them to a database or a Redis store.
The way you are using CompletableFuture gives you no control over the number of threads used for the asynchronous operation. It is possible to do this with standard Java, but I would suggest using Spring to configure the thread pool.
Annotating the controller method with @Async is not an option; that is not how it works. Instead, put all asynchronous operations into a simple service and annotate that with @Async (see the sketch after this list). This has some advantages:
You can also use this service synchronously, which makes testing a lot easier
You can configure the thread pool with Spring
The /nonBlockingEndpoint should not return just the id, but a complete link to queryOpStatus, including the id. The client can then directly use this link without any additional information.
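A minimal sketch of that service-based variant, assuming made-up names (LongRunningService, OperationStore, operationExecutor) and a dedicated executor configured through Spring:

@Configuration
@EnableAsync
public class AsyncConfig {

    // Dedicated, bounded thread pool for the long running operations
    @Bean(name = "operationExecutor")
    public Executor operationExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setQueueCapacity(100);
        executor.initialize();
        return executor;
    }
}

@Service
public class LongRunningService {

    @Autowired
    private OperationStore operationStore; // persistent store for operation status, e.g. a database or Redis

    @Async("operationExecutor")
    public void runOperation(UUID operationId) {
        try {
            // ... the 20-40 second work goes here ...
            operationStore.update(operationId, OpStatus.OK, "success: The long running process has finished");
        } catch (Exception e) {
            operationStore.update(operationId, OpStatus.INTERNAL_ERROR, "error: The long running process has failed.");
        }
    }
}

The controller then only registers the operation, calls runOperation(id), and returns a link such as /queryOpStatus/{id}.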
Additionally there are some low-level implementation issues which you may also want to change:
Do not use Vector; it synchronizes on every operation. Use a List instead. Iterating over a List is also much easier: you can use for-each loops or streams.
If you need to look up a value by id, do not iterate over a Vector or List; use a Map instead (see the sketch below).
APIOperationsManager is a hand-rolled singleton. That makes no sense in a Spring application. Make it a normal POJO, declare it as a bean, and get it autowired into the controller; Spring beans are singletons by default.
Avoid doing complicated work in a controller method. Instead, move it into a service (which may be annotated with @Async). This makes testing easier, as you can test that service without a web context.
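A minimal sketch of the store as a plain Spring bean backed by a ConcurrentHashMap (the class name OperationStore is made up, and Operation is assumed here to be adapted to a UUID id):

@Component
public class OperationStore {

    // Look up operations by id instead of iterating a Vector; ConcurrentHashMap is thread safe
    private final Map<UUID, Operation> operations = new ConcurrentHashMap<>();

    public UUID register(OpStatus status) {
        UUID id = UUID.randomUUID();
        operations.put(id, new Operation(id, status));
        return id;
    }

    public Operation get(UUID id) {
        return operations.get(id);
    }

    public void update(UUID id, OpStatus status, String message) {
        Operation op = operations.get(id);
        if (op != null) {
            op.setStatus(status);
            op.setMessage(message);
        }
    }
}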
Hope this helps.
Do I need to make database access transactional?
As long as you write/update only one row, there is no need to make this transactional, as it is indeed 'atomic'.
If you write/update many rows at once, you should make it transactional to guarantee that either all rows are updated or none.
However, if two operations (possibly from two clients) update the same row, the last one will always win.
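For illustration, a minimal sketch of the multi-row case, assuming a hypothetical OperationRepository with the two update methods shown (these names are made up):

@Service
public class OperationPersistenceService {

    @Autowired
    private OperationRepository operationRepository; // hypothetical repository

    // Both updates commit together or roll back together
    @Transactional
    public void updateOperation(UUID id, OpStatus status, String message) {
        operationRepository.updateStatus(id, status);
        operationRepository.updateMessage(id, message);
    }
}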
