I am using quarkus.rest-client to call an external API and want to limit the frequency of those calls to, say, 50 per second, so that I don't drown the external service. What is the recommended way of achieving this in code, without a side-car approach?
You could use the @Bulkhead MicroProfile Fault Tolerance annotation and set the maximum number of concurrent calls allowed into your method. Note that a bulkhead caps concurrency rather than the request rate, and it will only work inside one instance of your application.
Eclipse MicroProfile documentation
Example copied from the above documentation:
@Bulkhead(5) // maximum 5 concurrent requests allowed
public Connection serviceA() {
    Connection conn = null;
    counterForInvokingServiceA++;
    conn = connectionService();
    return conn;
}
// maximum 5 concurrent requests allowed, maximum 8 requests allowed in the waiting queue
@Asynchronous
@Bulkhead(value = 5, waitingTaskQueue = 8)
public Future<Connection> serviceA() {
    Connection conn = null;
    counterForInvokingServiceA++;
    conn = connectionService();
    return CompletableFuture.completedFuture(conn);
}
You can even override the annotation's value at deployment time through configuration, so you can change this parameter without a new build.
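For example, MicroProfile Fault Tolerance lets you override annotation members through MicroProfile Config; the class and method names below are hypothetical placeholders for your own bean. In application.properties (Quarkus) or META-INF/microprofile-config.properties:

# Override one method's bulkhead size
com.example.ExternalApiService/serviceA/Bulkhead/value=10
# Or override every @Bulkhead in the application
Bulkhead/value=10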
To use @Bulkhead, you must add the SmallRye Fault Tolerance extension to your project:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-fault-tolerance</artifactId>
</dependency>
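Applied to the rest-client scenario from the question, a minimal sketch (the interface, path, and base URI are hypothetical, and depending on your Quarkus version the javax or jakarta namespace applies); SmallRye Fault Tolerance lets you place the annotation directly on a MicroProfile REST Client interface:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.faulttolerance.Bulkhead;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

@RegisterRestClient(baseUri = "https://api.example.com")
@Path("/items")
public interface ExternalApiClient {

    @GET
    @Bulkhead(50) // at most 50 concurrent in-flight calls from this instance
    String fetchItems();
}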
I'm using the JMeter API to develop a load-testing tool. How can I modify JMeter parameters at runtime, such as the number of threads in a thread group or ConstantThroughputTimer.throughput? I looked at the demos on GitHub but didn't find an answer.
You cannot change the number of threads at runtime (at least not with JMeter 5.5).
What you can do is use a Constant Throughput Timer in combination with the Beanshell Server to control the request execution rate.
I tried it and found the answer by writing my own code. Parameters can be modified dynamically through the properties API: just call JMeterUtils.getJMeterProperties().setProperty("throughput", prop).
Building the ConstantThroughputTimer:

import org.apache.jmeter.testbeans.gui.TestBeanGUI;
import org.apache.jmeter.testelement.TestElement;
import org.apache.jmeter.testelement.property.StringProperty;
import org.apache.jmeter.timers.ConstantThroughputTimer;

private ConstantThroughputTimer createThroughputTimer(double rps) {
    ConstantThroughputTimer timer = new ConstantThroughputTimer();
    // The timer works in samples per minute, so convert requests/second.
    long rpsCalc = (long) (rps * 60);
    // Read the target from the JMeter property "throughput" so it can be
    // changed at runtime; fall back to the computed value when unset.
    String paramStr = "${__P(throughput," + rpsCalc + ")}";
    timer.setProperty("calcMode", 2); // 2 = all active threads in current thread group
    StringProperty stringProperty = new StringProperty();
    stringProperty.setName("throughput");
    stringProperty.setValue(paramStr);
    timer.setProperty(stringProperty);
    timer.setEnabled(true);
    timer.setProperty(TestElement.TEST_CLASS, ConstantThroughputTimer.class.getName());
    timer.setProperty(TestElement.GUI_CLASS, TestBeanGUI.class.getName());
    return timer;
}
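With the timer reading ${__P(throughput,...)}, the rate can then be changed while the test is running by updating the JMeter property from your controlling code (the value is samples per minute; 3000 below is an illustrative 50 requests/second):

import org.apache.jmeter.util.JMeterUtils;

// Takes effect the next time the timer evaluates ${__P(throughput,...)}
JMeterUtils.getJMeterProperties().setProperty("throughput", "3000");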
I have a Spring Boot application which uses the Azure SDK. I want to set the retry count for authentication to just once, since it currently uses the default value of 3, and I want the exception for incorrect credentials to be thrown without much delay.
com.azure.core.http.policy.RetryPolicy : Retry attempts have been exhausted after 3 attempts.
I tried debugging and found https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/resourcemanager/docs/AUTH.md, but the RetryPolicy there only specifies how long to wait between retries, not how many times to retry. Checking further, RetryPolicy creates a new ExponentialBackoff instance, and here I see this comment:
Creates an instance of ExponentialBackoff with a maximum number of retry attempts configured by the environment property Configuration.PROPERTY_AZURE_REQUEST_RETRY_COUNT, or three if it isn't configured or is less than or equal to 0. This strategy starts with a delay of 800 milliseconds and exponentially increases with each additional retry attempt to a maximum of 8 seconds.
At this point, I'm not sure how to proceed. Can someone point me to how to set the retries only for this particular method?
public AzureResourceManager getAzureResourceManagerClient(String clientId, String clientSecret, String tenantId,
        String subscriptionId) {
    AzureProfile profile = new AzureProfile(tenantId, subscriptionId, AzureEnvironment.AZURE);
    TokenCredential clientSecretCredential = new ClientSecretCredentialBuilder()
            .clientId(clientId)
            .clientSecret(clientSecret)
            .tenantId(tenantId)
            .authorityHost(profile.getEnvironment().getActiveDirectoryEndpoint())
            .build();
    return AzureResourceManager.configure()
            .authenticate(clientSecretCredential, profile)
            .withSubscription(subscriptionId);
}
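One possibility to try, sketched under the assumption that azure-core's RetryPolicy/FixedDelay types and the configure().withRetryPolicy(...) hook are available in your SDK version (verify against your dependencies): build the manager with a custom retry policy that allows only a single retry. The azure-identity credential builders also expose a maxRetry(...) setting for the token requests themselves.

import java.time.Duration;
import com.azure.core.http.policy.FixedDelay;
import com.azure.core.http.policy.RetryPolicy;

// Allow at most 1 retry with a short fixed delay instead of the default
// three-attempt exponential backoff.
RetryPolicy singleRetry = new RetryPolicy(new FixedDelay(1, Duration.ofSeconds(1)));

return AzureResourceManager.configure()
        .withRetryPolicy(singleRetry)
        .authenticate(clientSecretCredential, profile)
        .withSubscription(subscriptionId);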
I'm trying to set up a JDBC read-side processor in a Lagom service:
class ProjectEventsProcessor(readSide: JdbcReadSide)(implicit ec: ExecutionContext) extends ReadSideProcessor[ProjectEvent] {
  def buildHandler = {
    readSide.builder[ProjectEvent]("projectEventOffset")
      .setEventHandler[ProjectCreated]((conn: Connection, e: EventStreamElement[ProjectCreated]) => insertProject(e.event))
      .build
  }

  private def insertProject(e: ProjectCreated) = {
    Logger.info(s"Got event $e")
  }

  override def aggregateTags: Set[AggregateEventTag[ProjectEvent]] = ProjectEvent.Tag.allTags
}
The service connects to the database fine on startup:
15:40:32.575 [info] play.api.db.DefaultDBApi [] - Database [default] connected at jdbc:postgresql://localhost/postgres?user=postgres
But right after this I'm getting an exception:
com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'slick.profile'
First of all, why is Slick involved here at all? I'm using JdbcReadSide, not SlickReadSide.
OK, let's say JdbcReadSide internally uses Slick somehow. I've added slick.profile to the application.conf of my service:
db.default.driver = "org.postgresql.Driver"
db.default.url = "jdbc:postgresql://localhost/postgres?user=postgres"

# Tried this way
slick.profile = "slick.jdbc.PostgresProfile$"

# Also this way (copied from the Play documentation)
slick.dbs.default.profile = "slick.jdbc.PostgresProfile$"
slick.dbs.default.db.dataSourceClass = "slick.jdbc.DatabaseUrlDataSource"
slick.dbs.default.db.properties.driver = "org.postgresql.Driver"
But I'm still getting the exception. What is going on? How do I solve this?
According to the docs, Lagom uses akka-persistence-jdbc, which under the hood:
uses Slick to map tables and manage asynchronous execution of JDBC calls.
A full configuration to put in the application.conf file, also using the default connection pool (HikariCP), could be the following (mostly copied from the docs):
# Defaults to use for each Akka persistence plugin
jdbc-defaults.slick {
  # The Slick profile to use
  # set to one of: slick.jdbc.PostgresProfile$, slick.jdbc.MySQLProfile$, slick.jdbc.OracleProfile$ or slick.jdbc.H2Profile$
  profile = "slick.jdbc.PostgresProfile$"

  # The JNDI name for the Slick pre-configured DB
  # By default, this value will be used by all akka-persistence-jdbc plugin components (journal, read-journal and snapshot).
  # You may configure each plugin component to use different DB settings.
  jndiDbName = DefaultDB
}

db.default {
  driver = "org.postgresql.Driver"
  url = "jdbc:postgresql://localhost/postgres?user=postgres"

  # The JNDI name for this DataSource
  # Play, and therefore Lagom, will automatically register this DataSource as a JNDI resource using this name.
  # This DataSource will be used to build a pre-configured Slick DB.
  jndiName = DefaultDS

  # Lagom will configure a Slick Database, using the async-executor settings below,
  # and register it as a JNDI resource using this name.
  # By default, all akka-persistence-jdbc plugin components will use this JNDI name
  # to look up this pre-configured Slick DB.
  jndiDbName = DefaultDB

  async-executor {
    # number of objects that can be queued by the async executor
    queueSize = 10000

    # 5 * number of cores
    numThreads = 20

    # same as number of threads
    minConnections = 20

    # same as number of threads
    maxConnections = 20

    # if true, an MBean for AsyncExecutor will be registered
    registerMbeans = false
  }

  # Hikari is the default connection pool and it's fine-tuned to use the same
  # values for minimum and maximum connections as defined for the async-executor above
  hikaricp {
    minimumIdle = ${db.default.async-executor.minConnections}
    maximumPoolSize = ${db.default.async-executor.maxConnections}
  }
}

lagom.persistence.jdbc {
  # Configuration for creating tables
  create-tables {
    # Whether tables should be created automatically as needed
    auto = true

    # How long to wait for tables to be created, before failing
    timeout = 20s

    # The cluster role to create tables from
    run-on-role = ""

    # Exponential backoff configuration for failures when creating tables
    failure-exponential-backoff {
      # minimum (initial) duration until processor is started again after failure
      min = 3s

      # the exponential back-off is capped to this duration
      max = 30s

      # additional random delay is based on this factor
      random-factor = 0.2
    }
  }
}
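As a side note, the JDBC persistence module must also be on the classpath, since it supplies some of these defaults. A sketch of the build setting, assuming sbt with the Lagom plugin (this is the standard import the plugin provides; adjust for Maven if needed):

// build.sbt, in the implementation project
libraryDependencies += lagomScaladslPersistenceJdbc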
I need some help understanding where disconnects occur (SockJS, Vert.x) and how timeouts can be configured.
I am creating a SockJSServer along with an event bus bridge. The problem I observe is frequent WebSocket disconnects. Looking at the WebSocket frames, I see pings every 5 seconds, and heartbeats that I configured to every 500 ms (which does seem to take effect). However, once heartbeats are delayed for longer than 5 seconds, a disconnect arrives with the message c[3000,'Go away']. As observed, this happens when the server is busy (doing something else on a separate thread).
I have searched the Vert.x documentation and looked over the Vert.x code, and found a few configuration parameters (which appear to differ across versions and documentation):
.putNumber("ping_interval", 120000)
.putNumber("session_timeout", 1200000)
.putNumber("heartbeat_period",500)
To be absolutely sure, I tried different configs, none of which appeared to have any impact. At this point I think I've hit a dead end and need some help.
Vert.x version: 2.1P3
Server snippet:
final SockJSServer server = vertx.createSockJSServer(httpServer);
server.bridge(new JsonObject().putString("prefix", "/eventbus")
        .putNumber("ping_interval", 120000)
        .putNumber("session_timeout", 1200000)
        .putNumber("heartbeat_period", 500),
    new JsonArray().addObject(new JsonObject()),
    new JsonArray().addObject(new JsonObject()));
Client code:
var eventBus = new EventBus('//hostX:12001/eventbus');
When you receive a SOCKET_IDLE event, you must not complete the event with true, because true tells the bridge the idle socket should be closed; complete it with false to keep the connection open:
SockJSHandler.create(vertx, handlerOptions).bridge(options, event -> {
    boolean result = true;
    switch (event.type()) {
        case SOCKET_CREATED:
            LOGGER.info("Socket created");
            break;
        case SOCKET_IDLE:
            result = false; // false = do not close the idle socket
            break;
        case SOCKET_CLOSED:
            LOGGER.info("Socket closed");
            break;
    }
    event.complete(result);
});
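For reference, the relevant timeouts live on the handlerOptions and options objects used above. A sketch of how they might be constructed with the Vert.x 3.x vertx-web API (class names and packages vary across versions, so verify against yours):

import io.vertx.ext.web.handler.sockjs.BridgeOptions;
import io.vertx.ext.web.handler.sockjs.PermittedOptions;
import io.vertx.ext.web.handler.sockjs.SockJSHandlerOptions;

// Server-to-client heartbeat; keeps proxies and the SockJS client from idling out.
SockJSHandlerOptions handlerOptions = new SockJSHandlerOptions()
        .setHeartbeatInterval(2000);

// How long the bridge waits for a client ping before raising SOCKET_IDLE.
BridgeOptions options = new BridgeOptions()
        .setPingTimeout(10000)
        .addInboundPermitted(new PermittedOptions())
        .addOutboundPermitted(new PermittedOptions());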
I have created an EJB 3 timer and it is deployed into a WebLogic cluster. The createTimer parameters are createTimer(1000, 20000, null);
This should create a recurring timer that fires every 20 seconds, but the timeout method is always triggered every 30 seconds. I even changed the intervalDuration to 40 and 50 seconds, but it is still triggered every 30 seconds.
Below are the entries from the WEBLOGIC_TIMERS table:
1##OSBNode2_1355845459844 (BLOB) 1355846770914 1000 TimerearTimerTest.jarTimerTest OSBDomain OSBCluster
Below are the entries from the ACTIVE table:
timer.1##OSBNode2_1355843156331 -96726833478167425/OSBNode2 OSBDomain OSBCluster 18-DEC-12
service.TimerMaster 8866906753834651127/OSBNode1 OSBDomain OSBCluster 18-DEC-12
service.SINGLETON_MASTER 8866906753834651127/OSBNode1 OSBDomain OSBCluster 18-DEC-12
Can anyone help me investigate why the timer always fires every 30 seconds instead of at my intervalDuration value?
Below is the EJB:
package com.timertest;

import java.util.*;
import javax.annotation.Resource;
import javax.ejb.*;

@Stateless(mappedName = "TimerTest")
public class TimerTest implements TimerTestRemote
{
    @Resource
    private SessionContext ctx;

    @Override
    public void createMyTimer() throws EJBException
    {
        ctx.getTimerService().createTimer(1000, 20000, null);
    }

    @Timeout
    public void timeout(Timer timer)
    {
        System.out.println("-> Timed Out ... " + new Date());
    }
}
Below is the WebLogic descriptor:
<?xml version="1.0" encoding="UTF-8"?>
<wls:weblogic-ejb-jar
    xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/ejb-jar_3_0.xsd http://xmlns.oracle.com/weblogic/weblogic-ejb-jar http://xmlns.oracle.com/weblogic/weblogic-ejb-jar/1.2/weblogic-ejb-jar.xsd">
  <wls:weblogic-enterprise-bean>
    <wls:ejb-name>TimerTest</wls:ejb-name>
    <wls:stateless-session-descriptor>
      <wls:stateless-clustering>
        <wls:home-is-clusterable>true</wls:home-is-clusterable>
        <wls:home-load-algorithm>round-robin</wls:home-load-algorithm>
        <wls:stateless-bean-is-clusterable>true</wls:stateless-bean-is-clusterable>
        <wls:stateless-bean-load-algorithm>round-robin</wls:stateless-bean-load-algorithm>
      </wls:stateless-clustering>
      <wls:business-interface-jndi-name-map>
        <wls:business-remote>TimerTestRemote</wls:business-remote>
        <wls:jndi-name>TimerTest</wls:jndi-name>
      </wls:business-interface-jndi-name-map>
    </wls:stateless-session-descriptor>
    <wls:enable-call-by-reference>false</wls:enable-call-by-reference>
    <wls:jndi-name>TimerTest</wls:jndi-name>
  </wls:weblogic-enterprise-bean>
  <wls:timer-implementation>Clustered</wls:timer-implementation>
</wls:weblogic-ejb-jar>
Thanks in advance
This might not be a direct answer, but it might help avoid such a condition.
Timers are persistent by default, so you have to cancel the previous timers before starting new ones.
Also, you can provide an info object when creating the timer instead of passing null, maybe a string identifier, as shown below. It will later help you identify which timer timed out, or cancel a specific one when there are multiple timers in the system.
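For example (the identifier string is just illustrative):

// Pass a Serializable info object instead of null; it is available later via timer.getInfo()
ctx.getTimerService().createTimer(1000, 20000, "timer-test-20s");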
Creating timers and restarting the application multiple times leaves several such timers with the same info but different hashCodes, so they don't overlap and a new one is created each time.
You can get the timers with ctx.getTimerService().getTimers() and cancel them by iterating over the result.
Edit: I have replicated the scenario and faced a similar issue. Debugging showed that it happens when multiple previous timers are active for the same interval and aren't being cancelled.
Try the code below; I have tried it and it resolved the issue.
// Cancel existing timers before creating a new one
for (Timer t : ctx.getTimerService().getTimers()) {
    t.cancel();
}
ctx.getTimerService().createTimer(1000, 20000, null);
Excerpt from the documentation:
intervalDuration - If expiration is delayed (e.g. due to the interleaving of other method calls on the bean), two or more expiration notifications may occur in close succession to "catch up".
You can use a different style (annotations) instead of programmatic timers. See below:

@Schedule(hour = "*", minute = "*", second = "*/40")
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
@Lock(LockType.WRITE)
@AccessTimeout(1000 * 60 * 60) // 1 hour; concurrent access is not permitted
public void timeout() {
    System.out.println("-> Timed Out ... " + new Date());
}