setConnectTimeout causes hang?

Hi, I'm trying to test a proxy, but if I set setConnectTimeout above 1500 the program hangs: nothing gets printed, and my processor and memory aren't doing much, it just dies. Does anyone know of a solution for this? The code is:
HttpURLConnection inSite = (HttpURLConnection) site.openConnection(proxy);
inSite.setConnectTimeout(2000); // fine if set below 1500
This is a real problem, as most proxies are too slow to respond within that time. Thanks.

private final ScheduledExecutorService scheduler =
        Executors.newScheduledThreadPool(1);

public void sixtymin() {
    final Runnable logger = new Runnable() { // instantiation
        @Override
        public void run() {
            System.out.println(System.currentTimeMillis()); // code to execute
        }
        // Your application logic as shown in the question
    };
    final ScheduledFuture<?> loggerHandle =
            scheduler.scheduleAtFixedRate(logger, 0, 5, TimeUnit.SECONDS); // task, initial delay, period, unit
}
This was the solution: the above code prints the time every 5 seconds, so I used it to kill the process after the timeout.
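A minimal sketch of how such an external deadline might look, using a single-thread executor (everything here comes from java.util.concurrent plus the java.net classes already in the question; the 5-second deadline and the reuse of site/proxy are placeholders, not the poster's actual code):

ExecutorService pool = Executors.newSingleThreadExecutor();
final HttpURLConnection conn = (HttpURLConnection) site.openConnection(proxy); // site/proxy as in the question
conn.setConnectTimeout(2000);

// Run the connect attempt on a worker thread so it can be abandoned after a hard deadline
Future<Integer> attempt = pool.submit(new Callable<Integer>() {
    @Override
    public Integer call() throws Exception {
        conn.connect();
        return conn.getResponseCode();
    }
});

try {
    System.out.println("Response code: " + attempt.get(5, TimeUnit.SECONDS)); // hard 5 s deadline (placeholder)
} catch (TimeoutException e) {
    attempt.cancel(true); // interrupt the worker
    conn.disconnect();    // abandon the stuck connection
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
} finally {
    pool.shutdown();
}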

Related

JobRunr Spring Boot: how to get notified if a recurring job - including retries - has failed

I'm using jobrunr 5.1.4 in my spring boot application. I have a simple service declaring a recurring job which allows for some retries. A single failing job run is not that relevant for me. Instead, I'm interested in getting notified after all jobs, i.e. the initial job including all the retries, have failed.
I thought JobRunr's JobServerFilter would be a good idea, but the onProcessed() method never gets triggered in case of an exception, only in case of a successful job run. And the ApplyStateFilter gets triggered on every state change, which is far too often for my requirement, leaving me clueless as to whether a change to a FAILED state was the last in a series of jobs belonging together (initial job + allowed retried jobs).
A simple example would look like this:
@Service
public class JobScheduler {

    @Job(name = "My Recurring Job", retries = 2, jobFilters = ExceptionFilter.class)
    @Recurring(id = "my-recurring-job", cron = "*/10 * * * *")
    public void recurringJob() {
        throw new RuntimeException("foo");
    }
}
A basic implementation of my JobFilter looks like this:
@Component
public class ExceptionFilter implements JobServerFilter, ApplyStateFilter {

    private static final Logger log = LoggerFactory.getLogger(ExceptionFilter.class);

    @Override
    public void onProcessing(Job job) {
        log.info("onProcessing: {}", job.getJobName());
        log.info(job.getJobState().getName().name());
    }

    @Override
    public void onProcessed(Job job) {
        log.info("onProcessed: {}", job.getJobName());
        log.info(job.getJobState().getName().name());
    }

    @Override
    public void onStateApplied(Job job, JobState jobState1, JobState jobState2) {
        log.info("onStateApplied: {}", job.getJobName());
        log.info("jobState1: {}", jobState1.getName().name());
        log.info("jobState2: {}", jobState2.getName().name());
    }
}
Is this use case even possible with JobRunr? Or does anyone have an idea how to solve this issue in a different way?
Thank you very much in advance for your support.
I think you're on the right track with onStateApplied from ApplyStateFilter.
You can use the following approach:
@Override
public void onStateApplied(Job job, JobState oldState, JobState newState) {
    if (isFailed(newState) && maxAmountOfRetriesReached(job)) {
        // your logic here
    }
}
onProcessed() is not triggered because your job was never successfully processed (due to the failure).
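A possible sketch of those two helper methods; note that counting FAILED entries via getJobStates() and hard-coding the limit to match retries = 2 from the example above are assumptions, not verified JobRunr API guarantees:

private boolean isFailed(JobState state) {
    return state.getName() == StateName.FAILED;
}

private boolean maxAmountOfRetriesReached(Job job) {
    // Assumption: getJobStates() returns the job's full state history
    long failures = job.getJobStates().stream()
            .filter(s -> s.getName() == StateName.FAILED)
            .count();
    return failures >= 3; // initial attempt + the 2 configured retries
}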

Spring kafka idlebetweenpolls is always triggering partition rebalance

I'm trying to use the idle-between-polls setting mentioned here to slow down the consumption rate. I also set max.poll.interval.ms to double the idle between polls, but it always triggers a partition rebalance. Any idea what the problem is?
[Edit]
I have 5 hosts and I'm setting the concurrency level to 1.
[Edit 2]
I was setting the idle between polls to 5 minutes and max.poll.interval.ms to 10 minutes. I also noticed this log: "About to close the idle connection from 105 due to being idle for 540012 millis".
I decreased the idle between polls to 10 seconds and the issue disappeared. Any idea why?
private ConsumerFactory<String, GenericRecord> dlqConsumerFactory() {
    Map<String, Object> configurationProperties = commonConfigs();
    DlqConfiguration dlqConfiguration = kafkaProperties.getConsumer().getDlq();

    final Integer idleBetweenPollInterval = dlqConfiguration.getIdleBetweenPollInterval()
            .orElse(DLQ_POLL_INTERVAL);
    final Integer maxPollInterval = idleBetweenPollInterval * 2; // two times the idleBetweenPoll, to prevent re-balancing
    logger.info("Setting max poll interval to {} for DLQ", maxPollInterval);
    overrideIfRequired(DQL_CONSUMER_CONFIGURATION, configurationProperties, ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, maxPollInterval);

    dlqConfiguration.getMaxPollRecords().ifPresent(maxPollRecords ->
            overrideIfRequired(DQL_CONSUMER_CONFIGURATION, configurationProperties, ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxPollRecords)
    );
    return new DefaultKafkaConsumerFactory<>(configurationProperties);
}
<time to process last polled records> + <idle between polls> must be less than max.poll.interval.ms.
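As a rough illustration with the figures from the question (5 minutes idle, 10 minutes max poll interval), assuming factory is the listener container factory and consumerProps the consumer config map, as in the example below:

// Container side: sleep 5 minutes between polls
factory.getContainerProperties().setIdleBetweenPolls(300_000L);

// Consumer side: allow up to 10 minutes between polls, which leaves roughly
// 5 minutes (minus the container's 5 second safety margin) for processing each batch
consumerProps.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600_000);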
EDIT
There is logic in the container to make sure we never exceed the max poll interval:
idleBetweenPolls = Math.min(idleBetweenPolls,
this.maxPollInterval - (System.currentTimeMillis() - this.lastPoll)
- 5000); // NOSONAR - less by five seconds to avoid race condition with rebalance
I can't reproduce the issue with this...
@SpringBootApplication
public class So63411124Application {

    public static void main(String[] args) {
        SpringApplication.run(So63411124Application.class, args);
    }

    @KafkaListener(id = "so63411124", topics = "so63411124")
    public void listen(String in) {
        System.out.println(in);
    }

    @Bean
    public ApplicationRunner runner(ConcurrentKafkaListenerContainerFactory<?, ?> factory,
            KafkaTemplate<String, String> template) {
        factory.getContainerProperties().setIdleBetweenPolls(300000L);
        return args -> {
            while (true) {
                template.send("so63411124", "foo");
                Thread.sleep(295000);
            }
        };
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so63411124").partitions(1).replicas(1).build();
    }
}
logging.level.org.springframework.kafka=debug
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.properties.max.poll.interval.ms=600000
If you can provide a small example like this that exhibits the behavior you describe, I will take a look to see what's wrong.

Long-running AEM EventListener working inconsistently - blacklisted?

As always, AEM has brought new challenges to my life. This time, I'm experiencing an issue where an EventListener that listens for ReplicationEvents is working sometimes, and normally just the first few times after the service is restarted. After that, it stops running entirely.
The first line of the listener is a log line. If it was running, it would be clear. Here's a simplified example of the listener:
@Component(immediate = true, metatype = false)
@Service(value = EventHandler.class)
@Property(
        name = "event.topics", value = ReplicationEvent.EVENT_TOPIC
)
public class MyActivityReplicationListener implements EventHandler {

    @Reference
    private SlingRepository repository;

    @Reference
    private OnboardingInterface onboardingService;

    @Reference
    private QueryInterface queryInterface;

    private Logger log = LoggerFactory.getLogger(this.getClass());
    private Session session;

    @Override
    public void handleEvent(Event ev) {
        log.info(String.format("Starting %s", this.getClass()));
        // Business logic
        log.info(String.format("Finished %s", this.getClass()));
    }
}
Now before you panic that I haven't included the business logic, see my answer below. The main point of interest is that the business logic could take a few seconds.
While crawling through the second page of Google search results to find an answer, I came across this article: a German article explaining that EventListeners that take more than 5 seconds to finish are sort of silently quarantined by AEM, with no output.
It just so happens that this task might take longer than 5 seconds, as it's working off data that was originally quite small, but has grown (and this is in line with other symptoms).
I put a change in that makes the listener much more like the one in that article - that is, it uses an EventConsumer to asynchronously process the ReplicationEvent using a pub/sub model. Here's a simplified version of the new model (for AEM 6.3):
@Component(immediate = true, property = {
        EventConstants.EVENT_TOPIC + "=" + ReplicationEvent.EVENT_TOPIC,
        JobConsumer.PROPERTY_TOPICS + "=" + AsyncReplicationListener.JOB_TOPIC
})
public class AsyncReplicationListener implements EventHandler, JobConsumer {

    private static final String PROPERTY_EVENT = "event";
    static final String JOB_TOPIC = ReplicationEvent.EVENT_TOPIC;

    @Reference
    private JobManager jobManager;

    @Override
    public JobConsumer.JobResult process(Job job) {
        try {
            ReplicationEvent event = (ReplicationEvent) job.getProperty(PROPERTY_EVENT);
            // Slow business logic (>5 seconds)
        } catch (Exception e) {
            return JobResult.FAILED;
        }
        return JobResult.OK;
    }

    @Override
    public void handleEvent(Event event) {
        final Map<String, Object> payload = new HashMap<>();
        payload.put(PROPERTY_EVENT, ReplicationEvent.fromEvent(event));
        final Job addJobResult = jobManager.addJob(JOB_TOPIC, payload);
    }
}
You can see here that the EventListener passes off the ReplicationEvent wrapped up in a Job, which is then handled by the JobConsumer, which, according to this magic article, is not subject to the 5-second rule.
Here is some official documentation on this time limit. Once I had the "5 seconds" keyword, I was able to find a bit more information, here and here, that talks about the 5-second limit as well. The first article uses a similar method to the above, and the second article shows a way to turn off these time limits.
The time limits can be disabled entirely (or increased) in the configMgr by setting the Timeout property to zero in the Apache Felix Event Admin Implementation configuration.
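For illustration, a hedged Java sketch of applying the same override through OSGi ConfigurationAdmin rather than the web console; the PID org.apache.felix.eventadmin.impl.EventAdmin and the org.apache.felix.eventadmin.Timeout property are taken from the Felix Event Admin documentation and should be verified against your AEM version (imports: java.io.IOException, java.util.Dictionary, java.util.Hashtable, org.osgi.service.cm.*):

// Hypothetical helper: requires an injected org.osgi.service.cm.ConfigurationAdmin reference
void disableEventAdminTimeout(ConfigurationAdmin configAdmin) throws IOException {
    Configuration cfg = configAdmin.getConfiguration("org.apache.felix.eventadmin.impl.EventAdmin", null);
    Dictionary<String, Object> props = cfg.getProperties() != null ? cfg.getProperties() : new Hashtable<>();
    props.put("org.apache.felix.eventadmin.Timeout", 0); // 0 disables the 5 second limit
    cfg.update(props);
}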

How to close a database connection opened by an IBackingMap implementation within a Storm Trident topology?

I'm implementing an IBackingMap for my Trident topology to store tuples to ElasticSearch (I know there are several implementations for Trident/ElasticSearch integration already existing on GitHub; however, I've decided to implement a custom one which suits my task better).
So my implementation is a classic one with a factory:
public class ElasticSearchBackingMap implements IBackingMap<OpaqueValue<BatchAggregationResult>> {

    // omitting here some other cool stuff...
    private final Client client;

    public static StateFactory getFactoryFor(final String host, final int port, final String clusterName) {
        return new StateFactory() {
            @Override
            public State makeState(Map conf, IMetricsContext metrics, int partitionIndex, int numPartitions) {
                ElasticSearchBackingMap esbm = new ElasticSearchBackingMap(host, port, clusterName);
                CachedMap cm = new CachedMap(esbm, LOCAL_CACHE_SIZE);
                MapState ms = OpaqueMap.build(cm);
                return new SnapshottableMap(ms, new Values(GLOBAL_KEY));
            }
        };
    }

    public ElasticSearchBackingMap(String host, int port, String clusterName) {
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", clusterName).build();
        // TODO add a possibility to close the client
        client = new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress(host, port));
    }

    // the actual implementation is left out
}
You see it gets host/port/cluster name as input params and creates an ElasticSearch client as a member of the class BUT IT NEVER CLOSES THE CLIENT.
It is then used from within a topology in a pretty familiar way:
tridentTopology.newStream("spout", spout)
        // ...some processing steps here...
        .groupBy(aggregationFields)
        .persistentAggregate(
                ElasticSearchBackingMap.getFactoryFor(
                        ElasticSearchConfig.ES_HOST,
                        ElasticSearchConfig.ES_PORT,
                        ElasticSearchConfig.ES_CLUSTER_NAME
                ),
                new Fields(FieldNames.OUTCOME),
                new BatchAggregator(),
                new Fields(FieldNames.AGGREGATED));
This topology is wrapped into some public static void main, packed in a jar and sent to Storm for execution.
The question is: should I worry about closing the ElasticSearch connection, or is it Storm's own business? If it is not done by Storm, how and when in the topology's lifecycle should I do that?
Thanks in advance!
Okay, answering my own question.
First of all, thanks again @dedek for the suggestions and for reviving the ticket in Storm's Jira.
Finally, since there's no official way to do that, I've decided to go with the cleanup() method of Trident's Filter. So far I've verified the following (for Storm v. 0.9.4):
With LocalCluster
cleanup() gets called on cluster's shutdown
cleanup() DOESN'T get called when killing the topology; this shouldn't be a tragedy, as one very likely won't use LocalCluster for real deployments anyway
With a real cluster
it gets called when the topology is killed as well as when the worker is stopped using pkill -TERM -u storm -f 'backtype.storm.daemon.worker'
it doesn't get called if the worker is killed with kill -9 or when it crashes or - sadly - when the worker dies due to an exception
Overall that gives a more or less decent guarantee that cleanup() will get called, provided you're careful with exception handling (I tend to add 'thundercatches' to every one of my Trident primitives anyway).
My code:
public class CloseFilter implements Filter {

    private static final Logger LOG = LoggerFactory.getLogger(CloseFilter.class);

    private final Closeable[] closeables;

    public CloseFilter(Closeable... closeables) {
        this.closeables = closeables;
    }

    @Override
    public boolean isKeep(TridentTuple tuple) {
        return true;
    }

    @Override
    public void prepare(Map conf, TridentOperationContext context) {
    }

    @Override
    public void cleanup() {
        for (Closeable c : closeables) {
            try {
                c.close();
            } catch (Exception e) {
                LOG.warn("Failed to close an instance of {}", c.getClass(), e);
            }
        }
    }
}
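For completeness, a sketch of how the filter might be attached to the stream; closeableEsClient is hypothetical here, since the original ElasticSearchBackingMap does not expose its TransportClient as a java.io.Closeable:

// Attach the filter anywhere in the stream purely so Trident calls cleanup() on shutdown
tridentTopology.newStream("spout", spout)
        .each(new Fields(FieldNames.OUTCOME), new CloseFilter(closeableEsClient)) // hypothetical Closeable handle
        // ...rest of the topology as before...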
However, it would be nice if some day hooks for closing connections became part of the API.

FTP server connection simulation

How can I write a script or otherwise simulate about 100 users connecting to my own FTP server?
You can prepare a simple Java program.
First, you have to decide how these requests arrive at your server: completely at random, one per minute, following a normal distribution, or more likely an exponential distribution.
Then, you have to use a thread that has:
A method to make an ftp connection (e.g. ftpCall())
A method to get the x milliseconds to the next FTP call (e.g. getTimeToNext())
After an FTP call, the thread has to sleep for x milliseconds before making the next call. Here is an outline of the code in Java:
public class FTPTest {

    static class MyFTPThread extends Thread {
        private int numberOfCall = 100;

        private void ftpCall() {
            // DO CONNECTION
        }

        private long getTimeToNext() {
            // RETURN A RANDOM TIME OR A FIXED VALUE
            return 1000L;
        }

        @Override
        public void run() {
            int counter = 0;
            while (++counter <= numberOfCall) {
                ftpCall();
                try {
                    Thread.sleep(getTimeToNext());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }

    public static void main(String[] args) {
        MyFTPThread t = new MyFTPThread();
        t.start();
    }
}
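And since the question asks for roughly 100 simultaneous users, here is a sketch of fanning the same idea out across 100 threads; the Apache Commons Net FTPClient calls and the host/credentials are placeholders and assume commons-net is on the classpath:

import org.apache.commons.net.ftp.FTPClient;

public class FtpLoadTest {
    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 100; i++) {           // one thread per simulated user
            new Thread(() -> {
                FTPClient ftp = new FTPClient();
                try {
                    ftp.connect("ftp.example.com");     // placeholder host
                    ftp.login("testuser", "testpass");  // placeholder credentials
                    ftp.listFiles();                    // do some representative work
                    ftp.logout();
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    try { ftp.disconnect(); } catch (Exception ignored) { }
                }
            }).start();
            Thread.sleep(100);                    // stagger the connections slightly
        }
    }
}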
