gRPC client onNext does not fail if there is no server

I have a simple gRPC client as follows:
/**
 * Client that calls gRPC.
 */
public class Client {

    private static final Context.Key<String> URI_CONTEXT_KEY =
            Context.key(Constants.URI_HEADER_KEY);

    private final ManagedChannel channel;
    private final DoloresRPCStub asyncStub;

    /**
     * Construct client for accessing gRPC server at {@code host:port}.
     * @param host
     * @param port
     */
    public Client(String host, int port) {
        this(ManagedChannelBuilder.forAddress(host, port).usePlaintext(true));
    }

    /**
     * Construct client for accessing gRPC server using the existing channel.
     * @param channelBuilder {@link ManagedChannelBuilder} instance
     */
    public Client(ManagedChannelBuilder<?> channelBuilder) {
        channel = channelBuilder.build();
        asyncStub = DoloresRPCGrpc.newStub(channel);
    }

    /**
     * Closes the client.
     * @throws InterruptedException
     */
    public void shutdown() throws InterruptedException {
        channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
    }

    /**
     * Main async method for communication between client and server.
     * @param responseObserver user's {@link StreamObserver} implementation to handle
     *                         responses received from the server
     * @return {@link StreamObserver} instance to provide requests into
     */
    public StreamObserver<Request> downloading(StreamObserver<Response> responseObserver) {
        return asyncStub.downloading(responseObserver);
    }

    public static void main(String[] args) {
        Client cl = new Client("localhost", 8999); // fail??
        StreamObserver<Request> requester = cl.downloading(new StreamObserver<Response>() {
            @Override
            public void onNext(Response value) {
                System.out.println("On Next");
            }

            @Override
            public void onError(Throwable t) {
                System.out.println("Error");
            }

            @Override
            public void onCompleted() {
                System.out.println("Completed");
            }
        }); // fail??

        System.out.println("Start");
        requester.onNext(Request.newBuilder().setUrl("http://my-url").build()); // fail?
        requester.onNext(Request.newBuilder().setUrl("http://my-url").build());
        requester.onNext(Request.newBuilder().setUrl("http://my-url").build());
        requester.onNext(Request.newBuilder().setUrl("http://my-url").build());
        System.out.println("Finish");
    }
}
I don't start any server, and I run the main method. I would expect the program to fail on:
client creation
client.downloading call
or observer.onNext
but surprisingly (for me), the code runs successfully; the messages are simply lost. The output is:
Start
Finish
Error
Because of the asynchronous nature, Finish can be printed even before an error is propagated through the response observer. Is that the desired behavior? I can't afford to lose any messages. Am I missing something?
Thank you, Adam

This is the intended behavior. As you mentioned, the API is asynchronous, so errors must generally be asynchronous as well. gRPC does not guarantee message delivery, and in the case of a streaming RPC a failure does not indicate which messages were received by the remote side. The advanced ClientCall API calls this out.
If you need stronger guarantees, it must be added at the application level, such as with replies or with a Status of OK. As an example, in gRPC + Image Upload I mention using a bidirectional stream for acknowledgements.
Creating a ManagedChannelBuilder does not error because the channel is lazy: it only creates a TCP connection when necessary (and reconnects when necessary). Also since most failures are transient, we wouldn't want to prevent all future RPCs on the channel just because your client happened to start when the network was broken.
Since the API is asynchronous already, grpc-java can purposefully throw away messages when sending even when it knows an error has occurred (i.e., it chooses not to throw). Thus almost all errors are delivered to the application via onError().
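For illustration, here is a minimal sketch of such an application-level acknowledgement scheme over the bidirectional stream. It assumes the server echoes back one Response per durably processed Request and that Response carries the request URL; the pending-set bookkeeping and the resendLater helper are hypothetical application logic, not part of the gRPC API:

Set<String> pending = ConcurrentHashMap.newKeySet();

StreamObserver<Request> requester = cl.downloading(new StreamObserver<Response>() {
    @Override
    public void onNext(Response ack) {
        // assumes the server acks each request with its URL
        pending.remove(ack.getUrl());
    }

    @Override
    public void onError(Throwable t) {
        // everything still pending is unconfirmed and must be re-sent
        resendLater(pending); // hypothetical retry hook
    }

    @Override
    public void onCompleted() {
    }
});

String url = "http://my-url";
pending.add(url);
requester.onNext(Request.newBuilder().setUrl(url).build());

Until the ack arrives, the message stays in pending, so a failure can never silently drop it.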

Related

Spring Apache Kafka onFailure Callback of KafkaTemplate not fired on connection error

I'm experimenting a lot with Apache Kafka in a Spring Boot App at the moment.
My current goal is to write a REST endpoint that takes in some message payload, which will use a KafkaTemplate to send the data to my local Kafka running on port 9092.
This is my producer config:
@Bean
public Map<String, Object> producerConfig() {
    // config settings for creating producers
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.bootstrapServers);
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);
    configProps.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 4000);
    configProps.put(ProducerConfig.RETRIES_CONFIG, 0);
    return configProps;
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    // creates a kafka producer
    return new DefaultKafkaProducerFactory<>(producerConfig());
}

@Bean("kafkaTemplate")
public KafkaTemplate<String, String> kafkaTemplate() {
    // template which abstracts sending data to kafka
    return new KafkaTemplate<>(producerFactory());
}
My REST endpoint forwards to a service; the service looks like this:
@Service
public class KafkaSenderService {

    private static final Logger logger = LoggerFactory.getLogger(KafkaSenderService.class);

    private final KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public KafkaSenderService(@Qualifier("kafkaTemplate") KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessageWithCallback(String message, String topicName) {
        // possibility to add callbacks to define what shall happen in success/error case
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topicName, message);
        future.addCallback(new KafkaSendCallback<String, String>() {

            @Override
            public void onFailure(KafkaProducerException ex) {
                logger.warn("Message could not be delivered. " + ex.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, String> result) {
                logger.info("Your message was delivered with following offset: " + result.getRecordMetadata().offset());
            }
        });
    }
}
The thing now is: I'm expecting the onFailure() method to get called when the message could not be sent, but this does not seem to work. When I change the bootstrapServers variable in the producer config to localhost:9091 (the wrong port, so no connection should be possible), the producer tries to connect to the broker. It makes several connection attempts, and after 5 seconds a TimeoutException occurs, but the onFailure() method doesn't get called. Is there a way to get the onFailure() method called even if the connection cannot be established?
And by the way, I set the retries count to zero, but the producer still makes a second connection attempt after the first one.
EDIT: it seems like the Kafka producer/KafkaTemplate goes into an infinite loop when the broker is not available. Is that really the intended behaviour?
The KafkaTemplate really does nothing fancy about connection and publishing; everything is delegated to the KafkaProducer. What you describe here would happen in exactly the same way if you used just the plain Kafka client.
See KafkaProducer.send() JavaDocs:
* @throws TimeoutException If the record could not be appended to the send buffer due to memory unavailable
*                          or missing metadata within {@code max.block.ms}.
This happens because of the blocking logic in that producer:
/**
 * Wait for cluster metadata including partitions for the given topic to be available.
 * @param topic The topic we want metadata for
 * @param partition A specific partition expected to exist in metadata, or null if there's no preference
 * @param nowMs The current time in ms
 * @param maxWaitMs The maximum time in ms for waiting on the metadata
 * @return The cluster containing topic metadata and the amount of time we waited in ms
 * @throws TimeoutException if metadata could not be refreshed within {@code max.block.ms}
 * @throws KafkaException for all Kafka-related exceptions, including the case where this method is called after producer close
 */
private ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long nowMs, long maxWaitMs) throws InterruptedException {
Unfortunately this is not explained in the send() JavaDocs, which claim the call is fully asynchronous; apparently it is not, at least in this metadata part, which has to be available before the record is enqueued for publishing.
That part we cannot control, and it is not reflected in the returned Future:
try {
    clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), nowMs, maxBlockTimeMs);
} catch (KafkaException e) {
    if (metadata.isClosed())
        throw new KafkaException("Producer closed while send in progress", e);
    throw e;
}
See the Apache Kafka docs for more info on how to adjust the KafkaProducer for this: https://kafka.apache.org/documentation/#theproducer
For anyone else stumbling across this thread: the question was answered in the discussion at https://github.com/spring-projects/spring-kafka/discussions/2250#. In short, kafkaTemplate.getProducerFactory().reset(); does the trick.
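Putting the two findings together, here is a minimal sketch of a send method that handles both failure paths: the synchronous metadata timeout (thrown from send() itself) and the asynchronous callback. Calling reset() in the catch block is an assumption drawn from the discussion above, not something Spring mandates:

public void sendMessageWithCallback(String message, String topicName) {
    try {
        // send() can block up to max.block.ms waiting for metadata and
        // throws synchronously when no broker is reachable
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topicName, message);
        future.addCallback(new KafkaSendCallback<String, String>() {

            @Override
            public void onFailure(KafkaProducerException ex) {
                // asynchronous failure path (broker rejected / record expired)
                logger.warn("Message could not be delivered. " + ex.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, String> result) {
                logger.info("Delivered with offset: " + result.getRecordMetadata().offset());
            }
        });
    } catch (KafkaException ex) {
        // synchronous failure path (e.g. metadata timeout); recreate the
        // underlying producer so the factory is not stuck with a broken one
        logger.warn("Send failed synchronously: " + ex.getMessage());
        kafkaTemplate.getProducerFactory().reset();
    }
}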

How to test a MessageChannel in Spring Integration?

For a test, I'd like to know whether a message passed through a specific channel, or to get the message from that specific channel.
So my flow is: controller -> gateway -> ServiceActivator
private final Gateway gateway;

public ResponseEntity<Map<String, String>> submit(String applicationId, ApplicationDto applicationDto) {
    applicationDto.setApplicationId(applicationId);
    gateway.submitApplication(applicationDto);
    return ResponseEntity.ok(Map.of(MESSAGE, "Accepted submit"));
}
the gateway
@Gateway(requestChannel = "submitApplicationChannel", replyChannel = "replySubmitApplicationChannel")
WorkflowPayload submitApplication(ApplicationDto applicationDto);
pipeline
@Bean
MessageChannel submitApplicationChannel() {
    return new DirectChannel();
}
So my test sends a request to start the flow:
@Test
@DisplayName("Application Submission")
void submissionTest() throws Exception {
    mockMvc.perform(MockMvcRequestBuilders
            .post("/api/v1/applications/contract-validation/" + APPLICATION_ID)
            .contentType(MediaType.APPLICATION_JSON)
            .content(objectMapper.writeValueAsString(payload)))
        .andExpect(status().isAccepted())
        .andReturn();
    // Check HERE if the message passed through the channel
}
Can you give me a hand??
In your test, add a ChannelInterceptor to the submitApplicationChannel before calling the gateway.
public interface ChannelInterceptor {

    /**
     * Invoked before the Message is actually sent to the channel.
     * This allows for modification of the Message if necessary.
     * If this method returns {@code null} then the actual
     * send invocation will not occur.
     */
    @Nullable
    default Message<?> preSend(Message<?> message, MessageChannel channel) {
        return message;
    }

    /**
     * Invoked immediately after the send invocation. The boolean
     * value argument represents the return value of that invocation.
     */
    default void postSend(Message<?> message, MessageChannel channel, boolean sent) {
    }

    /**
     * Invoked after the completion of a send regardless of any exception that
     * have been raised thus allowing for proper resource cleanup.
     * <p>Note that this will be invoked only if {@link #preSend} successfully
     * completed and returned a Message, i.e. it did not return {@code null}.
     * @since 4.1
     */
    default void afterSendCompletion(
            Message<?> message, MessageChannel channel, boolean sent, @Nullable Exception ex) {
    }

    /**
     * Invoked as soon as receive is called and before a Message is
     * actually retrieved. If the return value is 'false', then no
     * Message will be retrieved. This only applies to PollableChannels.
     */
    default boolean preReceive(MessageChannel channel) {
        return true;
    }

    /**
     * Invoked immediately after a Message has been retrieved but before
     * it is returned to the caller. The Message may be modified if
     * necessary; {@code null} aborts further interceptor invocations.
     * This only applies to PollableChannels.
     */
    @Nullable
    default Message<?> postReceive(Message<?> message, MessageChannel channel) {
        return message;
    }

    /**
     * Invoked after the completion of a receive regardless of any exception that
     * have been raised thus allowing for proper resource cleanup.
     * <p>Note that this will be invoked only if {@link #preReceive} successfully
     * completed and returned {@code true}.
     * @since 4.1
     */
    default void afterReceiveCompletion(@Nullable Message<?> message, MessageChannel channel,
            @Nullable Exception ex) {
    }
}
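For example, here is a sketch of how the test could use preSend to capture the message. It assumes the channel bean can be autowired as an AbstractMessageChannel (which DirectChannel extends) so an interceptor can be added; the latch/reference bookkeeping is illustrative:

@Autowired
@Qualifier("submitApplicationChannel")
private AbstractMessageChannel submitApplicationChannel;

@Test
@DisplayName("Application Submission")
void submissionTest() throws Exception {
    AtomicReference<Message<?>> intercepted = new AtomicReference<>();
    CountDownLatch latch = new CountDownLatch(1);
    submitApplicationChannel.addInterceptor(new ChannelInterceptor() {

        @Override
        public Message<?> preSend(Message<?> message, MessageChannel channel) {
            intercepted.set(message);
            latch.countDown();
            return message;
        }
    });

    mockMvc.perform(MockMvcRequestBuilders
            .post("/api/v1/applications/contract-validation/" + APPLICATION_ID)
            .contentType(MediaType.APPLICATION_JSON)
            .content(objectMapper.writeValueAsString(payload)))
        .andExpect(status().isAccepted());

    // the message passed through the channel if the interceptor saw it
    assertTrue(latch.await(10, TimeUnit.SECONDS));
    assertNotNull(intercepted.get());
}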

Multithreaded Executor channel to speed up the consumer process

I have a message producer which produces around 15 messages/second
The consumer is a Spring Integration project which consumes from the message queue and does a lot of processing. Currently it is single-threaded and cannot keep up with the rate at which the producer sends messages; hence the queue depth keeps increasing.
return IntegrationFlows
        .from(Jms.messageDrivenChannelAdapter(Jms.container(this.emsConnectionFactory, this.emsQueue).get()))
        .wireTap(FLTAWARE_WIRE_TAP_CHNL) // push raw fa data
        .filter(ingFilter, "filterMessageOnEvent")
        .transform(eventHandler, "parseEvent")
        .aggregate(a -> a.correlationStrategy(corStrgy, "getCorrelationKey")
                .releaseStrategy(g -> {
                    boolean eonExists = g.getMessages().stream()
                            .anyMatch(eon -> ((FlightModel) eon.getPayload()).getEstGmtOnDtm() != null);
                    if (eonExists) {
                        boolean einExists = g.getMessages().stream()
                                .anyMatch(ein -> ((FlightModel) ein.getPayload()).getEstGmtInDtm() != null);
                        if (einExists) {
                            return true;
                        }
                    }
                    return false;
                })
                .messageStore(this.messageStore))
        .channel("AggregatorEventChannel")
        .get();
Is it possible to use an executor channel to process this in a multithreaded environment and speed up the consumer?
If yes, please suggest how I can achieve it. To ensure ordering of the messages, I need to assign messages of the same type (based on the id of the message) to the same thread of the executor channel.
[UPDATED CODE]
I have created the executor channels below:
public static final MessageChannel SKW_DEFAULT_CHANNEL = MessageChannels
        .executor(ASQ_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();

public static final MessageChannel RPA_DEFAULT_CHANNEL = MessageChannels
        .executor(ASH_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
Now, from the main message flow, I redirect to a custom router which forwards the message to an executor channel, as shown below:
@Bean
public IntegrationFlow baseEventFlow1() {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(Jms.container(this.emsConnectionFactory, this.emsQueue).get()))
            .wireTap(FLTAWARE_WIRE_TAP_CHNL) // push raw fa data
            .filter(ingFilter, "filterMessageOnEvent")
            .route(router())
            .get();
}
public AbstractMessageRouter router() {
    return new AbstractMessageRouter() {

        @Override
        protected Collection<MessageChannel> determineTargetChannels(Message<?> message) {
            if (message.getPayload().toString().contains("\"id\":\"RPA")) {
                return Collections.singletonList(RPA_DEFAULT_CHANNEL);
            } else if (message.getPayload().toString().contains("\"id\":\"SKW")) {
                return Collections.singletonList(SKW_DEFAULT_CHANNEL);
            } else {
                return Collections.singletonList(new NullChannel());
            }
        }
    };
}
I will have an individual consumer flow for each corresponding executor channel.
Please correct my understanding.
[UPDATED]
@Bean
@BridgeTo("uaxDefaultChannel")
public MessageChannel ucaDefaultChannel() {
    return MessageChannels.executor(UCA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel ualDefaultChannel() {
    return MessageChannels.executor(UAL_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
public IntegrationFlow uaEventFlow() {
    return IntegrationFlows.from("uaDefaultChannel")
            .wireTap(UA_WIRE_TAP_CHNL)
            .transform(eventHandler, "parseEvent")
            .get();
}
So will @BridgeTo on the executor channel forward the messages?
hence the queue depth keeps on increasing
Since it looks like your queue lives on a JMS broker, it is really OK to have such behavior. That's exactly what messaging systems were designed for: decoupling producer from consumer and dealing with messages in a destination whenever possible.
If you want to increase consumption from JMS, you can consider the concurrency options on the JMS container:
/**
 * The concurrency to use.
 * @param concurrency the concurrency.
 * @return current {@link JmsDefaultListenerContainerSpec}.
 * @see DefaultMessageListenerContainer#setConcurrency(String)
 */
public JmsDefaultListenerContainerSpec concurrency(String concurrency) {
    this.target.setConcurrency(concurrency);
    return this;
}

/**
 * The concurrent consumers number to use.
 * @param concurrentConsumers the concurrent consumers count.
 * @return current {@link JmsDefaultListenerContainerSpec}.
 * @see DefaultMessageListenerContainer#setConcurrentConsumers(int)
 */
public JmsDefaultListenerContainerSpec concurrentConsumers(int concurrentConsumers) {
    this.target.setConcurrentConsumers(concurrentConsumers);
    return this;
}

/**
 * The max for concurrent consumers number to use.
 * @param maxConcurrentConsumers the max concurrent consumers count.
 * @return current {@link JmsDefaultListenerContainerSpec}.
 * @see DefaultMessageListenerContainer#setMaxConcurrentConsumers(int)
 */
public JmsDefaultListenerContainerSpec maxConcurrentConsumers(int maxConcurrentConsumers) {
    this.target.setMaxConcurrentConsumers(maxConcurrentConsumers);
    return this;
}
See more info in the docs: https://docs.spring.io/spring/docs/5.2.3.RELEASE/spring-framework-reference/integration.html#jms-receiving
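For example, a sketch of the adapter above with concurrency applied (the "3-5" range is an arbitrary illustration; the rest mirrors the question's flow):

@Bean
public IntegrationFlow concurrentBaseEventFlow() {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(
                    Jms.container(this.emsConnectionFactory, this.emsQueue)
                            .concurrency("3-5") // min 3, max 5 concurrent consumers
                            .get()))
            .filter(ingFilter, "filterMessageOnEvent")
            .transform(eventHandler, "parseEvent")
            .channel("AggregatorEventChannel")
            .get();
}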
But that won't let you "assign messages to a specific thread": there is simply no way to partition in JMS.
We can do that with Spring Integration using a router based on your "id of the message", plus particular ExecutorChannel instances each configured with a single-threaded Executor. Every ExecutorChannel gets its own dedicated executor with only a single thread. This way you ensure ordering for messages with the same partition key, and you process the partitions in parallel. All the ExecutorChannels can have the same subscriber, or bridge to the same channel for processing.
However, you need to keep in mind that once you leave the JMS listener thread you finish the JMS transaction, so if you then fail to process a message on that separate thread you may lose it.
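Putting it together, a minimal sketch of the partition-per-channel idea (bean and channel names here are illustrative, and eventHandler is assumed from the question's code):

@Bean
public MessageChannel rpaChannel() {
    // single thread => strict ordering within the RPA partition
    return MessageChannels.executor(Executors.newFixedThreadPool(1)).get();
}

@Bean
public MessageChannel skwChannel() {
    return MessageChannels.executor(Executors.newFixedThreadPool(1)).get();
}

@Bean
public IntegrationFlow routedFlow() {
    return IntegrationFlows.from("inputChannel")
            .<String, String>route(payload -> payload.contains("\"id\":\"RPA") ? "rpaChannel"
                    : payload.contains("\"id\":\"SKW") ? "skwChannel"
                    : "nullChannel") // built-in discard channel
            .get();
}

@Bean
public IntegrationFlow rpaFlow() {
    // both partitions can bridge to the same downstream logic
    return IntegrationFlows.from("rpaChannel").transform(eventHandler, "parseEvent").get();
}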

Agent not returning correct OID value when using SNMP4j (org.snmp4j) v3 and user authentication?

I have written an SNMP agent and registered a managed object (created/set a value of an MIB OID).
When I retrieve this value using SNMPv2c, the value is returned correctly - the PDU from ResponseEvent.getResponse has type=GET and the variable bindings have expected data - correct OID etc.
When I retrieve this value using SNMPv3 and user authentication, the value is not returned correctly: the PDU from ResponseEvent.getResponse has type=REPORT, and the variable bindings have a different OID from that in the request. From what I've read so far, this indicates a config/authentication error.
Below are sample code snippets for the client and agent. Can you tell me where I'm going wrong in creating the agent and client?
// TestSNMPAgent:
public class TestSNMPAgent {

    private OID sysDescr = new OID("1.3.6.1.2.1.1.1.0");
    ...

    public static void main(String[] args) throws IOException {
        TestSNMPAgent agent = new TestSNMPAgent();
        agent.init("0.0.0.0/4071");
    }

    private void init(String agentIp) throws IOException {
        agent = new SNMPAgent(agentIp);
        agent.start();
        agent.unregisterManagedObject(agent.getSnmpv2MIB());
        agent.registerManagedObject(new MOScalar(sysDescr,
                MOAccessImpl.ACCESS_READ_WRITE,
                getVariable("1")));
        ...
    }
}
// SNMPAgent:
public class SNMPAgent extends BaseAgent {
...
    @Override
    protected void addUsmUser(USM arg0) {
        UsmUser user = new UsmUser(new OctetString("SHADES"),
                AuthSHA.ID,
                new OctetString("SHADESAuthPassword"),
                PrivDES.ID,
                new OctetString("SHADESPrivPassword"));
    }

    @Override
    protected void addViews(VacmMIB vacm) {
        vacm.addGroup(SecurityModel.SECURITY_MODEL_USM,
                new OctetString("SHADES"),
                new OctetString("v3group"),
                StorageType.nonVolatile);
        vacm.addAccess(new OctetString("v3group"), new OctetString(),
                SecurityModel.SECURITY_MODEL_USM,
                SecurityLevel.NOAUTH_NOPRIV, VacmMIB.vacmExactMatch,
                new OctetString("fullReadView"),
                new OctetString("fullWriteView"),
                new OctetString("fullNotifyView"),
                StorageType.nonVolatile);
    }

    public void registerManagedObject(ManagedObject mo) {
        try {
            server.register(mo, null);
        } catch (DuplicateRegistrationException ex) {
            throw new RuntimeException(ex);
        }
    }
}
// TestSNMPMgr
public class TestSNMPMgr {

    public static void main(String[] args) throws IOException {
        TestSNMPMgr client = new TestSNMPMgr();
        client.init();
    }

    public void init() throws IOException {
        SNMPMgr client = new SNMPMgr();
        client.start();
        // Get back the value which was set
        String value = client.getAsString(new OID("1.3.6.1.2.1.1.1.0"));
    }
}
// SNMPMgr
public class SNMPMgr {

    Snmp snmp = null;
    Address address = null;

    public SNMPMgr() {
    }

    /**
     * Start the Snmp session. If you forget the listen() method you will not
     * get any answers because the communication is asynchronous
     * and the listen() method listens for answers.
     * @throws IOException
     */
    public void start() throws IOException {
        address = GenericAddress.parse("udp:127.0.0.1/4701");
        TransportMapping transport = new DefaultUdpTransportMapping();
        snmp = new Snmp(transport);
        USM usm = new USM(SecurityProtocols.getInstance(),
                new OctetString(MPv3.createLocalEngineID()), 0);
        SecurityModels.getInstance().addSecurityModel(usm);
        transport.listen();
    }
    public void end() {
        try {
            snmp.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    /**
     * Method which takes a single OID and returns the response from the agent as a String.
     * @param oid
     * @return
     * @throws IOException
     */
    public String getAsString(OID oid) throws IOException {
        ResponseEvent event = get(new OID[] { oid });
        return event.getResponse().get(0).getVariable().toString();
    }

    public ResponseEvent get(OID oids[]) throws IOException {
        PDU pdu = new ScopedPDU();
        for (OID oid : oids) {
            pdu.add(new VariableBinding(oid));
        }
        pdu.setType(PDU.GET);
        // add user to the USM
        snmp.getUSM().addUser(new OctetString("SHADES"),
                new UsmUser(new OctetString("SHADES"),
                        AuthSHA.ID,
                        new OctetString("SHADESAuthPassword"),
                        PrivDES.ID,
                        new OctetString("SHADESPrivPassword")));
        // send the PDU
        ResponseEvent event = snmp.send(pdu, getTarget(), null);
        if (event != null) {
            return event;
        }
        throw new RuntimeException("GET timed out");
    }

    /**
     * This method returns a Target, which contains information about
     * where the data should be fetched and how.
     * @return
     */
    private UserTarget getTarget() {
        UserTarget target = new UserTarget();
        target.setAddress(address);
        target.setRetries(1);
        target.setTimeout(5000);
        target.setVersion(SnmpConstants.version3);
        target.setSecurityLevel(SecurityLevel.NOAUTH_NOPRIV);
        target.setSecurityName(new OctetString("SHADES"));
        return target;
    }
}
The OID in the Report PDU should tell you what is happening. Under typical circumstances there will be one or two (or one of two) request/report exchanges to establish initial SNMPv3 communications between manager and agent (or, rather, non-authoritative and authoritative engines, respectively).
The first is typically a usmStatsUnknownEngineIDs report, which allows the manager to discover the agent's Engine ID (needed for key localization etc.) and will happen if you don't specify the proper Engine ID in the initial request. The second/other happens if you are using auth/noPriv or auth/priv level security: usmStatsNotInTimeWindows, which is sent if the request doesn't specify Engine Boots/Engine Time values within the proper range of the agent's values. These values prevent message replay attacks by making requests invalid if they fall outside the time window, and the manager typically doesn't know what they are until it receives them from the agent by way of a Report PDU.
After the manager has the proper Engine ID, Boots, and Time, and has localized keys to the Engine ID if necessary, then the normal request/response exchange can proceed as expected. Some SNMP APIs will take care of this exchange for you so you just send your request and get the eventual result after the exchange. It would seem that SNMP4j doesn't and you may have to handle it yourself if it's one of these reports.
If it's not one of these reports, then you likely have a mismatch in configuration.
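One concrete mismatch worth checking in the code above: the UsmUser is defined with SHA authentication and DES privacy, but the UserTarget requests NOAUTH_NOPRIV, and the agent listens on 0.0.0.0/4071 while the manager sends to udp:127.0.0.1/4701. A sketch of a target consistent with that user (assuming the credentials from the question):

private UserTarget getTarget() {
    UserTarget target = new UserTarget();
    // must match the port the agent actually listens on
    target.setAddress(GenericAddress.parse("udp:127.0.0.1/4071"));
    target.setRetries(1);
    target.setTimeout(5000);
    target.setVersion(SnmpConstants.version3);
    // security level must match how the USM user was defined (SHA auth + DES priv)
    target.setSecurityLevel(SecurityLevel.AUTH_PRIV);
    target.setSecurityName(new OctetString("SHADES"));
    return target;
}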

Running async jobs in dropwizard, and polling their status

In Dropwizard, I need to implement asynchronous jobs and poll their status.
I have 2 endpoints for this in resource:
#Path("/jobs")
#Component
public class MyController {
#POST
#Produces(MediaType.APPLICATION_JSON)
public String startJob(#Valid MyRequest request) {
return 1111;
}
#GET
#Path("/{jobId}")
#Produces(MediaType.APPLICATION_JSON)
public JobStatus getJobStatus(#PathParam("id") String jobId) {
return JobStatus.READY;
}
}
I am considering using Quartz to start the jobs, but only a single time and without repeating; when the status is requested, I would return the trigger status. But the idea of using Quartz for non-scheduled usage looks weird.
Are there any better approaches for this? Maybe Dropwizard provides better tools itself? I would appreciate any advice.
UPDATE: I am also looking at https://github.com/gresrun/jesque, but cannot find any way to poll the status of a running job.
You can use the Managed interface. In the snippet below I am using the ScheduledExecutorService to execute jobs, but you can use Quartz instead if you like. I prefer working with ScheduledExecutorService as it is simpler and easier...
The first step is to register your managed service.
environment.lifecycle().manage(new JobExecutionService());
The second step is to write it.
/**
 * A wrapper around the ScheduledExecutorService so all jobs can start when the server starts, and
 * automatically shut down when the server stops.
 * @author Nasir Rasul {@literal nasir@rasul.ca}
 */
public class JobExecutionService implements Managed {

    private final ScheduledExecutorService service = Executors.newScheduledThreadPool(2);

    @Override
    public void start() throws Exception {
        System.out.println("Starting jobs");
        service.scheduleAtFixedRate(new HelloWorldJob(), 1, 1, TimeUnit.SECONDS);
    }

    @Override
    public void stop() throws Exception {
        System.out.println("Shutting down");
        service.shutdown();
    }
}
and the job itself
/**
 * A very simple job which just prints the current time in milliseconds.
 * @author Nasir Rasul {@literal nasir@rasul.ca}
 */
public class HelloWorldJob implements Runnable {

    /**
     * When an object implementing interface <code>Runnable</code> is used
     * to create a thread, starting the thread causes the object's
     * <code>run</code> method to be called in that separately executing
     * thread.
     * <p>
     * The general contract of the method <code>run</code> is that it may
     * take any action whatsoever.
     *
     * @see Thread#run()
     */
    @Override
    public void run() {
        System.out.println(System.currentTimeMillis());
    }
}
As mentioned in the comment below, if you use Runnable, you can call Thread.getState(). Please refer to Get a List of all Threads currently running in Java. You may still need some intermediary pieces depending on how you're wiring your application.
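To wire this back to the original start/poll endpoints, here is a minimal sketch of tracking submitted jobs by id so the GET endpoint can report their status (the JobRegistry name and status strings are hypothetical; Future-based bookkeeping is just one option):

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class JobRegistry {

    private final ExecutorService executor = Executors.newFixedThreadPool(4);
    private final ConcurrentHashMap<String, Future<?>> jobs = new ConcurrentHashMap<>();

    /** Submits a job and returns an id the client can poll with. */
    public String submit(Runnable job) {
        String jobId = UUID.randomUUID().toString();
        jobs.put(jobId, executor.submit(job));
        return jobId;
    }

    /** Maps the Future's state onto a simple status for the GET endpoint. */
    public String status(String jobId) {
        Future<?> future = jobs.get(jobId);
        if (future == null) {
            return "UNKNOWN";
        }
        return future.isDone() ? "READY" : "RUNNING";
    }
}

The POST resource method would call submit() and return the id; the GET method would look the id up via status().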
