Embedded ElasticSearch can't connect to transport port in integration test

I'm trying to create an embedded ElasticSearch node for integration testing.
Here is the creation code:
private static final String THREAD_NAME = "ES-THREAD";
private static final String CLUSTER_NAME = "ES-INTEGRATION-TEST";
private static final String ES_HOME_PATH = "elastic-search-home";
private static final String ES_DATA_PATH = "elastic-search-data";
private static final String DATA_PORTS = "9500-9599";
private static final String TRANSPORT_PORTS = "9600-9699";
public void before() throws Throwable {
    try {
        homeDir = Files.createTempDirectory(ES_HOME_PATH);
        dataDir = Files.createTempDirectory(ES_DATA_PATH);
        log.info("Created temp directory {} and {}", homeDir, dataDir);
    } catch (IOException ex) {
        throw new IllegalStateException("Temp Elastic Search directory not created", ex);
    }
    Properties props = new Properties();
    props.setProperty("name", THREAD_NAME);
    props.setProperty("path.home", homeDir.toString());
    props.setProperty("path.data", dataDir.toString());
    props.setProperty("http.port", DATA_PORTS);
    props.setProperty("transport.tcp.port", TRANSPORT_PORTS);
    props.setProperty("node.local", "true");
    props.setProperty("script.groovy.sandbox.enabled", "true");
    props.setProperty("script.engine.groovy.inline.aggs", "true");
    props.setProperty("script.engine.groovy.inline.search", "true");
    props.setProperty("script.engine.groovy.inline.update", "true");
    props.setProperty("script.engine.groovy.inline.mapping", "true");
    esNode = NodeBuilder.nodeBuilder().local(false).client(false)
            .settings(Settings.settingsBuilder().put(props).build()).clusterName(CLUSTER_NAME).build();
    esNode.start();
}
In the code under test there is the following method, which creates a transport connection to ElasticSearch:
private Client createClient() throws UnknownHostException {
    Settings.Builder builder = Settings.builder();
    builder.put("cluster.name", clusterName);
    builder.put("client.transport.ignore_cluster_name", true);
    Settings settings = builder.build();
    return TransportClient.builder().settings(settings).build()
            .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(hostname), port));
}
When I run the test I receive this exception:
java.net.BindException: Can't assign requested address
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:574)
at org.jboss.netty.channel.Channels.connect(Channels.java:634)
at org.jboss.netty.channel.AbstractChannel.connect(AbstractChannel.java:216)
at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:229)
at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182)
at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:913)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:880)
at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:852)
at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:250)
at org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler.doSample(TransportClientNodesService.java:354)
at org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.sample(TransportClientNodesService.java:300)
at org.elasticsearch.client.transport.TransportClientNodesService$ScheduledNodeSampler.run(TransportClientNodesService.java:333)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
The hostname I pass to the tested code is localhost, and the port is the one I pull from the embedded ElasticSearch node by:
NodeInfo nodeInfo = esNode.client().admin().cluster().prepareNodesInfo(localNodeId).get().iterator().next();
transportAddress = nodeInfo.getTransport().address().publishAddress().getAddress();
I saw that the transport port is always 0, and when I evaluate nodeInfo.getTransport().address() its value is local[1].
What is wrong in the node creation?
Is there another configuration I need to add?
Thanks,
Daniela

When I changed props.setProperty("node.local", "true"); to props.setProperty("node.local", "false");, the transport port was created.
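For reference, a minimal sketch of the working setup against the same ES 2.x API used above (NodesInfoResponse and InetSocketTransportAddress come from that API; everything else reuses the fields from the question):

props.setProperty("node.local", "false"); // bind a real TCP transport instead of local[N]
esNode = NodeBuilder.nodeBuilder().local(false).client(false)
        .settings(Settings.settingsBuilder().put(props).build())
        .clusterName(CLUSTER_NAME).build();
esNode.start();
// Read back the port actually bound from the 9600-9699 range for the TransportClient:
NodesInfoResponse info = esNode.client().admin().cluster().prepareNodesInfo().get();
TransportAddress publish = info.getNodes()[0].getTransport().address().publishAddress();
int port = ((InetSocketTransportAddress) publish).address().getPort();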

Related

Unable to delete jetty working directory before jetty starts or creates it's working directory

I am running into an interesting problem. I want to delete the Jetty working directory before Jetty restarts next time. The application has two WAR files and is implemented using embedded Jetty. The working folder names look like the below:
jetty-0_0_0_0-10000-oaxui_war-_ui_oax-any-4214704288653178451
jetty-0_0_0_0-10000-oaxservice_war-_api_oax-any-1938823993160354573
Option 1: We have a run.sh file which actually starts JettyServer, so I thought to place the code below in the file just before that.
echo "now deleting"
DELTEMP="rm -rf /tmp/jetty*"
exec $DELTEMP
echo "deleted....."
Result: It actually deletes the Jetty working directory, but it does not let Jetty create a new working directory either. (Note that exec replaces the shell process with the rm command, so the script never reaches the line that launches Jetty.)
Option 2: Create a temp directory in a location we provide, as below, and then later delete this folder from run.sh with the command above, just with a custom path. Unfortunately, this also didn't help, as the working directory was not being created in the first place.
private HandlerCollection getWebAppHandlers() throws SQLException, NamingException{
//Setting the war and context path for the service layer: oaxservice
File tempDir1 = new File("/faw/service/appshell/tempdir/");
File tempDir2 = new File("/faw/service/appshell/tempdir/");
WebAppContext serviceWebapp = new WebAppContext();
tempDir1.mkdirs();
serviceWebapp.setTempDirectory(tempDir1);
serviceWebapp.setWar(APPSHELL_API_WAR_FILE_PATH);
serviceWebapp.setContextPath(APPSHELL_API_CONTEXT_PATH);
//setting the war and context path for the UI layer: oaxui
WebAppContext uiWebapp = new WebAppContext();
tempDir2.mkdirs();
uiWebapp.setTempDirectory(tempDir2);
uiWebapp.setWar(APPSHELL_UI_WAR_FILE_PATH);
uiWebapp.setContextPath(APPSHELL_UI_CONTEXT_PATH);
uiWebapp.setAllowNullPathInfo(true);
uiWebapp.setInitParameter("org.eclipse.jetty.servlet.Default.dirAllowed", "false");
//set error page handler for the UI context
uiWebapp.setErrorHandler(new CustomErrorHandler());
//handling the multiple war files using HandlerCollection.
HandlerCollection handlerCollection = new HandlerCollection();
handlerCollection.setHandlers(new Handler[]{serviceWebapp, uiWebapp});
return handlerCollection;
}
Below is the complete JettyServer.java file
public class JettyServer {
private final static Logger logger = Logger.getLogger(JettyServer.class.getName());
private static final int JETTY_PORT = 10000;
private static final String JETTY_REALM_PROPERTIES_FILE_NAME = "realm.properties";
private static final String JETTY_REALM_NAME = "myrealm";
private static final String APPSHELL_WAR_FOLDER = "/faw/service/appshell/target/";
private static final String APPSHELL_UI_WAR_FILE_PATH = APPSHELL_WAR_FOLDER+"oaxui.war";
private static final String APPSHELL_API_WAR_FILE_PATH = APPSHELL_WAR_FOLDER+"oaxservice.war";
private static final String APPSHELL_API_CONTEXT_PATH = "/api/oax";
private static final String APPSHELL_UI_CONTEXT_PATH = "/ui/oax";
private static final String JETTY_CONFIG_FOLDER = "/faw/service/appshell/config/";
private static final String JETTY_CONFIG_FILE = JETTY_CONFIG_FOLDER+"datasource.properties";
private static final String JETTY_CONFIG_DATASOURCE_URL = "datasource.url";
private static final String JETTY_CONFIG_APPSHELL_SCHEMA_NAME = "appshell.datasource.username";
private static final String JETTY_CONFIG_APPSHELL_SCHEMA_PWD = "appshell.datasource.password";
private static final String JETTY_CONFIG_APPSHELL_DS_INITIAL_POOL_SIZE = "appshell.datasource.initialPoolSize";
private static final String JETTY_CONFIG_APPSHELL_DS_MAX_POOL_SIZE = "appshell.datasource.maxPoolSize";
private static final String JETTY_CONFIG_FAWCOMMON_SCHEMA_NAME = "fawcommon.datasource.username";
private static final String JETTY_CONFIG_FAWCOMMON_SCHEMA_PWD = "fawcommon.datasource.password";
private static final String JETTY_CONFIG_FAWCOMMON_DS_INITIAL_POOL_SIZE = "fawcommon.datasource.initialPoolSize";
private static final String JETTY_CONFIG_FAWCOMMON_DS_MAX_POOL_SIZE = "fawcommon.datasource.maxPoolSize";
private static final int INITIAL_POOL_SIZE_DEFAULT = 0;
private static final int MAX_POOL_SIZE_DEFAULT = 50;
private static final String JNDI_NAME_FAWAPPSHELL = "jdbc/CXOMetadataDatasource";
private static final String JNDI_NAME_FAWCOMMON = "jdbc/FawCommonDatasource";
private static final String JETTY_REQUEST_LOG_FILE_NAME = "/faw/logs/appshell/applogs/jetty-request.yyyy_mm_dd.log";
private static final String JETTY_REQUEST_LOG_FILE_NAME_DATE_FORMAT = "yyyy_MM_dd";
private static final boolean JETTY_REQUEST_LOG_FILE_APPEND = true;
private static final int JETTY_REQUEST_LOG_FILE_RETAIN_DAYS = 31;
private static final String JETTY_STDOUT_LOG_FILE_NAME = "/faw/logs/appshell/applogs/jetty-out.yyyy_mm_dd.log";
private static final int MAX_REQUEST_HEADER_SIZE = 65535;
private static final String SSL_SERVER_KEY_STROKE_PATH="/faw/tmp/customscript/certs/server.jks";
private static final String SSL_TRUST_KEY_STROKE_PATH="/faw/tmp/customscript/certs/trust.jks";
private static final String SSL_KEY_STROKE_KEY="changeit";
public static QueuedThreadPool threadPool;
public JettyServer() {
try {
//Redirect system out and system error to our print stream.
RolloverFileOutputStream os = new RolloverFileOutputStream(JETTY_STDOUT_LOG_FILE_NAME, true);
PrintStream logStream = new PrintStream(os);
System.setOut(logStream);
System.setErr(logStream);
Server server = new Server(JETTY_PORT);
server.addBean(getLoginService());
//Set SSL context
if (isIDCSEnvironment) {
try {
logger.info("Configuring Jetty SSL..");
HttpConfiguration http_config = new HttpConfiguration();
http_config.setSecureScheme("https");
http_config.setSecurePort(JETTY_PORT);
SslContextFactory sslContextFactory = new SslContextFactory.Server();
sslContextFactory.setKeyStorePath(SSL_SERVER_KEY_STROKE_PATH);
sslContextFactory.setCertAlias("server");
sslContextFactory.setKeyStorePassword(SSL_KEY_STROKE_KEY);
sslContextFactory.setTrustStorePath(SSL_TRUST_KEY_STROKE_PATH);
sslContextFactory.setTrustStorePassword(SSL_KEY_STROKE_KEY);
HttpConfiguration https_config = new HttpConfiguration(http_config);
https_config.addCustomizer(new SecureRequestCustomizer());
ServerConnector https = new ServerConnector(server, new SslConnectionFactory(sslContextFactory, HttpVersion.HTTP_1_1.asString()), new HttpConnectionFactory(https_config));
https.setPort(JETTY_PORT);
server.setConnectors(new Connector[]{https});
logger.info("Jetty SSL successfully configured..");
} catch (Exception e){
logger.severe("Error configuring Jetty SSL.."+e);
throw e;
}
}
Configuration.ClassList classlist = Configuration.ClassList.setServerDefault(server);
classlist.addAfter("org.eclipse.jetty.webapp.FragmentConfiguration",
"org.eclipse.jetty.plus.webapp.EnvConfiguration",
"org.eclipse.jetty.plus.webapp.PlusConfiguration");
//register oaxui and oaxservice web apps
HandlerCollection webAppHandlers = getWebAppHandlers();
for (Connector c : server.getConnectors()) {
c.getConnectionFactory(HttpConnectionFactory.class).getHttpConfiguration().setRequestHeaderSize(MAX_REQUEST_HEADER_SIZE);
c.getConnectionFactory(HttpConnectionFactory.class).getHttpConfiguration().setSendServerVersion(false);
}
threadPool = (QueuedThreadPool) server.getThreadPool();
// request logs
RequestLogHandler requestLogHandler = new RequestLogHandler();
AsyncRequestLogWriter asyncRequestLogWriter = new AsyncRequestLogWriter(JETTY_REQUEST_LOG_FILE_NAME);
asyncRequestLogWriter.setFilenameDateFormat(JETTY_REQUEST_LOG_FILE_NAME_DATE_FORMAT);
asyncRequestLogWriter.setAppend(JETTY_REQUEST_LOG_FILE_APPEND);
asyncRequestLogWriter.setRetainDays(JETTY_REQUEST_LOG_FILE_RETAIN_DAYS);
asyncRequestLogWriter.setTimeZone(TimeZone.getDefault().getID());
requestLogHandler.setRequestLog(new AppShellCustomRequestLog(asyncRequestLogWriter));
webAppHandlers.addHandler(requestLogHandler);
StatisticsHandler statisticsHandler = new StatisticsHandler();
statisticsHandler.setHandler(new AppshellStatisticsHandler());
webAppHandlers.addHandler(statisticsHandler);
// set handler
server.setHandler(webAppHandlers);
//start jettyMetricsPsr
JettyMetricStatistics.logJettyMetrics();
// set error handler
server.addBean(new CustomErrorHandler());
// GZip Handler
GzipHandler gzip = new GzipHandler();
server.setHandler(gzip);
gzip.setHandler(webAppHandlers);
//setting server attribute for datasources
server.setAttribute("fawappshellDS", new Resource(JNDI_NAME_FAWAPPSHELL, DatasourceUtil.getFawAppshellDatasource()));
server.setAttribute("fawcommonDS", new Resource(JNDI_NAME_FAWCOMMON, DatasourceUtil.getCommonDatasource()));
//new Resource(server, JNDI_NAME_FAWAPPSHELL, getFawAppshellDatasource());
//new Resource(server, JNDI_NAME_FAWCOMMON, getFawCommonDatasource());
Map<String, String> configDetails;
if (isIDCSEnvironment) {
configDetails = DatasourceUtil.getConfigMap();
} else {
configDetails = DatasourceUtil.configMapInternal;
}
if (isIDCSEnvironment && configDetails.containsKey(PODDB_ATP_ENABLED) && BooleanUtils.toBoolean(configDetails.get(PODDB_ATP_ENABLED))){
configDetails.put(DatabagProperties.LCM_MASTER_PROPERTY,"true");
try {
DBUtils.migrateDBATP(configDetails, DatasourceUtil.getFawAppshellDatasourceATP());
configDetails.remove(DatabagProperties.LCM_MASTER_PROPERTY, "true");
} catch(SQLException e){
logger.info("Exception while executing DBUtils.migrateDBATP.");
if(LCMUtils.isATPDBDown(e)) {
logger.info("Redis flow starts....Skipping consumption from Redis for now");
}
else{
logger.info("This is not eligible for redis flow. Actual Exception : "+ e);
throw e;
}
}
} else if (isIDCSEnvironment && configDetails.containsKey(PODDB_ATP_ENABLED) && !BooleanUtils.toBoolean(configDetails.get(PODDB_ATP_ENABLED))){
try{
DBUtils.migrateDB();
} catch (FawLCMPluginException e) {
logger.info(" This is not eligible for Redis consumption and exception is : " + e);
}
}
//For Dev env..
if (!isIDCSEnvironment && configDetails.containsKey(PODDB_ATP_ENABLED) && BooleanUtils.toBoolean(configDetails.get(PODDB_ATP_ENABLED))){
configDetails.put(DatabagProperties.LCM_MASTER_PROPERTY,"true");
try{
DBUtils.migrateDBATP(configDetails,DatasourceUtil.getDevFawAppshellDatasourceATP());
configDetails.remove(DatabagProperties.LCM_MASTER_PROPERTY,"true");
}
catch(SQLException e){
logger.info("Exception while executing DBUtils.migrateDBATP for dev environment.");
if(LCMUtils.isATPDBDown(e)) {
logger.info("Redis flow starts....Skipping consumption from Redis for now");
} else{
logger.info("This is not eligible for redis flow. Actual Exception for dev: "+ e);
}
}
} else if (!isIDCSEnvironment && configDetails.containsKey(PODDB_ATP_ENABLED) && !BooleanUtils.toBoolean(configDetails.get(PODDB_ATP_ENABLED))){
try {
DBUtils.migrateDB();
}catch (FawLCMPluginException e) {
logger.info(" This is not eligible for Redis consumption and exception in dev is : " + e);
}
}
server.start();
server.join();
} catch (Exception e) {
e.printStackTrace();
}
}
private HandlerCollection getWebAppHandlers() throws SQLException, NamingException{
//Setting the war and context path for the service layer: oaxservice
File tempDir1 = new File("/faw/service/appshell/tempdir/");
File tempDir2 = new File("/faw/service/appshell/tempdir/");
WebAppContext serviceWebapp = new WebAppContext();
tempDir1.mkdirs();
serviceWebapp.setTempDirectory(tempDir1);
serviceWebapp.setWar(APPSHELL_API_WAR_FILE_PATH);
serviceWebapp.setContextPath(APPSHELL_API_CONTEXT_PATH);
//setting the war and context path for the UI layer: oaxui
WebAppContext uiWebapp = new WebAppContext();
tempDir2.mkdirs();
uiWebapp.setTempDirectory(tempDir2);
uiWebapp.setWar(APPSHELL_UI_WAR_FILE_PATH);
uiWebapp.setContextPath(APPSHELL_UI_CONTEXT_PATH);
uiWebapp.setAllowNullPathInfo(true);
uiWebapp.setInitParameter("org.eclipse.jetty.servlet.Default.dirAllowed", "false");
//set error page handler for the UI context
uiWebapp.setErrorHandler(new CustomErrorHandler());
//handling the multiple war files using HandlerCollection.
HandlerCollection handlerCollection = new HandlerCollection();
handlerCollection.setHandlers(new Handler[]{serviceWebapp, uiWebapp});
return handlerCollection;
}
/**
* The name of the LoginService needs to correspond to what is configured in a webapp's web.xml, which is
* <realm-name>myrealm</realm-name>. Since it has a lifecycle of its own, we register it as a bean
* with the Jetty server object so it can be started and stopped according to the lifecycle of the server itself.
*
* @return the login service instance
* @throws FileNotFoundException In case realmProps is null
*/
public LoginService getLoginService() throws IOException {
URL realmProps = JettyServer.class.getClassLoader().getResource(JETTY_REALM_PROPERTIES_FILE_NAME);
if (realmProps == null)
throw new FileNotFoundException("Unable to find " + JETTY_REALM_PROPERTIES_FILE_NAME);
return new HashLoginService(JETTY_REALM_NAME, realmProps.toExternalForm());
}
public static void main(String[] args) {
new JettyServer();
}
}
run.sh file:
#!/bin/bash
set -eu # Exit on error
set -o pipefail # Fail a pipe if any sub-command fails.
VERSION=1.0
cd "$(dirname "$0")"
RED='\033[0;31m'
ORANGE='\033[0;33m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color
function die() {
printf "${RED}%s\n${NC}" "$1" >&2
exit 1
}
function warn() {
printf "${ORANGE}%s\n${NC}" "$1"
}
function info() {
printf "${GREEN}%s\n${NC}" "$1"
}
CURR_PID=$$
mem_args=8192
info "mem args "$mem_args
export MAX_MEM=$mem_args
#put the managed server specific variable setting below, then export the variables
USER_MEM_ARGS="-Xms4096m -Xmx"$MAX_MEM"m -XX:MetaspaceSize=1024M -XX:MaxMetaspaceSize=1024m"
export USER_MEM_ARGS
info "USER_MEM_ARGS= ${USER_MEM_ARGS}"
#FAW_JAVA_OPTIONS="-Djava.util.logging.config.file=/faw/service/appshell/config/ucp_log.properties -Doracle.jdbc.fanEnabled=false $USER_MEM_ARGS -Xloggc:/faw/logs/appshell/applogs/gc.jetty.log -XX:+HeapDumpOnOutOfMemoryError -XshowSettings:vm -XX:+PrintCodeCache -XX:ReservedCodeCacheSize=512M -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:FlightRecorderOptions=maxage=30m,defaultrecording=true,stackdepth=1024,dumponexit=true,dumponexitpath=/faw/logs/jetty/jetty_1618996595.jfr -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:+ExitOnOutOfMemoryError -XX:+DisableExplicitGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=4 -XX:GCLogFileSize=5M -XX:HeapDumpPath=/faw/logs/appshell_hprof-dumps_`date +%x_%r|awk -F" " '{print $1}'|sed 's/\//_/g'`.hprof"
FAW_JAVA_OPTIONS="-Djava.util.logging.config.file=/faw/service/appshell/config/ucp_log.properties -Doracle.jdbc.fanEnabled=false $USER_MEM_ARGS -Xloggc:/faw/logs/appshell/applogs/gc.jetty.log -XX:+HeapDumpOnOutOfMemoryError -XshowSettings:vm -XX:+PrintCodeCache -XX:ReservedCodeCacheSize=512M -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:+ExitOnOutOfMemoryError -XX:+DisableExplicitGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=4 -XX:GCLogFileSize=5M -XX:HeapDumpPath=/faw/logs/appshell_hprof-dumps_`date +%x_%r|awk -F" " '{print $1}'|sed 's/\//_/g'`.hprof"
echo "now deleting"
DELTEMP="rm -rf /tmp/jetty*"
exec $DELTEMP
echo "deleted....."
info "FAW_JAVA_OPTIONS= ${FAW_JAVA_OPTIONS}"
if [[ -e "/faw/environment" ]]; then
sh /faw/tmp/customscript/setUpTLSCerts.sh
fi
CMD="java ${FAW_JAVA_OPTIONS} -cp /faw/service/appshell/target:/faw/service/appshell/libs/* oracle.biapps.cxo.docker.jetty.server.JettyServer --lib /faw/service/appshell/libs"
ulimit -c 0
# Now execute it
info "Executing: $CMD"
exec $CMD
Any help is appreciated. Thank you.
Thank you "Joakim Erdfelt" for the input. Got it. The below solution works fine for me in embedded jetty. I will be creating a custom delete method for deleting the directory first and then creating the jetty working directory.
public void cleanJettyWorkingDirectory() {
    final File folder = new File(JETTY_TEMP_WORKING_DIRECTORY);
    final File[] files = folder.listFiles((dir, name) -> name.matches("oax.*"));
    if (files != null && files.length > 0) {
        for (final File file : files) {
            try {
                FileUtils.deleteDirectory(file);
                logger.info("Cleaning of the directory is completed.");
            } catch (IOException e) {
                logger.info("Unable to delete the Jetty working directory");
            }
        }
    } else {
        logger.info("No working directory starting with oax is present for deletion.");
    }
}
Also, while setting the WARs, we do the below to create the directories:
try {
    cleanJettyWorkingDirectory();
    logger.info("starting to create temp working directory for oax service");
    java.nio.file.Path oaxServicePath = Files.createTempDirectory("oaxservice");
    String oaxServiceTempPath = oaxServicePath.toString();
    logger.info("starting to create temp working directory for oax ui");
    java.nio.file.Path uiPath = Files.createTempDirectory("oaxui");
    String oaxUIServiceTempPath = uiPath.toString();
    File oaxServiceTempDir = new File(oaxServiceTempPath);
    File oaxUITempDir = new File(oaxUIServiceTempPath);
    serviceWebapp.setTempDirectory(oaxServiceTempDir);
    uiWebapp.setTempDirectory(oaxUITempDir);
    logger.info("The process of creating working directory is completed.");
} catch (IOException e) {
    logger.log(Level.SEVERE, "Exception while creating directories.", e);
}
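One assumption worth spelling out: the JETTY_TEMP_WORKING_DIRECTORY constant used by cleanJettyWorkingDirectory() is not shown above. For the "oax.*" filter to match the directories created by Files.createTempDirectory, it would have to point at the JVM's default temp location, along the lines of:

private static final String JETTY_TEMP_WORKING_DIRECTORY = System.getProperty("java.io.tmpdir"); // assumed: the same directory Files.createTempDirectory uses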

Nifi Custom Processor errors with a "ControllerService found was not a WebSocket ControllerService but a com.sun.proxy.$Proxy75"

*** Update: I have changed my approach as described in my answer to the question, which makes the originally reported issue moot. ***
I'm trying to develop a NiFi application that provides a WebSocket interface to Kafka. I could not accomplish this using the standard NiFi components as I have tried below (it may not make sense, but intuitively this is what I want to accomplish):
I have now created a custom Processor "ReadFromKafka" that I intend to use as shown in the image below. "ReadFromKafka" would use the same implementation as the standard "PutWebSocket" component but would read messages from a Kafka Topic and send as response to the WebSocket client.
I have provided a code snippet of the implementation below:
@SystemResourceConsideration(resource = SystemResource.MEMORY)
public class ReadFromKafka extends AbstractProcessor {
public static final PropertyDescriptor PROP_WS_SESSION_ID = new PropertyDescriptor.Builder()
.name("websocket-session-id")
.displayName("WebSocket Session Id")
.description("A NiFi Expression to retrieve the session id. If not specified, a message will be " +
"sent to all connected WebSocket peers for the WebSocket controller service endpoint.")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.defaultValue("${" + ATTR_WS_SESSION_ID + "}")
.build();
public static final PropertyDescriptor PROP_WS_CONTROLLER_SERVICE_ID = new PropertyDescriptor.Builder()
.name("websocket-controller-service-id")
.displayName("WebSocket ControllerService Id")
.description("A NiFi Expression to retrieve the id of a WebSocket ControllerService.")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.defaultValue("${" + ATTR_WS_CS_ID + "}")
.build();
public static final PropertyDescriptor PROP_WS_CONTROLLER_SERVICE_ENDPOINT = new PropertyDescriptor.Builder()
.name("websocket-endpoint-id")
.displayName("WebSocket Endpoint Id")
.description("A NiFi Expression to retrieve the endpoint id of a WebSocket ControllerService.")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.defaultValue("${" + ATTR_WS_ENDPOINT_ID + "}")
.build();
public static final PropertyDescriptor PROP_WS_MESSAGE_TYPE = new PropertyDescriptor.Builder()
.name("websocket-message-type")
.displayName("WebSocket Message Type")
.description("The type of message content: TEXT or BINARY")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.defaultValue(WebSocketMessage.Type.TEXT.toString())
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.build();
public static final Relationship REL_SUCCESS = new Relationship.Builder()
.name("success")
.description("FlowFiles that are sent successfully to the destination are transferred to this relationship.")
.build();
public static final Relationship REL_FAILURE = new Relationship.Builder()
.name("failure")
.description("FlowFiles that failed to send to the destination are transferred to this relationship.")
.build();
private static final List<PropertyDescriptor> descriptors;
private static final Set<Relationship> relationships;
static{
final List<PropertyDescriptor> innerDescriptorsList = new ArrayList<>();
innerDescriptorsList.add(PROP_WS_SESSION_ID);
innerDescriptorsList.add(PROP_WS_CONTROLLER_SERVICE_ID);
innerDescriptorsList.add(PROP_WS_CONTROLLER_SERVICE_ENDPOINT);
innerDescriptorsList.add(PROP_WS_MESSAGE_TYPE);
descriptors = Collections.unmodifiableList(innerDescriptorsList);
final Set<Relationship> innerRelationshipsSet = new HashSet<>();
innerRelationshipsSet.add(REL_SUCCESS);
innerRelationshipsSet.add(REL_FAILURE);
relationships = Collections.unmodifiableSet(innerRelationshipsSet);
}
@Override
public Set<Relationship> getRelationships() {
return relationships;
}
@Override
public final List<PropertyDescriptor> getSupportedPropertyDescriptors() {
return descriptors;
}
@Override
public void onTrigger(final ProcessContext context, final ProcessSession processSession) throws ProcessException {
final FlowFile flowfile = processSession.get();
if (flowfile == null) {
return;
}
final String sessionId = context.getProperty(PROP_WS_SESSION_ID)
.evaluateAttributeExpressions(flowfile).getValue();
final String webSocketServiceId = context.getProperty(PROP_WS_CONTROLLER_SERVICE_ID)
.evaluateAttributeExpressions(flowfile).getValue();
final String webSocketServiceEndpoint = context.getProperty(PROP_WS_CONTROLLER_SERVICE_ENDPOINT)
.evaluateAttributeExpressions(flowfile).getValue();
final String messageTypeStr = context.getProperty(PROP_WS_MESSAGE_TYPE)
.evaluateAttributeExpressions(flowfile).getValue();
final WebSocketMessage.Type messageType = WebSocketMessage.Type.valueOf(messageTypeStr);
if (StringUtils.isEmpty(sessionId)) {
getLogger().debug("Specific SessionID not specified. Message will be broadcast to all connected clients.");
}
if (StringUtils.isEmpty(webSocketServiceId)
|| StringUtils.isEmpty(webSocketServiceEndpoint)) {
transferToFailure(processSession, flowfile, "Required WebSocket attribute was not found.");
return;
}
final ControllerService controllerService = context.getControllerServiceLookup().getControllerService(webSocketServiceId);
if (controllerService == null) {
getLogger().debug("ControllerService is NULL");
transferToFailure(processSession, flowfile, "WebSocket ControllerService was not found.");
return;
} else if (!(controllerService instanceof WebSocketService)) {
getLogger().debug("ControllerService is not instance of WebSocketService");
transferToFailure(processSession, flowfile, "The ControllerService found was not a WebSocket ControllerService but a "
+ controllerService.getClass().getName());
return;
}
...
processSession.getProvenanceReporter().send(updatedFlowFile, transitUri.get(), transmissionMillis);
processSession.transfer(updatedFlowFile, REL_SUCCESS);
processSession.commit();
} catch (WebSocketConfigurationException|IllegalStateException|IOException e) {
// WebSocketConfigurationException: If the corresponding WebSocketGatewayProcessor has been stopped.
// IllegalStateException: Session is already closed or not found.
// IOException: other IO error.
getLogger().error("Failed to send message via WebSocket due to " + e, e);
transferToFailure(processSession, flowfile, e.toString());
}
}
private FlowFile transferToFailure(final ProcessSession processSession, FlowFile flowfile, final String value) {
flowfile = processSession.putAttribute(flowfile, ATTR_WS_FAILURE_DETAIL, value);
processSession.transfer(flowfile, REL_FAILURE);
return flowfile;
}
}
I have deployed the custom processor and when I connect to it using the Chrome "Simple Web Socket Client" I can see the following message in the logs:
ControllerService found was not a WebSocket ControllerService but a com.sun.proxy.$Proxy75
I'm using the exact same code as in PutWebSocket and can't figure out why it would behave any differently in my custom Processor. I have configured "JettyWebSocketServer" as the ControllerService under "ListenWebSocket" as shown in the image below.
Additional exception details seen in the log are provided below:
java.lang.ClassCastException: class com.sun.proxy.$Proxy75 cannot be cast to class org.apache.nifi.websocket.WebSocketService (com.sun.proxy.$Proxy75 is in unnamed module of loader org.apache.nifi.nar.InstanceClassLoader #35c646b5; org.apache.nifi.websocket.WebSocketService is in unnamed module of loader org.apache.nifi.nar.NarClassLoader #361abd01)
In other words, per the message itself, the WebSocketService interface is being loaded by two different class loaders (the processor's InstanceClassLoader and the controller service's NarClassLoader), so the proxy implements a different copy of the interface than the one the instanceof check compares against.
I ended up modifying my flow to utilize the out-of-the-box ListenWebSocket and PutWebSocket Processors, plus a custom "FetchFromKafka" Processor that is a modified version of ConsumeKafkaRecord. With this I'm able to provide a WebSocket interface to Kafka. I have provided a screenshot of the updated flow below. More work needs to be done on the custom Processor to support multiple sessions.

Intermittent SocketTimeoutException with elasticsearch-rest-client-7.2.0

I am using RestHighLevelClient version 7.2 to connect to an Elasticsearch cluster version 7.2. My cluster has 3 master nodes and 2 data nodes. Data node memory config: 2 cores and 8 GB. I have used the below code in my Spring Boot project to create the RestHighLevelClient instance.
@Bean(destroyMethod = "close")
@Qualifier("readClient")
public RestHighLevelClient readClient() {
    final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
    credentialsProvider.setCredentials(AuthScope.ANY,
            new UsernamePasswordCredentials(elasticUser, elasticPass));
    RestClientBuilder builder = RestClient.builder(new HttpHost(elasticHost, elasticPort))
            .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder
                    .setDefaultCredentialsProvider(credentialsProvider)
                    .setDefaultIOReactorConfig(IOReactorConfig.custom().setIoThreadCount(5).build()));
    builder.setRequestConfigCallback(requestConfigBuilder -> requestConfigBuilder
            .setConnectTimeout(30000).setSocketTimeout(60000));
    RestHighLevelClient restClient = new RestHighLevelClient(builder);
    return restClient;
}
RestHighLevelClient is a singleton bean. Intermittently I am getting SocketTimeoutException with both GET and PUT requests. The index size is around 50 MB. I have tried increasing the socket timeout value, but I still receive the same error. Am I missing some configuration? Any help would be appreciated.
I found the issue and just wanted to share it so that it can help others.
I was using a load balancer to connect to the Elasticsearch cluster.
As you can see from my RestClientBuilder code, I was using only the load balancer host and port. Although I have multiple master nodes, RestClient was not retrying my request in case of a connection timeout.
RestClientBuilder builder = RestClient.builder(new HttpHost(elasticHost, elasticPort))
.setHttpClientConfigCallback(httpClientBuilder ->httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider).setDefaultIOReactorConfig(IOReactorConfig.custom().setIoThreadCount(5).build()));
According to the RestClient code, if we use a single host it won't retry in case of any connection issue.
So I changed my code as below and it started working.
RestClientBuilder builder = RestClient.builder(new HttpHost(elasticHost, 9200), new HttpHost(elasticHost, 9201)).setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider));
For the complete RestClient code, please refer to https://github.com/elastic/elasticsearch/blob/master/client/rest/src/main/java/org/elasticsearch/client/RestClient.java
Retry code block in RestClient (note the recursive performRequest call guarded by nodeTuple.nodes.hasNext(): with a single configured host the iterator is exhausted after the first attempt, so nothing is retried):
private Response performRequest(final NodeTuple<Iterator<Node>> nodeTuple,
                                final InternalRequest request,
                                Exception previousException) throws IOException {
    RequestContext context = request.createContextForNextAttempt(nodeTuple.nodes.next(), nodeTuple.authCache);
    HttpResponse httpResponse;
    try {
        httpResponse = client.execute(context.requestProducer, context.asyncResponseConsumer, context.context, null).get();
    } catch (Exception e) {
        RequestLogger.logFailedRequest(logger, request.httpRequest, context.node, e);
        onFailure(context.node);
        Exception cause = extractAndWrapCause(e);
        addSuppressedException(previousException, cause);
        if (nodeTuple.nodes.hasNext()) {
            return performRequest(nodeTuple, request, cause);
        }
        if (cause instanceof IOException) {
            throw (IOException) cause;
        }
        if (cause instanceof RuntimeException) {
            throw (RuntimeException) cause;
        }
        throw new IllegalStateException("unexpected exception type: must be either RuntimeException or IOException", cause);
    }
    ResponseOrResponseException responseOrResponseException = convertResponse(request, context.node, httpResponse);
    if (responseOrResponseException.responseException == null) {
        return responseOrResponseException.response;
    }
    addSuppressedException(previousException, responseOrResponseException.responseException);
    if (nodeTuple.nodes.hasNext()) {
        return performRequest(nodeTuple, request, responseOrResponseException.responseException);
    }
    throw responseOrResponseException.responseException;
}
I'm facing the same issue, and seeing this I realized that the retry is happening on my side too, on each host (I have 3 hosts and the exception happens in 3 threads). I wanted to post this since you might face the same issue, or someone else might come to this post because of the same SocketTimeoutException.
From the official docs, RestHighLevelClient uses RestClient under the hood, and RestClient uses CloseableHttpAsyncClient, which has a connection pool. Elasticsearch specifies that you should close the client once you are done (the definition of "done" is ambiguous for a long-running application), but in general I have found that you should close it when the application is shutting down, rather than when you have finished querying.
The official Apache documentation has an example of handling the connection pool, which I'm trying to follow. I'll try to replicate the scenario and will post if that fixes my issue; the code can be found here:
https://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/examples/org/apache/http/examples/nio/client/AsyncClientEvictExpiredConnections.java
This is what I have so far:
@Bean(name = "RestHighLevelClientWithCredentials", destroyMethod = "close")
public RestHighLevelClient elasticsearchClient(ElasticSearchClientConfiguration elasticSearchClientConfiguration,
RestClientBuilder.HttpClientConfigCallback httpClientConfigCallback) {
return new RestHighLevelClient(
RestClient
.builder(getElasticSearchHosts(elasticSearchClientConfiguration))
.setHttpClientConfigCallback(httpClientConfigCallback)
);
}
@Bean
@RefreshScope
public RestClientBuilder.HttpClientConfigCallback getHttpClientConfigCallback(
PoolingNHttpClientConnectionManager poolingNHttpClientConnectionManager,
CredentialsProvider credentialsProvider
) {
return httpAsyncClientBuilder -> {
httpAsyncClientBuilder.setSSLHostnameVerifier(NoopHostnameVerifier.INSTANCE);
httpAsyncClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
httpAsyncClientBuilder.setConnectionManager(poolingNHttpClientConnectionManager);
return httpAsyncClientBuilder;
};
}
public class ElasticSearchClientManager {
private ElasticSearchClientManager.IdleConnectionEvictor idleConnectionEvictor;
/**
* Custom client connection manager to create a connection watcher
*
* @param elasticSearchClientConfiguration elasticSearchClientConfiguration
* @return PoolingNHttpClientConnectionManager
*/
@Bean
@RefreshScope
public PoolingNHttpClientConnectionManager getPoolingNHttpClientConnectionManager(
ElasticSearchClientConfiguration elasticSearchClientConfiguration
) {
try {
SSLIOSessionStrategy sslSessionStrategy = new SSLIOSessionStrategy(getTrustAllSSLContext());
Registry<SchemeIOSessionStrategy> sessionStrategyRegistry = RegistryBuilder.<SchemeIOSessionStrategy>create()
.register("http", NoopIOSessionStrategy.INSTANCE)
.register("https", sslSessionStrategy)
.build();
ConnectingIOReactor ioReactor = new DefaultConnectingIOReactor();
PoolingNHttpClientConnectionManager poolingNHttpClientConnectionManager =
new PoolingNHttpClientConnectionManager(ioReactor, sessionStrategyRegistry);
idleConnectionEvictor = new ElasticSearchClientManager.IdleConnectionEvictor(poolingNHttpClientConnectionManager,
elasticSearchClientConfiguration);
idleConnectionEvictor.start();
return poolingNHttpClientConnectionManager;
} catch (IOReactorException e) {
throw new RuntimeException("Failed to create a watcher for the connection pool");
}
}
private SSLContext getTrustAllSSLContext() {
try {
return new SSLContextBuilder()
.loadTrustMaterial(null, (x509Certificates, string) -> true)
.build();
} catch (Exception e) {
throw new RuntimeException("Failed to create SSL Context with open certificate", e);
}
}
public IdleConnectionEvictor.State state() {
return idleConnectionEvictor.evictorState;
}
@PreDestroy
private void finishManager() {
idleConnectionEvictor.shutdown();
}
public static class IdleConnectionEvictor extends Thread {
private final NHttpClientConnectionManager nhttpClientConnectionManager;
private final ElasticSearchClientConfiguration elasticSearchClientConfiguration;
@Getter
private State evictorState;
private volatile boolean shutdown;
public IdleConnectionEvictor(NHttpClientConnectionManager nhttpClientConnectionManager,
ElasticSearchClientConfiguration elasticSearchClientConfiguration) {
super();
this.nhttpClientConnectionManager = nhttpClientConnectionManager;
this.elasticSearchClientConfiguration = elasticSearchClientConfiguration;
}
@Override
public void run() {
try {
while (!shutdown) {
synchronized (this) {
wait(elasticSearchClientConfiguration.getExpiredConnectionsCheckTime());
// Close expired connections
nhttpClientConnectionManager.closeExpiredConnections();
// Optionally, close connections
// that have been idle longer than 5 sec
nhttpClientConnectionManager.closeIdleConnections(elasticSearchClientConfiguration.getMaxTimeIdleConnections(),
TimeUnit.SECONDS);
this.evictorState = State.RUNNING;
}
}
} catch (InterruptedException ex) {
this.evictorState = State.NOT_RUNNING;
}
}
private void shutdown() {
shutdown = true;
synchronized (this) {
notifyAll();
}
}
public enum State {
RUNNING,
NOT_RUNNING
}
}
}
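For completeness, the ElasticSearchClientConfiguration type referenced above is not shown. A hedged sketch of the shape implied by the getters used in IdleConnectionEvictor (field names and default values here are assumptions, and the host settings read by getElasticSearchHosts are omitted):

public class ElasticSearchClientConfiguration {
    // Milliseconds the evictor sleeps between sweeps (passed to wait(...) above).
    private long expiredConnectionsCheckTime = 5000L;
    // Seconds a pooled connection may sit idle before closeIdleConnections(...) reaps it.
    private long maxTimeIdleConnections = 5L;
    public long getExpiredConnectionsCheckTime() { return expiredConnectionsCheckTime; }
    public long getMaxTimeIdleConnections() { return maxTimeIdleConnections; }
}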

Spring boot - C3P0 connection pooling with Spring Data

I have a Spring Boot (1.4.3.RELEASE) application with MySQL as the backend, using Spring Data with Oracle UCP and the MySQL Java connector for connection pooling. Things are working fine.
For UCP connection pooling, I create a data source with the code below.
@Bean(name = "appDataSource")
@Primary
public PoolDataSource appDataSource() {
this.pooledDataSource = PoolDataSourceFactory.getPoolDataSource();
final String databaseDriver = environment.getRequiredProperty("application.datasource.driverClassName");
final String databaseUrl = environment.getRequiredProperty("application.datasource.url");
final String databaseUsername = environment.getRequiredProperty("application.datasource.username");
final String databasePassword = environment.getRequiredProperty("application.datasource.password");
final String initialSize = environment.getRequiredProperty("application.datasource.initialSize");
final String maxPoolSize = environment.getRequiredProperty("application.datasource.maxPoolSize");
final String minPoolSize = environment.getRequiredProperty("application.datasource.minPoolSize");
// final String poolName =
// environment.getRequiredProperty("application.datasource.poolName");
try {
pooledDataSource.setConnectionFactoryClassName(databaseDriver);
pooledDataSource.setURL(databaseUrl);
pooledDataSource.setUser(databaseUsername);
pooledDataSource.setPassword(databasePassword);
pooledDataSource.setInitialPoolSize(Integer.parseInt(initialSize));
pooledDataSource.setMaxPoolSize(Integer.parseInt(maxPoolSize));
pooledDataSource.setMinPoolSize(Integer.parseInt(minPoolSize));
pooledDataSource.setSQLForValidateConnection(TEST_SQL);
pooledDataSource.setValidateConnectionOnBorrow(Boolean.TRUE);
pooledDataSource.setConnectionWaitTimeout(CONNECTION_WAIT_TIMEOUT_SECS);
// pooledDataSource.setConnectionPoolName(poolName);
}
catch (final NumberFormatException e) {
LOGGER.error("Unable to parse passed numeric value", e);
}
catch (final SQLException e) {
LOGGER.error("exception creating data pool", e);
}
LOGGER.info("Setting up datasource for user:{} and databaseUrl:{}", databaseUsername, databaseUrl);
return this.pooledDataSource;
}
Now, I am trying to move to C3P0-based connection pooling and hence added the properties below to my application.properties:
# Configure the C3P0 database connection pooling module
spring.jpa.properties.hibernate.c3p0.max_size = 15
spring.jpa.properties.hibernate.c3p0.min_size = 6
spring.jpa.properties.hibernate.c3p0.timeout = 2500
spring.jpa.properties.hibernate.c3p0.max_statements_per_connection = 10
spring.jpa.properties.hibernate.c3p0.idle_test_period = 3000
spring.jpa.properties.hibernate.c3p0.acquire_increment = 3
spring.jpa.properties.hibernate.c3p0.validate = false
spring.jpa.properties.hibernate.c3p0.numHelperThreads = 15
and injecting Datasource a below.
@Bean
public ComboPooledDataSource dataSource() {
return new ComboPooledDataSource();
}
But it is not working. Is there anything I am missing here?
I hope C3P0 is better than UCP.
Thanks,
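One gap worth noting: the spring.jpa.properties.hibernate.c3p0.* keys are only read by Hibernate's C3P0ConnectionProvider (which requires the hibernate-c3p0 module and only applies when Hibernate opens connections itself); they are ignored when Spring supplies a DataSource bean. A ComboPooledDataSource built with the no-arg constructor, as above, also has no driver, URL, or credentials. A minimal sketch that wires them explicitly, reusing the environment field and the application.datasource.* keys from the UCP example (those key names are assumptions carried over from that snippet):

@Bean
public ComboPooledDataSource dataSource() throws PropertyVetoException {
    ComboPooledDataSource ds = new ComboPooledDataSource();
    // Connection details: without these the pool cannot hand out connections.
    ds.setDriverClass(environment.getRequiredProperty("application.datasource.driverClassName"));
    ds.setJdbcUrl(environment.getRequiredProperty("application.datasource.url"));
    ds.setUser(environment.getRequiredProperty("application.datasource.username"));
    ds.setPassword(environment.getRequiredProperty("application.datasource.password"));
    // Pool sizing mirroring the hibernate.c3p0.* values listed above.
    ds.setMinPoolSize(6);
    ds.setMaxPoolSize(15);
    ds.setAcquireIncrement(3);
    ds.setIdleConnectionTestPeriod(3000);
    return ds;
}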

How to purge/delete message from weblogic JMS queue

Is there any way (apart from consuming the messages) to purge/delete messages programmatically from a JMS queue? Even if it is only possible with the WLST command-line tool, it would be of much help.
Here is an example in WLST for a Managed Server running on port 7005:
connect('weblogic', 'weblogic', 't3://localhost:7005')
serverRuntime()
cd('/JMSRuntime/ManagedSrv1.jms/JMSServers/MyAppJMSServer/Destinations/MyAppJMSModule!QueueNameToClear')
cmo.deleteMessages('')
The last command should return the number of messages it deleted.
You can use JMX to purge the queue, either from Java or from WLST (Python). You can find the MBean definitions for WLS 10.0 on http://download.oracle.com/docs/cd/E11035_01/wls100/wlsmbeanref/core/index.html.
Here is a basic Java example (don't forget to put weblogic.jar in the CLASSPATH):
import java.util.Hashtable;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

import weblogic.management.mbeanservers.runtime.RuntimeServiceMBean;

public class PurgeWLSQueue {

    private static final String WLS_USERNAME = "weblogic";
    private static final String WLS_PASSWORD = "weblogic";
    private static final String WLS_HOST = "localhost";
    private static final int WLS_PORT = 7001;
    private static final String JMS_SERVER = "wlsbJMSServer";
    private static final String JMS_DESTINATION = "test.q";

    private static JMXConnector getMBeanServerConnector(String jndiName) throws Exception {
        Hashtable<String, String> h = new Hashtable<String, String>();
        JMXServiceURL serviceURL = new JMXServiceURL("t3", WLS_HOST, WLS_PORT, jndiName);
        h.put(Context.SECURITY_PRINCIPAL, WLS_USERNAME);
        h.put(Context.SECURITY_CREDENTIALS, WLS_PASSWORD);
        h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
        JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);
        return connector;
    }

    public static void main(String[] args) {
        try {
            JMXConnector connector =
                    getMBeanServerConnector("/jndi/" + RuntimeServiceMBean.MBEANSERVER_JNDI_NAME);
            MBeanServerConnection mbeanServerConnection =
                    connector.getMBeanServerConnection();
            ObjectName service = new ObjectName("com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean");
            ObjectName serverRuntime = (ObjectName) mbeanServerConnection.getAttribute(service, "ServerRuntime");
            ObjectName jmsRuntime = (ObjectName) mbeanServerConnection.getAttribute(serverRuntime, "JMSRuntime");
            ObjectName[] jmsServers = (ObjectName[]) mbeanServerConnection.getAttribute(jmsRuntime, "JMSServers");
            for (ObjectName jmsServer : jmsServers) {
                if (JMS_SERVER.equals(jmsServer.getKeyProperty("Name"))) {
                    ObjectName[] destinations = (ObjectName[]) mbeanServerConnection.getAttribute(jmsServer, "Destinations");
                    for (ObjectName destination : destinations) {
                        if (destination.getKeyProperty("Name").endsWith("!" + JMS_DESTINATION)) {
                            Object o = mbeanServerConnection.invoke(
                                    destination,
                                    "deleteMessages",
                                    new Object[] {""}, // selector expression
                                    new String[] {"java.lang.String"});
                            System.out.println("Result: " + o);
                            break;
                        }
                    }
                    break;
                }
            }
            connector.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Works great in a single-node environment, but what happens if you are in a clustered environment with ONE migratable JMSServer (currently on node #1) and this code executes on node #2? Then there will be no JMSServer available and no messages will be deleted.
This is the problem I'm facing right now...
Is there a way to connect to the JMS queue without having the JMSServer available?
[edit]
Found a solution: Use the domain runtime service instead:
ObjectName service = new ObjectName("com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");
and be sure to access the admin port on the WLS-cluster.
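In the Java example above, the same change would look roughly like the following sketch. It reuses the getMBeanServerConnector helper from that answer and assumes WLS_HOST/WLS_PORT now point at the admin server:

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;

JMXConnector connector = getMBeanServerConnector(
        "/jndi/" + DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME);
MBeanServerConnection mbsc = connector.getMBeanServerConnection();
ObjectName service = new ObjectName(
        "com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");
// ServerRuntimes exposes every running server in the domain, so the JMSServer can be
// located no matter which node it has migrated to.
ObjectName[] serverRuntimes = (ObjectName[]) mbsc.getAttribute(service, "ServerRuntimes");
for (ObjectName serverRuntime : serverRuntimes) {
    ObjectName jmsRuntime = (ObjectName) mbsc.getAttribute(serverRuntime, "JMSRuntime");
    // ...then walk JMSServers/Destinations exactly as in the single-node example.
}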
If this is a one-time task, the easiest would be to do it through the console...
The program at the link below helps you clear only pending messages from a queue, based on the redelivered-message parameter:
http://techytalks.blogspot.in/2016/02/deletepurge-pending-messages-from-jms.html
