IBM MQ(7.0.1.9) throws NumberFormatException on Weblogic startup - jms

I have a feature through which I am trying to post a message onto a JMS topic. The feature is exposed as a REST endpoint and is deployed on a WebLogic server. However, I face an issue exactly once, on WebLogic startup: when I run an integration test against the REST endpoint, the following exception is thrown at the point in the code where the session is being created.
I am using the IBM MQ jars for the JMS integration along with the javax.jms classes (IBM MQ version 7.0.1.9).
The following is my code snippet and the exception thrown while trying to create the session:
private Connection createConnection() {
    ombConnectionProperties = loadOmbConnectionProperties();
    // Check for null before calling size(); in the original order this guard
    // could itself throw a NullPointerException.
    if (ombConnectionProperties == null || ombConnectionProperties.size() == 0) {
        buildBusinessException("Could not load OMB properties from environment resource",
                ExceptionStatus.INTERNAL_SERVER_ERROR);
    }
    try {
        initSSL(ombConnectionProperties);
        Hashtable<String, String> initialContextPropertiesMatrix =
                buildInitialContextPropertiesMatrix(ombConnectionProperties);
        ic = new InitialContext(initialContextPropertiesMatrix);
        if (ombConnectionProperties.getProperty("workflow.omb.queueManager") != null) {
            connectionFactory = (ConnectionFactory) ic.lookup(
                    ombConnectionProperties.getProperty("workflow.omb.queueManager"));
        }
        if (connectionFactory != null) {
            connection = connectionFactory.createConnection();
        }
    } catch (Exception e) {
        propagateException(e);
    }
    return connection;
}
private Session createSession() {
    try {
        if (connection != null) {
            session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        }
    } catch (Exception e) {
        propagateException(e);
    }
    return session;
}
private Hashtable<String, String> buildInitialContextPropertiesMatrix(Properties ombConnectionProperties) {
    Hashtable<String, String> initialContextPropertiesMatrix = new Hashtable<String, String>();
    if (ombConnectionProperties.getProperty("workflow.omb.ldapProviderUrl") != null) {
        initialContextPropertiesMatrix.put(PROVIDER_URL, ombConnectionProperties.getProperty("workflow.omb.ldapProviderUrl"));
    }
    initialContextPropertiesMatrix.put(INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    initialContextPropertiesMatrix.put(SECURITY_AUTHENTICATION, "simple");
    if (ombConnectionProperties.getProperty("workflow.omb.ldapPrincipal") != null) {
        initialContextPropertiesMatrix.put(SECURITY_PRINCIPAL, ombConnectionProperties.getProperty("workflow.omb.ldapPrincipal"));
    }
    if (ombConnectionProperties.getProperty("workflow.omb.ldapCredential") != null) {
        initialContextPropertiesMatrix.put(SECURITY_CREDENTIALS, ombConnectionProperties.getProperty("workflow.omb.ldapCredential"));
    }
    return initialContextPropertiesMatrix;
}
public void publishWorkflowControlMessage(String workflowName, String action) {
    try {
        init();
        String inboundControlMessageXML = buildWorkflowControlMessageXML(workflowName, action);
        Destination destination = null;
        if (ombConnectionProperties.getProperty("workflow.omb.topicName") != null) {
            destination = (Destination) ic.lookup("cn=" + ombConnectionProperties.getProperty("workflow.omb.topicName"));
        }
        MessageProducer producer;
        producer = session.createProducer(destination);
        TextMessage message = session.createTextMessage(inboundControlMessageXML);
        producer.send(message);
    } catch (Exception e) {
        propagateException(e);
    } finally {
        try {
            if (session != null) {
                session.close();
            }
            if (connection != null) {
                connection.close();
            }
        } catch (JMSException e) {
            LOG.error("Exception occurred while closing the OMB connection", e);
            throw new RuntimeException("Exception occurred while trying to close the OMB connection", e);
        }
    }
}
workflow.omb.ldapProviderUrl = ldap://esdqa.csfb.net/ou=MQ,ou=Services,dc=csfb,dc=CS-Group,dc=com
workflow.omb.ldapPrincipal = uid=MQRDP,ou=People,o=Administrators,dc=CS- Group,dc=com
workflow.omb.ldapCredentials = aGMk643R
workflow.omb.queueManager = cn=USTCMN01_CF
workflow.omb.topicName = EDM.BIRS.RDP.ONEPPM.TO_RDA
workflow.omb.certificate.url = properties//jks//test//keystore.jks
workflow.omb.certificate.password = rFzv0UOS
P.S.: The issue is seen only when I try to connect to JMS at server startup. When I try a second time it works fine, and from then on there are no issues; I am able to post messages to the topic. I have no idea what is going wrong. However, when I execute this program as a standalone client it works fine.
Caused by: java.lang.NumberFormatException: null
at java.lang.Integer.parseInt(Integer.java:417)
at java.lang.Short.parseShort(Short.java:120)
at java.lang.Short.valueOf(Short.java:153)
at java.lang.Short.valueOf(Short.java:178)
at com.ibm.msg.client.jms.internal.JmsReadablePropertyContextImpl.parseShort(JmsReadablePropertyContextImpl.java:752)
at com.ibm.msg.client.jms.internal.JmsReadablePropertyContextImpl.getShortProperty(JmsReadablePropertyContextImpl.java:326)
at com.ibm.msg.client.wmq.common.internal.WMQPropertyContext.getShortProperty(WMQPropertyContext.java:360)
at com.ibm.msg.client.wmq.internal.WMQSession.<init>(WMQSession.java:311)
at com.ibm.msg.client.wmq.internal.WMQConnection.createSession(WMQConnection.java:980)
at com.ibm.msg.client.jms.internal.JmsConnectionImpl.createSession(JmsConnectionImpl.java:572)
at com.ibm.mq.jms.MQConnection.createSession(MQConnection.java:339)
I am adding the list of properties from the connection object, which I inspected in the debugger. On the very first invocation the property map has 77 entries; when I invoke the call again it grows to 78, which resolves the problem. The entry missing from the initial connection object is the property "XMSC_ADMIN_OBJECT_TYPE". I am pasting the entire list below:
0 = {HashMap$Entry#18264} "XMSC_WMQ_HEADER_COMP" -> " size = 1"
1 = {HashMap$Entry#18265} "wildcardFormat" -> "0"
2 = {HashMap$Entry#18266} "XMSC_WMQ_BROKER_PUBQ" -> "SYSTEM.BROKER.DEFAULT.STREAM"
3 = {HashMap$Entry#18267} "XMSC_WMQ_CONNECTION_TAG" ->
4 = {HashMap$Entry#18268} "XMSC_WMQ_BROKER_SUBQ" -> "SYSTEM.JMS.ND.SUBSCRIBER.QUEUE"
5 = {HashMap$Entry#18269} "XMSC_WMQ_SHARE_CONV_ALLOWED" -> "1"
6 = {HashMap$Entry#18270} "XMSC_WMQ_SSL_SOCKET_FACTORY" -> "null"
7 = {HashMap$Entry#18271} "multicast" -> "0"
8 = {HashMap$Entry#18272} "brokerVersion" -> "-1"
9 = {HashMap$Entry#18273} "XMSC_WMQ_MESSAGE_SELECTION" -> "0"
10 = {HashMap$Entry#18274} "XMSC_WMQ_CLEANUP_LEVEL" -> "1"
11 = {HashMap$Entry#18275} "XMSC_CLIENT_ID" -> "null"
12 = {HashMap$Entry#18276} "XMSC_WMQ_SUBSCRIPTION_STORE" -> "1"
13 = {HashMap$Entry#18277} "XMSC_WMQ_RECEIVE_EXIT" -> "null"
14 = {HashMap$Entry#18278} "XMSC_WMQ_SSL_CERT_STORES_COL" -> "null"
15 = {HashMap$Entry#18279} "XMSC_WMQ_CLIENT_RECONNECT_TIMEOUT" -> "1800"
16 = {HashMap$Entry#18280} "XMSC_WMQ_RECEIVE_EXIT_INIT" -> "null"
17 = {HashMap$Entry#18281} "XMSC_WMQ_TEMP_TOPIC_PREFIX" ->
18 = {HashMap$Entry#18282} "XMSC_WMQ_CONNECT_OPTIONS" -> "0"
19 = {HashMap$Entry#18283} "XMSC_WMQ_MAP_NAME_STYLE" -> "true"
20 = {HashMap$Entry#18284} "XMSC_WMQ_MSG_BATCH_SIZE" -> "10"
21 = {HashMap$Entry#18285} "XMSC_WMQ_USE_CONNECTION_POOLING" -> "true"
22 = {HashMap$Entry#18286} "XMSC_WMQ_TARGET_CLIENT_MATCHING" -> "true"
23 = {HashMap$Entry#18287} "XMSC_USERID" ->
24 = {HashMap$Entry#18288} "XMSC_WMQ_SPARSE_SUBSCRIPTIONS" -> "false"
25 = {HashMap$Entry#18289} "XMSC_WMQ_MSG_COMP" -> " size = 1"
26 = {HashMap$Entry#18290} "XMSC_WMQ_MAX_BUFFER_SIZE" -> "1000"
27 = {HashMap$Entry#18291} "XMSC_WMQ_CONNECTION_MODE" -> "1"
28 = {HashMap$Entry#18292} "XMSC_WMQ_CLIENT_RECONNECT_OPTIONS" -> "0"
29 = {HashMap$Entry#18293} "XMSC_WMQ_CHANNEL" -> "USTCMN01_CLS_01"
30 = {HashMap$Entry#18294} "XMSC_WMQ_TEMP_Q_PREFIX" ->
31 = {HashMap$Entry#18295} "XMSC_WMQ_RECEIVE_ISOLATION" -> "0"
32 = {HashMap$Entry#18296} "XMSC_WMQ_POLLING_INTERVAL" -> "5000"
33 = {HashMap$Entry#18297} "XMSC_CONNECTION_TYPE" -> "1"
34 = {HashMap$Entry#18298} "XMSC_WMQ_QUEUE_MANAGER" -> "USTCMN01"
35 = {HashMap$Entry#18299} "XMSC_WMQ_PROCESS_DURATION" -> "0"
36 = {HashMap$Entry#18300} "XMSC_WMQ_CLONE_SUPPORT" -> "0"
37 = {HashMap$Entry#18301} "XMSC_WMQ_SSL_CIPHER_SUITE"
..............
........
48 = {HashMap$Entry#18312} "XMSC_ADMIN_OBJECT_TYPE" -> "20"
..........
............
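Since the only difference between the failing first call and the succeeding second call is the missing "XMSC_ADMIN_OBJECT_TYPE" entry, one pragmatic workaround until the underlying 7.0.1.9 behaviour is resolved is to retry the session creation once when the NumberFormatException surfaces. A minimal, stdlib-only retry helper along these lines (a sketch; the class and method names are illustrative, not part of the original code):

```java
import java.util.function.Supplier;

public final class Retry {
    // Calls the supplier, retrying up to maxAttempts times when it throws a
    // RuntimeException -- intended for the one-off NumberFormatException seen
    // on the first createSession() after WebLogic startup.
    public static <T> T withRetry(Supplier<T> action, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e; // remember the failure and try again
            }
        }
        throw last;
    }
}
```

Used as, for example, `session = Retry.withRetry(() -> createSession(), 2);`, assuming createSession() rethrows the failure as a RuntimeException; if the failure invalidated the connection itself, recreate the connection before retrying.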

Related

Error during JavaScript execution com.gargoylesoftware.htmlunit.ScriptException

I am using HtmlUnit driver 2.36 and getting an error on the line `currentPage = orderStatus.setSelectedAttribute(option, true);`.
Error message: `Error during JavaScript execution com.gargoylesoftware.htmlunit.ScriptException`; afterwards, the catch block reports `com.gargoylesoftware.htmlunit.TextPage cannot be cast to com.gargoylesoftware.htmlunit.html.HtmlPage`.
HtmlPage currentPage = (HtmlPage) context.getVariables().get("currentPage");
DataRepository dr = (DataRepository) context.getVariables().get("dataRepository");
final WebClient webClient = currentPage.getWebClient();
webClient.getOptions().setJavaScriptEnabled(true);
webClient.getOptions().setThrowExceptionOnScriptError(false);
webClient.waitForBackgroundJavaScript(3000);
List<String> alertmsgs = new ArrayList<String>();
CollectingAlertHandler alertHandler = new CollectingAlertHandler();
webClient.setAlertHandler(alertHandler);
try {
    String remarks = dr.get("REMARKS");
    if (StringUtils.isEmpty(remarks)) {
        remarks = "Please process the change request";
    }
    HtmlTextArea remarksText = currentPage.getElementByName("contactNewRemarks");
    remarksText.setText(remarks);
    String orderstatusOpt = "0"; // dr.get("ORDERSTATUS");
    HtmlSelect orderStatus = (HtmlSelect) currentPage.getElementByName("orderStatus");
    logger.info("HtmlSelect Order Status: :: " + orderStatus);
    for (HtmlOption option : orderStatus.getOptions()) {
        logger.info("HtmlOption option: :: " + option.getValueAttribute());
        if (option.getValueAttribute().equals(orderstatusOpt)) {
            currentPage = orderStatus.setSelectedAttribute(option, true);
        } else {
            option.removeAttribute("selected");
        }
    }
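The ClassCastException reported by the catch block means that, after the script error, HtmlUnit handed back a `TextPage` rather than an `HtmlPage`, and an unconditional cast to `HtmlPage` then fails. The usual guard is to check the runtime type of the returned `Page` before casting, e.g. `Page p = orderStatus.setSelectedAttribute(option, true); if (p instanceof HtmlPage) { currentPage = (HtmlPage) p; }`. The same guard can be written as a small generic helper, sketched here with plain `Object` so it is independent of the HtmlUnit types:

```java
import java.util.Optional;

public final class SafeCast {
    // Returns the value cast to the requested type, or empty when the runtime
    // type does not match -- the same check you would apply to the Page
    // returned by setSelectedAttribute before treating it as an HtmlPage.
    public static <T> Optional<T> as(Object value, Class<T> type) {
        return type.isInstance(value) ? Optional.of(type.cast(value)) : Optional.empty();
    }
}
```

If the returned page keeps coming back as a TextPage, also check why the server responded with a non-HTML content type for that request.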

Why am I getting error when I publish to config topic in Spring MQTT?

I'm sending a message to the "config" topic from my Spring Boot backend application.
Here's my MQTT setup:
final String mqttServerAddress =
        String.format("ssl://%s:%s", options.mqttBridgeHostname, options.mqttBridgePort);

// Create our MQTT client. The mqttClientId is a unique string that identifies this device. For
// Google Cloud IoT Core, it must be in the format below.
final String mqttClientId =
        String.format(
                "projects/%s/locations/%s/registries/%s/devices/%s",
                options.projectId, options.cloudRegion, options.registryId, options.deviceId);

MqttConnectOptions connectOptions = new MqttConnectOptions();
// Note that Google Cloud IoT Core only supports MQTT 3.1.1, and Paho requires that we
// explicitly set this. If you don't set the MQTT version, the server will immediately close its
// connection to your device.
connectOptions.setMqttVersion(MqttConnectOptions.MQTT_VERSION_3_1_1);

Properties sslProps = new Properties();
sslProps.setProperty("com.ibm.ssl.protocol", "TLSv1.2");
connectOptions.setSSLProperties(sslProps);

// With Google Cloud IoT Core, the username field is ignored, however it must be set for the
// Paho client library to send the password field. The password field is used to transmit a JWT
// to authorize the device.
connectOptions.setUserName(options.userName);
DateTime iat = new DateTime();
if ("RS256".equals(options.algorithm)) {
    connectOptions.setPassword(
            createJwtRsa(options.projectId, options.privateKeyFile).toCharArray());
} else if ("ES256".equals(options.algorithm)) {
    connectOptions.setPassword(
            createJwtEs(options.projectId, options.privateKeyFileEC).toCharArray());
} else {
    throw new IllegalArgumentException(
            "Invalid algorithm " + options.algorithm + ". Should be one of 'RS256' or 'ES256'.");
}

// [START iot_mqtt_publish]
// Create a client, and connect to the Google MQTT bridge.
MqttClient client = new MqttClient(mqttServerAddress, mqttClientId, new MemoryPersistence());

// Both connect and publish operations may fail. If they do, allow retries but with an
// exponential backoff time period.
long initialConnectIntervalMillis = 500L;
long maxConnectIntervalMillis = 6000L;
long maxConnectRetryTimeElapsedMillis = 900000L;
float intervalMultiplier = 1.5f;
long retryIntervalMs = initialConnectIntervalMillis;
long totalRetryTimeMs = 0;
while ((totalRetryTimeMs < maxConnectRetryTimeElapsedMillis) && !client.isConnected()) {
    try {
        client.connect(connectOptions);
    } catch (MqttException e) {
        int reason = e.getReasonCode();
        // If the connection is lost or if the server cannot be connected, allow retries, but with
        // exponential backoff.
        System.out.println("An error occurred: " + e.getMessage());
        if (reason == MqttException.REASON_CODE_CONNECTION_LOST
                || reason == MqttException.REASON_CODE_SERVER_CONNECT_ERROR) {
            System.out.println("Retrying in " + retryIntervalMs / 1000.0 + " seconds.");
            Thread.sleep(retryIntervalMs);
            totalRetryTimeMs += retryIntervalMs;
            retryIntervalMs *= intervalMultiplier;
            if (retryIntervalMs > maxConnectIntervalMillis) {
                retryIntervalMs = maxConnectIntervalMillis;
            }
        } else {
            throw e;
        }
    }
}
attachCallback(client, options.deviceId);

// The MQTT topic that this device will publish telemetry data to. The MQTT topic name is
// required to be in the format below. Note that this is not the same as the device registry's
// Cloud Pub/Sub topic.
String mqttTopic = String.format("/devices/%s/%s", options.deviceId, options.messageType);

long secsSinceRefresh = ((new DateTime()).getMillis() - iat.getMillis()) / 1000;
if (secsSinceRefresh > (options.tokenExpMins * MINUTES_PER_HOUR)) {
    System.out.format("\tRefreshing token after: %d seconds%n", secsSinceRefresh);
    iat = new DateTime();
    if ("RS256".equals(options.algorithm)) {
        connectOptions.setPassword(
                createJwtRsa(options.projectId, options.privateKeyFile).toCharArray());
    } else if ("ES256".equals(options.algorithm)) {
        connectOptions.setPassword(
                createJwtEs(options.projectId, options.privateKeyFileEC).toCharArray());
    } else {
        throw new IllegalArgumentException(
                "Invalid algorithm " + options.algorithm + ". Should be one of 'RS256' or 'ES256'.");
    }
    client.disconnect();
    client.connect(connectOptions);
    attachCallback(client, options.deviceId);
}
MqttMessage message = new MqttMessage(data.getBytes(StandardCharsets.UTF_8.name()));
message.setQos(1);
client.publish(mqttTopic, message);
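The connect loop above implements capped exponential backoff (multiply the wait by 1.5 on each retry, up to 6 seconds). Isolating the schedule into a helper makes that behaviour easy to verify on its own; a sketch using the sample's constants (the class name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public final class Backoff {
    // Produces the successive sleep intervals the connect loop would use:
    // each retry multiplies the interval by the given factor, capped at maxMillis.
    public static List<Long> schedule(long initialMillis, long maxMillis,
                                      float multiplier, int retries) {
        List<Long> intervals = new ArrayList<>();
        long interval = initialMillis;
        for (int i = 0; i < retries; i++) {
            intervals.add(interval);
            interval = (long) (interval * multiplier);
            if (interval > maxMillis) {
                interval = maxMillis;
            }
        }
        return intervals;
    }
}
```

For the sample's values (500 ms initial, 6000 ms cap, 1.5x) the waits grow 500, 750, 1125, ... until they hit the 6-second ceiling.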
Here's the options class:
public class MqttExampleOptions {
    String mqttBridgeHostname = "mqtt.googleapis.com";
    short mqttBridgePort = 8883;
    String projectId =
    String cloudRegion = "europe-west1";
    String userName = "unused";
    String registryId = <I don't want to show>
    String gatewayId = <I don't want to show>
    String algorithm = "RS256";
    String command = "demo";
    String deviceId = <I don't want to show>
    String privateKeyFile = "rsa_private_pkcs8";
    String privateKeyFileEC = "ec_private_pkcs8";
    int numMessages = 100;
    int tokenExpMins = 20;
    String telemetryData = "Specify with -telemetry_data";
    String messageType = "config";
    int waitTime = 120;
}
When I try to publish a message to the "config" topic, I get this error:
ERROR 12556 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[.[dispatcherServlet] : Servlet.service()
for servlet [dispatcherServlet] in context with path [/iot-admin] threw exception [Request processing failed; nested exception is Connection Lost (32109) - java.io.EOFException] with root cause
java.io.EOFException: null
at java.base/java.io.DataInputStream.readByte(DataInputStream.java:273) ~[na:na]
at org.eclipse.paho.client.mqttv3.internal.wire.MqttInputStream.readMqttWireMessage(MqttInputStream.java:92) ~[org.eclipse.paho.client.mqttv3-1.2.5.jar:na]
at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:137) ~[org.eclipse.paho.client.mqttv3-1.2.5.jar:na]
This is the message I am sending:
{
    "Led": {
        "id": "e36b5877-2579-4db1-b595-0e06410bde11",
        "rgbColors": [{
            "id": "1488acfe-baa7-4de8-b4a2-4e01b9f89fc5",
            "configName": "Terminal",
            "rgbColor": [150, 150, 150]
        }, {
            "id": "b8ce6a35-4219-4dba-a8de-a9070f17f1d2",
            "configName": "PayZone",
            "rgbColor": [150, 150, 150]
        }, {
            "id": "bf62cef4-8e22-4804-a7d8-a0996bef392e",
            "configName": "PayfreeLogo",
            "rgbColor": [150, 150, 150]
        }, {
            "id": "c62d25a4-678b-4833-9123-fe3836863400",
            "configName": "BagDetection",
            "rgbColor": [200, 200, 200]
        }, {
            "id": "e19e1ff3-327e-4132-9661-073f853cf913",
            "configName": "PersonDetection",
            "rgbColor": [150, 150, 150]
        }]
    }
}
How can I properly send a message to the "config" topic without getting this error? I am able to send messages to the "state" topic, but not to the "config" topic.

syntax error, unexpected SYMBOL, expecting ',' while converting list to data.frame in R

I am trying to convert Java ResultSet data to a data.frame in R. Up to 5 rows there is no issue, but when more than 5 records are converted I get the error "syntax error, unexpected SYMBOL, expecting ','".
public static void main(String[] args) throws ScriptException, FileNotFoundException {
    RenjinScriptEngineFactory factory = new RenjinScriptEngineFactory();
    ScriptEngine engine = factory.getScriptEngine();
    engine.eval("source(\"script.R\")");
    engine.eval("workflow.predict(" + getData() + ")");
}

public static ListVector getData() {
    Connection connection = null;
    PreparedStatement statement = null;
    ResultSet resultSet = null;
    StringArrayVector.Builder userid = new StringArrayVector.Builder();
    StringArrayVector.Builder defaultCC = new StringArrayVector.Builder();
    StringArrayVector.Builder typeId = new StringArrayVector.Builder();
    StringArrayVector.Builder amount = new StringArrayVector.Builder();
    StringArrayVector.Builder cc = new StringArrayVector.Builder();
    StringArrayVector.Builder activity = new StringArrayVector.Builder();
    try {
        String query = "select top 6 wi.owner_user_id as userId, u.cost_center_id as defaultCC, li.expense_type_id typeId, wi.doc_specific_amount as amount, al.cost_center_id as cc, wi.activity_id as activity from alwf_work_item wi, alco_user u, aler_expense_line_item li, aler_line_allocation al where u.user_id = wi.owner_user_id and wi.business_object_id = li.parent_id and li.exp_line_item_id = al.exp_line_item_id";
        connection = getConnection();
        statement = connection.prepareStatement(query);
        resultSet = statement.executeQuery();
        while (resultSet.next()) {
            userid.add(resultSet.getLong("userId"));
            defaultCC.add(resultSet.getLong("defaultCC"));
            typeId.add(resultSet.getLong("typeId"));
            amount.add(resultSet.getLong("amount"));
            cc.add(resultSet.getLong("cc"));
            activity.add(resultSet.getLong("activity"));
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    ListVector.NamedBuilder myDf = new ListVector.NamedBuilder();
    myDf.setAttribute(Symbols.CLASS, StringVector.valueOf("data.frame"));
    myDf.setAttribute(Symbols.ROW_NAMES, new RowNamesVector(userid.length()));
    myDf.add("userId", userid.build());
    myDf.add("defaultCC", defaultCC.build());
    myDf.add("typeId", typeId.build());
    myDf.add("amount", amount.build());
    myDf.add("cc", cc.build());
    myDf.add("activity", activity.build());
    return myDf.build();
}
R Script
workflow.predict <- function(abc) {
    print(abc)
    print(data.frame(lapply(abc, as.character), stringsAsFactors = FALSE))
    dataset = data.frame(lapply(abc, as.character), stringsAsFactors = FALSE)
    library(randomForest)
    classifier = randomForest(x = dataset[-6], y = as.factor(dataset$activity))
    new.data = c(62020, 3141877, 46013, 950, 3141877)
    y_pred = predict(classifier, newdata = new.data)
    print(y_pred)
    return(y_pred)
}
Below are the errors while running the script with the top 6 records; with the top 5 records it runs without any error. Thanks in advance.
Exception in thread "main" org.renjin.parser.ParseException: Syntax error at 1 68 1 70 68 70: syntax error, unexpected SYMBOL, expecting ','
at org.renjin.parser.RParser.parseAll(RParser.java:146)
at org.renjin.parser.RParser.parseSource(RParser.java:69)
at org.renjin.parser.RParser.parseSource(RParser.java:122)
at org.renjin.parser.RParser.parseSource(RParser.java:129)
at org.renjin.script.RenjinScriptEngine.eval(RenjinScriptEngine.java:127)
at RTest.main(RTest.java:27)
Sample Data of the resultset is here
By using `Builder userid = new ListVector.Builder()` instead of `StringArrayVector.Builder userid = new StringArrayVector.Builder()` it accepts more records without any error. I am not sure why StringArrayVector restricts it to 5 records.
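A likely contributor is the string interpolation in `engine.eval("workflow.predict(" + getData() + ")")`: the result of `ListVector.toString()` is not guaranteed to be valid R source, and if it elides elements past a certain length, the generated call would fail to parse exactly as shown. A cleaner approach is to bind the object into the engine rather than printing it into the source, e.g. `engine.put("df", getData()); engine.eval("workflow.predict(df)");` (Renjin's engine implements the standard `javax.script.ScriptEngine.put`). If you do need to generate R source from Java values, serialize every element explicitly; a stdlib sketch (class and method names are illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public final class RCallBuilder {
    // Renders the columns as a data.frame(...) literal so that every element
    // is written out explicitly, instead of relying on a Java toString().
    public static String dataFrameCall(String function, Map<String, List<String>> columns) {
        String args = columns.entrySet().stream()
                .map(e -> e.getKey() + "=c(" + e.getValue().stream()
                        .map(v -> "'" + v + "'")
                        .collect(Collectors.joining(",")) + ")")
                .collect(Collectors.joining(", "));
        return function + "(data.frame(" + args + ", stringsAsFactors=FALSE))";
    }
}
```

This produces the same shape of call the script expects, but with no hidden truncation regardless of row count.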

NetMQ client to client messaging

I'm trying to create an RPC program to communicate between hosts located on different networks, and chose the router-dealer configuration of NetMQ described here: http://netmq.readthedocs.io/en/latest/router-dealer/#router-dealer
But the problem is that the router always selects a random dealer when routing a message to the backend.
Code which I used :
using (var frontend = new RouterSocket(string.Format("#tcp://{0}:{1}", "127.0.0.1", "5556"))) // "#tcp://10.0.2.218:5559"
using (var backend = new DealerSocket(string.Format("#tcp://{0}:{1}", "127.0.0.1", "5557"))) // "#tcp://10.0.2.218:5560"
{
    // Handler for messages coming in to the frontend
    frontend.ReceiveReady += (s, e) =>
    {
        Console.WriteLine("message arrived on frontEnd");
        NetMQMessage msg = e.Socket.ReceiveMultipartMessage();
        string clientAddress = msg[0].ConvertToString();
        Console.WriteLine("Sending to :" + clientAddress);
        //TODO: Make routing here
        backend.SendMultipartMessage(msg); // Relay this message to the backend
    };

    // Handler for messages coming in to the backend
    backend.ReceiveReady += (s, e) =>
    {
        Console.WriteLine("message arrived on backend");
        var msg = e.Socket.ReceiveMultipartMessage();
        frontend.SendMultipartMessage(msg); // Relay this message to the frontend
    };

    using (var poller = new NetMQPoller { backend, frontend })
    {
        // Listen out for events on both sockets and raise events when messages come in
        poller.Run();
    }
}
Code for Client:
using (var client = new RequestSocket(">tcp://" + "127.0.0.1" + ":5556"))
{
    var messageBytes = UTF8Encoding.UTF8.GetBytes("Hello");
    var messageToServer = new NetMQMessage();
    //messageToServer.AppendEmptyFrame();
    messageToServer.Append("Server2");
    messageToServer.Append(messageBytes);
    WriteToConsoleVoid("======================================");
    WriteToConsoleVoid(" OUTGOING MESSAGE TO SERVER ");
    WriteToConsoleVoid("======================================");
    //PrintFrames("Client Sending", messageToServer);
    client.SendMultipartMessage(messageToServer);
    NetMQMessage serverMessage = client.ReceiveMultipartMessage();
    WriteToConsoleVoid("======================================");
    WriteToConsoleVoid(" INCOMING MESSAGE FROM SERVER");
    WriteToConsoleVoid("======================================");
    //PrintFrames("Server receiving", clientMessage);
    byte[] rpcByteArray = null;
    if (serverMessage.FrameCount == 3)
    {
        var clientAddress = serverMessage[0];
        rpcByteArray = serverMessage[2].ToByteArray();
    }
    WriteToConsoleVoid("======================================");
    Console.ReadLine();
}
Code for Dealer:
using (var server = new ResponseSocket())
{
    server.Options.Identity = UTF8Encoding.UTF8.GetBytes(confItem.ResponseServerID);
    Console.WriteLine("Server ID:" + confItem.ResponseServerID);
    server.Connect(string.Format("tcp://{0}:{1}", "127.0.0.1", "5557"));
    using (var poller = new NetMQPoller { server })
    {
        server.ReceiveReady += (s, a) =>
        {
            byte[] response = null;
            NetMQMessage serverMessage = null;
            try
            {
                serverMessage = a.Socket.ReceiveMultipartMessage();
            }
            catch (Exception ex)
            {
                Console.WriteLine("Exception on ReceiveMultipartMessage : " + ex.ToString());
                //continue;
            }
            byte[] eaBody = null;
            string clientAddress = "";
            if (serverMessage.FrameCount == 2)
            {
                clientAddress = serverMessage[0].ConvertToString();
                Console.WriteLine("ClientAddress:" + clientAddress);
                eaBody = serverMessage[1].ToByteArray();
                Console.WriteLine("Received message from remote computer: {0} bytes , CurrentID : {1}", eaBody.Length, confItem.ResponseServerID);
            }
            else
            {
                Console.WriteLine("Received message from remote computer: CurrentID : {0}", confItem.ResponseServerID);
            }
        };
        poller.Run();
    }
}
Is it possible to choose a specific backend on frontend.ReceiveReady?
Thanks!
Your backend should be a router as well. You need the workers to register, or you need to know all the available workers and their identities. When sending on the backend, push the worker's identity frame onto the front of the message.
Take a look at the Majordomo example in the zeromq guide:
http://zguide.zeromq.org/page:all#toc72
http://zguide.zeromq.org/page:all#toc98

how to move files inside my spark streaming application

Here I stream the data from a streaming directory and then write it to an output location. I am also trying to implement a process that moves HDFS files from an input folder to the streaming directory. Currently this move happens once, before the streaming context starts, but I want the move to be executed for every batch of the DStream. Is that even possible?
val streamed_rdd = ssc.fileStream[LongWritable, Text, TextInputFormat](streaming_directory, (t: Path) => true, true).map { case (x, y) => y.toString }

streamed_rdd.foreachRDD(rdd => {
  rdd.map(x => x.split("\t")).map(x => x(3)).foreachPartition { partitionOfRecords =>
    val connection: Connection = connectionFactory.createConnection()
    connection.setClientID("Email_send_module_client_id")
    println("connection started with active mq")
    val session: Session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
    println("created session")
    val dest = session.createQueue("dwEmailsQueue2")
    println("destination queue name = dwEmailsQueue2")
    val prod_queue = session.createProducer(dest)
    connection.start()
    partitionOfRecords.foreach { record =>
      val rec_to_send: TextMessage = session.createTextMessage(record)
      println("started creating a text message")
      prod_queue.send(rec_to_send)
      println("sent the record")
    }
    connection.close()
  }
})

val LIST = scala.collection.mutable.MutableList[String]()
val files_to_move = scala.collection.mutable.MutableList[String]()
val cmd = "hdfs dfs -ls -d " + load_directory + "/*"
println(cmd)
val system_time = System.currentTimeMillis
println(system_time)
val output = cmd.!!
output.split("\n").foreach(x => x.split(" ").foreach(x => if (x.startsWith("/user/hdpprod/")) LIST += x))
LIST.foreach(x => if (x.toString.split("/").last.split("_").last.toLong < system_time) files_to_move += x)
println("files to move" + files_to_move)
var mv_cmd: String = "hdfs dfs -mv "
for (file <- files_to_move) {
  mv_cmd += file + " "
}
mv_cmd += streaming_directory
println(mv_cmd)
val mv_output = mv_cmd.!!
println("moved the data to the folder")

if (streamed_rdd.count().toString == "0") {
  println("no data in the streamed list")
} else {
  println("saving the Dstream at " + System.currentTimeMillis())
  streamed_rdd.transform(rdd => { rdd.map(x => (check_time_to_send + "\t" + check_time_to_send_utc + "\t" + x)) }).saveAsTextFiles("/user/hdpprod/temp/spark_streaming_output_sent/sent")
}
ssc.start()
ssc.awaitTermination()
}
}
I tried doing the same thing in a Java implementation, shown below. You can call this method from foreachPartition on the RDD.
public static void moveFiles(final String moveFilePath, final JavaRDD rdd) {
    for (final Partition partition : rdd.partitions()) {
        final UnionPartition unionPartition = (UnionPartition) partition;
        final NewHadoopPartition newHadoopPartition =
                (NewHadoopPartition) unionPartition.parentPartition();
        final String fPath = newHadoopPartition.serializableHadoopSplit().value().toString();
        final String[] filespaths = fPath.split(":");
        if ((filespaths != null) && (filespaths.length > 0)) {
            for (final String filepath : filespaths) {
                if ((filepath != null) && filepath.contains("/")) {
                    final File file = new File(filepath);
                    if (file.exists() && file.isFile()) {
                        try {
                            File destFile = new File(moveFilePath + "/" + file.getName());
                            if (destFile.exists()) {
                                destFile = new File(moveFilePath + "/" + file.getName() + "_");
                            }
                            java.nio.file.Files.move(file.toPath(), destFile.toPath(),
                                    StandardCopyOption.REPLACE_EXISTING);
                        } catch (Exception e) {
                            logger.error("Exception while moving file", e);
                        }
                    }
                }
            }
        }
    }
}
