AutoReconnect with multiple server URIs - Paho

Consider the scenario where I have the code below.
MqttConnectOptions connOpt = new MqttConnectOptions();
connOpt.setServerURIs(new String[]{"tcp://localhost:1883", "tcp://some-other-host:1883"});
connOpt.setAutomaticReconnect(true);
client.setCallback( new TemperatureSubscriber() );
client.connect(connOpt);
So when I call connect, it connects to localhost.
Then the connection is lost, for whatever reason. At this point, since automaticReconnect is true, will it reconnect to localhost or to some-other-host?

Let me show how to find such answers yourself.
First, visit the GitHub repository with the Paho source code.
Then enter setAutomaticReconnect into the search field.
That is of course just the public name; you need to spot the corresponding private member.
In MqttConnectOptions.java you quickly find that member:
private boolean automaticReconnect = false;
Then perform another search, this time for the word automaticReconnect.
That leads you to a hint in the MqttAsyncClient.java file:
// Insert our own callback to iterate through the URIs till the connect
// succeeds
MqttToken userToken = new MqttToken(getClientId());
ConnectActionListener connectActionListener = new ConnectActionListener(this, persistence, comms, options,
        userToken, userContext, callback, reconnecting);
userToken.setActionCallback(connectActionListener);
userToken.setUserContext(this);
Finally, in the ConnectActionListener.java file you can confirm that the URIs are tried one after another:
/**
 * The connect failed, so try the next URI on the list.
 * If there are no more URIs, then fail the overall connect.
 *
 * @param token the {@link IMqttToken} from the failed connection attempt
 * @param exception the {@link Throwable} exception from the failed connection attempt
 */
public void onFailure(IMqttToken token, Throwable exception) {
    int numberOfURIs = comms.getNetworkModules().length;
    int index = comms.getNetworkModuleIndex();
    ...
    ...
    comms.setNetworkModuleIndex(index + 1);
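So a failed connection attempt advances the network-module index, and the next URI in the list is tried until one succeeds. To make the pattern concrete, here is a minimal, self-contained sketch (not Paho code; the class and method names are invented for illustration):

import java.util.List;

// A toy version of the "try the next URI on failure" pattern that
// ConnectActionListener implements inside Paho.
public class UriFailover {

    private final List<String> uris;
    private int index = 0;

    public UriFailover(List<String> uris) {
        this.uris = uris;
    }

    /** Try each URI in order and return the first one that connects. */
    public String connect() {
        while (index < uris.size()) {
            String uri = uris.get(index);
            if (tryConnect(uri)) {
                return uri; // the connect succeeded
            }
            index++; // the connect failed, so try the next URI on the list
        }
        throw new IllegalStateException("Could not connect to any URI");
    }

    // Stand-in for a real network attempt; here we pretend localhost is down.
    private boolean tryConnect(String uri) {
        return !uri.contains("localhost");
    }

    public static void main(String[] args) {
        UriFailover failover = new UriFailover(
                List.of("tcp://localhost:1883", "tcp://some-other-host:1883"));
        System.out.println("Connected to " + failover.connect());
    }
}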

Related

Chainlink API Get Request isn't returning a value

I am creating the following smart contract. It makes a Chainlink request which isn't returning a value from the API. I am using the following [jobID][1] and this [node][2], but the node doesn't start the job and I don't know why. I checked whether the node has a balance of LINK tokens, but it doesn't have any, and I don't know how to send LINK to it.
I am using the Kovan testnet to try the smart contract. Could you suggest something I can try?
// This example code is designed to quickly deploy an example contract using Remix.
pragma solidity ^0.6.0;

import "@chainlink/contracts/src/v0.6/ChainlinkClient.sol";

contract APIConsumer is ChainlinkClient {

    uint256 public volume;

    address private oracle;
    bytes32 private jobId;
    uint256 private fee;

    /**
     * Network: Kovan
     * Oracle: 0x2f90A6D021db21e1B2A077c5a37B3C7E75D15b7e
     * Job ID: 29fa9aa13bf1468788b7cc4a500a45b8
     * Fee: 0.1 LINK
     */
    constructor() public {
        setPublicChainlinkToken();
        oracle = 0x56dd6586DB0D08c6Ce7B2f2805af28616E082455;
        jobId = "b6602d14e4734c49a5e1ce19d45a4632";
        fee = 0.1 * 10 ** 18; // 0.1 LINK
    }

    /**
     * Create a Chainlink request to retrieve API response, find the target
     * data, then multiply by 1000000000000000000 (to remove decimal places from data).
     ************************************************************************************
     *                                   STOP!                                          *
     *      THIS FUNCTION WILL FAIL IF THIS CONTRACT DOES NOT OWN LINK                  *
     *      ----------------------------------------------------------                 *
     *      Learn how to obtain testnet LINK and fund this contract:                   *
     *      ------- https://docs.chain.link/docs/acquire-link --------                  *
     *      ---- https://docs.chain.link/docs/fund-your-contract -----                  *
     *                                                                                  *
     ************************************************************************************/
    function requestVolumeData() public returns (bytes32 requestId) {
        Chainlink.Request memory request = buildChainlinkRequest(jobId, address(this), this.fulfill.selector);

        // Set the URL to perform the GET request on
        request.add("get", "https://min-api.cryptocompare.com/data/pricemultifull?fsyms=ETH&tsyms=USD");

        // Set the path to find the desired data in the API response, where the response format is:
        // {"RAW":
        //   {"ETH":
        //     {"USD":
        //       {
        //         ...,
        //         "VOLUME24HOUR": xxx.xxx,
        //         ...
        //       }
        //     }
        //   }
        // }
        request.add("path", "RAW.ETH.USD.VOLUME24HOUR");
        //request.add("path", "data.0.Myvalue");
        // Multiply the result by 1000000000000000000 to remove decimals
        int timesAmount = 10**18;
        request.addInt("times", timesAmount);

        // Sends the request
        return sendChainlinkRequestTo(oracle, request, fee);
    }

    /**
     * Receive the response in the form of uint256
     */
    function fulfill(bytes32 _requestId, uint256 _volume) public recordChainlinkFulfillment(_requestId) {
        volume = _volume;
    }

    /**
     * Withdraw LINK from this contract
     *
     * NOTE: DO NOT USE THIS IN PRODUCTION AS IT CAN BE CALLED BY ANY ADDRESS.
     * THIS IS PURELY FOR EXAMPLE PURPOSES ONLY.
     */
    function withdrawLink() external {
        LinkTokenInterface linkToken = LinkTokenInterface(chainlinkTokenAddress());
        require(linkToken.transfer(msg.sender, linkToken.balanceOf(address(this))), "Unable to transfer");
    }
}
[1]: https://market.link/jobs/0609deab-6d61-4937-85e4-a8e810b8b272/runs
[2]: https://market.link/nodes/323602b9-3831-4f8d-a66b-3fb7531649eb/metrics?start=1631783169&end=1632387969
Looking at the Etherscan activity, it looks like the node you are using may be inactive. Try this node and jobId:
Oracle = 0xc57B33452b4F7BB189bB5AfaE9cc4aBa1f7a4FD8;
JobId = "d5270d1c311941d0b08bead21fea7747";
These were taken from the Chainlink Official Docs.
To check whether a node is inactive, look up the oracle address in a block explorer. You can see there that the original node you tried to use hasn't posted a transaction in quite a long time.
If a node is inactive you will need to find a new one or host one yourself. To find more nodes and jobs, you can check market.link or use the one found in the docs as mentioned earlier.

Spring Integration (SFTP) message source isn't getting more than 1 file per poll despite setting to unlimited

I have the following code to read XML files from an SFTP server as InputStream:
@Configuration
public class SftpConfig {
    ...
    @Bean
    @InboundChannelAdapter(channel = "stream", poller = @Poller(fixedDelay = "60000"))
    public MessageSource<InputStream> messageSource() {
        SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(template());
        messageSource.setRemoteDirectory(sftpProperties.getBaseDir());
        messageSource.setFilter(new SftpSimplePatternFileListFilter("*.xml"));
        // messageSource.setMaxFetchSize(-1); no matter what i set this to, it only fetches one file
        return messageSource;
    }

    @ServiceActivator(inputChannel = "stream", adviceChain = "after")
    @Bean
    public MessageHandler handle() {
        return message -> {
            Assert.isTrue(message.getPayload() instanceof InputStream, "Payload must be of type $InputStream");
            String filename = (String) message.getHeaders().get(FileHeaders.REMOTE_FILE);
            InputStream is = (InputStream) message.getPayload();
            log.info("I am here"); // each poll only prints this once
        };
    }
    ...
}
When I debugged or checked the logs for MessageHandler#handleMessage, I only ever saw one message (file object) come through. And there is more than one .xml file sitting on the SFTP server, as I could verify by seeing another file come through on the next poll. The documentation says
/**
 * Set the maximum number of objects the source should fetch if it is necessary to
 * fetch objects. Setting the
 * maxFetchSize to 0 disables remote fetching, a negative value indicates no limit.
 * @param maxFetchSize the max fetch size; a negative value means unlimited.
 */
void setMaxFetchSize(int maxFetchSize);
So I fiddled with different numbers, but to no avail. What am I missing here?
Sorry for the misleading name, but fetch doesn't mean poll. The fetch option just transfers up to that many remote entries into a local cache on the first poll, and every subsequent poll takes entries from that cache until it is exhausted.
The option for max messages per poll belongs to the @Poller configuration. See the respective option:
/**
 * @return The maximum number of messages to receive for each poll.
 * Can be specified as 'property placeholder', e.g. {@code ${poller.maxMessagesPerPoll}}.
 * Defaults to -1 (infinity) for polling consumers and 1 for polling inbound channel adapters.
 */
String maxMessagesPerPoll() default "";
Pay attention to that 1 for polling inbound channel adapters. That's why you see only one message coming through per poll.
Nevertheless, the logic is to push only one message to the channel at a time; there is no batching of however many files you have at the moment. Independently of the fetch size, only one message per poll is sent to the channel. Although I agree that with an unlimited maxMessagesPerPoll, all the messages are sent on the same thread and during the same poll cycle.
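So, if the goal is to drain the whole fetched batch in a single poll cycle, maxMessagesPerPoll can be raised on the poller. A sketch based on the question's own bean (the value "-1" means unlimited):

@Bean
@InboundChannelAdapter(channel = "stream",
        poller = @Poller(fixedDelay = "60000", maxMessagesPerPoll = "-1"))
public MessageSource<InputStream> messageSource() {
    SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(template());
    messageSource.setRemoteDirectory(sftpProperties.getBaseDir());
    messageSource.setFilter(new SftpSimplePatternFileListFilter("*.xml"));
    return messageSource;
}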

Google Drive API - update file metadata only

I am trying to rename a Google Drive file resource. I guess I am just missing something, since all other actions like getting the list of files, inserting files, and moving files between directories are working.
Precondition: I am trying to rename the file resource using this doc https://developers.google.com/drive/v2/reference/files/update with Java (JDK classes only). I do not use the gdrive Java SDK, Apache HttpClient or other libraries... just plain JDK tools.
So what I do:
Take the file metadata of the file I want to rename.
Modify the title property in this metadata.
Here is the code:
URLConnection urlConnection = new URL("https://www.googleapis.com/drive/v2/files/" + fileId).openConnection();
if (urlConnection instanceof HttpURLConnection) {
    HttpURLConnection httpURLConnection = (HttpURLConnection) urlConnection;
    httpURLConnection.setRequestMethod("PUT");
    httpURLConnection.setDoOutput(true);
    httpURLConnection.setRequestProperty("Authorization", "Bearer " + accessToken);
    DataOutputStream outputStream = new DataOutputStream(httpURLConnection.getOutputStream());
    outputStream.writeBytes(FILE_RESOURCE_METADATA_WITH_CHANGED_TITLE_IN_JSON);
    outputStream.flush();
    outputStream.close();
}
After making an actual call to the API I receive a 200 status code and a File resource in the response body (as expected), but the title remains the same. So I get no error and no changed title.
Moreover, the Google Drive API ignores any change in the file resource. It just returns the same file resource without any changes applied (I tried with the title, description, originalFileName and parents properties).
What I also tried so far:
Sending only the properties that should be changed, like
{"title":"some_new_name"}
The result is the same.
Changing PUT to PATCH. Unfortunately, PATCH is not supported by HttpURLConnection, but workarounds gave the same results: changes are ignored.
Using the Google API explorer (which can be found on the right side of the API reference page) - and... it works. I filled in only the fileId and the title property in the request body, and the file was renamed.
What am I missing?
Found the solution...
Adding this request property fixed the problem:
httpURLConnection.setRequestProperty("Content-Type", "application/json");
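For completeness, here is a sketch of the question's snippet with the fix applied (accessToken and fileId are assumed to be defined as in the question; the JSON body is the minimal rename payload shown earlier):

URLConnection urlConnection = new URL("https://www.googleapis.com/drive/v2/files/" + fileId).openConnection();
if (urlConnection instanceof HttpURLConnection) {
    HttpURLConnection httpURLConnection = (HttpURLConnection) urlConnection;
    httpURLConnection.setRequestMethod("PUT");
    httpURLConnection.setDoOutput(true);
    httpURLConnection.setRequestProperty("Authorization", "Bearer " + accessToken);
    // Without this header the API does not parse the JSON body and simply
    // returns the unchanged file resource with status 200.
    httpURLConnection.setRequestProperty("Content-Type", "application/json");
    try (DataOutputStream outputStream = new DataOutputStream(httpURLConnection.getOutputStream())) {
        outputStream.writeBytes("{\"title\":\"some_new_name\"}");
    }
    System.out.println("Response code: " + httpURLConnection.getResponseCode());
}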
Try the sample Java code given in the documentation, since it deals with updating an existing file's metadata and content.
In the code you will find file.setTitle(newTitle), which I think is the part you want to implement.
import com.google.api.client.http.FileContent;
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.model.File;
import java.io.IOException;
// ...
public class MyClass {

    // ...

    /**
     * Update an existing file's metadata and content.
     *
     * @param service Drive API service instance.
     * @param fileId ID of the file to update.
     * @param newTitle New title for the file.
     * @param newDescription New description for the file.
     * @param newMimeType New MIME type for the file.
     * @param newFilename Filename of the new content to upload.
     * @param newRevision Whether or not to create a new revision for this
     *        file.
     * @return Updated file metadata if successful, {@code null} otherwise.
     */
    private static File updateFile(Drive service, String fileId, String newTitle,
            String newDescription, String newMimeType, String newFilename, boolean newRevision) {
        try {
            // First retrieve the file from the API.
            File file = service.files().get(fileId).execute();

            // File's new metadata.
            file.setTitle(newTitle);
            file.setDescription(newDescription);
            file.setMimeType(newMimeType);

            // File's new content.
            java.io.File fileContent = new java.io.File(newFilename);
            FileContent mediaContent = new FileContent(newMimeType, fileContent);

            // Send the request to the API.
            File updatedFile = service.files().update(fileId, file, mediaContent).execute();
            return updatedFile;
        } catch (IOException e) {
            System.out.println("An error occurred: " + e);
            return null;
        }
    }

    // ...
}
Hope this gives you some pointers.

Is there a way to get the Spark tracking URL other than mining log files for the log output?

I have a Scala application that creates a Spark Session, and I have set up health checks that use the Spark REST API. The Spark application itself runs on Hadoop YARN. The REST API URL is currently retrieved by reading the Spark logging generated when the Spark Session is created. This works most of the time, but there are some edge cases in my application where it doesn't work so well.
Does anyone know of another way to get this tracking URL?
"You can do this by reading the yarn.resourcemanager.webapp.address value from YARN's config and the application ID (which is exposed both in an event sent on the listener bus, and an existing SparkContext method."
Copied the paragraph above as is from the developer's response found at: https://issues.apache.org/jira/browse/SPARK-20458
UPDATE:
I did try the solution and got pretty close. Here's some Scala/Spark code to build that URL:
@transient val ssc: StreamingContext = StreamingContext.getActiveOrCreate(rabbitSettings.checkpointPath, CreateStreamingContext)

// Update yarn logs URL in Elasticsearch
YarnLogsTracker.update(
  ssc.sparkContext.uiWebUrl,
  ssc.sparkContext.applicationId,
  "test2")
And the YarnLogsTracker object goes something like this:
object YarnLogsTracker {

  private def recoverURL(u: Option[String]): String = u match {
    case Some(a) => a.split(":").take(2).mkString(":")
    case None => ""
  }

  def update(rawUrl: Option[String], rawAppId: String, tenant: String): Unit = {
    val logUrl = s"${recoverURL(rawUrl)}:8042/node/containerlogs/container${rawAppId.substring(11)}_01_000002/$tenant/stdout/?start=-4096"
    ...
Which produces something like this: http://10.99.25.146:8042/node/containerlogs/container_1516203096033_91164_01_000002/test2/stdout/?start=-4096
I've discovered a "reasonable" way to obtain this. Obviously, the best way would be for Spark libraries to expose the ApplicationReport that they're already fetching to the launcher application directly, since they go to the trouble of setting delegation tokens, etc. However, this seems unlikely to happen.
This approach is two-pronged. First, it attempts to build a YarnClient itself, in order to fetch the ApplicationReport, which will have the authoritative tracking URL. However, from my experience, this can fail (ex: if the job was run in CLUSTER mode, with a --proxy-user in a Kerberized environment, then this will not be able to properly authenticate to YARN).
In my case, I'm calling this helper method from the driver itself, and reporting the result back to my launcher application on the side. However, in principle, any place where you have the Hadoop Configuration available should work (including, possibly, your launcher application). You can obviously use either "prong" of this implementation (or both) depending on your needs and tolerance for complexity, extra processing, etc.
/**
 * Given a Hadoop {@link org.apache.hadoop.conf.Configuration} and appId, use the YARN API (via an
 * {@link YarnClient} instance) to get the application report, which includes the trackingUrl. If this fails,
 * then as a fallback, it attempts to "guess" the URL by looking at various YARN configuration properties,
 * and assumes that the URL will be something like: <pre>[yarnWebUI:port]/proxy/[appId]</pre>.
 *
 * @param hadoopConf the Hadoop {@link org.apache.hadoop.conf.Configuration}
 * @param appId the YARN application ID
 * @return the app trackingUrl, either retrieved using the {@link YarnClient}, or manually constructed using
 *         the fallback approach
 */
public static String getYarnApplicationTrackingUrl(org.apache.hadoop.conf.Configuration hadoopConf, String appId) {
    LOG.debug("Attempting to look up YARN url for applicationId {}", appId);
    YarnClient yarnClient = null;
    try {
        // do not attempt to fail over on authentication error (ex: running with proxy-user and Kerberos)
        hadoopConf.set("yarn.client.failover-max-attempts", "0");
        yarnClient = YarnClient.createYarnClient();
        yarnClient.init(hadoopConf);
        yarnClient.start();
        final ApplicationReport report = yarnClient.getApplicationReport(ConverterUtils.toApplicationId(appId));
        return report.getTrackingUrl();
    } catch (YarnException | IOException e) {
        LOG.warn(
                "{} attempting to get report for YARN appId {}; attempting to use manually constructed fallback",
                e.getClass().getSimpleName(),
                appId,
                e
        );
        String baseYarnWebappUrl;
        String protocol;
        if ("HTTPS_ONLY".equals(hadoopConf.get("yarn.http.policy"))) {
            // YARN is configured to use HTTPS only, hence return the https address
            baseYarnWebappUrl = hadoopConf.get("yarn.resourcemanager.webapp.https.address");
            protocol = "https";
        } else {
            baseYarnWebappUrl = hadoopConf.get("yarn.resourcemanager.webapp.address");
            protocol = "http";
        }
        return String.format("%s://%s/proxy/%s", protocol, baseYarnWebappUrl, appId);
    } finally {
        if (yarnClient != null) {
            yarnClient.stop();
        }
    }
}
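Called from the driver, usage might look like this (a sketch; it assumes a live JavaSparkContext named jsc and the same LOG logger as above):

// Hypothetical call site: look up the tracking URL for the current application.
org.apache.hadoop.conf.Configuration hadoopConf = jsc.hadoopConfiguration();
String appId = jsc.sc().applicationId();
String trackingUrl = getYarnApplicationTrackingUrl(hadoopConf, appId);
LOG.info("YARN tracking URL for {}: {}", appId, trackingUrl);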

Struts jqGrid server validation error messages

I have a project using Struts2 on the server side and I am trying to make it work with jqGrid (using the JSON format). I have several tables made with jqGrid, and I am using the add/edit/delete buttons from navGrid.
The main problem I have is with server-side validation error messages. I have created custom validators and they work with JSP pages, using s:fielderror, but I don't know how to make them work for the add/edit popups from jqGrid. I am aware that jqGrid provides custom validation on the client, but this has its limitations (think about testing whether a user's email is unique, you definitely must use the database for that; or fields that depend on each other and must be tested together, e.g. if isManager is true, then managerCode must be non-empty and vice versa...).
When I use the client validation, a message appears in the add/edit window whenever an error occurs. Can I somehow display my server-side validation error messages in the window in the same way?
I managed to solve the issue. I will explain how, using a simple custom validator for an age field, which must be > 18 for an Employee (a sketch of such a validator is below). It is assumed in what follows that the validator has already been declared in validators.xml and mapped on the action, and that the message in case of a ValidationException is "An employee should be older than 18.".
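For reference, such a validator might look roughly like this (a sketch against the XWork FieldValidatorSupport API; the class name and the > 18 rule come from the description above, the rest is assumed):

// Hypothetical field validator: rejects employees aged 18 or younger.
public class AgeValidator extends FieldValidatorSupport {

    @Override
    public void validate(Object object) throws ValidationException {
        Integer age = (Integer) getFieldValue(getFieldName(), object);
        if (age == null || age <= 18) {
            throw new ValidationException("An employee should be older than 18.");
        }
    }
}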
Using Firebug, I figured out that the id of the error area in the edit form is FormError. It is possible to configure a callback function, errorTextFormat, in jqGrid, in order to process the response from the server. In the jqGrid configuration, one could write
errorTextFormat : errorFormat,
with
var errorFormat = function(response) {
    var text = response.responseText;
    $('#FormError').text(text); // sets the text in the error area to the validation message from the server
    return text;
};
The problem now is that the server will by default send a response containing the whole exception stack trace. To deal with it, I decided to create a new result type.
public class MyResult implements Result {

    private static final long serialVersionUID = -6814596446076941639L;

    private int errorCode = 500;

    public void execute(ActionInvocation invocation) throws Exception {
        ActionContext actionContext = invocation.getInvocationContext();
        HttpServletResponse response = (HttpServletResponse) actionContext
                .get("com.opensymphony.xwork2.dispatcher.HttpServletResponse");
        Exception exception = (Exception) actionContext
                .getValueStack().findValue("exception");
        response.setStatus(getErrorCode());
        try {
            PrintWriter out = response.getWriter();
            out.print(exception.getMessage());
        } catch (IOException e) {
            throw e;
        }
    }

    /**
     * @return the errorCode
     */
    public int getErrorCode() {
        return errorCode;
    }

    /**
     * @param errorCode the errorCode to set
     */
    public void setErrorCode(int errorCode) {
        this.errorCode = errorCode;
    }
}
It must also be configured in struts.xml as follows:
<package name="default" abstract="true" extends="struts-default">
    ...
    <result-types>
        <result-type name="validationError"
                     class="exercises.ex5.result.MyResult">
        </result-type>
    </result-types>
    ...
    <action name="myaction">
        ...
        <result name="validationException" type="validationError"></result>
        <exception-mapping result="validationException"
                           exception="java.lang.Exception"></exception-mapping>
    </action>
    ...
</package>
These are the steps I followed to get a validation error message in the add/edit window, and now it works.
