I'm a beginner with Spring WebFlux. While researching, I found some code like this:
Mono result = someMethodThatReturnMono().cache();
The name "cache" tells me something is being cached, but where is the cache, and how do I retrieve what is in it? Is it something like Caffeine?
It caches the result of the previous steps of the Flux/Mono chain up to the point where cache() is called. Check the output of this code to see it in action:
import reactor.core.publisher.Mono;

public class CacheExample {
    public static void main(String[] args) {
        var mono = Mono.fromCallable(() -> {
                    System.out.println("Go!");
                    return 5;
                })
                .map(i -> {
                    System.out.println("Double!");
                    return i * 2;
                });

        var cached = mono.cache();

        System.out.println("Using cached");
        System.out.println("1. " + cached.block());
        System.out.println("2. " + cached.block());
        System.out.println("3. " + cached.block());

        System.out.println("Using NOT cached");
        System.out.println("1. " + mono.block());
        System.out.println("2. " + mono.block());
        System.out.println("3. " + mono.block());
    }
}
Output:
Using cached
Go!
Double!
1. 10
2. 10
3. 10
Using NOT cached
Go!
Double!
1. 10
Go!
Double!
2. 10
Go!
Double!
3. 10
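There is no external cache like Caffeine involved: the cache() operator itself holds the emitted value and replays it to later subscribers, so there is nothing separate to retrieve from. If you need the cached value to expire, Reactor also provides a time-bounded overload, cache(Duration); a minimal sketch (values and TTL are illustrative):

import java.time.Duration;
import reactor.core.publisher.Mono;

public class CacheTtlExample {
    public static void main(String[] args) throws InterruptedException {
        // Replays the cached value for 2 seconds; once the TTL expires,
        // the next subscription re-triggers the source.
        Mono<Integer> cached = Mono.fromCallable(() -> {
            System.out.println("Go!");
            return 5;
        }).cache(Duration.ofSeconds(2));

        System.out.println("1. " + cached.block()); // prints "Go!" once
        System.out.println("2. " + cached.block()); // served from the cache
        Thread.sleep(2500);
        System.out.println("3. " + cached.block()); // prints "Go!" again
    }
}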
I am trying to figure out how I can modify my actuator endpoints (specifically health) to limit their refresh frequency. I want to see if I can set a check to trigger once a minute for a specific dataset (e.g. mail) but leave the others as they are.
So far I can't find that logic anywhere. The only way I can think of is creating your own health service:
@Component
@RefreshScope
public class HealthCheckService implements HealthIndicator, Closeable {

    private static final Log log = LogFactory.getLog(HealthCheckService.class);

    // State updated by the scheduled check (fields reconstructed for completeness).
    private final AtomicLong lastUpdate = new AtomicLong(System.currentTimeMillis());
    private final AtomicBoolean isRunning = new AtomicBoolean(true);
    private volatile Status status = Status.UP;
    private volatile String detailMsg;
    private String serviceName;
    private ServiceProperties serviceProperties; // holds getMonitorFailedThreshold()

    @Override
    public Health health() {
        // check if things are stale
        if (System.currentTimeMillis() - this.lastUpdate.get() > this.serviceProperties.getMonitorFailedThreshold()) {
            String errMsg = '[' + this.serviceName + "] health status has not been updated in over ["
                    + this.serviceProperties.getMonitorFailedThreshold() + "] milliseconds. Last updated: ["
                    + this.lastUpdate.get() + ']';
            log.error(errMsg);
            return Health.down().withDetail(this.serviceName, errMsg).build();
        }
        if (this.detailMsg != null) {
            return Health.status(this.status).withDetail(this.serviceName, this.detailMsg).build();
        }
        return Health.status(this.status).build();
    }

    /**
     * Scheduled, low-latency health check.
     */
    @Scheduled(fixedDelayString = "${health.update-delay:60000}")
    public void healthUpdate() {
        if (this.isRunning.get()) {
            if (log.isDebugEnabled()) {
                log.debug("Updating Health Status of [" + this.serviceName + "]. Last Status = ["
                        + this.status.getCode() + ']');
            }
            // do some sort of checking and update the value appropriately.
            this.status = Status.UP;
            this.lastUpdate.set(System.currentTimeMillis());
            if (log.isDebugEnabled()) {
                log.debug("Health Status of [" + this.serviceName + "] updated to [" + this.status.getCode() + ']');
            }
        }
    }

    @Override
    public void close() {
        this.isRunning.set(false);
    }
}
I am not sure if there is a way to set this in Spring as configuration, or whether the only way around it is to build a custom HealthIndicator.
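If caching the endpoint's response is enough, a per-endpoint time-to-live can be set in plain configuration; note it applies to the whole health endpoint, not to a single indicator such as mail. A sketch, assuming Spring Boot 2.x and application.properties:

# Cache the aggregated /actuator/health response for 60 seconds,
# so the indicators are invoked at most once per minute.
management.endpoint.health.cache.time-to-live=60s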
I create some OneTimeWorkRequests when I use the android-arch WorkManager.
I can watch the WorkStatus with an observer like this:
final WorkManager workManager = WorkManager.getInstance();
final LiveData<List<WorkStatus>> workStatus =
        workManager.getStatusesByTag(DailyWorker.DAILY_WORK);
observer = new Observer<List<WorkStatus>>() {
    @Override
    public void onChanged(@Nullable List<WorkStatus> workStatuses) {
        Log.d("WorkManager", "onChanged = workStatuses = " + workStatuses);
        if (workStatuses == null || workStatuses.size() == 0) {
            //DailyWorker.createNewPeriodWork();
        } else {
            Log.d("WorkManager ", "onChanged = workStatuses.size() = " + workStatuses.size());
            for (int i = 0; i < workStatuses.size(); i++) {
                Log.d("WorkManager ", "onChanged Work Status Id: " + workStatuses.get(i).getId());
                Log.d("WorkManager ", "onChanged Work Status State: " + workStatuses.get(i).getState());
            }
        }
        workStatus.removeObserver(observer);
    }
};
workStatus.observe(this, observer);
My Android arch version is android.arch.work:work-runtime:1.0.0-alpha02
But there are a lot of WorkStatus entries in the list, some SUCCEEDED, some ENQUEUED, some CANCELLED, and the size of the list keeps increasing.
How can I clear the WorkStatus list?
You can call the method pruneWork() on your WorkManager to clear the List<WorkStatus>.
myWorkManager.pruneWork();
Hope it helps!
You can call keepResultsForAtLeast on the WorkRequest builder when you create the work request.
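For example (a sketch; it assumes a WorkManager version where the builder exposes keepResultsForAtLeast, and reuses DailyWorker from the question):

import java.util.concurrent.TimeUnit;
import androidx.work.OneTimeWorkRequest;

// Keep this request's result for at least one day after it finishes;
// beyond that, WorkManager may prune it automatically.
// The duration is illustrative.
OneTimeWorkRequest request = new OneTimeWorkRequest.Builder(DailyWorker.class)
        .keepResultsForAtLeast(1, TimeUnit.DAYS)
        .addTag(DailyWorker.DAILY_WORK)
        .build();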
I have a basic for loop that downloads files. It's supposed to update the label as it progresses.
Searching here on Stack Overflow, I found a suggestion to use SetNeedsDisplay(), but the label still refuses to update. Any idea?
for (int i = 0; i < files.Length; i++)
{
    status.Text = "Downloading file " + (i + 1) + " of " + files.Length + "...";
    status.SetNeedsDisplay();

    string remoteFile = assetServer + files[i];
    var webClient2 = new WebClient();
    string localFile = files[i];
    string localPath3 = Path.Combine(documentsPath, localFile);
    webClient2.DownloadFile(remoteFile, localPath3);
}
As previously suggested, avoid blocking the UI thread when doing heavy work. WebClient already has an async method you can use:
webClient2.DownloadFileAsync(new System.Uri(remoteFile), localPath3);
Since UI elements must only be touched from the main thread, wrap those accesses in the built-in InvokeOnMainThread method:
InvokeOnMainThread(() => {
    status.Text = "Downloading file " + (i + 1) + " of " + files.Length + "...";
    status.SetNeedsDisplay();
});
Finally, use a using statement to take care of resource disposal:
using (var webClient2 = new WebClient())
{
    webClient2.DownloadFileAsync(new System.Uri(remoteFile), localPath3);
}
You could also put the iteration inside the using statement; that way you don't create a WebClient object for each file, and instead use the same object to download all the files in your files array.
I am aware that the Hadoop REST API provides programmatic access to job status.
Similarly, is there any way to get the Spark job status in a program?
It is not similar to a REST API, but you can track the status of jobs from inside the application by registering a SparkListener with SparkContext.addSparkListener. It goes something like this:
sc.addSparkListener(new SparkListener {
  override def onStageCompleted(event: SparkListenerStageCompleted) = {
    if (event.stageInfo.stageId == myStage) {
      println(s"Stage $myStage is done.")
    }
  }
})
Providing the answer for Java. Scala would be almost identical, just using SparkContext instead of JavaSparkContext.
Assume you have a JavaSparkContext:
private final JavaSparkContext sc;
The following code gets all the info available on the Jobs and Stages tabs:
JavaSparkStatusTracker statusTracker = sc.statusTracker();
for (int jobId : statusTracker.getActiveJobIds()) {
    SparkJobInfo jobInfo = statusTracker.getJobInfo(jobId);
    log.info("Job " + jobId + " status is " + jobInfo.status().name());
    log.info("Stages status:");
    for (int stageId : jobInfo.stageIds()) {
        SparkStageInfo stageInfo = statusTracker.getStageInfo(stageId);
        log.info("Stage id=" + stageId + "; name = " + stageInfo.name()
                + "; completed tasks:" + stageInfo.numCompletedTasks()
                + "; active tasks: " + stageInfo.numActiveTasks()
                + "; all tasks: " + stageInfo.numTasks()
                + "; submission time: " + stageInfo.submissionTime());
    }
}
Unfortunately, everything else is accessible only from the Scala SparkContext, so there may be some difficulties working with the returned structures from Java:
Pools list: sc.sc().getAllPools()
Executor Memory Status: sc.sc().getExecutorMemoryStatus()
Executor ids: sc.sc().getExecutorIds()
Storage info: sc.sc().getRddStorageInfo()
... and you can try to find more useful info there.
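Calling these from Java through the JavaSparkContext looks roughly like this (a sketch; the returned Scala collections print fine, but iterating them from Java needs a conversion such as scala.collection.JavaConverters):

// The underlying Scala SparkContext is reachable via sc.sc().
// toString on the returned Scala collections is readable as-is.
log.info("Executor ids: " + sc.sc().getExecutorIds());
log.info("Executor memory status: " + sc.sc().getExecutorMemoryStatus());
log.info("RDD storage info: " + java.util.Arrays.toString(sc.sc().getRddStorageInfo()));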
There's an (almost) undocumented REST API that delivers almost everything you can see on the Spark UI:
http://<sparkMasterHost>:<uiPort>/api/v1/...
For a local installation you can start from here:
http://localhost:8080/api/v1/applications
You can find the possible endpoints here: https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/status/api/v1/ApiRootResource.scala
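To consume it programmatically, a quick sketch in Java (java.net.http requires Java 11+; the host and port are assumptions, adjust them to your master or driver UI):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SparkApiProbe {
    public static void main(String[] args) throws Exception {
        // Fetch the application list from the REST API of a local
        // standalone master; localhost:8080 is an assumption.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/v1/applications"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON array of application summaries
    }
}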
There's an (almost) undocumented REST endpoint on the Spark UI that delivers metrics about the job and its performance.
You can access it with:
http://<driverHost>:<uiPort>/metrics/json/
(the UI port is 4040 by default)
You can also get the Spark job status without using the Spark History Server. You can use SparkLauncher 2.0.1 (even the Spark 1.6 version will work) to launch your Spark job from a Java program:
SparkAppHandle appHandle = sparkLauncher.startApplication();
You can also pass a listener to the startApplication() method:
SparkAppHandle appHandle = sparkLauncher.startApplication(sparkAppListener);
The listener has two methods that inform you about job state changes and info changes.
I implemented it using a CountDownLatch, and it works as expected. This is for SparkLauncher 2.0.1, and it works in yarn-cluster mode too.
...
final CountDownLatch countDownLatch = new CountDownLatch(1);
SparkAppListener sparkAppListener = new SparkAppListener(countDownLatch);
SparkAppHandle appHandle = sparkLauncher.startApplication(sparkAppListener);
Thread sparkAppListenerThread = new Thread(sparkAppListener);
sparkAppListenerThread.start();
long timeout = 120;
countDownLatch.await(timeout, TimeUnit.SECONDS);
...

private static class SparkAppListener implements SparkAppHandle.Listener, Runnable {
    private static final Log log = LogFactory.getLog(SparkAppListener.class);
    private final CountDownLatch countDownLatch;

    public SparkAppListener(CountDownLatch countDownLatch) {
        this.countDownLatch = countDownLatch;
    }

    @Override
    public void stateChanged(SparkAppHandle handle) {
        String sparkAppId = handle.getAppId();
        State appState = handle.getState();
        if (sparkAppId != null) {
            log.info("Spark job with app id: " + sparkAppId + ",\t State changed to: " + appState + " - "
                    + SPARK_STATE_MSG.get(appState));
        } else {
            log.info("Spark job's state changed to: " + appState + " - " + SPARK_STATE_MSG.get(appState));
        }
        if (appState != null && appState.isFinal()) {
            countDownLatch.countDown();
        }
    }

    @Override
    public void infoChanged(SparkAppHandle handle) {}

    @Override
    public void run() {}
}
I have a functional interface in Java 8:
public interface IFuncLambda1 {
    public int someInt();
}
In main:
IFuncLambda1 iFuncL1 = () -> 5;
System.out.println("\niFuncL1.someInt: " + iFuncL1.someInt());
iFuncL1 = () -> 1;
System.out.println("iFuncL1.someInt: " + iFuncL1.someInt());
Running this will yield:
iFuncL1.someInt: 5
iFuncL1.someInt: 1
Is this functionality OK as it is? Is it intended?
If the overriding were done in an implementing class, and the implementation changed at some point, then every place that method is called would see the same behaviour; we would have consistency. But if I change the behaviour/implementation through lambda expressions like in the example, the behaviour is only valid until the next change later in the flow. This feels unreliable and hard to follow.
EDIT:
@assylias I don't see how someInt() has its behaviour changed...
What if I added a param to someInt and had this code:
IFuncLambda1 iFuncL1 = (x) -> x - 1;
System.out.println("\niFuncL1.someInt: " + iFuncL1.someInt(var));
iFuncL1 = (x) -> x + 1;
System.out.println("iFuncL1.someInt: " + iFuncL1.someInt(var));
with var even being final, how would you rewrite that with classes?
In your example, () -> 5 is one object and () -> 1 is another object. You happen to use the same variable to refer to them but that is just how references work in Java.
By the way, it behaves exactly the same way as if you had used anonymous classes:
IFuncLambda1 iFuncL1 = new IFuncLambda1() { public int someInt() { return 5; } };
System.out.println("\niFuncL1.someInt: " + iFuncL1.someInt());
iFuncL1 = new IFuncLambda1() { public int someInt() { return 1; } };
System.out.println("iFuncL1.someInt: " + iFuncL1.someInt());
Or using "normal" classes:
public static class A implements IFuncLambda1 {
    private final int i;

    public A(int i) { this.i = i; }

    public int someInt() { return i; }
}
IFuncLambda1 iFuncL1 = new A(5);
System.out.println("\niFuncL1.someInt: " + iFuncL1.someInt());
iFuncL1 = new A(1);
System.out.println("iFuncL1.someInt: " + iFuncL1.someInt());
There again there are two instances of A but you lose the reference to the first instance when you reassign iFuncL1.
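To address the edit: with a parameter the reasoning is unchanged; reassigning the variable only changes which object receives the call. A sketch using classes (IFuncLambda2, Minus and Plus are hypothetical names; the interface needs an int parameter for this to compile):

public class ParamExample {

    // Hypothetical variant of the interface, with a parameter.
    interface IFuncLambda2 {
        int someInt(int x);
    }

    static class Minus implements IFuncLambda2 {
        public int someInt(int x) { return x - 1; }
    }

    static class Plus implements IFuncLambda2 {
        public int someInt(int x) { return x + 1; }
    }

    public static void main(String[] args) {
        final int var = 10; // var can be final; only the reference is reassigned
        IFuncLambda2 iFuncL1 = new Minus();
        System.out.println("iFuncL1.someInt: " + iFuncL1.someInt(var)); // 9
        iFuncL1 = new Plus();
        System.out.println("iFuncL1.someInt: " + iFuncL1.someInt(var)); // 11
    }
}

Each lambda or instance keeps its own fixed behaviour; reassigning iFuncL1 merely points the same variable at a different object.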