Is it possible to limit download-progress updates to once every 0.3 seconds using ReactiveCocoa?
example:
if (the_previous_update == undefined)
{
    update;
}
else if (current_time - the_previous_update >= 0.3)
{
    the_previous_update = current_time;
    update;
}
else
{
    do nothing;
}
Maybe something like this?
RACSignal *updateSignal = ...; // a signal that sends a 'next' whenever the download has progressed
[[updateSignal throttle:0.3] subscribeNext:^(id x) {
    updateUI();
}];
Yes, as @Grav says, a throttle seems like the best operator for your use case. A throttle basically stores up next events and dispatches only the last one received within your given time interval.
With a throttle you can make sure that you update your UI at most every 0.3 seconds, and be sure that the value you use to update is the last one received in that time interval.
This differs from delay.
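To make the throttle-vs-delay distinction concrete, here is a toy model of the throttle semantics described above (plain JavaScript purely for illustration; this is not the RAC implementation): each value is scheduled `interval` later and dropped if a newer value arrives before that deadline, whereas a plain delay would shift every value by `interval` and drop none.

```javascript
// Toy model of throttle semantics: each [time, value] pair is scheduled for
// emission `interval` later, but is dropped if a newer value arrives before
// that deadline. Returns the surviving [emitTime, value] pairs.
function throttleModel(events, interval) {
  const emitted = [];
  for (let i = 0; i < events.length; i++) {
    const [t, value] = events[i];
    const deadline = t + interval;
    const next = events[i + 1];
    // emit only if no newer value arrives before this one's deadline
    if (!next || next[0] >= deadline) emitted.push([deadline, value]);
  }
  return emitted;
}

// Progress events at t=0, 100 and 500 with a 300ms throttle: the value at
// t=0 is superseded before its deadline, so only the last value of each
// busy burst gets through.
// throttleModel([[0, 'a'], [100, 'b'], [500, 'c']], 300)
//   → [[400, 'b'], [800, 'c']]
```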
I am trying to change a number between 1 and 90 automatically every five seconds.
Here is my code,
public function getNumber(Request $request)
{
    $numbers = rand(1, 90);
    $data = array('message' => ResponseMessage::statusResponses(ResponseMessage::_STATUS_DATA_FOUND), 'number' => $numbers);
    return $this->sendSuccessResponse($data);
}
Now, when I hit this endpoint I get a number between 1 and 90, and a different one on every request.
But what I want is that once I hit it, the number then changes automatically every 5 seconds.
Can we do that? If yes, please help me out. Thanks in advance.
You can use JavaScript for this and call your function, or send an ajax request to your controller, inside a timer:
var intervalId = window.setInterval(function(){
    // call your function or send the ajax request here
}, 5000);
It will run every 5 seconds, and to stop it:
clearInterval(intervalId);
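A slightly fuller sketch of the same pattern, with placeholder names of my own: wrap the interval so the id that clearInterval needs is not lost.

```javascript
// Calls `getNumber` every `periodMs` milliseconds and passes the result to
// `onNumber`. Keep the returned id: it is what clearInterval needs to stop.
function startPolling(getNumber, periodMs, onNumber) {
  return setInterval(() => onNumber(getNumber()), periodMs);
}

// Hypothetical usage: produce (or fetch via your ajax call) a number every
// 5 seconds and show it, then stop when it is no longer needed.
// const id = startPolling(() => Math.floor(Math.random() * 90) + 1, 5000,
//                         n => document.getElementById('number').textContent = n);
// ...
// clearInterval(id);  // the number stops changing
```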
I need some opinions/help on a use case of KStream/KTable usage.
Scenario:
I have 2 topics with a common key, requestId:
input_time(requestId, StartTime)
completion_time(requestId, EndTime)
The data in input_time is populated at time t and the data in completion_time at t+n (n being the time taken for the process to complete).
Objective
To compare the time taken for each request by joining data from the two topics, and to raise an alert in case a threshold time is breached.
It may happen that the process fails and the data never arrives on the completion_time topic for a request.
In that case we intend to check whether the current time is past a specific threshold (let's say 5 s) since the start time.
input_time(req1,100) completion_time(req1,104) --> no alert to be raised, as 104-100 < 5 (the configured value)
input_time(req2,100) completion_time(req2,108) --> alert to be raised with (req2,108), as 108-100 > 5
input_time(req3,100) completion_time: no record --> if the current time is beyond 105, raise an alert with (req3,currentSysTime), as currentSysTime - 100 > 5
Options Tried.
1) Tried both KTable-KTable and KStream-KStream outer joins, but the third case always fails.
final KTable<String, Long> startTimeTable = builder.table("input_time", Consumed.with(Serdes.String(), Serdes.Long()));
final KTable<String, Long> completionTimeTable = builder.table("completion_time", Consumed.with(Serdes.String(), Serdes.Long()));
KTable<String, Long> thresholdBreached = startTimeTable.outerJoin(completionTimeTable,
        new MyValueJoiner());
thresholdBreached.toStream().filter((k, v) -> v != null)
        .to("finalTopic", Produced.with(Serdes.String(), Serdes.Long()));
Joiner
public Long apply(Long startTime, Long endTime) {
    // If the input record itself is not available we can't do any alerting.
    if (null == startTime) {
        log.info("AlertValueJoiner check: the start time itself is null so returning null");
        return null;
    }
    // Current processing time is the time used.
    long currentTime = System.currentTimeMillis();
    log.info("Checking startTime {} end time {} sysTime {}", startTime, endTime, currentTime);
    if (null == endTime && currentTime - startTime > 5000) {
        log.info("Alert: no corresponding record from file completion yet, currentTime {} startTime {}",
                currentTime, startTime);
        return currentTime - startTime;
    } else if (null != endTime && endTime - startTime > 5000) {
        log.info("Alert: threshold breach for file completion startTime {} endTime {}",
                startTime, endTime);
        return endTime - startTime;
    }
    return null;
}
2) Tried the custom logic approach recommended in the thread
How to manage Kafka KStream to KStream windowed join?
-- This approach stopped working for scenarios 2 and 3.
Is there any way to handle all three scenarios using the DSL or the Processor API?
I am not sure if we can use some kind of punctuator to listen for window changes, check the stream records in the current window, and, if no matching record is found, produce a result with the system time.
Due to the nature of the logic involved, it had to be done with a combination of the DSL and the Processor API:
1) Used a custom transformer and state store to compare against the configured values (cases 1 & 2).
2) Added a punctuator based on wall-clock time to handle the 3rd case.
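For reference, the alert rule itself, independent of how it is wired into Kafka Streams, is small enough to state as a pure function. This is a sketch in JavaScript purely for illustration, with names of my own:

```javascript
// Decision rule for the three cases: returns the duration to alert with,
// or null when no alert should be raised. `threshold` is the configured
// value (5 in the examples).
function alertValue(startTime, endTime, currentTime, threshold) {
  if (startTime == null) return null;            // no input record: nothing to check
  if (endTime == null) {                          // case 3: completion never arrived
    const elapsed = currentTime - startTime;      // the wall-clock punctuator runs this check
    return elapsed > threshold ? elapsed : null;
  }
  const duration = endTime - startTime;           // cases 1 and 2: both records present
  return duration > threshold ? duration : null;
}

// alertValue(100, 104, 120, 5) → null  (req1: 4 <= 5, no alert)
// alertValue(100, 108, 120, 5) → 8     (req2: 8 > 5, alert)
// alertValue(100, null, 106, 5) → 6    (req3: no completion, 6 > 5, alert)
```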
Currently I am making a med schedule which fires a local notification every day at the time your med is due. So far I have it working for the first day, but it will not fire again unless the user taps on the notification.
else if (Device.RuntimePlatform == Device.iOS)
{
    App.BadgeCount = App.BadgeCount + 1;
    CrossNotifications.Current.Badge = App.BadgeCount;
    Random rnd = new Random();
    string stringid = rnd.Next(1, 1000000000).ToString();
    stringid = CrossNotifications.Current.Send(usermedid, "Please take " + dosage + " of " + medname, "ding", ms);
    Debug.WriteLine("Notification saved " + stringid);
}
Is there any way, in my code, to set the notification to repeat daily at that exact time without the user having to tap on the notification? Or would it be best to revert to using UILocalNotifications and the repeat interval?
Any help appreciated. Thanks.
From what I can see in the ACR source, it can only schedule something through the calendar, so yes, you will need a manual action to schedule the next one. Also, I think there is a limit to how many notifications you can schedule in the future.
You mention UILocalNotifications; note that this API is deprecated as of iOS 10. You probably want to use the replacement: UNNotificationRequest. Looking at that API, there is a trigger to schedule notifications with a time interval and the option to let them repeat at each interval. In native code this looks like:
let trigger = UNTimeIntervalNotificationTrigger(timeInterval: (30*60), repeats: true)
That is probably what you are after. So now you either need to find a plugin that supports this, or write something yourself.
Given an Observable.timer(10000), say I'd like to keep pushing the timer back so it doesn't emit yet. Is that possible?
For example, at t = 2000 I want to increase the timeout by 2000. With this dynamic change, the timer will now emit at t = 12000 rather than the original t = 10000.
Try the code below. Using switchMap matters here: it unsubscribes from the previous inner timer whenever a new click arrives, so each click restarts the countdown instead of piling up timers.
Rx.Observable.fromEvent(document, "click")
    .scan((count, _) => count + 1, 0)
    .switchMap(count => {
        console.log(count);
        return Rx.Observable.timer(count * 1000);
    })
    .subscribe(console.log);
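To check the arithmetic from the question (a 10000 ms timer extended by 2000 at t = 2000 should emit at t = 12000), here is a pure model of an extendable deadline; a plain JavaScript sketch, the function name is mine:

```javascript
// Pure model of an extendable timer: start with `initialDelay`, then apply a
// list of [atTime, extraMs] extensions in time order. An extension only
// counts if it arrives before the current deadline (i.e. before the timer
// has fired). Returns the time at which the timer finally emits.
function emissionTime(initialDelay, extensions) {
  let deadline = initialDelay;
  for (const [at, extra] of extensions) {
    if (at < deadline) deadline += extra;   // late extensions are ignored
  }
  return deadline;
}

// The example from the question: a 10000ms timer extended by 2000 at t=2000
// now emits at t=12000.
// emissionTime(10000, [[2000, 2000]]) → 12000
```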
We are storing small documents in ES that represent a sequence of events for an object. Each event has a date/time stamp. We need to analyze the time between events for all objects over a period of time.
For example, imagine these event json documents:
{ "object":"one", "event":"start", "datetime":"2016-02-09 11:23:01" }
{ "object":"one", "event":"stop", "datetime":"2016-02-09 11:25:01" }
{ "object":"two", "event":"start", "datetime":"2016-01-02 11:23:01" }
{ "object":"two", "event":"stop", "datetime":"2016-01-02 11:24:01" }
What we would want to get out of this is a histogram plotting the two resulting time stamp deltas (from start to stop): 2 minutes / 120 seconds for object one and 1 minute / 60 seconds for object two.
Ultimately we want to monitor the time between start and stop events but it requires that we calculate the time between those events then aggregate them or provide them to the Kibana UI to be aggregated / plotted. Ideally we would like to feed the results directly to Kibana so we can avoid creating any custom UI.
Thanks in advance for any ideas or suggestions.
Since you're open to using Logstash, there's a way to do it using the aggregate filter.
Note that this is a community plugin that needs to be installed first (i.e. it doesn't ship with Logstash by default).
The main idea of the aggregate filter is to merge two "related" log lines. You can configure the plugin so it knows what "related" means. In your case, "related" means that both events must share the same object name (i.e. one or two) and then that the first event has its event field with the start value and the second event has its event field with the stop value.
When the filter encounters the start event, it stores the datetime field of that event in an internal map. When it encounters the stop event, it computes the time difference between the two datetimes and stores the duration in seconds in the new duration field.
input {
    ...
}
filter {
    ...other filters

    if [event] == "start" {
        aggregate {
            task_id => "%{object}"
            code => "map['start'] = event['datetime']"
            map_action => "create"
        }
    } else if [event] == "stop" {
        aggregate {
            task_id => "%{object}"
            # assumes 'datetime' has been parsed into a numeric timestamp
            # (e.g. with a date filter) so the subtraction yields seconds
            code => "map['duration'] = event['datetime'] - map['start']"
            end_of_task => true
            timeout => 120
        }
    }
}
output {
    elasticsearch {
        ...
    }
}
Note that you can adjust the timeout value (here 120 seconds) to better suit your needs. When the timeout elapses and no stop event has arrived, the existing start event is discarded.
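The pairing the aggregate filter performs can be modeled in a few lines (JavaScript here purely for illustration; the real work happens inside Logstash): remember each object's start, and when the matching stop arrives, emit the delta in seconds.

```javascript
// Toy model of the aggregate filter's pairing: remember each object's start
// timestamp, and when the matching stop arrives, emit the duration in seconds.
function durations(events) {
  const starts = new Map();
  const out = {};
  for (const e of events) {
    // the sample documents use 'YYYY-MM-DD HH:mm:ss'; treat them as UTC
    const t = Date.parse(e.datetime.replace(' ', 'T') + 'Z') / 1000;
    if (e.event === 'start') {
      starts.set(e.object, t);
    } else if (e.event === 'stop' && starts.has(e.object)) {
      out[e.object] = t - starts.get(e.object);
      starts.delete(e.object);  // task is done, like end_of_task => true
    }
  }
  return out;
}

// With the four sample documents from the question:
// durations([...]) → { one: 120, two: 60 }
```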