Pine Script - buy sell signal comes and goes - time

What needs to be done so that buy/sell gets activated only when the candle is closed?
Sometimes I have seen the buy/sell signal get generated while the candle is still forming (not closed), and by the time the candle closes, depending on the last traded price (LTP), the signal is no longer there. How do I avoid generating buy/sell signals midway through the candle?
Additional information: I am using a higher time frame (HTF) to get the MACD value, which is part of the buy/sell strategy; not sure whether that causes any issue. The security() function is used to get the value from the HTF, and I am using Pine Script v4.

You don't mention it, but I'll assume you're using a study and not a strategy.
If you are generating alerts, the simplest way is to configure alerts to trigger "Once Per Bar Close".
If you are concerned about the signals plotted on the chart, then you will need to ensure your code does not produce repainting conditions. See:
How to avoid repainting when NOT using security().
How to avoid repainting when using security() - PineCoders FAQ.
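Alert configuration aside, the underlying discipline is: never act on a value computed from a still-forming bar. Pine expresses this with barstate.isconfirmed and the security() patterns covered in the FAQs above. Purely as an illustration of the idea (this is not Pine, and the names and signal condition are hypothetical), here is the bar-close gate sketched in Java:

```java
import java.time.Duration;
import java.time.Instant;

// Minimal sketch (not Pine, hypothetical signal): ticks are aggregated into
// bars, and the signal is evaluated only on a *closed* bar, so it can never
// appear mid-bar and vanish by the close.
public class BarCloseGate {
    static final Duration BAR = Duration.ofMinutes(5); // assumed chart timeframe

    private Instant barStart = null;
    private double open, high, low, close;             // the still-forming bar

    /** Feed in every tick; returns true only when a bar just closed with a signal. */
    public boolean onTick(Instant time, double price) {
        Instant start = Instant.ofEpochMilli(
                time.toEpochMilli() / BAR.toMillis() * BAR.toMillis());
        boolean fired = false;
        if (barStart != null && !start.equals(barStart)) {
            fired = signalOnClosedBar(open, high, low, close); // bar is confirmed
            barStart = null;
        }
        if (barStart == null) {                        // start a fresh bar
            barStart = start;
            open = high = low = close = price;
        }
        high = Math.max(high, price);
        low = Math.min(low, price);
        close = price;                                 // keeps changing until the close
        return fired;
    }

    // Stand-in for the real condition (e.g. an HTF MACD cross).
    private boolean signalOnClosedBar(double o, double h, double l, double c) {
        return c > o;                                  // hypothetical
    }
}
```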


Is it possible to pause and resume Kafka Stream conditionally?

I have a requirement, as described at https://kafka.apache.org/21/documentation/streams/developer-guide/dsl-api.html#window-final-results, to wait until a window is closed in order to handle late, out-of-order events by buffering them for the duration of the window.
My understanding of this feature is that once a window is created, the window works like wall-clock processing. E.g., for a 1-hour window, the window starts ticking once the first event arrives; the window closes exactly one hour later, and all the events buffered so far are forwarded downstream. However, I need to be able to hold this window open even longer, conditionally, for as long as required, e.g. based on state/information in an external system such as a database.
To be precise, my requirement for event forwarding is: (a window of 1 hour, if the external state record says it is good) or (hold for as long as required until the external record says it's good, and resume tracking the event until it reaches a full hour, disregarding the time when the external system is not good).
To elaborate on the second condition: if my window duration is 1 hour and my event starts at 00:00, and the external system goes down at 00:30 and comes back up at 00:45, the window should extend until 01:15.
Is it possible to pause and resume the forwarding of events conditionally, based on my requirement above?
Do I have to use a transformer/processor and a value store to manually track the first processing time of my event, and conditionally forward the buffered events in a punctuator?
I appreciate any kind of workaround or suggestion for this requirement.
the window works like wall-clock processing
No. Kafka Streams works on event-time; hence, the timestamps returned from the TimestampExtractor (by default, the embedded record timestamp) are used to advance time.
To be precise, my requirement for event forwarding is: (a window of 1 hour, if the external state record says it is good)
This would need a custom solution IMHO.
or (hold for as long as required until the external record says it's good, and resume tracking the event until it reaches a full hour, disregarding the time when the external system is not good)
Not 100% sure I understand this part.
Is it possible to pause and resume the forwarding of events conditionally, based on my requirement above?
No.
Do I have to use a transformer/processor and a value store to manually track the first processing time of my event, and conditionally forward the buffered events in a punctuator?
I think this might be required.
Check out this blog post, which explains in detail how suppress() works and when it emits based on observed event-time: https://www.confluent.io/blog/kafka-streams-take-on-watermarks-and-triggers
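If you do go the custom route, here is a rough sketch of the shape it could take. The store name, the externalStateIsGood() check, and the buffering helpers are all made up, and the "pause the clock while the external system is down" bookkeeping is omitted; the idea is just to buffer records per key in a state store and let a wall-clock punctuator forward them once an hour has passed and the external state is good:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

// Sketch: hold events per key, and forward them only once (a) the key has
// been tracked for a full hour and (b) the external system says "good".
public class ConditionalHoldTransformer
        implements Transformer<String, String, KeyValue<String, String>> {

    private ProcessorContext context;
    private KeyValueStore<String, Long> firstSeen;   // key -> first-seen timestamp

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        this.context = context;
        this.firstSeen =
                (KeyValueStore<String, Long>) context.getStateStore("first-seen-store");
        // Re-check the release condition periodically on wall-clock time.
        context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME, now -> {
            List<String> release = new ArrayList<>();
            try (KeyValueIterator<String, Long> it = firstSeen.all()) {
                while (it.hasNext()) {
                    KeyValue<String, Long> entry = it.next();
                    if (now - entry.value >= Duration.ofHours(1).toMillis()
                            && externalStateIsGood(entry.key)) { // hypothetical DB check
                        release.add(entry.key);
                    }
                }
            }
            for (String key : release) {
                context.forward(key, releaseBuffered(key));
                firstSeen.delete(key);
            }
        });
    }

    @Override
    public KeyValue<String, String> transform(String key, String value) {
        if (firstSeen.get(key) == null) {
            firstSeen.put(key, context.timestamp()); // start tracking on first event
        }
        buffer(key, value);                          // hypothetical second store
        return null;                                 // nothing is forwarded inline
    }

    @Override
    public void close() {}

    // Stubs standing in for the external system and the value buffer.
    private boolean externalStateIsGood(String key) { return true; }
    private void buffer(String key, String value) {}
    private String releaseBuffered(String key) { return ""; }
}
```

The transformer would be wired in with something like stream.transform(ConditionalHoldTransformer::new, "first-seen-store"), with the store itself created via the Stores builders and registered on the topology.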

What is the Proper Non-blocking Algorithm for Modbus Master on Microcontroller

Modbus is a request/response type of serial communication: basically, the master sends out a request and one of the slaves responds.
I am modifying the code on a microcontroller which is a master unit on a Modbus network. This unit also has a small dot-matrix LCD and some buttons for a user interface. The microcontroller runs at 16 MHz.
The problem is that after the master unit sends out a request, it does not know when the slave will respond, so it may need to wait a relatively long time. However, as this unit has buttons and an LCD, it cannot block in one place for too long, because the user will feel lag when pressing a button. The original code used an RTOS, which separated the user-interface task from the serial-communication tasks, so it had no problem. Now I need to change it to non-RTOS code. I have implemented a system tick timer which interrupts every 1 ms. What is the proper (or common) way to do this?
It is possible to do quite a lot with just a single task, especially if you have interrupts. The intermediate position between a single very simple task and an RTOS is a cyclic executive. See http://www3.nd.edu/~cpoellab/teaching/cse40463/slides10.pdf for a brief overview of the spectrum of functionality from a cyclic executive up to a fully preemptive multitasking operating system. You will find much more if you search on this phrase and related phrases, including very sophisticated schemes for making sure that the system never misses its deadlines. If you are an aircraft flight control system, forgetting to check the aircraft pitch angle every X ms can cause problems elsewhere :-)
One way to rewrite code which is naturally multi-threaded is to maintain a model of the state of the system, such as a collection of objects each representing a Modbus connection, indexed by a connection id. Then write a routine for every sort of event that can happen, including the arrival of a clock interrupt. When an event happens, these routines typically work out which connection is involved, retrieve it from the main collection (or create it from scratch and enter it there if necessary), do the work associated with that particular sort of event, and then return.
It is often convenient to keep a queue of future events, indexed by time, and to have a routine that creates an object representing something to be done at some future time (such as calling a method to check for the expiration of a timeout) and puts this object on the queue.
You need to worry about interrupt processing getting called halfway through an event service routine. One way to deal with this is to lock out interrupts when that could cause a problem. Another way is to have the interrupt routine do nothing more than put an object on a queue that something else will check for later, or just set a flag. Then you need only lock out interrupts when you are checking for items on the queue and removing them.
A number of communications protocols are implemented in this way. Even in a true multitasking operating system you very often don't want to have to create a new thread every time you need to create a new connection. The two main problems with this approach are that the code is less clear than code which has a thread per object, because stuff that naturally goes together is chopped up into loads of event-service routines, and that if any of the event-service methods burns a significant amount of CPU, the system will stall, because nothing else happens while it is running.
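To make the shape concrete: on the 16 MHz target this would of course be C, but the structure of the single-task "superloop" with a tick-driven, non-blocking state machine is the same in any language. A minimal sketch, with all names hypothetical and the UART/tick handling reduced to stubs:

```java
// Superloop sketch: nothing ever blocks; the Modbus exchange is a state
// machine advanced a little on each pass, so the UI is polled continuously.
public class ModbusMasterLoop {
    enum State { IDLE, REQUEST_SENT, RESPONSE_READY, TIMED_OUT }

    static volatile boolean rxComplete = false; // would be set by the UART ISR
    static State state = State.IDLE;
    static long deadline;

    // Stands in for the 1 ms tick counter maintained by the timer interrupt.
    static long tickMs() { return System.currentTimeMillis(); }

    public static void main(String[] args) {
        while (true) {                         // one pass must always be short
            pollButtonsAndLcd();               // UI stays responsive on every pass
            switch (state) {
                case IDLE -> {
                    sendRequest();             // non-blocking: just queues TX bytes
                    deadline = tickMs() + 100; // e.g. a 100 ms response timeout
                    state = State.REQUEST_SENT;
                }
                case REQUEST_SENT -> {
                    if (rxComplete) { rxComplete = false; state = State.RESPONSE_READY; }
                    else if (tickMs() >= deadline) state = State.TIMED_OUT;
                    // otherwise do nothing; try again on the next pass
                }
                case RESPONSE_READY -> { handleResponse(); state = State.IDLE; }
                case TIMED_OUT -> { handleTimeout(); state = State.IDLE; }
            }
        }
    }

    // Stubs standing in for real hardware access.
    static void pollButtonsAndLcd() {}
    static void sendRequest() {}
    static void handleResponse() {}
    static void handleTimeout() {}
}
```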

Coalescing GCD file system events

I have a class that implements a file-monitoring service to detect when a file I am interested in has been changed by something other than my application. I use the standard technique of opening the file (with the O_EVTONLY flag) and binding the file descriptor to a Grand Central Dispatch source of type DISPATCH_SOURCE_TYPE_VNODE. When I get an event, I notify my main thread with NSNotificationCenter's postNotificationName:object:userInfo:, which calls an observer in my app delegate. So far so good. It works great. But, in general, if the triggering event is an attributes change (i.e. the DISPATCH_VNODE_ATTRIB flag is set on return from dispatch_source_get_data()), then I usually get two closely-spaced events. The behaviour is easily exhibited if I touch(1) the object I am monitoring. I hypothesise this is due to the file's mtime and atime being set non-atomically, although I can't verify this. This can lead to spurious notifications being sent to my observer, which raises the possibility of race conditions, etc.
What is the best way of dealing with this? I thought of storing a timestamp for the last event received and only sending a notification if the current event is later than this timestamp by some amount (a few tens of milliseconds?) Does this sound like a reasonable solution?
You can't ever escape the "race condition" in this situation, because the notification of your GCD event source in your process is not synchronous with the other process's modification of the underlying file. So, no matter what, you must always be tolerant of the possibility that the change you're being notified for could already be "gone."
As for coalescing, do whatever makes sense for your app. There are two obvious strategies. You can act immediately on a received event, and then drop subsequent events received in some time window on the floor, or you can delay every event for some time period during which you will drop other events for the same file on the floor. It really just depends on what's more important, acting quickly, or having a higher likelihood of a quiescent state (knowing that you can never be sure things are quiescent.)
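Both strategies reduce to a few lines of bookkeeping. Here is a sketch of each, in plain Java rather than in the GCD handler itself, with hypothetical names; per the point that follows, it is meant to run on the watcher's own queue, before anything reaches the main thread:

```java
import java.util.concurrent.*;

// Sketch of the two coalescing strategies for closely-spaced file events.
public class EventCoalescer {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private final long windowMs;
    private final Runnable notifyMainThread;

    private long lastFired = 0;         // leading-edge bookkeeping
    private ScheduledFuture<?> pending; // trailing-edge bookkeeping

    EventCoalescer(long windowMs, Runnable notifyMainThread) {
        this.windowMs = windowMs;
        this.notifyMainThread = notifyMainThread;
    }

    // Strategy 1 (act immediately): fire on the first event, then drop
    // subsequent events that arrive within the window on the floor.
    synchronized void onEventLeading() {
        long now = System.currentTimeMillis();
        if (now - lastFired >= windowMs) {
            lastFired = now;
            notifyMainThread.run();
        } // else: coalesced, dropped
    }

    // Strategy 2 (delay): hold every event for the window; a newer event
    // replaces the pending one, so only the last event in a burst fires.
    synchronized void onEventTrailing() {
        if (pending != null) pending.cancel(false);
        pending = timer.schedule(notifyMainThread, windowMs, TimeUnit.MILLISECONDS);
    }
}
```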
The only thing I would add is the suggestion that you do all your coalescing before dispatching anything to the main thread. The main thread has things like tracking loops, etc., that will make it harder to get time-based coalescing right in certain cases.

Event sourcing: fixing business logic bugs

I'm doing some study of event sourcing before applying it (or not).
Quick question: when using the event-sourcing pattern, we can imagine this scenario to handle an event:
command sent
command handler receives the command and validates it, then
command handler persists this event and publishes it
business model applies this event (business-logic algorithm v1, for example), mutating its internal state
We can replay all the events and reconstruct the business object state.
How do we handle business-logic bugs (business-logic algorithm v1 contains a nasty bug)?
I read that we can fix the bug and replay the events, and then we get the business model back in a valid state.
But what happens if the fixed business rule, when applied to event#0, would have caused the 'future' commands to fail? In other words, event#1, event#2, ..., event#n depended on the state of the domain model after applying event#0. How can we fix the cascading event failures?
I don't have a specific use case, but we can imagine an account whose balance is currently positive. Applying event#0 increments the balance, but this was a bug: the developer wanted to reduce the balance. Event#1 is a purchase that was valid because of the positive balance at the time.
The developer fixes the bug and replays the events. Event#0 decreases the balance, which becomes negative. Event#1 is replayed: what happens?
Do we need to handle this case with 'compensation'? How?
Thanks in advance for your comments and for any external resources (articles, blogs) that might help.
Minor correction
When using the event-sourcing pattern, we can imagine this scenario to handle an event
command sent
command handler receives the command and validates it, then
business model verifies that the command can be satisfied without violating the business invariant, and calculates the ensuing events
command handler persists these events and publishes them
The command handler (specifically, the anti-corruption layer) is responsible for making sure that the command is well formed. The business model decides if the command is permitted by the business.
The good news: the events are just state changes; all of the rule validation is already done. When you fix the bug in the domain object so that it produces the correct events in response to the command, you aren't changing the way the event is applied.
And you certainly aren't changing the history -- if the ATM gave away $20 that it wasn't supposed to, you can't get the money back by editing the record.
What that means is that deploying the bug fix keeps the problem from getting worse; but it doesn't do anything for the event histories that are incorrect.
Compensating events are the right answer here. Ever had a grocery clerk double-scan an item and have to back one of them out? If you look closely, you'll see all three items on the receipt:
+1 candy bar
+1 candy bar
-1 candy bar
That's the idiom: the compensating event is appended to the end of the stream.
So if the error first appeared in event #0, and [event #1 .. event #99] have been played on top of it, the remedy is to publish a compensating event #100.
Notice that this is exactly what bookkeepers do. You put the wrong sign on the entry on line #1, add a bunch more entries, realize your mistake, and then add a new entry that compensates for the earlier one.
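A toy version of the account example from the question (all names hypothetical) makes the point that the fix is an append, not an edit, and that apply() itself never re-validates anything:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: the buggy event stays in the stream; a compensating event appended
// at the end corrects the state, exactly like the candy-bar receipt above.
public class CompensationDemo {
    record BalanceChanged(long delta) {}  // the only event type in this sketch

    static long replay(List<BalanceChanged> history) {
        long balance = 0;
        for (BalanceChanged e : history) {
            balance += e.delta();         // apply() is dumb: pure state change
        }
        return balance;
    }

    public static void main(String[] args) {
        List<BalanceChanged> stream = new ArrayList<>();
        stream.add(new BalanceChanged(+100)); // event#0: the bug (should have been -100)
        stream.add(new BalanceChanged(-30));  // event#1: purchase, valid given what we knew
        // The fix: never edit history. Append a compensating event instead;
        // -200 undoes the wrong +100 and applies the intended -100.
        stream.add(new BalanceChanged(-200));
        System.out.println(replay(stream));   // -130: the state the fixed rule implies
    }
}
```

Whether the purchase in event#1 should itself be compensated (since the fixed rule would have rejected it) is exactly the kind of mitigation decision the answer leaves to the domain experts.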
More good news: in mature business processes, there are already mitigation procedures in place to handle various contingencies. So you can grab a meeting with your domain experts, and doodle on the whiteboard explaining the problem, and your experts should be able to show you the right way to compensate for it. Everything after that is feature management (does the mitigation need to be automated? Does the system need to do the mitigation automatically, or can it let human experts tell it what mitigation to apply, etc. etc.)

What is the reasoning for and the basic concepts behind an interstitial loading page?

I'm interested in finding out why this is used on some Web sites for processing user-initiated search submissions, how it affects the request and response flow, and programmatically why it would be necessary (or beneficial). In an MVC framework it seems difficult to execute since you are injecting another page into the middle of the flow.
EDIT:
Not advertising related. For instance, most travel sites used to do this, and there were no ads... some banking sites do it too, where there is just a loader that says something like "Please wait while we process your transaction...".
It is often used for long-running requests to prevent the web server from timing out the request. With an interstitial page, you are able to continuously refresh the page until you get results back.
EDIT:
Also, for long-running requests, it is beneficial to have a "Loading..." page in order to show the user that something is happening. Without the interstitial page, the request can appear to have hung if it takes too long.
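As a sketch of that refresh-until-done mechanism (the endpoint, HTML, and timings are all made up), using only the JDK's built-in HTTP server: each request returns immediately, with either a self-refreshing "please wait" page or the finished results, so no single request lives long enough to hit a timeout.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.CompletableFuture;

// Sketch: an interstitial that re-requests itself every 2 s until the slow
// job behind it has finished.
public class InterstitialDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for the slow search: finishes after ~15 s.
        CompletableFuture<String> result = CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(15_000); } catch (InterruptedException e) { }
            return "<html><body>Your flights: ...</body></html>";
        });

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/search", exchange -> {
            String body = result.isDone()
                    ? result.join()                                    // real results page
                    : "<html><head><meta http-equiv='refresh' content='2'></head>"
                      + "<body>Please wait while we find your flights...</body></html>";
            byte[] bytes = body.getBytes();
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(bytes); }
        });
        server.start();
    }
}
```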
To supplement what HVS said, interstitials which appear before, say, a homepage loads are very much there for the purpose of advertising; we've all seen the 'close this ad' link.
One instance where they can be helpful from a user-experience point of view is when a user initiates an action which requires feedback from a process that may take some time to respond, either because it's slow, busy, or just has a lot of processing to do.
Think of a site where you book a flight online, for example. You often get an interstitial on hitting 'find flights' because the system has to go off and ask for all the relevant flight information and then sort it for you before displaying it on your screen. If this round trip of 'request, interrogate, return, display' is likely to take longer than a normal webpage takes to transition from one to the next, a UX designer may consider an interstitial screen (or message) to let the user know something is happening, while at the same time allowing the system the time it needs to complete the request. Any screen with this sort of face time is going to get the attention of your marketing department, from a 'well, while we've got them we might as well show them something' point of view.
As a UX designer myself, I don't always prefer interstitials like this, as I'd love every system to return data immediately; but if it can't, for whatever reason, I'm very much for keeping the user in the loop as much as possible about what is happening, rather than leaving them to stare at the browser status bar until they either try again or get fed up and leave.
One final point when considering this is to set both a lower and an upper time limit on a screen like this. If you need to show an interstitial, show it for long enough that people can read and understand it, but not so long that they get fed up of waiting. As a rough guide, leave it up for at least 3-4 seconds (even if the process averages 4 seconds but finished after 1 on this occasion). Between 4 and 10 seconds, check every second to see if the process has responded (and then take the user to the next page if it has). After 10 seconds, seriously consider telling the user to either try again or that you've failed (while at the same time getting your tech team to fix what is ultimately a problem that will affect your bottom line).
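Those rules translate into a small amount of timing logic. A sketch, with the numbers taken straight from the paragraph above and the processDone check hypothetical:

```java
import java.util.function.BooleanSupplier;

// Sketch of the interstitial timing rules: show it at least ~3 s, poll once
// a second up to 10 s, then give up and tell the user.
public class InterstitialTiming {
    /** Returns true if the process responded in time, false if we gave up. */
    static boolean waitForResult(BooleanSupplier processDone) throws InterruptedException {
        long minShowMs = 3_000, maxWaitMs = 10_000, pollMs = 1_000;
        long start = System.currentTimeMillis();
        Thread.sleep(minShowMs);                         // even a 1 s result stays up 3 s
        while (System.currentTimeMillis() - start < maxWaitMs) {
            if (processDone.getAsBoolean()) return true; // go to the next page
            Thread.sleep(pollMs);                        // check again in a second
        }
        return false;                                    // after 10 s: apologise / retry
    }
}
```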
I believe the vast majority of interstitial pages are there to run advertising.
