I have a FlexRay PDU, called TEMP in our case, which is filled every cycle, and I would like to simulate a timeout on this PDU, but it has no update bit.
I have a panel with an enable button, which I check to decide whether the PDU should be sent out or not.
frPdu FR::TEMP_Pdu;

on preStart {
    if (#FR_namespace::TEMP_Enablebutton)
    {
        FRSetSendPDU(TEMP_Pdu);
    }
}

on frStartCycle * {
    if (#FR_namespace::TEMP_Enablebutton)
        FrUpdatePDU(TEMP_Pdu, 1, 1);
}
Whatever I set my button to, the PDU gets transmitted every cycle.
FrUpdatePDU() does not send another instance of the PDU; it just updates the next iteration with data from the frPdu object. Since we are talking about a PDU of a static frame, once the VN interface starts sending it, you cannot stop it individually by PDU control.
Frame vs. PDU
A frame is a physical FlexRay frame on the network. A PDU is a virtualization of a part (or of the entirety) of the frame payload.
Analogy: the postal truck is the FlexRay frame, and the boxes in it are the PDUs. For the postal service (the FlexRay protocol, OSI layer 1) the important unit is the truck itself; for you (the client), it is the boxes in it. You are never interested in what kind of truck delivered your goods, only in the content itself.
When you call FrUpdatePDU(), you not only start sending the PDU, you also activate its slot (non-null frames are sent).
By setting the underlying frame to a non-null frame, you ensure (in the case of a static frame) that it will be sent cyclically from that point on, automatically, regardless of what you do with the PDU (the trucks keep going even if you don't want to send boxes in them).
Solution:
I presume you do not have IL DLLs to aid you, so you are limited to the general FlexRay IL functions that the CANoe environment provides.
You need to identify the frame carrying the PDU.
You also need to manipulate the frame, with frUpdateStatFrame(), as shown below.
frframe FramefromDBCofTEMPPDU mFrame; // instance of the frame that carries TEMP_Pdu

on frStartCycle *
{
    if (#FR_namespace::TEMP_Enablebutton == 1)
    {
        mFrame.fr_flags = 0x0;   // transmit as a normal (non-null) frame
        frUpdateStatFrame(mFrame);
        TEMP_Pdu.byte(0) = 0xAA; // example payload
        FrUpdatePDU(TEMP_Pdu, 1, 1);
    }
    else
    {
        mFrame.fr_flags = 0x80;  // transmit as a null frame
        frUpdateStatFrame(mFrame);
    }
}
So you in fact need to modify the frame as well. For the frame flags, check the definition of frframe in the Help.
I don't use Vector CANoe or CANalyzer myself and there is almost no public documentation, so I can only offer some general debugging hints:
Have you checked that #FR_namespace::TEMP_Enablebutton has the expected value, i.e. that it is really false when you turn the button off? You could add a write() before the if() inside "on frStartCycle" to check this.
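For example, something like this (a minimal sketch reusing the environment variable from the question) would log the button state once per cycle:
on frStartCycle *
{
    // Print the current state of the enable button every FlexRay cycle.
    write("TEMP_Enablebutton = %d", #FR_namespace::TEMP_Enablebutton);
}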
This question looks similar: Sending Periodic CAN signals on button press using CAPL and CANalyzer. They use "on sysvar" to react to the button press; maybe you have to do something similar. Namely: react to the button press, set an environment variable, and check that variable in "on frStartCycle".
Since there is not a lot of public documentation, you can also try asking around inside your company, there might be some people around you who can help you.
Does anyone know how to detect when the user changes the current input source in OSX?
I can call TISCopyCurrentKeyboardInputSource() to find out which input source ID is being used like this:
TISInputSourceRef isource = TISCopyCurrentKeyboardInputSource();
if ( isource == NULL )
{
cerr << "Couldn't get the current input source.\n";
return -1;
}
CFStringRef id = (CFStringRef)TISGetInputSourceProperty(
isource,
kTISPropertyInputSourceID);
CFRelease(isource);
If my input source is "German", then id ends up being "com.apple.keylayout.German", which is mostly what I want. Except:
The result of TISCopyCurrentKeyboardInputSource() doesn't change once my process starts. In particular, I can call TISCopyCurrentKeyboardInputSource() in a loop and switch my input source, but it keeps returning the input source that my process started with.
I'd really like to be notified when the input source changes. Is there any way of doing this? To get a notification or an event of some kind telling me that the input source has been changed?
You can observe the NSTextInputContextKeyboardSelectionDidChangeNotification notification posted by NSTextInputContext to the default Cocoa notification center. Alternatively, you can observe the kTISNotifySelectedKeyboardInputSourceChanged notification delivered via the Core Foundation distributed notification center.
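For example, observing the distributed notification from plain C might look roughly like this (a sketch, not tested; the callback name is mine):
#include <Carbon/Carbon.h>

// Called whenever the selected keyboard input source changes.
static void inputSourceChanged(CFNotificationCenterRef center, void *observer,
                               CFStringRef name, const void *object,
                               CFDictionaryRef userInfo)
{
    TISInputSourceRef isource = TISCopyCurrentKeyboardInputSource();
    if (isource) {
        CFShow(TISGetInputSourceProperty(isource, kTISPropertyInputSourceID));
        CFRelease(isource);
    }
}

int main(void)
{
    CFNotificationCenterAddObserver(
        CFNotificationCenterGetDistributedCenter(),
        NULL,                // observer (none needed here)
        inputSourceChanged,
        kTISNotifySelectedKeyboardInputSourceChanged,
        NULL,                // object: accept any sender
        CFNotificationSuspensionBehaviorDeliverImmediately);
    CFRunLoopRun();          // the run loop must run for notifications to arrive
    return 0;
}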
However, any such change starts in a system process external to your app. The system then notifies the frameworks in each app process. The frameworks can only receive such notifications when they are allowed to run their event loop. Likewise, if you're observing the distributed notification yourself, that can only happen when the event loop (or at least the main thread's run loop) is allowed to run.
So, that explains why running a loop which repeatedly checks the result of TISCopyCurrentKeyboardInputSource() doesn't work. You're not allowing the frameworks to monitor the channel over which they would be informed of the change. If, rather than a loop, you were to use a repeating timer with a low enough frequency that other stuff has a chance to run, and you returned control to the app's event loop, you would see the result of TISCopyCurrentKeyboardInputSource() changing.
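A minimal sketch of that timer approach (again untested; the callback name is mine):
#include <Carbon/Carbon.h>

static void checkInputSource(CFRunLoopTimerRef timer, void *info)
{
    TISInputSourceRef isource = TISCopyCurrentKeyboardInputSource();
    if (isource) {
        CFShow(TISGetInputSourceProperty(isource, kTISPropertyInputSourceID));
        CFRelease(isource);
    }
}

int main(void)
{
    // Poll twice a second; control returns to the run loop between ticks,
    // which gives the frameworks a chance to process the change.
    CFRunLoopTimerRef timer = CFRunLoopTimerCreate(
        kCFAllocatorDefault, CFAbsoluteTimeGetCurrent(), 0.5, 0, 0,
        checkInputSource, NULL);
    CFRunLoopAddTimer(CFRunLoopGetCurrent(), timer, kCFRunLoopDefaultMode);
    CFRunLoopRun();
    return 0;
}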
(
SynthDef(\testEvt, {
    arg out, gate = 1;
    var sint = Blip.ar(440) * Linen.kr(gate, doneAction: 2, releaseTime: 0.8);
    Out.ar(out, Pan2.ar(sint, 0));
}).add();
Synth(\testEvt);
(instrument: \testEvt, freq: 220, sustain: inf).play;
(instrument: \testEvt, freq: 220).play;
)
Executing the first and the second line after the SynthDef creates a synth which plays forever, whereas the third line's synth plays for 0.8 seconds, as per the default sustain value of the generated event.
The problem is that I don't use 'sustain' anywhere in my SynthDef, yet it is used automatically just because there is a Linen.
The same thing doesn't happen for freq: both events play at 440 Hz and not at 220, and that's just because the SynthDef doesn't use 'freq' as an argument. So why doesn't sustain follow the same rule?
Also, is there a way to reference the synths created by an event? So that, when they have sustain: inf as an argument, I can free them at a later time.
(instrument: \testEvt, freq:220, sustain: inf).play;
and
(instrument: \testEvt,freq:220).play;
are events. Events handle a lot of things for you. One thing they do is calculate when to set a gate to 0. (Remember that gate is one of the arguments in your SynthDef.) In the first example, because you sustain for an infinite duration, the gate never goes to zero. In the second example, it uses the default duration and sets the gate to zero after that duration has passed. You can find out which keywords are used as event environment variables by looking at the Event.sc source file. If you search for sustain, you'll find the other keywords it uses for timing. One of these is dur. Try this:
(instrument: \testEvt, dur:3).play
freq is also a keyword for events, but since you have no freq argument, it can't affect your SynthDef. If you want to set the freq, you'll need to make a change:
SynthDef(\testEvt, {
    arg out, gate = 1, freq = 440;
    var sint = Blip.ar(freq) * Linen.kr(gate, doneAction: 2, releaseTime: 0.8);
    Out.ar(out, Pan2.ar(sint, 0));
}).add();
For contrast between events and controlling a synth directly, try:
a = Synth.new(\testEvt, [\out, 0, \gate, 1])
You can add in any other arguments you want, like freq or sustain, but they have no effect because those aren't arguments to your SynthDef. And, unlike an Event, a Synth doesn't do any calculations on your behalf. When you want the note to end, set the gate to 0 yourself:
a.set(\gate, 0)
It's good to be aware of event environment variables because they're also used by Pbinds, and by using them you can affect other things. If you had a SynthDef that used sustain as an argument for something else, you could be surprised by it changing your durations.
Regarding your last sub-question,
Also, is there a way to reference synths created by an event ? So that, when they have sustain: inf as argument, I can free them on a later time.
Yes, by "indexing" the Event by \id key. This actually returns an array of node ids because an Event with \strum can fire up more than one node/synth. Also, the \id value is nil while the event is not playing. But this indexing method is fairly unnecessary for what you want, because...
You can end the (associated) synth by ending the Event early with release, just like for the Synth itself. What this does is basically gate-out its internal synth. (In your example, this release call transitions to the release point of the ASR envelope generated by Linen, by lowering gate to 0.). And, of course, use a variable to save the "reference" to the synth and/or event, if don't plan to release it right away in a program (which would produce no sound with a gated envelope).
Basically
fork { var x = Synth(\testEvt); 2.wait; x.release }
does the same as
fork { var e = (instrument: \testEvt, sustain: inf).play; 2.wait; e.release }
except there's one layer of indirection in the latter case for the release. The first example is also equivalent to
fork { var x = Synth(\testEvt); 2.wait; x.set(\gate, 0); }
which does the work of release explicitly. Event also supports set and it passes the value to the corresponding Synth control (if the latter was properly added on the server.)
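For example (a small sketch, assuming the freq-enabled SynthDef from above):
fork { var e = (instrument: \testEvt, sustain: inf).play; 1.wait; e.set(\freq, 660); 1.wait; e.release }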
Now the complicated method you asked about (retrieving node ids for the event and sending them messages) is possible too... although hardly necessary:
fork { var e = (instrument: \testEvt, sustain: inf).play; 2.wait;
e[\id].do({ arg n; s.sendMsg("/n_set", n, "gate", 0); }) }
By the way, you can't use wait outside of a Routine; that's why fork was needed in the above examples. Interactively, in the editor, you can "wait manually", of course, before calling release on either the Synth or the Event.
As a somewhat subtle point of how envelope gating works, the envelope doesn't actually start playing (technically, begin transitioning to the endpoint of its first [attack] segment) until you set gate to 1. I.e. you can delay the (envelope) start as in:
fork { x = Synth(\testEvt, [\gate, 0]); 3.wait; x.set(\gate, 1); 2.wait; x.release }
Beware that the default Event.play doesn't generate this 0 to 1 gate transition though, i.e. you can't rely on it to fire your synth's envelope if you set the initial gate value to zero in your SynthDef.
Also, I'm assuming that by "free" you mean "stop playing" rather than "free their memory on the server". There's no need to manually free those (event) synths in the latter sense since they have doneAction:2 in the envelope, which does that for you once they are released and the final segment of the envelope finishes playing. If you somehow want to kill the synth right away (like Ctrl+. does) instead of triggering its fade-out you can replace the message sent in the inner function of the "complicated" example (above) with s.sendMsg("/n_free", n). Or much more simply
fork { var e = (instrument: \testEvt, sustain: inf).play; 2.wait; e.free }
Also, if you wonder about \strum, an example is:
e = (instrument: \testEvt, sustain: inf, strum: 1, out: #[0, 0]).play
Now e[\id] is an array of two nodes. Event is a bit cheeky in that it will only create multiple nodes for arrays passed to actual Synth controls rather than random fields, so "strumming" \freq (or its precursors like \degree etc.) only creates multiple nodes if your SynthDesc has a freq control.
Alas the "complicated" method is almost useless when it comes to playing Pbinds (patterns). This is because Pbind.play returns and EventStreamPlayer... that alas makes a private copy of the prototype event being played and plays that private copy, which is inaccessible to the caller context (unless you hack EventStreamPlayer.prNext). Confusingly EventStreamPlayer has an accessible event variable, but that's only the "prototype", not the private copy event being played... So if p is an instance of an EventStreamPlayer then p.event[\id] is always nil (or whatever you set it to beforehand) even while playing. Since one seldom plays Events individually and much more often patterns...
Simply as a hacking exercise tough, it turns out there is an even more convoluted way to access the ids of nodes that EventStreamPlayer fires... This relies on overriding the default Event play which thankfully can be extended outside class inheritance because the default is conveniently saved in a class dictionary...
(p = Pbind(\instrument, \testEvt, \sustain, Pseq([1, 2]), \play, {
    arg tempo, srv;
    var rv;
    "playhack".postln;
    rv = Event.parentEvents[\default][\play].value(tempo, srv);
    ~id.postln;
    rv;
}).play)
In general, however, patterns are clearly not designed to be used this way, i.e. by hacking "a layer below" to get to the node IDs. As "proof", while the above works well enough with Pbind (which uses the default Event type \note), it doesn't work reliably with Pmono, which doesn't set the Event \id on its first note (Event type \monoNote) but only on subsequent notes (which generate a different Event type, \monoSet). Pmono keeps an internal copy of the node ID, but this is completely inaccessible on the first mono note; it only copies it to the Events of the subsequent notes, for some reason (perhaps a bug, but it could be "by design"). Also, if you use Pdef, which extends Event with type \phrase... the above hack doesn't work at all, i.e. \id is never set by type \phrase; perhaps you can somehow get to the underlying sub-events generated... I haven't bothered to investigate further.
The SC documentation (in the pattern guide) even says at one point
Remember that streams made from patterns don't expose their internals. That means you can't adjust the parameters of an effect synth directly, because you have no way to find out what its node ID is.
That's not entirely correct given the above hack, but it is true in some contexts.
I'm writing a program that has an X11/Xlib interface, and my event processing loop looks like this:
while (XNextEvent(display, &ev) >= 0) {
    switch (ev.type) {
        // Process events
    }
}
The problem is that when the window is resized, I get a bunch of Expose events telling me which parts of the window to redraw. If I redraw them in direct response to the events, the redraw operation lags terribly because it is so slow (after resizing I get to see all the newly invalidated rectangles refresh one by one).
What I would like to do is to record the updated window size as it changes, and only run one redraw operation on the entire window (or at least only two rectangles) when there are no more events left to process.
Unfortunately I can't see a way to do this. I tried this:
do {
    XPeekEvent(display, &ev);
    while (XCheckMaskEvent(display, ExposureMask | StructureNotifyMask, &ev)) {
        switch (ev.type) {
            // Process events, record but don't process redraw events
        }
    }
    // No more events, do combined redraw here
} while (1);
This does actually work, but it's a little inefficient, and if an event arrives that I am not interested in, the XCheckMaskEvent call doesn't remove it from the queue, so it stays there, stopping XPeekEvent from blocking and resulting in 100% CPU use.
I was just wondering whether there is a standard way to achieve the delayed/combined redraw that I am after. Many of the Xlib event-processing functions block, so they're not really suitable when you want to do some processing just before they block, and only if they would actually block!
EDIT: For the record, this is the solution I used. It's a simplified version of n.m.'s:
while (XNextEvent(display, &ev) >= 0) {
    switch (ev.type) {
        // Process events, remember any redraws needed later
    }
    if (!XPending(display)) {
        // No more events, redraw if needed
    }
}
FWIW a UI toolkit such as GTK+ does it this way:
for each window, maintains a "damage region" (union of all expose events)
when the damage region becomes non-empty, adds an "idle handler" which is a function the event loop will run when it doesn't have anything else to do
the idle handler will run when the event queue is empty AND the X socket has nothing to read (according to poll() on ConnectionNumber(dpy))
the idle handler of course repaints the damage region
In GTK+, they're changing this over to a more modern 3D-engine oriented way (clean up the damage region on vertical sync) in a future version, but it's worked in the fairly simple way above for many years.
When translated to raw Xlib, this looks about like n.m.'s answer: repaint when you have a damage region and !XPending(). So feel free to accept that answer; I just figured I'd add a little extra info.
If you want things like timers and idles, you could consider something like libev (http://software.schmorp.de/pkg/libev.html); it's designed so you can just drop a couple of source files into your app (it isn't set up to be an external dependency). You would add the display's file descriptor to the event loop.
For tracking damage regions, people often cut and paste the file "miregion.c", which is from the "machine independent" code in the X server. Just google for miregion.c or download the X server sources and look for it. A "region" here is simply a list of rectangles that supports operations such as union and intersect. To add damage, union it with the old region; to repair damage, subtract it; and so on.
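Translated to bare Xlib, that idle-handler scheme might look roughly like this (a sketch; damage_empty() and repaint_damage() are hypothetical stand-ins for your region bookkeeping):
#include <poll.h>
#include <X11/Xlib.h>

int  damage_empty(void);     /* hypothetical region helpers */
void repaint_damage(void);

void event_loop(Display *dpy)
{
    struct pollfd pfd = { ConnectionNumber(dpy), POLLIN, 0 };
    XEvent ev;

    for (;;) {
        /* Drain everything that is already queued or buffered. */
        while (XPending(dpy)) {
            XNextEvent(dpy, &ev);
            /* Process ev; on Expose, union the exposed rectangle into
               the damage region instead of painting immediately. */
        }
        /* Queue empty: run the "idle handler" now. */
        if (!damage_empty())
            repaint_damage();
        /* Sleep until the X socket has something to read. */
        poll(&pfd, 1, -1);
    }
}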
Try something like the following (not actually tested):
while (TRUE) {
    if (XPending(display) || !pendingRedraws) {
        // If an event is pending, fetch it and process it.
        // Otherwise, we have neither events nor pending redraws, so we can
        // safely block on the event queue.
        XNextEvent(display, &ev);
        if (isExposeEvent(&ev)) {
            pendingRedraws = TRUE;
        }
        else {
            processEvent(&ev);
        }
    }
    else {
        // We must have a pending redraw.
        redraw();
        pendingRedraws = FALSE;
    }
}
It could be beneficial to wait for 10 ms or so before doing the redraw. Unfortunately, raw Xlib has no interface for timers. You need a higher-level toolkit for that (all toolkits, including Xt, have some kind of timer interface), or you can work directly with the underlying socket of the X11 connection.
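If you do want such a delay with plain Xlib, one option is to wait on the connection's socket yourself with select() and a timeout (a sketch of the idea, not from the answer above):
#include <sys/select.h>
#include <X11/Xlib.h>

/* Returns 1 if an event is (or becomes) available, 0 if the timeout
   expired first -- the cue to perform the combined redraw. */
int wait_event_or_timeout(Display *dpy, int timeout_ms)
{
    fd_set fds;
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };

    if (XPending(dpy))
        return 1;            /* something is already buffered locally */
    FD_ZERO(&fds);
    FD_SET(ConnectionNumber(dpy), &fds);
    return select(ConnectionNumber(dpy) + 1, &fds, NULL, NULL, &tv) > 0;
}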
In a previous SO question it was recommended to me to use callback/event firing instead of polling. Can someone explain this in a little more detail, perhaps with references to online tutorials that show how this can be done for Java based web apps.
Thanks.
The definition of a callback from Wikipedia is:
In computer programming, a callback is executable code that is passed as an argument to other code. It allows a lower-level software layer to call a subroutine (or function) defined in a higher-level layer.
In its very basic form, a callback could be used like this (pseudocode):
void function Foo()
{
    MessageBox.Show("Operation Complete");
}

void function Bar(Method myCallback)
{
    // Perform some operation.
    // When completed, execute the callback method.
    myCallback.Invoke();
}

static int Main()
{
    Bar(Foo); // Pops a message box when Bar is completed.
}
Modern languages like Java and C# have a standardized way of doing this, and they call it events. An event is simply a special type of property added to a class that contains a list of delegates / method pointers / callbacks (all three of these are the same thing). When the event is "fired", it simply iterates through its list of callbacks and executes them. These are also referred to as listeners.
Here's an example
public class Button
{
    public event Clicked;

    void override OnMouseUp()
    {
        // User has clicked on the button. Let's notify anyone listening to this event.
        Clicked(); // Iterates through all the callbacks in its list and calls Invoke();
    }
}

public class MyForm
{
    private Button _Button;

    public Constructor()
    {
        _Button = new Button();
        // Different languages provide different ways of registering listeners to events:
        // _Button.Clicked += Button_Clicked_Handler;
        // _Button.Clicked.AddListener(Button_Clicked_Handler);
    }

    public void Button_Clicked_Handler()
    {
        MessageBox.Show("Button Was Clicked");
    }
}
In this example, the Button class has an event called Clicked. It allows anyone who wants to be notified when the button is clicked to register a callback method. In this case, the "Button_Clicked_Handler" method would be executed by the Clicked event.
Eventing/Callback architecture is very handy whenever you need to be notified that something has occurred elsewhere in the program and you have no direct knowledge of when or how this happens.
This greatly simplifies notification. Polling makes it much more difficult because you are responsible for checking, every so often, whether or not an operation has completed. A simple polling mechanism would look like this:
static void CheckIfDone()
{
    while (!Button.IsClicked)
    {
        // Sleep
    }
    // Button has been clicked.
}
The problem is that this particular approach would block your existing thread, which would have to keep checking until Button.IsClicked is true. The nice thing about an eventing architecture is that it is asynchronous and lets the acting item (the button) notify the listener when it is completed, instead of the listener having to keep checking.
The difference between polling and callback/event is simple:
Polling: you are asking, continuously or at some fixed interval, whether some condition is met; for example, whether some keyboard key has been pressed.
Callback: you say to some driver, other code or whatever: when something happens (the keyboard has been pressed, in our example), call this function, and you pass it the function you want to be called when the event happens. This way you can "forget" about that event, knowing that it will be handled correctly when it happens.
A callback is when you pass a function/object to be called/notified when something it cares about happens. This is used a lot in UIs: a function is passed to a button and is called whenever the button is pressed, for example.
There are two players involved in this scenario. First you have the "observed", which from time to time does things in which other players are interested. These other players are called "observers". The "observed" could be a timer; the "observers" could be tasks interested in alarm events.
This "pattern" is described in the book "Design Patterns, Elements of Reusable Object-Oriented Software" by Gamma, Helm, Johnson and Vlissides.
Two examples:
The SAX parser for XML walks through an XML file and raises events each time an element is encountered. A listener can listen to these elements and do something with them.
Swing and AWT are based on this pattern. When the user moves the mouse, clicks or types something on the keyboard, these actions are converted into events. The UI components listen to these events and react to them.
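A minimal Java sketch of this observer pattern, with a timer as the "observed" (all names here are illustrative, not from any particular library):
import java.util.ArrayList;
import java.util.List;

interface AlarmListener {
    void onAlarm(long when); // the callback
}

class AlarmClock {
    private final List<AlarmListener> listeners = new ArrayList<>();

    void addAlarmListener(AlarmListener l) {
        listeners.add(l);
    }

    // The "observed" fires the event: every registered observer is
    // notified, instead of each of them polling for the alarm time.
    void fireAlarm() {
        long now = System.currentTimeMillis();
        for (AlarmListener l : listeners) {
            l.onAlarm(now);
        }
    }
}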
Being notified via an event is almost always preferable to polling, especially if hardware is involved and the event originates from a driver issuing a CPU interrupt. In that case you're not using any CPU at all while you wait for some piece of hardware to complete a task.
This is a Windows Forms application. I have a function which captures some mouse events modally till a condition is met. For example, I would like to wait for the user to select a point in the window's client area (or optionally cancel the operation using the Escape key) before the function returns. I am using the following structure:
Application::AddMessageFilter(someFilter);
while (!someFilter->HasUserSelectedAPoint_Or_HitEscapeKey()) {
    Application::DoEvents();
}
Application::RemoveMessageFilter(someFilter);
This works quite nicely except for taking up nearly 100% CPU usage when control enters the while loop. I am looking for an alternative similar to what is shown below:
Application::AddMessageFilter(someFilter);
while (!someFilter->HasUserSelectedAPoint_Or_HitEscapeKey()) {
    // Assuming that ManagedGetMessage() below is a blocking
    // call which yields control to the OS
    if (ManagedGetMessage())
        Application::DoEvents();
}
Application::RemoveMessageFilter(someFilter);
What is the right way to use IMessageFilter and DoEvents? How do I surrender control to the OS till a message is received? Any GetMessage equivalent in the managed world?
You could sleep the thread for 500ms or so between DoEvents() calls. Experiment with different values to see what feels right.
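In code, that suggestion amounts to something like this (a sketch based on the loop from the question):
Application::AddMessageFilter(someFilter);
while (!someFilter->HasUserSelectedAPoint_Or_HitEscapeKey()) {
    Application::DoEvents();
    // Yield the CPU between pumps so the loop doesn't spin at 100%;
    // tune the interval (e.g. 50-500 ms) to taste.
    System::Threading::Thread::Sleep(50);
}
Application::RemoveMessageFilter(someFilter);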