CANoe: use 2 LDFs - CAPL

Is it possible to use two LDFs in the same CANoe configuration?
Right now, when I try to add the second LDF, I get this error message:

From my experience you can only use one LDF per LIN bus; if you have a second bus, that bus can have another LDF.
Maybe consider combining the two LDF files into one before importing? Since LIN uses a single master that controls all the traffic, there should be no need for two different description files on one bus.

Related

Windows Performance Monitor: monitor a process by using "command line" instead of "name"

I need to track a process using Performance Monitor, which is normally pretty simple.
The problem is that I have several processes with the same name, so I don't know which instance I need.
For example, take a look at the picture: I have two SQL Server processes, and let's say I want to track the first one. How can I do that?
From what I can see, the only difference between them is the command line, so it would be great if I could select the one I need by its command line instead of its name.
Is that possible? Any ideas?
[screenshot: Performance Monitor process list showing two SQL Server processes]
Aside from the process name, there is also the process identification number (PID), which is unique for each process. It is listed in Task Manager under the Details tab.
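If you first need to figure out which PID belongs to which command line, a small script can do the lookup. Here is a minimal sketch using Python's psutil package (an assumption; any process-listing tool would do), matching on a substring of the command line:

```python
import psutil

# Hypothetical marker that distinguishes the instance you want;
# adjust it to whatever differs between the two command lines.
needle = "MSSQLSERVER"

for proc in psutil.process_iter(["pid", "name", "cmdline"]):
    cmdline = " ".join(proc.info["cmdline"] or [])
    if needle in cmdline:
        print(proc.info["pid"], proc.info["name"], cmdline)
```

Once you have the PID, you can pick the matching instance in Performance Monitor.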

How to implement the equivalent of the Aggregator EIP in Nifi

I'm very experienced with Apache Camel and EIPs and am struggling to understand how to implement equivalents in Nifi. I understand that Nifi uses a different paradigm (flow based programming) but I don't think what I'm trying to do is unreasonable.
In a nutshell, I want the contents of each file to be sent to many REST services, and I want to aggregate the responses into a single document which will be stored in Elasticsearch. I might also do some further processing and cleanup to improve what is stored (but this isn't my immediate issue).
The screenshot is a quick mock-up of what I'm trying to achieve but I don't understand enough about Nifi to know how to implement this pattern correctly.
If you are going to take a single piece of data and then fork to multiple parts of the flow and then converge back, there needs to be a way for MergeContent to know which pieces go together.
There are generally two ways this can be done...
The first is using MergeContent in "defragment mode". Think of this as reversing a split operation that was performed by one of the split processors like SplitText. For example, you split a file of 100 lines into 100 flow files of 1 line each, then do some stuff to each one, then want to converge back. The split processors produce a standard set of split attributes (described in the docs of the processors) and the defragment mode knows how to bin the splits accordingly and merge them back together. This probably doesn't apply to your example since you didn't start with a split processor.
The second approach is the "Correlation Attribute" in MergeContent. This tells MergeContent to only merge flow files that have the same value for the specified attribute. In your example, when a file gets picked up by GetFile and sent to 3 InvokeHttp processors, 3 flow files are created, and they should all have their "filename" attribute set to the name of the file picked up from disk. So telling MergeContent to correlate on filename should do the trick; you would probably also set the minimum and maximum number of entries to the number you expect (3 here), and a maximum bin age in case one of them fails or hangs.
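To make the correlation idea concrete, here is the same fork/aggregate pattern sketched in plain Python with the requests library (the endpoint URLs are hypothetical; in NiFi itself no code is needed, just the MergeContent settings described above):

```python
import requests

# Hypothetical REST services the file contents are fanned out to,
# standing in for the three InvokeHttp processors.
ENDPOINTS = [
    "http://service-a.example.com/analyze",
    "http://service-b.example.com/analyze",
    "http://service-c.example.com/analyze",
]

def fork_and_aggregate(filename, content):
    # Fan out: one request per service, all implicitly tagged with
    # the same correlation key (the filename).
    responses = [requests.post(url, data=content).text for url in ENDPOINTS]
    # Aggregate: merge the responses for this key into one document,
    # which is what MergeContent does once its bin is full.
    return {"filename": filename, "responses": responses}
```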

Assigning a chunk of data from one IP core's output to the next IP core's input

I have a set of 16 data values stored in a BRAM IP core. I have to fetch the first 4 at a time and give them to the next IP core (say an FFT) for further processing. Once that is done, I have to feed in the next set of 4 values. Is this situation handled by a state machine? Or how else can I pass values from one IP core to the next? Please help!
BRAM is dual-ported. The simplest approach is to make a FIFO out of it, with one IP core writing and the other IP core reading.
This will not work for an FFT, though, as that requires a special addressing scheme: the data write order is different from the read order. In that case you need each IP core to generate its own address and connect it to one of the two ports of the BRAM.
In all cases you need handshaking to guarantee that reading only starts after the data has been written, of course.
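As a behavioral sketch of that handshake (plain Python, not synthesizable RTL; it only models the timing contract that the reader must wait until the writer has produced a chunk):

```python
from collections import deque

class HandshakeFifo:
    """Behavioral model of a BRAM-backed FIFO with a valid/ready handshake."""

    def __init__(self, depth):
        self.buf = deque()
        self.depth = depth

    def write(self, word):
        # Writer side: only accept a word when there is room (ready).
        if len(self.buf) >= self.depth:
            return False          # not ready; the writer must retry
        self.buf.append(word)
        return True

    def read(self):
        # Reader side: only hand out a word once data is present (valid),
        # i.e. after the producer has actually written it.
        if not self.buf:
            return None           # not valid yet; the reader must wait
        return self.buf.popleft()

fifo = HandshakeFifo(depth=4)                 # one 4-word chunk at a time
for w in range(4):
    fifo.write(w)                             # producer IP core fills a chunk
chunk = [fifo.read() for _ in range(4)]       # FFT core drains the chunk
print(chunk)
```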

Nifi: how to avoid copying files that are partially written

I am trying to use Nifi to get a file from an SFTP server. The file can potentially be big, so my question is how to avoid fetching the file while it is still being written. I am planning to use ListSFTP+FetchSFTP, but I am also okay with GetSFTP if it can avoid copying partially written files.
Thank you.
In addition to Andy's solid answer, you can also be a bit more flexible with the ListSFTP/FetchSFTP processor pair by doing some metadata-based routing.
After ListSFTP, each flow file will have attributes such as 'file.lastModifiedTime'. You can read about them here: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.3.0/org.apache.nifi.processors.standard.ListSFTP/index.html
You can put a RouteOnAttribute processor between the List and Fetch to detect objects that, at least based on the reported last-modified time, are 'too new'. You could route those to a processor that is just a slow pass-through, to intentionally wait a bit, and then run them back through the router until they are 'old enough'. This is admittedly a power-user approach, but it gives you a lot of flexibility and control. It is not foolproof: the source system may not report the last-modified time correctly, and even a correct timestamp does not prove the source file is done being written. But it gives you additional options if you cannot do the definitively correct thing Andy describes.
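The same age-based filtering can be sketched outside NiFi for clarity; a minimal example with Python's paramiko library (hypothetical host, credentials, and directory), skipping anything modified within the last 60 seconds:

```python
import time
import paramiko

MIN_AGE_SECONDS = 60  # treat anything younger than this as "too new"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("sftp.example.com", username="user", password="secret")
sftp = client.open_sftp()

for entry in sftp.listdir_attr("/incoming"):
    age = time.time() - entry.st_mtime
    if age < MIN_AGE_SECONDS:
        continue  # still "too new"; re-check on the next listing pass
    print("safe to fetch:", entry.filename)
```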
If you have control over the process which writes the file, a common pattern to solve this is to initially write the file with a specific naming structure, such as a name beginning with a dot ('.'). After the successful write operation, the file is renamed without the dot and is then picked up by the processor. Both GetSFTP and ListSFTP have a processor property called Ignore Dotted Files, which is set to true by default and means those processors will not operate on or return files beginning with the dot character.
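On the writer side, the write-then-rename pattern could look like this; a sketch with Python's paramiko again (hypothetical host and paths):

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("sftp.example.com", username="user", password="secret")
sftp = client.open_sftp()

# Upload under a dotted temporary name; GetSFTP/ListSFTP ignore it
# while "Ignore Dotted Files" is true (the default).
sftp.put("report.csv", "/incoming/.report.csv")

# Rename once the upload is complete; only now does the file become
# visible to the List/Get processors.
sftp.rename("/incoming/.report.csv", "/incoming/report.csv")
```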
There is also a Minimum File Age property you can use. The last-modified time gets updated as the file is being written, so setting this value to something other than 0 will help avoid the problem.

GStreamer: Using type find

I've got a filesrc connected to a typefind element. On the "have-type" signal I print out the capabilities. What can I do with this information? For example:
"Media type video/mpeg, systemstream=(boolean)false, mpegversion=(int)4, parsed=(boolean)false found, probability 79%"
Can I search for compatible elements or do I have to process this manually? How do I decide what the next element in the pipeline should be?
Also, please do not suggest using playbin2 - it is not suitable for my application.
Thanks!
This tells you at least what's in your file. Now you might want to connect a demuxer (chosen according to typefind's info) and use the demultiplexer's "pad-added" signal to process the media streams inside (until it emits "no-more-pads").
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/section-dynamic.html
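As a sketch of that flow in the GStreamer Python bindings (this uses the current 1.0 API; the question's playbin2/decodebin2 names are from the older 0.10 series, and the file path is hypothetical):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("typefind-demo")

src = Gst.ElementFactory.make("filesrc", None)
src.set_property("location", "/path/to/media.file")
typefind = Gst.ElementFactory.make("typefind", None)
pipeline.add(src)
pipeline.add(typefind)
src.link(typefind)

def on_have_type(element, probability, caps):
    # caps tells you the container/codec, e.g. video/mpeg; from here
    # you would pick a matching demuxer, add it, and link it.
    print("found %s, probability %d%%" % (caps.to_string(), probability))

typefind.connect("have-type", on_have_type)
pipeline.set_state(Gst.State.PAUSED)  # prerolling is enough for typefinding

# Wait until prerolling finishes (or an error occurs) so the signal fires.
bus = pipeline.get_bus()
bus.timed_pop_filtered(5 * Gst.SECOND,
                       Gst.MessageType.ASYNC_DONE | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```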
uridecodebin, playbin2, and decodebin2 are all auto-pluggers: you just give them an input file name and they automatically create the pipeline for it.
First they take a filesrc element, open the file, and, depending on some header info, set the caps of filesrc.
Then, depending on the caps of filesrc's src pad, the next demuxer is looked up in the registry and linked, and so on.
I think you are trying to do this kind of thing in your application, so I suggest you have a look at the auto-pluggers' source code.
Start with the playbin2 code.
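If you can live with auto-plugging, decodebin does this lookup for you; a minimal sketch with the 1.0 Python bindings (decodebin2 in the 0.10 series, hypothetical file path):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=/path/to/media.file ! decodebin name=dec"
)
dec = pipeline.get_by_name("dec")

def on_pad_added(element, pad):
    # decodebin has auto-plugged the demuxer/decoders internally; each
    # decoded stream shows up here as a new pad to link downstream.
    print("new pad:", pad.get_current_caps().to_string())

dec.connect("pad-added", on_pad_added)
pipeline.set_state(Gst.State.PLAYING)

# Keep the pipeline running briefly so the pads actually appear.
bus = pipeline.get_bus()
bus.timed_pop_filtered(5 * Gst.SECOND,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```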
