How to synchronise AUTOSAR RTE write and read values between runnables of different tasks?

I am new to AUTOSAR and I would like to call an RTE function to get a specific value from one SWC to another SWC. The Rte_Write is performed by a runnable of SWC 1 in a 10 msec task and the Rte_Read is performed by a runnable of SWC 2 in a 15 msec task. I am using a sender-receiver interface to implement this concept.
SWC 1:
Task_10msec()
{
    /* e.g. */
    int val = 0;
    val = val + 1;
    Rte_write_test_port(val);
}
SWC 2:
Task_15msec()
{
    /* e.g. */
    int val = 0;
    Rte_read_test_port(&val);
}
The problem I am facing is that the Rte_Read value is not in sync with the Rte_Write value because of the runnable timing (SWC 1 runs every 10 msec and SWC 2 every 15 msec). I would like to know: is there any way to design the interface, or any other approach, so that SWC 2 reads the exact value written by SWC 1?

You could try to add a QueuedReceiverComSpec on the receiver port and set the queueLength to e.g. 2. You should then use the Rte_Receive API instead of Rte_Read and read until it returns RTE_E_NO_DATA in order to get all values provided by the other component.
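For illustration, a minimal sketch of what the receiving runnable could look like with a queued port. The port and data element names (TestPort, TestElement) and the sint32 type are assumptions; the generated API name depends on your actual port configuration:

#include "Rte_SWC2.h"   /* generated RTE header of the receiving SWC (name assumed) */

void Task_15msec(void)
{
    sint32 val;

    /* Drain the queue so that no value written by the 10 msec sender is lost.
       The loop ends when the RTE returns RTE_E_NO_DATA (queue empty). */
    while (Rte_Receive_TestPort_TestElement(&val) == RTE_E_OK) {
        /* ... use val ... */
    }
}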

What do you want to achieve?
Shall the receiver only get the latest value written by the sender? Then your solution is already sufficient.
Shall the receiver get all values that the sender wrote to the interface? Then you need to introduce a queue on the receiver port. In the receiver runnable, you can then read all elements in the queue until it is empty.
For more details, check the AUTOSAR RTE specification (chapter 4.3.1).

Related

How to record custom performance metrics with GCP Monitoring from python3 apps

I'm working on a decorator that can be added to Python methods to send a metric to GCP Monitoring. The approach is confirmed, but the API calls to push the metrics fail if I attempt to send more than one observation. The pattern is to collect metrics and flush them after the process finishes, to keep it simple for this test. The code that captures the metric inline is here:
def append(self, value):
    now = time.time()
    seconds = int(now)
    nanos = int((now - seconds) * 10 ** 9)
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": seconds, "nanos": nanos}}
    )
    point = monitoring_v3.Point({
        "interval": interval,
        "value": {"double_value": value}
    })
    self.samples[self.name].append(point)
The code below takes the batch of data points in the PerfMetric.samples dict, where each key points to a list of monitoring_v3.Point objects appended by the method above (invoked via a decorator not shown here), and pushes them with the create_time_series RPC of the MetricServiceClient class. We pass a list of points per series, so perhaps that's not right, or somehow our metadata isn't right in append?
@staticmethod
def flush():
    client = monitoring_v3.MetricServiceClient()
    for x in PerfMetric.samples:
        print('{} has {} points'.format(x, len(PerfMetric.samples[x])))
        series = monitoring_v3.TimeSeries()
        series.metric.type = 'custom.googleapis.com/perf/{}'.format(x)
        series.resource.type = "global"
        series.points = PerfMetric.samples[x]
        client.create_time_series(request={
            "name": PerfMetric.project_name,
            "time_series": [series]
        })
Thanks in advance for any suggestions!
I believe this is a documented limitation of the Cloud Monitoring API regarding the points[] field of the TimeSeries objects passed to this call:
When creating a time series, this field must contain exactly one point and the point's type must be the same as the value type of the associated metric.
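In other words, each TimeSeries in a create_time_series request may carry only one point. A minimal sketch of how flush() could be restructured under that constraint, reusing the PerfMetric fields from the question (not a definitive implementation; very frequent writes to the same series can still be rejected by the API's rate limits):

from google.cloud import monitoring_v3

def flush():
    client = monitoring_v3.MetricServiceClient()
    for name, points in PerfMetric.samples.items():
        for point in points:
            series = monitoring_v3.TimeSeries()
            series.metric.type = 'custom.googleapis.com/perf/{}'.format(name)
            series.resource.type = "global"
            series.points = [point]  # exactly one point per TimeSeries
            client.create_time_series(request={
                "name": PerfMetric.project_name,
                "time_series": [series]
            })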

How to run load test Scenario once every hour?

I have a load test with several scenarios, running for 12 hours.
I want to add another scenario that will run once an hour, with 10 virtual users.
The ugly solution I'm using is to have 12 additional scenarios, each with its own "delayed start", at 1-hour intervals.
How can I tell a specific scenario to run once an hour?
Note: for this case I don't need it to run sharp each hour. The main idea is to have a task that runs roughly every hour.
I suggest having a load test with two scenarios: one for the main user load, the other for the hourly 10-user case. Then arrange that the number of virtual users (VUs) for the 10-user scenario is set to 10 at the start of every hour and reduced as appropriate. The question does not state how long the 10-user test runs each hour.
The basic way of achieving this is to modify m_loadTest.Scenarios[N].CurrentLoad, for a suitable N, in a load test heartbeat plugin. The heartbeat is called, as its name suggests, frequently during the test. So arrange that it checks the run time of the test, assigns m_loadTest.Scenarios[N].CurrentLoad = 10 at the start of each hour, and a short time later sets it back to 0 (i.e. zero). I believe that setting the value to a smaller value than its previous value allows the individual test executions by a VU to run to their natural end, but the VUs will not start new tests that would exceed the value.
The plugin code could then look similar to the following (untested):
public class TenUserLoadtPlugin : ILoadTestPlugin
{
    const int durationOf10UserTestInSeconds = ...; // Not specified in question.
    const int scenarioNumber = ...; // Experiment to determine this.

    public void Initialize(LoadTest loadTest)
    {
        m_loadTest = loadTest;

        // Register to listen for the heartbeat event.
        loadTest.Heartbeat += new EventHandler<HeartbeatEventArgs>(loadTest_Heartbeat);
    }

    void loadTest_Heartbeat(object sender, HeartbeatEventArgs e)
    {
        int secondsWithinCurrentHour = e.ElapsedSeconds % (60 * 60);
        int loadWanted = secondsWithinCurrentHour > durationOf10UserTestInSeconds ? 0 : 10;
        m_loadTest.Scenarios[scenarioNumber].CurrentLoad = loadWanted;
    }

    LoadTest m_loadTest;
}
There are several web pages about variations on this topic; search for terms such as "Visual Studio custom load patterns" for examples.

How to design a Spring Batch job which fetches records from the DB in batches and runs multiple processors and writers in parallel

Scenario: Read records from the DB and create 4 different output files from them.
Tech Stack:
Spring Boot 2.x
Spring Batch 4.2.x
ArangoDB 3.6.x
Current Approach: a Spring Batch job which has the below steps in sequence:
jobBuilderFactory.get("alljobs")
.start(step("readAllData")) //reads all records from db, stores it in Obj1 (R1)
.next(step("processData1")) //(P1)
.next(step("writer1")) // writes it to file1(W1)
.next(step("reader2")) // reads the same obj1(R2)
.next(step("processor2")) // processes it (P2)
.next(step("writer2")) // writes it to file1(W2)
.next(step("reader3")) // reads the same obj1 (R3)
.next(step("processor3")) // processes it (P3)
.next(step("writer3")) // writes it to file1(W3)
.next(step("reader4")) // reads the same obj1(R4)
.next(step("processor4")) // processes it (P4)
.next(step("writer4")) // writes it to file1 (W4)
.build()
Problem: Since the volume of data coming from the DB is huge (> 200,000 records), we are now fetching the records via a cursor in batches of 10,000 records.
Target state of the job: a reader which fetches the records from the DB via a cursor in batches of 1,000 records.
For each batch of 1,000 records I have to run the processor and writer.
Also, since the data set will be the same for the other 3 processors and writers (Obj1, which is fetched from the cursor), they should be triggered in parallel.
Reader1() {
    while (cursor.hasNext()) {
        Obj1 = cursor.next();
        a) P1(Obj1); | c) R2(Obj1); | c) R3(Obj1); | c) R4(Obj1); ||
        b) W1(Obj1); | d) P2(Obj1); | d) P3(Obj1); | d) P4(Obj1); ||  All of these running in parallel.
                     | e) W2(Obj1); | e) W3(Obj1); | e) W4(Obj1); ||
    }
}
Below are the approaches that came to my mind:
Invoke the job inside the cursor itself and execute all steps P1...W4 inside the cursor, iteration by iteration.
Invoke a job whose first step is Reader1, and then inside the cursor invoke another sub-job which has all of P1...W4 in parallel, since we cannot go out of the cursor.
Kindly suggest the best way to implement this.
Thanks in advance.
Update:
I was trying to run the steps (P1...W4) in a loop inside my Reader1 step, but I am stuck with the implementation, as everything here is written as a Step and I am not sure how to call multiple steps inside the R1 step in a loop. I tried using a Decider, putting P1...W4 in a Flow (flow):
flowbuilder.start(step("R1"))
.next(decider())
.on(COMPLETED).end()
.from(decider())
.on(CONTINUE)
.flow(flow)
job.start(flow)
.next(flow).on("CONTINUE").to(endJob()).on("FINISHED").end()
.end()
.build()
But I am not able to go back to the next cursor iteration, since the cursor iteration happens in the R1 step only.
I also tried to put all steps R1...W4 (including Reader1) in the same flow, but the flow ended up throwing a cyclic-flow error.
Kindly suggest a better way to implement this. How can the other steps be called in parallel while the cursor is iterating in the R1 step?
I believe using 4 parallel steps is a good option for you. Even if you have 4 threads reading the same data, you should benefit from parallel steps during the processing/writing phases. This should definitely perform better than 4 steps in sequence. BTW, 200k records is not that much (of course it depends on the record size and how it is mapped, but I think this should be OK; reading data is rarely the bottleneck here).
It's always about trade-offs. Here I'm trading a bit of read duplication for better overall throughput thanks to parallel steps; I would not kill myself to make sure items are read only once and complicate things.
A good analogy of such a trade-off in the database world is accepting some data duplication in favour of faster queries (think of NoSQL design, where it is sometimes recommended to duplicate some data to avoid expensive joins). A sketch of such a parallel-step setup follows.
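For illustration, a minimal sketch (Spring Batch 4.x Java config) of four steps running in parallel via a flow split. The bean and step names are illustrative, and step1...step4 are assumed to be chunk-oriented steps defined elsewhere, each with its own reader, processor, and file writer:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.job.builder.FlowBuilder;
import org.springframework.batch.core.job.flow.Flow;
import org.springframework.batch.core.job.flow.support.SimpleFlow;
import org.springframework.context.annotation.Bean;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

@Bean
public Job parallelExportJob(JobBuilderFactory jobBuilderFactory,
                             Step step1, Step step2, Step step3, Step step4) {
    // Wrap each step in its own flow and run the four flows on separate threads.
    Flow parallelFlow = new FlowBuilder<SimpleFlow>("parallelFlow")
            .split(new SimpleAsyncTaskExecutor())
            .add(new FlowBuilder<SimpleFlow>("f1").start(step1).build(),
                 new FlowBuilder<SimpleFlow>("f2").start(step2).build(),
                 new FlowBuilder<SimpleFlow>("f3").start(step3).build(),
                 new FlowBuilder<SimpleFlow>("f4").start(step4).build())
            .build();

    return jobBuilderFactory.get("parallelExportJob")
            .start(parallelFlow)
            .build()    // builds the FlowJobBuilder
            .build();   // builds the Job
}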
This is how I finally designed the solution:
I re-framed the whole flow from a tasklet-based approach to an orchestrated chunk-based approach.
The job's main step is called fetchProcessAndWriteData:
jobBuilderFactory.get("allChunkJob")
.start(step("fetchProcessAndWriteData"))
.next(step("updatePostJobRunDetails"))
.build()
fetchProcessAndWriteData: has a reader, a masterProcessor and a masterWriter, with a chunk size of 10,000.
steps.get("fetchProcessAndWriteData")
    .chunk(BATCHSIZE)
    .reader(chunkReader)
    .processor(masterProcessor)
    .writer(masterWriter)
    .listener(listener())
    .build()
chunkReader: reads data in chunks from the database cursor and passes it on to the masterProcessor.
masterProcessor: accepts the data one item at a time, passes each record to all the other processors (P1, P2, P3, P4) and stores the processed data in a compositeResultBean.
CompositeResultBean consists of data holders for all 4 types of records:
List<Record> recordType1
List<Record> recordType2
List<Record> recordType3
List<Record> recordType4
This bean is then returned from the process method of the masterProcessor.
public Object process(Object item) {
    ...
    bean.setRecordType1(P1.process(item));
    bean.setRecordType2(P2.process(item));
    bean.setRecordType3(P3.process(item));
    bean.setRecordType4(P4.process(item));
    return bean;
}
masterWriter: accepts a list of records, i.e. a list of compositeResultBean here. It iterates over the list of beans and calls the respective writers' (W1, W2, W3, W4) write() method with the data held in each compositeResultBean attribute.
public void write(List list) {
    list.forEach(record -> {
        W1.write(isInitialBatch, record.getRecordType1());
        W2.write(isInitialBatch, record.getRecordType2());
        W3.write(isInitialBatch, record.getRecordType3());
        W4.write(isInitialBatch, record.getRecordType4());
    });
}
The whole sequence is carried out in chunks of 10,000 records, writing the data into the files.
Another challenge I faced while writing the files was that I have to replace an already existing file the very first time records are written, but append for the later chunks of the same run.
I solved this by overriding the ChunkListener used by the masterWriter: I pulled in the chunk number and set a static flag isInitialBatch, defaulting to TRUE.
The flag is set inside beforeChunk(): TRUE if chunkContext.getStepContext().getStepExecution().getCommitCount() == 0, else FALSE.
The same boolean is passed into the FileWriter, which opens the file in overwrite or append mode accordingly:
W1.write(isInitialBatch, record.getRecordType1());
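A minimal sketch of what such a listener could look like; the class name and the static MasterWriter.isInitialBatch flag are assumptions for illustration, not the exact implementation described above:

import org.springframework.batch.core.ChunkListener;
import org.springframework.batch.core.scope.context.ChunkContext;

public class InitialBatchChunkListener implements ChunkListener {

    @Override
    public void beforeChunk(ChunkContext chunkContext) {
        // The commit count is 0 only before the very first chunk of the step,
        // so the files are overwritten once and appended to afterwards.
        int commitCount = chunkContext.getStepContext().getStepExecution().getCommitCount();
        MasterWriter.isInitialBatch = (commitCount == 0);   // hypothetical static flag on the writer
    }

    @Override
    public void afterChunk(ChunkContext chunkContext) { }

    @Override
    public void afterChunkError(ChunkContext chunkContext) { }
}

The listener would then be registered on the step via the .listener(...) call shown above.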

Reactive stream backpressure with spring reactor project

I have researched and read the documents, but they are not very understandable.
What I am trying to achieve is the following functionality:
I am using the Spring Reactor project and its eventBus. My event bus is throwing events to module A.
Module A should receive the events and insert them into a hot stream that holds unique values. Every 250 milliseconds the stream should pull all values and run calculations on them, and so on.
For example:
The event bus is throwing events with the numbers 1, 2, 3, 2, 3, 2.
The stream should keep only the unique values -> 1, 2, 3.
After 250 milliseconds the stream should print the numbers and empty its values.
Does anyone have an idea how to start? I tried the examples but nothing really works, and I guess I am missing something. Does anyone have an example?
Thanks
EDIT:
When trying the following, I always get an exception:
Stream<List<Integer>> s = Streams.wrap(p).buffer(1, TimeUnit.SECONDS);
s.consume(i -> System.out.println(Thread.currentThread() + " data=" + i));

for (int i = 0; i < 10000; i++) {
    p.onNext(i);
}
The exception:
java.lang.IllegalStateException: The environment has not been initialized yet
at reactor.Environment.get(Environment.java:156) ~[reactor-core-2.0.7.RELEASE.jar:?]
at reactor.Environment.timer(Environment.java:184) ~[reactor-core-2.0.7.RELEASE.jar:?]
at reactor.rx.Stream.getTimer(Stream.java:3052) ~[reactor-stream-2.0.7.RELEASE.jar:?]
at reactor.rx.Stream.buffer(Stream.java:2246) ~[reactor-stream-2.0.7.RELEASE.jar:?]
at com.ta.ng.server.controllers.user.UserController.getUsersByOrgId(UserController.java:70) ~[classes/:?]
As you can see, I cannot proceed without getting past this issue.
BY THE WAY: this happens only when I use buffer(1, TimeUnit.SECONDS). If I use buffer(50), for example, it works. Although this is not the final solution, it's a start.
Well, after reading the docs again, I realised I had missed this:
static {
    Environment.initialize();
}
This solved the problem. Thanks.

protobuffer file with many sub messages - one big file or imports?

We recently started using protocol buffers at the company I work for, and I was wondering what the best practice is regarding a message that holds other messages as fields.
Is it common to write everything in one big .proto file, or is it better to separate the different messages into different files and import the messages you need in the main file?
For example:
Option 1:
message A {
    message B {
        required int32 id = 1;
    }
    repeated B ids = 1;
}
Option 2:
import "B.proto";

message A {
    repeated B ids = 1;
}
And in a different file:
message B {
    required int32 id = 1;
}
It depends on your data set and the usage.
If your data set is small, you should prefer option 1. It leads to less coding for serialization and deserialization.
If your data set is big, you should prefer option 2. If the file is too big, you can't load it completely into memory, and it will be very slow if you need only one piece of information but have to read all the information in the file.
