I want to use the single maximum value of a signal (with increasing values) as an input to an integrator. I have tried the MinMax block, but it is not giving a single value as the maximum. I also want to use the maximum value of a signal as an input to a Simulink block within a single run of the model. Is this possible with Simulink?
I have to give the initial condition of an integrator as a temperature signal which increases from 78 to 280 degC and which can change again with time. I want to input the maximum value (e.g. 280) as an initial value to another block too. But I am not able to retrieve the maximum value from this increasing signal.
I have a device that is continuously sending data. The waveform of the received data changes over time. For example, for some hours I could receive data like this:
https://www.dropbox.com/s/g6thhtat1zx9rxm/1.PNG?dl=0
and after some time I begin receiving data like this:
https://www.dropbox.com/s/u10vckcplev0qyh/2.JPG?dl=0
What I need:
Count the number of cycles
If the waveform changes, detect the change and count cycles based on the new pattern
In the first image the algorithm shall count: 4 cycles
In the second image the algorithm shall count: 3 cycles
Calculate the auto-correlation of the signal.
If a period exists, its value corresponds to the first non-zero-lag peak of the auto-correlation. Divide the full signal length by the period value to get the number of cycles.
Don't forget to check whether the detected period is a real one (this is perhaps not such a simple problem in signal processing). See the sketch below.
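As an illustration only, here is a minimal C sketch of that recipe. The signal values, the naive O(n^2) autocorrelation, and the simple first-peak search are all assumptions made for the example, not a production implementation:

#include <stdio.h>
#include <stddef.h>

/* Naive autocorrelation: r(lag) = sum over t of x[t] * x[t + lag]. */
static double autocorr(const double *x, size_t n, size_t lag)
{
    double r = 0.0;
    for (size_t t = 0; t + lag < n; t++)
        r += x[t] * x[t + lag];
    return r;
}

/* Return the lag of the first local maximum after lag 0 (the estimated
 * period), or 0 if no peak is found. */
static size_t estimate_period(const double *x, size_t n)
{
    double prev = autocorr(x, n, 1);
    int rising = 0;
    for (size_t lag = 2; lag < n / 2; lag++) {
        double cur = autocorr(x, n, lag);
        if (cur > prev)
            rising = 1;          /* climbing toward a peak */
        else if (rising)
            return lag - 1;      /* prev was the first peak */
        prev = cur;
    }
    return 0;                    /* no peak: signal looks aperiodic */
}

int main(void)
{
    /* Hypothetical sampled signal: a ramp repeating every 4 samples. */
    double x[] = { 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3 };
    size_t n = sizeof x / sizeof x[0];

    size_t period = estimate_period(x, n);
    if (period > 0)
        printf("period = %zu samples, cycles = %zu\n", period, n / period);
    else
        printf("no period detected\n");
    return 0;
}

In practice you would subtract the signal's mean, normalize the autocorrelation, and require the peak to exceed some threshold before trusting the detected period; that is exactly the "is the period real" check mentioned above.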
When counting events with a specific sampling period, how should the last recorded sample be handled when the final counter value of the group leader is less than the sampling period?
Update:
I have checked the value of type, which is a member of struct perf_event_header. For the last recorded sample this value is zero, and according to the perf_event.h header file, there does not seem to be a sample record type corresponding to zero!
To put my question in other words: how does the perf_event API deal with the case where the workload finishes execution but the group leader's counter value is less than the sampling period? Is the data discarded in this case?
How does the perf_event API deal with the case where the workload finishes execution but the group leader's counter value is less than the sampling period?
Nothing happens. If the event count is not reached yet, no sample is written.
You should consider that samples are typically statistical information.
If you really need to know, you could possibly use some form of ptrace and manually read the counter value before the thread terminates.
If you read a perf_event_header with a type == 0, I would be concerned. I don't think that should ever happen.
Edit:
As per the manpage, I believe you cannot read the remaining value from that particular event because sampling and counting events are exclusive.
Events come in two flavors: counting and sampled. A counting event is
one that is used for counting the aggregate number of events that occur.
In general, counting event results are gathered with a read(2) call.
A sampling event periodically writes measurements to a buffer that can
then be accessed via mmap(2).
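For contrast with the sampled flavor, here is a minimal counting-event sketch in C, modeled on the example in the perf_event_open(2) manpage; counting the instructions of the current process is an arbitrary choice for illustration, and error handling is trimmed:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_INSTRUCTIONS;
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    /* Counting event: sample_period stays 0, so nothing is written to a
     * ring buffer; the aggregate count is read directly instead. */
    int fd = syscall(SYS_perf_event_open, &attr, 0 /* this process */,
                     -1 /* any CPU */, -1 /* no group */, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* ... run the workload under measurement here ... */

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    long long count;
    read(fd, &count, sizeof(count));  /* read(2) returns the full count */
    printf("instructions: %lld\n", count);
    close(fd);
    return 0;
}

Because the count is read directly with read(2), there is no sampling period involved and hence no residue to lose when the workload ends.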
Say I have an array which is initialized in the Master process (rank=0) and contains random integers.
I want to sum all of the array's elements in a Slave process (rank=1), while the full array is only available to the Master process (meaning I can't just MPI_SEND the full array to the slave).
I know I can use schedule in order to divide the work between multiple threads, but I'm not sure how to do it without sending the whole array to the Slave process.
Also, I've been checking different clauses while trying to solve the problem and came across REDUCTION, but I'm not sure exactly how it works.
Thanks!
What you want to do is indeed a reduction with sum as the operation. Here is how a reduction works: You have a collection of items and an operation you wish to perform that reduces them to a single item. For example, you want to sum every element in an array and end with a single number that is their sum.
To do this efficiently you divide your collection into equal sized chunks and distribute them to each participating process. Each process applies the operation to the elements in the collection until the process has a single value. In our running example, each process adds together its chunk of the array. Then half the processes send their results to another node which then applies the operation to the value it computed and the value it received. At this point only half the original processes are participating. We repeat this until one process has the final result.
Here is a link to a graphic that should make this a lot easier to understand: http://3.bp.blogspot.com/-ybPe3bJrpgc/UzCoG9BUFuI/AAAAAAAAB2U/Jz6UcwV_Urk/s1600/TreeStructure.JPG
Here is some MPI code for a reduction: https://computing.llnl.gov/tutorials/mpi/samples/C/mpi_array.c
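Purely as a sketch of the same pattern, here is a minimal C program using MPI_Scatter plus MPI_Reduce. The array length and its random contents are assumptions for the example, and the length is assumed to divide evenly by the number of processes:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 16   /* hypothetical array length, divisible by the process count */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int full[N];
    if (rank == 0)                    /* only the master fills the array */
        for (int i = 0; i < N; i++)
            full[i] = rand() % 100;

    /* Scatter equal chunks: each process receives N/size elements, so the
     * master never has to send the whole array to any one slave. */
    int chunk = N / size;
    int *part = malloc(chunk * sizeof(int));
    MPI_Scatter(full, chunk, MPI_INT, part, chunk, MPI_INT, 0, MPI_COMM_WORLD);

    int local = 0;
    for (int i = 0; i < chunk; i++)   /* each process sums its own chunk */
        local += part[i];

    /* Combine the partial sums; MPI does the tree-style exchange for us. */
    int total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %d\n", total);

    free(part);
    MPI_Finalize();
    return 0;
}

MPI_Reduce implements the halving/combining steps described above, so you never have to hand-code the pairwise sends yourself.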
I have a bunch of rows in HBase which store varying sizes of data (0.5 MB to 120 MB). When the scanner cache is set to, say, 100, the response sometimes gets too large and the region server dies. I tried but couldn't find a solution. Can someone help me with the following questions?
What is the maximum response size that HBase supports?
Is there a way to limit the response size at the server so that the result is capped at a particular value (the answer to the first question) and returned as soon as the limit is reached?
What happens if a single record exceeds this limit? There should be a way to increase it, but I don't know how.
1. What is the maximum response size that HBase supports?
It is Long.MAX_VALUE, represented by the constant DEFAULT_HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE:
public static long DEFAULT_HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE = Long.MAX_VALUE;
2. Is there a way to limit the response size at the server so that the result is capped at a particular value and returned as soon as the limit is reached?
You could make use of the property hbase.client.scanner.max.result.size to handle this. It lets you set a maximum size, rather than a count of rows, for what a scanner gets in one go. It is actually the maximum number of bytes returned when calling a scanner's next method.
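For example (the number is only an illustration), setting hbase.client.scanner.max.result.size to 2097152 in hbase-site.xml would cap each scanner response at roughly 2 MB; if your client version supports it, the same cap can also be set per scan via Scan.setMaxResultSize(long).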
3. What happens if a single record exceeds this limit? There should be a way to increase it, but I don't know how.
The complete record (row) will be returned even if it exceeds the limit.
CREATE SEQUENCE S1
START WITH 100
INCREMENT BY 10
CACHE 10000000000000000000000000000000000000000000000000000000000000000000000000
If I fire a query with such a big CACHE size, does it even create the sequence S1?
What is the maximum size that I can provide for it?
http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/statements_6015.htm#SQLRF01314
Quote from 11g docs ...
Specify how many values of the sequence the database preallocates and keeps in memory for faster access. This integer value can have 28 or fewer digits. The minimum value for this parameter is 2. For sequences that cycle, this value must be less than the number of values in the cycle. You cannot cache more values than will fit in a given cycle of sequence numbers. Therefore, the maximum value allowed for CACHE must be less than the value determined by the following formula:
(CEIL (MAXVALUE - MINVALUE)) / ABS (INCREMENT)
If a system failure occurs, then all cached sequence values that have not been used in committed DML statements are lost. The potential number of lost values is equal to the value of the CACHE parameter.
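As a worked example of that formula (the values are chosen purely for illustration): a cycling sequence with MINVALUE 1, MAXVALUE 1000 and INCREMENT BY 10 gives (CEIL(1000 - 1)) / ABS(10) = 99.9, so CACHE would have to be 99 or less.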
Determining the optimal value is a matter of estimating the rate at which you will generate new values, and thus the frequency with which recursive SQL will have to be executed to update the sequence record in the data dictionary. Typically it's higher for RAC systems to avoid contention, but then they are also generally busier as well. Performance problems relating to an insufficient sequence cache are generally easy to spot through AWR/Statspack and other diagnostic tools.
Looking in the Oracle API, I don't see a maximum cache size specified (Reference).
Here are some guidelines on setting an optimal cache size.