Drain a 3V coin battery with LabVIEW to test the lifetime of the battery? - measurement

I want to simulate/measure the lifetime of a 3 V coin battery. The circuit that applies the burst load to the battery was attached as a schematic link in the original post.
The burst is controlled with the CTRL1 and CTRL2 lines with some timing requirements.
One burst is:
Phase        ARM    TX     RX     TX     RX     TX
CTRL1        L      H      H      H      H      H
CTRL2        H      H      L      H      L      H
Length (ms)  3.72   2.6    0.84   4.04   0.8    1

H = 10 V
L = 0 V
Now I want to drain the battery by applying one burst, then a gap, then one burst, then a gap, and so on.
The gap should be variable; to start I will use 10 seconds. I want to plot the battery's discharge characteristic. I want to simulate, for example, 5 years of use, which should take about 1.2 days in the simulation. I have the NI PCI-6221 (37-pin) DAQ card. Can somebody help me make a VI for this project? The bursts should run in a loop whose duration I can control (e.g. 1 day, or 1.5 days). And how can I apply 10 V or 0 V to the CTRL1 and CTRL2 lines in LabVIEW?
Thanks in advance.
EDIT:
OK, I have now made a continuously acquiring voltage-and-graphing VI that takes a physical channel as input for the voltage measurement. But I don't know how to do the counters part, which should output the timed signals for the TTL (MOSFET) stage to create the burst signal with gaps, which will then drain the battery.

Start with the DAQ Assistant. Look for tutorials that include this VI.
On the block diagram look under Measurement I/O > DAQmx - Data Acquisition > DAQ Assistant. You should be able to hook that up to your DAQ card.
Look at NI Forums too.
EDIT: If you're unfamiliar with or new to LabVIEW, just browse the example code located in National Instruments\LabVIEW x.x\examples. For your application, examples\DAQmx would probably have relevant code. Stick to basics at first, such as examples\general. You can even modify the examples; just don't overwrite them by accident.
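If it helps to see the sequencing written out before wiring it up as a VI, here is a minimal sketch against the NI-DAQmx C API (the text-language counterpart of the DAQmx VIs; C++ is used here only because a LabVIEW diagram can't be pasted as text). The device and line names ("Dev1/port0/line0:1") and the idle state are assumptions, the 10 V levels come from your external MOSFET stage (the 6221's own digital lines are 5 V TTL), and Sleep() only has roughly 1 ms resolution, so the 0.84 ms and 0.8 ms phases really call for hardware-timed digital output driven by one of the card's counters.

#include <NIDAQmx.h>
#include <windows.h>

int main(void)
{
    TaskHandle task = 0;
    DAQmxCreateTask("", &task);
    // CTRL1 on line0, CTRL2 on line1 (wiring assumed)
    DAQmxCreateDOChan(task, "Dev1/port0/line0:1", "", DAQmx_Val_ChanForAllLines);

    // One burst: per-phase {CTRL1, CTRL2} states and lengths from the table above
    uInt8  ctrl[6][2] = { {0,1}, {1,1}, {1,0}, {1,1}, {1,0}, {1,1} };
    double lenMs[6]   = { 3.72,  2.6,   0.84,  4.04,  0.8,   1.0  };
    uInt8  idle[2]    = { 0, 1 };  // ARM state between bursts (assumed)

    // 1.2 days of simulation at ~10 s per burst-plus-gap cycle
    for (long cycle = 0; cycle < 10368; ++cycle) {
        for (int p = 0; p < 6; ++p) {
            DAQmxWriteDigitalLines(task, 1, 1, 10.0, DAQmx_Val_GroupByChannel,
                                   ctrl[p], NULL, NULL);
            Sleep((DWORD)(lenMs[p] + 0.5));  // ms resolution only!
        }
        DAQmxWriteDigitalLines(task, 1, 1, 10.0, DAQmx_Val_GroupByChannel,
                               idle, NULL, NULL);
        Sleep(10000);  // the variable gap; 10 s to start with
    }
    DAQmxClearTask(task);
    return 0;
}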

Related

How to improve ESP32 sleep time accuracy

When sending an esp32 device to sleep for approximately a day, it wakes up 3.34% earlier than expected. This amounts to approx 48 minutes.
Is this the expected accuracy of this device or can it be tuned to be more accurate?
The concrete device is an ESP32-CAM and it is running at 80MHz at approx 25°C room temperature.
Code to send the device to sleep:
unsigned int time_to_sleep_sec = 86400;
esp_sleep_enable_timer_wakeup(1ULL * time_to_sleep_sec * 1000 * 1000);
esp_deep_sleep_start();
Instead of 86400 seconds, the device woke up after approx 83465 seconds.
The default RTC clock source is only 5% accurate, so it's within spec. Check the documentation:
The RTC timer has the following clock sources:
- Internal 150 kHz RC oscillator (default): features the lowest deep sleep current consumption and no dependence on any external components. However, as frequency stability is affected by temperature fluctuations, time may drift in both Deep and Light sleep modes.
- External 32 kHz crystal: requires a 32 kHz crystal to be connected to the 32K_XP and 32K_XN pins. Provides better frequency stability at the expense of slightly higher (by 1 uA) Deep sleep current consumption.
- External 32 kHz oscillator at 32K_XN pin: allows using a 32 kHz clock generated by an external circuit. The external clock signal must be connected to the 32K_XN pin. The amplitude should be less than 1.2 V for a sine wave signal and less than 1 V for a square wave signal. Common mode voltage should be in the range 0.1 < Vcm < 0.5 x Vamp, where Vamp is the signal amplitude. Additionally, a 1 nF capacitor must be placed between the 32K_XP pin and ground. In this case, the 32K_XP pin cannot be used as a GPIO pin.
- Internal 8.5 MHz oscillator, divided by 256 (~33 kHz): provides better frequency stability than the internal 150 kHz RC oscillator at the expense of higher (by 5 uA) deep sleep current consumption. It also does not require external components.
So if you aren't able to add components, the 5 uA 'expense' of the internal 8.5 MHz source appears to be reasonable. Otherwise the best solution is to add an external 32 kHz crystal.
Alternatively, you can wake the device periodically during the sleep interval and correct the time over the internet, e.g. with SNTP.
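If adding components isn't an option and periodic SNTP syncs aren't either, a crude software workaround is to stretch the sleep time by the drift you measured. A minimal sketch, assuming the 86400/83465 ratio from the question (the factor is device- and temperature-specific, so calibrate it per unit):

// Stretch the sleep time to compensate the measured early wake-up.
// The correction factor below is derived from the question's numbers and
// is an assumption for any other board or temperature.
const double drift_correction = 86400.0 / 83465.0;  // ~1.035
unsigned int time_to_sleep_sec = 86400;
esp_sleep_enable_timer_wakeup((uint64_t)(time_to_sleep_sec * 1000000.0 * drift_correction));
esp_deep_sleep_start();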

Computing End-To-End Delay in Veins

I have read a bunch of posts on SO regarding the computation of end-to-end delay in Veins, but have not found an answer that fully explains why the delay is seemingly so low.
I am using:
Veins 4.7
Sumo 0.32.0
Omnetpp 5.3
Channel switching is turned off.
I have the following code, sending a message from the transmitting node:
if (sendMessage) {
    WaveShortMessage* wsm = new WaveShortMessage();
    sendDown(wsm);
}
The receiving node computes the delay using the wsm creation time, but I have also tried setting the timestamp on the transmitting side. The result is the same.
simtime_t delay = simTime() - wsm->getCreationTime();
delayVector.record(delay);
The sample output for the delay vector is as follows:
Item#  Event#  Time              Value
0      165     14.400239402394   2.39402394E-4
1      186     14.500240403299   2.40403299E-4
2      207     14.600241404069   2.41404069E-4
3      228     14.700242404729   2.42404729E-4
This means that the end-to-end delay (from creation to reception) is roughly a quarter of a millisecond, which seems quite low - a fair bit below what is typically reported in the literature. This seems consistent with what other people have reported as an issue (e.g. end to end delay in Veins).
Am I missing something in this computation? I have tried adding load on the network by adding a high number of vehicular nodes (21 nodes within a 1000x50 sandbox on a straight highway, with an average speed of 50 km/h), but the result seems to be the same. The difference is negligible. I have read several research papers that suggest that end-to-end delay should increase dramatically in high vehicular densities.
This end-to-end delay is to be expected. If your application's simulation model does not explicitly model processing delay (e.g., by an application running on a slow general purpose computer), all you would expect to delay a frame is propagation delay (lightspeed, so negligible here) and queueing delay on the MAC (time from inserting frame into TX queue until transmission finishes).
To give an example, for a 2400 bit frame sent at 6 Mbit/s this delay is roughly 0.45 ms. You are likely using slightly shorter frames, so your values appear to be reasonable.
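To make the arithmetic explicit: the transmission time alone is 2400 bit / 6 Mbit/s = 400 us; the PHY preamble and headers add a few more tens of microseconds on top (the exact split is my estimate), which is where the roughly 0.45 ms comes from.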
For background information, see F. Klingler, F. Dressler, C. Sommer: "The Impact of Head of Line Blocking in Highly Dynamic WLANs" (DOI 10.1109/TVT.2018.2837157), which also includes a comparison of theory vs. Veins vs. real measurements.

PWM transistor heating - Raspberry

I have a Raspberry Pi and an auxiliary PCB with transistors for driving some LED strips.
The strips' datasheet says 12 V, 13.3 W/m. I'll use 3 strips in parallel, 1.8 m each, so 13.3 * 1.8 * 3 = 71.82 W, which at 12 V is almost 6 A.
I'm using an 8A transistor, E13007-2.
In the project i have 5 channels of different LEDs: RGB and 2 types of white.
R, G, B, W1 and W2 are connected directly to the Pi's pins.
The LED strips are connected to 12 V, with CN3 and CN4 switching GND (through the transistors).
Transistor schematic.
I know that's a lot of current passing through the transistors, but is there a way to reduce the heating? I think they are reaching 70-100°C. I already had a problem with one Raspberry Pi, and I think it's getting dangerous for the application. I have some large traces on the PCB, so that's not the problem.
Some thoughts:
1 - A resistor driving the base of the transistor. Maybe it won't reduce heating, but I think it's advisable for short-circuit protection. How can I calculate this?
2 - The PWM has a frequency of 100 Hz. Is there any difference if I reduce this frequency?
The BJT you're using has a current gain (hFE) of roughly 20. This means the collector current is roughly 20 times the base current; put differently, the base current needs to be 1/20 of the collector current, i.e. 6 A / 20 = 300 mA.
A Raspberry Pi certainly can't supply 300 mA from its IO pins, so you're operating the transistor in the linear region, which causes it to dissipate a lot of heat.
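To put rough numbers on the base resistor from question 1: the back-of-envelope formula is R = (V_GPIO - V_BE) / I_B, with V_BE around 0.7 V for a silicon BJT. Even a modest 50 mA of base drive would need (3.3 - 0.7) / 0.05 = 52 ohms, and 50 mA is already far more than a Pi GPIO pin can safely source, let alone the 300 mA computed above.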
Change your transistors to MOSFETs with low enough threshold voltage (like 2.0V to have enough conduction at 3.3V IO voltage) to keep it simple.
An N-channel MOSFET will run much cooler if you give it enough gate voltage to enhance fully. Since this is not a high-volume item, why not simply use a MOSFET gate-driver chip? Then you can use a device with low RDS(on). Another option is the Siemens BTS660 (BTS50085B, TO-220): it is a high-side driver that you need to drive with an open-collector or open-drain device. It will switch 5 A at room temperature with no heat sink, is rated for much more current, and is available in a TO-220-type package. It is obsolete but still available, as is its replacement. Remember that MOSFETs are voltage controlled while bipolar transistors are current controlled.
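As a rough illustration of the difference (numbers assumed for the sake of the example, not taken from datasheets): a BJT stuck in the linear region can easily drop 1 V or more at the collector, i.e. 6 A x 1 V = 6 W of heat, whereas a fully enhanced logic-level MOSFET with RDS(on) = 50 mOhm dissipates about 6^2 x 0.05 = 1.8 W, and a 10 mOhm part only about 0.36 W.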

Real-time workaround using Windows for fixed sampling time

I am trying to collect data off an accelerometer sensor. I have an Arduino doing the analog to digital conversion of the signal and sending it through a serial port to MATLAB on Windows.
I send a reading every 5 ms from the Arduino through the serial port. I save that data using MATLAB's serial read into a vector, together with the time at which it was read, obtained with the clock function.
If I plot the column of the vector where I saved the read times, I get a non-linear curve, and the difference between one read and the next varies slightly.
Is there any way to get the data saved in real time with fixed sampling time?
Note: I am using 250000 baud rate.
Matlab Code:
%%%%% Initialisation %%%%%
clear all
clc
format shortg
cnt = 1;  % File name changer
sw = 1;   % switch: 0 = append to the current vector, 1 = start a new vector
%%%%% Initialisation %%%%%
%%%%% Communication %%%%%
arduino=serial('COM7','BaudRate',250000);
fopen(arduino);
%%%%% Communication %%%%%
%%%%% Reading from Serial and Writing to .mat file%%%%%
while true
    if sw == 0
        if length(Vib(:,1)) == 1000  % XXXX samples in XX minutes
            filename = sprintf('C:/Directory/%d_VibrationReading.mat', cnt);
            save(filename, 'Vib');
            clear Vib
            cnt = cnt + 1;
            sw = 1;
        end
    end
    scan = fscanf(arduino, '%f');
    if isfloat(scan) && length(scan(:,1)) == 6  % Change length for validation
        vib = scan';
        if sw == 1
            Vib = [vib clock];
            sw = 0;
        else
            Vib = [Vib; vib clock];
        end
    end
end
%%%%% Reading from Serial and Writing to .mat file%%%%%
% Close Arduino Serial Port
fclose(arduino);
Image 1 shows the data received through serial (each Row corresponding to 1 serial read)
Image 2 shows that data saved with the clock
I know that my answer does not contain a quick and easy solution; instead it primarily gives advice on how to redesign your system. I worked with real-time systems for several years and saw this done wrong too many times. It might be possible to just "fix" your current communication pattern by tweaking its performance, but I am convinced you will never get reliable time information that way.
I will answer this from a general system design perspective, instead of trying to fix your code. Where I see the problems:
In general, it is a bad idea to append time information on the receiving PC. Whenever the sensor is capable and has a clock, append the time information on the sensor system itself. This allows for an accurate relative timing between the measurements. Some clock adjustment might be necessary when the clock on the sensor is not set properly, but that is just a constant offset.
Switch from ASCII-encoded data to binary data. With your sample rate and baud rate, you only have 50 bytes for each message.
Write a robust receiver. Just dropping messages you "don't understand" is not a good idea. Whenever the buffer is full, you might receive multiple messages unless you use a proper terminator.
Use preallocation. You know how large the batches you want to write are.
A simple solution for a message:
2 bytes - clock milliseconds
4 bytes - unix timestamp of measurement
For each sensor:
2 bytes - int16 sensor data
2 bytes - Terminator, constant value. Use a value which is outside the range for all previous integers, e.g. intmax
This message format should theoretically allow you to use 21 sensors. Now to the receiving part:
To get a first version running with good performance, call fread(serial) with large batches of data (the size parameter) and dump all readings into a large cell array. Something like:
C = cell(1000,1);
% Seek until you hit a terminator (the constant chosen above)
while fread(arduino,1) ~= terminator
end
% Read large batches of int16 values
for ix = 1:numel(C)
    C{ix} = fread(arduino, 1000, 'int16');
end
fclose(arduino);
Once you have read the data, append it to a single vector (C = [C{:}];) and parse it in post-processing. If the performance works out you may later return to on-the-fly processing, but I recommend starting this way to get the system established.
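For the sending side, a minimal Arduino sketch of the binary format above might look like the following. The sensor count, terminator value, unix time source, and analog pin mapping are all assumptions; adapt them to your setup.

// Minimal Arduino-side sender for the binary message format above.
// Assumptions: 6 sensor channels, int16 samples, terminator = 0x7FFF
// (intmax for int16), and an external source for the unix timestamp.
// Note: Arduinos are little-endian; read with matching byte order in MATLAB.
const uint8_t NUM_SENSORS = 6;
const int16_t TERMINATOR = 0x7FFF;

void sendMessage(uint16_t clockMillis, uint32_t unixTime, const int16_t *samples) {
  Serial.write((const uint8_t*)&clockMillis, 2);          // 2 bytes: clock milliseconds
  Serial.write((const uint8_t*)&unixTime, 4);             // 4 bytes: unix timestamp
  Serial.write((const uint8_t*)samples, 2 * NUM_SENSORS); // 2 bytes per sensor
  Serial.write((const uint8_t*)&TERMINATOR, 2);           // 2-byte terminator
}

void setup() {
  Serial.begin(250000);
}

void loop() {
  int16_t samples[NUM_SENSORS];
  for (uint8_t i = 0; i < NUM_SENSORS; ++i) {
    samples[i] = analogRead(A0 + i) - 512;  // example: centered 10-bit reading
  }
  sendMessage((uint16_t)(millis() % 1000), 0 /* unix time source assumed */, samples);
  delay(5);  // ~5 ms sample period, as in the question
}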

Processor performance complex and simple instructions

I have been stuck on a problem in my class for a week now.
I was hoping someone could help steer me in the right direction.
Processor R is a 64-bit RISC processor with a 2 GHz clock rate. The average instruction requires one cycle to complete, assuming zero-wait-state memory accesses. Processor C is a CISC processor with a 1.8 GHz clock rate. The average simple instruction requires one cycle to complete, assuming zero-wait-state memory accesses. The average complex instruction requires two cycles to complete, assuming zero-wait-state memory accesses. Processor R can't directly implement the complex processing instructions of Processor C; executing an equivalent set of simple instructions requires an average of three cycles to complete, assuming zero-wait-state memory accesses.
Program S contains nothing but simple instructions. Program C executes 70% simple instructions and 30% complex instructions. Which processor will execute program S more quickly? At what percentage of complex instructions will the performance of the two processors be equal?
I attached an image above translating the data into Excel as best I could.
I am not asking you guys to answer this for me, but I am completely stuck and would like some help on where to start and what my answer should look like.
For the second part:
Processor R Total cycles = 1 x #simpleInstructions + 3 x #complexInstructions
Processor C Total cycles = 1 x #simpleInstructions + 2 x #complexInstructions
So, how much time for R, and how much time for C?
When expressing complex/simple instructions as a percentage,
RCycles = 1 x 0.7 x totalInstructions + 3 x 0.3 x totalInstructions
CCycles = 1 x 0.7 x totalInstructions + 2 x 0.3 x totalInstructions
Which is faster?
Now replace the percentages with a variable, equate Rtime and Ctime, and solve for the percentage.
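To sketch that last step without giving everything away: let p be the fraction of complex instructions. Converting cycles to time with each clock rate,

Rtime = ((1 - p) + 3p) / 2.0 GHz
Ctime = ((1 - p) + 2p) / 1.8 GHz

Setting Rtime = Ctime gives 1.8 x (1 + 2p) = 2 x (1 + p), a single linear equation you can solve for p.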
