I'm trying to control a servo with a PWM signal from an ESP32 running MicroPython. I can't seem to get the servo to move, so I'd like to check my PWM signal.
I created a testing script that generates a PWM signal on GPIO32 and measures it back on GPIO36. I connected a jumper wire between pins 32 and 36 and I'm using the following code:
"""Testing script"""
import machine
from machine import Pin, PWM
import utime
# PWM on pin 32
p_out = Pin(32, Pin.OUT)
pwm = PWM(p_out)
f = 500
pwm.freq(f)
dc = 512
pwm.duty(dc)
# Measure on pin 36
p_echo = Pin(36, Pin.IN)
while True:
    timeout_us = int(2 / f * 1e6)
    print(
        f"Trying to measure pulse length of {dc / 1024 / f * 1e6} us with a timeout of {timeout_us} us"
    )
    print(f"Pulse length: {machine.time_pulse_us(p_echo, 0, timeout_us)} us")
    utime.sleep_ms(100)
The only thing I get back is
Trying to measure pulse length of 1000.0 us with a timeout of 4000 us
Pulse length: -1 us
I'm obviously missing something here. The documentation says:
machine.time_pulse_us(pin, pulse_level, timeout_us=1000000, /)
Time a pulse on the given pin, and return the duration of the pulse in
microseconds. The pulse_level argument should be 0 to time a low pulse
or 1 to time a high pulse.
If the current input value of the pin is different to pulse_level, the
function first (*) waits until the pin input becomes equal to
pulse_level, then (**) times the duration that the pin is equal to
pulse_level. If the pin is already equal to pulse_level then timing
starts straight away.
The function will return -2 if there was timeout waiting for condition
marked (*) above, and -1 if there was timeout during the main
measurement, marked (**) above. The timeout is the same for both cases
and given by timeout_us (which is in microseconds).
Seems like the timeout expired and nothing happened. I don't really have anything like a scope to verify that the PWM output is actually doing something.
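In the absence of a scope, one crude sanity check (just a sketch; I haven't verified it on this exact setup) would be to poll the input pin for a short while and count level changes; at 500 Hz there should be on the order of 100 transitions in 100 ms:
# Rough loopback check: count edges seen on GPIO36 over ~100 ms.
edges = 0
last = p_echo.value()
t_start = utime.ticks_ms()
while utime.ticks_diff(utime.ticks_ms(), t_start) < 100:
    v = p_echo.value()
    if v != last:
        edges += 1
        last = v
print("Edges seen in 100 ms:", edges)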
Figured it out: my firmware had a bug (v1.18). Updating to esp32-ota-20220213-unstable-v1.18-128-g2ea21abae fixed the issue for me.
Related
Using MicroPython on the ESP32 microcontroller, flashed with the latest firmware at the time of writing (v1.18).
I'm making an alarm (sort-of) system where I get multiple time values ("13:15" for example) from my website, and then I have to ring an alarm bell at those times.
I've done the website and I can do the ring stuff, but I don't know how to actually create time objects from the previously mentioned strings ("13:15"), and then check if any of the times inputted match the current time, the date is irrelevant.
From reading the documentation, I'm getting the sense that this can't be done: I've looked through the MicroPython modules on GitHub, and you apparently can't get datetime in MicroPython, while I know that in regular Python my problem could be solved with datetime.
import ntptime
import time
import network
# Set esp as a wifi station
station = network.WLAN(network.STA_IF)
# Activate wifi station
station.active(True)
# Connect to wifi ap
station.connect(ssid, passwd)
while station.isconnected() == False:
    print('.')
    time.sleep(1)
print(station.ifconfig())
try:
    print("Local time before synchronization: %s" % str(time.localtime()))
    ntptime.settime()
    print("Local time after synchronization: %s" % str(time.localtime()))
except:
    print("Error syncing time, exiting...")
This is the shortened code from my project, with only the time-related parts. Now comes the time comparison part that I don't know how to do.
Use ntptime to get the time from a server; I use "time.google.com". Then I transform the time into seconds since midnight (st) so comparisons are easier, and express my target hours in seconds (1 hour = 3600 s).
import utime
import ntptime
def server_time():
    try:
        # Ask the time.google.com server for the current time.
        ntptime.host = "time.google.com"
        ntptime.settime()
        t = utime.localtime()
        # print(t)
        # Transform the time tuple 't' into a seconds value. 1 hour = 3600 s.
        st = t[3] * 3600 + t[4] * 60 + t[5]
        return st
    except:
        # print('no time')
        st = -1
        return st
period = utime.ticks_ms()
while True:
    # Returns an increasing millisecond counter since the board reset.
    now = utime.ticks_ms()
    # Check the current time every 5000 ms (5 s) without sleeping or stopping any other process.
    # ticks_diff()/ticks_add() keep the comparison correct when the counter wraps around.
    if utime.ticks_diff(now, period) >= 5000:
        period = utime.ticks_add(period, 5000)
        # call your server_time function
        st = server_time()
        if ((st > 0) and (st < 39600)) or (st > 82800):  # Turn on at 17:00 Mexico time.
            # something will be on between 17:00 - 06:00
            pass
        elif (st < 82800) and (st > 39600):  # Turn off at 06:00.
            # something will be off between 06:00 - 17:00
            pass
        else:
            pass
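If the targets come in as "HH:MM" strings, as in the question, one way (just a sketch; the example strings are placeholders) to bring them onto the same seconds-since-midnight scale used above is:
def hhmm_to_seconds(s):
    # "13:15" -> 13*3600 + 15*60 = 47700 seconds since midnight (UTC, since ntptime sets the RTC to UTC).
    hours, minutes = s.split(":")
    return int(hours) * 3600 + int(minutes) * 60

targets = [hhmm_to_seconds(s) for s in ("13:15", "06:00")]  # example values from the website
# With the 5 s polling period above, an alarm is due when st falls inside the window.
due = any(0 <= st - t < 5 for t in targets)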
After running ntptime.settime() you can do the following to retrieve the time; keep in mind this is in UTC:
import machine

rtc = machine.RTC()
hour = rtc.datetime()[4] if rtc.datetime()[4] > 9 else "0%s" % rtc.datetime()[4]
minute = rtc.datetime()[5] if rtc.datetime()[5] > 9 else "0%s" % rtc.datetime()[5]
The if/else expressions make sure that numbers lower than or equal to 9 are padded with a zero.
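Putting it together, a small sketch (the alarm strings are placeholders, and this assumes the RTC has already been set with ntptime.settime()) that compares the padded value against the "HH:MM" strings from the question could look like this:
import machine

rtc = machine.RTC()

def current_hhmm():
    # rtc.datetime() -> (year, month, day, weekday, hour, minute, second, subsecond), in UTC
    dt = rtc.datetime()
    return "%02d:%02d" % (dt[4], dt[5])

alarm_times = ["13:15", "06:00"]  # example values fetched from the website
if current_hhmm() in alarm_times:
    pass  # ring the bell here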
I'm working with the TMediaPlayer1 control in an FMX app using C++Builder 10.2, version 25.0.29899.2631. The code below runs fine in Win32 and gives the expected result after loading an mp3 file that is 35 minutes, 16 seconds long.
When I run this same code targeting iOS I get the following error:
[bcciosarm64 Error] Unit1.cpp(337): use of overloaded operator '/' is ambiguous (with operand types 'Fmx::Media::TMediaTime' and 'int')
Here is my code that takes TMediaPlayer1->Duration and converts it to min:sec:
UnicodeString S = System::Ioutils::TPath::Combine(System::Ioutils::TPath::GetDocumentsPath(),"43506.mp3");
if (FileExists(S)) {
MediaPlayer1->FileName = S;
int sec = MediaPlayer1->Duration / 10000000; // <-- this is problem line
int min = sec / 60;
sec = sec - (60 * min);
lblEndTime->Text = IntToStr(min) + ":" + IntToStr(sec);
}
How should I be doing that division?
UPDATE 1: I fumbled around and figured out how to see the values with the code below. When I run on Win32 I get 21169987500 for the Duration (35 min, 16 seconds) and 10000000 for MediaTimeScale, both correct. When I run on iOS I get 0 for Duration and 10000000 for MediaTimeScale. But if I start the audio playing first (e.g. MediaPlayer1->Play();) and THEN run those two ShowMessage calls, I get the correct result for Duration.
MediaPlayer1->FileName = S; // load the mp3
ShowMessage(IntToStr((__int64) Form1->MediaPlayer1->Media->Duration));
ShowMessage(IntToStr((__int64) MediaTimeScale));
It looks like the Duration does not get set on iOS until the audio actually starts playing. I tried a 5-second delay after setting MediaPlayer1->FileName, but that doesn't work. I tried a MediaPlayer1->Play(); followed by MediaPlayer1->Stop();, but that didn't work either.
Why isn't Duration set when the FileName is assigned? I'd like to show the Duration before the user ever starts playing the audio.
I'm having trouble getting my streaming over OTG USB FS, configured as a VCP, to work. I have a Nucleo-H743ZI board that seems to be doing a good job of sending data, but on the PC side I have a problem receiving that data.
for(;;) {
#define number_of_ccr 1024
    unsigned int lpBuffer[number_of_ccr] = {0};
    unsigned long nNumberOfBytesToRead = number_of_ccr * 4;
    unsigned long lpNumberOfBytesRead;
    QueryPerformanceCounter(&startCounter);
    ReadFile(
        hSerial,
        lpBuffer,
        nNumberOfBytesToRead,
        &lpNumberOfBytesRead,
        NULL
    );
    // Compare the received bytes as a C string (cast needed since lpBuffer is an unsigned int array).
    if(!strcmp((const char *)lpBuffer, "end\r\n")) {
        CloseHandle(FileHandle);
        fprintf(stderr, "end flag was received\n");
        break;
    }
    else if(lpNumberOfBytesRead > 0) {
        // NOTE(): succeed
        QueryPerformanceCounter(&endCounter);
        time = Win32GetSecondsElapsed(startCounter, endCounter);
        char *copyString = "copy";
        WriteFile(hSerial, copyString, strlen(copyString), &bytes_written, NULL);
        DWORD BytesWritten;
        // write data to file
        WriteFile(FileHandle, lpBuffer, nNumberOfBytesToRead, &BytesWritten, 0);
    }
}
QPC shows that the elapsed time was 0.00733297970 s for one successful data block transfer (1024*4 bytes).
This is the listener code. I bet this is not how it should be done, so I'm here to seek advice. I was hoping that full streaming without control sequences ("copy") would be possible, but in that case I can't receive adjacent data (within one transfer block it's okay, but two consecutive received blocks aren't adjacent).
Example:
block_1: 1 2 3 4 5 6
block_2: 13 14 15 16 17 18
Is there any way to speed up my receiving?
(I tried the O2 optimization switch without any success.)
You need to configure a buffer on the PC side that is 2 or 3 times the size of the buffer you transfer from your board, and use something like a double-buffering scheme for the transfer: you transfer the first buffer while filling the second, then alternate.
It is also a good idea to activate the caches and to place the buffers in the STM32H7's fast memory (the D1-domain RAM).
But if your interface simply cannot match the speed you need, there is no trick that will fix that. Except maybe one, if your controller is fast enough: you can implement lossless data compression and transfer the data compressed. If you transmit low-entropy data, this can give you a solid boost in speed.
I'm pretty new to coding. I'm trying to read a PT100 RTD via my Raspberry Pi 3. I read that I needed the MAX31865 RTD amplifier to properly read the data because the resistances are so small. I am fairly certain I have it plugged in correctly.
I'm using this code, only slightly edited:
https://github.com/steve71/MAX31865
I'm getting two different outputs so far, but they don't seem to correlate with anything I'm changing (mostly the byte associated with readTemp), since I've run the same code twice and gotten both outputs. The outputs are as follows:
config register byte: ff
RTD ADC Code: 32767
PT100 Resistance: 429.986877 ohms
Straight Line Approx. Temp: 767.968750 degC
Callendar-Van Dusen Temp (degC > 0): 988.792111 degC
high fault threshold: 32767
low fault threshold: 32767
and
config register byte: 08
RTD ADC Code: 0
PT100 Resistance: 0.000000 ohms
Straight Line Approx. Temp: -256.000000 degC
Callendar-Van Dusen Temp (degC > 0): -246.861024 degC
high fault threshold: 0
low fault threshold: 0
Any help would be appreciated.
I'm dealing with exactly the same issue right now. Are you using your PT100 with 3 or 4 wires?
I fixed the problem by setting the correct configuration register in line 78 of the original code (https://github.com/steve71/MAX31865) to 0xA2:
self.writeRegister(0, 0xA2)
I am using 4 wires, so I had to change bit 4 from 1 (3-wire) to 0 (2- or 4-wire):
0b10100010
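For reference, this is how I read the bits of that configuration byte from the MAX31865 datasheet (a sketch only; the constant names are mine, not from the original script):
# MAX31865 configuration register, bit by bit:
VBIAS_ON       = 1 << 7  # bias voltage on
CONV_MODE_AUTO = 1 << 6  # automatic conversion mode
ONE_SHOT       = 1 << 5  # trigger a single conversion
THREE_WIRE     = 1 << 4  # 1 = 3-wire sensor, 0 = 2- or 4-wire sensor
FAULT_CLEAR    = 1 << 1  # clear the fault status register
FILTER_50HZ    = 1 << 0  # 1 = 50 Hz mains filter, 0 = 60 Hz

config = VBIAS_ON | ONE_SHOT | FAULT_CLEAR  # 0b10100010 = 0xA2 for a 2/4-wire setup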
After this, I got the following output:
config register byte: 80
RTD ADC Code: 8333
PT100 Resistance: 101.721191 ohms
Straight Line Approx. Temp: 4.406250 degC
Callendar-Van Dusen Temp (degC > 0): 4.406808 degC
high fault threshold: 32767
low fault threshold: 0
Brrr... it's very cold in my room, isn't it? To fix this, I had to change the reference resistance in line 170 to 430 ohms:
R_REF = 430.0 # Reference Resistor
It's curious, because I've read many times that there is a 400 ohm resistor mounted on these devices as the reference. Indeed, the SMD resistor carries the 3-digit code "431", which means 430 ohms. Hmm...
But now I have it nice and warm in here:
Callendar-Van Dusen Temp (degC > 0): 25.091629 degC
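For reference, the conversion from ADC code to temperature that the script performs can be sketched like this (using the standard IEC 60751 coefficients; this is an illustration, not the exact code from the linked repository):
import math

R_REF = 430.0   # reference resistor on the board
R0 = 100.0      # PT100 resistance at 0 degC
A = 3.9083e-3   # Callendar-Van Dusen coefficients (IEC 60751)
B = -5.775e-7

def adc_to_temperature(adc_code):
    # 15-bit ADC code; full scale corresponds to R_REF
    resistance = adc_code * R_REF / 32768.0
    # Straight-line approximation: R = R0 * (1 + alpha*T), alpha ~ 0.00385 per degC
    t_linear = (resistance / R0 - 1.0) / 3.85e-3
    # Callendar-Van Dusen for T >= 0 degC: R = R0 * (1 + A*T + B*T*T)
    t_cvd = (-A + math.sqrt(A * A - 4.0 * B * (1.0 - resistance / R0))) / (2.0 * B)
    return resistance, t_linear, t_cvd

# ADC code 8333 with R_REF = 400.0 gives roughly 101.7 ohm / 4.4 degC, matching the output above;
# with R_REF = 430.0 the same code reads about 109.4 ohm / 24 degC.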
Best regards
Did you get this resolved? In case you didn't, the Python class method below works for me. I remember that I had some trouble with wiring the force terminals; from memory, for 2-wire you have to bridge both force terminals.
def _take_Resistance_Reading(self):
    msg = '%s: taking resistance reading...' % self.Name
    try:
        self.Logger.debug(msg + 'entered method take_resistance_Reading()')
        with self._RLock:
            reg = self.spi.readbytes(9)
            del reg[0]  # delete 0th dummy data
            self.Logger.debug("%s: register values: %s", self.Name, reg)
            RTDdata = reg[1] << 8 | reg[2]
            self.Logger.debug("%s: RTD data: %s", self.Name, hex(RTDdata))
            ADCcode = RTDdata >> 1
            self.Logger.debug("%s: ADC code: %s", self.Name, hex(ADCcode))
            self.Vout = ADCcode
            self._Resistance = round(ADCcode * self.Rref / 8192, 1)
            self.Logger.debug(msg + "success, Vout: %s, resistance: %s Ohm" % (self.Vout, self._Resistance))
            return True
    except Exception as e:
        # The original snippet was cut off here; log the failure and report it to the caller.
        self.Logger.error(msg + "failed: %s" % e)
        return False
I have coded the 80C51 architecture in VHDL using Xilinx. In an attempt to increase the clock frequency, I pipelined all the 80C51 instructions. The instructions execute as desired; for example, while the first instruction is being processed, the second instruction is fetched.
However, according to the synthesis report I only get a slightly higher clock frequency (around +/-10 Hz), despite creating a pipeline depth of 3. I figured out that the bottleneck is one operation identified by the synthesis report, but I could not understand the report itself.
May I ask what the data path from 'SEQ/decode_3' to 'SEQ/i_ram_addr_7' is trying to do?
(My guess is that it is the case/when statement used to check the 100+ relevant opcodes, but I am not sure whether that is the bottleneck. I am clueless.)
Hence, my only 2 queries are:
Firstly, is it possible that pipelining does not increase the clock frequency, and that a testbench is the only way to show the reduction in timing?
Secondly, how can I deduce which path in my code is the bottleneck, given 'SEQ/decode_3 to SEQ/i_ram_addr_7'?
Thank you to anyone who can help explain my doubts!
Timing Summary:
---------------
Speed Grade: -4
Minimum period: 12.542ns (Maximum Frequency: 79.730MHz)
Minimum input arrival time before clock: 10.501ns
Maximum output required time after clock: 5.698ns
Maximum combinational path delay: No path found
Timing Detail:
--------------
All values displayed in nanoseconds (ns)
=========================================================================
Timing constraint: Default period analysis for Clock 'clk'
Clock period: 12.542ns (frequency: 79.730MHz)
Total number of paths / destination ports: 113114 / 2670
-------------------------------------------------------------------------
Delay: 12.542ns (Levels of Logic = 10)
Source: SEQ/decode_3 (FF)
Destination: SEQ/i_ram_addr_7 (FF)
Source Clock: clk rising
Destination Clock: clk rising
Data Path: SEQ/decode_3 to SEQ/i_ram_addr_7
Gate Net
Cell:in->out fanout Delay Delay Logical Name (Net Name)
---------------------------------------- ------------
FDC:C->Q 102 0.591 1.364 SEQ/decode_3 (SEQ/decode_3)
LUT4_D:I1->O 10 0.643 0.885 SEQ/de_state_cmp_eq002111 (N314)
LUT4:I3->O 7 0.648 0.740 SEQ/de_state_cmp_eq00711 (SEQ/de_state_cmp_eq0071)
LUT4:I2->O 3 0.648 0.534 SEQ/i_ram_addr_mux0000<0>11111 (N2301)
LUT4:I3->O 1 0.648 0.000 SEQ/i_ram_addr_mux0000<0>11270_SW0_SW0_F (N1284)
MUXF5:I0->O 1 0.276 0.423 SEQ/i_ram_addr_mux0000<0>11270_SW0_SW0 (N955)
LUT4_D:I3->O 6 0.648 0.701 SEQ/i_ram_addr_mux0000<0>11270 (SEQ/i_ram_addr_mux0000<0>11270)
LUT3_L:I2->LO 1 0.648 0.103 SEQ/i_ram_addr_mux0000<7>221_SW2_SW0 (N1208)
LUT4:I3->O 1 0.648 0.423 SEQ/i_ram_addr_mux0000<7>351_SW1 (N1085)
LUT4:I3->O 1 0.648 0.423 SEQ/i_ram_addr_mux0000<7>2 (SEQ/i_ram_addr_mux0000<7>2)
LUT4:I3->O 1 0.648 0.000 SEQ/i_ram_addr_mux0000<7>167 (SEQ/i_ram_addr_mux0000<7>)
FDE:D 0.252 SEQ/i_ram_addr_7
----------------------------------------
Total 12.542ns (6.946ns logic, 5.596ns route)
(55.4% logic, 44.6% route)
=========================================================================
Timing constraint: Default OFFSET IN BEFORE for Clock 'clk'
Total number of paths / destination ports: 154 / 154
-------------------------------------------------------------------------
Offset: 8.946ns (Levels of Logic = 6)
Source: rst (PAD)
Destination: SEQ/i_ram_diByte_1 (FF)
Destination Clock: clk rising
Data Path: rst to SEQ/i_ram_diByte_1
Gate Net
Cell:in->out fanout Delay Delay Logical Name (Net Name)
---------------------------------------- ------------
IBUF:I->O 444 0.849 1.392 rst_IBUF (REG/ext_int/fd_out1_0__or0000)
BUF:I->O 445 0.648 1.425 rst_IBUF_1 (rst_IBUF_1)
LUT3:I2->O 4 0.648 0.730 ROM/data<1>1 (i_rom_data<1>)
LUT4:I0->O 1 0.648 0.500 SEQ/i_ram_diByte_mux0000<1>17_SW0 (N1262)
LUT4:I1->O 1 0.643 0.563 SEQ/i_ram_diByte_mux0000<1>32 (SEQ/i_ram_diByte_mux0000<1>32)
LUT4:I0->O 1 0.648 0.000 SEQ/i_ram_diByte_mux0000<1>60 (SEQ/i_ram_diByte_mux0000<1>)
FDE:D 0.252 SEQ/i_ram_diByte_1
----------------------------------------
Total 8.946ns (4.336ns logic, 4.610ns route)
(48.5% logic, 51.5% route)
=========================================================================
To allow me to be more specific, I will give a snippet of example code from the decode phase of one opcode.
The following is one such case, decoding an opcode for a MOV instruction. There are about 100+ opcodes (100+ instructions), which means this case statement has over 100 when branches.
case OPCODE is
    --MOV A, Rn
    when "11101000" | "11101001" | "11101010" | "11101011" | "11101100" | "11101101" |
         "11101110" | "11101111" =>
        case de_state is
            when E7 =>
                de_state <= E8;
            when E8 =>
                de_state <= E9;
            when E9 =>
                de_state <= E10;
            when E10 =>
                --Draw PSW
                i_ram_addr <= x"D0";
                i_ram_rdByte <= '1';
                de_state <= E11;
            when E11 =>
                --Draw from Rn
                i_ram_addr <= "000" & i_ram_doByte(4 downto 3) & opcode(2 downto 0);
                i_ram_rdByte <= '1';
                de_state <= E12;
            when E12 =>
                --Place into EDR
                EDR <= i_ram_doByte;
                --close rdByte
                i_ram_rdByte <= '0';
            when others =>
        end case;
I hope this gives you a better idea of my VHDL code. I would appreciate any form of help. Thank you!
Since you're using Xilinx, I presume you also have access to PlanAhead? Try "Analyze Timing / Floorplan Design (PlanAhead)" (under "Implement Design" -> "Place & Route").
PlanAhead should open, and give you a view of your timing results in the bottom. Pick the critical path (the one with the least slack), right click it and choose "Schematic", which will bring up a graphical view of the involved primitives. You can then right-click the primitives and choose "Expand Cone" -> "To Flops" to get a view of the surrounding components too.
This should help you get a much better idea of what signals are involved. Try tracing the input and output signals to your VHDL code, and focus on that path for optimization.
There will be no good answers from this information only; we can only guess what source code produced this hardware.
But it is clear that you need to examine the source, make a hypothesis why it is slow, take action to correct the problem, and test the solution.
And repeat until fast enough.
My guess, given your hint that there is a case statement to decode the opcodes...
one of the arms is something like:
when <some expression involving decode> =>
address <= <some address calculation>;
The problem is that often the two expressions are inter-related so that they are evaluated in the same cycle. An example solution would be to precompute the address expression (i.e. in the previous cycle) into a register, and rewrite the case arm as:
when <some expression involving decode> =>
address <= register;
If you guessed right, the result will be slightly faster and you have another (similar) bottleneck to fix. Repeat until fast enough...
But without the source AND the timing analysis, don't expect a more specific answer.
EDIT: now that a fraction of the source code has been posted, the picture is a little clearer:
you have two nested Case statements, each quite large. You clearly need some simplification...
I note that only 2 of the inner case arms assign to i_ram_addr, yet the timing analysis shows a huge and complex mux on i_ram_addr; clearly there are a lot of other case arms that contribute terms to i_ram_addr...
I would suggest that you might have to treat i_ram_addr separately from the main Case statement and write the simplest machine you can to generate i_ram_addr alone.
For example I would note that the OPCODE case arm is equivalent to:
if OPCODE(7 downto 3) = "11101" then ...
and ask how simple you can get a decoder for i_ram_addr alone.
You may find that a lot of other case arms do very similar things with i_ram_addr (the original 8051 designers would have jumped at the chance to simplify logic!).
Synthesis tools can be quite clever at simplifying logic, but when things get too complex they can miss opportunities.
(At this stage I would comment out the i_ram_addr assignments and leave the rest of the decoder alone)