Zynq Clock To Use With Devfreq - linux-kernel

I am looking at the exynos4_bus.c driver that is used with devfreq power management, to try to develop a similar driver for a peripheral on a Zynq SoC. The method I'm concerned about is this one:
static int exynos4210_set_busclk(struct busfreq_data *data, struct opp *opp)
{
...
__raw_writel(tmp, EXYNOS4_CLKDIV_DMC0);
...
}
It seems to me that __raw_writel is writing to the Exynos clock register the frequency that it should run at. This register is defined in arch/arm/mach-exynos/include/mach/regs-clock.h. I am now looking at arch/arm/mach-zynq/include/mach/zynq_soc.h to try to find something equivalent for the Zynq setup, but there are quite a few clocks being defined, so I'm not sure which one I should be setting. Can anyone help?

Zynq uses the kernel clock framework.
Include the declaration:
#include <linux/clk.h>
Get a handle on the clock you want, by name:
struct clk *fclk = clk_get_sys("FPGA0", NULL);
long requested_rate = 125000000;
Find the nearest supported frequency:
long actual_rate = clk_round_rate(fclk, requested_rate);
Then set the clock rate:
int status;
if ((status = clk_set_rate(fclk, actual_rate))) {
printk(KERN_INFO "[%s:%d] err\n", __FUNCTION__, __LINE__);
return status;
}
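For reference, the steps above can be combined into one helper with the error checks a real driver would want. This is an untested sketch: the "FPGA0" clock name is taken from the snippet above, and whether you also need clk_prepare_enable() depends on whether the clock is already running, so it is left out here.

#include <linux/clk.h>
#include <linux/err.h>
#include <linux/kernel.h>

/* Untested sketch: look up the FPGA clock by name, round the requested rate
 * to the nearest supported one, and program it. */
static int zynq_set_fclk_rate(unsigned long requested_rate)
{
    struct clk *fclk;
    long rounded;
    int ret;

    fclk = clk_get_sys("FPGA0", NULL);    /* clock name as in the snippet above */
    if (IS_ERR(fclk))
        return PTR_ERR(fclk);

    /* Find the nearest rate the clock can actually generate. */
    rounded = clk_round_rate(fclk, requested_rate);
    if (rounded < 0) {
        ret = rounded;
        goto out;
    }

    ret = clk_set_rate(fclk, rounded);
    if (ret)
        pr_err("%s: clk_set_rate(%ld) failed: %d\n", __func__, rounded, ret);
out:
    clk_put(fclk);
    return ret;
}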

You can change the Zynq FCLK directly through its control register. The easiest way to test this is to write to the address directly using devmem from busybox.
The register for the FCLK_CLK0 Zynq output is called FPGA0_CLK_CTRL. One way to find its address is to open the SDK-generated ps7_init.html and search for the register.
Bits 0-7 select the clock source (0x00 for IO_PLL, 0x20 for ARM_PLL, 0x30 for DDR_PLL)
Bits 8-15 hold the first divisor
Bits 16-19 are reserved
Bits 20-25 hold the second divisor
Example: for my Zynq-7000 I want 28 MHz, so I have DIV0=36, DIV1=1 and want to use the IO_PLL. You can get these values in the Zynq block by setting the corresponding FCLK in the Clock Configuration section and looking in the Advanced Clocking tab. The command to set this in Linux would be "devmem 0xF8000170 32 0x002401".
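To make the register layout concrete, here is a small untested user-space sketch that builds the value from the fields described above and writes it through /dev/mem, which is what devmem does under the hood. The address 0xF8000170 and the field positions are the ones given above; double-check them against ps7_init.html for your own board.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define FPGA0_CLK_CTRL 0xF8000170u  /* address from ps7_init.html, assumed here */

int main(void)
{
    uint32_t sel  = 0x00;  /* clock source: 0x00 = IO_PLL (per the list above) */
    uint32_t div0 = 36;    /* first divisor  (bits 8-15)  */
    uint32_t div1 = 1;     /* second divisor (bits 20-25) */
    uint32_t val  = (div1 << 20) | (div0 << 8) | sel;

    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    /* Map the 4 KiB page that contains the register.  On 32-bit systems,
     * build with -D_FILE_OFFSET_BITS=64 so this offset fits in off_t. */
    off_t page = FPGA0_CLK_CTRL & ~0xFFFu;
    volatile uint32_t *base = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, page);
    if (base == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    base[(FPGA0_CLK_CTRL & 0xFFFu) / 4] = val;
    printf("wrote 0x%08x to FPGA0_CLK_CTRL\n", val);

    munmap((void *)base, 0x1000);
    close(fd);
    return 0;
}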

Related

Esp32 Micropython Max31865 Spi connection and data read

I need to read temperature data with using MAX31865 SPI communication. First of all, I tried to read 4 byte data:
import machine
import ubinascii
spi = machine.SPI(1, baudrate=5000000, polarity=0, phase=0)
#baudrate controls the speed of the clock line in hertz.
#polarity controls the polarity of the clock line, i.e. if it's idle at a low or high level.
#phase controls the phase of the clock line, i.e. when data is read and written during a clock cycle
cs = machine.Pin(15, machine.Pin.OUT)
cs.off()
cs.on()
data = spi.read(4)
cs.off()
print(ubinascii.hexlify(data))
I tried many times with different code, but the result is always similar: b'00000000'.
I am using ESP32 WROOM.
I used this pins:
ESP32 : D12 - D14 - 3V3 - GND - D15
Max31865: SDO - CLK - VIN - GND - CS
I am new to MicroPython and the ESP32.
I don't know what I should do. Are there any suggestions, recommended tutorials, or ideas?
Short answer: see if you can use CircuitPython and its drivers for MAX31865.
Long answer: a bunch of stuff. I suspect you've been following the Adafruit tutorial for MAX31855, but its SPI interface is very different from the MAX31865.
Your SPI connection is missing the SDI pin. You have to connect it, as the communication is bidirectional. Also, I suggest using the default SPI pinout on the ESP32 side as described in the MicroPython documentation for the ESP32.
The SPI startup looks to be missing stuff. Looking at the SPI documentation a call to machine.SPI() requires that you assign values to arguments sck, mosi, miso. Those would probably be the pins on ESP32 side where you've connected SCLK, SDI, SDO on MAX31865 (note mosi means "master out, slave in" and miso is "master in, slave out").
The chip select signal on the MAX is inverted (that's what the line above CS input in the datasheet means). You have to set it low to activate the chip and high to disable it.
You can't just read data out of the chip, it has a protocol you must follow. First you have to request a temperature-to-resistance conversion from the chip. The datasheet for your chip explains how to do that, the relevant info starts on page 13 (it's a bit difficult to read for a beginner, but try anyway as it's the authoritative source of information for this chip). On a high level, it works like this:
Write to Configuration register a value which initiates the conversion.
Wait for the conversion to complete.
Read from the RTD (Resistance-To-Digital) registers to get the conversion result.
Calculate the temperature value from the conversion result.
The code might be closer to this (not tested, and very likely to not work off the bat - but it should convey the idea):
import ubinascii, time
from machine import Pin, SPI
cs = Pin(15, Pin.OUT)
# Assuming you've rewired according to default SPI pinout
spi = SPI(1, baudrate=100000, polarity=0, phase=0, sck=Pin(14), mosi=Pin(13), miso=Pin(12))
# Enable chip
cs.off()
# Prime a 1-shot read by writing 0x40 to the Configuration register 0x00
spi.write(b'\x00\x40')
# Wait for conversion to complete (up to 66 ms)
time.sleep_ms(100)
# Select the RTD MSBs register (0x01) and read 1 byte from it
spi.write(b'\x01')
msb = spi.read(1)
# Select the RTD LSBs register (0x02) and read 1 byte from it
spi.write(b'\x02')
lsb = spi.read(1)
# Disable chip
cs.on()
# Join the 2 bytes
result = msb[0] * 256 + lsb[0]
print(hex(result))
Convert result to temperature according to the section "Converting RTD Data Register Values to Temperature" in the datasheet.
Side note 1: here spi = machine.SPI(1, baudrate=5000000, polarity=0, phase=0) you've configured a baud rate of 5 MHz, which is the maximum for this chip. Depending on how you've connected your devices, it may not work. The SPI protocol is synchronous and driven by the master device, so you can set any baud rate you want. Start with a much, much lower value, maybe 100 kHz or so, and increase it after you've figured out how to talk to the chip.
Side note 2: if you want your conversion result faster than the 100ms sleep in my code, connect the DRDY line from MAX to ESP32 and wait for it to go low. This means the conversion is finished and you can read out the result immediately.

How to use kernel GPIO descriptor interface

I'm trying to develop a simple Linux kernel module that manages a bunch of sensors/actuators pinned on the GPIO of a Raspberry Pi.
The GPIO functionalities I need are quite simple: get/set pin values, receive IRQs, ...
In my code, I have a misc_device which implements the usual open, read and write operations. In my read operation, for instance, I'd like to get the value (high/low) of a specific GPIO pin.
Luckily, the kernel provides an interface for such GPIO operations. Actually, there are two interfaces, according to the official GPIO doc: the legacy one, which is extremely simple yet deprecated, and the new descriptor-based one.
I'd like to use the latter for my project, and I understand how to implement all I need except for one thing: the device tree stuff.
With reference to board.txt, before I can call gpiod_get_index() and later gpiod_get_value(), I first need to set up the device tree somehow like this:
foo_device {
    compatible = "acme,foo";
    ...
    led-gpios = <&gpio 15 GPIO_ACTIVE_HIGH>, /* red */
                <&gpio 16 GPIO_ACTIVE_HIGH>, /* green */
                <&gpio 17 GPIO_ACTIVE_HIGH>; /* blue */
    power-gpios = <&gpio 1 GPIO_ACTIVE_LOW>;
};
However, I've absolutely no clue where to put that chunk of code, nor if I really need it. Mind that I have a misc device which looks like this, where aaa_fops contains the read operation:
static struct miscdevice aaa = {
MISC_DYNAMIC_MINOR, "aaa", &aaa_fops
};
Using the old deprecated interface, my problem would be solved because it doesn't require messing with the device tree, but I'd still like to use the new one if it's not too complex.
I've read a bunch of documentation, both official and unofficial, but couldn't find a straight and simple answer to my issue. I tried to find an answer in the kernel source code, especially in the drivers section, but only got lost in a valley of complex and messy stuff.
The lack of working minimal examples (WME) about the kernel is significantly slowing down my learning process; just my opinion about it.
Could you please give me a WME of a simple device (preferably a misc) whose read() operation gets the value of a pin, using the new GPIO interface?
If you need more details about my code, just ask. Thanks in advance!
Note 1: I'm aware that most of my work could be done in userspace rather than kernelspace; my project is for educational purposes only, to learn the kernel.
Note 2: I chose a misc device because it's simple, but I can switch to a char device if needed.
... first I need to setup the device tree somehow like this:
...
However, I've absolutely no clue where to put that chunk of code
Device Tree nodes and properties should not be called "code".
Most devices are connected to a peripheral bus, so device nodes typically are child nodes of the peripheral bus node.
Could you please give me a WME of a simple device
You can find numerous examples of descriptor-based GPIO usage in the kernel source.
Since the documentation specifies the GPIO descriptor as a property named
<function>-gpios, a grep of the directory arch/arm/boot/dts for the string "\-gpios" reports many possible examples.
In particular there's
./bcm2835-rpi-b.dts: hpd-gpios = <&gpio 46 GPIO_ACTIVE_HIGH>;
This hpd-gpios property belongs to the hdmi base-node defined in bcm283x.dtsi, and is used by the gpu/drm/vc4/vc4_hdmi.c driver.
/* General HDMI hardware state. */
struct vc4_hdmi {
...
int hpd_gpio;
...
};
static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
{
...
/* Only use the GPIO HPD pin if present in the DT, otherwise
* we'll use the HDMI core's register.
*/
if (of_find_property(dev->of_node, "hpd-gpios", &value)) {
...
hdmi->hpd_gpio = of_get_named_gpio_flags(dev->of_node,
"hpd-gpios", 0,
&hpd_gpio_flags);
if (hdmi->hpd_gpio < 0) {
ret = hdmi->hpd_gpio;
goto err_unprepare_hsm;
}
...
}
If the hpd-gpios property is defined/found and successfully retrieved from the board's DeviceTree, then the driver's structure member hpd_gpio holds the GPIO pin number.
Since this driver does not call devm_gpio_request(), the framework apparently allocates the GPIO pin for the driver.
The driver can then access the GPIO pin.
static enum drm_connector_status
vc4_hdmi_connector_detect(struct drm_connector *connector, bool force)
{
...
if (vc4->hdmi->hpd_gpio) {
if (gpio_get_value_cansleep(vc4->hdmi->hpd_gpio) ^
vc4->hdmi->hpd_active_low)
return connector_status_connected;
else
return connector_status_disconnected;
}
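For the sake of the working minimal example asked for in the question, here is an untested sketch that uses the descriptor-based API instead of the integer API shown above. It assumes the foo_device node from the question (compatible "acme,foo" with the led-gpios property) has been added to the board's .dts; the platform driver below matches that node, takes the first GPIO of led-gpios, and returns its value from the misc device's read(). All the "foo" names are made up for illustration.

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

static struct gpio_desc *foo_gpiod;

/* read() returns '0' or '1' depending on the current pin level. */
static ssize_t foo_read(struct file *file, char __user *buf,
                        size_t count, loff_t *ppos)
{
    char val = gpiod_get_value(foo_gpiod) ? '1' : '0';

    return simple_read_from_buffer(buf, count, ppos, &val, 1);
}

static const struct file_operations foo_fops = {
    .owner = THIS_MODULE,
    .read  = foo_read,
};

static struct miscdevice foo_misc = {
    MISC_DYNAMIC_MINOR, "foo", &foo_fops
};

static int foo_probe(struct platform_device *pdev)
{
    /* Looks up the first entry of the "led-gpios" property of the matched node. */
    foo_gpiod = devm_gpiod_get(&pdev->dev, "led", GPIOD_IN);
    if (IS_ERR(foo_gpiod))
        return PTR_ERR(foo_gpiod);

    return misc_register(&foo_misc);
}

static int foo_remove(struct platform_device *pdev)
{
    misc_deregister(&foo_misc);
    return 0;
}

static const struct of_device_id foo_of_match[] = {
    { .compatible = "acme,foo" },
    { }
};
MODULE_DEVICE_TABLE(of, foo_of_match);

static struct platform_driver foo_driver = {
    .probe  = foo_probe,
    .remove = foo_remove,
    .driver = {
        .name = "foo",
        .of_match_table = foo_of_match,
    },
};
module_platform_driver(foo_driver);

MODULE_LICENSE("GPL");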

What's the relationship between GPIO and SPI?

I found that the GPIO driver in the kernel exposes /sys/class/gpio to control GPIO, but I found GPIO can be controlled through /dev/mem as well. I found this mapping may be done in spi-bcm2708 (which calls __ioremap as a platform driver), but I don't understand the relationship between SPI and GPIO, and how they work together in Linux.
As I understand it, you are talking about this driver (which is used, for example, in the Raspberry Pi). First of all, take a look at the BCM2835 datasheet. Review these sections:
1.1 Overview
6.0 General Purpose I/O (GPIO)
6.2 Alternative Functions Assignments (see Table 6-31)
From the driver code (see the bcm2708_init_pinmode() function) and the datasheet (table 6-31), we can see that the SPI pins are actually the GPIO7..11 pins. Those pins can actually be connected to different hardware modules (either SPI or SD, in this case).
Such a selection is done using pin muxing. So basically you need to connect the GPIO7..GPIO11 pins to the SPI module. To do so you need to select the ALT0 function for each of the GPIO7..GPIO11 pins. This can be done by writing the corresponding values to the GPFSEL0 and GPFSEL1 registers (see tables 6-1..6-3 in the datasheet):
And this is how the driver is actually doing this:
/*
* This function sets the ALT mode on the SPI pins so that we can use them with
* the SPI hardware.
*
* FIXME: This is a hack. Use pinmux / pinctrl.
*/
static void bcm2708_init_pinmode(void)
{
#define INP_GPIO(g) *(gpio+((g)/10)) &= ~(7<<(((g)%10)*3))
#define SET_GPIO_ALT(g, a) *(gpio+(((g)/10))) |= (((a) <= 3 ? (a)+4 : (a) == 4 ? 3 : 2)<<(((g)%10)*3))
    int pin;
    u32 *gpio = ioremap(GPIO_BASE, SZ_16K);

    /* SPI is on GPIO 7..11 */
    for (pin = 7; pin <= 11; pin++) {
        INP_GPIO(pin);          /* set mode to GPIO input first */
        SET_GPIO_ALT(pin, 0);   /* set mode to ALT 0 */
    }

    iounmap(gpio);
#undef INP_GPIO
#undef SET_GPIO_ALT
}
which looks like a quick hack to me, and they actually mention it in the FIXME: the correct way would be to use the kernel mechanism called pinctrl.
Conclusion: the BCM2708 driver doesn't actually drive any GPIO pins; it just does pin muxing in order to connect the GPIO7..GPIO11 pins to the SPI module. And to do so, this driver writes to the GPFSELn registers, which happen to be GPIO registers. That is pretty much the entire relationship between SPI and GPIO in this driver.
P.S.: if you are curious about other possible relationships between SPI and GPIO, read about bit banging. See, for example, the spi-bitbang.c driver in the Linux kernel.

Interrupt vector and irq mapping in do_IRQ

I'm working on an x86 system with Linux 3.6.0. For some experiments, I need to know how the IRQ is mapped to the vector. I have learned from many books that vectors 0x0 to 0x1F are for traps and exceptions, and vectors from 0x20 onward are for external device interrupts. This is also defined in the source code Linux/arch/x86/include/asm/irq_vectors.h.
However, what I'm puzzled is that when I check the do_IRQ function,
http://lxr.linux.no/linux+v3.6/arch/x86/kernel/irq.c#L181
I found the IRQ is fetched by looking up the "vector_irq" array:
unsigned int __irq_entry do_IRQ(struct pt_regs *regs)
{
struct pt_regs *old_regs = set_irq_regs(regs);
/* high bit used in ret_from_ code */
unsigned vector = ~regs->orig_ax;
unsigned irq;
...
irq = __this_cpu_read(vector_irq[vector]); // get the IRQ from the vector_irq
// print out the vector to irq mapping
printk("CPU-ID:%d, vector: 0x%x - irq: %d\n", smp_processor_id(), vector, irq);
}
By instrumenting the code with printk, the vector-irq mapping I got is shown below, and I don't have any clue why this is the mapping. I thought the mapping should be (irq + 0x20 = vector), but that seems not to be the case.
from: Linux/arch/x86/include/asm/irq_vectors.h
* Vectors 0 ... 31 : system traps and exceptions - hardcoded events
* Vectors 32 ... 127 : device interrupts = 0x20 – 0x7F
But my output is:
CPU-ID=0.Vector=0x56 (irq=58)
CPU-ID=0.Vector=0x66 (irq=59)
CPU-ID=0.Vector=0x76 (irq=60)
CPU-ID=0.Vector=0x86 (irq=61)
CPU-ID=0.Vector=0x96 (irq=62)
CPU-ID=0.Vector=0xa6 (irq=63)
CPU-ID=0.Vector=0xb6 (irq=64)
BTW, these IRQs are my 10Gb Ethernet cards with MSI-X enabled. Could anyone give me some idea of why this is the mapping, and what the rules are for making it?
Thanks.
William
The irq number (which is what you use in software) is not the same as the vector number (which is what the interrupt controller actually uses).
The x86 I/OAPIC interrupt controller assigns interrupt priorities in groups of 16, so the vector numbers are spaced out to prevent them from interfering with each other (see the function __assign_irq_vector in arch/x86/kernel/apic/io_apic.c). Note how the vectors in your output (0x56, 0x66, 0x76, ...) are exactly 0x10 apart, i.e. one per priority group.
I guess my question is how the vectors are assigned for a particular
IRQ number and what's are the rules behind.
The IOAPIC has a redirection table register (IOREDTBL) for each IRQ input. Software assigns the desired vector number for the IRQ input using bits 7-0 of this register. It is this vector number that serves as an index into the processor's Interrupt Descriptor Table. Quoting the IOAPIC manual (82093AA):
7:0 Interrupt Vector (INTVEC)—R/W: The vector field is an 8 bit field
containing the interrupt vector for this interrupt. Vector values
range from 10h to FEh.
Note that these registers are not directly accessible to software. To access IOAPIC registers (not to be confused with Local APIC registers) software must use the IOREGSEL and IOWIN registers to indirectly interact with the IOAPIC. These registers are also described in the IOAPIC manual.
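To make the indirect access pattern concrete, here is a small untested C sketch of what the manual describes: write the register index into IOREGSEL (offset 0x00 from the IOAPIC base), then read the data back through IOWIN (offset 0x10). The redirection table starts at register index 0x10, with two 32-bit registers per IRQ input, and the vector sits in bits 7:0 of the low half. ioapic_base is assumed to be a mapping of the IOAPIC MMIO region; inside the kernel this same pattern is wrapped by io_apic_read().

#include <stdint.h>

#define IOREGSEL_OFFSET 0x00
#define IOWIN_OFFSET    0x10
#define IOREDTBL_BASE   0x10   /* register index of redirection entry 0 (low half) */

/* Indirect read: select the register index, then read the data window. */
static uint32_t ioapic_read(volatile uint8_t *ioapic_base, uint8_t reg)
{
    *(volatile uint32_t *)(ioapic_base + IOREGSEL_OFFSET) = reg;
    return *(volatile uint32_t *)(ioapic_base + IOWIN_OFFSET);
}

/* Vector currently programmed for a given IOAPIC input pin. */
static uint8_t ioapic_pin_vector(volatile uint8_t *ioapic_base, unsigned int pin)
{
    uint32_t lo = ioapic_read(ioapic_base, IOREDTBL_BASE + 2 * pin);

    return lo & 0xFF;   /* bits 7:0 = INTVEC, as quoted above */
}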
The source information for the IOAPIC can be a little tricky to dig up. Here's a link to the example I used:
IOAPIC data sheet link

Turn an LED on or off in AT Mega-1284P Xplained

I am a beginner with the ATmega1284P Xplained.
I want to turn an LED on and then off (say, LED0) after some specified time on the ATmega1284P Xplained board from Atmel. To my surprise, searching the web I found no official documentation for this rudimentary task, only several different function calls, all of which failed to compile.
Please mention the API call as well as the header file that needs to be included for this. I am using AVR Studio 6.
I will assume an LED is connected to pin 0 of port B on the ATmega1284P. The following program should make the LED blink.
#include <util/delay.h>
#include <avr/io.h>
int main() {
    // Set the pin 0 at port B as output
    DDRB |= (1<<PB0);
    while (1) {
        // Turn led on by setting corresponding bit high in the PORTB register.
        PORTB |= (1<<PB0);
        _delay_ms(500);
        // Turn led off by setting corresponding bit low in the PORTB register.
        PORTB &= ~(1<<PB0);
        _delay_ms(500);
    }
}
Answering my own question: I found that Atmel had example code covering a bunch of sensors and other peripheral components, including LEDs, for the Mega-1284P. The links are link and link. Besides being in hard-to-find locations (they did not show up in web searches), the websites are very slow. Atmel, are you listening?
