How can an A53 driver configure the PL310? - caching

I didn't find any information that any A53 platform works with the PL310 cache controller.
I'm not sure whether using an external cache controller like the PL310 is still a typical design for an A53 AArch64 platform.
If I have to use a PL310 with my A53, how can I write a driver for it under Linux? Many of the PL310 registers are secure write-only.

I am no Linux kernel expert, but there already is an "arm,pl310-cache"-compatible driver for the PL310 and friends in Linux: arch/arm/mm/cache-l2x0.c.
Update: this is a five-year-old whitepaper, but it does mention the PL310.
However, the kernel's device-tree binding documentation for the l2c2x0 driver has the following note:
Note 1: The description in this document doesn't apply to integrated L2
cache controllers as found in e.g. Cortex-A15/A7/A57/A53. These
integrated L2 controllers are assumed to be all preconfigured by
early secure boot code. Thus no need to deal with their configuration
in the kernel at all.
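
On 32-bit ARM systems the PL310 is also not something an individual driver programs directly: cache-l2x0.c registers itself as the kernel's "outer cache", and the generic DMA helpers route the necessary maintenance through it. A minimal sketch of that indirect path (the function below is hypothetical, and note that arm64 has no outer-cache framework at all):

#include <linux/dma-mapping.h>

/* Hypothetical example: preparing a buffer for device DMA. */
static void example_dma_prepare(struct device *dev, void *buf, size_t len)
{
        dma_addr_t handle;

        /* dma_map_single() performs the required L1 maintenance and,
         * via the outer_cache ops installed by cache-l2x0.c, the
         * PL310 maintenance as well (32-bit ARM only). */
        handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, handle))
                return;

        /* ... start the DMA transfer using 'handle' ... */

        dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
}

On an A53, per the note above, there is simply nothing for the kernel to configure: the integrated L2 is set up by secure boot firmware before Linux runs.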

How to use Python to communicate with NIC PCIe

I am testing a custom FPGA NIC and I need to send management information (such as header info for matching) and traffic data to it using a traffic generator from user space.
The driver built for the FPGA is a modified version of IXGBE with DMA support for management; it also supports DPDK for kernel bypass to achieve high throughput.
I am trying to understand how the various software components (driver, user-space application, etc.) should be stacked/connected to each other so that I can read from and write to the PCIe device on the NIC using a set of scripts from user space.
I have also been looking at this project
https://github.com/CospanDesign/python-pci
which is useful, but it is based on the Xilinx XDMA.
I would appreciate any help or pointers on this.
Sorry, the question is too broad. For such a broad question there is a generic answer: have a look at inter-process communication:
https://en.wikipedia.org/wiki/Inter-process_communication
There are a variety of methods, like Unix sockets, shared memory, netlink, etc., to communicate between user-space processes, as well as a variety of methods to communicate between user space and kernel space.
Just pick the best one for you and try something. If it fails, come back to SO and ask ;)
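
That said, since the concrete goal is reading and writing PCIe registers from user space, one mechanism worth knowing about is the sysfs resource files, which let a process mmap() a BAR directly. A minimal sketch, assuming a hypothetical device at 0000:03:00.0 with a 4 KiB BAR0 (substitute your NIC's address from lspci -D):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define BAR_SIZE 0x1000 /* assumption: 4 KiB BAR0 */

int main(void)
{
        int fd = open("/sys/bus/pci/devices/0000:03:00.0/resource0",
                      O_RDWR | O_SYNC);
        if (fd < 0) { perror("open"); return 1; }

        volatile uint32_t *bar = mmap(NULL, BAR_SIZE,
                                      PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);
        if (bar == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        bar[0] = 0x1;                       /* write register at offset 0 */
        printf("reg0 = 0x%08x\n", bar[0]);  /* read it back */

        munmap((void *)bar, BAR_SIZE);
        close(fd);
        return 0;
}

A Python script can open and mmap the same resource0 file with the standard mmap module, so simple register reads and writes don't strictly require any custom kernel interface.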

Is there a caching mechanism when using Linux VFS functions directly?

I'm building an application on top of the KVM hypervisor where I access (thousands of times) a small database (3 MB) by calling Linux kernel VFS functions directly.
After building a stable prototype of my application, I want to optimise its access to the database (by adding a cache, for example).
I know that for file operations from user space, Linux uses the page cache to speed up the application; is this also true when using VFS functions from kernel space?
Yes. As I expect you know, VFS is an abstraction layer, the idea being that all file systems look the same no matter what their implementation details are.
VFS can therefore do some caching at the VFS level (the page cache), and then there is a buffer cache for all block devices further down the layer cake.
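
To make that concrete, here is a minimal sketch of kernel-space file access on a recent kernel (the path and function name are made up). kernel_read() goes through the same generic read path as a user-space read(), so repeated reads are served from the page cache either way:

#include <linux/err.h>
#include <linux/fs.h>

/* Hypothetical example: read the first bytes of the database file. */
static int example_read_db(void)
{
        struct file *f;
        char buf[64];
        loff_t pos = 0;
        ssize_t n;

        f = filp_open("/var/lib/mydb/db.bin", O_RDONLY, 0);
        if (IS_ERR(f))
                return PTR_ERR(f);

        /* Second and later reads of the same range are normally
         * satisfied from the page cache, not the block device. */
        n = kernel_read(f, buf, sizeof(buf), &pos);

        filp_close(f, NULL);
        return n < 0 ? n : 0;
}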

How to choose one version of the drivers to be loaded on boot when multiple drivers for the same hardware exist?

I'm working with embedded Linux.
There are two USB gadget drivers built as LKMs, g_ether.ko and g_file_storage.ko.
I ran depmod, and then modprobe -l shows both drivers in the list:
kernel/drivers/usb/gadget/g_ether.ko
kernel/drivers/usb/gadget/g_file_storage.ko
The problem is that the kernel doesn't load either of them on boot automatically.
Currently my solution is to add boot scripts to /etc/init.d and /etc/rcX.d to force g_ether.ko to be loaded on boot as the default driver.
Are there other (better) ways to make g_ether.ko the default driver?
A possible solution would be to build g_ether in statically and keep g_file_storage.ko as an LKM, but I don't know how to turn off a built-in driver so that it releases the hardware and another LKM driver can be loaded.
Any suggestions?
It's the user's choice whether to use the USB peripheral controller for Ethernet or for storage, so there is no hardware event that could trigger automatic loading of one client driver over the other.
But there is a way to bind/unbind drivers from user space through sysfs. Look at this: https://lwn.net/Articles/143397/
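
For illustration, a minimal sketch of that generic sysfs mechanism in C (the bus, driver, and device names below are placeholders; whether your gadget stack exposes bind/unbind files this way depends on the kernel version, so check /sys/bus/*/drivers/ on the target board):

#include <stdio.h>

/* Write a single string to a sysfs attribute file. */
static int sysfs_write(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");
        if (!f)
                return -1;
        fputs(val, f);
        return fclose(f);
}

int main(void)
{
        /* Hypothetical names: detach the controller from one driver ... */
        sysfs_write("/sys/bus/platform/drivers/old_driver/unbind",
                    "musb-hdrc.0");
        /* ... and hand it to the other. */
        return sysfs_write("/sys/bus/platform/drivers/new_driver/bind",
                           "musb-hdrc.0");
}

For the boot-time default, a one-line entry for g_ether in /etc/modules (on distributions that support it) or a modprobe call in an init script, as you are doing, remains the usual approach on embedded systems.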

Do access points use SoftMAC or HardMAC?

I am trying to understand how wireless works in Linux. I started with the wpa_supplicant and hostapd applications, with the help of their documentation and source code. I understood the flow and basic functionality of:
wpa_supplicant, nl80211 (driver interface)
the libnl library (socket communication between user space and kernel using the netlink protocol)
cfg80211 (kernel interface used for communicating with the driver from user space with the help of the nl80211 implementation in user space), mac80211 (software media access control layer)
the driver (a loadable driver, e.g. ath6kl, the Atheros driver).
In my exploration I came to know that, to give developers freedom, the MAC layer is implemented in software (the popular implementation being mac80211).
Is this true in all cases? If so, what are the pros and cons of SoftMAC and HardMAC? Does the cfg80211 interface in the kernel communicate directly with the driver? Who communicates with mac80211, and how?
Thanks in advance.
The term 'SoftMAC' refers to a wireless network interface device (WNIC) which does not implement the MAC layer in hardware, rather it expects the drivers to implement the MAC layer.
'HardMAC' (also called 'FullMAC') describes a WNIC which implements the MAC layer in hardware.
The advantages of SoftMAC are:
Potentially lower hardware costs
Possibility to upgrade to newer standards by updating the driver only
Possibility to correct faults in the MAC implementation by updating the driver only
An additional advantage (in the Linux kernel at least) is that many different drivers for different types of WNIC can all share the same MAC implementation, provided by the kernel itself.
Despite these advantages, not all WNICs use SoftMAC. The main advantage of HardMAC is that, since the MAC functions are implemented in hardware, they contribute less CPU load.
mac80211 is the framework within the Linux kernel for implementing SoftMAC drivers. It implements the cfg80211 callbacks which would otherwise have to be implemented by the driver itself, and also implements the MAC layer functions. As such it goes between cfg80211 and the SoftMAC drivers.
HardMAC drivers have to implement the cfg80211 interfaces fully themselves.
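
To illustrate where mac80211 sits, here is a minimal sketch of the skeleton every SoftMAC driver follows (names are hypothetical, and a real driver must implement more than shown). The driver fills in struct ieee80211_ops and registers with mac80211; mac80211 then provides the cfg80211 ops on the driver's behalf:

#include <net/mac80211.h>

/* Hypothetical transmit handler: hand a frame to the hardware. */
static void example_tx(struct ieee80211_hw *hw,
                       struct ieee80211_tx_control *control,
                       struct sk_buff *skb)
{
        /* ... queue skb to the device ... */
}

static const struct ieee80211_ops example_ops = {
        .tx = example_tx,
        /* .start, .stop, .config, ... are also mandatory */
};

static int example_probe(void)
{
        struct ieee80211_hw *hw;

        /* mac80211 allocates the shared state plus driver-private data. */
        hw = ieee80211_alloc_hw(0, &example_ops);
        if (!hw)
                return -ENOMEM;

        /* ... set capabilities, supported bands, etc. ... */
        return ieee80211_register_hw(hw);
}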
Also to add:
HardMAC drivers offer better power save and quicker connection/disconnection recovery compared to SoftMAC, because the MLME is implemented in hardware. Power save is better because the hardware/firmware need not wake the host on disconnection and can still reconnect and recover on its own.

How does an application interact with hardware in Linux?

I am new to Linux and I want to know the internals of drivers and how they interact with hardware. My question is: how does an application interact with the hardware, i.e. when does the core part come into the picture and what does it do?
When does the controller part of the driver come in, and how does it handle the request generated by the application?
And what is firmware, and when does it come into the picture in Linux?
For example, if I am using a USB device like
$ cat /dev/usb0.1
then which part is the USB core (usb_storage.c) and
which is the controller (usb_hub.c),
and how are they related to each other?
Thanks in advance.
Basically, a Linux kernel driver provides an interface such as a device file node; an application can then use the standard read()/write()/ioctl() calls to pass parameters and operate the hardware. Providing an interface in /proc or /sys is another way to do it. The detailed implementation of how requests are handled depends on the respective hardware spec.
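
As a minimal sketch of that answer (all names here are made up), a character driver wires an application's read() to a kernel function through a file_operations table:

#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/uaccess.h>

/* Called when an application read()s the device node. */
static ssize_t demo_read(struct file *f, char __user *buf,
                         size_t len, loff_t *off)
{
        static const char msg[] = "hello from the kernel\n";

        return simple_read_from_buffer(buf, len, off, msg, sizeof(msg) - 1);
}

static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,
};

static struct miscdevice demo_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "demo",        /* appears as /dev/demo */
        .fops  = &demo_fops,
};

module_misc_device(demo_dev);
MODULE_LICENSE("GPL");

After loading the module, running cat /dev/demo prints the string, which is the same path your cat /dev/usb0.1 example takes through a real driver.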
