Add i2s Audio in device tree for SAM9x60 board - linux-kernel

Our team has a SAM9x60 board, and we recently added an external audio board (UDA1334A, link: Documents). Unfortunately, that document only covers the Raspberry Pi, and its device tree is quite different from our board's. So I tried to add the codec to the device tree myself, mostly following the SAM9x60 tutorial for a different audio board, but that setup is quite different too.
As I understand it, the audio board uses the UDA1334 codec, and I have to add a sound node to the device tree, like in the SAM9x60 tutorial:
sound {
    compatible = "mikroe,mikroe-proto";
    model = "wm8731 @ sam9x60ek";
    i2s-controller = <&i2s>;
    audio-codec = <&wm8731>;
    dai-format = "i2s";
};
But I haven't found any driver for this card. After looking around, I tried simple-audio-card instead:
sound {
    compatible = "simple-audio-card";
    simple-audio-card,name = "1334 Card";
    simple-audio-card,format = "i2s";
    simple-audio-card,widgets = "Speaker", "Speakers";
    simple-audio-card,routing = "Speakers", "Speaker";
    simple-audio-card,bitclock-master = <&codec_dai>;
    simple-audio-card,frame-master = <&codec_dai>;

    simple-audio-card,cpu {
        #sound-dai-cells = <0>;
        sound-dai = <&i2s>;
    };

    codec_dai: simple-audio-card,codec {
        #sound-dai-cells = <1>;
        sound-dai = <&uda1334>;
    };
};

uda1334: codec@1a {
    compatible = "nxp,uda1334";
    nxp,mute-gpios = <&pioA 8 GPIO_ACTIVE_LOW>;
    nxp,deemph-gpios = <&pioC 3 GPIO_ACTIVE_LOW>;
    status = "okay";
};
When booting, I received this message:
OF: /sound/simple-audio-card,codec: could not get #sound-dai-cells for /codec@1a
asoc-simple-card sound: parse error -22
asoc-simple-card: probe of sound failed with error -22
So am I doing this the right way with simple-audio-card, or is there another way? Normally, ALSA registers a classD sound card, but I think that is just an amplifier. Sorry, I'm an Android software developer and had to take over the hardware work from someone who quit.
External question: I have looked into the Raspberry Pi device tree based on the UDA1334 document, and it is very different. As I understand it, the Pi uses the HiFiBerry DAC overlay, but how can that work with an external DAC like the UDA1334? I haven't seen any extra node in the device tree. It looks like they just enable dtoverlay=hifiberry-dac and dtoverlay=i2s-mmap and it works.
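For reference, the parse error -22 (-EINVAL) above comes from the #sound-dai-cells lookup: simple-audio-card reads that property from the node referenced by sound-dai, i.e. from the codec node itself, not from the simple-audio-card,codec subnode. A minimal sketch of what I would try, assuming a driver matching "nxp,uda1334" exists in the kernel in use (I believe the mainline codec driver for it only appeared around v5.3):

uda1334: codec@1a {
    compatible = "nxp,uda1334";
    nxp,mute-gpios = <&pioA 8 GPIO_ACTIVE_LOW>;
    nxp,deemph-gpios = <&pioC 3 GPIO_ACTIVE_LOW>;
    #sound-dai-cells = <0>;    /* looked up via sound-dai = <&uda1334> */
    status = "okay";
};

The #sound-dai-cells = <1>; in the simple-audio-card,codec subnode would then be dropped: the <&uda1334> phandle carries no extra argument cell, so the codec node should declare <0>.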

Related

Where can I find the mapping of SAMA5D27-SOM1-EK1 devices and its GPIOs?

I am using a SAMA5D27-SOM1-EK1 embedded board.
I built a Linux OS image for it using the Yocto Project, version Sumo.
I need to know the device's GPIOs (gpio-leds and gpio-keys especially) and the mapping of the board.
When I go into /sys/firmware/devicetree/base/leds/red, for example, in the board terminal, I can find the gpios file, but when I open it, it contains symbols I can't read.
I think I can find such things in the generated device tree, but I can't find its path!
Please help me out.
Here is the original dts: https://elixir.bootlin.com/linux/v5.2/source/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts#L510
The relevant part is:
leds {
    compatible = "gpio-leds";
    pinctrl-names = "default";
    pinctrl-0 = <&pinctrl_led_gpio_default>;
    status = "okay"; /* Conflict with pwm0. */

    red {
        label = "red";
        gpios = <&pioA PIN_PA10 GPIO_ACTIVE_HIGH>;
    };

    green {
        label = "green";
        gpios = <&pioA PIN_PB1 GPIO_ACTIVE_HIGH>;
    };

    blue {
        label = "blue";
        gpios = <&pioA PIN_PA31 GPIO_ACTIVE_HIGH>;
        linux,default-trigger = "heartbeat";
    };
};
This shows that the red LED is connected to GPIO PA10, green is on PB1, and blue is on PA31.
The other way to find the info is to look at the schematics here:
http://ww1.microchip.com/downloads/en/DeviceDoc/SAMA5D27-SOM1-EK1_Board%20Files_1.B.B.zip
Page 3 of SAMA5D27-SOM1-EK1_REVB.pdf summarizes the pinmuxing and page 8 shows the actual connection.
Regarding what you want to do (toggling the LED, if I remember correctly), you can simply have a look at /sys/class/leds/red/brightness: writing 0 to that file will turn the LED off, while writing 1 will turn it on.
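For example, from the board's shell:

echo 1 > /sys/class/leds/red/brightness   # turn the red LED on
echo 0 > /sys/class/leds/red/brightness   # turn it off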
The device tree sources are available online and are not present in the target system.
Please follow this link
However, you could discover how it is working by doing a sort of reverse engineering with the Device Tree Compiler (DTC). If it is available on the target, run:
dtc -I fs /sys/firmware/devicetree/base
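To write the decompiled tree to a file instead of stdout (the output file name here is arbitrary):

dtc -I fs -O dts -o extracted.dts /sys/firmware/devicetree/base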

Why isn't my v210 format video showing as such through a V4L loopback device?

In a user-space application, I'm writing v210-formatted video data to a V4L2 loopback device. When I watch the video in VLC or another viewer, I just get clownbarf, and the viewer claims the stream is UYVY or some other format, not v210. I suspect I need to tell the loopback device something more than what I have, to make the stream appear as v210 to the viewer. Is there one more place/way to tell it that it'll be handling a certain format?
What I do now:
int frame_w, frame_h;   /* ((some sane values)) */
int outputfd = open("/dev/video4", O_RDWR);
// check VIDIOC_QUERYCAP, ...
struct v4l2_format fmt;
memset(&fmt, 0, sizeof(fmt));
fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
fmt.fmt.pix.width = frame_w;
fmt.fmt.pix.height = frame_h;
fmt.fmt.pix.bytesperline = inbpr;           // input bytes per row, no padding
fmt.fmt.pix.field = V4L2_FIELD_NONE;        // progressive (enum value 1)
fmt.fmt.pix.sizeimage = frame_h * fmt.fmt.pix.bytesperline;
fmt.fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
int v210width = ((frame_w + 47) / 48) * 48; // round up to multiple of 48 px
int byte_per_row = (v210width * 8) / 3;     // v210 packs 6 px into 16 bytes
fmt.fmt.pix.pixelformat = v4l2_fourcc('v', '2', '1', '0');
fmt.fmt.pix.width = v210width;
fmt.fmt.pix.bytesperline = byte_per_row;
ioctl(outputfd, VIDIOC_S_FMT, &fmt);
// later, in some inner loop...
// ... write stuff to uint8_t buffer[] ...
write(outputfd, buffer, buffersize);
If I write UYVY format, or RGB or others, it can be made to work: viewers display the video and report the correct format.
This code is based on examples, reading the V4L2 docs, and some working in-house code. No one here knows exactly all the things one must do to open and write to a video device.
While there is an easily found example online of how to read video from a V4L2 device, I couldn't find a similarly good example for writing. If such an example exists, it may show the missing piece.
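One check worth adding, if it isn't there already: VIDIOC_S_FMT is allowed to adjust the requested format to the nearest one the driver supports, and its return value is worth verifying too. Reading the format back with VIDIOC_G_FMT (continuing from the code above) shows whether the loopback device actually kept v210:

/* Read the format back after VIDIOC_S_FMT: the driver may have
 * silently substituted a pixel format it supports. */
struct v4l2_format cur;
memset(&cur, 0, sizeof(cur));
cur.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
if (ioctl(outputfd, VIDIOC_G_FMT, &cur) == 0) {
    uint32_t pf = cur.fmt.pix.pixelformat;
    fprintf(stderr, "negotiated: %c%c%c%c, %ux%u, %u bytes/line\n",
            (char)(pf & 0xff), (char)((pf >> 8) & 0xff),
            (char)((pf >> 16) & 0xff), (char)((pf >> 24) & 0xff),
            cur.fmt.pix.width, cur.fmt.pix.height,
            cur.fmt.pix.bytesperline);
}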

MultiRoute Audio Input in iOS

We've been working with AudioUnits in Core Audio. It is simultaneously a very powerful audio framework and one of the worst documented, which makes it both a joy and a frustration to work with.
We want to accomplish something we know iPads have been able to do since iOS 6.0: multiple audio inputs.
So far, from the 2012 developer talk, it appears you have to set the audio session to MultiRoute. We've done this. If I plug in a sound card from a keyboard, I can see that there are two inputs. Great. We're then told that we need to set a ChannelMap on a Remote I/O unit.
To what? Well... here's where it gets vague. We need to set all the channels we don't want to -1 and the channels we want to 0 and 1 (for stereo input or for mono?).
We attempt this and... nothing. Sound still plays through on the 'last in wins' principle: the microphone if everything is unplugged, the sound card if that's the one plugged in. But we can't switch between them.
This setup code is always run before the other function listed below:
func setupAudioSession() {
    self.audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryMultiRoute, with: [.mixWithOthers])
        try audioSession.setActive(true)
        audioSessionWasSetup = true
    } catch let error {
        //TODO: Implement something here
        print(error)
        audioSessionWasSetup = false
    }
}
We then have a remote I/O with an associated audiograph set up. This has been tested and works beautifully. But we need to be able to set where it's pulling sound from.
I've attempted to do it with the following, but it doesn't have any effect; nothing happens.
Am I missing something?
private func setChannelMap(onAudioUnit audioUnit: AudioUnit?, toChannel channelIndex: Int = 0) {
    guard let audioUnit = audioUnit else {
        return
    }
    let numberOfInputChannels: UInt32 = 4 // Two stereo inputs? - I'm just guessing here
    let mapSize: UInt32 = numberOfInputChannels * UInt32(MemoryLayout<Int32>.size)
    // -1 means "do not use this channel"
    var channelMap = [Int32](repeating: -1, count: Int(numberOfInputChannels))
    channelMap[2 * channelIndex] = 0     // left of the selected stereo pair
    channelMap[2 * channelIndex + 1] = 1 // right of the selected stereo pair
    let status = AudioUnitSetProperty(audioUnit,
                                      kAudioOutputUnitProperty_ChannelMap,
                                      kAudioUnitScope_Input,
                                      0,
                                      &channelMap,
                                      mapSize)
    self.checkError(status, "Failed to set Channel Map on input unit")
}
There isn't any documentation on this at all as far as I've been able to find. Nor any code examples.
I hope you can help us.
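One thing that might be worth checking (an educated guess, not a verified fix): on an I/O unit the microphone side is element 1, and the app-facing side of that element is its output scope, whereas the code above sets the map on element 0's input scope. Replacing the AudioUnitSetProperty call inside setChannelMap with something along these lines may behave differently:

// Hypothetical variant: target the input bus (element 1) of the
// Remote I/O unit on its output scope instead of element 0.
let status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_ChannelMap,
                                  kAudioUnitScope_Output,
                                  1, // element 1 = input bus
                                  &channelMap,
                                  mapSize)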

device-tree mismatch: .probe never called

I'm having trouble understanding how the device tree works, or specifically why this driver won't init. This is in the Rockchip vendor kernel for Android, version 3.10.
drivers/watchdog/rk29_wdt.c (reduced for readability)
static const struct of_device_id of_rk29_wdt_match[] = {
    { .compatible = "rockchip,watch dog" }
};

static struct platform_driver rk29_wdt_driver = {
    .probe = rk29_wdt_probe,
    [..]
    .driver = {
        .name = "rk29-wdt",
        .of_match_table = of_rk29_wdt_match,
    },
};

static int __init watchdog_init(void)
{
    printk("watchdog_init\n");
    return platform_driver_register(&rk29_wdt_driver);
}
And this is the SoC .dtsi:
arch/arm/boot/dts/rk3288.dtsi
watchdog: wdt@2004c000 {
    compatible = "rockchip,watch dog";
    reg = <0xff800000 0x100>;
    clocks = <&pclk_pd_alive>;
    clock-names = "pclk_wdt";
    interrupts = <GIC_SPI 79 IRQ_TYPE_LEVEL_HIGH>;
    rockchip,irq = <0>;
    rockchip,timeout = <2>;
    rockchip,atboot = <1>;
    rockchip,debug = <0>;
    status = "okay";
};
However, the .probe function of the driver is never called. It is compiled in, and the __init function is called. I suspect it has something to do with the device tree entry not matching? Maybe the space is an issue?
Or is there anything else that runs before .probe that determines whether the driver should continue?
Also, I'm not sure how a flattened tree works, so maybe this is relevant:
arch/arm/mach-rockchip/rk3288
DT_MACHINE_START(RK3288_DT, "Rockchip RK3288 (Flattened Device Tree)")
    .smp        = smp_ops(rockchip_smp_ops),
    .map_io     = rk3288_dt_map_io,
    .init_time  = rk3288_dt_init_timer,
    .dt_compat  = rk3288_dt_compat,
    .init_late  = rk3288_init_late,
    .reserve    = rk3288_reserve,
    .restart    = rk3288_restart,
MACHINE_END
There are a number of possible ways this might happen, and most of them are well away from the driver code itself. Firstly, a .dtsi fragment alone doesn't tell the whole story - the device tree syntax is hierarchical, so the properties (in particular the status) might still be overridden by the board-level .dts which includes a basic SoC .dtsi file. Secondly, the compiled DTB isn't the last word either, since the bootloader may dynamically modify it before passing it to the kernel - this is typically done for memory nodes and SMP enable methods, but could potentially affect anything.
This kind of debugging is often best tackled in reverse, by examining the state of the booted system, then working backwards to figure out how things got that way - the specifics of this particular question rule some of this out already, but for the sake of completeness:
If the kernel knows about the driver, and it's loaded and properly initialised, it should show up somewhere in /sys/bus/*/drivers/ - otherwise, it may be in a module which needs loading, or it may have failed to initialise due to some unmet dependency on some other driver or resource.
If the kernel knows about the device, it should show up somewhere in /sys/bus/*/devices/, and if it's correctly bound to a driver and probed then they should both have a symlink to each other.
If the device is nowhere to be found, then on a DT-based system the next place to check would be /proc/device-tree/ (dependent on CONFIG_PROC_DEVICETREE on older kernels, and canonically found in /sys/firmware/devicetree/base/ on newer ones) - this will show the view of the DT as the kernel found it, and a bit of poking around there should hopefully make clear any missing nodes or out-of-place properties, such as a disabled node causing the kernel to skip creating a device altogether. Beware that the property files themselves are just the raw data - so you probably want to go snooping with hexdump rather than cat - and that all numeric cells are in big-endian byte order.
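For example, to check the watchdog node's compatible string from the booted system (path assumed from the .dtsi above; adjust it to wherever the node actually sits in the final tree):

hexdump -C /proc/device-tree/wdt@2004c000/compatible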
I notice that your definition is missing the so-called sentinel in your array: a null, empty struct as the final entry.
Look at this example:
static const struct of_device_id clk_ids[] = {
    { .compatible = "sirf,atlas7-clkc" },
    {},
};
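Applied to the driver above, the table would become the following; the kernel's match loop walks the array until it hits an entry with an empty compatible string, so without the terminator it can run off the end of the array:

static const struct of_device_id of_rk29_wdt_match[] = {
    { .compatible = "rockchip,watch dog" },
    { /* sentinel */ },
};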

Firefox 37 throwing error when trying to add microphone volume control for WebRTC audio context

Since Firefox 37 I cannot add volume control to the input (microphone); I get this error:
IndexSizeError: Index or size is negative or greater than the allowed amount
It works fine on Chrome.
Here is the code sample :
var audioContext = new (window.AudioContext || window.webkitAudioContext)(); // define audio context
var microphone = audioContext.createMediaStreamDestination();
var gain = audioContext.createGain();
var speaker = audioContext.createMediaStreamDestination(gain);
gain.gain.value = 1;
microphone.connect(gain);
gain.connect(speaker);
The error is thrown here :
microphone.connect(gain);
Weirdly, it works on Firefox Nightly.
This error is similar to this Stack Overflow question: link
Related link :
link on StackOverflow
Shouldn't you use this for the microphone?
var microphone = audioContext.createMediaStreamSource();
instead of this
var microphone = audioContext.createMediaStreamDestination();
A microphone is not a destination. It is a source.
Firstly, I think it should be
var microphone = audioContext.createMediaStreamSource(stream);
Here, stream is the microphone audio stream. Find more info here.
Also check out this demo, with elaboration here. It is similar to what you are trying to do. Replacing createMediaElementSource with createMediaStreamSource will work.
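Putting the two answers together, a minimal sketch of the intended graph (the getUserMedia call here is the modern promise-based form; Firefox 37-era code may need the older prefixed callback variant):

var audioContext = new (window.AudioContext || window.webkitAudioContext)();
navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var microphone = audioContext.createMediaStreamSource(stream); // source, not destination
    var gain = audioContext.createGain();
    gain.gain.value = 1; // input volume control
    microphone.connect(gain);
    gain.connect(audioContext.destination); // play out through the default output
});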
