BeagleBone Black boot failed with USER2 LED solid on

I am trying to boot a BeagleBone Black from a microSD card. I hold the boot button, power up the board, and then release the button. The user LEDs start blinking but stop after about one second, and only the USER2 LED stays solid. I can't figure out why.
Update: I was using a custom image that was not compatible with the BeagleBone. I rebuilt the image and it booted fine.

This means something failed in the boot process.
You have two possibilities at this point:
Attach a USB-to-UART converter to the debug UART header and read the actual failure messages.
Try a different image or a different SD card, or rewrite your image to the card and verify that it was written correctly.
You haven't given any details about the image you're trying to boot, so it's impossible to be more specific.

Related

How can I capture microphone data and route it to a virtual microphone device?

Recently, I wanted to get my hands dirty with Core Audio, so I started working on a simple desktop app that applies effects (e.g. echo) to the microphone data in real time, so that the processed data can then be used in communication apps (e.g. Skype, Zoom, etc.).
To do that, I figured I have to create a virtual microphone in order to send the processed (effects applied) data to communication apps. For example, the user would select this new virtual microphone device as the Input Device in a Zoom call so that the other users in the call hear her voice with the effects applied.
My main concern is that I need to find a way to "route" the voice data captured from the physical microphone (e.g. the built-in mic) to the virtual microphone. I've spent some time reading the book "Learning Core Audio" by Adamson and Avila; in Chapter 8 the authors explain how to write an app that a) uses an AUHAL to capture data from the system's default input device and b) then sends the data to the system's default output using an AUGraph. Following this example, I figured that I also need to create an app that captures the microphone data only while it's running.
So, what I've done so far:
I've created the virtual microphone, for which I followed the NullAudio driver example from Apple.
I've created the app that captures the microphone data.
For both of the above "modules" I'm certain that they work as expected independently, since I've tested them in various ways. The only missing piece now is how to "connect" the physical mic with the virtual mic: I need to connect the output of the physical microphone to the input of the virtual microphone.
So, my questions are:
Is this something trivial that can be achieved using the AUGraph approach, as described in the book? Should I just find the correct way to configure the graph in order to achieve this connection between the two devices?
The only related thread I found is this, where the author states that the routing is done by
sending this audio data to driver via socket connection So other apps that request audio from out virtual mic in fact get this audio from user-space application that listen for mic at the same time (so it should be active)
but I'm not quite sure how to even start implementing something like that.
The whole process I followed for capturing data from the microphone seems quite long, and I was wondering if there's a more optimal way to do it. The book seems to be from 2012, with some corrections made in 2014. Has Core Audio changed dramatically since then, so that this can be achieved more easily with just a few lines of code?
I think you'll get more results by searching for the term "play through" instead of "routing".
The Adamson / Avila book has an ideal play-through example that, unfortunately for you, only works when both input and output are handled by the same device (e.g. the built-in hardware on most Mac laptops and iPhone/iPad devices).
Note that there is another audio device concept called "playthru" (see kAudioDevicePropertyPlayThru and related properties) which seems to be a form of routing internal to a single device. I wish it were a property that let you set a forwarding device, but alas, no.
Some informal documentation on this: https://lists.apple.com/archives/coreaudio-api/2005/Aug/msg00250.html
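For what it's worth, toggling that property from the plain C AudioObject API would look roughly like the sketch below. I haven't verified it against real hardware; whether a given device honours kAudioDevicePropertyPlayThru is up to its driver, so treat this only as an illustration of the call pattern.

// Sketch: enable device play-through on the default input device.
#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

int main(void)
{
    AudioObjectID inputDevice = kAudioObjectUnknown;
    UInt32 size = sizeof(inputDevice);

    // Find the current default input device.
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultInputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    OSStatus err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                              0, NULL, &size, &inputDevice);
    if (err != noErr || inputDevice == kAudioObjectUnknown) {
        fprintf(stderr, "no default input device (err %d)\n", (int)err);
        return 1;
    }

    // Ask the device to play its input straight through to its own output.
    UInt32 playThru = 1;
    addr.mSelector = kAudioDevicePropertyPlayThru;
    addr.mScope    = kAudioDevicePropertyScopeInput;
    addr.mElement  = kAudioObjectPropertyElementMaster;
    err = AudioObjectSetPropertyData(inputDevice, &addr, 0, NULL,
                                     sizeof(playThru), &playThru);
    fprintf(stderr, "set kAudioDevicePropertyPlayThru: err %d\n", (int)err);
    return err == noErr ? 0 : 1;
}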
I've never tried it, but you should be able to connect input to output on an AUGraph like this. AUGraph is, however, deprecated in favour of AVAudioEngine, which last time I checked did not handle non-default input/output devices well.
I instead manually copy buffers from the input device to the output device via a ring buffer (TPCircularBuffer works well). The devil is in the detail, and much of the work is deciding on what properties you want and their consequences. Some common and conflicting example properties:
minimal lag
minimal dropouts
no time distortion
In my case, if output is lagging too much behind input, I brutally dump everything bar 1 or 2 buffers. There is some dated Apple sample code called CAPlayThrough which elegantly speeds up the output stream. You should definitely check this out.
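To make the ring-buffer approach concrete, here is a rough sketch in C of the two callbacks around a TPCircularBuffer (not my production code). It assumes the input AUHAL and the output unit are already configured elsewhere (devices selected, input enabled, callbacks installed, the ring initialised with TPCircularBufferInit), interleaved stereo float samples on both sides, and a made-up kMaxLagBytes threshold for the "dump the backlog" trick:

// Sketch of copy-via-ring-buffer play-through. The AUHAL/output unit setup is
// assumed to be done elsewhere; this only shows the data path.
#include <AudioToolbox/AudioToolbox.h>
#include <string.h>
#include "TPCircularBuffer.h"

typedef struct {
    AudioUnit        inputUnit;  // AUHAL capturing from the physical mic
    TPCircularBuffer ring;       // shared between the two callbacks
} PlayThroughState;

// Hypothetical lag limit: if more than this is queued, drop the backlog.
#define kMaxLagBytes (16 * 1024)

// Installed via kAudioOutputUnitProperty_SetInputCallback on the input AUHAL.
static OSStatus InputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber, UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    PlayThroughState *st = (PlayThroughState *)inRefCon;

    // Render the freshly captured frames into a local buffer
    // (assumes no more than 2048 frames of stereo float per callback).
    float samples[4096];
    AudioBufferList abl = { .mNumberBuffers = 1 };
    abl.mBuffers[0].mNumberChannels = 2;
    abl.mBuffers[0].mDataByteSize   = inNumberFrames * 2 * sizeof(float);
    abl.mBuffers[0].mData           = samples;

    OSStatus err = AudioUnitRender(st->inputUnit, ioActionFlags, inTimeStamp,
                                   inBusNumber, inNumberFrames, &abl);
    if (err != noErr) return err;

    // If the output side has fallen too far behind, throw the backlog away.
    uint32_t queued;
    TPCircularBufferTail(&st->ring, &queued);
    if (queued > kMaxLagBytes) TPCircularBufferClear(&st->ring);

    TPCircularBufferProduceBytes(&st->ring, abl.mBuffers[0].mData,
                                 abl.mBuffers[0].mDataByteSize);
    return noErr;
}

// Installed as the render callback on the output unit.
static OSStatus OutputCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber, UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    PlayThroughState *st = (PlayThroughState *)inRefCon;
    UInt32 wanted = ioData->mBuffers[0].mDataByteSize;

    uint32_t available;
    void *tail = TPCircularBufferTail(&st->ring, &available);
    UInt32 toCopy = available < wanted ? available : wanted;

    // Copy what we have; pad the rest with silence rather than stalling.
    if (toCopy > 0) memcpy(ioData->mBuffers[0].mData, tail, toCopy);
    memset((char *)ioData->mBuffers[0].mData + toCopy, 0, wanted - toCopy);
    TPCircularBufferConsume(&st->ring, toCopy);
    return noErr;
}

Padding with silence when the ring runs dry trades an occasional dropout for keeping the output stream on time, which is exactly the kind of conflicting-property trade-off mentioned above.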
And if you find a simpler way, please tell me!
Update
I found a simpler way:
create an AVCaptureSession that captures from your mic
add an AVCaptureAudioPreviewOutput that references your virtual device
When routing from microphone to headphones, it sounded like it had a few hundred milliseconds' lag, but if AVCaptureAudioPreviewOutput and your virtual device handle timestamps properly, that lag may not matter.

Interfacing PIC24E with ICD3

I am trying to connect an ICD-3 to a PIC24E microcontroller, and I'm having difficulty getting the device detected. All the recommendations in the ICD-3 interface document have been followed, and still, when sending a command via the PRD line of the ICD-3, the tool reports a "Target ID not detected" error. The MCLR line is pulled up with a 10 kOhm resistor as well. Is there anything else I have to ensure during communication, or have I missed something in between? I have attached the ICD-3 document link as well. (https://ww1.microchip.com/downloads/en/DeviceDoc/DS-51765C.pdf)

How to ensure bootloader is started

I am working with openmote-cc2538 for the project: https://github.com/cetic/6lbr/wiki
When I was trying to flash the OpenMote with this command:
sudo make TARGET=cc2538dk BOARD=openmote-CC2538 bootload0/dev/ttyUSB0 slip-radio.upload
this error message was displayed:
ERROR:Cant CONNECT TO TARGET:Ensure bootloader is started(no answer on sync sequence)
The cc2538-bsl.py script is available for uploading a binary to the OpenMote, but I don't have much experience with it. Can anybody give me some suggestions on how to proceed?
Grab a scope and probe different digital I/O pins on the CC2538. Typically a bootloader will initialize all the ports on the chip and you will see them change state, which is an indication that the bootloader is indeed running. I would guess one of the LEDs would change state as well (which doesn't require a scope).

MCP25625 doesn't send CAN messages

I'm using the MCP25625, which is an MCP2515 with an integrated MCP2551 transceiver, and I'm trying to send messages in a loop.
For some reason I don't see any signal at all on the CANH and CANL lines.
SPI communication works correctly.
I use the software reset procedure.
There is a clean 20 MHz sine wave from the crystal.
There is a TXCAN signal.
At the moment there is nothing at all connected to CANL and CANH, just the probe.
I also tried to run in LOOPBACK mode and it works, but in NORMAL mode nothing comes out.
It seems like the transceiver is broken? I have already changed 2 chips, so that shouldn't be the problem.
Any suggestions?
Schematics
Have you considered the modes of operation of the CAN transceiver?
In your schematic, the state of the transceiver's standby (STBY) pin is not clear.
If you have connected it to the MCU, please pull it LOW to select the normal operation mode of the transceiver (this is a separate setting from the CAN controller's mode configuration, which might cause some confusion!).
Controlling it from the MCU is a good choice, as it gives you more control to prevent network communication from being blocked by a CAN controller that has gone out of control.
Otherwise, connect it to ground to ensure normal operation mode specifically for the built-in transceiver (see the sketch further below).
I referred to the datasheets of the MCP25625, MCP2515 and TJA1050 to reach this conclusion.
The TJA1050 has a pin S for selecting between high-speed mode and silent mode. These modes correspond to the normal mode and standby mode, respectively, of the MCP25625's transceiver.
Also, the pin S configuration on the TJA1050 is similar to the STBY pin configuration on the MCP25625.
0 (LOW) for high-speed / normal mode of the TJA1050 / MCP25625 transceiver
1 (HIGH) for silent / standby mode of the TJA1050 / MCP25625 transceiver
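To make the distinction concrete, here is a rough sketch in C of the two independent steps, using made-up gpio_write()/spi_select()/spi_transfer() helpers and a hypothetical PIN_CAN_STBY pin number (your MCU and wiring will differ): first drive the transceiver's STBY pin low, then request NORMAL mode from the CAN controller over SPI and confirm it in CANSTAT.

// Sketch only: the transceiver standby pin and the CAN controller mode are
// two independent settings on the MCP25625. gpio_write(), spi_select() and
// spi_transfer() are placeholders for your MCU's own GPIO and SPI routines.
#include <stdint.h>

#define MCP_CMD_READ    0x03  /* SPI READ instruction */
#define MCP_CMD_BITMOD  0x05  /* SPI BIT MODIFY instruction */
#define MCP_REG_CANSTAT 0x0E
#define MCP_REG_CANCTRL 0x0F
#define MCP_MODE_MASK   0xE0  /* REQOP[2:0] / OPMOD[2:0] live in bits 7..5 */
#define MCP_MODE_NORMAL 0x00

#define PIN_CAN_STBY 5        /* hypothetical pin wired to the STBY input */

extern void    gpio_write(int pin, int level);   /* placeholder */
extern void    spi_select(int asserted);         /* placeholder chip-select */
extern uint8_t spi_transfer(uint8_t byte);       /* placeholder */

static void mcp_bit_modify(uint8_t reg, uint8_t mask, uint8_t value)
{
    spi_select(1);
    spi_transfer(MCP_CMD_BITMOD);
    spi_transfer(reg);
    spi_transfer(mask);
    spi_transfer(value);
    spi_select(0);
}

static uint8_t mcp_read_reg(uint8_t reg)
{
    spi_select(1);
    spi_transfer(MCP_CMD_READ);
    spi_transfer(reg);
    uint8_t value = spi_transfer(0x00);
    spi_select(0);
    return value;
}

/* Returns 0 on success, -1 if the controller never reported NORMAL mode. */
int can_enter_normal_mode(void)
{
    /* Step 1: wake the integrated transceiver (STBY low = normal operation). */
    gpio_write(PIN_CAN_STBY, 0);

    /* Step 2: ask the CAN controller for NORMAL mode. LOOPBACK mode never
       touches the transceiver, which is why loopback can work even while
       the transceiver is still in standby. */
    mcp_bit_modify(MCP_REG_CANCTRL, MCP_MODE_MASK, MCP_MODE_NORMAL);

    /* Step 3: confirm the mode change by polling CANSTAT.OPMOD. */
    for (int i = 0; i < 1000; i++) {
        if ((mcp_read_reg(MCP_REG_CANSTAT) & MCP_MODE_MASK) == MCP_MODE_NORMAL)
            return 0;
    }
    return -1;
}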
Hope this helps.
At the moment there is nothing at all connected to CANL,CANH, just the probe.
Hope you have connected the termination resistor? It is on the schematic, but ...

Multiple Kinects using Kinect for Windows SDK 1.5

I'm trying to make two Kinects get along in the same application. I've tried to start all the connected Kinects (by calling the Start() method), but only one has the "isRunning" flag set to true. Does anyone know why only one sensor is running?
Later edit: I connected the two Kinects to different USB controllers ... same problem. I've enabled all exceptions, and I get this when the Start method is called for the second Kinect:
This API has returned an exception from an HRESULT: 0x830100AA
The stack trace:
at Microsoft.Kinect.KinectExceptionHelper.CheckHr(Int32 hr)
at Microsoft.Kinect.NuiSensor.NuiInitialize(UInt32 dwFlags)
at Microsoft.Kinect.KinectSensor.Initialize(SensorOptions options)
at Microsoft.Kinect.KinectSensor.Start()
Regards!
The Kinect sensor requires a lot of USB bandwidth, so each Kinect should be connected to a separate USB controller. Try connecting them to different USB host controllers. You can also verify the status under the "Microsoft.Kinect" node in Device Manager.
I disabled skeletal tracking and now both Kinect sensors are running. But now the question arises: why is it not possible to also enable skeletal tracking?
