Default for 0 dB sound level as an absolute float value - winapi

I'm currently building something like a tiny software audio synthesizer on Windows 7 in C++. The core engine is running, and upon receiving MIDI events it plays notes, changes programs, etc. What puzzles me at the moment is where to put the 0 dB reference sound pressure level of the output channels.
Let's say the synthesizer produces a 440 Hz sine wave with an amplitude of |0.5f|. In order to calculate the sound level in dB I need to set the reference level (0 dB). Does anyone know something like a default for this?

When decibels relative to full scale are in question, a.k.a. dBFS, 0 dB is assigned to the maximum possible digital level. A quote from Wikipedia:
0 dBFS is assigned to the maximum possible digital level. For example, a signal that reaches 50% of the maximum level at any point would peak at -6 dBFS, i.e. 6 dB below full scale. All peak measurements will be negative numbers, unless they reach the maximum digital value.

First you need to be clear about units. dB on its own is a ratio, not an absolute value. As Roman R. suggested, you can just use 0 dB to mean "full scale"; your range will then be 0 dB (max) down to some negative dB value corresponding to the minimum level you are interested in (e.g. -120 dB). However, this is just an arbitrary measurement which doesn't tell you anything about the absolute value of the signal.
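For the dBFS case, here is a minimal sketch (assuming your samples are floats with full scale at ±1.0) of turning a peak amplitude into dBFS; the 0.5 amplitude from the question comes out at about -6 dBFS:

```cpp
#include <cmath>
#include <cstdio>

// Convert a peak amplitude (full scale = 1.0) to dBFS.
// Values at or below zero are clamped to a chosen floor, e.g. -120 dBFS.
double amplitude_to_dbfs(double amplitude, double floor_db = -120.0)
{
    if (amplitude <= 0.0)
        return floor_db;
    double db = 20.0 * std::log10(amplitude);
    return (db < floor_db) ? floor_db : db;
}

int main()
{
    // The 440 Hz sine from the question with peak amplitude 0.5:
    std::printf("%.2f dBFS\n", amplitude_to_dbfs(0.5)); // prints -6.02 dBFS
    return 0;
}
```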
In your question though you refer to dB SPL (SPL = Sound Pressure Level), which is an absolute unit. 0 dB SPL is typically defined as 20 µPa (RMS), which is around the threshold of human hearing, and in this case the range of interest might be, say, -20 dB SPL to +120 dB SPL. However, if you really do want to measure dB SPL and not just an arbitrary dB value, then you will need to calibrate your system to take into account microphone gain, microphone frequency response, A/D sensitivity/gain, and various other factors. This is non-trivial, but essential if you actually want to implement some kind of SPL measuring system.
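If you do go down the dB SPL route, the usual practical shortcut is to measure a known reference once through the whole chain (e.g. a 94 dB SPL calibrator tone, which is 1 Pa RMS) and store the offset between that level and the dBFS reading you got. A minimal sketch, where the calibration reading is a made-up, device-specific number:

```cpp
#include <cmath>

// Hypothetical calibration: the dBFS (RMS) reading obtained while a
// 94 dB SPL reference tone (1 Pa RMS) was applied to the microphone.
const double kCalReferenceDbSpl = 94.0;   // level of the calibrator
const double kCalReadingDbfs    = -26.0;  // example value; measure your own

// Convert an RMS level in dBFS to dB SPL using the stored calibration offset.
double dbfs_to_db_spl(double rms_dbfs)
{
    return rms_dbfs + (kCalReferenceDbSpl - kCalReadingDbfs);
}
```

The single stored offset folds microphone sensitivity and A/D gain into one calibration constant, rather than requiring each to be known separately.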

Related

Is there real-time DC offset compensation in Ettus N310?

I am working with an Ettus N310 that is being controlled by some third-party software. I don't have much insight into how they set up and control the device; I just tell it what center frequency to tune to and when to grab IQ. If I receive a signal, let's say a tone, at or very near the center frequency, I end up with a large DC offset that jumps around every few hundred microseconds. If I offset the signal well away from the center frequency, the DC offset is negligible. From what I see in Ettus' documentation, DC offset compensation is something that's set once when the device starts receiving, but it looks to me like here it is being done periodically while the USRP is acquiring data. If I receive a signal near the center frequency, the DC offset compensator gets messed up and creates a worse bias. Is this a feature on the N310 that I am not aware of, or is this probably something that the third-party controller is doing?
Yes, there's a DC offset compensation in the N310. The N310 uses an Analog Devices RFIC (the AD9371), which has these calibrations built-in. Both the AD9371 and the AD9361 (used in the USRP E3xx and B2xx series) don't like narrow-band signals close to DC due to their calibration algorithms (those chips are optimized for telecoms signals).
Like you said, the RX DC offset compensation happens at initialization. At runtime, the quadrature error correction (QEC) kicks in. The manual holds a table of those calibrations: https://uhd.readthedocs.io/en/latest/page_usrp_n3xx.html#n3xx_mg_calibrations. You can try turning off the QEC tracking and see if it improves your system's performance.
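If you can experiment from your own code rather than through the third-party software, UHD's multi_usrp API exposes toggles for the automatic DC-offset and IQ-imbalance corrections. Whether and how these map onto the AD9371's tracking calibrations on the N310 is something to verify against the manual page linked above, so treat this only as a sketch:

```cpp
#include <uhd/usrp/multi_usrp.hpp>

int main()
{
    // The address is a placeholder; use your device's.
    auto usrp = uhd::usrp::multi_usrp::make(uhd::device_addr_t("addr=192.168.10.2"));

    const size_t chan = 0;

    // Disable the automatic IQ-imbalance (quadrature error) correction and,
    // for comparison, the automatic DC-offset correction on this channel,
    // then look at the behaviour near DC with and without them.
    usrp->set_rx_iq_balance(false, chan);
    usrp->set_rx_dc_offset(false, chan);

    return 0;
}
```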

USRP N320 recording the edges

When I record a signal with a USRP N320 SDR, it has some problems at the edges of the spectrum. For example, when I choose a sample rate of 50 Msps, the first 2 MHz and the last 2 MHz of the spectrum give wrong results. When it sees a pulse at the edges, it decreases the power and shifts the frequency a little bit. But a 46 MHz bandwidth works perfectly.
Sample rate: 50 Msps, Properly working bandwidth: 46 MHz
Sample rate: 100 Msps, Properly working bandwidth: 90 MHz
Sample rate: 200 Msps, Properly working bandwidth: 180 MHz
I tried to filter the edges with a bandpass filter, but it gives the OOOOOO problem, even if I choose a sample rate of 50 Msps. Normally, I can record successfully without the bandpass filter when I choose a sample rate of 200 Msps.
Is there a solution to record the edges correctly, or to filter them without dropping samples?
First off:
I tried to filter the edges with a bandpass filter, but it gives the OOOOOO problem
means that you are getting overflows: your computer isn't fast enough to apply the filter to the data stream. That might mean one of two things: you've designed a filter that's too long and could be shorter while still doing what you want, or what you want really does require a filter of that length, in which case you'll need a faster PC (hard) or a faster filter implementation (did you try the FFT filters?).
For example, when I choose a sample rate of 50 Msps, the first 2 MHz and the last 2 MHz of the spectrum give wrong results.
This is not surprising! Remember that anything with an ADC needs an anti-aliasing filter on the analog side, and these can't be arbitrarily sharp. So, necessarily, the spectrum at the edge of your band gets somewhat attenuated, and there's a bit of aliasing there. The attenuation you could counteract by applying an equalizing filter on your PC (which would necessarily be even more compute-intensive than what already happens on the USRP), but the aliasing of the lowest frequencies onto the highest, and vice versa, caused by the finite steepness of the analog anti-aliasing filter, you cannot repair. That's the signal processing blues for any kind of acquisition device.
There's one trick though, which the USRP uses: when your requested sampling rate is lower than the ADC's sampling rate, the USRP can internally apply a (better!) digital filter to select that target sampling rate as bandwidth, and decimate to that.
Thus, depending on the ADC rate to output sampling rate relationship (in UHD, the ADC rate is called "master clock rate", MCR), there's further digital filtering and decimation going on in the digital logic inside the N320. These filters also can't be infinitely sharp – and you might see that.
Generally, you'd want the decimation between the MCR and the sampling rate you've requested to be an even number, and not too large. I don't have the N320's digital signal processing architecture in my head right now, but I bet using a decimation that's a multiple of 4 or even 8 is a good move – you get to use the nicer half-band filters then.
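As a back-of-the-envelope check (assuming a 200 MHz master clock rate, which is consistent with the 50/100/200 Msps rates in the question), you can tabulate the decimation each requested rate implies and see which ones land on the friendlier factors:

```cpp
#include <cstdio>

int main()
{
    // Assumed master clock rate; the N320 supports several, so check your setup.
    const double mcr = 200e6;

    const double requested_rates[] = {200e6, 100e6, 50e6, 25e6};
    for (double rate : requested_rates) {
        double decim = mcr / rate;
        bool nice = (static_cast<long>(decim) % 4 == 0);
        std::printf("rate %6.0f Msps -> decimation %g%s\n",
                    rate / 1e6, decim,
                    nice ? "  (multiple of 4, half-band friendly)" : "");
    }
    return 0;
}
```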
Modern UHD also has the filter API, with which you can work with these digital filters manually; that is rarely what you really want to do here, though.

How to find average seek time in disk scheduling algorithms?

Seek time: the amount of time required to move the read/write head from its current position to the desired track.
I am looking for the formula for the average seek time used in disk scheduling algorithms.
The first step is to determine the geometry of the device itself. This is difficult. Modern hard disks cannot be described by the old "cylinders, heads, sectors" triplet; the number of sectors per track differs between tracks (more sectors on outer tracks where the circumference is greater, fewer sectors on inner tracks where the circumference is smaller), and all of the information you can get about the drive (from the device itself, or from any firmware or OS API) is a lie to make legacy software happy.
To work around that you need to resort to "benchmarking tactics". Specifically, read LBA sector 0 and then LBA sector 1 and measure the time it took (to establish a baseline for "time taken when both sectors are on the same track"); then read LBA sector 0 followed by LBA sector N in a loop (with N starting at 2 and increasing), measuring the time each pair takes, comparing it to the previous value, and looking for a larger jump in time that indicates you've found the boundary between "track 0" and "track 1". Then repeat this (starting with the first sector in "track 1") to find the boundary between "track 1" and "track 2", and keep repeating it to build an array of "how many sectors on each track". Note that it is not this simple - there are various pitfalls (e.g. physical sectors larger than logical sectors, sectors that are interleaved on the track, bad block replacement, internal caches built into the disk drive, etc.) that need to be taken into account. Of course this will be excessively time consuming (e.g. you don't want to do this for every disk every time an OS boots), so you'll want to obtain the hard disk's identification (manufacturer and model number) and store the auto-detected geometry somewhere, so that you can skip the auto-detection if the geometry for that model of disk was previously stored.
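A sketch of that boundary-finding loop is below. The read_sector() helper is hypothetical: a real implementation would issue raw, uncached reads in a platform-specific way, and the jump threshold would need tuning against the pitfalls mentioned above.

```cpp
#include <chrono>
#include <cstdint>

// Hypothetical helper: synchronously read one logical sector by LBA.
// A real implementation would issue a raw, uncached read (platform-specific).
void read_sector(uint64_t lba)
{
    (void)lba; // stub for this sketch
}

// Time a "seek" pattern: read `from`, then read `to`, return elapsed seconds.
double time_pair(uint64_t from, uint64_t to)
{
    read_sector(from);
    auto t0 = std::chrono::steady_clock::now();
    read_sector(to);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

// Walk outward from `track_start` until the pair time jumps noticeably,
// which suggests we crossed onto the next track.
uint64_t find_track_boundary(uint64_t track_start, double jump_factor = 1.5)
{
    double baseline = time_pair(track_start, track_start + 1);
    for (uint64_t n = 2;; ++n) {
        double t = time_pair(track_start, track_start + n);
        if (t > baseline * jump_factor)
            return track_start + n; // roughly the first sector of the next track
        baseline = (baseline + t) / 2; // smooth out measurement noise a little
    }
}
```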
The next step is to use the information about the real geometry (and not the fake geometry), combined with more "benchmarking tactics", to determine performance characteristics. Ideally you'd be trying to find constants for a formula like expected_time = sector_read_time + rotational_latency + distance_between_tracks * head_travel_time + head_settle_time (a sketch of this estimator follows the list below), which could be done like this:
measure the time to read the first sector in the first track followed by sector N in the first track, for every value of N (every sector in the first track); find the minimum time this can take, divide it by 2, and call it sector_read_time.
measure the time to read the first sector in the first track followed by sector N in the first track, for every value of N (every sector in the first track); find the maximum time this can take, divide it by the number of sectors in the first track, and call it rotational_latency.
measure the time to read the first sector in track N followed by the first sector in track N+1, with N ranging from 0 to max_track - 1; determine the average and call it time0.
measure the time to read the first sector in track N followed by the first sector in track N+2, with N ranging from 0 to max_track - 2; determine the average and call it time1.
assume head_travel_time = time1 - time0
assume head_settle_time = time0 - head_travel_time - sector_read_time
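Here is the estimator those constants feed into, written out directly from the formula above (a generic estimate, not an accurate predictor):

```cpp
#include <cstdlib>

// Constants obtained from the benchmarking steps above (all in seconds).
struct DiskTimings {
    double sector_read_time;
    double rotational_latency;
    double head_travel_time;   // per track of head movement
    double head_settle_time;
};

// expected_time = sector_read_time + rotational_latency
//               + distance_between_tracks * head_travel_time + head_settle_time
double expected_time(const DiskTimings& t, int current_track, int target_track)
{
    int distance_between_tracks = std::abs(target_track - current_track);
    return t.sector_read_time + t.rotational_latency
         + distance_between_tracks * t.head_travel_time + t.head_settle_time;
}
```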
Note that there are various pitfalls with this too (same as before), and (if you work around them) the best you can hope for is a generic estimate (and not an accurate predictor).
Of course this will also be excessively time consuming, and if you're storing the auto-detected geometry somewhere it'd be a good idea to also store the auto-detected performance characteristics in the same place; so that you can skip all of the auto-detection if all of the information for that model of disk was previously stored.
Note that all of the above assumes "stand alone rotating platter hard disk with no caching and no hybrid/flash layer" and will be completely useless for a lot of cases. For some of the other cases (SSD, CD/DVD) you'd need different techniques to auto-detect their geometry and/or characteristics. Then there's things like RAID and virtualisation to complicate things more.
Mostly, it's far too much hassle to bother with in practice.
Instead, just assume that cost = abs(previous_LBA_sector_number - next_LBA_sector_number) and/or let the hard disk sort out the optimal order itself (e.g. using Native Command Queuing - see https://en.wikipedia.org/wiki/Native_Command_Queuing ).
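As a sketch of what that simple cost model looks like inside a scheduler, here is a shortest-seek-time-first-style ordering that uses LBA distance as its stand-in for seek time:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// The simple cost model: distance in LBA space.
uint64_t cost(uint64_t previous_lba, uint64_t next_lba)
{
    return previous_lba > next_lba ? previous_lba - next_lba
                                   : next_lba - previous_lba;
}

// SSTF-style ordering of a pending request queue: repeatedly pick the
// request whose LBA is closest to the current head position.
std::vector<uint64_t> order_requests(uint64_t head_lba, std::vector<uint64_t> pending)
{
    std::vector<uint64_t> ordered;
    while (!pending.empty()) {
        auto next = std::min_element(pending.begin(), pending.end(),
            [&](uint64_t a, uint64_t b) {
                return cost(head_lba, a) < cost(head_lba, b);
            });
        head_lba = *next;
        ordered.push_back(*next);
        pending.erase(next);
    }
    return ordered;
}
```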

PWM transistor heating - Raspberry Pi

I have a Raspberry Pi and an auxiliary PCB with transistors for driving some LED strips.
The strips' datasheet says 12 V, 13.3 W/m. I'll use 3 strips in parallel, 1.8 m each, so 13.3 * 1.8 * 3 = 71.82 W, which at 12 V is almost 6 A.
I'm using an 8 A transistor, the E13007-2.
In the project I have 5 channels of different LEDs: RGB and 2 types of white.
R, G, B, W1 and W2 are directly connected to Raspberry Pi pins.
The LED strips are connected to 12 V, with CN3 and CN4 as the GND returns (through the transistors).
Transistor schematic.
I know that's a lot of current passing through the transistors, but is there a way to reduce the heating? I think they're reaching 70-100°C. I already had a problem with one Raspberry Pi, and I think it's getting dangerous for the application. I have some large traces on the PCB, so that's not the problem.
Some thoughts:
1 - A resistor driving the base of the transistor. Maybe it won't reduce heating, but I think it's advisable for short-circuit protection. How can I calculate this?
2 - The PWM has a frequency of 100 Hz. Is there any difference if I reduce this frequency?
The BJT you're using has a current gain (hFE) of roughly 20. This means that the collector current is roughly 20 times the base current, i.e. the base current needs to be 1/20 of the collector current: 6 A / 20 = 300 mA.
The Raspberry Pi certainly can't supply 300 mA from its IO pins, so you're operating the transistor in the linear region, which causes it to dissipate a lot of heat.
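To put numbers on that (the ~16 mA GPIO drive figure is an assumption, a commonly quoted per-pin limit for the Raspberry Pi):

```cpp
#include <cstdio>

int main()
{
    const double collector_current = 6.0;   // A, all strips on
    const double hfe               = 20.0;  // rough current gain of the E13007-2
    const double gpio_max_current  = 0.016; // A, assumed GPIO drive capability

    const double base_current_needed = collector_current / hfe; // 0.3 A
    const double max_saturated_ic    = gpio_max_current * hfe;  // ~0.32 A

    std::printf("base current needed to saturate at 6 A: %.0f mA\n",
                base_current_needed * 1e3);
    std::printf("collector current the GPIO could actually saturate: %.2f A\n",
                max_saturated_ic);
    // Anything beyond that leaves the BJT in the linear region, where a large
    // collector-emitter voltage times 6 A is dissipated as heat.
    return 0;
}
```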
To keep it simple, change your transistors to MOSFETs with a low enough threshold voltage (around 2.0 V, so there is enough conduction at a 3.3 V IO voltage).
An N-channel MOSFET will run much cooler if you apply enough gate voltage to enhance it completely. Since this is not a high-volume item, why not simply use a MOSFET gate driver chip? Then you can use a device with low RDS(on). Another option is the Siemens BTS660 (S50085B / BTS50085B, TO-220); it is a high-side driver that you will need to drive with an open-collector or open-drain device. It will switch 5 A at room temperature with no heat sink, is rated for much more current, and is available in a TO-220-type package. It is obsolete but still available, as is its replacement. Note that MOSFETs are voltage controlled, while BJTs are current controlled.

What would be the effect of increasing the number of bytes?

One byte is used to store each of the three color channels in a pixel. This gives 256 different levels each of red, green and blue. What would be the effect of increasing the number of bytes per channel to 2 bytes?
2^16 = 65536 values per channel.
The raw image size doubles.
Processing the file takes roughly twice as long ("roughly", because you have more data, but then again the new data size may be better suited to your CPU and/or memory alignment than the previous 3-byte groups -- "3" is an awkward data size for CPUs).
Displaying the image on a typical screen may take more time (where "a typical screen" is 24- or 32-bit and does not yet have hardware acceleration for this particular job).
Chances are you cannot store the image back in the original file format. (Currently, TIFF is the only file format I know of that routinely uses 16 bits/channel. There may be more. Can yours?)
The image quality may degrade. (If you add bytes, you cannot set them to a sensible value automatically. If 3 bytes of 0xFF signified 'white' in your original image, what would be the comparable 16-bit value: 0xFFFF or 0xFF00? Why? Whichever you choose, remember you have to make a similar choice for black; see the sketch after this list.)
Common library routines may stop working correctly. Only the very best libraries are data-size agnostic (and they'd still need to be rewritten to make use of the new size).
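As a sketch of the choice raised in the image-quality point: one common convention is to widen each 8-bit channel by replicating the byte (multiplying by 257), which keeps 0x00 as black and maps 0xFF to 0xFFFF, whereas a plain left shift never reaches full scale:

```cpp
#include <cstdint>

// Widen an 8-bit channel to 16 bits by bit replication: v * 257,
// i.e. (v << 8) | v, so 0x00 -> 0x0000 and 0xFF -> 0xFFFF.
uint16_t widen_channel(uint8_t v)
{
    return static_cast<uint16_t>((v << 8) | v);
}

// Naive left shift only (0xFF -> 0xFF00) never reaches full scale,
// which is exactly the "what is white now?" problem described above.
uint16_t widen_channel_naive(uint8_t v)
{
    return static_cast<uint16_t>(v << 8);
}
```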
If this is a real world scenario -- say, I just finished writing a fully antialiased graphics 2D library, and then my boss offhandedly adds this "requirement" -- it'd have a particular graphic effect on me as well.
