## How do you convert sound decibel (dB) measurements to a common loudness scale?

0

I'm using Android IP Webcam as a sound sensor to monitor environmental noise levels in an interior room. It outputs values in the 135-230 dB range, where ~135 dB is dead-of-night with everything off, and ~230 dB is a loud action scene in a movie being played. I need to convert these values to something meaningful on a common loudness scale. What formula should be used to make that conversion?

What is the reference for the camera's dB claims? They are not in any standard units for noise at those numbers. Sounds more like dBFS with floating point on a camera that uses AGC and has the signal really jacked up. – joe sixpak – 2019-11-23T21:11:28.040

2

There is no formula you can use to make this conversion. It is unlikely your webcam would be suitable as a sound sensor due to the processing involved in detecting and multiplexing the audio into the visual data stream. Also, the 'dB' values you are quoting are meaningless.

In order to be able to detect something useful, you would need a raw feed from the microphone, a known calibration reference source and the correct filters. Only then could you begin to determine what the coefficients are to get anything remotely meaningful out of the system. Any 'formula' would be unique to the device.

Perfect accuracy isn't necessary for this use case; rough equivalence is fine. Given that the measurements coming out of the sensor are consistent across different levels of real-world noise in the environment, and that I can say "this measurement from the sensor corresponds to this level on this other scale", what formula can I use to convert individual measurements to the other scale? Is a simple ratio of the minimum measurement to the minimum on the other scale appropriate for a logarithmic scale like dB? Something else? – Ross Patterson – 2019-10-20T03:05:17.973

I wasn't suggesting a way of getting perfect accuracy. I was just suggesting the minimum you need to do to get something you can use. – Mark – 2019-10-20T03:15:25.273

0

The levels look to be about 100 dB too high, but that's easily ± 10 dB off, which is a large margin.

At 'dead of night' you run into the noise floor of your cheap mic, but 25-30 dB is about right for night in a quiet neighborhood.

At the top of the range, it's possible the mic is clipping; you'd have to record some loud sounds and look at the waveform in an audio editor to be sure. 100 dB is already unpleasantly loud. My iPhone mic is limited to about 100 dB.

As Mark says, calibration is the only way to get numbers that are reasonably close. Once you have done a calibration at one level, the rest of the scale should be reasonably close: if it weren't, you'd get really strange-sounding audio.

Thanks, but precision isn't necessary here. What I'm asking for is how one should convert between 2 different dB scales. Is subtracting the difference the right way for logarithmic scales like this? That leaves the loud movie at ~130 dB, which is obviously not right. Is a ratio correct? (35 dB / 135 dB) * 230 dB = ~60 dB, which seems possibly correct. What is the correct formula for converting/translating between dB scales? – Ross Patterson – 2019-10-21T03:17:39.687

Without having an accurate comparison [see Mark's answer], you will never know. Those figures are so far out of whack that it is absolute guesswork. You might be able to guesstimate 'dead of night' but there's no way you're going to guesstimate 'loud movie'. – Tetsujin – 2019-10-21T07:13:42.030

To translate between dB scales, you can just add/subtract. So in this case, subtract 100. – Hobbes – 2019-10-21T07:40:58.233

1

@Hobbes - subtracting any figure is not going to work on those numbers. If dead quiet is 35dB, then who do you know with a TV or 5.1 system that can generate 130dB?? https://en.wikipedia.org/wiki/Loudest_band

– Tetsujin – 2019-10-22T10:10:06.540

0

Your measurements are in a log scale, and Common Loudness is a log scale, so the conversion is linear. Just pick a Common Loudness value that you think corresponds to dead-of-night and one that you think corresponds to a loud action scene. Then, it's simple algebra to interpolate between them. Whatever bias may exist in your sensor gets removed, because your results will be relative to the endpoints. Actually, you only need one point to remove bias; having two endpoints also removes scaling error.
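As a sketch, the two-point calibration above is just a linear map. The raw endpoint values are from the question; the reference levels (30 dB for dead-of-night, 95 dB for a loud movie) are assumptions you'd replace with your own guesses or measurements:

```python
def calibrate(raw_lo, raw_hi, ref_lo, ref_hi):
    """Return a function mapping raw sensor readings to the reference scale.

    raw_lo/raw_hi: sensor readings at two known conditions (e.g. 135, 230).
    ref_lo/ref_hi: assumed true levels for those conditions (e.g. 30, 95).
    Both scales are logarithmic (dB), so the mapping between them is linear.
    """
    scale = (ref_hi - ref_lo) / (raw_hi - raw_lo)

    def convert(raw):
        return ref_lo + (raw - raw_lo) * scale

    return convert

# Reference levels here are guesses, not measured values:
convert = calibrate(raw_lo=135, raw_hi=230, ref_lo=30, ref_hi=95)
print(convert(135))  # 30.0
print(convert(230))  # 95.0
print(convert(180))  # ≈ 60.8, a mid-range reading mapped proportionally
```

Any constant bias in the sensor cancels out because the endpoints anchor the line; the two points together also absorb a constant scaling error.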

As Hobbes and Mark point out, non-linearities in the sensor can significantly distort your results. As Hobbes mentions, clipping can be a big factor. If the measurements seem to flatten out when the sound gets very loud, then you need to take the high calibration point at the knee of that curve and consider measurements above that point useless.

If the camera is giving you the measurements directly, then I would expect the measurements to be taken before AGC is applied. If you are feeding the camera audio into some other device to take the measurement, then AGC will completely distort the results.

Note that the transfer function of your webcam might not be flat, meaning that sounds at different pitches might not be measured on the same scale. A room full of screaming children and a charging herd of cattle might sound like the same level to you and me but be measured at very different levels by your camera.