Audio Basics
How is Audio Represented Digitally?
Sound waves are captured by a microphone, which converts the acoustic energy into an electrical analog signal. The analog signal is then fed into an ADC (analog-to-digital converter). Here, two critical processes occur: sampling and quantization.
Sampling
Definition
Sampling is the process of measuring the amplitude of an analog signal at regular intervals. These intervals are determined by the sample rate, expressed in Hertz (Hz). For example, a sample rate of 44.1 kHz means the signal is sampled 44,100 times per second.
Purpose
By sampling the audio signal, we create a series of discrete data points that approximate the continuous analog waveform.
Implication
The Nyquist Theorem states that the sample rate must be at least twice the highest frequency component in the audio signal to accurately reconstruct the original signal. For example, human hearing typically ranges up to 20 kHz, hence the standard CD sample rate of 44.1 kHz (slightly more than twice 20 kHz).
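For instance, here's a small sketch of what sampling produces: one second of a sine tone measured 44,100 times, each measurement being one discrete amplitude value (the 1 kHz tone and the buffer layout are chosen purely for illustration):

```typescript
// Sketch: sample one second of a 1 kHz sine tone at 44.1 kHz.
const sampleRate = 44_100; // samples per second (Hz)
const toneHz = 1_000;      // illustrative tone, well below the Nyquist limit of 22,050 Hz
const samples = new Float32Array(sampleRate); // one second of audio
for (let n = 0; n < samples.length; n++) {
  // Each entry is one discrete measurement of the continuous waveform's amplitude.
  samples[n] = Math.sin(2 * Math.PI * toneHz * (n / sampleRate));
}
```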
Quantization
Definition
Quantization is the process of converting each sampled amplitude value into a digital value. This involves assigning a specific numerical value (quantization level) to each sample, based on its amplitude.
Purpose
The range of possible amplitude values is divided into discrete steps. Each step is assigned a digital value. The bit depth determines the number of possible quantization levels. For instance, a 16-bit system can represent 65,536 (2^16) different levels.
Implication
Quantization introduces a small amount of error, known as quantization noise, because the process involves rounding the true amplitude value to the nearest quantization level. Higher bit depths can reduce this error, leading to higher fidelity audio.
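To make the rounding concrete, here's a small sketch of snapping one sample to a 16-bit level and measuring the resulting error (the specific sample value is arbitrary):

```typescript
// Sketch: quantize a sample in [-1, 1] to one of 2^16 discrete levels.
const bitDepth = 16;
const levels = 2 ** bitDepth; // 65,536 possible quantization levels
const step = 2 / levels;      // the amplitude range [-1, 1] split into equal steps

function quantize(sample: number): number {
  const clamped = Math.max(-1, Math.min(1, sample));
  return Math.round(clamped / step) * step; // round to the nearest level
}

const original = 0.1234567;                   // arbitrary example value
const quantized = quantize(original);
const error = Math.abs(quantized - original); // quantization noise, at most step / 2
console.log({ quantized, error });
```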
Terminology
- Sample Rate: The number of audio samples carried per second, measured in Hertz (Hz).
- Channels: The number of separate audio channels (e.g., mono, stereo, surround sound) in the recording. Mono means a single channel: all audio is combined into one channel.
- Bit Depth: The number of bits used to represent each audio sample.
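Taken together, these terms determine how large raw PCM audio is. A quick sketch, using CD-style values for illustration:

```typescript
// Sketch: the data rate of raw (uncompressed) PCM audio.
const sampleRate = 44_100; // Hz
const channels = 2;        // stereo
const bitDepth = 16;       // bits per sample

const bytesPerSecond = sampleRate * channels * (bitDepth / 8);
console.log(bytesPerSecond); // 176,400 bytes for each second of CD-quality audio
```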
Audio Encoding
Audio encoding refers to the process of converting audio data into a format that can be easily stored, transmitted, and decoded by audio playback devices. This process often involves compression to reduce file size while trying to maintain the quality of the original audio. There are several popular audio encoding formats, each with its own specific use cases and characteristics.
Here are some examples:
- PCM (Pulse Code Modulation)
  - Description: PCM is the most straightforward form of digital audio encoding. It represents the amplitude of the audio signal at uniformly spaced intervals.
  - Usage: It's the standard form of digital audio in computers, CDs, digital telephony, and other digital audio applications.
Note that audio encoding is not the same as audio format. An audio format refers to the entire structure of the audio file: it includes the encoding, but also encompasses other elements like metadata, file headers, and containers. For example, a WAV file typically uses PCM encoding and has its own header that specifies the sample rate, number of samples, and so on.
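As an illustration of how a format wraps an encoding, here's a sketch of the standard 44-byte WAV (RIFF) header for PCM data. The helper function below is hypothetical, but the field offsets follow the RIFF/WAVE layout:

```typescript
// Sketch: build a minimal 44-byte WAV header describing raw PCM data.
function wavHeader(dataLength: number, sampleRate: number, channels: number, bitDepth: number): Uint8Array {
  const view = new DataView(new ArrayBuffer(44));
  const writeTag = (offset: number, tag: string) => {
    for (let i = 0; i < tag.length; i++) view.setUint8(offset + i, tag.charCodeAt(i));
  };
  const blockAlign = channels * (bitDepth / 8);
  writeTag(0, "RIFF");
  view.setUint32(4, 36 + dataLength, true);          // remaining file size
  writeTag(8, "WAVE");
  writeTag(12, "fmt ");
  view.setUint32(16, 16, true);                      // fmt chunk size for PCM
  view.setUint16(20, 1, true);                       // audio format: 1 = PCM
  view.setUint16(22, channels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * blockAlign, true); // bytes per second
  view.setUint16(32, blockAlign, true);              // bytes per sample frame
  view.setUint16(34, bitDepth, true);
  writeTag(36, "data");
  view.setUint32(40, dataLength, true);              // size of the PCM payload that follows
  return new Uint8Array(view.buffer);
}
```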
PCM Audio Representation
When audio is played, it is typically decoded into PCM (Pulse Code Modulation). This is true for most digital audio systems, regardless of the original audio format or encoding method.
There are generally two types of PCM audio representation:
- Float32 Array: Uses a 32-bit floating-point format to represent each sample. When capturing a mic stream or setting up playback in a web environment, PCM is represented in this format.
- Unsigned 8 Array: Uses an array of 8-bit unsigned integers (i.e., bytes), where each sample can span multiple bytes. For example, for mono PCM audio with a bit depth of 16 bits, each sample takes two bytes. This is a lower-level representation and is often used in programming for audio processing.
Here's a code snippet to convert between these two formats (a minimal sketch, assuming mono, 16-bit, little-endian PCM):
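```typescript
// Float32Array samples in [-1, 1] -> 16-bit little-endian PCM packed into a Uint8Array.
function float32ToUint8(samples: Float32Array): Uint8Array {
  const view = new DataView(new ArrayBuffer(samples.length * 2));
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i]));              // clamp to valid range
    view.setInt16(i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true);  // scale to 16-bit integers
  }
  return new Uint8Array(view.buffer);
}

// 16-bit little-endian PCM bytes -> Float32Array samples in [-1, 1].
function uint8ToFloat32(bytes: Uint8Array): Float32Array {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const out = new Float32Array(bytes.byteLength / 2);
  for (let i = 0; i < out.length; i++) {
    out[i] = view.getInt16(i * 2, true) / 0x8000;
  }
  return out;
}
```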
Audio Specs Retell AI Uses
- Phone Calls: Different telephony providers have different audio codecs. Our telephony integrations handle that for you internally, and you don't need to worry about encoding and decoding.
- Web Calls: The frontend web JS SDK abstracts away audio complexity for you. The user audio is captured in PCM format and sent to the backend for processing.
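For context, this is roughly how browser audio surfaces as Float32 PCM under the hood. The sketch below uses the standard Web Audio API directly; it is not the Retell SDK, and the chunk size and callback are illustrative:

```typescript
// Sketch: capture microphone audio in the browser as Float32 PCM chunks.
async function captureMicAsFloat32(onChunk: (samples: Float32Array) => void): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  // ScriptProcessorNode is deprecated but simple; AudioWorklet is the modern alternative.
  const processor = ctx.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (event) => {
    // Each callback delivers Float32 samples in [-1, 1] at the AudioContext's sample rate.
    onChunk(event.inputBuffer.getChannelData(0).slice());
  };
  source.connect(processor);
  processor.connect(ctx.destination);
}
```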