Bandwidth has a variety of meanings in different contexts. In signal processing, it is the difference in frequency (Hertz) between the upper and lower limits of a continuous frequency band. In instrumentation, such as an oscilloscope, it is the range of frequencies above 0 Hz in which the instrument exhibits a specified level of performance. (There is no such thing as negative frequency; harmonics can appear to the left of the Y-axis only when that axis is positioned at a positive frequency value.)
Most signals other than an ideal sine wave occupy a portion of the spectrum as displayed in the frequency domain. Farther in frequency from the fundamental, the amplitude of the harmonics falls off, while the noise floor remains relatively level. If the signal is unbounded in frequency, it of course extends indefinitely beyond the bandwidth of the instrument that displays it. Therefore, a meaningful definition of bandwidth must be based on an amplitude threshold expressed in dB. Decibel values relate to a fixed reference level, and for bandwidth calculations the convention is 3 dB below the maximum signal amplitude, generally at the fundamental, or first harmonic. At that threshold the spectral power density is half its maximum value, and the spectral amplitude in volts is 70.7% of the maximum.
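As a minimal sketch of this convention, the following Python fragment locates the band of frequencies lying within 3 dB of the spectral peak; the frequency axis and the stand-in roll-off data are assumed values for illustration, not measurements.

```python
# Locating the -3 dB bandwidth in a spectrum (illustrative data).
import numpy as np

freqs = np.linspace(0, 100e6, 1001)           # Hz, assumed frequency axis
mags_db = -20 * np.log10(1 + freqs / 20e6)    # stand-in spectral roll-off

threshold = mags_db.max() - 3.0               # the 3-dB-below-peak convention
inside = freqs[mags_db >= threshold]          # bins within 3 dB of the peak

bandwidth = inside.max() - inside.min()
print(f"-3 dB bandwidth: {bandwidth / 1e6:.2f} MHz")
# At the -3 dB points, voltage amplitude is 10**(-3/20), about 70.7% of peak.
```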
In computer technology, bandwidth has another, related meaning: the rate of data transfer, specifically throughput or bit rate, measured in bits per second. You see it at work in the rapidly blinking green LED on a computer modem or Ethernet hub, switch or router. In manufacturers' specifications, bandwidth is the maximum data transfer speed.
A significant factor is channel noise. Paths in a digital communication system can be logical or physical. To measure maximum computer network throughput, one or more bandwidth tests are performed using appropriate instrumentation. One measurement protocol involves transferring a test file between systems: the transfer time is recorded, and throughput is calculated by dividing the file size by the elapsed transfer time. But several relevant factors do not appear in this calculation, such as window size, latency and imperfections in the receiver and transmitter. Throughput is generally less than the TCP receive window (basically, the amount of data a computer can accept without acknowledging the sender) divided by the round-trip time for the transmission, which sets an upper limit on the tested bandwidth.
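A brief sketch of these two calculations, using assumed example values for the file size, transfer time, receive window and round-trip time:

```python
# Measured throughput versus the TCP receive-window bound (assumed values).
file_size_bits = 100e6 * 8        # 100 MB test file
elapsed_s = 90.0                  # measured transfer time in seconds
measured = file_size_bits / elapsed_s          # bits per second

rcv_window_bits = 65535 * 8       # classic TCP receive window, no scaling
rtt_s = 0.050                     # 50 ms round-trip time
window_limit = rcv_window_bits / rtt_s         # upper bound on throughput

print(f"measured:  {measured / 1e6:.1f} Mbit/s")
print(f"TCP bound: {window_limit / 1e6:.1f} Mbit/s")
```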
Bandwidth test software attempts to provide an accurate measurement of maximum bandwidth by transferring a standard amount of data during a predetermined time interval, or a specified amount of data in a minimum amount of time. During the test, Internet transmission delays can skew the result. A more accurate assessment is usually required, and several types of dedicated software can be used to measure throughput accurately and to visualize network protocol results.
While communication link throughput is measured in bit/second units, file sizes are measured in bytes. IEC standards define a megabyte as one million bytes. This is in contrast to the Windows convention, under which a "megabyte" is 1,048,576 bytes (1,024 × 1,024), the quantity IEC nomenclature calls a mebibyte (MiB). Kilobytes and gigabytes share similar dual nomenclatures.
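The difference is easy to see numerically; in this small sketch, the file size is an assumed example:

```python
# Decimal (IEC/SI) versus binary (Windows-style) interpretation of "megabyte".
size_bytes = 5_000_000
mb_decimal = size_bytes / 1_000_000        # SI megabyte: 10**6 bytes
mib_binary = size_bytes / (1024 * 1024)    # mebibyte: 2**20 bytes

print(f"{mb_decimal:.2f} MB (decimal) = {mib_binary:.2f} MiB (binary)")
# A 5 MB (decimal) file displays as about 4.77 "MB" under the binary convention.
```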
The maximum rate at which information can be conveyed over a communication channel of specific bandwidth in the presence of Gaussian noise is stated in the Shannon-Hartley theorem. By way of background, Ralph Hartley and Harry Nyquist both did foundational work on the problem at Bell Laboratories in the 1920s, and their ideas were further developed by Claude Shannon in the 1940s. This work amounted to a comprehensive information theory including the new concept of channel capacity. One end product, of great importance in the digital age, was the Nyquist-Shannon sampling theorem.
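The theorem itself is compact: capacity C = B × log₂(1 + S/N), where B is the bandwidth in Hertz and S/N is the linear signal-to-noise ratio. A quick computation, with an assumed voice-grade channel as the example:

```python
# Shannon-Hartley channel capacity, C = B * log2(1 + S/N).
import math

B = 3100.0                        # channel bandwidth in Hz (assumed example)
snr_db = 30.0                     # signal-to-noise ratio in dB (assumed)
snr_linear = 10 ** (snr_db / 10)  # convert dB to a linear power ratio

C = B * math.log2(1 + snr_linear) # maximum error-free rate, bits per second
print(f"capacity: {C / 1000:.1f} kbit/s")   # about 30.9 kbit/s
```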
An analog-to-digital converter (ADC) is an integral part of the signal path for each analog channel in a digital storage oscilloscope. To convert an analog signal to a digital signal, the analog signal must be sampled at some specific rate. The Nyquist sampling theorem states that perfect reproduction of the information in the digital signal is possible when the sampling rate is greater than twice the highest frequency component in the analog signal. That highest component often lies well above the fundamental, because of harmonics.
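A small demonstration of the criterion, with assumed signal and sample frequencies: a 10 kHz sine sampled at 25 kHz (above twice the signal frequency) shows up where it belongs, while the same sine sampled at 15 kHz aliases down to about 5 kHz.

```python
# Sampling above versus below the Nyquist rate (assumed frequencies).
import numpy as np

f_signal = 10e3                   # 10 kHz sine wave
for fs in (25e3, 15e3):           # adequate and inadequate sample rates
    n = np.arange(64)
    samples = np.sin(2 * np.pi * f_signal * n / fs)
    spectrum = np.abs(np.fft.rfft(samples))
    f_apparent = np.argmax(spectrum) * fs / len(n)   # strongest FFT bin
    print(f"fs = {fs / 1e3:.0f} kHz -> peak near {f_apparent / 1e3:.1f} kHz")
```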
Returning to the work of Shannon and Hartley, the theorem states that the maximum rate of information transfer across a communication link is dependent upon the bandwidth in Hertz and also upon channel noise. Since noise restricts signal transmission, oscilloscope users, seeking to improve a communication link, have an interest in displaying the unobscured signal. Other than waveform averaging, an effective
way to do this is bandwidth limiting. This may seem a strange approach, since we pay a premium for high-bandwidth instrumentation, but it works quite well. Thermal noise is a broad-spectrum phenomenon, and temporarily limiting the displayed bandwidth strips away the noise above the cutoff so the signal can be seen clearly. We are limiting the bandwidth of the displayed signal plus noise, not permanently restricting the instrument. In mitigating noise, signal averaging is sometimes a better approach than bandwidth limiting, under which components of the signal under investigation may fall outside the limited band.
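Here is a sketch of the idea in software: a noisy tone is low-pass filtered well above the signal frequency, which removes the broadband noise outside the passband while leaving the tone intact. The sample rate, cutoff and noise level are all assumed values.

```python
# Bandwidth limiting to suppress broadband noise (illustrative parameters).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100e6                                   # 100 MS/s sample rate
t = np.arange(10_000) / fs
signal = np.sin(2 * np.pi * 1e6 * t)         # 1 MHz tone of interest
noisy = signal + 0.5 * np.random.randn(t.size)   # broadband, thermal-like noise

b, a = butter(4, 5e6 / (fs / 2))             # 4th-order low-pass, 5 MHz cutoff
limited = filtfilt(b, a, noisy)              # zero-phase bandwidth limiting

print(f"noise power before: {np.var(noisy - signal):.3f}")
print(f"noise power after:  {np.var(limited - signal):.3f}")
```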
Passband bandwidth, baseband bandwidth and essential bandwidth are different ways of defining and measuring the presence of electrical and electromagnetic energy within the frequency spectrum. Passband bandwidth figures in the sampling theorem and the Nyquist sampling rate, and it appears in the Shannon-Hartley channel capacity for communications systems. Passband implies a specific range of frequencies with a nonzero lower bound and a finite upper bound. In contrast, baseband bandwidth begins at 0 Hz and extends a finite amount, subject to the bandwidth of the instrument that is measuring it.
Bandwidth, then, is applicable to systems such as filters and communications channels as well as to the signals they convey or process. An important benchmark is the 3 dB level. A system's rated frequency response occurs within 3 dB of the peak. For a passband filter, this lies close to the center frequency; for a low-pass filter, it is not far from 0 Hz. Front-panel controls and menus in a digital storage oscilloscope in FFT mode permit the user to position frequency and amplitude relative to the X- and Y-axes. The 3 dB level also defines the range of frequencies in which amplitude gain is at least 70.7% of peak amplitude gain and the corresponding power gain is at least half of peak power gain.
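Those relationships are easy to verify for a first-order low-pass response, |H(f)| = 1/√(1 + (f/fc)²); the cutoff frequency here is an assumed example:

```python
# The -3 dB relationships at the cutoff of a first-order low-pass response.
import math

fc = 10e6                                    # assumed cutoff frequency, 10 MHz
gain = 1 / math.sqrt(1 + (fc / fc) ** 2)     # response evaluated exactly at fc

print(f"amplitude gain at fc: {gain:.4f}")           # 0.7071 -> 70.7% of peak
print(f"in dB: {20 * math.log10(gain):.2f} dB")      # -3.01 dB
print(f"power gain at fc: {gain ** 2:.2f}")          # 0.50 -> half power
```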
Essential bandwidth is the portion of the frequency spectrum that contains most of the signal energy. Fractional bandwidth is the bandwidth of a device, circuit or component divided by its center frequency. If the bandwidth is 4 MHz and the center frequency is 8 MHz, the fractional bandwidth is 50%.
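Reproducing that worked example in code is a one-liner; the function name is ours, chosen for illustration:

```python
# Fractional bandwidth: bandwidth divided by center frequency.
def fractional_bandwidth(bandwidth_hz: float, center_hz: float) -> float:
    """Return fractional bandwidth as a percentage."""
    return 100 * bandwidth_hz / center_hz

print(fractional_bandwidth(4e6, 8e6))   # 50.0, matching the example above
```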
Rayleigh bandwidth is a central concept in radar technology. It is defined as the inverse of the pulse duration: a 1-μsec pulse has a Rayleigh bandwidth of 1 MHz. Timing such pulses is how distance is measured. Radio waves determine the range, angle or velocity of an object of interest. Radio waves from the transmitter bounce off the target, or they are absorbed or scattered; a portion of the reflected energy is picked up at the receiver and processed to obtain the desired information. Transmitting and receiving antennas are usually in the same location and may be the same conductive body.
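A short numerical sketch: the Rayleigh bandwidth follows directly from the definition above, and the range calculation uses the standard radar relation R = c × Δt / 2, which is not stated in the text but is common practice; the echo delay is an assumed value.

```python
# Rayleigh bandwidth of a pulse and range from an echo delay (assumed values).
c = 3e8                              # speed of light, m/s
pulse_duration = 1e-6                # 1 microsecond pulse
rayleigh_bw = 1 / pulse_duration     # 1 MHz, per the definition above

echo_delay = 200e-6                  # assumed round-trip time, 200 microseconds
target_range = c * echo_delay / 2    # divide by 2 for the one-way distance

print(f"Rayleigh bandwidth: {rayleigh_bw / 1e6:.0f} MHz")
print(f"target range: {target_range / 1e3:.0f} km")
```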
A band of a given width can carry a specific maximum amount of information regardless of where it sits in the frequency spectrum. Shifting the band to a higher carrier frequency does not affect this inherent limitation, which does not depend on the frequency of the carrier.
Data rate or bit rate is the number of bits that are conveyed or processed per unit of time, usually one second. One byte per second (1 B/sec) is equivalent to 8 bit/sec. A related metric is the bit error rate (BER): the number of bit errors divided by the total number of transferred bits in a specified time interval, expressed as a percentage or a fraction.
Finally, the bit error rate complicates bandwidth calculations in the real world. The data D actually getting through a data channel equals the data sent, Ds, times (1 − BER), that is, D = Ds × (1 − BER), with BER expressed as a fractional rate such as 1 in 10^6 = 0.000001. But when an error arises, the data must be resent, or some bits must be devoted to forward error correction overhead. In the case of a resend, throughput drops to a smaller rate, and D becomes [(packets sent per second) − (packets with errors per second)] × (packet data content length).
To convert BER to the number of packets with errors per second, you either assume every bit error destroys a packet (as above), or you must know something about the statistical grouping of errors.
Assume each bit error garbles one packet, the raw data rate is D bits per second, and the packet length is P bits. Errors then arrive at BER × D per second, and each costs a packet of P bits, so roughly BER × D × P bits per second are lost. The net rate is therefore approximately D × (1 − BER × P); equivalently, the delivered fraction is 1 − BER × P, valid while BER × P is much less than 1.
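A sketch of this calculation, with assumed values for the raw rate, packet size and BER; the exact packet survival probability is (1 − BER)^P, and the linear form 1 − BER × P used above is its small-BER approximation.

```python
# Effective throughput when one bit error destroys a packet (assumed values).
raw_rate = 100e6        # D: raw data rate, bits per second
packet_bits = 12_000    # P: packet length in bits (1500-byte packets)
ber = 1e-6              # one error per 10**6 bits, as in the example above

survival_exact = (1 - ber) ** packet_bits    # probability a packet is clean
survival_approx = 1 - ber * packet_bits      # linear approximation from the text

print(f"exact:  {raw_rate * survival_exact / 1e6:.2f} Mbit/s")
print(f"approx: {raw_rate * survival_approx / 1e6:.2f} Mbit/s")
```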