A digital signal processor (DSP) is a specialized microprocessor chip (or a SIP block) whose architecture is optimized for the operational needs of digital signal processing.[1][2] DSPs are fabricated on MOS integrated circuit chips.[3][4] They are widely used in audio signal processing, telecommunications, digital image processing, radar, sonar and speech recognition systems, and in common consumer electronic devices such as mobile phones, disk drives and high-definition television (HDTV) products.[3]
The goal of a DSP is usually to measure, filter or compress continuous real-world analog signals. Most general-purpose microprocessors can also execute digital signal processing algorithms successfully, but may not be able to keep up with such processing continuously in real time. Dedicated DSPs also usually have better power efficiency, which makes them more suitable for portable devices such as mobile phones, where power consumption is constrained.[5] DSPs often use special memory architectures that can fetch multiple data words or instructions at the same time. They also frequently implement data compression technology, with the discrete cosine transform (DCT) in particular being a widely used compression technique in DSPs.
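The filtering workload described above is dominated by multiply–accumulate (MAC) operations over a sliding window of samples, which is exactly the pattern DSP hardware is built to speed up. The following is a minimal, illustrative sketch of a finite impulse response (FIR) filter in C; the function and variable names are hypothetical and not taken from any particular vendor's library. On a DSP, the inner loop would typically map to single-cycle MAC instructions, with coefficient and sample fetched in parallel from separate memories.

```c
#include <stddef.h>

/* Minimal FIR filter sketch: y[n] = sum over k of h[k] * x[n - k].
 * On a DSP, each iteration of the inner loop typically becomes a
 * single-cycle multiply-accumulate, with the coefficient and the
 * sample fetched from separate memories in the same cycle. */
void fir_filter(const float *x,   /* input samples, length n_samples + n_taps - 1 */
                const float *h,   /* filter coefficients, length n_taps */
                float *y,         /* output samples, length n_samples */
                size_t n_samples,
                size_t n_taps)
{
    for (size_t n = 0; n < n_samples; n++) {
        float acc = 0.0f;                               /* accumulator */
        for (size_t k = 0; k < n_taps; k++) {
            acc += h[k] * x[n + (n_taps - 1) - k];      /* multiply-accumulate */
        }
        y[n] = acc;
    }
}
```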
Overview
Digital signal processing algorithms typically require a large number of mathematical operations to be performed quickly and repeatedly on a series of data samples. Signals (perhaps from audio or video sensors) are constantly converted from analog to digital, manipulated digitally, and then converted back to analog form. Many DSP applications have constraints on latency; that is, for the system to work, the DSP operation must be completed within some fixed time, and deferred (or batch) processing is not viable.
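To make the real-time constraint concrete, the short, self-contained C sketch below works through a hypothetical example (the sample rate and filter length are assumptions chosen for illustration, not figures from the text): a 64-tap filter running on 48 kHz audio must finish 64 multiply–accumulate operations within each roughly 21 µs sample period, every sample, and deferring that work to a later batch is not an option.

```c
#include <stdio.h>

/* Back-of-the-envelope illustration of a real-time DSP constraint.
 * The numbers are hypothetical, chosen only for illustration. */
int main(void)
{
    const double sample_rate_hz = 48000.0;  /* assumed audio sample rate */
    const int    taps           = 64;       /* assumed FIR filter length */

    double sample_period_us = 1e6 / sample_rate_hz;   /* deadline per sample, ~20.8 us */
    double macs_per_second  = sample_rate_hz * taps;  /* sustained MAC throughput needed */

    printf("Per-sample deadline : %.2f us\n", sample_period_us);
    printf("Sustained MAC rate  : %.2f million MAC/s\n", macs_per_second / 1e6);
    return 0;
}
```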
Most general-purpose microprocessors and operating systems can execute DSP algorithms successfully, but are not suitable for use in portable devices such as mobile phones and PDAs because of power efficiency constraints.[5] A specialized DSP, however, tends to provide a lower-cost solution, with better performance, lower latency, and no requirement for specialized cooling or large batteries.[citation needed]
Such performance improvements have led to the introduction of digital signal processing in commercial communications satellites, where hundreds or even thousands of analog filters, switches, frequency converters and other components are required to receive and process the uplinked signals and ready them for downlinking. Replacing these with specialized DSPs brings significant benefits to the satellites' weight, power consumption, complexity/cost of construction, reliability and flexibility of operation. For example, the SES-12 and SES-14 satellites from operator SES, launched in 2018, were both built by Airbus Defence and Space with 25% of capacity using DSP.[6]
The architecture of a DSP is optimized specifically for digital signal processing. Most also support some of the features of an applications processor or microcontroller, since signal processing is rarely the only task of a system. Some useful features for optimizing DSP algorithms are outlined below.
History
Background
Prior to the advent of stand-alone digital signal processor (DSP) chips, early digital signal processing applications were typically implemented using bit-slice chips. The AMD 2901 bit-slice chip with its family of components was a very popular choice. There were reference designs from AMD, but very often the specifics of a particular design were application specific. These bit slice architectures would sometimes include a peripheral multiplier chip. Examples of these multipliers were a series from TRW including the TDC1008 and TDC1010, some of which included an accumulator, providing the requisite multiply–accumulate (MAC) function.
Electronic signal processing was revolutionized in the 1970s by the wide adoption of the MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor),[7] which was originally invented by Mohamed M. Atalla and Dawon Kahng in 1959.[8] MOS integrated circuit technology was the basis for the first single-chip microprocessors and microcontrollers in the early 1970s,[9] and then the first single-chip DSPs in the late 1970s.[3][4]
Another important development in digital signal processing was data compression. Linear predictive coding (LPC) was first developed by Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT) in 1966, and then further developed by Bishnu S. Atal and Manfred R. Schroeder at Bell Labs during the early-to-mid-1970s, becoming a basis for the first speech synthesizer DSP chips in the late 1970s.[10] The discrete cosine transform (DCT) was first proposed by Nasir Ahmed in the early 1970s, and has since been widely implemented in DSP chips, with many companies developing DSP chips based on DCT technology. DCTs are widely used for encoding and decoding in video and audio coding, as well as for multiplexing, control signals, signaling, analog-to-digital conversion, formatting of luminance and color differences, and color formats such as YUV444 and YUV411. DCT-based DSP chips also perform encoding operations such as motion estimation, motion compensation, inter-frame prediction, quantization, perceptual weighting, entropy encoding, variable-length encoding and motion-vector coding, and decoding operations such as the inverse conversion between different color formats (YIQ, YUV and RGB) for display purposes. DCTs are also commonly used in high-definition television (HDTV) encoder/decoder chips.[11]
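For readers unfamiliar with the transform itself, the following is a minimal, unoptimized C sketch of the type-II DCT on a block of n samples, the form commonly used in image and video coding. The function name is illustrative, and production DSP chips use fast factorizations or dedicated hardware rather than this direct O(n²) formulation.

```c
#include <stddef.h>
#include <math.h>

/* Naive type-II DCT of a length-n block:
 *   X[k] = sum over i of x[i] * cos(pi/n * (i + 0.5) * k)
 * Written as a direct O(n^2) double loop for clarity; real codecs use
 * fast factorizations or dedicated hardware DCT units instead. */
void dct_ii(const double *x, double *X, size_t n)
{
    const double pi = 3.14159265358979323846;
    for (size_t k = 0; k < n; k++) {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            sum += x[i] * cos(pi / (double)n * ((double)i + 0.5) * (double)k);
        }
        X[k] = sum;
    }
}
```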
Development
In 1976, Richard Wiggins proposed the Speak & Spell concept to Paul Breedlove, Larry Brantingham, and Gene Frantz at Texas Instruments' Dallas research facility. Two years later, in 1978, they produced the first Speak & Spell, with its technological centerpiece being the TMS5100,[12] the industry's first digital signal processor. It also set other milestones, being the first chip to use linear predictive coding to perform speech synthesis.[13] The chip was made possible with a 7 µm PMOS fabrication process.[14]
In 1978, American Microsystems (AMI) released the S2811.[3][4] The AMI S2811 "signal processing peripheral", like many later DSPs, has a hardware multiplier that enables it to perform a multiply–accumulate operation in a single instruction.[15] The S2811 was the first integrated circuit chip specifically designed as a DSP, and was fabricated using VMOS (V-groove MOS), a technology that had previously not been mass-produced.[4] It was designed as a microprocessor peripheral for the Motorola 6800,[3] and it had to be initialized by the host. The S2811 was not successful in the market.
In 1979, Intel released the 2920 as an "analog signal processor".[16] It had an on-chip ADC/DAC with an internal signal processor, but it did not have a hardware multiplier and was not successful in the market.
In 1980, the first stand-alone, complete DSPs – Nippon Electric Corporation's NEC µPD7720 and AT&T's DSP1 – were presented at the International Solid-State Circuits Conference '80. Both processors were inspired by research in public switched telephone network (PSTN) telecommunications. The µPD7720, introduced for voiceband applications, was one of the most commercially successful early DSPs.[3]
The Altamira DX-1 was another early DSP, utilizing quad integer pipelines with delayed branches and branch prediction.[citation needed]
Another DSP produced by Texas Instruments (TI), the TMS32010, presented in 1983, proved to be an even bigger success. It was based on the Harvard architecture, and so had separate instruction and data memory. It already had a special instruction set, with instructions like load-and-accumulate or multiply-and-accumulate. It could work on 16-bit numbers and needed 390 ns for a multiply–add operation. TI is now the market leader in general-purpose DSPs.
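As a rough illustration of what a 16-bit multiply–accumulate instruction does on such a chip, the C sketch below emulates a Q15 fixed-point MAC with a wider 32-bit accumulator. This is a generic fixed-point idiom, not TMS32010 code, and the names are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

/* Emulation of a 16-bit fixed-point (Q15) multiply-accumulate: the
 * product of two 16-bit operands is summed into a wider 32-bit
 * accumulator, which is what dedicated MAC hardware provides in a
 * single instruction. Guard bits and saturation are omitted for
 * brevity; this is a generic idiom, not TMS32010 assembly. */
int32_t mac_q15(const int16_t *a, const int16_t *b, size_t n)
{
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++) {
        acc += (int32_t)a[i] * (int32_t)b[i];  /* 16x16 -> 32-bit product */
    }
    return acc;  /* Q30 result; shift right by 15 to return to Q15 */
}
```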
About five years later, the second generation of DSPs began to spread. They had three memory banks so that two operands could be fetched simultaneously, included hardware to accelerate tight loops, and had an addressing unit capable of loop addressing. Some of them operated on 24-bit variables, and a typical model required only about 21 ns for a MAC. Members of this generation included, for example, the AT&T DSP16A and the Motorola 56000.
The main improvement in the third generation was the appearance of application-specific units and instructions in the data path, or sometimes as coprocessors. These units allowed direct hardware acceleration of very specific but complex mathematical problems, like the Fourier transform or matrix operations. Some chips, like the Motorola MC68356, even included more than one processor core to work in parallel. Other DSPs from 1995 include the TI TMS320C541 and the TMS320C80.
The fourth generation is best characterized by changes in the instruction set and in instruction encoding/decoding. SIMD extensions were added, and VLIW and superscalar architectures appeared. Clock speeds continued to increase, and a 3 ns MAC became possible.
See also
- Digital signal controller
- Graphics processing unit
- System on a chip
- Hardware acceleration
- Vision processing unit
- MDSP – a multiprocessor DSP
- OpenCL
References
- ↑ Dyer, S. A.; Harms, B. K. (1993). "Digital Signal Processing". In Yovits, M. C. (ed.). Advances in Computers. Vol. 37. Academic Press. pp. 104–107. doi:10.1016/S0065-2458(08)60403-9. ISBN 9780120121373.
- ↑ Liptak, B. G. (2006). Process Control and Optimization. Instrument Engineers' Handbook. Vol. 2 (4th ed.). CRC Press. pp. 11–12. ISBN 9780849310812.
- ↑ "1979: Single Chip Digital Signal Processor Introduced". The Silicon Engine. Computer History Museum. Retrieved 14 October 2019.
- ↑ Taranovich, Steve (August 27, 2012). "30 years of DSP: From a child's toy to 4G and beyond". EDN. Retrieved 14 October 2019.
- ↑ Ingrid Verbauwhede; Patrick Schaumont; Christian Piguet; Bart Kienhuis (2005-12-24). "Architectures and Design techniques for energy efficient embedded DSP and multimedia processing" (PDF). rijndael.ece.vt.edu. Retrieved 2017-06-13.
- ↑ Beyond Frontiers. Broadgate Publications. September 2016. p. 22.
- ↑ Grant, Duncan Andrew; Gowar, John (1989). Power MOSFETS: Theory and Applications. Wiley. p. 1. ISBN 9780471828679. "The metal-oxide-semiconductor field-effect transistor (MOSFET) is the most commonly used active device in the very large-scale integration of digital integrated circuits (VLSI). During the 1970s these components revolutionized electronic signal processing, control systems and computers."
- ↑ "1960: Metal Oxide Semiconductor (MOS) Transistor Demonstrated". The Silicon Engine: A Timeline of Semiconductors in Computers. Computer History Museum. Retrieved August 31, 2019.
- ↑ Shirriff, Ken (30 August 2016). "The Surprising Story of the First Microprocessors". IEEE Spectrum. Institute of Electrical and Electronics Engineers. Retrieved 13 October 2019.
- ↑ Gray, Robert M. (2010). "A History of Realtime Digital Speech on Packet Networks: Part II of Linear Predictive Coding and the Internet Protocol" (PDF). Found. Trends Signal Process. 3 (4): 203–303. doi:10.1561/2000000036. ISSN 1932-8346.
- ↑ Stanković, Radomir S.; Astola, Jaakko T. (2012). "Reminiscences of the Early Work in DCT: Interview with K.R. Rao" (PDF). Reprints from the Early Days of Information Sciences. 60. Retrieved 13 October 2019.
- ↑ "Speak & Spell, the First Use of a Digital Signal Processing IC for Speech Generation, 1978". IEEE Milestones. IEEE. Retrieved 2012-03-02.
- ↑ Bogdanowicz, A. (2009-10-06). "IEEE Milestones Honor Three". The Institute. IEEE. Archived from the original on 2016-03-04. Retrieved 2012-03-02.
- ↑ Khan, Gul N.; Iniewski, Krzysztof (2017). Embedded and Networking Systems: Design, Software, and Implementation. CRC Press. p. 2. ISBN 9781351831567.
- ↑ Alberto Luis Andres. "Digital Graphic Audio Equalizer". p. 48.
- ↑ Intel. https://www.intel.com/Assets/PDF/General/35yrs.pdf#page=17 (PDF). p. 17.