This blog is about Computer History

Sunday, January 2, 2011

Microprocessor

A microprocessor incorporates most or all of the functions of a computer's central processing unit (CPU) on a single integrated circuit (IC, or microchip). The first microprocessors emerged in the early 1970s and were used for electronic calculators, performing binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, and various kinds of automation, followed soon after. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers from the mid-1970s on.
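As a rough illustration of the BCD arithmetic mentioned above (not part of the original post), the short C sketch below packs each decimal digit of a number into its own 4-bit nibble, the word size those early calculator chips worked with. The choice of 1971 as the example value is arbitrary.

/* Minimal sketch: binary-coded decimal puts each decimal digit
   into its own 4-bit group, so a 4-bit processor can handle one
   decimal digit at a time. */
#include <stdio.h>

int main(void) {
    int value = 1971;              /* example number, chosen arbitrarily */
    unsigned char bcd[4];          /* one decimal digit per 4-bit slot   */

    /* Extract decimal digits, least significant first. */
    for (int i = 3; i >= 0; i--) {
        bcd[i] = value % 10;       /* each digit 0-9 fits in 4 bits */
        value /= 10;
    }

    /* Print each digit with its 4-bit binary pattern. */
    for (int i = 0; i < 4; i++) {
        printf("digit %d -> ", bcd[i]);
        for (int bit = 3; bit >= 0; bit--)
            printf("%d", (bcd[i] >> bit) & 1);
        printf("\n");
    }
    return 0;
}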
During the 1960s, computer processors were often constructed out of small and medium-scale ICs containing from tens to a few hundred transistors. The integration of a whole CPU onto a single chip greatly reduced the cost of processing power. From these humble beginnings, continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.
Since the early 1970s, the increase in capacity of microprocessors has followed Moore's Law, which suggests that the number of transistors that can be fitted onto a chip doubles every two years. Although originally stated as a doubling every year, Moore later refined the period to two years. It is often incorrectly quoted as a doubling of transistors every 18 months.
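To make the doubling arithmetic concrete, here is a small C sketch of the growth Moore's Law implies. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) is an assumption added here for illustration, not a figure from the post.

/* Rough sketch of Moore's Law arithmetic: the transistor count
   doubles about every two years, so after (year - 1971) years
   there have been (year - 1971) / 2 doublings. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double start_count = 2300.0;   /* assumed 1971 baseline (Intel 4004) */
    int start_year = 1971;

    for (int year = 1971; year <= 2011; year += 10) {
        double doublings = (year - start_year) / 2.0;
        double count = start_count * pow(2.0, doublings);
        printf("%d: about %.0f transistors\n", year, count);
    }
    return 0;
}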
 
 
