
Moore’s law

For those working in the rapidly advancing field of computer technology, it is a significant challenge to make plans for the future. This is true whether the goal is to plot your own career path or to identify optimal R&D investments for a giant semiconductor corporation. No one can ever be completely sure what the next leap in technology will be, how its effects will ripple across the industry and its users, or when it will happen. One approach that has proven useful in this difficult environment is to develop a rule of thumb, or empirical law, based on experience.

Gordon Moore co-founded Fairchild Semiconductor in 1957 and was later the chairman and CEO of Intel. In 1965, Moore published an article in Electronics magazine in which he offered his prediction of the changes that would occur in the semiconductor industry over the next 10 years. In the article, he observed that the number of formerly discrete components, such as transistors, diodes, and capacitors, that could be integrated onto a single chip had been doubling approximately yearly, and that the trend was likely to continue over the next 10 years. This doubling formula came to be known as Moore’s law. It was not a scientific law in the sense of the law of gravity; rather, it was based on an observation of historical trends that he believed had some ability to predict the future.
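
Expressed as a formula, yearly doubling is simple exponential growth. The following back-of-the-envelope sketch shows what ten yearly doublings imply (the 1965 starting count is an illustrative round number, not a figure from this text):

    # Yearly doubling: N(t) = N0 * 2**t, with t measured in years
    n_1965 = 64                  # assumed component count in 1965 (illustrative)
    n_1975 = n_1965 * 2 ** 10    # ten doublings later
    print(n_1975)                # 65536 -- three orders of magnitude in a decade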

Moore’s law turned out to be impressively accurate over those 10 years. In 1975, he revised the predicted growth rate for the following 10 years to double the number of components per integrated circuit every 2 years, rather than yearly. This pace continued for decades, up until about 2010. In more recent years, the growth rate has appeared to decline slightly. In 2015, Intel CEO Brian Krzanich stated that the company’s growth rate had slowed to doubling about every two and a half years.
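
To see how much these revisions matter, compare the growth each cadence implies over a single decade (a quick arithmetic sketch):

    # Component growth over one decade under each doubling cadence
    for years_per_doubling in (1.0, 2.0, 2.5):
        factor = 2 ** (10 / years_per_doubling)
        print(f"Doubling every {years_per_doubling} years: {factor:,.0f}x per decade")
    # yearly: 1,024x; every 2 years: 32x; every 2.5 years: 16x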

Even though the time required to double integrated circuit density is increasing, the current pace still represents a phenomenal rate of growth that can be expected to continue into the future, just not as rapidly as it once did.

Moore’s law has proven to be a reliable tool for evaluating the performance of semiconductor companies over the decades. Companies have used it to set goals for the performance of their products and to plan their investments. By comparing the integrated circuit density increases in a company’s products against its own prior results, and against those of other companies, semiconductor executives and industry analysts can evaluate and score company performance. The results of these analyses have fed directly into decisions to invest in enormous new fabrication plants and to push the boundaries of ever-smaller integrated circuit feature sizes.

The decades since the introduction of the IBM PC have seen tremendous growth in the capabilities of single-chip microprocessors. Current processor generations are hundreds of times faster, operate natively on 32-bit and 64-bit data, have far more integrated memory resources, and provide vastly more functionality, all packed into a single integrated circuit.

The increasing density of semiconductor features, as predicted by Moore’s law, has enabled these improvements. Smaller transistors run at higher clock speeds due to the shorter connection paths between circuit elements. Smaller transistors also allow more functionality to be packed into a given amount of die area. Being smaller and closer to neighboring components allows the transistors to consume less power and generate less heat.
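
Because transistors are laid out in two dimensions, density scales with the inverse square of the minimum feature size. A quick sketch (the feature sizes here are illustrative, not from this text):

    # Halving the linear feature size roughly quadruples transistor density
    old_feature_nm = 10.0
    new_feature_nm = 5.0
    density_gain = (old_feature_nm / new_feature_nm) ** 2
    print(density_gain)   # 4.0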

There was nothing magical about Moore’s law. It was an observation of the trends in progress at the time. One trend was the steadily increasing size of semiconductor dies. This was the result of improving production processes that reduced the density of defects, which allowed acceptable production yield with larger integrated circuit dies. Another trend was the ongoing reduction in the size of the smallest components that could be reliably produced in a circuit. The final trend was what Moore referred to as the “cleverness” of circuit designers in making increasingly efficient and effective use of the growing number of circuit elements placed on a chip.

Traditional semiconductor manufacturing processes have begun to approach physical limits that will eventually put the brakes on growth under Moore’s law. The smallest features on current commercially available integrated circuits are around 5 nanometers (nm). For comparison, a typical human hair is about 50,000 nm thick, and a water molecule (one of the smallest molecules) is 0.28 nm across. There is a point beyond which it is simply not possible for circuit elements to become smaller as the sizes approach atomic scale.
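
Using the figures quoted above, we can estimate how many more halvings of feature size remain before circuit elements reach molecular scale:

    import math

    feature_nm = 5.0    # smallest current commercial feature size
    water_nm = 0.28     # width of a water molecule
    print(math.log2(feature_nm / water_nm))   # ~4.2: only about four halvings left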

In addition to the challenge of building reliable circuit components from a small number of molecules, other physical effects, such as the Abbe diffraction limit, become significant impediments to single-digit nanometer-scale circuit production.
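
As a rough illustration of one such limit, the Abbe formula states that the smallest feature an optical system can resolve is about the illuminating wavelength divided by twice the numerical aperture of the optics. The values below are typical published figures for extreme ultraviolet (EUV) lithography, not taken from this text:

    # Abbe diffraction limit: d = wavelength / (2 * NA)
    wavelength_nm = 13.5        # EUV light source
    numerical_aperture = 0.33   # first-generation EUV scanner optics
    d_min = wavelength_nm / (2 * numerical_aperture)
    print(f"{d_min:.1f} nm")    # ~20.5 nm: smaller features require added tricks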

We won’t get into the details of these phenomena; it’s sufficient to know that the steady increase in integrated circuit component density that has proceeded for decades under Moore’s law is going to become much harder to sustain over the coming years.

This does not mean we will be stuck with processors essentially the same as those that are now commercially available. Even as the rate of growth in transistor density slows, semiconductor manufacturers are pursuing several alternative methods to continue growing the power of computing devices. One approach is specialization, in which circuits are designed to perform a specific category of tasks extremely well rather than performing a wide variety of tasks merely adequately.

Graphics Processing Units (GPUs) are an excellent example of specialization. The original generation of GPUs focused exclusively on improving the speed at which three-dimensional graphics scenes could be rendered, mostly for use in video gaming. The calculations involved in generating a three-dimensional scene are well defined and must be applied to millions of pixels to create a single frame. The process is repeated for each subsequent frame, and frames must be redrawn at 60 Hz or higher to provide a satisfactory user experience. The computationally demanding and repetitive nature of this task is ideally suited for acceleration via hardware parallelism. Multiple computing units within a GPU simultaneously perform essentially the same calculations on different input data to produce separate outputs, which are then combined to generate the entire scene. Modern GPU architectures have been enhanced to support other computing domains, such as training neural networks on massive amounts of data. GPU architectures will be covered in detail in Chapter 6, Specialized Computing Domains.
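
The essence of this approach can be sketched in a few lines of NumPy, in which vectorized array operations stand in for a GPU’s parallel compute units (the shading function below is a hypothetical illustration, not an algorithm from this text):

    import numpy as np

    frame = np.random.rand(1080, 1920, 3)   # one 1080p RGB frame, values in [0, 1]

    def shade(pixels, brightness=1.2, gamma=2.2):
        # The same per-pixel arithmetic, applied across the entire frame at once
        return np.clip(brightness * pixels ** (1.0 / gamma), 0.0, 1.0)

    shaded = shade(frame)   # ~2 million pixels per frame, redrawn 60+ times per second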

As Moore’s law shows signs of fading over the coming years, what advances might take its place to kick off the next round of innovations in computer architectures? We don’t know for sure today, but some tantalizing options are currently under intense study. Quantum computing is one example of these technologies. We will cover that technology in Chapter 17, Quantum Computing and Other Future Directions in Computer Architectures.

Quantum computing takes advantage of the properties of subatomic particles to perform computations in a manner that traditional computers cannot. A basic element of quantum computing is the qubit, or quantum bit. A qubit is similar to a regular binary bit, but in addition to representing the states 0 and 1, qubits can attain a state that is a superposition (or mixture) of the 0 and 1 states. When measured, the qubit output will always be 0 or 1, but the probability of producing either output is a function of the qubit’s quantum state prior to being read. Specialized algorithms are required to take advantage of the unique features of quantum computing.
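
A small simulation makes the measurement behavior concrete (a minimal sketch, with amplitudes chosen purely for illustration):

    import random

    alpha, beta = 0.6, 0.8    # qubit state 0.6|0> + 0.8|1>; 0.6**2 + 0.8**2 = 1
    p_zero = abs(alpha) ** 2  # probability of reading 0 is |alpha|**2 = 0.36

    reads = [0 if random.random() < p_zero else 1 for _ in range(10_000)]
    print(reads.count(0) / len(reads))   # ~0.36: each read is 0 or 1, never a mixture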

Another possibility is that the next great technological breakthrough in computing devices will be something we either haven’t thought of yet or have dismissed out of hand as unrealistic. The iPhone, discussed in the preceding section, is an example of a category-defining product that revolutionized personal communication and enabled the use of the internet in new ways. The next major advance may be a new type of product, a surprising new technology, or some combination of the two. Right now, we don’t know what it will be or when it will happen, but we can say with confidence that such changes are coming.

The next section introduces some fundamental digital computing concepts that must be understood before we delve into digital circuitry and the details of modern computer architecture in the coming chapters.
