Modern Computer Architecture and Organization – Second Edition

Learn x86, ARM, and RISC-V architectures and the design of smartphones, PCs, and cloud servers

Product type: Paperback
Published: May 2022
Publisher: Packt
ISBN-13: 9781803234519
Length: 666 pages
Edition: 2nd Edition
Author: Jim Ledin

Table of Contents

Preface
1. Introducing Computer Architecture
2. Digital Logic
3. Processor Elements
4. Computer System Components
5. Hardware-Software Interface
6. Specialized Computing Domains
7. Processor and Memory Architectures
8. Performance-Enhancing Techniques
9. Specialized Processor Extensions
10. Modern Processor Architectures and Instruction Sets
11. The RISC-V Architecture and Instruction Set
12. Processor Virtualization
13. Domain-Specific Computer Architectures
14. Cybersecurity and Confidential Computing Architectures
15. Blockchain and Bitcoin Mining Architectures
16. Self-Driving Vehicle Architectures
17. Quantum Computing and Other Future Directions in Computer Architectures
18. Other Books You May Enjoy
19. Index
Appendix

Cache memory

Cache memory is a high-speed memory region (relative to main memory) that temporarily stores program instructions or data for future use. Usually, these instructions or data items have been retrieved from main memory recently and are likely to be needed again shortly.

The primary purpose of cache memory is to increase the speed of repeatedly accessing the same memory location and nearby memory locations. To be effective, accessing the cached items must be significantly faster than accessing the original source of the instructions or data, referred to as the backing store.
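
As a rough illustration of this speed requirement (a hypothetical sketch with made-up timings, not an example from the book), the overall benefit is often summarized by the average memory access time: the hit time plus the miss rate multiplied by the miss penalty.

/* Rough illustration with hypothetical timings (not taken from the book):
 * the benefit of a fast cache over its backing store is commonly summarized
 * by the average memory access time,
 * AMAT = hit_time + miss_rate * miss_penalty. */
#include <stdio.h>

int main(void)
{
    double hit_time_ns     = 1.0;    /* time to read an item already in the cache    */
    double miss_penalty_ns = 100.0;  /* extra time to reach the slower backing store */
    double miss_rate       = 0.05;   /* fraction of accesses that miss the cache     */

    double amat_ns = hit_time_ns + miss_rate * miss_penalty_ns;
    printf("Average memory access time: %.1f ns\n", amat_ns);  /* prints 6.0 ns */
    return 0;
}

With a 1 ns hit time, a 5% miss rate, and a 100 ns miss penalty, the average access takes 6 ns, far faster than going to the backing store on every access.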

When caching is in use, each attempt to access a memory location begins with a search of the cache. If the requested item is present, the processor retrieves and uses it immediately. This is called a cache hit. If the cache search is unsuccessful (a cache miss), the instruction or data item must be retrieved from the backing store. In the process of retrieving the requested item...
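
The hit/miss decision described above can be sketched in C. The following is a minimal, hypothetical model of a direct-mapped cache (an assumption made for illustration, not code from the book): each cache line holds a valid bit and a tag, the requested address selects one line, and a matching tag means a cache hit; on a miss, the item is fetched from the backing store and placed in the line.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_LINES 256u            /* hypothetical cache size: 256 lines      */
#define WORD_SIZE 4u              /* each line caches one 4-byte word        */

typedef struct {
    bool     valid;               /* has this line been filled yet?          */
    uint32_t tag;                 /* upper address bits of the cached word   */
    uint32_t data;                /* the cached word itself (placeholder)    */
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* Stand-in for an access to the slower backing store (main memory). */
static uint32_t read_backing_store(uint32_t address)
{
    return address * 2u;          /* dummy value for illustration            */
}

static uint32_t cache_read(uint32_t address)
{
    uint32_t block = address / WORD_SIZE;
    uint32_t index = block % NUM_LINES;   /* which cache line to check       */
    uint32_t tag   = block / NUM_LINES;   /* which block maps to that line   */
    cache_line_t *line = &cache[index];

    if (line->valid && line->tag == tag) {
        printf("0x%08x: cache hit\n", (unsigned)address);
        return line->data;                /* served from the cache           */
    }

    /* Cache miss: fetch from the backing store and fill the line. */
    printf("0x%08x: cache miss\n", (unsigned)address);
    line->valid = true;
    line->tag   = tag;
    line->data  = read_backing_store(address);
    return line->data;
}

int main(void)
{
    cache_read(0x1000);           /* miss: first access to this location     */
    cache_read(0x1000);           /* hit: same location, now cached          */
    return 0;
}

In this sketch, the first read of address 0x1000 misses and fills the selected cache line; the second read of the same address hits and is served directly from the cache.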
