Chapter 6: Moore’s Law and More
What Chapter 6 is mainly about
Chapter 6 explains why computing power keeps improving so rapidly, why the falling cost of technology matters so much for business, and how hardware trends enable major systems like AI, cloud computing, IoT, and wearable technologies. It also shows that managers need to understand not just software, but the physical and economic realities of computing infrastructure.
What this page includes
- Precise Chapter 6 vocabulary
- Explanations from the textbook and slides
- A scenario-based 5-question quiz
- Visible chapter citations and works cited
How to study with it
- Know why Moore’s Law matters to managers
- Understand CPU, memory, storage, and I/O clearly
- Distinguish bandwidth from latency
- Connect hardware trends to AI, cloud, IoT, and Disney’s MagicBand case
Chapter 6 Vocabulary
Key Concepts and Explanations
1. Moore’s Law is about exponential change, not ordinary improvement
One of the most important ideas in this chapter is that computing does not improve in a slow, linear way. It improves exponentially. That means managers who think in straight lines often underestimate how quickly computing capability can transform products, costs, customer expectations, and business models.
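To make the "straight lines vs. exponential change" point concrete, here is a minimal arithmetic sketch. It assumes the classic Moore's Law framing of capability doubling roughly every two years; the two-year period and the 50%-per-year linear forecast are illustrative assumptions, not figures from the chapter.

```python
# Illustrative sketch: exponential doubling vs. a straight-line forecast.
# The 2-year doubling period and 50%-per-year linear gain are assumptions
# chosen for illustration, not measurements.

def exponential_capability(years, doubling_period=2.0):
    """Relative computing capability if it doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

def linear_projection(years, annual_gain=0.5):
    """A straight-line forecast: half the starting capability added each year."""
    return 1 + annual_gain * years

for y in (2, 6, 10, 20):
    print(f"year {y:2d}: exponential ~{exponential_capability(y):6.0f}x, "
          f"linear ~{linear_projection(y):5.1f}x")
```

After 20 years the straight-line forecast predicts an 11x gain while the doubling curve yields 1,024x, which is exactly why linear intuition so badly underestimates exponential change.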
2. Falling computing costs create more demand, not less
Computing demand is highly price elastic. As the cost of processing, storage, and connectivity falls, people do not simply buy the same amount more cheaply. They find entirely new things to do with it. That is why lower-cost computing enables cloud services, wearable devices, IoT sensors, generative AI, and many other systems that would have once seemed too expensive to scale.
3. Hardware matters because software always runs on physical limits
Managers often talk about software and digital services as if they exist independently, but Chapter 6 reminds us that every app, AI system, cloud workload, or data-intensive platform depends on chips, memory, storage, and network infrastructure. Even strategic questions like cloud adoption, AI scale, or chip independence ultimately rest on hardware realities.
4. A computer is best understood as a coordinated system
The slides describe a computer as a “data kitchen.” The CPU acts like the chef, memory acts as workspace, storage holds what is needed later, and input/output devices bring data in and send results out. This model matters because performance bottlenecks are not always a matter of “the computer is slow”: sometimes the issue is memory, sometimes storage, sometimes data transfer, and sometimes latency.
5. Parallelism is now one of the main ways computing keeps getting faster
Instead of relying only on a single processor getting endlessly faster, modern computing often improves by using more cores, more processors, and more machines at the same time. Multicore CPUs, GPUs, cloud clusters, and supercomputers all reflect the same logic: divide the work and solve more of it simultaneously.
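The "divide the work and solve more of it simultaneously" logic can be sketched in a few lines. This is a hedged illustration, not how any particular system from the chapter is built; the worker count and the sum-of-squares job are arbitrary assumptions chosen to keep the example small.

```python
# Illustrative sketch of parallel computing's core idea: split one big job
# into chunks and let several workers process them at the same time.
# The job (summing squares) and worker count are assumptions for illustration.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """One worker's share of the job."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(n, workers=4):
    """Split range(n) across `workers` processes and combine the results."""
    chunks = [range(i, n, workers) for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as the serial version, computed by four processes at once.
    print(parallel_sum_of_squares(1_000))
```

Multicore CPUs, GPUs, and cloud clusters differ enormously in scale, but each applies this same pattern: partition the work, compute the pieces simultaneously, then combine the partial results.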
6. Better computing creates value, but also new risks and trade-offs
Offloading computing to the cloud can make very powerful processing widely available, but it also introduces latency, security concerns for data in transit, and enormous energy demands in data centers. Similarly, frequent hardware replacement enables innovation but contributes to the growing e-waste problem. So Chapter 6 is not just about technology getting better; it is also about the managerial consequences of that improvement.
Hardware, Networks, and Why It Matters for Strategy
Chapter 6 connects hardware trends to broader strategy questions. It shows why technical concepts like multicore CPUs, fabs, compatibility layers, and latency are not just engineering details; they directly affect ecosystems, customer experience, and competitive advantage.
Konana’s software ecosystem shows that hardware, operating systems, databases, middleware, and applications depend on each other. Because those layers are connected, strategic decisions must consider switching costs and entrenchment, not just the strength of a single product.
Apple’s transition to M-series chips went far more smoothly than it otherwise would have because emulation helped older software keep running. That reduced the pain of switching hardware architectures and protected the value of the ecosystem during a major transition.
Semiconductor fabrication plants cost tens of billions of dollars, require massive infrastructure, and are concentrated in a small number of firms and countries. That makes chips not just a business issue, but also a supply-chain and national-security issue.
High bandwidth means lots of data can move. Low latency means data starts moving quickly. Streaming video cares more about bandwidth; gaming cares more about latency; many real-time business systems need both.
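The bandwidth-versus-latency distinction becomes obvious with a little arithmetic: delivery time is roughly the latency plus the payload size divided by bandwidth. The link speeds and delays below are illustrative assumptions, not measurements of any real network.

```python
# Illustrative sketch: transfer_time ~= latency + size / bandwidth.
# All link speeds and delays are assumed numbers chosen for illustration.

def transfer_seconds(size_mb, bandwidth_mbps, latency_ms):
    """Approximate seconds to deliver `size_mb` megabytes over one link."""
    return latency_ms / 1000 + (size_mb * 8) / bandwidth_mbps

# A 500 MB video chunk: bandwidth dominates, latency barely matters.
print(transfer_seconds(500, bandwidth_mbps=100, latency_ms=200))    # ~40.2 s
print(transfer_seconds(500, bandwidth_mbps=1000, latency_ms=200))   # ~4.2 s

# A 1 KB game input: latency dominates, extra bandwidth barely helps.
print(transfer_seconds(0.001, bandwidth_mbps=100, latency_ms=200))  # ~0.2 s
print(transfer_seconds(0.001, bandwidth_mbps=1000, latency_ms=5))   # ~0.005 s
```

For the tiny game packet, a tenfold bandwidth upgrade changes almost nothing, while cutting latency from 200 ms to 5 ms transforms the experience, which is exactly why streaming and gaming stress different properties of the network.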
Disney’s MagicBand system is important because it shows how embedded hardware and data systems can improve convenience for customers while also giving the company better scheduling, staffing, inventory, and spending data.
Chapter 6 Quiz
These questions are scenario-based and designed to feel closer to actual MIS test questions.
Answer Key and Explanations
Question 1
Correct answer: B
This question tests the difference between bandwidth and latency. Bandwidth is how much data can move, while latency is the delay before it starts moving. Gaming often does not require huge amounts of data, but it does require very quick response times. So a connection with very high bandwidth can still feel unresponsive if latency is high.
Question 2
Correct answer: A
This reflects Apple’s chip transition logic. Emulation helps older software keep running during a hardware change, which reduces switching friction and preserves the value of the ecosystem. That matters strategically because customers are much less likely to adopt new hardware if it breaks the software they already depend on.
Question 3
Correct answer: B
Chapter 6 emphasizes that demand for computing is highly price elastic. When computing becomes cheaper, people and firms do not just save money, they expand usage and discover new applications. That is why lower-cost computing supports new waves of innovation such as AI, cloud services, wearable tech, and IoT.
Question 4
Correct answer: A
The recommendation to use many processors simultaneously is the core idea of parallel computing. Modern high-performance systems often improve speed not by relying on a single ultra-fast processor, but by coordinating many processors, cores, or machines at the same time.
Question 5
Correct answer: A
This is the logic of Disney’s MagicBand case. The technology improves convenience for guests while also generating operational data that improves staffing, scheduling, and inventory decisions and reveals guest spending patterns. The best technology investments often create value for both the customer and the organization at the same time.
Works Cited
- Chapter 6 Review. Notes on Moore’s Law, memory and storage, IoT, parallel computing, price elasticity, e-waste, quantum computing, and Disney’s technology case.
- Moore’s Law and Hardware. Lecture slides on exponential growth, the data kitchen model, chips, multicore CPUs, cloud offloading, bandwidth versus latency, e-waste, and MagicBand.