MIS 301 Extra Credit Study Guide

Chapter 6 review page covering Moore’s Law, hardware, memory and storage, multicore and parallel computing, cloud offloading, latency versus bandwidth, e-waste, quantum computing, and the business value of faster and cheaper computing.

Chapter 6: Moore’s Law and More

What Chapter 6 is mainly about

Chapter 6 explains why computing power keeps improving so rapidly, why the falling cost of technology matters so much for business, and how hardware trends enable major systems like AI, cloud computing, IoT, and wearable technologies. It also shows that managers need to understand not just software, but the physical and economic realities of computing infrastructure.

Main takeaway: Moore’s Law and related hardware trends do more than make computers faster. They reshape demand, enable new business models, reduce the cost of innovation, create new strategic opportunities, and force managers to adapt to technology that improves at an exponential pace.

What this page includes

  • Precise Chapter 6 vocabulary
  • Explanations from the textbook and slides
  • A scenario-based 5-question quiz
  • Visible chapter citations and works cited

How to study with it

  • Know why Moore’s Law matters to managers
  • Understand CPU, memory, storage, and I/O clearly
  • Distinguish bandwidth from latency
  • Connect hardware trends to AI, cloud, IoT, and Disney’s MagicBand case

Chapter 6 Vocabulary

Moore’s Law: The observation that computing capability, often measured by transistor density, tends to double roughly every 18 to 24 months, producing exponential improvements in performance and sharp declines in cost.
Citation: Chapter 6 review guide; Moore’s Law and Hardware slides
Microprocessor: The central processing unit of a computer that executes instructions, processes data, and coordinates system activity using billions of transistors.
Citation: Chapter 6 review guide and hardware slides
Memory (RAM): Fast, temporary, volatile storage used to hold data and instructions that a computer is actively working on.
Citation: Chapter 6 review guide
Volatile Memory: Memory that loses its contents when power is turned off, such as RAM.
Citation: Chapter 6 review guide
Non-Volatile Memory: Storage that keeps data even when power is off, such as SSDs and hard drives.
Citation: Chapter 6 review guide
Storage: Long-term data retention used for files, programs, and information that must persist beyond the current task.
Citation: Chapter 6 review guide and “Data Kitchen” slide
Bandwidth: The amount of data that can be transmitted over a connection in a given time period, usually measured in bits per second.
Citation: Chapter 6 review guide; Storage vs. Data Transfer slide
Latency: The delay before a data transfer or computing task begins, often experienced as lag or response time.
Citation: Cloud offloading slides; Latency vs. Bandwidth slide
Parallel Computing: The use of multiple processors, cores, or machines at the same time to solve problems faster and more efficiently.
Citation: Chapter 6 review guide; multicore and parallel processing slides
Multicore Microprocessor: A chip with two or more processor cores on the same piece of silicon, allowing multiple tasks or threads to run more efficiently.
Citation: “The Rise of Multicore CPUs” slide
GPU: A graphics processing unit originally designed for highly parallel graphics tasks that later became extremely important for AI and other parallel workloads.
Citation: Multicore and AI chip slides
ASIC: An application-specific integrated circuit designed for a specialized purpose rather than broad general-purpose computing.
Citation: “The Rise of Multicore CPUs” slide
Internet of Things (IoT): A network of connected physical devices with sensors and communication capabilities that collect, send, and exchange data.
Citation: Chapter 6 review guide
Price Elasticity of Demand: The degree to which demand changes as price changes; in computing, lower prices often trigger dramatic growth in adoption and new uses.
Citation: Chapter 6 review guide; price elasticity slide
Konana’s Model of the Software Ecosystem: A layered model of hardware, operating system, database, middleware, and applications that shows how each layer depends on the one below and creates switching costs.
Citation: Konana’s Software Ecosystem slide; Chapter 6 review guide
Fab: A semiconductor fabrication plant that manufactures chips in highly specialized cleanroom environments using enormous amounts of capital, power, and purified water.
Citation: “What is a Fab?” slide
e-Waste: Discarded, often obsolete electronic technology that creates environmental and human health risks if not recycled properly.
Citation: Chapter 6 review guide; e-waste slides
Quantum Computing: A type of computing that uses qubits instead of traditional binary bits and can potentially solve certain classes of problems much faster than conventional machines.
Citation: Chapter 6 review guide; “Computing Going Forward” slide
Compiler: Software that translates a programmer’s source code into machine-level instructions a processor can execute.
Citation: Apple CPU optimization slide
Emulator: Software that allows programs written for one hardware architecture to run on another architecture, often used to preserve compatibility during transitions.
Citation: Apple CPU optimization slide

Key Concepts and Explanations

1. Moore’s Law is about exponential change, not ordinary improvement

One of the most important ideas in this chapter is that computing does not improve in a slow, linear way. It improves exponentially. That means managers who think in straight lines often underestimate how quickly computing capability can transform products, costs, customer expectations, and business models.
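The gap between linear and exponential thinking can be made concrete with a quick back-of-the-envelope sketch. The numbers below are illustrative only: a 24-month doubling period (the chapter cites 18 to 24 months) compared against a hypothetical "linear improver" that adds 50% of the original capability each year.

```python
# Back-of-the-envelope comparison: exponential (Moore's Law style) vs. linear improvement.
# The 24-month doubling period and 50%-per-year linear rate are illustrative assumptions.

def moores_law_factor(years, doubling_months=24):
    """Capability multiplier after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

def linear_factor(years, rate_per_year=0.5):
    """Capability multiplier for a linear improver adding 50% of the original each year."""
    return 1 + rate_per_year * years

for y in (2, 6, 10):
    print(f"After {y:2d} years: exponential {moores_law_factor(y):6.1f}x "
          f"vs linear {linear_factor(y):4.1f}x")
```

After ten years the exponential path is at 32x while the linear path is at 6x, which is exactly why straight-line forecasts of computing capability tend to fail.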

2. Falling computing costs create more demand, not less

Computing demand is highly price elastic. As the cost of processing, storage, and connectivity falls, people do not simply buy the same amount more cheaply. They find entirely new things to do with it. That is why lower-cost computing enables cloud services, wearable devices, IoT sensors, generative AI, and many other systems that would have once seemed too expensive to scale.
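"Highly price elastic" has a precise meaning: the percentage change in quantity demanded exceeds the percentage change in price, so the elasticity ratio is greater than 1 in magnitude. A minimal sketch with hypothetical numbers (the 50% price drop and 300% usage growth below are made up for illustration):

```python
def price_elasticity(q_old, q_new, p_old, p_new):
    """Simple price elasticity of demand: % change in quantity / % change in price."""
    pct_quantity = (q_new - q_old) / q_old
    pct_price = (p_new - p_old) / p_old
    return pct_quantity / pct_price

# Hypothetical example: storage price drops 50%, and usage quadruples.
e = price_elasticity(q_old=100, q_new=400, p_old=1.00, p_new=0.50)
print(f"elasticity = {e:.1f}")  # |e| > 1 means demand is highly price elastic
```

Here a 50% price cut produces a 300% jump in usage, giving an elasticity of -6: demand expanded far faster than price fell, which is the pattern Chapter 6 describes for computing.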

3. Hardware matters because software always runs on physical limits

Managers often talk about software and digital services as if they exist independently, but Chapter 6 reminds us that every app, AI system, cloud workload, or data-intensive platform depends on chips, memory, storage, and network infrastructure. Even strategic questions like cloud adoption, AI scale, or chip independence ultimately rest on hardware realities.

4. A computer is best understood as a coordinated system

The slides describe a computer as a “data kitchen.” The CPU acts like the chef, memory acts as workspace, storage holds what is needed later, and input/output devices bring data in and send results out. This matters because performance bottlenecks are not always “the computer is slow”; sometimes the issue is memory, sometimes storage, sometimes data transfer, and sometimes latency.

5. Parallelism is now one of the main ways computing keeps getting faster

Instead of relying only on a single processor getting endlessly faster, modern computing often improves by using more cores, more processors, and more machines at the same time. Multicore CPUs, GPUs, cloud clusters, and supercomputers all reflect the same logic: divide the work and solve more of it simultaneously.
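The "divide the work and solve it simultaneously" pattern can be sketched in a few lines. This is an illustrative toy, not a benchmark: Python threads do not speed up CPU-bound work because of the interpreter's global lock, but the structure (split, solve pieces concurrently, combine) is the same logic multicore CPUs, GPUs, and cloud clusters use.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Solve one independent slice of the problem."""
    return sum(x * x for x in chunk)

# Divide: split the data into four independent chunks.
data = list(range(1, 1001))
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

# Conquer: process the chunks concurrently across a pool of workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(partial_sum, chunks))

# Combine: merge the partial results into the final answer.
total = sum(results)
print(total)
```

Real parallel systems add complications this sketch hides, such as uneven chunk sizes, communication overhead, and parts of the problem that cannot be split at all.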

6. Better computing creates value, but also new risks and trade-offs

Offloading computing to the cloud can make very powerful processing widely available, but it also introduces latency, security concerns for data in transit, and enormous energy demands in data centers. Similarly, frequent hardware replacement enables innovation but contributes to the growing e-waste problem. So Chapter 6 is not just about “tech getting better”; it is also about the managerial consequences of that improvement.

Good Chapter 6 study habit: do not memorize Moore’s Law as just “chips get better.” Ask what that improvement enables, what constraints it removes, what new systems it makes possible, and what trade-offs or risks it introduces.

Hardware, Networks, and Why It Matters for Strategy

Chapter 6 connects hardware trends to broader strategy questions. It shows why technical concepts like multicore CPUs, fabs, compatibility layers, and latency are not just engineering details; they directly affect ecosystems, customer experience, and competitive advantage.

1. Ecosystems create lock-in
Konana’s software ecosystem shows that hardware, operating systems, databases, middleware, and applications depend on each other. Because those layers are connected, strategic decisions must consider switching costs and entrenchment, not just the strength of a single product.
2. Compatibility can preserve an installed base
Apple’s transition to M-series chips worked much better because emulation helped older software keep running. That reduced the pain of switching hardware architectures and protected the value of the ecosystem during a major transition.
3. Fabs are strategic, expensive, and geopolitical
Semiconductor fabrication plants cost tens of billions of dollars, require massive infrastructure, and are concentrated in a small number of firms and countries. That makes chips not just a business issue, but also a supply-chain and national-security issue.
4. Bandwidth and latency are different problems
High bandwidth means lots of data can move. Low latency means data starts moving quickly. Streaming video cares more about bandwidth; gaming cares more about latency; many real-time business systems need both.
5. Technology can improve both customer value and business value
Disney’s MagicBand system is important because it shows how embedded hardware and data systems can improve convenience for customers while also giving the company better scheduling, staffing, inventory, and spending data.

Important: faster and cheaper hardware does not automatically create business advantage. Advantage comes when firms use those improvements to redesign experiences, reduce friction, enable better data, or make systems scale in ways competitors are not ready for.
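The bandwidth-versus-latency distinction in point 4 comes down to simple arithmetic: total delivery time is roughly the startup delay (latency) plus the transmission time (size divided by bandwidth). The link speed and latency figures below are hypothetical, chosen to mirror the gaming-versus-streaming contrast.

```python
def transfer_time(size_bits, bandwidth_bps, latency_s):
    """Rough delivery time for one message: startup delay plus transmission time."""
    return latency_s + size_bits / bandwidth_bps

LINK_BPS = 1_000_000_000   # a 1 Gbps connection: high bandwidth
LATENCY_S = 0.150          # 150 ms before data starts moving: high latency

game_action = transfer_time(1_000, LINK_BPS, LATENCY_S)          # a tiny game update
video_chunk = transfer_time(8_000_000_000, LINK_BPS, LATENCY_S)  # ~1 GB of video

print(f"game action: {game_action * 1000:.1f} ms (almost entirely latency)")
print(f"video chunk: {video_chunk:.2f} s (almost entirely transmission time)")
```

The tiny game message spends essentially all of its 150 ms waiting on latency, so a faster pipe barely helps; the large video transfer is dominated by transmission time, so bandwidth is what matters. That is why upgrading bandwidth alone cannot fix the gamer's complaint in quiz question 1.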

Chapter 6 Quiz

These questions are scenario-based and designed to feel closer to actual MIS test questions.

1. A manager assumes that a 1 Gbps internet connection guarantees great performance for an online game. Players still complain that actions feel delayed even though plenty of data can move through the connection. Which Chapter 6 idea best explains the problem?

2. A company launches a new chip architecture that offers better speed and lower power use, but customers worry their current applications will stop working. The firm adds software so older apps can still run during the transition. Which Chapter 6 concept best explains why this move is strategically important?

3. A retailer notices that as computing and storage costs fall, customers suddenly begin using features that were once considered too expensive or niche, such as AI-powered image search, smart wearables, and cloud backups. Which Chapter 6 principle best explains why demand expands like this?

4. A hospital wants to analyze huge imaging datasets much faster than a traditional single-processor system can handle. Engineers recommend using many processors working at the same time rather than relying on one super-fast core. Which Chapter 6 concept does this recommendation best reflect?

5. A theme park introduces wearable devices that unlock hotel rooms, process payments, speed admission, and help route guests efficiently while collecting operational data for staffing and inventory decisions. Which Chapter 6 lesson does this most clearly illustrate?

Answer Key and Explanations

Question 1

Correct answer: B

This question tests the difference between bandwidth and latency. Bandwidth is how much data can move, while latency is the delay before it starts moving. Gaming often does not require huge amounts of data, but it does require very quick response times. So even very high bandwidth can still feel bad if latency is high.

Question 2

Correct answer: A

This reflects Apple’s chip transition logic. Emulation helps older software keep running during a hardware change, which reduces switching friction and preserves the value of the ecosystem. That matters strategically because customers are much less likely to adopt new hardware if it breaks the software they already depend on.

Question 3

Correct answer: B

Chapter 6 emphasizes that demand for computing is highly price elastic. When computing becomes cheaper, people and firms do not just save money, they expand usage and discover new applications. That is why lower-cost computing supports new waves of innovation such as AI, cloud services, wearable tech, and IoT.

Question 4

Correct answer: A

The recommendation to use many processors simultaneously is the core idea of parallel computing. Modern high-performance systems often improve speed not by relying on a single ultra-fast processor, but by coordinating many processors, cores, or machines at the same time.

Question 5

Correct answer: A

This is the logic of Disney’s MagicBand case. The technology improves convenience for guests while also generating operational data that improves staffing, scheduling, inventory decisions, and spending patterns. The best technology investments often create value for both the customer and the organization at the same time.

Works Cited