1.4 Abstraction
Enduring Understanding
The way a computer represents data internally is different from the way the data are interpreted and displayed for the user. Programs are used to translate data into a representation more easily understood by people.
Essential Questions
How can we use 1s and 0s to represent something complex like a video of the marching band playing a song?
Lesson Objectives
Explain how data can be represented using bits.
Explain the consequences of using bits to represent data.
For binary numbers: a. Calculate the binary (base 2) equivalent of a positive integer (base 10) and vice versa. b. Compare and order binary numbers.
Compare data compression algorithms to determine which is best in a particular context.
Essential Knowledge
Computing devices represent data digitally, meaning that the lowest-level components of any value are bits.
Bit is shorthand for binary digit and is either 0 or 1.
A byte is 8 bits.
Abstraction is the process of reducing complexity by focusing on the main idea. By hiding details irrelevant to the question at hand and bringing together related and useful details, abstraction allows one to focus on that main idea.
Bits are grouped to represent abstractions. These abstractions include, but are not limited to, numbers, characters, and color.
The same sequence of bits may represent different types of data in different contexts.
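As an illustration of this idea (a sketch added for this lesson, not framework text), the short Python example below reads one 8-bit pattern three different ways: as a base-2 number, as an ASCII character code, and as a color intensity.

```python
# Illustrative sketch: the same 8-bit pattern 01000001 can be read as the
# unsigned integer 65, the ASCII character 'A', or one channel of a color,
# depending on the context in which it is interpreted.
bits = "01000001"

as_integer = int(bits, 2)          # interpret as a base-2 number   -> 65
as_character = chr(as_integer)     # interpret as an ASCII code     -> 'A'
as_red_level = as_integer / 255    # interpret as a red intensity   -> about 0.25

print(as_integer, as_character, round(as_red_level, 2))   # 65 A 0.25
```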
Analog data have values that change smoothly, rather than in discrete intervals, over time. Some examples of analog data include pitch and volume of music, colors of a painting, or position of a sprinter during a race.
The use of digital data to approximate real-world analog data is an example of abstraction.
Analog data can be closely approximated digitally using a sampling technique, which means measuring values of the analog signal at regular intervals called samples. Each sample measurement is then encoded as a fixed number of bits so it can be stored digitally.
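The Python sketch below illustrates sampling; the 440 Hz tone, the 8,000-samples-per-second rate, and the 8-bit quantization are all illustrative choices, not values specified by the framework.

```python
import math

# Illustrative sketch of sampling: measure an analog signal (a 440 Hz sine
# wave standing in for a musical pitch) at regular intervals.
sample_rate = 8000                    # samples per second (arbitrary choice)
duration = 0.001                      # sample only the first millisecond

samples = []
for n in range(int(sample_rate * duration)):
    t = n / sample_rate               # time of the nth sample, in seconds
    value = math.sin(2 * math.pi * 440 * t)
    # Quantize each measurement to 8 bits (values 0-255), which is what
    # allows the sample to be stored digitally.
    samples.append(round((value + 1) / 2 * 255))

print(samples)                        # prints the 8 quantized sample values
```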
In many programming languages, integers are represented by a fixed number of bits, which limits the range of integer values and mathematical operations on those values. This limitation can result in overflow or other errors.
Other programming languages provide an abstraction through which the size of representable integers is limited only by the size of the computer’s memory; this is the case for the language defined in the exam reference sheet.
In programming languages, the fixed number of bits used to represent real numbers limits the range and mathematical operations on these values; this limitation can result in round-off and other errors. Some real numbers are represented as approximations in computer storage.
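The Python sketch below demonstrates both kinds of error. Because Python's own integers grow as needed (like the language in the exam reference sheet), the fixed-width overflow is simulated here with an 8-bit limit chosen only for illustration.

```python
# Integer overflow: in an 8-bit unsigned representation the largest value
# is 255, so 250 + 10 wraps around instead of reaching 260.
result = (250 + 10) % 256
print(result)            # 4, not 260 -- an overflow error

# Round-off error: real numbers are stored as approximations, so
# 0.1 + 0.2 is not exactly 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```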
Number bases, including binary and decimal, are used to represent data.
Binary (base 2) uses only combinations of the digits zero and one.
Decimal (base 10) uses only combinations of the digits 0–9.
As with decimal, a digit’s position in the binary sequence determines its numeric value. The numeric value is equal to the bit’s value (0 or 1) multiplied by the place value of its position.
The place value of each position is determined by the base raised to the power of the position. Positions are numbered starting at the rightmost position with 0 and increasing by 1 for each subsequent position to the left.
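As a worked example, 1011 in binary is 1·2^3 + 0·2^2 + 1·2^1 + 1·2^0 = 8 + 0 + 2 + 1 = 11 in decimal. The Python sketch below (an illustration added for this lesson; the function names are not from the framework) applies the same place-value rule in both directions.

```python
def binary_to_decimal(bits):
    """Sum each bit times the place value of its position (rightmost is position 0)."""
    total = 0
    for position, bit in enumerate(reversed(bits)):
        total += int(bit) * (2 ** position)
    return total

def decimal_to_binary(n):
    """Repeatedly divide by 2, collecting remainders from right to left."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits
        n //= 2
    return bits

print(binary_to_decimal("1011"))   # 11
print(decimal_to_binary(11))       # 1011
```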
Data compression can reduce the size (number of bits) of transmitted or stored data.
Using fewer bits does not necessarily mean less information.
The amount of size reduction from compression depends on both the amount of redundancy in the original data representation and the compression algorithm applied.
Lossless data compression algorithms can usually reduce the number of bits stored or transmitted while guaranteeing complete reconstruction of the original data.
Lossy data compression algorithms can significantly reduce the number of bits stored or transmitted but only allow reconstruction of an approximation of the original data.
Lossy data compression algorithms can usually reduce the number of bits stored or transmitted more than lossless compression algorithms.
In situations where quality or ability to reconstruct the original is maximally important, lossless compression algorithms are typically chosen.
In situations where minimizing data size or transmission time is maximally important, lossy compression algorithms are typically chosen.
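The Python sketch below contrasts the two approaches described above on a short row of pixel brightness values. Run-length encoding stands in for a lossless algorithm, and rounding each value to the nearest ten stands in for a lossy one; neither is a specific algorithm named by the framework, and the pixel values are made up for illustration.

```python
def run_length_encode(values):
    """Lossless idea: store each run as [value, count]; the original can be rebuilt exactly."""
    encoded = []
    for v in values:
        if encoded and encoded[-1][0] == v:
            encoded[-1][1] += 1
        else:
            encoded.append([v, 1])
    return encoded

def run_length_decode(encoded):
    """Expand each [value, count] pair back into the original sequence."""
    return [v for v, count in encoded for _ in range(count)]

def lossy_quantize(values, step=10):
    """Lossy idea: round each value, shrinking the number of distinct values but losing detail."""
    return [round(v / step) * step for v in values]

pixels = [200, 200, 200, 200, 31, 33, 200, 200]

compressed = run_length_encode(pixels)
print(compressed)                               # [[200, 4], [31, 1], [33, 1], [200, 2]]
print(run_length_decode(compressed) == pixels)  # True: exact reconstruction

approx = lossy_quantize(pixels)
print(approx)                                   # [200, 200, 200, 200, 30, 30, 200, 200]
print(run_length_encode(approx))                # fewer runs, but 31 and 33 are lost
```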
How do computers use binary to process data?
How are circuits and logic used in computation?
How are CPU, memory, input, and output related to computing?
How do computers use hardware and software?