**2.1** A variety of abstractions built upon binary sequences can be used to represent all digital data. **2.1.2** Explain how binary sequences are used to represent digital data. *Exclusion Statement (2.1.2B): Range limitations of any one language, compiler, or architecture are beyond the scope of this course and the AP Exam.*

# How many things can be represented using four bits?

2.1.2C In many programming languages, the fixed number of bits used to represent real numbers (as floating point numbers) limits the range of floating point values and mathematical operations; this limitation can result in round off errors. 2.1.2D The interpretation of a binary sequence depends on how it is used. 2.1.2E A sequence of bits may represent instructions or data. 2.1.2F A sequence of bits may represent different types of data in different contexts. 6.2.2K The latency of a system is the time elapsed between the transmission and the receipt of a request. 6.2.2J The bandwidth of a system is a measure of bit rate – the amount of data (measured in bits) that can be sent in a fixed amount of time.

All data is stored in sequences of binary (ones and zeros), and how any binary sequence is interpreted depends on how it will be used. Binary sequences are used to represent instructions to the computer and various types of data depending on the context.

Computers store information in binary using **bits** and **bytes**. A bit is a "0" or a "1". A byte is eight bits grouped together, like 10001001.

Bytes have 8 places, so the left-most place is 2⁷ (or 128). Practice what you've learned: What is 10001001₂ in base 10?
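Worked out digit by digit, each bit contributes its value times a power of two. A small Python sketch (the variable names are just illustrative):

```python
# Pair each bit with its place value, from 2^7 down to 2^0, and add them up.
bits = "10001001"
value = sum(int(b) * 2**p for b, p in zip(bits, range(7, -1, -1)))
print(value)         # 137 (128 + 8 + 1)
print(int(bits, 2))  # 137, using Python's built-in base-2 parser
```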

- **bit** – a single unit of data that can only have one of two values (such as a 1 or 0)
- **bit rate** – the amount of data (measured in bits) that can be sent in a specific amount of time
- **bandwidth** – the transmission capacity of a system (measured by bit rate)
- **latency** – time between the transmission and the receipt of a message

As computers become more powerful, they are capable of handling more information. We used to measure computer memory in *kilobytes*; one kilobyte is 2¹⁰ (1,024) bytes, which is about 10³, so we call it a "kilo". These days, your computer memory is likely to be measured in *megabytes*. One megabyte is 2²⁰ (about 1 million) bytes.

Your hard drive space is likely to be measured in gigabytes, terabytes, or even petabytes. One *gigabyte* is 2³⁰ (about 1 billion) bytes, one *terabyte* is 2⁴⁰ (about 1 trillion) bytes, and one *petabyte* is 2⁵⁰ (about 1 quadrillion) bytes.

| measure | amount | example |
| --- | --- | --- |
| bit | either a 1 or a 0 | 1 |
| byte | 8 bits | 11011001 |
| kilobyte | 2¹⁰ (1,024) bytes | a couple of paragraphs |
| megabyte | 2²⁰ (1,048,576) bytes | about 1 book |
| gigabyte | 2³⁰ (1,073,741,824) bytes | a little more than 1 CD |
| terabyte | 2⁴⁰ (1,099,511,627,776) bytes | about 1,500 CDs |
| petabyte | 2⁵⁰ (1,125,899,906,842,624) bytes | about 20 million filing cabinets of text |

### Storing Integers

Depending on the programming language used, an integer might be stored with two or perhaps four bytes of data allowing for a range of about -32,768 to 32,767 (with two bytes) or -2,147,483,648 to 2,147,483,647 (with four bytes).

**Why are these the ranges for integers?**

Two bytes is 16 bits, so two bytes can represent 2¹⁶ = 65,536 different values. We use about half of these to represent negative numbers, about half for positive numbers, and one to represent zero.

Four bytes is 32 bits, so four bytes can represent 2³² = 4,294,967,296 different values. Again, we use about half for negative numbers, half for positives, and one for zero.

**What about bigger numbers?** Whatever the specific size is for storing integers (two bytes, four bytes, etc.), it will limit the range of the integer values that can be stored and the mathematical operations that can be performed. Values that exceed this limit may cause *overflow* errors, because big values overflow the amount of space allocated to store one number.
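Python's own integers grow as needed, but the standard `ctypes` module can sketch what fixed-width storage does. This is a demo of two-byte wraparound, not a claim about how any particular language must behave:

```python
import ctypes

# A c_int16 stores exactly two bytes, so values wrap around at the limits.
print(ctypes.c_int16(32767).value)      # 32767, the largest two-byte integer
print(ctypes.c_int16(32767 + 1).value)  # -32768: overflow wraps to the minimum
```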

You do not need to learn the factorial function for this class (we are just using it to discuss big numbers), but you will probably see it again in a math class.

The *factorial* of a positive integer *n* (written "n!") is the product of all the integers from 1 to *n*. For example: 5! = 1 × 2 × 3 × 4 × 5 = 120. Load this project, and try out these inputs:

Then, try 200!.

(Snap*!* reports **infinity**.)

**What's going on?** Although 200! is very large, it's not infinite. This report is the result of the size limitation. If the result of a computation is bigger than the range of numbers that can be stored, then the computer returns a special code or an overflow error.
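In Python, whose integers are arbitrary precision, you can see the same size limit by forcing a factorial into a 64-bit float. The threshold shown assumes IEEE 754 double precision:

```python
import math

print(float(math.factorial(170)))   # about 7.26e306, still fits in a double
try:
    float(math.factorial(171))      # too large for a 64-bit float
except OverflowError as err:
    print("OverflowError:", err)
```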

In Snap*!*, there are special codes for infinity, –infinity (smaller than any finite value), and "Not a Number," which is the notification used for illegal computations such as 0 ÷ 0.

You'll learn more about Unicode if you do the Caesar Cipher Project in the Cybersecurity lab, where you'll create a program to encrypt a message.


It's common for programming languages to use a *specific* number of bits to represent Unicode characters or integers. Unicode is a system that allows computers to store letters as integers. For example, capital A is Unicode number 65, which is 01000001₂.
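Python exposes this mapping directly through its built-in `ord` and `chr` functions:

```python
# ord gives a character's Unicode code point; chr goes the other way.
print(ord('A'))                 # 65
print(format(ord('A'), '08b'))  # 01000001, the same value written as 8 bits
print(chr(65))                  # A
```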

### Floating Point

Many languages have another data type used to represent real numbers (including decimals and numbers too big to be stored as integers) called **floating point**, which is essentially a binary version of scientific notation. It's called floating point because the decimal point "floats" to just after the first digit.

Try 30!.

That e+32 is just a different way to write scientific notation. The "e+" means "times ten to the power of," so: 2.6525285981219103e+32 = 2.6525285981219103 × 10³² = 265,252,859,812,191,030,000,000,000,000,000.

Floating point allows computers to store very large numbers and also decimals, but the format still has a specific number of bits, and that limits the range of floating point values and mathematical operations just like with integers. However, with floating point, values that exceed the limitation may result in *round off* errors instead.

For example, try dividing 1 by 3.

The decimal representation of 1/3 is 0.33333… It has infinitely many digits, so the closest you can come in floating point isn't *exactly* 1/3; it gets cut off after a while.
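You can see the cut-off directly in Python, which uses the same double-precision format:

```python
print(1 / 3)  # 0.3333333333333333, cut off after about 16 digits
```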

Roundoff errors can result in some pretty weird calculations…

Try a few arithmetic calculations with decimal fractions, and look closely at the results.

This isn't a bug in Snap*!*. Many other languages (like JavaScript) will report the same values for these calculations.
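The original Snap*!* blocks aren't shown here, but a classic example of this kind of round off (in Python, which uses the same IEEE 754 doubles as JavaScript) is:

```python
print(0.1 + 0.2)         # 0.30000000000000004, not exactly 0.3
print(0.1 + 0.2 == 0.3)  # False
```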

Computer arithmetic on *integers* is straightforward. Either you get an exactly correct integer result or, if the result won't fit in integer representation, you get an *overflow error* and the result is usually converted to floating point representation (like 30! was).

By contrast, computer arithmetic on *floating point numbers* is hard to get exactly right. Prior to 1985, every model of computer had a slightly different floating point format, and all of them got wrong answers to certain problems. This situation was resolved by the IEEE 754 floating point standard, which is now used by every computer manufacturer and has been improved several times since it was created in 1985.


**How does a programming language know whether to interpret a bit sequence as an integer, a floating point, a text string of Unicode characters, an instruction, or something else?** Programming languages differ in how they do this, but there's always some *extra* bit sequence that encodes the *data type* of any sequence of bits that tells the computer how to interpret it.

At the lowest level of software abstraction, *everything* in a computer is represented as a binary sequence. For example: A Boolean value is a single bit, 0 for false and 1 for true. A text string is a sequence of Unicode character codes, each of which is stored as a separate integer. Lists and blocks are binary sequences too.
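A small Python sketch of the same idea, reading one four-byte sequence in two different ways (the particular byte values are just an illustration):

```python
import struct

data = b'\x41\x42\x43\x44'           # one four-byte binary sequence
print(struct.unpack('<i', data)[0])  # 1145258561, read as a little-endian 32-bit integer
print(data.decode('ascii'))          # ABCD, read as four character codes
```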

What's the *rightmost* digit in the binary representation of 15551212? What's the rightmost bit ("**b**inary dig**it**") of 123456789? Develop a rule for finding the rightmost bit of any base 10 number.
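One way to check your rule in Python (the helper name here is made up for this sketch):

```python
def rightmost_bit(n):
    # The rightmost binary digit is 0 for even numbers and 1 for odd ones,
    # which is exactly the remainder after dividing by 2.
    return n % 2

print(rightmost_bit(15551212))   # 0, because 15551212 is even
print(rightmost_bit(123456789))  # 1, because 123456789 is odd
```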
