Data Representation
CLASS: SSS Three
TOPIC: Data Representations
Introduction to Data Representation
In computer science, data representation refers to the methods used to store, process, and transmit information within a computer system. All data, whether it's text, numbers, graphics, or sound, must be converted into a format that a computer can understand. This format is the binary system, a sequence of 0s and 1s.
Methods of Data Representation
1. Bits
A bit is the smallest unit of data in computing: a single binary digit, either 0 or 1. A group of eight bits is called a byte, the fundamental unit of computer storage.
1 Bit = a single 0 or 1
1 Byte = 8 bits
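As a quick illustration (a Python sketch, not part of the original notes), we can display the 8-bit pattern stored in one byte:

```python
# A byte holds 8 bits; format() with "08b" pads the binary form to 8 digits.
value = 65                      # one byte's worth of data
bits = format(value, "08b")     # 8-bit binary pattern
print(bits)                     # -> 01000001
print(len(bits))                # -> 8 (eight bits in one byte)
```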
2. Binary Coded Decimal (BCD)
BCD is an acronym for Binary Coded Decimal. It's a method of using binary digits to represent each individual decimal digit from 0 to 9. A decimal digit is represented by four binary digits (also known as a nibble).
BCD Table
| Decimal | BCD (4-bit) |
|---------|-------------|
| 0 | 0000 |
| 1 | 0001 |
| 2 | 0010 |
| 3 | 0011 |
| 4 | 0100 |
| 5 | 0101 |
| 6 | 0110 |
| 7 | 0111 |
| 8 | 1000 |
| 9 | 1001 |
Example: Convert $49_{10}$ to BCD.
From the table above, to represent the number 49 in BCD, we find the binary for each digit separately:
$$4 = 0100_{BCD}$$
$$9 = 1001_{BCD}$$
Therefore, $$49_{10} = 01001001_{BCD}$$
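The digit-by-digit procedure above can be sketched in Python (an illustrative helper, not part of the original notes): each decimal digit is encoded as its own 4-bit nibble and the nibbles are joined.

```python
def decimal_to_bcd(number: int) -> str:
    """Encode each decimal digit of `number` as a 4-bit group (a nibble)."""
    return "".join(format(int(digit), "04b") for digit in str(number))

print(decimal_to_bcd(49))  # -> 01001001
```

Note that this differs from plain binary: $49_{10}$ in pure binary is 110001, but in BCD each digit keeps its own 4-bit code.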
3. EBCDIC
EBCDIC: Stands for Extended Binary Coded Decimal Interchange Code. This is an 8-bit scheme developed by IBM, similar in purpose to ASCII but used primarily on IBM mainframe computers. Each character code is made up of two nibbles (4 bits each): a zone nibble that identifies the character class, and a digit nibble that identifies the specific character within that class.
4. ASCII
ASCII: Stands for American Standard Code for Information Interchange. It was one of the earliest and most widely used schemes. It originally used 7 bits to represent 128 characters, primarily for English. An extended version uses 8 bits to represent 256 characters.
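In Python (a short sketch added for illustration), the built-in `ord()` and `chr()` functions convert between a character and its ASCII code:

```python
# ord() gives a character's code; chr() converts a code back to a character.
code = ord("A")
print(code)                  # -> 65
print(format(code, "07b"))   # -> 1000001 (the 7-bit ASCII pattern)
print(chr(66))               # -> B
```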
Unicode
Unicode: This is the most modern and widely used standard. It was created to overcome the limitations of ASCII and EBCDIC by assigning a unique number (a code point) to every character, regardless of language. It supports characters from languages all over the world, including Chinese, Arabic, and Nigerian languages like Yoruba. UTF-8 is the most common encoding of Unicode; it is backward-compatible with ASCII and uses one to four bytes per character.
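A small Python sketch (added for illustration) shows how UTF-8 uses a different number of bytes for different characters:

```python
# ASCII characters take 1 byte in UTF-8; other characters take more.
for text in ["A", "é", "中"]:
    encoded = text.encode("utf-8")
    print(text, len(encoded), encoded.hex())
# "A" -> 1 byte, "é" -> 2 bytes, "中" -> 3 bytes
```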