Computers store text using character codes. The most prevalent computer character set is the American Standard Code for Information
Interchange, more commonly known as ASCII[1] (pronounced "ask-ee"). The ASCII character set represents each character with a 7-bit binary code (the decimal numbers 0 to 127).
The first 32 codes (0 to 31) in the ASCII character set are used for control functions such as line feed, tab, escape, and carriage return. The remaining 96 codes (32 to 127) are mostly printable characters — letters, digits, punctuation, and the space — as shown in the following table.
[Table: ASCII codes 32 through 127 and their corresponding characters]
From this table we see that the letter A is represented by the code 65. In binary this code is 1000001, and in hexadecimal it is 41.
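These conversions are easy to check in code. The sketch below uses Python (chosen here for illustration; the lesson itself is language-agnostic), where ord() maps a character to its code and chr() maps a code back to its character:

```python
code = ord("A")              # character -> code
print(code)                  # 65 (decimal)
print(format(code, "b"))     # 1000001 (binary)
print(format(code, "x"))     # 41 (hexadecimal)
print(chr(65))               # code -> character: A

# A few control codes from the first 32 positions:
print(ord("\t"), ord("\n"), ord("\r"))   # 9 10 13 (tab, line feed, carriage return)
```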
ASCII works reasonably well for basic text in English and many other Western European languages, but with only 128 codes it does not have nearly enough room to represent the characters of every written language, let alone technical symbols.
The Unicode Standard is a 16-bit character encoding that provides character codes for the characters used in the written languages of the world, along with a wide range of mathematical symbols and other technical characters. (The modern standard has since grown beyond 16 bits, but the codes described here are unchanged.)
With 16 bits, Unicode can represent 65,536 different characters and symbols. Unicode codes 32 through 127 represent the same characters as the ASCII codes; thus, the 16-bit Unicode code for the letter A is also the decimal number 65. For additional information on the Unicode Standard, please refer to the Resources page.
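A short sketch (again in Python, whose strings are Unicode) confirms that codes 32 through 127 match ASCII, while characters outside ASCII receive higher code points:

```python
print(ord("A"))     # 65 -- identical to the ASCII code
print(chr(0x41))    # A  -- hexadecimal 41, as in the ASCII table
print(ord("€"))     # 8364 -- the euro sign, beyond ASCII but within 16 bits
```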
This lesson concludes our investigation of how a computer stores numbers and text and wraps up this module.
Floating-Point Numbers, ASCII, and Unicode - Quiz
Click the Quiz link below to test your understanding of floating-point numbers and the ASCII and Unicode character codes.