ASCII Table: Complete Reference with Hex, Decimal, and Binary Values

ASCII (American Standard Code for Information Interchange) is the foundation of text representation in computing. Whether you're debugging network protocols, analyzing binary data, or simply trying to understand how computers encode characters, having a complete ASCII reference at your fingertips is essential. This comprehensive guide provides the complete ASCII table with decimal, hexadecimal, and binary values, along with practical explanations of how ASCII works and when to use it.

The ASCII standard, established in 1963, defines 128 characters numbered from 0 to 127. Each character is represented by a 7-bit binary number, though it's commonly stored in an 8-bit byte with the most significant bit set to zero. These 128 characters include control characters (non-printable), standard punctuation and symbols, digits, uppercase and lowercase letters, and a delete character. Understanding ASCII is crucial for anyone working with text encoding, data serialization, or low-level programming.
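The 7-bit layout described above is easy to verify; a minimal Python sketch (the character choices are arbitrary):

```python
# Standard ASCII code points are all below 128, so each fits in 7 bits;
# the eighth (most significant) bit of the stored byte is zero.
for ch in ["A", "a", "0", " "]:
    code = ord(ch)
    assert code < 128
    print(ch, code, format(code, "08b"))  # leading bit is always 0
```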

In this reference guide, we'll provide the complete ASCII table organized by character type, explain the purpose of control characters, discuss extended ASCII and its limitations, and show practical examples of working with ASCII values in real-world scenarios. Whether you need to convert hexadecimal values to ASCII text or understand why certain characters cause issues in your application, this guide has you covered.

Complete ASCII Table (0-127)

Control Characters (0-31)

Control characters are non-printable characters originally designed to control devices like printers and terminals. While many are obsolete today, several remain critical in modern computing.

Decimal Hex Binary Abbreviation Caret Description
0 00 00000000 NUL ^@ Null character
1 01 00000001 SOH ^A Start of Heading
2 02 00000010 STX ^B Start of Text
3 03 00000011 ETX ^C End of Text
4 04 00000100 EOT ^D End of Transmission
5 05 00000101 ENQ ^E Enquiry
6 06 00000110 ACK ^F Acknowledge
7 07 00000111 BEL ^G Bell (Alert)
8 08 00001000 BS ^H Backspace
9 09 00001001 HT ^I Horizontal Tab
10 0A 00001010 LF ^J Line Feed (Newline)
11 0B 00001011 VT ^K Vertical Tab
12 0C 00001100 FF ^L Form Feed
13 0D 00001101 CR ^M Carriage Return
14 0E 00001110 SO ^N Shift Out
15 0F 00001111 SI ^O Shift In
16 10 00010000 DLE ^P Data Link Escape
17 11 00010001 DC1 ^Q Device Control 1 (XON)
18 12 00010010 DC2 ^R Device Control 2
19 13 00010011 DC3 ^S Device Control 3 (XOFF)
20 14 00010100 DC4 ^T Device Control 4
21 15 00010101 NAK ^U Negative Acknowledge
22 16 00010110 SYN ^V Synchronous Idle
23 17 00010111 ETB ^W End of Transmission Block
24 18 00011000 CAN ^X Cancel
25 19 00011001 EM ^Y End of Medium
26 1A 00011010 SUB ^Z Substitute
27 1B 00011011 ESC ^[ Escape
28 1C 00011100 FS ^\ File Separator
29 1D 00011101 GS ^] Group Separator
30 1E 00011110 RS ^^ Record Separator
31 1F 00011111 US ^_ Unit Separator
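The caret notation in the table above follows a simple rule: a control character with code n is written as ^ plus the character with code n + 64 (equivalently, XOR with 0x40). A minimal Python sketch, using a hypothetical helper name `caret`:

```python
# Caret notation: a control character with code n (0-31) is written as
# '^' plus the character with code n + 64. XOR with 0x40 maps both ways.
def caret(n):
    return "^" + chr(n ^ 0x40)

print(caret(0))   # ^@  (NUL)
print(caret(3))   # ^C  (ETX -- why Ctrl+C interrupts)
print(caret(27))  # ^[  (ESC)
```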

Printable Characters (32-126)

These are the visible ASCII characters that you see in text documents, source code, and user interfaces.

Decimal Hex Binary Character Description
32 20 00100000 (space) Space
33 21 00100001 ! Exclamation mark
34 22 00100010 " Double quote
35 23 00100011 # Hash/Number sign
36 24 00100100 $ Dollar sign
37 25 00100101 % Percent sign
38 26 00100110 & Ampersand
39 27 00100111 ' Single quote
40 28 00101000 ( Left parenthesis
41 29 00101001 ) Right parenthesis
42 2A 00101010 * Asterisk
43 2B 00101011 + Plus sign
44 2C 00101100 , Comma
45 2D 00101101 - Hyphen/Minus
46 2E 00101110 . Period
47 2F 00101111 / Forward slash
48 30 00110000 0 Digit zero
49 31 00110001 1 Digit one
50 32 00110010 2 Digit two
51 33 00110011 3 Digit three
52 34 00110100 4 Digit four
53 35 00110101 5 Digit five
54 36 00110110 6 Digit six
55 37 00110111 7 Digit seven
56 38 00111000 8 Digit eight
57 39 00111001 9 Digit nine
58 3A 00111010 : Colon
59 3B 00111011 ; Semicolon
60 3C 00111100 < Less than
61 3D 00111101 = Equal sign
62 3E 00111110 > Greater than
63 3F 00111111 ? Question mark
64 40 01000000 @ At symbol
65 41 01000001 A Uppercase A
66 42 01000010 B Uppercase B
67 43 01000011 C Uppercase C
68 44 01000100 D Uppercase D
69 45 01000101 E Uppercase E
70 46 01000110 F Uppercase F
71 47 01000111 G Uppercase G
72 48 01001000 H Uppercase H
73 49 01001001 I Uppercase I
74 4A 01001010 J Uppercase J
75 4B 01001011 K Uppercase K
76 4C 01001100 L Uppercase L
77 4D 01001101 M Uppercase M
78 4E 01001110 N Uppercase N
79 4F 01001111 O Uppercase O
80 50 01010000 P Uppercase P
81 51 01010001 Q Uppercase Q
82 52 01010010 R Uppercase R
83 53 01010011 S Uppercase S
84 54 01010100 T Uppercase T
85 55 01010101 U Uppercase U
86 56 01010110 V Uppercase V
87 57 01010111 W Uppercase W
88 58 01011000 X Uppercase X
89 59 01011001 Y Uppercase Y
90 5A 01011010 Z Uppercase Z
91 5B 01011011 [ Left square bracket
92 5C 01011100 \ Backslash
93 5D 01011101 ] Right square bracket
94 5E 01011110 ^ Caret
95 5F 01011111 _ Underscore
96 60 01100000 ` Backtick
97 61 01100001 a Lowercase a
98 62 01100010 b Lowercase b
99 63 01100011 c Lowercase c
100 64 01100100 d Lowercase d
101 65 01100101 e Lowercase e
102 66 01100110 f Lowercase f
103 67 01100111 g Lowercase g
104 68 01101000 h Lowercase h
105 69 01101001 i Lowercase i
106 6A 01101010 j Lowercase j
107 6B 01101011 k Lowercase k
108 6C 01101100 l Lowercase l
109 6D 01101101 m Lowercase m
110 6E 01101110 n Lowercase n
111 6F 01101111 o Lowercase o
112 70 01110000 p Lowercase p
113 71 01110001 q Lowercase q
114 72 01110010 r Lowercase r
115 73 01110011 s Lowercase s
116 74 01110100 t Lowercase t
117 75 01110101 u Lowercase u
118 76 01110110 v Lowercase v
119 77 01110111 w Lowercase w
120 78 01111000 x Lowercase x
121 79 01111001 y Lowercase y
122 7A 01111010 z Lowercase z
123 7B 01111011 { Left curly brace
124 7C 01111100 | Vertical bar
125 7D 01111101 } Right curly brace
126 7E 01111110 ~ Tilde
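A useful pattern hidden in the table above: uppercase and lowercase letters differ only in bit 0x20 ('A' is 0x41, 'a' is 0x61). A minimal Python sketch with a hypothetical `toggle_case` helper:

```python
# Uppercase and lowercase ASCII letters differ only in bit 0x20:
# 'A' = 0x41 and 'a' = 0x61, so flipping that single bit toggles case.
def toggle_case(ch):
    # Only meaningful for ASCII letters A-Z / a-z.
    return chr(ord(ch) ^ 0x20)

print(toggle_case("A"))  # a
print(toggle_case("z"))  # Z
```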

Delete Character (127)

Decimal Hex Binary Character Description
127 7F 01111111 DEL Delete

The DEL character is technically a control character but occupies the final position in the standard ASCII table. It was originally used to mark deleted characters on punch tape.

Understanding Control Characters

Control characters serve special functions beyond representing visible text. While many were designed for teletype machines and early printers, several remain essential in modern computing.

Critical Control Characters You Need to Know

NUL (0x00) - The null character marks string termination in C and many other programming languages. It's also used as a padding character in various protocols. When you encounter unexpected behavior with string handling, null characters are often the culprit.

TAB (0x09) - The horizontal tab character creates indentation in text files and source code. The eternal tabs-versus-spaces debate in programming centers on this character. Different editors may display tabs with different widths, which is why many coding standards prefer spaces for consistent alignment.

LF (0x0A) - Line feed, also called newline, moves the cursor to the next line. Unix and Linux systems use LF alone to end lines. When working with text files across platforms, understanding line endings is crucial.

CR (0x0D) - Carriage return moves the cursor to the beginning of the line. Windows uses the combination CR+LF (0x0D 0x0A) for line endings, while older Mac systems used CR alone. This difference causes the classic "wrong line ending" problem when transferring files between operating systems.
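The CR/LF differences above can be handled with a simple normalization pass; a minimal Python sketch (the `normalize_newlines` name is illustrative):

```python
# Convert Windows (CR+LF) and classic Mac (CR) line endings to Unix (LF).
# The CR+LF replacement must run first so lone CRs are handled correctly.
def normalize_newlines(text):
    return text.replace("\r\n", "\n").replace("\r", "\n")

mixed = "unix\nwindows\r\nold mac\r"
print(normalize_newlines(mixed).splitlines())  # ['unix', 'windows', 'old mac']
```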

ESC (0x1B) - The escape character introduces escape sequences that control terminal formatting, colors, and cursor positioning. If you've ever seen ANSI color codes in terminal output, they all begin with ESC. Modern terminal emulators still use ESC sequences extensively for formatting.
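As a rough illustration (assuming a terminal that honors ANSI sequences), a minimal Python sketch that builds a colored message:

```python
# ESC (0x1B) followed by '[' starts an ANSI escape sequence; the codes
# below select red text and reset formatting, respectively.
RED = "\x1b[31m"
RESET = "\x1b[0m"
message = RED + "error: something failed" + RESET
print(repr(message))  # repr() makes the invisible ESC characters visible
```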

Why Control Characters Matter Today

Even though we don't use teletype machines anymore, control characters remain relevant for several reasons:

  1. Protocol design - Network protocols and file formats use control characters as delimiters and markers
  2. Terminal emulation - Modern terminals use escape sequences for colors, formatting, and cursor control
  3. Data parsing - When parsing CSV, TSV, or other delimited formats, understanding control characters helps handle edge cases
  4. Debugging - Mysterious bugs in text processing often involve hidden control characters
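Point 4 can be turned into a small diagnostic; a Python sketch with a hypothetical `find_control_chars` helper:

```python
# Debugging aid: report the position and hex code of any control character
# (code point below 32) that is not in the expected set, like \n and \t.
def find_control_chars(text, allowed=("\n", "\t")):
    return [(i, hex(ord(c))) for i, c in enumerate(text)
            if ord(c) < 32 and c not in allowed]

print(find_control_chars("hello\x00world\r"))  # [(5, '0x0'), (11, '0xd')]
print(find_control_chars("ok\n\tfine"))        # []
```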

Extended ASCII (128-255): A Historical Footnote

Extended ASCII refers to various 8-bit character encoding schemes that use values 128-255 to represent additional characters beyond standard ASCII's 0-127 range. Unlike standard ASCII, which is universal, extended ASCII encodings vary by region and purpose.

Why Extended ASCII Exists

Standard ASCII uses only 7 bits, representing 128 characters. Since computers typically work with 8-bit bytes, the unused bit provided an opportunity to define 128 additional characters. Different organizations created different extended ASCII sets:

  • ISO 8859-1 (Latin-1) - Western European languages with characters like é, ñ, ü
  • ISO 8859-2 (Latin-2) - Central European languages
  • Windows-1252 - Microsoft's extension of Latin-1
  • IBM Code Page 437 - Original IBM PC character set with box-drawing characters
  • Mac OS Roman - Apple's encoding for pre-OS X systems

The Problem with Extended ASCII

The fundamental issue with extended ASCII is incompatibility. A byte value like 0xE9 might represent "é" in Latin-1, but a completely different character in Cyrillic encoding. This led to the infamous "mojibake" problem where text displays as garbled characters when interpreted with the wrong encoding.
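The mojibake problem can be reproduced in a couple of lines; a minimal Python sketch:

```python
# One byte, two meanings: 0xE9 is 'é' under Latin-1, but by itself it is
# not a valid UTF-8 sequence (it looks like the start of a 3-byte character).
raw = b"\xe9"
print(raw.decode("latin-1"))  # é

try:
    raw.decode("utf-8")
except UnicodeDecodeError:
    print("0xE9 alone is not valid UTF-8")
```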

When you transfer a file created on a Windows system with Windows-1252 encoding to a Linux system expecting UTF-8, characters above 127 may display incorrectly. This is why file encoding matters and why explicitly declaring encoding (like <meta charset="UTF-8"> in HTML) is essential.

Why Extended ASCII Matters Less Today

Unicode and UTF-8 have largely superseded extended ASCII for good reasons:

  1. Universality - UTF-8 can represent every character in every language
  2. Backward compatibility - UTF-8's first 128 characters exactly match ASCII
  3. Web standards - UTF-8 is the dominant encoding on the web
  4. Cross-platform consistency - No more encoding guesswork when sharing files

You'll still encounter extended ASCII in legacy systems, old file formats, and embedded systems where memory constraints favor single-byte encodings. But for new projects, UTF-8 is almost always the right choice.

ASCII vs Unicode and UTF-8

Understanding the relationship between ASCII, Unicode, and UTF-8 is essential for modern development.

ASCII's Limitations

ASCII's 128 characters work fine for English text, but they can't represent:

  • Accented characters in European languages
  • Non-Latin scripts (Chinese, Arabic, Hebrew, etc.)
  • Emoji and symbols
  • Mathematical and technical symbols

How Unicode Solves the Problem

Unicode is a character set that assigns a unique number (called a code point) to every character in every writing system. Unicode currently defines over 140,000 characters, from Latin letters to Chinese ideographs to emoji.

For example:

  • 'A' = U+0041 (same as ASCII 65)
  • 'é' = U+00E9
  • '中' = U+4E2D
  • '😊' = U+1F60A

UTF-8: The Dominant Encoding

UTF-8 is a variable-length encoding that represents Unicode characters using 1 to 4 bytes:

  • ASCII characters (0-127) - Encoded in 1 byte, identical to ASCII
  • Extended Latin and other common scripts - 2 bytes
  • Most other scripts including CJK - 3 bytes
  • Emoji and rare characters - 4 bytes

This backward compatibility with ASCII is UTF-8's killer feature. Any valid ASCII text is also valid UTF-8, which enabled smooth adoption.
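The byte counts above are easy to verify; a minimal Python sketch using the example characters from earlier:

```python
# UTF-8 byte lengths for the code-point examples above:
# ASCII stays 1 byte; other scripts take 2-4 bytes.
for ch in ["A", "é", "中", "😊"]:
    encoded = ch.encode("utf-8")
    print(ch, len(encoded), encoded.hex())
# A 1 41
# é 2 c3a9
# 中 3 e4b8ad
# 😊 4 f09f988a
```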

When to Use ASCII vs UTF-8

Use ASCII when:

  • Working with legacy systems that don't support UTF-8
  • Protocol specifications explicitly require ASCII
  • Embedded systems with severe memory constraints
  • You know for certain only English characters will be used

Use UTF-8 (almost always) when:

  • Building modern web applications
  • Creating user-facing applications that might need internationalization
  • Working with user-generated content
  • Storing text in databases
  • Unsure which to choose (default to UTF-8)

Practical Examples: Working with ASCII Values

Looking Up ASCII Values

To find the ASCII value of a character, you can use our Hex to ASCII converter or calculate it programmatically:

JavaScript:

// Get ASCII value
'A'.charCodeAt(0); // Returns 65

// Get character from ASCII value
String.fromCharCode(65); // Returns 'A'

// Convert to hex
(65).toString(16); // Returns '41'

Python:

# Get ASCII value
ord('A')  # Returns 65

# Get character from ASCII value
chr(65)  # Returns 'A'

# Convert to hex
hex(65)  # Returns '0x41'

Java:

// Get ASCII value
int ascii = (int)'A';  // Returns 65

// Get character from ASCII value
char ch = (char)65;  // Returns 'A'

// Convert to hex
String hex = Integer.toHexString(65);  // Returns "41"

Checking if Text is Pure ASCII

Sometimes you need to verify whether text contains only ASCII characters:

JavaScript:

function isAscii(str) {
  return /^[\x00-\x7F]*$/.test(str);
}

isAscii("Hello World");  // true
isAscii("Café");  // false

Python:

def is_ascii(text):
    return all(ord(char) < 128 for char in text)

is_ascii("Hello World")  # True
is_ascii("Café")  # False

Converting Between Hex and ASCII

Working with hexadecimal representations of ASCII is common in debugging, network protocols, and data analysis. You can use our hextoascii.co converter for quick conversions, or implement it programmatically:

JavaScript:

// Hex to ASCII
function hexToAscii(hex) {
  return hex.match(/.{1,2}/g)
    .map(byte => String.fromCharCode(parseInt(byte, 16)))
    .join('');
}

hexToAscii('48656c6c6f');  // Returns "Hello"

// ASCII to Hex
function asciiToHex(str) {
  return str.split('')
    .map(char => char.charCodeAt(0).toString(16).padStart(2, '0'))
    .join('');
}

asciiToHex('Hello');  // Returns "48656c6c6f"

Reading Binary Data

When working with binary files or network protocols, you often encounter raw ASCII values:

Python:

# Read binary file and interpret as ASCII
with open('data.bin', 'rb') as f:
    data = f.read()
    # Filter printable ASCII characters
    printable = ''.join(chr(b) for b in data if 32 <= b <= 126)
    print(printable)

This technique is useful for extracting strings from binary files, analyzing network packets, or reverse engineering file formats.

Base64 and ASCII

Base64 encoding represents binary data using 64 ASCII characters (A-Z, a-z, 0-9, +, /), plus = for padding. This encoding is crucial for embedding binary data in text formats like JSON or XML:

// Encode binary data to Base64 (ASCII-safe)
btoa('Hello');  // Returns 'SGVsbG8='

// Decode Base64 back to original
atob('SGVsbG8=');  // Returns 'Hello'

Base64 takes every 3 bytes (24 bits) and represents them as 4 ASCII characters (6 bits each). This makes it about 33% larger than the original data, but ensures safe transmission through text-only channels.
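The 3-bytes-to-4-characters arithmetic can be checked directly; a minimal Python sketch using the standard library's base64 module:

```python
import base64

# Every 3 input bytes become 4 output characters, so the encoded form is
# about 4/3 the size of the input (plus up to two '=' padding characters).
data = b"Hello, ASCII!"  # 13 bytes -> ceil(13/3) = 5 groups -> 20 characters
encoded = base64.b64encode(data)
print(len(data), len(encoded))  # 13 20
assert base64.b64decode(encoded) == data  # round-trips losslessly
```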

Frequently Asked Questions

What is an ASCII table?

An ASCII table is a reference chart that maps the 128 standard ASCII characters to their numeric values in decimal, hexadecimal, and binary formats. It includes control characters (0-31), printable characters like letters and symbols (32-126), and the delete character (127). Developers and engineers use ASCII tables to convert between character representations and understand text encoding at a low level.

What is the difference between ASCII and Unicode?

ASCII is a 7-bit character encoding standard that represents 128 characters, primarily English letters, digits, and basic symbols. Unicode is a much larger standard that can represent over 140,000 characters from virtually every writing system in the world. UTF-8, the most common Unicode encoding, is backward-compatible with ASCII—the first 128 UTF-8 characters are identical to ASCII, making the transition seamless for English text.

Why are there only 128 characters in standard ASCII?

ASCII was designed as a 7-bit encoding system, and 7 bits can represent exactly 2^7 = 128 different values (0-127). This was sufficient for English text and basic control characters when ASCII was standardized in 1963. The 8th bit was often used for parity checking in early computer systems. Later, extended ASCII schemes used the full 8 bits to define an additional 128 characters, though these extensions were never standardized.

What are ASCII control characters used for?

ASCII control characters (0-31 and 127) were originally designed to control teletype machines and printers. Today, several remain essential: NUL (0) terminates strings in C, TAB (9) creates indentation, LF (10) creates new lines on Unix, CR (13) is part of Windows line endings, and ESC (27) introduces terminal escape sequences for colors and formatting. Many other control characters are now obsolete but still appear in legacy protocols and file formats.

How do I convert hex values to ASCII characters?

To convert hexadecimal values to ASCII, first convert the hex to decimal, then look up the corresponding character in the ASCII table. For example, hex 48 = decimal 72 = 'H'. Most programming languages provide built-in functions: String.fromCharCode(0x48) in JavaScript, chr(0x48) in Python, or use an online tool like hextoascii.co for quick conversions without coding.

What is the ASCII value of the space character?

The space character has an ASCII value of 32 (decimal), 0x20 (hexadecimal), or 00100000 (binary). It's the first printable ASCII character, marking the boundary between control characters (0-31) and visible characters (32-126). Spaces are often problematic in programming because they're invisible but significant—trailing spaces, multiple spaces, or spaces versus tabs can cause subtle bugs.

Should I use ASCII or UTF-8 for my application?

For almost all modern applications, UTF-8 is the better choice. UTF-8 is backward-compatible with ASCII (pure ASCII text is valid UTF-8), but also supports international characters, emoji, and special symbols. Choose ASCII only when working with legacy systems that explicitly require it, or in extremely memory-constrained embedded systems. Web applications, databases, and user-facing software should default to UTF-8 to avoid encoding problems and support international users.

