Number Base Converter Online
Convert numbers between binary, octal, decimal, hexadecimal, and any custom base from 2 to 36. Enter a number, select its base, and see all conversions instantly. Everything runs locally in your browser with zero server requests.
Why Use This Tool
Real-Time Simultaneous Conversion
Every keystroke triggers an instant conversion across all four major number bases and your chosen custom base. There is no submit button to press and no server to wait for: as soon as you type a digit, the binary, octal, decimal, and hexadecimal representations appear simultaneously in clearly labeled output cards. This immediate feedback makes it effortless to experiment with values and understand how the same number looks in the different positional numeral systems used throughout computer science and digital electronics.
Any Base from 2 to 36
Go beyond the standard four bases with full support for any radix between 2 and 36. Base-36 uses the complete set of digits 0 through 9 plus letters A through Z and is frequently used for compact URL-safe identifiers, timestamp shortening, and alphanumeric hash generation. The custom base input lets you convert to and from unusual radixes like base-3 (ternary), base-5 (quinary), base-12 (duodecimal), or base-20 (vigesimal) that appear in specialized computing, mathematics, and historical numeral systems.
Large Number Support
This converter handles integers well beyond the standard 32-bit range by using JavaScript BigInt for numbers that exceed the safe integer limit of 2^53 - 1. Whether you are working with 64-bit memory addresses, cryptographic key values, or very long binary strings from hardware register dumps, the tool converts them accurately without rounding errors or silent truncation. Every output displays the complete digit sequence regardless of how many digits the converted number contains, ensuring reliable results for embedded systems, low-level debugging, and large-scale numerical computation.
How to Use the Number Base Converter
- Select the input base. Choose the base of the number you want to convert from the dropdown menu or the quick-select buttons: Binary (2), Octal (8), Decimal (10), or Hexadecimal (16). If your number uses a different radix, select Custom and enter any base from 2 to 36 in the field that appears.
- Enter your number. Type or paste the number into the input field. For hexadecimal, use digits 0-9 and letters A-F (case insensitive). For bases above 10, letters represent values from 10 (A) up to 35 (Z). The tool validates your input against the selected base and highlights errors immediately.
- Read the results. All four standard base outputs (binary, octal, decimal, hexadecimal) update in real time as you type. Set the custom base field at the bottom to see the number converted to any radix from 2 to 36. Click the Copy button on any result card to send that value to your clipboard instantly.
Hexadecimal is a compact shorthand for binary because each hex digit maps to exactly 4 bits. The hex value 0xFF equals binary 11111111, and 0xA3 equals 10100011. This 4:1 ratio makes hex ideal for representing memory addresses, color codes, and byte values without writing long binary strings.
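To see the 4-bit mapping for yourself, a quick console snippet (a minimal sketch using standard Number methods) prints a hex byte next to its zero-padded binary form:

```js
// Show a hex byte alongside its zero-padded 8-bit binary form.
const byte = 0xA3;
console.log(byte.toString(16).toUpperCase());   // "A3"
console.log(byte.toString(2).padStart(8, "0")); // "10100011"

// Each hex digit corresponds to one 4-bit group (nibble):
// A -> 1010, 3 -> 0011
```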
In JavaScript, the legacy octal prefix 0 (e.g., 010 equals 8 in decimal) causes silent bugs and is a syntax error in strict mode. Always use the explicit 0o prefix (e.g., 0o10) for octal literals. Parsing user input like "010" is especially dangerous: parseInt("010") returns 10 in modern engines, but older ones returned 8, so always pass an explicit radix.
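A short sketch of the safe patterns, runnable in any modern engine:

```js
"use strict";

// Strict mode rejects legacy octal literals like 010 (SyntaxError),
// so use the explicit 0o prefix instead.
const eight = 0o10;
console.log(eight); // 8

// When parsing user input, always pass an explicit radix so the
// result does not depend on the engine's defaults.
console.log(parseInt("010", 10)); // 10 (decimal)
console.log(parseInt("010", 8));  // 8  (octal)
```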
Understanding Number Systems in Computing
Number systems are the foundation of how computers store, process, and communicate data. Every piece of information inside a digital device, from a single pixel on your screen to a complex database record, ultimately reduces to sequences of digits in a positional numeral system. Understanding how to convert between these systems is an essential skill for software developers, network engineers, electronics designers, and cybersecurity analysts.
Binary — The Language of Machines
Binary, or base-2, is the most fundamental number system in computing. Digital circuits operate using two voltage levels that correspond to the two binary digits: 0 and 1. Every processor instruction, every byte of memory, and every network packet is ultimately represented in binary. A single binary digit is called a bit, eight bits form a byte, and modern processors handle 64-bit words in a single clock cycle. When you write a program in any language, the compiler or interpreter eventually translates your code into sequences of binary instructions that the CPU can execute. Understanding binary is crucial for tasks like bitwise operations, subnet mask calculation, file permission flags, and hardware register configuration. For example, the decimal number 255 appears as 11111111 in binary, which represents a byte with all eight bits set to one, commonly seen as the maximum value in an unsigned byte, the broadcast component of an IP subnet mask, or full intensity in a single RGB color channel.
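As an illustration of that bit-level work, here is a minimal sketch of flag checking with bitwise operators; the STATUS value and bit positions are invented for the example:

```js
// Inspect individual bits of a byte, as you would when checking
// hardware flags or permission bits. STATUS is a made-up value.
const STATUS = 0b10110101;

const BIT_0 = 1 << 0;
const BIT_3 = 1 << 3;

console.log((STATUS & BIT_0) !== 0); // true  -> bit 0 is set
console.log((STATUS & BIT_3) !== 0); // false -> bit 3 is clear

// 255 is a byte with all eight bits set:
console.log((255).toString(2)); // "11111111"
```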
Hexadecimal — The Developer's Shorthand
Hexadecimal, or base-16, uses digits 0 through 9 and letters A through F to represent values from zero to fifteen. Because 16 is a power of two (2^4), each hexadecimal digit maps exactly to four binary bits, making it a compact and human-readable way to express binary data. Programmers use hexadecimal constantly: memory addresses in debuggers appear as hex values like 0x7FFE0000; CSS colors are specified as six-digit hex codes like #3B82F6 where each pair of digits represents one RGB channel; Unicode code points are written as U+0041 for the letter A; and raw byte streams in hex editors display two hex digits per byte. A 32-bit integer that requires ten decimal digits or thirty-two binary digits can be written in just eight hexadecimal characters, which makes hex indispensable for reading memory dumps, analyzing network packets, and specifying color values in web design.
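To make the compactness concrete, a small sketch comparing digit counts for the same 32-bit value (reusing the 0x7FFE0000 address mentioned above):

```js
// The same 32-bit value in three bases, illustrating hex compactness.
const addr = 0x7FFE0000;

console.log(addr.toString(10)); // "2147352576" (10 decimal digits)
console.log(addr.toString(16)); // "7ffe0000"   (8 hex digits)

// toString(2) drops leading zeros; padStart restores the full width.
console.log(addr.toString(2).padStart(32, "0"));
// "01111111111111100000000000000000" (32 bits)
```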
Octal — Unix Permissions and Legacy Systems
Octal, or base-8, uses digits 0 through 7. Each octal digit corresponds exactly to three binary bits, which aligns neatly with the Unix file permission model. In Unix and Linux, file permissions are divided into three groups of three bits: owner, group, and others. Each group has read (4), write (2), and execute (1) flags that sum to a single octal digit. The familiar command chmod 755 sets the owner to read-write-execute (7 = 111 in binary), and both group and others to read-execute (5 = 101 in binary). Octal was historically popular in early computing systems like the PDP-8 and IBM mainframes that used word sizes divisible by three. While hexadecimal has largely replaced octal for general binary representation, octal notation remains the standard for Unix permissions, C/C++ octal escape sequences (like \033 for the escape character), and some embedded systems where three-bit groupings are natural.
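A brief sketch showing how an octal permission value expands into its 3-bit groups:

```js
// Expand an octal permission value into its 3-bit binary groups.
const mode = 0o755;

console.log(mode.toString(8)); // "755"
console.log(mode.toString(2)); // "111101101"

// Each octal digit is exactly one 3-bit group:
// 7 -> 111 (rwx), 5 -> 101 (r-x), 5 -> 101 (r-x)
```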
Decimal — The Human Standard
Decimal, or base-10, is the number system humans use every day. It has ten digits from 0 to 9, and each position represents a successive power of ten. While computers do not natively think in decimal, virtually all user-facing interfaces display numbers in base-10 because it is universally understood. Financial calculations, measurement displays, counters, progress indicators, and statistics are all presented in decimal. Internally, some systems use binary-coded decimal (BCD) or dedicated decimal arithmetic to avoid the rounding errors that arise when decimal fractions are stored in binary floating point, which is particularly important in financial and accounting software where precision to the cent is mandatory.
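The classic demonstration of why binary floating point is risky for money, plus the common integer-cents workaround (a sketch, not a full currency library):

```js
// Binary floating point cannot represent most decimal fractions
// exactly, which is why financial code avoids it.
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false

// A common workaround: do money math in integer cents.
const priceCents = 10 + 20;    // 30 cents, exact
console.log(priceCents / 100); // 0.3, for display only
```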
Real-World Use Cases
Embedded Systems Programmer
A firmware developer reading hardware register values displayed in hexadecimal needs to convert individual bits to binary to check which flags are set. Understanding the exact bit pattern determines whether a sensor is enabled, an interrupt is active, or a peripheral is configured correctly.
Cybersecurity Analyst
A security researcher analyzing network packet captures sees MAC addresses and protocol fields in hexadecimal. Converting these to decimal and binary helps identify device manufacturers from OUI codes and decode protocol flags for forensic analysis.
Computer Science Student
A student studying digital logic needs to convert between binary, octal, and hexadecimal for homework assignments on Boolean algebra, memory addressing, and CPU instruction encoding. Quick conversion verification prevents errors that cascade through multi-step problems.
Common Questions
Why do computers use binary instead of decimal?
Computers use binary because digital electronic circuits are built from transistors that operate most reliably in two distinct states: on and off, corresponding to the binary digits 1 and 0. Designing circuits that distinguish between just two voltage levels is far simpler, faster, and more noise-resistant than building circuits that must reliably differentiate between ten voltage levels (which would be needed for decimal). Every logic gate, memory cell, and processor register is fundamentally a collection of binary switches. Boolean algebra, the mathematical framework behind digital logic, maps directly to binary values. While ternary (base-3) and other multi-valued logic systems have been researched, binary remains dominant because it maximizes reliability and minimizes manufacturing complexity. The entire stack of modern computing, from hardware through operating systems to application software, is built on this binary foundation.
How are hexadecimal color codes used in CSS?
In CSS, hexadecimal color codes specify colors using a pound sign followed by six hex digits, such as #FF5733. The six digits are divided into three pairs: the first pair represents the red channel intensity (00 to FF, or 0 to 255 in decimal), the second pair represents green, and the third pair represents blue. For example, #FF0000 is pure red (red at maximum, green and blue at zero), #00FF00 is pure green, and #0000FF is pure blue. CSS also supports shorthand three-digit hex codes where each digit is doubled: #F00 is equivalent to #FF0000. Modern CSS extends this with eight-digit hex codes that include an alpha (transparency) channel, such as #3B82F680 where the last two digits (80, or 128 in decimal) represent 50 percent opacity. Hex color codes are compact, widely supported, and directly correspond to the underlying RGB byte values that displays use to produce colors.
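A minimal sketch of parsing a six-digit hex code into its channel values; hexToRgb is a hypothetical helper written for illustration, not a browser built-in:

```js
// Split a six-digit CSS hex color into its RGB channel values.
function hexToRgb(hex) {
  const value = hex.replace(/^#/, "");
  const r = parseInt(value.slice(0, 2), 16);
  const g = parseInt(value.slice(2, 4), 16);
  const b = parseInt(value.slice(4, 6), 16);
  return { r, g, b };
}

console.log(hexToRgb("#3B82F6")); // { r: 59, g: 130, b: 246 }
```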
How do octal numbers relate to Unix file permissions?
Unix file permissions are structured as three groups of three bits, which maps perfectly to octal digits. Each file has permissions for the owner, the group, and all other users. Within each group, the three bits represent read (value 4), write (value 2), and execute (value 1). Adding the values of the enabled permissions gives a single octal digit: 7 means read, write, and execute are all enabled (4+2+1); 6 means read and write (4+2); 5 means read and execute (4+1); 4 means read only. The command chmod 644 file.txt gives the owner read-write (6), and both group and others read-only (4). This three-digit octal notation is far more concise than writing out the nine individual permission flags and has become the universal shorthand for configuring file access on Unix, Linux, and macOS systems.
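As a sketch, here is a small formatter that renders an octal mode the way ls displays it; formatMode is a hypothetical helper, not a standard API:

```js
// Render an octal mode such as 0o644 as an rwx-style string.
function formatMode(mode) {
  const flags = ["r", "w", "x"];
  let out = "";
  for (let shift = 8; shift >= 0; shift--) {
    const set = (mode >> shift) & 1;
    out += set ? flags[(8 - shift) % 3] : "-";
  }
  return out;
}

console.log(formatMode(0o644)); // "rw-r--r--"
console.log(formatMode(0o755)); // "rwxr-xr-x"
```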
What is base-36 and where is it used?
Base-36 is the highest base that uses a purely alphanumeric character set: the ten digits 0 through 9 plus the twenty-six letters A through Z, giving thirty-six unique symbols. Because it produces the most compact representation possible using only letters and numbers, base-36 is commonly used for generating short, URL-safe identifiers and unique codes. URL shortening services, tracking codes on shipping labels, product serial numbers, invite codes, and database primary keys often use base-36 encoding to keep strings short while avoiding special characters that could cause parsing issues in URLs, filenames, or command-line arguments. For example, the decimal number 1,000,000 is represented as LFLS in base-36, reducing seven characters to just four. JavaScript natively supports base-36 through the built-in toString(36) and parseInt(string, 36) methods, making it trivial to implement in web applications.
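A quick round-trip using those built-in methods:

```js
// Round-trip a number through base-36 using built-in methods.
const id = (1000000).toString(36);
console.log(id); // "lfls" (toString produces lowercase)

console.log(parseInt("lfls", 36)); // 1000000
console.log(parseInt("LFLS", 36)); // 1000000 (parsing is case insensitive)
```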
How do I convert a number between bases by hand?
To convert a number from any base to decimal, multiply each digit by the base raised to the power of its position (counting from zero on the right) and sum the results. For example, binary 1011 converts to decimal as (1 times 2^3) + (0 times 2^2) + (1 times 2^1) + (1 times 2^0) = 8 + 0 + 2 + 1 = 11. To convert from decimal to another base, repeatedly divide the decimal number by the target base and record the remainders in reverse order. For example, to convert decimal 255 to hexadecimal: 255 divided by 16 gives 15 remainder 15 (F), then 15 divided by 16 gives 0 remainder 15 (F), reading the remainders bottom-up gives FF. To convert between two non-decimal bases, it is usually easiest to convert to decimal first and then convert from decimal to the target base. For binary-to-hex or hex-to-binary specifically, you can group binary digits into sets of four and convert each group directly, since each hex digit corresponds to exactly four bits.
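Both manual procedures translate directly into code. This sketch (input validation omitted for brevity) implements positional expansion one way and repeated division the other:

```js
// toDecimal applies digit * base^position via Horner's method;
// fromDecimal applies repeated division, collecting remainders.
function toDecimal(digits, base) {
  let result = 0;
  for (const ch of digits.toUpperCase()) {
    result = result * base + parseInt(ch, 36); // '0'-'9','A'-'Z' -> 0-35
  }
  return result;
}

function fromDecimal(value, base) {
  if (value === 0) return "0";
  const symbols = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
  let out = "";
  while (value > 0) {
    out = symbols[value % base] + out; // remainder becomes the next digit
    value = Math.floor(value / base);
  }
  return out;
}

console.log(toDecimal("1011", 2)); // 11
console.log(fromDecimal(255, 16)); // "FF"
```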
Can this tool handle very large numbers?
Yes. This converter uses JavaScript BigInt for numbers that exceed the safe integer limit of 2^53 - 1 (which is 9,007,199,254,740,991 in decimal). Standard JavaScript numbers use 64-bit floating point, which can silently lose precision for very large integers. BigInt provides arbitrary-precision integer arithmetic, meaning it can handle numbers with hundreds or thousands of digits without rounding errors. This is particularly useful when converting long binary strings from hardware register dumps, working with cryptographic values, or manipulating 64-bit and 128-bit memory addresses. The tool automatically detects when BigInt is needed and switches to it transparently, so you get accurate results regardless of the number's magnitude.
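A short sketch of the difference BigInt makes for a 64-bit all-ones value:

```js
// BigInt handles values far beyond Number.MAX_SAFE_INTEGER.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991

const big = BigInt("0xFFFFFFFFFFFFFFFF"); // 64-bit all-ones
console.log(big.toString(10));       // "18446744073709551615"
console.log(big.toString(2).length); // 64

// The same value as a regular Number silently loses precision:
console.log(0xFFFFFFFFFFFFFFFF); // 18446744073709552000
```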
What is the difference between signed and unsigned binary numbers?
In unsigned binary representation, all bits represent the magnitude of the number, so an 8-bit unsigned number can range from 0 (00000000) to 255 (11111111). In signed binary, the most common format is two's complement, where the leftmost bit indicates the sign: 0 for positive and 1 for negative. An 8-bit signed number in two's complement ranges from negative 128 (10000000) to positive 127 (01111111). To negate a number in two's complement, you invert all the bits and add one. This converter treats all inputs as unsigned non-negative integers. For signed number analysis, convert the unsigned binary representation and interpret the sign bit manually based on the bit width you are working with. Two's complement is the standard representation in virtually all modern processors because it simplifies the hardware needed for addition and subtraction circuits.
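A minimal sketch of reading an unsigned byte as a signed two's complement value; toSigned8 is a hypothetical helper for illustration:

```js
// Interpret an unsigned byte as a signed 8-bit two's complement value.
function toSigned8(unsignedByte) {
  // If the sign bit (bit 7) is set, subtract 2^8.
  return unsignedByte >= 128 ? unsignedByte - 256 : unsignedByte;
}

console.log(toSigned8(0b01111111)); // 127
console.log(toSigned8(0b10000000)); // -128
console.log(toSigned8(0b11111111)); // -1
```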
Number Systems in Computing
Computers operate in binary, but programmers rarely work in raw ones and zeros. Instead, different number bases serve as convenient shorthand for different tasks. Each base has a specific role in modern computing, chosen because it maps cleanly to the underlying binary architecture while being more readable for humans. Understanding these four number systems is essential for anyone working with low-level code, networking, or digital design.
Binary (Base 2)
Digits: 0, 1
The native language of all digital electronics. Every processor instruction, every byte of memory, and every network packet is ultimately a sequence of binary digits. Each bit represents a single on/off state in a transistor, and eight bits form one byte.
Real-world uses:
Subnet masks (255.255.255.0 = 11111111.11111111.11111111.00000000), bitwise flags in permissions systems, hardware register configuration, and understanding CPU instruction encoding.
Octal (Base 8)
Digits: 0 – 7
Each octal digit maps to exactly three binary digits, making it a compact way to represent binary data. Octal was more popular in early computing when systems used 12-bit, 24-bit, or 36-bit word sizes that divided evenly by three.
Real-world uses:
Unix file permissions (chmod 755 means rwxr-xr-x), some legacy mainframe systems, and escape sequences in C strings (\012 for newline). The three-digit permission model maps perfectly to octal.
Decimal (Base 10)
Digits: 0 – 9
The number system humans learn from childhood, almost certainly because we have ten fingers. While not naturally aligned with binary computing, decimal is used for all user-facing values because it is what people understand intuitively.
Real-world uses:
All user interfaces, financial calculations (where binary floating-point errors are unacceptable), API responses, database values, and any data that humans need to read and verify directly.
Hexadecimal (Base 16)
Digits: 0 – 9, A – F
Each hex digit maps to exactly four binary digits (one nibble), making it the most efficient human-readable representation of binary data. Two hex digits represent one byte, which is why hex is ubiquitous in systems programming.
Real-world uses:
CSS colors (#FF5733), memory addresses (0x7FFF5FBFF8A0), MAC addresses (00:1A:2B:3C:4D:5E), Unicode code points (U+1F600), hex dumps for debugging binary files, and cryptographic hashes (SHA-256).
The relationship between these bases is the key to understanding them: every octal digit is three binary digits, and every hex digit is four binary digits. This means converting between binary, octal, and hex is just a matter of grouping bits, while converting to and from decimal requires actual arithmetic (repeated division or multiplication) because ten is not a power of two. The number base converter above handles all these conversions instantly, saving you from manual arithmetic and potential errors.
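As a closing sketch, here is the grouping trick in code: converting a binary string to hexadecimal with no division at all, just 4-bit groups (binaryToHex is illustrative, not part of the tool):

```js
// Convert a binary string to hex by mapping each 4-bit group to
// one hex digit: no division needed, just grouping.
function binaryToHex(bits) {
  // Pad on the left so the length is a multiple of 4.
  const padded = bits.padStart(Math.ceil(bits.length / 4) * 4, "0");
  let hex = "";
  for (let i = 0; i < padded.length; i += 4) {
    hex += parseInt(padded.slice(i, i + 4), 2).toString(16);
  }
  return hex.toUpperCase();
}

console.log(binaryToHex("10100011"));  // "A3"
console.log(binaryToHex("111111111")); // "1FF"
```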