#BytesInHardware #DataStorage #ComputerScience
🤔 Bytes – just a concept, or are they also implemented in hardware?
When I’m dealing with byte streams or byte arrays, for example, are those “bytes” just a linguistic expression for underlying data that are inherently just bits? Or do bytes “exist” at the hardware level? In other words, are bytes (8 bits) implemented in any way in the hardware circuitry (electronics), or do we just use the term to organize a stream of bits into chunks?
Bytes in Computer Hardware
Bytes are not just a concept in computer science; they are also reflected at the hardware level. In computer hardware, bytes are not merely chunks of data but figure in many processes and operations. Let’s dive deeper into how bytes show up in hardware:
1. **Binary Representation:** A byte is a group of 8 bits, each of which can be either 0 or 1. This binary representation is the foundation of how bytes are processed and stored in computer hardware.
2. **Memory Storage:** Computer memory is organized in bytes. When you save a file, for example, its data is laid out as a sequence of bytes before being stored.
3. **Processor Operations:** The processor operates on bytes (and multi-byte words) in arithmetic calculations, logical operations, and data manipulation.
4. **Data Transmission:** When data is transmitted between hardware components or over a network, it is broken down into bytes for transfer and processing.
5. **Addressing:** Most memory is byte-addressable: each byte has a unique address, which lets the system locate and retrieve data precisely.
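The byte-addressing point above can be sketched in C. This is an illustration only, not tied to any particular machine; the helper names are made up for the example:

```c
#include <stdint.h>
#include <stddef.h>

/* Memory is byte-addressable: a pointer to unsigned char advances one
   address per increment, while a pointer to a 4-byte uint32_t advances
   four addresses per increment. */

/* Number of distinct byte addresses covered by one uint32_t. */
size_t addresses_per_word(void) {
    return sizeof(uint32_t);   /* 4 on mainstream platforms */
}

/* Read byte i of a word through its byte-level addresses. */
unsigned char byte_at(uint32_t word, size_t i) {
    unsigned char *p = (unsigned char *)&word;
    return p[i];   /* which byte this picks out depends on endianness */
}
```

Note that `byte_at` returns bytes in memory order, so the mapping from index to significance depends on the machine’s endianness.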
Bytes vs. Bits
While bytes are commonly used to refer to a group of 8 bits, it is important to note the distinction between the two:
– **Bits:** The smallest unit of data in computers, consisting of a single binary digit (0 or 1).
– **Bytes:** A group of 8 bits, commonly used to represent a character or a piece of data in computer systems.
The Role of Bytes in Hardware
Bytes are not just a theoretical concept but are an integral part of how computer hardware functions. Here are some key roles that bytes play in hardware:
1. **Data Processing:** Bytes are used in various data processing operations such as addition, subtraction, comparison, and logical operations.
2. **File Storage:** When files are saved on a computer or a storage device, they are organized into bytes for efficient data storage and retrieval.
3. **Communication Protocols:** Bytes are used in communication protocols to transmit data between devices in a standardized format.
4. **Image and Video Processing:** In multimedia applications, bytes are crucial for processing images, videos, and other media files.
5. **Operating System Operations:** Bytes are used by the operating system to manage processes, handle memory allocation, and interact with hardware components.
In conclusion, bytes are not just a theoretical concept in computer science but are also implemented in hardware at a fundamental level. Understanding how bytes are used in computer hardware can provide valuable insights into how data is processed, stored, and transmitted in modern computing systems. So next time you encounter bytes in your coding or data analysis tasks, remember that they are not just abstract units but have a tangible presence in the hardware that powers our digital world.
Could be data or an instruction
You should take a look at CS61C
Byte = By Eight
So sort of yes and no.
When we talk about a computer being 8-bit, 16-bit, 32-bit, 36-bit, 64-bit, etc. – we’re referring to the native “word” size of an architecture. These physically exist – adding the contents of one register to a second register is a parallel operation. You don’t add one bit to one bit, take the carry, then load the next bits – you load one word into one register, one word into a second register, and then do the entire addition in one operation. The arithmetic hardware is however-many bits wide.
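To make that concrete, here is the bit-serial addition that a word-wide ALU lets you avoid, sketched in C (illustration only – real hardware computes all the result bits in one operation, which in C is just `a + b`):

```c
#include <stdint.h>

/* A ripple-carry adder written out bit by bit, the way a serial machine
   would have to work. Each step computes one sum bit and one carry. */
uint64_t ripple_add(uint64_t a, uint64_t b) {
    uint64_t sum = 0, carry = 0;
    for (int i = 0; i < 64; i++) {
        uint64_t abit = (a >> i) & 1u;
        uint64_t bbit = (b >> i) & 1u;
        sum   |= (abit ^ bbit ^ carry) << i;              /* full-adder sum bit   */
        carry  = (abit & bbit) | (carry & (abit ^ bbit)); /* full-adder carry out */
    }
    return sum;   /* identical to a + b (mod 2^64) */
}
```

Sixty-four loop iterations versus one machine instruction is exactly the difference the word-wide arithmetic hardware buys you.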
Once upon a time, this word was the definition of a byte – a 36-bit machine had 36 bits in a byte, which was sometimes broken down into 6 or 8 characters. This didn’t survive long into the age of computers actually talking to each other. Can you imagine the utter insanity if a 32-bit machine couldn’t communicate with a 64-bit machine over the internet because the 64-bit host was sending bytes too big?
The 8-bit-byte *is* something of a convenient fiction, to solve this issue. It’s rarely a fundamental type for the processor, it’s more established as a common-denominator for things like data transmission, data storage, etc.
(That said, [1-bit processors](https://en.wikipedia.org/wiki/1-bit_computing) do exist, and fit what you’re describing. They’re very uncommon though – either very archaic or very specialised.)
Kind of. Bytes were originally the width of a CPU: the largest value it could store in its registers was 8 bits. There were also some machines that ran on 7 bits in the early days, so it is somewhat arbitrary. But 8 bits being a byte is mathematically nice, since it is a power of 2 and we’re dealing with binary.
After 8-bit-byte CPUs came 16-bit “word” CPUs, then 32-bit “double words”, then 64-bit “quad words”.
These all relate to the size of the registers in your processor. And in the case of the x64 architecture, you have all of them! There are sixteen general-purpose registers, and you can access a whole qword, dword, word, or byte at a time. In the case of the low word you can even access the upper and lower bytes separately. So for register A, you have AL (the lower byte), AH (the upper byte), AX (the 16-bit word), EAX (the 32-bit dword), and RAX (the 64-bit qword).
Your computer works by putting values into one or more of these registers and then calling one of its operations. So you might say “`ADD %RAX, %RBX`” in assembly, and it would add the value in RAX to the value in RBX and store the result in RBX. But you could also add just the upper bytes together, as in “`ADD %AH, %BH`”.
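The sub-register aliasing described above can be sketched with plain C masks. These are ordinary bit operations on a 64-bit value, not real register accesses:

```c
#include <stdint.h>

/* How the x86-64 sub-register names alias one 64-bit value:
   RAX is the whole thing; EAX, AX, AH, AL are narrower views of it. */
uint8_t  reg_al(uint64_t rax)  { return (uint8_t)(rax & 0xFFu); }          /* low byte       */
uint8_t  reg_ah(uint64_t rax)  { return (uint8_t)((rax >> 8) & 0xFFu); }   /* next byte up   */
uint16_t reg_ax(uint64_t rax)  { return (uint16_t)(rax & 0xFFFFu); }       /* low 16 bits    */
uint32_t reg_eax(uint64_t rax) { return (uint32_t)(rax & 0xFFFFFFFFu); }   /* low 32 bits    */
```

For example, if RAX held `0x1122334455667788`, AL would be `0x88`, AH `0x77`, AX `0x7788`, and EAX `0x55667788`.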
But to get back to your question: there is an alternative reality out there where 7-bit CPUs won, and we might have 14-bit words, 28-bit dwords, and 56-bit qwords.
In fact, you kind of already have this with various sub-features of x64 processors, where things like the floating-point registers have an 80-bit mode for extra precision – though that’s partially to help keep 64-bit floats accurate when stringing together lots of operations.
Keep in mind when learning programming: unless you’re coding in assembly, your code is written for you to be able to read it, not for the machine.
Bytes are a myth
I’m not an expert in hardware, but AFAIK bits are grouped into bytes at the hardware level. You can’t always directly manipulate individual bits because the processor doesn’t support it.
For example, in a 64-bit processor you’ll typically deal with registers that can be referenced as different sizes:
Let’s take the A register, for example, usually used for arithmetic operations. It can be referenced as AH/AL, which use the ‘higher’ and ‘lower’ 8 bits respectively; as AX, which refers to both of them together, a 16-bit value; or as EAX, the extended variant, which combines them with another 16 bits. Or you can reference the full 64-bit register as RAX. But if you need to use a specific isolated bit, you’ll have to isolate it through shifts or divisions, just like you would in a programming language when you deal with an ‘int’ and are only interested in 1 bit.
But there are also situations where you’ll deal with single bits directly. There are ‘flag’ bits that you can reference directly; those are usually used to give you information about the last operation performed. For example, there is an overflow flag bit that tells you whether the result of the operation didn’t fit in the destination register.
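Both ideas – isolating one bit by shifting, and the flag concept – can be sketched in C. These are illustrative helpers, not real flag-register accesses; here the “flag” is computed in software as a stand-in for the hardware carry flag:

```c
#include <stdint.h>

/* Isolate bit n of x by shifting it down and masking off the rest. */
unsigned bit_at(uint64_t x, unsigned n) {
    return (unsigned)((x >> n) & 1u);
}

/* Software analogue of a carry flag: did the unsigned add wrap around?
   Unsigned overflow wraps modulo 2^64, so the sum is smaller than an
   operand exactly when a carry came out of the top bit. */
unsigned add_carries(uint64_t a, uint64_t b) {
    return (a + b) < a;
}
```

So `bit_at(10, 1)` picks bit 1 out of binary `1010`, and `add_carries(UINT64_MAX, 1)` reports the wrap that a CPU would record in its carry flag.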
It’s in the hardware. The circuits connecting the CPU to RAM and other components are all parallel sets of 8 wires (or 16, 32, 64, etc.), so that one byte (or 2, 4, or 8 bytes) can be processed at a time.
Yes, bytes do exist on a hardware level. If you tried to address data by bit, you’d end up with a very large address space and a level of granularity that would be mostly unnecessary for practical computation.
The size of a byte is actually variable and is not strictly defined; for all practical purposes, though, you can generally expect that a byte is 8 bits.
The history is that an 8-bit byte was roughly the size needed to store a single text character in the ASCII format, so 8 bits forming a single byte became the most common format.
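C makes the “not strictly defined, but expect 8” point explicit: the language leaves the byte width to the platform but exposes it as `CHAR_BIT`, guaranteed to be at least 8 and equal to 8 on essentially all modern machines:

```c
#include <limits.h>

/* The platform's byte width in bits. sizeof measures objects in units
   of these bytes, so an object's bit size is sizeof(x) * CHAR_BIT. */
int bits_per_byte(void) {
    return CHAR_BIT;
}
```

On any mainstream desktop or server platform this returns 8; historical and some DSP platforms are where other values show up.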
When the computer actually fetches or writes data, it uses a data address. In hexadecimal, the address might be, say, 0xff04ba. At that address is a single byte, on which computation can be done. Additionally, data tends to move in chunks between the various components of a computer, and these chunks can be larger than a byte – e.g. cache lines are typically between 16 and 256 bytes.
You could build a computer with no hardware-level support for bytes at all. I’m not sure whether any have really existed, since the concept of a byte (or, perhaps more accurately in its full historical context, a [word](https://en.wikipedia.org/wiki/Word_(computer_architecture)) – not exactly the same thing, but a practically similar concept) has existed for longer than fully digital programmable computers have, and when building a computer, you would very quickly realize that operating solely at the level of single bits is both inefficient and impractical.