I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.
This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20-minute read, so thank you very much in advance if you find the time to read it.
Feedback is very much welcome. Thank you.
…wait, why is it not 1000 in the first place? Is this some kind of rounding thing???
Short answer: It’s because of binary.
Long answer: Computers are very good at calculating with powers of two, and because of that a lot of computer concepts use powers of two to make calculations easier.
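To make the comparison concrete, here's a quick Python check (nothing specific to the post, just arithmetic):

    # "kilo" in SI means 10**3, but the nearest power of two is 2**10.
    print(10**3)   # 1000
    print(2**10)   # 1024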
So the problem is that our decimal number system just sucks. Should have gone with hexadecimal 😎
/Joking, if it isn’t obvious. Thank you for the explanation.
I believe it’s because everything in a computer is binary, so sizes get built up by doubling. If you keep doubling, you eventually reach 1024, which is the closest power of two to 1000.
The logic is like this, starting from 1:
1+1 = 2
2+2 = 4
4+4 = 8
8+8 = 16
16+16 = 32
32+32 = 64
64+64 = 128
128+128 = 256
256+256 = 512
512+512 = 1024
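If it helps, here's a minimal Python sketch of that doubling (the variable names are just illustrative):

    # Start at 1 and keep doubling; ten doublings land on 1024 = 2**10,
    # the closest power of two to 1000.
    value = 1
    for step in range(1, 11):
        value += value  # doubling: same as value * 2
        print(f"doubling #{step}: {value}")
    assert value == 2**10 == 1024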
Harvard’s CS50 has a great explanation of it. Makes a ton of sense. In fact, CS50 should be required in high school; people would have a much better understanding of how software works in general.
Understanding that has very little advantage for the average person.
So teaching it alongside things like the quadratic equation makes perfect sense then.
Would be better not to teach either.