I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.
This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20-minute read, so thank you very much in advance if you find the time to read it.
Feedback is very much welcome. Thank you.
Short answer: It’s because of binary.
Computers are very good at calculating with powers of two, so a lot of computer concepts use powers of two to make calculations easier. And since 2^10 = 1024 happens to be close to 1000, early computing borrowed the decimal prefix “kilo” for it, even though the two values aren’t actually equal.
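To make the mismatch concrete, here’s a small sketch (my own illustration, not from the linked post) comparing the decimal SI prefixes with the binary values that computing reused them for:

```python
# Decimal "kilo" vs. the binary 2**10 that computing borrowed it for.
KILO = 10 ** 3   # SI kilobyte: 1000 bytes
KIBI = 2 ** 10   # kibibyte:    1024 bytes

# The two drift further apart as the prefixes grow:
for power, si, iec in [(1, "kB", "KiB"), (2, "MB", "MiB"), (3, "GB", "GiB")]:
    decimal = KILO ** power
    binary = KIBI ** power
    print(f"1 {si} = {decimal:>13,} bytes   1 {iec} = {binary:>13,} bytes "
          f"({binary / decimal:.1%} of decimal)")
```

The gap is only 2.4% at the kilobyte level, but grows to about 7.4% at the gigabyte level, which is part of why the ambiguity causes real confusion (e.g. hard drive capacities).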
Long answer
So the problem is that our decimal number system just sucks. Should have gone with hexadecimal 😎
/Joking, if it isn’t obvious. Thank you for the explanation.