I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.

This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20-minute read, so thank you very much in advance if you find the time to read it.

Feedback is very much welcome. Thank you.

    • Humanius@lemmy.world
      9 months ago

      Short answer: It’s because of binary.
      Computers are very good at calculating with powers of two, so a lot of computing concepts are defined in powers of two to keep the math simple.

      1024 = 2¹⁰
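
      A minimal sketch of that point in Python (my own illustration, not from the post): each extra bit doubles the number of values you can address, which is why 1024 falls out as the natural “round number” in binary.

          # Each added bit doubles the count of representable values,
          # so powers of two are the "round numbers" of binary hardware.
          for bits in range(1, 11):
              print(f"{bits:2d} bits -> {2**bits:4d} values")

          # Shifting 1 left by 10 bits is the same as 2**10.
          assert 1 << 10 == 2**10 == 0b10000000000 == 1024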

    • TheMurphy@lemmy.world
      9 months ago

      I believe it’s because you always use bytes in pairs in a computer. If you keep pairing the pairs, you eventually get to 1024, which is the power of two closest to 1000 (a short loop sketching this doubling follows the list below).

      The logic is like this:

      2+2 = 4

      4+4 = 8

      8+8 = 16

      16+16 = 32

      32+32 = 64

      64+64 = 128

      128+128 = 256

      256+256 = 512

      512+512 = 1024
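
      The same doubling as a loop, in Python (my illustration under the commenter’s framing, not from the post):

          # Keep "pairing the pairs": double from 2 until passing 1000.
          value = 2
          steps = [value]
          while value < 1000:
              value += value          # each pairing doubles the total
              steps.append(value)
          print(" -> ".join(map(str, steps)))  # 2 -> 4 -> ... -> 1024
          assert value == 1024 == 2**10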

    • Kalkaline @leminal.space
      9 months ago

      Harvard’s CS50 has a great explanation of it. Makes a ton of sense. In fact, CS50 should be required in high school; people would have a much better understanding of how software works in general.