So that’s 20 Hz? Lol. That is like a baloney slicer.
They had the technology to make a much faster platter. But I suppose, given the way disk storage was being used at the time, a "decent" seek time would have been measured in seconds, so 800ms was "instant". The main alternative would have been spooling through a tape.
RAMAC 350 would allow businesses to get rid of their old tub files full of punch cards, and many human filing operatives.
For anyone wondering how it made financial sense, as usual, the goal was to replace expensive, touchy, uppity humans with machines.
No, it wasn’t.
It was to speed access to data. Unless you have some evidence the researchers who worked with electromechanics at the time were thinking "how can we replace humans", rather than "how can we represent 80 columns of data electromechanically?"
No need for this nonsensical hyperbole.
I mean, I’m sure cost savings on labour were noticed as well. And that’s not a bad thing.
I hate uppity humans.
I like the bit where they wheel in the equivalent amount of data in stacks of punch cards, and the hard drive takes up more space.
(Not fair I know because they didn’t show the punch card reader, but the bits on these platters must be ridiculously large.)
How else are they supposed to program with handheld bar magnets? /s
Haven’t RTFA yet, but I’m gonna go out on a limb and say this was one of those “if you have to ask the price, you can’t afford it” scenarios.
The Tom’s Hardware article doesn’t discuss the pricing structure, but the Wikipedia article does: the RAMAC 305 was leased, not purchased, for $35,800 per month (in 2024 USD).
From those huge machines to microSDs with terabytes. Now we’re discussing AI models and quantum computing. Wondering what we’ll see in future.
… I’ve got a bad feeling that someone will be seeing it but it won’t be human.
Based on some plausible-sounding number I found online, a megabyte is around 500 typed pages. So this thing was 1875 pages of text.
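If anyone wants to check that, here’s the arithmetic as a throwaway Python snippet. Both inputs are rough assumptions rather than specs: the ~3.75 MB figure comes from the commonly quoted 5 million 6-bit characters converted to 8-bit bytes, and 500 pages/MB is just the rule of thumb above.

```python
# Back-of-the-envelope check (both inputs are rough assumptions, not specs):
# ~5 million 6-bit characters is roughly 3.75 MB of 8-bit bytes, and
# "500 typed pages per MB" is the rule of thumb quoted above.
capacity_mb = 5_000_000 * 6 / 8 / 1_000_000   # ~3.75 MB
pages = capacity_mb * 500
print(f"{capacity_mb:.2f} MB ~= {pages:.0f} pages")   # 3.75 MB ~= 1875 pages
```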
I wonder when the break-even point was for digital vs paper media from a size/weight standpoint.
TL;DR: Even if I’m off by 50%, it’s the mid-1970s for size. Weight? Only a couple of sources mentioned weight, so I don’t have the data, but double-sided 5.25 inch floppies (1976) probably win on weight.
Looking into this, it’s hard to find the dimensions of the large storage machines; they are often described vaguely or in an anything-but-the-metric-system kind of way. So there are a lot of assumptions here that we just have to live with.
That ends up pretty close to your numbers: 700 MB to 330,000 pieces of paper is ~2121 bytes per page, or ~1,060,606 bytes for 500 pages. I’ll use 500 pages per MB for the math.
The volume of a piece of A4 paper is 0.3553 cubic inches. 500 pages is 177.65 cubic inches, so that’s the volume of a MB stored on paper.
I won’t look much at weight since I couldn’t find it for most of them, but 500 sheets of paper weighs about 5.5 lbs.
Looking at an IBM 1311, available in 1962, you get 12.6 MB in the size of a washing machine. An average washing machine is 32,400 cubic inches. This gives us 2,571 cubic inches per MB, so we’re starting off ~14.5 times worse than paper.
An IBM 2302 from 1965 stored 112 MB in 123,915.5 cubic inches, 1,106 cubic inches per MB.
The lowest-capacity 8 inch floppies (80 KB), available in 1971: 12.5 of them per MB at 7.56 cubic inches apiece gives us 94.5 cubic inches per MB. That’s just the floppies on their own; with a reader it would still be large enough to not yet beat paper.
IBM 3340 (1973), assuming it’s basically the same size as a 1311 (an average washing machine): 32,400 cubic inches for 70 MB, or 462 cubic inches per MB.
Applying the same logic to the later 3350 (1975), we get 32,400 cubic inches for 317 MB, or 102 cubic inches per MB, which beats paper.
Double-sided 5.25 inch floppies (1976): 360 KB at 3.256 cubic inches each; I’ll round up to 3 per MB, so 9.768 cubic inches per MB. Like the 8 inch floppies earlier, this doesn’t account for the size of the reader, but I’d still say this is the point where we’re beating paper for both size and weight.
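For anyone who wants to poke at these numbers, here’s the same arithmetic as a quick Python sketch. All the volumes and capacities are the guesses from above (washing-machine volume, sheet volume, etc.), not official specs, and the floppy lines use straight division instead of rounding up to whole disks, so those come out slightly lower.

```python
# Rough check of the cubic-inches-per-MB figures above. Every number here is
# one of the assumptions stated in the comment, not an authoritative spec.

SHEET_VOLUME_IN3 = 0.3553                              # one sheet of A4, cubic inches
PAGES_PER_MB = 500                                     # ~2121 bytes of text per page
PAPER_IN3_PER_MB = SHEET_VOLUME_IN3 * PAGES_PER_MB     # ~177.65

# (name, year, capacity in MB, assumed volume in cubic inches)
devices = [
    ("IBM 1311",                 1962,  12.6,  32_400),     # "washing machine" guess
    ("IBM 2302",                 1965, 112.0, 123_915.5),
    ('8" floppy (media only)',   1971,   0.08,     7.56),   # no drive included
    ("IBM 3340",                 1973,  70.0,  32_400),     # same washing-machine guess
    ("IBM 3350",                 1975, 317.0,  32_400),
    ('5.25" DS floppy (media)',  1976,   0.36,     3.256),  # no drive included
]

print(f"Paper baseline: {PAPER_IN3_PER_MB:.1f} in^3 per MB\n")
for name, year, mb, vol in devices:
    in3_per_mb = vol / mb
    ratio = in3_per_mb / PAPER_IN3_PER_MB
    verdict = "beats paper" if ratio < 1 else f"{ratio:.1f}x worse than paper"
    print(f"{name:25s} ({year}): {in3_per_mb:8.1f} in^3/MB -> {verdict}")
```

Running it reproduces the ~14.5x-worse figure for the 1311 and the crossover around the 3350; the bare floppy media look even better than that, which is why the drive caveat matters.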
They used 6-bit encoding for the text, so the drive stored around 5 million characters, which would be around 2,500-3,000 pages of text.
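Rough check of that range; the characters-per-page figure is just an assumption (dense typewritten pages land somewhere around 1,700-2,000 characters):

```python
# ~5 million characters divided by an assumed 1,700-2,000 characters per
# typed page gives roughly the 2,500-3,000 page range mentioned above.
chars = 5_000_000
for chars_per_page in (2_000, 1_700):
    print(f"{chars_per_page} chars/page -> {chars / chars_per_page:.0f} pages")
# 2000 chars/page -> 2500 pages
# 1700 chars/page -> 2941 pages
```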
For the break-even point you also have to consider how much time it takes to find and access a file, and how much time it takes to edit it.
I suppose they did without the grotesque luxury of lower case letters, haha.
I wonder what type of data protection/redundancy they had on this thing.
Let’s spin up 2 of these bad boys and do a RAID1 configuration!
Edit: RAID0 to RAID1. Don’t want to spread incorrect information.
RAID0 doesn’t give you any protection or redundancy, just speed.
Oh shit, I got my RAIDs confused; I was thinking of mirrored drives (I see now that’s RAID1). I haven’t used hardware RAID in years; my current setup is using Unraid.