Researchers at the University of Shanghai for Science and Technology have developed an optical disc with a capacity of over a petabit of data, equivalent to well...
Anytime you get to that kind of capacity, you always have to think about whether someone will have a drive to read it, a computer it works on, and matching programs to decode the data. Think about some of the formats we had in the ’70s and ’80s and how often people actually have that hardware and software in working order now.
Well yeah, but it’s a matter of funding and business/government desire. 99% of the time, the only people who care about accessing things that old are hobbyists and enthusiasts.
If something critical to a Fortune 500 company or a government were stored on it and they needed it, they would have the means to contract out a specialty one-off device just to read it (or pay a very pricey data recovery shop).
And software is software; we can still run ’70s and ’80s software through a myriad of virtualization and emulation technologies fairly easily and cheaply.
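For a concrete sketch of how cheap that is (assuming DOSBox is installed and on the PATH; the folder and WS.EXE name are hypothetical stand-ins, not anything from the article), launching a vintage DOS program from a script looks something like this:

```python
import os
import subprocess

# Hypothetical folder holding the old binaries you recovered.
old_sw = os.path.expanduser("~/retro/wordstar")

subprocess.run([
    "dosbox",
    "-c", f"mount c {old_sw}",  # expose the host folder as DOS drive C:
    "-c", "c:",                 # switch to that drive
    "-c", "WS.EXE",             # run the vintage executable
])
```

DOSBox’s `-c` flag runs each command at startup in order, so the whole thing is scriptable end to end.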
Old family videos? Old government data?
It’s not just for hobbyists.
You want to make some money? Start manufacturing microfiche readers. There was a brief time in the 20th century when microfilm and microfiche were all the rage for archiving and even publishing technical documents. Now there’s a lot of data people need for various reasons and no device to retrieve it on, because the readers all got put in a room in the back of a library and kicked in when someone backed into the room carrying a heavy box.
That’s for future people to figure out.
Aren’t most of those emulatable in DOSBox or similar programs?
Assuming the software isn’t lost, then yeah, typically it can be emulated or reverse-engineered to work.
The bigger hurdle is the hardware, especially if the encoding of the data was proprietary: even if you could get a raw read without the original hardware, you’d still need to figure out how to decode it into useful data.
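As a toy illustration of that decoding step, here’s a minimal sketch of turning an MFM (Modified Frequency Modulation) cell stream, the low-level encoding most ’80s floppies used, back into bytes. It assumes you already have a flux-level read (the kind a Greaseweazle or KryoFlux device produces) reduced to a string of cells, and it skips the sync marks and clock-drift handling a real decoder needs:

```python
def mfm_decode(cells: str) -> bytes:
    """Decode a string of '0'/'1' MFM cells into bytes.

    In MFM, each data bit occupies two cells: a clock cell followed by
    a data cell. The clock cell carries no payload (it's 1 only when
    both neighbouring data bits are 0), so decoding just keeps every
    second cell and repacks the result into bytes.
    """
    data_bits = cells[1::2]  # drop clock cells, keep data cells
    out = bytearray()
    for i in range(0, len(data_bits) - 7, 8):
        out.append(int(data_bits[i:i + 8], 2))
    return bytes(out)

# 0x41 ('A') has data bits 01000001, which MFM encodes (assuming the
# preceding data bit was 0) as the cell stream below.
print(mfm_decode("1001001010101001"))  # b'A'
```

With a truly proprietary format you wouldn’t even know the cell rules above; you’d be reverse-engineering them from the raw flux first.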
How do you emulate reading from a physical medium?