It is hard to imagine, but true: there was a time when storage and memory for computers were extremely expensive (roughly a thousand times more expensive than today), which is why computer programs at the beginning of the Digital Age were usually written to consume as little memory as possible. (At the same time, far less computing power was available, which is why programs written in older languages such as Assembler were much more efficient than programs in today's high-level languages like Java or C#.)
Due to these limitations, years were not stored in data sets with four digits (e.g. 1976), but with only two digits (e.g. 76). The software architects and programmers of the 1950s and 1960s could hardly have imagined that their programs would still be in use at the turn of the millennium - but in fact, countless computer programs developed during the early days of the computer revolution are still running today.
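To illustrate the defect, here is a minimal, hypothetical sketch (not code from any actual legacy system): a program that stores only the last two digits of the year produces nonsensical results as soon as "00" has to mean 2000 rather than 1900.

```python
# Minimal sketch of the Y2K defect: years stored with two digits only.
# Purely illustrative; the function name and values are hypothetical.

def age_in_years(birth_year_2d: int, current_year_2d: int) -> int:
    """Compute an age from two-digit years, as many legacy programs did."""
    return current_year_2d - birth_year_2d

print(age_in_years(76, 99))  # 23  -- correct while both years fall in the 1900s
print(age_in_years(76, 0))   # -76 -- nonsense once "00" stands for the year 2000
```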
For many computer programs it was unclear how they would behave at the turn of the millennium. Even more surprisingly, it was not transparent at all which programs (and which devices) processed dates in the first place. Measures to deal with this "Y2K problem" therefore initially consisted of extensive surveys and test procedures to determine where code needed to be adapted. This alone caused considerable costs. In a second step, the programming code itself had to be adapted; the prerequisite (and challenge) for this task was to find developers with know-how in the programming languages of these "legacy" applications (e.g. Assembler, Cobol).
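One remediation technique commonly applied in such code adaptations (it is not named in the text above, so this is an assumption about how a fix could look) is "date windowing": two-digit years are mapped to four-digit years relative to a pivot value, instead of widening every stored record to four digits. A minimal sketch, with the pivot of 50 chosen arbitrarily for illustration:

```python
PIVOT = 50  # assumed window: 00-49 are read as 2000-2049, 50-99 as 1950-1999

def expand_year(two_digit_year: int, pivot: int = PIVOT) -> int:
    """Map a stored two-digit year to a four-digit year using a fixed window."""
    if not 0 <= two_digit_year <= 99:
        raise ValueError("expected a two-digit year")
    return (2000 if two_digit_year < pivot else 1900) + two_digit_year

print(expand_year(76))  # 1976
print(expand_year(3))   # 2003
```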
As those who experienced the millennium change will remember, the management of the Y2K problem was successful: by and large, no major disruptions occurred.