Colorado — like most states and territories across the country — is experiencing record unemployment numbers. But the state’s unemployment system is built on aging software running on a decades-old coding language known as COBOL. Over the years, COBOL programmers have aged out of the workforce, forcing states to scramble for fluent coders in times of national crisis.
A survey by The Verge found that at least 12 states still use COBOL in some capacity in their unemployment systems. Alaska, Connecticut, California, Iowa, Kansas, and Rhode Island all run on the aging language. According to a spokesperson from the Colorado Department of Labor and Employment, the state was actually only a month or two away from “migrating into a new environment and away from COBOL,” before the COVID-19 pandemic hit.
Are you one of the 17 million people already laid off in the US, losing what little health insurance you had in the process, and now you can’t even apply for unemployment assistance because some baby boomer coded the damn system in COBOL? Time to pull yourself up by the bootstraps and learn the wonders of COBOL!
India needs to teach COBOL to its H-1Bs getting ready to come here.
Y2K reappears 20 years later.
Condemning how things were done many years earlier is disingenuous, though. They have had 3-4 DECADES to update the code, but then if they had spent taxpayer money on it, you would be screaming that they shouldn’t waste money updating a fully working system.
Now the next problem is whether they can keypunch Hollerith cards for batch processing.
They’ve spent taxpayer money. They’ve bought new mainframes to run their COBOL code faster, which is the entire value proposition for a mainframe. The code doesn’t change; it just runs faster.
It’s amazing really.
Flatland_Spider,
It’s not clear to me what exactly the holdup is. Sure, I get that the systems are overloaded, but even with no modifications to the software, I don’t understand why they couldn’t just buy faster hardware. 16M is a lot of unemployed people, but it shouldn’t be too many records for a modern mainframe to process.
IBM mainframes are very expensive, but is this all about price tag or is there some other reason they can’t handle the workload?
Presuming the state already has the fastest mainframes and they are still overloaded (for whatever reason), they could divide the data in half, quarters, eighths, or whatever, until the software is able to handle the load. Obviously this ramps up the hardware costs, but it scales pretty darn well. This is what modern databases call sharding.
https://en.wikipedia.org/wiki/Shard_(database_architecture)
Normally in a modern database this is done under the hood, but there’s no reason this can’t be done manually. Mainframe #1 will process cases for last names A-L and mainframe #2 will process cases for last names M-Z. This way each mainframe (running the same code) only has to process 8M cases instead of 16M.
The price tag for hardware would clearly go up, but the software would likely continue to function as is.
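To make that split concrete, here’s a minimal sketch of the routing step in free-format COBOL (it should compile under GnuCOBOL with “cobc -x -free route-claim.cob”). The field names and the A-L / M-Z cut are just my illustrative assumptions, not anything from a real state system:

    *> route-claim.cob -- toy manual sharding: route a claim to one of
    *> two machines by the first letter of the last name.
    IDENTIFICATION DIVISION.
    PROGRAM-ID. ROUTE-CLAIM.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    01  CLAIMANT-LAST-NAME  PIC X(30) VALUE "MARTINEZ".
    01  TARGET-MACHINE      PIC X(12).
    PROCEDURE DIVISION.
        IF CLAIMANT-LAST-NAME (1:1) >= "A"
           AND CLAIMANT-LAST-NAME (1:1) <= "L"
            MOVE "MAINFRAME-1" TO TARGET-MACHINE
        ELSE
            MOVE "MAINFRAME-2" TO TARGET-MACHINE
        END-IF
        DISPLAY CLAIMANT-LAST-NAME " -> " TARGET-MACHINE
        STOP RUN.

Both machines run identical code; each just sees its half of the records.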
I’m not sure. I don’t have any insight into these systems. I know the airline industry heavily relies on mainframes, and mainframe deployment projects weren’t quick.
I’m guessing the problem has something to do with this being an unprecedented event in the last 70 years and most systems were designed within the last 50 years. When looking at the data to spec the system, this would be considered an outlier and discarded. This is so far out of the norm that no one would have gotten the budget approved to build a system to cover it, or if they did build something to handle this, it would have gotten dismantled years ago due to cost cutting.
This is definitely an area where “bursting to the cloud” would be helpful.
Flatland_Spider,
This seems to be the case, but I’m telling myself that surely today they’d look past the price of upgrading, no? I would think that the older the hardware was to begin with, the greater the gains to be had by upgrading now. /speculation
The funny thing about “the cloud” is that if you’ve designed your software to be scalable, you don’t need the cloud to scale up. “The cloud” is really more of a cost-saving measure whose primary advantage is that it lets you *scale down*. That’s ironic, but hopefully insightful. Consider that you can easily buy/rent as much dedicated hardware as you need to handle peak traffic loads, but outside of “the cloud” you’ll be paying 24/7 for way more capacity than you probably need or have a use for.
If it’s licensing-imposed limits, sure. Call IBM and get a license code to unlock more resources. Mainframes are fully packed from the factory, and resources are limited via software licenses.
If it’s a hardware limit, the new stuff will probably be online well after this is over. Installing a mainframe isn’t like racking a Dell PowerEdge. It’s more like commissioning a new aircraft carrier.
Sort of. If you have the money and/or space to keep extra servers around, do it. Owning servers is cheaper in the long run. Otherwise, the option to spin up servers in AWS, DO, Vultr, Google Cloud, or wherever, to help for a few hours is nice.
Whether it makes any sense to look at something like that also depends heavily on the workload.
Flatland_Spider,
There’s no reason that installing a mainframe HAS to be such an ordeal; that’s more to do with IBM’s marketing and self-imposed restrictions. These are excellent reasons NOT to have a mainframe, but I get your point anyway.
That’s kind of my point: if you don’t need 24/7 operation, you’d end up with more capacity than you need using your own equipment. And if you do need 24/7 operation, I agree you can do a lot better than AWS on price.
I’ve worked on AWS, as I’m sure many of us have. The way people talk about it, you’d think it was magical, haha. But once you work on it, it’s just another data center that happens to have Amazon’s name on it.
Talking about scalability goes back to something I frequently gripe about, which is lousy performance in modern software. We’re often telling software developers that optimization isn’t important because hardware is cheaper than developers. This CAN be true sometimes, but when we allow this attitude to go unchecked it can lead to egregious inefficiency. I wrote a batch import process for WordPress that took less than 1/30 the CPU time of the official one. Clearly many users will tolerate the slow path and resort to the time-honored advice of “get better hardware,” but sometimes just a little more optimization goes a lot further toward improving performance than new hardware can. I don’t feel enough people appreciate the true value of optimization, at least not until circumstances are desperate.
> This is definitely an area where “bursting to the cloud” would be helpful.

Dude, we are talking about *REAL SOFTWARE* here, not some *RINKY-DINK WEB APP*!
That’s why it’s written in COBOL.
Grace Hopper’s revenge.
It’s more that it’s incredibly hard to get exposure to COBOL without getting hired to write COBOL. I’ve looked around, and there isn’t much about how to learn COBOL and the related systems.
You can use GnuCOBOL and pick up any COBOL book to learn COBOL. If you want the (almost) real mainframe experience, you can fire up a virtual mainframe on an emulator such as Hercules (http://www.hercules-390.org/).
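For anyone curious what the on-ramp looks like, a first program really is this small. A minimal free-format sketch (compile with “cobc -x -free hello.cob” and run “./hello”):

    IDENTIFICATION DIVISION.
    PROGRAM-ID. HELLO.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    01  WEEK-NUMBER  PIC 99.
    PROCEDURE DIVISION.
        *> Print a line for each of three benefit weeks.
        PERFORM VARYING WEEK-NUMBER FROM 1 BY 1 UNTIL WEEK-NUMBER > 3
            DISPLAY "PROCESSING BENEFITS WEEK " WEEK-NUMBER
        END-PERFORM
        STOP RUN.

Most books use the classic fixed columns; GnuCOBOL accepts both formats.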
Maybe here: https://github.com/openmainframeproject/cobol-programming-course
Learn COBOL? No thanks, I’m not willing to sacrifice my sanity.
Andre,
COBOL itself isn’t complex; in fact it’s pretty simple once you get the basics down. The real challenge is always acclimating to the datasets, and to potentially lots of code, to understand both the big picture and the minute details. That takes a lot more time than learning the language, and it’s the same with any project. That said, working on mainframes is a bit different because the workflow is unique: the program runs after every 3270 screen is submitted and then terminates immediately after processing it. No state is kept around except whatever the program explicitly saved somewhere (often VSAM files); it’s a lot like the Unix CGI model.
I worked with a lot of mainframe guys, but I wasn’t a mainframe developer myself.
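To illustrate that terminate-after-every-screen model, here’s a rough GnuCOBOL sketch of the pattern, with an indexed file standing in for VSAM (the file name, record layout, and claim ID are all made up): the program does its one unit of work, persists it, and exits holding nothing in memory.

    IDENTIFICATION DIVISION.
    PROGRAM-ID. SAVE-CLAIM-STATE.
    ENVIRONMENT DIVISION.
    INPUT-OUTPUT SECTION.
    FILE-CONTROL.
        *> OPTIONAL lets OPEN I-O create the file on the first run.
        SELECT OPTIONAL CLAIM-FILE ASSIGN TO "claims.dat"
            ORGANIZATION IS INDEXED
            ACCESS MODE IS RANDOM
            RECORD KEY IS CLAIM-ID.
    DATA DIVISION.
    FILE SECTION.
    FD  CLAIM-FILE.
    01  CLAIM-REC.
        05  CLAIM-ID      PIC 9(9).
        05  CLAIM-STATUS  PIC X(10).
    PROCEDURE DIVISION.
        OPEN I-O CLAIM-FILE
        MOVE 123456789 TO CLAIM-ID
        MOVE "PENDING" TO CLAIM-STATUS
        *> New key: add the record; existing key: update it instead.
        WRITE CLAIM-REC
            INVALID KEY REWRITE CLAIM-REC
        END-WRITE
        CLOSE CLAIM-FILE
        STOP RUN.

The next screen would start a fresh run and re-read whatever it needs.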
During my second semester as an undergrad CS student, a considerably more-experienced student said this to me about COBOL, and it has stuck with me ever since:
“If you choose your names carefully, a well-written COBOL program can read like a novel.”
I found out later that COBOL was designed in part to allow upper management (who weren’t programmers) to understand, maybe even verify, what their coders were putting together to run on the company’s expensive equipment.
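For anyone who hasn’t seen it, level-88 condition names are a big part of that readability. A made-up fragment (my names, not from any real system):

    IDENTIFICATION DIVISION.
    PROGRAM-ID. NOVEL.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    01  CLAIM-STATUS-CODE        PIC X VALUE "A".
        *> Level-88 names let conditions read as plain English.
        88  CLAIM-IS-APPROVED    VALUE "A".
        88  CLAIM-IS-DENIED      VALUE "D".
    01  WEEKLY-BENEFIT-AMOUNT    PIC 9(4)V99 VALUE 618.00.
    01  TOTAL-PAID-THIS-QUARTER  PIC 9(7)V99 VALUE 0.
    PROCEDURE DIVISION.
        IF CLAIM-IS-APPROVED
            ADD WEEKLY-BENEFIT-AMOUNT TO TOTAL-PAID-THIS-QUARTER
            DISPLAY "PAYMENT SCHEDULED"
        ELSE
            DISPLAY "DENIAL LETTER QUEUED"
        END-IF
        STOP RUN.

A manager who has never programmed can still follow “IF CLAIM-IS-APPROVED, ADD WEEKLY-BENEFIT-AMOUNT TO TOTAL-PAID-THIS-QUARTER.”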
gus3,
I’ve heard that too. In practice these days I don’t see that happening too much though, haha
*COBOL itself isn’t complex; in fact it’s pretty simple once you get the basics down. The real challenge is always acclimating to the datasets, and to potentially lots of code, to understand both the big picture and the minute details.*

Yeah. Learning COBOL was rather simple. Just like FORTH was, once you understood what you wanted to do.
https://github.com/georgejs/Cobol2PY Just saying…
Cool, and does it deal with CICS integration, or the other mainframe COBOL features that are the actual issue with finding COBOL programmers, rather than something trivial like syntax?
jockm,
It doesn’t look like it does much at all.
Searches do turn up mainframe emulation tech for CICS/IMS/JCL/etc., but it looks expensive, and personally I don’t think I’d ever go down that route when I already have *nix. If I had mainframe software that I desperately needed to run, maybe an emulator would make sense then? I’m curious whether one could get better performance out of a commodity x86 server emulating one or more mainframes than out of authentic mainframe hardware. Does anybody know?
If I had access to a mainframe now, I’d be very tempted to benchmark it! I’ve dealt with some COBOL code, especially in terms of mapping out copybooks, VSAM files, screen maps, etc., which was very useful for external integration; that was my primary role in dealing with mainframes. I never had my own mainframe environment to play around on, so I never got proficient with things like JCL. I did work with CICS integration, though, and I ported a single C program to the mainframe under the guidance of a coworker, which was my one and only program to run on the mainframe, haha.
It’s a starting point; then you work on maintaining/improving the Python version. I bet there are commercial equivalents of this tool, with better “CICS integration and stuff.” But anyway, who am I to give advice? If taxpayers don’t feel cheated about not receiving their unemployment checks, it’s not my problem.
And they call themselves programmers?
A lot of languages I have learnt on the job… Your next project is X; you will be doing it in Y, which you have never used before.
For proper programmers this is not usually a problem. Idiots that just learnt one thing, maybe (I guess they are all you get these days, looking at the quality of some commercial programs).
Oh, and for the record, COBOL is not one of the tough ones. It is fairly simple (as it was designed to be user-friendly, after all!), even if it is the really old 80-column punched-card-compatible format version.
If it was Forth and that sort of thing was new, I might give it a few days to get used to it. If it was Lisp, maybe they’d have an actual complaint and need a month.
I mean, it does not even have to be good code, just working; this is business after all. Where have all the programmers gone?
Carrot007,
I agree, with caveats… It won’t take much time to learn COBOL; however, there’s a lot more to learn about working with the mainframe. It isn’t trivial; it takes time, and you will make mistakes. Someone with no experience would not be qualified to start a mainframe project without training (unless they explicitly had the luxury of a mentor to help as they went along).
They got chased away by the Windows 8 and Gnome 3 developers….
“…because some baby boomer coded the damn system in COBOL.” Pray tell, what were they supposed to code it in? Java, C#, or some other language THAT DIDN’T EXIST AT THE TIME? You know, if you’re gonna sneer, it helps to at least have some awareness of why things are the way they are. Just be grateful it wasn’t coded in PL/1.
It’s awful!
I remember an old guy who had worked on mainframe systems that ran financial transactions. New, inexperienced guys would always come in and want to rewrite those systems in (back then) Java or Python instead of spending the time to understand what the decades-old code was doing; often they were programmers without a sound understanding of floating-point vs. fixed-point mathematics. I’m sure they’d be crying for JavaScript or Elixir today.
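The floating-point vs. fixed-point trap is easy to demonstrate. A toy GnuCOBOL sketch of my own: add a dime a thousand times into a binary float (COMP-2) and into a packed-decimal money field, and only one of them lands on exactly 100.00:

    IDENTIFICATION DIVISION.
    PROGRAM-ID. PENNIES.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    01  FLOAT-TOTAL  USAGE COMP-2 VALUE 0.        *> binary floating point
    01  MONEY-TOTAL  PIC 9(7)V99 COMP-3 VALUE 0.  *> packed decimal, exact
    01  I            PIC 9(5).
    PROCEDURE DIVISION.
        PERFORM VARYING I FROM 1 BY 1 UNTIL I > 1000
            COMPUTE FLOAT-TOTAL = FLOAT-TOTAL + 0.10
            ADD 0.10 TO MONEY-TOTAL
        END-PERFORM
        DISPLAY "FLOAT: " FLOAT-TOTAL   *> drifts a hair off 100
        DISPLAY "MONEY: " MONEY-TOTAL   *> exactly 100.00
        STOP RUN.

That drift is why the old financial code sticks to decimal types.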
Joshua Clayton,
To be fair though, it’s not just inexperienced guys who want to replace legacy systems. While it’s true that it would cost money to redevelop something, there’s more to consider. Legacy systems, and particularly mainframes, can be extremely expensive to maintain.
I think the owners of most mainframes are thinking “why fix what already works,” which is a valid point, but the reality is that commodity servers, databases, operating systems, etc. have received a lot more innovation and cost reduction over the years. And due to marginalization, it has become difficult to replace mainframe administrators. Schools can train new workers, but most people paying for college degrees don’t want to be pigeonholed into maintaining legacy systems, which is what most mainframe work is.
It may make sense to keep the old system around; however, one should be considering all the extremely compelling options for new data center applications. AMD scores pretty high these days.
It’s true that people and even professionals may lack the experience to understand the significance of some old features, but that’s true of any domain.
This sort of reminds me of a discussion about BCD numbers on mainframes and early processors.
http://www.osnews.com/story/30272/a-constructive-look-at-the-atari-2600-basic-cartridge/#comments
BCD was used on mainframes for the benefit of humans, and to this day we still see mainframe datasets using BCD conventions that would be considered obsolete for anything written in the past several decades.
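For the curious, the convention lives on as COMP-3 (packed decimal): each digit occupies a 4-bit nibble and the final nibble holds the sign, so a human reading a hex dump sees the number directly. A tiny sketch with made-up values:

    IDENTIFICATION DIVISION.
    PROGRAM-ID. PACKED.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    *> S9(5) COMP-3 occupies 3 bytes: five digit nibbles plus a sign
    *> nibble, so +1234 is stored as hex 01 23 4C.
    01  PACKED-AMOUNT  PIC S9(5) COMP-3 VALUE +1234.
    PROCEDURE DIVISION.
        DISPLAY "AMOUNT: " PACKED-AMOUNT
        STOP RUN.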