Open source: big benefits, big flaws
Given the dominance of open source in the IT marketplace, any significant debate over its value might be considered moot.
As Eric Cowperthwaite, vice president, advanced security and strategy at Core Security, said recently, “Open-source code has conquered the world.”
Indeed, its advantages are multiple, compelling and well known. Among the most compelling are that it is free, it is open to everybody, users can customise it to fit their needs and there is a community of thousands – perhaps millions – of eyes on the code to spot bugs or flaws so they can be fixed quickly, before they are exploited by cybercriminals.
When the source code is “open to the world, you are going to have multiple eyes viewing the same configuration,” said Andrew Ostashen, security engineer at Redspin, “so if issues arise, the owners will be able to remediate faster.”
Not perfect
Still, world conqueror or not, a number of security and legal experts, while they agree in general with Ostashen and are not issuing blanket condemnations of open source, continue to warn both organisations and individual users that it is not perfect, or even the right fit for everybody.
It is critical, they say, to be aware that some of the characteristics that make it so attractive also make it risky. Obviously, if the flaws in code are exposed for all to see, criminals can see them as well. And even millions of eyes on open-source code is not a guarantee that every flaw will be found and fixed.
“There have been claims that open-source software is inherently more secure due to the openness and the ‘millions of eyes that can review the source code’,” said Rafal Los, director of solutions research at Accuvant. “This was thoroughly debunked by bugs like Heartbleed and others.”
Indeed, Kevin McAleavey, cofounder and chief architect of the KNOS Project, somewhat sardonically refers to it as “open sores.”
Open sores
“Open source publishes the source code, and many eyes claim to review it, thus exposing any possible bad code,” he said. “And yet … Heartbleed. The defective code was right there for those ‘many eyes’ to spot since its release in February 2012, yet nobody spotted it until more than two years later, after the exploits had become overwhelming.”
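The Heartbleed class of bug McAleavey describes is, at its core, a missing bounds check: the server echoed back as many bytes as a length field supplied by the peer claimed, rather than as many as the peer actually sent. The sketch below illustrates the pattern in simplified form; the function and parameter names are hypothetical, not OpenSSL’s real API.

```c
#include <stddef.h>
#include <string.h>

/* Simplified illustration of a Heartbleed-style flaw. The vulnerable
 * code trusted `claimed_len` (attacker-controlled) and copied that many
 * bytes, leaking adjacent heap memory when claimed_len > actual_len.
 * The fix is the single bounds check below. */
int echo_heartbeat(const unsigned char *payload, size_t claimed_len,
                   size_t actual_len, unsigned char *out)
{
    if (claimed_len > actual_len)   /* the check the original code lacked */
        return -1;                  /* reject the malformed request */
    memcpy(out, payload, claimed_len);
    return 0;
}
```

The point of the example is how small the defect is: one comparison, sitting in plain view of every reviewer for more than two years.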
Another example he and others cite is a certificate-verification flaw in GnuTLS, which dates back to 2005 but was discovered only last year.
“Again, nobody ever spotted that one either until after exploits were piling up like cordwood,” McAleavey said. “There was also the ‘Shellshock’ exploit in the BASH shell, which similarly was published and seen by many eyes, and dates back to version 1.03, released in 1989.”
That is because having millions of eyes on the code does not mean all of those eyes are qualified to spot flaws.
Qualified eyes
“Just because you have a critical mass of people reviewing the code, are they qualified to do so?” asked Aaron Tantleff, a partner at Foley & Lardner. “There are no credentials to speak of, and no certification that can be given to code reviewed by the open-source community.”
That is McAleavey’s view as well. “Just because the source code is there doesn’t mean that all of those eyeballs understand what the code actually does, or does incorrectly,” he said.
And even if flaws are spotted and patches created, that does not guarantee they will be installed in every device or system that could be affected.
Tantleff said recent history is proof. “One need not look back very far to find examples of the risk of open source in one’s environment,” he said. “Park ‘n Fly and OneStopParking.com suffered attacks that exploited a security vulnerability in the open-source Joomla content management platform.
“A security patch had been issued well before the attack, but unfortunately the patch was never installed,” he said.
McAleavey, who said he started working with Linux, one of the most popular open-source operating systems, when it came on the scene more than 20 years ago, said this problem exists largely because open source tends to exist as “two separate entities.”
Kernel team
In the case of Linux, “there is the ‘kernel team,’ which is the primary operating system itself, and then there are ‘application maintainers,’” he said.
“Any changes to the Linux kernel itself still have to be approved by Linux creator Linus Torvalds personally or through one of his handful of trusted kernel maintainers. They, and only they, determine what happens to the core kernel OS itself,” he said.
“But they have no interest whatsoever in what happens among the literally thousands of other open-source developers who each maintain a single application or ‘package’ within the various ‘distros,’ or distributions, of Linux. They’re pretty much on their own.”
That, he said, has led to “absolute anarchy in userland. And that’s not good for stability or security. No one is in charge.”
Los said closed-source software is “just as susceptible to being ‘abandoned’ as open source,” but noted that the incentive to maintain and update commercial or proprietary software is there “if the vendor truly cares for their product quality.”
But, like McAleavey, Los said open-source components used in commercial applications “are a massive problem, primarily because they’re forgotten. Take, for instance, the OpenSSL library and the issues that popped up when a series of major flaws were discovered in it. Open-source and commercial software alike fell victim to the dire need to patch, but where OpenSSL was used in commercial applications, many of the end users simply weren’t aware that it was there and so didn’t know it needed to be patched.”