In the last year a number of organisations have begun offering rewards, or ‘bounty’ programs, for discovering and reporting bugs in applications. Mozilla currently offers up to $3,000 for identifying critical or high-severity bugs, Google pays out $1,337 for flaws in its software, and Deutsche Post is currently sifting through applications from ‘ethical’ hackers to approve teams who will go head to head and compete for its Security Cup in October. The winning team can hold the trophy aloft if it finds vulnerabilities in the company’s new online secure messaging service (that’s comforting for current users). So, are these incentives the best way to make sure your applications are secure? At my company, Idappcom, we’d argue that these sorts of schemes are nothing short of a publicity stunt and, in fact, can be potentially dangerous to an end user’s security.
One concern is that inviting hackers to trawl all over a new application prior to launch just grants them more time to interrogate it and identify weaknesses, which they may decide are more valuable if kept to themselves. Once the first big announcement is made detailing who has purchased the application, and where and when the product is to go live, a hacker can use this insight to breach the system and steal the corporate jewels.
A further worry is that, while on the surface these companies appear open and honest, if a serious security flaw were identified, would they raise the alarm and warn people? It’s my belief that they’d fix it quietly, release a patch and hope no one hears about it. The hacker could happily claim the reward, swear a vow of silence and then sell the details on the black market, leaving any user who is waiting for the patch, or who fails to install the update, with a great big hole in their defences just waiting to be exploited.
Sometimes it isn’t even a flaw in the software that causes problems. If an attack launched against the application causes it to fail and reboot, that denial of service (DoS) can be just as costly to your organisation as a breach in which data is stolen.
A final word of warning: even if an application isn’t hacked today, that doesn’t mean nobody will breach it tomorrow. Windows Vista is one such example. Microsoft originally hailed it as the most secure operating system it had ever made, and we all know what happened next.
A proactive approach to security
IT is never infallible, and for this reason penetration testing is often heralded as the hero of the hour. That said, technology has moved on and, while still valid in certain circumstances, traditional penetration testing techniques are often limited in their effectiveness. Let me explain: a traditional test is executed from outside the network perimeter, with the tester seeking applications to attack. However, because these assaults all come from a single IP address, intelligent security software will recognise the behaviour, since the IP never changes. Within the first two or three attempts the source address is blacklisted or firewalled, and all subsequent traffic is immaterial because every activity from it is seen and treated as malicious.
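To make that limitation concrete, here is a minimal sketch, in Python, of the kind of rate-based source blacklisting described above. The three-attempt threshold, the event format and the example address are illustrative assumptions, not any particular vendor’s behaviour:

```python
# Minimal sketch: blacklist a source IP after repeated suspicious probes.
# The threshold and event format are illustrative assumptions.
from collections import defaultdict

BLACKLIST_THRESHOLD = 3           # block a source after this many suspicious probes
suspicious_counts = defaultdict(int)
blacklist = set()

def handle_probe(src_ip: str, looks_malicious: bool) -> bool:
    """Return True if traffic from src_ip should be dropped."""
    if src_ip in blacklist:
        return True                    # already treated as hostile
    if looks_malicious:
        suspicious_counts[src_ip] += 1
        if suspicious_counts[src_ip] >= BLACKLIST_THRESHOLD:
            blacklist.add(src_ip)      # all subsequent traffic is ignored
            return True
    return False

# A traditional single-IP pentest trips the threshold almost immediately:
for attempt in range(1, 6):
    dropped = handle_probe("203.0.113.7", looks_malicious=True)
    print(f"attempt {attempt}: dropped={dropped}")
```

After the third attempt every probe from that address is dropped, which is exactly why a tester confined to a single source IP ends up seeing so little.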
An intelligent proactive approach to security
There isn’t one single piece of advice that is the answer to all your prayers. Instead you need two measures, and both need to be applied simultaneously if your network is to perform in perfect harmony:
- Application testing combined with intrusion detection
The reason I advocate application testing is that if a public-facing application were compromised, the financial impact on the organisation could be fatal. There are technologies available that can test your device or application with a barrage of millions upon millions of iterations, using different broken or mutated protocols and techniques, in an effort to crash the system. If a hacker were to do this and caused it to fall over or reboot, the resulting denial of service could be at best embarrassing and at worst detrimental to your organisation.
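As an illustration of the technique, the sketch below shows mutation fuzzing in miniature: take a known-valid protocol message, corrupt a few bytes at random, and watch for the target to stop responding. The seed message, the localhost target on port 9999 and the byte-flip strategy are all hypothetical examples; a real campaign would run far more iterations with far smarter mutations:

```python
# Minimal mutation-fuzzing sketch. The target service, port and seed
# message are hypothetical; commercial tools are vastly more thorough.
import random
import socket

SEED = b"HELLO user@example.com\r\n"   # a known-valid protocol message

def mutate(message: bytes) -> bytes:
    """Flip a few random bytes to produce a broken/mutated message."""
    data = bytearray(message)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def target_survives(payload: bytes) -> bool:
    """Return True if the target still responds, False if it fell over."""
    try:
        with socket.create_connection(("127.0.0.1", 9999), timeout=2) as s:
            s.sendall(payload)
            s.recv(1024)
        return True
    except OSError:
        return False

for i in range(100_000):               # real campaigns run millions of cases
    case = mutate(SEED)
    if not target_survives(case):
        print(f"iteration {i}: target unresponsive (possible DoS), input={case!r}")
        break
```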
Intrusion detection, capable of spotting zero-day exploits, must be deployed to audit and test the recognition and response capabilities of your corporate security defences. It will substantiate not only that the network security is deployed and configured correctly, but that it’s capable of protecting the application you’re about to make live, or have already launched, irrespective of the service it supports, be it email, a web service or anything else. The device looks for characteristics in behaviour to determine whether an incoming request to the product or service is likely to be good and valid, or indicative of malicious activity. This provides not only reassurance but all-important proof that the network security is capable of identifying and mitigating the latest threats and security evasion techniques.
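In the same spirit, a behaviour-based check boils down to scoring each incoming request against traits of known-bad traffic. The toy classifier below, with a handful of illustrative patterns (nowhere near a real IDS ruleset), shows the shape of the idea:

```python
# Toy behaviour-based request screening. The patterns and size limit are
# a tiny illustrative sample, not a production IDS ruleset.
import re

EVASION_PATTERNS = [
    re.compile(rb"\.\./"),                 # path traversal
    re.compile(rb"%2e%2e%2f", re.I),       # the same, URL-encoded
    re.compile(rb"\x00"),                  # embedded null byte
    re.compile(rb"(?i)union\s+select"),    # SQL-injection probe
]
MAX_REASONABLE_REQUEST = 2048              # oversized requests often signal buffer attacks

def classify(raw_request: bytes) -> str:
    """Label an incoming request as valid-looking or malicious-looking."""
    if len(raw_request) > MAX_REASONABLE_REQUEST:
        return "malicious-looking (oversized request)"
    for pattern in EVASION_PATTERNS:
        if pattern.search(raw_request):
            return "malicious-looking (matched evasion pattern)"
    return "valid-looking"

print(classify(b"GET /index.html HTTP/1.1"))        # valid-looking
print(classify(b"GET /../../etc/passwd HTTP/1.1"))  # malicious-looking
```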
While we wait with bated breath to see who will lift Deutsche Post’s Security Cup, we mustn’t lose sight of our own challenges. My best advice is that, instead of waiting for the outcome and relying on others to keep you informed of vulnerabilities in your applications, you should regularly inspect your defences to make sure they’re standing strong with no chinks. If you don’t, the bounty may as well be on your head.
About the Author
Haywood’s computing history began with writing programs for the Sinclair ZX80 and the Texas Instruments TI99/4A. During the early 1990s Haywood worked for Microsoft in a cluster four team supporting its emerging products, then at internet technologies firm NetManage and at Internet Security Systems (ISS).
Leaving ISS in 2002, Haywood founded his first network security company, Blade Software, pioneering the development of the ground-breaking “stack-less” network security assessment and auditing technology. It was this technology that became the foundation for the company’s IDS and Firewall Informer products, winning the coveted Secure Computing “Pick of Product” award for two years running, with a full five-star rating in every category.
In 2004, Haywood founded his second network security company, “Karalon”. It was during this time that he developed a new network-based security auditing and assessment technology with the aim of providing a system and methodology for auditing the capabilities of network-based security devices, with the ability to apply “security rules” to fine-tune intrusion detection and prevention systems.
2009 saw Haywood join forces with Idappcom Ltd. He is currently the company’s Chief Technology Officer, guiding its future development of advanced network-based security auditing and testing technologies as well as assisting organisations to achieve the highest levels of network threat detection and mitigation.
Why would I be interested in half a page of author’s biography…
I have no problem with the bio at the end of the piece. In this age of online anonymity, it is all too easy for people to hide behind nicknames such as “jimmy1971”. (And when people wear masks, it’s that much easier to engage in pointless flame wars, which really are the online equivalent of mob activity.) Kudos to those who put transparency first. It takes courage to put something out there under your own name and open yourself up to whatever criticism comes your way.
My main concern is with the author’s reference to his employer. These days companies tend to have strict policies on their underlings referencing their company in newsgroup postings. Therefore, this article leads me to believe that Idappcom has vetted this article, and potentially has encouraged and/or paid the author to write and publish it. Furthermore, that would make this article an “advertorial”.
If the author was simply writing his own opinion, there would be no need to start a sentence with “At my company, Idappcom, we’d argue that …”. I don’t care what his employer thinks, and I don’t expect him to care what mine thinks. The article should be about what *he* thinks.
While on the surface this article isn’t selling Idappcom products and services, it nevertheless reminds me of that “chip shop” Kroc spoke of quite a while back on a podcast, where Coca-Cola had branded the menus and signage.
I hope this isn’t the future of OSAlert.
Although OSAlert seems to think nobody’s interested in the alternative OS scene, I for one would much rather read that than corporate-approved, anti-FOSS tripe.
I also thought the bio was excessive, and I actually edited it down a little bit. The reason I allowed it to be as long as it ended up was because you can make the case that if you’re going to opine on IT security, it’s okay to state your authority to speak on the matter.
And yes, I’m certain that this author’s employer was willing to let him write this story on company time in order to get their name out there. You’re right that it might have come off a bit too “advertorial”, and I’ll take that as constructive criticism. I’ll make a point of editing further articles that come off this way a little more heavily, to tone down the pimping.
Thank you for taking my comments in the constructive spirit in which they were intended. As a regular reader I certainly appreciate the good job you folks do. The fact that I spoke up via the comments is merely a sign that this site is something I care about.
“One concern is that inviting hackers to trawl all over a new application prior to launch just grants them more time to interrogate it and identify weakness”
This seems to be suggesting criminal intent by Hackers which, contrary to mass-media brainwashing, is the furthest thing from the truth. If customer security is your primary goal, then what you want is very much to invite Hackers to audit, interrogate, test and identify weaknesses. What you don’t want is to dismiss Hackers as criminals, thus opening yourself up to the risk of real criminals finding the vulnerabilities before your limited developer team can.
Criminals are a constant threat. They will try to break in regardless of whether it’s legal, so long as the potential financial gain outweighs the risk of being caught. Hackers tend not to go beyond what the law allows; they won’t try to break into your computer systems without prior permission from you. If someone is trying to break in without permission, they’re not a Hacker, they’re simply a criminal. If what you mean is “one with criminal intent”, then use the correct term: “criminal”. If you must sensationalize the situation, then at least use “cracker”. Don’t misrepresent the majority of Hackerdom by branding them all criminals based on the actions of a very few.
Historically, FOSS has had more transparency with bug discoveries and shorter times between first report and patched update delivery. FOSS inherited the traits that produce this from Hackerdom. If, thanks to the media, the word “Hacker” scares your executives, use the word “nerds” in management meetings, but consider the very real benefits of working with Hackers rather than insulting them with suggestions of criminal intent.
Thanks, jabotts!
I think it’s very important to keep fighting the mass media’s misuse of the term “hacker”, and to defend the original spirit of the “Hacker Ethic”.
A penetration test is simply having someone with relevant skills see what they can do with your product. It’s not magic, nor is it limited to any specific test profile.
It may be someone who knows the system being tested (White Box). It may be someone who knows of the system but not its details (Grey Box). It may be someone with no more information than “attack this” (Black Box). It may be a remote test. It may be an insider test. It may be a local test with direct physical contact.
If the majority of pentests you’re seeing are remote attack tests from the outside, then that’s what you’ve been contracting them to do; you could always choose otherwise. If the intent is to pentest your network from the outside and the tester gives up after your IPS blocks that single remote IP, then they are not very good, or the scope of the test included “must test from one IP only”.
Sadly, there are also pentest services which are nothing more than running Nessus, pressing “print report” and mailing the result to you as the final deliverable. If that’s all you wanted and paid for, then fair enough; if you wanted something more in-depth than a vulnerability scan, then you should have found someone who can do more than push a button or two.
So, are these incentives the best way to make sure your applications are secure?
Yes. What’s your problem with it? At least poor Russians and Ukrainians have another opportunity to make money. I bet they send in most of the bug reports.
The link is missing from the article; in its place there’s a garbled Unicode character.
Anyway, most of the people who pentest and fuzz apps for bugs would agree that the $1k–$3k offers are low, especially from big companies like those.
See where your money goes if you donate: https://donate.mozilla.org/page/contribute/openwebfund
$75.00 – T-Shirt and dino plush toy please!
I wouldn’t say that, because I think that in large software like the programs mentioned in the article, the number of bugs may be large as well. Yes, the ratio of time spent (or personal investment) to money isn’t favourable, and I certainly wouldn’t participate in these “bug hunts”, but from the companies’ perspective it is risky. And no, I am not one of the people you are referring to.