With all the news about Anonymous, LulzSec, Anti-Sec, and so on, you’d almost forget there are more ethical hacking groups out there as well. One such group, YGN Ethical Hacker Group, informed Apple of several weaknesses in its developers website on April 25. Apple acknowledged the flaws, but so far, hasn’t done anything about them. YGN Ethical Hacker Group has now stated they will fully disclose the vulnerabilities if Apple doesn’t fix them in the coming few days.
The hacker group claims to have found three separate security flaws in Apple’s developer website – arbitrary URL redirects, cross-site scripting, and HTTP response splitting. The arbitrary URL redirects in particular are problematic, since they would make it quite easy to mount a phishing attack to obtain login credentials from Apple’s third-party developers. Developers use Apple IDs to log in, so this would give malicious folk access to developers’ iTunes accounts.
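To make the open-redirect risk concrete, here is a minimal, purely hypothetical sketch of what such a flaw typically looks like in a web handler (illustrative Python/Flask, not Apple’s actual code; the route and parameter names are invented):

```python
# Hypothetical sketch of an open redirect (not Apple's code).
from flask import Flask, request, redirect

app = Flask(__name__)

@app.route("/login/done")
def login_done():
    # The destination is taken straight from the query string, so a link like
    #   https://trusted.example/login/done?next=https://evil.example/phish
    # appears to point at the trusted site but lands on the attacker's page.
    target = request.args.get("next", "/")
    return redirect(target)
```

Because the visible part of such a link starts with the trusted domain, a victim has little reason to suspect where they will actually end up.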
YGN Ethical Hacker Group isn’t a new group – they’ve already identified similar security issues at other websites. Java.com, for instance, suffered from similar URL redirect issues, but Oracle fixed them within a week, and thanked the hacker group. They also found issues with McAfee’s website, but McAfee refused to fix anything until the hacker group went for full disclosure.
Apple was given the same two months to fix its issues, but has so far failed to do so. The issues were reported to Cupertino on April 25, and Apple confirmed it had received the information two days later. We’re two months down the line now, and nothing has been fixed, according to the hacker group. As such, they will now take the same step they took with McAfee: full disclosure.
I find this a very responsible way of dealing with hacking. I would say two months is more than enough time to fix these issues (or at least enough time to detail ongoing work to the hackers to gain an extension if the work proves to be more extensive) – at some point, the hackers must fully disclose this information to inform the public about the dangers of using, in this case, Apple’s developers website.
It will be interesting to see how Apple is going to respond to this.
http://lulzsecexposed.blogspot.com/
or are they just slightly more tech savvy journalists?
Probably journalists, or maybe suits à la HBGary, given that they clearly don’t know what real Hacking is all about.
“we never hacked anyone”, probably meaning that they’ve never broken into a system without prior permission; neither has the majority of real Hackers.
Heck, Hacking does not even apply to just computers or security. Food Hackers, Radio Hackers, Stereo Hackers, Bio Hackers.. not much computers or security going on with them but some very cool DIY stuff and deeper understanding of those topics coming out of them.
They’re the equally lame and moronic counterpoint.
Not sure that I can see how this is responsible, or ethical.
Scenario: You discover that your local bank has a dodgy lock on their front door and you tell them about it but they don’t fix it within your allowed timeframe. Do you then run an ad in the national newspaper effectively telling all the crooks who they can burgle and how to do it?
Yes? At least that’s what I would do. If they won’t fix a security concern, I will widely publicize it to force them to fix it.
And if they were burgled would you accept that you are an accessory to the crime? I’m no lawyer but I suspect that’s the way it would be viewed…
And if they were burgled, but the damage could’ve been greatly limited had you informed account holders?
That is the point at which the responsible and ethical thing to do would be to come forward and say “We told them so!” Yes the crime has been committed, but you played no active part in it. Regardless of the motives I don’t see anything ethical or responsible about actively facilitating a crime. You found the weakness, you reported it, you’ve actively tried to prevent the crime. Changing tack and becoming an active facilitator for the crime makes you no better than those who would commit the crime in the first place IMHO.
But anyway, that’s the way I view it.
Well, I guess that’s the reason I’m a gray hat instead of a white hat.
The shade of grey could very rapidly become a lot darker if you were considered an accessory to a crime tho
I’m not saying it’s right, but if we’re all honest with one another, very few companies will make security a priority until information about insecurity reaches the public.
For companies where security doesn’t drive sales, there’s little incentive to be secure except to avoid public embarrassment after the fact. Whether we like it or not, going public is an effective way to motivate companies to enhance security *immediately*.
What is the solution for the lack of motivation otherwise?
More liability? I don’t like the thought, but we can debate that.
Security regulation? I have doubts about the effectiveness of this.
A legal time frame after which security consultants are allowed to go public? I think this could work in a fair way, but it would never fly.
Let the public decide adequate security? Obviously this can only work if the public are aware of the relative security of competing companies, but it’s hopeless if companies themselves don’t even know where they stand, or they lie deliberately to customers.
What is the answer?
There are things that trump the law, and justify breaking it for moral reasons. Again, why I am a gray hat and not a white hat.
Apple’s official policy was to deny the existence of malware they were finding on customers’ computers brought in for support.
– Do not acknowledge the existence of malware
– Do not remove the malware unless specifically asked to by the customer
It was not until public disclosure brought enough pressure from the consumer base that Apple publicly admitted knowledge of the problem and took steps to address it. The business PR image was more important than customers’ safety until customer awareness threatened Apple’s future product sales.
Apple put them into a lose-lose situation ethically by not fixing the vulnerability.
a) leave others vulnerable to the possibly unethical hackers.
b) disclose the vulnerability.
They absolved themselves of any responsibility when they privately contacted Apple to let them know of the problem and gave them ample time to fix it.
Further, instead of just disclosing the vulnerability, they publicly stated their intent to disclose it without actually doing so, giving Apple further time to act.
As a last resort, the public deserves to know the details of how they are vulnerable when dealing with a specific company. Is someone held responsible for pointing out that the rat turds in their raisin bran aren’t raisins?
Standard “This is provided for informational purposes only. We assume no responsibility for how this information is used, etc.” legal disclaimer applies. Not sure it would hold up. But then again, it’s not like you are going to use your real name or make it easy for the feds to find you if you share information with a newspaper about how to rob a bank.
In these cases, the civilian public is the last to know. If Hackers (ethical) discovered the issue, you can be sure that Crackers (unethical) have also discovered it. They are not publicizing something that criminals do not already know about.
If I can see how one might break in through your back door, you can be sure that burglars have also noticed this.
…to inform the public that their money is not safe, so they can transfer it somewhere else, somewhere safe. If my bank has unsafe locks and were unwilling to do something about it, I’d rather know about it so I can secure my money.
You do realize your money isn’t actually stored in a large vault at your local bank?
If every customer of your bank came to collect their money there wouldn’t be enough, far from it.
A thousand times: YES. Time and time again, companies have shown they will not fix security issues unless they are disclosed or threatened with exposure. Security researchers are not the only ones who look for exploits; in fact, most vulnerabilities are discovered only after they have been exploited (without any public disclosure by a security researcher). Public disclosure ensures that all stakeholders have a better idea of the risks and can make better business decisions based on that, i.e. rewarding companies with good security and punishing those without it.
I know I’ve posted this a few times here already, but since the same conversation keeps coming up here it is again:
http://www.schneier.com/blog/archives/2007/01/debating_full_d.html
Apple has known since April 25th. People with criminal intent probably found this on their own and already know about it too. Apple’s customers are the last to find out about it, and they are the ones who suffer as a result of any criminals exploiting these issues.
The group discovered problems without breaking laws.
The group disclosed vulnerabilities to Apple directly so they could address them.
The group disclosing those vulnerabilities to the public, after the grace period given to Apple, allows the public to mitigate the risks or at least accept them with informed consent until Apple fixes the problems.
It is indeed ethical. Unethical would have been exploiting the vulnerabilities for criminal gain, not reporting them to Apple, or failing to report them to the public – for the responsible protection of Apple’s customers – when Apple did not address them.
Look at it this way. I build a tree-house for my kids. Someone sees that parts are coming loose; kids could fall through the floor or be hit by falling parts. They report it to me: “When I picked Jimmy up after the play date the other day, I noticed that the old tree-house needs some work.”
Two months later I’ve done nothing to address the risk of injury. “look, I’m not comfortable with Jimmy visiting to play with your kids if they are going to be in or around that tree-house.”
I still do nothing so they start telling friends who also have kids that come over to play with my kids.
One might call this responsible parenting versus allowing children to get hurt by ignoring these known problems.
The real problem is that companies like Apple have more motivation to avoid the expense of fixing the “tree-house”. It often takes public disclosure and proof-of-concept documentation to convince such companies that there is indeed a risk of their customers being hurt when they come over to play. At minimum, customers can be aware of possible injury and take steps to protect themselves.
OK, ya made me change my password; I do that every 90 days. And I will readily admit that developers are sometimes (more often than we as a body will admit) – too often – running as admin, but AGAIN, as in any unix, I will make another account, say Test2, and log in. Is it still happening? No? Well, that is unix. || Yes <infected>? Well, boot to a CD and work your recovery plan.
-=- Yes, I offer that many recent ‘Apple Developers’ are green and do not know a buffer overrun from ‘free software’ on warez sites; well, let them feed the bigger fish. I am in this MacOS thing for the long haul. And yes, I see the Mac market is low-hanging fruit (over-trusting) with deep pockets, BUT the *nix-level security (while heavily flawed) seems adequate.
As I said before, easy hacks/tricks seem easy because they capture the low end of your target. How exactly do you intend to get my root password AND my credit card number? Proof of concept means the same thing as saying I might-could get with that girl. Show me the case where this ‘exploit’, if released, did do something, or call it linkbait / end of month, end of quarter.
I’m afraid I don’t think I understand what you are talking about. It’s not quite clear.
I think you are questioning if an arbitrary redirect is a real vulnerability. Is that right?
Well, take a look at this and see if it changes your mind:
https://www.owasp.org/index.php/Top_10_2010-A10
It’s true that not every vulnerability will, or even can, lead to an exploit, but it’s a better idea to just fix the potential problems than to wait for someone to be successfully scammed. But make no mistake: this is a vulnerability that can and will be exploited if it is not fixed.
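For what it’s worth, the usual mitigation described in that OWASP entry is to stop redirecting to arbitrary user-supplied URLs. A minimal sketch of the idea (hypothetical code with an assumed allow-list, not taken from Apple’s site) could look like this:

```python
# Hypothetical sketch of validating a redirect target before using it.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"developer.apple.com"}  # assumption: hosts we trust

def safe_redirect_target(next_url: str, default: str = "/") -> str:
    """Return next_url only if it stays on an allowed host, else a default."""
    parsed = urlparse(next_url)
    # Relative paths ("/account") have no scheme or host and are fine.
    if not parsed.scheme and not parsed.netloc:
        return next_url
    # Absolute URLs must point at a host we explicitly trust.
    if parsed.scheme in ("http", "https") and parsed.netloc in ALLOWED_HOSTS:
        return next_url
    return default
```

The redirect handler would then call something like `redirect(safe_redirect_target(request.args.get("next", "/")))` instead of trusting the parameter directly.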
Thanks for the insight; however, I still arrive at the original position. If one tricked a user into a malware install, then that is cute, but it is not an exploit. An unpatched Mac will not go ‘zombie’ on its own. An unpatched Linux install will not go rogue – BOTH will expect admin access. Servers *could possibly* be set to auto-update and reboot, but that is NOT the default Linux install. AND that is what SACLs and service accounts are for. Meanwhile, Windows still does the same old same old (security as a rogue process) and promises us that this time it will be different.
No. All OSes’ vulnerabilities are exploitable or even well documented, but only one vendor/(Kernel+HAL) has so many holes. Sure, it will all be fixed in the next version of Windows, but I cannot escape the feeling that calling out the Mac OS or ANY other OS for security every July is exactly linkbait.
I am once again speaking about a properly configured unit. I would never run a Windows box on the web in the default settings as the Admin account. YES, that is how the Mac OS ships, but if that is the case, then where are the worms and root-kits? My position is that the proof is in the pudding.
-=- If I run a non-admin client for 24 hours clean and it is ‘safe’ 89.1% of the time in *nix and 100% infected at the end of the day in Dot-Net or Active-X, then that is no reason to call out the MacOS as being exploitable with no “Out of Lab Exploits” – you know, like a Mac with no browser open going zombie on a non-admin box. Prove it.
Dude, bad things can still happen with just a redirect: phishing. Tricking people into handing over login credentials to sites that contain their financial information.
Let me be more explicit in the example:
1) You click a link that is clearly going to apple.com, which you think is safe and will have Apple-related information.
2) The link contained a redirect URL hidden at the end; you are instead redirected to apple.Mactunescentral.com, a bad site run by bad people who don’t like you or your dog FluffyCakes.
3) The site apple.Mactunescentral.com looks just like Apple’s – uh oh, it warns you that your credit card info needs to be updated for your App Store/iTunes account.
4) You enter your credit card.
5) Bad people use your credit card. You later get a decline when trying to buy life-saving medicine for FluffyCakes. He dies.
The hackers killed your dog, man. It’s got nothing to do with your OS, just your browser, your lack of fully checking each and every URL, and your foolish faith in Apple’s security reputation*.
*Note: Even if it is Apple’s site, don’t give them your CC number. They also hate FluffyCakes.
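To make step 2 a little more concrete, here is a rough sketch of why such a crafted link looks trustworthy at a glance (the redirect path and the “next” parameter are invented for illustration, and the Mactunescentral.com domain is just the commenter’s hypothetical example):

```python
# Hypothetical crafted link for the phishing scenario above; the path and
# the "next" parameter are invented, not Apple's real redirect endpoint.
from urllib.parse import urlencode, urlparse

phish_page = "https://apple.mactunescentral.com/verify-billing"
crafted = "https://developer.apple.com/login/done?" + urlencode({"next": phish_page})

print(crafted)
# The host the victim sees in the link is Apple's own:
print(urlparse(crafted).netloc)   # developer.apple.com
# After the redirect, the attacker's site serves a valid certificate for its
# own domain, so there is no SSL warning, just a different address bar.
```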
Sorry for your dog.
Apple really should fix their site. Even if the hackers don’t release the info, crackers are now aware that the site is vulnerable. I bet money at least one of them is already trying to figure out how to abuse the Apple site, since he knows there’s at least one way in and might not have tried before because he assumed it was safe.
Criminals were already aware. If these Hackers could find the issue, you can bet someone with criminal intent has also found it.
This is what makes high bug counts a good thing, provided they come with fast patch times. Software that isn’t getting bug reports, or isn’t getting expedient patches for those bugs, is either no longer in development or negligent towards its users.
I read a few posts about the potential of becoming an accessory to a crime by disclosing flaws that Apple was made aware of earlier. What a joke! If not de jure, some are even resorting to ethical standards. Are you fri… kidding me? Are you that blind??
How about this: Apple is morally responsible, and bound by law, to do due diligence and protect my personal information! If they fail to take proactive measures despite receiving all relevant information, and in the process my information is used to harm me, I would take them (Apple) to court! This is akin to an “inside job” … bank management knew of the faulty lock and failed to replace it? What does that mean, that they were part of the heist or what?
Seriously, companies always seem to have higher priorities than their own users/customers’ security. The beginning of the article proves it, for the millionth time:
“One such group, YGN Ethical Hacker Group, informed Apple of several weaknesses in its developers website on April 25. Apple acknowledged the flaws, but so far, hasn’t done anything about them. YGN Ethical Hacker Group has now stated they will fully disclose the vulnerabilities if Apple doesn’t fix them in the coming few days.”
Replace “Apple” with “Microsoft” or “Adobe” and you’ve got a pretty typical article; a company puts out a buggy and security-flawed product, knows about the security flaw, but doesn’t feel like getting around to fixing it any time soon. Unless it starts being used by the bad guys.
Meanwhile, if these hackers had gone the “unethical” route and just said publicly what the exploit was and how to perform it, at the same time as telling Apple themselves… the company would be scrambling, tripping over itself trying to get it fixed, if it’s a bad enough bug, before the bad guys are able to react.
See how swiftly Sony reacted when they got their asses handed to them, hardcore and deservedly I might add, by Anonymous. Of course, it helps that their customers’ personal info was on the line, but well… would they have ever learned otherwise? Probably not; they would likely have gone the quickest possible route to get their systems back online. It even made them realize that their password-reset page was buggy, so they had to re-think that part of the system and re-implement it properly. I’m sure after that disaster, Sony will think at least a little bit more about the security of their systems.
Companies don’t learn unless they have to actually react to something bad enough to be referred to as a “disaster,” “emergency” or “catastrophe.” I doubt that Microsoft would have gotten so much more serious about Windows XP’s security starting with SP2 if there weren’t people exploiting and tearing the living hell out of the OS all those years ago, wreaking havoc and making computers around the world miserable. I’m amazed so many people just took it and continue to even trust them, really. Their illegal monopoly really saved their asses.
I say it is. Why? Simple: it keeps the corporate lackeys on their toes and doing their bloody job. Groups like this, who find security holes, are doing the public a favor. It’s people like this who keep Sony-style debacles to a minimum. They’re forcing Apple to be responsible.
I say “Cheers”
Won’t it be obvious something is going on when the SSL cert doesn’t match, and therefore the phish is likely to fail?