Android’s various screen sizes – how big of a problem is it, really, for developers? Not a big one, according to iOS and Android developer Russell Ivanovic:
The answer tends to surprise pretty much everyone: It’s not that hard, and honestly causes us less headaches than most people imagine. Firstly, the tools Google give us to lay out interfaces have supported this from day one. You’ve been able to define one or more layouts that scale to various sizes, and if you want to get everything perfect, you can have as many of these layouts as you like, while still keeping the one codebase. The layouts are XML, and don’t live in your code. If you’re an iOS developer they are pretty much the equivalent of XIB files with size classes like iOS 8. The other part people don’t realise is that Android has standardised on screen resolutions for a long time now.
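To make that concrete for readers who haven't touched Android: the mechanism he's describing works roughly like this (a minimal Java sketch; the file names, Activity, and IDs are mine, but the resource-qualifier system is the real one). You put one layout XML per size bucket under qualified resource directories, and the same line of code picks the right one at runtime:

```java
import android.app.Activity;
import android.os.Bundle;

// res/layout/activity_main.xml         -> default (phone) layout
// res/layout-sw600dp/activity_main.xml -> 7"+ tablet layout
// res/layout-land/activity_main.xml    -> landscape variant
// All three share one resource ID; Android picks the best match for
// the current screen when the Activity inflates it.
public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // One codebase, one call; the resource system does the branching.
        setContentView(R.layout.activity_main);
    }
}
```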
I’ve long since accepted that certain complaints and issues are mostly perpetuated by people with an agenda, even long after the actual problems have been solved or become irrelevant. There’s Windows and security, Apple and pricing, Android and security – you name it. To put a finger on the true extent of these problems, you have to cut out the official bloggers and party-line parrots.
Windows has been secure for almost a decade now. Apple’s devices and PCs are not expensive. Android has never been insecure. These are all cases of ‘fear, uncertainty, and doubt’ perpetuated, or made to seem far worse than they really are, by people with questionable motives.
Nope, I am pretty sure that Apple products are expensive. They might be worth the money, but they are expensive. Apple basically just operates in the higher price ranges. I can buy 500 Euro laptops, 300 Euro PCs, 200 Euro phones, and 100 Euro laptops from many brands, but I cannot buy them from Apple.
Most of the time, though, the products Apple competes against are expensive as well.
The difference is more to do with build, fit and finish. I bought a MacBook in 2007 and it still looks as good as it did the day I bought it. I took care of it, but then why wouldn’t I? I’ve had various laptops through work that have basically had a very privileged life – more or less lived on a desk, been taken care of in a similar fashion, and have essentially fallen apart. That is the issue for me.
I bought a 500 euro laptop in 2007. Still looks new. If you take care of your stuff it holds up longer.
Everyone’s experience is different. I have a Dell Latitude CPx from around 2001 that was used in a warehouse environment for most of its life. Yet, it’s in excellent cosmetic condition, with very few light scratches on the bottom, and a flawless keyboard and screen. This is a plastic shell, mind you, not brushed aluminum or titanium, as most of Apple’s pro laptops have been.
It’s definitely not all in how you take care of it either. I’ve had cheap computers that started falling apart on me with light use, but I’ve also had Apple machines do the same (my Core Duo Mac mini has a loose DVI connector, even though it has sat in the same place, plugged in to the same monitor, for a very long time).
The difference has more to do with profit margin.
Not really. Like for like in 2007, the MacBook was pretty equal in specs to Windows laptops. A Dell I got through work at around the same time cost almost double, and really the main difference was the OS. Most of the rest of the specs were comparable.
Well, I don’t abuse my Macs, but they still accumulate scratches and dings on those damned edges.
Don’t get me started on the charger cables. They just keep splitting. I put supports near the ends, and the splits just happen elsewhere.
That’s a bit like saying you own an immaculate 50 year old Mercedes. It may still look good but it is totally outdated.
Is it? It has a 2.2 GHz Core 2 Duo. It does pretty much everything I need it to. I just upgraded the RAM and hard disk along the way. I have a Surface Pro 2 now, so I don’t touch it as much, but it’s still a perfectly viable laptop.
Just like the Android $600+ phones: HTC M8, Galaxy S5, etc.
Well, OK, but the Apple laptop that’s three times as expensive lasts three times longer (six years instead of just two). So you can buy the $400 HP, but you’ll need to buy another one in pretty short order (or pay to have it repaired). I’m not sure it evens out exactly, but if you add in the overhead of the time you’ll spend fixing it, I’d say the Mac isn’t expensive.
Quality? Care about it?
Reliability? Care about it?
Resale value? Care about it?
Initial purchase price is just the first step of total cost of ownership.
Is a Rolls Royce not expensive because it’s a quality car?
Is a Rolex not expensive because it lasts a long time?
Is gold not expensive because it holds its value?
Apple’s cheapest laptop is $899 and they sell a $4000 desktop. They are expensive, no way around it.
Unfortunately, MacBooks keep their value. I could probably eBay my 2007 MacBook for at least £200, if not more – even though it is pretty much obsolete by Apple’s OS upgrade path.
Windows security -> We ALL remember the days of multiple toolbars installing themselves and basically playing havoc on the machines of non-technical users. Whenever we ‘fixed’ their machines, this was the first thing we removed.
Apple pricing -> Apple was expensive in the ’90s; I mean well above the PC equivalent.
Android screens -> Back in the Android 1.x days, this was a real problem.
Over time, these have all been resolved to a greater or lesser extent. But those old negative perceptions survive, in no small part due to the fanboys in the other camp repeating them ad infinitum (check the comments on any new Mac release: “it’s not as cheap as OEM XYZ”).
What do you mean, ‘remember the days’? Toolbars, especially the Ask and Yahoo ones, are still being installed if you aren’t careful.
Having developed for both iOS and Android, I disagree that the essentially infinite number of screen density/dimension/aspect ratio combinations is not a problem.
It becomes a problem for testers. On iOS, you know you have a limited set of hardware to test against. On Android, how many devices is enough?
Restricting screen resolutions and aspect ratios in order to have “pixel-perfect apps” is a terrible idea, because it restricts future resolutions to integer multiples of the original. Don’t hardcode pixel positions in your apps.
Symbian S60v2 did this, by fixing the screen resolution to 176×208. This meant S60v2 devices couldn’t have 240×320 screens, which became the standard screen resolution later. Phones had to either feature a fuzzy 176×208 screen or use an unusual (expensive) resolution of 352×416.
Apple, not learning from history, fixed the resolution to integer multiples of 320 pixels (the other dimension appears to be free, though). Which means you’ll never see a 1080p iPhone; it would have to have a weird resolution of 1280p.
Essentially, the proponents of “pixel-perfect” apps blame Android for not doing the wrong thing.
Nonsense. There is nothing in Cocoa Touch or regular Cocoa having any such restriction.
Apple has so far chosen to “double up” to avoid problems with backwards compatibility. Take Windows 7, or heck, even Windows 95: they all had support for querying the monitor’s DPI, and a perfectly coded application would do that and scale accordingly. It just happened to be the case that if the OS returned anything other than 96 DPI, about 90% of applications would totally mess up their own layout, because they had never been tested under that condition.
The double-up strategy works because older software can still function correctly without knowing what is going on: the OS just multiplies all values by two. It has to be exactly double to avoid aliasing effects when scaling images and thin lines.
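Android’s density-independent pixels are built on the same arithmetic. A minimal, self-contained Java sketch (the class and method names are mine, not any SDK’s) of why the scale factor matters:

```java
// Convert density-independent pixels (dp) to physical pixels (px).
// With an integer scale factor (e.g. 2.0 on an xhdpi screen), a 1dp
// hairline maps cleanly onto whole pixels; with a fractional factor
// (e.g. 1.5 on hdpi) it straddles pixel boundaries and gets rounded,
// so it renders fatter or thinner than intended.
public final class DpConverter {
    private DpConverter() {}

    public static int dpToPx(float dp, float density) {
        // Round to the nearest whole pixel, which is roughly what
        // Android's dimension handling does internally.
        return Math.round(dp * density);
    }

    public static void main(String[] args) {
        System.out.println(dpToPx(1f, 2.0f)); // 2 px: crisp
        System.out.println(dpToPx(1f, 1.5f)); // rounds to 2 px: drawn fatter than intended
    }
}
```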
HTML does something similar: ‘px’ was redefined to mean a unit at 96 DPI. However, HTML does not need the “double up” trick, because it maps “used values” to “actual values”. What this means is that sizes are rounded to the nearest physical pixel, which avoids blurry, aliased lines and boxes. Images, however, still become somewhat blurry if the image doesn’t match the resolution.
Cocoa Touch does not have this distinction between used and actual values and so they didn’t have much choice.
This problem is caused more by the trend of designers first drawing an application in Photoshop and then sending those as comps to the developers. This, however, has little to do with why Apple chose the double-up strategy.
The “double up” tactic is why Apple has a system today that works well for Retina (HiDPI) displays, while both Windows and Linux still struggle with them. Relying on all apps to be ported is a terrible solution.
It is very easy to seem progressive when you use a naive strategy. It gets you there first but becomes a limitation later.
Yup.
Until you have to introduce a new screen size and have to do so six months in advance just to get your developers on board.
Actually, six months isn’t even enough. The thing is, applications are always written with certain assumptions (“the computer will have 2+ GB of RAM”, “the monitor will be 15 to 24 inches”, “the DPI will be 80-100”, “the mouse will have two buttons”, “the computer will have 250+ GB of disk space”, etc.), and once you go beyond those expectations, programs break for all kinds of reasons. Once they break, maybe 25% of them will never be repaired, and maybe 25% will take years.
Apple’s problem has been that their Cocoa API never really had any kind of proper automatic layout. When they needed a UI for the new iPod Touch and iPhone, they grabbed their existing UI toolkit, rewrote the standard components to work with touch, and called it a day. If I remember the history of the iPhone correctly, they never truly planned to allow 3rd-party apps – it was something people discovered they could do (when they realized iOS is actually just Darwin + Cocoa + new Objective-C standard views). Only then did they add an official API, and by that point the damage was done.
Unless they want to break half of all their applications, they have no choice but to introduce the concepts slowly. The pixel doubling strategy is just part of that slow process.
The history of Android is completely different – they always included automatic layouts and developers always assumed that display dimensions weren’t “pixel perfect”. Therefore Android applications always dealt with the “fragmentation” of different sizes, just like an OS X Cocoa application never assumed its window dimensions would be always 1024×768.
What I don’t understand is why people expect Apple to use a tactic that would break 50% of their applications overnight. Microsoft tried that with Windows 8 Metro apps – could someone please point me to the Adobe Photoshop application with a completely rewritten UI running in Metro?
Heck, they couldn’t even port their own Office suite with that tactic. And as for XAML, C#, and the Windows Presentation Foundation: it took DevDiv almost a decade before they were ready to brag that Visual Studio had *finally* switched its main window to their new UI toolkit.
Don’t get me wrong. I don’t mind better tech, but an app ecosystem does not react well to fundamental changes.
If there’s an Android “screen size fragmentation myth”, then for the sake of a balanced discussion, let me point out that there’s also a myth being perpetuated that “layouts in iOS apps are fixed (aka static, absolutely positioned)”.
Cocoa / Cocoa Touch allows you to do fluid layouts. The semantics are different. Auto Layout in Cocoa uses constraints whereas Android uses a box model. The results, however, are the same: Layouts that can flow, grow or shrink depending on available screen real estate.
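To illustrate the Android half of that comparison (the iOS half would use NSLayoutConstraint), here is a minimal Java sketch of the box model built in code: a vertical LinearLayout whose children share space via layout weights, so the result flows with whatever screen it lands on. The class and strings are made up for illustration:

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.LinearLayout;
import android.widget.TextView;

// Android's box model: two children split the available height 1:2
// via layout weights, so the layout adapts to any screen size with
// no hardcoded pixel positions.
public class BoxModelActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        LinearLayout root = new LinearLayout(this);
        root.setOrientation(LinearLayout.VERTICAL);

        TextView header = new TextView(this);
        header.setText("Header");
        // Height 0 so the weight (1 part) decides it.
        root.addView(header, new LinearLayout.LayoutParams(
                LinearLayout.LayoutParams.MATCH_PARENT, 0, 1f));

        TextView body = new TextView(this);
        body.setText("Body");
        // Weight 2: this child always gets twice the header's height.
        root.addView(body, new LinearLayout.LayoutParams(
                LinearLayout.LayoutParams.MATCH_PARENT, 0, 2f));

        setContentView(root);
    }
}
```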
Auto Layout was introduced in iOS 6. That meant for example that any app that had to run on the iPad 1 could not use it. It is only very recently that most apps stopped their iPad 1 support.
It also means that any legacy code written in that period is still not using it. If you suddenly change the resolution now, a lot of those apps will break, even if their newer view controllers use iOS 6 features.
Apple has to find a way to slowly introduce resolution awareness to applications so the broken applications do not ruin the brand in the process. I’d say Auto Layout is only the first step in doing so.
They’ve already done this before when they introduced the iPhone 5. Breaking changes will be handled the same way they’ve always been handled: by asking apps to opt in. This is reasonable. If you don’t update your application then it gets letterboxed.
Also: You can still design a fluid app without Auto Layout, using the old springs and struts system.
The old system was quite terrible – it got replaced for a reason. Given that most developers assumed their app would run on a 1024×768 device (for iPads), my guess is that the majority just hardcoded the positions. Yes, I’m pulling that out of my ass, but the general code quality level in this industry is not very high – why should it be different for iOS apps?
But more importantly, the existence of an API does not guarantee people are using it. If they did, then all Windows 95 applications and all X11 applications would be fully DPI-aware.
Likewise, opt-in doesn’t work unless you guard it like a hawk (i.e. via the App Store approval process). When I tried to use my 4K retina monitor with Windows 7, virtually all of the Microsoft apps had declared themselves DPI-aware. In practice, only Visual Studio passed the test.
The old system was easy to use for simple layouts, but quickly became a PITA for complex layouts. However, we should keep in mind that OS X developers had been using springs and struts to build windowed applications since forever. By their nature, windowed applications need to be able to gracefully handle different viewport sizes.
I’m sure that many iPad apps are doing the wrong thing. We both agree that there are no technical reasons preventing developers from doing fluid layouts though.
Opt-in works to prevent unplanned breakage. It means that Apple won’t cause your app to break. However, if you voluntarily opt in and your app breaks, then you absolutely deserve all the one-star reviews coming your way. I’m willing to wager that the flood of negative reviews would quickly push you to fix your app.
As for Microsoft’s apps being bad citizens on their own platform, that reflects poorly on themselves and themselves only.
What I was getting at was that whereas testing iOS devices means a known set of hardware the company can buy (iPhone 3, 4, 5), testing for ‘Android’ is much more complex. Screen size and resolution is just one aspect of it. Do we get the S5 and the S4, or the HTC One, or should we plump for the Z2? Do we need them all?!
Count Brazil out of the “not true” about Apple pricing. An Apple notebook is R$ 5.000 (something like US$ 2.000) while PCs with equivalent hardware are R$ 2.000 (~ US$ 400).
Of course, you are leaving out the fact that most of those clones are made locally and aren’t subject to Brazil’s hefty import tax to promote “Made in Brazil”.
I have been looking for a new laptop for a while. Just visited the Apple site, looking at MacBook Pros. They are easily twice as expensive as comparable models from Dell and HP. Which is why the last laptop I bought was an HP. Even the refurbished Apple laptops are significantly more expensive than their equivalent new version from other vendors.
That said, high-quality laptops like Lenovo’s are closer to Apple’s prices. Apple has unparalleled build quality. I wasn’t happy with my HP, but it did its job. While I would probably have been happier with an equivalent MacBook Pro, I just can’t justify dropping $3k every four years.
I also develop Android software. The screen thing isn’t my biggest headache by a long shot. Most of the time I don’t even think about it.
The Windows user experience was better until Windows 8, but the developer experience is still a nightmare compared to other platforms. MS just doesn’t know how to build APIs. That goes double for their mobile platform.
Screen resolution fragmentation was a nuisance at most compared to the other, real fragmentation issues.
The problem with Android is the insane degree to which it can be customized. This leads to very subtle, hard-to-debug bugs up and down the stack. The bugs in some OpenGL drivers are especially nasty.
And the even bigger problem with vendor-specific bugs like that is that to reproduce the issue, you need to go out and purchase one of those random-ass devices.
Not that it’s much better with iOS, though; I’ve heard horror stories from coworkers about bugs in different models of iPad (same “release”, like the iPad 2, but different variations).
I don’t think anyone is best served by downplaying the issue of fragmentation as a whole, though. It definitely is a problem. Not necessarily Android-specific, but made worse by the staggering scale at which Android operates.
Three words:
The Windows ecosystem.
Anywho, sometimes I get the feeling mobile developers whine a lot because they tend to be new at it. Windows developers have had to deal with a lot more hardware variation than any Android developer will ever have to.
What about it? Are the situations comparable? Of course they are. But I don’t think Windows apologists pretend that the problem doesn’t exist.
To add to this: Times are different. Windows developers in aggregate deal with a lot of this. But not all at once.
On Android, and in the mobile context generally, you have access to (and in fact are expected to use) a bunch of different sensors (location, cellular, WiFi, fitness, etc.), a lot of which aren’t standardized chipsets (so OEMs can pick and choose whoever). So bugs in those drivers can cause issues, and you use those sensors more often than Windows developers traditionally have.
Graphics is another beast. OpenGL’s IHV driver (the part that vendors like nVidia write) is more involved than, say, a DX driver. The OpenGL API is actually implemented by the vendors, which allows a larger potential surface area for vendor-specific bugs.
And that is what ends up happening. Made not that much easier by the fact that OpenGL also supports vendor extensions. Sigh.
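One defensive habit this forces on you: probe the extension string before relying on anything vendor-specific. A minimal Java sketch against Android’s real GLES20 bindings (the helper class is made up, and the extension named is just an example):

```java
import android.opengl.GLES20;

// Probe the driver's extension string before using a feature that may
// only exist on some vendors' GPUs. Must be called with a current GL
// context (e.g. from GLSurfaceView.Renderer.onSurfaceCreated).
public final class GlCapabilities {
    private GlCapabilities() {}

    public static boolean hasExtension(String name) {
        String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
        // A substring check is good enough for a sketch.
        return extensions != null && extensions.contains(name);
    }
}

// Usage, inside onSurfaceCreated:
//   if (GlCapabilities.hasExtension("GL_OES_depth_texture")) {
//       // take the fast path
//   } else {
//       // fall back to a portable technique
//   }
```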
Your feeling is off the mark.
The type of hardware variation that you find on the Windows platform doesn’t really affect the developers that much. Most applications work at a level where such concerns are already abstracted out. A typical business desktop application doesn’t care what sort of graphics card you’re running. It’s a moot point.
The hardware on Android, while also abstracted out by the OS and framework, tends to affect the developers directly. There are decisions to be made, sometimes wholesale design changes to handle different screen sizes. The move from a phone size screen to a tablet size screen affects how you want to display your app. On Windows you have resizable windows. The user can simply resize your application when they move from a laptop sized screen to a 30″ monitor. You don’t have to alter your design. Moving from a phone size display to a tablet size display is a big deal to an average Android app. And soon we’ll have smart watches, Glass, TVs, Chromebooks. More fun all around.
An Android application also makes use of many more sensors than a typical Windows application. The developer may need to check whether the device has a GPS, camera, NFC, etc., and gracefully degrade the experience if a sensor or input is not available. A typical Windows application can assume that the hardware they depend on will always be there.
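As an illustration, the graceful-degradation check looks like this on Android (a minimal Java sketch; the helper class is hypothetical, but the PackageManager calls and feature constants are real SDK API):

```java
import android.content.Context;
import android.content.pm.PackageManager;

// Query which optional hardware this device actually has, and degrade
// gracefully instead of assuming it's there, as a desktop app might.
public final class FeatureCheck {
    private FeatureCheck() {}

    public static boolean hasNfc(Context context) {
        PackageManager pm = context.getPackageManager();
        return pm.hasSystemFeature(PackageManager.FEATURE_NFC);
    }

    public static boolean hasGps(Context context) {
        PackageManager pm = context.getPackageManager();
        return pm.hasSystemFeature(PackageManager.FEATURE_LOCATION_GPS);
    }
}
// e.g. if (!FeatureCheck.hasNfc(this)) { hide the "tap to pay" button }
```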
A mobile environment is a much harsher environment for applications. Your app is using too much memory? It gets KILLED. Cue one-star reviews. On a desktop OS? The OS will do its damnedest to keep you running. It will swap, the hard disk will thrash, OS X will beachball. Cue complaints against OS X.
On a mobile device, you need to worry about battery life. Even if you don’t care, the OS does care, and reserves the right to terminate your application whenever it’s in the background. Windows won’t do that to your .NET app. A good mobile app needs to be able to come back from the grave and pretend that it never died. Show the same activity, with the same content in the same state that the user last left it. That means implementing state restoration and caching. If you quit and reopen a Windows application, you don’t expect it to come back in the same state.
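On Android, the minimum version of that “come back from the grave” contract is instance-state saving in the Activity lifecycle. A minimal Java sketch (the key, field, and class are invented for illustration; the lifecycle callbacks are the real ones):

```java
import android.app.Activity;
import android.os.Bundle;

// Persist transient UI state so the Activity can be killed in the
// background and later resurrected looking exactly as the user left it.
public class DraftActivity extends Activity {
    private static final String KEY_DRAFT = "draft_text"; // hypothetical key
    private String draftText = "";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (savedInstanceState != null) {
            // Coming back from the grave: restore what the user had typed.
            draftText = savedInstanceState.getString(KEY_DRAFT, "");
        }
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        // Called before the process may be killed; stash the draft.
        outState.putString(KEY_DRAFT, draftText);
    }
}
```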
Mobile applications live in a harsher, more constrained, more sandboxed environment. Apple and Google could start from scratch and force applications to be better behaved compared to the anything-goes world of the desktop. The end user wins, but it means more complexity for application developers.
Apple is bringing many of these concepts from iOS back to OS X (remember “back to the Mac”?). The OS X user will eventually benefit with longer battery life, more secure (sandboxed) applications, better use of available memory. However, the OS X developer is now getting a taste of what it means to be an iOS developer.
Man, how have desktop software developers survived with all those different display resolutions, widescreen ratios and colour bit depths all these years? It’s clearly impossible. Apparently.
Easy. They survived for the most part by not having to do multiple designs.
Why do mobile developers need to “do multiple designs” when no such thing was necessary for desktop applications? It’s like the wheel has been re-invented but it’s a hexagon.
Maybe an example would help. Let’s consider the classical example of a phone vs. tablet UI: The master -> detail pattern.
Say you’re writing an email client. When the app launches you want to show a list of email subjects, sorted by most recent on top. Tapping on a subject opens the detail view where you can read the body of the email. On a phone you don’t have enough space to display both. So you only show the subject list. Tapping on a subject replaces the list view with a view showing the email body. You then provide a back button so that the user can go back to the list view.
Now on a tablet you might have enough room to display both the list of subjects and the body for the currently selected subject. Maybe you show them side by side, with the list on the left and the body on the right. This UI is more efficient than the phone UI. You have more space to display more stuff.
But wait. This only works when the tablet is in landscape orientation. In portrait orientation you still don’t have enough space. Let’s see how Gmail solved this. Gmail puts the subject list in a sliding pane in portrait mode. When a subject is selected, the pane slides away to only show the detail view.
So that’s three (3) different UI layouts for one app. For one user activity. If you do things right, you can reuse the components. However, reuse or not, we’re still doing more work here compared to a desktop application that has the luxury of being able to show the same UI everywhere. A desktop app doesn’t need to scale down to work on a phone screen. It doesn’t need to worry about screen orientations and switching between them. Android will destroy and recreate your Activity when the user rotates the device. So you need to save and restore state whenever that happens. iOS is more considerate. Your UIViewController survives, but you get told that the screen is about to get rotated so that you can rearrange and animate your views to reconfigure your UI.
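For the Android side of that, the common trick is to ship the layouts under resource qualifiers and detect at runtime which one was inflated. A minimal Java sketch (names like activity_mail and detail_pane are made up; the pattern is the standard one):

```java
import android.app.Activity;
import android.os.Bundle;

// Phone layout (res/layout/activity_mail.xml) has only the subject list.
// Tablet-landscape layout (res/layout-sw600dp-land/activity_mail.xml)
// also contains a detail pane with the (hypothetical) ID detail_pane.
public class MailActivity extends Activity {
    private boolean twoPane;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_mail);
        // If the inflated layout contains the detail pane, we're in the
        // side-by-side configuration; otherwise navigate on tap instead.
        twoPane = findViewById(R.id.detail_pane) != null;
    }

    void onSubjectTapped(long emailId) {
        if (twoPane) {
            // Show the body in the detail pane (e.g. swap a Fragment).
        } else {
            // Launch a separate detail Activity with a back button.
        }
    }
}
```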
That’s the same UI problem desktop developers have been dealing with for decades; you can’t fit 1280×1024 worth of UI into a 640×480 display.
It’s funny that you chose email clients as an example, because most email clients offered the exact same solution you outlined: have different UI layout options (and let the user choose).
What’s old is new again, and we need to re-invent our entire toolbox to “solve” it.
Oh, what’s old is simply old, which is not saying that it’s inferior. WIMP clearly works great on the desktop and you can pry it from my cold, dead hands.
There’s nothing wrong with re-inventing and renewing. Mobile has brought us an influx of cheap(er) sensors that allow us to explore new ways to interact with computers. GPS, proximity sensors, NFC, motion detectors, voice, Siri, Cortana, etc.
I think that I see where you’re coming from now. I don’t see the need to bring an “us” vs. “them” angle to this debate. That was never my intention. I build desktop applications too. My point to Thom was that a mobile application has more constraints placed on it than a desktop application does.
I’ll take “because you’re comparing apples to oranges” for $500 Alex.
A UI is a UI, a display is a display. I can’t see how desktops are magically different to mobile devices.
… and a thing is a thing. Therefore a desktop computer and a potato are the same. Flawless logic, I guess.
Cheers.
If you believe Windows is secure, then I have a bridge to sell you. I constantly get people bringing their Windows laptops to me to clean, usually after some malware infected their computer through a browser (IE) exploit. Also, Apple products are quite a bit more expensive than… well, just about everything else. Usually, where I live, about two to three times more expensive than the equivalent non-Apple hardware. Sorry, but some popular opinions exist because they are true, not because bloggers have a grudge.
Security isn’t an absolute binary property, but a relative value.
Windows is more secure than it was a decade ago, and they do continue to improve it.
Is Windows as secure as Macs? I’m not sure what the best way to determine the relative security of platforms is. Should the popularity of Windows be counted against it? From a technical point of view, that’s not fair. From an end-user perspective, it’s a real consideration.
I think that Microsoft and the *nix development community are striving to improve security. I’m not really sure Apple is that committed. They might be resting on their laurels a bit.
And I’ve had Macs brought to me infected with malware because the users downloaded .dmg files from shady websites trying to get Logic and Aperture for free. What’s the difference? Market share, simply put. There are nine Windows machines for every one Mac in the wild, so of course you’re going to see more Windows machines with malware based on the numbers alone.
That said, Windows security has matured with the times, but you’re right: It’s still dead-easy to write malware for it and get it onto the machines. IE with default settings is a dead-easy vector, as well as any email client not properly configured to handle attachments. Microsoft Security Essentials is one of the weakest AV/AM products out there, but it’s better than nothing at all.
Windows 7 and 8 are about as secure as, if not more secure than, the other major operating systems. And this is coming from someone who doesn’t care for Microsoft products. After the security calamity that was XP, Microsoft made a concerted effort to fix the broken architecture of Windows that allowed so many vulnerabilities. The resulting architectural fixes broke many features, which is why Vista was viewed with so much hatred, but it was Microsoft’s first step towards cleaning up their act. At one point, the ASLR Microsoft used in Windows was better than that of OS X and Linux. Chances are the people who are getting their Windows computers infected are doing so via trojans, which are a problem with the user and not the OS.
Sometimes they are. The value of Apple gear relative to comparable gear from other vendors varies greatly, primarily as a function of time.
Apple refreshes their products at a lower rate than other vendors, including simple speed bumps. That Macbook Pro or iMac is a good deal when it is first released, but as time passes, other vendors drop prices or give speed bumps to their stuff. Apple doesn’t do this nearly as often, so later in a product’s life cycle it becomes overpriced.
Very true. Also, the one component that Apple is known for bending you over a barrel for is memory. You’re better off buying the minimum and installing more on your own (assuming they haven’t started soldering it to the motherboard).
Well, the only desktop that has it soldered is the new 2-core iMac – and, it only comes in 8GB configurations.
But, that’s an artifact of it being a Macbook Air in an iMac case more than anything else.
Hahahaha, you’re joking, right? OF COURSE they do – hell, Apple practically invented that particular sleazeball tactic with the Macbook Air (funny, you don’t see iFanboys trying to brag-by-proxy about THAT particular innovation…).
Just do a Google search for “ram soldered onto motherboard” – all but one of the results are about Apple laptops… including one hilarious attempt to defend the practice, which is such a pathetically by-the-numbers bit of iFanboy apologetics that it verges on self-parody:
http://drbobtechblog.com/soldered-ram-new-macbook-pros-problem-mave…
This is always an interesting discussion. From what I have seen, Apple prices are comparable to PC prices when the hardware is comparable (at least in the U.S.). The problem is that people who spec out two “comparable” machines between Apple and a PC usually leave out features that the Apple hardware has that they don’t care about, such as Thunderbolt. Therefore, the Apple hardware looks more expensive when it is really comparably priced if you added the features that you don’t care about to the PC. The problem is really that Apple doesn’t offer hardware without the bells and whistles that nobody cares about.
Remember when the PC vendors decided to make their version of the Retina MacBook, the so-called “ultra” laptops?
The equivalents actually cost MORE and didn’t end up selling well.
Since when has Apple not been expensive? They have always cost more than the competition in the UK; their phones especially are insanely pricey, in and out of contract. Hell, most contracts make you pay a sum upfront for the iPhone that’s higher than for other phones of the same spec, which come free with your mobile contract.
We have “designers” – basically trained monkeys who spank it out on the screen in Photoshop – who constantly bitch and moan about different size targets. It’s why we STILL see halfwits, morons, and fools pissing out fixed-width layouts, and now just pissing out MULTIPLE fixed-width layouts with responsive triggers on websites.
They cannot wrap their heads around the notion of elastic or fluid design, or, to be brutally frank, around how user interfaces – be they websites OR applications (and the line is getting really blurred) – should be built. They put their artsy-fartsy bull and pixel-perfect design concepts ahead of functionality and accessibility, and as users we all suffer as a result.
The laugh being that, at least in web design, we’ve had the tools to do it properly for over a decade, and having content and functionality auto-adjust to device capabilities was its design goal from day one, over two decades ago.
But the lazy, inept, and outright ignorant fools who continue to think in “pixels” or “fixed layout” will continue to point the finger not at themselves, but at the devices and the users.
Of course, desktop applications have never had a major problem with different screen sizes – or, on Windows at least, different default font sizes, ever since Windows 3.0 and the 8514/large/120dpi/125%/Win7 medium/whateverTheHellTheyreCallingItThisYear – so why should handheld and other small-screen devices have issues?
It’s a BS claim by people who probably don’t know enough about computers or interface design to be flapping their gums on the subject!
Looking at the discussion above… Thom nicely trolled us.