Some folks like to call it fragmentation, others call it choice, but by any name there are certainly a lot of different Android phones. Building applications that need to work with all of them is no easy task. Wildly different hardware configurations make for big differences in performance, and even though one APK file can run on every one of them, there’s still the problem of keeping an app running smoothly on low-end devices without sacrificing features on high-end devices. When you’re talking about an app as popular as Facebook, this can quickly become a nightmare for the folks doing the coding.
At the Big Android Meat and Greet, Facebook showed everyone a simple new solution – the Device Year Class component.
It’s a clever method for developers to tailor their applications to specific Android phones – and it’s open source.
OSAlert – so good it now gives me code tips…
Nearly all the current entry-level Androids have quad-core CPUs and 1GB of RAM. Most of the low-end single-core models will be out of circulation within 1-2 years.
It will continue to be a problem, because apps do more and phones will continue to be built to a budget. By the time the lowest-end ones have quad cores, the fastest will have 12 cores and a GPU that’s 10 times faster.
No, probably not. There will most likely continue to be devices that are more powerful than others. The trick here is giving them a year as a performance indicator. It doesn’t even matter if a low-end device built this year gets categorized as a top-of-the-line 2012 model. Grouping devices by these performance characteristics isn’t new or novel; it’s the packaging of all this logic together under an easy-to-understand label that’s great.
Most of the people I know use devices with pre-paid cards until either they die or get stolen.
So do I.
My current phone is 600MHz, 192MB RAM and 320×240.
I can now buy a Huawei 550Y [quad-core 1.3GHz, 1GB RAM, 4.5″] Vodafone prepaid for AUD50 (USD40).
RAM, number of cores, and clock speed? Doesn’t sound like it will be an accurate gauge of performance. My CPU has a far lower clock speed than a P4 from years ago and yet is way faster. RAM isn’t even a constraint until you fill it up, so a device with 1GB of RAM will do an animation just as fast as one with 4GB as long as it’s not memory constrained.
A device with 8 cores won’t do you much good if you’re not using them.
A device with a slower CPU but faster GPU might be just fine but excluded because it doesn’t fall into a certain device year class.
It’s a very, very rough solution. If it’s about performance for UI animations (which was their example), then I’d much rather see frame rates actually measured, with behaviour adjusted based on that, rather than some piecemeal indicators of performance.
This is essentially doing features by device, which is similar to the old web practice of detecting browsers and changing behaviour – a practice now widely regarded as bad code. Better to detect capabilities instead of device classes.
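In Android terms, capability detection can be done with standard APIs. A minimal sketch (the 64MB heap threshold is purely illustrative, not anything the platform recommends):

    import android.app.ActivityManager;
    import android.content.Context;
    import android.os.Build;

    public final class Capabilities {
        // Decide based on what the device can actually do right now,
        // not on what year it supposedly belongs to.
        public static boolean shouldEnableFancyAnimations(Context context) {
            ActivityManager am =
                    (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);

            // API 19+: the OS itself flags devices it considers low-RAM.
            if (Build.VERSION.SDK_INT >= 19 && am.isLowRamDevice()) {
                return false;
            }

            // Per-app heap budget in MB; a small budget is a strong hint
            // that heavyweight effects will cause GC churn.
            return am.getMemoryClass() >= 64; // illustrative threshold
        }
    }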
It isn’t meant to gauge performance per se; it’s more a method for approximate performance categorization.
They are not using clock speed as a performance gauge, they are using it as a fingerprint. Same with number of cores and amount of RAM. If you know, with 20/20 hindsight, that no one built a phone in 2010 with a 2GHz CPU, or with more than x cores, or with more than x amount of RAM, etc., you can reasonably determine that the device you are looking at was made after 2010. Keep going until there is a year slot it fits in or until you run out of years. Whatever year it falls in, it probably performs similarly to the rest of the devices in that year’s group. No, not absolutely – there are certainly some exceptions to the rule – but it is a fairly good working approximation.
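In code, the whole idea boils down to a cascading lookup, something like this (an illustrative sketch – these cut-off values are made up, not the library’s actual tables, which are derived from statistics about devices that actually shipped each year):

    // Sketch of the "specs as fingerprint" idea, with invented thresholds.
    public final class YearGuess {
        public static int guessYear(int cores, long ramMb, long maxClockMHz) {
            if (cores >= 4 && ramMb >= 2048) return 2013;
            if (cores >= 4 || ramMb >= 1024) return 2012;
            if (cores >= 2) return 2011;
            if (maxClockMHz >= 1000) return 2010; // 1GHz+ single core
            return 2009;
        }
    }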
Again, the point isn’t how much RAM the device has. It just serves as a forensic marker… Look at the code: they only expose a single public method, getYearClass(). That’s it. It won’t even tell you how much RAM the device has (or the number of cores, or the clock speed, or anything else). If you get “2011” back you are not supposed to assume the device has 1GB of RAM and then act accordingly… If you need to branch based on amount of RAM you need to actually do real detection. I think you might be missing the point…
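For what it’s worth, the call site really is tiny. A sketch of typical usage (if I remember the published API right it’s a static YearClass.get(Context) rather than getYearClass(), returning the year as a plain int; the cut-offs below are my own):

    import com.facebook.device.yearclass.YearClass;

    // Somewhere during app startup:
    int year = YearClass.get(getApplicationContext());

    if (year >= 2013) {
        // high-end path: full animations, high-res assets
    } else if (year >= 2011) {
        // midrange path: reduced effects
    } else {
        // low-end path: keep it lean
    }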
Then write frame-rate detection into your app and use that instead. If you really need it, this won’t help you. This will, however, generally give you a fairly easy way to implement very basic “slow, midrange, high-end” code paths without much extra effort. Perfect is the enemy of good…
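And if you genuinely need frame-rate detection, Android’s Choreographer (API 16+) hands you a per-frame callback to build it from. A bare-bones sketch (the 120-frame window and the 20ms threshold are arbitrary choices for the example):

    import android.view.Choreographer;

    // Averages the frame interval over a window of frames and flags
    // the device as "slow" if we're well below 60fps. Must be started
    // on a thread with a Looper (normally the UI thread).
    public final class FrameRateProbe implements Choreographer.FrameCallback {
        private long lastFrameNanos;
        private long totalNanos;
        private int frames;

        public void start() {
            Choreographer.getInstance().postFrameCallback(this);
        }

        @Override
        public void doFrame(long frameTimeNanos) {
            if (lastFrameNanos != 0) {
                totalNanos += frameTimeNanos - lastFrameNanos;
                frames++;
            }
            lastFrameNanos = frameTimeNanos;

            if (frames < 120) {
                Choreographer.getInstance().postFrameCallback(this);
            } else {
                double avgMs = totalNanos / (double) frames / 1000000.0;
                boolean slow = avgMs > 20.0; // ~50fps or worse
                // ...store the verdict and pick code paths accordingly
            }
        }
    }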
It’s actually the opposite. It’s doing “device by features”, although the features in this case are a fairly small selection of metrics. The “device” is whatever year the detected features fall into. There is no specific device detection happening, no need to constantly update the code for new devices or add new identification mechanisms or whatever. An app written using this right now will have no problem running on any future device. It has none of the problems that make device detection “bad code”.
Sure it’s better, but this is orders of magnitude simpler and easier, with far less possibility of things going horribly wrong… Sometimes good enough is good enough.
Except it doesn’t work as a fingerprint, as I just pointed out. CPU clock speeds haven’t increased on the desktop in years, and they aren’t going to increase much on mobile either. They are going to get more efficient, though.
I don’t think it will be a useful approximation going forward.
Right. And then in their example they use that to make a decision about a feature to enable (in that case an animation). I just don’t see this as good practice and leading to frustration for users when their device is misclassified and they miss out on features for no good reason.
Much more useful are screen resolution and DPI – and that detection is already done.
You don’t understand why browser detection was bad. It was bad because it locked out users of browsers that weren’t specifically tested for. Exactly like this. If your phone is classified as 2010 and the developer has arbitrarily decided that 2011 is needed to use a feature, then you will be locked out.
I don’t agree. I think it’s better not to detect and then address performance problems when they arise. For most developers and most apps, this code will not add to the quality and will probably cause more problems than it solves.
I agree this is not a great way to determine what features to enable in software. By collapsing several information dimensions into a single dimension, it dramatically reduces the usefulness of the information in deciding what features should be enabled.
It doesn’t matter that much, though, as long as we can enable/disable the effects in an options screen. Some of us like higher resolution/framerate and fewer effects, others like more effects at the cost of performance and/or battery, etc. Edit: This is speaking in terms of games. For something like Facebook, I don’t see the point in having effects regardless of hardware.
Never heard of an Android smartphone with a P4 though…
With an Atom?
It sounded like a neat solution, until I read your message. I agree, there’s a lot of scope for problems here. It should be the user or the OS making this decision, not something hardcoded into the implementation. Why not let the end user decide whether they want the animations or not?
I know the answer is that the user shouldn’t have to worry about it, but even then it would still be useful to be able to change the default behaviour if I want to (e.g. in case the heuristics make a bad choice).
It doesn’t preclude giving the user an option… It would still be useful to generate a reasonable set of default settings. Better, imo, than making the defaults be “your device sucks” – which is the only sane choice without doing some kind of heuristics.
I think the argument boils down to whether or not you think the heuristics are “good enough”. Imo they are, at least for very basic stuff. It’s a “better than nothing” solution to me. Certainly not ideal, but it’s probably good enough most of the time.
I wonder if this is the work of Amit Singh (kernelthread). His company osmeta was purchased by Facebook back in 2013. osmeta had planned on building one OS for all devices, or an OS on top of the OS – I’m really not quite sure. osmeta showed about 19 devices with which they were testing.
It could be, but if it was, it certainly isn’t anything terribly noteworthy as far as the coding goes. I like the concept, to be honest, but there isn’t much to it beyond that. The actual implementation is totally trivial; a mid-level developer could write this library in an afternoon. The entire thing is less than 400 lines of pretty simple Java.
The core idea is nice, simple, and neat, which I appreciate. There is nothing terribly special about it though. Maybe it was a cog in the bigger wheel he was building? Could be. Don’t know.
So now you can build an app that will appropriately drive performance down on any device, even the latest ones!
I’d rather totally get rid of animations and other useless stuff.
Hopefully, someone will come up with a way to make your device look 5 years older… *Sigh*
With you on this, 100%. Not sure if we’re in the minority, or whether marketing assumes users want this eye candy and most people just put up with it. All it does is delay the information I want, for the sake of an animation that has no informational content whatsoever.
1. Go to Settings > About phone, then tap on Build number a bunch of times until it tells you “you’re already a developer”.
2. Go to Settings > Developer options, and turn off all of the animation effects (or set them to very low values).
Now you have a 5-year-old phone.
I often wish the core libraries would come with something like this. But I’m assuming they don’t want to take on that level of responsibility, and certainly don’t want to complicate their actual implementations.
It’s a simple metric to determine feature support.
The alternatives aren’t too pleasant:
1. Don’t have switches, and your app simply runs slowly (or not at all) on older devices, or isn’t fancy enough on newer devices.
2. You end up writing your own massive selectors, probably by getting the device name and special-casing things that way. You’ll probably miss devices or screw it up (see the sketch below).
3. Get too complex, maybe tracking things dynamically – things most people don’t want to do.
…
It’s not exact, but often it is good enough.
The use of the year is interesting. Time will tell if that makes it easier or more complex in the grand scheme of things.
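To make alternative 2 concrete, this is the kind of hand-rolled selector the year class saves you from writing (a deliberately bad sketch; the model strings are made up):

    import android.os.Build;

    // The "massive selector" approach: match device names and hope.
    // The list rots immediately and misses every device not on it.
    static boolean isLowEndDevice() {
        String model = Build.MODEL == null ? "" : Build.MODEL;
        return model.contains("GT-S5")   // some hypothetical low-end models
            || model.contains("LG-P5");  // ...and so on, forever
    }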
By this logic, the Nexus 4 will just get simple animations? It’s still a quad-core beast with 2GB of RAM and enough storage for those fancy images.
I really wanted to get into Android development but this fact kind of scares me.
Is there no such thing as auto-tuning the performance of the system based on its hardware specifications (like Windows has done for ages)?
The rest a developer might care about is total RAM usage and proper scaling.
Hi,
So, if someone happens to have an extremely fast quad core CPU with only 128 MiB of RAM, or a phone with 8 GiB of RAM and a slow (more power efficient) CPU, what happens?
If/where it matters; applications should find out how much RAM, how many cores and how fast they are (whichever is relevant for the situation) and base their decisions on useful information. A half-baked “approximate year” only makes incompetence seem acceptable.
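In Android terms, asking for the real numbers is not much work either. A sketch using standard APIs (totalMem needs API 16+; note that availableProcessors() can under-report on big.LITTLE chips that park idle cores):

    import android.app.ActivityManager;
    import android.content.Context;

    // Base decisions on actual figures instead of an approximate year.
    public final class HardwareFacts {
        public static int cpuCores() {
            return Runtime.getRuntime().availableProcessors();
        }

        public static long totalRamBytes(Context context) {
            ActivityManager am =
                    (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
            ActivityManager.MemoryInfo info = new ActivityManager.MemoryInfo();
            am.getMemoryInfo(info);
            return info.totalMem; // API 16+
        }
    }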
– Brendan
The “device year” is quite clever, provided that it is used simply to set default options in the app settings (e.g. “Show animations” etc.) when the app is first installed.
After installation, the user should be allowed to tweak those defaults, with the app perhaps warning about performance issues if turning a setting on would go beyond the device year’s capabilities. As ever with Android, the more app settings you can change, the better.
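Concretely, that first-install seeding could look like this (a sketch – the preference key and the 2012 cut-off are my own inventions, and I’m assuming the static YearClass.get(Context) entry point mentioned earlier):

    import android.content.Context;
    import android.content.SharedPreferences;
    import com.facebook.device.yearclass.YearClass;

    // Seed user-visible settings from the year class on first run only;
    // after that, the user's own choices always win.
    static void seedDefaults(Context context) {
        SharedPreferences prefs =
                context.getSharedPreferences("app_settings", Context.MODE_PRIVATE);
        if (prefs.contains("show_animations")) {
            return; // the user (or a previous run) already decided
        }
        int year = YearClass.get(context);
        prefs.edit()
             .putBoolean("show_animations", year >= 2012) // illustrative cut-off
             .apply();
    }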
From the article:
While it might look ‘elegant’ when browsing the source, it’s still not good. Every year has seen slower, faster, lower-end, higher-end, and midrange devices, with lots of different CPU types and builds, paired with several different GPUs, shipped in varying memory configurations. Predicting how some app would have run on ‘a’ device from 2013 is an impossible task.
I’m not saying we don’t need something to judge differences, but I don’t think this is it. Looking ‘simply’ at CPU & GPU model and clock would be a better way to judge the level of effects to use, IMO of course.