“Having read this, one realization is that better code often means less code. I don’t think in terms of lines of code exactly, or something similarly stupid, but in terms of meaningful code. However, the argument for less code isn’t about making code as compact as possible, avoiding redundancy, and so on. The argument is about not writing code at all whenever that is reasonable or possible. Should we focus on deciding what should and should not be built instead of polishing our software development craft, then? Yes and no. Yeah, I know. Exactly the kind of answer you expected, isn’t it? Anyway, you can’t answer this question meaningfully without a context.”
Better code is better because it suits its task better. Sometimes it is smaller, sometimes it is faster, sometimes it is more maintainable, sometimes it is easier to debug, sometimes it is easier to change, and sometimes, if you are extremely lucky, it is all of these. All good programmers realise they will always have ways to improve how they do things over time, given the opportunity. I’m being very broad, because a good program designed to run in 2KB on a Z80-based microcontroller is going to have different requirements and constraints than a good program meant to run on a modern desktop computer.
Programming languages are like other languages if you ask me.
An experienced programmer is probably more eloquent at defining what he/she wants the code to do, and is also likely to be more fluent in his/her chosen programming language.
An experienced programmer is also more likely to write better code, code that is more generic or more to the point, and is probably also better at choosing which code should be to the point and which should be generic.
So an experienced programmer is more likely to write better code, and because of that eloquence you also get shorter code.
This means shorter code (because of the programmer’s experience and knowledge) is usually faster and has all the other properties you mentioned.
But hey, that is just my opinion
shorter code != faster code.
For instance, if I were writing a video codec, faster and better code would be using hardware frame buffers as opposed to purely software rendering. However using hardware acceleration adds a lot of code as well as complexity.
shorter code != maintainable code.
Modularising code can lead to easier debugging (as classes can be tested in isolation), and better maintainability in many cases too. However modularisation produces more verbose code.
However, I think the OP was most accurate when he stated that “Better code is better because it suits its task better” – i.e. there is no definitive rule stating code must comply with x, y and z, as the circumstances will vary depending on the project. (Case in point: I’ve written some horrible kludges before because it made the most sense for that particular project to patch the code in the crudest possible way, given the time constraints and the (lack of) significance of the routine being patched.)
My point was: if the programmer sucks at programming, you’ll probably end up with much more code which ends up doing the same thing, only slower and probably buggier.
Yeah, but that’s got nothing to do with this topic whatsoever, as crap developers will write crap code irrespective of best practices.
Generally yes, however I regularly see cases where people have just put in a try … catch instead of actually understanding and dealing with the root cause (such as checking for the obvious condition, like a null object).
Most bad code comes from one of the following:
* Inexperience.
* Not Invented Here (NIH) syndrome.
* Re-inventing the wheel.
* Not using standard libraries (inability to Google or use API docs).
* Lack of coding standards or code reviews.
* General Management WTFs.
* Policy problems (you are not allowed to change this because of x, y and z, which are all retarded problems)
* Hacks due to laziness, or lack of understanding (happens a lot in Web development with either JavaScript or Internet Explorer).
* People that don’t have any professional integrity.
Oh, gosh, yes. Agree. And this has nothing to do with experience or how long the coder has been programming. I regularly see clangers from people who should know better.
That’s relative. I’ve written code for a very specific CRUD/persistence layer, for example, that would have been just as simple to write using another API. Except that, given we were using MS .NET 2.0 at the time, LINQ was in alpha when we started the project, and most other suitable persistence frameworks were fighting against the goals of the project. What we really needed was NoSQL, but that wasn’t a well-known concept then.
Again, relative. As with above – we did reinvent the wheel. But as noted, we needed something that worked. Reflection and custom crafted SQL worked just as well to produce results.
Depends on what you are trying to achieve. I’m sorry, this really does contradict the “don’t re-invent the wheel” ethos.
Bane of my life. I hate external contractors with a vengeance.
Like farming out all money making projects to a third party and then wondering why you are skint? Yep. Been there.
Sometimes legacy code is legacy code for a reason. Sometimes keeping the status quo and quietly redeveloping the back-end is a better prospect.
Having supported a number of legacy systems, I feel this pain regularly. But, sometimes hacks are a means to an end. If you are supporting legacy software with minimum development and no budget, you hack and patch. If you are working on a bluechip level product, no, you don’t do that. Horses for courses.
That depends on the culture. It’s hard to care when no one around you cares.
Agreed, shorter code is often slower, and it can sometimes be more difficult to understand if the complexity needed to make it shorter increases (not always). The fastest text output routine on the Amstrad CPC, for example, is about 2 kilobytes because it has masses of repeated LDIs and sequential arithmetic operations without many loops, whereas the simplest is about 25 bytes but takes ages by comparison to render. That matters when you are trying to synchronize things in a demo or game, but less so when writing a business application – but then those 2 kilobytes might be more precious than the speed. Ah, such trade-offs… to achieve ‘better’ code.
Hell, no. Or yes?! Your point is too general. I’m sure any decent programmer has written code that proves your point both correct and wrong.
DeepThought,
That’s the problem, these are nice abstract ideologies but they’re meaningless in practice since everything is relative. Write as much code as you need, but no more! Well, that almost goes without saying, but it doesn’t acknowledge the evolutionary processes that software undergoes to get from A to Z. Like others have said, sometimes more code is better than less; there are so many factors that need to be considered (i.e. modular versus hardcoded, efficient versus simple, quickly hacked together versus long-term maintainability). It would be ignorant to push forward an absolute ideology up front without even taking into account the specific requirements of a project.
Which is exactly what I said.
Sorry, then I missed your point.
“The Dao that can be spoken is not the Eternal Dao.
The name that can be named is not the Eternal Name.”
You can’t really define the best code until you’ve come across it, and it’s usually not very general purpose.
Personally, I find the Halstead metric a good heuristic while I’m writing code. I think it’s much better than cyclomatic complexity, which strikes me as more cargo cult programming than science. A problem requires as much cyclomatic complexity as it needs, not some arbitrary number.
However, the ratio of unique operands and operators to the total number of operands and operators in use is still a very good sign of whether you’ve overcomplicated something. It also generalizes somewhat to design, substituting classes/servers/clients/subsystems for operands and states for operators.
Keep one or the other as low as possible while coding/designing/documenting.
I think there are a lot of great tools now that give an idea of where a programmer needs to improve.
Also, I think tools that generate code could be a good solution; surely that’s the way to go… there is a lot of research on this kind of tool.
For me, better code means any other person can understand the workflow and the purpose of that code without needing a 1,000-page manual.
That means, in most scenarios, that the code will be suboptimal (when compiling and running it).
So, the real question would be “code faster or code readable”?
Sub-optimal code would be useless for kernels or real time systems.
Plus, the reason for code comments is to make code readable. So there’s little point, in my opinion at least, in writing deliberately crippled code just to make it readable.
That said, I don’t agree with needlessly obfuscating code just for the sake of gaining a few clock cycles either.
So we’re back to the earlier points at the top of this discussion: that there’s a time and place for every methodology.
…is a futile metric. Sure, there’s good code and bad code, but it draws focus away from the solution itself and nit-picks at the implementation. Often the developers are hired/contracted to implement totally asinine solutions to a given problem. The best code in the world isn’t going to be appreciated by the users if the solution doesn’t work for them.
Who is good code important to? It’s important to us as developers. But in my experience users & clients don’t care, at least as long as the developer is able to manage the mess (which isn’t always a given).
A specific example I’m fighting with a lot these days is osCommerce. Unfortunately, the code is very poorly written. It is dependent upon PHP’s notoriously problematic “magic quotes” feature. It supports a kind of plugin, but these lack modularity, therefore each plugin needs to patch the base source code. Source patches often overlap with one another and with our own customisations, such that adding new plugins is error-prone and not straightforward. These source patches are inherently version-specific. Each adjustment we make locally puts the code base that much further from the mainline. Software updates are non-trivial, which is a major problem given osCommerce’s vulnerabilities. Also, there is tremendous functional overlap between the include files of the website and the administration panel, resulting in twice the maintenance burden.
So I hate osCommerce’s code, but it happens to be included at most hosting providers, and because of that many users will try it out and consider it a good base solution for their website without ever having looked at the code.
Edit: My conclusion is that both the solution and the code are important, but they are important to different people.
Agreed. In my experience, good code is code that gets the job done. If there is enough interest, money, and time to attempt to reach the other milestones – readability, maintainability, simplicity, speed, etc. – then I’m certain that good code can become better, maybe best. But users, clients, or managers might not even notice, because – more often than not – they don’t care about the things we, as developers, care about.
I would only add reliability to that. If the code does its intended tasks reliably, it’s good code. Anything beyond that is simply a matter of opinion.
Really bad code is bad code, regardless of if it’s ‘more’ or ‘less’. More code can often be more efficient or faster — just ask those of us working in assembly about ‘unrolling loops’ some time.
Really I think most bad and/or bloated code can be blamed on bad habits. Piss poor inconsistent formatting (It’s called TAB and ENTER, USE THEM!), needlessly complex and cryptic variable and function names (because I’m so sure 6 months from now you’ll remember what ctx1168v means), pointless overuse of commenting instead of clear code, over-reliance on code ‘cleaning’ tools like HTMLTidy instead of just writing it properly in the first blasted place, failing to learn the language thanks to using autocomplete as a crutch, etc, etc… In fact a lot of the ‘tools’ that supposedly make people more ‘productive’ — like autocomplete and color syntax highlighting, IMHO reduce the efficiency of the programmer and just promote ignorance. (or in the case of the latter just make code an illegible acid-trip of color where you can’t actually see the errors!)
But more than bad habits, it really comes down to the ignorance of the average developer. So often I come across code these days that is brute-forcing things the language already has constructs to handle. Sometimes it’s simple stuff, like the programmer not understanding binary… like this gem I just helped someone with in C (Checksum is a uint32):
Checksum = (Checksum – ((Checksum / 0x10000) * 0x10000));
Cracked me up, since that mess of divides, multiples and subtracts is just trying to pull the bottom 16 bits… that’s AND’s job!
Checksum &= 0x0000FFFF;
Functionally identical. I was actually able to speed it up even more by using a uint16, so it didn’t even need the AND.
But that’s just a simple example – I just saw some jQuery asshattery where the developer was a master of jQuery stuff, but was brute-force converting a JavaScript Date to UTC, then dividing by 1000, 60, 60, 24, etc., with endless if statements to calculate the month – when JavaScript’s Date object already has methods for extracting seconds, day, month, year, etc… I see that type of rubbish all the time these days, and it just adds to my saying “the only thing you can learn from jquery is how NOT to program javascript”. (Not that 90%+ of the crap people use JavaScript for has any business on websites in the first place!)
You see it in PHP all the time — PHP has a massive function library, you need something done, there is likely already a function to do it for you; but you’ll still see people brute-force coding things. In that case at least the massive library can be used as an excuse, but it’s still a laugh to see people manually iterating a file directory to dump it into an array instead of just calling the glob function… or manually writing a function to do SHA512 instead of just calling the hash function. You see it all the time, quite often in ‘professionally’ written software too! But as always, there’s a difference between someone working professionally and someone who does professional grade work.
Nowhere do you see inefficient, bloated, bad code more than in web development. Coding practices a decade or more out of date are still taught as the norm; and along comes the steaming pile known as HTML 5 to further piss all over accessibility, clean code and minimalism, resulting in even more bloated bad code. Worst of all, though, is the ignorance of the average person writing PHP when it comes to HTML – a disturbing trend since, as a hypertext pre-processor, the ENTIRE POINT of PHP is outputting HTML. Look under the hood of turdpress and you’ll repeatedly see the use of classes that show the people making it have NO clue how CSS even works or what inheritance is.
Take the idiotic default markup turdpress LOVES to throw at lists, with title attributes redundant to the content, attributes like TARGET that have no business in markup written after 1998, and endless pointless redundant classes that serve ZERO legitimate purpose when there’s a perfectly good class or ID on the parent. If every LI and A inside a UL are getting the exact same class, NONE of them need classes. That’s the ‘cascading’ part of ‘cascading style sheets’ and it’s like the dimwit ninnies writing wordpress templates are completely ignorant of it.
Though that’s hardly surprising — wordpress is for and by people who don’t know HTML or CSS… think about that.
Static CSS inlined in the markup, static scripting inlined in the markup (so much for leveraging caching models), non-semantic markup or abusing semantic tags (like lists) out of some ‘tables are evil’ paranoia (when tables are semantically correct — for tabular data!) — it gets worse every year, and this new HTML 5 garbage (at least in terms of markup) does nothing to improve it — if anything it’s the worst of HTML 3.2 all over again!
But I still remember the lessons that were drilled into me decades ago when it comes to writing software – the less code you use, the less there is to break. It’s as true writing PHP, HTML and CSS today as it was back then, hand-assembling RCA 1802 machine language.
I think it’s a little OTT to state that colourised code promotes lazy / bad development. But I do agree with your points about the other productivity tools. However, I think it’s also worth bearing in mind that modern languages are so complex these days that it would be silly for any developer to memorise every function and parameter. What normally happens is that the important / regular APIs are memorised and the less frequent APIs are remembered as “those functions I know exist but need to double-check the docs before using”.
While your CSS rants are mostly correct, there is a time and place for inlined scripts / stylesheets. Each new file is a separate page request and can generate quite a bit of overhead. Sometimes it’s more efficient to have a small portion of inlined content rather than generating an additional resource file – not just from the user end (as each HTTP request will add quite a bit of bloat for small files) but also from the server end (fewer connections == lower chance of generating your own DDoS attack during peak loads).
Like everything though, there’s a time and place for each tool. It’s up to the developer to make a professional judgement on which methodology is best suited for each solution. It’s just a pity that -as you rightly stated- some developers are not proficient / too lazy to make the best judgement call.
Seriously guy, this is not rocket science:
http://xkcd.com/844/
The best code is the stuff that someone else can pick up, understand, and work on the quickest!