I’ve been developing software for quite a few years. One issue that comes up again and again in my work is this concept of design and implementation. I recall it being a significant part of my education in the University of Waterloo’s Computer Engineering program as well. The message was always the same: never write code first. First you must design the software by writing a design document, flow charts, pseudo-code, timing charts… then it’s merely a trivial matter of implementing it. Note the attitude towards implementation here: the real work is in the design, and implementing is just a trivial afterthought. It sounds so simple, doesn’t it? Now, how often does this work out in real life?
I’ve perhaps not been writing software for as long, but I’ve been part of starting new software projects.
One thing I learned is that something bigger than the design-before-implement you’re talking about is how business gets its profits. If you tell a manager that the only thing preventing them from being profitable to corporate is three to six months’ worth of design and development, you had better have an easy-to-understand reason why doing so makes sound business sense, and that doesn’t mean repeating the tome of software development to them.
Business doesn’t want to pay for a “programmer’s paradise” where we have endless iterations of design, charting, etc… to come up with a glorious design. They want to know “how long will it take to write?”, from which they will derive the answer to “how much money will it cost?”
Sure, there might be outliers like Google where the managers are well aware of the importance of design, but in the real world the opposite is the case. I’ve just resigned myself to building in time for design/testing/etc… so that I put out quality code, and that time is part of what the managers see as the “cost” to get work done.
The problem is that for non-trivial programs, it is nearly impossible to estimate “how long will it take to write?” with ANY sort of accuracy. How long will Stephen King’s next book take to write? Business HAS to fund that “programmer’s paradise” for at least as long as the first project takes, no matter how long that is. Then they need to keep enough funds from the profits (assuming it made a profit) to fund the paradise long enough for the next project, which won’t take the same amount of time as the previous one.
I think your description of the business world is quite accurate.
However, you are overlooking another important world: the academic world. I use the word academic in its most concrete sense, as a collection of educational institutions, not in the sense that’s used in forums where it means impractical, nebulous, or distant from reality.
In long term academic projects, programmers are part of the very user base they are writing software for. A biologist writing a program knows pretty well what other biologists like to work with. The programmer is not given a set of highly compartmentalized tasks. Rather, the programmer has a large, amorphous idea of what the program should do, and addresses the problem without the significant planning that large software firms employ. Their chief concern is to produce results that earn a publication. Writing well designed software is part of that, since the scientist-programmer must constantly refine and restructure the program to further his or her study. Programmers are given extraordinary flexibility out of necessity.
It’s this very flexibility that Google uses. By giving programmers 20% of their paid time to work on any project that interests them, Google has amassed 50% of its software from this sliver of time.
It’s no coincidence that the free software movement has taken hold very strongly in the software of the physics, chemistry, and biology fields. The very nature of open software matches science’s need for openness in order to progress.
Because free software is now widely respected in business circles, business managers are recognizing the importance of decentralized planning. If it weren’t profitable to do so, how can one explain the success of Linux in the business world? Expect to see programmers given much more control over their work.
I liked this article. That’s it; nothing snarky to say.
Without being too snarky, I did not like it. It described a problem I was very familiar with, but did not add anything new, nor did it do a very good job of explaining the problem.
It’s a good article, but he independently discovered an already well-established philosophy: don’t document just for the sake of documentation.
It sounds like the author works at a company which doesn’t believe in selectively documenting a system, but it’s well established in ALL the software process camps that the trick to success across the SDLC is to “realize what decisions you can defer and what you must take now” – that is, only specify what needs to be specified beforehand, and leave the more trivial decisions for when you’re in the IDE.
The software development body of knowledge is massive. There are a ton of strategies available for a software team to use: use cases, invariants, sequence diagrams (a personal favorite) – in fact, there are probably formal ways to describe every single aspect of a piece of software. But that doesn’t mean you have to! Use common sense and remember why you’re documenting: to get your development team (and maybe stakeholders) on the same page.
That said, thank you Yamin for writing this article, because you started a good, substantive conversation about a topic that’s way more interesting than anything that’s been published on this site in a long time.
I disagree; IMHO this article oversimplifies things.
I understood the article to be saying that either specifications are too vague and therefore not useful, or they’re so detailed that you might as well use a (pseudo)program instead.
Good specifications (like good programs) are hard, but are they impossible?
Here’s a specification for a function which must provide the square root of a non-negative Number with precision P:
Result >= 0
and abs(Result*Result - Number) < P
Note that computing a square root is quite difficult, but checking that the result is correct is very easy.
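To make that concrete, here is a minimal sketch of that spec as an executable check, in Python (the helper name check_sqrt is mine, not from the article):
[code]
import math

def check_sqrt(result: float, number: float, p: float) -> bool:
    """True iff `result` satisfies the spec:
    Result >= 0 and abs(Result*Result - Number) < P."""
    return result >= 0 and abs(result * result - number) < p

# Any implementation can be validated against the spec without
# knowing anything about how it computes the root:
assert check_sqrt(math.sqrt(2.0), 2.0, 1e-9)
[/code]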
Good specifications are like this: they explain the issue and state the properties desired of the output, *without* restricting the implementation to one particular approach.
That said, the difficult part with specifications is error handling: I must admit that I don’t know how to handle that part correctly.
To a certain extent the problems this article mentions are insoluble.
In my experience your best defense is to keep your projects modular and have an incremental design approach. I am not promoting Agile, but the absolute worst thing you can do is set out to create one massive monolithic project that only gets released to users as a single block. Witness the Netscape rewrite.
It is *extremely* important to get something out there, and to get users using it. They will find bugs. They will direct new features, and they will stop you from wasting your time designing stuff nobody wants.
The more you tolerate imperfection (within limits of course) and an incremental release cycle, the more likely you are to have a successful product.
Congratulations. This is the best essay I’ve read on OS News ever. No kidding. Ever.
The problem I have with the best article ever is that it seems like it is trying to program me to think like it. It is way ahead of its time, which is exponentially changing. It was as much effort for me to wrap my mind around its further implications as it was to slow my thoughts down to the speed of the following words.
“Design and Implement” is a style of getting things done. Trees get forests done, but the forest is specification for the space. The purpose for specifications is too big to see, like the planet. The specification is a style that makes more readable now for some unique individual in the forest we cannot see. One is too small to see the planet. The payload, Yamin says, is the actual thing, the facts inside. The payload of Design and Implement is getting things done while in a primitive society that is full of “cover your ass”.
In support of the article’s premise, why suffer the duress of such dress? Network protocols packetize and frame better. Computers encapsulate better. When we build a computer better at human language than a human is, we won’t be so primitive any more. We will then have a basis for tailoring a common language that will do as Yamin says.
Design specifications are like manuals of style, the wordiest of manuals, and love to propagate themselves. And (it is my momentary opinion that) to the extent that we deny this our machine-like repetitive propagative nature, we will fail to materialize the happiness that comes from getting change done. It is most human to say “Silly language, words are for deeds.” But humans are fallible and imperfect, and are indeed a cog in a planetary system in transformation, and out of which gracefully we move.
Human language is largely purposed for programming our motives. (We are our own worst enemy.) What motivates each of us is both our verbal and motor skills. Words are like this and are used to control through mass-media programming. So you see, now, the practical perspective of specifications? I hope not.
Granted wholeheartedly, many words are an inhumane affliction, like too much physical labor, and can “kill the spirit” of our moment by moment. Witness, for example, these words, as few are ever willing, and fewer ever will. That many Americans choose not to read is as natural as a cat disliking water. And we trust this because organismic behavior is in the now, while wordy programming is a language for some private future. The rub of life is such.
OK. No two humans are alike, nor can they understand in the same way, most specifications, or any book, or language. Complexity is questioned, and the questions determine the answers. So we have the “many answers” problem. There are as many answers as there are possible frames or paradigms. This is the price we pay for change from trusted old ways. Trustworthiness increases with decreasing complexity and sophistication; fears and complaints increase with increasing complexity. So we change slowly, gracefully, balancing all the way to the future.
Language is for someone’s “to your specifications” for a future. Specifications are visions, and they trump language like pictures trump language. Even when we get a common language, life will have the aspect of “design specifications” in it somewhere, because we’re not all visionaries. And no one can see the larger dimensions of our forms. What the article has done, stunningly well, is highlight an important aspect of a life, seen from Yamin’s private position, that we cannot wholly understand.
The future is being built by specifications from project builders. The planet is a “project I’ll” (projectile) in space, projected from private positions. It is far too human and thus a project of protracted proportions. I love words, but they bully pound to a pulpit. There should be a new regulation on them…
It was the best of articles; it was the worst of articles, it was an article of the times. Good programming OSAlert.
P.S. Thought developed language. There is thought without language. But there is no language without thought. Similarly, I predict that the “thought” that will develop a practical, common language will come from the fruits of the natural language processing field in computing.
[quote]”I’ve yet to run into a situation where I’ve written code and the compiler has not followed my instructions and that is the reason something broke. It hasn’t happened yet.”[/quote]
One of three things is true:
1. Yamin hasn’t been writing software for long enough.
2. Yamin has been very lucky to get bug-free compilers (I want to know which ones those are, honest!)
3. Yamin is lying.
I’ve seen compilers screw the pooch firsthand and violate what was meant, even when not optimizing the code. The problem is that compilers are written by failure-prone humans, and they’re complex beasts. Couple that with the fact that, as this article does state, the source code IS the design, and… not all designs are identical for the same programming language implementation, which is something Java was supposed to remedy. Other than that horrible flaw in the article that smacks of unreality, it’s a great article.
Perhaps I’ve never dealt with low enough level languages, or buggy enough compilers, but I’ve never encountered a compiler that didn’t do exactly what I told it to.
A developer who imagines he has found a compiler bug is wrong, in 99.999% of the cases.
The Borland C++ 5.0 compiler (I forget which one exactly; they also shipped an Intel compiler in that package) generated invalid code in debug mode for the comma (,) operator: I saw it at work when someone was using the operator to generate debugging code, and Borland confirmed it was a bug when tested. Also, optimizations in GCC aren’t so guaranteed at the highest optimization levels, especially on older compilers (before 3.x). If you write C/C++ software and you’re unaware of this, you’re inexperienced. Windows-based compilers, too, often generate subtly different code at top optimization levels than they do unoptimized. I have no reason to believe these issues are restricted to C/C++ compilers, as back-ends are often shared among front-ends for the actual compilation down to assembly, especially in the GNU Compiler Collection.
In every one of the two-dozen-or-so cases where I’ve been trying to build an executable and the compiler message was “ERROR: Internal compiler error in phase…”, what I have found is certainly a compiler bug.
In every one of the similar number of cases where the compiler seg-faulted, it was also a compiler bug.
I think your “99.999%” is a WAG.
What compilers *have* you used? It’s possible you haven’t hit their bugs, but I imagine it’s more likely that you just haven’t used the compilers in such a way that their bugs were exposed.
In general, they are complex beasts, and they do have bugs.
I don’t deny the existence of bugs in compilers, but they are increasingly rare for the following reasons:
1. It’s much easier to test compilers than graphical applications. One feeds a predefined set of code to the compiler and ensures that it produces properly functioning output (a rough sketch of such a harness follows this list). It’s difficult to test graphical applications, since they don’t follow the simple input-output design that the compiler neatly follows.
2. A compiler like GCC has been gradually refined over decades. Software that’s been slowly and continuously developed is quite stable.
3. It’s easy to detect a compiler error when the compiler compiles its own source code. Comparing output of the bootstrapped compiler to the original is a very powerful way of debugging it.
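As a toy illustration of point 1, here is a rough sketch of such an input/output regression harness in Python; the compiler name mycc and the tests/ layout are invented for the example:
[code]
import pathlib
import subprocess

# Hypothetical layout: tests/foo.c paired with tests/foo.expected
for src in sorted(pathlib.Path("tests").glob("*.c")):
    subprocess.run(["mycc", str(src), "-o", "a.out"], check=True)
    got = subprocess.run(["./a.out"], capture_output=True, text=True).stdout
    expected = src.with_suffix(".expected").read_text()
    print(("OK  " if got == expected else "FAIL"), src.name)
[/code]
The same predefined inputs can be replayed against every new build of the compiler, which is exactly why compilers are easier to regression-test than GUI applications.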
It has been said,

This statement is slightly too strong; there are a few cases where there is a finite and small number of inputs, and one can “test” the program by running it on all possible cases. For an example, consider a tic-tac-toe program.

Most real-world programs do not deal with such a simple situation.
You forgot options 4 and 5:
4: Yamin is smart enough to only use reliable compilers and smart enough NOT to use certain advanced features of programming languages
5: Yamin forgot about it and found another way to write the piece of code without relying on the faulty feature of the compiler
Eh, could be. In my past, however, there were no compilers available which were “perfect” for C/C++ on Windows, as they weren’t as standards-compliant as they are now. And even if they had been, that’s all fine and well until you find that obscure bug, or have that particular set of code. In other words: everything already on the market was known to be imperfect; you just had to choose which imperfections you dealt with. And the three most important things about releasing final software, especially with any optimizations turned on: testing, testing, testing! Just because something works as unoptimized code does not mean that the compiler will create optimized code that’s correct!
Fair enough.
In general though, the compiler is rarely the problem. That was my main point. Never say never in an article; I’m sure there are compiler bugs out there.
I’m inclined toward option 2. Depending on the decisions made in your organization/company, you can easily spend some years before hitting that head-scratching compiler bug (I’m pointing at you, first releases of the GCC 4.x series, GCC 4.3.2, and VC++ 6 before the first SP).
If you are lucky enough to be using a well-stabilized minor version of GCC, or a “semi-old” version of VS with all the service packs, you have one of the first conditions for being in coders’ paradise: you can really trust your compiler.
And coming back to the point of design: 10 years ago, I remember that UML tried (and was successful at some point) to simplify design and coding by bringing them together as much as possible. I remember it was quite cool to export/import all the definitions of our classes in Rational Rose. No idea how much it is still in use.
Your main issue seems to be with implementations not matching the intended design or specification, presumably because there is ambiguity in the design or specification.
I understand your point about code being the only true specification; however, I do not think design should be expressed solely in code. That way, more mistakes will be made in the code.
The code is an implementation of the higher-level design, and more often than not the implementer will spot issues in the higher-level design that need correcting. This leads to an improved design, and code that is more likely to be fit for purpose; consider it a verification of the higher-level design, if you will.
Also, you fail to mention Agile development methodologies such as Test-Driven Development and Scrum, which reduce the big-bang approach and allow for a much more formal and verifiable approach to implementation.
Test-Driven Development drives out code design through a test-first approach. Scrum and sprints allow for frequent interaction with the customer to discuss and demo small advances during the implementation phase, ensuring correctness even if the specification is ambiguous.
I don’t know much, but IIRC that is still a valid question, and the basis of the two major splits in modern linguistics.
Thanks for the feedback. I just thought I’d put some pen to paper and see what happened.
“To a certain extent the problems this article mentions are insoluble.”
I don’t know about that. A lot of the attitudes brought to software in attempts to solve these problems actually make them worse. These attitudes stem from the problems listed in the article.
By changing attitudes and recognizing that there is no dummy implementation and it is all design, we can address some issues in a better fashion.
I am also a big supporter of agile. Because software is 100% design and agile is iterative (as is virtually all design), it is a better fit than many models.
Subject matter is of critical importance, and I wish I had written more on it, but I figured the article was getting too long. If you don’t know your domain, how do you know what to program?
In any case, I don’t think anyone expects programmer paradise. Yet, there is certainly much room for improvement. There are companies that do get this as well. There are also companies that used to get it, but are now losing their way.
I am sure the issue can be ameliorated, but whatever your process, however broken it is, you are going to be better off if you develop a modular product, in stages, with a quick, iterative development cycle; you will be most protected from whatever idiocies exist in your design/development process.
You will also weed out the incompetent. In monolithic products, developers can fall down the rabbit hole and not produce anything of value for months, even years. Frequent releases will quickly highlight non-producers.
I’m gonna say the most important method is not:
– coding after thorough requirement gathering and design
or
– coding for speed
but instead:
– coding for fault tolerance
No one codes for faults or latencies. Everyone assumes things work most of the time.
Coding with requirements, people rarely take into account every possible error point… and if they do recognize errors, it’s big failures like machines going down, not things like timeouts or latencies.
Coding for speed, well, programmers will just say “reboot a server” before spending any man-time making their code more robust, and be sure to blame the DBAs and the LAN…
When a problem does happen, everyone screams to fix it ASAP… inducing more bugs and making your company a “reactive” company instead of a “proactive” one.
I feel a lot of people think this way: “Assume machines and networks will always work. Get a program up as fast as possible meeting all the ‘basic requirements’. Make $$$. When something DOES break, reboot things and restart things. If it still breaks, give the customer a credit. If it still breaks, put in service-level-agreements. Finally, fix the code.”
It’s easier to just ‘design for errors’ to begin with.
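A minimal sketch of what “design for errors” can look like in Python (the function and its parameters are invented for illustration): timeouts and latency are treated as normal inputs with a planned response, not as surprises.
[code]
import time
import urllib.request

def fetch(url: str, retries: int = 3, timeout: float = 2.0) -> bytes:
    """Fetch a URL, treating timeouts and latency as expected events."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError:  # URLError and socket timeouts are both OSErrors
            time.sleep(2 ** attempt)  # back off instead of "rebooting a server"
    raise RuntimeError(f"{url} unavailable after {retries} attempts")
[/code]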
On page two, you have a specification for GCD and an implementation in pseudo code, and claim the code implements the design.
Actually, the design calls for two NON-NEGATIVE numbers. Your implementation does not.
The design also gives a proof of program termination. The implementation does not.
Give your nice little pseudocode the inputs of -1 and -2 and see how long it takes to stop. [In most programming languages it will stop… eventually, with an underflow.]
If you were to program in Eiffel or Ada, this simple implementation would be immediately thrown out by your programming group, as it has no pre- or post-conditions to check the sanity of the code. Design by contract is actually a way of specifying things so that checks don’t litter your code, and it allows you to do sane things (sometimes even at compile time) to ensure that your code works.
If you use something like Smalltalk or Lisp, the routine will never end (until your computer runs out of RAM/disk space), as integers in these languages have infinite precision.
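For what it’s worth, here is a rough Python sketch of those contract-style checks (assertions standing in for Eiffel’s pre-/post-conditions; the code itself is mine, not the article’s):
[code]
def gcd(a: int, b: int) -> int:
    # Precondition from the design: both inputs non-negative, not both zero.
    assert a >= 0 and b >= 0, "spec requires non-negative inputs"
    assert not (a == 0 and b == 0), "gcd(0, 0) is undefined"
    orig_a, orig_b = a, b
    while b != 0:
        a, b = b, a % b
    # Postcondition: the result divides both original inputs.
    assert orig_a % a == 0 and orig_b % a == 0
    return a

# gcd(-1, -2) now fails fast instead of looping toward an underflow.
[/code]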
I have a relative who works for Microsoft. His job is “programmer documentator” or some such.
What this means is that the brilliant folks at MS crank out their code, and it is HIS job to figure out what they have written (hopefully by talking to the programmers, if he can pull them away from their coding) and write an explanation so that OTHER programmers or managers can use the program or library in their own work, or make PowerPoint presentations about it.
Think about that.
I’ve talked to him about creating a design before you implement, and he just scoffs at the idea. That is the culture there. It also seems to be part of the Linux camp, which is finally managing to address those little intangibles that make a system work.
An example of “design” gone wrong can be found here:
http://moishelettvin.blogspot.com/2006/11/windows-shutdown-crapfest…
And I think the whole extreme-design thing (previously called rapid prototyping) started out at Microsoft. The whole point of that is to get something out the door, but you end up with badly designed workflows in the software, resulting in the creation of things like wizards just to do simple things with your interface.
For example, most programs for burning a CD/DVD are impressively bad across the board if you want to do anything more complex than burning a few files to the disk; now everyone copies Apple and their OS X in doing that little thing. When I see a machine with a copy of Nero on it for something more complex, my mind boggles at the mess.
Now. The comparison with architecture is somewhat invalid, mixing it with engineering.
If you are building a bridge, sure, but if you are creating a small building? The architect creates the plan, and the *contractor* tries to create an implementation of that building, acting as a go-between among things like construction methods, resources, finances, government regulations, and schedules.
An example of this can be found here:
http://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse
The fault was with the architect, who created a design that was physically impossible to build; the contractor found a work-around which was structurally invalid. This should have triggered a review of the structural requirements and been caught there.
Here in Seattle, the designer of our world famous library:
http://en.wikipedia.org/wiki/Seattle_Central_Library
… made the fascinating statement, after it had been opened, that he couldn’t figure out how to make the building stand up until an engineering student figured out an elegant way to cantilever the building. The original building design would have folded up like an accordion at the first earthquake, or probably when the books and shelving were installed.
An interesting thing about the library: its internal design was done by differing teams doing different sections. It is overall quite brilliant, but there are interesting places in the library where the different areas… meet… and some interesting and unusable spaces were created. This is similar to much commercial software I’ve seen.
In addition, last year the entire 5th floor had to be removed and replaced due to unacceptable wear (less than 5 years had elapsed). And at the time, a professional mapmaker was hired (at a rather sumptuous fee) to create signs to tell folks how to get where they wanted to go in the building: the equivalent of the software wizards.
Even though technically the software is equivalent to the implementation, the design is NOT equivalent to the software.
Yes, I copied both examples from Wikipedia there. Perhaps the implementation they gave is not an exact implementation of the mathematical description… I didn’t bother checking it. My mistake. Nonetheless, the point still stands. Even if we added the extra checks… it is still far more concise and usable than the mathematical-language description.
“Even though technically the software is equivalent to the implementation, the design is NOT equivalent to the software.”
As to the architect issue: I have to say, I really disagree with you, and that is the attitude I wish to change. I might not be able to change yours, but who knows.
“The architect creates the plan, the *contractor* tries to create an implementation of that building, acting as a go between between things like, construction methods, resources, finances, government regulations, and schedules.”
I am really unsure how this conflicts with my views. Yes, the architect screwed up. The design was bad. Humans design things. Humans are not perfect. It is going to happen.
In the software world, it is a bug. They happen. It should have been caught in testing, code review, somewhere, before release.
All I can say is given a design an implementer should be able to implement just by following the instructions. In that sense, source code is the design.
In the case of your contractor who spotted an error in the architect’s design: that’s a smart implementer. Let’s say, a smart compiler that catches errors. Obviously, no example is perfect. The contractor is doing both the job of implementer and, to some extent, reviewing the design.
Yes, computer implementers (compilers) are very good at following exactly what you design them to do (via source code). Human implementers are much better at questioning or spotting odd errors or dealing with new situations.
If the architect merely had a robot that followed his design 100%, the structure might have collapsed due to bad design even though the implementer (robot) did its job perfectly.
“In the case of your contractor who spotted an error in the architect’s design: that’s a smart implementer. Let’s say, a smart compiler that catches errors. Obviously, no example is perfect. The contractor is doing both the job of implementer and, to some extent, reviewing the design.”
No, you don’t quite get it. That’s the contractor’s *job*. Unfortunately, he made a change and failed to send this information back up to the engineering level. Though the contractor does know a little about engineering, it’s not his speciality.
The Architect creates a “design”, leaving the implementation details to the lower levels.
That’s why I made that statement about Rem Koolhaas, the creator of Seattle’s downtown public library. He is not an engineer but he does know some engineering. He knows enough engineering and building codes to create a design in the required design space so it is physically realisable.
If the project is small, like a wood-framed house, that’s really all he has to do: create the blueprints and send them off to the contractor. The contractor has reality and a set of building codes necessary to make the desired construction.
It looks like this: client-> architect-> contractor-> building.
A *bigger* construction project probably looks more like this: planning committee-> architect-> engineer-> contractor-> building.
Each of these is actually a group. Koolhaas created the master design, and then used separate groups to implement specific areas based on specifications from Koolhaas and planning committees, and using constraints from the next level down: engineering.
The engineer is actually a whole team of folks, including materials (many specialities), building, mechanical, power, plumbing, and ventilation.
Then there is a master contractor/engineer who oversees all the various contractors, each with their own specialities and sub-specialities (down to the folks who pour concrete and dig earth), which generally reflects in layout the whole of the engineering groups.
Now I have arrows pointing to the next level down, but the information flow is always bidirectional, each layer operating between constraints from the level above and below.
Now. This gets us away from your actual assertion that the specification IS the code.
At each level, requirements come from above, and constraints come from below.
As others are pointing out, your code says nothing about requirements nor constraints. It’s just an implementation. It also doesn’t say *why* you decided to implement it that way, which may have an interesting history due to bugs in the hardware (or even in the specification) you are working with.
The GCD pseudo-code from Wikipedia was perfectly correct *code*; there was no error. However, it is neither the specification nor the design.
Another way of putting this is that the architect takes a requirements specification and produces an output: a blueprint.
With a sufficiently high-level machine (in your view, the contractor), it churns out a building. Job done.
Is the blueprint the design or the specification?
No. The blueprint will not tell you *why* those drawings were created, for what purpose, or how they relate to their environment.
Each level you go down loses information about the process but gains specificity.
Another counter-example is taking some machine code and decompiling it back into the original source.
I seem to remember this as an argument against the use of macros in languages such as C. An example is the Min and Max functions, which are typically implemented as macros in C. You cannot tell from the resulting object code where these macros were originally used.
In Arthur C. Clarke’s novel The City and the Stars, you could actually create a building by designing it in some super CAD system, and then the city minds would create the building for you through matter transmutation. Just like what you are saying.
Even then, there is no information about why the building was implemented in that fashion.
Design is not source code.
Design is not implementation. Even in software.
Even discounting possible flaws in compiler design.
And yes. I am unanimous in that!
A good notion, BTW. I am very interested in ultra-high-level languages, where they try to edge closer to design and implementation being the same thing, but they will never get there; there is always more information you leave out to create an implementation.
The problem with code is that it only tells you what the program currently does, not what it is supposed to do. Thus if you mix your code with the specification, you no longer have knowledge of what it was supposed to do.
There are two paths people tend to take to fix this: test-driven development (TDD) and proof-carrying code. In test-driven development, developers write the specification in the form of a set of tests that are assumed to cover everything relevant to the operation of the component. Developers writing proof-carrying code write their source in a way that should expose a fallacy if the software didn’t do the right thing.
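As a small illustration of the TDD half of that, a sketch using Python’s unittest (the gcd module under test is hypothetical): the tests are an executable record of what the code is *supposed* to do, separate from what it currently does.
[code]
import unittest
from mymath import gcd  # hypothetical module under test

class GcdSpec(unittest.TestCase):
    """Each test pins down one piece of intent the bare code cannot express."""

    def test_common_divisor(self):
        self.assertEqual(gcd(12, 18), 6)

    def test_coprime_inputs(self):
        self.assertEqual(gcd(7, 9), 1)

    def test_rejects_negatives(self):
        with self.assertRaises(AssertionError):
            gcd(-1, -2)

if __name__ == "__main__":
    unittest.main()
[/code]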
It is a pity that neither of these approaches suits the field of networking very well, as the network is a rather non-deterministic beast. Tests may pass occasionally and fail in a real-life situation, while writing proofs for the correct operation of the whole network may seem impossible in the first place.
It assumes that people know, ahead of time, what they want, (i.e. a bridge, or a rocket, or a computer switch).
It is rarely applicable to anything that requires flexibility.
i.e. It is rare that someone says (late in the cycle), “you know, I think we really need to send 20 people to the moon in that thar spacecraft, instead of just 3”.
“The problem with code is that it only tells you what the program currently does, not what it is supposed to do. Thus if you mix your code with the specification, you no longer have knowledge of what it was supposed to do.”
My point, as per the article, is that to properly specify what a program is supposed to do… you have to essentially code it. The nitty-gritty of design documents and specifications shows up when problems arise. What is the program supposed to do when situation X, Y, or Z happens? How is this form supposed to behave, exactly? You will end up getting into that level of detail, and ultimately you are writing source code.
I am by no means against general specifications or overall design documents. I would absolutely consider these.
Requirement documents are the most essential… so you know what the ultimate objectives are. But let’s not confuse requirements with design documents.
However, you must always view these as tools to guide the development of source code… which is the final design.
Could you possibly reply to the comments that you are in fact replying to?
It’s the way the system was designed, to keep related comments together. It makes for a better overall discussion.
I have to say, it was really hard to get the main point of the essay. There were so many diversions and metaphors it was quite difficult to discern the true thesis. However, if it was: design and implementation are the same task, then I think that is somewhat obvious.
When people separate design and implementation, it’s because they are different, but related, tasks. I still believe it is a useful distinction to make: even a low-level design can be somewhat language-agnostic, but an implementation is pretty well tied to a very rigid semantics. So, design and implementation: these terms live happily together.
As far as the code being the design, well, that’s ok, as long as it’s not your only reference point. Usually a software methodology benefits from redundancy, this is why things like design by contract and TDD really appeal to me: they encourage you to think more about how you create your software. They basically make you state your implementation twice, once in the code, once in a slightly higher-level in the tests (or pre/post conditions in DbC). This really helps you hammer out bugs earlier rather than later.
“However, if it was: design and implementation are the same task, then I think that is somewhat obvious. ”
Except I would say… there is no implementation. It is all design. Yes, it is obvious to those of us who program. It is not obvious to many who do not.
And yes, I fully agree TDD, agile, … are all good ways to improve and perfect the design. I just didn’t want the article to advocate specific methodologies.
It is really just a consequence of history that the problem mentality (separating design and implementation as human activities in software) is highly tied to the Waterfall model in practice.
I wanted to stay out of that debate and just focus on the problem mentality. I know a lot of people who don’t like Agile or TDD because they view it as hacking, while fully designing something first is ‘proper’. I didn’t want the article to take on that debate, so I avoided it.
Very interesting read. Good stuff.
I’ve been doing this for far too many years, but I’ve learned in that time that the audience for your product (code) is more than other programmers. Top-down works… requirements to functional specs to programming specs to code. Remember:
-> You have to convey the function to tech writers who develop manuals and websites
-> You have to give enough information to QA for them to get their job done
-> You have marketing who wants to know what the product will do
-> You have to let your manager know what needs to be done with enough detail that you can work out a scope-of-effort together.
In none of those cases can you guarantee the reader will know how to read a programming language. And if you leave out comments, you may as well be writing in Klingon.
“Agile” programming, as a technique, was discarded literally decades ago. It was found that the only people who knew the code were the programmers, and it was very difficult to scale the project to add new developers, not to speak of promoting the original developers to other positions. The net result was a program that may or may not have fulfilled the market requirements (remember them, the customer?) and could only be maintained by the original developer. I saw it time and again in my various programming positions: senior programmers saddled with a piece of code because there was no way of educating anyone else about its complexities, since there was no human-readable documentation. (BTW, code is not human-readable, only geek-readable.)
Pity the poor schlub out of college who is trying to learn C++ and is thrust into maintaining a sophisticated piece of code without any documentation.
I’m not against top-down or documentation. I kind of anticipated some people taking the article that way, so I thought I’d preempt a lot of it with the statement at the beginning of the article, as well as in the conclusion.
“Top-down works … requirements to functional specs to programming specs to code”
I’m not disagreeing with that. I would 100% agree with requirement and functional specs.
It’s the programming specs where I tend to have the biggest issue. There are useful design documents and programming specs. This is especially true at the high level. But in no way can you discern the full functionality and behavior of a program from the design document.
My point is that to fully specify a program… is to code it.
For example, let us say you are building a router and some weird condition happens that causes a problem. Some packet comes in with a specific bit set while the router is receiving a BGP update at the same time a new route is being written to the route table.
Is how you handle this going to be in the programming spec? I doubt it; otherwise you end up with a 100,000-page spec.
Your spec will end up looking exactly like code, except written in English, with overly verbose if-statements lacking the clarity of a regular programming language. In the end, your support staff is still calling the programmer, asking them what happens if this happens, or to debug some issue.
I have no freakin’ idea what you are talking about, but it certainly isn’t agile. You are sort of backlashing against the COBOL “let the analysts code” thing, not anything that has to do with process, let alone an agile methodology.
Holy cow, Agile was definitely not “discarded literally decades ago.”
Another problem is consistency of the specifications.
Gödel’s Second Incompleteness Theorem says, informally:
I’ve seen lots of cases where the specifications were inconsistent.
I don’t disagree with this article. Admittedly, when I saw the title and read the first paragraph I started to tighten up a bit, but as I read on I saw what the author was getting at.
Business requirements are probably the most important documents. A well-written requirement will lead to solid test and technical-specification documents. The design documents have rarely (in my experience) ever been detailed, with the exception perhaps of UI design documents.
I actually write more comments in my code than in my design documents… which seems to go against the latest agile programming concepts, but whatever. I’d rather be able to go back and look at a nice block of comments that explains what is supposed to be happening in the code that follows.
I DO believe those requirements and test documents should exist before anything else – and that design documents should really be a set of patterns that describe the best way of solving the problem[s] at hand (as far as we know to date). But the development process does need to be flexible enough to handle change…
Joel has written some really good stuff on the subject of why and how to write good specifications and keep them up to date:
You can start here “Painless Functional Specifications – Part 1: Why Bother?” :
http://www.joelonsoftware.com/articles/fog0000000036.html
He understands this stuff quite well from the different perspectives of developer, manager, and customer, and would not accept Yamin’s argument that the code is the spec; neither should you.
I read Joel a fair bit.
From your link:
“This series of articles is about functional specifications, not technical specifications.”
I am predominantly talking about technical specifications and design documents in my article.
When dealing with customers… it is absolutely essential to bridge that gap. To give you another example here: the civil engineer may provide a prototype or a 3D image of what a building might look like, and get feedback… Similarly, getting the customer’s OK on what they expect a product to do, and how you plan to design it, is absolutely essential.
I have no issue with requirement documents, functional specs… If you think I am against good documentation or other good practices, I apologize for not conveying my point effectively.
You make a good point there.
I googled some more and found this bit:
http://discuss.fogcreek.com/askjoel/default.asp?cmd=show&ixPost=420…
wherein Joel comments a bit on technical specs and links to an example of his.
“Joel On Specifications”
Joel is a pompous windbag…
Do what works for you… not what some ying yang named Joel says.
I finished my undergrad degree last May, and I was hired as a programmer (functionally) last October; the internship I took in college didn’t actually involve any programming. I’ve really been trying to figure out, lately… well, how to program in the environment of a large project. Hints and lessons-learned from industry experience are much appreciated. So… thanks.
Excuse me for cherry-picking (and perhaps going off-topic a little). Something just hit a nerve, and I’d like to rant for a moment.
When I read/hear comments like the above from developers in the workplace (an all-too-common occurrence), I already know what I’m dealing with.
That is to say, a second-rate programmer. Usually arrogant and a know-it-all, too. Yet when it comes down to the crunch, they’ll need *my* help making their “pie in the sky” designs work (usually with substantial modification).
I can’t be the only one experiencing this…
Of course you’re not.
That’s a big part of my motivation for writing the article. I despise that attitude, and it just seems to be getting more and more prevalent, especially as those who write the high-level design become disconnected from the people actually writing most of the code.
We had one major vendor sell us a tool set with the promise that most of the work we’d be doing in the future would be with modelling tools.
The implication being that the rest of the coding is just “filling in the gaps.”
I’ve also found a lot of recent graduates have much the same attitude: they want to become “architects”, not programmers.
So I can certainly understand your thoughts on the issue!
You can eliminate most of them by asking them the question: “What is the difference between a software engineer and a programmer?”
-Ad
Let me first say, I agree with saynte and Bill: a short introduction or final conclusion wouldn’t have hurt. As far as I understood the article, there are two basic statements:
1. The waterfall model and similar ones suck
2. There is no implementation, everything is design
Congratulations on inflating that into three full pages. Now seriously, please be more concise next time.
Anyway, I’ll pick some things from the article and comment on them.
I agree that the traditional waterfall model isn’t terribly efficient. From my own experiences, I concur that for any non-trivial project, it’s almost impossible to completely lay out the design at the beginning of the development process. OTOH, the idea that you should or even have to specify the software entirely beforehand is a misconception as well.
You can, technically, design every bit of the architecture and the algorithms with class, activity, and other diagrams. However, when you’re actually down at that level of detail, there is no need for an intermediate language anymore; diagrams that precise could be compiled directly. But you’re not doing that. Like noirpool said, design is about abstraction: being independent of the programming language, and having a guide to “what” should happen without exactly detailing “how” it should happen.
Agreed – but they’re still relatively bad at describing the system at large, including several packages, classes, and their relationships. This becomes almost trivial when using a more abstract modeling language, like UML. The book “Software Engineering for Game Developers” has an example of that in chapter three.
I’d say: Yes, but the specification isn’t complete without the addition of the more abstract design. I, personally, would never accept code as the only specification. That’s because, as I wrote, even today’s languages aren’t doing a good job at conveying the “bigger picture” to me, the reader.
Well, that’s obviously just my opinion, which stems from my experiences at different forums. Most of the time, someone asks for help with his program, leaves a very generic statement about what’s wrong, and then posts a page of source code. While this code may be regarded as the “specification” of his program, having no abstract explanation at all makes it unnecessarily difficult to understand the “what (should happen)” of the program.
That is just semantic hairsplitting. So when I e.g. take my class diagram and write Java code according to it, it’s perfectly reasonable to say that I’m implementing. You could say I’m implementing (static parts of) the classes, while designing the algorithms, but… splitting hairs. Seriously.
You might be right that the idea of engineering software without writing a single line of code comes from the academic world. Being able to make statements and predictions about the to-be engineered software before actually implementing it is very popular here at my university.
In my day job I often work with projects that have “blown up”, and the three most common project methodologies that blow up are Waterfall, RUP, and Cowboy.
The Waterfall’ers almost always blame lack of time.
The Cowboy coders have left the project…
The RUP’ers are by far the most annoying of the lot, usually blaming the users for giving bogus specs, lack of people, money, etc. Unfortunately, they never blame RUP and its useless diagrams, and the fact that they spent the last couple of years on nothing (rather than producing something of value).
The managerial problem with RUP and related tools is that they give the illusion of progress in a project where it does not exist.
Listen to this and learn before it is too late:
“Diagramming exists to assist coding NOT the other way around.”
When you say “languages” I assume you mean English?
You can generate UML diagrams from code written in many languages if you want to, but if a drawing conveys more meaning to you than code, you probably should not be programming in the first place (it has something to do with how the brain works).
Those are interesting insights, and I can only agree with your conclusion: “Diagramming exists to assist coding, NOT the other way around”.
No, I referred to programming languages. However, now that you mention it, the same is also true for natural languages in my opinion.
Well, that’s not what I wanted to say. Diagrams don’t convey more information – they convey less, because of their abstract nature. Their strength is their higher efficiency in conveying the information they do contain.
Take a class diagram as an example. You can see the classes and interfaces, their inheritance relationships, and any other ways they relate to each other. What you can’t see is how the classes’ operations are actually implemented. But at that point, it doesn’t matter. What matters is that, for just understanding how the classes collaborate, the alternative is hunting through all of their source files.
You have obviously put a lot of time and thought into this, and I pretty much agree with you, but I’ve got to ask what kind of software you write, because this stuff is considered common knowledge nowadays in most shops I know of.
You have mentioned earlier that you didn’t want to bring in methodologies, which is fine, but agile is not a methodology; it is a set of principles by which to practice the craft.
(btw, that is what agile means, no more, no less. anyone who says different is misinformed)
It covers more than what you are talking about, but one of the core ideas is that good software is the product of tightly iterative collaboration between the user and the developer, and that design decisions should evolve with the process, not all be made at the beginning of it, when you know nothing.
TDD is a direct, practical implementation of this. The reason we practice TDD is not to test; it is to have constant proof that the design of the code we are about to write is solid. The tests themselves are not about software quality; they are a safety net so we can feel free to refactor, which by definition is a refinement of design.
From what it sounds like, you still live in a world of UML written by “architects” and specs written by designers. It also sounds like you recognize that this is bullshit and not the correct way to write software, but are sort of grasping at what is. I highly suggest reading the XP book and the TDD book by Kent Beck (or anything else he has written, for that matter), listening to process-related podcasts (like the elegant codecast: http://elegantcode.com/ ), and re-evaluating working somewhere that is holding on to such dated ideas.
I second the podcast recommendation, and would add that I’ve found the “SPAM Cast” to also be very substantive. A warning, though: you can confidently skip the first quarter of each podcast without worrying about missing anything worth hearing – you’ll see what I mean if you listen.
I primarily write embedded networking code. But I’ve also done some application work and basic web work.
I’m used to very different methodologies. The networking world is very formal and everything is very strongly documented at least at the protocol level.
Some things require stronger documentation than others, or different processes.
I didn’t want to get into methodologies, mainly because they’re not the point. My main point is really just a reminder that all software is design. It’s so blatantly obvious, yet conceptually it seems to have been forgotten. I kind of wish I could edit the post now and add something like this to the start.
If UML and full design documents with flow charts are what it takes for you to design software, wonderful. Just don’t take that, hand it over to someone in India or dump it on some co-op student, and think it’s trivial to ‘implement’.
It really is not their fault if it breaks. It is your fault for not specifying it enough. And of course, the point of the article… to specify it enough… you would have to go into so much detail that you might as well have just written it yourself.
This kind of attitude really does need to be broken. You actually need your people to ‘know shit’ about what they’re coding.
This is just one aspect of the failure of viewing implementation as ‘trivial’. The other is the SME I write about in the article, or people who really have no experience building the applications themselves suddenly deciding to write design documents and thinking they should just hand them off to some trivial entity to ‘implement’. Kind of like an SME who decides to design something, but without writing code.
These and many more stem from the culture that separates design from implementation. I figure the best way to break this is to show that there is no implementation in software. It is all design.
I realize you don’t want to get into methodologies, but my point is if the whole industry is into agile, and agile is totally against Big Design Up Front, then the industry in a general way already agrees with you.
There is also a big difference between something like a protocol specification and a software specification. Anything that is going to be implemented by many people needs good documentation around it for it to be implemented properly. If the specs are not good, you will end up with many implementations that don’t work quite right together, simply because the specification is not correct, or it is ambiguous and open to interpretation.
Yep, I fully agree.
I prefer agile myself. One of the things it does is force you to get rid of the notion of design-and-then-implementation. It is perhaps the most ideal way of designing, which meshes well with how I view software (as all design).
Thanks for the article – I agree in most points.
I wonder whether concepts like CWEB may help during the specification/documentation/implementation process. You mix code with documentation/spec, so you are not restricted to one way of expressing yourself during specification and implementation. Writing code, explaining how to use it, and explaining its functionality in one pass seems to make sense…
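In a similar (if far lighter-weight) spirit, Python’s doctest keeps prose, examples, and code together in one pass; a small sketch, reusing the GCD example for illustration:
[code]
def gcd(a: int, b: int) -> int:
    """Greatest common divisor of two non-negative integers.

    The examples below are documentation and executable spec at once:

    >>> gcd(12, 18)
    6
    >>> gcd(7, 9)
    1
    """
    while b != 0:
        a, b = b, a % b
    return a

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # runs the examples embedded in the docstring
[/code]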
Implying that just writing a program is the same as documenting is not correct.
This is due to the fact that writing a program in an imperative language such as Java or C++ defines how something must be done, e.g. storing in this variable, doing this step, multiplying, blah blah. It implies not just order but also the nitty-gritty of what IS done. Importantly, imperative programming never explicitly defines WHAT must be done (the overall thing); rather, it defines HOW something should be done, and the overall WHAT has to be derived from that.
To use the bridge example, this is like specifying construction methods, the order pieces are put together in, the types of bolts used, blah blah, without ever giving a single overview of what the bridge should look like.
I agree with the bulk of the article, but there is a place for design alone that does not involve coding.
Perhaps a better solution would be declarative programming methods (Haskell, Prolog). These define WHAT must be done without going into HOW it should be done.
“To use the bridge example, this is like specifying construction methods, order pieces are put together, types of bolts used, blah blah without ever giving a single overview of what the bridge should look like.”
Which is why I am not against documentation.
You need high-level documentation. Read the conclusion… it’s in there.
The implication in the separation of design/implementation is that design is the skilled work. Implementation is dummy work.
So you have a skilled civil engineer just give an overview of a bridge, and then he hands that overview to some unskilled person to just implement? The unskilled person gets to choose the bolts? The materials? How long each piece will be?
Of course not. That bridge would collapse. The civil engineer HAS to take care of all those little details. The little details are just as essential to the design as the 3D mockup he does of the bridge.
So it is in software. The high-level architecture and overall design are essential. But the details are no less design. And when you fill in all the details, you essentially have code.
There are several problems with the proposed approach; I won’t reiterate the details of what has already been said – just summarize and add:
1. Code may be incomplete too; it only tells you what actually DOES happen, not what it was INTENDED to do. You can no more read the mind of the coder than you can that of the specification (design, architecture, concept of operations, requirements, etc.) writer.
2. Specifications are NEVER complete. You can’t necessarily capture all use cases in the specification. One implementation language may have use cases another doesn’t, and the specification writer may not understand the language-specific implementation details.
You also start with the assumption that THEORY makes the program. Suffice it to say, it doesn’t – and the academic world is killing its students’ prospects because of it. Academia focuses way too much on theory, and nowhere near enough on practical implementation. Businesses care more that a programmer can (i) understand the concepts presented, and (ii) implement those concepts, than that the programmer can theorize about a concept and vaguely tell someone else how to implement it. They are in business to MAKE money, not to pay multiple people to do what one person could have done.
However, that still doesn’t get to the root of the problem you are trying to solve. The biggest problem in programming – aside from Academia, though Academia is partly to blame for this too – is not so much theory versus implementation, nor how programmers are taught to program, but rather the lack of discipline in the field and the arrogance of programmers themselves.
To start with, a lot of programmers write software as if they were the only person who will ever touch its source code. But that is absolutely wrong – most programs are initially set up by one person or a small team, who are then quickly replaced by others who have to maintain the code. Whether the initial writer leaves the company or simply moves on to another project, it doesn’t matter; someone ELSE has to pick up the ball and carry it. And sadly, that someone else tends to do the same thing.
Additionally, as programmers we more often than not fail to gather even half of the user’s requirements before we make assumptions about how the program should operate. And then not only are our managers surprised when the user doesn’t like (or even use) the software – the programmers are too!
Needless to say, I think almost all of the problems in software come down to a lack of discipline in software writing, and to programmers lacking the humility to follow even basic coding styles as a team.
Discipline must be followed every step of the way: gathering thorough requirements (as much as the user can give, not merely as much as it takes for the coder to decide what to do); handling error conditions at every possible point (it’s far more robust, and C++/Java exceptions lend no help here – not that they aren’t a useful way of communicating errors, but those errors must still be handled at every level of the program, not simply caught and discarded, or left for a higher level to find); designing the interfaces (both HCI and CCI); documenting IN the code; and writing specifications at all levels (concept of operations, requirements, design, etc.). There’s more, and hopefully I’ll someday be able to publish my full thoughts on the subject on OSNews and Slashdot, but it won’t be today. (I’m still gathering my thoughts, and also need to write up what I already have more thoroughly; but it will happen in time.)
It’s obvious from the article that the writer missed the point of design; they probably also missed (likely entirely) the stages that PRECEDE design – concept of operations, requirements gathering, and architecture. All these stages are important, and all, like those that follow them, are iterative. But none of them solve the end problem – stable, secure, robust software – because all fail to capture everything. These stages (design and implementation included) are what I call Traditional Software Engineering, and the only people who can afford to follow it fully are those putting lives on the line (DoD, NASA, etc.) who cannot afford even the slightest error. (That said, DoD contractors don’t do it quite right either – they only do as much as the Gov’t Program Manager makes them, who only does as much as the Gov’t budget allows.)
There has to be a better way that makes it practical, and being more disciplined about software writing at every step of the way is at least part of it.
(Obviously, I’d love to hear others’ ideas on what it means to be a disciplined software developer.)
I’ve run into this in the comments here. I almost feel like a broken record.
Everything you say, I agree with.
The point of the article was NOT to suggest a lack of documentation. I tried to make sure of that in the opening paragraph… apparently, it didn’t take.
1. Documentation is needed… absolutely… as are requirements, analysis, and high-level design… I take these things as a given.
My main point is that software is not implementation. It is all design. And as a design, we must take as much care with it as a civil engineer takes with his blueprints. That means taking care of all the things you mention with respect to how we treat source code.
What is at fault is today’s attitude, whereby people assume the design is all in the design documents. Notice the language used here.
The implementation is then ‘trivial’… it can be done by just anyone… the equivalent of assembling a Lego kit.
The reason I went through the Euclid example is to show that to fully design something, you HAVE TO code it. The code is the design. An English description of a design is good, but English is not the language for logic and algorithms. Source code is fairly good at it. You could use some mathematical or abstract notation as well, but at that point you are really just duplicating the source code.
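To make the contrast concrete with gcd (a sketch only, not necessarily the article’s exact code): here is an English “design” of Euclid’s algorithm, and the code that actually pins it down.
———-
English: “Repeatedly replace the larger number by its remainder when
divided by the smaller; when one becomes zero, the other is the gcd.”

/* The code settles everything the English leaves open: the argument
   types, what happens when b is zero, which variable holds the
   answer, and why the loop terminates. */
unsigned gcd(unsigned a, unsigned b)
{
    while (b != 0) {
        unsigned r = a % b; /* remainder is strictly less than b */
        a = b;
        b = r;
    }
    return a; /* gcd(a, 0) == a */
}
———-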
So handing a design document to a ‘code monkey’ will often result in failure, because the design document will never specify the full design. The ‘code monkey’ will end up doing design (good or bad). We’re lucky that many times we end up with a good code monkey who is able to do that design, or to interpret what the document implies.
This is the attitude that must be broken. The source code is the design. There is no implementation. Source code requires just as much rigor as a civil engineer applies to his blueprints. Yes, all the other documents (requirements, high-level architecture…) are wonderful and needed.
However, the final design is software code. That is what the compiler reads to produce the final output.
Hope that clears it up.
“However, the final design is software code. That is what the compiler reads to produce the final output.”
And here is where we disagree – I say ‘we’ because others have expressed the same in their replies. By your own logic you would have to disagree too, if you really agree in full (as you state) with what I wrote – because I explicitly wrote the opposite, as have others.
The source code of software IS the implementation of a design – that is its nature, since you can implement the design of a program in numerous languages and still have the same program operating the same way – whether you implement it in Java, C, C++, Objective C, Assembly, or YAL (Yet Another Language – whatever you please).
While your teachers proceeded under the false assumption that the implementation is trivial, most don’t. Good teachers would not, as they realize that, as in civil engineering, the implementation is no more trivial than the design itself. Indeed, in civil engineering you have many people working on the implementation and many checks verifying that they are all following the design to the letter – and, even more so, following the standard procedures that are not part of the design but assumed by it, assumed by the civil engineers who wrote it.
Conversely, in software there are extremely few standard procedures that someone who only performs design can assume – thus the design must be clear all the way down to the source level, and sometimes further, explicitly stating what the machine should do at the machine level. (Indeed, some systems require such precision.)
However, as stated by myself and others, source code only tells you what the software actually DOES, not what it was INTENDED to do.
Perhaps you need to think a little differently –
Source Code is the implementation of the final software specifications – that is how Traditional Software Engineering sees the source code, however wrong it may be (and I do agree that view is wrong).
The primary problem at all levels in software engineering is that the field has yet to formulate standards, and software developers are too arrogant to follow what has been laid out – they want to do their own thing regardless of what anyone tells them. So there is a great lack of discipline in the field.
Conversely, when a bridge builder is helping to build a bridge, there are only certain ways in which the builder is allowed to put in a rivet, certain methods by which a weld is allowed to be done, and so on. And many of these are verified by others – a welder does not simply weld and walk away; rather, the weld is inspected and verified against the specification required by law and by the design (since the design usually exceeds the law). The inspection may primarily be done by a master welder, but the welder is ultimately responsible for any failure in his work – if the weld fails and someone dies, the welder and (even more so) the weld inspector may be charged with criminal negligence, and the company that built the bridge would be in court for wrongful death. There is liability attached.
Now consider software implementers and designers. Are any of them liable for failure to do their everyday job? Aside from being fired, few are – namely those who work in fields of life and death, and even then it is rare that the coder would be liable beyond losing the job.
Does anyone inspect the code to verify it against a design? Does anyone verify that it properly handles failures? Do most programmers write not just unit tests but also functional tests? Are most programs even testable enough to be verifiable?
Sadly, the answers are mostly a resounding ‘NO’; and most of this comes back to a lack of even basic discipline in the software field, and to the arrogance of software coders, who refuse to do anything any way other than their own – especially when it comes to coding styles.
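For what it’s worth, here is the sort of minimal unit test I have in mind (the clamp function is just a hypothetical stand-in for the code under test):
———-
#include <assert.h>

/* Hypothetical function under test: pin v into the range [lo, hi]. */
static int clamp(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* A unit test checks one function in isolation; a functional test
   would additionally drive the whole program against its spec. */
int main(void)
{
    assert(clamp(5, 0, 10) == 5);   /* in range: unchanged */
    assert(clamp(-3, 0, 10) == 0);  /* below range: pinned to lo */
    assert(clamp(42, 0, 10) == 10); /* above range: pinned to hi */
    return 0;
}
———-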
So while you call out for documentation, in your analysis you largely fling it back as something that is not required, since the code should be documentation enough – and that is not the case. If that is not what you meant, then you failed to communicate it – both in your original set of articles and in your reply to my original post.
Traditional Software Engineering has many faults; faults which need to be corrected. But the culture in which software is produced also has many faults that need to be corrected. And software will continue to flounder about as it has until these are both resolved.
But there is one other thing to remember – they must be resolved such that they are practical for software coders and the companies that employ them, and this is a great part of the failure of Traditional Software Engineering, which is also partly (but not wholly) attributable to the arrogance of the software coders as well.
Sadly, I think that while you hit the nail on the head for one side of the problem, you fail to grasp the whole of it – or even the entirety of what you yourself wrote.
And with all respect, I sincerely ask that you sit back and think a lot more before writing on the subject again – think about the implications. I would also be willing to work with you more on it; if you so desire, send me a message privately and we shall go from there.
“The source code of software IS the implementation of a design – that is its nature, since you can implement the design of a program in numerous languages and still have the same program operating the same way – whether you implement it in Java, C, C++, Objective C, Assembly, or YAL (Yet Another Language – whatever you please).”
Yes, I guess we do disagree here.
I’d view a design as the final plan of a product. The final design that you can hand to some implementer who can go and follow your instructions to get some final output.
I don’t really understand how the fact that you can express source code in multiple languages somehow makes it implementation.
Almost every design can be expressed in different ways.
Source code is just one way to express the design.
It’s a really good way. But suppose I wrote a compiler that, instead of taking in source code, took in flow charts (Visio) and compiled them into a program. Would that make flow charts not design?
If a mechanical engineer designs something like this:
———-
Cut a rectangular piece of plywood 5 feet by 6 feet and 1 inch thick.
Or he gives a standard AutoCAD blueprint of it.
———-
Both convey the same meaning. Both are design. One (the AutoCAD drawing) is more standard and more precise than the other. The language of the design really doesn’t matter; like all languages, it is merely the expression of the design.
Or suppose I want to write a book (see the final quote, on whether you can have thought without language). I have all the content in my head. I just need to express it so others can understand it. I could choose to express it in English, French, German… At some point, to convey my thoughts, I will just have to pick one of the many languages suitable for expressing them and do it. Then, if you wish, it can be translated into other languages.
The fact that you have choice in how to express the design is not a problem. I could for all purposes express my thoughts in a painting. It would be very vague and I’d probably be asked a million clarifications as to what is meant by it, but I could.
Similarly, source code is merely an expression of design. A very low-level, precise, and detailed one – and one that can be understood well by our implementer, the compiler. If we could build a compiler that hooked up directly to my brain, or that understood English… then we wouldn’t need source code.
Anyways… still got some work to do to convince people that all the implementation in software is done by the compiler. All the rest is design. Source code is design.
And that, my friend, is as you so eloquently described without realizing it why source code is implementation.
The problem with source code is that it only states what actually happens, it does not carry intent. And that is the difference between design and implementation.
Design carries the intent of what is desired; implementation makes that intent a reality by trying to match what is desired as closely as possible.
Different programming languages have different sets of limitations – for instance, C vs. Java vs. Assembly vs. HTML. Some of those languages would be utterly prohibitive for many programs (e.g. HTML). Likewise, implementations can be tweaked to provide certain characteristics that designs cannot – e.g. having a mixed language program, calling out to assembly from C, C++, etc.
By contrast, English vs. Greek vs. Chinese have no limitations in themselves other than what culture puts behind them. What can be said in English can be said nearly as expressively in Greek or Chinese by someone fluent in both, but not by someone who is not fluent in both the original and the translation.
But again, I’ll appeal to the post of which I am replying for an excellent view of why source code IS implementation and NOT completely design.
“Similarly, source code is merely an expression of design.”
“And that, my friend, is as you so eloquently described without realizing it why source code is implementation.”
Ummm… unless you plan to have some abstract notion of design that is never expressed… I don’t know what you’re implying here.
Source code is an expression of a design.
An English document can be an expression of design.
A drawing can be an expression of design.
Mathematical notation can be an expression of design.
Flow-charts are an expression of design.
GUI designers are an expression of design. (We do have ‘compilers’ for them, so you often don’t have to write the source code to draw the GUI.)
For this domain (algorithms…), source code is a very good language for design. You would spend more time, and introduce vagueness, writing it in English than you would specifying it in source.
This is not to say that other levels of design (high-level, architecture, timing diagrams…) are not needed. They are, but they only aid you in designing the source code that you finally give to the implementer, which produces the final result.
I appreciate the interesting discussion here, but I respectfully disagree with this. From my point of view, the problem is the attempt to pigeon-hole the practices of traditional engineering into software development. We don’t need more discipline. We need less.
There is a major difference between someone who is a programmer and someone who has a job as a programmer. It takes years of dedication and a tremendous investment of personal time to become a good programmer. Those types of people simply aren’t going to accept (or if they’re forced to, they certainly won’t respect) jobs where their primary task is to translate 10,000 pages of poorly worded specification into code. So the people you’re left with to fill that role are those who barely made it through a CS program and have no other option but to accept the job. It’s not surprising that the resulting implementation is typically poor.
Give the programmers more freedom; give them more responsibility. Throw out the ideas of requirement gathering and design as separate and disconnected phases and give the programmers– the ones actually implementing the code– a direct, open dialogue with the clients. Allow programmers to do what they love to do: solve problems. You’ll attract more competent programmers to your company and produce better code as a result.
As long as the practices of software engineering continue to view programmers as laborers, akin to someone who drives a rivet or welds an I-beam, the process will continue to fail. If we desperately need to map traditional engineering onto software development, I’d suggest that the best analogy is that the programmers are the architects and the compiler represents the labor force.
This is not to say that the code is the specification. I’ve been involved in enough large-scale projects to recognize that up-front design is supremely important. I only suggest that insulating programmers from that process and then expecting them to be code-monkey translators is a recipe for failure.
To some degree I very much agree with you; you simply misunderstand what I mean by discipline.
To start with, as with any other job, the only people that are really going to be good at it are those who truly love it and are dedicated to it – this holds whether you are a shopkeeper, a welder, a CEO, a programmer, or even (dare I say) a politician. On that point we agree.
However, you don’t see the discipline of which I speak – you mis-associate it, so what you are thinking is different from what I am thinking. So please bear with me while I lay out my thoughts and try to make them as clear as I can at the moment – at least those which I have thought through enough to formulate well.
So let us first layout what I mean by discipline:
To start with, a programmer must exercise great discipline in how a program is laid out. Logic should be clear, predictable, and well documented (e.g. with comments in the code that are clear). Coding style should be uniform across the team and the company as a whole; that is not to say that the programmers would not work together to build a single coding style, but once it is laid down they MUST follow it, leaving their arrogance aside. (A hard task, but a must if software programmers are to be respected.) Errors must be handled – ALL errors, not just those that are expected.
While there is more I could say, I have more thinking to do before saying it, beyond what I will answer in responding to your comments below.
I very much agree that programmers need more freedom, though likely not on what that freedom is. For instance, coding style is not a freedom of the programmer. To a company, programmers come and go, but the programs remain, and another programmer must be able to pick up the work and continue it. Programmers need to realize that they are not the only person to work on a project, and must build it such that the next fellow can pick it up. The only assumption should be familiarity with the field the program targets; that is, if the program is specific to a field of biology, the comments should reflect and speak to that field, so that a programmer (who may not be as experienced as the one writing the code) is able to pick up the software and continue. Likewise, the programming style should be uniform, so that any programmer working at the company can quickly pick up the program’s source and understand it – similar logic, error handling, etc., with allowances, of course, as implementation languages dictate.
However, I very much agree that the programmers – the technical people – need to be involved in the requirements gathering and design – e.g. the concept of operations, analysis, architecture, requirements, etc. The earlier they are involved the better – and not just one individual programmer, but the whole team working on the project. (E.g. if developer A is not assigned to the project, then they should not be in the meetings; but once it is realized that developer A will be on the project, they should immediately be brought in, brought up to speed, and made part of the effort from then on.)
Let them talk to the customer – or at minimum, let them listen in directly on the conversation (e.g. be in the same room during the discussion), even if the company wants all questions to go through a specific individual who ultimately talks to the customer; and give them some way to pass questions to that individual on the fly.
I realize that most on this list will not like many aspects of the discipline I propose – and to that I simply reply: grow up, humble yourself, and learn to work as a team for the company. Creativity can still be had within those limits, and far more will be doable and achievable. And if I were your boss and you wouldn’t follow the policies set forth, you’d be out on the street just like anyone else who wouldn’t follow company policy – following such policies is part of your employment contract. (And yes, it is legal to let someone go on those grounds.) The sooner you realize that businesses are less and less going to let programmers get away with the antics they have in the past, the better – because it is changing.
Instead of thinking that your job security lies in being the only one who can understand the code (it doesn’t – it never will), think of your job security as your ability to bring profit to the company – the more you can bring, the better. (And yes, that can be with FOSS too!) But it starts with working to establish standard processes, policies, etc. for writing software, then following them and ensuring the whole team follows them.
And if you don’t think this works – just look at the Linux Kernel. No patch is accepted unless it meets the standards set forth by Linus and his band of lieutenants – standards that even layout an explicit coding style. It works; and it can be achieved even among the most creative.
There is a time for being creative and artistic, and there is a time for not. Learning to differentiate between them is a good software engineer’s responsibility, and doing so shows a maturity that will garner respect. Coding style is one of those times to set aside your creativity and artistry and just get on with it; your work life will be better for it too.
And, btw, I do my best to do this myself. I have one preferred coding style; but I do my best to follow what has been set before me. I may use my own style on projects (it does fit the guidelines of my company – the company coding style is too inclusive to start with, but that’s all I shall say on that), but I also try to fit code for others projects in the same coding style that project uses. In other words – I try to adapt to what the company requires of me, no matter my genius or lack thereof.
From your description of what you intended by discipline, I don’t disagree at all. In fact, I was also going to point out a few open source projects that fare well in the code quality department– one was the Linux kernel, as you mentioned, and another is Webkit. Both projects have strict requirements for style and organization when it comes to accepting code. But my reason for choosing open source projects was to show that better quality code can be achieved when programmers are able to take the reins.
I’ll probably stumble a bit here too while I present my thoughts because I think the points where we disagree may be subtle.
I suppose my point of contention is regarding the solution to the problem. I firmly believe that modern software engineering practices encourage shoddy code by embracing the bad programmers and stifling the good ones. I’m sure there are others like you who are able to maintain their integrity and produce quality code under those circumstances. I readily admit that I am not one of them. Put in a situation where I am handed a design document and told to implement it, I would slop out whatever code is necessary to make it work and then go home. And I’m a person who is positively anal about properly structured code, strict style guidelines, and helpful comments. Luckily (for me and for those who may have the fortune to work on some of my older code), before venturing out on my own to do the indie game stuff, I worked in the game industry where those types of design methodologies aren’t typically used and programmers have control over the process because those sitting up above simply don’t have the necessary knowledge to produce a specification of any worth whatsoever.
Now, I don’t intend to shift the responsibility for bad code away from the programmers that produce it. They certainly hold the blame. My assertion is that bad code is the expected result of current software engineering practices, regardless of where the blame may lie. We can’t change the culture without changing the environment. As long as the methods stay the same, the outcome won’t be any different.
Well, I had written a lot more, and then lost it.
But basically:
I think we agree very much.
As with other engineering disciplines, when software truly becomes a discipline, the geniuses will be able to excel, while the standards will keep the slackers either out of the field entirely or, at the very least, from doing too much damage.
I find Traditional Software Engineering and “modern” software engineering to be wholly insufficient, utter failures at the task – but again, I think this is partly because of the lack of true discipline in the field. Software engineering only works when there is a basic level of common understanding of how to do things – a common language that all understand, a common methodology for how things work, and a common level of competence such that failures and successes can be learned from.
In other words, the software field is presently very chaotic, with everyone wanting to do their own thing. Some projects work out, but by sheer luck; most fail. By developing standard practices and a true discipline for the field – one taught as rigorously as any other engineering field – we should be able to turn the tide, so that instead of the majority of software projects failing (as is the case today), the majority succeed.
When the software field is disciplined as such, THEN we can certify software engineers much as we do in any other engineering field. Whereas today we have one U.S. state (Texas) that does so, and part of it is simply testing one’s knowledge of UML – something which most use, yet which is truly useless to the software field. A certification of engineering is only useful when it can test standard practices; but the standard practices must first exist, and today they do not. Hopefully, by the end of my career they will – I certainly aim to do my part to ensure they do by then.
As Software Engineering is presently defined, I agree. However, I think it is defined incorrectly, and a proper definition will bring out the best, especially in the good ones who would be able to excel even more.
And if you think I am wrong, consider how well other engineering fields still maintain their superstars – how people shine in those fields, and how respected they are. Now contrast that today, with how the "best" of software are seldom respected, and mostly found to be arrogant – there are exceptions, but they are few.
Managers typically hate software developers because they find them unmanageable; and that is primarily because of the lack of discipline in the field, which directly leads to a lack of control over projects. Time estimates are presently worthless, but with proper discipline they can actually be made with little error – and that is what management wants, and what helps make a profit for the company.
Such discipline would also benefit both open source and commercial software projects. Not just one or the other.
Ah, things are clear now. We do agree on the problem and on the desired outcome, but we disagree on the solution. To put it simply, you argue for a tighter leash on programmers while I argue for a looser one.
I believe that applying engineering disciplines to software development was a mistake from the start and to further entrench those methodologies would only serve to exacerbate the problem.
I’ll present some of my reasons. I’ve given this quite a bit of thought over the last few years, but like you, I haven’t spent a lot of time formulating this into a coherent set of arguments, so consider this my scratch pad.
First, building codes rarely change, and when they do, it is usually by very small increments. This is typically the result of environmental factors, the development of new tools or materials, or the discovery of a defect in a current process. Not to mention that structural engineering is conservative by default, due to the loss of life or property that is possible in a failure condition. So changes in traditional engineering happen at an absolutely glacial pace compared to the rate of churn in new ideas, technologies, and methods in the software industry. This presents a major stumbling block for generating any sort of standardized “language” for software development: it would do little more than hamper the agility of a company that subscribes to the standard while it tries to compete with another that does not.
We agree that UML has already proven to be an abysmal failure. You might also agree with me that UML failed because it is incapable of expressing sufficient detail. I would argue that too little detail leaves your design open to error and misinterpretation but that too much solidifies it in the face of changing requirements. I don’t believe there is a “sweet spot” to be found due to the chaos that is inherent in the field.
To put it succinctly, building a bridge today isn’t going to be all that much different from building one ten, or even twenty years ago. Software development, on the other hand, has changed drastically in that time.
Second, the nature of programming is fundamentally different than that of construction. A welder working on a bridge is not required to understand things like variances, setbacks, wind speed tolerances or load distribution. That is the job of architects and engineers. The line between design, engineering and labor is extremely clear cut. The same is completely untrue when it comes to software development. A competent programmer will understand the project that he or she is working on, from the high level design all the way down to the machine code that will be executing, often to a greater extent than those that originally designed it.
Yet software engineering attempts to apply the same design/labor dichotomy to the software development process. I’d argue that this forced dichotomy engenders recalcitrance in programmers due to the fact that it squanders their skill, devalues their creative input, and effectively reduces them to specification/code translators. As we’ve both agreed, a programmer becomes skilled through their love of the trade, through their joy of using a computer as a tool to solve problems. When you take the ability to solve a problem away and reduce them to mere labor coders, their job satisfaction and motivation to produce quality code will quickly drop to nil and they’ll do the least amount necessary to take home a paycheck. Or they’ll quit and find another place to work where they feel they can actually exercise their knowledge.
Third. Well, I don’t have a third yet. The ideas are still bouncing around and I don’t want to ramble on for too much longer.
Suffice it to say that I think burying programmers under stricter requirements, tighter management, government certifications, and formal languages will have the exact opposite effect of what you intend. Instead, loosen your management, set the programmers free, put them in charge, and they’ll produce for you. If you disagree, take a look at Google. It has a very relaxed management structure, attracts some of the brightest minds in the field, and produces some of the most successful software on the planet, often ahead of any announced schedule. This method is not an anomaly– it is the way forward and any companies that don’t follow suit will continue to face missed deadlines and project failures, and will eventually be run out of the market by their more agile and motivated competitors.
We may be able to co-author a book by the time this thread is finished.
You misunderstand what I mean by discipline.
Traditional Software Engineering – and even present thought on Software Engineering – is dead wrong. It doesn’t work, and it’s not practical – too laborious and too costly.
Rather, I am arguing that certain aspects of Traditional Software Engineering are good, and helpful (e.g. Concept Of Operations, Requirements, Architecture, Design, etc.); but not necessarily as they are now. They are good and helpful namely because they force communication to occur so that both sides understand what is being delivered in plain language. They also provide something both sides can test against and say without doubt that what was agreed to has been delivered.
However, while the "art" surrounding those documents is pretty much already settled – since it applies to far more than simply software – there is a whole other side to engineering that software presently lacks – and that is simple discipline in the field, of which those documents are merely a part.
Software today is (i) highly unreliable, (ii) highly buggy, (iii) a support nightmare, (iv) very costly to produce, and (v) late on delivery, if it is delivered at all. Most software projects – even small ones – run vastly over budget, and the majority fail to even make it to delivery. But that need not be the case – software CAN be reliable, with few bugs, on budget, and on time; we just don’t get there without discipline.
I’m talking about discipline similar to the discipline of driving a car. There are certain norms – e.g. drive on the right side of the road – and you need those norms to provide a good experience and do what is necessary. With respect to software, the programming language is NOT one of those norms. Rather, the norms apply to the coding style, how the logic is laid out, and so on – in other words, to how the programmer does the job of programming. They need not be overly burdensome, but they do need to make code more uniform, at least at the company/team level. (I say company/team because when working for an employer, the style should be defined by the whole team of programmers working for that employer; when working on a multi-company project, or on a project not related to a company, it must instead be defined by the team of programmers working on it.) By providing uniformity, more people are more easily able to spot problems.
Fair enough; I think I may be a little further along than you, but not necessarily by much. I’m all ears, and I’ll certainly try to incorporate what I’ve learned into what I am working on.
Okay, you actually have multiple things going on here. First, I very much do agree that software is a very different beast than any other engineering field, first and foremost because it operates in a very different environment. In all other engineering fields you have very exact physical limits – e.g. the law of gravity, the structure of a cell – whereas with software, you have to create your universe nearly from scratch for every project. (Well, not quite any more, since we mostly write for one or more operating systems that provide some continuity in the environment.) So to compare: civil engineering is predictable because the laws of physics never change; software is far less predictable because the environment is constantly changing (e.g. what other software is installed, the hardware it is run on).
Building Codes equate more equally to programming languages. They don’t come very often, and they seldom change.
You get the structural engineering part right, but still miss the software engineering side.
There is nothing prohibitive about writing a standard set of procedures for how to program. I would not say that any programming language itself should be standardized upon – indeed, no single language works for every application. However, the procedures for writing a program can be.
For instance, consider how one implements an algorithm. Do you check the error condition first, or the success condition? How do you handle the errors? Do you check inputs to see if they are valid? Do you check pointers? There are a lot of little details like these that can very easily be standardized upon – details that, once decided, help the programmer avoid creating bugs, and that do not prohibit the advancement of software development in any manner; rather, they promote it.
Again, going back to the questions in the last paragraph, think of how many different answers you can come up with just for that small set of questions – and there are many more that need to be addressed. Standardizing on things such as these will enable coders to do more: code will be written more quickly and be more easily verifiable – not just by tools, but by people as well.
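As a purely illustrative sketch – one possible set of answers to those questions, not a claim that it is the standard – a team convention might read like this in C:
———-
#include <errno.h>
#include <stddef.h>

/* Hypothetical team standard: validate all inputs first (pointers
   included), handle the error path before the success path, and
   report failure through the return value. */
int copy_ints(int *dst, const int *src, size_t count)
{
    if (dst == NULL || src == NULL)   /* pointer checks come first */
        return EINVAL;

    for (size_t i = 0; i < count; i++)
        dst[i] = src[i];              /* the actual work comes last */

    return 0;                         /* 0 means success, by convention */
}
———-
The particular answers matter less than the fact that everyone on the team gives the same ones.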
Furthermore, you increase what I call your “Bus Factor”. That is – if you are the only one working on a given piece of code, and you were to get hit by a bus today: (i) would someone else be able to pick up your code and continue, and (ii) how quickly could that happen? By standardizing, you enable someone else to pick up what you are working on and continue it, and you decrease the time it takes them to figure out what you have been doing. (BTW, I’ve done just that three times already – the first time for someone who was fired, the second for someone who literally died, and the last for a guy who quit.)
Programming such that you are the only programmer able to read and understand the code is NOT professional. It is guaranteed that you will not be the only programmer on the project, so humble yourself and write your program with that in mind. More likely than not, the next programmer won’t be as smart as you, so the code needs to be clear enough that someone with less experience can pick it up – they won’t be able to read your mind, especially if you did get hit by that bus.
Very much agreed.
UML has a strength – it can represent something graphically. However, its greatest weakness is change. Software changes rapidly during development, and UML quickly becomes outdated. Even if the initial UML model were able to express every necessary detail, by the time the coders had finished just the initial coding it would be outdated – things would have changed – there would be more functions or fewer, more classes, etc.
“I would argue that too little detail leaves your design open to error and misinterpretation but that too much solidifies it in the face of changing requirements.”
Your post apparently got cut off, but there are a few clarifications to be made so I figured I’d toss them in now.
I specifically put “language” in quotes to imply that I was speaking of a formal language for specifying the procedures of a software development project. Not a programming language. Sorry for the confusion.
Like I stated, we definitely agree on the desired outcome: that formal procedures for code style, layout, organization, error handling, etc. are established and followed throughout a project cycle by all team members unless a specific situation calls for deviation, in which case the deviation is clearly documented.
Our disagreement lies in the methods of achieving a team of disciplined programmers who follow these practices.
I didn’t notice a response to my statements about Google. They may have been part of the post that was cut off but I feel they are pertinent to the discussion so I’ll reframe them as a series of questions here.
Why does a company like Google seem to have a surplus of great, disciplined coders who can pull off successful projects ahead of schedule?
On the other hand, why do in-house and contract development projects consistently fail due to lack of discipline?
Why continue to push software engineering or variations thereof when, as you’ve mentioned, it has so far led to nothing but unreliable, buggy, costly and late software and other models have already proven successful?
I have my own answers to these questions, but I’m honestly more interested in yours, as it seems you’ve spent at least as much time pondering this subject as I have and I always enjoy differing yet well-informed views.
The basic point that a sufficiently detailed spec actually *is* a program cannot be stressed enough.
But then why write a specification?
The problem is this: where to draw the line between under-specification and over-specification.
If you do not specify enough, things go awry because there is not enough clarity.
But if you over-specify, then you leave no room for error in the design, nor flexibility in resolving problems – you essentially micromanage, and write the code yourself.
There is a balance that must be found between the two, but that balance can only be properly struck once there is a true discipline to the software field, and coders lose their arrogance. Sadly, it will probably be a long time till that happens.
“But then why write a specification?”
Because it helps you write source code.
You can transplant this to any field.
Suppose you are writing a book.
You don’t just start writing. No, you probably think of a plot first. Create descriptions of characters, their backgrounds. You even think of the killer ending.
Those all sketch out the general specification of the book. Yet you can’t take those, hand them to just anyone, and expect a work of literature as great as Shakespeare’s.
No, the actual writing of the book is very important. Give the general specification to anyone and you would still end up with a million different stories.
Only the final write-up counts. Now, I would think of all of that writing as 100% design. If you think of this writing as ‘implementation’, then I think we’re just too far apart on the meaning of these words.
So it is with source. You should never start writing immediately. You must gather requirements, do high-level design… but at the end, you MUST produce the final design – which is source code. It is what specifies everything in absolute detail and clarity.
If you don’t think implementation is regarded as just dummy work, I seriously suggest talking to people outside software. That is the mentality: that someone can write a spec or a design document, and then it’s just dummy work to implement it. It is not their fault they think this way.
Even if you still disagree and think of software as implementation… I would still suggest changing the wording you use to ‘low-level design’, for the good of all people in software. Take it for the team. Otherwise, in the minds of those outside software, software will still be viewed as being designed by ‘architects’ and ‘analysts’ while being implemented by anyone who knows ‘C#’ or ‘Java’.
That is exactly the wrong answer.
Specifications are closer to contracts.
They clarify exactly what is to be delivered so both sides (customer and developer) are clear and understand the final product.
But they do NOT provide the implementation. If they could, the customer would not have needed the developer to write it for them – they would have already done it themselves.
BTW, specifications are then used at delivery for the customer to verify that what is being delivered is what was agreed to; and payment can be withheld if the two don’t match.
Not quite true. It applies specifically and most literally to any engineering field, but only abstractly to any artistic field, such as the literary field you mentioned.
That is the design of the book, yes. However, some authors do in fact just sit down and write; and produce great works – though they are few and far between.
Others put in lots of research beforehand, then sit down and piece it all together – for example, Gaston Leroux put in tremendous research before writing Le Fantôme de l’Opéra, i.e. The Phantom of the Opera.
Funny you mention Shakespeare – he likely didn’t do any of that, and yet his works are considered some of the best. Same with Homer, and likely Plato too.
That said, writing classes often involve the entire class being given the same outline, characters, etc. and each student then having to produce their own work.
But as I said, this really only applies abstractly to the literary field.
Quite true, but they would all follow the same literary formula – they would all meet the same specifications. This kind of thing is done quite regularly by authors.
One reads another’s books, puts together a similar plot, changes the names and the backgrounds, and writes their own book. Such things are known as ‘literary formulas’ and are typically used to mass-produce books – the technique is employed, for example, by the writers behind Star Trek, Star Wars, Nancy Drew, and The Hardy Boys. It is often applied to films as well.
Important how? Important in how it actually functions or looks? Yes. Important in that it matches the specifications? Yes. Important in that it can’t be replaced by something else that meets the same specifications? No – and that is where your proposal fails.
It is implementation for several reasons:
(i) It deals with certain details – implementation details – that are below the level of the design. For example, whether to use ‘=’ or ‘:=’ as the assignment operator, or what to name the functions and variables.
(ii) It is one implementation of many possible implementations that meet the specification; the design is part of the specification, not the implementation.
I talk to people outside of software quite regularly. And none, I have found, consider software a menial task – most are petrified at the thought of trying to do it themselves.
No, the people who think implementation is a trivial task are the programmers-turned-managers, those who have stopped programming, and those who got their degree in the aberration that is Information Systems. They can write a script to add 1 and 1 to get 2, and they immediately think the world of themselves.
No, most people realize there is much more to software development than meets the eye. They know there is a lot of work that goes into it, and they don’t understand it. They likely never will; and they respect people who do it because they think there is a witchery behind it.
Your ‘outside’ world is vastly limited to the IS people – the people who failed out of the CS or CE programs and chose to do something similar out of pride. They studied how to be traditional software engineers, then tried to be managers instead; they got their MBAs and can only order people around. They think they know what they do not, and will never let you know otherwise, no matter how foolish they are.
I’ve had a number of managers. The good ones were rarely software people at all; and they realized they did not know something and turned to the technical people who did for support and help. The bad ones, on the other hand, thought they knew the world and would not have it either way.
Yes, software coding is implementation, and it cannot be anything else. And, by the way, low-level design is another practice altogether, so calling implementation ‘design’ is not right either.
I am no advocate of Traditional Software Engineering (see the other related threads for discussion of that); however, it would greatly behoove you to learn something about it.
But, to keep it short, if one were to follow all of Traditional Software Engineering, it would go like this:
Concept Of Operations
Architecture & Requirements
Design
(Optional) Low-Level Design
Implementation Specification
Implementation
Repeat until satisfied.
Just for reference if anyone does decide to look it up, this is traditionally known as the Waterfall model and is generally accepted as bad practice these days, even among software engineering proponents.
Actually, no – the Waterfall Model is one method of how you work with the various parts. The other models (e.g Spiral, Incremental, Evolutionary, etc.) all use those same parts but incorporate them differently.
The Waterfall Model, for example, only does each of those once. One time, period. No ‘repeat’. You’re done after the first iteration.
The Spiral Model, puts the Waterfall Model into an iterative loop, but only for a certain number of times.
The Incremental Model cuts down the amount of work done in any given iteration and tries to build on itself, so that in the end the product will be the same as with the Spiral Model.
The Evolutionary Model is more like the Spiral Model with an infinite loop, but typically cut down for any given iteration like the Incremental Model.
Needless to say, ALL current Software Engineering Models require the steps in my previous post. It’s just how you iterate them and the amount of detail at any given iteration that makes the difference.
And yet the Waterfall Model is generally considered bad practice – namely because it is not iterative in nature.
The real purpose of design and implementation is not just to make good software. It is also used to justify the cost of the investment.
The project manager needs to explain to others why they need to buy a $20K database and a $50K server. They need to let people know the benefits – the future cost savings and the improvements in efficiency and productivity that will result from the new software system.
Then there is also the problem of high turnover: systems analysts or programmers who leave in the middle of a project without leaving any documentation and without doing a proper handover. The replacement staff then have the unfortunate task of beginning the project from scratch.
In my previous company, the system designers were permanent staff and the programmers were contract staff.
If one of them left, at least one other person would know the system well enough to carry on the project.
In my current company, I am the unfortunate programmer who has to take over many legacy projects that have no documentation. To make matters worse, some of the code was written by Japanese developers who use function and variable names like kito, zento, mikino, etc. The comments are also written in Japanese, which I cannot understand.