Happy Hangul Day! October 9th is a South Korean national holiday held in honor of the invention of the Korean writing system, which experts have called the most “scientific” (also “ingenious,” “rational,” “subtle,” “simple,” “efficient,” “remarkable”) writing system ever devised.
It’s a bit outside of OSAlert’s regular stuff (although not unheard of), but as a language specialist myself, I’ve been fascinated by Korean, and by Hangul in particular, for quite a while now. In contrast to other writing systems, which have developed over centuries – or millennia – without clear guidance, Hangul was more or less designed and set in stone 600 years ago, specifically for the Korean language. It is an absolutely beautiful alphabet, with a clear structure and a unique way of organising letters – they are grouped into square morpho-syllabic blocks. To the untrained eye, Hangul may resemble e.g. Chinese characters – however, each ‘character’ actually consists of several letters.
Even though I’m not a programmer myself, I’m pretty sure those of you who are will find Hangul fascinating. Due to its structured nature, it’s incredibly easy to learn – I taught myself to read and write Hangul in a matter of days – and once you do take a few hours to grasp the basics, you’ll surely come to appreciate its innate beauty and structure.
Do note that the language isn’t nearly as easy as its alphabet. They never are, though.
I dunno; memorising kanji has been the most frustrating part of learning Japanese for me, and I’m sure it would be even more so for Chinese languages.
They’re incredibly useful and beautiful after memorisation, but it seems to me that it takes much longer than grammar and vocab.
That’s true, however, to be technical about it, Kanji, like other Chinese character systems, is not an alphabet. They are picture-based writing systems, whereby individual pictures represent individual concepts. An alphabet is a system of symbols in which each symbol represents one sound, in the case of traditional alphabets (though most have become more complex than that now), or one full syllable, in the case of syllabaries. Hiragana and Katakana, for Japanese, would fall under this category. Hangul is interesting, as it’s a hybrid of both: you form syllables with individual sounds (as you would in a traditional alphabet), and yet when used they form recognizable syllabic characters as well.
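To illustrate that hybrid point in programmer terms (a minimal sketch on my part in Python, using only the standard unicodedata module; the syllable 한 “han” is just an arbitrary example), Unicode’s canonical decomposition splits one precomposed syllable block back into its individual alphabetic letters (jamo):

```python
import unicodedata

syllable = "한"  # one visual "character" (a syllable block)

# Canonical decomposition (NFD) yields the underlying alphabetic letters (jamo).
for ch in unicodedata.normalize("NFD", syllable):
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# Prints (annotations added):
# U+1112  HANGUL CHOSEONG HIEUH    -> initial consonant ㅎ
# U+1161  HANGUL JUNGSEONG A       -> vowel ㅏ
# U+11AB  HANGUL JONGSEONG NIEUN   -> final consonant ㄴ
```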
Chinese is a logographic system, and logographs aren’t ideographs. Also, it is no more picture-based than our alphabet is. Don’t get “technical” if you don’t know the facts!
In fact, historically alphabets have almost always been imperfect for writing the languages using them.
That still doesn’t mean it’s a hybrid of an alphabet and a syllabary. The way you stack characters on top of each other has nothing to do with what each character represents. Syllabaries are typically employed by languages with a simple syllabic structure. Korean doesn’t have that, as is evident by the ridiculous size of the Unicode Hangul Syllables block.
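For the record, the size of that block follows directly from the combinatorics of the syllable blocks; a quick back-of-the-envelope sketch in Python (just the standard Unicode Hangul constants, nothing more):

```python
# Every precomposed Hangul syllable is one of
# 19 initial consonants x 21 vowels x 28 finals (including "no final").
LEADS, VOWELS, TAILS = 19, 21, 28

count = LEADS * VOWELS * TAILS
print(count)  # 11172 precomposed syllables

first = 0xAC00                         # U+AC00, HANGUL SYLLABLE GA
last = first + count - 1
print(f"U+{first:04X}..U+{last:04X}")  # U+AC00..U+D7A3
```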
Though historically it was? The oldest Chinese characters ( http://en.wikipedia.org/wiki/Oracle_bone_script ) do look like pictures of the things they represent…
Normal languages, no, but I’ve been told that Esperanto is very easy to learn, which isn’t surprising as it was designed rather than being the result of evolution/history.
I’ve spent a lot of time in Korea myself and know the Hangul alphabet. Yes, it’s pretty easy to learn, with only 24 letters. I question whether or not it’s “the world’s greatest alphabet,” though almost every Korean will insist it is.
What makes Hangul special is that it’s (almost) unique among the world’s alphabets in that you stack the letters vertically upon one another to form a syllable.
As an example, the Korean word for “culture,” which would be Romanized “munhoa” (two syllables), 문화, is built of these components:
m = ㅁ
u = ㅜ
n = ㄴ
Thus 문
h = ㅎ
o = ㅗ
a = ㅏ
Thus 화
Combine those two syllables to get 문화.
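For the programmers in the audience, a minimal Python sketch of how this maps to Unicode (this uses the standard Hangul syllable composition formula; the jamo indices below are the ones Unicode defines, and note that the ㅗ + ㅏ of the second syllable combine into the compound vowel ㅘ before the formula applies):

```python
def compose(lead: int, vowel: int, tail: int = 0) -> str:
    # Standard Unicode Hangul composition: U+AC00 + (L*21 + V)*28 + T
    return chr(0xAC00 + (lead * 21 + vowel) * 28 + tail)

# "mun": ㅁ (lead index 6) + ㅜ (vowel index 13) + ㄴ (final index 4)
mun = compose(6, 13, 4)

# "hoa"/"hwa": ㅎ (lead index 18) + ㅘ (vowel index 9, i.e. ㅗ + ㅏ), no final
hwa = compose(18, 9)

print(mun + hwa)  # prints 문화
```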
I did say that Hangul was “almost” unique, because in fact there are at least two other (and probably more) alphabets that employ that strategy. As a learning aid for Chinese, in Taiwan (and only Taiwan) there is a Chinese phonetic alphabet called “Zhuyin Fuhao” or “Bopomofo” which can be written either horizontally or stacked vertically to form syllables, as in Hangul. But since this is only a learning aid and not actually used outside of text books, you aren’t likely to encounter it unless you go to Taiwan to study Chinese. To see what it looks like:
http://en.wikipedia.org/wiki/Bopomofo
If I’m not mistaken, traditional Mongolian characters also employ this vertical stacking strategy, but since I don’t know traditional Mongolian writing I can’t say much about it. I did briefly study Mongolian using the Cyrillic alphabet (which the Russians introduced), but that is far different from the traditional writing system. More info for those who want to dig:
http://en.wikipedia.org/wiki/Mongolian_alphabets
My Chinese teachers were from Taiwan, so their lessons used bopomofo. I guess they got the idea from furigana for Japanese.
Yes, but that’s just that, vertical writing. Korean has syllable stacking. Tibetan has some syllabic stacking, though it’s not taken as far as in Hangul.
As a fluent Mongolian speaker, I can tell you that, unfortunately, the traditional Mongolian script is very hard to learn and nothing like Hangeul, whatsoever.
Almost every alphabet is easy to learn. Anyone can pick up Greek/Hebrew/Cyrillic within some hours. That’s the point of alphabets!
Unfortunately, that’s only the start of learning a language, and by far the easiest bit.
I got basic Greek lessons in religious studies class. We started with the alphabet and the original text of the Lord’s Prayer. Barely remember any of it now, though.
I think you do not understand the point about the easiness of the Korean alphabet. One of the strong points of the Korean alphabet is that with only 24 letters you can describe virtually any sound you can make. And because each letter denotes exactly one sound, you do not have to know the pronunciation of a word before you can pronounce it correctly. For example, in English many of the letters can be pronounced differently in each word. With the Korean alphabet, once you learn the letters and how they work, you can pronounce anything in Korean even if you do not understand the meaning.
Unfortunately, in Korean there are no sounds like f, v, r or th, so you cannot describe those sounds in the Korean alphabet. But in truth, f is linguistically equal to p, v equals b, r equals l, and th equals t.
Well, not in all languages – they’re distinct enough to be identified in a context-less word in English.
He means they’re phonetically similar; i.e., they are produced in the same area/with the same parts in/of the mouth.
You contradict yourself. Here you state that with only 24 letters you can describe virtually any sound you can make. In truth, you cannot, as you yourself state further down that there are no sounds like f, v, r or th in Korean.
They are not linguistically equivalent. Being produced in the same area of the mouth does not make them equivalent sounds. To use your argument and apply it to English: “forth” versus “fort.” Two distinct words, pronounced differently, with completely unrelated meanings. The sounds are related, not equivalent.
Thom is right. F is closer to V than P, P is closer to B… in fact, if you look at his own native language in that respect, you can see that V is often used where F is in English. Even if in English F and V being distinct is a fairly early notion (for the written language, post Norman conquest, I guess), you can still see traces of the older usage. Examples being: Fox/Vixen, Calf/Calves, half/halves etc. In many English dialects, even Thom’s example falls down – fort and forth, in many Irish dialects, sound pretty much identical.
The problem with using English as an example of phonetics is that there is so much variation that it’s almost impossible to actually model English as it is spoken on paper, without creating a lot of mutually unintelligible written forms. Just in my local dialect alone, Free and Three are pronounced the same, as are Duke and Juke (as in Juke box). The unvoiced TH becomes an F, the voiced one becomes a V, most Hs are dropped, T anywhere apart from the initial position becomes a glottal stop, and L regularly becomes a W in similar positions.
Ball = Baw
House = ays (rhymes with “ace” I guess)
Bottle = Bo’aw
This ball was in my house. It broke my window – Vis baw were in me ays. I’ broke me windah.
Wonders of Working class South coast English meeting Working class London Cockney in the 19th century.
Please, please, please don’t use English as an example of a phonetic alphabet. From my limited experience, if you want an example of a good phonetic alphabet, look at German (and I think Italian as well). And Koine Greek was assumed to be phonetically correct as well.
In a Filipino language, F may sound like P in some situations, hence the Filipino and Pilipino confusion.
I quite like the Georgian script in that regard. The consonant clusters of the language are a bitch to master, but at least the script will tell you exactly how they’re pronounced, due to its one-to-one mapping with the language’s phonemes.
Ancient Greek and Latin can be considered phonetic because the alphabets were designed to reflect the sounds. Other languages used a foreign alphabet to approximate the sounds of the language.
By that logic Russian should also be phonetic. Unfortunately e.g. the o is not always pronounced as written. I would assume the language has evolved from a state where it was phonetic.
I wouldn’t use German as an example either. It’s a lot easier than English and, if you do learn German and understand the various combinations, you can read just about anything out loud even if you don’t understand the meaning. However, the reverse is by no means true, i.e. you can’t necessarily hear something in German and immediately figure out the spelling without being somewhat familiar with the language as well. Various letters in German change their sound based on where they are in a word and what is around them – ‘s’ is a perfect example. Just to describe the various sounds ‘s’ can represent in German, with examples, would probably take more characters than OSAlert allows. This generally isn’t a problem for those who are somewhat familiar with German, as these sound changes too follow a logical pattern, but if you’re unfamiliar with German you may find looking up words you hear to be an exercise in masochism.
I think you do not understand anything about writing systems or linguistics. What you write is inaccurate on so many levels…
No, that would be IPA, the International Phonetic Alphabet. It has a tad bit more than 24 characters.
Unfortunately, Korean has undergone sound changes as well, which are not reflected in the spelling, and iirc, it also has some orthographic rules that must be learned before being able to correctly pronounce things.
There’s, iirc, a fair bit of allophony (the alternation between r and l springs to mind) – rules of the language which must be learned before being able to correctly pronounce it.
This nonsense remark has already been rebuked by others, so I’ll leave it to that.
Unlike programming languages, all human languages are made for the very same purpose – to express the same things and the same thoughts – and they’re neither good nor bad, beautiful nor ugly. Another alphabet just means more time wasted learning to communicate with each other.
We should use one universal language for everything, leave others in museums and libraries.
There already is one, it’s called Esperanto. It’s not a smashing success though.
I agree. As long as it’s mine.
This is somewhat similar to the argument that we should all just use one programming language, since they are all used for the same purpose – to implement software. However, like most programmers I am very multi-lingual because some languages work better for some classes of problems than others; the language also affects how you think about a problem.
I would be really surprised (as a non-linguist) if the native tongue of a person didn’t affect how they thought about pretty much everything. We get almost as many Spanish TV channels in Texas as English channels, and I occasionally watch them late at night. From what I see, native Spanish-language programs are generally not the same as translated English programs!
If we were seriously to make an effort at a “standard” world language, though, I’d favour lojban (“logical language”) rather than Esperanto. If we’re gonna change languages, as the Koreans did, we might as well follow their lead and switch to something designed to be logical and easy to learn.
The Koreans didn’t change their language – just their “alphabet”.
How “easy” or “difficult” a language is to learn depends entirely on how closely related it is to your own native language. Languages are primarily structurally different rather than “easy” or “difficult”.
Writing a language does vary greatly in difficulty depending on the writing system. Ideographic writing systems eg Chinese are far harder to learn to write than logical phonetic (“alphabetic”) systems such as Latin.
True, though certain uses of the Latin alphabet have lost most of their logic, e.g. English. The only spelling rule of English is that all bets are off. Even those of us who speak it natively get our spellings mixed up constantly. This happened largely because foreign words were pulled in with no attempt to alter their spellings to conform to the Latin alphabet as used by English, though some words have also retained old spellings while the alphabet changed around them.
This is a bit of hyperbole. English spelling, though worse than some other languages’, is fairly consistent.
No, it’s largely because pronunciation has changed while the spelling has been fixed for 500 years.
I’m not sure what you mean by “the alphabet changed”. The alphabet has stayed pretty much the same since the yogh was abolished, and that was quite a while ago. What has changed is pronunciation.
No it’s not. It is consistent in its inconsistencies, i.e.:
Read – (is that “reed” or “red”)? But “bead” is never “bed”.
This causes a lot of issues for anyone trying to learn English spelling, not least kids who are native English speakers.
Many claim that the reason English spelling needs to stay the same is because of the many homonyms, but when you use Latin characters as a kind of pictograph, rather than a spelling, well it makes no real sense any more.
Or not. Sometimes the English pronunciation is closer to the original. Take “hotel” and “hostel” as prime examples. Or even “beast”, which retains the S that modern French has removed. Sometimes it depends on when English borrowed the word (e.g. Guarantee and Warranty, same word borrowed at different times from different dialects of the same language.)
But I’m sorry, the original poster was correct. Most borrowings from Latin and French have been Anglicised, but retain close to their original spelling, even when it makes little sense. All of the “-tion” and “-sion” endings, as an example, should easily be “-shun” (or “-shiun” or similar; the debate is not about what they should be, only that they shouldn’t be what they are.)
Wrong (sort of). The alphabet changed around English a few times. Firstly, the transition between Anglo-Saxon and Norman literary traditions had a giant influence on the shape of the spelling. Otherwise, English might look more like Dutch, Frisian or possibly even Gaelic/Welsh in spelling these days. Certainly, the letter K didn’t exist, nor did Q, Z or J. And those are fairly well established now.
With the advent of printing, the spelling of English became a lot more defined. But it was not one specific spelling. So we have weird oddities like “one” (wun), “only” (“ohn-ly”, aka “one-ly”) and a/an (which is another form of “one”).
Finally, what the OP says and what you say are the same thing. I just don’t think you’re using the same terms. The way the alphabet is used has “changed” *because* the pronunciation has changed. See, same thing. I think you are equating spoken and written as being the same, and they aren’t at all. Written English is barely able to accommodate most spoken English dialects in the UK.
Well, we can debate about what constitutes “fairly consistent”. I think we can both agree that it’s also riddled with irregularities. For an alternate take on this, see http://www.zompist.com/spell.html.
It seems you are conflating a number of things here. What I was arguing is that the majority of irregularities in current day English spelling are caused by change of pronunciation.
“Sense” is not something to take into consideration. French has been incorporated into English mostly after 1066, and the French back then wasn’t the French of modern times. Both French and English pronunciation have changed drastically over those past 950 years. Latin words were incorporated into English later, starting around 1500. But by then, Latin pronunciation was already heavily Anglicised, mostly based on spelling. Today, Latin words are pronounced far more regularly than non-Latin ones! See also here: https://en.wikipedia.org/wiki/Latin_influence_in_English
Now you are talking about respelling. But that’s a whole different game from easy legibility. Let’s keep the discussion clean.
I mentioned the yogh. You don’t know what a yogh is, amirite?
The pronunciation “wun” is from a different dialect of English, replacing the pronunciation the writing is based on. See also here http://english.stackexchange.com/questions/14959/why-is-one-pronoun…
I’m really not sure what you’re getting at here. The alphabet has been used as it always has: to accommodate writing. And there’s no 1:1 correspondence between the canonical value of a letter and its use in words. English is slightly worse in that respect than some other languages, though not the worst by far.
Just out of curiosity, which ones are obviously worse?
I’d expect it to be something that has existed in a written form for a long time, preferably in a fairly stable culture (since that probably has a conserving effect on the spelling)… and preferably in a writing system that always includes vowels, to increase the number of things that can go “wrong”.
Danish is an example often cited of a language that is slightly (or very much, depending on the source) worse than English, but the prototypical one seems to be Tibetan, which indeed has existed in written form for a long time. French could be another example, but that has a fairly regular orthography-to-pronunciation correspondence (though pronunciation-to-orthography is a nightmare, far worse than English).
You should probably read a bit about Esperanto though. It **is** designed to be logical and easy to learn.
And in fact it is. Also, even though it simply uses the Latin alphabet, it also has the “one char -> just one sound” feature, and the easiest part of Esperanto is learning how to read and pronounce it correctly. It’s always the same, and after a bit of classes you can pretty much read any text, no matter how complicated it is, even if you don’t really understand it.
The most interesting thing about Esperanto, for me, is how you can pretty much construct a word* that has a specific meaning, and everyone can understand its meaning, even though it doesn’t really make any sense in the real world.
* due to the root and suffix/prefix combination/mechanics.
OK, I’ll try to read a bit more on it when time permits. Thanks for the extra info!
The wiki is a good info source, but if you want to spend a couple of hours experimenting a bit, I highly recommend:
http://lernu.net
This is simply untrue. MRI scans show that people actually think differently when they use different languages.
I would challenge you to cite the relevant scientific studies. Using (partly) different brain areas doesn’t equal “think differently”.
Learn some Japanese. Then you will understand. To speak even basic Japanese, you need to learn to completely rewrite the order in which you construct a sentence.
Don’t be daft. You really have no grasp of even basic linguistic processes, do you? I can guarentee you that the Japanese use exactly the same brain processing as you do when it comes to language processing (or anything else, actually).
I’m glad you can “guarentee” (sic) it for me. However, that wasn’t my point. I know for a fact that I, as a learner, did not find it easy, nor comfortable, to construct simple sentences. Look at Japanese grammar, especially how one constructs lists. To list something, you must have already preprocessed the list and know if you are talking about a finite or open-ended list of items. I assure you, not all lists in Japanese are open-ended. Japanese is incredibly contextual, and spoken Japanese is terse to the point of insanity. It’s rare to even use pronouns in a lot of situations. Someone with better knowledge could explain it better than me, but it is a radical departure from a Western European language. Couple that with the fact that Japanese is often hard to even translate accurately into English, and I’m pretty sure my point stands.
Yes I do know somewhat about linguistics. No I am not an “expert”, nor is it my primary field.
The original premise was “MRI scans show that people actually think differently when they use different languages”. That is untrue. The fact that you find it difficult to learn Japanese is unrelated. The fact that Japanese has a rather strong head-final word order or is pro-drop or whatever feature English doesn’t have and makes it difficult to learn does not mean that an MRI scan can show the difference, nor does it mean that one “thinks differently” (a rather Whorfian statement).
After some quick research, it seems that fMRI scans can show differences between two languages. See e.g. here: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2643466/
So I was wrong about that. It may be the original poster confused “differences in fMRI scans” with “different thinking”. For the latter, see e.g. here: http://www.psychologytoday.com/blog/life-bilingual/201111/change-la…
I’ll not cite anything (can’t be bothered to google) but I have read some papers describing behavior changes in bi/tri/…-lingual people when switching between languages.
The same can easily be seen in people switching between programming languages IFF they are good programmers – those who write BASIC with different syntaxes obviously don’t change their thinking.
I did do some googling, and found this: http://www.psychologytoday.com/blog/life-bilingual/201111/change-la… and its follow-up: http://www.psychologytoday.com/blog/life-bilingual/201212/change-la…
It seems that though there may be noticeable behavioural changes, these are more a result of switching culture as well, although in the follow-up, some examples are given that could be described as non-cultural behavioural changes.
You might be thinking of the English language? If that had happened during the 1st century, we would all be learning Greek.
The problem is, though, that there are way too many people to keep that new, single, universal language stable for a very long time. Even English, our current “universal” language (well, at least the universal business language), is diverging rapidly. Languages change, all the time. You can’t stop that.
Doesn’t modern English get more homogeneous through mass media?
I’m a Korean myself and I have a more difficult time understanding the Korean language. The writing system is very easy. This is the only good thing.