Appendix 3: A Near-optimal Loglan Syntax?


Introduction

Referring to the two sentences he quoted when considering the phonology of Plan B, namely:
English:  I   like her driving my car
 Plan B:  G-l tk-s ck-l mg-n g-n cc-l

and
English:  She  likes me
 Plan B:  Ck-l tk-n  g-l

Jacques Guy wrote:

"Let us now turn to the grammar of the language. It makes do with an unlimited number of ... er... case-markers, of which you have already encountered three: -l, -n, and -s. -l has highest precedence, -n second highest, -s third. Armed with the vorpal sword of that knowledge, you should be able to disentangle the Gordian knot of the two sentences above in even less time flat than Alexander."

Is this a fair assessment?

What are the 'case-markers'?

They are not referred to as 'cases' by Jeff Prothero, nor are they cases as we understand the cases of languages like Latin, German and Russian, since no such system would allow an infinite set of cases. Their analogy to traditional cases is that they are suffixes and that they do mark how each word relates to the rest of the sentence. So what exactly are they?

Perhaps the simplest thing is to quote what the author of Plan B himself actually wrote:

"We know we need to attach about two bits of tree-structure information to each word to record the tree structure. Two bits of end-of-word-indicator affix plus two bits of tree structure yields a four-bit field -- just the length of our letters. Thus our tentative solution to the word-resolution problem: A word is a string of affixes ending with one of the reserved affixes 'l', 'n' 's' 'v'. (Any four of the eight single-letter affixes would do just as well.)"

The tree referred to here is a binary tree that will be used by a computer program as it parses each sentence. So the "case" markers are suffixes that (a) mark the end of a word, and (b) show the position of that word within the parse tree. Although the author writes: "... only the first four, or at the very most eight, affixes are ever likely to be needed in practice", he provides "an infinite number of end-of-word affixes. The alphabetically last four affixes in each affix length group are end-of-word affixes, and each such affix binds less tightly than the previous one." He illustrates this with the first twenty. I show these below (as the suffixes are given with Plan B letters, and not our loglang syllabary, I have included the bit patterns of the suffixes also):

Suffix   Bit pattern                   Precedence
l        1000                          0
n        1010                          1
s        1100                          2
v        1110                          3
ts       1101 1100                     4
tt       1101 1101                     5
tv       1101 1110                     6
tz       1101 1111                     7
pzs      1011 1111 1100                8
pzt      1011 1111 1101                9
pzv      1011 1111 1110                10
pzz      1011 1111 1111                11
kzzs     0111 1111 1111 1100           12
kzzt     0111 1111 1111 1101           13
kzzv     0111 1111 1111 1110           14
kzzz     0111 1111 1111 1111           15
zvzzs    1111 1110 1111 1111 1100      16
zvzzt    1111 1110 1111 1111 1101      17
zvzzv    1111 1110 1111 1111 1110      18
zvzzz    1111 1110 1111 1111 1111      19

... and so on ad infinitum. Thus Plan B can, in theory, cope with a parse tree of infinite depth, but no computer can do that!
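The regularity of these affixes can be checked mechanically. The sketch below is mine, not the author's; it assumes (a deduction from the table above, not something stated in this appendix) that the sixteen Plan B letters b c d f g h j k l m n p s t v z map, in alphabetical order, to the nibbles 0000 through 1111. That assumed mapping reproduces every bit pattern in the table:

```python
# Sketch: reconstructing the bit patterns of Plan B's end-of-word affixes.
# ASSUMPTION (mine): the 16 letters, in alphabetical order, map to the
# nibbles 0000..1111. This is consistent with every pattern tabulated above.
LETTERS = "bcdfghjklmnpstvz"
NIBBLE = {ch: format(i, "04b") for i, ch in enumerate(LETTERS)}

def bits(suffix: str) -> str:
    """Return the bit pattern of a suffix, one nibble per letter."""
    return " ".join(NIBBLE[ch] for ch in suffix)

# The first twenty end-of-word affixes, in order of precedence 0-19:
AFFIXES = ["l", "n", "s", "v",
           "ts", "tt", "tv", "tz",
           "pzs", "pzt", "pzv", "pzz",
           "kzzs", "kzzt", "kzzv", "kzzz",
           "zvzzs", "zvzzt", "zvzzv", "zvzzz"]

for prec, affix in enumerate(AFFIXES):
    print(f"{affix:>5}  {bits(affix):<24} precedence {prec}")
```

Note that each single-letter end-of-word affix ends in bit 0, while the longer affixes are built from the alphabetically last letters; how the series is to be continued beyond the twenty given is not spelled out, so the list above simply transcribes the table.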

 

So how do these end of word suffixes work?

The author gives seven sample sentences to illustrate how the language works. They use the following vocabulary:

a(n) = b     can (able) = cn     car = cc     drive = mg     I/me = g     like(s) = tk
she/her = ck     the = hb     to = th     will (future) = ml     you = j

We have already met two of these sentences twice before. However, for completeness I include them here again, writing them this time as they should be written, without hyphens between the morphemes, i.e. with no hyphen before the end-of-word suffix. They are:

  1. Gl tkn  jl.
     I like you.
     
  2. Ckl tkn   gl.
    She likes me.
     
  3. Gl mgn   hbn ccl.
     I drive the car.
     
  4. Gl mgn   ckn ccl.
     I drive her car.
     
  5. Gl cnn mgn   bn ccl.
     I can drive  a car.
     
  6. Gl tks  ckl mgn     gn ccl.
     I like her driving my car.
     
  7. Gl mln  mgn   gn ccl thn jl.
     I will drive my car to  you.
 

There are several points one can make about these sentences. Firstly, consider Jacques Guy's observation:

Ha, ha! I hear you say, why "Gl cnn mgn bn ccl" if "Gl tks
ckl mgn ccl"? Shouldn't it rather be "Gl cns mgn bn ccl"?
I agree with you. It's probably a typing mistake .... However:
 
I will drive my car to you
G-l ml-n mg-n g-n cc-l th-n j-l
 
So, clearly, it wasn't a typing mistake.

Indeed, some of the parse trees do not appear to be like those I would have derived from the above sentences. We are not given any proper indication as to how the parse trees are to be derived from the linear sentences. Therefore, as Jacques Guy's comment shows, we cannot always be certain precisely which suffixes we should be appending to words. The grammar presented to us is incomplete.

Secondly, although in all seven specimen sentences every word is clearly bimorphemic, i.e. composed of one lexical morpheme and one grammatical morpheme (the end-of-word suffix), there is nowhere any discussion of what constitutes a 'word'. Are all words in the language bimorphemic like these? This is not made clear.

Thirdly, it will be seen that, unlike languages such as Russian and Chinese, Plan B has a definite article; but there is no discussion as to why it does. Many languages that have a definite article do not have an indefinite one, e.g. Welsh, Gaelic and Semitic languages such as Arabic, yet Plan B has one. Why? We notice also that Plan B, just like English, forms the future with a modal auxiliary! Indeed, it is only too apparent that Plan B is just a relexification of English with suffixes to show the precedence of each word in its parse tree.

This last point, in my opinion, is a major criticism. The seven specimen sentences are really no more than the following, relexified so that they can be easily represented as a stream of bit quartets:

  1. me0 like1 you0.
  2. her0 like1 me0.
  3. me0 drive1 the1 car0.
  4. me0 drive1 her1 car0.
  5. me0 can1 drive1 a1 car0.
  6. me0 like2 her0 drive1 me1 car0.
  7. me0 will1 drive1 me1 car0 to1 you0.
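This numbered relexification can in fact be produced mechanically, which underlines the point. The sketch below is mine; it assumes that every word in the seven specimen sentences is a stem plus a single-letter end-of-word suffix (which holds for all of them), and it uses the vocabulary given earlier, with the same English glosses as the list above:

```python
# Sketch: generating the numbered relexification of the specimen sentences.
# ASSUMPTION: each word = stem + one single-letter end-of-word suffix,
# true of all seven sample sentences (only -l, -n and -s occur in them).
PRECEDENCE = {"l": 0, "n": 1, "s": 2, "v": 3}

# Vocabulary from the table above; glosses chosen to match the numbered list.
GLOSS = {"b": "a", "cn": "can", "cc": "car", "mg": "drive", "g": "me",
         "tk": "like", "ck": "her", "hb": "the", "th": "to",
         "ml": "will", "j": "you"}

def relexify(sentence: str) -> str:
    """Turn e.g. 'Gl tkn jl.' into 'me0 like1 you0'."""
    out = []
    for word in sentence.lower().rstrip(".").split():
        stem, suffix = word[:-1], word[-1]
        out.append(f"{GLOSS[stem]}{PRECEDENCE[suffix]}")
    return " ".join(out)

print(relexify("Gl tks ckl mgn gn ccl"))   # sentence 6
```

That such a trivial, purely mechanical substitution recovers the sample sentences is itself evidence of how close Plan B stays to English.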

Jacques Guy essentially responds the same way with his 'Plan C', except that he shows precedence with 1, 2 & 3, where I have used 0, 1 & 2; also he uses hyphens, but they "have been inserted only for your convenience, o, gentle readers!" For example: "Me-1 drive-2 the-2 car-1" and "Me-1 can-2 drive-2 a-2 car-1."

 

Conclusion

As I wrote on the introductory page: "Thus, Plan B provides a way whereby one may analyze an English sentence as a binary tree and then generate a continuous stream of characters (alphabetic, bits or whatever) which both maintains the same word order as English and unambiguously represents that tree." Indeed, quite clearly by providing each word with an end-of-word suffix (or "case ending") which shows its precedence in a parse tree, Plan B makes it easy for a programmer to write a parser. But as Plan B is basically a relexification of English, I fail to understand how it could be used to test the Sapir-Whorf hypothesis.

One must also ask why we should want to write a parser at all. The author of Plan B states that: "Our problem, then, is to supply a good encoding scheme allowing these graph fragments to be linearized, sent through a bitstream channel, and be reconstructed at the far end." I have to ask why. For millennia, human beings have been linearizing fragments of their knowledge of objects and relations, sending them through a speech channel and reconstructing the knowledge fragment at the other end; it is what we do when we speak to one another. So why do we need to get a machine involved, especially if the aim is to test the Sapir-Whorf hypothesis, which maintains that a particular language's nature influences the way its human speakers think and conceptualize the world?

One very good reason for wanting a parser might be in a program attempting automatic translation from one language to another. But the parse tree generated by Plan B will not do as a machine 'interlingua' for this purpose. Machine translation is far from being a trivial task; I recommend anyone interested in it to read Rick Morneau's comprehensive monograph Lexical Semantics.

A loglang is a constructed language designed and engineered to implement formal logic. Typically, loglangs are based on predicate logic but they can be based on any system of formal logic. However, it seems clear to me that Plan B is not based upon any such system, therefore we cannot class it as a 'loglang'.

However, it claims to be a loglan, not a loglang. In its most narrow sense, Loglan is a loglang, devised originally by Dr James Cooke Brown, and developed by the Loglan Institute. Others use the term more widely to include loglangs such as Lojban, developed from the same ideas as those put forward by Dr James Cooke Brown. Occasionally the term is used to include languages like Voksigid which, although owing their origin or inspiration to the Loglan/Lojban set of languages, do not implement formal logic and are, therefore, not loglangs (though they are engelangs). But in my opinion this usage of the term is misleading, especially since the word 'Loglan', like 'loglang', is derived from logical language. In my opinion, the word is best reserved to denote a subset of loglangs which owe their origin or inspiration to James Cooke Brown's ideas.

It seems clear to me that no satisfactory loglang can be constructed using 'Plan B' or 'Plan C' grammar. Whether such a grammar is optimal for an engelang is, I think, very debatable. Indeed, whether we can sensibly speak of an 'optimal engelang syntax' seems to me debatable in itself.

 
Created April 2006. Last revision:
Copyright © Ray Brown