Tuesday, June 21, 2005

Analysis of Linguistic Failure

In my life, I have spent many hundreds of hours defining new languages, logics, and even maths in an attempt to get something which would allow me to tackle specific intractable problems. My usual focus was on making systems which could be executed by a computer, with a few very notable exceptions.

It is not an overstatement to say I have made literally hundreds of new almost-languages, almost-logics, and/or almost-maths. But these are invariably incomplete, largely because I'm not nearly as smart as I think I am.

Two things fascinate me about these systems, and most of mine deal with either one, the other, or both. The first is patterns. Pattern recognition on any meaningful level is almost impossible for anything which isn't a biological brain. It's probably almost impossible for a biological brain, too, but we just throw a lot of firepower at it. Many times, I've tried to come up with a language which either contains or explains patterns such that it is, in itself, the pattern recognition system. No luck as of yet.

The other thing which fascinates me is logical fallacies. There are some really bizarre tidbits out there - chinks in the armor of math and logic which reveal that the whole thing is just crazy-glued together. The easiest example would be simple paradox, something like "this sentence is false". I'm not sure if anyone has come up with a useful system in which such things can be said without causing logical fault. I've never seen one outside of my own extremely limited attempts.
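For the curious, one standard dodge here is a three-valued logic in the style of Kleene, where a statement can come out "unknown" instead of true or false. This isn't my system, just a minimal illustrative sketch:

```python
# Kleene's strong three-valued logic, with None standing in for
# "unknown". NOT flips True/False and leaves unknown alone.
def k_not(x):
    return None if x is None else (not x)

# The liar sentence L asserts not(L), so a consistent reading of L
# is any truth value that equals its own negation.
consistent = [v for v in (True, False, None) if k_not(v) == v]
print(consistent)  # [None] - the liar comes out "unknown", no fault
```

In two-valued logic that list is empty, which is exactly the "logical fault"; the third value gives the paradox somewhere harmless to land.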

My early attempts were what I call "linear" attempts. They were unable to reference "backwards", including themselves. While this did eliminate most of the paradoxes, it was totally unsuitable for... well, anything. Unfortunately, later attempts to make nonlinear systems have been largely unsuccessful. My only successes on this front have been related to forced uncertainty and partitioning. Which are moderately interesting - I might go into them at some point, especially partitioning - but they don't really allow for useful pattern recognition. They also have many of the same logic holes that common languages and maths do, although they solve certain problems and allow for some pretty interesting computer-driven analysis.
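To make the "linear" idea concrete, here's a sketch of the restriction, assuming statements are numbered in order and each one may only mention statements strictly after itself, never itself or anything earlier:

```python
# A "linear" language in this sense: refs[i] is the set of statement
# indices that statement i mentions. Legal only if every reference
# points strictly forward, so no reference cycle (and no liar) can form.
def is_linear(refs):
    return all(j > i for i, deps in enumerate(refs) for j in deps)

print(is_linear([{1, 2}, {2}, set()]))  # True: strictly forward
print(is_linear([{0}]))                 # False: statement 0 references itself
```

The check is trivial, which is part of the appeal, but as noted above it also rules out nearly everything you'd actually want to say.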

My most recent attempt is the self-modifying logic set, which is really just an upgraded version of the uncertainty ideas I had. It occurred to me as I was writing self-modifying PHP that there was no real reason to keep statements set in stone - the only reason we think like that is because of the prevalence of the written word. Assuming I don't come back and change this paragraph, it will always be the same, regardless of who reads it. But that's not really the case, and that's not really something that needs to be kept.

For example, if I say "nuns are evil", people will interpret that in many different ways, depending on their various religious ideals and what they think of me. The language doesn't matter at all. I could say "la nunna es loco", or whatever it actually is in whatever language I'm butchering, and I would get largely the same response, albeit from a whole bunch of people whose commentary I can't understand.

Originally, I just assumed that was a glitch in the grammatical rules inherent in Romance languages. But having looked into many decidedly unromantic languages, I found that all their grammatical rules allow for the exact same flaws. Obviously, if no language, logic, or math has as of yet varied even a SMIDGE as to which kinds of flaws it allows, it's a bit unlikely I can create something which solves these problems working from the same basis.

What I want is a language PROCESSING algorithm - something like the strip of fat in our heads that is telling you that I'm talking about brains when I say "the strip of fat in our heads".

Unfortunately, I've been there, too. If you wish to picture that field in your head, it looks a lot like a popular sporting event right after the home team wins the big game. Well, it looks like that if you also light the stadium on FIRE.

There are literally thousands of brilliant people - and tens of thousands of not-so-brilliant people - working on that problem. Some call themselves computer scientists, some call themselves "cog sci", some call themselves space cowboys. They all share one thing: a deep frustration at their continued abject failure.

Which, I'm sure you'll agree, does not bode well for our hero's chances should he enter such a place.

Meh. Onward! Fat will fry!


Textual Harassment said...

Why would you need a language in which logical fallacies are impossible? Just because I can say
A = !A
doesn't mean there's anything wrong with the language. As long as you know how to identify these contradictions, isn't that good enough?

Also a phrase like "nuns are evil" is only asking you to question the terms "nuns" and "evil" (I think the verb "to be" is pretty much set in stone, though). You can say something like "nuns are not nuns", but anyone can identify that as nonsense.

Craig Perko said...

The problem is in identifying the contradictions. Some are straightforward, like "A = !A". But this is supposed to run in a computer, after all, and the data all needs to be context sensitive.

There are a lot of places where contradictions and logical fallacies cause problems. While many are noticed and dealt with either by prioritizing or requerying, many are subtle enough that they can cause some serious system damage. Think about basic programming!

It's not just straightforward paradoxes. What if you're using an incorrect data type? What if you add a new logic block which interacts unexpectedly with an old logic block? These are things you've probably seen happen, if you've programmed much. The errors they cause are often hard to track down - or even hard to SEE.
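A toy example of the kind of subtle clash I mean - hypothetical rules, brute-force check, nothing clever:

```python
from itertools import product

# Each rule looks harmless on its own, but the "new" third rule
# quietly contradicts the first two combined: rule 1 and 2 chain
# a into c, while rule 3 demands a without c.
rules = [
    lambda a, b, c: (not a) or b,    # a implies b
    lambda a, b, c: (not b) or c,    # b implies c
    lambda a, b, c: a and (not c),   # new block: a holds, c does not
]

def satisfiable(rules):
    # Brute-force every truth assignment; fine for a toy rule set.
    return any(all(r(*vals) for r in rules)
               for vals in product([True, False], repeat=3))

print(satisfiable(rules))  # False: the combined rules can never hold
```

No single rule is "A = !A", yet the system as a whole is broken - and real rule sets are far too big to brute-force like this.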

I was trying to create a language in which they couldn't happen. I never really succeeded, although I did come up with some interesting languages along the way.

On a second note: "nuns are evil" causes a reaction in you based on your definition of "nun", the scope of all nuns you believe the sentence relates to, the definition(s) and scope(s) of "evil", the scope of "are", and the exact type of "are" you're dealing with - contrary to popular belief, there's about a million different things "are" can mean. Add to that the fact that the sentence is coming from an outside source, which dramatically changes your perception based on what you know about the speaker and how the phrase is couched. I don't think anyone will get offended by my using "nuns are evil" as an example sentence, but if I said it seriously, I would probably get some comments.

Things get really complex when you start using probabilities, scopes, and quantities combined with the whole shebang - which we do every day, often without realizing it. The last time you said "I am hungry", how hungry were you? Were you actually hungry, or just saying it to be polite? Were you hungry for food? What kind of food? If someone ELSE says "I am hungry", you have to ask the same questions about what THEY mean. This is the sort of thing which is usually overlooked when beginners try to create some kind of human-computer contextual language.


But you're right - paradoxes are only a small problem.

Textual Harassment said...

Maybe a language needs to allow for nonsense in order to leave room for original thought. Maybe the rules of a language that allows nothing incorrect to be expressed would have to contain all the knowledge of what is correct in the first place. I'm thinking of the novel 1984. In it, the government was in the process of reducing the English language to "newspeak", wherein nothing could be expressed that was against the party's philosophy. As a result, one could talk and talk in newspeak without saying anything at all meaningful. Even so, it still contained contradictions:

"It was of course possible to utter heresies of a very crude kind, a species of blasphemy.

"It would have been possible, for example, to say 'Big Brother is ungood.' But this statement, which to an orthodox ear merely conveyed a self-evident absurdity, could not have been sustained by reasoned argument, because the necessary words were not available."--George Orwell, in an appendix to 1984

So you can say nonsense in any language, but what's more important is the means to work around the nonsense, say, debugging programs and strict type checking.

Craig Perko said...

Yes - many of my early languages were too rigid. The biggest problem is the "computer-executable" bit - you have to very carefully plan out the ways in which a language can grow if you're making a computer execute it. In turn, this means it can only grow in ways you've predicted.

Without the language to identify the ways in which the language is likely to change, I never figured out how to do it usefully.

I also never figured out how to make it work nicely with my various logical systems. The more powerful the logical system I produced, the less capable of change it was. Change naturally tended to shatter the logic system.

The power of context cannot be overstated: I created several computer-executable logic systems ("languages") which allowed for perfect, error-free coding in very restricted "worlds"... because the computer knew the context of what you were trying to communicate, and the logic system would never fail to create an exact chain of statements from one end to the other.

The key word in that paragraph is, unfortunately, "restricted". There is no way to apply those languages to an open world.

Although... duh... I might have been able to apply them to a GAME world. I wonder why that never occurred to me? I wasn't big into games those years, but I would think I would have thought of it.

I can already see some problems with that, but I'll have to go back and check my notes. See what I can think up.