Semantics vs. Syntax in Language
Syntax is what the grammar allows; semantics is what it means. For example, int x = "five"; is syntactically okay (type, identifier, = value) but semantically wrong ("five" is not an int).

No idea what the following is supposed to mean. It couldn't be more wrong.

Semantics is what matters in an interface, because it maps one to one between what the user of the interface wants and the way the implementor of the interface does it. Syntax is what doesn't matter to one or the other party on each side of the interface. If the user doesn't care about the distinction between two functions but the implementor forces them to make a distinction, that is syntax. Conversely, if there is no distinction between the implementations of two functions but the user forces the implementor to distinguish them, then that too is syntax. Semantics is the one-to-one mapping between how the user thinks about the black box and the black box's actual internals; syntax is the measure of the discrepancy between the two. Syntax that supports the user's view of the system is helpful and good. Syntax that violates that view in favour of the implementor's convenience is harmful and evil. An interface that has only good syntax adheres to SyntaxFollowsSemantics. Since the user's view of the system differs from user to user, SyntaxIsSubjective. This is why syntax should be user-configurable.

Ok, that clarifies it quite a bit more. Certainly an unusual use of the words, but the concepts are now making sense. Hey, that sounds familiar... :-)

Are you assuming an idealized user? What about users who don't care about something, but really really should? What confused me for a while was the distinction between good syntax and bad syntax.

The idealized user is me. :) And the distinction kicks in this way: I really don't give a damn about the low-level details that compiler writers and linguists agonize over. Those things aren't even syntax to me; they're chaff that should be completely eliminated from my perceptions, buried so deep in the system that I never, ever encounter it.

Well, you gotta specify the semantics of something somehow. And unfortunately, to a great many people SyntaxMatters. Perhaps more than it should; some Smalltalkers like to complain that Smalltalk might have gained more widespread acceptance had it used curly braces rather than square brackets. (I doubt it, but hey, it's a nice StrawMan.) But there is a constant tension between a) what's easy to read; b) what's easy to write (type in); and c) what's easy to process (both for the compiler itself, and for other tools that might process or generate source code). Some languages focus on terseness; C and Perl come to mind. Fans of these languages like typing them in, as they don't have to repeat themselves too much. Others are very verbose; it's easy to read and figure out what is going on (and the compiler can use the verbosity to detect errors), but...
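To make the interface-centred definition above concrete, here is a small, hedged C sketch. The log_int, log_double, and log_value names are hypothetical, not from the page: the caller only means "log this value", yet plain C forces a choice of function name; by the definition above that forced distinction is syntax, and hiding it again (here with C11's _Generic) restores the one-to-one mapping the author calls semantics.

  #include <stdio.h>

  static void log_int(int v)       { printf("value: %d\n", v); }
  static void log_double(double v) { printf("value: %f\n", v); }

  /* C11 generic selection: dispatch on the argument's type so the user
     no longer has to spell out a distinction they don't care about. */
  #define log_value(x) _Generic((x), int: log_int, double: log_double)(x)

  int main(void)
  {
      /* Without the macro, the implementor's distinction leaks through: */
      log_int(42);
      log_double(3.14);

      /* With it, "log this value" is all the user has to say: */
      log_value(42);
      log_value(3.14);
      return 0;
  }

Whether this counts as good syntax or merely as papering over the interface is, per SyntaxIsSubjective, up to the user.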
- Show there is a real problem with the inflexibility of the traditional way of doing things, i.e. that having to write for (int i = 0; i < n; i++) {...} instead of, say, for i from 0 to n-1 step 1 loop ... end loop is such a traumatic experience for the user of a programming language that it is a problem worth addressing (a sketch follows this list).
- Show that your medicine is not worse than the disease, i.e. that it has enough virtues and few enough drawbacks. One can easily imagine that when every programmer invents his own syntax, there will be some trouble reading other people's code.
- Show not only that it is economically feasible, but also that it is a problem worth addressing now. Even if we admit that having pluggable syntax is a "nice to have", is it worth bothering with? For example, English-speaking folks know that a phonetic notation is technically superior to the ad-hoc mumbo jumbo of English spelling. However, they prefer the fun of their SpellingBees to having their kids read and write perfectly by the second grade.
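As a concrete anchor for the for-loop complaint in the first point above, here is a hedged C sketch. The FOR_RANGE macro is hypothetical, not something the page proposes: the standard preprocessor already lets a user re-skin the counted loop a little, but nothing in C lets them write the full for i from 0 to n-1 step 1 loop ... end loop form, which is roughly the gap that pluggable syntax would have to fill.

  #include <stdio.h>

  /* Hypothetical re-skinning of the counted loop via the preprocessor;
     the semantics are exactly those of the traditional for loop. */
  #define FOR_RANGE(i, lo, hi) for (int i = (lo); i < (hi); ++i)

  int main(void)
  {
      int n = 5;

      /* The traditional spelling the first point complains about: */
      for (int i = 0; i < n; i++)
          printf("%d ", i);
      printf("\n");

      /* The re-skinned spelling: same meaning, different surface syntax. */
      FOR_RANGE(j, 0, n)
          printf("%d ", j);
      printf("\n");

      return 0;
  }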