This makes the issue much more precise, thanks. — J
Cheers.
The difference between syntax and semantics is very clear in formal logic. Less so in natural languages. The following is probably familiar.
Let's look briefly at propositional logic. It includes just letters, p, q, and so on, as many as you want, and a couple of symbols, usually v and ~. To this we add formation rules that tell us what we are allowed to write. First, we can write any letter by itself. So we can write "p", or we can write "q". Then, we are allowed to put a "~" in front of anything else we are allowed to write; so we can write "~p" and "~q" and so on. Then, for any two things we can write, we can join them with a "v". So we can now write "~pvq". From this, we can set out a system that shows how some strings of letters are well-formed - they follow the formation rules - while others are not... if we follow those rules we can never write down "pvvq~", for example. (I've left out brackets just to keep things simple. Also, ∧ and ⊃ can both be defined in terms of v and ~, so they are not needed here.)
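If it helps to see how mechanical this is, here is a toy checker in Python (the letter set and the function name are my own choices, nothing standard). It just applies the three formation rules above; since brackets are left out, it only asks whether a string can be built by some application of the rules.

    # A sketch of the formation rules: a string is well-formed if it is a
    # single letter, a "~" in front of something well-formed, or two
    # well-formed strings joined by a "v".
    LETTERS = set("pqrs")

    def well_formed(s):
        if s in LETTERS:                      # any letter by itself
            return True
        if s.startswith("~"):                 # "~" in front of anything well-formed
            return well_formed(s[1:])
        for i, ch in enumerate(s):            # try every way of splitting around a "v"
            if ch == "v" and well_formed(s[:i]) and well_formed(s[i + 1:]):
                return True
        return False

    print(well_formed("~pvq"))   # True
    print(well_formed("pvvq~"))  # False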
All we have here is a system of syntax. It is purely a set of rules for stringing letters together in a specific way. In particular, it tells us nothing of what "p" and "q" stand for, and so nothing of which of our strings of letters might be true or false.
We add an interpretation to this syntactic system by ascribing "T" or "F" to each of the letters, together with a rule for the truth functionality of "v" and "~". A string beginning with "~" will be T if and only if the stuff after the "~" is F, and a string joined by a "v" will be T if and only if the stuff on at least one side of the "v" is T.
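To put the same point in code (again only a sketch, with Python's True and False standing in for T and F): an interpretation is an assignment of a truth value to each letter, and the two rules above then settle the value of any well-formed string.

    # Evaluate a (bracket-free) string under an assignment of truth values to
    # the letters, using the two rules above. The string is assumed to be
    # well-formed in the sense of the earlier sketch.
    def evaluate(s, assignment):
        if s in assignment:                    # a letter denotes T or F directly
            return assignment[s]
        if s.startswith("~"):                  # "~X" is T iff X is F
            return not evaluate(s[1:], assignment)
        i = s.index("v")                       # "XvY" is T iff at least one of X, Y is T
        return evaluate(s[:i], assignment) or evaluate(s[i + 1:], assignment)

    print(evaluate("~pvq", {"p": True, "q": False}))  # False: read as ~(pvq), and "pvq" is T here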
A useful way to understand this is that p, q, and so on denote either T or F. We've moved from syntax to semantics.
We can expand our syntactic system by allowing ourselves to write not just p's and q's, but also "f(a)" and "g(b)" and so on, in the place of those p's and q's. We can add rules for using "(∃x)", but still at the syntactic level - just setting out what is well-formed and what isn't. This gives us a bigger system.
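Purely for illustration, the toy checker can be stretched to cover the new strings (the particular letters and the pattern for atoms are my own stipulations): the only change at this level is what counts as an atom and what may be written in front of a well-formed string. Still pure syntax; nothing here says what any of it stands for.

    import re

    # An atom is now a sentence letter or a predicate letter applied to a name,
    # and "(∃x)" may be written in front of anything well-formed.
    ATOM = re.compile(r"^([pqrs]|[fgh]\([abcex]\))$")

    def wf(s):
        if ATOM.match(s):
            return True
        if s.startswith("~"):
            return wf(s[1:])
        if s.startswith("(∃x)"):
            return wf(s[len("(∃x)"):])
        for i, ch in enumerate(s):
            if ch == "v" and wf(s[:i]) and wf(s[i + 1:]):
                return True
        return False

    print(wf("(∃x)f(x)"))    # True
    print(wf("f(a)v~g(b)"))  # True
    print(wf("f(a)(∃x)"))    # False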
And to that we add more interpretation, where the letters "a", "b" and so on stand for the individuals a and b, respectively, and "f" stands for some set of those individuals, perhaps "f" stands for {a,c,e} while "g" stands for {b,c} or whatever. We then get that f(a) is true - "a" is in the set {a,c,e} - while f(b) is false - "b" is not in the set {a,c,e}. Similar rules apply for interpreting the quantifiers. And this gives us predicate calculus.
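Here is what that interpretation looks like as a sketch (the domain and the particular sets are just the ones stipulated above): each predicate letter is assigned a set of individuals, and f(a) is true just when the individual is in the set.

    # A toy interpretation: a domain of individuals and an extension for each
    # predicate letter, as stipulated above.
    domain = {"a", "b", "c", "e"}
    extension = {
        "f": {"a", "c", "e"},
        "g": {"b", "c"},
    }

    def holds(pred, individual):
        # f(a) is true just in case a is in the set that "f" stands for
        return individual in extension[pred]

    def exists(pred):
        # (∃x)f(x) is true just in case something in the domain is in that set
        return any(holds(pred, x) for x in domain)

    print(holds("f", "a"))  # True:  "a" is in {a, c, e}
    print(holds("f", "b"))  # False: "b" is not in {a, c, e}
    print(exists("g"))      # True:  something is in {b, c}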
We can then expand the system once more, adding the operator "☐" outside of all of the stuff in the syntax for predicate logic, together with a few rules for how we can write these. This gives us the systems S1 through S5. These are just ways of writing down strings of letters, with ever more complicated permutations.
In order to give a coherent interpretation to these systems, Kripke taught us to use possible world semantics. In a way, all this amounts to is letting the predicates used previously stand for different sets at different worlds. So we said earlier that "f" stands for {a,c,e}, and to this we now add that in different worlds "f" can stand for different sets of individuals. So in w₀ "f" stands for {a,c,e}, while in w₁ "f" stands for {a,b}, and so on in whatever way we stipulate - w₀ being world zero, w₁ being world one, and so on. Now we have added a semantics to the syntax of S4 and S5.
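And a last sketch for the possible-worlds step (my own toy version; I've left accessibility out here too, so every world "sees" every world, which is in effect the S5 picture): each world assigns its own set to each predicate letter, and "☐" quantifies over the worlds.

    # Each world carries its own extension for each predicate letter, in
    # whatever way we stipulate.
    worlds = {
        "w0": {"f": {"a", "c", "e"}, "g": {"b", "c"}},
        "w1": {"f": {"a", "b"},      "g": {"c"}},
    }

    def holds_at(world, pred, individual):
        # f(a) is true at a world just in case a is in f's set at that world
        return individual in worlds[world][pred]

    def necessarily(pred, individual):
        # ☐f(a) is true just in case f(a) is true at every world
        return all(holds_at(w, pred, individual) for w in worlds)

    print(holds_at("w0", "f", "b"))  # False: "b" is not in f's set at w0
    print(holds_at("w1", "f", "b"))  # True:  "b" is in f's set at w1
    print(necessarily("f", "a"))     # True:  "a" is in f's set at every world
    print(necessarily("f", "b"))     # False: it fails at w0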
So reference-fixing is giving an interpretation, yes? — J
Exactly.
We have two levels, if you like, for each system. At one level we just set out how the letters can be written out, which sequences are acceptable. That's the syntax. At the next level, we add an interpretation, what the letters stand for. That's the semantics. So for propositional logic, the letters stand for T or F, and for predicate logic, we add individuals a, b, c... and for modal logic we add worlds, w₁, w₂ and so on, in order to get our interpretation, our semantics.
And to this we might add a third level, where we seek to understand what we are doing in a natural language by applying these formal systems. So for propositional logic, we understand the p's and q's as standing for the sentences of our natural language, and T and F as True and False. For predicate logic, we understand a, b, c as standing for Fred Bloggs, the Eiffel Tower and consumerism, or whatever. And in modal logic, we get Naming and Necessity, where we try to understand our talk of modal contexts in natural languages in terms of the formal system we have developed.
I left out brackets, truth tables, domains, and accessibility, amongst other things, and only scratched the surface of extensionality. But I hope I've made clear how clean the distinction is between syntax and semantics in formal systems.