In case it is not obvious why functions should be defined as having the property "if two inputs to the function are the same, the outputs are the same", and why they should be allowed as term expressions:
(Technical reason) Without that restriction, a term could transform into a collection of terms, and a statement like "for all x, x < f(x)" (where f sends a number to a collection of numbers) would both require a lot more effort to make sense of (what would it mean for a number to be less than a collection of numbers, say?) and "for all x, x < (some set)(x)" is equivalent (I think) to quantifying over a predicate (a collection of domain items dependent on x) - it breaks the first-order-ness of the theory.
(Expressive reason) It means that you have a way of describing ways of transforming a term into another term. To reiterate: you can describe properties and relations of ways of transforming terms into other terms. These are incredibly important ideas: functions let you talk about change and transformation, in addition to relation and property. The concepts this allows the logic to express are much broader.
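The technical point above can be made concrete in a few lines of Python (an illustrative sketch, not part of the logic itself): a function symbol must send each input to exactly one output so that every term denotes a single domain element, whereas a multi-valued map would leave a term like f(2) denoting a whole collection.

```python
# Sketch: why a "function" symbol must send each input to ONE output.
# Term evaluation assigns every term a single domain element; a
# multi-valued map would make the value of a term like f(2) ambiguous.

successor = {0: 1, 1: 2, 2: 3}               # a genuine function: one output per input
divisors = {1: {1}, 2: {1, 2}, 3: {1, 3}}    # multi-valued: a SET of outputs per input

def eval_term(f, x):
    """Evaluate the term f(x) under the interpretation table f."""
    return f[x]

# f(2) has a single value -- the term denotes one domain element.
assert eval_term(successor, 2) == 3

# "divisors(2)" denotes a set of numbers, so "x < divisors(x)" would
# compare a number with a collection -- no longer a first-order statement.
assert eval_term(divisors, 2) == {1, 2}
print("a term built from a function denotes a single element")
```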
Signatures also let us stipulate properties (strictly: well-formed formulae involving them which we hold to be true) of their constituent elements (domain elements, functions, predicates). E.g., if we have the signature (N, +, ×) (have in mind "the natural numbers, where you can add them and multiply them and nothing else"), we can stipulate that + and × obey:
x × (y + z) = (x × y) + (x × z), which is the (left) distributive law.
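As a quick sanity check, the left distributive law can be tested mechanically on a slice of the naturals. This is a hedged Python sketch: checking finitely many cases is evidence, not a proof, since the law quantifies over all naturals.

```python
# Brute-force check of the left distributive law
# a * (b + c) == a * b + a * c over a small slice of the naturals.
# Evidence only -- the stipulated law quantifies over ALL naturals.

def left_distributes(a: int, b: int, c: int) -> bool:
    """True when a * (b + c) equals a * b + a * c."""
    return a * (b + c) == a * b + a * c

# Check every triple drawn from 0..19.
N = 20
assert all(
    left_distributes(a, b, c)
    for a in range(N) for b in range(N) for c in range(N)
)
print("left distributivity holds on 0..19")
```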
Stipulating the distributive law lets you derive this as a theorem:
(a + b) × (c + d) = ((a + b) × c) + ((a + b) × d) = (a × c) + (b × c) + (a × d) + (b × d)
The first and second equalities follow from the distributive law (the second uses the right-handed version, which follows from the left-handed one once × is commutative). This is, hopefully, the familiar FOIL (firsts, outsides, insides, lasts) or "smiley face" method of multiplying binomials together from high school algebra.
This can be written formally as
∀a ∀b ∀c ∀d ((a + b) × (c + d) = (a × c) + (a × d) + (b × c) + (b × d))
when + and × are understood as obeying the distributive law.
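The FOIL expansion can likewise be spot-checked mechanically. Here is a small Python sketch (an illustration, not a formal derivation) confirming that the original product, each intermediate distributive step, and the FOIL-ordered sum all agree on a range of inputs:

```python
# Check the FOIL identity (a + b) * (c + d) = a*c + a*d + b*c + b*d,
# with the intermediate distributive steps made explicit,
# on a small slice of the naturals.

def foil_steps(a: int, b: int, c: int, d: int) -> bool:
    step0 = (a + b) * (c + d)                # original product
    step1 = (a + b) * c + (a + b) * d        # first application of distributivity
    step2 = a * c + b * c + a * d + b * d    # second application
    foiled = a * c + a * d + b * c + b * d   # FOIL order: firsts, outsides, insides, lasts
    return step0 == step1 == step2 == foiled

assert all(
    foil_steps(a, b, c, d)
    for a in range(10) for b in range(10) for c in range(10) for d in range(10)
)
print("FOIL identity holds on 0..9")
```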
Usually, just writing down a signature is understood to mean the signature together with the axioms which define the properties and relations of its predicates and functions. This is a notational convenience, and it does no harm. We could introduce an extra list of axioms for the operations and attach it to the signature (analogous to building a system of inference on top of well-formed-formula production rules), but in most other areas of mathematics this is omitted.
When you leave this regime of mathematical logic it is extremely convenient to assume that whenever we write down a signature, we implicitly give it its intended interpretation. This specifies the structure of interest for mathematical investigation/theorem proving, in terms of what can be proved and what we hold to be true, but it does not block people from analysing other models, called non-standard models, of the same signature.
Because of the waning relevance of the distinction between syntax and semantics for our purposes, I am not going to go into all the formal details of linking an arbitrary signature to an arbitrary interpretation. But I will state some general things about models to conclude, and end our discussion of them with a mathematical justification for why set theory is in some sense the "most fundamental" part of (lots of) mathematics.
(1) Signatures do not necessarily, and usually do not, have unique models. More than one object can satisfy the stipulated properties of a signature. This applies as much to something like the arithmetic of the natural numbers as it does to my contrived examples.
(2) Models can be different, but not be interestingly different. If we decided to write all the natural numbers upside down, we would obtain a different set that satisfies all the usual properties of the natural numbers - a formally distinct set, anyway. If we started calling what we mean by "1" what we mean by "2" and vice versa, we would have a formally distinct model. But these models are entirely the same for our purposes. In this regard, models are equivalent when each truth of one obtains just when a corresponding truth obtains in the other. E.g., if we relabelled the numbers 1, 2, 3 by a, b, c, we would have:
a < b holds in our new labelling when and only when
1 < 2 holds in the usual labelling.
When this property holds between models, they are called isomorphic.
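To make the equivalence in (2) concrete, here is a toy Python sketch (the ordering on letters is my own stipulation for illustration) checking that a relabelling of the numbers 1, 2, 3 as the letters a, b, c preserves every instance of the order relation:

```python
# Toy isomorphism between two tiny ordered structures:
# {1, 2, 3} under < and {"a", "b", "c"} under a stipulated order.
# The relabelling f preserves truth: m < n in one model iff
# f(m) precedes f(n) in the other.

f = {1: "a", 2: "b", 3: "c"}                      # the relabelling (a bijection)
letter_lt = {("a", "b"), ("a", "c"), ("b", "c")}  # the order we stipulate on letters

numbers = [1, 2, 3]
for m in numbers:
    for n in numbers:
        assert (m < n) == ((f[m], f[n]) in letter_lt)
print("the relabelling is an order isomorphism")
```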
(3) Models are extremely informative about the syntax of a formal system. For example, you can prove that the truth value of a statement is independent of a system of axioms (neither it nor its negation is derivable) by finding a model of that system where the statement is true and a model of that system where the statement is false. Alternatively, one can assume the statement along with the formal system under study and show that this entails no contradiction (i.e. that it is consistent), and then assume the negation of the statement along with the formal system and show that this too entails no contradiction. This proof method is what showed that the continuum hypothesis is independent of ZFC, the standard axioms of set theory - a result proved in part by Gödel, with the remainder by Cohen.
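The model-based independence method in (3) can be illustrated in miniature (a toy example, vastly simpler than the continuum hypothesis): commutativity is independent of the group axioms, because Z_4 is a model of those axioms in which it is true, and the symmetric group S_3 is a model in which it is false. A brute-force Python check:

```python
# Toy illustration of the model-based independence method:
# "for all x, y: x * y = y * x" is independent of the group axioms,
# witnessed by one model where it holds (Z_4) and one where it fails (S_3).

from itertools import permutations, product

def compose(p, q):
    """Compose two permutations given as tuples: (p . q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def is_group(elems, op, identity):
    """Check closure, associativity, identity, and inverses by brute force."""
    closed = all(op(a, b) in elems for a, b in product(elems, repeat=2))
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(elems, repeat=3))
    ident = all(op(identity, a) == a and op(a, identity) == a for a in elems)
    inv = all(any(op(a, b) == identity for b in elems) for a in elems)
    return closed and assoc and ident and inv

def commutative(elems, op):
    return all(op(a, b) == op(b, a) for a, b in product(elems, repeat=2))

# Model 1: Z_4, the integers mod 4 under addition -- a commutative group.
z4 = set(range(4))
add4 = lambda a, b: (a + b) % 4
assert is_group(z4, add4, 0) and commutative(z4, add4)

# Model 2: S_3, all permutations of three points under composition --
# a group that is NOT commutative.
s3 = set(permutations(range(3)))
assert is_group(s3, compose, (0, 1, 2)) and not commutative(s3, compose)

print("commutativity is independent of the group axioms")
```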
Lastly, you have no doubt noticed (if you've been paying attention) that the discussion of first-order logic required us to talk about collections of objects: e.g. for the domains, for the definitions of functions and predicates, for how the quantifiers worked... It showed up pretty much everywhere. If it were somehow possible to take a privileged few symbols, bind them together in a signature, and write down a bunch of axioms that captured how collections of objects behave, whatever modelled such a theory would be an extremely general object. It would contain things like the natural numbers, groups in algebra, functions in analysis... It would contain pretty much everything. And in that regard, the theory that expressed such a structure would serve as a foundation for mathematics.
This happened. It's called ZFC, for Zermelo-Fraenkel set theory with the axiom of choice. So that's what we'll discuss (very briefly, considering the complexity of the topic and my more limited knowledge of it) next.