The notion of a series, or chain or regress, comes up a number of times in philosophical discussions. In this post, we’re going to formalize the notion in general, and then develop this into a formalization of essentially ordered series in particular.
Intuitively, a series is when we start with some member and from there we trace through the other members one at a time, possibly indefinitely. The order in which we trace or discover the members in the series can be (and often is) the inverse of their order in reality. This happens with causal chains, for instance, when we start with some effect A, which is caused by some B, which in turn is caused by some C, and so on. Here, tracing up the series — as we just did — involves tracing backward through the causes. In other words, later members in the tracing correspond to earlier causes in reality.
To give this a formal notation, we can write a series as S = (→sn) = (… → s3 → s2 → s1), where the index of each member represents the order of our tracing backward through the members, while the order of the members represents the order of reality. Thus, because s1 has the first index it is the first in the tracing, but because it is the last member it is the last in reality.
Technically we could drop the requirement that a series has a last member, allowing it to be infinitely extended in both directions. But for our purposes here this would just clutter the notation unnecessarily, so we’ll keep the requirement for the sake of clarity. Nevertheless, the central result of this post does not hinge on this requirement.
Mathematical underpinnings of our notation
Note: if you’d rather not read a bunch of maths, and are happy with our above notation, then you’re welcome to skip this section.
We can give our series notation a mathematical underpinning by analyzing it in terms of a well-known mathematical structure: a sequence. The idea is simple: start with the sequence of indices (which represent our tracing backward up the series), match them up to members in the series, and then give those indexed members the reverse order to that of the indices. More formally, a series (or chain, or regress) is a structure S = (S, I, <, σ) where:
- (S1) S is a non-empty set of members and I is a non-empty set of indices,
- (S2) σ:I→S is a map from indices to members,
- (S3) < is a strict total order on I,
- (S4) for each i∈I, if the subset of all indices greater than i is non-empty, then it has a least element,
- (S5) I has a least element, written 1.
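To make this concrete, the structure can be sketched in a few lines of Python. This is only an illustrative finite model of the moving-stone example; the names (`members`, `indices`, `sigma`, `in_reality`) are hypothetical, not part of the formalism:

```python
# A finite instance of the structure S = (S, I, <, sigma).
# All names here are illustrative, not part of the formalism.
members = {"me", "arm", "stick", "stone"}            # S: non-empty set of members
indices = [1, 2, 3, 4]                               # I: indices under the usual <
sigma = {1: "stone", 2: "stick", 3: "arm", 4: "me"}  # sigma: I -> S

# (S5): I has a least element, written 1.
assert min(indices) == 1
# The order of the series in reality is the inverse of the index order:
in_reality = [sigma[i] for i in sorted(indices, reverse=True)]
print(" → ".join(in_reality))  # me → arm → stick → stone
```

Here the usual order on the integers supplies (S3) and (S4) for free, since every non-empty set of integers bounded below has a least element.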
In (S1) we separate S (the members) and I (the indices) because, in general, the same member might appear multiple times within the series.
In (S2) the map σ connects the two sets and captures repetition in the series when two distinct indices map to the same member.
(S3) and (S4) tell us that the indices form a sequence. (S3) guarantees that for any distinct indices i and j, either i < j or i > j, and (S4) guarantees that each index (except the last) has an index immediately after it, which we can label i+1.
(S5), which is technically optional, allows us to write this sequence starting with a first member as (in) = (1, 2, 3, 4, …).
Using the map σ, we can move from this sequence of indices to a series of indexed members, which are the true members of the series. For each i∈I, we have the indexed member si = (σ(i), i). They’re called indexed members because they’re members with an index attached. How do we order these indexed members? In order to get what we had earlier, we need the indexed members to be in the opposite order of their indices. So, if i and j are distinct indices with i < j, then their two corresponding indexed members will be si and sj respectively, with si > sj. Given that the starting order on I was a strict order, there is no problem with inverting it into a strict order on the indexed members, and so we can safely write our series with the above notation of S = (→sn) = (… → s3 → s2 → s1).
So, the members of the series S are the indexed members ordered inversely to their indices. So, s1 is the last member in the series. Notationally, we will refer to the series with either a bold-face S or the arrowed (→sn), depending on which is easier to read at the time. These two notations are interchangeable.
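The construction of indexed members can be sketched the same way. This illustrative Python model (hypothetical names, not part of the formalism) uses a light series in which one member repeats, to show how attaching indices keeps repetitions distinct:

```python
# Indexed members s_i = (sigma(i), i): pairing each member with its index
# keeps repeated members distinct. Illustrative sketch only.
sigma = {1: "red", 2: "blue", 3: "blue", 4: "green"}  # blue appears twice
indexed = [(sigma[i], i) for i in sorted(sigma, reverse=True)]
print(" → ".join(member for member, _ in indexed))  # green → blue → blue → red
# The two occurrences of blue are distinct indexed members:
assert len(indexed) == len(set(indexed)) == 4
```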
I admit that all of this is quite abstract, and so before continuing, we’ll consider some examples. As mentioned before, a familiar class of examples is causal chains. These start with some final effect (s1), and trace backwards to its cause (s2), and then to the cause of that cause (s3), and so on. For instance, consider the causal chain of me moving my arm, which in turn moves a stick, which in turn moves a stone. We would write this series as (me → arm → stick → stone). Similarly, we could depict the series of the successive begetting of sons as (… → grandfather → father → me → son → grandson).
But causal chains are not the only kinds of series. Say we define word1 in terms of word2, word2 in terms of word3, and so on. This would give us a series of definitions (→wordn) = (… → word3 → word2 → word1). And, as we saw in a previous discussion, some good1 might be desirable as a means to some other good2, where this good2 is itself desirable as a means to some other good3, and so on. This would give us a series of desires ordered from means to ends, (→goodn) = (… → good3 → good2 → good1). Let’s say we took members from the moving chain above and ordered them as a desiring series: I desire to move my arm, as a means to moving the stick, as a means to moving the stone. This desiring series would then be written as (stone → stick → arm), which has the members in the opposite order from a causal chain.
Each example so far is a series where earlier members depend on later members. Call such a series a “dependent series.” We’ll return to these below, but for now, we note that not every series is a dependent series. Imagine, for instance, we had three lights of different colors (red, blue, and green), such that only one light is on at a time, and where the light that’s on switches randomly and endlessly. The series of switched-on lights up until some time might then be something like (… → red → green → blue → blue → red).
Two final points on notation before we proceed.
First, sometimes it will be helpful to talk about sub-series, which are taken from a series by excluding some of the later members. The sub-series (→sn)n>i consists of all the indexed members of (→sn) that come before si (remember that the order of the indices is the inverse of the order of the indexed members in the series). Unsurprisingly, we write this as Sn>i = (→sn)n>i = (… → si+3 → si+2 → si+1).
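As an illustrative Python sketch (hypothetical names, not part of the formalism), a sub-series is just the restriction to indices above i:

```python
# Sub-series S_{n>i}: the indexed members whose index exceeds i,
# still in series (reality) order. Illustrative sketch only.
def subseries(sigma, i):
    return [sigma[j] for j in sorted(sigma, reverse=True) if j > i]

sigma = {1: "stone", 2: "stick", 3: "arm", 4: "me"}
print(subseries(sigma, 1))  # ['me', 'arm', 'stick'], i.e. everything before s1
```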
Second, in the interest of not cluttering everything with brackets, we say that entailments have the lowest precedence of all logical operations, so that a statement like A ∧ B ⇒ C ∨ D is the same as a statement like (A ∧ B) ⇒ (C ∨ D).
For any series or member thereof, we can talk about its activity, in the sense of whether it is active or not. What it means to be active is determined by the series we’re considering: to be moving, to be begotten, to be defined, to be desired, or to be on are what it means to be active in each of our examples above respectively. The notion of activity enables us to distinguish genuine series from merely putative ones, and compare them within the same formalism. To see what I mean, consider the moving stone example again. Let’s say the stone is moving and there are two putative series that could be causing this: me moving it with a stick, and you kicking the stone with your foot. These would be depicted as (me → arm → stick → stone) and (you → foot → stone) respectively. Both series are putative because each would account for the movement of the stone if it were active. Nevertheless, only the one which is active actually accounts for the movement of the stone.
We encode the activity of a member with a predicate α, which is true of a member if and only if that member is active. The necessary and sufficient conditions for α will depend on the kind of series we’re considering, and sometimes we will be able to give an explicit formulation of it. Nevertheless, it is safe to say that a series is itself active only if each of its members is active, so that:
- (AS) α(S) ⇒ (∀si∈S) α(si).
As an illustrative example, consider the lights from earlier. Imagine we had three putative series for the order in which the lights went on: (green → blue → red), (red → blue → red), and (blue → red). Now assume the lights went on in the order specified by the first of these. In this case, both the first and third series would be active, but the second series would be inactive because it would have an inactive member.
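One way to model this in code, under the assumption (mine, for the sake of the sketch) that a putative light series counts as active just when it matches the tail of the actual history of switched-on lights:

```python
# Assumption for this sketch: a putative light series is active iff it
# matches the actual history of switched-on lights, member by member,
# counting back from the last light that went on.
def is_active(putative, actual):
    return len(putative) <= len(actual) and putative == actual[-len(putative):]

actual = ["green", "blue", "red"]                     # the order the lights went on
assert is_active(["green", "blue", "red"], actual)    # first series: active
assert not is_active(["red", "blue", "red"], actual)  # second: an inactive member
assert is_active(["blue", "red"], actual)             # third: active
```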
Now, we want to focus specifically on dependent series. In such series, the activity of later members depends on the activity of earlier members. More formally, si depends on sj if and only if α(sj) factors into the conditions of α(si). We’ll call the inverse of dependence acting: an earlier member acts on a later member if and only if the latter being active depends on the former being active.
Before we continue we need to make a technical note about how the series and its members are being considered. A series is always considered in terms of an order given by a particular activity (and dependence) on the members themselves. Take the example of me moving the stone with the stick with my arm. When we write this as (me → arm → stick → stone) it must be understood that we are considering me, my arm, the stick, and the stone in terms of the movement only. This series is not meant as a universal description of dependence between the members, but just dependence with respect to a particular instance of movement. So, in the present series “me → arm” just means that on account of some activity within me I am imparting movement on to my arm; it says nothing about other ways my arm may or may not depend on me.
Essentially ordered series
The particular kind of dependent series we’re interested in here is called essentially ordered. In such a series, we distinguish between two types of members. A derivative member is not active of itself, but is active only insofar as the previous member is active. Or, put another way, a derivative member continues to be active only so long as the previous member continues to act on it. A non-derivative member, by contrast, does not need another to be active but is active of itself — it has underived activity. An essentially ordered series is a dependent series because deriving activity from something is one way of depending upon it.
The moving example from earlier is an essentially ordered series: the movement originates with me as the non-derivative member, and propagates through the derivative members (my arm, the stick, and the stone), each of which moves something only insofar as it is moved by something else. Something similar can be said for the defining series and the desiring series, each of which is also essentially ordered.
Traditionally essentially ordered series have been contrasted with accidentally ordered series, in which later members depend on earlier members for becoming active but not for continuing to be active. The begetting series from earlier is accidentally ordered: me begetting my son does not depend on my father simultaneously begetting me.
Now, the fact that in essentially ordered series the dependence in view is derivativeness makes it relatively straightforward to give a necessary condition for the predicate α. Let η be a predicate which is true of a member if and only if that member is active of itself, so that η(s) holds if and only if s is a non-derivative member. Then we can explicitly give the following necessary condition of α:
- (ES) α(si) ⇒ η(si) ∨ α(Sn>i).
This formulation captures both the non-derivative and derivative cases. Non-derivative members are active of themselves and so can be active irrespective of the activity of the chain leading up to them. Derivative members, by contrast, are not active of themselves but by another, and so will only be active if the chain leading up to them is active.
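To illustrate, here is a Python sketch of activity in a finite essentially ordered series. It goes beyond (ES) by assuming the condition is also sufficient for derivative members, so treat it as a model rather than a consequence of the formalism:

```python
# Sketch of activity in a finite essentially ordered series. Assumption:
# for derivative members the condition in (ES) is also sufficient, i.e.
# a derivative member is active iff the chain before it is active.
def series_active(eta):
    """eta lists the members in reality order (first to last);
    eta[k] is True iff the k-th member is non-derivative."""
    chain_active = False              # nothing precedes the first member
    for nonderivative in eta:
        chain_active = nonderivative or chain_active
        if not chain_active:
            return False              # one inactive member makes S inactive
    return True

assert series_active([True, False, False, False])   # (me → arm → stick → stone)
assert not series_active([False, False, False])     # all derivative: inactive
```

In this model an active series must begin with a non-derivative member that originates the activity, with the derivative members merely propagating it.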
From (AS) and (ES), we see that the following holds for essentially ordered series:
- α(S) ⇒ α(s1)
- ⇒ η(s1) ∨ α(s2)
- ⇒ η(s1) ∨ η(s2) ∨ α(s3)
- ⇒ …
- ⇒ η(s1) ∨ η(s2) ∨ η(s3) ∨ ….
Given that a disjunction is true only if one of its disjuncts is true, it follows that any active essentially ordered series must include a non-derivative member:
- (EN) α(S) ⇒ (∃u∈S) η(u).
From (AS) and (EN) it follows fairly straightforwardly that in an active essentially ordered series, every derivative member is preceded by some non-derivative member:
- (ENP) α(S) ⇒ (∀s∈S) (∃u∈S) η(u) ∧ u ≤ s.
Now, because non-derivative members are active regardless of the activity of the members before them, it follows that they do not depend on any members before them. And because essentially ordered series are a species of dependent series, we can say that if a member is non-derivative, then there are no members before it. We’ll call this the non-derivative independence of essentially ordered series, and formulate it as follows:
- (ENI) η(u) ⇒ (∀s∈S) u ≤ s.
Together, (ENP) and (ENI) entail that any active essentially ordered series will have a first member which is non-derivative, which we call the primary member. We call this the primacy principle and formulate it as follows:
- (PP) α(S) ⇒ (∃p∈S) (∀s∈S) η(p) ∧ p ≤ s.
This is the central result of this post.
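As a sanity check, (PP) can be verified by brute force over small finite series in a simple model (an assumption of this sketch, since (ES) states only a necessary condition): a member is active iff it is non-derivative or the member before it in reality is active.

```python
# Brute-force check of (PP) on all finite series up to length 7, in a
# simple model (an assumption of this sketch): a member is active iff it
# is non-derivative or the member before it in reality is active.
from itertools import product

def active(eta):
    prev = False                      # nothing precedes the first member
    for nonderivative in eta:
        prev = nonderivative or prev
        if not prev:
            return False              # one inactive member makes S inactive
    return True

for n in range(1, 8):
    for eta in product([False, True], repeat=n):
        if active(eta):
            # (PP): any active series has a non-derivative first member.
            assert eta[0], "active series without a non-derivative first member"
print("(PP) holds in this model for all series up to length 7")
```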
Questions and objections
This property of essentially ordered series — that they must include a primary member — can be, and has been, leveraged in a number of ways. It is perhaps most well-known for its controversial use in first cause cosmological arguments arising from the Aristotelian tradition. We’ve seen previously how Aristotle uses it when arguing for the existence of chief goods. It is also the formal reason behind the intuition that circular definitions are vacuous. For the remainder of this post, we will address various questions and objections that might be raised, first two shorter ones and then two longer ones.
First, some will be quick to point out that what we’ve said here doesn’t prove that God exists. And this is true: the result given here is very general, and any successful argument for God’s existence would need additional premises to reach that conclusion.
Second, some might wonder if our use of infinite disjunctions is problematic. While infinitary logic can be tricky in some cases, our use of it here is fairly straightforward: all it requires is that a disjunction of falsehoods is itself false. As such, I see nothing objectionable in our use of it here.
Third, astute readers will notice what we have not shown, namely that every active essentially ordered series must be finite. This is noteworthy because it is at odds with traditional treatments of such series. For example, in his Nicomachean Ethics Aristotle argues for a chief good by denying an infinite regress of essentially ordered goods:
If, then, there is some end of the things we do, which we desire for its own sake (everything else being desired for the sake of this), and if we do not choose everything for the sake of something else (for at that rate the process would go on to infinity, so that our desire would be empty and vain), clearly this must be the good and the chief good. (NE, emphasis mine)
And in his Summa Contra Gentiles Aquinas argues for the prime mover by arguing against an infinite regress of essentially ordered movers:
In an ordinate series of movers and things moved, where namely throughout the series one is moved by the other, we must needs find that if the first mover be taken away or cease to move, none of the others will move or be moved: because the first is the cause of movement in all the others. Now if an ordinate series of movers and things moved proceed to infinity, there will be no first mover, but all will be intermediate movers as it were. Therefore it will be impossible for any of them to be moved: and thus nothing in the world will be moved. (SCG 13.14, emphasis mine)
Our result in (PP), however, is perfectly consistent with the series being infinite: all we need is for it to have a first member. This, for instance, is satisfied by the following series:
- ω+n → … → ω+3 → ω+2 → ω+1 → ω → … → 3 → 2 → 1
where ω is the first ordinal infinity and n is some finite number. The question, then, is what the present result means for the validity of the traditional treatments.
On the one hand, the key property leveraged by thinkers like Aristotle and Aquinas is not that there are finitely many members, but rather that there is a primary non-derivative member. Now, it’s possible that they conflated the question of finitude with the question of primacy, but it’s also possible that they merely used the language of infinite regress to pick out the case where there is no such primary member — something we might more accurately call a vicious infinite regress. Either way, in the worst case they were slightly mistaken about why a primary member is needed, but they were not mistaken that it is needed.
On the other hand, in the kinds of essentially ordered series Aristotle and Aquinas were considering, it is a corollary of (PP) that there are finitely many members in the series. In general, (S4) guarantees that every member in the series (except the first) has a previous member, but it does not guarantee that every member in the series (except the last) has a next member. It’s precisely because of this that there can be series with a beginning and an end, but with infinitely many members in between. However, if a series is such that every member (except the last) has a next member, then given (PP) that series will also be finite. Now, each series discussed by Aristotle and Aquinas has this second property. And so they are somewhat justified in talking as they do.
Finally, we might wonder why it is not sufficient to have a chain of infinitely many active derivative members, where each is made active by the one before it. After all, if the chain were finite we could pinpoint one derivative member not made active by a previous member. But in an infinite chain, it can be the case that each member is made active by the previous.
Now, behind this objection lies the unfortunately common confusion between a series considered as a part and a series considered as a whole. When we consider a series as a whole we’re considering it as if it is all there is, so far as the series is concerned. For a series considered as a whole to be active, then, it must contain within itself the necessary resources to account for its members being active. By contrast, for a series considered as a part to be active, it need only be part of a series which, considered as a whole, is active. To illustrate this, imagine we see a stone moving, then realize it’s being moved by a moving stick, and stop there. In this case, we’d be considering the two-member series (stick → stone), where both members happen to be active. This series is active considered as a part, but not considered as a whole, since it needs additional members (like my arm, and me) to be able to account for the motion of its members.
Given this distinction, the central question is what the conditions are for a series, considered as a whole, to be active. Naturally, the answer will depend on the kind of series we’re considering, but merely pointing to a series in which all members are active is not enough to show that such a series considered as a whole can be active — as the previous example illustrates. What we need is an account of the distinctive characteristics of such a series, and a derivation from these of what the conditions for activity are when such a series is considered as a whole.
Now, as we’ve seen, the distinctive characteristic of essentially ordered series rests on the distinction between derivative and non-derivative members. Derivative members are only conditionally active, whereas non-derivative members are unconditionally active. Derivative members propagate the activity of earlier members, whereas non-derivative members originate the activity. The result encoded in (PP) is that no members have their conditions actually met if all members are only conditionally active. Put another way, no member can propagate activity unless some member originates it. The point is not about the number of members, but about their kind. It doesn’t matter whether you have finitely or infinitely many pipes in a row, for instance: they will not propagate any water unless something originates the water. It doesn’t matter how many sticks you have: they will not move the stone unless something originates the movement.
In short, then, the mistake of the objection is that it confuses the activity of an infinite series considered as a part, with the activity of an infinite series considered as a whole. The example does not contradict the present result because the objector has given us no reason for thinking the series in question is active when considered as a whole.
This page was significantly rewritten on 26 Aug 2017. The notation for series was made easier to follow by distinguishing the sequence from the series, so that the latter could follow the order of the series in reality. I also reordered the conclusions and formulated more of them in symbolic terms.
On 15-16 Dec 2017 I reworked the introduction and order of formalizations, so that the maths section is now optional. I also changed the Greek letters used to be closer to their English counterparts (sigma for the map into the series, and alpha for the active predicate).
- Well, an efficient causal chain. The chain here is, in Scholastic nomenclature, a final causal chain.
- We leave the proof of this as an exercise to the reader.
- This objection is inspired by Paul Edwards’ famous objection to first cause arguments for God’s existence.
- From a formalization perspective, this means that our formalism of series considered as wholes can include the answer if done correctly. Indeed, this is why we introduced the active/inactive distinction so that we can “step outside” and analyze the differences.
- To be sure, there is a difference between finite and infinite cases, in that in a finite inactive series there will always be a first inactive member. This will sometimes happen in the infinite cases, as we saw above with our ω+n example, but not always. This difference, however, does not entail that infinite series can be active without non-derivative members.