The notion of a series, or chain or regress, comes up a number of times in philosophical discussions. In this post, we’re going to formalize the notion in general, and then turn our attention to *essentially ordered* series in particular.

Intuitively, a series is when we start with some member and from there we trace through the other members one at a time, possibly indefinitely. The order in which we trace or discover the members in the series can be (and often is) the inverse of their order in reality. This happens with causal chains, for instance, when we start with some effect A, which is caused by some B, which in turn is caused by some C, and so on. Here, tracing *up* the series — as we just did — involves tracing *backward* through the causes. In other words, *later* members in the tracing correspond to *earlier* causes in reality. The relevance of this point will become evident shortly.

Technically we could drop the requirement that a series has a starting point, allowing it to be infinitely extended in both directions. But for our purposes here this would just clutter the notation unnecessarily, so we’ll keep the requirement for the sake of clarity. Nevertheless, the central result of this post does not hinge on this requirement.

### Formalism of series

More formally, a series (or chain, or regress) is a structure **S** = (S, I, <, α) where:

- **S1.** S is a set of members and I is a set of indices,
- **S2.** α:I→S is a map from indices to members,
- **S3.** < is a strict total order on I,
- **S4.** For each i∈I, if the subset of all indices greater than i is non-empty, then it has a least element,
- **S5.** I has a least element, written 1.

In (S1) we separate S and I, the members and the indices, because in general the same member might appear multiple times within the series. In (S2) α connects the two and captures repetition in the series when two distinct indices map to the same member.

(S3) and (S4) tell us that the indices form a sequence. (S3) guarantees that for any distinct i and j, either i < j or i > j, and (S4) guarantees that each index except the last has an index *immediately after* it, which we can label i+1. (S5), which is technically optional, allows us to write this sequence starting with a first member as (1, 2, 3, 4, …).

Using the connector between the members and the indices, we can move from the index sequence to the indexed member *sequence*, and from there to the indexed member *series*. Why we do this in two steps will become clear shortly. For any i∈I, the indexed member s_{i} is the member of S given by α(i). Both the sequence and the series contain these indexed members, but they differ in how they order them. To avoid confusion, we will write the members of the sequence using asterisks (like s_{i}*) and leave the non-asterisked versions for the members of the series (like s_{i}). The *sequence* of indexed members is ordered just as the indices are, so that s_{i}* < s_{j}* if and only if i < j. We write this sequence with the usual mathematical notation as (s_{n}) = (s_{1}*, s_{2}*, s_{3}*, …). The *series* is indexed in the same way as the sequence, but with the order of the indexed members inverted, so that s_{i} > s_{j} if and only if i < j. We write the series using the notation (→s_{n}) = (… → s_{3} → s_{2} → s_{1}).

The upshot of all of this is that as we proceed forward through the sequence of indices, we proceed backward through the series of indexed members. This corresponds to our earlier comment about how the order we typically discover the members of a series is the inverse of the order these members have in reality. The sequence of indexed members is of technical interest to us as the mathematical foundation of our formalism, but for the purposes of further discussion, we will restrict ourselves to the series, as it more closely corresponds to common language about series in general. Thus, the structure **S** defined above primarily refers to the series of indexed members (→s_{n}) with the order given by the inverse of the order of indices.
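To make the formalism concrete, here is a minimal Python sketch of a *finite* series, where I = {1, …, n} carries the usual order (so that (S3)–(S5) hold automatically) and α is represented by a list. The class and method names are my own illustrations, not part of the formalism:

```python
from dataclasses import dataclass

@dataclass
class FiniteSeries:
    """A finite series S = (S, I, <, alpha) with I = {1, ..., n} under
    the usual order of the natural numbers. The map alpha is given by a
    list: alpha[i - 1] is the member with index i, so alpha[0] is s_1,
    the last member of the series."""
    alpha: list

    def member(self, i: int):
        """The indexed member s_i = alpha(i)."""
        return self.alpha[i - 1]

    def as_sequence(self) -> list:
        """The sequence (s_1*, s_2*, ...), ordered as the indices are."""
        return list(self.alpha)

    def as_series(self) -> str:
        """The series (... -> s_2 -> s_1): the inverse of the index order."""
        return " → ".join(str(s) for s in reversed(self.alpha))

# The moving example: stone = s_1, stick = s_2, arm = s_3, me = s_4.
moving = FiniteSeries(alpha=["stone", "stick", "arm", "me"])
print(moving.as_series())  # me → arm → stick → stone
```

Because α is a map rather than an injection, repetition comes for free: a series like (… → red → blue → blue → red) simply has two distinct indices mapping to the same member.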

I admit that all of this is quite abstract, and so before continuing, we’ll consider some examples. As mentioned before, a familiar class of examples is causal chains. These start with some final effect (s_{1}), and trace backward to its cause (s_{2}), and then to the cause of *that cause* (s_{3}), and so on. For instance, consider the causal chain of me moving my arm, which in turn moves a stick, which in turn moves a stone. We would write this series as (me → arm → stick → stone). Similarly, we could depict the series of the successive begetting of sons as (… → grandfather → father → me → son → grandson).

But causal chains are not the only kinds of series. Say we define word_{1} in terms of word_{2}, word_{2} in terms of word_{3}, and so on. This would give us a series of definitions (→word_{n}) = (… → word_{3} → word_{2} → word_{1}). And, as we saw in a previous discussion, some good_{1} might be desirable as a means to some other good_{2}, where this good_{2} is itself desirable as a means to some other good_{3}, and so on. This would give us a series of desires ordered from means to ends, (→good_{n}) = (… → good_{3} → good_{2} → good_{1}). Let’s say we took members from the moving chain above and ordered them as a desiring series: I desire to move my arm, as a means to moving the stick, as a means to moving the stone. This desiring series would then be written as (stone → stick → arm), which has the members in the opposite order from a causal chain.[1]

Each example so far is a series where earlier members *depend* on later members. Call such a series a “dependent series.” We’ll return to these below, but for now, we note that not every series is a dependent series. Imagine, for instance, we had three lights of different colors (red, blue, and green), such that only one light is on at a time, and where the light that’s on switches randomly and endlessly. The series of switched-on lights up until some time might then be something like (… → red → green → blue → blue → red).

Two final points on notation before we proceed. First, sometimes it will be helpful to talk about *sub*-series, which are taken from a series by excluding some of the later members. So, the sub-series (→s_{n})_{n>i} consists of all the indexed members of (→s_{n}) that come *before* s_{i} (remember that the order of the indices is the inverse of the order of the indexed members in the series). Unsurprisingly, we write this as (→s_{n})_{n>i} = (… → s_{i+3} → s_{i+2} → s_{i+1}). Second, in the interest of not cluttering everything with brackets, we say that entailments have the lowest precedence of all logical operations, so that a statement like A ∧ B ⇒ C ∨ D is the same as a statement like (A ∧ B) ⇒ (C ∨ D).

### Active series

For any series or member thereof, we can talk about its *activity*, in the sense of whether it is active or not. What it means to be active is determined by the series we’re considering: to be moving, to be begotten, to be defined, to be desired, or to be on are what it means to be active in each of our examples above respectively. The notion of activity enables us to distinguish genuine series from merely putative ones, and compare them within the same formalism. To see what I mean, consider the moving stone example again. Let’s say the stone is moving and there are two putative series that could be causing this: me moving it with a stick, and you kicking the stone with your foot. These would be depicted as (me → arm → stick → stone) and (you → foot → stone) respectively. Both series are putative because each would account for the movement of the stone *if it were active*. Nevertheless, only the one which *is* active actually accounts for the movement of the stone.

We encode the activity of a member with a predicate β, which is true of a member if and only if that member is active. The necessary and sufficient conditions for β will depend on the kind of series we’re considering, and sometimes we will be able to give an explicit formulation of them. Nevertheless, it is safe to say that a series is itself active only if it is non-empty and each of its members is active, so that:

- **AS.** β(**S**) ⇒ **S** ≠ ∅ ∧ (∀s_{i}∈**S**) β(s_{i}).

As an illustrative example, consider again the lights from earlier. Imagine we had three putative series describing which lights went on in which order: (green → blue → red), (red → blue → red), and (blue → red). Now assume the lights went on in the order specified by the first of these. In this case, both the first and third series are active, but the second series is inactive because it has an inactive member.
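One minimal way to model β for the lights example is to compare a putative series against the order the lights actually came on. The alignment rule below (matching a shorter putative series against the most recent switchings) is my own assumption for the sketch, not something fixed by the formalism:

```python
def light_active(putative, actual, k):
    """Beta for the k-th member of a putative series: true iff that member
    matches the light that actually came on at the corresponding step.
    Both lists run in temporal order, mirroring the arrow notation."""
    offset = len(actual) - len(putative)
    return offset >= 0 and actual[offset + k] == putative[k]

def putative_series_active(putative, actual):
    """(AS): a series is active iff non-empty and every member is active."""
    return len(putative) > 0 and all(
        light_active(putative, actual, k) for k in range(len(putative)))

actual = ["green", "blue", "red"]  # the order the lights actually went on
print(putative_series_active(["green", "blue", "red"], actual))  # True
print(putative_series_active(["red", "blue", "red"], actual))    # False
print(putative_series_active(["blue", "red"], actual))           # True
```

The second series fails because its earliest member (red) does not match what actually happened at that step, which is exactly the "inactive member" diagnosis given above.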

### Dependent series

Now, we want to focus specifically on *dependent* series. In such series, the activity of later members depends on the activity of earlier members. More formally, s_{i} depends on s_{j} if and only if β(s_{j}) factors into the conditions of β(s_{i}). We’ll call the inverse of dependence *acting*: an earlier member acts on a later member if and only if the latter being active depends on the former being active.

Before we continue we need to make a technical note about how the series and its members are being considered. A series is always considered in terms of an order given by a particular activity (and dependence) on the members themselves. Take the example of me moving the stone with the stick with my arm. When we write this as (me → arm → stick → stone) it must be understood that we are considering me, my arm, the stick, and the stone *in terms of the movement only*. This series is not meant as a universal description of dependence between the members, but just dependence with respect to a particular instance of movement. So, in the present series “me → arm” just means that on account of some activity within me I am imparting movement on to my arm; it says nothing about other ways my arm may or may not depend on me.

### Essentially ordered series

The particular kind of dependent series we’re interested in here are *essentially ordered* series, in which the kind of dependence in view is *derivation*. A member in such a series is called *derivative* if it derives its activity from the previous member in the series: it is not active of itself, but rather it is active only insofar as the previous member is active. Or, put another way, a derivative member continues to be active only so long as the previous member continues to act on it. A *non-derivative* member, by contrast, does not need another to be active but is active of itself — it has underived activity.

The moving example from earlier is an essentially ordered series: the movement originates with me as the non-derivative member, and propagates through the derivative members (my arm, the stick, and the stone), each of which moves something only insofar as it is moved by something else. Something similar can be said for the defining series and the desiring series, each of which is also essentially ordered.

Traditionally essentially ordered series have been contrasted with *accidentally ordered* series, in which a member depends on earlier members for *becoming* active but not for *continuing to be* active. The begetting series from earlier is accidentally ordered: me begetting my son does not depend on my father simultaneously begetting me.

Now, because in essentially ordered series the dependence in view is derivation, defining β is — at least partially — a fairly straightforward matter. To start, let η be a predicate which is true of a member if and only if that member is active of itself. So η(s) if and only if s is a non-derivative member. Using this we can explicitly give some necessary conditions of β:

- **ES.** β(s_{i}) ⇒ η(s_{i}) ∨ β((→s_{n})_{n>i}).

This formulation captures both the non-derivative and derivative cases. Non-derivative members are active of themselves and so can be active irrespective of the activity of the chain leading up to them. Derivative members, by contrast, are not active of themselves but by another, and so will only be active if the chain leading up to them is active.

From (ES), we see that the following holds for essentially ordered series:

- β(**S**)
- ⇒ β(s_{1})
- ⇒ η(s_{1}) ∨ β(s_{2})
- ⇒ η(s_{1}) ∨ η(s_{2}) ∨ β(s_{3})
- ⇒ …
- ⇒ η(s_{1}) ∨ η(s_{2}) ∨ η(s_{3}) ∨ ….

Given that a disjunction is true only if one of its disjuncts is true, it follows that any active essentially ordered series must include a non-derivative member:

- **EN.** β(**S**) ⇒ (∃n∈**S**) η(n).

From (AS) and (EN) it follows fairly straightforwardly that in an active essentially ordered series, every derivative member is preceded by some non-derivative member:

- **ENP.** β(**S**) ⇒ (∀s∈**S**) (∃n∈**S**) η(n) ∧ n ≤ s.

Now, because non-derivative members are active regardless of the activity of the members before them, it follows that they do not *depend* on any members before them. And because essentially ordered series are a species of dependent series, we can say that if a member is non-derivative, then there are no members before it. We’ll call this the *non-derivative independence* of essentially ordered series, and formulate it as follows:

- **ENI.** η(n) ⇒ (∀s∈**S**) n ≤ s.

Together, (ENP) and (ENI) entail that any active essentially ordered series will have a first member which is non-derivative, which we call the *primary* member. We call this the *primacy principle* and formulate it as follows:

- **PP.** β(**S**) ⇒ (∃p∈**S**) (∀s∈**S**) η(p) ∧ p ≤ s.

This is the central result of this post.
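For the finite case, the primacy principle can be illustrated with a short Python sketch. It reads (ES) as a biconditional for derivative members, an extra assumption beyond the necessary condition given above, and encodes each member simply as the truth value of η:

```python
def eo_member_active(series, k):
    """(ES), read here as a biconditional: the member at position k
    (position 0 is s_1, the last member) is active iff it is
    non-derivative (eta, encoded as True) or the chain before it
    (the members at higher positions) is active as a whole."""
    if series[k]:  # eta(s_{k+1}): active of itself
        return True
    # Derivative: active only if the chain leading up to it is active,
    # which by (AS) requires it to be non-empty with all members active.
    rest = range(k + 1, len(series))
    return len(series) > k + 1 and all(eo_member_active(series, j) for j in rest)

def eo_series_active(series):
    """(AS): active as a whole iff non-empty and every member is active."""
    return len(series) > 0 and all(
        eo_member_active(series, k) for k in range(len(series)))

# A chain of derivative members alone is never active as a whole ...
print(eo_series_active([False, False, False]))        # False
# ... but the same chain headed by a non-derivative member is.
print(eo_series_active([False, False, False, True]))  # True
```

Note that on an infinite all-derivative chain this recursion would never bottom out, which mirrors the discussion of series considered as wholes in the objections below.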

### Questions and objections

This property of essentially ordered series — that they must include a primary member — can be and has been leveraged in a number of ways. It is perhaps best known for its controversial usage in first cause cosmological arguments arising from the Aristotelian tradition. We’ve seen previously how Aristotle uses it when arguing for the existence of chief goods. It is also the formal reason behind the intuition that circular definitions are vacuous. For the remainder of this post, we will address various questions and objections that might be raised, first two shorter ones and then two longer ones.

**First**, some will be quick to point out that what we’ve said here doesn’t prove that God exists. And this is true: the result given here is very general, and any successful argument for God’s existence would need additional premises to reach that conclusion.

**Second**, some might wonder if our use of infinite disjunctions is problematic. While infinitary logic can be tricky in some cases, our use of it here is fairly straightforward: all it requires is that a disjunction of falsehoods is itself false. As such, I see nothing objectionable in our use of it here.

**Third**, astute readers will notice that there is something we have *not* shown, namely that every active essentially ordered series must be *finite*. This is noteworthy because it is at odds with traditional treatments of such series. For example, in his *Nicomachean Ethics* Aristotle argues for a chief good by denying an infinite regress of essentially ordered goods:

If, then, there is some end of the things we do, which we desire for its own sake (everything else being desired for the sake of this), and if we do not choose everything for the sake of something else (*for at that rate the process would go on to infinity, so that our desire would be empty and vain*), clearly this must be the good and the chief good. (NE, emphasis mine)

And in his *Summa Contra Gentiles* Aquinas argues for the prime mover by arguing against an infinite regress of essentially ordered movers:

In an ordinate series of movers and things moved, where namely throughout the series one is moved by the other, we must needs find that if the first mover be taken away or cease to move, none of the others will move or be moved: because the first is the cause of movement in all the others. Now if an ordinate series of movers and things moved *proceed to infinity*, there will be no first mover, but all will be intermediate movers as it were. Therefore it will be impossible for any of them to be moved: and thus nothing in the world will be moved. (SCG13.14, emphasis mine)

Our result in (PP), however, is perfectly consistent with the series being infinite: all we need is for it to have a first member. This, for instance, is satisfied by the following series:

- ω+n → … → ω+3 → ω+2 → ω+1 → ω → … → 3 → 2 → 1

where ω is the first infinite ordinal and n is some finite number. The question, then, is what the present result entails about the validity of the traditional treatments.

On the one hand, the key property leveraged by thinkers like Aristotle and Aquinas is not that there are finitely many members, but rather that there is a *primary non-derivative* member. Now it’s possible that they conflated the question of finitude with the question of primacy, but it’s also possible that they merely used the language of infinite regress to pick out the case where there is no such primary member — something we might more accurately call a *vicious* infinite regress. Either way, in the worst case they were slightly mistaken about *why* a primary member is needed, but they were not mistaken *that* it is needed.

On the other hand, in the kinds of essentially ordered series Aristotle and Aquinas were considering, it is a corollary of (PP) that there are finitely many members in the series. In general, (S4) guarantees that every member in the series (except the first) has a *previous member*, but it does not guarantee that every member in the series (except the last) has a *next member*. It’s precisely because of this that there can be series with a beginning and an end, but with infinitely many members in between. However, if a series is such that every member (except the last) has a next member, then given (PP) that series will also be finite.[2] Now, each series discussed by Aristotle and Aquinas has this second property. And so they are somewhat justified in talking as they do.

**Finally**, we might wonder why it is not sufficient to have a chain of infinitely many active derivative members, where each is made active by the one before it.[3] After all, if the chain were finite we could pinpoint one derivative member not made active by a previous member. But in an *infinite* chain, it can be the case that each member is made active by the previous.

Now, behind this objection lies the unfortunately common confusion between a series considered *as a part* and a series considered *as a whole*. When we consider a series as a whole we’re considering it as if it is all there is, so far as the series is concerned. For a series considered as a whole to be active, then, it must contain within itself the necessary resources to account for its members being active. By contrast, for a series considered as a part to be active, it need only be part of a series which, considered as a whole, is active. To illustrate this, imagine we see a stone moving, then realize it’s being moved by a moving stick, and stop there. In this case, we’d be considering the two-member series (stick → stone), where both members happen to be active. The series is active, but not when considered as a whole, since it needs additional members (like my arm, and me) to be able to account for the motion of its members.

Given this distinction, the central question is what the conditions are for a series, considered as a whole, to be active.[4] Naturally, the answer will depend on the kind of series we’re considering, but merely pointing to a series in which all members are active is not enough to show that such a series *considered as a whole* can be active — as the previous example illustrates. What we need is an account of the distinctive characteristics of such a series, and a derivation from these of what the conditions for activity are when such a series is considered as a whole.

Now, as we’ve seen, the distinctive characteristic of essentially ordered series rests on the distinction between derivative and non-derivative members. Derivative members are only conditionally active, whereas non-derivative members are unconditionally active. Derivative members propagate the activity of earlier members, whereas non-derivative members originate the activity. The result encoded in (PP) is that no members have their conditions actually met if all members are only conditionally active. Put another way, no member can propagate without some member originating. The point is not about the *number* of members, but about their *kind*. It doesn’t matter whether you have finitely or infinitely many pipes in a row, for instance: they will not propagate any water unless something originates the water. It doesn’t matter how many sticks you have, they will not move the stone unless something originates the movement.[5]

In short, then, the mistake of the objection is that it confuses the activity of an infinite series considered as a part, with the activity of an infinite series considered as a whole. The example does not contradict the present result because the objector has given us no reason for thinking the series in question is active when considered as a whole.

### Update

This page was significantly rewritten on 26 Aug 2017. The notation for series was made easier to follow, by distinguishing the sequence from the series so that the latter could follow the order of the series in reality. I also reordered the conclusions and formulated more in symbolic terms.

### Notes

1. Well, an *efficient* causal chain. The chain here is, in Scholastic nomenclature, a *final* causal chain.
2. We leave the proof of this as an exercise to the reader.
3. This objection is inspired by Paul Edwards’ famous objection to first cause arguments for God’s existence.
4. From a formalization perspective, this means that our formalism of series considered as wholes can include the answer if done correctly. Indeed, this is why we introduced the active/non-active distinction, so that we can “step outside” and analyze the differences.
5. To be sure, there *is* a difference between finite and infinite cases, in that in a finite non-active series there will always be a first non-active member. This will sometimes happen in the infinite cases, as we saw above with our ω+n example, but not always. This difference, however, does not entail that infinite series can be active without non-derivative members.