Nature of Causality
Juleon M. Schins
Quantum indeterminacy, specifically as manifested in the repeated experimental transgressions of the 1964 Bell inequalities, has definitively swept away the notion that events may serve as causes. Apparently, medieval philosophy had a sounder concept of causality than modern and contemporary philosophy. This revives interest in the causal model of Thomas Aquinas, as it contains all the necessary ingredients for a consistent model of causality; most importantly, such a model must be able to account for both ordinary (macroscopic) experience and scientific (microscopic) observation. However, Thomas’ model contains two inconsistent notions (the causal impediment and prime matter) and one major ambiguity (the multiplicity of causality). In order to safeguard Thomas’ monumental theological legacy, it is crucial that these issues be duly addressed.
Matter is primarily that which obeys ‘physical natural law’, or ‘the universal law of physics’. Although the precise formulation of that law is presently unknown, physicists know enough about it to sustain a meaningful academic discourse. First of all, this universal law of physics does not determine particle trajectories, in the way classical theories did. Instead, it determines the (deterministic) evolution of a wave function. In its original version, as contemplated by Erwin Schrödinger, that wave function depends on time, and on the positions of all particles composing the physical system. Instead of detailing the future behavior of the system, this wave function only determines probabilities of its future behavior. German physicist Werner Heisenberg was the first, in the history of science, to formulate the indeterminacy intrinsically associated with the concept of a wave function in quantitative terms: i.e., as a mathematical inequality involving the statistical spreads of two non-commuting operators. Even though Heisenberg’s inequality has a straightforward interpretation, the confusion sown by Niels Bohr has long prevented the larger audience from properly understanding it.
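The quantitative content of Heisenberg’s inequality can be illustrated numerically. The following Python sketch is my illustration, not part of the original argument: it builds a discretized Gaussian wave packet, computes the spreads of position and momentum from the corresponding probability densities, and checks that their product respects the bound Δx·Δp ≥ ℏ/2. All grid parameters and the choice of units (ℏ = 1) are illustrative assumptions.

```python
import numpy as np

# Units with hbar = 1; grid parameters are illustrative choices only.
hbar = 1.0
x = np.linspace(-50, 50, 8192)
dx = x[1] - x[0]
sigma = 1.0  # position spread of the packet (arbitrary choice)

# Gaussian wave packet, normalized so that sum(|psi|^2) * dx = 1
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Position uncertainty from the probability density |psi|^2
prob = np.abs(psi)**2
mean_x = np.sum(x * prob) * dx
delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob) * dx)

# Momentum-space wave function via FFT; momentum grid from FFT frequencies
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi)
p = np.fft.fftshift(np.fft.fftfreq(len(x), d=dx)) * 2 * np.pi * hbar
dp = p[1] - p[0]
prob_p = np.abs(phi)**2
prob_p /= np.sum(prob_p) * dp  # renormalize the discrete momentum density
mean_p = np.sum(p * prob_p) * dp
delta_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

print(delta_x * delta_p)  # ≈ 0.5: Gaussian packets saturate the bound hbar/2
```

The product of the two spreads cannot be made smaller than ℏ/2 by any choice of wave function; the Gaussian packet is the case that saturates the bound.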
Second, the universal law of physics discloses an equally unexpected characteristic of all matter, which is its spatiotemporal structure as described by Einstein’s general relativity. Both these features (the probabilistic nature of the wave function, and its spatiotemporal structure) were truly discovered, as a result of an attempt to capture the quantitative aspects of scientific observations. Hence, it is not reasonable today, as it might have been in the days of Immanuel Kant, to believe that there is anything even remotely intuitive or aprioristic about that universal law of physics.
As the general-relativistic aspect of matter does not pose a challenge to the concept of causality, while quantum indeterminacy does, we will focus on the latter aspect only. In 1964, Northern Irish physicist John Stewart Bell found a way to generalize Heisenberg’s inequality, which applies only to systems obeying the laws of quantum mechanics, so as to bear on all physical systems, regardless of the dynamical laws underlying them. The only supposition underlying Bell’s inequalities is that particle position and momentum are somehow determined at all times by so-called ‘hidden variables’. These variables are called ‘hidden’ exclusively for the purpose of allowing all kinds of underlying dynamical laws: that way, the conclusions to be drawn from an experiment apply to all possible physical laws, independently of which or how many variables are needed (provided they are finite in number) to formulate them. Since a multitude of quantum experiments have demonstrated that nature transgresses Bell’s inequalities, covering all reasonable loopholes one by one, it is established beyond any reasonable doubt that no prior physical state of affairs can ever provide sufficient information to predict a future physical event.
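To make the transgression concrete, here is a minimal Python sketch (my illustration, using the standard CHSH form of Bell’s inequality, which the text does not spell out): any hidden-variable model of the kind Bell considered bounds the CHSH combination of correlations by 2, whereas the quantum-mechanical singlet correlation E(a, b) = −cos(a − b) reaches 2√2 at suitable analyzer angles. This arithmetic gap, between 2 and 2√2, is precisely what the experiments measure.

```python
import math

def E(a, b):
    """Quantum-mechanical singlet correlation for analyzer angles a, b (radians)."""
    return -math.cos(a - b)

# Standard CHSH angle settings (in radians)
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

# CHSH combination: bounded by 2 in any local hidden-variable model
S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

print(abs(S))       # 2*sqrt(2) ≈ 2.828, exceeding the hidden-variable bound 2
print(abs(S) > 2)   # True: the inequality is transgressed
```

No assignment of predetermined outcomes to the four settings can push the combination past 2, which is why the observed value of 2√2 rules out every finite hidden-variable account at once.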
Staunch defenders of determinism will always exist. The fact is that they have no clue at all how to predict even a single-particle quantum event, let alone the behavior of a complex multi-particle system. And as long as they need to invoke a mysterious conspiracy of particle behavior as an explanation of quantum correlation (the loopholes in Bell experiments), determinists remain utterly ridiculous, both in philosophy and in physics.
Thank God, Thomist authors never fell into the trap of determinism, nor into its philosophical opposite, that of chaos. Nevertheless, they hold a broad variety of opinions. Toward one side of the spectrum is Andreas Van Melsen, who flatly rejects any connection between quantum indeterminacy and free will. Richard Phillips and Leo Elders do not reject such a connection, but they do not discuss it either. Among all Thomist writers, the view of Jean-Marie Aubert on the mechanics of a universe devoid of human beings comes closest to determinism. At the other end of the spectrum are authors like Filippo Selvaggi. He acknowledges the revolutionary aspect of indeterminacy in quantum mechanics, and holds that in quantum mechanics “the concept of probability is at the very heart of matter, causing the essential indeterminacy in all physical future”. Although Selvaggi realizes that fundamental particles are not the authors of a free choice, he does not identify the causal origin of the choice, clearly expressed in the behavior of any fundamental particle. Consequently, he fails to relate the cause deciding the behavior of those particular agglomerates of fundamental particles which we commonly call human bodies to the human soul. A similar criticism holds for Innocenzo D’Arenzano and Charles De Koninck. Elizabeth Anscombe postulates the existence of ‘non-necessitating causes’, that is, of causes which ‘can fail to produce their effect, without the intervention of anything to frustrate it’, invoking the magic of quantum indeterminacy to found such a curious notion. Mariano Artigas stresses the causality inherent in contingent processes, as well as the ontological nature of contingency; he even mentions quantum non-locality, but in the end he provides a deistic account of the quantum measurement problem, philosophically as obscure as Bohr’s.
William Wallace acknowledges the importance of quantum indeterminacy, and comments on scientists’ proposals of causal theories, although he refrains from formulating one himself.
For the notion of causality to make any sense at all, we postulate the following four fundamental properties:
- causal necessity: whenever the cause exists, then the effect does so, necessarily, universally, and unconditionally;
- causal requisition: every material event has a cause; and more generally, every finite entity, whether material or spiritual, has a cause;
- causal intersubstantiality: a human being is the true author of his or her free actuation, whence a genuine source of causality;
- causal proportionality: the level of universality of the effect is equal to that of the cause.
The slightest indulgence with respect to these tenets completely destroys the notion of causality. All four aspects of causality will be treated in due order. We start out with causal necessity.
Aquinas had a clear intuition of it, as he consistently defined a cause as ‘that upon which an effect follows with necessity’. Interestingly, Petrus Ispanus used the same definition, though omitting the adverb ‘necessarily’, probably because he thought the adverb to be implied logically. In spite of his fundamental intuition, Thomas indulged in exceptions, and plenty of them. He did so for the sake of confuting determinism, which he rightfully considered a philosophical aberration. Thomas’ argument was inconsistent, though. In support of his view, Thomas quotes Avicenna: ‘if a true, sufficient cause cannot fail, and if everything that happens must have a true cause, then everything happens necessarily’. From the context, it is obvious that in this quote the adverb ‘necessarily’ is meant in the sense of ‘deterministically’. Arguably, Aristotle favored causal necessity, too, although Thomas reads him differently. The problems associated with Thomas’ interpretation of Aristotle have been pointed out, among many other Thomist authors, by Harm Goris and Stephen Brock.
However praiseworthy their intentions, the argument of Avicenna and Thomas is inconclusive. In order to pinpoint their error, we need to define the notion of determinism. From the physical perspective, determinism implies that every physical event follows logically from two principles: a past state of affairs, and classical laws of nature. From the philosophical perspective, determinism results from two philosophical ingredients: causal necessity, and the necessary occurrence of causes. In their arguments, Thomas and Avicenna only mention causal necessity, but they neglect to mention the necessary occurrence of causes. Since nothing in the definition of a cause requires that the cause itself occur necessarily, Avicenna’s conclusion does not follow.
Now why would such a luminary as Thomas Aquinas overlook such an elementary distinction? As we shall have ample opportunity to see, there are many passages in which Thomas seems to consider the operation of causes as though they were forces. This confusion is not typical of Thomas only: it is common to all philosophers of all times.
Since Newton we know that forces are quantitatively additive. Impeding forces are reactive forces, equal to applied ones, and operating in a spatially opposed direction. Whoever wishes to move a piece of furniture by shoving it across the floor has to overcome static friction. The piece of furniture does not move at all, unless one applies a force exceeding the static threshold. While the physical notion of force is not at all impaired by the notion of an impediment (‘impeding’ force), such is decidedly not the case for the philosophical notion of causality, whence causes and forces are essentially different concepts, describing essentially different operations.
The inconsistency of the notion of a causal impediment is readily understood in the context of Thomas’ causal model. As it postulates four fundamental causes, the impeding cause must be one of these four.
- Since the material cause (in the unqualified sense, to wit, prime matter) is numerically one, there cannot be such a thing as an impeding material cause.
- Formal causes do admit of opposites. Whenever two incompatible formal causes compete, the decision for prevalence must issue from another kind of cause, which has exactly that property (of deciding the prevalence between two possible formal causes). Having that property does not belong to the notion of a formal cause, whence the deciding cause cannot be formal.
- Thomas proposes that efficient causality causes the causality of both material and formal causes, so the deciding cause could be efficient. But what happens in the case of two incompatible and concurrent efficient causes? The deciding cause must be of even higher rank, for if one takes the deciding cause to be of lower rank (e.g. formal), circularity ensues inevitably.
- This leaves final causality as the only possible deciding principle. And again, what happens in the case of two competing and concurrent final causes? Consistency requires the postulation of an even higher-rank principle. Since there is no higher-rank cause by definition, the notion of an impeding cause is doomed to inconsistency: the inconsistency of eternal recurrence.
Thomas was well aware of such inconsistency in his explanation of motion (the First Way), whence it is surprising that he did not recognize it here.
The second inconsistency in Thomas’ causal model concerns his notion of prime matter. Thomas defines a principle quite generally as that from which change begins; he defines change (or motion) as the promotion of something from potency to act. Thomas is adamant in stating that such a promotion can only be effectuated by an entity in act. Hence, Thomas is inconsistent when stating that pure potentiality can act as a material cause or principle, for a principle is such only insofar as it is in act. In order to avoid the absurdity that prime matter is the immediate principle of individuation, Thomas needs an extensive discussion of the concepts materia (de)signata, demonstrata, and (de)terminata. John F. Wippel points out the many difficulties associated with Thomas’ discussion. Thomas seems to reintroduce act through the back door, after having bounced it out pompously through the main gate.
A third problem with Thomas’ model of causality is its multiplicity. Thomas claims causality to be fourfold, twofold and simple, in different contexts: fourfold in his treatise of causality considered as the totality of answers to why-questions; twofold in his commentary on Aristotle’s hylemorphism; and simple with respect to the cause-conferring origin of all causes. According to Thomas, final causality is the cause of the causality of the efficient cause, and efficient causality is the cause of the causality of both formal and material causes. His formulation is so general that one cannot escape the impression that final causality is of a more fundamental kind than the three others. At the same time, Thomas’ account of the hylemorphic composition is so general that it suggests that material and formal causality are the fundamental kinds of causality. The same level of generality applies to Thomas’ discussion of the four causes. What is clearly lacking is a discussion of the hierarchy of these three levels of generality.
The above three problems (two inconsistencies and one ambiguity) can be solved by simplifying Thomas’ causal model. The simplification consists in postulating causality to be fundamentally dual in nature: neither simple, nor fourfold. Since the two causes are inspired by Aristotle’s hylemorphism and by quantum mechanics, I call them q-hylemorphic causes. From hylemorphism, the q-material and q-formal causes inherit the property of relating to one another like potency to act, in exactly the manner described by Aristotle on several occasions.
In order to define the q-hylemorphic causes, we consider a physical experiment. It is characterized by two basic ingredients: the so-called ‘initial state’, and the observed event. A causal account is such only when it gives the reason for the observed event in terms of a prior physical state of affairs (the ‘initial state’). One can thus inquire which of the two detectors fires (that in transmission or that in reflection), given some prior state of affairs, to wit, that a single photon was issued from the source, heading for a semi-reflective surface.
This context allows for the following definitions:
- The q-material cause of the single quantum event occurring at time t>0 is the propagated wave function Ψ(t), such that Ψ(0) describes the initial state at some earlier time t = 0.
- The q-formal cause is the creative quantum choice, which realizes a single reality from among a finite number of options contained in Ψ(t).
The latter cause produces what is usually and most confusingly called the ‘collapse of the wave function’ in quantum-mechanical literature. These two definitions have to be complemented with the four above-mentioned essential properties of causality (necessity, requisition, intersubstantiality, and proportionality).
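For readers who prefer a computational picture of these two definitions, the following Python sketch is my illustration of the beam-splitter example above; the 50/50 amplitudes and all names are assumptions. It separates the two roles: the propagated wave function fixes the outcome statistics (the role the paper assigns to the q-material cause), while a pseudo-random draw stands in for the selection of the single outcome (the role of the q-formal cause). The random number generator is a computational placeholder only, with no claim about the nature of the quantum choice.

```python
import random

# A 50/50 semi-reflective surface: equal amplitudes toward the two detectors.
amp_transmit = 1 / 2**0.5
amp_reflect = 1 / 2**0.5
p_transmit = abs(amp_transmit)**2  # Born-rule probability = 0.5

random.seed(1)  # fixed seed so the statistical run is reproducible
trials = 100_000
clicks_transmit = sum(random.random() < p_transmit for _ in range(trials))

# Identical experiments yield identical statistics (the predictable part)...
frequency = clicks_transmit / trials
print(round(frequency, 2))  # ≈ 0.5

# ...but no record of past outcomes predicts the next single event
# (here mimicked by a fresh draw, standing in for the quantum choice).
next_event = "transmit" if random.random() < p_transmit else "reflect"
print(next_event in ("transmit", "reflect"))  # True
```

The two print statements correspond to the two observations the model must explain: the reproducible statistical distribution, and the unpredictable single event.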
The rather technical definition of the q-material cause will obviously startle many a philosopher. Is it really necessary to define it in terms of a wave function? And does one really need the time-dependence, and the reference to a prior state of affairs? To be honest, I do not know, and I wish it were not necessary. However, I can think of no other way to explain q-hylemorphism. So please bear with me. Everything one should know about this wave function is that it is able to describe a state of affairs sufficiently, in the sense that it provides all predictable information, which just happens to be the relevant information, without providing all knowable information. That is to say, even if a particle has both momentum and position, the wave function does not provide this information: the better it describes a particle’s position, the worse it describes its momentum.
This formulation shows that the elusive paradox of Schrödinger’s cat, also known as the measurement paradox, results from a mere confusion of orders: there is no such philosophical entity as ‘the observer’, with the mysterious power of causing a wave-function collapse. It is too ridiculous for words that the behavior of a detector should depend on whether a cat observes it (the strong version of the Copenhagen collapse confusion). It is no less ridiculous to believe in a mysterious dividing line between macroscopic reality and microscopic reality, supposedly granting an observer-independent definition of ‘the observation’, as if nature would behave differently because some physicists built single-particle detectors (the weak version of the Copenhagen collapse confusion). As a matter of fact, the whole concept of an experimental set-up does not appear in the definition of the q-hylemorphic causes: the only ingredients are the physical event (whether in the case of ordinary experience, laboratory experience, or in the absence of any observation) and a prior state of affairs. Earlier attempts to solve the paradox have failed miserably: not only the Copenhagen interpretation, but also the many-worlds theories, and to a lesser extent decoherence theory: the latter only insofar as it claims to provide a solution to the paradox (decoherence is otherwise a sound quantum-mechanical concept).
The root of the solution to the paradox is the non-material nature of causes. The reason that so many generations of the most gifted scientific geniuses were unable to see this is that they instinctively, and possibly inadvertently, embrace the notion that only physical events qualify as sources of causality. This is a consequence of philosophical materialism. Such a stance is inherently contradictory, as the very definition of matter cannot be part of matter.
In q-hylemorphism, neither the q-material cause nor the q-formal cause is an event: the former is an intelligent word spoken by an intelligent mind, the latter an act of volition produced by a free mind. Both of them are necessary, because one needs to explain, on the one hand, the observation that identical experiments yield identical statistical outcomes, and on the other, the observation that no two experiments —however identical— produce the same single event. Hence, one needs at least two causes. Any prior state of affairs represents but a single cause, and is therefore essentially insufficient. In the causal model presented here, the q-material cause accounts for the quantitative predictability (determinism) of the observed statistical distribution, and the q-formal cause for the occurrence of the single event.
Francisco Suárez claimed that if no free agent intervenes, there can be no contingency, but only necessity, in the sense of determinism. However truthful his claim, the crucial insight that escaped Suárez (he wrote four centuries before the discovery of quantum mechanics) is that even in a universe devoid of free human beings, every single quantum event presupposes a free choice, whence the intervention of a free agent.
Let us now proceed to the status of the Aristotelian efficient and final causes, which I claim to be sensible notions, though not proper causes, because they do not satisfy the four basic causal criteria.
- Efficient causality reflects the acting subject’s knowledge of q-material causality, which incorporates all physical necessity proper to natural processes. It is not identical to q-material causality, because the acting subject does not have, in general, perfect knowledge of the propagated wave function. The better the acting subject knows the properties of the propagated wave function, the more closely efficient causality approaches q-material causality.
- Final causality reflects the acting subject’s knowledge of q-formal causality, which is issued by a free and intelligent mind. Final causality is not identical to q-formal causality, because the acting subject does not have, in general, perfect knowledge of the quantum choice. The better the acting subject knows the choices of all intelligent minds involved in the realization of an event, the more closely final causality approaches q-formal causality.
Note that the view on causality presented here applies to macroscopic reality in exactly the same way as it does to microscopic reality. In the absence of human beings, a single intelligent mind suffices to provide the choice cause for any quantum event, independently of the size of the physical system. This is not so for a physical system involving human beings, like the fortuitous encounter of two friends at the market. The q-formal cause of this event depends not only on the above-mentioned intelligent mind, but also on the minds of our two friends. In line with Catholic tradition, I propose (i) that God alone provides the full and single quantum choice, not only by selecting it, but also by causing the being (esse) of the selected event; and (ii) that God delegates all choice-determining power to created intelligent beings, according to the hierarchy of their perfection. The only exception to God’s delegation of choice power concerns, obviously, the human body of Christ, and the Eucharist, the theological treatment of which is beyond the scope of this contribution.
The question arises to what extent the occasionalism of Al-Ghazali, Nicolas Malebranche and Louis de La Forge, the even more preposterous pantheism of Baruch Spinoza, or the pre-established harmony of Gottfried Leibniz, philosophically differ from Aristotelian-Thomist contingency. The essential difference is that such positions reject intersubstantial causation. Why would they do so? After all, ordinary experience undeniably tells us that everybody seems to be the causative author of his or her free actuation.
According to Nicholas Jolley, Leibniz rejected intersubstantial causation because he believed that it necessarily implied ‘physical influx’, as Leibniz called the transgression of the physical law of conservation of momentum. Quantum mechanics proved him wrong: momentum is conserved both when Andres Segovia strums his guitar and when he does not. A quantum choice never messes with natural law, because a quantum choice chooses only from among those options which are allowed by natural law. Hence, in order to give a causal account of Andres Segovia strumming his guitar, it is not sufficient to consider the initial condition (Andres Segovia sitting on a chair, ready to start strumming his guitar); one needs the additional choice cause, too: Andres’ free volition to actually do so.
Pre-established harmony might have described reality, had there been only a single instance in our universe in which Andres Segovia seemed to have caused the vibration of his guitar strings. Daily experience tells us something different, though: not only does Andres seem to cause the vibration of his guitar strings whenever he so wishes, but the same seems to apply to all human beings, and in all their freely willed actuations. Hence, pre-established harmony is in needless conflict with not only causal intersubstantiality, but also causal proportionality.
The existence of substances does not follow from the discussion of a single event, on which we based the definitions of the q-hylemorphic causes. Hence, q-hylemorphism acknowledges Thomas’ tenet that the substantial form plays a double role: it not only determines the substance which exists individually, but it is at the same time the source of the active power and tendency of that substance: a power which, in Leibniz’ terminology, is continual and non-draining.
In the remainder of this paper, we address the question to what extent q-hylemorphism supports Thomas’ causal model.
It accommodates Thomas’ distinction between per se and per accidens causes: in q-hylemorphism, the only per se causes are the two fundamental ones, to wit, the q-material and q-formal causes. According to Thomas, a per accidens cause yields an effect which is not within the power of the cause to produce. Hence, a per accidens cause is not a proper cause, as it does not satisfy the condition of causal proportionality. A typical example of a per accidens cause is the chance cause, which Aquinas calls ut in paucioribus. Our two friends’ deciding independently to go to the market is the per accidens and ut in paucioribus cause of their fortuitous meeting there. There is no problem with Thomas’ formulation in q-hylemorphism, which accounts for chance by considering that human choices are but part of the q-formal cause, not the whole of it. Interestingly, in his account of chance events, Thomas stresses that divine causality never fails, and therefore intends the very coincidence. Thomas is therefore compelled to distinguish divine causation from all other kinds of causation. To this end, he posits that divine causation is intellectual, as opposed to physical. This physical cause is obviously of the efficient kind. Here again, Thomas’ distinction hints at a confusion of the physical concept of force with the philosophical concept of cause.
In much the same way as chance causes produce their effect seldom (in paucioribus), Aquinas considers that fallible causes do so often (in pluribus). He therefore calls fallible causes ut in pluribus. Stephen Brock criticizes Thomas’ position in the following words:
Still, what about the accidental factor that accounts for the failure, the impediment? Does it not tend to make the agent fail? How else could it explain the failure? And if it tends to make the agent fail, is there not after all a per se cause of the failure? There is, Thomas says; but he argues that sometimes this cause itself only arises from a coincidence. Prior to the coincidence there was nothing at all tending to make the agent fail, no per se cause of its failure.
Although Brock does not draw the conclusion that Thomas’ argument is circular, he provides all the necessary elements for doing so. This does not render Thomas’ theory useless: it only demonstrates that Thomas’ causes are derived causes (secundum quid), rather than proper causes (per se).
Thomas’ distinction between causes ad unum and ad utrumlibet is directly applicable to q-hylemorphism. Thomas argues that no cause determined ad unum (as are natural or physical causes) can be a per se cause of every event. In q-hylemorphism, the q-material cause determines ad utrumlibet, though within the bounds set by natural law (i.e. there is no pure potentiality), while the q-formal cause determines ad unum.
Finally, in the context of the human choice, Thomas distinguishes between compelling and weak causality. He teaches that a particular good can move the free will. Whenever that happens, the will is not compelled to move, as if the good irresistibly overpowered human free will. Thomas makes this point repeatedly, because his causal treatment so strongly suggests the opposite: that the good of the desired object sufficiently drives the free will. In q-hylemorphism it is evident that such is not the case, because causality always requires the actuation of a free mind: apples or pears, however desirable, only provide for q-material causality.
The above treatment of Thomas’ causal distinctions is meant to show that q-hylemorphism is not only compatible with Thomas’ views, but that it always confirms his conclusions. The reason that Thomas has to hammer on these distinctions over and again, is that of his four causes, two (final and efficient) are not proper, but derived causes, because they reflect the limited knowledge of the human subject; and one (material) is fundamentally ill-conceived: pure potentiality cannot be a principle of anything at all, let alone of a material being.
The root of Thomas’ misunderstanding seems to be twofold: (i) a confusion of the philosophical concept of cause with the physical concept of force, as ubiquitous in his time as it is today; and (ii) the unavailability, in his time, of the notion of physical lawfulness, not as a mere scientifically interesting characteristic of material processes, but as the mathematically phrased, philosophical definition of the nature of matter. The fundamentally dual nature of causality, as proposed in q-hylemorphism, solves all these problems, and allows for a consistent understanding of Thomist philosophy and theology.
List of References
 Erwin Schrödinger (1926) Physical Review 28: 1049–1070
Schrödinger’s equation considers a system with a definite number of particles. Field-theoretical generalizations relax this condition by treating particles as excited states (also called quanta) of their underlying quantum fields, thereby expanding the concept of indeterminacy (from mere particle position to particle number and species, as in radioactive decay). However, the philosophical challenges in quantum field theory do not differ essentially from those in quantum mechanics. Throughout this paper, the conservation of particle number is assumed, in order not to burden the reader with terminology taken from quantum field theory.
 Werner Heisenberg (1927) Zeitschrift für Physik 43, 172
 Heisenberg’s inequality, Δx · Δp ≥ ℏ/2:
it sets a quantitative limit on the precision with which both momentum and position of a single particle can be predicted, not measured. In a typical experimental set-up, the accuracy of observation is much better than Heisenberg’s limit, whence the mentioned quantities can be measured with any accuracy one is willing to pay for. Consequently, quantum experiments were able to confirm Heisenberg’s inequality beyond all reasonable doubt, thereby proving the impossibility of predicting the single quantum event. This impossibility claim applies to all systems obeying the laws of quantum mechanics, and is therefore far from universal. John Bell found a way to make the claim universally valid, in the sense that his claim applies to all physical systems, independently of the quantitative dynamics underlying them. Obviously, in order to ascertain Heisenberg’s claim, one has to repeatedly measure momentum and position of a particle using a particular experimental set-up. It is here that Bohr’s obscure philosophy created widespread confusion: the only reason he was taken seriously at all is that he successfully confuted Einstein’s criticism of quantum mechanics. The genius at work was Einstein, however, and not Bohr. Many scientists could have confuted Einstein’s criticism, but only exceptionally few would have thought of it. The worst confusion generated by Bohr’s philosophy (which is to be distinguished from his successful confutation of Einstein) is the tenet that momentum and position of a single particle cannot be observed in a single event, which is patently false. Another erroneous tenet is that any experimental observation fundamentally perturbs the particle’s behavior, to the point of making prediction impossible. A third category of philosophical nonsense is represented by many-worlds theories.
 Niels Bohr (1949) The Bohr-Einstein Dialogue
 Immanuel Kant, Kritik der Reinen Vernunft, A 79, B 129
 John S. Bell (1964) Physics 1: 195–200
 A ‘loophole’:
this is technical jargon for ‘a way to avoid the Bell conclusion’. When an experiment claims to be ‘loophole-free’, what is usually meant is that it closes two specific loopholes: the ‘fair-sampling loophole’ (which bears on the partial detection efficiency of detectors, such that the observed correlation could be due to a conspiracy of undetected events) and the ‘locality loophole’ (which allows for a message to pass from the photon source to the detector and miraculously arrange for the detector settings to produce the correlation). The former was closed by using detectors of high enough efficiency, and the latter by deciding the detector settings with sufficient delay to compel any such mysterious message to travel faster than the speed of light. But even then, determinists adduce that the correlations might be preprogrammed from the beginning of the universe. They do not realize that they are replaying Leibniz’ disc of pre-established harmony, in not providing a causal account of the observation, but only a mechanism in which apparent causality is the consequence of conspiracy. To counter this kind of stubbornness, not even Anton Zeilinger’s cosmic experiment suffices: see Johannes Handsteiner et al. (2017) Cosmic Bell Test: Measurement Settings from Milky Way Stars, Phys. Rev. Lett. 118, 060401.
 The best known determinist is Nobel laureate Gerard ‘t Hooft.
He explains his views in a 2016 book entitled ‘The Cellular Automaton Interpretation of Quantum Mechanics’. Even though his denial of human free choice is quixotic, there is much to learn from his reasons for doing so. In my view, ‘t Hooft does not deny free will because he does not believe in its possibility, but because he believes that free will is necessarily associated with Copenhagen’s deficient view on reality, according to which it is forbidden to ask what actually happens: see ‘t Hooft’s comments on counterfactual observations on page 41. Note a characteristic error of ‘t Hooft on that same page: he states that different polarizations cannot be measured because of non-commutation. That is wrong: they can be measured together, and are being measured routinely in all labs, by means of the careful preparation of correlated spin-zero states. What non-commutation precludes is not simultaneous measurement, but simultaneous prediction. The confusion between these two is a typical consequence of Bohr’s intentionally obscure ‘philosophy’. Nevertheless, ‘t Hooft’s reason for denying free will, to wit, his emphatic claim of ontological truth, is perfectly Thomist. His mistake is not ontological, but epistemological: in believing that an ontological stance on reality (a particle has all properties that are measurable) is incompatible with free will, ‘t Hooft confuses ontology (a particle has all properties at all times) with determinism (man can know all properties at all times). This confusion started with René Descartes, and is at the core of rationalism: the error according to which man is able to know everything knowable.
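The point about correlated spin-zero states can be made concrete. In the singlet state, measuring one spin component on particle A and a non-commuting component on particle B yields two sharp values in a single event; for the same axis on both sides, the outcomes are perfectly anti-correlated. A minimal sketch with projectors (the choice of axes is illustrative):

```python
import numpy as np

# Singlet (total spin zero) state of two spin-1/2 particles: (|01> - |10>)/sqrt(2)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def projector(op, sign):
    """Projector onto the +1 or -1 eigenspace of a Pauli operator."""
    return (np.eye(2) + sign * op) / 2

def joint_prob(opA, sA, opB, sB):
    """Probability of joint outcomes sA (particle A) and sB (particle B)."""
    P = np.kron(projector(opA, sA), projector(opB, sB))
    return (np.conj(singlet) @ P @ singlet).real

# Same axis on both sides: perfect anti-correlation
p_same = joint_prob(sz, +1, sz, +1)   # 0: never both spin-up along z

# Non-commuting axes, one per particle, both measured in one event
p_zx = joint_prob(sz, +1, sx, +1)     # 1/4: all four joint outcomes equally likely

print(p_same, p_zx)
```

Both measurements are performed and both return definite values; what remains impossible is predicting them beforehand.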
 Andreas G.M. van Melsen (1953) The Philosophy of Nature
 Richard P. Phillips (1934) Modern Thomistic Philosophy: an Explanation for Students
 Leo Elders (1997) The Philosophy of Nature of St. Thomas Aquinas
 Jean-Marie Aubert (1965) Philosophie de la Nature: Propédeutique à la Vision Chrétienne du Monde
 Filippo Selvaggi (1985) Filosofia del Mondo: Cosmologia Filosofica
 Innocenzo D’Arenzano (1961) Divus Thomas 3:27-69; (1964) 64: 27-69
 Charles De Koninck (1937) Revue Thomiste 45, 227-252
 G. Elizabeth M. Anscombe (1971) Causality and Determination
 Mariano Artigas (1998) Filosofía de la Naturaleza
 William A. Wallace (1997) The Thomist 61: 455-468.
 Thomas Aquinas, In Metaph. V lectio 1 §749
Thomas Aquinas, In Metaph. V lectio 6 §827
‘causa est ad quam de necessitate sequitur aliud’ (‘a cause is that upon which something else follows of necessity’)
Thomas Aquinas, Quaest. Disp. de Malo tr 23 q 3 a 3 ad 3
Thomas Aquinas, Summa Theologiae I-II q 75 a 1 obj 2
Thomas Aquinas, In Metaph. VI lectio 3 §1193
 Petrus Ispanus, Summule logicales, tr 5 n 19, 67, 6:
‘causa est ad cuius esse sequitur aliud secundum naturam’ (‘a cause is that upon whose being something else follows according to nature’)
 Avicenna Latinus, Liber de philosophia prima, tr 1 cap 6, 44-46
 Thomas Aquinas, In VI Metaph. lectio 3 §1191-1222.
 Harm J.M.J. Goris (1996) Free Creatures of an Eternal God:
Thomas Aquinas on God’s Infallible Foreknowledge and Irresistible Will, page 283
 Stephen L. Brock (2002) Quaestio 2: 217-240. See notes 21-23 on pages 222-223
 Thomas Aquinas, De Principiis Naturae 2: 98-119
‘Dicitur etiam aliquid unum numero, quia est sine dispositionibus quae faciunt differre secundum numerum: et hoc modo dicitur materia prima unum numero, quia intelligitur sine omnibus dispositionibus a quibus est differentia in numero.’ (‘Something is also said to be numerically one because it is without the dispositions that make things differ in number: and in this way prime matter is said to be numerically one, because it is understood without all the dispositions from which numerical difference arises.’)
 Thomas Aquinas, In Metaph. V lectio 2 §775
‘Efficiens autem causa causalitatis et materiae et formae’ (‘the efficient cause, however, is the cause of the causality of both the matter and the form’)
 Aristotle, Physics II.3 and Metaphysics V.2
 Thomas Aquinas, Summa Theologiae I q 2 a 3. Freely translated:
‘But this cannot go on to infinity, because then there would be no motion in the first place (but only transmission of motion)’
 Thomas Aquinas, De Principiis Naturae ch 3:
‘generally speaking, everything from which some change begins can be called a principle’
 Thomas Aquinas, Summa Theologiae I q 2 a 3:
‘for motion is nothing else than the reduction of something from potentiality to actuality’
 Thomas Aquinas, Summa Theologiae I q 2 a 3:
‘nothing can be reduced from potentiality to actuality, except by something in a state of actuality.’
 Thomas Aquinas, De Principiis Naturae ch 3:
‘for every cause can be called a principle and every principle can be called a cause, though the concept of a cause seems to add something to that of principle in its ordinary sense, for whatever is first can be called a principle, whether there results some existence from it or not’
 Thomas Aquinas, De Ente et Essentia, ch 2
 John F. Wippel, The Metaphysical Thought of Thomas Aquinas: From Finite Being to Uncreated Being, pages 358-359
 Thomas Aquinas, In V Metaph. lectio 3 §782
 Aristotle, Categories IV:1b
Aristotle, Topics I 9:103b
Aristotle, Physics V 1:225b
Aristotle, Metaphysics V 7:1017a
 A quantum event is nothing but an ordinary event,
with a mere emphasis on its fundamental unpredictability. My sitting at a desk, typing this text, is as much a quantum event as is the detection of a single photon.
 The term ‘wave-function collapse’ suggests the nonsensical notion that,
at times, the wave function does not evolve according to the Schrödinger equation
 An experimental set-up is well-designed when the initial state is ‘simple’.
Mathematically, this means that the initial state is a single eigenstate of the measured operator. E.g., when the final states are position states (as opposed to momentum states), then the optimum initial state is a single eigenstate of the position operator (i.e. not a linear superposition of such states). This initial state is, by definition, a linear superposition of eigenstates of the momentum operator (and of every other operator which does not commute with the position operator). Note that the initial state involves a wave function collapse in much the same way as the final state, for the initial state cannot be known by measurement: it can only be known by inference from the way the experimental set-up has been built. And again, it is well-designed only if it is such that one can predict the initial state as being a single eigenstate of some observable operator.
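The relation between the two bases can be illustrated on a discretized line, where the momentum eigenstates are the discrete Fourier modes (a toy model of my own, not the set-up of any particular experiment): a single position eigenstate is an equal-weight superposition of all momentum eigenstates.

```python
import numpy as np

N = 8                      # number of lattice sites (discretized position)
x_state = np.zeros(N, dtype=complex)
x_state[3] = 1.0           # a single position eigenstate: particle at site 3

# Expand in the momentum basis: coefficients are the discrete Fourier transform
p_coeffs = np.fft.fft(x_state) / np.sqrt(N)   # unitary normalization

# Every momentum eigenstate contributes with equal weight:
weights = np.abs(p_coeffs) ** 2
print(weights)   # all equal to 1/N
```

Sharpness in one basis thus entails maximal spread in the conjugate basis, which is why the well-designed initial state can be an eigenstate of at most one of two non-commuting operators.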
 The Copenhagen interpretation stems from the discussions,
held in that city, by Werner Heisenberg and Niels Bohr. The former’s views were distinct (a philosophical separation between quantum and macroscopic reality) but inconsistent; the latter’s were as indistinct as inconsistent. Einstein rightfully ridiculed Bohr’s view on the nature of physical properties by asking Abraham Pais whether he really believed the moon to exist only when observed.
 Hugh Everett (1957) Reviews of Modern Physics 29: 454–462
Bryce S. DeWitt (1970) Physics Today 23: 30–35
 H. Dieter Zeh (1970) Foundations of Physics 1: 69–76
Wojciech H. Zurek (1981) Physical Review D 24: 1516–1525
 The ‘statistical observation’ is an idealized concept,
because one cannot accumulate an infinity of events (as is necessary in order to determine the statistical distribution) in any finite amount of time
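The idealization can be quantified: a probability estimated from the relative frequency of N recorded events carries a statistical spread of order 1/√N, so any finite run only approximates the distribution. A quick Monte Carlo illustration (the probability value 0.3 is arbitrary):

```python
import random

random.seed(0)
p_true = 0.3               # an arbitrary 'true' quantum probability

def estimate(n):
    """Relative-frequency estimate of p_true from n simulated events."""
    hits = sum(random.random() < p_true for _ in range(n))
    return hits / n

# The estimate improves roughly as 1/sqrt(n), but never becomes exact
for n in (100, 10_000, 1_000_000):
    print(n, estimate(n))
```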
 Francisco Suárez, Disp. Metaph. XIX sectio x §5-6, 736
 Thomas Aquinas, In V Metaph. lectio 2 §775
 The Catholic view on how an infinitely good God can give being to an evil world without contradiction,
stems from the book of Job 1:8-12. From the divine perspective, the good represented by the realization of the free will of a free creature by far exceeds the evil represented by the consequential suffering of all other free creatures; that suffering, moreover, raises the level of merit essentially beyond original justification (the situation of Adam and Eve before their fall).
 Even though Leibniz was fundamentally mistaken on multiple counts,
he did deserve a worthier opponent than Voltaire to point them out
 Nicholas Jolley (1998) The Monist 81: 591-611
 Thomas Aquinas, Summa Theol. I q 42 a 1 ad 1
 Thomas Aquinas, Summa Theol. I q 3 a 2; I q 5 a 5; I q 77 a 4; I q 80 a 1; I q 115 a 1; I-II q 55 a 3; III q 13 a 1
 Gottfried Wilhelm Leibniz (1714) Principes de la Nature et de la Grâce fondés en Raison — Monadologie
 Thomas Aquinas, In II Phys. lectiones 7-10
 Thomas Aquinas, In VI Metaph. lectio 2 §1182 and following