Computational Field Theory

Over the years, many people have argued that physics, the study of all physical systems, is isomorphic to computation, the study of all programs. This made many people very angry, as they felt that such a reality, where physical systems are reduced to programs running on a universal quantum supercomputer, would suck. Granted, the ontological aspiration of the theory that the whole Universe is a program running on a universal quantum supercomputer might be ambitious, but the theory is useful as an epistemological metaphor nonetheless.

Konrad Zuse – Calculating Space (1967) ©Konrad Zuse (Fair Use)

On the grounds of the above theory, it can be argued that there is a computational equivalent to spacetime called “computation space”. Spacetime relates to computation space as follows.1 Imagine the shape of a decision surface in a spacetime diagram splitting points that belong to an ontological object from points that do not, e.g., the shape for a photon is a line, the shape for a non-idealized double pendulum is a chaotic polytope, and the shape for any living thing is even more complex. The shape of an arbitrary selection of spacetime is defined by the function2 constructing that shape. A function is computable if there exists an effective procedure that realizes the mapping between input and output defined by the function. Possible selections of spacetime form shapes that correspond3 to functions which, in the form of effective procedures, each occupy an atomic area in computation space. The following describes the structure formed by these elements in computation space.

‘An object forms a kind of four-dimensional “worm” in spacetime. An object, seen at any one instant, is a three-dimensional cross-section of this long, thin, intricately curved worm. The line along which the worm lies… is called the object’s worldline’. Paraphrased after David Deutsch and visualized in the movie Donnie Darko. The picture is from a very nice article.

For example, imagine two possible universes. The first universe contains only a frozen planet moving about. A single function based on some parameters would seem like a suitable model to describe the movement of the frozen planet, and mapping that function to an element in computation space would be trivial. The second universe contains only a double pendulum. The chaotic behavior of the pendulum can be modeled by a single function only if its parameters are infinitely fine-tuned. Multiple different, less finely tuned functions could approximate the behavior of the double pendulum. In other words, there is some fundamental fuzziness associated with the function – or rather the similar functions – of the double pendulum, from which novelty can emerge. Mapping the functions for the second universe to computation space requires a computation space structure in which novelty can live, i.e., more elements populate computation space in the case of the second universe compared to the first. The sketch after the figure below makes this sensitivity concrete.

Relation between spacetime and computation space
a) Universe devoid of novelty and structure
b) Universe devoid of novelty where behavior can be described from multiple points of view
c) Fine-tuned chaotic system giving rise to a multiverse where each novel universe arises from initial conditions and the operation of the chaotic system
d) Different novel universes arising from different fine-tuned chaotic systems
e) The total novelty “amplitude” for a universe is given by the sum of all functions in computation space approximating that universe. Optimizing for novelty is what justifies the unreduced existence of the universe.
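
To make the fuzziness above concrete, here is a minimal sketch (assuming the standard textbook equations of motion and a deliberately crude fixed-step Euler integrator): two double pendulums whose initial angles differ by one billionth of a radian soon trace out entirely unrelated paths, which is why no single function short of infinite fine-tuning pins the system down.

```python
# Two double pendulums, identical except for a 1e-9 rad difference in the
# first angle. The equations of motion are the standard textbook ones; the
# fixed-step Euler integrator is a deliberate simplification.
from math import sin, cos

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0

def derivs(th1, w1, th2, w2):
    """Time derivatives of (theta1, omega1, theta2, omega2)."""
    d = th2 - th1
    den1 = (M1 + M2) * L1 - M2 * L1 * cos(d) ** 2
    a1 = (M2 * L1 * w1 ** 2 * sin(d) * cos(d)
          + M2 * G * sin(th2) * cos(d)
          + M2 * L2 * w2 ** 2 * sin(d)
          - (M1 + M2) * G * sin(th1)) / den1
    den2 = (L2 / L1) * den1
    a2 = (-M2 * L2 * w2 ** 2 * sin(d) * cos(d)
          + (M1 + M2) * (G * sin(th1) * cos(d)
                         - L1 * w1 ** 2 * sin(d)
                         - G * sin(th2))) / den2
    return w1, a1, w2, a2

def simulate(th1, th2, steps=30_000, dt=0.0005):
    """Integrate for steps*dt seconds from rest and return the final angles."""
    w1 = w2 = 0.0
    for _ in range(steps):
        dth1, dw1, dth2, dw2 = derivs(th1, w1, th2, w2)
        th1 += dt * dth1; w1 += dt * dw1
        th2 += dt * dth2; w2 += dt * dw2
    return th1, th2

print(simulate(2.0, 2.0))
print(simulate(2.0 + 1e-9, 2.0))  # a billionth of a radian later: unrelated states
```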

Just as spacetime is not flat, neither is computation space. By analogy, computation space is better thought of in the vicinity of a “computational field” (not engaging in the discussions related to a computation space analog of Supersubstantivalism), where the shape of the field changes due to the presence of computational complexity, i.e., the massiveness of an object in computation space. Computational complexity is a measure of the effort related to the transformation of a function into a one-way street (i.e., a computational procedure, e.g., an algorithm running on a Turing machine) connecting a set of input parameters to some desired output. A computational procedure is generally said to express complexity if novelty (unpredictable algorithmic depth when traveling a one-way function in the opposite direction)4 explains how a small change in input parameters results in a vastly different output. Functions that are computationally complex could be considered the computation space equivalent of a massive object in spacetime.
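
The one-way street and the sensitivity to input can be made tangible with a toy stand-in. A cryptographic hash is cheap to compute forward, believed hard to invert, and flips roughly half of its output bits when the input changes by a single letter (the avalanche effect); the hash illustrates the intuition, it is not the article's construction.

```python
# Forward direction: cheap. Reverse direction: believed intractable.
# A one-letter change in the input scrambles the output completely.
import hashlib

def h(s: str) -> int:
    return int.from_bytes(hashlib.sha256(s.encode()).digest(), "big")

a, b = h("double pendulum"), h("double pendulun")  # inputs differ in one letter
print(bin(a ^ b).count("1"), "of 256 output bits differ")  # typically near 128
```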

The idea of functions existing in a computational field raises a chicken-and-egg question: does the complexity of functions deform the computational field, or does the deformation of the computational field define the complexity of functions? The hypothesis is that similar functions attract each other and form the analog of massive objects in computation space, i.e., functions behave like the smallest pieces of matter exerting a gravitational force on other pieces that are similar in terms of spacetime coordinates. It is the computation space analog of this attracting force that grants functions their complexity by means of deforming the computational field.

The computational field at any point P in computation space is defined as the force X felt by a tiny unit of Y placed at that point. The tiny unit of Y is the function governing the smallest possible change in spacetime. The force X is the “computational force” which pulls together what belongs together in computation space. Functions aggregate in regions of computation space due to the computational force, similar to how the tiniest fragments of matter aggregate in spacetime to form massive objects due to the gravitational force. The foundation for the gravitational force is the proximity of matter according to some distance metric. In analogy, a metric is required to define the proximity of functions. The computational force pulls together what belongs together according to the proximity metric for computation space, like the gravitational force pulls together what belongs together in terms of the distance metric for spacetime.
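
Schematically, the analogy reads as follows; the left-hand side is the standard definition of the gravitational field, while the right-hand side is only a mnemonic rendering of the X and Y above, not established notation:

```latex
% Gravitational field: force on a test mass at P, per unit mass.
\vec{g}(P) = \frac{\vec{F}_{\mathrm{grav}}(P)}{m_{\mathrm{test}}}
\qquad\longleftrightarrow\qquad
% Computational analog: "computational force" on the smallest-change function Y.
\vec{X}(P) = \frac{\vec{F}_{\mathrm{comp}}(P)}{Y}
```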

The metric for the proximity of functions – for lack of a better understanding5 – must be defined indirectly. Instead of trying to find a metric for the proximity of functions by extracting similarity from their symbolic definition, similarity is assessed through their expanded form, i.e., the corresponding shapes formed in spacetime. Similar functions approximate some specific shape of spacetime with a certain degree of accuracy. E.g., when a creature moves its arm, several functions with a degree of similarity are approximately homomorphic to that movement. Similar functions enable the movements of an arm – which are aggregates made from similarly behaving building blocks of matter – at different granularity levels.
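
A minimal sketch of such an indirect metric, under the assumption that “expanded form” may be read as “sampled outputs over a region”: the truncated Taylor polynomial below looks nothing like sin symbolically, yet is behaviorally close to it, while cos is symbolically a near twin and behaviorally far away.

```python
# Compare functions by their expanded form (sampled outputs), not their syntax.
# The RMS metric and the sampling region are illustrative choices.
from math import sin, cos, pi

def taylor_sin(x):  # 7th-order Taylor polynomial of sin around 0
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

def rms_distance(f, g, lo=0.0, hi=pi, n=1000):
    pts = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return (sum((f(x) - g(x)) ** 2 for x in pts) / n) ** 0.5

print(rms_distance(sin, taylor_sin))  # small: symbolically unlike, behaviorally close
print(rms_distance(sin, cos))         # large: symbolically close, behaviorally far
```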

From a larger chunk of spacetime, all possible functions for all possible selections from that spacetime chunk aggregate and diverge in shaping the computational field6. Some complicated shapes in spacetime can be closely approximated by many similar functions which are pulled together in a common place in computation space by the computational force, forming attractors around which ontological objects order themselves. Likewise, the attractor at the core of a computationally complex region of the computational field facilitates or is facilitated by the increasing number of functions that are similar by some metric. E.g., when a creature moves its arm to grab an apple, the similarity of functions structuring around attractors of the computational field is a requirement enabling the description and configuration of different aggregates of spacetime selections in a way that creates the consistent illusion of a creature, its arm, and an apple at the different granularity levels of the macroscopic, microscopic, and quantum world. 

Possible Distortions in 1D Computation Space
a) … nothing moves in spacetime
b) … trivial behavior in spacetime
c) … multiple systems interacting
d) … irreducible complexity
e) … computational equivalence

Some ontological objects rely on functions that are computationally complex, as complexity is a feature around which extensive similarity lives. I.e., many similar problems and approximations live around a computationally complex function but not around an easy one. E.g., there are many algorithms that provide optimization or approximation for the factorization problem, but there are few algorithms that approximate the result of addition. Similar functions for complex problems can be exploited by ontological objects evolving toward ever greater competence and are a necessity for any advanced form of life. For example, the interactions between the paths of stellar objects or between neurons in the brain are defined by complex systems capable of producing novelty, and many approximations can be found for these systems at different granularity levels.
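
The factoring-versus-addition asymmetry can be made concrete. The sketch below shows two standard, very differently shaped procedures that solve the same factoring instance, exhaustive trial division and Pollard's rho; addition, by contrast, admits essentially one trivial procedure and attracts no such family.

```python
# Two dissimilar routes to the same factor: a whole family of methods
# (trial division, Pollard's rho, sieves, ...) clusters around factoring.
import math, random

def trial_division(n):
    """Smallest factor by exhaustive search: O(sqrt(n)) divisions."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def pollards_rho(n):
    """Pollard's rho: a randomized method, ~O(n**0.25) steps on average."""
    if n % 2 == 0:
        return 2
    while True:
        x = y = random.randrange(2, n)
        c, d = random.randrange(1, n), 1
        while d == 1:
            x = (x * x + c) % n
            y = (y * y + c) % n
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:
            return d

n = 10403  # = 101 * 103
print(trial_division(n), pollards_rho(n))  # different routes, same answer
```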

As the scale of complexity increases, so does the deformation of the computational field. The curvature of the computational field is an effect of the computational complexity of an attractor reflecting the similarity of functions in computation space and relates to the resources required in the hyper-computational setting. It is explained later – with a twist – how the computational field can be embedded in a hyper-computational setting. 

The deformation of the computational field can take many forms. The computational field is flat where functions live that govern no change in spacetime, i.e., for static parts of spacetime selections. Small dents in other parts of the computational field indicate a linear/continuous/smooth shape of the corresponding selection of spacetime representing, e.g., an electron that moves through time and space. Of course, electrons interact with all kinds of stuff, and stuff interacts in general. As the scale of interaction increases, the complexity that a shape of some spacetime selection contains – and implicitly the number of functions approximating that shape – also increases, creating greater dents in the computational field. Once a certain threshold of complexity is reached, a hole is formed in the fabric of computation space. This reflects the hypothesis that the extreme of a computationally complex function, a computationally irreducible one, is surrounded by approximations that produce less and less similar outputs as their distance to the computationally irreducible singularity in computation space increases. As the similarity in output decreases, so does computational complexity. Once the distance increases sufficiently, functions might fall into the well of another complex region in computation space. The meaning of a hole in the computational field is that spacetime is behaving according to a function that is not computationally reducible – which seems like a paradox, as it is hard to explain how spacetime would be able to form shapes that cannot be mapped onto the computational field and vice versa. This article will later revisit what it means for a function to be computationally (ir)reducible while forming a hole in computation space.

Increasing computational complexity (1-4) leading to irreducibility and the formation of a new computational bubble (5)

First, imagine two possible universes. One contains a digital computing machine adding up numbers, while the other contains a digital computing machine calculating the n-body problem. From within these hypothetical universes, reality might seem very similar. Imagine further that both universes are described by the same computational field7. Inside the universes exists the expanded form of what is a pocket of computational reducibility on the outside, i.e., a deformation of the computational field.8 The first universe creates only a small dent in the computational field. In contrast, the second universe is accessing a problem that is computationally irreducible (why it can do that is another story9), meaning that the hyper-computational resources required to calculate the second universe might approach infinity. The second universe has the potential to punch a hole into the computational field.

Using a computer science view, it is easy to imagine that nothing else (still not engaging in the discussions related to a computation space analog of Supersubstantivalism but silently adopting it) is required besides the computation of the functions to give rise to reality. The computational field is the computation of the functions that manifests in the performance of spacetime. In other words, spacetime is an inevitable effect resulting from the mere existence of the computational field and the functions that live in it under conditions of complexity, driven by a hyper-computational construct.

Zooming out, the computational field forms a bubble containing functions surrounded by infinity, just as the big bang formed a bubble containing all the energy in the universe. Inside the computational bubble, black-hole-like singularities define the edge of what is efficiently computable. Wormhole-like structures connect parts of computation space that are computationally equivalent.

However, at the edge of the computational bubble, holes do not form inward. Imagine the matter from a collapsed star forming a black hole and falling into its own black hole. The result would be a drop of spacetime faintly connected to traces of its energy in the spacetime bubble that the drop was formed from. A vibrating string too small for anything to pass through, too small to be considered on its own as a dent in the shape of spacetime. Similar drops are formed from computation space by functions that are not computationally reducible. The hole in computation space seemingly requiring infinite resources is no hole; it is a gateway to another computational bubble where the relation between functions and their complexity may be allocated differently. To summarize, there is a computational field underlying spacetime forming a bubble whose shape defines what can be efficiently computed. The bubble defines the scope of computational reducibility as a set of functions that can be efficiently computed. The hypothesis is that a) smaller bubbles contain fewer efficiently computable functions, and b) bubbles contain an arbitrary set of functions for which the definition of efficient reducibility is given by the respective shape of the bubble they are contained in. In short, there is no absolute definition of what is efficiently computable and what is not. Rather, the materialistic part of human experience in spacetime filters out the perception of all other possible computational bubbles. Understanding the expanded form of a foreign computational bubble would hardly be possible from the point of view of spacetime. The reason is that the computational bubble underlying spacetime considers irreducible the very function for which another bubble provides a pocket of reducibility. The irreducible function would correspond to an unnatural shape for a selection of spacetime. Approximations of that irreducible function are possible in spacetime, but the true irreducible problem has no representation in spacetime. This could lead to us humans having only a very limited idea of what is possible, if it were not for the fact that we are not philosophical zombies.

Bubbles that Compute in Infinite Escher Hyper-Computation Space
A’ … Approximations of: functions in A for which no computationally reducible form exists in B that computes efficiently
A … Expanded form of A’ calculated at infinite precision, which is computationally irreducible in B

By untangling the computational field from its spacetime expression, the capacity of the computational field suddenly becomes unbounded in an infinite space where everything (every function) is computationally reducible in an efficient manner. It is only the local bubble that imposes constraints. That bubble is connected to others through holes formed by the massiveness of similar functions inside the bubble. In one bubble, the hole represents the computationally irreducible attractor that similar functions structure around, while on the other end the hole becomes its own computational bubble with different rules that define complexity. What is irreducible in one bubble can be easy in the other and vice versa (in an optimized bubble structure, where optimization is possibly driven by mechanisms of evolution). Likewise, the computational bubble underlying our physical universe could be calculating what is irreducible in another computational bubble.

Imagine two universes again. One contains a classical digital computer calculating the n-body problem to some limited precision (i.e., a numerical solution, which is possible in spacetime), while the other contains a weird computational machine calculating the exact solution to the n-body problem (i.e., an analytical solution, which is not possible in spacetime). Underlying the two universes are two computational bubbles formed from the computational field. The first bubble is the one humans can exploit with a materialistic approach, e.g., humans can program a computer for the task at hand (calculating a numerical approximation). The second bubble can be employed by the constructor of materialistic reality, e.g., for running the simulation of the universe containing all planets, where on one of these planets there are humans computing a numerical solution to the n-body problem. The second bubble is inaccessible by materialistic means as it is employing computation in an unnatural way.10 However, in a wild relativistic twist, the analytical solution is an approximation of the numerical one. Between the two solutions exists a symmetry more than a hierarchy. Novelty leaks from one bubble to the other due to the symmetry. The relation between the bubbles is that in the local bubble many similar functions representing numerical approximations surround an attractor representing an exact analytical solution. This attractor causes a deformation of the local bubble, forming a hole that extends far out and expands into its own bubble on the other end. Novelty radiates outwards where bubbles meet, creating unexpected patterns when complex systems are at work.
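
The contrast is easiest to demonstrate in the two-body case, which (unlike the general n-body problem) does have a closed-form solution. A minimal sketch, assuming G·M = 1 and a unit circular orbit: explicit Euler integration visibly drifts away from the exact solution (cos t, sin t).

```python
# Numerical vs. analytical: crude Euler integration of a unit circular orbit
# drifts away from the exact closed-form trajectory.
from math import cos, sin, hypot

def euler_orbit(t_end, dt=0.001):
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0        # exact circular-orbit initial state
    for _ in range(int(t_end / dt)):
        r3 = hypot(x, y) ** 3
        ax, ay = -x / r3, -y / r3             # inverse-square attraction, G*M = 1
        x, y = x + dt * vx, y + dt * vy
        vx, vy = vx + dt * ax, vy + dt * ay
    return x, y

t = 50.0
xn, yn = euler_orbit(t)
xa, ya = cos(t), sin(t)                       # the analytical solution
print("drift after", t, "time units:", hypot(xn - xa, yn - ya))  # grows with t and dt
```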

For any complex life to emerge, novelty needs to be injected into an otherwise dormant system. Novelty in its approximated form stems from the complexity of functions surrounding an attractor in computation space, and in particular from the effort required to compute such functions. This kind of novelty is in principle explainable in the substrate that the local bubble of the computational field supports. I.e., it is possible to create deterministic novelty using complex functions by running programs on a classical computer, e.g., a random number generator. However, novelty is not fully explainable where functions push against what is efficiently computable. I.e., it is possible to create deterministic novelty in complex systems, e.g., the behavior of a cell is not reducible to natural computation and neither is the novelty produced by the cell. This kind of novelty is indistinguishable from true novelty. Novelty in its true form originates from an exchange of information between a function hosted in one bubble and the attractor it is approximating, where the attractor conceals the effective computation of a function irreducible in the local bubble. True novelty is created by the seemingly irreducible nature of the attractor, radiating outwards to the surrounding functions.
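
A minimal example of such deterministic novelty: a linear congruential generator (the constants below are the classic Numerical Recipes parameters) is a fixed, simple function, yet its output stream looks unpredictable to anyone who does not know the rule, and re-seeding reproduces it exactly.

```python
# Deterministic novelty in its simplest form: a pseudo-random stream
# fully determined by one seed and one affine rule.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m

gen = lcg(42)
print([round(next(gen), 3) for _ in range(8)])
# Re-running with seed 42 reproduces the stream bit for bit: the "novelty"
# is fully determined, just not apparent from the outputs alone.
```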

Functions surrounding an attractor stand in relation to wherever the hole formed by the attractor leads. Since the computational field forms a bubble made of functions that construct the substrate, connecting one bubble to another results in a duality for reality, as the other bubble constructs another substrate. There is no causal relation required between the two substrates, as the underlying functions are intrinsically related. A qualitatively different expression other than the materialistic one associated with spacetime can be imagined to produce the expanded form of the functions in a foreign computational bubble. Just as the functions in the computational bubble underlying materialism correspond to a shape in spacetime, the functions in the foreign computational bubble might correspond to an expression in a qualitatively different substrate. When humans tap into complex functions by conscious thought, they might in some sense activate an infinite cascade of computational bubbles.

Imagine two universes that are qualitatively different; one is based on spacetime, the other on something else. Spacetime relates to the computational bubble as we know it, the other to a different type of computational bubble that brings other capabilities and where other functions are reducible. Maybe our impressions of qualia live in that other universe. The matter and energy in our brain move according to a computationally complex function living in our computational bubble. That function is connected by a wormhole-like structure to a different bubble. An operation on one end of the bubble is related to an operation on the other end. The operations construct parts of reality. The functions in the second bubble are not reducible from the point of view of spacetime, but they might correspond to a shape in a qualitatively different reality. Maybe our brain is the approximate solution to an irreducible problem that is calculated exactly somewhere else, or vice versa.

Computation space containing all the computational bubbles might be a static creation. However, it seems compelling to think about the evolution of computation space at different granularity levels. If the way functions aggregate when shaping the computational field itself exhibits dynamic behavior, it can be seen as another strange fountain of novelty. (Next to deterministic novelty that weakly emerges from the similarity of functions interacting in a dense area of computation space, and strongly emerging true novelty created at the border of the singularity when functions structure around the hole formed by a computationally irreducible attractor involving some bi-directional exchange.) If functions themselves are a form of life, they might combine and mutate in a dynamic environment, enabling ever greater expressiveness, which could again correspond to novelty exploitable by the ontological objects in spacetime. This force of life might drive the evolution of the computational field. On a coarser granularity level, the aggregation of functions shapes the computational bubbles. Thereby, the bubbles themselves become another substrate. It is imaginable that entities made from that substrate live in computation space. As part of a living thing, computational bubbles might evolve under selective pressure, and the evolution of the computational bubbles might be a facilitator of the creativity employed in the evolutionary process of these beings. The machine elves send their regards.

The infinite conceptual space before the first computational bubble contains everything and nothing. When certain combinations of concepts meet, a conceptual big bang occurs in the infinite conceptual space. Imagine meta-concepts like locality that bring together the concepts of causality, circular dependency, and an infinite-pass compiler. These concepts together would start a huge mess, as the infinite-pass compiler would try to resolve circular dependencies until a causality chain is created. Maybe that’s just how everything started. A set of computational bubbles causing each other in a strange loop in a distant part of the infinite conceptual space where the attempted resolution of the conflict causes an ever greater mess. An Escher hyper-computation area in the infinite conceptual space. This mess could act as the hyper-computational machine promised earlier in the article.

Footnotes and Open Ends

1) An alternative view on computation space is to consider initial conditions for a single algorithm that is executed in a super-deterministic manner. Given the right algorithm, the whole of spacetime reality might get laid out by computing the algorithm. Such an algorithm relates to all of computation space, as both the single algorithm and computation space need to account for novelty. E.g., if the single algorithm were a cellular automaton, the vast amount of computation necessary to produce reality could be described at different levels in a hierarchy, e.g., a cellular automaton might realize Turing machines running programs (emerging as novelty from initial conditions and the computation of the algorithm), enabling decomposition of the cellular automaton into a structure (a space) of computable functions. This article describes the space of computable functions and how novelty structures in it. In the sense that pockets of novelty are explicitly addressed, computation space is an explanation exceeding that of a single algorithm being executed based on some initial conditions. 

(On a side note, the claim in the article is that the four-dimensional spacetime structure relates to computation space. To facilitate a more accessible explanation, other possible dimensions next to spacetime are ignored in the article.)

2) The (parametric) function describing the spacetime shape is different from the function generating the spacetime shape. The function generating the spacetime shape might be computationally simple or complex, independent of whether the resulting shape is simple or complex. A complex function can conjure novelty when creating output, but novelty does not have to manifest in one particular output. A butterfly in Brazil can cause a tornado in Texas, but a butterfly can also have become bird food long before stretching its wings. However, both of these types of functions might find a place in computation space.

3) There are good arguments claiming that reality is not fully reducible to computable functions. Accordingly, models based on computable functions fail to capture some essential aspects of reality and computable functions can model reality only when certain tradeoffs are accepted. The arguments do not rely on some of the more esoteric aspects of reality but work just as well for materialistic questions like how the path of a double pendulum or the orbits of all stellar objects in the universe evolve. In these examples, the tradeoff is that complex systems produce novelty which can be modeled by computable functions only to a limited precision.

One could imagine a path of a chaotic system that is truly random. Such a path would be incompressible. There would not exist a function that could represent the chaotic system, in the sense that no reduction is possible, as the total input of the function would have the same effective length as the output. Unless the entropy comes from somewhere else. E.g., there is no computational function capable of exactly modeling a molecule in a storm efficiently. The molecule is part of a (truly random?) chaotic system and no efficient reduction is possible, meaning that the behavior of the molecule in reality is its most condensed form. Similarly, a function describing the spacetime shape of the molecule cannot be condensed (unless randomness breaks down when looking at a shape of a size smaller than the sensitivity of the function to initial conditions). A compressed function might exist if the chaotic system were approximately random, but even then there is no guarantee that a computational procedure could calculate the function efficiently.
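
The incompressibility claim can be probed empirically, with the caveat that a general-purpose compressor only upper-bounds description length: high-entropy bytes resist compression, while equally long structured bytes collapse to a short description.

```python
# Random bytes are (approximately) their own shortest description;
# structured bytes reduce to a short one.
import os, zlib

random_data = os.urandom(100_000)        # high-entropy input
structured  = b"ab" * 50_000             # same length, trivially regular

print(len(zlib.compress(random_data)))   # ~100_000 (slightly larger, in fact)
print(len(zlib.compress(structured)))    # a few hundred bytes at most
```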

A problem in the whole discussion is the unclear definition of a computable function (as an example of semantic confusion, there is a difference between claiming that computable functions have to calculate efficiently on a classical or on a quantum computer). Essentially, the model of computation is in question. This article wonders if currently only traces of the concept of computable functions shimmer through the veil of the unknown to the state of the art. Similarly, state-of-the-art concepts of computable functions may live in a conceptual space that is connected to another conceptual space underlying the unexplainable parts of reality. I.e., there could be something like the concept of computable functions that a broader scope of reality can be reduced to – broader than what is reducible to computational functions in the initial sense. Such an extended concept of computational functions would seem unnatural, as it most likely would not be constructible in materialistic reality. I.e., computable functions as currently understood arise from a conceptual proto-computational soup that might give rise to different procedural frameworks (as is the case with, e.g., the frameworks for deterministic and nondeterministic Turing machines), but some (most) of these frameworks will not have a correspondence in the part of physical reality that is materialistic.

It is reasonable to acknowledge the argument claiming that some parts of reality are not reducible to computable functions in the narrow sense, but whatever these parts are reducible to should be understood in relation to computable functions. This relation should be combined with the semantics of what computable functions are, which is exactly what this article aims to do. The claim that all of spacetime is reducible to computational functions foreshadows the rest of the article, which outlines how the concept of a computable function can be understood in a more extensive manner. The extension of the concept concerns especially the point at which functions stop being efficiently computable.

4) The hypothesis is that there is a base layer of compute which acts as the calculating foundation for the processes of reality, i.e., reality can be reduced to something that computes. The base layer is reflected by the formal computational model underlying computable functions and consists of a finite number of procedural elements expressible as computational constants. E.g., the computational cost of arithmetic operations takes on the numerical value of 1. Accordingly, it is not possible to further divide the elements on the base layer of compute. Some processes of reality can compute on the base layer, i.e., there is no semantic gap between the base layer and the abstract processes to be computed, while abstract (virtual) processes can be decomposed to the base layer. I.e., the assumption is that reality relates to computational elements like the operator for multiplication, which can be efficiently computed, while operators like factoring cannot be efficiently computed, as the first has a representation on the base layer while the latter has not. Thus, reality is shaped not only by initial conditions and rules of causality, but also by the base layer of compute. Similar to the assumption of efficiently computable elements on the base layer, pockets of computational reducibility enable efficient decomposition of a set of abstract processes, while abstract processes not in that set fail to compute efficiently. The key point is that the selection of elements on the base layer of compute and the selection of pockets of computational reducibility is ultimately arbitrary and that the arbitrary selection is not a self-explanatory state. I.e., why is there not an infinite number of elements on the base layer of compute and an infinite selection of pockets of computational reducibility?
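
A toy rendering of the unit-cost idea (the cost model below is an illustrative sketch, not the footnote's formal construction): multiplication is priced as one base-layer primitive, while factoring, lacking a primitive of its own, is priced by the number of primitives its decomposition consumes.

```python
# Unit-cost toy model: primitives cost 1; composite processes cost
# however many primitives their decomposition uses.
def multiply_cost(a, b):
    return a * b, 1                # assumed base-layer primitive: cost 1

def factor_cost(n):
    """No factoring primitive exists; count the primitives consumed instead."""
    ops, d = 0, 2
    while d * d <= n:
        ops += 1                   # each candidate divisor tried costs one unit
        while n % d == 0:
            n //= d
        d += 1
    return ops

print(multiply_cost(101, 103))     # (10403, 1): one unit for the forward direction
print(factor_cost(10403))          # 100 units for the reverse direction
```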

The strange side effect of the arbitrary selection is the possibility to define abstract processes in the formal computational model that can no longer be decomposed efficiently, e.g., in cases where decomposition leads to unpredictable algorithmic depth. Under the assumption that the formal computational model describes the base layer of compute, a discrepancy appears between how the base layer operates while constructing reality and how it is modeled, e.g., the base layer can construct the evolution of planetary orbits precisely, which is something a machine realizing the formal computational model cannot do. Thus, either the model is incomplete, or the base layer operates differently than expected.

The core of the problem seems to be that the formal computational model is in some aspects more general than the selection of elements on the base layer of compute and the selection of pockets of computational reducibility supported by materialistic reality. I.e., it is possible to formulate an abstract process in the formal computational model that does not compute efficiently in materialistic reality. It seems as if the formal computational model models something other than what is supported by materialistic reality. Furthermore, there seems to be a way in which the overarching formal computational model can be put into operation, as is evident from, e.g., the planets moving in their orbits or the mind understanding Gödel sentences based on non-computable behavior in the brain. Maybe orbital mechanics and human understanding are non-materialistic in the sense that materialism and the model of computation aligned to materialism are a subset of something more general.

A much better explanation is that materialistic reality acts as a filter exposing only the arbitrary selections necessary to compute materialistic reality. That is why the same abstract process might compute efficiently in one version of materialistic reality while it might not compute efficiently in another, e.g., on a classical computer vs. a quantum one. I.e., the formal computational model to some extent allows one to describe problems that could run given another base layer of compute and other pockets of computational reducibility than what passes through the materialistic filter. It is only once the formal computational model is transformed into materialistic reality that problems regarding efficient computation emerge. (However, the state-of-the-art formal computational model adopts some of the constraints presented by the view through the materialistic filter and is handicapped in that sense.)

Another take on the issue is to understand that the formal computational model can in principle be unbound from the materialistic filter. A process formulated in terms of the formal computational model can be transformed into a partial reality like the one understandable through the materialistic filter. The transformation implies that the process formulated in terms of the formal computational model has to be separated into what can be reduced to the arbitrary selection of compute as defined by the materialistic filter and into what cannot. What cannot be reduced, cannot be reduced only within the scope of a partial reality. The processes are not owned by a partial reality; rather, a partial reality represents a certain view on the processes that filters their full scope. A pathway for novelty is expressed in a partial reality as processes are computed by the unbound base layer of compute. E.g., materialistic reality is a filter through which it seems impossible to compute an exact solution to the n-body problem without running into unpredictable novelty that cannot be explained from within the filter. Without that filter, it is possible to calculate the exact solution to the n-body problem, as demonstrated by materialistic reality itself. The assumption of a hierarchy between the two problem instances of the n-body problem breaks down as both are reduced to an unbound base layer of compute.

The symptom of the complexity in calculating the n-body problem when viewed through the filter of materialistic reality comes in the form of novelty. Novelty is found when the description of the n-body problem in the formal computational model is viewed through the filter of materialistic reality, as this is where unpredictable algorithmic depth occurs. Novelty is resolved by removing the filter. In the expanded view, it is possible to compute the n-body problem without encountering novelty, as there is no longer a semantic gap between the n-body problem and efficient computation, which is demonstrated by the constructor of materialistic reality hurling around planets. That is not to say that there can never be novelty, as the interconnected, evolving, and different views on the unbound base layer of compute might actually be the base layer, i.e., there might not be a static absolute truth, as the article will argue.

5) A function and its transformation into an efficient computational procedure is a point-like construct in computation space. It is the opposite of trivial to understand the proximity of these points, especially if the functions occupying them are described in a state-of-the-art symbolic form. The reason might be that the properties of a function are not fully explainable locally, which would be like trying to understand the gravitational field at a point in spacetime by looking at the atom at that point, when it is the area surrounding that point that influences the gravitational field there. I.e., if the complexity of a function is defined not by the properties encoded in its symbolic form alone but also by its neighborhood, it would be hard or impossible to compare functions based on their symbolic forms. By executing a function, its neighborhood is implicitly considered, as that function runs in a hyper-computational setting that knows of the neighborhood. Therefore, it is generally not easy to predict the deformation of the computational field by looking at a random function without considering its expanded form, i.e., without considering the shape that a selection of spacetime occupies. Certainly, one cannot simply examine the symbolic properties of a function to understand whether it is computationally complex or not. If it is hard to understand the complexity of a function, it is also hard to understand the similarity of one function to another without considering the expanded form of such a function. Also, functions might be similar in many ways, e.g., the spacetime shapes might contain one another, cover a comparable surface area, possess the same fractal dimension, etc. The metric for the similarity of functions is something that would have to be examined further at a later stage. The different ways in which functions can be similar possibly relate to the dimensions of computation space.

6) An ontological object is a composition that corresponds to multiple similar shapes in spacetime. A selection of these shapes collapses to the base layer of compute, which employs computational constants for some atomic functions constructing the shape, in addition to pockets of reducibility that efficiently relate function aggregates to atomic functions. Thereby, it is possible that some optimization is in play regarding which shapes are collapsible, e.g., a pocket of reducibility at constant cost could be provided for the most widely used functions. Some other shapes might not be so easily reducible.

The computational field could be seen as something like information about the possible trajectories that physical systems can take. Complexity is a measure of the freedom that has been introduced into a physical system, e.g., by causal events involving other physical systems. This freedom – which becomes novelty when utilized – is expressed by the deformation of the computational field. For some spacetime selections encoding novelty, the reduction to the computational field would deform the computational field to an extent that would create a singularity, as functions that reduce novelty would have to be supermassive objects in computation space.

Regarding the claim that all of spacetime can be reduced to computational functions, two approaches can be differentiated. The first is to find computable functions that reduce spacetime shapes approximately, where either the approximation does not fully account for novelty or novelty is an illusion. The second is to take a closer look at the singularities in computation space. The introduction of true novelty beyond emergence due to large-scale causal events is possible, as the computational field can form a wormhole to somewhere else when the scale of similar functions approximating an attractor is increased sufficiently. These singularities are why approximations by computational functions can depict a selection of spacetime under novelty conditions merely with a certain degree of accuracy. The explanation for a spacetime shape containing novelty is non-computational in the sense that a model of computation implementable on a physical materialistic substrate can only approximate that shape. It is computational in the sense that an unnatural model of computation implementable somewhere else can describe the shape. The plot twist is that the physical materialistic substrate is itself a “somewhere else” place that unpacks a prime shape in a prime unnatural way, inducing prime novelty. However, the computational functions approaching an attractor are the same as the attractor, in analogy to how the event horizon of a black hole is the black hole, as all theory becomes speculation beyond that point. As a consequence of the deformation of the computational field by the aggregation of similar computable functions, ever greater behavioral patterns of a material substrate become possible, eventually leading to the creation of life.

7) The two problems in the two universes have to be rephrased as decision problems (or function problems) first to better understand their complexity. A simple approach is to turn them into halting problems. The problem in the first universe is to determine if a program halts that takes as input a string encoding two numbers to be added. Such a program halts, of course, and does not require exponential resources to do so. The problem in the second universe is to determine if a program halts that calculates the n-body problem. It is possible to imagine an extended form of the n-body problem with a halting condition, e.g., halt if two bodies collide. Still, the n-body problem is a tricky one, because there is no exact procedure for solving it in general. At least in our computational bubble, the n-body problem is best solved by approximation. It would not be clear if an n-body simulator halts given a halting condition and arbitrary input. Even worse, the resources required might scale exponentially with input size, depending on the procedure of approximation.
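
A sketch of that halting formulation, assuming G = 1, a fixed-step integrator, and “halt if two bodies come within a collision radius” as the halting condition; whether such a program halts for arbitrary inputs is exactly the kind of question that is hard to settle in advance.

```python
# An n-body integrator whose halting condition is a collision.
from math import hypot

def nbody_halts(bodies, dt=0.001, radius=0.05, max_steps=200_000):
    """bodies: list of [x, y, vx, vy, mass]. Returns the step index of the
    first collision, or None if none occurs within the step budget."""
    for step in range(max_steps):
        acc = [[0.0, 0.0] for _ in bodies]
        for i, bi in enumerate(bodies):
            for j, bj in enumerate(bodies):
                if i == j:
                    continue
                dx, dy = bj[0] - bi[0], bj[1] - bi[1]
                r = hypot(dx, dy)
                if r < radius:
                    return step                  # halting condition met
                acc[i][0] += bj[4] * dx / r**3   # gravitational pull, G = 1
                acc[i][1] += bj[4] * dy / r**3
        for b, (ax, ay) in zip(bodies, acc):
            b[2] += dt * ax; b[3] += dt * ay     # update velocities first,
            b[0] += dt * b[2]; b[1] += dt * b[3] # then positions (semi-implicit Euler)
    return None

three = [[ 0.0, 0.0, 0.0,  0.2, 1.0],
         [ 1.0, 0.0, 0.0, -0.6, 1.0],
         [-1.0, 0.0, 0.0,  0.4, 1.0]]
print(nbody_halts(three))  # an integer if a collision happens, otherwise None
```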

8) The relation between a constructed reality and its underlying computational field requires further examination. An important question is whether the functions underlying a constructed reality represent only a subset of the functions spanning the computational field. In the sense that the computational field is a canvas upon which constructed reality is drawn, the computational field would be a space of possibilities which is explored by a particular spacetime graph. The method of exploration could be something like entanglement, where a distant part of the computational field is made of functions constructing a substrate, e.g., the geometric substrate possibly underlying materialistic reality. These functions could be entangled with functions in other parts of the computational field. As one possible spacetime diagram expands in its construction, it is accompanied by the linking/swapping of the entangled state along a path on the computational field, forming a strange knot in the end (if one were to accept that a knot can be a geometric interpretation of entanglement). In the part of the knot corresponding to the origin point of the spacetime diagram, the function for constructing a materialistic substrate is entangled with other functions than those in parts of the knot corresponding to the vast landscape of the spread-out spacetime diagram. In other words, as the freedom of systems increases in the expanding spacetime diagram, it is necessary that the knot grows in size and knottedness. If the spacetime diagram instead were a uniform thing, the knot would collapse to a point and possibly vanish.

One can imagine in a spacetime diagram two shapes that are nearly identical, e.g., one could construct a second earth identical to ours. Let’s assume one of these earths is not a physical copy but runs as a program on an advanced universal quantum computer on this earth. Parts of the knot would become another substrate that is entangled with parts of itself. Rather, it is easier to imagine that the same function exists multiple times in the computational field. So parts of the knot forming the substrate (the universal quantum supercomputer on our earth) would be entangled with the part of the computational field (made of functions underlying the simulated earth). That part of the computational field is computationally equivalent to the part underlying the physical earth. The article will continue to argue that computational equivalence can be visualized as a wormhole connecting two parts of the computational field. Smoothing out the wrinkles, there is not much difference between a wormhole and entanglement. Massiveness by means of similarity between functions is a source of large scale deformation of the computational field. The computational field might intersect itself and create entanglement on a small scale.

9) The strange presumption is that a computational mechanism is being run on a hyper-computational machine, where the computational mechanism may construct systems as complex as the hyper-computational machine. I.e., the structure of abstraction – the computation hierarchy – does not constrain complexity; it is merely virtual. Consequently, the computational field would have to be universal, meaning that it is the same computational field that is underlying everything. However, it is inconsistent to claim that multiple systems are computationally equivalent and can be reduced to the same computational field while also claiming that the computational field is the unique and singular constructor of all systems. I.e., how could two systems be computationally equivalent while being different in other aspects, like the substance they are made from? The key is to consider that the part of the computational field responsible for constructing substances of reality might be separated from the part that performs computationally equivalent operations. E.g., computationally equivalent functions might take other functions constructing substrates as parameters. In other words, functions might exist in entangled form at different places in the computational field.

10) The approximation of true novelty by functions subscribing to a naturalistic computational model according to the framework of materialism is a form of accessing true novelty. Accepting a duality (and multi-ality) of valid computational models and the substrates they correspond to, similarities will exist. The extreme view is that the source of similarity is a hyper-computational one that shows symptoms in different computational models, but an absolute authority of a hyper-computational model should not be taken for granted. Rather, computational models and the substrates they correspond to might form bubbles that pull and push each other, leaking what is novel to the foreign construct. In a wildly connected bubble, complex systems can, e.g., think about a numerical solution in a formal computational model that carries novelty and meaning over from another bubble in which a symmetric version of the complex systems is mirrored and is computing an analytical solution.


