# Numerical work in SageMath 14: 1+1 dimensional Causal Dynamical Triangulation

In this post, a theory of quantum gravity called Causal Dynamical Triangulation (CDT) is explored. The simplest case, an empty 1+1 dimensional universe with one spatial and one temporal dimension, is simulated in SageMath. This post explains CDT in general and presents the results of the 1+1 dimensional simulations.

Understanding gravity at the fundamental level is key to a deeper understanding of the workings of the universe. Unifying Einstein's theory of General Relativity with Quantum Field Theory is an unsolved problem at the heart of understanding how gravity works at the fundamental level. Various attempts have been made at solving it, including String Theory, Loop Quantum Gravity, Horava-Lifshitz gravity, and Causal Dynamical Triangulation.

Causal Dynamical Triangulation is an approach to quantum gravity that recovers classical spacetime at large scales by enforcing causality at small scales. CDT combines quantum physics with general relativity in a Feynman sum over geometries and converts the sum into a discrete statistical physics problem. I solve this problem using a Monte Carlo simulation to compute the spatial fluctuations of an empty universe with one space and one time dimension. The results compare favourably with theory and provide an accessible but detailed introduction to quantum gravity via a simulation that runs on a computer.

In order to use the CDT approach, the Einstein-Hilbert action of General Relativity and the path integral approach to Quantum Field Theory are both important. I’ll begin by introducing both concepts as well as the metric and the Einstein Field equations. In this post I attempt, at least briefly, to explain CDT in general and explain what I have found with my simulation.

## Quantum gravity

Theories of quantum gravity attempt to unify quantum theory with general relativity, the theory of classical gravity as spacetime curvature. Superstring theory tries to unify gravity with the electromagnetic, weak, and strong nuclear interactions, but it requires supersymmetry and higher dimensions, which are as yet unobserved. It proposes that elementary particles are vibrational modes of strings of the Planck length and that classical spacetime is a coherent oscillation of graviton modes.

Loop quantum gravity does not attempt to unify gravity with the other forces, but it directly merges quantum theory and general relativity to conclude that space is granular at the Planck scale. It proposes that space is a superposition of networks of quantised loops and spacetime is a discrete history of such networks.

Causal Dynamical Triangulation is a conservative approach to quantum gravity that constructs spacetime from triangle-like building blocks by gluing their time-like edges in the same direction. The microscopic causality inherent in the resulting spacetime foliation ensures macroscopic space and time as we know them. Despite the discrete foundation, CDT does not necessarily imply that spacetime itself is discrete. It merely grows a combinatorial spacetime from the building blocks according to a propagator fashioned from a sum-over-histories superposition. Dynamically generating a classical universe from quantum fluctuations has been accomplished. CDT recovers classical gravity at large scales, but predicts that the number of dimensions drops continuously from 4 to 2 at small scales. Other approaches to quantum gravity have also predicted similar dimensional reductions from 4 to 2 near the Planck scale.

## Classical gravity

In the theory of relativity, space and time are on an equal footing and mix when boosting between reference frames in relative motion. In the flat Minkowski spacetime of special relativity, the invariant proper space $dr$ and proper time $ds$ between two nearby events follow from the generalised Pythagorean theorem
$$ ds^2 = dt^2 - dx^2 = -dr^2, $$
which involves the difference of the squares of the relative space $dx$ and time $dt$ between the events in some reference frame. This difference prevents causality violation, because light cones defined by $dr = 0$ or $dt = \pm dx$ partition spacetime into invariant sets of past and future at each event. Free particles follow straight trajectories or worldlines $x[t]$ of stationary proper time. In the curved spacetime of general relativity, the separation between two events follows from
$$ ds^2 = g_{\mu\nu}\, dx^\mu dx^\nu, $$
where the metric g  encodes the spacetime geometry. Free particles follow curved world lines of stationary proper time. The vacuum gravitational field equations can be derived from the Einstein-Hilbert scalar action
$$ S_{EH}[g] = \frac{1}{8\pi G_N} \int (K - \Lambda)\, dA, $$
where the Gaussian curvature is half the Ricci scalar curvature, $K = R/2$, and $\Lambda$ is the cosmological constant. By diagonalizing the metric, the invariant area element is
$$ dA = \sqrt{-g}\; dx\, dt, $$
where g = det g.  Demanding that the action be stationary with respect to the metric, dS/dg = 0, implies the gravitational field equations
$$ G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = -\Lambda\, g_{\mu\nu}, $$
where the Ricci tensor curvature is the variation of the Ricci scalar with the metric, $R_{\mu\nu} = \delta R / \delta g^{\mu\nu}$. The Einstein curvature $G_{\mu\nu}$ is proportional to the cosmological constant, which can be interpreted as the vacuum energy density.
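The invariant interval introduced above can be illustrated with a short script: a Lorentz boost mixes $dt$ and $dx$ but preserves $ds^2$, and the sign of $ds^2$ sorts event pairs into timelike, spacelike, and lightlike separations. A minimal sketch, with $c = 1$ and all function names my own:

```python
import math

def interval2(dt, dx):
    """Squared invariant interval ds^2 = dt^2 - dx^2 (units with c = 1)."""
    return dt * dt - dx * dx

def classify(dt, dx):
    """Classify the separation of two events by the sign of ds^2."""
    s2 = interval2(dt, dx)
    if s2 > 0:
        return "timelike"     # inside the light cone: causally connectable
    if s2 < 0:
        return "spacelike"    # outside the light cone: no causal contact
    return "lightlike"        # on the light cone: dt = +/- dx

def boost(dt, dx, v):
    """Lorentz boost with velocity v: mixes dt and dx but preserves ds^2."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (dt - v * dx), g * (dx - v * dt)

dt, dx = 2.0, 1.0                 # a timelike separation
bt, bx = boost(dt, dx, 0.6)       # the same pair of events in a boosted frame
print(classify(dt, dx))                                     # timelike
print(abs(interval2(dt, dx) - interval2(bt, bx)) < 1e-12)   # True: invariant
```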

## Quantum mechanics

In classical mechanics, the action
$$ S[x] = \int dt\, \left( T - V[x] \right) $$
is the cumulative difference between a particle’s kinetic energy T and potential energy V[x]. A particle of mass m follows a worldline x[t] of stationary action. Demanding that the action be stationary with respect to the worldline, dS/dx = 0, implies Newton’s equation
$$ m \ddot{x} = -\frac{dV}{dx}. $$
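The stationarity of the action can be checked numerically for a free particle ($V = 0$): discretise a worldline, evaluate the Riemann-sum action, and compare the straight classical path against wiggled neighbours with the same endpoints. A minimal sketch (function and variable names are mine):

```python
import random

def action(xs, dt, m=1.0):
    """Discretised free-particle action: S = sum over steps of (m/2) v^2 dt."""
    return sum(0.5 * m * ((b - a) / dt) ** 2 * dt for a, b in zip(xs, xs[1:]))

n, dt = 100, 0.01
straight = [i / n for i in range(n + 1)]     # classical worldline, x: 0 -> 1

# Perturb the interior points while pinning the endpoints a and b.
random.seed(0)
wiggled = [x if i in (0, n) else x + 0.01 * random.uniform(-1, 1)
           for i, x in enumerate(straight)]

print(action(straight, dt))                        # ~0.5, the classical value
print(action(straight, dt) < action(wiggled, dt))  # True: straight path wins
```

Because the endpoints are fixed, the first-order variation cancels and any wiggle strictly increases the free-particle action, so the straight worldline is stationary (here minimal).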
In quantum mechanics, a particle follows all worldlines. Along each worldline, it accumulates a complex amplitude whose modulus is unity and whose phase is the classical action $S[x]$ in units of the reduced quantum of action $\hbar$. The Feynman propagator, or amplitude to transition from place $a$ to place $b$, is the sum-over-histories superposition
$$ G[b,a] = \int_a^b \mathcal{D}x\; e^{i S[x]/\hbar}, $$
where Dx denotes the integration measure. The corresponding probability:
$$ P[b,a] = \left| G[b,a] \right|^2 $$
is the absolute square of the amplitude. If the wave function $\psi[b]$ is the amplitude to be at a place $b$, and the kinetic energy
$$ T = \frac{1}{2} m \dot{x}^2, $$
then the path integral between infinitesimally separated places implies the nonrelativistic Schrodinger wave equation
$$ i \hbar\, \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2} + V \psi. $$
In quantum gravity, the corresponding sum is over all spacetime geometries and the quantum phase of each geometry is the Einstein-Hilbert action. The probability amplitude to transition from one spatial geometry to another is
$$ G[g_b, g_a] = \int \mathcal{D}g\; e^{i S_{EH}[g]/\hbar}. $$
If
$$ \psi[g] $$
is the probability amplitude of a particular spatial geometry, then this path integral implies the timeless Wheeler-DeWitt equation:
$$ \hat{H}\, \psi = 0. $$
## Gauss–Bonnet theorem

In 1 + 1 = 2 dimensions, the Gauss–Bonnet theorem dramatically simplifies the Einstein-Hilbert action by relating the total curvature of an orientable closed surface to a topological invariant. The curvature of a polyhedron is concentrated at its corners, where the deficit angle
$$ \delta_v = 2\pi - \sum_{\text{corners at } v} \theta, $$
and the total curvature
$$ \int K\, dA = \sum_v K_v\, a_v = \sum_v \delta_v, $$
where $K_v = \delta_v / a_v$ is the discrete Gaussian curvature at vertex $v$, and $a_v$ is the area closer to that vertex than to any other. The curvature of a circle of radius $r$ is the reciprocal of its radius, $1/r$. The Gaussian curvature at a point on a surface is the product of the corresponding minimum and maximum sectional curvatures. Hence, the total curvature of a sphere
$$ \int_{S^2} K\, dA = \frac{1}{r^2}\, 4\pi r^2 = 4\pi, $$
like the topologically equivalent cube. More generally,
$$ \int K\, dA = 2\pi \chi = 4\pi (1 - G), $$
where $\chi$ is the surface's Euler characteristic and the genus $G$ is its number of holes. For a sphere $G = 0$, and for a torus $G = 1$. Total curvature can change only discretely, and only by changing the number of holes.
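The cube example can be checked directly: three right angles meet at each of its 8 corners, so each deficit angle is $\pi/2$ and the total curvature is $4\pi$, the same as the round sphere. A minimal sketch:

```python
import math

# The curvature of a polyhedron is concentrated at its vertices. For a cube,
# three right angles meet at each of its 8 corners.
corner_angles = [[math.pi / 2] * 3 for _ in range(8)]

# Deficit angle at each vertex: 2*pi minus the angles meeting there.
deficits = [2 * math.pi - sum(angles) for angles in corner_angles]

# Discrete total curvature, the polyhedral version of the integral of K dA.
total_curvature = sum(deficits)

# Gauss-Bonnet: total curvature = 2*pi*chi = 4*pi*(1 - G).
chi = total_curvature / (2 * math.pi)

print(abs(total_curvature - 4 * math.pi) < 1e-12)   # True: matches the sphere
print(round(chi))                                   # 2, so genus G = 0
```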

## Wick rotation

The sum over histories path integral is difficult to evaluate because of the oscillatory nature of its integrand. A Wick rotation converts this difficult problem in Minkowski spacetime to a simpler one in Euclidean space by introducing an imaginary time coordinate. For example,
$$ t = -i \tau. $$
One motivation for the Wick rotation comes from complex analysis, where the integral of an analytic function f[z] around a closed curve in the complex plane always vanishes. If the function also decreases sufficiently rapidly at infinity, then around the contour C,
$$ \oint_C f[z]\, dz = 0, $$
and so
$$ \int_{-\infty}^{+\infty} f[t]\, dt = \int_{-i\infty}^{+i\infty} f[t]\, dt, $$
which effectively rotates the integration 90° in the complex plane. Multiplying a complex number by $i$ similarly rotates the number 90° in the complex plane. The Wick rotation maps the line element by
$$ ds^2 = dt^2 - dx^2 \;\to\; -\left( d\tau^2 + dx^2 \right) = -ds_E^2, $$
and the area element by
$$ dA = dx\, dt \;\to\; -i\, dx\, d\tau = -i\, dA_E, $$
and the Einstein-Hilbert action by
$$ i S \;\to\; -S_E, $$
and the probability amplitude by
$$ e^{i S/\hbar} \;\to\; e^{-S_E/\hbar}. $$
This Wick rotation converts the Feynman phase factor to the Boltzmann weight, thereby connecting quantum and statistical mechanics. Toroidal boundary conditions and a positive cosmological constant $\Lambda > 0$ ensure the positivity of the Euclidean action, $S_E > 0$, and hence the convergence of the resulting partition sum.
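The contour-rotation argument can be checked numerically on a Gaussian: because the integrand is analytic and decays at infinity, tilting the integration ray (here by an angle below 45°, so the decay survives) leaves the integral unchanged. A minimal sketch (function names and parameters are mine):

```python
import cmath, math

def contour_integral(theta, L=10.0, n=100000):
    """Integrate f[z] = exp(-z^2/2) along the tilted ray z = exp(i*theta)*s,
    s in [-L, L], by the midpoint rule. For |theta| < pi/4 the integrand
    still decays at infinity, so the closed-contour argument applies."""
    rot = cmath.exp(1j * theta)
    ds = 2 * L / n
    total = 0 + 0j
    for k in range(n):
        s = -L + (k + 0.5) * ds
        z = rot * s
        total += cmath.exp(-z * z / 2) * rot * ds
    return total

flat = contour_integral(0.0)               # the ordinary real-line Gaussian
tilted = contour_integral(math.pi / 8)     # same integral, contour rotated
print(abs(flat - math.sqrt(2 * math.pi)) < 1e-6)   # True
print(abs(tilted - flat) < 1e-6)                   # True
```

Along the tilted ray the integrand both oscillates and decays; rotating all the way to the imaginary axis is what turns the oscillatory Feynman factor into the damped Boltzmann weight.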

## Regge calculus

Triangulation can simplify the continuum to the discrete. An equilateral Euclidean triangle of edge length $\ell$ has height
$$ h = \frac{\sqrt{3}}{2} \ell $$
and area
$$ A = \frac{\sqrt{3}}{4} \ell^2. $$
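These two facts can be verified with elementary Pythagoras, splitting the triangle down its axis of symmetry:

```python
import math

l = 2.0                                   # edge length of the triangle

# Pythagoras on half the triangle: h^2 + (l/2)^2 = l^2.
height = math.sqrt(l ** 2 - (l / 2) ** 2)
area = 0.5 * l * height                   # half base times height

print(abs(height - math.sqrt(3) / 2 * l) < 1e-12)    # True: h = sqrt(3)/2 l
print(abs(area - math.sqrt(3) / 4 * l ** 2) < 1e-12) # True: A = sqrt(3)/4 l^2
```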

Regge calculus approximates curved spacetime by flat Minkowski triangles (or simplexes in higher dimensions) whose edge lengths $\ell$ may ultimately be shrunk to zero to recover a continuum theory. Curvature is concentrated in the deficit angles at the vertices. Dynamical Triangulation uses only equilateral triangles, but incorporates both positive and negative curvature by suitably gluing the triangles together. Causal Dynamical Triangulation uses only dynamical triangulations with a sliced or foliated structure, with the time-like edges glued in the same direction.

Local light cones match to enforce global causality and preclude closed time-like curves. Regge calculus and the Gauss–Bonnet theorem dramatically simplify the Wick-rotated Euclidean action from
$$ S_E = \frac{1}{8\pi G_N} \int (\Lambda - K)\, dA_E $$
to
$$ S_E = \frac{1}{8\pi G_N} \left( \Lambda\, N\, \frac{\sqrt{3}}{4} \ell^2 - 2\pi \chi \right), $$
where $N$ is the number of triangles. Assuming periodic boundary conditions in space and time, the model universe is toroidal with genus $G = 1$, so the Euler characteristic $\chi = 2(1 - G) = 0$ and the action reduces to
$$ S_E = \lambda N, $$
where the rescaled cosmological constant
$$ \lambda = \frac{\sqrt{3}\, \Lambda\, \ell^2}{32 \pi G_N}. $$
The path integral in the amplitude becomes the sum
$$ Z = \sum_T e^{-S_E[T]} = \sum_T e^{-\lambda N[T]}, $$
where the sum is over distinct triangulations $T$ and the number of triangles is twice the number of vertices, $N = 2 N_v$, so a triangulation with $N_v$ vertices carries the Boltzmann weight $e^{-2\lambda N_v}$.

## The Simulation

### Monte Carlo analysis

To analyse the statistical physics system defined by the effective partition function, take a random walk through triangulation space, sampling different geometries. The 1-2 move and its inverse 2-1 move are ergodic triangulation rearrangements in 1+1 dimensions (the corresponding Pachner moves are 2-2 and 1-3 in 2d). They are special cases of the Alexander moves of combinatorial topology. The move $M$ splits a vertex into two on the same time slice and adds two triangles, while its inverse $M^{-1}$ fuses two vertices on the same time slice and deletes two triangles. Both preserve the foliation of the spacetime. Monte Carlo sampling of the triangulations leads to a detailed balance in which moves equilibrate inverse moves. At equilibrium, the probability flow from a labelled triangulation with $N_v$ vertices to one with $N_v + 1$ vertices balances the reverse flow,
$$ P(N_v)\; g[M]\; P[M] = P(N_v + 1)\; g[M^{-1}]\; P[M^{-1}], $$

where $g$ denotes the probability of proposing a rearrangement and $P[M]$, $P[M^{-1}]$ the probabilities of accepting it.
After choosing to make a move, randomly select a vertex from Nv vertices, and then randomly split one of np  past time-like edges and one of nf future time-like edges. Hence the move transition probability
$$ g[M] = \frac{1}{N_v\, n_p\, n_f}. $$
After choosing to make an inverse move, fusing the split vertices is the only way to return to the original triangulation. Hence the inverse move transition probability
$$ g[M^{-1}] = \frac{1}{N_v + 1}. $$
By the effective partition function, the probability of a labelled triangulation with Nv vertices is the weighted Boltzmann factor
$$ P(N_v) \propto e^{-2 \lambda N_v}. $$
The probability of a move is a non-unique choice. In my simulations, I choose
$$ P[M] = \frac{N_v\, n_p\, n_f}{N_v + 1}\, e^{-2\lambda}. $$
To satisfy the detailed balance, the probability of an inverse move is therefore
$$ P[M^{-1}] = 1. $$
Other choices may improve or worsen computational efficiency. In practice, a Metropolis algorithm accepts a rearrangement if its probability is greater than a uniformly distributed pseudo-random number between 0 and 1. For example, if $P[M] = 0.01$, then the move is unlikely to be accepted.
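The accept/reject step can be sketched as follows. The acceptance probability used here, $P[M] = \frac{N_v n_p n_f}{N_v + 1} e^{-2\lambda}$ with the inverse move always accepted, is one detailed-balance-satisfying choice and my reconstruction, not necessarily the post's exact one; all names are mine:

```python
import math, random

def accept_move(Nv, np_, nf, lam, rng=random.random):
    """Metropolis test for the vertex-splitting move M, which adds one vertex
    and two triangles. Nv is the current vertex count, np_ and nf count the
    past and future time-like edges at the chosen vertex, and lam is the
    rescaled cosmological constant."""
    p = Nv * np_ * nf / (Nv + 1) * math.exp(-2 * lam)
    return rng() < p          # p may exceed 1, in which case M always passes

def accept_antimove(rng=random.random):
    """In this scheme the matching inverse move M^-1 is always accepted."""
    return rng() < 1.0        # random() is uniform on [0, 1), so always True

random.seed(0)
print(accept_move(Nv=50, np_=2, nf=2, lam=0.7))
```

Comparing the probability against a fresh uniform variate is exactly the accept/reject rule described above: a proposal with $P[M] = 0.01$ survives the comparison only about once in a hundred attempts.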

The initial spacetime configuration, the universe before any move or antimove, is shown below.

In this simulation, I only looked at the 1+1 dimensional universe, which is decomposed into two-dimensional simplices, each with two time-like edges and one space-like edge. The simulation's data structure stores this information, which turns out to be very useful for implementing the simulation. The up-pointing triangles have their spatial edges on a past time slice and the down-pointing triangles have their spatial edges on a future time slice. The time-like edges always point toward the future. Periodic boundary conditions were chosen: the final time slice and the first time slice are connected, so that each vertex of the beginning time slice is identified with the corresponding vertex on the final time slice. The toroidal topology $S^1 \times S^1$ achieves these periodic boundary conditions.

In my simulation, I used a data structure that stores information about each triangle in the triangulation as an array of integers: the type of the triangle (up pointing or down pointing, i.e. type I or type II), the time slice in which it is located, its vertices, and its neighbours. Using this labelling scheme, each triangle is given a key from 0 to n-1. In general, the array for any triangle takes the form Tn = [type, time, p1, p2, p3, n1, n2, n3], where p1, p2, p3 are the vertices of the triangle and n1, n2, n3 are the neighbour of vertex 1, the neighbour of vertex 2, and the neighbour of vertex 3, respectively. This entire structure is composed strictly of integers, in line with the idea of reducing the problem to a counting problem. The data structure is then used to perform the combinatorial moves, which split a randomly picked vertex and add two triangles in the gap left behind, and the antimoves, which fuse vertices and delete the two associated triangles. This amounts to a random walk through spacetime, and the arrays are updated after every move and antimove. The universe is then made to evolve by repeatedly performing moves and antimoves, and the size of the universe is measured after every 100 such combinatorial moves.

Initial runs of the simulation give results such as those below:

In a further post, I will explore the critical value of the reduced cosmological constant.