On September 5, 2018, Prof. Donald Chang <bochang@ust.hk> wrote:
Hi, and thanks for your email. Regarding my paper that you referred to, “What is the physical meaning of mass in view of the wave-particle duality?”, as far as I know, I am the first one to propose this idea.
I started my work on the physical meaning of mass in the early 80s [about 35 years ago], when I was working in the Physics Department of Rice University (Houston, Texas). I gave a talk at a meeting of the American Physical Society and published an abstract there. But at that time I was working mainly in the field of biophysics. I decided to return to my work on fundamental problems in physics about 15 years ago. My current research interests are mainly in quantum mechanics and relativity.
My view expressed in the paper reflects my current thoughts. But of course, I cannot say it is the “last word”. Science is a continuing challenge, so, I cannot rule out any new ideas in the future.
It is not easy to publish papers on fundamental physics in mainstream journals.
The paper you referred to was only deposited in arXiv. It has not been formally published yet. But some parts of it were discussed in my recent publication.
Because the paper is only a preprint, the number of citations is still small. Last week, I had another paper published in Optik (see attached). It is about a very important subject. Some of your colleagues may be interested.
Best wishes,
Donald ( 張東才教授 )
Hong Kong University of Science and Technology
Dear Professor Donald C. Chang, Ph.D.,
Thank you so very much for your very kind and friendly reply. Much appreciated.
Please allow me to demonstrate my basic comprehension of your groundbreaking scientific research on the physical meaning of mass in view of the wave-particle duality. Quoting from Section 10, Interpretation of Newton’s gravitational law from the perspective of the wave view of mass (page 24):
Mass can have two meanings in classical mechanics. One is inertial mass and the other is gravitational mass. In the above discussion, we showed that the concept of inertial mass could be an illusion. This so-called “inertial mass” is actually a measure of the particle’s energy, as per mass-energy equivalence in the special theory of relativity.
An elementary particle has mass. As per Einstein’s classical mass-energy equivalence in STR, this mass should be equivalent to energy, and as per de Broglie’s non-classical (modern) wave-particle duality, this energy is a wave. And such a wave should be (dualistically) complementary with the particle’s very physical existence, and with the physical existence of its particulate inertial mass. Thus the complete logical circle has been perfectly closed onto itself, like the Yin-Yang. Contraria Sunt Complementa is the motto on Niels Bohr’s coat of arms. It means opposites are complementary:
ENERGY: YANG, active, wave, mind, Sun, light, fire (“positive”);
MASS: YIN, inertial, particle, body, Moon, dark, water (“negative”).
Niels Bohr (1885–1962) received the Nobel Prize in Physics in 1922 for foundational contributions to understanding atomic structure and quantum theory. In 1947, Niels Bohr was awarded the Order of the Elephant, a prestigious Danish distinction normally reserved for royalty or distinguished generals. The only other Danish scientist who received that honor was Tycho Brahe in 1578.
Field phenomena, like waves of energy, seem to be more fundamental in physical reality than the mechanistic concept of point-like particles. The mechanistic explanation of field phenomena is clearly impossible, but not the other way around.

 an atom’s inertial mass must be some energy;
 there is only so much energy inside an atom;
 THEREFORE: all that energy must be the atom’s “inertial mass”,
 BECAUSE: there simply isn’t any such energy inside an atom that could not be the equivalent of its mass.
This commonly accepted notion of the atom’s “inertial mass” as something separate from, or different from, all of the atom’s other energies must be an illusion in the physical reality.
The concept of mass, with the concept of gravitational mass identified with the concept of inertial mass, is quantified and defined by gravitational phenomenology. Therefore, on purely logical grounds, the concept of mass so defined cannot then be used in the theories of physics as an explanation of the very phenomenology used to define and quantify it. Because otherwise it would become a classical instance of meaningless, circular reasoning.
In theoretical physics, a mass generation mechanism is a theory which attempts to explain the origin of mass from the most fundamental laws of physics. To date, a number of different models have been proposed which advocate different views of the origin of mass. The problem is complicated by the fact that the notion of mass is strongly related to the gravitational interaction, but a theory of the latter has not yet been reconciled with the current model of particle physics (the Standard Model).
In other words, there is no such “inertial mass” that could be experimentally detected as something separate from gravitational interaction, even in principle, and therefore
INERTIAL MASS IS MERELY AN UNFALSIFIABLE METAPHYSICAL ASSUMPTION
that could never be experimentally verified, even in principle.
On the other hand, gravitational interaction is an obvious, testable empirical fact. This gravitational interaction can only result from the atom’s energy, and there is no other energy to be found in the simplest hydrogen atom than the known & obvious one.
For example, in the case of magnetic attraction, by analogy, we could say that it must be due to some magnetic mass, which is something separate from the electric and magnetic energy of the atoms inside a magnet. Then we would experimentally search for it and not be able to find it, because magnetic interaction simply isn’t due to the existence of any magnetic mass that could be experimentally detected.
Let us take the hydrogen atom under consideration. It has its electric field, its magnetic field, and the associated electromagnetic angular momentum. There is only so much energy inside the hydrogen atom. There isn’t even the strong nuclear force there, therefore
to suppose that, in addition to all the above-mentioned energy, there could also be some “inertial mass” inside the hydrogen atom that could be separate from it all, and in addition to it, is to invite ridicule.
There simply isn’t anything more left inside the hydrogen atom, in addition to all its above-mentioned energy, that could magically produce, or appear as, any additional “inertial mass”, a mass that would have to be energy anyway.
1. INTRODUCTION
The principle of universality of free fall, or Weak Equivalence Principle (WEP), states that all bodies fall with the same acceleration, independent of mass and composition. The WEP has been tested with very high precision for matter, but never directly for antimatter. The principal goal of the ALPHA-g, AEgIS, and GBAR experiments is to test the Weak Equivalence Principle with antihydrogen atoms at the European laboratory for particle physics (CERN), using the antiproton decelerator (AD) to provide antiprotons and a source to provide antielectrons (positrons), which are combined to form antihydrogen atoms. Tests with charged antiparticles are hopeless, given the extreme weakness of gravity in comparison with the other forces, while tests with electrically neutral antihydrogen atoms are merely extremely difficult.
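In Newtonian terms, the mass-independence that the WEP asserts follows from the cancellation of the gravitational mass in F = GMm/r² against the inertial mass in F = ma. A minimal numerical sketch (standard approximate values for G and for Earth’s mass and radius; the sample masses are arbitrary illustrations):

```python
# Weak Equivalence Principle (WEP) sketch: in Newtonian gravity the
# free-fall acceleration near Earth is a = G*M/r^2, because the
# gravitational mass in F = G*M*m/r^2 cancels the inertial mass in F = m*a.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def free_fall_acceleration(test_mass_kg: float) -> float:
    """Surface free-fall acceleration of a body of the given mass."""
    force = G * M_EARTH * test_mass_kg / R_EARTH**2   # Newton's gravity
    return force / test_mass_kg                        # Newton's second law

# A feather, a brick, and a single (anti)hydrogen atom all come out
# at the same ~9.8 m/s^2, independent of mass and composition:
for mass in (1e-3, 1.0, 1.67e-27):
    print(f"{mass:g} kg -> {free_fall_acceleration(mass):.4f} m/s^2")
```

Whether antihydrogen obeys the same cancellation is precisely what the CERN experiments are designed to measure.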
A few teams at CERN are now jumping into the next big challenge: the race to measure antimatter’s acceleration under gravity. Physicists generally expect antimatter to fall just like matter. But some fringe theories predict that it has ‘negative mass’ — it would be repelled by, rather than attracted to, matter. Antimatter with this property might account for the effects of dark energy and dark matter, the identities of which are still unknown. If the experiments were to detect any gravitational difference between matter and antimatter, it would be a radical discovery. It would mean the violation of a principle called charge, parity and time reversal (CPT) symmetry. CPT symmetry is the backbone of theories such as relativity and quantum field theory. Breaking it would, in a way, break physics. Antimatter has so far produced no antigravity signatures, and many physicists think it will remain that way, because otherwise this would shake the foundations of modern physics.
Atoms of ordinary matter fall down due to the pull of gravity, but the same might not be true of antimatter. Scientists wonder whether antimatter atoms would instead fall up when subjected to Earth’s gravity, and whether antigravity exists. “In the unlikely event that antimatter falls upward, we’d have to fundamentally revise our view of physics and rethink how the universe works,” said Joel Fajans, a physicist at the Lawrence Berkeley National Laboratory in California.
2. THE 3 PIECES OF THE PUZZLE
There are 3 pieces to the puzzle of quantum gravity and quantum antigravity:
(Figure: the first image made of an atom spinning.)
The 3 phenomena needed for an atom to produce quantum gravity are:
 the atom needs to be spinning (angular momentum);
 the atom needs to produce a magnetic field (magnetic dipole moment);
 the atom needs to be an electric capacitor (electric dipole moment).
The above 3 phenomena also need to be combined and oriented like in an atom, i.e., ideally, the spin axis needs to be aligned with the magnetic axis.
The first two of the above 3 phenomena are naturally obvious. The third one is also natural, but less obvious. At the bottom of this page there is Appendix 2, Atoms as electric capacitors.
Although we know that atoms are composed of electrical charges, essentially forming tiny polarized electrical structures, we do not tend to think about massive material bodies, like planets, as composed of electrical structures, or of electrical capacitors.
The main implication of the above conjecture is that gravity does not result from mass. Well, it all depends on what mass really is. But if this implication sounds ridiculous, then consider the following:
The concept of mass, with the concept of gravitational mass identified with the concept of inertial mass, is quantified and defined by gravitational phenomenology. Therefore, on purely logical grounds, the concept of mass so defined cannot then be used in the theories of physics as an explanation of the very phenomenology used to define and quantify it. — W.F. Heinrich, QuantumGravity.ca
I could not agree more. But if you are not sure what the above statement means, let me give you its equivalent:
Why are material objects heavy (why do they gravitate)? Because they have inertial mass. But how do we know that they have inertial mass? Because they gravitate (are heavy). What else could be fundamentally responsible for objects being heavy while subjected to Earth’s gravity, if not their intrinsic inertial mass? So, what exactly is inertial mass? Well, it is whatever makes objects heavy while they are subjected to Earth’s gravity. It must be something inside the nucleus of atoms.
In theoretical physics, a mass generation mechanism is a theory which attempts to explain the origin of mass from the most fundamental laws of physics. To date, a number of different models have been proposed which advocate different views of the origin of mass. The problem is complicated by the fact that the notion of mass is strongly related to the gravitational interaction, but a theory of the latter has not yet been reconciled with the currently popular model of particle physics, known as the Standard Model.
In other words, there is no such inertial mass that could be experimentally detected as something separate from gravitational mass. Then maybe producing gravitational interaction does not require inertial mass? Maybe gravitational interaction is generated by means other than the existence of inertial mass?
For example, in the case of magnetic attraction, by analogy, we could say that it must be due to some magnetic mass, which is something separate from the electric and magnetic energy of the atoms inside a magnet. Then we would experimentally search for it and not be able to find it, because magnetic interaction simply isn’t due to the existence of any magnetic mass that could be experimentally detected.
The notion of inertial mass is historically pre-electromagnetic. At the time, it was a perfectly reasonable and logical idea. We say that heavy elements in the periodic table are heavy because they have more protons and neutrons, which are heavy. However, we can also say that heavy elements in the periodic table are heavy because they simply have more electric and magnetic energy concentrated (energy density) in the nucleus.
The mass of fundamental particles – those that carry forces and build nuclei and atoms – is often explained by the way they move through the Higgs field that is thought to pervade all the space of the Universe.
Amazing. It seems that we have found the Aether at last, because now all the space of the Universe is finally filled with homogeneously distributed Higgs particles!
Unfortunately, as we already know, the Higgs mechanism is not able to explain the mass of neutrinos. And if the Higgs mechanism were to explain the mass of all the other elementary particles, then it is not clear by virtue of what such mass could result in attractive gravity.
It followed from the special theory of relativity that mass and energy are both but different manifestations of the same thing, a somewhat unfamiliar conception for the average mind. — Albert Einstein
Because mass is equivalent to energy, the mass of an atom could be the energies associated with its electric fields, magnetic fields, and the angular momenta of all its constituent elementary particles as they combine into an atomic structure. For the discussion of the strong nuclear force, please see:
Even masses at rest have an energy inherent to them. You’ve learned about all types of energies, including mechanical energy, chemical energy, electrical energy, as well as kinetic energy. These are all energies inherent to moving or reacting objects, and these forms of energy can be used to do work, such as run an engine, power a light bulb, or grind grain into flour. But even plain, old, regular mass at rest has energy inherent to it: a tremendous amount of energy. This carries with it a tremendous implication: that gravitation, which works between any two masses in the Universe in Newton’s picture, should also work based off of energy, which is equivalent to mass.
 Dr. Ethan Siegel, Ph.D., Astrophysicist
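The “tremendous amount of energy” in the quote above can be made concrete with E = mc². A minimal sketch (the hydrogen-atom mass of about 1.67e-27 kg is a standard textbook value, not a figure from this paper):

```python
# Rest energy via Einstein's mass-energy equivalence, E = m * c^2.

C = 2.998e8  # speed of light in vacuum, m/s

def rest_energy_joules(mass_kg: float) -> float:
    """Energy equivalent of a mass at rest."""
    return mass_kg * C**2

# One kilogram of plain, old, regular mass at rest:
print(f"1 kg     -> {rest_energy_joules(1.0):.3e} J")   # ~9.0e16 J

# A single hydrogen atom (~1.67e-27 kg):
print(f"1 H atom -> {rest_energy_joules(1.67e-27):.3e} J")
```

One kilogram at rest is equivalent to roughly 9 x 10^16 joules, which is the scale of energy the quote alludes to.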
In this work we demonstrate that there is only one “mass” that is a measure of energy of elementary particles in atoms. Such interpretation is consistent with Einstein’s mass-energy equivalence. We show that, in the classical limit, this energy will automatically appear in the equation of motion as an inertial mass.
3. HOW WOULD IT WORK?
Let’s take a look at an atom and at an antiatom (below). If we take the electric dipole moment (green arrow) as a conceptual indicator, we will get the following picture:
In the case of an atom, the green arrow indicator points inward, indicating the fact that atoms of matter produce attractive gravity; by analogy, in the case of an antiatom, the green arrow indicator points outward, suggesting that antiatoms would produce repulsive gravity.
In this way, neither negative mass nor negative energy would be needed to produce antigravity. The factor that differentiates between gravity and antigravity is merely the direction of the electric dipole moment in an atom: inward for gravity, or outward for antigravity.
If mass is to be a property of elementary particles, and particles can be waves of energy, then the potential existence of negative mass would imply the existence of negative waves. What could a negative wave possibly look like?
But what if we juggle semantics, and instead of a negative wave we say: a wave of negative energy? All the same. What could be the difference between a wave of negative energy and a wave of positive energy? A wave of negative energy would still have to be simply another wave. Waves of negative energy would have to be akin to negative radiation and to negative temperature on the Kelvin scale.
4. INERTIAL MASS AS AN UNFALSIFIABLE METAPHYSICAL ASSUMPTION
The notion of inertial mass is historically pre-electromagnetic.
The energy of an atom is a combination of its electric energy, magnetic energy, and its angular momenta. There is no mass of an atom that is separate from, and in addition to, this energy, because there is no other energy in the atom. Therefore, this combination of the atom’s electric energy, magnetic energy, and its angular momenta is its gravitational mass. Therefore there could be no inertial mass of an atom that is different from its gravitational mass, and therefore there is no need for inertial mass.
The letter “m” in F = ma represents inertial mass. This is Newton’s equation of motion. The problem with it is that, for example, a car standing motionless in a parking lot is still under the influence of gravity. Gravity is always everywhere, because there is matter in the Universe. Therefore we will never be able to detect the pure existence of inertial mass, because matter always produces gravity everywhere in the Universe.
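In conventional terms, what a free-fall measurement actually constrains is the ratio of gravitational to inertial mass: Newton’s second law m_i * a = m_g * g gives a = (m_g / m_i) * g, so neither mass is observable on its own. A minimal sketch of this standard parameterization (the ratio-0.5 case at the end is purely hypothetical, for illustration):

```python
# In free fall, Newton's second law  m_i * a = m_g * g  gives
# a = (m_g / m_i) * g: only the RATIO of gravitational to inertial
# mass is observable; neither mass can be measured on its own.

G_LOCAL = 9.81  # local gravitational field strength, m/s^2

def measured_acceleration(m_gravitational: float, m_inertial: float) -> float:
    """Free-fall acceleration of a body with the given two masses."""
    return (m_gravitational / m_inertial) * G_LOCAL

# Scaling both masses together changes nothing; only the ratio matters:
print(measured_acceleration(1.0, 1.0))   # 9.81
print(measured_acceleration(2.0, 2.0))   # 9.81
print(measured_acceleration(1.0, 2.0))   # 4.905 (hypothetical ratio 0.5)
```

This is why equivalence-principle experiments report bounds on the ratio m_g/m_i, not on either mass separately.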
The existence of an inertial mass that could be somehow independent of, and different from, the atom’s electric, magnetic, and spin energy is merely a metaphysical assumption that could never be experimentally verified, even in principle.
On the other hand, gravitational mass is an obvious empirical fact. This gravitational mass must be equivalent to the atom’s energy, and there is no other energy in the atom than its electric, magnetic, and spin energy. For the discussion of the strong nuclear force, please see:
The conclusion is that what we call gravitational “mass” is not a result of the existence of some metaphysical inertial mass, whose seemingly inevitable existence was hallucinated in desperation along with that of the Aether, but is an exclusive result of the atom’s energy, as per Einstein’s mass-energy equivalence. And this energy is a field, and field lines are naturally oriented and directed. Therefore, based on relative orientations and directions, we will have attractive or repulsive interactions between gravitational fields.
It would seem logical that for antimatter to antigravitate, it would have to have negative mass.
Does electron really have a negative electric charge?
No. Because “minus” and “plus” are merely labels, like North and South of a magnetic dipole. There is really nothing southern about the South magnetic pole, and therefore there is nothing negative about the electron’s electric charge. Historically, we could have equally well assigned the label “minus” to the proton.
So, what is the essential difference between electron’s and proton’s electric charges?
It is merely the direction in which their field lines are oriented.
We could have decided that the proton is an “outward” electric charge, and the electron an “inward” electric charge, with the label “o” for the proton, and the label “i” for the electron. This would actually make much more sense than the plus and minus labels, because what is really so negative (minus) about the electron?
Nothing. Exactly like there is nothing truly southern about the South magnetic pole.
If mass is to be a property of elementary particles, and particles can be waves of energy, then the potential existence of negative mass would imply the existence of negative waves. What could a negative wave possibly look like?
But what if we juggle semantics, and instead of a negative wave we say that negative mass is the equivalent of a wave of negative energy? All the same. What could be the difference between a wave of negative energy and a wave of positive energy? A wave of negative energy would still have to be simply another wave. Waves of negative energy would have to be akin to negative radiation and to negative Kelvin temperature.
We could instead say that the North magnetic pole is a positive (+) pole, because the lines are directed outward from it, like from a proton.
Based on the direction of lines, all we can say is that the electron has an opposite charge to the proton, and the proton has an opposite charge to the electron; and the North magnetic pole is an opposite pole relative to the South magnetic pole, and vice versa. Based on this empirical fact, there is electric or magnetic attraction or repulsion, i.e. the direction of attraction is opposite to the direction of repulsion.
For example, in the case of magnetic attraction, by analogy, we could say that it must be due to some magnetic mass, which is something separate from electric energy. Then we would experimentally search for it and not be able to find it, because magnetic interaction simply isn’t due to the existence of any magnetic mass that could be experimentally detected.
The notion of inertial mass is historically pre-electromagnetic. At the time, it was a perfectly reasonable and logical idea. We say that heavy elements in the periodic table are heavy because they have more protons and neutrons, which are heavy. However, we can also say that heavy elements in the periodic table are heavy because they simply have more electric and magnetic energy concentrated (energy density) in the nucleus.
Because there is no such thing as inertial mass, there is only attractive gravitational “mass” (energy), or the repulsive gravitational one. There is no need for negative mass, or for negative energy, and that is the reason why antimatter is called antimatter, and not negative-matter. If you check the definition of the term “anti”, it is defined as: “opposed to; against”.
What makes gravitational interaction attractive (positive?) or repulsive (negative?) is simply the orientation of the electric dipole moment in an atom.
There is no need for inertial mass, and gravitational mass is simply a combination of the atom’s electric energy, magnetic energy, and its angular momenta. Only in this sense does gravity not result from the atom’s mass: it results from the atom’s energy, and this energy is a field, and field lines are naturally oriented and directed. Therefore, based on relative orientations and directions, we will have attractive or repulsive interactions between gravitational fields.
The non-existence of inertial mass does not mean that there is no inertia, of course. For the same reason that gravity does not originate from inertial mass, inertia does not stem from “inertial mass”, either.
In this work we demonstrate that there is only one “mass” that is a measure of energy of elementary particles in atoms. Such interpretation is consistent with Einstein’s mass-energy equivalence. We show that, in the classical limit, this energy will automatically appear in the equation of motion as an inertial mass. – Arxiv.org/…/0404044.pdf
5. EXPERIMENTAL VERIFICATION
In general, there is a 50% chance that antihydrogen atoms would antigravitate at CERN, and therefore the above conjecture could simply be a coincidence.
Should antihydrogen atoms antigravitate, we will need to find the cause. One such possible cause, negative mass, we examined above, and rejected. What else is there as a probable cause? Unfortunately, not much else is left.
According to our understanding, the above postulated mechanism of producing quantum gravity can be generalized, and in this way it becomes scale-invariant. Atom and antiatom constitute a blueprint for producing gravity, or antigravity.
According to our hypothesis, all we need to produce gravity or antigravity, naturally or artificially, are the following:
 spin (angular momentum),
 magnetic field (magnetic dipole moment),
 asymmetric electric capacitor (electric dipole moment),
properly combined and oriented; ideally, the spin axis needs to be aligned with the magnetic axis. The capacitor needs to be “asymmetric” in the sense that it must allow for an inhomogeneous electric charge density distribution, as per the Biefeld-Brown effect. Such a device could be spherical, like atoms, but even that is not necessary. One such device could be a spinning cylindrical capacitor embedded in a magnetic field:
The above device produces antigravity by virtue of its capacitor’s outer plate being charged positively, as in the case of an antiatom. More details about possible experimentation with this device are presented on the following page:
Because the above gravity-producing mechanism is to be scale-invariant, there are also several other experimental implications of our hypothesis that could be easily tested, a few of them being proposed on the following page:
And it is not a coincidence that, like an atom, the Earth is also a spherical capacitor spinning in its magnetic field, its spin axis partially aligned with its magnetic axis. In the middle of the following page there is a fragment addressing the surprising implications of this important fact:
Will antihydrogen atoms antigravitate at CERN?
Considering all of the above, it could not be more obvious that they definitely will.
Presently, the only experiment that might be in a position to give information on the sign of gravitational interaction between matter and antimatter is the ALPHA experiment, which might carry out such a measurement by the end of 2018.
More likely is the period between 2021 and 2024, when the three experiments ALPHA-g, AEgIS, and GBAR should be in a position to carry out the measurement, because CERN’s accelerators are to be shut down in 2019 and 2020 for technical improvements.
APPENDIX 1. — MAXWELL’S INFLUENCE ON THE EVOLUTION OF THE IDEA OF PHYSICAL REALITY
Published on the 100th anniversary of Maxwell’s birth in James Clerk Maxwell: A Commemoration Volume, Cambridge University Press 1931
The belief in an external world independent of the perceiving subject is the basis of all natural science. Since, however, sense perception only gives information of this external world or of “physical reality” indirectly, we can only grasp the latter by speculative means. It follows from this that our notions of physical reality can never be final. We must always be ready to change these notions — that is to say, the axiomatic basis of physics — in order to do justice to perceived facts in the most perfect way logically. Actually a glance at the development of physics shows that it has undergone far-reaching changes in the course of time.
The greatest change in the axiomatic basis of physics — in other words, of our conception of the structure of reality — since Newton laid the foundation of theoretical physics was brought about by Faraday’s and Maxwell’s work on electromagnetic phenomena. We will try in what follows to make this clearer, keeping both earlier and later developments in sight. According to Newton’s system, physical reality is characterized by the concepts of space, time, material point, and force (reciprocal action of material points). Physical events, in Newton’s view, are to be regarded as the motions, governed by fixed laws, of material points in space. The material point is our only mode of representing reality when dealing with changes taking place in it, the solitary representative of the real, in so far as the real is capable of change. Perceptible bodies are obviously responsible for the concept of the material point; people conceived it as an analogue of mobile bodies, stripping these of the characteristics of extension, form, orientation in space, and all “inward” qualities, leaving only inertia and translation and adding the concept of force. The material bodies, which had led psychologically to our formation of the concept of the “material point,” had now themselves to be regarded as systems of material points. It should be noted that this theoretical scheme is in essence an atomistic and mechanistic one. All happenings were to be interpreted purely mechanically — that is to say, simply as motions of material points according to Newton’s law of motion.
The most unsatisfactory side of this system (apart from the difficulties involved in the concept of “absolute space” which have been raised once more quite recently) lay in its description of light, which Newton also conceived, in accordance with his system, as composed of material points. Even at that time the question, What in that case becomes of the material points of which light is composed, when the light is absorbed?, was already a burning one. Moreover, it is unsatisfactory in any case to introduce into the discussion material points of quite a different sort, which had to be postulated for the purpose of representing ponderable matter and light respectively. Later on, electrical corpuscles were added to these, making a third kind, again with completely different characteristics. It was, further, a fundamental weakness that the forces of reciprocal action, by which events are determined, had to be assumed hypothetically in a perfectly arbitrary way. Yet this conception of the real accomplished much: how came it that people felt themselves impelled to forsake it?
In order to put his system into mathematical form at all, Newton had to devise the concept of differential quotients and propound the laws of motion in the form of total differential equations — perhaps the greatest advance in thought that a single individual was ever privileged to make. Partial differential equations were not necessary for this purpose, nor did Newton make any systematic use of them; but they were necessary for the formulation of the mechanics of deformable bodies; this is connected with the fact that in these problems the question of how bodies are supposed to be constructed out of material points was of no importance to begin with.
Thus the partial differential equation entered theoretical physics as a handmaid, but has gradually become mistress. This began in the nineteenth century when the wave theory of light established itself under the pressure of observed fact. Light in empty space was explained as a matter of vibrations of the Aether, and it seemed idle at that stage, of course, to look upon the latter as a conglomeration of material points. Here for the first time the partial differential equation appeared as the natural expression of the primary realities of physics. In a particular department of theoretical physics the continuous field thus appeared side by side with the material point as the representative of physical reality. This dualism remains even today, disturbing as it must be to every orderly mind.
If the idea of physical reality had ceased to be purely atomic, it still remained for the time being purely mechanistic; people still tried to explain all events as the motion of inert masses; indeed no other way of looking at things seemed conceivable. Then came the great change, which will be associated for all time with the names of Faraday, Maxwell, and Hertz. The lion’s share in this revolution fell to Maxwell. He showed that the whole of what was then known about light and electromagnetic phenomena was expressed in his well known double system of differential equations, in which the electric and the magnetic fields appear as the dependent variables. Maxwell did, indeed, try to explain, or justify, these equations by the intellectual construction of a mechanical model.
But he made use of several such constructions at the same time and took none of them really seriously, so that the equations alone appeared as the essential thing and the field strengths as the ultimate entities, not to be reduced to anything else. By the turn of the century the conception of the electromagnetic field as an ultimate entity had been generally accepted and serious thinkers had abandoned the belief in the justification, or the possibility, of a mechanical explanation of Maxwell’s equations. Before long they were, on the contrary, actually trying to explain material points and their inertia on field theory lines with the help of Maxwell’s theory, an attempt which did not, however, meet with complete success.
Neglecting the important individual results which Maxwell’s life work produced in important departments of physics, and concentrating on the changes wrought by him in our conception of the nature of physical reality, we may say this: before Maxwell people conceived of physical reality — in so far as it is supposed to represent events in nature — as material points, whose changes consist exclusively of motions, which are subject to total differential equations. After Maxwell they conceived physical reality as represented by continuous fields, not mechanically explicable, which are subject to partial differential equations. This change in the conception of reality is the most profound and fruitful one that has come to physics since Newton; but it has at the same time to be admitted that the program has by no means been completely carried out yet. The successful systems of physics which have been evolved since rather represent compromises between these two schemes, which for that very reason bear a provisional, logically incomplete character, although they may have achieved great advances in certain particulars.
The first of these that calls for mention is Lorentz’s theory of electrons, in which the field and the electrical corpuscles appear side by side as elements of equal value for the comprehension of reality. Next come the special and general theories of relativity which, though based entirely on ideas connected with the field theory, have so far been unable to avoid the independent introduction of material points and total differential equations.
The last and most successful creation of theoretical physics, namely quantum mechanics, differs fundamentally from both the schemes which we will for the sake of brevity call the Newtonian and the Maxwellian. For the quantities which figure in its laws make no claim to describe physical reality itself, but only the probabilities of the occurrence of a physical reality that we have in view. Dirac, to whom, in my opinion, we owe the most perfect exposition, logically, of this theory, rightly points out that it would probably be difficult, for example, to give a theoretical description of a photon such as would give enough information to enable one to decide whether it will pass a polarizer placed (obliquely) in its way or not.
I am still inclined to the view that physicists will not in the long run content themselves with that sort of indirect description of the real, even if the theory can eventually be adapted to the postulate of general relativity in a satisfactory manner. We shall then, I feel sure, have to return to the attempt to carry out the program which may be described properly as the Maxwellian — namely: the description of physical reality in terms of fields, which satisfy partial differential equations without singularities.
Albert Einstein, 1931
APPENDIX 2. — Atoms as electric capacitors
Two oppositely charged ellipsoidal ends of polarized, bipolar atoms work much like the plates of a parallel-plate capacitor. If the voltage (energy per charge) is held constant while the area of the plates, or of the charged ellipsoid ends, increases, then the stored charge goes up. As an atom becomes more polarized, its charges become more separated and the opposite charges at each end of the ellipsoid increase, so the charged area grows. At the same time, the distance between the ends of neighboring atoms decreases as the atoms become more ellipsoidal. An increase in the area of the charge and a decrease in the separation of the charges on neighboring atoms both increase the capacitance. The energy supplied is converted to additional charge stored between the plates or ellipsoid ends. Where we see forces or accelerations, we see charge; charge begins to look like a placeholder for the force on the plates. Energy is conserved.
Charged concentric spherical shell atoms
Binary orbits in a plane are stable in isolation. Stable atoms require spherical symmetry, which they acquire by the precession of their orbits out of the plane. The electrons and protons are not confined to the two dimensions of a plane; they orbit in three dimensions. Atoms are spherical, spinning, precessing dipoles; this is how medical scanners and microwave ovens work. In a proton-electron binary atom, the proton is close to the center of mass, where it orbits and precesses. This is seen as a charge spread over the spherical surface traced out by the orbiting, precessing proton. The electron, across the center of mass from the proton and farther out but orbiting with the same angular velocity as the proton, is likewise seen as a charge spread over the spherical surface traced out by the orbiting, precessing electron. Is this an electron cloud? How can a cloudlike electron become a negative-ion particle? What you see depends on the metaphors you use. The surface of the atom, the electron orbit, is negative for circular orbits. The negative spherical surfaces of atoms repel each other and do not clump together without being polarized. The positive surface of a positive ion, however, would be attracted and pulled into the negative surface of an atom. They would merge until the ion and atom reached equilibrium, when the ion reached the region of charge neutrality within the atom, halfway between the electron spherical surface and the much smaller proton spherical surface. Beyond the region of charge neutrality, the positive ion would be repelled by the proton within the atom. This leaves us with the familiar image of bonding as overlapping spherical atoms.
An atom as a concentric spherical capacitor
This is another approach to charge separation. Aren’t the electron and proton in the Bohr atom somewhat like the oppositely charged parallel plates of a capacitor? Or the oppositely charged concentric spherical plates of a capacitor? The proton orbit is like a sphere of charge surrounded by the much larger sphere of charge of the electron orbit. While the atoms are neutral, the surfaces are spherical and concentric. As the charges separate, the spheres become ellipsoids. The ellipsoid ends become oppositely charged, their area increases, and their capacitance increases.
The electron orbit is considered as a negatively charged sphere with a radius of r_{e} = 5.2889E-11 m.
The proton orbit is considered as a positively charged sphere with a radius of r_{p} = r_{e}*m_{e}/m_{p} = 2.8804E-14 m.
The spheres are r_{e} - r_{p} = 5.28602E-11 m apart.
Q/V = C = 4*pi*e_{0}*r, measured in farads = charge^{2}/energy = A^{2}*s^{4}/(kg*m^{2}). This is the capacitance of an isolated spherical capacitor. We can use the same formula for a concentric spherical capacitor with r defined in a special way. Here,
r = 1/(1/r_{p} - 1/r_{e}) = r_{e}*r_{p}/(r_{e} - r_{p}) = r_{e}*m_{e}/(m_{p} - m_{e}) = 2.88159E-14 m; this r is only slightly larger than the proton-sphere radius r_{p}.
C = 4*pi*e_{0}*r = 4*pi*e_{0}*2.88159E-14 m = 3.2061E-24 farads. Additional charge could be imposed by electrostatic gravity, with or without a change in geometry.
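The numbers in this appendix can be checked directly. A minimal Python sketch (not part of the original text), using CODATA constants, which differ slightly in their rounding from the values quoted above:

```python
import math

# Physical constants (CODATA, rounded)
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
M_E  = 9.1093837015e-31   # electron mass, kg
M_P  = 1.67262192369e-27  # proton mass, kg

r_e = 5.2889e-11          # electron-orbit sphere radius, m (value used above)
r_p = r_e * M_E / M_P     # proton-orbit sphere radius, m

# Effective radius for the concentric spherical capacitor:
# r = 1/(1/r_p - 1/r_e) = r_e*r_p/(r_e - r_p)
r_eff = r_e * r_p / (r_e - r_p)

# Capacitance C = 4*pi*eps0*r
C = 4 * math.pi * EPS0 * r_eff

print(f"r_p   = {r_p:.4e} m")    # about 2.8804e-14 m
print(f"r_eff = {r_eff:.4e} m")  # about 2.882e-14 m
print(f"C     = {C:.4e} F")      # about 3.207e-24 F
```

The small differences in the last digits relative to the text come from rounding in the constants.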
APPENDIX 3.
Gödel and the End of the Universe
In this talk, I want to ask how far we can go in our search for understanding and knowledge. Will we ever find a complete form of the laws of nature? By a complete form, I mean a set of rules that, in principle at least, enable us to predict the future to an arbitrary accuracy, knowing the state of the universe at one time. A qualitative understanding of the laws has been the aim of philosophers and scientists from Aristotle onwards. But it was Newton’s Principia Mathematica in 1687, containing his theory of universal gravitation, that made the laws quantitative and precise. This led to the idea of scientific determinism, which seems first to have been expressed by Laplace. If at one time one knew the positions and velocities of all the particles in the universe, the laws of science should enable us to calculate their positions and velocities at any other time, past or future. The laws may or may not have been ordained by God, but scientific determinism asserts that he does not intervene to break them.
At first, it seemed that these hopes for a complete determinism would be dashed by the discovery, early in the 20th century, that events like the decay of radioactive atoms seemed to take place at random. It was as if God was playing dice, in Einstein’s phrase. But science snatched victory from the jaws of defeat by moving the goal posts and redefining what is meant by a complete knowledge of the universe. It was a stroke of brilliance whose philosophical implications have still not been fully appreciated. Much of the credit belongs to Paul Dirac, my predecessor but one in the Lucasian chair, though it wasn’t motorized in his time. Dirac showed how the work of Erwin Schrödinger and Werner Heisenberg could be combined in a new picture of reality, called quantum theory. In quantum theory, a particle is not characterized by two quantities, its position and its velocity, as in classical Newtonian theory. Instead it is described by a single quantity, the wave function. The size of the wave function at a point gives the probability that the particle will be found at that point, and the rate at which the wave function changes from point to point gives the probability of different velocities. One can have a wave function that is sharply peaked at a point. This corresponds to a state in which there is little uncertainty in the position of the particle. However, the wave function varies rapidly, so there is a lot of uncertainty in the velocity. Similarly, a long chain of waves has a large uncertainty in position, but a small uncertainty in velocity. One can have a well-defined position, or a well-defined velocity, but not both.
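The trade-off described here, a sharply peaked wave function implying an uncertain velocity and vice versa, can be illustrated numerically. A minimal sketch (not part of the lecture) in Python, with ħ set to 1 and a Gaussian wave packet; the function name `spreads` and the grid parameters are arbitrary choices for the illustration:

```python
import math

# Discretized Gaussian wave function of width sigma, on a grid, with hbar = 1.
# A narrow packet (small sigma) has a small position spread, but because the
# wave function then varies rapidly from point to point, it has a large
# momentum (velocity) spread -- and vice versa.
def spreads(sigma, length=40.0, n=4001):
    dx = length / (n - 1)
    xs = [-length / 2 + i * dx for i in range(n)]
    psi = [(2 * math.pi * sigma**2) ** -0.25 * math.exp(-x * x / (4 * sigma**2))
           for x in xs]
    # <x^2>: the packet is centered at 0, so <x> = 0 and (delta x)^2 = <x^2>
    x2 = sum(x * x * p * p for x, p in zip(xs, psi)) * dx
    # <p^2> = integral |psi'(x)|^2 dx (hbar = 1), via central differences
    p2 = sum(((psi[i + 1] - psi[i - 1]) / (2 * dx)) ** 2
             for i in range(1, n - 1)) * dx
    return math.sqrt(x2), math.sqrt(p2)

for sigma in (0.5, 1.0, 2.0):
    dx_, dp_ = spreads(sigma)
    print(f"sigma={sigma}: delta_x={dx_:.3f}  delta_p={dp_:.3f}  "
          f"product={dx_ * dp_:.3f}")  # product stays at ~0.500
```

For a Gaussian packet the product of the spreads comes out at the minimum, ħ/2, allowed by the uncertainty relation; any other packet shape gives a larger product.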
This would seem to make complete determinism impossible. If one can’t accurately define both the positions and the velocities of particles at one time, how can one predict what they will be in the future? It is like weather forecasting. The forecasters don’t have an accurate knowledge of the atmosphere at one time, just a few measurements at ground level and what can be learnt from satellite photographs. That’s why weather forecasts are so unreliable. However, in quantum theory, it turns out one doesn’t need to know both the positions and the velocities. If one knew the laws of physics and the wave function at one time, then something called the Schrödinger equation would tell one how fast the wave function was changing with time. This would allow one to calculate the wave function at any other time. One can therefore claim that there is still determinism, but it is determinism on a reduced level. Instead of being able accurately to predict two quantities, position and velocity, one can predict only a single quantity, the wave function. We have redefined determinism to be just half of what Laplace thought it was. Some people have tried to connect the unpredictability of the other half with consciousness, or the intervention of supernatural beings. But it is difficult to make either case for something that is completely random.
In order to calculate how the wave function develops in time, one needs the quantum laws that govern the universe. So how well do we know these laws? As Dirac remarked, Maxwell’s equations of light and the relativistic wave equation, which he was too modest to call the Dirac equation, govern most of physics and all of chemistry and biology. So in principle, we ought to be able to predict human behavior, though I can’t say I have had much success myself. The trouble is that the human brain contains far too many particles for us to be able to solve the equations. But it is comforting to think we might be able to predict the nematode worm, even if we can’t quite figure out humans. Quantum theory and the Maxwell and Dirac equations indeed govern much of our life, but there are two important areas beyond their scope. One is the nuclear forces. The other is gravity. The nuclear forces are responsible for the Sun shining and the formation of the elements including the carbon and oxygen of which we are made. And gravity caused the formation of stars and planets, and indeed, of the universe itself. So it is important to bring them into the scheme.
The so-called weak nuclear forces have been unified with the Maxwell equations by Abdus Salam and Steven Weinberg, in what is known as the electroweak theory. The predictions of this theory have been confirmed by experiment and the authors rewarded with Nobel Prizes. The remaining nuclear forces, the so-called strong forces, have not yet been successfully unified with the electroweak forces in an observationally tested scheme. Instead, they seem to be described by a similar but separate theory called QCD. It is not clear who, if anyone, should get a Nobel Prize for QCD, but David Gross and Gerard ’t Hooft share credit for showing the theory gets simpler at high energies. I had quite a job to get my speech synthesizer to pronounce Gerard’s surname. It wasn’t familiar with apostrophe t. The electroweak theory and QCD together constitute the so-called Standard Model of particle physics, which aims to describe everything except gravity.
The standard model seems to be adequate for all practical purposes, at least for the next hundred years. But practical or economic reasons have never been the driving force in our search for a complete theory of the universe. No one working on the basic theory, from Galileo onward, has carried out their research to make money, though Dirac would have made a fortune if he had patented the Dirac equation. He would have had a royalty on every television, walkman, video game and computer.
The real reason we are seeking a complete theory is that we want to understand the universe and feel we are not just the victims of dark and mysterious forces. If we understand the universe, then we control it, in a sense. The standard model is clearly unsatisfactory in this respect. First of all, it is ugly and ad hoc. The particles are grouped in an apparently arbitrary way, and the standard model depends on 24 numbers whose values cannot be deduced from first principles, but which have to be chosen to fit the observations. What understanding is there in that? Can it be Nature’s last word? The second failing of the standard model is that it does not include gravity. Instead, gravity has to be described by Einstein’s General Theory of Relativity. Unlike the laws that govern everything else in the universe, general relativity is not a quantum theory. Although it is not consistent to use the non-quantum general relativity with the quantum standard model, this has no practical significance at the present stage of the universe, because gravitational fields are so weak. However, in the very early universe, gravitational fields would have been much stronger, and quantum gravity would have been significant. Indeed, we have evidence that quantum uncertainty in the early universe made some regions slightly more or less dense than the otherwise uniform background. We can see this in small differences in the background of microwave radiation from different directions. The hotter, denser regions will condense out of the expansion as galaxies, stars and planets. All the structures in the universe, including ourselves, can be traced back to quantum effects in the very early stages. It is therefore essential to have a fully consistent quantum theory of gravity, if we are to understand the universe.
Constructing a quantum theory of gravity has been the outstanding problem in theoretical physics for the last 30 years. It is much, much more difficult than the quantum theories of the strong and electroweak forces. These propagate in a fixed background of space and time. One can define the wave function and use the Schrödinger equation to evolve it in time. But according to general relativity, gravity is space and time. So how can the wave function for gravity evolve in time? And anyway, what does one mean by the wave function for gravity? It turns out that, in a formal sense, one can define a wave function and a Schrödinger-like equation for gravity, but that they are of little use in actual calculations.
Instead, the usual approach is to regard the quantum spacetime as a small perturbation of some background spacetime, generally flat space. The perturbations can then be treated as quantum fields, like the electroweak and QCD fields, propagating through the background spacetime. In calculations of perturbations, there is generally some quantity called the effective coupling, which measures how much of an extra perturbation a given perturbation generates. If the coupling is small, a small perturbation creates a smaller correction, which gives an even smaller second correction, and so on. Perturbation theory works and can be used to calculate to any degree of accuracy. An example is your bank account. The interest on the account is a small perturbation; a very small perturbation if you are with one of the big banks. The interest is compound. That is, there is interest on the interest, and interest on the interest on the interest. However, the amounts are tiny. To a good approximation, the money in your account is what you put there. On the other hand, if the coupling is high, a perturbation generates a larger perturbation, which then generates an even larger perturbation. An example would be borrowing money from loan sharks. The interest can be more than you borrowed, and then you pay interest on that. It is disastrous.
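The interest analogy can be put in numbers. A toy Python sketch (not from the lecture; the function name and the coupling values are made up for the illustration):

```python
# Hawking's analogy in numbers: each order of perturbation theory multiplies
# by the effective coupling g, so the total correction is g + g^2 + g^3 + ...
# For g < 1 (bank interest) the series converges to something small; for
# g > 1 (the loan shark) each term is bigger than the last and the sum runs away.
def total_correction(g, orders=60):
    return sum(g ** k for k in range(1, orders + 1))

bank = total_correction(0.02)             # 2% interest, compounded
shark = total_correction(1.5, orders=20)  # interest larger than the principal

print(f"weak coupling   g=0.02: total correction {bank:.6f}")  # ~0.020408
print(f"strong coupling g=1.5 : total correction {shark:,.0f}")
```

For small g the sum is essentially the geometric limit g/(1-g), barely more than the first-order term; for g above 1 it grows without bound as more orders are included.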
With gravity, the effective coupling is the energy or mass of the perturbation because this determines how much it warps spacetime, and so creates a further perturbation. However, in quantum theory, quantities like the electric field or the geometry of spacetime don’t have definite values, but have what are called quantum fluctuations. These fluctuations have energy. In fact, they have an infinite amount of energy because there are fluctuations on all length scales, no matter how small. Thus treating quantum gravity as a perturbation of flat space doesn’t work well because the perturbations are strongly coupled.
Supergravity was invented in 1976 to solve, or at least improve, the energy problem. It is a combination of general relativity with other fields, such that each species of particle has a super partner species. The energy of the quantum fluctuations of one partner is positive, and the other negative, so they tend to cancel. It was hoped the infinite positive and negative energies would cancel completely, leaving only a finite remainder. In this case, a perturbation treatment would work because the effective coupling would be weak. However, in 1985, people suddenly lost confidence that the infinities would cancel. This was not because anyone had shown that they definitely didn’t cancel. It was reckoned it would take a good graduate student 300 years to do the calculation, and how would one know they hadn’t made a mistake on page two? Rather it was because Ed Witten declared that string theory was the true quantum theory of gravity, and supergravity was just an approximation, valid when particle energies are low, which in practice, they always are. In string theory, gravity is not thought of as the warping of spacetime. Instead, it is given by string diagrams; networks of pipes that represent little loops of string, propagating through flat spacetime. The effective coupling that gives the strength of the junctions where three pipes meet is not the energy, as it is in supergravity. Instead it is given by what is called the dilaton; a field that has not been observed. If the dilaton had a low value, the effective coupling would be weak, and string theory would be a good quantum theory. But it is no earthly use for practical purposes.
In the years since 1985, we have realized that both supergravity and string theory belong to a larger structure, known as M theory. Why it should be called M theory is completely obscure. M theory is not a theory in the usual sense. Rather it is a collection of theories that look very different but which describe the same physical situation. These theories are related by mappings or correspondences called dualities, which imply that they are all reflections of the same underlying theory. Each theory in the collection works well in the limit, like low energy, or low dilaton, in which its effective coupling is small, but breaks down when the coupling is large. This means that none of the theories can predict the future of the universe to arbitrary accuracy. For that, one would need a single formulation of M theory that would work in all situations.
Up to now, most people have implicitly assumed that there is an ultimate theory that we will eventually discover. Indeed, I myself have suggested we might find it quite soon. However, M theory has made me wonder if this is true. Maybe it is not possible to formulate the theory of the universe in a finite number of statements. This is very reminiscent of Gödel’s theorem. This says that any finite system of axioms is not sufficient to prove every result in mathematics.
Gödel’s theorem is proved using statements that refer to themselves. Such statements can lead to paradoxes. An example is: this statement is false. If the statement is true, it is false. And if the statement is false, it is true. Another example is: the barber of Corfu shaves every man who does not shave himself. Who shaves the barber? If he shaves himself, then he doesn’t, and if he doesn’t, then he does. Gödel went to great lengths to avoid such paradoxes by carefully distinguishing between mathematics, like 2+2=4, and metamathematics, or statements about mathematics, such as “mathematics is cool”, or “mathematics is consistent”. That is why his paper is so difficult to read. But the idea is quite simple. First, Gödel showed that each mathematical formula, like 2+2=4, can be given a unique number, the Gödel number. The Gödel number of 2+2=4 is *. Second, the metamathematical statement “the sequence of formulas A is a proof of the formula B” can be expressed as an arithmetical relation between the Gödel numbers for A and B. Thus metamathematics can be mapped into arithmetic, though I’m not sure how you translate the metamathematical statement, “mathematics is cool”. Third and last, consider the self-referring Gödel statement, G. This is: the statement G cannot be demonstrated from the axioms of mathematics. Suppose that G could be demonstrated. Then the axioms must be inconsistent, because one could both demonstrate G and show that it cannot be demonstrated. On the other hand, if G can’t be demonstrated, then G is true. By the mapping into numbers, it corresponds to a true relation between numbers, but one which cannot be deduced from the axioms. Thus mathematics is either inconsistent or incomplete. The smart money is on incomplete.
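The first step, the numbering of formulas, is easy to sketch. A toy Python version (not Gödel's actual coding; the symbol table here is an arbitrary choice for illustration):

```python
# A toy Godel numbering (illustrative, not Godel's actual coding): give each
# symbol a code number, then encode a formula as the product of successive
# primes raised to those codes.  By unique factorization, distinct formulas
# get distinct numbers, and a number can be decoded back into its formula.
def first_primes(n):
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

SYMBOL_CODE = {s: i + 1 for i, s in enumerate("0123456789+=*()")}

def godel_number(formula):
    g = 1
    for p, ch in zip(first_primes(len(formula)), formula):
        g *= p ** SYMBOL_CODE[ch]
    return g

# "2+2=4" -> 2^3 * 3^11 * 5^3 * 7^12 * 11^5
print(godel_number("2+2=4"))
```

Because exponents record symbols and prime positions record their order, statements about formulas become statements about arithmetic relations between these numbers, which is the mapping the lecture describes.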
What is the relation between Gödel’s theorem and whether we can formulate the theory of the universe in terms of a finite number of principles? One connection is obvious. According to the positivist philosophy of science, a physical theory is a mathematical model. So if there are mathematical results that cannot be proved, there are physical problems that cannot be predicted. One example might be the Goldbach conjecture. Given an even number of wood blocks, can you always divide them into two piles, each of which cannot be arranged in a rectangle? That is, each pile contains a prime number of blocks.
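For any specific even number, the Goldbach check is a finite search, which is easy to sketch in Python (illustrative; the function names are made up):

```python
# A pile of n blocks can be arranged into a rectangle (with both sides > 1)
# exactly when n is composite, so a pile that cannot be so arranged is prime.
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_split(n):
    """Return primes (p, q) with p + q == n, or None if no split exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

print(goldbach_split(28))   # (5, 23)
print(goldbach_split(100))  # (3, 97)
```

The conjecture is the claim that this function never returns None for an even n of 4 or more; no finite amount of such checking proves it for all n.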
Although this is incompleteness of a sort, it is not the kind of unpredictability I mean. Given a specific number of blocks, one can determine with a finite number of trials whether they can be divided into two primes. But I think that quantum theory and gravity together introduce a new element into the discussion that wasn’t present with classical Newtonian theory. In the standard positivist approach to the philosophy of science, physical theories live rent free in a Platonic heaven of ideal mathematical models. That is, a model can be arbitrarily detailed and can contain an arbitrary amount of information without affecting the universes they describe. But we are not angels, who view the universe from the outside. Instead, we and our models are both part of the universe we are describing. Thus a physical theory is self-referencing, as in Gödel’s theorem. One might therefore expect it to be either inconsistent or incomplete. The theories we have so far are both inconsistent and incomplete.
Quantum gravity is essential to the argument. The information in the model can be represented by an arrangement of particles. According to quantum theory, a particle in a region of a given size has a certain minimum amount of energy. Thus, as I said earlier, models don’t live rent free. They cost energy. By Einstein’s famous equation, E = mc², energy is equivalent to mass. And mass causes systems to collapse under gravity. It is like getting too many books together in a library. The floor would give way and create a black hole that would swallow the information. Remarkably enough, Jacob Bekenstein and I found that the amount of information in a black hole is proportional to the area of the boundary of the hole, rather than the volume of the hole, as one might have expected. The black hole limit on the concentration of information is fundamental, but it has not been properly incorporated into any of the formulations of M theory that we have so far. They all assume that one can define the wave function at each point of space. But that would be an infinite density of information, which is not allowed. On the other hand, if one can’t define the wave function pointwise, one can’t predict the future to arbitrary accuracy, even in the reduced determinism of quantum theory. What we need is a formulation of M theory that takes account of the black hole information limit. But then our experience with supergravity and string theory, and the analogy of Gödel’s theorem, suggest that even this formulation will be incomplete.
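The area law referred to here is the Bekenstein-Hawking entropy formula, S = kAc³/(4Għ). A short sketch (not part of the lecture; the helper names are arbitrary) evaluates it for a solar-mass black hole, using CODATA constants:

```python
import math

# Bekenstein-Hawking entropy: the information a black hole can hold scales
# with the AREA of its horizon, S = k * A * c^3 / (4 * G * hbar),
# not with its volume.
G     = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
c     = 2.99792458e8     # speed of light, m/s
hbar  = 1.054571817e-34  # reduced Planck constant, J s
k_B   = 1.380649e-23     # Boltzmann constant, J/K
M_SUN = 1.989e30         # solar mass, kg

def horizon_area(mass):
    r_s = 2 * G * mass / c**2        # Schwarzschild radius
    return 4 * math.pi * r_s**2

def info_bits(mass):
    s = k_B * horizon_area(mass) * c**3 / (4 * G * hbar)  # entropy, J/K
    return s / (k_B * math.log(2))   # information capacity in bits

print(f"solar-mass hole: {info_bits(M_SUN):.2e} bits")  # ~1.5e77 bits
# Doubling the mass quadruples the area, and hence the information:
print(info_bits(2 * M_SUN) / info_bits(M_SUN))          # 4.0
```

Since the Schwarzschild radius grows linearly with mass, the area and the information grow as the square of the mass, which is the scaling the text contrasts with a naive volume law.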
Some people will be very disappointed if there is not an ultimate theory that can be formulated as a finite number of principles. I used to belong to that camp, but I have changed my mind. I’m now glad that our search for understanding will never come to an end, and that we will always have the challenge of new discovery. Without it, we would stagnate. Gödel’s theorem ensured there would always be a job for mathematicians. I think M theory will do the same for physicists. I’m sure Dirac would have approved.