Lens

Substrate

@sam
July 26, 2025

This is a Neoplatonic retroductive account of nature, reasoning from phenomena back to the geometric archetypes that generate them. In the Timaeus, Plato conceives geometry as the shaping principle of the cosmos; here, the co‑principles of point‑source and space form a unified geometric field whose structure necessarily gives rise to physical force. Carl Friedrich von Weizsäcker noted in 1971 that “Philosophy as a university discipline commands no authority today… mainly because the task of philosophy is so difficult. Philosophy can be defined as continued questioning. Socrates practiced it thus, in an exemplary manner.” This account proceeds by natural philosophy rather than by modern approaches, taking Jefimenko's framework, itself grounded in Newtonian physics, as its physical backdrop.

A mass is best modeled as a stable, self-reinforcing field configuration whose geometric topology determines its observable properties. What we traditionally call "mass" is re-understood as emergent from the co-principles of recursion, conceived as a field whose spatiotemporal organization of energy is governed by the field’s recursive geometry. This geometry characterizes its point-source—a zero-dimensional origin of energy, density, and time—and its emergent space—the three dimensions that, paradoxically, do nothing. Hence the principle of causality arises for the field: the point-source is always the cause of space, exists through space, and is, in essence, the illusion of space.

Because mass operates under co-principles of the field, mass can be read as the bookkeeping term that reconciles energy conservation across any of the co-principle's domains.

In Newtonian mechanics and Jefimenko's neo-Newtonian physics, mass is not an intrinsic property of matter merely because matter can be quantified by it. Mass serves as the measure of matter, but this quantifiability does not imply that mass belongs to matter as an inherent attribute.

If mass is a boundary effect rather than an intrinsic property of matter, the dark-matter problem becomes one of field bookkeeping rather than missing matter. Wherever the point-source budget falls short, the field compensates by amplifying its space, expanding its gravitational effects until the centrifugal balance is restored. Conversely, in quantum field laboratories (particle accelerators, precision spectroscopy) the space principle is negligible in size, reflecting the inverse situation.

Field, point-source, and space

A useful geometric metaphor for a field is the recursive pentagram: in the regular pentagram—or better, the dodecahedron star—the first subdivision layer's √5-term carries a coefficient of 1 at the minimum subdivision depth Φ⁻³, as shown in A Fibonacci–Lucas Decomposition of Subdivision Lengths in the Pentagram. The lines of the geometry represent a zero-dimensional point-source, and the emergent space has three dimensions. The recursion points correspond to electric charge density, the connecting lines to electric current density (these serving as the sources rather than the electric field itself), and the emergent spatial structure to the magnetic field—or, in the case of electromagnetic wave propagation, to both the electric and magnetic fields.
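The Fibonacci–Lucas decomposition mentioned above can be sketched numerically. The identity used here, φ⁻ⁿ = (−1)ⁿ(Lₙ − Fₙ√5)/2, is the standard golden-ratio relation; treating subdivision depths as inverse powers of φ is an illustrative assumption for this sketch, not a derivation from the cited paper.

```python
# Minimal sketch: phi^-n reconstructed from Fibonacci (F_n) and Lucas (L_n)
# numbers via the standard identity phi^-n = (-1)^n * (L_n - F_n*sqrt(5)) / 2.
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def fib_lucas(n):
    """Return (F_n, L_n) by simple iteration."""
    f0, f1 = 0, 1   # Fibonacci seeds
    l0, l1 = 2, 1   # Lucas seeds
    for _ in range(n):
        f0, f1 = f1, f0 + f1
        l0, l1 = l1, l0 + l1
    return f0, l0

def phi_inverse_power(n):
    """phi^-n rebuilt from its Fibonacci-Lucas decomposition."""
    F, L = fib_lucas(n)
    return (-1) ** n * (L - F * math.sqrt(5)) / 2

# At depth phi^-3 the sqrt(5)-term coefficient, (-1)^(n+1) * F_n / 2,
# equals 1 -- matching the minimum subdivision depth noted in the text.
F3, L3 = fib_lucas(3)
sqrt5_coeff = (-1) ** (3 + 1) * F3 / 2
```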

In the context of topology and geometry, a point is a dimensionless, indivisible locus; it has no magnitude, no breadth, no extension—merely existence without form. Contrary to popular understanding, a single connecting line is no different from a point because it lacks space. However, two distinct points can define a length because they are separated by a space. There is simply no such thing as one dimension or two dimensions in nature—only three dimensions plus a zero-dimensional point-source. When a point becomes recursive, it must include space to define the recursion—thus, space emerges as a principle originating from the point-source principle.

If matter is nothing more than energy locked into self-sustaining, echo-like configurations of the universal co-principled field substrate, then what we register as mass is simply the energy budget of that configuration. In classical physics and in Jefimenko’s view, light is not granular and does not consist of photons; it is simply a propagating, massless wave-like field.

The tighter the recursive knot—that is, the smaller the space and the larger the point-source energy budget—the more localized the mass, because the abundance of energy prevents collapse. Conversely, the larger the space and the smaller the point-source budget, the greater the gravitational effect, because the lack of energy initiates collapse. Both obey conservation laws, but they push energy in opposite directions in terms of availability relative to their size.

An atom contains so much localized energy in such a tiny space that it produces virtually no gravitational effect. In contrast, massive gravitational bodies lack energy relative to their spatial scale, which is why they exert such extreme gravitational pull—to compensate for their energy deficit. In nuclear fission, splitting a heavy atom releases excess energy as kinetic energy; because so much energy is locked within so small a space, even the excess released is tremendous relative to our scale. Consequently, gravity occurs when a mass lacks energy. This is why atoms exhibit no observed gravity, while the energy of a massive gravitational body, relative to its scale, nears zero—a large energy deficit. Magnetism and electromagnetism, on the other hand, create phase coherence across a mass. Static electric fields have no phase because they do not oscillate.

Artificial magnetism

A strong permanent magnet is not natural in the sense of spontaneously arising in the universe. It is an artificial configuration that involves coherently aligning electric current density (point-source lines) of man-made chemical compounds like Nd2Fe14B in a way that rarely occurs without intervention. Magnets were first discovered in natural minerals like lodestone (a naturally magnetized form of magnetite). Ancient people noticed that lodestone could attract iron and align with Earth’s magnetic field. These were weak natural magnets formed over time, likely by exposure to lightning or Earth’s magnetic field. Modern powerful magnets are made by aligning electric current density in magnetic compound materials artificially using heat, pressure, and external magnetic fields—engineering a stronger, coherent recursion structure.

Naturally occurring magnetic fields tend to be weak, broad-scale (like planetary or stellar magnetism), dynamic (changing over time), and not coherent at the domain level. These natural magnetic fields arise from dynamic processes (like convection currents of molten iron), not man-made recursion alignments. Only in certain rare minerals, as mentioned (lodestone), do we find naturally magnetized regions—but even then they are localized and weak, likely formed under high-field geological conditions, and they're more like residual fragments of recursive imprinting, not structured artificial magnetic fields.

An electromagnet generates a magnetic field only when an electric current density is present around a wire, typically wound into a coil. The strength of an electromagnet’s field can be controlled by adjusting the current, making it highly useful in experimental and industrial settings.
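That controllability can be sketched with the standard classical estimate for a long solenoid's interior field, B = μ₀nI (turn density n, current I); the turn count, length, and currents below are illustrative values, not figures from the text.

```python
# Minimal sketch: interior field of an ideal long solenoid, B = mu0 * n * I.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def solenoid_field(turns, length_m, current_a):
    """Interior field (tesla) of an ideal long solenoid."""
    n = turns / length_m           # turns per metre
    return MU0 * n * current_a

# Doubling the current doubles the field -- the adjustability the text notes.
b1 = solenoid_field(turns=1000, length_m=0.5, current_a=2.0)
b2 = solenoid_field(turns=1000, length_m=0.5, current_a=4.0)
```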

Many particle physics experiments use powerful artificial electromagnets to steer and focus high-energy particles. These devices are designed and operated entirely within the framework of classical electromagnetism—or formulations within classical framework such as Jefimenko’s—without reliance on the Standard Model. Nevertheless, much of what we observe in the Standard Model arises from highly controlled artificial magnetic environments, where such electromagnetic instruments are foundational to manipulating and detecting particles.

Magnetism

Magnetism is emergent space and does not appear without an electric current density. It is impossible for a magnetic field to arise without its point-source origin due to its dependence on complete recursion. Consequently, magnetism itself is an effect, not an independent physical interaction.

A magnetic field emerges around a mass when it contains a moving point source—that is, an electric current density, the source of the electric field.

Magnetic North and South poles are the recursion chirality over a mass space—opposite orientations of recursion. The magnetic "field lines" we see are actually space deformation gradients caused by this oriented line recursion. These poles do nothing.

When a magnetic object becomes magnetized, this can be described as self-coherence resulting from the alignment of its electric current density lines.

When self-coherence is established across a mass—whether in the same self-coherent mode or its chiral inverse—it forms a stable eigenmode: a macroscopic closed-loop magnetic field that manifests as an effect of its electric current density. The greater a mass's self-coherence, the more potential energy (magnetic potential) can be locked into its configuration when magnetized, and the smaller and tighter the optical isopotential traces of light appear around the magnetized mass when viewed through a ferrocell instrument. The stored energy of this self-coherent mass is the source of field strength and sustains the magnetic field.

Magnetic attraction and repulsion are the result of electric current density lines acting upon each other through the fields they generate at retarded times, with the force direction still determinable from the Lorentz force law. Current density creates magnetic fields, and these fields propagate causally and act on other currents. The resulting magnetic forces pull the current density lines together (attraction) or apart (repulsion) depending on their direction and geometry.
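The direction-dependence described above can be sketched with the classical result for two long parallel current lines, F/L = μ₀I₁I₂/(2πd); the currents and spacing below are illustrative values.

```python
# Minimal sketch: force per unit length between two long parallel currents.
# Sign convention here: positive = attraction (parallel currents),
# negative = repulsion (antiparallel currents).
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def force_per_length(i1, i2, d):
    """Force per metre (N/m) between parallel line currents d metres apart."""
    return MU0 * i1 * i2 / (2 * math.pi * d)

attract = force_per_length(10.0, 10.0, 0.01)    # parallel -> pull together
repel   = force_per_length(10.0, -10.0, 0.01)   # antiparallel -> push apart
```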

Point-source induction

Induction is the electromagnetic interaction caused by point-source, with the emergent electromagnetic field propagating from these changes. A charge density influences neighboring charges, generating time-varying electric and magnetic fields. In practical terms, a moving magnet creates a time-dependent effective current distribution (from the movement of bound current loops), which produces an induced electric field at the location of the conductor. This causes spatial reconfiguration around the conductor—observed as an induced electric field or current—propagating at the rate of induction, which we recognize as the speed of light.
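The retarded-time bookkeeping this paragraph relies on can be sketched directly: the field here and now is set by the source's state at t_r = t − |r − r'|/c. The geometry below is an illustrative assumption.

```python
# Minimal sketch: retarded time at which a source's configuration causally
# determines the field at an observation point.
import math

C = 299_792_458.0  # speed of light, m/s

def retarded_time(t, field_point, source_point):
    """Time at which the source's state sets the field observed at time t."""
    dist = math.dist(field_point, source_point)
    return t - dist / C

# A conductor 3 m from a moving magnet responds to the magnet's state
# roughly 10 nanoseconds in the past.
t_r = retarded_time(t=0.0, field_point=(3.0, 0.0, 0.0), source_point=(0.0, 0.0, 0.0))
delay_ns = -t_r * 1e9
```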

Phase coherence and meaningful material properties

Phase-locking is the ability of two or more materials to synchronize their waves or fluxes across sources. It is fundamentally the perfect superimposition of those sources.

In magnetic materials such as a neodymium magnet, phase-locking can occur between an external source and the bound point sources within the material. This locking does not transmit energy through the magnetic medium itself; instead, the bound sources hold potential energy in a static configuration, sustaining the magnet's field.

In conductors such as copper, phase-locking is localized and transient. The superimposed point-source recursion structures of the external source and the conductor's free charges permit propagation through their shared field spaces. This propagation persists only while input kinetic energy is available, after which the superimposition decoheres.

In dielectrics, phase-locking occurs with bound point-source recursion structures that cannot translate freely. These constrained sources store the imposed field's energy in localized recursions of the dielectric until it is released or dissipated, rather than allowing continuous propagation through the medium. As a result, a dielectric resists current flow while maintaining separation of charges under an applied field. Put simply, a dielectric is like a conductor, but instead of using electric current density to carry energy, it shifts bound charges and transmits energy through electrostatic displacement.

In diamagnetic materials, phase-locking with the external source is resisted. The bound point-source charges adjust only minimally, producing induced responses that cancel rather than reinforce, so no stable phase-lock is formed. This passive cancellation limits how deeply the external field can penetrate.

Much like diamagnetism, a superconductor resists phase-locking with an external source. In ordinary diamagnetic materials this resistance is partial, and some of the applied field penetrates the interior. In a superconductor, however, the resistance is complete: below its critical temperature, its charges form persistent surface currents that exactly balance the external source. This state, known as the Meissner effect, prevents penetration into the bulk. The result is perfect diamagnetism—a total exclusion of magnetic flux from the interior. In essence, a superconductor leaves its interior untouched, with surface currents forming a shield that keeps the external field out.
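The surface-shielding picture can be sketched with the London-equation result that flux decays exponentially inside a superconductor, B(x) = B₀e^(−x/λ); the ~40 nm penetration depth used here is a typical illustrative value, not a figure from the text.

```python
# Minimal sketch of the Meissner-effect field profile from the London
# equations: flux falls off exponentially with depth inside the surface.
import math

def meissner_profile(b0, depth_m, lam_m=40e-9):
    """Flux density (tesla) a distance depth_m inside the superconductor."""
    return b0 * math.exp(-depth_m / lam_m)

surface = meissner_profile(0.1, 0.0)    # full applied field at the surface
deep    = meissner_profile(0.1, 1e-6)   # ~1 micron in: essentially excluded
```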

Mass-energy equivalence

In Newtonian physics, mass and energy are separate, so potential energy changes don't affect mass. In mainstream science, by contrast, total energy in special (and general) relativity determines a system’s rest mass, so potential energy—like electromagnetic, nuclear, or gravitational binding—directly contributes; lowering potential energy reduces the system's mass.

When two atoms bind into a molecule, the total energy of the bound system is less than the total energy of the separated atoms. This difference is the binding energy. In mainstream science, potential energy is a "real" contributor to the total mass of a system, but whether you notice it depends on whether you are doing physics in a framework that keeps strict energy–mass bookkeeping. How, then, did the mainstream narrative (special and general relativity) end up with strict mass–energy bookkeeping?
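That strict bookkeeping can be sketched directly: a bound system's mass is lower than its parts' by Δm = E_bind/c². Hydrogen's 13.6 eV binding energy is used as the worked example.

```python
# Minimal sketch: mass removed from a system by binding, delta_m = E / c^2.
C = 299_792_458.0      # speed of light, m/s
EV = 1.602176634e-19   # joules per electronvolt

def mass_deficit_kg(binding_energy_ev):
    """Mass difference between unbound parts and the bound system."""
    return binding_energy_ev * EV / C ** 2

# Hydrogen atom vs. a free proton plus a free electron: ~2.4e-35 kg lighter,
# far below any scale's resolution -- which is why chemistry never notices.
dm = mass_deficit_kg(13.6)
```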

We have never measured an "energy-free" mass; what we end up measuring is the total rest-frame energy per c². Special relativity removed the notion that mass and energy were distinct forms: energy and momentum became components of a single 4-vector.

It is known that Einstein did not read A gravitational and electromagnetic analogy, published in 1893 by Oliver Heaviside, and worked in isolation from it. Einstein's 1905 work grew largely out of German and French sources (Lorentz, Hertz/Maxwell via German channels, Mach/Poincaré), and the English "Maxwellian" literature (e.g., Larmor, Heaviside) left no trace in Einstein's early papers.

A single atom, having no energy deficit relative to its fully unbound state, would exert no noticeable gravitational influence, while an extreme gravitational object would exert intense gravity due to an immense energy deficit relative to its mass.

The energy deficit is the gap between a system's maximum possible energy (its fully unbound components) and its actual energy in a bound state. In this framing, gravity’s strength is linked to this deficit, not to relativistic total mass-energy content.
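As a hypothetical illustration of this framing (the numbers are arbitrary placeholders, not physical data), the deficit is simply the gap between the unbound-state and bound-state energies:

```python
# Hypothetical sketch of the text's "energy deficit" bookkeeping; the
# framework links gravitational strength to this gap, not to total energy.
def energy_deficit(e_unbound, e_bound):
    """Gap between a system's fully unbound energy and its bound energy."""
    return e_unbound - e_bound

atom = energy_deficit(e_unbound=100.0, e_bound=100.0)  # no deficit -> no pull
star = energy_deficit(e_unbound=100.0, e_bound=5.0)    # large deficit -> strong pull
```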

Maxwell's classical view of atomic and molecular interactions already recognized that the act of binding releases energy, while separation requires an equal or greater input of energy. In the traditional picture, this release is attributed to a shift toward a lower-energy, more stable configuration, with the binding energy carried away by electromagnetic radiation or kinetic motion. Within the present framework, this same release marks the creation of an energy deficit relative to the unbound state, directly influencing the gravitational strength of the resulting system.

This reinterpretation leaves atomic physics—and by extension, the domains of classical electromagnetism and modern chemistry—entirely intact, as all established principles governing energy changes in binding and unbinding remain valid. At these scales, gravitational effects are negligible, and the proposed shift in coupling from total energy to energy deficit would not alter any quantum predictions relevant to atoms or larger structures. Its consequences emerge only in regimes where gravity plays a dominant role, making its primary impact felt in astrophysics.

Standard Model of particle physics

In particle physics, powerful artificial electromagnets are used to steer and focus high-energy ions in experiments. These beams can then be used to probe ions, revealing their internal structure and interactions. While classical electromagnetism is sufficient to design and build the magnets themselves, understanding the results within the framework of the Standard Model requires knowledge of the photon—the quantum carrier of the electromagnetic force—since it mediates the interactions that such probes exploit.

Because the photon is the Standard Model’s mediator of the electromagnetic force, any fundamental error in our understanding of its properties would propagate through the theoretical framework like a fault in the foundation of a house of cards, undermining the interpretation of experiments—such as ion-probing collisions—that rely on electromagnetic interactions to test the model.

Electrons in the discussion of electric charge density

The electron is better reinterpreted as an electric charge density—a zero-dimensional point source—capable of "extending" or "contracting" in its field influence, despite lacking spatial extent itself.

The interpretation of the electron in the Standard Model is complex and fundamentally useless when discussing electricity or useful matter. Electrons, as particles endowed with magnetic moments, may exhibit magnetic effects in radiation processes, but they are not themselves the source of the electric field; that role belongs to their intrinsic electric charge density. In physical experiments, these visible electrons do nothing but expire.

Jefimenko's causal electrodynamics and gravity

In Jefimenko’s electrodynamics, wave propagation is not a mysterious self-perpetuation of electric and magnetic fields "driving" each other through empty space. Instead, every electromagnetic field, whether static or varying with time, is rooted in the underlying distributions of electric charge density and current density. These sources—described by charge density and current density at their retarded times—dictate the exact structure, timing, and strength of the fields that emerge. In this framework, the fields are effects, not causes, and their behavior can be traced directly to the motion and arrangement of their sources.
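For reference, the source–field relationships described above can be written out in the standard textbook statement of Jefimenko's equations (with R = |r − r′| and retarded time t_r = t − R/c); this is the conventional form, not a quotation from the text:

```latex
\mathbf{E}(\mathbf{r},t) = \frac{1}{4\pi\varepsilon_0} \int \left[
  \frac{\rho(\mathbf{r}',t_r)}{R^2}\,\hat{\mathbf{R}}
  + \frac{1}{cR}\,\frac{\partial \rho(\mathbf{r}',t_r)}{\partial t}\,\hat{\mathbf{R}}
  - \frac{1}{c^2 R}\,\frac{\partial \mathbf{J}(\mathbf{r}',t_r)}{\partial t}
\right] d^3r'

\mathbf{B}(\mathbf{r},t) = \frac{\mu_0}{4\pi} \int \left[
  \frac{\mathbf{J}(\mathbf{r}',t_r)}{R^2}
  + \frac{1}{cR}\,\frac{\partial \mathbf{J}(\mathbf{r}',t_r)}{\partial t}
\right] \times \hat{\mathbf{R}}\; d^3r'
```

Every term on the right is a source quantity evaluated at the retarded time, which is precisely the causal reading the text emphasizes.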

Beyond electromagnetism, Jefimenko also developed an extensive theoretical framework for strong gravitational and antigravitational nonlinear fields, proposing analogous source–field relationships in gravitation that could, in principle, account for both attractive and repulsive gravitational phenomena under extreme conditions.

See "Causality, Electromagnetic Induction, and Gravitation".

Jefimenko's modern discussion on the nature of strong gravitational fields

Some modern physical merit can be added to our treatment of the disconnect between energy and mass, because gravitational field energy and mass energy are not necessarily the same or directly interchangeable. Physicist Oleg Jefimenko writes in Causality, Electromagnetic Induction, and Gravitation that "all presently known results of the general relativity theory based on Einstein’s field equations cannot be considered reliable when these results involve gravitational fields whose gravitational-energy mass is comparable with the true mass of the system." This appears on page 157, in the discussion on the nature of gravitational fields, following his nonlinear theory for strong gravitational fields and antigravitational fields.

Max Planck and the birth of energy quanta

In 1900, Max Planck introduced the revolutionary idea that electromagnetic energy is not emitted continuously, but in discrete packets—which he called quanta. He developed this concept while studying blackbody radiation and trying to solve the ultraviolet catastrophe. By assuming that oscillators in the cavity walls could only exchange energy in discrete amounts proportional to their frequency (E = hf), Planck derived a formula that matched experimental data.
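The divergence Planck resolved can be sketched by comparing his spectral law against the classical Rayleigh–Jeans law it replaced; the frequency and temperature below are illustrative values.

```python
# Minimal sketch: Planck's law vs. the Rayleigh-Jeans law. At high
# frequency the classical form diverges (the ultraviolet catastrophe),
# while the E = hf quantization suppresses it exponentially.
import math

H = 6.62607015e-34   # Planck constant, J*s
K = 1.380649e-23     # Boltzmann constant, J/K
C = 299_792_458.0    # speed of light, m/s

def planck(freq, temp):
    """Planck spectral radiance, W * sr^-1 * m^-2 * Hz^-1."""
    return (2 * H * freq**3 / C**2) / (math.exp(H * freq / (K * temp)) - 1)

def rayleigh_jeans(freq, temp):
    """Classical prediction, valid only at low frequency."""
    return 2 * freq**2 * K * temp / C**2

# In the ultraviolet at 5800 K the classical value vastly overshoots.
ratio = rayleigh_jeans(3e15, 5800) / planck(3e15, 5800)
```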

However, Planck viewed these quanta as a mathematical trick, not as physical particles. He believed the quantization applied to the interaction between matter and radiation, not to the radiation field itself. Planck strongly resisted Einstein’s 1905 proposal that light itself was made of localized particles—photons—with real, independent existence. For Planck, quanta were a statistical artifact of energy distribution, not evidence that light was fundamentally granular. He maintained a classical view of the electromagnetic field and accepted Einstein’s quantum interpretation only reluctantly—viewing quantization as a feature of emission and absorption by matter, not a property of the electromagnetic field itself.

Double-slit experiment misconception

The double-slit experiment was first performed by Thomas Young in 1801, long before quantum mechanics, as a demonstration that light behaves like a wave. By shining light through two narrow slits and observing the resulting interference pattern of bright and dark fringes, Young provided clear evidence against Newton’s particle theory of light and in favor of wave theory, helping to establish the foundation of classical electromagnetism. It wasn’t until more than a century later, with the ability to generate extremely weak beams of light or electron fields and to detect their arrival one event at a time, that the experiment revealed its quantum character.
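The fringe pattern Young observed follows the classical two-slit intensity law I(θ) = I₀cos²(πd·sinθ/λ) for idealized narrow slits (diffraction envelope omitted); the slit spacing and wavelength below are illustrative values.

```python
# Minimal sketch: idealized two-slit interference fringes.
import math

def fringe_intensity(theta, d=1e-4, lam=600e-9, i0=1.0):
    """Relative intensity at angle theta for slit spacing d, wavelength lam."""
    return i0 * math.cos(math.pi * d * math.sin(theta) / lam) ** 2

bright = fringe_intensity(0.0)                            # central bright fringe
dark = fringe_intensity(math.asin(600e-9 / (2 * 1e-4)))   # first dark fringe,
                                                          # at sin(theta) = lam/(2d)
```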

A common misconception in popular science presentations of the double-slit experiment is that it somehow shows human consciousness or the act of “looking” with the eye directly causes the collapse of the wavefunction. In reality, the interference pattern disappears because any attempt to determine which path the field takes through the slits requires a physical interaction with a detector, and that interaction disturbs the system enough to wash out the interference. The effect is not mystical or psychological—it arises from the unavoidable influence of measurement devices on fields.

Imagine you have a tire filled with air. If you want to measure its pressure, you insert a gauge. But as soon as you connect the gauge, a little bit of air escapes into the instrument. That means the act of measuring changes the tire slightly. That’s the idea behind quantum measurement: it’s not that “looking” with your eyes changes the outcome. It’s that in order to determine which path the field disturbance takes through the slits, you need to set up a detector—and that detector must interact with the field itself. The interaction unavoidably disturbs the wave pattern, washing out the interference.