Logical Principles

Scalar and Vector Modes

Go back to Bell
Logic bifurcates into principles of quantity and quality. These may be termed the scalar and vector modes (not exactly the same as the mathematical definition).

In the scalar mode, all logical principles can be arranged in a hierarchical structure from greater to lesser fundamentality.

In the vector mode, a circular logic exists and is termed Conduction.
Consistent reasoning beginning at A leads to B, its contradiction, and then back to A. A segment of such conductive reasoning, taken in isolation, resembles either induction or deduction.

Insurance helps individuals to cope with disaster (A). Therefore insurance is beneficial to civilization. But such business produces no physical values (food, housing, etc.). It merely transfers money while at the same time removing real value producers (who are now occupied in the insurance business). Hence, there is a net weakening of civilization due to the insurance industry. Therefore, insurance is harmful to civilization (B). But if an individual has no insurance he may be ruined by disaster, and if individuals are so ruined civilization is harmed. Therefore, insurance is a benefit (A). The fact that, in reality, a compromise is made (we artificially assign the qualities benefit and harm 'scale quantifiers', i.e. "...on a scale of 1 to 10...") does not remove or negate the contradictory nature of 'conduction'.

Conduction will sustain contradictory principles if those principles are non-quantitative evaluations, i.e. not physical or mathematical. Thus, "life is worth living" and "life is not worth living" are congruent contradictory truths of an evaluative nature. One is free to choose which to act on.

Conduction is useful for purposes of demonstration.
To prove an argument to be conductive eliminates the necessity of acting on it decisively, as would be the case with induction or deduction. Similarly, nature itself may be indecisive when confronted with its own qualitative attributes, e.g. the uncertainty principle.


Preference and Distinction

There is a quantitative relationship between the scalar and vector modes, the basis of which is 'preference' and 'distinction'.

By preference is meant that a difference of scale can be noted between two alternatives. Coefficients of 0 and 1 can be assigned to denote preference, e.g. "five is greater than two" means, symbolically, 1 > 0.
When the ratio A:B approaches infinity the preference for A approaches 1.

Distinction denotes a difference between the two parameters relative to one another in the absence of a 'context' (C).
If the preference is high, the distinction between them is least, because the greater becomes the context in which the lesser exists. Thus, pleasure is impossible to sense if no pain exists.
Therefore, distinction is 1 when A:B = 1.

Combining both preference and distinction yields the most likely (or optimum) state, A:B = 2:1 .
(Let x:y = 3 and y:z = 1. It is easy to see that the ratio of the colored areas in the third figure is 2:1 and that its altitude is 2/3.)
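A minimal numerical sketch of one possible reading: take preference as p = 1 - B/A (so p approaches 1 as A:B grows without bound and is 0 at A:B = 1) and distinction as d = B/A (so d = 1 at A:B = 1). These functional forms are assumptions chosen for illustration, not taken from the figures; under them the product of preference and distinction does peak at A:B = 2:1.

    # Assumed forms (illustration only): preference p = 1 - B/A, distinction d = B/A.
    # Their product p*d is maximized at A:B = 2:1.
    def preference(a, b):
        return 1.0 - b / a

    def distinction(a, b):
        return b / a

    # scan A:B from 1 to 10 in steps of 0.01 and find where the product peaks
    best = max(((ratio, preference(ratio, 1.0) * distinction(ratio, 1.0))
                for ratio in (x / 100.0 for x in range(100, 1001))),
               key=lambda t: t[1])
    print(best)   # (2.0, 0.25): the combined measure peaks at A:B = 2:1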


L#16
go back to: Dark Matter or E/v=D/c or Bell

Theoretical Equivalence

Any change of discretionary standards of measure must produce an equally viable theory.
Both the Ptolemaic and Copernican systems produce viable theoretical models (exchanging an Earth-centered coordinate system for a Sun-centered one).

The simplest, least contradictory, and most inclusive theory is preferable.
By this rule it must be possible to construct equivalent cosmologies involving expanding space or 'shrinking' atoms, etc.

It must also be possible to construct a viable 'extension denial' physics in which a model of the universe is constructed on the surface of a sphere with the observer at the center.
The universe is 'known' to a particle by the effects immediately upon it (on the surface of a sphere of smallest possible radius). That radius must be finite (incorporating indeterminacy) because it cannot contradict the extension model (which displays differentiable points thus requiring some space to separate them).

The principle of theoretical equivalence is then the cause of 'v' (E/v = D/c), as in the Gravity section.

Here, only qualitative information is exchanged at superluminal velocities, such that the entire present extent of the positional field (out to the Hubble radius) is logically (but not mechanically) connected in the time taken for information to transit the confinement boundary (D ~ the Compton wavelength). In this way the Hubble radius is logically congruent with the Compton boundary.
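As a rough numerical illustration of this reading (assuming, for the sketch only, E ~ the Hubble radius and D ~ the electron's Compton wavelength, so that v = E x c / D), the implied 'logical' velocity is enormously superluminal:

    # Assumed reading of E/v = D/c, i.e. v = E * c / D, with
    #   E ~ Hubble radius                  (~1.3e26 m)
    #   D ~ electron Compton wavelength    (~2.43e-12 m)
    # These identifications are for illustration, not a derivation from the text.
    c = 2.998e8      # m/s
    E = 1.3e26       # m
    D = 2.43e-12     # m
    v = E * c / D
    print(f"v ~ {v:.2e} m/s, about {v / c:.1e} times c")
    # v ~ 1.6e46 m/s, some 5e37 c: the positional field out to the Hubble radius is
    # spanned in the time light needs to cross the Compton-scale confinement boundary.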

See also Bell Inequality


L#3
go back to: Beginning...or Parity...

Difference Congruency

(meaning: two concepts which cannot be separated by any logical mechanism whatsoever but which are nevertheless different.)

1) Two quantitatively distinguishable objects can only be congruent if they are the same object (the front and back of the same door). Thus a dog and a cat cannot be the same because they cannot occupy the same space at the same time, and they each possess differences which can be quantified, e.g. a longer snout, a generally larger size, purring, etc.
2) Two qualitatively distinguishable states can be congruent without logical contradiction (like left and right, as developed in the Parity section). In fact the term contradiction refers only to quantity, never to quality, because all qualities are observer dependent.
3) A qualitative state and a quantitative object can be logically congruent, as in the case of zero and one, i.e., self-referentially, the concepts of quantity and quality themselves. There are no other examples of this third possibility.


L#12
go back to: Reflexible...

Two or more finites existing relative to one another may not also exist relative to an infinite.
A contradiction results.
Example:
If an object of unit size exists relative to an infinitely large object and also to an object of twice unit size, then 1 = 2, because 1/(infinity) = 2/(infinity). Or, any finite is identical to any other in the presence of 0 or infinity.
Observation of a finite logically precludes the observation of 0 or infinity. Thus, one may not see an infinite distance or travel at infinite velocity or observe the infinitely small.
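The collapse can be checked symbolically; the sketch below merely restates the argument that finite comparisons vanish in the presence of an infinite (or zero) term.

    # Both 1/M and 2/M vanish as M grows without bound, so 'relative to infinity'
    # the finites 1 and 2 become indistinguishable; the same collapse occurs
    # relative to 0, where both ratios diverge.
    import sympy as sp

    M = sp.symbols('M', positive=True)
    print(sp.limit(1 / M, M, sp.oo))                                        # 0
    print(sp.limit(2 / M, M, sp.oo))                                        # 0
    print(sp.limit(1 / M, M, 0, dir='+'), sp.limit(2 / M, M, 0, dir='+'))   # oo oo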

L#14
go back to: Determinism or CPT or Fermions or Mass, Inertia or Introduction

The fundamental postulate of this work is the Internalization of Logic by which is meant:

That the universe is the identity of logic.
That logic is what the universe is and does.
That the universe is not ruled by logic but rather is the thing itself.
That physicality is the embodiment of logical entities.
That interactions are the embodiment of logical operations.

By corollary from the foregoing, there are two laws:

The Principle of Embodiment
All that which interacts must have form.

The Principle of Interaction
All that which has form must interact.

Because the universe is by definition all that there is, any action must occur within the context of and be validated by the totality of that which presently exists.


L#14a
go back to: Determinism...

The universe as a whole does not exist in time or space. It is rather congruent with these concepts. Therefore, questions which ask what happened before time began or what lies outside of the universe are logically defective if one has accepted the Internalization of Logic.
Go Back to Sect#19

A Justification for the Derived Relationship between the Fine Structure Constant and the Electron-Proton Mass Ratio

The mathematical procedure (the derivation of "Q") performed in the table in section #19 was invented by me specifically for that purpose. Its validity cannot be proven in the context of present mathematics. I can therefore only justify the steps taken, so as to give the reader a means of grasping the problem as a whole. In mathematics a new procedure is "proven" by use, i.e. you use it for something else, thereby generating confidence in its validity. We can only require that a new procedure absolutely DOES NOT contradict established mathematical logic.

With this in mind:

We have accepted thus far that the universe is logic itself (a completely abstract entity). Matter is the embodiment of that logic, not its subject. Therefore, the equation relating the mass of an elementary particle to its charge is analyzable as a logical entity, i.e. we need only examine the equation itself (F = a^3 x B^2).

Specifically, we are seeking an "exchange rate" between the isotropic gridwork and the positional fields (the spherical reference frames), which we have embedded therein. That is, how much compression/expansion of the grid is equal to how much expansion/compression of the positional field? Since they exist relative to one another they must display that interaction in some quantifiable way. And this rate or ratio cannot be arbitrary - cannot be "picked out of a hat" at random.

I thought on this for many years and could find no reason in it until I accepted it as an example of the "most probable case", meaning that these numbers are fixed by probability from among all conceivable possibilities. But the number of possibilities is infinite, and infinity has no "most probable" anything. To restrict the possibilities to a range between finite quantities, I chose to accept the force between an electron and a proton in the ground state of the hydrogen atom (this being the barest state) as the "fixed" quantity.

I judged this to be reasonable because that force is (on both the "absolute/pure number" and "unit" scales) ~ 1 x 10^-13. The exponents of all major universal numbers on these scales seem to be multiples of 13, e.g. 10^-13, 10^-26, 10^-39, 10^26, 10^78, etc.

The equation (F = a^3 x B^2) then "fixes" a probabilistic relationship between the two parameters of interest.

We note that if we take a and B as equivalent (neither having cause to be larger than the other), then a^3 = F^(1/2) and, similarly, B^2 = F^(1/2). But we see immediately that a and B have different exponents, giving us cause to wonder what should be done to the distribution of F over a x B.

I found no accepted mathematics for dealing with this, and so considered (what I thought to be) the most even-handed way of solving for the obvious "Q" value, and got it pretty nearly (with a few minor and expected corrections):

Q / (1/Q)            =   1   =   aa / BB
(Q) x [ Q / (1/Q) ]  =  3/2  =  (a) x [ aa / BB ]
Q^3                  =  3/2  =   aaa / BB

therefore,  Q = (3/2)^(1/3)
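As a numerical cross-check, a minimal sketch (the value F = 1 x 10^-13 and the exact form of the tilted split are assumptions made here for illustration; they are not taken from the table in section #19): the "even-handed" split assigns a^3 and B^2 equal shares of F^(1/2), and the Q correction then tilts that split by 3/2 while preserving a^3 x B^2 = F.

    # Q as derived above, plus a check that tilting the symmetric split of F
    # by 3/2 keeps a^3 * B^2 = F while making a^3 / B^2 = Q^3 = 3/2.
    # (F = 1e-13 and the form of the tilt are illustrative assumptions.)
    F = 1e-13
    Q = (3.0 / 2.0) ** (1.0 / 3.0)
    print(Q)                      # ~1.1447

    a_cubed   = F ** 0.5 * (3.0 / 2.0) ** 0.5    # a^3, tilted upward
    B_squared = F ** 0.5 / (3.0 / 2.0) ** 0.5    # B^2, tilted downward
    print(a_cubed * B_squared)    # ~1e-13, F is preserved
    print(a_cubed / B_squared)    # 1.5, which equals Q**3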


Go Back to Sect#19

L#15
go back to: Gravity...

For D = 1; and r = 10^1 : B is .049875...
For D = 1; and r = 10^2 : B is .004999875...
For D = 1; and r = 10^3 : B is .0005000...
For D = 1; and r = 10^4 : B is .00005000...

Obviously, this series converges rapidly onto a 1/r dependence: when r is ten times greater, B is ten times less.
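The tabulated values are consistent with B = sqrt(r^2 + D^2) - r; the defining relation belongs to the Gravity section and is not restated here, so that form is inferred from the numbers rather than quoted. A short sketch reproducing the listed figures and showing the 1/r falloff (for large r the expression behaves as D^2/(2r)):

    # Inferred from the tabulated values (the defining formula is in the Gravity
    # section): B = sqrt(r**2 + D**2) - r, which behaves as D**2/(2*r) for large r.
    import math

    D = 1.0
    for r in (1e1, 1e2, 1e3, 1e4):
        B = math.sqrt(r * r + D * D) - r
        print(f"r = {int(r):>5}: B = {B:.9f}   D**2/(2*r) = {D * D / (2 * r):.9f}")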


L#22
go back to: CPT

Indirect Instantaneousness

The universe (the infinite continuum) is initially 'chain connected' with unit indeterminacy (1 um ul^2/ ut).
In the first unit of time each particle is connected to its neighbor. Hence, a given unit is logically connected to another at infinite distance in a finite time through an infinite number of intermediate units.
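A toy illustration of the distinction between direct and chain-mediated connection (the finite chain length and nearest-neighbor rule are illustrative stand-ins for the infinite continuum): one time step links each unit only to its neighbors, yet the transitive closure of those links already reaches every unit in the chain.

    # 1-D toy chain: direct links are nearest-neighbor only, but the transitive
    # ('logical') closure of one step of such links connects unit 0 to every other
    # unit through intermediates.
    N = 1000                       # finite stand-in for the infinite chain
    direct = {i: {i - 1, i + 1} & set(range(N)) for i in range(N)}

    reached, frontier = {0}, {0}
    while frontier:                # transitive closure of the one-step links
        frontier = {j for i in frontier for j in direct[i]} - reached
        reached |= frontier

    print(len(reached))            # 1000: every unit is chain-connected to unit 0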

Redundancy

A temporally or spatially cyclic model must be complete in one cycle.

The study of the anterior/posterior cycle or upper/lower bound spatial repetition is without merit because these are inaccessible to experience or deduction.
Cyclic models are attempts to evade the initial postulate by 'functionally grafting' together a series of universes. Thus, a 'many worlds' hypothesis is the direct antithesis of the initial postulate.

