A Digital Orrery

Article (PDF available) in IEEE Transactions on Computers C-34(9):822–831, October 1985.
DOI: 10.1109/TC.1985.1676638 · Source: IEEE Xplore
Abstract
We have designed and built the Orrery, a special computer for high-speed, high-precision orbital mechanics computations. On the problems the Orrery was designed to solve, it achieves approximately 10 Mflops in about 1 ft³ of space while consuming 150 W of power. The specialized parallel architecture of the Orrery, which is well matched to orbital mechanics problems, is the key to obtaining such high performance. In this paper we discuss the design, construction, and programming of the Orrery. Copyright © 1985 by The Institute of Electrical and Electronics Engineers, Inc.
The acceleration of the $i$th body due to the gravitational attraction of the other bodies is
$$\mathbf{a}_i = \sum_{j \neq i}^{N} \frac{G M_j}{|\mathbf{r}_{ij}|^3}\,\mathbf{r}_{ij},$$
where $\mathbf{r}_{ij} = \mathbf{r}_j - \mathbf{r}_i$.
Differentiating once with respect to time gives the jerk,
$$\mathbf{j}_i = \sum_{j \neq i}^{N} \frac{G M_j}{|\mathbf{r}_{ij}|^3}\left(\mathbf{v}_{ij} - \frac{3\,(\mathbf{r}_{ij}\cdot\mathbf{v}_{ij})\,\mathbf{r}_{ij}}{|\mathbf{r}_{ij}|^2}\right),$$
with $\mathbf{v}_{ij} = \mathbf{v}_j - \mathbf{v}_i$.
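Both sums can be evaluated by direct summation. The following minimal sketch (not the original code; the array layout and names are assumptions) computes them with a double loop:

```python
# Direct O(N^2) evaluation of the acceleration and jerk sums above.
# pos, vel: (N, 3) arrays; GM: (N,) array of G*M_j products.
import numpy as np

def acc_jerk(pos, vel, GM):
    N = len(pos)
    acc = np.zeros((N, 3))
    jerk = np.zeros((N, 3))
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            r = pos[j] - pos[i]           # r_ij
            v = vel[j] - vel[i]           # v_ij
            r2 = r @ r                    # |r_ij|^2
            inv_r3 = r2 ** -1.5           # 1 / |r_ij|^3
            acc[i] += GM[j] * inv_r3 * r
            # jerk summand: (v_ij - 3 (r_ij . v_ij) r_ij / |r_ij|^2) / |r_ij|^3
            jerk[i] += GM[j] * inv_r3 * (v - 3.0 * (r @ v) / r2 * r)
    return acc, jerk
```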
Since $|\mathbf{r}_{ij}|^3 = |\mathbf{r}_{ji}|^3$, the distance factor is shared between the two members of each pair, so for $n$ bodies only $\frac{n(n-1)}{2}$ distinct pair separations need be computed; the total work nevertheless remains $O(n^2)$, as in the sketch below.
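A minimal sketch of that half-loop evaluation, visiting each pair once and reusing the shared $1/|\mathbf{r}_{ij}|^3$ factor for both bodies (names and layout are assumptions):

```python
# Pairwise accumulation exploiting |r_ij| = |r_ji|: each of the
# n(n-1)/2 pairs is processed once, roughly halving the distance
# computations, though the overall cost is still O(n^2).
import numpy as np

def acc_pairwise(pos, GM):
    N = len(pos)
    acc = np.zeros((N, 3))
    for i in range(N):
        for j in range(i + 1, N):
            r = pos[j] - pos[i]            # r_ij
            inv_r3 = (r @ r) ** -1.5       # shared by both members of the pair
            acc[i] += GM[j] * inv_r3 * r   # pull of j on i
            acc[j] -= GM[i] * inv_r3 * r   # equal and opposite direction
    return acc
```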
Each pair separation enters through
$$|\mathbf{r}_{ij}|^3 = \left(x^2 + y^2 + z^2\right)^{3/2},$$
where $(x, y, z)$ are the components of $\mathbf{r}_{ij}$.
[Unrecoverable table residue: approximate body masses in kg, ranging from $2 \times 10^{30}$ down to $\sim 10^{13}$, a per-step time of $< 0.07$ ms, several repeated $O(n^2)$ cost entries, and a $\times 2.5$ factor; the original layout is lost.]
The magnitude of the force per unit mass on body $i$ is
$$\left|\vec{f}_i\right| = \sum_{j \neq i}^{N} \frac{G M_j}{|\vec{r}_{ij}|^2}.$$
The energy of body $i$ is
$$E_i = \frac{1}{2} m_i v_i^2 - \sum_{j \neq i}^{N} \frac{G m_i m_j}{|\mathbf{r}_{ij}|},$$
and the angular momentum is
$$h_i = m_i\,\left|\mathbf{r}_{ij} \times \mathbf{v}_{ij}\right|.$$
These invariants are tracked in a two-body test, $N = 2$.
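For monitoring, the totals can be accumulated directly. A minimal sketch under the same array conventions as above (the gravitational constant `G` and mass array `m` are inputs; names are assumptions):

```python
# Conserved-quantity monitors: total energy and the magnitude of the
# total angular momentum, summed over all bodies.
import numpy as np

def total_energy(pos, vel, m, G):
    kinetic = 0.5 * np.sum(m * np.sum(vel**2, axis=1))
    potential = 0.0
    N = len(pos)
    for i in range(N):
        for j in range(i + 1, N):          # each pair counted once
            potential -= G * m[i] * m[j] / np.linalg.norm(pos[i] - pos[j])
    return kinetic + potential

def total_angular_momentum(pos, vel, m):
    # | sum_i m_i (r_i x v_i) |
    return np.linalg.norm(np.sum(m[:, None] * np.cross(pos, vel), axis=0))
```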
The simple Euler scheme advances one step $\delta t$ as
$$v_1 = v_0 + a_0\,\delta t, \qquad s_1 = s_0 + \frac{\delta t}{2}\,(v_1 + v_0).$$
The leapfrog scheme staggers the velocity $v$ and position $s$ by half a step:
$$s_1 = s_0 + v_{1/2}\,\delta t, \qquad v_{3/2} = v_{1/2} + a_1\,\delta t.$$
An equivalent form keeps $s$ and $v$ on the same time grid:
$$s_1 = s_0 + v_0\,\delta t + \frac{(\delta t)^2}{2}\,a_0, \qquad v_1 = v_0 + \frac{\delta t}{2}\,(a_1 + a_0).$$
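One step of each of the three update rules above, as a sketch (the `accel` callback, which returns accelerations for positions `s`, and all names are assumptions):

```python
# One step of each integrator. s and v are (N, 3) position and
# velocity arrays, dt the step size.
def euler_step(s0, v0, accel, dt):
    a0 = accel(s0)
    v1 = v0 + a0 * dt
    s1 = s0 + 0.5 * dt * (v1 + v0)
    return s1, v1

def leapfrog_step(s0, v_half, accel, dt):
    # positions live on integer steps, velocities on half steps
    s1 = s0 + v_half * dt
    v_3half = v_half + accel(s1) * dt
    return s1, v_3half

def same_grid_step(s0, v0, accel, dt):
    # algebraically equivalent to leapfrog (velocity-Verlet form)
    a0 = accel(s0)
    s1 = s0 + v0 * dt + 0.5 * dt**2 * a0
    v1 = v0 + 0.5 * dt * (accel(s1) + a0)
    return s1, v1
```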
In this test the measured relative drifts were
$$\Delta E_{\text{simple Euler}} = 0.3, \qquad \Delta E_{\text{leapfrog}} = 1.3 \times 10^{-10},$$
$$\Delta AM_{\text{simple Euler}} = 0.19, \qquad \Delta AM_{\text{leapfrog}} = 4.1 \times 10^{-15}.$$
The Hermite scheme first predicts the state with a Taylor expansion through the jerk:
$$s_{\text{pred}} = s_0 + v_0\,\delta t + a_0\,\frac{(\delta t)^2}{2} + j_0\,\frac{(\delta t)^3}{6}, \qquad v_{\text{pred}} = v_0 + a_0\,\delta t + j_0\,\frac{(\delta t)^2}{2}.$$
The acceleration $a$ and jerk $j$ are then re-evaluated at the predicted state $s_{\text{pred}}$.
The corrector then updates velocity and position:
$$v_1 = v_0 + \frac{\delta t}{2}\,(a_0 + a_{\text{pred}}) + \frac{(\delta t)^2}{12}\,(j_0 - j_{\text{pred}}),$$
$$s_1 = s_0 + \frac{\delta t}{2}\,(v_0 + v_1) + \frac{(\delta t)^2}{12}\,(a_0 - a_{\text{pred}}).$$
The corrector may be applied once (PEC) or iterated (P(EC)$^2$); the two variants were compared at fixed $\delta t$ on a highly eccentric orbit with $e = 0.99$.
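A sketch of one such step under the equations above; `acc_jerk` is assumed to be the pairwise routine sketched earlier, closing over the masses, and `n_corr` selects PEC (`n_corr=1`) or P(EC)$^2$ (`n_corr=2`):

```python
# One Hermite predictor-corrector step. acc_jerk(s, v) returns the
# accelerations and jerks for state (s, v).
def hermite_step(s0, v0, acc_jerk, dt, n_corr=2):
    a0, j0 = acc_jerk(s0, v0)
    # predictor: Taylor expansion through the jerk
    s1 = s0 + v0*dt + a0*dt**2/2 + j0*dt**3/6
    v1 = v0 + a0*dt + j0*dt**2/2
    for _ in range(n_corr):                  # E then C, optionally iterated
        a_p, j_p = acc_jerk(s1, v1)          # evaluate at predicted state
        v1 = v0 + dt/2*(a0 + a_p) + dt**2/12*(j0 - j_p)
        s1 = s0 + dt/2*(v0 + v1) + dt**2/12*(a0 - a_p)
    return s1, v1
```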
The step size is chosen with the Aarseth criterion
$$t_c = \sqrt{\eta\,\frac{|a|\,\left|a^{(2)}\right| + \left|a^{(1)}\right|^2}{\left|a^{(1)}\right|\,\left|a^{(3)}\right| + \left|a^{(2)}\right|^2}},$$
where $\eta$ is an accuracy parameter, $a$ is the acceleration, and $a^{(1)} = j$ is the jerk.
The higher derivatives follow from the Hermite interpolation over the step:
$$a^{(2)}_1 = \frac{-6\,(a_0 - a_1) - \delta t\,(4 j_0 + 2 j_1)}{(\delta t)^2}, \qquad a^{(3)}_1 = \frac{12\,(a_0 - a_1) + 6\,\delta t\,(j_0 + j_1)}{(\delta t)^3}.$$
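A minimal sketch of the criterion for a single body; all inputs are 3-vectors, `eta` is the accuracy parameter, and the names are assumptions:

```python
# Aarseth-style adaptive timestep from the acceleration and its
# first three derivatives, per the formulas above.
import numpy as np

def higher_derivatives(a0, a1, j0, j1, dt):
    a2 = (-6.0*(a0 - a1) - dt*(4.0*j0 + 2.0*j1)) / dt**2
    a3 = (12.0*(a0 - a1) + 6.0*dt*(j0 + j1)) / dt**3
    return a2, a3

def aarseth_dt(a, j, a2, a3, eta):
    n = np.linalg.norm
    return np.sqrt(eta * (n(a)*n(a2) + n(j)**2) / (n(j)*n(a3) + n(a2)**2))
```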
The timestep must in any case remain small compared with the orbital period $T$: $\delta t \ll T$.
A two-body orbit satisfies the conic equation
$$r = \frac{l}{1 + e\cos\theta},$$
where $\theta$ is the true anomaly,
with eccentricity and semi-latus rectum
$$e = \sqrt{1 + \frac{2 E h^2}{(GM)^2}}, \qquad l = \frac{h^2}{GM},$$
where $h = |\mathbf{r} \times \mathbf{v}|$ is the specific angular momentum and $E = \tfrac{1}{2}v^2 - \frac{GM}{r}$ the specific orbital energy.
Inverting the conic equation gives
$$\theta = \cos^{-1}\!\left(\frac{1}{e}\left(\frac{l}{|\mathbf{r}|} - 1\right)\right),$$
which lies in $[0, \pi]$; the sign of $\mathbf{r} \cdot \mathbf{v}$ selects between this value and its reflection into $[\pi, 2\pi]$.
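A sketch of recovering these osculating elements from an instantaneous state; `r` and `v` are the relative position and velocity 3-vectors, `GM` the gravitational parameter, and the quadrant fix via the sign of $\mathbf{r} \cdot \mathbf{v}$ follows the text (names are assumptions):

```python
# Osculating two-body elements from a state vector, per the relations above.
import numpy as np

def orbital_elements(r, v, GM):
    h = np.linalg.norm(np.cross(r, v))          # specific angular momentum
    E = 0.5 * (v @ v) - GM / np.linalg.norm(r)  # specific orbital energy
    e = np.sqrt(max(0.0, 1.0 + 2.0 * E * h**2 / GM**2))
    l = h**2 / GM                               # semi-latus rectum
    # true anomaly in [0, pi], assuming a non-circular orbit (e > 0)
    theta = np.arccos(np.clip((l / np.linalg.norm(r) - 1.0) / e, -1.0, 1.0))
    if r @ v < 0.0:                             # approaching pericentre
        theta = 2.0 * np.pi - theta             # reflect into (pi, 2*pi)
    return e, l, theta
```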
The accuracy of an approximate force evaluation is measured by the relative error
$$\frac{\bigl|\,|\vec{f}_{\text{approx}}| - |\vec{f}_{\text{true}}|\,\bigr|}{|\vec{f}_{\text{true}}|}.$$
[Unrecoverable table residue: per-body relative errors, apparently of energy and angular momentum under the different schemes, spanning $< 10^{-15}$ up to $\sim 9 \times 10^{-2}$; the row and column structure is lost.]
[Table residue: at $\delta t = 1$ day, P(EC)$^2$ errors of order $10^{-5}$ to $10^{-4}$ versus PEC errors of order $10^{-4}$ to $10^{-3}$; at $\delta t = 0.02$ day, P(EC)$^2$ errors down to $\sim 2 \times 10^{-12}$ versus PEC errors of order $10^{-9}$ to $10^{-6}$. Column labels are lost.]
[Fragmentary discussion: the surviving numbers indicate that P(EC)$^2$ reaches errors of order $10^{-15}$ to $10^{-9}$ at $\delta t = 0.02$ day, consistently below the corresponding PEC figures.]
[Fragment: the expressions $\sqrt{x^2 + y^2 + z^2}$ and $\left(x^2 + y^2 + z^2\right)$, presumably from the discussion of evaluating $|\mathbf{r}_{ij}|$ and $|\mathbf{r}_{ij}|^3$; the surrounding text is lost.]
[Notation recap residue: classification by eccentricity ($e < 1$ bound, $e > 1$ hyperbolic, $e = 1$ marginal); $E = \tfrac{1}{2}v^2 - \frac{GM}{r}$; $h = |\mathbf{r} \times \mathbf{v}|$; $e = \sqrt{1 + 2Eh^2/(GM)^2}$; $l = h^2/GM$; the $i$th body is advanced by $\delta t$ from $t$ to $t + \text{interval}$.]
Related articles
  • Article
    We overview our GRAPE (GRAvity PipE) and GRAPE-DR project to develop dedicated computers for astrophysical N-body simulations. The basic idea of GRAPE is to attach a custom-built computer, dedicated to the calculation of gravitational interactions between particles, to a general-purpose programmable computer. With this hybrid architecture, we can achieve both a wide range of applications and very high peak performance. GRAPE-6, completed in 2002, achieved a peak speed of 64 Tflops. The next machine, GRAPE-DR, will have a peak speed of 2 Pflops and will be completed in 2008. We discuss the physics of stellar systems, the evolution of general-purpose high-performance computers, our GRAPE and GRAPE-DR projects, and issues of numerical algorithms.
  • Article
    We describe a GPU implementation of a hybrid symplectic N-body integrator, GENGA (Gravitational ENcounters with Gpu Acceleration), designed to integrate planet and planetesimal dynamics in the late stage of planet formation and stability analysis of planetary systems. GENGA is based on the integration scheme of the Mercury code (Chambers 1999), which handles close encounters with very good energy conservation. It uses mixed variable integration (Wisdom & Holman 1991) when the motion is a perturbed Kepler orbit and combines this with a direct N-body Bulirsch-Stoer method during close encounters. The GENGA code supports three simulation modes: Integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. GENGA is written in CUDA C and runs on all Nvidia GPUs with compute capability of at least 2.0. All operations are performed in parallel, including the close encounter detection and the grouping of independent close encounter pairs. Compared to Mercury, GENGA runs up to 30 times faster. GENGA is available as open source code from https://bitbucket.org/sigrimm/genga.
  • Article
    The Connection Machine Supercomputer system is described with emphasis on the solution to large scale physics problems. Numerous parallel algorithms as well as their implementation are given that demonstrate the use of the Connection Machine for physical simulations. Applications discussed include classical mechanics, quantum mechanics, electromagnetism, fluid flow, statistical physics and quantum field theories. The visualization of physical phenomena is also discussed and in the lectures video tapes demonstrating this capability are shown. Connection Machine performance and I/O characteristics are also described as well as the CM-2 software.
  • Article
    The long term changes of the orbital elements of the planets are described by secular perturbation theories. After a short historical discussion, the secular perturbation equations are derived by means of the formalism of the Lie series transformations. To solve the classical problem of the long term changes in the major semiaxes, second order effects have to be computed. As for the long term changes in the eccentricities and inclinations, they can be computed by means of higher degree theories. However, the time span over which the latter apply cannot be increased at will. This is because of the divergence of the perturbative series, a fundamental property of a non-integrable system such as the N-body problem. Numerical integrations are therefore an essential tool both to assess the reliability of any analytic theory and to provide data on the fundamental frequencies of the secular system and on the occurrence of secular resonances. Examples are taken from the LONGSTOP integrations of the outer planets for 100 million years.
  • Article
    This article is an interpretive study of the theory of irreversible and dissipative systems process transformation of Nobel Prize winning physicist Ilya Prigogine and how it relates to the phenomenological study of leadership and organizational change in educational settings. Background analysis on the works of Prigogine is included as a foundation for human inquiry and metaphor generation for open, dissipative systems in educational settings. International case study research by contemporary systems and leadership theorists on dissipative structures theory has also been included to form the interpretive framework for exploring alternative models of leadership theory in far from equilibrium educational settings. Interpretive analysis explores the metaphorical significance, connectedness, and inference of dissipative systems and helps further our knowledge of human-centred transformations in schools and colleges.
  • Article
    We have designed and built GRAPE-1 (GRAvity PipE 1), a special-purpose computer for astrophysical N-body calculations. It is designed as a back-end processor that calculates the gravitational interaction between particles. All other calculations are performed on a host computer connected to GRAPE-1. For large-N calculations (N ≳ 10⁴), GRAPE-1 achieves about 100 Mflops-equivalent on a single board of about 40 by 30 cm while consuming 2.5 W. The pipelined architecture of the GRAPE-1, specialized and optimized for the N-body calculation, is the key to the high performance. The design and construction of the GRAPE-1 system are discussed.
  • Article
    The evolution of equal-mass star clusters containing a mass fraction of about 20 percent binaries has been followed using direct integration, with one run each for N = 282 and N = 563 stars and four runs for N = 1126. For comparison, the evolution of an equivalent star system in which the binaries were replaced by stars twice as heavy as the other stars was also followed. The pre-core-collapse evolution is driven by mass segregation between the equal-mass single stars and the binaries, which are twice as heavy. After core collapse, the cluster shows, on average, a smooth reexpansion driven by a steady rate of burning (hardening) of primordial binaries. With so much primordial fuel present, the postcollapse cluster core is significantly larger than is the case in comparison runs without primordial binaries.
  • Article
    We describe the architecture of the APEmille Parallel Computer, the new generation of the APE family of processors optimized for Lattice Gauge Theory simulations. We emphasize the features of the new machine potentially useful for applications in other areas of computational physics.