Papayannopoulos (2020), “Unrealistic Models for Realistic Computations: How Idealizations Help Represent Mathematical Structures and Found Scientific Computing”.
Scientific computing (aka computational science), i.e., the general computational techniques used across the empirical sciences, can be divided into symbolic and numerical methods; the paper deals with the latter. Since numerical methods work with continuous quantities (real/complex numbers and their generalizations), questions arise about the convergence, stability, and computability of algorithms implemented in floating-point arithmetic (in particular, the accumulation of round-off errors). To study these questions, we need a mathematical framework for computing over uncountable sets; the two most popular ones, the BSS model (aka Real-RAM) and computable analysis (aka Type Two Effectivity, TTE), are incompatible with each other. The paper examines both approaches in detail.
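To make the round-off problem concrete, here is a minimal Python illustration (my own, not from the paper) of how IEEE 754 floating-point arithmetic accumulates errors and silently loses information:

```python
# Round-off accumulation: 0.1 has no exact binary representation, so
# repeated addition drifts away from the exact result.
total = 0.0
for _ in range(10_000):
    total += 0.1
print(total)            # prints something like 1000.0000000001588, not 1000.0
print(total == 1000.0)  # False

# Absorption: adding 1.0 to 1e16 is lost entirely in double precision,
# because the gap between adjacent doubles near 1e16 is already 2.0.
x = 1e16
print((x + 1.0) - x)    # 0.0, not 1.0
```

Analyzing when and how such errors stay bounded is exactly what the two frameworks below are meant to support.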
CA/TTE is the more fundamental and “low-level” theory: it is a form of mathematical analysis in which the classical theory is preserved, but computational representations of objects by infinite approximations are taken into account as well. This is usually formalized with variants of Turing machines (with oracles or more elaborate representation machinery). Its origins go back to the work of Borel, Banach, and Turing in the 1910s-30s, which considered only the computable real numbers, a countable subset of R (an idea developed further in Markov’s school of constructivism/synthetic computability). Later approaches (the “Hagen school”) moved to considering all of R, which is a key idea of TTE. In this model computable functions must be continuous; as a consequence, we lose the ability to use exact “pointwise” comparison and rounding functions (see this tweet).
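To illustrate the CA/TTE viewpoint, here is a small Python sketch (my own simplification, not the paper’s formalism) in which a real number is represented by a function producing arbitrarily good rational approximations; arithmetic is computable on such representations, but exact comparison is not:

```python
from fractions import Fraction
from typing import Callable

# A real number is given by approx(n): a rational within 2**-n of the true value.
Real = Callable[[int], Fraction]

def const(q: Fraction) -> Real:
    return lambda n: q

def add(x: Real, y: Real) -> Real:
    # Querying both arguments at precision n+1 guarantees 2**-n accuracy of the sum.
    return lambda n: x(n + 1) + y(n + 1)

def sqrt2() -> Real:
    # Approximate sqrt(2) from below by bisection until the interval is <= 2**-n.
    def approx(n: int) -> Fraction:
        lo, hi = Fraction(1), Fraction(2)
        while hi - lo > Fraction(1, 2**n):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
        return lo
    return approx

x = add(sqrt2(), const(Fraction(1)))
print(float(x(30)))  # ~2.414213562..., accurate to within 2**-30

# Exact equality/comparison is NOT computable here: finitely many agreeing
# approximations of two reals can never rule out a difference further out,
# so a total exact test would be discontinuous and hence non-computable.
# Only "soft" tests such as |x - y| < eps are available.
```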
BSS is a model of computation based on an idealized RAM machine that can store arbitrary real numbers exactly in its registers and perform arithmetic operations and comparisons on them in a single step. Discontinuous functions pose no problem here, but algebraic and transcendental functions (which CA handles without difficulty), starting with something as trivial as a square root, become harder to deal with, since they cannot be computed exactly by finitely many field operations.
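A toy sketch of the BSS idea (my own, with exact rationals standing in for the idealized real-valued registers): each arithmetic operation and each exact comparison counts as one unit-cost step, so a discontinuous sign test is a primitive, while a square root is not expressible by finitely many field operations:

```python
from fractions import Fraction

def bss_sign(x: Fraction) -> int:
    # Exact comparison of register contents is a single primitive step in the
    # BSS model -- precisely the operation that computable analysis forbids.
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 0

def bss_horner(coeffs, x: Fraction) -> Fraction:
    # Polynomial evaluation uses only + and *, i.e. exactly the ring operations
    # a BSS machine has; the step count depends on the degree alone, not on
    # how "complicated" the real inputs are.
    acc = Fraction(0)
    for c in coeffs:
        acc = acc * x + c
    return acc

p = [Fraction(1), Fraction(0), Fraction(-2)]      # x**2 - 2
print(bss_sign(bss_horner(p, Fraction(3, 2))))    # 1, since (3/2)**2 - 2 > 0

# By contrast, sqrt(2) cannot be produced exactly by finitely many field
# operations on rational constants, so a BSS machine can only approximate it
# by an unbounded iteration -- the mirror image of CA's trouble with equality.
```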
So which framework is better? BSS is less realistic and handles rounding errors and ill-conditioned functions worse, yet it remains quite popular: it models stable algorithms well (exact arithmetic introduces no errors of its own) and supports complexity estimates for them. BSS is also closer to floating-point algorithms in that each operation has a fixed cost (unlike CA, where the cost depends on the input and the required precision). CA, on the other hand, is better suited to deeper computability questions (including, for example, quantum computability) and to “ill-behaved” functions, but it makes cost analysis harder and fits poorly with well-studied numerical algorithms.
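As a small illustration of the gap the BSS idealization hides (my own example, not the paper’s): evaluating the expanded form of (x - 1)^7 near x = 1 in floating point is ill-conditioned and returns almost pure round-off noise, while the same BSS-style exact evaluation is error-free by construction:

```python
from fractions import Fraction

COEFFS = [1, -7, 21, -35, 35, -21, 7, -1]   # (x - 1)**7, expanded

def horner(cs, x):
    acc = 0 * x          # zero of the same numeric type as x
    for c in cs:
        acc = acc * x + c
    return acc

print(horner(COEFFS, 1.0001))                          # dominated by round-off noise
print(float(horner(COEFFS, Fraction(10001, 10000))))   # 1e-28, the exact value
```

The floating-point result is orders of magnitude away from the true value 1e-28, which is exactly the kind of error the BSS model abstracts away and CA is designed to account for.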
The paper ends with an analogy to physical models, which make work easier through unrealistic simplifications; e.g., floating-point numbers can be seen as analogous to experimental measured values, i.e., classical and computational mathematics live in different realities.