Computer software as a philosophical pathology
I like words to live in familial groups, because in differences there is meaning (this is an anti-Platonist view).
Much of what passes for “education” is uncritical, Positivist training in the definitions of favored words, in which disfavored concepts are unmentionable, like the madwoman in the attic. Thus, insofar as students even learn about “philosophy”, they learn about “Plato” and “Socrates”.
Platonic idealism, combined with a Socratic plainness shading into ugliness, then becomes all of “philosophy” for the putatively learned. The result? Neo-colonialism and, as I’ll show here, disaster.
The “philosophy of mathematics” interrogates mathematics: what objects in the world is mathematics about (ontology), how can we be so certain of mathematical truth (epistemology), what is mathematical beauty (aesthetics), and so on. It is not understood at all if a serious discussion, such as the discussion at this link, can start and end with Plato: yet that is precisely how philosophy is treated in the media, as a music of one note with no development or dialectic.
Now, Stephan Körner’s book may be out of date: incomplete. However, it has the saving grace that it presents an interplay between Platonism (or its modern version, logicism), intuitionism, and formalism, because one never understands the part unless one begins to understand the conceptual family of which it is a member.
Thus, I understand mathematical Platonism in opposition to formalism and intuitionism, and only in this group. To discuss, as does the blogger referenced, the philosophy of mathematics as if the only philosopher were Plato or his butt buddy Socrates is utter pretension. It makes philosophy into the spinning of fables, because absent intuitionist and formalist responses (especially the mostly ignored response of intuitionism), and absent any social critique, “Platonism” reduces to the utterly absurd (because uncontested: because uncritical) fable of the Forms.
Seen in contrast only to Aristotle’s Metaphysics, the “world of forms” retains its limited truth content, because when we aspire in the world of matter to make form, we do have a mental template which has the properties of the capital-I Ideal. In software, we try to assemble a chaos of bits (matter) into Form using a mental construct.
But by itself, the world of forms is a children’s fable.
How do these philosophies of mathematics apply to software?
The blogger rather idly fantasizes that software Platonism would be the belief that over and above specific programs, there is the Idea of the program, which raises all sorts of interesting problems that were also raised to Plato when he constructed the idea of form independent of matter.
Plato was asked whether there could be Ideas of “bad things”. Is there a Platonic fart? Certainly, there are aestheticians of breaking wind who rate explosions for sound and smell.
Likewise, over and above the millions of shipments of Microsoft Windows, is there an Ideal Microsoft Windows? Is this the perfect, bug-free Windows intended by its core development team? Do the ideas of the orange badgers count (the “orange badgers” are the thousands of temporary employees doing the same work as Microsoft employees for less money and fewer benefits)?
Or is Microsoft Windows so exemplary of what not to do that its Idea would be the buggiest and most odious version of Windows, say rel. 1.0 running on an underpowered Intel 386?
Or would the Idea be an averaging of all the above?
Pure Platonism, especially when it is absorbed through arrogance or ignorance, or through a study of “philosophy” restricted to Bonehead 101 and The Fountainhead, is just malarkey, and it needs to confront more modern philosophies in order to serve as their genetic ground.
The blogger seems unaware that “modern” Platonism would be the “logicism” of Gottlob Frege and Bertrand Russell. Although Bertie did imply in his earlier general philosophy that Ideas exist, he and Frege were more concerned with a consequence of Platonism in the philosophy of mathematics.
This was that it seemed to them and others around 1900 that mathematics could be reduced to logic by means of a rich conception of the mathematical/logical “set” as a bridge notion. If all sets exist unproblematically, then we can derive mathematics by way of construing numbers as the “set of all sets of the same cardinality [set count]” and expressing basic arithmetic as set operations.
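As a rough illustration of the logicist idea (a toy sketch over a small finite universe, not Frege’s or Russell’s actual construction), a “number” can be modelled as the class of all sets of a given cardinality, with addition performed as a set operation on disjoint representatives:

```python
# A toy sketch of the logicist construction: a "number" is the class of all
# sets (here, subsets of a small finite universe) with a given cardinality,
# and addition is carried out as a set operation (disjoint union).
from itertools import combinations

UNIVERSE = {"a", "b", "c", "d", "e"}

def number(n):
    """The 'number n': the class of all n-element subsets of the universe."""
    return {frozenset(s) for s in combinations(UNIVERSE, n)}

def plus(num_a, num_b):
    """Add two 'numbers' by finding disjoint representatives and uniting them."""
    for x in num_a:
        for y in num_b:
            if not (x & y):                 # disjoint representatives found
                return number(len(x | y))
    raise ValueError("universe too small to represent the sum")

print(plus(number(2), number(3)) == number(5))   # True: 2 + 3 = 5, as set algebra
```

Of course, the whole point of the paradoxes below is that “all sets” cannot be taken as unproblematically as this toy universe suggests.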
Because of some well-known paradoxes, this project failed: it was as if the Platonic empyrean were found to contain one or more Rebel Angels, “whose high disdain, and Sense of Injur’d Merit” would upset the apple cart.
I express one of the most famous paradoxes in my China teaching like this: during the Cultural Revolution, the village Party cell decrees: “Comrades! Revolutionaries express their commitment to the dictatorship of the proletariat and the liquidation of the landlord class by remaining clean shaven! All men must shave every morning in preparation for toil!”
“All men who do not use the services of Comrade Wong, village barber shock worker, must shave themselves! All men who do not shave themselves must be shaven by Comrade Wong!”
Needless to say, Comrade Wong is found dead by suicide, since he can neither shave himself nor leave himself unshaven without defying the decree.
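Comrade Wong’s predicament is, of course, a folk version of Russell’s paradox. In set-theoretic dress: consider the set of all sets that are not members of themselves,

$$R = \{\, x : x \notin x \,\}, \qquad\text{whence}\qquad R \in R \iff R \notin R.$$

Either answer to “is R a member of itself?” entails the other, just as either answer to “does Comrade Wong shave himself?” defies the decree.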
Obviously, logicism and Platonism resemble, to a certain extent and with far more dignity, the nonsensical pseudo-philosophical ravings of the mad woman Ayn Rand: they are limit cases, generators of paradoxes and in themselves philosophical problems.
David Hilbert’s nihilistic formalism declared baldly that mathematics refers to nothing outside itself and is an uninstantiated game with symbols, playable only by tenured professors and their favorite adjuncts because it pleases them: equivalent to chess, crossword puzzles, or what was known to Cantabrigians of the 1930s as The Higher Sodomy. Of course, this completely fails to explain why mathematics can be used outside of mathematics, and why bridges, constructed using mathematics, do not, as a rule, fall down.
Hilbertian formalism as applied to software would mean that software has no meaning outside of software.
This is where the philosophy of mathematics-in-software gets interesting, because there is a philosophical pathology: the investigation of how philosophies and their misunderstood mutations escape the Platonic knocking-shop and Academy and become the “philosophies” of people working in other learned professions and in trade. Mechanicals and slaves in Plato’s time were innocent of big ideas, for the very good reason that most big ideas hadn’t been thought yet: today they inherit, albeit, in some cases, down at the community college, some big ideas, considerably shopworn, perhaps mutated, by the distance traveled in Raum und Zeit.
It may seem wrong to view business and administrative software as an uninstantiated game with symbols, because we write software to “solve problems”. But the fact is that in business, many companies get into creating software blindly, the software creates more problems than it solves, and their data processing staffs, owing to their incompetence, contribute nothing to the bottom line, merely playing David Hilbert’s “games with symbols”, having found a new way, beyond a tenured professorship, to basically jerk themselves off.
In 2003, writing in the Harvard Business Review, Nicholas G. Carr announced that “IT (information technology: software) Doesn’t Matter”. He found that a new technology provides competitive advantage only at its beginning. Certainly, many new-company entrepreneurs discover innovative ways to at least seem to get quick results using IT; Federal Express was able to do so.
But, Carr went on to show that after first entrants gain competitive advantage, the rest of the market is locked into providing the same service, incurring a new layer of costs with no clear benefits. What Carr failed to point out was that no entrants, early or late, have to provide correct software. By the time bugs matter, and by the time they are fixed, the software is a pure cost.
This appears to me to be applied Hilbertian formalism. The actual developers, even of the first software, worked blindly because of the end users’ inability to verbalize their needs. Since this ability is roughly the same skill as actual programming, there was a massive duplication of effort as, in the early days, vast, unreadable and unread “requirements documents”, full of aporias and errors, were typed by secretaries and handed to programmers.
However, insofar as anyone at the time reflected “philosophically” on what was going on, an overemphasis on a static “philosophy” that starts and stops with Plato and Socrates caused the developers and their corporate sponsors to treat buggy first versions as a Platonic “Form”, with end users, low-level employees, and customers paying the price of the errors. Nonetheless, the buggy software was a genuine competitive advantage.
As a Platonic Form, strangely, the buggy Rel. 1.0 was “good enough for government work”. Monstrum horrendum Derrida has pointed out the Platonic aporias: perhaps the most basic is that the Platonist, approaching the limit and getting “close enough”, dismisses the infinitesimal remainder as unworthy, a mere form of writing (the trace, in Derrida), and is thus able to confuse the buggy release with the Form.
But over time, data processing became a cost center. Today, most programmers are unconsciously formalists who follow syntactical, Hilbertian rules. In many cases, in order to avoid errors, they are directed to use “applications generators”, which force them to follow Hilbertian rules with little control over the result, and which reinforce a Hilbertian split between what they do and what it means. But there’s no way of philosophically narrating this change, because “philosophy” is treated as a practice with no evolution and no pathology.
In the case of “rocket science” financial software, these tools are little understood; owing to the formalist bias against relating mathematics to reality, the general public has no understanding of the financial software used in securitization and structured finance. As a result of Hilbertian games, and of the frivolity of David Hilbert’s philosophy of mathematics, we see the absence of the original borrower’s name and address on securitized loans, and circular chains in other artifacts in which, for example, a reinsurance policy insures itself at two or more levels’ remove. The result? The end of the world?
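The circularity, at least, is easy to state in graph terms: draw an edge from each contract to the contracts that stand behind it, and a sound structure is a directed acyclic graph, while the pathological artifacts contain cycles. A minimal sketch of how one might detect such a circle (the contract names are hypothetical, not drawn from any real deal):

```python
# A minimal sketch: detect a circular "is backed by" chain in a directed
# graph of contracts. The contract names below are hypothetical.

def find_cycle(backs):
    """backs maps each contract to the contracts standing behind it.
    Returns one circular chain as a list, or None if the graph is acyclic."""
    WHITE, GREY, BLACK = 0, 1, 2
    color, stack = {}, []

    def visit(node):
        color[node] = GREY
        stack.append(node)
        for nxt in backs.get(node, []):
            if color.get(nxt, WHITE) == GREY:          # back edge: the circle closes
                return stack[stack.index(nxt):] + [nxt]
            if color.get(nxt, WHITE) == WHITE:
                found = visit(nxt)
                if found:
                    return found
        stack.pop()
        color[node] = BLACK
        return None

    for node in list(backs):
        if color.get(node, WHITE) == WHITE:
            found = visit(node)
            if found:
                return found
    return None

# A reinsurance policy that ends up insuring itself at two levels' remove:
contracts = {
    "policy_A": ["reinsurer_B"],
    "reinsurer_B": ["retrocession_C"],
    "retrocession_C": ["policy_A"],
}
print(find_cycle(contracts))   # ['policy_A', 'reinsurer_B', 'retrocession_C', 'policy_A']
```

That such a check is trivial to write is part of my point.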
For example, software developers at D. E. Shaw develop models explicitly identified, on the D. E. Shaw Web page, as “proprietary”. Unfortunately, in software, “proprietary” has a specific, rather Hilbertian, and somewhat sinister meaning.
The software’s “source code” (the instructions in a formal language readable by programmers trained in that language) cannot be viewed without D. E. Shaw’s permission, as opposed to “open” systems such as Linux for which the source code is available.
One real-world meaning of “proprietary” software is that its freedom from “bugs” is unknown to all but people biased towards thinking of the software as free of bugs. Secrecy kills knowledge, and in being proprietary, D. E. Shaw’s software is logically akin to Bernie Madoff’s far cruder and less elaborate Ponzi scheme.
Platonism and formalism are in themselves bad jokes, because neither betrays any serious engagement with the other. Intuitionism is in a different weight class.
The best and noblest philosophy of programming resembles the Intuitionism of Brouwer, and as Donald Knuth wrote in a 1978 essay, its avatar, the late Edsger Dijkstra, was also Dutch, appropriately.
Mathematical Platonism, as applied to software, arrogantly assumes that the software which controls and destroys people’s lives pre-exists its own writing, and is thus an implicit ideological justification for the substitution of promises for software, and for the Benthamite implication that surveillance, as software, already exists. Software formalism is even more irresponsible. But Dijkstra is the noble Intuitionist who, with rigor and with moral seriousness, refuses to admit that “there is a program” for a given problem until that program has been generated, step by step, from its own correctness proof.
Only software Intuitionism, a refusal of both Platonic arrogance and Formalist nihilism, betrays any reading of Kant. The final correctness of the absolute (zuivere) program might in fact be an unknowable thing-in-itself, a limit or upper bound. Dijkstra’s practice was what his countrymen Mondrian and van Doesburg called zuivere beelding: purity above all.
I’m serious. Arrogant Platonists at the top of companies like D. E. Shaw, protected against the consequences of their errors by ownership of source, have helped to destroy the economy, because they sold “solutions” which used “chaos” but did not, and could not, anticipate “black swan” unexpected conditions, in part because of mere memory limitations: the Intuitionist knows that 2^64-1 is nothing more than a finite number, whereas the applied Platonist confuses 2^64-1 with infinity, even as he confused 2^31-1 with infinity in 1990.
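To make the confusion concrete: in a fixed-width, two’s-complement register, the “largest number” is just another finite number, and one more than it silently wraps around. A minimal sketch, simulating 32-bit signed arithmetic (Python’s own integers, note, do not wrap):

```python
# Simulate 32-bit two's-complement addition: the "infinity" of 1990 wraps around.
def add_int32(a, b):
    s = (a + b) & 0xFFFFFFFF                   # keep the low 32 bits, as the hardware does
    return s - (1 << 32) if s >= (1 << 31) else s

INT32_MAX = 2**31 - 1                          # 2147483647, the 1990 "infinity"
print(add_int32(INT32_MAX, 1))                 # -2147483648: the counter has wrapped
```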
Nihilistic and substance abusing Formalists were delighted to play what they knew to be uninstantiated games with symbols for the Platonists at the top, for money. Intuitionists were hounded out of the field as whistle-blowers and party poopers.
We live with the result. Yesterday, Luiz Inácio Lula da Silva, president of Brazil, said that the world financial crisis, powered as it is by software, is the mischief of blue-eyed white people. Platonism and irresponsible Formalism were their ideological tools in software.
I shall now illustrate my points.
The Game of Life was developed by the Cambridge and Princeton mathematician John Horton Conway (whom I was privileged to meet when I was at Princeton) in 1968, and I implemented the Game of Life on an old mainframe with 8000 bytes of storage in 1971. In this mathematical game, imagine a world consisting of cells. Each cell can be alive or dead. Each cell has a square neighborhood consisting of the eight neighboring cells.
If a dead cell has exactly three live neighbors, it becomes live. If a live cell has four or more live neighbors, it dies as if from overcrowding. If a live cell has fewer than two live neighbors, it dies, poetically and picturesquely, as if from loneliness. (A live cell with two or three live neighbors survives unchanged.)
These simple rules have been found to create strange and wonderful patterns.
Symmetry is preserved, so a simple row of ten live cells produces a sequence of stunning symmetric patterns (and, at one point, the message HIH). Asymmetrical patterns generate motion, as in the “glider”, and, when gliders meet other objects, something like sexual reproduction and the creation of factories that clone them.
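For concreteness, here is a minimal sketch of the rules just described, on a bounded grid, with everything outside the grid treated as permanently dead; that convenient border assumption is exactly the one I will complain about below.

```python
# A minimal sketch of Conway's Life on a bounded grid. Cells outside the grid
# are treated as permanently dead (the convenient border assumption discussed below).

def life_step(grid):
    """grid is a list of rows of 0/1 cells; returns the next generation."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        return sum(grid[rr][cc]
                   for rr in range(r - 1, r + 2)
                   for cc in range(c - 1, c + 2)
                   if (rr, cc) != (r, c) and 0 <= rr < rows and 0 <= cc < cols)

    return [[1 if (grid[r][c] and live_neighbors(r, c) in (2, 3))
                  or (not grid[r][c] and live_neighbors(r, c) == 3)
             else 0
             for c in range(cols)]
            for r in range(rows)]

# A "blinker": a row of three live cells oscillates with period two.
world = [[0, 0, 0],
         [1, 1, 1],
         [0, 0, 0]]
print(life_step(world))   # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```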
Now, it seemed obvious to me in 1971 to program Life on a computer. But note that most simple versions of Life have what hero computer scientist Edsger Dijkstra, who had zero tolerance for errors, would consider a showstopper bug.
The glider shape, on an otherwise empty screen, replicates in such a way as to seem to creep across the screen, because its starter shape creates a shape which creates a glider again, offset diagonally from its “sharp end”.
It should move off the screen; one should see it disappear. But in most implementations, such as this online Java implementation of Life, the band of cells at each border is most conveniently treated as an unchanging row of dead cells, and this, as shown, gives a wrong answer.
The theoretical predictions as to the lifetimes of various starter patterns cannot be confirmed with these models.
Now, it is tactically possible to use clever programming to account for the border problem. Indeed, Dijkstra used an ontological approach, predefining his words carefully, and (while Dijkstra had no use for “object-oriented programming”) he would posit concepts and objects to clarify algorithms. He might say that the program’s display should be rigorously considered, by the programmer, as a peephole or window onto a lifeworld (Lebenswelt) represented separately and independently by an array.
Typically, suggestions like this, as made by Dijkstra before his death in 2002, were met with the same sort of response Kant encountered to his political writings. Kant’s paraphrase of his critics, in his pamphlet “On the Old Saw”, was “that is all very well, Herr Learned Professor, but we are practical men and your suggestion is elaborate: surely there must be a simpler solution to the problem from common sense”.
Kant’s riposte was that he was doing common sense…better than the typical “practical” statesmen of his or any other time, most of whom were best described, as was Talleyrand, as shit in silk stockings.
To Dijkstra, and to programmers like him, the patronizing response was “that is very nice but you just made me think, you dreg, and I didn’t think it, you dog, and I own the company, you waterfly, therefore let’s not do it: let’s NOT by any means separate the display from the array”.
However, Dijkstra would then go on, I think, to say that this merely postpones the problem to the limits of the array. In general, whenever Life generation occurs at any of the four edges of the array, values need to be generated “beyond” that edge, because those values may “feed back” to visible cells. In my glider illustration, when the glider reaches the bottom of the screen (or any other side), it “wants” to generate a new cell below its three bottom cells, as their neighbor, and this new cell influences the visible cells in the next step. Instead, because it is forgotten, the visible cells turn into a block of four cells, which produces no new cells.
Clever programming can continually expand the array as needed, of course. In fact, the total area occupied by a mixture of live and dead cells is a completely definable rectangle, bounded by the topmost, bottommost, leftmost, and rightmost live cells. As an optimization strategy, this area can be subdivided into a list of separate neighborhoods. But any possible program will eventually run out of space.
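A hedged sketch of what such clever programming might look like: represent the lifeworld as a set of live-cell coordinates that grows as needed, and treat the display, per the Dijkstra-style separation suggested above, as a mere window onto it.

```python
# The lifeworld as an unbounded set of live-cell coordinates; the display is
# only a window onto it. A sketch, not anyone's production implementation.
from collections import Counter

def life_step(live):
    """live is a set of (row, col) pairs; returns the next generation."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

def window(live, top, left, height, width):
    """Render only the part of the lifeworld visible through the peephole."""
    return "\n".join("".join("#" if (r, c) in live else "."
                             for c in range(left, left + width))
                     for r in range(top, top + height))

# A glider: in this representation it travels on until memory is exhausted;
# it merely leaves the window, instead of collapsing into a block at a border.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(8):
    glider = life_step(glider)
print(window(glider, 0, 0, 6, 6))
```

The set of live cells grows and shrinks as the pattern demands, but it is still a finite set in a finite machine, which is the point of what follows.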
The implications of this differ depending on your philosophy of mathematics.
The Platonist will conclude that the computer program is at best the shadows cast on the wall in Plato’s fable of the Cave, which he tells in The Republic to illustrate the Forms. Also, given Plato’s suspicion of writing, the Platonist will distrust the program in general.
The Formalist has no reason to do anything but smirk, and sell the model as reality to the next fool (yes, I have a great deal of contempt for formalism as such).
The intuitionist would keep on expanding the program, while despairing of the possibility of ever writing a correct program.
Leonard Smith, in his Oxford University Press “Very Short Introduction” to Chaos, writes that computers cannot simulate chaos at all, because they are, no matter how large, finite-state machines.
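A hedged illustration of Smith’s point: iterate the logistic map, not over the reals, but in a toy 16-bit fixed-point arithmetic. Because there are only finitely many representable states, the orbit must eventually revisit one and is therefore eventually periodic, however “chaotic” the underlying map is.

```python
# Iterate the logistic map x -> r*x*(1-x) in 16-bit fixed point. A machine with
# finitely many states must eventually revisit one, so the simulated "chaos"
# is in fact eventually periodic.

SCALE = 1 << 16                     # 16 fractional bits: finitely many values of x
R = int(3.9 * SCALE)                # r = 3.9, well inside the chaotic regime of the real map

def step(x):
    # r * x * (1 - x), truncating at each multiplication as fixed point must
    return ((R * x) // SCALE) * (SCALE - x) // SCALE

seen, x, n = {}, int(0.1234 * SCALE), 0
while x not in seen:                # by pigeonhole, this loop must terminate
    seen[x] = n
    x, n = step(x), n + 1

print(f"the orbit enters a cycle of length {n - seen[x]} after {seen[x]} steps")
```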
We think of storage as “unlimited”. But, programming on an 8000-byte IBM 1401 in 1971, I thought that if my university could get the Chicago Police Department’s 37K 1440, there would be far more than enough storage. The computing industry has in fact repeatedly fallen prey to the illusion that some finite numbers are intrinsically large.
The latest joke is the redefinition of a long int[eger] in the wildly overrated language C as 64 bits by various compilers (no, there’s no such thing as a “standard” C: the language cannot be standardized). Thirty-one bits was “infinity” in 1990, but Eastasia has, apparently, always been at war with Oceania. Not once was the mathematically more sensible decision made: not to support arithmetic in any one fixed precision but, using RISC or microcode, to support variable precision.
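What variable precision means in practice is easy to sketch in software, whatever the hardware refuses to do: build numbers out of fixed-width “limbs” and let the representation grow instead of wrap. A minimal sketch (addition only):

```python
# Variable-precision addition built from fixed-width 32-bit "limbs".
# Numbers are little-endian lists of limbs; the representation grows as needed.
BASE = 1 << 32

def add(a, b):
    """Add two non-negative numbers given as little-endian lists of 32-bit limbs."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % BASE)
        carry = s // BASE
    if carry:                          # grow by one limb instead of wrapping to zero
        result.append(carry)
    return result

def to_int(limbs):
    return sum(limb << (32 * i) for i, limb in enumerate(limbs))

x = [0xFFFFFFFF, 0xFFFFFFFF]           # 2**64 - 1, the current "infinity", as two limbs
print(to_int(add(x, [1])) == 2**64)    # True: one more than "infinity" is just another number
```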
The intuitionist would say that we can always add memory, but this operation, first mentioned by Turing in arguing for the universality of his abstract machine, cannot merely be mentioned from the standpoint of mathematical truth: it must actually be possible.
It seems to me that, from a neutral point of view, the operation “obtain additional memory” would have to be part of a correct implementation of any computer model of a mathematical process which assumes infinite resources, unlike, strangely, “obtain additional time”. But, absent a World, Galactic, or Universal computer that would consume all available matter, this is an impossibility.
But: the social implications are clear. Insofar as software is a simulation, it cannot ever be trusted.
It is said that the unexpected threat of a Russian default in 1998 was the “black swan” that triggered the failure of the proprietary mathematical software developed at Long-Term Capital Management, where the “black swan” is the highly improbable case that breaks the model. The Platonic “visionaries” of LTCM had seen, correctly, that while traditional business software is fixed point, far more sophisticated hedging models could be built by using floating-point (real) numbers.
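The distinction is concrete. Business arithmetic is exact at a fixed number of decimal places; binary floating point trades that exactness for range and speed, which suits a model and ruins a ledger. A small illustration:

```python
# Fixed-point (exact decimal) arithmetic versus binary floating point.
from decimal import Decimal

# A ledger: ten postings of $0.10 must total exactly $1.00.
print(sum(Decimal("0.10") for _ in range(10)))    # 1.00, exactly
print(sum(0.10 for _ in range(10)) == 1.0)        # False: 0.9999999999999999

# A model, by contrast, is happy with approximate reals over a huge dynamic range.
print(0.1 * 1e300)                                # 1e+299
```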
Unfortunately, these “visionaries” didn’t know much history, had never read about the Bolshevik defaults on the capitalist debts of the Tsar, and didn’t foresee that the market would overreact to a Russian threat of default.
But: software bugs are white swans. Most software includes limitations and bugs.
It’s all very well to do philosophy as a form of shopping. Most people, including most graduate students of philosophy, are simply not widely educated enough to even begin to see philosophy in action, not in a seminar room, but as a pathological, historical dynamic. So, I’m probably pissing in the wind here.