Archive of February 2009:


Conventional logic views things from outside, so to speak. Hence they appear limited and closed. They become objects.

This logic corresponds to the conventional ideal of science: objectivity, always intended, but in the end remaining an idealization. For even science is persistently forced to immerse itself, to get into the thick of things, to be touched by the other, to take part without distance.

Actually, it is no problem to accept our own entanglement in reality as an essential condition for knowing. We merely have to factor the step towards objectivity into our calculations by giving room for the spaces. They simply belong to the things. The space stands for all that the thing cannot do without: in particular the interactions, the contacts with the outside. It is the space that mingles with others, that penetrates them and is penetrated by them.

In this way, by means of the space, the thing is able to leave itself behind, to become another, while also remaining entirely itself, since it exists only in permanent renewal. Seen in this light, its being is made precisely of this distancing from itself, of viewing itself through another’s eyes.

That is the core of every existence. And activity. It is what we sometimes call time. But before any standardization, without the artificial monotony of clocks. Lacking any coldness. It is life. And this life is knowledge. Reflection is elementary.

Small Steps

Computational interactivity is enabled by continual breaks in the program flow. Everything proceeds in small steps, which may be strung together in almost any order. In this way, even running processes remain modifiable.

Meanwhile, even seemingly static states turn out to be processes, for they have to be renewed again and again; otherwise they would not take place at all.
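A minimal sketch of this idea in Python (all names are illustrative, not taken from any particular system): each task is a generator that yields after every small step, and a simple loop interleaves the tasks, so that running processes remain interruptible and modifiable between steps.

```python
# Cooperative "small steps": each task is a generator that yields
# after every small unit of work, handing control back to the loop.

def counter(name, limit):
    for i in range(limit):
        yield f"{name}: step {i}"  # a break in the flow; state is kept

def run(tasks):
    # Interleave all tasks step by step, in round-robin order.
    log = []
    while tasks:
        task = tasks.pop(0)
        try:
            log.append(next(task))
            tasks.append(task)      # re-enqueue: the process is renewed
        except StopIteration:
            pass                    # task finished; simply drop it
    return log

print(run([counter("a", 2), counter("b", 3)]))
```

Between any two steps, the loop could just as well modify a task, insert a new one, or drop one: nothing in a task survives except by being taken up again.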


Not only human users can interact with computer programs, but also other programs. In fact, this case is far more frequent. It happens all the time: a great deal of the programs running in a computer merely control other programs, or provide them with data or hardware resources.
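One program interacting with another can be sketched in a few lines of Python; here the “other program” is simply a second interpreter process that doubles whatever number it is given (the example is illustrative, not drawn from the text):

```python
import subprocess
import sys

# A controlling program starts another program, feeds it data,
# and reads back the result -- interaction without any human user.
child = subprocess.run(
    [sys.executable, "-c", "print(int(input()) * 2)"],
    input="21\n", capture_output=True, text=True,
)
print(child.stdout.strip())
```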

Furthermore, every program is composed of smaller units, which themselves may be called “programs”. Each serves a certain function. When required, it gets initialized and interacts with other parts of the program by means of special channels or interfaces — which, after all, are again nothing but such sub-programs serving certain functions…
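As a sketch of such composition (assuming Python; the sub-programs here are hypothetical, chosen only for illustration): each unit is a plain function serving one function, and the whole program is nothing but their coordination through a narrow interface of values passed along.

```python
# A "program" built from smaller programs, talking through a narrow
# interface: each part is a plain function taking and returning data.

def read_input(raw):      # sub-program 1: parsing
    return [int(x) for x in raw.split(",")]

def process(numbers):     # sub-program 2: the actual work
    return sum(numbers)

def present(result):      # sub-program 3: output formatting
    return f"total = {result}"

def program(raw):         # the whole: mere coordination of the parts
    return present(process(read_input(raw)))

print(program("1,2,3"))
```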


Most of the activity in a computer is started and controlled by programs, not directly by the user. These programs themselves are controlled by other programs, which again depend on others, and so on. There is interactivity between different levels of control, as well as between programs residing on the same level, so to speak.

In this sense, “higher” programs are normally in no way superior to those they control. Controlling is just a special function to be fulfilled, and often it is best done by programs that are as simple as possible. They neither have to know more than the others nor see everything happening elsewhere; usually they decide on the basis of very few indicators, following strict, clear rules.
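A deliberately simple controller might look like this (a sketch, assuming Python; the states and rule are invented for illustration): it knows nothing about the work itself and decides on a single indicator, following one strict rule.

```python
# A minimal controller: it sees only the worker's reported state
# and applies one clear rule, nothing more.

def supervise(worker_state):
    if worker_state == "failed":  # the single indicator it looks at
        return "restart"          # the single rule it follows
    return "leave alone"

print(supervise("running"))
print(supervise("failed"))
```

The controller’s simplicity is the point: a supervisor that tried to understand everything its workers do would only reproduce their complexity one level up.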

The same is true for programs that integrate others into one big whole. Compared with the richness of those components, these central, comprising parts are very often fairly simple, lacking every extra complexity that would make the work of coordination harder. The more resources the core components consume for themselves and for their task, the less is left for the rest; that means, above all: for the user.


On the one hand, every program should be as simple as possible; on the other hand, every program is characterized by the ability to encompass different possibilities of acting, which allows it to react to different circumstances, such as different inputs.

This is true even when it is just a matter of becoming active or not: a simple question of yes or no, of on or off, 1 or 0.

Different possibilities, and the moment of decision they involve, are decisive for even the smallest elements of programs. They are the basis of every computational task.
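Reduced to its smallest form (a sketch in Python, with an invented name), such an element is nothing but a branch between two possibilities:

```python
# Even the smallest element of a program is a decision:
# to become active or not, 1 or 0.

def gate(signal):
    if signal:     # the moment of decision
        return 1   # become active
    return 0       # or do not

print(gate(True), gate(False))
```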

Decision changes everything. The whole thing — and especially its relations to the parts. So that these become much more a matter of coordination than of addition.

Round About 100%

Of course we could say that a program consisting of many other programs contains all of them, and thus should comprise and possess all their possibilities too. But if this is taken to imply total control and a perfect overview, then it is more than misleading. The association is definitely wrong.

One could argue that, in theory, it should be possible to grasp all possible branches. And with very simple programs running in a highly reliable context, this may actually be practicable. If so, there is nothing to be said against making use of them. Yet trying to reduce all computing to such clearly determined programs is absolutely unrealistic. That would mean, on the one hand, closing one’s eyes to the unpredictabilities that arise time and again, particularly when and where they are not expected; and on the other hand, it would mean excessively restricting the ways computers can be used.
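For a sufficiently simple program, the exhaustive grasp is indeed practicable; a sketch in Python (the two-input function is invented for illustration):

```python
from itertools import product

# A tiny program with two boolean inputs: four cases in all,
# so every branch can be enumerated and checked completely.
def tiny(a, b):
    return a and not b

# The full table of the program's behaviour -- only possible
# because the input space is this small.
table = {(a, b): tiny(a, b) for a, b in product([False, True], repeat=2)}
print(table)
```

With realistic programs the number of cases explodes, and inputs coming from files, networks, or users cannot be enumerated at all; exhaustiveness stops at the toy scale.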

One great advantage of computers is precisely their ability to deal with situations that are not one hundred percent determined — and thus with situations that, to a good approximation, make up one hundred percent of our real world.

The same is true for the usability of programs, the interactions with their users. The narrower the guidelines these have to obey, the more unwieldy and nested, and therefore opaque, the whole becomes. And in the end this does not even provide more safety; rather, it increases the risk of incorrect use.

Doing Nothing

In general, a program has to react to many different situations, some of them practically unpredictable. Sometimes it is better for a program not to become active at all, maybe even to crash, than to do something that would have bad consequences. So doing nothing always remains an alternative worth taking into account. It is part of each program’s spectrum of possible actions.
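In code, this often takes the shape of a guard clause (a sketch in Python, with an invented example): when a request cannot be handled safely, the program refuses and leaves everything as it was.

```python
# "Doing nothing" as an explicit action: when the request cannot be
# handled safely, refuse it rather than guess and cause harm.

def transfer(balance, amount):
    if amount <= 0 or amount > balance:
        return balance        # do nothing; the state stays untouched
    return balance - amount   # the safe case: act

print(transfer(100, 30))   # a valid request is carried out
print(transfer(100, 500))  # an impossible one changes nothing
```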

As we have noticed before, doing nothing is also fundamental in the sense of interrupting the flow so that modulations or other actions can take effect.

In the end, both forms of inactivity are equal. They constitute the space between periods of activity. So they allow that activity to do — and thus to be! — the right thing in the right place at the right time.