Philosophy of Information



Two objects, though perfectly resembling each other, and even appearing in the same place at different times, may be numerically different: And as the power, by which one object produces another, is never discoverable merely from their idea, it is evident cause and effect are relations, of which we receive information from experience, and not from any abstract reasoning or reflection.

(Hume, Part III, section 1)

The empiricist methodology is not without problems. The biggest issue is that all knowledge becomes probabilistic and a posteriori. What is more, we seem to have intuitions that allow us to formulate some scientific insights with certainty. This cannot be explained within the empirical framework. If knowledge is created by means of the combination of ideas, then there must exist an a priori synthesis of ideas in the human mind. According to Kant, this implies that the human mind can evaluate its own capability to formulate scientific judgments.

In his Kritik der reinen Vernunft (1781) Kant developed transcendental philosophy as an investigation of the necessary conditions of human knowledge. Gradually the term "information" obtained the status of an abstract mass-noun, a meaning that is orthogonal to the classical process-oriented meaning. This, in its turn, led to a revival of philosophical interest in the concept of information. This complex history seems to be one of the main reasons for the difficulties in formulating a definition of a unified concept of information that satisfies all our intuitions.

This is the oldest meaning one finds in the writings of authors like Cicero (106–43 BCE) and Augustine (354–430 CE), and it is lost in the modern discourse, although the association of information with processes persists. This sort of term-formation is notorious for the conceptual difficulties it generates. The transformation to this modern substantiated meaning seems to have been gradual and seems to have been general in Western Europe at least from the middle of the fifteenth century.

"Martin, I suppose, is not a man of information beyond the line of his own business." The text has the capacity to inform me when I read it. In the same sense, when I have received information from a teacher, I am capable of transmitting this information to another student. Thus information becomes something that can be stored and measured. With hindsight, many notions that have to do with optimal code systems, ideal languages and the association between computing and processing language have been recurrent themes in philosophical reflection since the seventeenth century.

Proposals such as these made philosophers sensitive to the deep connections between language and thought. The empiricist methodology made it possible to conceive of the development of language as a system of conventional signs in terms of associations between ideas in the human mind. The issue that is currently known as the symbol grounding problem (how do arbitrary signs acquire their inter-subjective meaning?) was one of the most heavily debated questions in the eighteenth century, in the context of the problem of the origin of languages.

The central question was whether language was given a priori by God or whether it was constructed, and hence an invention, of man himself. Typical was the contest issued by the Royal Prussian Academy of Sciences in 1769: "Assuming men abandoned to their natural faculties, are they able to invent language, and by what means will they come to this invention?"

Philosophically more relevant is the work of Leibniz (1646–1716) on a so-called characteristica universalis: the notion of a universal logical calculus that would be the perfect vehicle for scientific reasoning. This principle was rejected by Wolff (1679–1754), who suggested a more heuristically oriented characteristica combinatoria (van Peursen). These ideas had to wait for thinkers like Boole (1854, An Investigation of the Laws of Thought), Frege (1879, Begriffsschrift), Peirce (who in 1886 already suggested that electrical circuits could be used to process logical operations) and Whitehead and Russell (1910–1913, Principia Mathematica) to find a more fruitful treatment.

The fact that the frequencies of letters vary in a language has been known since the invention of book printing. This knowledge was used extensively to decode ciphers from the seventeenth century onwards (Kahn; Singh). In the 1830s an assistant of Samuel Morse, Alfred Vail, determined the frequency of letters used in a local newspaper in Morristown, New Jersey, and used them to optimize Morse code. Historically important but philosophically less relevant are the efforts of Charles Babbage to construct computing machines (the Difference Engine, 1822, and the Analytical Engine, 1837) and the attempt of Ada Lovelace (1815–1852) to design what is considered to be the first programming language for the Analytical Engine.

The simplest way of representing numbers is via a unary system. Here the length of the representation of a number is equal to the size of the number itself: representing the number n takes n symbols. This system has enormous drawbacks, since in principle one needs an infinite amount of symbols to code the natural numbers, and because of this the same mathematical operations (addition, multiplication, etc.) cannot be standardized irrespective of the size of the numbers involved. The number zero was invented in India. From a modern perspective an infinite number of positional systems is possible as long as we have 0 as a placeholder and a finite number of other symbols.


Note that the length of these representations differs considerably. Using a positional representation, mathematical operations can be standardized irrespective of the order of magnitude of the numbers we are dealing with: the same algorithms for addition and multiplication work for small and for large numbers. The concept of a positional number system was brought to Europe by the Persian mathematician al-Khwarizmi (ca. 780–850 CE). His main work on the Hindu number system (ca. 820 CE) was later translated into Latin, and the Latinized version of his name gave us the word "algorithm".
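As an illustration (a minimal Python sketch added here, not part of the original text; the helper names unary and positional are merely illustrative), the following snippet compares the length of a unary representation of a number with the lengths of its binary and decimal representations, showing the logarithmic compression a positional system achieves:

    import math

    def unary(n):
        # Unary representation: n copies of the same symbol.
        return "|" * n

    def positional(n, base=2):
        # Positional representation of n in the given base.
        digits = []
        while n > 0:
            digits.append(str(n % base))
            n //= base
        return "".join(reversed(digits)) or "0"

    for n in (5, 50, 500, 5000):
        print(n, len(unary(n)), len(positional(n, 2)), len(positional(n, 10)))
        # The positional lengths grow roughly like log_base(n),
        # while the unary length grows like n itself.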

Positional number systems simplified commercial and scientific calculations. In 1544 Michael Stifel introduced the concept of the exponent of a number in his Arithmetica integra. Stifel compared the arithmetic sequence −3, −2, −1, 0, 1, 2, 3, in which the terms differ by 1, with the geometric sequence 1/8, 1/4, 1/2, 1, 2, 4, 8, in which the terms differ by a factor 2. The exponent notation allowed him to rewrite the values of the second sequence as 2^−3, 2^−2, 2^−1, 2^0, 2^1, 2^2, 2^3, which arguably was the first logarithmic table. A more definitive and practical theory of logarithms was developed by John Napier (1550–1617) in his main work of 1614 (Napier). As is clear from the match between arithmetic and geometric progressions, logarithms reduce products to sums:

    log(xy) = log(x) + log(y)

After the publication of logarithmic tables by Briggs in 1624 this new technique for facilitating complex calculations rapidly gained popularity. Galileo had already suggested that the analysis of phenomena like heat and pressure could be reduced to the study of the movements of elementary particles. Within the empirical methodology this could be conceived as the question of how the sensory experience of a secondary quality, like the heat of an object or a gas, could be reduced to the movements of particles.

Bernoulli's Hydrodynamica (published in 1738) was the first to develop a kinetic theory of gases in which macroscopically observable phenomena are described in terms of microstates of systems of particles that obey the laws of Newtonian mechanics, but it took quite an intellectual effort to come up with an adequate mathematical treatment. Clausius made a conclusive step when he introduced the notion of the mean free path of a particle between two collisions. This opened the way for a statistical treatment by Maxwell, who formulated his distribution in 1860; it was the first statistical law in physics.

The definitive formula that tied all these notions together, and that is engraved on his tombstone (though the formula in this form is actually due to Planck), was developed by Boltzmann:

    S = k log W

It describes the entropy S of a system in terms of the logarithm of the number of possible microstates W consistent with the observable macroscopic state of the system, where k is the well-known Boltzmann constant.

In all its simplicity the value of this formula for modern science can hardly be overestimated. It connects the additive nature of the logarithm with the extensive qualities of entropy, probability, typicality and information, and it is a fundamental step in the use of mathematics to analyze nature. Later Gibbs refined the formula to cover systems whose microstates are not equally probable:

    S = -k Σ_i p_i ln p_i

where p_i is the probability that the system is in the i-th microstate. The modern theories of information emerged in the middle of the twentieth century in a specific intellectual climate in which the distance between the sciences and parts of academic philosophy was quite big.

The research program of logical positivism was a rigorous reconstruction of philosophy based on a combination of empiricism and the recent advances in logic. It is perhaps because of this intellectual climate that early important developments in the theory of information took place in isolation from mainstream philosophical reflection.

A landmark is the work of Dretske in the early eighties (Dretske 1981). Since the turn of the century, interest in the philosophy of information has grown considerably, largely under the influence of the work of Luciano Floridi on semantic information. The rapid theoretical development of quantum computing and the associated notion of quantum information has also had its repercussions on philosophical reflection.

The research program of logical positivism of the Wiener Kreis in the first half of the twentieth century revitalized the older project of empiricism. Its ambition was to reconstruct scientific knowledge on the basis of direct observations and logical relations between statements about those observations. The old Kantian criticism of empiricism was revitalized by Quine. Within the framework of logical positivism, induction was invalid and causation could never be established objectively. Scientific theories formulated as general laws can never be verified definitively, but they can be falsified by a single observation.

Thus it can be said that the amount of empirical information conveyed by a theory, or its empirical content, increases with its degree of falsifiability.


(Popper, emphasis in original)

Popper is aware of the fact that the empirical content of a theory is related to its falsifiability, and that this in its turn has a relation with the probability of the statements in the theory. Theories with more empirical information are less probable. In a passage that is programmatic for the later development of the concept of information he defines the notion of logical probability:

The logical probability of a statement is complementary to its falsifiability: it increases with decreasing degree of falsifiability. The logical probability 1 corresponds to the degree 0 of falsifiability and vice versa. It is possible to interpret numerical probability as applying to a subsequence picked out from the logical probability relation for which a system of measurement can be defined, on the basis of frequency estimates.

These issues were later developed in the philosophy of science. Although the work of Carnap motivated important developments in both philosophy of science and philosophy of information, the connection between the two disciplines seems to have been lost. There is no mention of information theory or any of the more foundational work in philosophy of information in Kuipers, but the two disciplines certainly have overlapping domains. The use of base-2 logarithms in the definition of entropy ensures that the code length is measured in bits (binary digits). It is easily seen that the communication entropy of a system is maximal when all the messages have equal probability and thus are typical.

Shannon's entropy formula,

    H(P) = -Σ_i p_i log2 p_i,

which can be interpreted as the inverse of the Boltzmann entropy, covers a number of our basic intuitions about information. Shannon also offers a theoretical framework in which binary strings can be interpreted as words in a programming language containing a certain amount of information (see section 3).
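As a concrete illustration (a minimal Python sketch added here, not present in the original text; the function name entropy is merely illustrative), the following computes the communication entropy of a probability distribution over messages; it is easy to check that the value is maximal for the uniform distribution:

    import math

    def entropy(probabilities):
        # Shannon entropy in bits: H = -sum p_i * log2(p_i).
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(entropy([0.25, 0.25, 0.25, 0.25]))  # uniform: 2.0 bits, the maximum
    print(entropy([0.7, 0.1, 0.1, 0.1]))      # skewed: roughly 1.36 bits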

Logarithms, as a reduction of multiplication to addition, are discussed in section 3. The problem of relating a set of statements to a set of observations and defining the corresponding probability was taken up by Carnap. Statements of this type are either analytical or contradictory. In the words of his student Solomonoff: "Through his own formal linguistic analysis, he was able to assign a priori probabilities to any possible string of symbols that might represent the universe." The method Carnap used for assigning probabilities was not universal and depended heavily on the code systems used.

In a paper from 1960 Solomonoff was the first to sketch an outline of a solution to this problem. He formulated the notion of what is now called a universal probability distribution: consider the set of all possible finite strings to be programs for a universal Turing machine U, and define the probability of a string x of symbols in terms of the length of the shortest program p that outputs x on U.
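In modern notation (a reconstruction based on the description above, not a formula taken from the original text) the universal distribution is often written as

    m(x) = Σ_{p : U(p) = x} 2^(-|p|)

where the sum ranges over the (prefix-free) programs p that output x on the universal machine U and |p| is the length of p in bits; the dominant contribution comes from the shortest such program.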

This notion of Algorithmic Information Theory was invented independently, somewhat later, by Kolmogorov and Chaitin. The actual definition of the complexity measure is:

    K_U(x) = min { |p| : U(p) = x }

i.e., the Kolmogorov complexity of a string x is the length of the shortest program p that produces x when run on the universal machine U. Algorithmic Information Theory (also known as Kolmogorov complexity theory) has gained rapid acceptance as a fundamental theory of information. The idea that algorithmic complexity theory is a foundation for a general theory of artificial intelligence and theory of knowledge was already suggested by Solomonoff and Chaitin. Probably because of its technical nature, the theory has been largely ignored by the philosophical community.

Yet it stands out as one of the most fundamental contributions to information theory in the twentieth century, and it is clearly relevant for a number of philosophical issues, such as the problem of induction. In a mathematical sense information is associated with measuring extensive properties of classes of systems with finite but unlimited dimensions (systems of particles, texts, codes, networks, graphs, games, etc.). This suggests that a uniform treatment of various theories of information is possible.

Information-B: probabilistic, information-theoretic, measured quantitatively. The historical material presented in this article suggests that reflection on Information-A (logic, knowledge) is historically much more interwoven with reflection on Information-B than was generally known up till now. The research program of logical positivism can with hindsight be characterized as the attempt to marry a possible-worlds interpretation of logic with probabilistic reasoning (Carnap; Popper; for a recent approach see Hutter et al.).

However, an attempt to unify Information-A and Information-B seems a viable exercise. Verlinde even presented a reduction of gravity to information (see the entry on information processing and thermodynamic entropy). With respect to the main definitions of the concept of information, like Shannon information, Kolmogorov complexity, semantic information and quantum information, a unifying approach to a philosophy of information is possible when we interpret it as an extension of the philosophy of mathematics.

If we look at the foundations of information and computation there are two notions that are crucial: the concept of a data set and the concept of an algorithm. Once we accept these notions as fundamental, the rest of the theory of data and computation unfolds quite naturally. One might sustain a formalist, Platonist or intuitionistic view of the mathematical universe (see the entry on the philosophy of mathematics) and still agree on the basic notion of what effective computation is. The theory of computing, because of its finitistic and constructivist nature, seems to live more or less on the common ground in which these views overlap.

Information as a scientific concept emerges naturally in the context of our everyday dealings with nature when we measure things. Examples are ordinary actions like measuring the size of an object with a stick, counting on our fingers, or drawing a straight line using a piece of rope. These processes are the anchor points of abstract concepts like length, distance, number and straight line that form the building blocks of science.

The fact that these concepts are rooted in our concrete experience of reality guarantees their applicability and usefulness. The earliest traces of information processing evolved around the notions of counting, administration and accountancy. Example: Tally sticks. One of the most elementary information measuring devices is unary counting using a tally stick. Tally sticks were already used around 20,000 years ago. The process of unary counting is based on the elementary operation of catenation of symbols into sequences.

This measuring method illustrates a primitive version of the concept of extensiveness of information: the length of the sequence is a measure of the number of items counted. Note that such a sequential process of counting is non-commutative and non-associative.

This example helps to understand the importance of context in the analysis of information. In itself a scratch on a stick may have no meaning at all, but as soon as we decide that such a scratch represents another object or event it becomes a meaningful symbol. When we manipulate it in such a context we process information. In principle a simple scratch can represent any event or object we like: symbols are conventional.

Definition: A symbol is a mark, sign or word that indicates, signifies, or is understood as representing an idea, object, or relationship. Symbols are the semantic anchors by which symbol manipulating systems are tied to the world. Symbol manipulation can take many forms and is not restricted to sequences. Many examples of different forms of information processing can be found in prehistoric times.

Example: Counting sheep in Mesopotamia. With the process of urbanization, early accounting systems emerged in Mesopotamia using clay tokens to administer cattle (Schmandt-Besserat). Differently shaped tokens were used for different types of animals. After the registration the tokens were packed in a globular clay container, with marks representing their content on the outside. The container was baked to make the registration permanent.

Thus early forms of writing emerged. Later the tokens were mounted on a string to preserve their order. The historical transformation from sets to strings is important: it is a more sophisticated form of coding of information. Formally we can distinguish several levels of complexity of token combination: sets, in which only membership counts; multisets, in which membership and frequency count; and sequences, in which membership, frequency and order count.

Sequences of symbols code more information than multisets, and multisets are more expressive than sets. Thus the emergence of writing itself can be seen as a quest to find the most expressive representation of administrative data. When measuring information in sequences of messages it is important to distinguish the aspects of repetition, order and grouping. The extensive aspects of information can be studied in terms of such structural operations (see the entry on substructural logics).

We can study sets of messages in terms of operators defined on sequences of symbols. Three structural properties are relevant here: contraction (repeated occurrences of the same symbol may be collapsed into one), commutativity (symbols may be exchanged, so order is irrelevant) and associativity (the way symbols are grouped is irrelevant). Observation: Systems of sequences with contraction, commutativity and associativity behave like sets. The structural aspects of sets, multisets and strings can be formulated in terms of these properties.

A set is a collection of objects in which each element occurs only once. Sets are associated with our normal everyday naive concept of information as new, previously unknown, information: we only update our set if we get a message we have not seen before. This notion of information is forgetful both with respect to sequence and with respect to frequency; the original series of messages cannot be reconstructed from the set. This behavior is associated with the notion of extensionality of sets: we are only interested in the equality of elements, not in their frequency.

A multiset is a collection of objects in which the same element can occur multiple times. Multisets are associated with a resource-sensitive concept of information, as defined in Shannon information.


We are interested in the frequency of the messages. This concept is forgetful with regard to sequence: we update our multiset every time we get a message, but we forget the structure of the sequence. This behavior is associated with the notion of extensiveness of information: we are interested both in the equality of elements and in their frequency. In a sequence, by contrast, the whole structure of the series of messages is stored. Sequences are associated with Kolmogorov complexity, which is defined on sequences (strings) of symbols.
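The following Python sketch (an illustration added here, not part of the original text) shows how the three levels of structure discard successively less information about a series of messages:

    from collections import Counter

    messages = ["a", "b", "a", "c", "a", "b"]   # the original sequence

    as_sequence = list(messages)        # keeps order and frequency
    as_multiset = Counter(messages)     # keeps frequency, forgets order
    as_set      = set(messages)         # forgets both order and frequency

    print(as_sequence)  # ['a', 'b', 'a', 'c', 'a', 'b']
    print(as_multiset)  # Counter({'a': 3, 'b': 2, 'c': 1})
    print(as_set)       # {'a', 'b', 'c'} (in some order)
    # The sequence determines the multiset and the multiset determines the set,
    # but neither mapping can be inverted: information is lost at each step.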

Sets may be interpreted as spaces in which objects can move freely. When identical objects are in each other's vicinity they collapse into one object.



Multisets can be interpreted as spaces in which objects can move freely, with the constraint that the total number of objects stays constant. This is the standard notion of extensiveness: the total volume of a space stays constant, but the internal structure may differ. Sequences may be interpreted as spaces in which objects have a fixed location. In general a sequence contains more information than the derived multiset, which in turn contains more information than the associated set. Observation: The interplay between the notions of sequence and multiset can be interpreted as a formalisation of the malleability of the piece of wax that pervades the history of philosophy as a paradigm of information.

Different sequences (forms) are representations of the same multiset (matter). The volume of the piece of wax (the length of the string) is constant and is thus a measure of the amount of information that can be represented in the wax. In terms of quantum physics the stability of the piece of wax seems to be an emergent property: the statistical instability of objects at the atomic level seems to even out when large quantities of them are manipulated. The notion of a set in mathematics is considered to be fundamental.

Any identifiable collection of discrete objects can be considered to be a set. The relation between the theory of sets and the concept of information becomes clear when we analyze the basic statement e ∈ A, which reads: the object e is an element of the set A. Observe that this statement, if true, represents a piece of semantic information: it is well-formed, meaningful and truthful. The idea that mathematical concepts are defined implicitly by a set of axioms was proposed by Hilbert, but it is not uncontroversial (see the entry on the Frege-Hilbert controversy).

The fact that the definition is implicit entails that we only have examples of what sets are, without the possibility of formulating a positive predicate that defines them. Elements of a set are not necessarily physical, abstract, spatial, temporal, simple or real. The only prerequisite is the possibility of formulating clear judgments about membership. This implicit definition of the notion of a set is not unproblematic.

We might define objects that at first glance seem to be proper sets, but that after scrutiny appear to be internally inconsistent. This is the basis for a number of classical paradoxes. The crux of these paradoxes lies in the combination of the notions of universality, negation and self-reference. Any person who is not a Cretan can state that all Cretans always lie; for a Cretan this is not possible, because of the universal, negative, self-referential nature of the statement. If the statement is true, he is not lying, which makes the statement untrue: a real paradox based on self-contradiction.

Along the same lines Russell coined the concept of the set of all sets that are not members of themselves, for which membership cannot be determined. Apparently the set of all sets is an inadmissible object within set theory. In general there is, in philosophy and mathematics, a limit to the extent to which a system can verify statements about itself from within the system. The implicit definition of the concept of a set entails that the class of sets is essentially open. There are mathematical definitions of objects for which it is unclear or highly controversial whether they define a set or not.

Modern philosophy of mathematics starts with the Frege-Russell theory of numbers (Frege; Goodstein; see the entry on alternative axiomatic set theories), which defines numbers in terms of sets. If we accept the notion of a class of objects as valid and fundamental, together with the notion of a one-to-one correspondence between classes of objects, then we can define numbers as sets of equinumerous classes. Any set of, say, four objects then becomes a representation of the number 4, and for any other set of objects we can establish membership of the equivalence class defining the number 4 by defining a one-to-one correspondence with our example set.

The fragment of the mathematical universe that emerges is relatively uncontroversial, and both Platonists and constructivists might agree on its basic merits. We observe that addition and multiplication specify multisets: both are non-contractive, commutative and associative. If we get both messages m and n, the total amount of information in the combined messages is the sum of the amounts of information in the individual messages.

This leads to the following constraint (additivity): I(mn) = I(m) + I(n). Furthermore we want bigger numbers to contain more information than smaller ones, which gives a monotonicity constraint: I(m) ≤ I(n) whenever m ≤ n. Together with a normalisation that fixes the unit of measurement, this yields the following result. Theorem: The logarithm is the only mathematical operation that satisfies Additivity, Monotonicity and Normalisation. When we decide that (1) multisets are the right formalisation of the notion of extensiveness, and (2) multiplication is the right operation to express additivity, then the logarithm is the only measurement function that satisfies our constraints.
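A compact way to see why (a reconstruction of the standard argument, not taken from the original text): additivity together with monotonicity is Cauchy's functional equation in multiplicative form, whose monotone solutions are exactly the logarithms:

    I(mn) = I(m) + I(n), with I monotone  ⇒  I(m) = c · log m for some constant c > 0
    the normalisation I(2) = 1 then fixes c, giving I(m) = log2 m

so measuring information in bits amounts to choosing base 2 for the logarithm.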

For finite sets we can now specify the amount of information we get when we learn which element of a set has been selected, conditional on knowing the set as a whole. The bigger the set, the harder the search, and the more information we get when we find what we are looking for. The associated function is the so-called Hartley function. Definition: If a sample from a finite set S is picked uniformly at random, the information revealed after the outcome is known is given by the Hartley function (Hartley):

    H0(S) = log2 |S|

where |S| denotes the cardinality of S. The combination of these definitions gives a theorem that ties together the notions of conditional information and probability.
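A minimal Python illustration of the Hartley function (added here; the function name hartley is merely illustrative):

    import math

    def hartley(set_size):
        # Information (in bits) revealed when one element of a set of the given
        # size is picked uniformly at random: H0(S) = log2 |S|.
        return math.log2(set_size)

    print(hartley(8))     # 3.0 bits: identifying one element out of eight
    print(hartley(1024))  # 10.0 bits: the larger the set, the more information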

The information about an element x of a finite set S, conditional on knowing the set, is equal to −log2 of the probability that we select this element x under a uniform distribution, which is a measure of our ignorance when we know the set but not which element of the set is to be selected. If we consider S to be a set of messages, then the probability that we select an element x from the set (i.e., receive the message x) under a uniform distribution is 1/|S|. Using these results we define the conditional amount of information in a subset A of a finite set S as:

    I(A|S) = log2 |S| − log2 |A| = −log2 P(A|S)

The formal properties of the concept of probability are specified by the Kolmogorov axioms of probability: (1) the probability of an event is a non-negative real number; (2) the probability that some event in the entire sample space occurs is 1; and (3) the probability of a countable union of mutually exclusive events is the sum of the probabilities of the individual events (countable additivity).

Note that this is the same notion of additivity as the one defined for the concept of information. At the subatomic level the Kolmogorov axiom of additivity loses its validity in favor of a more subtle notion (see section 5). From a philosophical point of view the importance of this construction lies in the fact that it leads to an ontologically neutral concept of information based on a very limited, robust base of axiomatic assumptions. The notions of a set of messages or a set of microstates are specializations of the more general mathematical concept of a set.

The concept of information already exists at this more fundamental level. Although many open questions remain, specifically in the context of the relation between information theory and physics, the perspectives on a unified theory of information now look better than they did at the beginning of the twenty-first century. The definition of the amount of information in a number in terms of logarithms allows us to classify other mathematical functions in terms of their capacity to process information. The information efficiency of a function is the difference between the amount of information in the input of a function and the amount of information in its output (Adriaans [OIR]).

It allows us to measure how information flows through a set of functions. In general, deterministic information processing systems do not create new information; they only process it. The following fundamental theorem about the interaction between information and computation is due to Adriaans and Van Emde Boas: a deterministic program cannot increase the information content of its input by more than a constant that depends on the program itself. Likewise, for Kolmogorov complexity, the output of a program can never be more complex than the length of the program itself, plus a constant.

This is analyzed in depth in Adriaans and Van Emde Boas. In a deterministic world, if a program computes a certain number, the amount of information in that number is bounded by the information in the program and its input; the fact that it might take a long time to compute the number is irrelevant, as long as the computation halts.


Estimating the information efficiency of elementary functions is not trivial. The primitive recursive functions (see the entry on recursive functions) have one information-expanding operation, counting, and one information-discarding operation, choosing; all the others are information neutral. The information efficiency of more complex operations is determined by a combination of counting and choosing. From an information efficiency point of view the elementary arithmetical functions are complex families of functions that describe computations with the same outcome, but with different computational histories.

Some arithmetical operations expand information, some have constant information and some discard information. During the execution of deterministic programs expansion of information may take place, but, if the program is effective, the descriptive complexity of the output is limited. The flow of information is determined by the succession of types of operations, and by the balance between the complexity of the operations and the number of variables.

We briefly discuss the information efficiency of the two basic recursive functions on two variables and their coding possibilities. Addition: when computing sums of natural numbers we make choices in the order of the computations that influence the information efficiency of the resulting computation. Depending on the order of the underlying operations, a variety of values may emerge for the information efficiency. Even from a cognitive point of view we experience this difference in difficulty when computing the same sum in different ways; the sketch after the next paragraph illustrates the effect.

Observation: Information efficiency is non-associative for addition. This explains, in part, why the so-called Subset Sum Problem (see section 6) is hard. The information efficiency of expressions describing sums of subsets of numbers is not uniquely defined, a fact that influences the possibilities of searching these sets.
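A small Python sketch (an illustration added here, under the assumption that the information in a natural number n is measured as log2 n, as in the text above) shows how the same sum, computed under different groupings, passes through intermediate results that carry different amounts of information, so the computational histories differ even though the outcome is the same:

    import math

    def info(n):
        # Amount of information in a natural number, measured as log2(n).
        return math.log2(n)

    def step_efficiency(a, b):
        # Information efficiency of one binary addition: info in minus info out.
        return info(a) + info(b) - info(a + b)

    # The same sum 3 + 5 + 16, grouped in two different ways.
    left_first  = [step_efficiency(3, 5),  step_efficiency(3 + 5, 16)]
    right_first = [step_efficiency(5, 16), step_efficiency(3, 5 + 16)]

    print(left_first)   # per-step efficiencies for (3 + 5) + 16
    print(right_first)  # per-step efficiencies for 3 + (5 + 16)
    # The final value 24 is the same, but the intermediate results (8 vs 21)
    # and hence the per-step information efficiencies differ.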

Addition is associated with information storage in terms of sequences or strings of symbols. It is information discarding for natural numbers bigger than 1. Still, addition has information preserving qualities. If we add numbers built from different log units we can reconstruct the frequency of the units from the resulting number: for example, 234 = 2 × 100 + 3 × 10 + 4 × 1.

Since the information in the building blocks 100, 10 and 1 is given, the number representation can still be reconstructed. This implies that natural numbers, in terms of addition of powers of k, in principle code two types of information: value and frequency.

We can use this insight to code complex typed information in single natural numbers (see section 3). Multiplication is by definition information conserving. Still, multiplication does not preserve all information in its input: the order of the operation is lost. This is exactly what we want from an operator that characterizes an extensive measure: only the extensive qualities of the numbers are preserved.

This leads to the observation that some numbers act as information building blocks of other numbers, which gives us the concept of a prime number. Definition: A prime number is a natural number greater than 1 that is only divisible by itself and 1. The Fundamental Theorem of Arithmetic can be seen as a theorem about conservation of information: for every natural number there is a multiset of prime numbers that contains exactly the same amount of information.

The factors of a number form a so-called multiset: a set that may contain multiple copies of the same element. For example, 12 = 2 × 2 × 3, so its multiset of prime factors is {2, 2, 3}. This makes multisets a powerful device for coding information, since they code qualitative information (which primes occur) as well as quantitative information (how often each prime occurs). This implies that natural numbers, in terms of multiplication of primes, also code two types of information: value and frequency.
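A short Python sketch (added here as an illustration; the helper prime_factors uses naive trial division) makes the point concrete: the prime factors of a number form a multiset from which the number can be reconstructed without loss:

    from collections import Counter
    from math import prod

    def prime_factors(n):
        # Trial division; returns the multiset of prime factors as a Counter.
        factors = Counter()
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors[d] += 1
                n //= d
            d += 1
        if n > 1:
            factors[n] += 1
        return factors

    factors = prime_factors(360)
    print(factors)                                   # Counter({2: 3, 3: 2, 5: 1})
    print(prod(p ** k for p, k in factors.items()))  # 360: the number is recovered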

Again we can use this insight to code complex typed information in single natural numbers. Position-based number representations using addition of powers are straightforward and easy to handle, and they form the basis of most of our mathematical functions. This is not the case for coding systems based on multiplication. Many of the open questions in the philosophy of mathematics and information arise in the context of the Fundamental Theorem of Arithmetic and the concept of a prime. We give a short overview.

(Ir)regularity of the set of primes. Since antiquity it has been known that there are infinitely many primes. The proof is simple: suppose the set of primes P were finite; multiply all elements of P and add 1; the resulting number cannot be divided by any member of P, so P is incomplete. A refinement of the estimation of the density of the primes is given by the so-called Riemann hypothesis, formulated by Riemann in 1859 (Goodman and Weisstein [OIR]), which is commonly regarded as one of the deepest unsolved problems in mathematics, although most mathematicians consider the hypothesis to be true.

(In)efficiency of Factorization. Since multiplication conserves information, the function is, to an extent, reversible. The process of finding the unique multiset of primes for a certain natural number n is called factorization. Factorization by trial and error of a relatively simple number of, say, two hundred digits, which codes a rather small message, could easily take a computer the size of our whole universe longer than the time passed since the big bang. So, although theoretically feasible, such algorithms are completely impractical. Factorization is possibly an example of a so-called (trapdoor) one-way function, which is easy to compute in one direction but very difficult to invert.

Whether factorization is really difficult remains an open question, although most mathematicians believe the problem to be hard. Note that factorization in this context can be seen as the process of decoding a message. If factorization is hard it can be used as an encryption technique.

Classical encryption is based on multiplying codes with large prime numbers. Suppose Alice has a message encoded as a large number m and she knows Bob has access to a large prime p. She sends him the product n = m × p; Bob, who knows p, simply divides n by p to recover m. Since factorization is difficult, any other person who receives the message n will have a hard time reconstructing m. Primality testing versus Factorization. Although at this moment efficient techniques for factorization on classical computers are not known to exist, there is an efficient algorithm that decides for us whether a number is prime or not: the so-called AKS primality test (Agrawal et al.). So we might know that a number is not prime while we still do not have access to its set of factors.
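A toy Python sketch (purely illustrative of the scheme described above, not a real or secure cryptosystem; the chosen prime and message values are arbitrary) shows the asymmetry: encrypting is a single multiplication, while an eavesdropper who does not know p is left with a factorization problem:

    # Toy illustration of the scheme sketched above: not a secure cryptosystem.
    p = 2_305_843_009_213_693_951        # a large prime known to Alice and Bob (2^61 - 1)
    m = 424_242_424_242                  # Alice's message, encoded as a number

    n = m * p                            # Alice encrypts with a single multiplication
    print(n)

    recovered = n // p                   # Bob, who knows p, recovers m with one division
    print(recovered == m)                # True

    # An eavesdropper who only sees n has to factor it to find m, which for
    # numbers of cryptographic size is believed to be infeasible.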

Classical versus Quantum Computing. Shor showed in 1994 that factorization is efficiently possible on a quantum computer. His algorithm has a non-classical quantum subroutine embedded in a deterministic classical program. Currently it is not clear whether larger quantum computers will be stable enough to facilitate practical applications, but that the world at the quantum level has relevant computational possibilities cannot be doubted anymore. As soon as viable quantum computers become available, almost all current encryption techniques of this kind become useless, although they can be replaced by quantum versions of encryption techniques (see the entry on quantum computing).

In a philosophical context this implies that the semantics of a formal system rich enough to contain elementary mathematics cannot be defined in terms of mathematical functions within the system itself. Central is the concept of a recursive function. Such functions are defined on numbers.

Basically they are the elementary arithmetical functions operating on natural numbers, like addition, subtraction, multiplication and division, and all other functions that can be defined on top of these. We give the basic structure of the proof. Suppose F is a formal system with the usual components: a finite alphabet of symbols, a syntax defining the well-formed formulas, a set of axioms and a set of derivation rules. Assume furthermore that F is consistent, i.e., that no contradiction can be derived in it. According to the fundamental theorem of arithmetic, any number can be uniquely factored into its primes.

This allows us to code any sequence of symbols as a specific individual number in the following way: assign a number c_i to each symbol s_i of the alphabet; a sequence of symbols s_1 s_2 … s_k is then coded by the number 2^c_1 × 3^c_2 × 5^c_3 × … × p_k^c_k, where p_k is the k-th prime. By the fundamental theorem of arithmetic this so-called Gödel number can be uniquely decoded into the original sequence.
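A small Python sketch (an illustration of this coding added here, not part of the original text; it uses the Unicode code point of each character as its symbol number) encodes a short string as a single Gödel number and decodes it again:

    def primes(k):
        # First k prime numbers, by simple trial division.
        found = []
        candidate = 2
        while len(found) < k:
            if all(candidate % p for p in found):
                found.append(candidate)
            candidate += 1
        return found

    def godel_encode(symbols):
        # Code symbol s_i (here: its Unicode code point) as the exponent of the i-th prime.
        number = 1
        for p, s in zip(primes(len(symbols)), symbols):
            number *= p ** ord(s)
        return number

    def godel_decode(number):
        # Recover the symbols by factoring out successive primes.
        symbols = []
        candidate = 2
        while number > 1:
            exponent = 0
            while number % candidate == 0:
                number //= candidate
                exponent += 1
            if exponent:
                symbols.append(chr(exponent))
            candidate += 1
        return "".join(symbols)

    n = godel_encode("hi")
    print(n)                  # a single (large) natural number
    print(godel_decode(n))    # 'hi'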



With this observation, conditions close to those that lead to the paradox of Russell are satisfied: elementary arithmetic itself is rich enough to express universality, negation and self-reference. Since arithmetic is consistent this does not lead to paradoxes, but to incompleteness. Theorem: Any formal system that contains elementary arithmetic is fundamentally incomplete; it contains statements that are true but not provable. In the context of the philosophy of information, the incompleteness of mathematics is a direct consequence of the rich possibilities of the natural numbers to code information.

In principle any deterministic formal system can be represented in terms of elementary arithmetical functions. Consequently, if such a system itself contains arithmetic as a subsystem, it contains an infinite chain of endomorphisms, i.e., images of itself within itself. Such a system is capable of reasoning about its own functions and proofs, but since it is consistent, and thus the construction of paradoxes is not possible within the system, it is by necessity incomplete.

Recursive functions are abstract relations defined on natural numbers. In principle they can be defined without any reference to space and time. Such functions must be distinguished from the operations that we use to compute them. These operations mainly depend on the type of symbolic representations that we choose for them.

Observation: There are at least two different perspectives from which we can study the notion of computation: the Recursive Function Paradigm and the Symbol Manipulation Paradigm. The semantics of the symbols is different under these interpretations. Definition: Deterministic computing on a macroscopic scale can be defined as the local, sequential manipulation of discrete objects according to deterministic rules. In nature there are many ways to perform such computations: one could use an abacus, study chemical processes or simply manipulate sequences of pebbles on a beach. The fact that the objects we manipulate are discrete, together with the observation that the data set is self-referential, implies that the data domain is in principle Dedekind infinite: it can be put into one-to-one correspondence with a proper subset of itself.

Since the data elements are discrete and finite, the data domain will be countably infinite and therefore isomorphic to the set of natural numbers under some correspondence f. Note that the correspondence f must be specified explicitly. As soon as such an index function is defined for a class of objects in the real world, the manipulation of these objects can be interpreted as a form of computing.

Once we choose our symbols and our operational rules, the system starts to produce statements about the world. Such a statement is well-formed, meaningful and truthful.


We can study symbol manipulation in general on an abstract level, without any semantic implications. Such a theory was published by Alan Turing (1912–1954). Turing developed a general theory of computing focusing on the actual operations on symbols a mathematician performs (Turing 1936). For him a computer was an abstraction of a real mathematician sitting behind a desk, receiving problems written down in an in-tray (the input), solving them according to fixed rules (the process) and leaving them to be picked up in an out-tray (the output).

Turing first formulated the notion of a general theory of computing along these lines. Turing machines can read and write symbols on a tape, and they have a transition function that determines their actions under various conditions. On an abstract level Turing machines operate like functions. There is an infinite number of Turing machines. The self-referential nature of general computational systems allows us to construct machines that emulate other machines. Could there be a machine that decides, for any machine and any input, whether the computation eventually halts? Using a technique called diagonalization, in which one analyzes an enumeration of all possible machines running on descriptions of all possible machines, Turing proved that such a machine cannot exist.

More formally: Theorem: There is no Turing machine that predicts for any other Turing machine whether it stops on a certain input or not. Not every machine will stop on every input. The existence of universal Turing machines indicates that the class embodies a notion of universal computing: any computation that can be performed on a specific Turing machine can also be performed on any other universal Turing machine. This is the mathematical foundation of the concept of a general programmable computer. These observations have a bearing on the theory of information: certain measures of information, like Kolmogorov complexity, are well-defined but not computable.
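The structure of the diagonal argument can be made explicit in a short Python sketch (an illustration added here; halts is a hypothetical decision procedure which, by the theorem, cannot actually exist):

    def halts(program, argument):
        # Hypothetical oracle: returns True if program(argument) halts, False otherwise.
        # The theorem above says no such total, correct procedure can exist.
        raise NotImplementedError("no such decision procedure exists")

    def diagonal(program):
        # Do the opposite of what the oracle predicts about program run on itself.
        if halts(program, program):
            while True:          # loop forever if the oracle says "halts"
                pass
        return "done"            # halt if the oracle says "loops"

    # Feeding diagonal to itself yields a contradiction: if halts(diagonal, diagonal)
    # were True, diagonal(diagonal) would loop forever; if False, it would halt.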

Since Turing machines were defined to study the notion of computation, they contain elementary arithmetic, and the class of Turing machines is itself rich enough to express universality, negation and self-reference. Consequently Turing machines can model universal negative statements about themselves. Observation: Since they can emulate each other, the Recursive Function Paradigm and the Symbol Manipulation Paradigm have the same computational strength: any function that can be computed in one paradigm can, by definition, also be computed in the other.

Definition: An infinite set of computational functions is Turing complete if it has the same computational power as the general class of Turing machines; in that case it is called Turing equivalent. Such a system is, like the class of Turing machines, universal: it can emulate any computable function. The philosophical implications of this observation are strong and rich, not only for the theory of computing but also for our understanding of the concept of information. There is an intricate relation between the notion of universal computing and that of information.

Precisely the fact that Turing systems are universal allows us to say that they process information, because their universality entails invariance. Proof: The proof is simple and relevant for the philosophy of information. If U and V are both universal machines, then V can be emulated on U by a fixed interpreter program of constant length, so for every string x the shortest description of x on U is at most a constant number of symbols longer than the shortest description of x on V, and vice versa; the two measures differ by at most a constant that does not depend on x. This proof forms the basis of the theory of Kolmogorov complexity; it is originally due to Solomonoff and was discovered independently by Kolmogorov and Chaitin. Note that this notion of invariance can be generalized over the class of Turing complete systems.


Big Invariance Theorem: The concept of information measured in terms of the length of the input of a computation is asymptotically invariant for Turing complete systems. Proof: Suppose we have a Turing complete system F; since F can, by definition, be emulated on a universal Turing machine by means of an interpreter of constant length (and vice versa), the length of the shortest input that produces a given string in F differs by at most a constant from the length of the shortest input that produces it in any other Turing complete system. How strong this result is becomes clear when we analyze the class of Turing complete systems in more detail. In the 1930s several different formalisms for computation were proposed, among them Gödel's recursive functions, Church's lambda calculus and Turing's machines; each of these proposals in its own way clarifies aspects of the notion of computing, and later many more examples followed. The class of Turing equivalent systems is diverse.

Apart from obvious candidates like all general-purpose programming languages (C, Fortran, Prolog, etc.), it contains a variety of more exotic systems. The table below gives an overview of some conceptually interesting systems. Observation: The class of Turing equivalent systems is open, because it is defined in terms of purely operational mappings between computations. Observation: The general theory of computation and information defined by the class of Turing complete systems is ontologically neutral.

It is not possible to derive any necessary qualities of computational systems and data domains beyond the fact that they are general mathematical operations and structures. The data domains on which Turing equivalent systems are defined are not necessarily physical, nor temporal, nor spatial, nor binary or digital.

At any moment a new member of the class can be introduced. We know that there are computational systems that are weaker than the class of Turing machines, but we cannot rule out the possibility that one day we come across a system that is stronger. The thesis that such a system does not exist is known as the Church-Turing thesis (see the entry on the Church-Turing thesis):

Church-Turing Thesis: The class of Turing machines characterizes the notion of algorithmic computing exactly. Arguments in favor of the thesis: the theory of Turing machines seems to be the most general theory we can formulate, since it is based on a very limited set of assumptions about what computing is. The fact that it is universal also points in the direction of its generality. Even if we could think of a more powerful system, the input and output for such a system would have to be finite and discrete, and the computation time finite. So, in the end, any computation would have the form of a finite function between finite data sets, and, in principle, all such relations can be modeled on Turing machines.

The fact that all systems of computation defined so far have the same power also corroborates the thesis. Arguments against the thesis: the thesis is, in its present form, unprovable. The class of Turing complete systems is open: it is defined on the basis of the existence of equivalence relations between systems.

In this sense it does not define the notion of computing intrinsically. Consequently it does not allow us to exclude any system from the class a priori. At any time a proposal for a notion of computation might emerge that is fundamentally stronger. What is more, nature provides us with stronger notions of computing in the form of quantum computing.

Quantum bits are a generalization of the normal concept of bits associated with symbol manipulation, although in the end quantum computing does not seem to force us to redefine the notion of computing so far. We can never rule out that research in physics, biology or chemistry will define systems that do force us to do so; various authors have suggested such systems, but there is currently no consensus on convincing candidates (Davis). Being Turing complete seems to be quite a natural condition for a formal system.

Any system that is sufficiently rich to represent the natural numbers and the elementary arithmetical operations is Turing complete. What is needed is a finite set of operations, defined on a set of discrete finite data elements, that is rich enough to make the system self-referential: its operations can be described by its data elements. This explains, in part, why we can use mathematics to describe our world.

The abstract notion of computation, defined as functions on numbers in the abstract world of mathematics, and the concrete notion of computing by manipulating objects in the everyday world around us coincide. The concepts of information and computation implied by the Recursive Function Paradigm and the Symbol Manipulation Paradigm are the same. Observation: If one accepts that the Church-Turing thesis is open, this implies that the question about the existence of a universal notion of information is also open.

At this stage of the research it is not possible to specify the a priori conditions for such a general theory. We have a reasonable understanding of the concept of classical computing, but the implications of quantum physics for computing and information may determine the philosophical research agenda for decades to come, if not longer. Still, it is already clear that the research has repercussions for traditional philosophical positions: the Laplacian view (Laplace) that the universe is essentially deterministic seems to be falsified by empirical observations.

Our universe is effectively a process that generates information permanently. Classical deterministic computing seems to be too weak a concept to understand its structure. Standard computing on a macroscopic scale can be defined as the local, sequential manipulation of discrete objects according to deterministic rules. The operation of multiplication, with the associated logarithmic function, characterizes our intuitions about the additivity of the concept of information exactly.

The notion of a multiset is associated with the properties of commutativity and associativity. This program can be extended to other classes of numbers when we study division algebras in higher dimensions. The following table gives an overview of some relevant number classes together with the properties of the operation of multiplication for these classes.

The table is ordered in terms of increasing generality. These are the number classes for which we have adequate finite symbolic representations on a macroscopic scale. Complex numbers can be interpreted as vectors in a two-dimensional plane; consequently they lack the notion of a strict linear order. Addition is quite straightforward: (a + bi) + (c + di) = (a + c) + (b + d)i. Observation: Multiplication is not information efficient for imaginary numbers. More complicated number systems, with generalizations of this type of multiplication in 4 and 8 dimensions, can be defined.

For each of the number classes in the table a separate theory of information measurement, based on the properties of multiplication, can be developed. Up to the real numbers these theories satisfy our intuitive notions of extensiveness of information. For complex numbers the notion of the information efficiency of multiplication is destroyed; the quaternions lack the property of commutativity and the octonions that of associativity. These models are not just abstract constructions, since the algebras play an important role in our descriptions of nature: complex numbers, for example, are indispensable in the formulation of quantum mechanics.
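A small Python sketch (an illustration added here, not part of the original text) shows the loss of commutativity for quaternions, using the standard Hamilton product on 4-tuples (w, x, y, z):

    def hamilton(q1, q2):
        # Hamilton product of two quaternions given as tuples (w, x, y, z).
        w1, x1, y1, z1 = q1
        w2, x2, y2, z2 = q2
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    i = (0, 1, 0, 0)
    j = (0, 0, 1, 0)

    print(hamilton(i, j))   # (0, 0, 0, 1):  i * j =  k
    print(hamilton(j, i))   # (0, 0, 0, -1): j * i = -k, so multiplication is not commutative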

We briefly discuss the application of vector spaces in quantum physics. Classical information is measured in bits. The implementation of bits in nature involves macroscopic physical systems with at least two different stable states and a low-energy reversible transition process between them. The most fundamental way to store information in nature at the atomic level involves qubits. Quantum algorithms have, in some cases, a fundamentally lower complexity than their classical counterparts (e.g., Shor's algorithm for factorization). Definition: The quantum bit, or qubit, is a generalization of the classical bit.

The quantum state of a qubit is represented as a linear superposition of two orthonormal basis vectors |0⟩ and |1⟩:

    |ψ⟩ = α|0⟩ + β|1⟩, with |α|² + |β|² = 1

where the complex coefficients α and β determine the probabilities |α|² and |β|² of measuring 0 or 1 respectively. Under this mathematical model our intuitions about computing as the local, sequential manipulation of discrete objects according to deterministic rules evolve into a much richer paradigm.
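As a final illustration (a sketch added here in plain Python, not part of the original text), a qubit state can be represented as a pair of complex amplitudes; the measurement probabilities follow from the squared magnitudes:

    import math

    # A qubit in equal superposition of |0> and |1>
    # (the state produced by a Hadamard gate applied to |0>).
    alpha = 1 / math.sqrt(2)
    beta = 1 / math.sqrt(2)

    # The state must be normalized: |alpha|^2 + |beta|^2 == 1.
    assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1) < 1e-12

    p_zero = abs(alpha) ** 2   # probability of measuring 0
    p_one = abs(beta) ** 2     # probability of measuring 1
    print(p_zero, p_one)       # 0.5 0.5: the qubit ranges over a continuum of states,
                               # but a measurement yields only a single classical bit.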