Entry | applied logic |
Definition | applied logic

Introduction

Applied logic is the study of the practical art of right reasoning. The formalism and theoretical results of pure logic can be clothed with meanings derived from a variety of sources within philosophy as well as from other sciences. This formal machinery can also be used to guide the design of computers and computer programs. The applications of logic cover a vast range, relating to reasoning in the sciences and in philosophy, as well as in everyday discourse. They include (1) the various sorts of reasoning affecting the conduct of ordinary discourse, as well as the theory of the logical relations that exist within special realms of discourse—between two commands, for example, or between one question and another; (2) special forms of logic designed for scientific applications, such as temporal logic (of what “was” or “will be” the case) or mereology (the logic of parts and wholes); and (3) special forms for concepts bearing upon philosophical issues, such as logics that deal with statements of the form “I know that . . . ,” “I believe that . . . ,” “It is permitted to . . . ,” “It is obligatory to . . . ,” or “It is prohibited to . . . .”

The critique of forms of reasoning

Correct and defective argument forms

In logic an argument consists of a set of statements, the premises, whose truth supposedly supports the truth of a single statement called the conclusion of the argument. An argument is deductively valid when the truth of the premises guarantees the truth of the conclusion; i.e., the conclusion must be true, because of the form of the argument, whenever the premises are true. Some arguments that fail to be deductively valid are acceptable on grounds other than formal logic, and their conclusions are supported with less than logical necessity. In other potentially persuasive arguments, the premises give no rational grounds for accepting the conclusion.
These defective forms of argument are called fallacies. An argument may be fallacious in three ways: in its material content, through a misstatement of the facts; in its wording, through an incorrect use of terms; or in its structure (or form), through the use of an improper process of inference. As shown in the diagram, fallacies are correspondingly classified as (1) material, (2) verbal, and (3) formal. Groups 2 and 3 are called logical fallacies, or fallacies “in discourse,” in contrast to the substantive, or material, fallacies of group 1, called fallacies “in matter”; and groups 1 and 2, in contrast to group 3, are called informal fallacies.

Kinds of fallacies

Material fallacies

The material fallacies are also known as fallacies of presumption, because the premises “presume” too much—they either covertly assume the conclusion or avoid the issue in view. The classification that is still widely used is that of Aristotle's Sophistic Refutations: (1) The fallacy of accident is committed by an argument that applies a general rule to a particular case in which some special circumstance (“accident”) makes the rule inapplicable. The truth that “men are capable of seeing” is no basis for the conclusion that “blind men are capable of seeing.” This is a special case of the fallacy of secundum quid (more fully: a dicto simpliciter ad dictum secundum quid, “from a saying [taken too] simply to a saying according to what [it really is]”—i.e., according to its truth as holding only under special provisos). This fallacy is committed when a general proposition is used as the premise for an argument without attention to the (tacit) restrictions and qualifications that govern it and invalidate its application in the manner at issue. (2) The converse fallacy of accident argues improperly from a special case to a general rule.
Thus, the fact that a certain drug is beneficial to some sick persons does not imply that it is beneficial to all people. (3) The fallacy of irrelevant conclusion is committed when the conclusion changes the point that is at issue in the premises. Special cases of irrelevant conclusion are presented by the so-called fallacies of relevance. These include (a) the argument ad hominem (speaking “against the man” rather than to the issue), in which the premises may only make a personal attack on a person who holds some thesis, instead of offering grounds showing why what he says is false; (b) the argument ad populum (an appeal “to the people”), which, instead of offering logical reasons, appeals to such popular attitudes as the dislike of injustice; (c) the argument ad misericordiam (an appeal “to pity”), as when a trial lawyer, rather than arguing for his client's innocence, tries to move the jury to sympathy for him; (d) the argument ad verecundiam (an appeal “to awe”), which seeks to secure acceptance of the conclusion on the grounds of its endorsement by persons whose views are held in general respect; (e) the argument ad ignorantiam (an appeal “to ignorance”), which argues that something (e.g., extrasensory perception) is so since no one has shown that it is not so; and (f) the argument ad baculum (an appeal “to force”), which rests on a threatened or implied use of force to induce acceptance of its conclusion. (4) The fallacy of circular argument, known as petitio principii (“begging the question”), occurs when the premises presume, openly or covertly, the very conclusion that is to be demonstrated (example: “Gregory always votes wisely.” “But how do you know?” “Because he always votes Libertarian.”).
A special form of this fallacy, called a vicious circle, or circulus in probando (“arguing in a circle”), occurs in a course of reasoning typified by the complex argument in which a premise p1 is used to prove p2; p2 is used to prove p3; and so on, until pn − 1 is used to prove pn; then pn is subsequently used in a proof of p1, and the whole series p1, p2, . . . , pn is taken as established (example: “McKinley College's baseball team is the best in the association [pn = p3]; they are the best because of their strong batting potential [p2]; they have this potential because of the ability of Jones, Crawford, and Randolph at the bat [p1].” “But how do you know that Jones, Crawford, and Randolph are such good batters?” “Well, after all, these men are the backbone of the best team in the association [p3 again].”). Strictly speaking, petitio principii is not a fallacy of reasoning but an ineptitude in argumentation: the argument from p as a premise to p as conclusion is not deductively invalid but lacks any power of conviction, since no one who questioned the conclusion could concede the premise. (5) The fallacy of false cause (non causa pro causa) mislocates the cause of one phenomenon in another that is only seemingly related. The most common version of this fallacy, called post hoc ergo propter hoc (“after which hence by which”), mistakes temporal sequence for causal connection—as when a misfortune is attributed to a “malign event,” like the dropping of a mirror. Another version of this fallacy arises in using reductio ad absurdum reasoning: concluding that a statement is false if its addition to a set of premises leads to a contradiction. This mode of reasoning can be correct—e.g., concluding that two lines do not intersect if the assumption that they do intersect leads to a contradiction. What is required to avoid the fallacy is to verify independently that each of the original premises is true.
Thus, one might fallaciously infer that Williams, a philosopher, does not watch television, because adding

A: Williams, a philosopher, watches television.

to the premises

P1: No philosopher engages in intellectually trivial activities.
P2: Watching television is an intellectually trivial activity.

leads to a contradiction. Yet it might be that either P1 or P2 or both are false. It might even be the case that Williams is not a philosopher. Indeed, one might even take A as evidence for the falsity of either P1 or P2 or as evidence that Williams is not really a philosopher. (6) The fallacy of many questions (plurimum interrogationum) consists in demanding or giving a single answer to a question when this answer could either be divided (example: “Do you like the twins?” “Neither yes nor no; but Ann yes and Mary no.”) or refused altogether, because a mistaken presupposition is involved (example: “Have you stopped beating your wife?”). (7) The fallacy of non sequitur (“it does not follow”) occurs when there is not even a deceptively plausible appearance of valid reasoning, because there is an obvious lack of connection between the given premises and the conclusion drawn from them. Some authors, however, identify non sequitur with the fallacy of the consequent (see below, Formal fallacies).

Verbal fallacies

These fallacies, called fallacies of ambiguity, arise when the conclusion is achieved through an improper use of words. The principal instances are as follows: (1) Equivocation occurs when a word or phrase is used in one sense in one premise and in another sense in some other needed premise or in the conclusion (example: “The loss made Jones mad [= angry]; mad [= insane] people should be institutionalized; so Jones should be institutionalized.”).
The figure-of-speech fallacy is the special case arising from confusion between the ordinary sense of a word and its metaphorical, figurative, or technical employment (example: “For the past week Joan has been living on the heights of ecstasy.” “And what is her address there?”). (2) Amphiboly occurs when the grammar of a statement is such that several distinct meanings can obtain (example: “The governor says, ‘Save soap and waste paper.’ So soap is more valuable than paper”). (3) Accent is a counterpart of amphiboly, arising when a statement can bear distinct meanings depending on which word is stressed (example: “Men are considered equal.” “Men are considered equal.”). (4) Composition occurs when the premise that the parts of a whole are of a certain nature is improperly used to infer that the whole itself must also be of this nature (example: a story made up of good paragraphs is thus said to be a good story). (5) Division—the reverse of composition—occurs when the premise that a collective whole has a certain nature is improperly used to infer that a part of this whole must also be of this nature (example: in a speech that is long-winded it is presumed that every sentence is long). But this fallacy and its predecessor can be viewed as versions of equivocation, in which the distributive use of a term—i.e., its application to the elements of an aggregate (“the crowd,” viewed as individuals)—is confused with its collective use (“the crowd,” as a unitary whole)—compare “The crowd were filing through the turnstile” with “The crowd was compressed into the space of a city block.”

Formal fallacies

Formal fallacies are deductively invalid arguments that typically commit an easily recognizable logical error.
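Whether a two-premise propositional argument form is deductively valid can be checked mechanically by enumerating every assignment of truth values and testing whether any makes the premises true and the conclusion false. The Python sketch below (the helper names are invented for illustration) applies this check to modus ponens and to the two defective conditional forms treated next.

```python
from itertools import product

def implies(a, b):
    # Material conditional: "if a then b" is false only when a is true and b false.
    return (not a) or b

def valid(premises, conclusion):
    # An argument form is deductively valid iff no truth assignment
    # makes every premise true while the conclusion is false.
    return all(conclusion(p1, p2)
               for p1, p2 in product([True, False], repeat=2)
               if all(prem(p1, p2) for prem in premises))

# Modus ponens: from "if p1 then p2" and "p1", infer "p2" -- valid.
print(valid([lambda p1, p2: implies(p1, p2), lambda p1, p2: p1],
            lambda p1, p2: p2))            # True

# Denial of the antecedent: from "if p1 then p2" and "not p1", infer "not p2".
print(valid([lambda p1, p2: implies(p1, p2), lambda p1, p2: not p1],
            lambda p1, p2: not p2))        # False

# Affirmation of the consequent: from "if p1 then p2" and "p2", infer "p1".
print(valid([lambda p1, p2: implies(p1, p2), lambda p1, p2: p2],
            lambda p1, p2: p1))            # False
```

The counterexample the checker finds for both fallacious forms is the assignment in which p1 is false and p2 is true: both premises hold, yet the conclusion fails.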
A classic case is Aristotle's fallacy of the consequent, relating to reasoning from premises of the form “If p1, then p2.” The fallacy has two forms: (1) denial of the antecedent, in which one mistakenly argues from the premises “If p1, then p2” and “not-p1” (symbolized ∼p1) to the conclusion “not-p2” (example: “If George is a man of good faith, he can be entrusted with this office; but George is not a man of good faith; therefore, George cannot be entrusted with this office”), and (2) affirmation of the consequent, in which one mistakenly argues from the premises “If p1, then p2” and “p2” to the conclusion “p1” (example: “If Amos was a prophet, then he had a social conscience; he had a social conscience; hence, Amos was a prophet”). Most of the traditionally considered formal fallacies, however, relate to the syllogism. One example may be cited, that of the fallacy of illicit major (or minor) premise, which violates the rules for “distribution.” (A term is said to be distributed when reference is made to all members of the class. For example, in “Some crows are not friendly,” reference is made to all friendly things but not to all crows.) The fallacy arises when a major (or minor) term that is undistributed in the premise is distributed in the conclusion (example: “All tubers are high-starch foods [undistributed]; no squashes are tubers; therefore, no squashes are high-starch foods [distributed]”).

Epistemic logic

Epistemic logic deals with the logical issues arising within the gamut of such epistemological concepts as knowledge, belief, assertion, doubt, question-and-answer, and the like. Instead of dealing with the essentially factual issues of alethic logic (Greek: alētheia, “truth”)—i.e., with what is actually or must necessarily or can possibly be the case—it relates to what people know or believe or maintain or doubt to be the case.
The logic of belief

From the logical standpoint, a belief is generally analyzed as a relationship obtaining between the person who accepts some thesis on the one hand and the thesis that he accepts on the other. Correspondingly, given a person x, it is convenient to consider the set Bx of x's beliefs and to represent the statement “x believes that p” as p ∊ Bx. (The symbol ∊ represents membership in a set, ∉ its denial.) To articulate a viable logic of belief, it is, at the very least, essential to postulate certain minimal conditions of rationality regarding the parties whose beliefs are at issue:

1. Consistency: “If x believes that p, then x does not believe that not-p”; i.e., if p ∊ Bx, then ∼p ∉ Bx. “If not-p, then x does not believe that p”; i.e., if ⊢ ∼p, then p ∉ Bx. Example: If “Jesus was a Zealot” (p) is among (∊) the beliefs of Ralph (BRalph), then “Jesus was not a Zealot” (∼p) is not among (∉) Ralph's beliefs. It is an accepted thesis (⊢) that “Jesus was not a Zealot.” Hence, “Jesus was a Zealot” is not among Ralph's beliefs. (The symbol ⊢ is used to indicate that the sentence to its right is a valid deductive consequence of the sentence[s] on the left. In cases where it appears as an isolated prefix, it signifies theoremhood—i.e., a deductive consequence from no premises.)

2. Conjunctive composition and division: “If x believes that p1, and x believes that p2, etc., to x believes that pn, then x believes that p1 and p2, etc., and pn”; i.e., if (p1 ∊ Bx, p2 ∊ Bx, . . . , pn ∊ Bx), then (p1 · p2 · . . . · pn) ∊ Bx, and conversely. Example: If “cats are affectionate” (p1), “cats are clean” (p2), etc., to “cats are furry” (pn) are among (∊) Bob's beliefs (BBob), then “cats are affectionate and clean, etc., and furry” (p1 · p2 · . . . · pn) is also a belief of Bob's.

3.
Minimal inferential capacity: “If x believes that p, and q is an obvious consequence of p, then x believes that q”; i.e., if p ∊ Bx and p ⊧ q, then q ∊ Bx. Example: “If x believes that his cat is on the mat, and his cat's being on the mat has an obvious consequence that something is on the mat, then x believes that something is on the mat.” Here item 3 is a form of the entailment principle, but with ⊧ representing entailment of the simplest sort, designating obvious consequence—say, deducibility by fewer than two (or n) inferential steps, employing only those primitive rules of inference that have been classified as obvious. (In arguments about beliefs, however, all repetitions of the application of this version of the entailment principle must be avoided.) These principles endow the theory with such rules as:

1. “If x believes that not-p, then x does not believe that p”; i.e., if ∼p ∊ Bx, then p ∉ Bx.
2. “If x believes that p, and x believes that q, then x believes that both p and q taken together”; i.e., if p ∊ Bx and q ∊ Bx, then p · q ∊ Bx.
3. “If x believes that p, then x believes that either p or q”; i.e., if p ∊ Bx, then p ∨ q ∊ Bx, given “p ⊢ p ∨ q” as an “obvious” rule of inference (where ∨ means “or”).

One key question of the logical theory of belief relates to the area of iterative beliefs (example: “Andrews believes that I believe that he believes me to be untrustworthy”). Clearly, one would not want to have such theses as:

1. “If y believes that x believes that p, then x believes that p”; i.e., if (p ∊ Bx) ∊ By, then p ∊ Bx (y ≠ x).
2. “If y believes that x believes that p, then y believes that p”; i.e., if (p ∊ Bx) ∊ By, then p ∊ By (y ≠ x).

But when the iteration is subject-uniform rather than subject-diverse, it might be advantageous to postulate certain special theses, such as: if p ∊ Bx, then (p ∊ Bx) ∊ Bx, which in effect limits the beliefs at issue to conscious beliefs.
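The rationality conditions above—consistency, conjunctive composition, and the “obvious consequence” rule p ⊢ p ∨ q—can be modelled in a toy fashion. The Python sketch below is an illustration only: propositions are plain strings, negation is marked by a “not ” prefix, and all class and method names are invented.

```python
class BeliefSet:
    """Toy model of an agent's belief set Bx over propositional strings.
    The representation is illustrative, not a standard formalism."""

    def __init__(self, beliefs):
        self.beliefs = set(beliefs)

    @staticmethod
    def negate(p):
        # not-not-p collapses back to p
        return p[4:] if p.startswith("not ") else "not " + p

    def believes(self, p):                  # p ∊ Bx
        return p in self.beliefs

    def is_consistent(self):
        # Consistency: if x believes p, x does not believe not-p.
        return all(self.negate(p) not in self.beliefs for p in self.beliefs)

    def conjoin(self, p, q):
        # Composition: from p ∊ Bx and q ∊ Bx, infer (p and q) ∊ Bx.
        if p in self.beliefs and q in self.beliefs:
            self.beliefs.add(f"{p} and {q}")

    def disjoin(self, p, q):
        # Obvious consequence: from p ∊ Bx, infer (p or q) ∊ Bx for any q.
        if p in self.beliefs:
            self.beliefs.add(f"{p} or {q}")

bob = BeliefSet({"cats are affectionate", "cats are clean"})
print(bob.is_consistent())                  # True
bob.conjoin("cats are affectionate", "cats are clean")
bob.disjoin("cats are clean", "dogs are loyal")
print(bob.believes("cats are affectionate and cats are clean"))  # True
print(bob.believes("cats are clean or dogs are loyal"))          # True
```

Note that the model deliberately does not close beliefs under arbitrary deduction: each rule must be applied explicitly, mirroring the text's point that believers are not logically omniscient.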
The plausibility of this thesis also implicates its converse—namely, whether there are circumstances under which someone's believing that he believes something would necessarily vouch for his believing it (that is, whether it is legitimate to argue that “if x believes that he believes that p, then he believes that p”); i.e., if (p ∊ Bx) ∊ Bx, then p ∊ Bx. According to this thesis, the belief set Bx is to have the feature of second-order—as opposed to direct—applicability. From q ∊ Bx it is not, in general, permissible to infer q, but one is entitled to do so when q takes the special form p ∊ Bx—i.e., when the belief at issue is one about the subject's own beliefs. The theory is predicated on the view that belief is subject to logical compulsion but that the range of this compulsion is limited, since people are not logically omniscient. Belief here is like sight: man has a limited range of logical vision; he can see clearly in the immediate logical neighbourhood of his beliefs but only dimly afar.

The logic of knowing

The propositional sense of knowing (i.e., knowing that something or other is the case), rather than the operational sense of knowing (i.e., knowing how something or other is done), is generally taken as the starting point for a logical theory of knowing. Accordingly, the logician may begin with a person x and consider a set of propositions Kx to represent his “body of knowledge.” The aim of the theory then is to clarify and to characterize the relationship “x knows that p,” or “p is among the items known to x,” which is here represented as p ∊ Kx. There can be false knowledge only in the sense that “he thought he knew that p, but he was mistaken.” When the falsity of purported knowledge becomes manifest, the claim to knowledge must be withdrawn. “I know that p, but it may be false that p” is a contradiction in terms. When something is asserted or admitted as known, it follows that this must be claimed to be true.
But what sort of inferential step is at issue in the thesis that “x knows p” leads to “p is true”? Is the link deductive, inductive, presuppositional, or somehow “pragmatic”? Each view has its supporters: on the deductive approach, p ∊ Kx logically implies (deductively entails) p; on the inductive approach, p ∊ Kx renders p extremely probable, though not necessarily certain; on the presuppositional approach, p ∊ Kx is improper (nonsensical) whenever p is not true; and on the pragmatic approach, the assertion of p ∊ Kx carries with it a rational commitment to the assertion of p (in a manner, however, that does not amount to deductive entailment). From the standpoint of a logic of knowing, the most usual practice is to assume the deductive approach and to lay it down as a rule that if p ∊ Kx, then p is true. This approach construes knowledge in a very strong sense. According to a common formula, knowledge is “true, justified belief.” This formulation, however, seems defective. Let the expression Jx p be defined as meaning “x has justification for accepting p”; then p ∊ Kx = p · Jx p · p ∊ Bx. For example, the proposition “Jane knows that (KJane) the gown is priceless (p)” means (=) “The gown is priceless, and Jane has justification for accepting that it is priceless (JJane p), and Jane believes that it is priceless (p ∊ BJane).” One cannot but assume that the conceptual nature of J is such as to underwrite the rule “If x is justified in accepting p, then he is justified in accepting ‘either p or q’”; i.e.,

(rule J) If Jx p, then Jx(p ∨ q),

in which q can be any other proposition whatsoever. The components p, q, and x may be such that all of the following obtain:

1. not-p
2. q
3. x believes that p; i.e., p ∊ Bx
4. x does not believe that q; i.e., q ∉ Bx—and, indeed, x believes that not-q; i.e., ∼q ∊ Bx
5. x is justified in accepting q; i.e., Jx q
6.
x believes that either p or q; i.e., p ∨ q ∊ Bx.

Clearly, on any reasonable interpretation of B and J, this combination of six premises is possible. But the following consequences would then obtain:

7. p ∨ q (by item 2 above)
8. Jx(p ∨ q) (by item 5 above and by rule J)
9. (p ∨ q) ∊ Kx (by items 6, 7, and 8)

The conclusion (9) is wrong, however; x cannot properly be said to know that either p or q when p ∨ q is true solely because of the truth of q (which x rejects), but p ∨ q is believed by x solely because he accepts p (which is false). This example shows that the proposed definition of knowledge as “true, justified belief” cannot be made to work. The best plan, therefore, seems to be to treat the logic of knowing directly, rather than through the mediation of acceptance (belief) and justification. Since Aristotle's day, stress has been placed on the distinction between actual, overt knowledge, which requires an explicit, consciously occurring awareness of what is known, and potential, tacit knowledge, which requires only implicit dispositional awareness. Unless p ∊ Kx is construed in the tacit sense, the following principles will not hold:

If p ∊ Kx and p ⊢ q, then q ∊ Kx.
If p ∊ Kx and q ∊ Kx, then (p · q) ∊ Kx.

These two rules, if accepted, however, suffice to guarantee the principle:

If p1, p2, . . . , pn ⊢ q, then p1 ∊ Kx, p2 ∊ Kx, . . . , pn ∊ Kx ⊢ q ∊ Kx.

Similar considerations regarding the potential construction of knowledge govern the answer to the question of whether, when something is known, this fact itself is known: if p ∊ Kx, then (p ∊ Kx) ∊ Kx. This principle is eminently plausible, provided that the membership of Kx is construed in the implicit (tacit) rather than in the explicit (overt) sense.

The logic of questions

Whether a given grouping of words is functioning as a question may hinge upon intonation, accentuation, or even context, rather than upon overt form: at bottom, questions represent a functional rather than a purely grammatical category.
The very concept of a question is correlative with that of an answer, and every question correspondingly delimits a range of possible answers. One way of classifying questions is in terms of the surface characteristics of this range. On this basis, the logician can distinguish (among others):

● (1) yes/no questions (example: “Is today Tuesday?”),
● (2) item-specification questions (example: “What is an instance of a prime number?”),
● (3) instruction-seeking questions (example: “How does one bake an apple pie?”), and so on.

From the logical standpoint, however, a more comprehensive policy, and one leading to greater precision, is to treat every answer as given in a complete proposition (“Today is not Tuesday,” “Three is an example of a prime number,” and so on). From this standpoint, questions can be classed in terms of the nature of the answers. There would then be factual questions (example: “What day is today?”) and normative questions (example: “What ought to be done in these circumstances?”). The advantage of the propositional approach to answers is that it captures the intrinsically close relationship between question and answer. The possible answers to (1) “What is the population of A-ville?” and (2) “What is the population of B-burgh?” are seemingly the same—namely, numbers of the series 0, 1, 2, . . . . But once complete propositions are taken to be at issue, an answer to 1, such as “The population of A-ville is 5,238,” no longer counts as an answer to 2, since the latter must mention B-burgh. This approach has the disadvantage, on the other hand, of obscuring similarities in similar questions. One can no longer say of two brothers that the questions “Who is Tom's father?” and “Who is John's father?” have the same answer. With every question Q can be correlated the set A(Q) of possible answers to Q. Thus, “What day of the week is today?” has seven conceivable answers, of the form “The day of the week today is Monday,” and the like.
A possible answer to a question must be a possibly true statement. Accordingly, the question “What is an example of a prime number?” does not have “The Washington Monument is an example of a prime number” among its possible answers. A question can be said to be true if it has a true answer—i.e., if (∃p) [p · p ∊ A(Q)], which (taking the existential quantifier ∃ to mean “there exists . . .”) can be read “There exists a proposition p such that p is true and p is among the answers to Q.” Otherwise it is false—i.e., all its answers are false. If he never came at all, the question “On what day of the week did he come?” is a false question in the sense that it lacks any true answer. A true question can be called contingent if it admits of possible answers that are false, as in “Where did Jones put his pen?” In logic and mathematics there are, presumably, no contingent questions. Questions can have presuppositions, as in “Why does Smith dislike Jones?” Any possible answer here must take the form “Smith dislikes Jones because . . .” and so commits one to the claim that “Smith dislikes Jones.” Every such question with a false presupposition must be a false question: all its possible answers (if any) are false. Besides falsity, questions can exhibit an even more drastic sort of “impropriety”: they can be illegitimate in that they have no possible answers whatsoever (example: “What is an example of an even prime number different from two?”). The logic of questions is correspondingly three-valued: a question can be true (i.e., have a true answer), illegitimate (i.e., have no possible answer at all), or false (i.e., have possible answers but no true ones). One question, Q1, will entail another, Q2, if every possible answer to the first deductively yields a possible answer to the second, and every true answer to the first deductively yields a true answer to the second.
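The three-valued scheme just described can be made concrete in a short sketch. In the Python fragment below, a question is represented simply by its answer set A(Q), each possible answer paired with a truth value; this representation is an assumption for illustration only.

```python
def classify_question(possible_answers):
    """Classify a question by its answer set A(Q), following the
    three-valued scheme: true / false / illegitimate.
    possible_answers: list of (answer_text, is_true) pairs."""
    if not possible_answers:
        return "illegitimate"      # no possible answers at all
    if any(is_true for _, is_true in possible_answers):
        return "true"              # the question has a true answer
    return "false"                 # possible answers exist, but none is true

# "On what day of the week did he come?" when he never came at all:
days = [(f"He came on {d}.", False)
        for d in ["Monday", "Tuesday", "Wednesday", "Thursday",
                  "Friday", "Saturday", "Sunday"]]
print(classify_question(days))    # false

# "What is an example of an even prime number different from two?"
print(classify_question([]))      # illegitimate
```

A contingent true question, in this representation, is one whose answer set contains at least one true and at least one false entry.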
In this sense the question “What are the dimensions of that box?” entails the question “What is the height of that box?”

Practical logic

The theory of reasoning with concepts of practice—of analyzing the logical relations obtaining among statements about actions and their accompaniments in choosing, planning, commanding, permitting, and so on—constitutes the domain of practical logic.

The logic of preference

The logic of preference—also called the logic of choice, or proairetic logic (Greek proairesis, “a choosing”)—seeks to systematize the formal rules that govern the conception “x is preferred to y.” A diversity of things can be at issue here: (1) Is x preferred to y by some individual (or group), or is x preferable to y in terms of some impersonal criterion? (2) Is on-balance preferability at issue, or preferability in point of some particular factor (such as economy or safety or durability)? The resolution of these questions, though vital for interpretation, does not affect the formal structure of the preference relationships.

Symbolization and approach taken in proairetic logic

The fundamental tools of the logic of preference are as follows: (1) (strong) preference: x is preferable to y, symbolically x ≫ y; (2) indifference: x and y are indifferent, x ≅ y, defined as “neither x ≫ y nor y ≫ x”; and (3) weak preference: x is no less preferred than y, x ≽ y, defined as “either x ≫ y or x ≅ y.” Since preference constitutes a relationship, its three types can be classed in terms of certain distinctions commonly drawn in the logic of relations: that of reflexivity (whether holding of itself: “John supports himself”), that of symmetry (whether holding when its terms are interchanged: “Peter is the cousin of Paul”; “Paul is the cousin of Peter”), and that of transitivity (whether transferable: a ≫ b and b ≫ c; therefore a ≫ c).
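A quick way to see these distinctions at work is to derive the three preference relations from a numeric ranking and then check the ordering properties directly. In the Python sketch below the utilities are invented placeholder values, and the function names are illustrative.

```python
def make_preference(utility):
    """Derive strong preference (>>), weak preference, and indifference
    from a numeric measure of merit."""
    strong = lambda x, y: utility[x] > utility[y]    # x >> y
    weak   = lambda x, y: utility[x] >= utility[y]   # weak preference
    indiff = lambda x, y: utility[x] == utility[y]   # indifference
    return strong, weak, indiff

utiles = {"a": 3, "b": 2, "c": 1}                    # hypothetical utilities
strong, weak, indiff = make_preference(utiles)
items = list(utiles)

# Strong preference is irreflexive, asymmetric, and transitive:
assert all(not strong(x, x) for x in items)
assert all(not (strong(x, y) and strong(y, x)) for x in items for y in items)
assert all(strong(x, z)
           for x in items for y in items for z in items
           if strong(x, y) and strong(y, z))

# Indifference is reflexive and symmetric (here trivially, since all
# utilities are distinct, each item is indifferent only to itself):
assert all(indiff(x, x) for x in items)
print(strong("a", "c"))   # True
```

Because the relations are defined through numeric comparison, the ordering properties are inherited automatically from the ordering of the real numbers.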
Once it is established that the (strong) preference relation (≫) is an ordering (i.e., is irreflexive, asymmetric, and transitive), it then follows that weak preference (≽) is reflexive, nonsymmetric, and transitive and that indifference (≅) is an equivalence relation (i.e., reflexive, symmetric, and transitive). One common approach to establishing a preference relation is to begin with a “measure of merit” to evaluate the relative desirability of the items x, y, z, . . . , that are at issue. Thus for any item x, a real-number quantity is obtained, symbolized #(x). (Such a measure is called a utility measure, the units are called utiles, and the comparisons or computations involved constitute a preference calculus.) In terms of such a measure, a preference ordering is readily introduced by the definitions that (1) x ≫ y is to be construed as #(x) > #(y), (2) x ≽ y as #(x) ≥ #(y), and (3) x ≅ y as #(x) = #(y), in which ≥ means “is greater than or equal to.” Given these definitions, the relationships enumerated above must all obtain. Thus, the step from a utility measure to a preference ordering is simple.

Construction of a logic of preference

In constructing a logic of preference, it is assumed that the items at issue are propositions p, q, r, . . . and that the logician is to introduce a preferential ordering among them, with p ≫ q to mean “p's being the case is preferred to q's being the case.” The problem is to systematize the logical relationships among such statements in order to permit a determination of whether, for example, it is acceptable to argue that “if either p is preferable to q or p is preferable to r, then p is preferable to either q or r,” symbolized (p ≫ q ∨ p ≫ r) ⊃ [p ≫ (q ∨ r)] (in which ⊃ means “implies” or “if . . . then”), or to argue similarly that (p ≫ q · r ≫ q) ⊃ [(p · r) ≫ q].
For example, “If eating pears (p) is preferable to eating quinces (q), and eating rhubarb (r) is preferable to eating quinces, then eating both pears and rhubarb is preferable to eating quinces.” The task is one of erecting a foundation for the systematization of the formal rules governing such a propositional preference relation—a foundation that can be either axiomatic or linguistic (i.e., in terms of a semantical criterion of acceptability). One procedure—adapted from the ideas of the Finnish philosopher Georg Henrik von Wright (1916–2003), a prolific contributor to applied logic—is as follows: beginning with a basic set of possible worlds (or states of affairs) w1, w2, . . . , wn, all the propositions to be dealt with are first defined with respect to these by the usual logical connectives (∨, ·, ⊃, and so on). Given two elementary propositions p and q, there are just the following possibilities: both are true, p is true and q is false, p is false and q is true, or both are false. Corresponding to each of these possibilities is a possible world; thus,

w1 = p · q
w2 = p · ∼q
w3 = ∼p · q
w4 = ∼p · ∼q.

The truth of p then amounts to the statement that one of the worlds w1, w2 obtains, so that p is equivalent to w1 ∨ w2. Moreover, a given basic preference/indifference ordering among the wi is assumed. On this basis the following general characterization of propositional preference is stipulated: if delta (δ) is taken to represent any (and thus every) proposition independent of p and q, then p is preferable to q (p ≫ q) if, for every such δ, every possible world in which p and not-q and δ are the case (p · ∼q · δ) is w-preferable to every possible world in which not-p and q and δ are the case (∼p · q · δ)—i.e., when p · ∼q is always preferable to ∼p · q provided that everything else is equal.
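The possible-world construction itself can be sketched directly: with two elementary propositions there are exactly four worlds, and each compound proposition can be identified with the set of worlds in which it is true. The following Python fragment is a standard finite-model sketch, not von Wright's own notation.

```python
from itertools import product

# Enumerate the four possible worlds over elementary propositions p and q.
worlds = [{"p": p, "q": q} for p, q in product([True, False], repeat=2)]
w1, w2, w3, w4 = worlds   # w1 = p·q, w2 = p·∼q, w3 = ∼p·q, w4 = ∼p·∼q

def worlds_where(prop):
    # Identify a proposition with the set of worlds in which it obtains.
    return [w for w in worlds if prop(w)]

# The truth of p amounts to the claim that one of the worlds w1, w2
# obtains -- i.e., p is equivalent to w1 ∨ w2.
print(worlds_where(lambda w: w["p"]) == [w1, w2])   # True
```

Any connective-built proposition (∨, ·, ⊃, and so on) then reduces to a union, intersection, or complement of such world sets, which is what makes the δ-based definition of preference checkable world by world.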
It is readily shown that through this approach such general rules as the following are obtained: (1) If p is preferable to q, then q is not preferable to p; i.e., p ≫ q ⊢ ∼(q ≫ p). (2) If p is preferable to q, and q is preferable to r, then p is preferable to r; i.e., (p ≫ q · q ≫ r) ⊢ (p ≫ r). (3) If p is preferable to q, then not-q is preferable to not-p; i.e., p ≫ q ⊢ ∼q ≫ ∼p. (4) If p is preferable to q, then having p and not-q is preferable to having not-p and q; i.e., p ≫ q ⊢ (p · ∼q) ≫ (∼p · q). The preceding construction of preference requires only a preference ordering of the possible worlds. If, however, a measure for both probability (probability and statistics) and desirability (utility (utility and value)) of possible worlds is given, then one can define the corresponding #-value (see below) of an arbitrary proposition p as the probabilistically weighted utility value of all the possible worlds in which the proposition obtains. As an example, p may be the statement “The Franklin Club caters chiefly to business people,” and q the statement “The Franklin Club is sports-oriented.” It may then be supposed as given that the following values hold:

World          Probability   Desirability
w1 = p · q         1/6           −2
w2 = p · ∼q        2/6           +1
w3 = ∼p · q        2/6           −1
w4 = ∼p · ∼q       1/6           +3

The #-value of a proposition is determined by first multiplying the probability times the desirability of each world in which the proposition is true and then taking the sum of these. For example, the #-value of p is determined as follows: p is true in each of w1 and w2 (and only these); the probability times the desirability of w1 is 1/6 × (−2), and that of w2 is 2/6 × (+1); thus #(p) is 1/6 × (−2) + 2/6 × (+1) = 0. (The #-value corresponds to the decision theorists' notion of expected value.) By this procedure it can easily be determined that #(p) = 0, #(∼p) = 1/6, #(q) = −(4/6), and #(∼q) = 5/6.
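The #-value computation just described is an expected-value sum, and the Franklin Club figures can be reproduced directly. The code below is a sketch using exact fractions; the table of probabilities and desirabilities is the one given above.

```python
# Reproducing the #-value (expected-value) computation of the Franklin Club
# example, with worlds keyed by the (p, q) truth values they realize.
from fractions import Fraction

# (probability, desirability) for each world, as tabulated in the article.
TABLE = {
    (True, True):   (Fraction(1, 6), -2),  # w1 = p · q
    (True, False):  (Fraction(2, 6), +1),  # w2 = p · ~q
    (False, True):  (Fraction(2, 6), -1),  # w3 = ~p · q
    (False, False): (Fraction(1, 6), +3),  # w4 = ~p · ~q
}

def value(prop):
    """#(prop): sum of probability x desirability over the worlds where prop holds."""
    return sum(pr * d for w, (pr, d) in TABLE.items() if prop(w))

p = lambda w: w[0]
q = lambda w: w[1]

print(value(p))                    # 0
print(value(lambda w: not p(w)))   # 1/6
print(value(q))                    # -2/3  (i.e., -4/6 in lowest terms)
print(value(lambda w: not q(w)))   # 5/6
```

The four printed values match #(p) = 0, #(∼p) = 1/6, #(q) = −(4/6), and #(∼q) = 5/6 above, with −4/6 reduced to −2/3.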
Since both #(p) > #(q) and #(∼q) > #(∼p), one correspondingly obtains both p ≫ q and ∼q ≫ ∼p in the example at issue—i.e., “That the Franklin Club should cater chiefly to business people is preferable to its being sports-oriented” and “Its not being sports-oriented is preferable to its not catering chiefly to business people.” (The result is, of course, relative to the given desirability schedule specified for the various possible-world combinations in the above tabulation.) A more complex mode of preference results, however, if—when some basic utility measure, #(x), is given—instead of having p ≫ q correspond to the condition that #(p) > #(q), it is taken to correspond to #(p) − #(∼p) > #(q) − #(∼q). This mode will be governed by characteristic rules, specifically including all those listed above. The logic of commands Some scholars have maintained that there cannot be a logic of commands (instructions, orders), on the ground that validity of inference cannot be defined for them. Validity requires that the concept of truth be applicable (an argument being valid when its conclusion must be true if its premises are true); but, since commands—and for that matter also instructions, requests, and so on—are neither true nor false, it is argued that the concept of validity cannot be applied, so there can be no valid inference in this sphere. This line of thought, however, runs counter to clear intuitions that arise in specific cases, in which one unhesitatingly reasons from commands and sets of commands. If an examination has the instructions “Answer no fewer than three questions!
Answer no more than four questions!” one would not hesitate to say that this implies the instruction, “Answer three or four questions!” This seeming impasse can be broken, in effect, by importing truth into the sphere of commands through the back door: with any command one can associate its termination statement, which, with future-tense reference, asserts it as a fact that what the command orders will be done. Thus, the command “Shut all the windows in the building!” has the termination statement “All the windows in the building will be shut.” In the case of a pure command argument—i.e., one that infers a command conclusion from premises that are all commands—validity can be assessed in the light of the validity of the purely assertoric syllogism composed of the corresponding termination statements. Thus the validity of the command argument given above derives from the validity of the inference from the premises “No fewer than three questions will be answered and no more than four questions will be answered” to the conclusion “Three or four questions will be answered.” The logical issues of pure command inference can be handled in this manner. But what of the mixed cases in which some statement—premise or conclusion—is not a command? Special case 1 One mixed case is that in which the premises nontrivially include noncommands, but the inferred conclusion is a command. Some writers have endorsed the rule that there is no validity unless the command conclusion is forthcoming from the command premises alone. This rule, however, invalidates such seemingly acceptable arguments as “Remove all cats from the area; the shed is in the area; so, remove all cats from the shed.” It is more plausible to stipulate the weaker condition that an inference to a command conclusion cannot count as valid unless there is at least one command premise that is essential to the argument.
Subject to this restriction, a straightforward application of the above-stated characterization of validity can again be made. This approach validates the above-mentioned command inference via the validity of the assertion inference: “All cats will be removed from the area; the shed is in the area; so, all cats will be removed from the shed.” (The rule under consideration suffices to block the unacceptable argument from the factual premise “All the doors will be shut” to the command conclusion “Shut all the doors.”) Special case 2 Another mixed case is that in which the premises nontrivially include commands, but the inferred conclusion is an ordinary statement of fact. Some authorities stipulate that no indicative conclusion can be validly drawn from a set of premises that cannot validly be drawn from the indicative premises among them alone. This rule would seem to be acceptable, though subject to certain significant provisos: (1) It must be restricted to categorical (categorical proposition) rather than conditional commands. “If you want to see one of the world's tallest buildings, look at the Empire State Building” conveys (inter alia) the information that “The Empire State Building is one of the world's tallest buildings.” (2) Exception must be made for those commands that include in their formulation—explicitly or by way of tacit presupposition—reference to a factual datum. “John, give the book to Tom's brother Jim” yields the fact that Jim is Tom's brother; and “John, drive your car home” (= “John, you own a car: drive it home”) yields “John owns a car.” With suitable provisos, however, the rule can be maintained to resolve the issues of the special case in view.
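The termination-statement test for the examination example can be checked mechanically. The sketch below (an illustration, not part of the article) encodes each termination statement as a condition on the number of questions answered and confirms that no case satisfies the premises while falsifying the conclusion.

```python
# Validating the pure command inference via termination statements:
# "No fewer than three questions will be answered" and "no more than four
# will be answered" should entail "three or four questions will be answered".

def premise1(n):    # termination statement of "Answer no fewer than three!"
    return n >= 3

def premise2(n):    # termination statement of "Answer no more than four!"
    return n <= 4

def conclusion(n):  # termination statement of "Answer three or four!"
    return n in (3, 4)

# The assertoric inference is valid iff no number of answered questions
# satisfies both premises while falsifying the conclusion; checking a
# finite range suffices, since the premises bound n between 3 and 4.
valid = all(conclusion(n) for n in range(0, 101) if premise1(n) and premise2(n))
print(valid)  # True
```

The same pattern extends to any pure command argument: translate each command into its termination statement and test the resulting assertoric inference.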
Deontic logic The propositional modalities relating to normative (or valuational) classifications of actions and states of affairs, such as the permitted, the obligatory, the forbidden, or the meritorious, are characterized as deontic modalities (Greek deontos, “of that which is binding”) and systematized in deontic logic. Though this subject was first treated as a technical discipline in 1926, its current active development dates from a paper published in 1951 by von Wright. As a highly abstracted branch of logical theory, it leaves to substantive disciplines—such as ethics and law—the concrete questions of what specific acts or states of affairs are to be forbidden, permitted, or the like (just as deductive logic does not meddle with what contingent issues are true but tells only what follows when certain facts or assumptions about the truth are given). It seeks to systematize the abstract, purely conceptual relations between propositions in this sphere, such as the following: if an act is obligatory, then its performance must be permitted and its omission forbidden; and, in given circumstances, any act is such that either it or its omission is permitted. The systematization and relation to alethic modal logic In the systematization of deontic logic (formal system), the symbols p, q, r, . . . may be taken to range over propositions dealing both with impersonal states of affairs and with the human acts involved in their realization. Certain special deontic operations (operator) can then be introduced: P(p) for “It is permitted that p be the case”; F(p) for “It is forbidden that p be the case”; and O(p) for “It is obligatory that p be the case.” In a systematization of deontic logic, it is necessary to take only one of these three operations as primitive (i.e., as an irreducible given), because the others can then be introduced in terms of it.
For example, when P alone is taken as primitive (as is done here), the following can be introduced by definition: “It is obligatory that p” means “It is not permitted that not-p,” and “It is forbidden that p” means “It is not permitted that p”; i.e., O(p) = ∼P(∼p) and F(p) = ∼P(p). The logical grammar of P is presumably to be such that one wants to insist upon the rule: Whenever ⊢ p ⊃ q, then ⊢ P(p ⊃ q). Further, a basic axiom for such an operator as P is ⊢ P(p ⊃ q) ⊃ (P(p) ⊃ P(q)), from which it immediately follows that: Whenever ⊢ p ⊃ q, then ⊢ P(p) ⊃ P(q). Example: “Since one's helping Jones, who has been robbed, entails that one help someone who has been robbed, being permitted to help Jones (who has been robbed) entails that one be permitted to help someone who has been robbed.” This yields such principles as “If both p and q are permitted, then p is permitted and q is permitted” and “If p is permitted, then either p or q is permitted”; i.e., ⊢ P(p · q) ⊃ [P(p) · P(q)] and ⊢ P(p) ⊃ P(p ∨ q). And, once it is postulated that “A p exists that is permitted”—i.e., ⊢ (∃p)P(p)—then the statement that “It is not permitted that both p and not-p”—i.e., ∼P(p · ∼p)—is also yielded. Moreover, on any adequate theory of P, it is necessary to have such principles as “Either p or not-p is permitted”; i.e., ⊢ P(p ∨ ∼p). On the other hand, certain principles must be rejected, such as “If p is permitted and q is permitted, then both p and q taken together are permitted”—i.e., ⊣ [P(p) · P(q)] ⊃ P(p · q), in which ⊣ symbolizes the rejection of a thesis—and “If either p or q is permitted, then p is permitted”—i.e., ⊣ P(p ∨ q) ⊃ P(p). The first of these, accepted unqualifiedly, would lead to the untenable result that there can be no permission-indifferent acts—i.e., no acts such that both they and their omission are permitted—since this would then lead to P(p · ∼p).
The second thesis would have the unacceptable result of asserting that, when at least one member of a pair of acts is permitted, then both members are permitted. In all respects so far considered, deontic logic is wholly analogous to the already well-developed field of alethic modal logic, which deals with statements of the form “It is possible that . . .” (symbolized M), “It is necessary that . . .” (symbolized L), and so on, with P in the role of possibility (M) and O in that of necessity (L). This parallel, however, does not extend throughout. In alethic logic, the principle that “necessity implies actuality” obviously holds (i.e., ⊢ Lp ⊃ p). But its deontic analogue, that “obligation implies actuality” (i.e., ⊢ Op ⊃ p), must be rejected; the analogous thesis holds only in the weakened form that “obligation implies permissibility” (i.e., ⊢ Op ⊃ Pp). Controversy exists about the relation of deontic to alethic modal logic, principally in the context of Immanuel Kant's (Kant, Immanuel) thesis that “ought implies can” (i.e., ⊢ Op ⊃ Mp), but also about the theses ad impossibile nemo obligatur—“no one is obliged to do the impossible” (i.e., ⊢ ∼Mp ⊃ ∼Op)—and “necessity implies permissibility” (i.e., ⊢ Lp ⊃ Pp). Although this last thesis is generally accepted, some scholars want to strengthen it to “necessity implies obligation” (i.e., Lp ⊃ Op), or, equivalently, to “permissibility implies possibility” (i.e., ⊢ Pp ⊃ Mp), with the result that only what is possible can count as permitted, so that the impossible is forbidden. Some would deny this, holding that to act to realize the impossible is merely unwise rather than wrong (i.e., impermissible). It has been proposed that deontic logic may perhaps be reduced to alethic modal logic. This approach is based on the idea of a normative code delimiting the range of the permissible.
In this context, what signalizes an action as impermissible is that it involves a violation of the code: the statement that the action has occurred entails that the code has been violated and so leads to a “sanction.” This line of thought leads to the definition of a modal operator Fp = L(p ⊃ σ), “p necessarily implies a sanction,” in which sigma (σ) is the sanction produced by code violation. Correspondingly, one then obtains “For p to be permitted means that p does not imply by necessity a sanction”—i.e., Pp = ∼L(p ⊃ σ)—and “For p to be obligatory means that not doing p implies by necessity a sanction”—i.e., Op = L(∼p ⊃ σ). Assuming a systematization of the alethic modal operator L, these definitions immediately produce a corresponding system of deontic logic that—if L is a normal modality—has many of the features that are desirable in a deontic operator. It also yields, however—through the “paradoxes of strict implication”—the disputed principle that “The assumption that p is not possible implies that p is not permissible”; i.e., ⊢ ∼Mp ⊃ ∼Pp. This and other similar consequences of the foregoing effort to reduce deontic logic to modal logic have been transcended by other scholars, who have resorted to a mode of implication (symbolized as →) that is stronger than strict implication (as necessary material implication is called) and have then defined Fp as p → σ instead of as above. Alternative deontic systems Each of the three principal deontic systems that have been studied to date is analogous to one of the alethic modal systems that were developed in the mid-20th century. These foundational alethic systems differ by virtue of the different axioms and rules adopted for such modalities as necessity, possibility, and contingency. In the system designated M, for example, developed by the aforementioned Finnish logician G.H.
von Wright, the adverb “possibly,” symbolized M, is taken as the fundamental undefined modality in terms of which the other modalities are constructed. “Necessarily p,” symbolized Lp, for example, is defined in the system M as “not possibly not-p”; i.e., Lp = ∼M∼p. Alternatively, in an equivalent system, T, “necessarily p” is taken as primitive, and “possibly p” is defined as “not necessarily not-p”; i.e., Mp = ∼L∼p. Several nonequivalent systems have been developed by the conceptual pragmatist C.I. Lewis (Lewis, C.I.) (1883–1964), primary author of Symbolic Logic (1932), the foundational work in this field. Of these systems, that known as S4 includes all of the system M but adds also the axiom that “ ‘Necessarily p' implies ‘It is necessary that necessarily p' ”—i.e., Lp ⊃ LLp—whereas that known as S5 adds still another axiom, that “ ‘Possibly p' implies ‘It is necessary that possibly p' ”—i.e., Mp ⊃ LMp. The analogous deontic systems are then as follows: 1. DM (the deontic analogue of the system M of von Wright or of the system T). To a standard system of propositional logic the following rule is added: “Any proposition, if true, ought to be true”; that is, if ⊢ p then ⊢ Op. Example: Given that “to forgive is divine” (p), then “to forgive ought to be divine” (Op). Axioms: A1. “If p is obligatory, then not-p is not obligatory”; i.e., Op ⊃ ∼O∼p. A2. “If p ought to imply q, then if p is obligatory q is obligatory”; i.e., O(p ⊃ q) ⊃ (Op ⊃ Oq). 2. DS4 (the deontic analogue of Lewis' system S4). To DM one adds the axiom: A3. “If p is obligatory, then p ought to be obligatory”; i.e., Op ⊃ OOp. Example: If “John ought to pay his debts” (Op), then “it is obligatory that John ought to pay his debts” (OOp). 3. DS5 (the deontic analogue of Lewis' system S5). To DM one adds the axiom: A4. “If p is not obligatory, then p ought to be nonobligatory”; i.e., ∼Op ⊃ O∼Op.
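Axiom A1 can be illustrated with a small possible-worlds model. The sketch below is not the article's own construction but the now-standard Kripke-style reading, under stated assumptions: O(p) holds at a world exactly when p holds at every deontically "ideal" world accessible from it, and every world can access at least one ideal world (seriality), which is what makes A1 come out valid. The worlds, accessibility relation, and valuation are all hypothetical.

```python
# A minimal possible-worlds sketch of the deontic operator O.
# Assumption: a serial accessibility relation (every world sees some world).

WORLDS = ["w1", "w2", "w3"]
# Hypothetical serial accessibility: each world accesses its "ideal" worlds.
ACCESS = {"w1": ["w2"], "w2": ["w2", "w3"], "w3": ["w3"]}
# Hypothetical valuation for the atom p.
P_TRUE = {"w1": False, "w2": True, "w3": True}

def O(prop, w):
    """Obligatory-at-w: prop holds in every world accessible from w."""
    return all(prop(v) for v in ACCESS[w])

p = lambda w: P_TRUE[w]
not_p = lambda w: not P_TRUE[w]

# Axiom A1 at every world: O(p) implies not-O(not-p).
assert all((not O(p, w)) or (not O(not_p, w)) for w in WORLDS)

# Obligation does not imply actuality: at w1, O(p) holds but p fails there,
# matching the rejection of Op implies p noted earlier.
assert O(p, "w1") and not p("w1")
```

Seriality is doing the work here: if some world accessed no world at all, O(p) and O(∼p) would both hold there vacuously and A1 would fail.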
A straightforward semantical systematization of systems of deontic logic can be provided as follows: given a domain of complex propositions built up from atomic propositions (p, q, r, . . .) with the use of propositional connectives (∼, ·, ∨, ⊃) and O, a deontic model set Δ for this domain can be characterized as any set chosen from these propositions that meets the following conditions (in which “iff” means “if and only if”): 1. Not-p is in the set if and only if p is not in the set; i.e., ∼p ∊ Δ iff p ∉ Δ. 2. “Both p and q together” is in the set if and only if p is in the set and q is in the set; i.e., (p · q) ∊ Δ iff p ∊ Δ and q ∊ Δ. 3. “Either p or q” is in the set if and only if either p is in the set or q is in the set; i.e., (p ∨ q) ∊ Δ iff p ∊ Δ or q ∊ Δ. 4. “That p implies q” is in the set if and only if either p is not in the set or q is in the set; i.e., (p ⊃ q) ∊ Δ iff p ∉ Δ or q ∊ Δ. 5. “That p is obligatory” is in the set whenever p is posited; i.e., Op ∊ Δ whenever ⊢ p. 6. “That not-p is not obligatory” is in the set whenever “p is obligatory” is in the set; i.e., ∼O∼p ∊ Δ whenever Op ∊ Δ. 7. “That q is obligatory” is in the set whenever both “p is obligatory” is in the set and “that p implies q is obligatory” is in the set; i.e., Oq ∊ Δ whenever both Op ∊ Δ and O(p ⊃ q) ∊ Δ. A proposition can be characterized as a deontic thesis (D-thesis) if it can be shown that, in virtue of these rules, it must belong to every deontic model set. It can be demonstrated that the D-theses in this sense will coincide exactly with the theorems of DM—the first of the above three systems. Furthermore, if one adds one of the additional rules: 8′. “That p ought to be obligatory” is in the set whenever “p is obligatory” is in the set; i.e., OOp ∊ Δ whenever Op ∊ Δ. 8″. “That p ought to be nonobligatory” is in the set whenever “p is not obligatory” is in the set; i.e., O∼Op ∊ Δ whenever ∼Op ∊ Δ.
then the corresponding D′ or D″ theses will coincide exactly with the theorems of the deontic systems DS4 and DS5, respectively—numbers 2 and 3 above. Logics of physical application Certain systems of logic are built up specifically with particular physical applications in view. Within this range lie temporal logic; spatial, or topological, logic; mereology, or the logic of parts and wholes generally; as well as the logic of circuit analysis. Since the field of topological logic is still relatively undeveloped, the reader is referred to the bibliography for a recent source that provides some materials and references to the literature. Temporal logic The object of temporal logic—variously called chronological logic or tense logic—is to systematize reasoning with time-related (time) propositions. Such propositions generally do not involve the timeless “is” (or “are”) of the mathematicians' “three is a prime,” but rather envisage an explicitly temporal condition (examples: “Bob is sitting,” “Robert was present,” “Mary will have been informed”). In this area, statements are employed in which some essential reference to the before-after relationship or the past-present-future relationship is at issue; and the ideas of succession, change, and constancy enter in. Classic historical treatments Chronological logic originated with the Megarians (Megarian school) of the 4th century BC, whose school (not far from Athens) reflected the influence of Socrates and of Eleaticism. In the Megarian conception of modality, the actual is that which is realized now, the possible is that which is realized at some time or other, and the necessary is that which is realized at all times. 
These Megarian ideas can be found also in Aristotle, together with another temporalized sense of necessity according to which certain possibilities are possible prior to the event, actual then, and necessary thereafter, so that their modal status is not omnitemporal (as in the Megarian concept) but changes in time. The Stoic (Stoicism) conception of temporal modality is yet another cognate development, according to which the possible is that which is realized at some time in the present or future, and the necessary that which is realized at all such times. The Diodorean concept of implication (named after the 4th-century-BC Megarian logician Diodorus Cronus) holds, for example, that the conditional “If the sun has risen, it is daytime” is to be given the temporal construction “All times after the sun has risen are times when it is daytime.” The Persian logician Avicenna (980–1037), the foremost philosopher of medieval Islām, treated this chronological conception of implication in the framework of a general theory of categorical propositions (such as “All A is B”) of a temporalized type and considerably advanced and developed the Megarian-Stoic theory of temporal modalities. Fundamental concepts and relations of temporal logic The statements “It sometimes rains in London,” “It always rains in London,” and “It is raining in London on Jan. 1, AD 3000,” are all termed chronologically definite, in that their truth or falsity (truth-value) is independent of their time of assertion. By contrast, the statements “It is now raining in London,” “It rained in London yesterday,” and “It will rain in London sometime next week” are all chronologically indefinite, in that their truth or falsity is not independent of their time of assertion. The notation |t ⊢ p is here introduced to mean that the proposition p, often in itself chronologically indefinite, is represented as being asserted at the time t. For example, if p1 is the statement “It is raining in London today” and t1 is Jan.
1, 1900, then “|t1 ⊢ p1” represents the assertion made on Jan. 1, 1900, that it is raining today—an assertion that is true if and only if the statement “It is raining in London on Jan. 1, 1900,” is true. If the statement p is chronologically definite, then (by definition) the assertions “|t ⊢ p” and “|t′ ⊢ p” are materially equivalent (equivalence) (i.e., have the same truth value) for all values of t and t′. Otherwise, p is chronologically indefinite. The time may be measured, for example, in units of days, so that the time variable is made discrete. Then (t + 1) will represent “the day after t-day,” (t − 1) will represent “the day before t-day,” and the like. And, further, the statements p1, q1, and r1 can then be as follows: p1: “It rains in London today.” q1: “It will rain in London tomorrow.” r1: “It rained in London yesterday.” The following assertions can now be made: P: |t ⊢ p1; Q: |t − 1 ⊢ q1; R: |t + 1 ⊢ r1. Clearly, for any value of t whatsoever, the assertions P, Q, and R must (logically) be materially equivalent (i.e., have the same truth value). This illustration establishes the basic point—that the theory of chronological propositions must be prepared to exhibit the existence of logical relationships among propositions of such a kind that the truth of the assertion of one statement at one time may be bound up essentially with the truth (or falsity) of the assertion of some very different statement at another time. A (genuine) date is a time specification that is chronologically stable (such as “Jan. 1, 3000,” or “the day of Lincoln's assassination”); a pseudodate is a time specification that is chronologically unstable (such as “today” or “six weeks ago”). These lead to very different results depending on the nature of the fundamental reference point—the “origin” in mathematical terms.
If the origin is a pseudodate—say, “today”—the style of dating will be such that its chronological specifiers are pseudodates—tomorrow, the day before yesterday, four days ago, and so on. If, on the other hand, the origin is a genuine date, say that of the founding of Rome or the accession of Alexander, the style of dating will be such that all its dates are of the type: two hundred and fifty years ab urbe condita (“since the founding of the city”). Clearly, a chronology of genuine dates will then be chronologically definite, and one of pseudodates will be chronologically indefinite. Let p be some chronologically indefinite statement. Then, in general, another statement can be formed, asserting that p holds (obtains) at the time t. Correspondingly, let the statement-forming operation Rt be introduced. The statement Rt(p), which is to be read “p is realized at the time t,” will then represent the statement stating explicitly that p holds (obtains) specifically at the time t. Thus, if t1 is 3:00 PM Greenwich Mean Time on Jan. 1, 2000, and p1 is the (chronologically indefinite) statement “All men are (i.e., are now) playing chess,” then “Rt1(p1)” is the statement “It is the case at 3:00 PM Greenwich Mean Time on Jan. 1, 2000, that all men are playing chess.” Systematization of temporal reasoning On the basis of these ideas, the logical theory of chronological propositions can be developed in a systematic, formal way (formal system). It may be postulated that the operator R is to be governed by the following rules: (T1) The negation of a statement p is realized at a given time if and only if it is not the case that the statement is realized at that time; i.e., Rt(∼p) ≡ ∼Rt(p), in which ≡ signifies equivalence and is read “if and only if.” (T2) A conjunction of two statements is realized at a given time if and only if each of these two statements is realized at that time: Rt(p · q) ≡ [Rt(p) · Rt(q)].
Example: “John and Jane are at the railroad station at 10:00 AM—Rt(p · q)—if and only if John is at the station at 10:00 AM—Rt(p)—and Jane is at the station at 10:00 AM—Rt(q).” If a statement is realized universally—i.e., at any and every time whatsoever—it can then be expressed more simply as being true without any temporal qualifications; hence the rule: (T3) If for every time t the statement p is realized, then p obtains unqualifiedly; i.e., (∀t)Rt(p) ⊃ p, in which ∀ is the universal quantifier. If two times are involved, however, then the left-hand term in rule (T3) can be expressed within the second time frame as “It will be the case τ from now that, for every time t, it will be the case t from the first now that p”; i.e., Rτ[(∀t)Rt(p)]. It is an algebraic rule, however, that an Rt operator can be moved to the right past an irrelevant quantifier; hence Rτ[(∀t)Rt(p)] ≡ (∀t){Rτ[Rt(p)]}; and, correspondingly, with the existential quantifier ∃: “It will be the case τ from now that there exists a time t such that p will be realized at t” is equivalent to saying “There exists a time t such that it will be the case τ from now that p will be realized t from the first now” (in which τ is a second time); i.e., (T4) Rτ[(∃t)Rt(p)] ≡ (∃t){Rτ[Rt(p)]}. It is notable that the left-hand side of this equivalence is itself equivalent with (∃t)Rt(p), since what follows the initial Rτ is a chronologically definite statement. Finally, there are two distinct ways of construing iterations of the Rt operator, depending on the choice of origin of the second time scale. Thus a choice is required between two possible rules: (T5-I) Rτ[Rt(p)] ≡ Rt(p); (T5-II) Rτ[Rt(p)] ≡ Rτ+t(p). Taking these rules as a starting point, two alternative axiomatic theories are generated for the logic of the operation of chronological realization.
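The rules (T1) and (T2), and the earlier P/Q/R equivalence, can be illustrated concretely. The sketch below is an assumption-laden toy model, not the article's formal system: chronologically indefinite propositions are treated as functions from times (days, as integers) to truth values, and R(t, p) simply evaluates a proposition at t. The weather record is hypothetical.

```python
# Chronologically indefinite propositions as functions of time (in days).

raining = {0: True, 1: False, 2: True}      # hypothetical daily weather record
p = lambda t: raining[t]                    # "It rains in London today"
q = lambda t: raining[t + 1]                # "It will rain in London tomorrow"

def R(t, prop):
    """Realization: the (chronologically definite) truth value of prop at time t."""
    return prop(t)

# (T1): R_t(~p) iff not R_t(p).
assert R(0, lambda t: not p(t)) == (not R(0, p))

# (T2): R_t(p . q) iff R_t(p) and R_t(q).
assert R(1, lambda t: p(t) and q(t)) == (R(1, p) and R(1, q))

# The P/Q/R point: asserting "it will rain tomorrow" one day earlier
# is materially equivalent to asserting "it rains today" at t itself.
assert R(0, q) == R(1, p)
```

On this reading, iterating R amounts to choosing where the second time scale is anchored, which is exactly the choice between (T5-I) and (T5-II).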
Apart from strictly technical results establishing the formal relationships between the various systems of chronological logic, the most interesting findings about the systems of tense logic relate to the theory of temporal modalities. The most striking finding concerns the logical structure of the system of modalities, be it Megarian or Stoic (the latter formulated by means of a predicate F(t) signifying “t is future”). It has been shown that the forms, or structures, of both of these systems of temporal modalities are given by the aforementioned system S5 of C.I. Lewis. Exactly parallel results are obtained for modalities of past times: Pt(p)—p was realized at some (past) time t—and ∼Pt(∼p)—p has been realized at all (past) times. Mereology The founder of mereology was the Polish logician Stanisław Leśniewski (Leśniewski, Stanisław). Leśniewski was much exercised about Russell's paradox of the class of all classes not elements of themselves—if this class is a member of itself, then it is not; and if it is not, then it is (example: “This barber shaves everyone in town who does not shave himself.” Does the barber then shave himself? If he does, he does not; if he does not, he does.). Basic concepts and definitions The paradox results, Leśniewski argued, from a failure to distinguish the distributive and the collective interpretations of class expressions. The statement “x is an element of the class of X's” is correspondingly equivocal. When its key terms (element of, class of) are used distributively, it means simply that x is an X. But, if these terms are used collectively, it means that x is a part (proper or improper) of the whole consisting of the X's—i.e., that x is a part of the object that meets the following two conditions: (1) that every x is a part of it and (2) that every part of it has a common part with some x. On either construction of class membership, one of the inferences essential to the derivation of Russell's paradox is blocked.
Leśniewski presented his theory of the collective interpretation of class expressions in a paper published in 1916. Eschewing symbolization, he formulated his theorems and their proofs in ordinary language. Later he sought to formalize the theory by embedding it within a broader body of logical theory. This theory comprised two parts: protothetic, a logic of propositions (not analyzed into their parts); and ontology, which contains counterparts to the predicational logic (of subjects and predicates), including the calculus of relations and the theory of identity. On his own approach, mereology was developed as an extension of ontology and protothetic, but the practice of most later writers has been to develop as a counterpart to mereology a theory of parts and wholes that is simply an extension of the more familiar machinery of quantificational logic employing ∃ and ∀. This is the course adopted here. An undefined relation Pt serves as the basis for an axiomatic theory of the part relation. This relation is operative with respect to the items of some domain D, over which the variables α, β, γ, . . . (alpha, beta, gamma, and so on) are assumed to range. Thus, αPtβ is to be read “alpha is a part of beta”—with “part” taken in the wider sense in which the whole counts as part of itself. Two definitions are basic: 1. “α is disjoint from β,” i.e., α|β, is defined as obtaining when “there exists no item γ such that γ is a part of α and γ is a part of β”; i.e., ∼(∃γ)(γPtα · γPtβ). Example: “The transmission (α) is disjoint from the motor (β) if there exists no machine part (γ) such that it is a part of the transmission and also a part of the motor.” 2. “S has the sum of (or sums to) α,” i.e., SΣα, is defined as obtaining when “for every γ, this γ is disjoint from α if and only if, for every β, to be a member of S is to be disjoint from γ”; i.e., (∀γ)[γ|α ≡ (∀β)(β ∊ S ⊃ β|γ)].
SΣα thus obtains whenever everything disjoint from α is disjoint from every S-element (β) as well, and conversely. Example: “A given group of buildings (S) comprises (Σ) the University of Oxford (α) when, for every room in the world (γ)—office, classroom, etc.—this room is disjoint from the university if and only if, in the case of each building (β), for it to be a member (∊) of the group that comprises the university (S) it must not have this room as a part (β|γ).” Axiomatization of mereology A comprehensive theory of parts and wholes can now be built up from three axioms: The first axiom expresses the fact that “for every α and every β, if α is a part of β and β is a part of α, then α and β must be one and the same item”; i.e., (∀α)(∀β)(αPtβ · βPtα ⊃ α = β); hence, the axiom: (A1) Items that are parts of one another are identical. The second axiom expresses the fact that “for every α and every β, α is a part of β if and only if, for every γ, if this γ is disjoint from β it is then disjoint from α as well”; i.e., (∀α)(∀β)[αPtβ ≡ (∀γ)(γ|β ⊃ γ|α)]; hence, the axiom: (A2) One item is part of another only if every item disjoint from the second is also disjoint from the first. The third axiom expresses the fact that “if there exists an α that is a member of a nonempty set of items S, then there also exists a β that is the sum of this set”; i.e., (∃α)(α ∊ S) ⊃ (∃β)SΣβ; hence, the axiom: (A3) Every nonempty set has a sum. Several theorems follow from these axioms: The first states that “for every α, α is a part of α”; i.e., (∀α)αPtα; hence, the theorem: (T1) Every item is part of itself. The second theorem states that “for every α, for every β, and for every γ, if α is a part of β, and β is a part of γ, then α is a part of γ”; i.e., (∀α)(∀β)(∀γ)[(αPtβ · βPtγ) ⊃ αPtγ]; hence, the theorem: (T2) The Pt-relation is transitive.
The third theorem states that “for every α and every β, if, for every γ, this γ is a part of α if and only if it is a part of β, then α is identical with β”; i.e., (∀α)(∀β)[(∀γ)(γPtα ≡ γPtβ) ⊃ α = β]; hence, the theorem: (T3) Any item is completely determined by its parts; items are identical when they have the same parts in common.
The fourth theorem states that “for every α and every β, there exists a γ that is the sum of α and β”; i.e., (∀α)(∀β)(∃γ)({α, β}Σγ); hence, the theorem: (T4) Any two items whatsoever may be summed.
As a formal theory of the part relation, mereology can be dated from some drafts and essays of Leibniz prepared in the late 1690s.
Computer design and programming
In the most general terms a computer is a device that calculates a result (“output”) from one or more initial items of information (“input”). Inputs and outputs are usually represented in binary terms—i.e., in strings of 0s and 1s—and the values of 0 and 1 are realized in the machine by the presence or absence of a current (of electricity, water, light, and so on). When the output is a completely determined function of the input, the connection between a computer and the two-valued logic of propositions is immediate, for a valid argument can be construed as a partial function of the truth values of the premises such that when the premises each have the value true, so does the conclusion. One of the simplest computers has one input, either 0 or 1 (i.e., a current either off or on), and one output, namely, the reverse of the input. That is, when 0 is input, 1 is output, and, conversely, when 1 is input, 0 is output. This is also the behaviour of the truth function negation (∼p) when applied to the truth values true and false.
Thus a circuit element that behaves in this way is called a NOT gate: when no current is input from the left, a current flows out on the right, and, conversely, when a current flows in from the left, none is output to the right. Similarly, devices with two inputs and one output correspond in behaviour to the truth functions conjunction (p · q) and disjunction (p ∨ q). Specifically, in an AND gate, current flows out to the right only when current is present in both inputs; otherwise there is no output. In an OR gate, current is output when a current is present in either or both of the inputs on the left. Other truth-functional connectives are easily constructed using combinations of these gates. For example, the conditional (p ⊃ q) corresponds to a circuit with the following behaviour: there is no output only when there is input from p (“p” is true) and none from q (“q” is false). It is also possible to connect these gates to memory devices that store intermediate results in order to construct circuits that perform elementary binary arithmetic: addition, subtraction, multiplication, and division. These simple circuits, and others like them, can be connected together in order to perform various computations, such as determining the implications of a set of premises or determining the numerical value of a mathematical function for specific argument values. The details of computer design and architecture depend less on logical theory and more on the mathematical theory of lattices (see algebra: Lattice theory) and are outside the scope of this article. In computer programming, however, logic has a significant role. Some modern computers, such as the ones in automobiles or washing machines, are dedicated; that is, they are constructed to perform only certain sorts of computations. Others are general-purpose computers, which require a set of instructions about what to do and when to do it. A set of such instructions is called a program.
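The behaviour of these gates can be sketched as Boolean functions on 0/1 values (“current absent”/“current present”). This is an illustration only; real gates are circuit elements, not function calls, but the truth tables coincide.

```python
# The gates described above as Boolean functions on 0/1 inputs.

def NOT(p):
    return 1 - p         # output is the reverse of the input

def AND(p, q):
    return p & q         # output only when both inputs carry current

def OR(p, q):
    return p | q         # output when either input carries current

def COND(p, q):
    # The conditional p > q assembled from the basic gates: not-p or q.
    return OR(NOT(p), q)

# Truth table for the conditional: no output only when p = 1 and q = 0.
table = [(p, q, COND(p, q)) for p in (0, 1) for q in (0, 1)]
assert table == [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 1)]
```

Building COND out of NOT and OR mirrors the circuit construction of “other truth-functional connectives” from combinations of the basic gates.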
A general-purpose computer operating under a program begins in an initial state with a given input, passes through intermediate states, and should eventually stop in a final state with a definite output. For a given program, the various momentary states of the machine are characterized by the momentary values of all the variables in the program. In 1974 the British computer scientist Rod M. Burstall first remarked on the connection between machine states and the possible worlds used in the semantics of modal logic. The use of concepts and results from modal logic to investigate the properties and behaviour of computer programs (e.g., does this program stop after a finite number of steps?) was soon taken up by others, notably Vaughan R. Pratt (dynamic logic), Amir Pnueli (temporal logic), and David Harel (process logic). The connection between the possible worlds of the logician and the internal states of a computer is easily described. In possible-worlds semantics, p is possible at a world w if and only if p is true in some world w′ accessible to w. Depending on the properties of the accessibility relation (reflexive, symmetric, and so on), there will be different theorems about possibility and necessity (“p is necessary” = “∼M∼p”). The accessibility relation of modal logic semantics can thus be understood as the relation between states of a computer under the control of a program such that, beginning in one state, the machine will (in a finite time) be in one of the accessible states. In some programs, for instance, one cannot return from one state to an earlier state; hence state accessibility here is not symmetric. (For detailed treatments of this subject, refer to the Bibliography.)
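The reading of machine states as possible worlds can be made concrete with a toy program. The three-state transition relation below is hypothetical, chosen only to illustrate the correspondence; “possible” and “necessary” are evaluated exactly as in possible-worlds semantics, with accessibility as “reachable in one step under the program.”

```python
# Machine states as possible worlds; accessibility = one program step.
step = {
    "s0": {"s1", "s2"},   # from the initial state
    "s1": {"s2"},
    "s2": set(),          # final state: nothing further is accessible
}

def possible(p, w):
    """M p at w: p holds in some world accessible to w."""
    return any(p(v) for v in step[w])

def necessary(p, w):
    """p is necessary at w, i.e. ~M~p: p holds in every accessible world."""
    return not possible(lambda v: not p(v), w)

def halted(w):
    return not step[w]

assert possible(halted, "s0")    # some next state from s0 is a halt
assert necessary(halted, "s1")   # every next state from s1 is a halt
# Accessibility is not symmetric: one cannot return to s0 from s1.
assert "s1" in step["s0"] and "s0" not in step["s1"]
```

The last assertion illustrates the article's point that for many programs state accessibility fails to be symmetric, so the corresponding modal logic lacks the theorems that symmetry would license.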
Hypothetical reasoning and counterfactual conditionals
A simple conditional, or “if,” statement asserts a strictly formal relationship between antecedent (“if” clause) and consequent (“then” clause): “If p, then q,” without any reference to the status of the antecedent. The knowledge status of this antecedent, however, may be problematic (unknown), or known-to-be-true, or known-to-be-false. In these three cases, one obtains, respectively, the problematic conditional (“Should it be the case that p—which it may or may not be—then q”), the factual conditional (“Since p, then q”), and the counterfactual conditional (“If it were the case that p—which it is not—then q”). Counterfactual conditionals have a special importance in the area of thought experiments in history as well as elsewhere. Material implication, p ⊃ q, construed simply as the truth-functional “either not-p or q,” is clearly not suited to represent counterfactual conditionals, because any material implication with a false antecedent is true: when p is false, then p ⊃ q and p ⊃ ∼q are both true, regardless of what one may choose to put in place of q. But even when a stronger mode of implication is invoked, such as strict implication or its cognates, the problem of auxiliary hypotheses (soon to be explained) would still remain. It seems most natural to view a counterfactual conditional in the light of an inference to be drawn from the contrary-to-fact thesis represented by its antecedent. Thus, “If this rubber band were made of copper, then it would conduct electricity” would be construed as an incomplete presentation of the argument resulting from its expansion into:
Assumption: “This rubber band is made of copper.”
Known fact: “Everything made of copper conducts electricity.”
Conclusion: “This rubber band conducts electricity.”
On this analysis, the conclusion (= the consequent of the counterfactual) appears as a deductive consequence of the assumption (= the antecedent of the counterfactual).
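The defect of material implication noted above, that a false antecedent makes any conditional true, can be checked directly as a truth function:

```python
# Material implication as the truth function "either not-p or q":
# with a false antecedent, both p > q and p > ~q come out true,
# whatever q says.

def implies(p, q):
    return (not p) or q

p = False   # e.g., "This rubber band is made of copper" -- false in fact
for q in (False, True):
    assert implies(p, q)        # p > q is true
    assert implies(p, not q)    # and so is p > ~q
```

This is why the truth-functional conditional cannot distinguish the counterfactual one wants to assert from the one one wants to deny.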
This truncated-argument analysis of counterfactuals is a contribution, in essence, of a Polish linguistic theorist, Henry Hiż (b. 1917). On Hiż's analysis, counterfactual conditionals are properly to be understood as metalinguistic—i.e., as making statements about statements. Specifically, “If A were so, then B would be so” is to be construed in the context of a given system of statements S, saying that when A is adjoined as a supplemental premise to S, then B follows. This approach has been endorsed by the American Roderick Chisholm, an important writer in applied logic, and has been put forward by many logicians, most of whom incline to take S, as above, to include all or part of the corpus of scientific laws. The approach warrants a closer scrutiny. On fuller analysis, the following situation, with a considerably enlarged group of auxiliary hypotheses, comes into focus:
Known facts:
1. “This band is made of rubber.”
2. “This band is not made of copper.”
3. “This band does not conduct electricity.”
4. “Things made of rubber do not conduct electricity.”
5. “Things made of copper do conduct electricity.”
Assumption: Not-2; i.e., “This band is made of copper.”
When this assumption is introduced within the framework of known facts, a contradiction obviously ensues. How can this situation be repaired? Clearly, the logician must begin by dropping items 1 and 2 and replacing them with their negations—the assumption itself so instructs him. But a contradiction still remains. The following alternatives are open:
Alternative 1: Retain 3 and 4; reject 1, 2, and 5.
Alternative 2: Retain 4 and 5; reject 1, 2, and 3.
That is, the analyst actually has a choice between rejecting 3 in favour of 5 or 5 in favour of 3, resulting in the following conditionals:
● “If this rubber band were made of copper, then it would conduct electricity” (since copper conducts electricity).
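That the two alternatives, and only they, restore consistency can be checked mechanically. In this sketch the five known facts are encoded as constraints on a world described by two features, the band's material and whether it conducts; the encoding is an illustrative assumption, not part of the source.

```python
from itertools import product

# The five "known facts" as constraints on a world (material, conducts).
facts = {
    1: lambda m, c: m == "rubber",
    2: lambda m, c: m != "copper",
    3: lambda m, c: not c,
    4: lambda m, c: not (m == "rubber" and c),      # rubber does not conduct
    5: lambda m, c: not (m == "copper" and not c),  # copper does conduct
}
assumption = lambda m, c: m == "copper"              # Not-2

def consistent(kept):
    """True if the assumption plus the retained facts has a model."""
    return any(assumption(m, c) and all(facts[i](m, c) for i in kept)
               for m, c in product(["rubber", "copper"], [False, True]))

assert not consistent([1, 2, 3, 4, 5])  # all five facts: contradiction
assert consistent([3, 4])               # Alternative 1: retain 3 and 4
assert consistent([4, 5])               # Alternative 2: retain 4 and 5
assert not consistent([3, 5])           # 3 and 5 cannot both be kept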
● “If this rubber band were made of copper, then copper would not (always) conduct electricity” (since this band does not conduct electricity). If the first conditional seems more natural than the second, this is owing to the fact that, in the face of the counterfactual hypothesis at issue, the first invites the sacrifice of a particular fact (that the band does not conduct electricity) in favour of a general law (that copper conducts electricity), whereas the second counterfactual would have sacrificed a law to a purely hypothetical fact. On this view, there is a fundamental epistemological difference between actual and hypothetical situations: in actual cases one makes laws give way to facts, but in hypothetical cases one makes the facts yield to laws. But in more complex cases the fact/law distinction may not help matters. For example, assume a group of three laws L1, L2, L3, where ∼ L1 is inconsistent with the conjunction of L2 and L3. If asked to hypothesize the denial of L1—so that the “fact” that one is opposing is itself a law—then what remains is a choice between laws; the distinction between facts and laws does not resolve the issue, and some more sophisticated mechanism for a preferential choice among laws is necessary. Additional Reading Applications of logic in unexpected areas of philosophy are studied in Evandro Agazzi (ed.), Modern Logic—A Survey: Historical, Philosophical, and Mathematical Aspects of Modern Logic and Its Applications (1981). William L. Harper, Robert Stalnaker, and Glenn Pearce (eds.), IFs: Conditionals, Belief, Decision, Chance, and Time (1981), surveys hypothetical reasoning and inductive reasoning. On the applied logic in philosophy of language, see Edward L. 
Keenan (ed.), Formal Semantics of Natural Language (1975); Johan van Benthem, Language in Action: Categories, Lambdas, and Dynamic Logic (1991), also discussing the temporal stages in the working out of computer programs, and the same author's Essays in Logical Semantics (1986), emphasizing grammars of natural languages. David Harel, First-Order Dynamic Logic (1979); and J.W. Lloyd, Foundations of Logic Programming, 2nd extended ed. (1987), study the logic of computer programming. Important topics in artificial intelligence, or computer reasoning, are studied in Peter Gärdenfors, Knowledge in Flux: Modeling the Dynamics of Epistemic States (1988), including the problem of changing one's premises during the course of an argument. For more on nonmonotonic logic, see John McCarthy, “Circumscription: A Form of Non-Monotonic Reasoning,” Artificial Intelligence 13(1–2):27–39 (April 1980); Drew McDermott and Jon Doyle, “Non-Monotonic Logic I,” Artificial Intelligence 13(1–2):41–72 (April 1980); Drew McDermott, “Nonmonotonic Logic II: Nonmonotonic Modal Theories,” Journal of the Association for Computing Machinery 29(1):33–57 (January 1982); and Yoav Shoham, Reasoning About Change: Time and Causation from the Standpoint of Artificial Intelligence (1988). |
随便看 |
|
百科全书收录100133条中英文百科知识,基本涵盖了大多数领域的百科知识,是一部内容开放、自由的电子版百科全书。