"You must be the change you wish to see in the world."
- Mahatma Gandhi

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Human and Computer Decisions, Intentions, Desires, and Beliefs

Philosophical Papers --- 5/27/2020

 

Intention and Computational Decisions, Desires and Computational Goals

Do computers have desires and intentions?  My basic answer is no, but I need to explain why that is my answer.  Clearly computers can make a decision, i.e., implement decision branches.  This can even be done at the assembler level with the branch instruction.  Human beings make decisions too, and when we say we decide that we will go out with Sally, there is an implied intention to go out with Sally.  So, human 'decisions' are clearly intention points.  But computer 'decisions,' although they effectively choose between A and B (more precisely, between execution path A and execution path B), do not appear to be intentions.  They are merely a sequence of instructions which, when a condition is met, branches and jumps to a second sequence of instructions.  Human decisions jump too, in the sense that if we decide we will not go out with Sally, then we jump to the routine of calling her up, giving her the sad news, and not preparing to go to prom with her.  If, on the other hand, we decide to go out with Sally, we jump to the routine of calling her and telling her that we will go out with her, and then we prepare to go to prom with her, which includes getting a tuxedo that matches the color of her dress.  So, what is different in the human decision scenario and the computer decision scenario?  I would say that in the human case there is an intention, and in the computer case there is no intention, rather just a jump to a new sequence of instructions.
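
To make the point concrete, here is a minimal sketch (in Python, with made-up step names standing in for the prom scenario above) of what a computer 'decision' amounts to: a conditional branch that selects one of two instruction sequences.  Nothing in it is an intention.

    def decide_about_sally(conditions_met: bool) -> list:
        # A computer 'decision' is only a conditional branch: when the
        # condition holds, control jumps to one instruction sequence;
        # otherwise it falls through to the other.
        steps = []
        if conditions_met:                          # the branch point
            steps.append("call Sally and accept")       # execution path A
            steps.append("rent a matching tuxedo")
        else:
            steps.append("call Sally and decline")      # execution path B
        return steps

    print(decide_about_sally(True))    # always execution path A for this input
    print(decide_about_sally(False))   # always execution path B for this input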

Human Desires and Computer Goals

Desires and Goals.  Can we represent a desire in a computer?  I would say not, but then I need to explain why I would say not.  When we try to implement desires in a computer, we resort to implementing goals.  Certainly we can represent goals in an algorithm and get a robot to stack the boxes nicely, as that is the goal, and the robotic sequence of actions to reach the state of stacked boxes is part of the goal routine to reach the stacked-box state.  Clearly, we can program a computer to execute a series of steps that result in a goal state.  Did the computer DESIRE to have stacked boxes?  Before we answer that question, let's consider desire in the case of a human entity.  If a person has a desire to have stacked boxes, they probably represent in their mind a set of stacked boxes (represent the goal state, but you can do this in a computer too).  They then form the intention to move the boxes so that they stack up, i.e., engage in a series of hand movements to stack the boxes, retaining all the time the image of what the stacked boxes would look like.  So, what's the difference?  It seems the robotic behavior and the human behavior are quite similar.  I would say the difference is in the execution.  In the human case we can't move our arms in the appropriate way, or act in any way, until there is an intention to act (some psychologists would argue that we can act in some cases without intention, as in sleepwalking or other subconscious states).  In the computer case there is a mere (mostly sequential) execution of statements which effect the steps to stack, where there is some code to determine how you get from a state with boxes on the floor to a set of stacked boxes.  As I've said of all computation, computation can SIMULATE human mental states and sequences and SIMULATE human behavior, but to what extent is a SIMULATION the same as what it SIMULATES?  Clearly we can have a computer program simulate a predator-prey scenario, but the simulation is an abstraction of the actual predator-prey scenario (without the blood or the fight, for example) which represents only those details that are relevant to what we want to simulate.
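
As a rough illustration of this, here is a minimal sketch of a goal routine for the box-stacking scenario, with the robot's motor actions abstracted to list operations.  The 'goal' is just a data structure the program works toward until the goal state is matched, not a desire.

    def stack_boxes(boxes_on_floor: list) -> list:
        # Goal state: every box that was on the floor ends up on the stack.
        goal_height = len(boxes_on_floor)       # representation of the goal state
        stack = []
        while len(stack) < goal_height:         # loop until the goal state is reached
            box = boxes_on_floor.pop()          # 'pick up' a box
            stack.append(box)                   # 'place' it on the stack
        return stack

    print(stack_boxes(["red box", "green box", "blue box"]))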

So, this raises the question of what an intention is, and why an intention cannot be represented computationally or simulated.  It seems most of what we simulate is behavior, or what follows the intention.

Human and Computer Decisions

In the case of a computer algorithm there is a SEQUENCE of caused events which results in a BRANCH, which results in a new sequence of caused events.  I.e., the branch is CAUSED.  In the case of a human decision we are saying that there isn't a sequence of events which precedes or causes the intention to act.  Regardless of the soft free will that Raymond says Stanford espouses, the question remains: is the intention or volition CAUSED or NOT CAUSED?  Is there a prior cause to the intention or is there not?  We know in a computer there is a prior cause to the branch.  Stanford may say there are all sorts of premotor planning and inclinations which influence a decision, but if you can't say no to those inclinations then the decision to do x or not do x is NOT FREE.  I don't see how there can be SOFT free will.  There is only free will or no free will, and the question hangs on whether the prior event CAUSES or DOESN'T CAUSE the volition.  If I am right, we are back to Kant trying to explain free will in a causal universe, and he gives an explanation which may be at odds with modern science.  As a point of contrast, we may say a human that intends to do x can also intend to do not x.  I.e., there is a choice.  In the case of a computer branch there is no choice: when the condition is met, i.e., a and b are equal (BEQ) or a and b are not equal (BNE), the appropriate path is DETERMINED; it CANNOT BE OTHERWISE.  (This is consistent with my theory that computers have no volition, free will, or intentions, or even desires in that measure.  They are just a determined electromechanical (robot) device.  Whether a computer can have a belief, or a representation of a belief or a desire, is a separate thought I will take up elsewhere.)  With a human, an intention to do x CAN BE OTHERWISE.  I.e., it is free in that sense and not determined.  In intending to do x or intending to not do x a human may enter a state of deliberation; however, when the choice is made it is not determined, the choice is free.  I contend that Libet's neural activity before consciousness of the choice to raise a finger is neural activity related to the subject's deliberation whether to raise a finger or not, i.e., alternative motor planning scenarios before a choice.
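
The determinacy of the branch can be shown directly.  The sketch below (a Python stand-in for the assembler-level BEQ/BNE case) makes the point: once a and b are fixed, the path taken cannot be otherwise.

    def branch(a: int, b: int) -> str:
        if a == b:          # analogous to BEQ: branch if equal
            return "path taken when a and b are equal"
        else:               # analogous to BNE: branch if not equal
            return "path taken when a and b are not equal"

    # For the same inputs the same path is always taken; re-running changes nothing.
    assert branch(3, 3) == branch(3, 3)
    assert branch(3, 4) == branch(3, 4)
    print(branch(3, 3))
    print(branch(3, 4))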

Stanford claims that free will is, at least in part, the ability or power to do otherwise.  Stanford is saying your choice to do x is free if you can do not x.  Clearly, I contend that one can choose, or have an intention, that one does not have the ability to enact.  Kant would seem to agree with this, as he makes it clear that a good will is good in its intention regardless of whether the intention has any effect, as it wouldn't in the case of inability.  There seem to be abundant examples in human behavior of intentions that one lacks the ability or power to bring about.  I can certainly choose or intend or desire to be an astronaut without the ability to ever become an astronaut.  One might say that such intentions are unrealistic, as we can't make them come about.  But this is just the case of A intending x which A cannot bring about; however, A could still choose not x (which may be more realistic in this case).  Now in cases where one cannot choose to do not x, we would say there is some sort of compulsion, perhaps, and the choice is not completely free.  If A tells B to shoot C or otherwise A will shoot B, and B decides to shoot C, it is not clear that B could have decided not to shoot C, as there is the compulsion of A shooting B.  Maybe this is a reasonable theory, or point at least, about free choices.  Clearly B can make a choice, but there are negative consequences for B doing either x or not x.  Do two negative consequences make a choice not free?  The Stanford text doesn't read this way, and it is an interesting characterization of the inability to choose otherwise because of two negative consequences.  In such a case B will most probably choose to shoot C, as there would be a prioritization of negative consequences where being a murderer is not as bad as being murdered, unless C is a friend or relation.  In a computer you could have a program with two negative consequences, i.e., if a and b are equal the disk drive is destroyed and if a and b are not equal the CPU is destroyed, but which will happen depends on whether a and b are equal or not, in a determinative fashion (given the condition, the path is determined).

Computer Decisions with Indeterminate Input

However, I need to modify or temper my point about computers being determinative systems when sensors are involved.  In a computational system with sensors, the algorithm alone does not determine the output, as the input may not be strictly anticipated in advance and may vary depending upon the environment.  Hence you have a determinative algorithm but varied input that can't be strictly anticipated, and hence varied output that can't be strictly anticipated.  How does this relate to computer decisions?  It seems that the comments about the determinacy of computer decisions hold even in systems with sensors.  Consider that in a determinative system the output is determined through the algorithm by the input.  Then consider that if you submit various input data sets to such an algorithm, the output would be determined by the algorithm in each case.  Now, systems with sensors are just a more general case of this, i.e., determinative algorithms with varied input data sets.  The determinativeness of the algorithm doesn't change, and it is in the algorithm where computer 'decisions' are made: once an input set is selected from the environment, it is determined what choice or decision the algorithm will make and what the consequential output will be.  As the determinacy of the point of decision is by the algorithm and not the input data set (which does set the condition for the branch, and the choice in that sense), my former comments on the determinacy of computer decisions hold even in systems with sensors.  The decision will depend on whatever the input data set is, and once you have that input data set the decision cannot be otherwise.  Robotic systems often or usually have sensors, especially if they are autonomous robots (note the very strong distinction between a 'robot' which is remote controlled, i.e., decisions are made by the human operator within the abilities of the robot, and an autonomous robot, where the robot decides and acts without a human operator).  Even if you have a stream of varied input coming into a sensor, and hence a varied stream of input data, the decision of what to do with that data (what output to produce) is automatically determined by the algorithm and cannot be otherwise.
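
A small sketch of this point, with a randomly simulated sensor standing in for a real one: the readings vary and cannot be anticipated in advance, but for any given reading the algorithm's 'decision' is fixed.

    import random

    def controller(sensor_reading: float) -> str:
        # A fixed, determinative control algorithm: the same reading
        # always yields the same decision, even though readings vary.
        if sensor_reading > 0.5:     # the branch point
            return "turn left"
        return "turn right"

    # Varied, unanticipated input from a (simulated) sensor...
    readings = [random.random() for _ in range(5)]
    # ...but for each reading the decision is determined by the algorithm.
    for r in readings:
        assert controller(r) == controller(r)
        print(round(r, 2), "->", controller(r))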

Maybe this is wrong: you may not be able to do what you intend, but you can still do other than what you intend, i.e., you can choose otherwise.

Clearly, we can do things like represent liking relations in a graph, i.e., in a computational representation. 
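
For instance, a minimal sketch (with made-up names) of a 'likes' relation stored as a graph, i.e., an adjacency structure the program can query:

    # Adjacency representation of a 'likes' relation (hypothetical names).
    likes = {
        "Alice": {"Bob", "Carol"},
        "Bob": {"Carol"},
        "Carol": set(),
    }

    def x_likes_y(x: str, y: str) -> bool:
        # True if x likes y according to the stored graph.
        return y in likes.get(x, set())

    print(x_likes_y("Alice", "Bob"))     # True
    print(x_likes_y("Carol", "Alice"))   # False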

 Copyright Eric Wasiolek 5/27/2020
