As a simple example, if one of the possible acts on the first turn is "Shoot my own foot off", a human planner will decide this is a bad idea generally - eliminate all sequences beginning with this action. But we've flattened this structure out of our representation.
We don't have sequences of acts, just flat "actions". So, yes, there are a few minor complications. Obviously so, or we'd just run out and build a real AI this way. In that sense, it's much the same as Bayesian probability theory itself. But this is one of those times when it's a surprisingly good idea to consider the absurdly simple version before adding in any high-falutin' complications.
Consider the philosopher who asserts, "All of us are ultimately selfish; we care only about our own states of mind. The mother who claims to care about her son's welfare, really wants to believe that her son is doing well - this belief is what makes the mother happy.
She helps him for the sake of her own happiness, not his." But if the mother sacrifices her life to push her son out of the path of an oncoming truck, that's not going to make her happy, just dead. Even our simple formalism illustrates a sharp distinction between expected utility, which is something that actions have; and utility, which is something that outcomes have. Sure, you can map both utilities and expected utilities onto real numbers. But that's like observing that you can map wind speed and temperature onto real numbers. It doesn't make them the same thing.
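If it helps to see the distinction in code, here is a toy sketch of the simple formalism. Every name and number below is invented for illustration; only the abstract structure comes from the text:

```python
# A minimal sketch of the flat decision formalism: Utility attaches to
# Outcomes, Expected Utility attaches to Actions. All names and numbers
# are invented for illustration.

# U(O): utility is a property of outcomes.
utility = {"son_saved": 100.0, "son_lost": -100.0}

# P(O|A): each action induces a probability distribution over outcomes.
p_outcome = {
    "push_son_clear": {"son_saved": 0.9, "son_lost": 0.1},
    "do_nothing":     {"son_saved": 0.1, "son_lost": 0.9},
}

def expected_utility(action):
    # EU(A) = sum over O of P(O|A) * U(O): a property of actions.
    return sum(p * utility[o] for o, p in p_outcome[action].items())

best_action = max(p_outcome, key=expected_utility)
print(best_action, expected_utility(best_action))  # push_son_clear 80.0
```

Note that `expected_utility` takes an action and `utility` is keyed by outcomes; mapping both onto real numbers doesn't make them the same function.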
The philosopher begins by arguing that all your Utilities must be over Outcomes consisting of your state of mind. When we object that people sometimes do sacrifice their lives, the philosopher's reply shifts to discussing Expected Utilities over Actions. But in English it all sounds the same.
The choices of our simple decision system are those with highest Expected Utility, but this doesn't say anything whatsoever about where it steers the future. It doesn't say anything about the utilities the decider assigns, or which real-world outcomes are likely to happen as a result. It doesn't say anything about the mind's function as an engine. To save your son's life, you must imagine the event of your son's life being saved, and this imagination is not the event itself.
It's a quotation, like the difference between "snow" and snow. But that doesn't mean that what's inside the quote marks must itself be a cognitive state.
If you choose the action that leads to the future that you represent with "my son is still alive", then you have functioned as an engine to steer the future into a region where your son is still alive. Not an engine that steers the future into a region where you represent the sentence "my son is still alive".
To steer the future there, your utility function would have to return a high utility when fed ""my son is still alive"", the quotation of the quotation, your imagination of yourself imagining. Recipes make poor cake when you grind them up and toss them in the batter. And that's why it's helpful to consider the simple decision systems first. Mix enough complications into the system, and formerly clear distinctions become harder to see.
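The use/mention distinction above can be made concrete in a toy sketch. Everything here is invented for illustration; it shows a utility function over world-states versus one over the agent's own representations:

```python
# Toy illustration of the quotation point: a utility function over the
# world versus one over the agent's representation of the world.
# All structures here are invented.

def u_over_world(world):
    # Rewards the actual state of the world.
    return 100.0 if world["son_alive"] else 0.0

def u_over_representation(belief):
    # Rewards the agent's *representation*, regardless of the world:
    # this function is fed the quotation, not the referent.
    return 100.0 if belief["son_alive"] else 0.0

# Self-deception: the world is bad, but the representation says otherwise.
world = {"son_alive": False}
belief = {"son_alive": True}

print(u_over_world(world))            # 0.0
print(u_over_representation(belief))  # 100.0
```

An engine maximizing the first function steers toward futures where the son lives; an engine maximizing the second is fully satisfied by the belief.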
So now let's look at some complications. Clearly the Utility function mapping Outcomes onto Utilities is meant to formalize what I earlier referred to as "terminal values", values not contingent upon their consequences. What about the case where saving your sister's life leads to Earth's destruction by a black hole? In our formalism, we've flattened out this possibility. Outcomes don't lead to Outcomes, only Actions lead to Outcomes.
Your sister recovering from pneumonia followed by the Earth being devoured by a black hole would be flattened into a single "possible outcome". And where are the "instrumental values" in this simple formalism? Actually, they've vanished entirely! You see, in this formalism, actions lead directly to outcomes with no intervening events. There's no notion of throwing a rock that flies through the air and knocks an apple off a branch so that it falls to the ground.
Throwing the rock is the Action, and it leads straight to the Outcome of the apple lying on the ground - according to the conditional probability function that turns an Action directly into a Probability distribution over Outcomes.
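The flattening can be shown directly. In this toy sketch (events and probabilities invented), the Action maps straight to a distribution over flat Outcomes, with no rock in flight and no apple falling in between:

```python
# Sketch of outcome-flattening: an Action leads directly to a
# probability distribution over Outcomes, with no intermediate events
# represented. Probabilities and utilities are invented.

p_outcome = {
    "throw_rock": {"apple_on_ground": 0.8, "apple_on_branch": 0.2},
    "do_nothing": {"apple_on_ground": 0.0, "apple_on_branch": 1.0},
}

# A sequence of real-world events collapses into one outcome key with
# one utility, e.g. "sister recovers AND Earth devoured":
utility = {
    ("sister_recovers", "earth_survives"): 100.0,
    ("sister_recovers", "earth_devoured"): -1e12,  # one flat Outcome
}

# Each action's distribution over flat outcomes sums to 1.
assert all(abs(sum(d.values()) - 1.0) < 1e-9 for d in p_outcome.values())
```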
In order to actually compute the conditional probability function, and in order to separately consider the utility of a sister's pneumonia and a black hole swallowing Earth, we would have to represent the network structure of causality - the way that events lead to other events. And then the instrumental values would start coming back.
If the causal network was sufficiently regular, you could find a state B that tended to lead to C regardless of how you achieved B. Then if you wanted to achieve C for some reason, you could plan efficiently by first working out a B that led to C, and then an A that led to B. This would be the phenomenon of "instrumental value" - B would have "instrumental value" because it led to C.
C itself might be terminally valued - a term in the utility function over the total outcome. Or C might just be an instrumental value, a node that was not directly valued by the utility function.
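Under that regularity assumption, planning through B can be sketched as a backward chain. The causal links below are an invented toy network, not anything specified in the text:

```python
# Hedged sketch of instrumental value in a regular causal network:
# B leads to C however B was reached, so a planner works backward from
# C to a B that leads to it, then to an A that leads to B.
# The links here are invented toy examples.

# "X tends to lead to Y" in a sufficiently regular network.
leads_to = {
    "open_door_with_key": "inside_car",
    "inside_car": "at_supermarket",
    "at_supermarket": "have_chocolate",  # C: the valued node
}

def plan_backward(goal):
    # First work out a B that leads to C, then an A that leads to B...
    chain = [goal]
    while True:
        causes = [a for a, b in leads_to.items() if b == chain[0]]
        if not causes:
            return chain
        chain.insert(0, causes[0])

print(plan_backward("have_chocolate"))
# ['open_door_with_key', 'inside_car', 'at_supermarket', 'have_chocolate']
```

Here "inside_car" and "at_supermarket" acquire instrumental value only because they sit on a chain ending at the valued node.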
Instrumental value, in this formalism, is purely an aid to the efficient computation of plans. It can and should be discarded wherever this kind of regularity does not exist. Suppose, for example, that there's some particular value of B that doesn't lead to C. Would you choose an A which led to that B? Or never mind the abstract philosophy: If you wanted to go to the supermarket to get chocolate, and you wanted to drive to the supermarket, and you needed to get into your car, would you gain entry by ripping off the car door with a steam shovel?
Instrumental value is a "leaky abstraction", as we programmers say; you sometimes have to toss away the cached value and compute out the actual expected utility. Part of being efficient without being suicidal is noticing when convenient shortcuts break down. Though this formalism does give rise to instrumental values, it does so only where the requisite regularity exists, and strictly as a convenient shortcut in computation. But if you complicate the formalism before you understand the simple version, then you may start thinking that instrumental values have some strange life of their own, even in a normative sense.
That, once you say B is usually good because it leads to C, you've committed yourself to always try for B even in the absence of C. People make this kind of mistake in abstract philosophy, even though they would never, in real life, rip open their car door with a steam shovel. You may start thinking that there's no way to develop a consequentialist that maximizes only inclusive genetic fitness, because it will starve unless you include an explicit terminal value for "eating food".
People make this mistake even though they would never stand around opening car doors all day long, for fear of being stuck outside their cars if they didn't have a terminal value for opening car doors.
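The leaky-abstraction point lends itself to one last sketch: the cached value "reaching B is good" must be thrown away when a particular route to B breaks the regularity. All states and numbers below are invented:

```python
# Sketch of instrumental value as a leaky abstraction: the cached value
# of reaching B ("inside_car") must sometimes be discarded in favor of
# recomputing actual expected utility. All states/numbers are invented.

utility = {"have_chocolate": 10.0, "no_chocolate": 0.0}

# Cached shortcut: B = "inside_car" usually leads to C = "have_chocolate".
cached_value_of_inside_car = 10.0

# Both actions reach B, but one of them breaks the B -> C regularity
# (a car with its door ripped off won't get you to the supermarket).
p_outcome = {
    "open_door_with_key":         {"have_chocolate": 0.9, "no_chocolate": 0.1},
    "rip_door_with_steam_shovel": {"have_chocolate": 0.0, "no_chocolate": 1.0},
}

def actual_expected_utility(action):
    # Toss the cached value and compute the real thing.
    return sum(p * utility[o] for o, p in p_outcome[action].items())

for a in p_outcome:
    print(a, actual_expected_utility(a))
# open_door_with_key 9.0
# rip_door_with_steam_shovel 0.0
```

The cache assigns both routes the same instrumental value; only recomputing expected utility exposes the broken shortcut.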