Why does sentiment add value, and why is that added value unique to the frame of reference of those who feel the sentiment? — Bliss
What, for example, could be the difference in 'possible futures' between the value to me of a Rembrandt painting vs. a Picasso?

You are an object in your universe. Maybe you relate to the Rembrandt painting in more ways than you relate to the Picasso painting. Maybe you relate to both paintings equally, but you know society relates to Rembrandt more than Picasso. Maybe you have knowledge that the girl you like is into Rembrandt but not Picasso, and the Rembrandt makes a future with her possible that the Picasso couldn't. Maybe neither you nor anyone you know relates to either painting, but you know the Rembrandt has more exchange-value than the Picasso and that you can take more actions with more money. There are any number of answers.
What do we make of the person who chooses to write a book rather than purchase a book? How do we measure the value to them of the book they're writing vs. the monetary value of the book they would have purchased?

Unlike the comparison between the paintings, I don't know that this is a logical one, because these aren't really situations where you're required to choose one option over the other; writing a book and buying a book are not two alternate means of reaching the same outcome, generally speaking. I've written and published a book, but I don't have a copy of it.
It seems to me that value, as meaningfulness, is less about the number of possible futures than about the meaningfulness of those futures.

This is the traditional understanding of value, yes. I find this traditional understanding limiting, and I'd like to push past that limitation.
It is in the nature of what we find valuable that in order to continue to find value, we have to qualitatively change our relationship to what we find meaningful. This cannot be subsumed under a quantitative measurement scale because the scale will always be stuck in a prior definition.

If you think dynamic relationships can't be modeled quantitatively, then boy, does the 1700s have some news for you.
You're trying to use a mode of description (mathematical description of the motion of physical objects) designed for one purpose, and apply it to an entirely different stratum of phenomena: the subjective valuation of human beings.

I was tempted to use an additional example, one not based on the motion of physical objects, because I feared you would interpret the motion example as encompassing the entire idea. Ultimately, I decided you would probably be able to see how motion is only one of the infinite strata of phenomena the idea applies to, and that you would generalize motion to other phenomena instead of assuming I'm reducing the other phenomena to motion. My mistake.
Let's say we are trying to choose between two objects put in front of us. One is a robot dog with a randomizing program, so that its behavior will always be unpredictable. Next to it is a live dog. Most of us would say that the live dog is more valuable in general to us, not necessarily in monetary terms (the robot could be made of pure gold and diamonds). What makes the live dog more interesting than the robot? We could interact with the robot in a potentially infinite variety of ways given its randomness. But the dog will appear valuable to us in terms of its purposefulness and its ability to relate to us, to understand and care about us.

First, some disambiguation: just because the robot dog has "random" programming doesn't mean you can interact with it in more ways than you could the live dog. In fact, I'd argue the opposite. A randomizing program has a necessarily finite number of distinct outputs: every pseudorandom algorithm running on finite hardware eventually repeats itself, or, equivalently, after some finite amount of time it becomes impossible for it to produce a novel output. Live objects, by contrast, do not necessarily have a finite number of outputs, and it is always possible for a living thing to produce a novel one. Under the assumption that the robot dog is a perfect physical recreation of the live dog, such that the two are physically capable of taking all the same physical actions, the live dog will necessarily exhibit the larger variety of actions over a long enough duration of time.
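To make the repetition point concrete, here's a minimal sketch (my own, not anything from the thread): the simplest kind of pseudorandom generator, a linear congruential generator with toy parameters, must revisit a previous state after finitely many steps, and from then on its outputs cycle forever.

```python
# Toy pseudorandom generator: finite internal state, so it must eventually
# repeat. The parameters below are illustrative, chosen only to keep the
# state space tiny and the cycle easy to see.

def lcg(seed, a=21, c=7, m=64):
    """Linear congruential generator; at most m distinct states exist."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

def cycle_length(seed, m=64):
    """Run the generator until a state repeats and return the cycle length."""
    seen = {}
    for step, value in enumerate(lcg(seed, m=m)):
        if value in seen:
            return step - seen[value]
        seen[value] = step

print(cycle_length(3))  # always finite, never more than m
```

A real robot's randomizer would have a vastly larger state space, but the same argument applies: finite state means a finite repertoire.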
But there is no one-size-fits-all description for different aspects of our world as we encounter them. Your mode of thinking is called objectivism.

I am absolutely not assuming any object has an objective value. My entire argument is, in fact, that the value of an object is relative. For example, a life jacket is much more valuable to a drowning person than it is to a person on land, because without the life jacket zero actions will be possible for the drowning person in the future.
A point in space seems to be perfectly objective.

As someone who wrote their thesis on the relativity of points in space, I would disagree with this wholeheartedly. Every point in space can be represented in an infinite number of ways, and none of those ways is objectively correct or incorrect. That said, I don't see how this argument in any way contradicts mine.
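A quick sketch of what I mean by "represented in an infinite number of ways" (the specific coordinates are just an invented example): the same point can be labeled with Cartesian coordinates relative to any origin you like, or with polar coordinates, and no labeling is privileged.

```python
import math

# One point, three of its infinitely many descriptions. The origins and
# coordinate choices here are arbitrary, which is exactly the point.
p_cartesian = (1.0, 1.0)                                   # origin at (0, 0)
p_shifted = (p_cartesian[0] - 5.0, p_cartesian[1] - 5.0)   # origin moved to (5, 5)
p_polar = (math.hypot(*p_cartesian), math.atan2(p_cartesian[1], p_cartesian[0]))

print(p_cartesian, p_shifted, p_polar)  # different labels, same point
```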
We can't understand the software of a computer by reducing it to hardware or the behavior of molecules.

Uh, yes we can. Quantum software in particular is entirely governed by the behavior of objects even smaller than molecules. All non-quantum software is reducible to binary operations, and all binary operations can be reduced to operations of hardware. Given enough people, any non-quantum software can be reproduced by people pulling levers. People have built functioning computers in Minecraft, even.
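As a small sketch of that reduction (my own toy example, not a claim about any particular machine): ordinary addition, rebuilt from nothing but NAND gates, the kind of primitive you could implement with levers, relays, or redstone.

```python
# Integer addition reduced to a single hardware-level primitive: NAND.

def nand(a, b):
    return 0 if (a and b) else 1

def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def full_adder(a, b, carry):
    s = xor(xor(a, b), carry)
    carry_out = or_(and_(a, b), and_(carry, xor(a, b)))
    return s, carry_out

def add(x, y, bits=8):
    """Add two integers one bit at a time, using only NAND-built gates."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(23, 42))  # 65, computed entirely from NANDs
```

Stack enough of these and you get arithmetic units, memory, and eventually any non-quantum program.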
I like to imagine value as emerging from broken symmetry; more specifically, from broken possibility symmetry. That is, different objects have different values because they make different numbers of futures possible. If every object could be used to make the same number of futures possible - if possibility symmetry existed - then every object would have equal value. — Bliss
When something, like a car, is associated with a person, the future possibilities for that car decrease from n associations to just one or a handful. So it seems to me that value, in your terms, is about decreasing possibilities rather than increasing possibilities.

If I were the car, then my possibilities might be diminished. For that reason, if I were a car, I might feel some apprehension about being purchased. Because I am the person buying the car instead, though, my possibilities increase: assuming I transition from has_car=false to has_car=true, I will be able to go places I could not have gone before, and there's nowhere I could've gone before that I no longer can. What you've highlighted, though, is the relativity of the value of events: different perspectives can value the same event differently.
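Here's the buyer's side of that has_car transition as a toy sketch (the destinations are invented purely for illustration): the set of reachable futures only grows.

```python
# Futures reachable by the buyer, before and after the purchase.
# The specific destinations are made up; only the set relation matters.
reachable_without_car = {"home", "corner shop", "office on the bus route"}
reachable_with_car = reachable_without_car | {"the coast", "the mountains", "a distant friend"}

assert reachable_without_car <= reachable_with_car  # nothing is lost
print(len(reachable_without_car), "->", len(reachable_with_car))  # 3 -> 6
```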
Your theory of value is also a bit too cold. Where is the "sentimental" in it? People have things of sentimental value because of an emotional investment. This component is missing from your analysis.

I think measuring different components of value in different units is a limitation. The emotional component of my analysis of value is missing precisely because I'm trying to avoid making it separate from my underlying notion of value.