Comments

  • The Knowledge Explosion
    Hi Jake,

    I'm pleased to engage with you on this. I didn't intend my initial post to be an objection to your view (and certainly not to the presentation of it here, which I think is written quite nicely). Rather, I was sharing one academic philosopher's take on whether the issue is being discussed in philosophy scholarship, as well as on what I took to be the plausibility of the view.

    To clarify my point: I think we can distinguish between knowledge and technology. (Within technology, I also think we can distinguish between technological knowledge and the application of technological knowledge. For instance, we may discover how to make a nuclear bomb, but to apply this knowledge and actually make one is something different. Yet I would not deny that if we have the technological knowledge, someone will apply it, or try to. I think this is at least part of your insight, and I think we see with AI, for instance, that this is the case.) Knowledge is a broader category than technology, which seems to be a species of knowledge. It seems to me that your view is strongest when applied to technology, but there are other species of knowledge that don't seem so obviously problematic in the way you suggest. So it would be interesting to see whether you could extend your view to other species of knowledge, such as mathematical knowledge. It doesn't seem to me that learning the next digit of pi is problematic in the way that nuclear technology is. Without either proving that all species of knowledge endanger us or limiting the type(s) of knowledge you have in mind, your argument is not as convincing as it could be.

    Moreover, given the dangers we face in the world today, it seems that knowledge is our best way out of some of them. For instance, while nuclear technology is troubling, perhaps the more serious problem humanity faces is climate change. In many respects, it seems the best (and perhaps only) way to save ourselves is with knowledge. People who deny the human contribution to climate change need to come to know that we are playing a big role in the problem before they will be ready to change. And even those who do deny the human role should at least think it important to figure out what is causing it. Moreover, we need to learn how we can stop it: what are the most effective ways to slow it? Some people also think that technology will help us by, e.g., coming up with a technique to remove excess carbon.

    If this is right, then even if some kinds of knowledge are dangerous, others might really help us on the global scale. So, to weigh the risks against the benefits, we need to determine the likelihood of nuclear war and of climate catastrophe. (But doing this itself requires knowledge.)

    Another form of knowledge that would help is increased moral knowledge. Along with this, if we had more and better knowledge about how to educate people morally, then we'd be in better shape. Again, this might be the only thing that can avert the dangers certain technologies pose. One might deny that moral knowledge is possible, or that we can learn better ways of making people moral, but these are arguments that need to be made.

    At any rate, I think your view is worthwhile and would be a welcome addition to philosophical discussions. In case you don't know about it, https://philpapers.org/ is a great place to see whether any other thinkers are raising similar concerns.
  • The Knowledge Explosion
    Here are a few thoughts from an academic philosopher.

    I think this is a good, provocative read. It might benefit from some refining, though. I say this because, with regard to whether the thesis is true and whether it is being discussed (which you raised on The Philosophers' Cocoon), things cut in two directions: (1) in the sense in which the thesis is true (or plausible, at any rate), it is already being discussed; and (2) in the sense in which it is not being discussed, it is not true.

    As for (1), your thesis is about knowledge, but your discussion seems to be about technology in particular. Limited to the progress of technology, it seems plausible to say that there is a legitimate concern about whether technology is advancing at too fast a pace for humans to survive. I think this is a well-worn issue both in philosophy and outside of it. I'm no expert, but in contemporary discussions one likely finds this sort of concern being debated in applied/medical ethics and in STS fields. The earliest example I can think of in the history of philosophy is Plato's critique of writing (a technology) in the 'Phaedrus'.

    As for (2), if you don't intend to limit your point to technology, or applied knowledge, then I don't think your claim is being discussed (though I might be wrong). But it doesn't seem plausible to me. Consider whether you think that learning more about mathematics is dangerous. Or learning more about the oceans. Or about how non-human animals experience the world. Or the universe. Or physics. There are any number of fields where discovery and the progression of knowledge are valuable, and they seem to me to undermine your concern.

    If you don't intend to limit your point to technology, you might want to refine it along the following lines: any knowledge that can have practical applications is dangerous, and the thought that more of it is better is mistaken.