I think the problem is something like this: You want to say that “Consciousness can only be identified through behaviors” and also “Therefore, anything with certain specified behaviors is conscious.” I’m not persuaded by the idea that being conscious consists of behaviors, but let’s grant it. The argument is still shaky. The fact that (at the moment) we can only identify consciousness through behaviors doesn’t mean that everything exhibiting those behaviors must be conscious. Compare: Some Xs are Y; a is an X; therefore a is Y. This doesn’t follow.
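The invalidity of that form can be checked mechanically. Here is a minimal Lean 4 sketch, using an illustrative countermodel of my own choosing (not the author’s): read X as “is a natural number” and Y as “is even.” Some naturals are even, 3 is a natural, yet 3 is not even.

```lean
-- Countermodel to "Some Xs are Y; a is an X; therefore a is Y".
-- X := "is a natural number", Y := "is even", a := 3.
-- Both conjuncts are decidable, so `decide` settles them.
example : (∃ n : Nat, n % 2 = 0) ∧ ¬ (3 % 2 = 0) :=
  ⟨⟨2, by decide⟩, by decide⟩
```

Since a true premise set coexists with a false conclusion, no valid argument has that shape.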
Here’s another way to think about it. You’ve said you don’t like speculating about the future, but if consciousness is truly a scientific problem, as we both believe it is, then at some future point we’re going to know a lot more about it. Let’s imagine that someday we’ll be able to say the following: “Consciousness (C) is caused by (X + Y + Z), and only by (X + Y + Z), and is necessarily so caused.” So, in determining a particular case, we could say, “C iff (X + Y + Z); ~(X + Y + Z); therefore, ~C”. This would give us objective criteria to ascertain consciousness for any given entity. It wouldn’t rely on either behavior or subjective reports.
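Unlike the syllogism above, this inference is formally valid: it is modus tollens applied through the biconditional. A minimal Lean 4 sketch, with C, X, Y, Z taken as arbitrary propositions:

```lean
-- Valid: if C holds exactly when (X ∧ Y ∧ Z) does, and (X ∧ Y ∧ Z)
-- fails, then C fails. The proof pushes a hypothetical C through the
-- forward direction of the biconditional into the contradiction.
theorem no_cause_no_C {C X Y Z : Prop}
    (h : C ↔ (X ∧ Y ∧ Z)) (hn : ¬(X ∧ Y ∧ Z)) : ¬C :=
  fun hc => hn (h.mp hc)
```

Note that the validity depends entirely on the “iff”: a mere “if (X + Y + Z) then C” would not license the inference.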
Now, lest you think I’m deliberately practicing sleight of hand, let me point out that this happy state of affairs obtains only if it turns out that X, Y, and Z are all objective and unproblematically causal. This may not be the case; we are currently clueless about what gives rise to consciousness. But if it is true, then the hard problem will have been solved. We will know what causes consciousness, and why it is necessarily so caused. Wouldn’t it be prudent, then, to assume that our current reliance on behavioral markers to identify consciousness is an unfortunate crutch, and that there is no important connection between the two? After all, we know that behaviors don’t cause consciousness, but something does. When we learn what that something is, we may be able to abandon functional “explanations” entirely.
A final thought: Perhaps all you’re saying is that AIs and robots and other artifacts might be conscious, for all we know. To me, that’s unobjectionable, though unlikely. It’s only when we start saying things like “Joe AI is conscious, which we know because of its behaviors” that anti-functionalists like me get exercised.