If you think like that then you mean to say that the information accessible to us is insufficient to conclude the presence of consciousness. So, here I am, talking to my friend and his conduct is identical in important respects to mine - he talks, acts just like me - and I, from that, make the following analogical inference:
1. I talk, act, initiate, respond in certain ways and I'm conscious.
2. My friend also does talk, act, initiate, respond in the same way as I do.
Ergo,
3. My friend is conscious. — TheMadFool
I don't think we're justified in making that inference. Two things are going on. One, we assume that we're all biological beings that are pretty much built the same way, so that if I have a body and I'm conscious, and you have a similar body, then you should also be conscious. But I'm not justified in assuming that matter even exists, let alone that you or I are made of it. The belief in the existence of some external non-conscious stuff is just that: a belief. It's equally likely, for all I can tell, that this is all a dream and your (and my) body is just part of a dream. If that's the case, then I should no more assume other people are conscious than I should assume people in my dreams are conscious.
The assumption that materialism is true is also undermined by the Hard Problem of Consciousness. By this point we should have some scientific theory, if only a very primitive one, of how consciousness arises from non-conscious stuff, but of course the theories are all over the place, from panpsychism to mysterianism to computationalism to outright denial of consciousness itself. This, I think, is evidence that materialism (and substance dualism) is false. That means that everyone I meet is probably a dream figure who may or may not be conscious.
The other reason we assume other people are conscious is we don't want solipsism to be true.
Now, if I'm to doubt my argument from analogy above, there must be a relevant dissimilarity between my friend and me. If none can be found, the argument is cogent and I, perforce, must accept that my friend, like me, is conscious.
Coming to AI, we seem reluctant to follow the same logic; i.e., the following intriguing scenario arises for AI:
4. I talk, act, initiate, respond in certain ways and I'm conscious.
5. An AI also talks, acts, initiates, responds in the same way as I do.
BUT...
6. I hesitate to conclude the AI is conscious.
We're trying to have our cake and eat it too. If you have doubts about the AI being conscious, that uncertainty automatically extends to your friend; conversely, if you believe your friend is conscious, the AI must also be conscious!
That doesn't necessarily follow. If I believe consciousness is only produced by organic brains, I could be sure my friend (who I believe has an organic brain) is conscious, yet doubt whether any machines are conscious.
Something about the evidence for consciousness is problematic. Either we believe consciousness can be mimicked perfectly, in which case there's no difference between your friend and a p-zombie and nonphysicalism is true; or it can't be mimicked, in which case any AI that passes the Turing test is truly conscious.
The thing that's problematic about it is that everything is filtered through our own minds, so it's impossible to verify whether any other minds exist. Solipsism will always be a viable option.