>>but the meanings carry very diverse technological, ethical, metaphysical and even spiritual connotations
Agreed, this is why we need to figure out what consciousness is, or else fall back on the old censor's standard for pornography: "I'll know it when I see it."
Personally, I'm more interested in ethical aspects. If there's a construct that's conscious (in the sense of being self-aware), do we have a responsibility to not destroy it? What about turning it off?
I suspect it's an emergent phenomenon, and AI is nowhere near the level of complexity needed for it to happen. (As a side note, I used to be in and out of the MIT Media Lab on a regular basis. One day I got a major belly laugh from Marvin Minsky when I told him, in all seriousness, that " 'AI' just means 'we can't do that quite yet' ". Long story.)
I subscribe to the "thousand brains" theory of cortical organization (which says nothing about consciousness), but that theory looks at mammalian organization. Birds have a structure, the pallium, that parallels the cortex in function but is organized quite differently, and yet there do appear to be self-aware birds (corvids, parrots). So self-awareness does not seem to be structurally dependent, and IMO it's entirely possible that an AI someday might be self-aware.
As always, just my $0.02