Posted on January 28, 2019 at 9:29 AM
Written by Stephen Rainey
If ‘neurotechnology’ isn’t a glamour area for researchers yet, it’s not far off. Technologies centred upon reading the brain are rapidly being developed. Among the claims made of such neurotechnologies are that some can provide special access to normally hidden representations of consciousness. Through recording, processing, and making operational brain signals we are promised greater understanding of our own brain processes. Since every conscious process is thought to be enacted, or subserved, or realised by a neural process, we get greater understanding of our consciousness.
Besides understanding, these technologies provide opportunities for cognitive optimisation and enhancement too. By getting a handle on our obscure cognitive processes, we can get the chance to manipulate them. By representing our own consciousness to ourselves, through a neurofeedback device for instance, we can try to monitor and alter the processes we witness, changing our minds in a very literal sense.
This looks like some kind of technological mind-reading, and perhaps too good to be true. Is neurotechnology overclaiming its prospects? Maybe more pressingly, is it understating its difficulties?
The plausibility of these technological claims ought to be separated from the technical feasibility of the neuroscience that underwrites them. When it is reported that Facebook will develop a device to allow users to type with their minds or their thoughts, this is at best an extravagant claim – one sometimes owed more to headline writers than to the hacks behind the main piece. For the majority of consumer devices marketed as ‘neurotechnology’, it is implausible that they actually operate by detecting and recording brain signals. In fact, the claims reported as a ‘Facebook thought-reader’ are described by Mark Zuckerberg himself as a brain reader. But that’s possibly just as implausible as the headline writer’s ‘thought reading’ gloss.
Something like a commercially available mind reader is unlikely to have the number, density, or sensitivity of electrodes to be able to detect neural signals. Far more likely is that such a device will respond to electrical activity in the muscles of the face, whose signals are maybe 200 times as strong as those in the brain, and much more closely positioned to the device’s passive electrodes. In all likelihood, typing with such a device exploits micro-movements made when thinking carefully about words and phrases. Muscles used in speaking those words are activated as if preparing to speak them, hence corresponding to them in a way that can be operationalised into a typing context. Impressive technology, perhaps, but not as complex or interesting as mind reading.
What of the neuroscience behind these claims? Typically, bench science becomes technological reality at some point, such is the nature of innovation cycles. Is mind reading technology on the horizon?
That specific types of signals in specific brain areas appear to be ‘behind’ our conscious activity suggests that such activity ought to be classifiable in a quite objective way. At least some neurotechnological development paradigms suggest this is the case. Claims have been made about ‘accessing thoughts’, ‘decoding dreams’, ‘identifying images from brain signals’, ‘reading hidden intentions’. Attending to the brain signals means getting to the mental content, it seems.
The link between brain signals and mental states like thoughts is not clear. Certainly, it seems as if a great deal more information than is captured through measuring brain signals is required if meaningful inferences are to be drawn from them about thought content or dreams. For example, in Yukiyasu Kamitani’s dream decoding work, 30-45 hours of interview per participant are required in order to classify a small number of objects dreamt of. This is impressive neuroscience experimentation, but it isn’t a simple ‘reading of the brain’ to ‘decode a dream’. However brain signals are read, or decoded, it may remain that they are not identical with mental states, nor with the specific content of thoughts.
Interview is an interesting supplement to brain signal recording because it specifically deals in verbal disclosures about the experience of mental states. The objective recording of the brain signal, its functional sorting, is perhaps insufficient as an account of a mental state precisely in that it has no experiential dimension. The objective promise of recording signals might be exactly what cuts them off from the mind. Neuroscientist Steven Rose suggests something like this as he writes,
‘…the meaning of any experience is then not “in the brain” but in a mind which is an open system, depending to be sure on the brain, but not isolable within it. That is, the mind is wider than the brain. This is not a dualist position, but a rejection of the philosophy of a mechanical materialism that constantly seeks to reduce higher order phenomena to lower ones.’
So maybe these kinds of approaches can’t, on their own, deliver an account of mental content. That sort of mind-reading, at least, might be off the table. Could they nevertheless give objective data that could serve for neurofeedback, and thereby cognitive optimisation, or enhancement?
If we can’t access the specific, rich, experiential content of a mental state via some device it might still be instrumentally useful to get data on what our brains are up to. We could perhaps use that to make decisions about why we feel a certain way, why we are disposed to act somehow, and perhaps enact different behaviours should the brain data appear to underwrite an unsatisfactory response. Neurofeedback has been shown to be effective in treating attention deficit hyperactivity disorder (ADHD), for example, so perhaps it could be generally helpful in behaviour and attitude modification.
Since we would be the ones deciding, on the basis of the data, what we’d like to modify about our attitude or behaviour, any ethical issue of dominance might appear not to arise. No-one is making me change my behaviour, so the decisions I make based on the data are mine, and my autonomy is unassailed. Maybe. But maybe there’s something in the idea of one’s brain data as more authoritative than one’s own perspective that ought to prompt a pause.
There is at least the possibility that, with this kind of set-up, one might take on the task of regulating oneself according to brain data more than contextual factors. “Should I be anxious?”, asked in relation to a context, can have various answers relating to whether and how one ought to be anxious in the situation as it unfolds. But asked generally, in terms of brain data, what can we say? That anxiety is bad, maybe? That I ought to modulate my response such that anxiety is reduced?
If brain data are thought of as objective, then the states they underwrite also take on an air of objectivity, and so invite a value judgement in an objective sense – like ‘anxiety is bad’. If we think of neurofeedback in this sense, one that invites responding to ourselves as objects in general, there does seem to be a challenge to autonomy, as we are perhaps steered toward a kind of value blindness. ADHD is a recognised condition, and there are codifiable behaviours associated with it. A subjective perspective in general isn’t quite as specifiable – there is a dynamic interaction among people, environment, culture, biology, and context, in which subjective perspectives operate. Generalised neurofeedback-based behaviour and attitude modification may need more scrutiny than it at first appears.
At any rate, there is the already noted issue that consumer ‘neural’ technologies likely don’t approach the complexity or sensitivity required to actually record neural signals. In all likelihood, a commercially available neurofeedback device for behaviour or attitude modification would be responding to very general wave patterns not specific to particular brain or mental states. Or, indeed, it might be responding to electrical activity in facial muscles. A nascent frown is probably not a great basis for altering one’s perspective on some given situation.
Even if the data from the brain don’t actually represent a genuine, contentful state of mind for the user of a neurotechnological device, problems may remain. Such data may be taken as genuine representations of mental states by others, besides oneself. There is scope here for issues of misapprehension and mistreatment. Neurotechnology offers exciting developments from neuroscience, but we’d do well not to get lost in the hype.
This piece is reblogged from https://stephenrainey.wordpress.com/2018/10/31/better-living-through-neurotechnology/