Features | Brain chips, battle suits, and cochlear implants

Rebecca Falvey examines the benefits and pitfalls of brain-computer interface technology

An interface system that allows an owl monkey on a treadmill to control the movement of a 200-pound humanoid robot is on its way to allowing people who have lost the ability to communicate to do so again.

This is the result of two decades of research into brain-computer interfaces (BCI): invasive neural prosthetics that harness the brain signals accompanying movement and translate them into the movement of a computer cursor, keystrokes on a keyboard, a prosthetic limb, or a separate machine, like a robot.

These interfaces may someday bring immense independence to people confined to a wheelchair, bed, or their own brain – for instance, those paralyzed by Lou Gehrig’s disease (ALS), stroke, or cerebral palsy.

A company called BrainGate is currently conducting research into mind-controlled wheelchairs and prosthetic limbs. This emerging technology will benefit not only the disabled and disadvantaged, but also the military and the youth entertainment industry.

The U.S. Army has invested at least $4 million in the development of “thought helmets,” which would enable soldiers to communicate without speaking – not with sentences and words, but intentions and apprehensions.

The Canadian military has also sponsored a significant amount of research in this field over the past two decades, with the intention of restoring limb function after injury, in addition to its long-term objective of “performance enhancement.”

Top-of-the-line toy and video game companies, such as Mattel, Nokia, Sega Toys and Uncle Milton, are working with the company NeuroSky. NeuroSky’s “ThinkGear” technology is a non-invasive brain-computer interface using electroencephalography (EEG), which records the electrical activity along the scalp produced by neurons firing within the brain.
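To illustrate the principle (this is a toy sketch, not NeuroSky’s actual ThinkGear algorithm; the sampling rate, band limits, and signal are all invented for illustration), an EEG-based interface typically reduces the raw scalp recording to the power in a few frequency bands, which a game can then map to a control value such as “attention”:

```python
import numpy as np

fs = 256                       # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)  # two seconds of samples

# Simulated scalp signal: a 10 Hz "alpha" rhythm buried in noise.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

def band_power(signal, fs, low, high):
    """Mean spectral power of the signal between low and high Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

alpha = band_power(eeg, fs, 8, 12)   # alpha band (8-12 Hz)
beta = band_power(eeg, fs, 13, 30)   # beta band (13-30 Hz)
print(alpha > beta)                  # the simulated alpha rhythm dominates
```

Real headsets add hardware filtering and artifact rejection, but band power of roughly this kind is the standard starting point for non-invasive interfaces.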

Invasive BCIs, on the other hand, are implanted into the grey matter of the brain during neurosurgery.

In 2004, the first clinical trial of invasive BCI was conducted on a 25-year-old Massachusetts native named Matthew Nagle, who was paralyzed from the neck down. A 96-electrode implant was placed over the region of his motor cortex controlling his dominant left arm and hand, and wired to a connector on the outside of his skull that could be hooked up to a computer. The computer was trained to discern his thought patterns and associate them with the movements he chose.
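That “training” step can be pictured as fitting a decoder from many electrodes’ firing rates to intended movement. The following is a hypothetical sketch of the general approach on simulated data (it is not BrainGate’s actual algorithm, and every number in it is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_samples = 96, 500   # 96 channels, as in Nagle's implant

# Simulated training data: each electrode's rate is a noisy linear
# function of the intended 2-D cursor velocity (a stand-in for the
# real tuning of motor-cortex neurons).
true_map = rng.standard_normal((n_electrodes, 2))
velocity = rng.standard_normal((n_samples, 2))   # intended (vx, vy)
rates = velocity @ true_map.T + 0.1 * rng.standard_normal((n_samples, n_electrodes))

# "Training" = least-squares fit of a decoding matrix W so that
# velocity is approximately rates @ W.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decoding fresh activity recovers the intended movement direction.
new_velocity = np.array([[1.0, 0.0]])            # think "cursor right"
new_rates = new_velocity @ true_map.T
decoded = new_rates @ W
print(np.round(decoded, 2))
```

Clinical systems use more sophisticated decoders and must recalibrate as recorded signals drift, but mapping population activity to intended movement is the core idea.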

With this device, Nagle was able to do everything that most people can do on a computer and the internet by pressing a button or moving a cursor. He could also open and close his prosthetic left hand. Under FDA regulations, the device had to be removed within a year: at this stage of the technology’s development, the brain’s immune response generally rejects the foreign implant within two years.

Independence for those with active brains but inactive bodies is one of the main goals of this research. In this respect, according to Nagle’s own accounts, the procedure was successful.

“I can’t put it into words,” Religion & Ethics Newsweekly quoted him as saying. “It’s just – I use my brain. I just thought it. I said, ‘Cursor go up to the top right,’ and it did.”

I’ve grown up with a deaf mother and a father who works as an interpreter in the deaf community, mostly with students in school. I learned American Sign Language at the same time I learned English.

Since the 1980s, increasing numbers of deaf people have received cochlear implants – minimally invasive electronic prosthetics that can be surgically implanted within the inner ear of those who are deaf or hard of hearing. Though their ethical quandaries are different from those of brain-computer interfaces, they have generated plenty of controversy. The fear they raise is not one of computerization, but a familiar feeling of oppression in the deaf community.

It’s been reported that the implants can improve the hearing of those born hard of hearing or those who have gone deaf later in life, but have little effect on those born completely or almost completely deaf. It’s now widely accepted in the medical field that cochlear implants have a better chance of working the earlier they are implanted – as early as twelve months old. But if a child given a cochlear implant later decides to have it removed, perhaps because they were too deaf for it to work and would rather live as a decidedly deaf person with a chosen language, any residual hearing is likely to have been diminished.

The controversy in the deaf community arises from a feeling of a dismissal of their culture.

Emlyn Murray and Erin Beaver are both young women I know in my hometown of Halifax. Both received cochlear implants – Erin at her parents’ discretion when she was three, and Emlyn by her own choice, with her parents’ encouragement, when she was fourteen. Erin is planning to have her implants removed, as they had no effect on her hearing and caused her pain.

Emlyn felt discomfort and a terrible, robotic noise in her head at first, but she has noticed a gradual improvement in hearing over the past five years. She once used a hearing aid in the other ear and favoured that side; now the hearing in her implanted ear has improved to the point that she no longer uses the hearing aid.

Though Erin is generally opposed to putting cochlear implants in babies and Emlyn is an advocate of it, they both have concerns that “sign language and other distinctions will be lost in the rapidly growing technology,” as Emlyn wrote in an email. Emlyn was born with some hearing, and worked hard at learning how to improve her communication with the implant.

My mother has always taken very strong offense to being called “disabled,” and as she has successfully run her own teaching and consulting business for around thirty years and has lived self-sufficiently her entire life, it’s not hard to see why. While the implants do help some who were born with an ability to hear that could be salvaged, they do not help everyone.

The major fear, which has proven true in many families, is that a baby will be given an implant and raised as an entirely hearing child, yet never develop a good sense of hearing – left with a loss of language and culture, straining to hear, with no real way to express themselves, communicate, or identify with the culture they’ve been thrown into.

There’s a strong connection between this fear and the reality my mother’s generation of deaf people grew up with. From age five to eighteen she lived at a boarding school for the hearing impaired, a fairly abusive place that forced the students to lip-read and wouldn’t let them sign. The school generally left these children in the dark, giving them a sense of shame.

Though cochlear implants do not carry the same form of abusive oppression, the same feeling of loss is prevalent in children who can neither hear well nor sign. My father, who works with deaf children, almost all of whom have cochlear implants, notices that all of his students are supposedly on the way to hearing, yet still need him to interpret for them.

The main purpose of the medical research into neural prosthetics is to help those who have lost their independence and ability to communicate. “In a study of several hundred quadriplegics,” explained Sam Musallam, an assistant professor in McGill’s department of Electrical and Computer Engineering, “most of them said they wish they could at least have their hand movement back.”

Musallam’s research has contributed tremendously to BCI’s ability to identify how the brain commands physical motion and endpoints – research that can be applied to physical prosthetics as well as computer use. People with ALS in particular come to require some form of BCI in order to continue engaging with the outside world.

“They degenerate and become locked in,” said Musallam. “These are people who for a long time were thought to be in a coma or a vegetative state where they can’t move at all, but they’re conscious inside. So nothing happens to the brain, but the ability to move diminishes. I think the potential is just huge to allow them to communicate instead of think that they’re in a coma.” What is mistaken for a coma is simply an inability to respond to stimuli. “But there is nothing wrong with their brain.”

This progress also highlights the emphasis society puts on the individual and independence – the choice of self-sufficiency and technology over dependence on community. Brain-computer interfaces are an ironic development in this respect: the technology will simultaneously give movement to those who have lost the ability to move, and remove the need for it from those who have the ability but choose not to exercise it.

“One of the groups of people who often ask me about this and wonders when we’re going to be ready to implant humans are gamers,” said Stephen Helms Tillery, an assistant professor in the School of Biological and Health Systems Engineering at Arizona State University. “So those guys I think are real keen to deal with an interface directly to a computer.”

A progressing technology such as this opens a Pandora’s box of ethical questions. Where do we as a society want to draw the line with regards to enhancing the human brain and body with neuroelectronic devices? When weighing the risks and benefits of the growing technology, do the benefits of helping the disabled outweigh the possible negative effects on society and the individual?

One concern with the growing research is who the technology would benefit first. This comes down to the “disparity of distribution of wealth,” Tillery explained. “So if you have a development that gives people an advantage, often it’s the people who already have advantages who can afford it and get access to the technology.”

It is easy to take this topic and spiral into theories of mind control, questions of personhood, and reruns of The Six Million Dollar Man. The military has put a lot of money into this research over the past two decades, some of it inspired by the brain injuries of Canadian soldiers. Yet the military’s concerns are not exclusively recuperative. As documented in Jonathan Moreno’s book Mind Wars, the American government aims to enhance the military with neural prosthetics and interfaces, creating “network-centric warfare.” The thought helmets the U.S. Army’s $4-million research hopes to produce would serve these goals too. This neurologically controlled future would involve interfaces that gather information from a soldier’s surroundings and transmit it to a central command post. There is also the idea of battle suits containing sensors able to detect injuries and administer drugs.

Dystopian as these prospects are, the “computerization of humans” fear goes both ways: the fear of altering the human body with technology, and the fear of human consciousness being stored in a computer. As for the fear of computerization, the technology right now is still “infantile,” said Musallam. He added that while researchers like himself have little control over the final uses of their work, its medical potential makes it worthwhile.

“Most, if not all of the military funding, from DARPA [Defense Advanced Research Projects Agency], that has gone to prosthetics, has gone with the goal of developing prosthetics for paralyzed patients. Now, first of all, knowledge is independent of good or bad application. Knowledge is knowledge. … So the military will use what it wants to use, as it has in the past. Whether I produce research that is funded by government agencies, or military agencies, or by private donation, those results are public knowledge, they can be used by the military,” explained Musallam. The money may not be spent primarily in the interest of public health, but the knowledge it produces will eventually be publicly accessible.

“The internet started as a DARPA project. … I don’t think the fears should be with the funding, but I think the fears should be redirected toward open discussion and ethical constraint on the application of the technology,” Musallam continued.

With a heavy balance of both risks and benefits, the general consensus of researchers in this field, including Musallam and Tillery, is that there is no strong reason to oppose the research at this time, but that BCI is a technology that should be kept in responsible hands. Since there is no surefire way to guarantee this, the danger is no different from that of any other technology.

The idea is not, as Tillery put it, to “try to prevent these technologies or try and keep them down, because I don’t think you can, but the trick is to be aware and think ahead about them, to see the potentialities coming and be prepared to deal with them somehow.” Musallam has a similar opinion: “Progress is going to happen, knowledge is going to come out, people are going to make use of that knowledge and I think the most important thing is to make sure from very early the opinion of people is heard. We don’t want to have this technology mature and then all of a sudden have a group come out and say, ‘No wait this is a problem.’ I think opinion should accompany the development of this technology because it could impact us with what it’s used for.”

