Mind-reading and Google Glass?


What do you think of when you hear of mind-reading and telekinesis? For me, these words stir up childhood fantasies of super powers like those of the X-Men's Professor X. Recently, these fantasies came crashing into reality with the emergence of MindRDR, or so the Daily Mail rather over-dramatically claimed. Typically over the top, the Daily Mail is getting slightly ahead of itself, but the potential of MindRDR is extremely exciting.

MindRDR combines Google Glass with a Neurosky EEG biosensor. The biosensor analyses brain activity and converts it into an action for Google Glass to take. So far, it lets the wearer take photos and upload them to social media, without using voice or touch. Instead, the sensor reads the wearer's brainwaves to determine whether they want to use it: the wearer either concentrates or relaxes, and MindRDR provides visual feedback showing how close they are to taking a photo, and then to sharing it on social media.
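
To make that concentrate-or-relax control a little more concrete, here is a rough Python sketch of how an app might turn a Neurosky-style "attention" reading (the headsets report attention and meditation levels on a 0-100 scale) into a steadier control signal. The read_attention() stand-in and the smoothing window are purely illustrative assumptions, not MindRDR's actual code.

```python
import random
import time
from collections import deque

def read_attention():
    # Hypothetical stand-in for a NeuroSky-style "attention" value (0-100).
    # A real app would read this from the headset; here we simply fake it.
    return random.randint(0, 100)

class ConcentrationSignal:
    """Smooths raw attention readings into a steadier control signal."""

    def __init__(self, window=5):
        self.samples = deque(maxlen=window)

    def update(self):
        self.samples.append(read_attention())
        return sum(self.samples) / len(self.samples)

if __name__ == "__main__":
    signal = ConcentrationSignal()
    for _ in range(10):
        print(f"concentration: {signal.update():.0f}/100")
        time.sleep(0.5)
```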

The visual feedback is channelled through Glass and takes the form of a horizontal line in the centre of its screen. The wearer can make this line rise to the top of the screen by concentrating, or let it fall to the bottom by relaxing. If the line reaches the top of the screen, a photo is taken. Once this happens, a new screen appears with the line back in the centre: if the line falls to the bottom, the photo is discarded; if it reaches the top again, the photo is uploaded to social media.
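
Continuing the hypothetical sketch above, that two-stage flow might look something like the following: the line rises while the concentration signal is high and falls while it is low, the top of the screen first captures and then shares, and the bottom discards. The thresholds and step sizes here are guesses for illustration only, not the real app's values.

```python
def run_mindrdr_loop(read_level, steps=200):
    """Toy version of the MindRDR interaction described above."""
    line = 50               # line position: 0 = bottom of screen, 100 = top
    stage = "capture"       # reaching the top first takes the photo, then shares it

    for _ in range(steps):
        level = read_level()             # e.g. a smoothed attention value, 0-100
        line += 2 if level > 60 else -2  # concentrate to rise, relax to fall
        line = max(0, min(100, line))

        if line >= 100:
            if stage == "capture":
                print("Photo taken")
                stage = "share"
            else:
                print("Photo shared to social media")
                return
            line = 50                    # new screen, line back in the centre
        elif line <= 0 and stage == "share":
            print("Photo discarded")
            return
    print("Session ended without completing")

# For example, fed with the smoothed signal from the earlier sketch:
# run_mindrdr_loop(ConcentrationSignal().update)
```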

What needs to be clarified about this piece of technology is that it cannot read your mind in the way I always imagine when I hear those words. It works by picking up activity levels in particular areas of the brain, usually near the surface. These signals can sometimes be confused with other electrical activity in the brain or in the wearer's muscles, but the Neurosky tech does its best to filter these out. Yet it is still limited: it can't tell the difference between your desire to take a photo and, for instance, the embarrassing memory you've been trying to suppress for the last three years. This is why its only function is to take photos.

That is not to say this will always be the case. To help speed up development, the app has been released as open source on GitHub. The potential this has for improving the lives of people with locked-in syndrome, severe multiple sclerosis and quadriplegia is huge, but still a long way off.

Sources:

Slate

Daily Mail

MindRDR

Jack is a Falmouth University graduate and a self-employed writer with a love of technology.