JM Wesierski

Reading the Mind (Preface)

Updated: May 24, 2022

JM Wesierski is pictured at left interacting with a computer screen that shows various colorful brain scans. To the right is Feng Xie wearing the Mark IV hat from which data is being pulled.
JM Wesierski (left), Feng Xie (right)

Recently, my team and I purchased ourselves a hat. Not just any ordinary hat, though. It won't shade you from the sun or complete an outfit (though maybe one day). Instead, we acquired it because it does something much more important: it can read your mind. The problem is we don't yet understand the language.


It's no wizard's sorting hat either, though it could be used to tell us whether we are cunning or courageous. It is an electroencephalography (EEG) headset. Scientists gather brain activity in a few ways, such as through magnetism, blood flow, or dissection. However, the information network of the brain is very similar to a computer's: not through wires, but by running on electricity. Those electrical signals are exactly what an EEG reads.


Why am I interested - JM

I previously worked as a digital mental health researcher at UC Irvine and have since gone on to develop software full time. Having independently gathered experience in both the psychological and technological fields, I predictably became fascinated with a great intersection of the two: neurotechnology.


However, both positions and their accompanying projects served the interests of others, and while I have had smaller personal projects, I yearn to build something of great use. This headwear, or 'Brain-Computer Interface' (BCI), was created by OpenBCI to communicate with an application for recording brain activity. You can see why it appeals to me.

A caricature of Professor Charles Xavier from the X-Men franchise, wearing his mind-amplifying helmet Cerebro, with pictures of his students, implying that he is thinking of them
Professor X using the BCI 'Cerebro'

After studying language at Paris Diderot University in France, I developed a greater appreciation for linguistics. Through my experience, I became interested in translating the brain activity of thoughts into words using a software algorithm (as opposed to a mouth). Yes, mind reading. However, that is not so much the goal as the direction, since translation can always be improved.


I will be using this journal to chronicle the approach and findings of what we are calling Project Crypt. We started with the:


Question

How can patterns in brain activity corresponding to thoughts be translated into meaningful sentences?


Hypothesis

With a machine learning algorithm and a Brain-Computer Interface, we can use the brain activity associated with spoken words to translate thoughts into coherent sentences in real time.


(Neuro) Methods

To begin Project Crypt, we will use the Ultracortex Mark IV (the BCI worn by Feng above) to record the brain activity of a few individuals while they speak (see Materials below). We are using this activity to try to find neural correlates of language. More specifically, we are looking for an area, thought, or way of thinking that can provide repeatable patterns of data corresponding to spoken words.


The individual being studied will lie down and be given noise-canceling headphones and an eye mask to reduce mental stimulation and distraction. To start, we will scan a broad area of their potentially active perisylvian cortices for comprehensive data before narrowing in on specific nodes of interest. Diagram I depicts the different sections of the cerebrum, and Diagram II displays the node locations of an EEG. The eloquent cortices highlighted have shown promise during thought tracking, language decoding, and real-time sentence translation.
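One standard way to expose the repeatable patterns we are after is epoch averaging: cut the recording into fixed windows time-locked to each spoken word and average them, so that consistent responses reinforce while unrelated activity cancels out. A minimal sketch in Python, where the window lengths and the `average_epochs` helper are our own illustrative choices rather than anything from OpenBCI's tooling:

```python
import numpy as np

FS = 125                      # Cyton+Daisy sampling rate in Hz
PRE, POST = 25, 100           # samples before/after each speech onset (0.2 s / 0.8 s)

def average_epochs(raw, onsets):
    """Average fixed windows around each speech onset.

    raw    : array of shape (n_channels, n_samples), one row per EEG node
    onsets : sample indices where a word was spoken
    Returns the mean epoch (n_channels, PRE + POST); a repeatable
    response should survive averaging while unrelated activity cancels out.
    """
    epochs = [raw[:, t - PRE:t + POST]
              for t in onsets
              if t - PRE >= 0 and t + POST <= raw.shape[1]]
    return np.mean(epochs, axis=0)

# Example with synthetic data: 16 channels, 60 s of recording
rng = np.random.default_rng(0)
raw = rng.standard_normal((16, 60 * FS))
onsets = np.arange(5 * FS, 55 * FS, 2 * FS)   # a word every 2 seconds
mean_epoch = average_epochs(raw, onsets)
print(mean_epoch.shape)   # (16, 125)
```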


Diagram labeled Anatomy and Functional Areas of the Brain, with the brain broken into 11 multicolored areas
Diagram I

The areas highlighted blue are of primary concern:

6. Motor Function Area - A cortex highly involved in controlling movement, such as facial expressions and the verbalization of words.

8. Broca's Area - A language processing section critical in the production of coherent speech.

The areas underlined in green are secondary but of interest:

3. Wernicke's Area - Takes in words being heard and processes the visuals, feelings, and experiences those words evoke.

4. Sensory Area - Of less importance in speech generation, but significant in controlling the lips and throat to produce speech sounds.

7. Association Area - Where the aforementioned areas associate words with memory and vice versa.

Titled Ultracortex Mark IV Node Locations (35 total), with 15 highlighted blue
Diagram II

Diagram II shows the 35 different locations (or nodes) available for analysis by the Mark IV. However, our computer board can only collect data from 16 channels at a time. The 15 highlighted nodes correspond to the areas of interest from Diagram I (with one channel left for as-needed use). Each highlighted node will have its own individual stream of brain activity.


(Tech) Methods

Each of our 16 active nodes will record a live stream of electrophysiological signals, similar to a heart monitor at a doctor's office. The spikes in this activity can be represented as numerical patterns.
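As a concrete starting point, data from OpenBCI boards can be streamed into Python with the BrainFlow library. A minimal sketch (the serial port is an assumption; it varies by machine and operating system):

```python
import time
from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams

# Connect to the Cyton+Daisy board over its USB dongle
# (the serial port below is an assumption; check your own setup).
params = BrainFlowInputParams()
params.serial_port = '/dev/ttyUSB0'
board_id = BoardIds.CYTON_DAISY_BOARD.value
board = BoardShim(board_id, params)

board.prepare_session()
board.start_stream()
time.sleep(5)                      # record five seconds
data = board.get_board_data()      # rows = channels, columns = samples
board.stop_stream()
board.release_session()

eeg_rows = BoardShim.get_eeg_channels(board_id)   # the 16 EEG channels
fs = BoardShim.get_sampling_rate(board_id)        # 125 Hz
print(f"{len(eeg_rows)} EEG channels at {fs} Hz, "
      f"{data.shape[1]} samples each")
```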


Like Google Translate, Project Crypt's algorithm will be sequence-to-sequence with an encoder-decoder architecture. Facebook, together with UC San Francisco, attempted such methods for the aforementioned "sentence translation" by comparing spoken scripts with neural signals obtained via electrocorticography (electrodes on, but not in, the brain).
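To make that architecture concrete, below is a toy PyTorch sketch of the general encoder-decoder pattern: a recurrent encoder compresses a sequence of per-timestep EEG feature vectors into a single state, and a decoder unrolls word predictions from it. The dimensions, layers, and names are illustrative placeholders, not the actual Crypt Algorithm:

```python
import torch
import torch.nn as nn

# Toy dimensions: 16 EEG feature channels in, a small word vocabulary out.
N_CHANNELS, HIDDEN, VOCAB = 16, 128, 200

class EEGEncoder(nn.Module):
    """Reads a sequence of per-timestep EEG feature vectors into one state."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, HIDDEN, batch_first=True)
    def forward(self, x):                 # x: (batch, time, N_CHANNELS)
        _, state = self.rnn(x)
        return state                      # (1, batch, HIDDEN)

class WordDecoder(nn.Module):
    """Unrolls word predictions from the encoder's summary state."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)
    def forward(self, tokens, state):     # tokens: (batch, words)
        h, _ = self.rnn(self.embed(tokens), state)
        return self.out(h)                # (batch, words, VOCAB) logits

# One forward pass on fake data: 2 recordings, 250 timesteps, 5-word targets.
enc, dec = EEGEncoder(), WordDecoder()
eeg = torch.randn(2, 250, N_CHANNELS)
words = torch.randint(0, VOCAB, (2, 5))
logits = dec(words, enc(eeg))
print(logits.shape)                       # torch.Size([2, 5, 200])
```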


To train the Crypt Algorithm, we will cross-analyze:

  • Manner of thinking and neural activity (See Lang below): When speaking, the different ways we think about the same words or phrases can evoke different degrees and connections of neural activity.

  • Neural activity and words spoken: Prior to translating thought, numerical brain activity must be compared to the sentences spoken so as to identify which number sets correspond to which words. This will be expressed as a percentage of likelihood.

  • Words thought and context or other words: We will chronologically take the words with the highest probability of being in a thought and, with potential context variables, output the most likely sentence.

Though we will first focus on finding numerical correlates for individual words, we must also pull key words from a complete thought and, finally, recreate an anticipated sentence. We believe it more feasible to start by predicting a sentence from pieces of a thought than to focus on finding direct correlates of each part of speech.
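As a first cut at those percentages of likelihood, any off-the-shelf classifier can map a feature vector of brain activity (for instance, a flattened epoch average from the 16 nodes) to a probability for each candidate word. A scikit-learn sketch on fabricated data, where the feature size and word list are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: one feature vector per spoken word
# (e.g., a flattened epoch average) and the word that was actually said.
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 64))            # 300 spoken words, 64 features
words = np.array(["red", "blue", "green"])
y = rng.integers(0, len(words), 300)          # index of the word spoken

clf = LogisticRegression(max_iter=1000).fit(X, y)

# For a new slice of brain activity, get a percentage of likelihood per word.
probs = clf.predict_proba(rng.standard_normal((1, 64)))[0]
for word, p in zip(words, probs):
    print(f"{word}: {p:.0%}")
```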


Context variables will also contribute to prediction, such as colors in a question related to a rainbow, so as to constrain the array of possible answers. However, algorithm refinement must happen in step with research into which thoughts can even be translated. After all, we may not just be looking for a way to read the mind but also for a new way to think so that our minds can be read (like a programming language).
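The sketch below illustrates one way such a context variable could work: mask out candidate words that do not fit the context, then renormalize what remains. Every word list and probability in it is a toy assumption:

```python
import numpy as np

def constrain(words, probs, context_vocab):
    """Zero out words outside the context, then renormalize.

    If the question is about a rainbow, only color words stay in play,
    shrinking the space of possible answers the algorithm must rank.
    """
    mask = np.array([w in context_vocab for w in words], dtype=float)
    constrained = probs * mask
    return constrained / constrained.sum()

words = ["red", "dread", "blue", "blew"]
probs = np.array([0.30, 0.35, 0.20, 0.15])   # raw per-word likelihoods
rainbow = {"red", "blue", "green", "yellow"}
print(dict(zip(words, constrain(words, probs, rainbow))))
# homophones and near-misses outside the context drop to zero
```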

The Babel fish comes from the book series The Hitchhiker's Guide to the Galaxy. You put it in your ear and it can translate any language in the galaxy; proof that God does not exist
Anatomy of the language translating 'Babel Fish' from the Hitchhiker's Guide to the Galaxy

(Lang) Methods

The majority of the referenced research has approached the problem of telepathy primarily from a neural perspective: How do we translate brain activity into text? Project Crypt will focus as much, if not more, on the linguistic perspective: How can we think in a way that is best translatable?


To pinpoint which thoughts evoke the most obvious and replicable neural activity, Project Crypt will experiment heavily with the different ways we think of words. We have begun creating a verbal questionnaire to engage different cortical zones, such as those related to senses, emotion, and memory. For example, when speaking, neural activity in an area related to emotion can help attune translation through the aforementioned cross-analysis. However, we will begin by translating questions that have finite answers, like colors, and then slowly expand vocabulary much as one would when learning a language.

Using this questionnaire, we will develop a type of cognispeak, derived from the manner or subject of speech, that provides the best translatability, such as by emphasizing phonemes. Changing the way we speak for purposes of communication may seem cumbersome, but the nuances would only be similar to how one talks to their dog or, even more conveniently, to the acronyms and emojis of texting.


Materials (Hardware)

Cyton+Daisy 16-channel (seen on the back of our hat above)

A computer board wired to pull up to 16 channels (nodes) of activity from the brain at 125 Hz each. This information can be viewed on a computer through the USB dongle and OpenBCI software. The board attaches to the back of the

Ultracortex Mark IV

A 3D-printed headset that can hold up to 35 dry nodes (though our 16-channel board can read only 16 of them at a time).

It can be used in combination with:

Gel Electrodes, EMG/ECG Electrode Cables, Dry EEG combs


Materials (Software)

OpenBCI GUI (seen on screen above)

A software tool for visualizing, recording, and streaming data from the OpenBCI boards. Pulled data will be assessed in an online laboratory that uses scientific data to develop and run different algorithmic applications, which will be hosted on GitHub, an online code-sharing forum where we will make our code available for public reference.

Other potential software:


Anticipated Problems

Diagram of Sigmund Freud's popular id, ego, and superego model, depicted as an iceberg in the ocean floating between the conscious and unconscious.
Diagram III

1. Layers of Thought

Since Sigmund Freud, we have recognized that innumerable variables and psychological processes go into producing seemingly shallow thoughts (Diagram III). The term 'Freudian slip' was coined for an error in speech caused by the interference of unconscious thought. As for memory, just to tell a simple joke one must hold both the setup and the punchline in mind simultaneously.


For further consideration: what is the difference in brain activity between physically touching a hot surface, merely thinking something is hot, and recalling something we remember to be hot? Given the amount of neural data we are pulling, how will the Crypt Algorithm differentiate and suppress unconscious or unintended thoughts to prevent a sort of 'Freudian blip'?


2. Intricate Brain Network

We often describe the brain as being like a computer. It has different areas for various functions like output, input, and memory. In a computer, however, these parts can be individually accessed and understood; the brain is more like a computer in which all the wires and boards have been fused together.


Furthermore, these sections are not 'connected' per se, but instead work as one singular network. This makes it almost impossible to focus on any specific area of activity without neglecting another. Moreover, a given set of neurons often does not correlate with the same thought over time. So how will Project Crypt feasibly study our primary areas of language while also accounting for and connecting subtle contributions from outliers?


3. Technology and Science

The previously mentioned prevailing attempts at telepathy have required a contained language, after-the-fact translation, or penetration of the skull. Other researchers have found success translating the neuromuscular activity of the jaw, though we believe this practice limits the technology and its potential users (such as those with a vocal or facial disorder).


The clearest signals can be obtained with invasive (in the brain) scans such as Neuralink's, or semi-invasive ones like Facebook's mentioned earlier. Being non-invasive, EEG brings with it significant interference in signal quality from the skull and hair. However, we chose our method because it is the safest, most adjustable, and most feasible for both independent research and public use.
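Some of that interference can be reduced in software before any translation is attempted. A typical first pass, sketched here with SciPy (the filter orders and cutoffs are common defaults, not values tuned for our rig), notches out mains hum and band-passes the usual EEG range:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 125  # Cyton+Daisy sampling rate in Hz

def clean_channel(x, fs=FS):
    """Suppress common EEG contaminants on one channel.

    A notch filter removes mains hum (60 Hz in the US; use 50 Hz in the EU)
    and a 1-50 Hz band-pass drops slow drift and high-frequency noise,
    keeping the bands where most EEG activity lives.
    """
    b, a = iirnotch(60, Q=30, fs=fs)
    x = filtfilt(b, a, x)
    b, a = butter(4, [1, 50], btype='bandpass', fs=fs)
    return filtfilt(b, a, x)

# Example on one synthetic channel: a 10 Hz signal plus 60 Hz interference
t = np.arange(0, 10, 1 / FS)
raw = np.sin(2 * np.pi * 10 * t) + 2 * np.sin(2 * np.pi * 60 * t)
print(np.std(raw), np.std(clean_channel(raw)))  # interference mostly gone
```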

Two cartoon individuals wearing popping pink devices traced along their jaws. Though their mouths are closed, the text bubbles above them imply that they are talking.
The Alter Ego JCI (Jaw-Computer Interface)

Conclusion

Mind reading is a task that cannot be done alone. We are starting Project Crypt to learn what it takes to read the mind, not because we already know how. In fact, many of you have probably already spotted flaws in our methods. Therefore, any constructive comments left below will be duly considered and could help us all get one step closer to mind reading.


After a thorough understanding of the technology we are using, Reading the Mind (Chapter 1) will describe the functional areas, language techniques, and topics that evoked the strongest and most reproducible neural responses. We will also familiarize ourselves with the code used by previous researchers in similar endeavors to understand its benefits and shortcomings.


Finally, we will describe the beginnings of our own application and the Crypt Algorithm (with a link to our public GitHub repo). We do these things so that anyone can replicate or contribute to our work, while selectively leaving out intellectual property, which can be shared upon request.

If you're interested in our progress (or anticipate our failure), hit the subscribe button to stay updated on our next chapters. Thank you for reading, and be sure to check out some of our other work!

 

