I literally connected my brain to GPT-4 with JavaScript

Fireship · 5 min read

Based on Fireship's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

The Crown EEG device measures brain electrical impulses with electrodes and streams them to a JavaScript SDK for programmatic access.

Briefing

A wearable EEG device called the Crown can turn brain activity into machine-readable signals—and a JavaScript workflow can route those signals into GPT-4 for real-time, thought-triggered outputs. The core move is simple: measure brain waves with tiny electrodes, stream the data through a JavaScript SDK, detect specific mental states or trained thought patterns, and then use those detections to prompt GPT-4 via the OpenAI API.

The Crown sits on the back of the head and uses multiple electrodes to capture electrical impulses from the brain, which show up as brain waves. Those waves shift with cognitive state: delta waves appear during sleep at roughly 2 Hz, alpha waves rise to around 10 Hz when relaxed, and gamma waves climb to about 35 Hz during high focus. Brain activity is also dynamic—patterns change quickly based on mental processes and external stimuli—so the system needs more than raw measurement.

Neurosity, the company behind the Crown, provides a dashboard that can train algorithms to recognize a person’s custom thought patterns. In the walkthrough, the creator trains the system by repeatedly imagining biting into a lemon and then relaxing when prompted. After enough repetitions (described as around 30), the dashboard begins to detect that specific mental pattern; when the thought is active, a chart “goes wild,” and when it isn’t, the chart steadies. The same approach can be used for other gestures or mental cues, such as a right-hand pinch or tongue-based patterns.

On the coding side, the workflow starts with a Node.js project and installs the Neurosity SDK. The program initializes the device using a device ID from the mobile app, logs in with email and password, then subscribes to a stream of raw brainwave data. The stream arrives at a sampling rate of 256 Hz—256 samples per second—batched into groups of 16 samples roughly every 62.5 milliseconds, and split across eight channels. While the raw feed is available in JSON, the more practical path is subscribing to higher-level “states” (like calm or focus) or to trained events.
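
A minimal sketch of that setup, using the Neurosity SDK's `Neurosity` class and its RxJS-style `subscribe` (the device ID and credentials here are placeholders read from environment variables):

```js
// Minimal sketch: connect to the Crown and stream raw brainwave data.
// DEVICE_ID, EMAIL, and PASSWORD are placeholder environment variables.
const { Neurosity } = require("@neurosity/sdk");

const neurosity = new Neurosity({ deviceId: process.env.DEVICE_ID });

async function main() {
  await neurosity.login({
    email: process.env.EMAIL,
    password: process.env.PASSWORD,
  });

  // Each emitted epoch carries 16 samples per channel across 8 channels,
  // arriving roughly every 62.5 ms at the 256 Hz sampling rate.
  neurosity.brainwaves("raw").subscribe((epoch) => {
    console.log(JSON.stringify(epoch));
  });
}

main();
```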

For thought-triggered control, the key feature is event recognition via Neurosity’s Kinesis interface: after training, the code can listen for a named event such as “left hand pinch.” When that event fires, the system can run side-effect code—most notably, sending a prompt to GPT-4. The OpenAI SDK then authenticates and calls a chat completion endpoint using the gpt-4 model, returning text similar to what users see in ChatGPT.
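
A sketch of that trigger, pairing the SDK's `kinesis` stream with the current OpenAI Node SDK (the `leftHandPinch` label and the prompt text are illustrative, and the video may use an older OpenAI client):

```js
// Sketch: a trained thought pattern as a GPT-4 trigger. Assumes an event
// trained under the label "leftHandPinch"; the prompt text is illustrative.
const { Neurosity } = require("@neurosity/sdk");
const OpenAI = require("openai");

const neurosity = new Neurosity({ deviceId: process.env.DEVICE_ID });
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  await neurosity.login({
    email: process.env.EMAIL,
    password: process.env.PASSWORD,
  });

  // Fires whenever the trained mental pattern is detected.
  neurosity.kinesis("leftHandPinch").subscribe(async () => {
    const completion = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "user", content: "Give me an excuse for being late." }],
    });
    console.log(completion.choices[0].message.content);
  });
}

main();
```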

From there, the output can be converted to speech and transmitted to a Bluetooth earpiece, enabling hands-free responses. The transcript pushes the idea further with speculative use cases: thinking a trained cue to request an excuse for being late, using a cue to get help on a difficult exam question, or triggering image capture through camera-enabled glasses for GPT-4 to interpret and answer. The throughline is that brain-signal recognition plus a straightforward JavaScript-to-OpenAI pipeline can make “intent” act like an input device—turning cognition into a programmable trigger.
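
One way to sketch that last step is with OpenAI's text-to-speech endpoint; the transcript only mentions "a text-to-voice model" without naming one, so the `tts-1` model and `alloy` voice below are assumptions, not the video's choice:

```js
// Sketch: convert a GPT-4 reply to audio with OpenAI's text-to-speech
// endpoint. The transcript only says "a text-to-voice model", so tts-1
// and the "alloy" voice are assumptions here, not the video's choice.
const fs = require("fs");
const OpenAI = require("openai");

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function speak(text) {
  const response = await openai.audio.speech.create({
    model: "tts-1",
    voice: "alloy",
    input: text,
  });

  // Save as MP3; system playback can then route to a Bluetooth earpiece
  // like any other audio output.
  const buffer = Buffer.from(await response.arrayBuffer());
  fs.writeFileSync("reply.mp3", buffer);
}
```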

Cornell Notes

A wearable EEG device (Neurosity’s Crown) measures brain waves and streams them to a JavaScript SDK, where trained mental patterns can be detected as events. The transcript describes how brain activity shifts across delta, alpha, and gamma ranges, then focuses on training custom cues—like imagining biting a lemon—so the system can recognize when that thought occurs. In code, a Node.js app initializes the device, logs in, subscribes to raw brainwave data (256 Hz, eight channels), and then uses higher-level state/event streams instead of parsing everything manually. When a recognized event fires, the app calls the OpenAI API (gpt-4) to generate text, which can then be converted to voice and played through a Bluetooth earpiece. The practical takeaway is that brain signals can be treated like an input to AI workflows via JavaScript.

How does the Crown turn brain activity into data a computer can use?

The Crown is a wearable electroencephalogram (EEG) with tiny electrodes that sit on the back of the head. It measures electrical impulses from the brain and streams the results to a mobile app over Bluetooth or Wi‑Fi. Through the Neurosity JavaScript SDK, the data can be accessed in JSON form, including a raw brainwaves stream and higher-level “states” or trained events.
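
For example, a higher-level state subscription might look like this sketch, which assumes the connected `neurosity` instance from the earlier setup and the SDK's `calm` metric:

```js
// Sketch: a higher-level "calm" state subscription, assuming the
// connected `neurosity` instance from the earlier setup. The SDK emits
// a 0-1 probability score rather than raw voltages.
neurosity.calm().subscribe((calm) => {
  if (calm.probability > 0.3) {
    console.log("User is calm");
  }
});
```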

What brain-wave frequencies correspond to different mental states mentioned in the transcript?

The transcript links mental states to frequency bands: delta waves around 2 Hz during sleep, alpha waves around 10 Hz when awake and relaxed, and gamma waves around 35 Hz during high focus (e.g., solving a coding problem or playing chess). It also notes that brainwave patterns change rapidly with cognitive state and stimuli.

Why does training matter, and how is it done in the walkthrough?

Raw brainwave data is noisy and changes with context, so the system uses a dashboard to train algorithms to recognize personal thought patterns. The walkthrough trains the model by repeatedly imagining biting into a lemon and then relaxing when prompted. After roughly 30 repetitions, the system detects that pattern—shown by a chart that reacts when the thought is present and stays steady when it isn’t. The same method is used for other cues like a right-hand pinch or tongue-based patterns.

What are the key characteristics of the raw brainwave stream in the code example?

The raw stream is sampled at 256 Hz, meaning 256 samples per second. Samples are batched into groups of 16, emitted about every 62.5 milliseconds. The data is also broken into eight channels, producing large objects full of numeric values.
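
A short sketch makes those numbers concrete, assuming the connected `neurosity` instance from earlier and the epoch layout described in the Neurosity docs (an `info` object plus a channels-by-samples `data` array):

```js
// Sketch: checking those numbers against one raw epoch, assuming the
// documented layout (an `info` object plus a channels-by-samples array).
// 16 samples per batch at 256 Hz means 16 / 256 = 0.0625 s = 62.5 ms.
neurosity.brainwaves("raw").subscribe((epoch) => {
  console.log(epoch.info.samplingRate); // 256
  console.log(epoch.data.length);       // 8 channels
  console.log(epoch.data[0].length);    // 16 samples per channel
});
```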

How does a recognized brain event become a GPT-4 prompt?

After training, the app listens for specific events using Neurosity’s Kinesis interface (e.g., “left hand pinch”). When the event is detected, the program calls the OpenAI SDK, authenticates, and creates a chat completion request with the model set to gpt-4. It sends an array of messages and receives generated text, which can then be used for downstream actions like text-to-voice playback.

What downstream outputs are suggested after GPT-4 generates text?

The transcript suggests converting GPT-4’s text into audio using a text-to-voice model, saving it as an audio file, and sending it to a Bluetooth earpiece. It also sketches speculative applications: thinking a trained cue to trigger an excuse request, using a cue to get help on an exam question, or triggering camera-enabled glasses to capture an image and ask GPT-4 for an answer.

Review Questions

  1. What problem does event/state recognition solve compared with processing raw EEG data directly?
  2. Describe the data-rate and structure of the raw brainwave stream (sampling rate, batching, and channels).
  3. How does the system connect a trained mental cue to an OpenAI chat completion request?

Key Points

  1. The Crown EEG device measures brain electrical impulses with electrodes and streams them to a JavaScript SDK for programmatic access.

  2. Brain-wave frequency bands shift with mental state, with gamma waves cited around 35 Hz during high focus.

  3. Neurosity’s dashboard training lets the system recognize personal thought patterns by repeating a cue and relaxing on prompt.

  4. Raw brainwave streaming arrives at 256 Hz, batched into 16-sample chunks about every 62.5 ms, and split across eight channels.

  5. Using higher-level “states” and trained events is more practical than parsing raw EEG for real-time control.

  6. A Node.js app can detect a named brain event and then call the OpenAI API (gpt-4) to generate text via chat completions.

  7. Generated text can be converted to speech and delivered through Bluetooth audio, enabling hands-free AI responses.

Highlights

Training turns messy EEG into recognizable personal events, such as imagining biting a lemon and then relaxing until the chart reliably responds.
The raw stream is high-frequency (256 Hz) and multi-channel (eight channels), which makes direct analysis cumbersome without higher-level abstractions.
A simple pipeline—Neurosity event → OpenAI chat completion (gpt-4) → text-to-voice—turns intent into an AI-triggered output.
