Transcribe voice notes from Apple Watch (using AI)
Based on Reflect Notes's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Apple Watch recordings can be transcribed hands-free and routed into Reflect by using Whisper Memos plus Zapier.
Briefing
Voice notes recorded on an Apple Watch can be automatically transcribed and routed into Reflect daily notes through a chain that runs Apple Watch → Whisper Memos → Zapier → Reflect. The practical payoff is simple: record hands-free from the watch, wait roughly half a minute, and the transcription lands in the same place as other Reflect voice entries—organized under Reflect’s “audio memos” parent note via backlinked formatting.
The workflow starts on the Apple Watch with a shortcut button that begins recording. When recording stops, the audio note uploads to the Whisper Memos app, which transcribes it, and Zapier picks up the transcription in the background. In the creator’s setup, the transcription appears in the daily note after about 30 seconds, showing up as plaintext in Reflect once the Zapier test has succeeded.
Setting up the integration requires two key pieces: a Whisper Memos app configuration on iPhone and a Zapier “catch all” integration. On the iPhone, the app used is called Whisper Memos, created by Voytek (a Reflect engineer). Inside the app, the user enables a Zapier “catch all” integration, creates an integration ID, and saves it. The setup also involves running a Zapier “send test” action from the app side so Zapier can confirm it receives the memo payload.
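Conceptually, the “catch all” integration is a webhook: Whisper Memos posts a JSON payload for each new memo, and Zapier extracts the fields it needs before handing them to the Reflect step. As a minimal sketch of that hand-off, here is a hypothetical parser for such a payload. The field names (`integration_id`, `transcript`) are assumptions for illustration only; the real Whisper Memos payload schema is not documented in the source.

```python
import json

def parse_memo_payload(raw: bytes) -> dict:
    """Parse a hypothetical catch-all memo payload into the fields a
    downstream step (such as the Reflect action) would consume.
    NOTE: "integration_id" and "transcript" are assumed field names,
    not the documented Whisper Memos schema."""
    data = json.loads(raw)
    return {
        "integration": data.get("integration_id", ""),
        "transcript": data.get("transcript", "").strip(),
    }

# Simulate a payload arriving from the catch-all webhook.
sample = b'{"integration_id": "voice_memos", "transcript": " Remember to test the Zap. "}'
memo = parse_memo_payload(sample)
print(memo["integration"])  # voice_memos
print(memo["transcript"])   # Remember to test the Zap.
```

The “send test” action in the app plays the same role: it fires one such payload so Zapier can confirm it receives and parses the memo before the Zap goes live.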
In Zapier, the process begins by creating a Zap with a trigger from Whisper Memos. The trigger event is “Memo Created,” and the integration name is tied to the catch-all identifier shown in Whisper Memos (the transcript gives an example like `voice_memos`). Because Whisper Memos is invite-only, the Zapier instructions inside the app must be followed so Zapier can recognize the app and event; otherwise, searching Zapier won’t reveal it. Zapier then requires an API key copied from the app to connect the accounts.
The second step adds Reflect as the action. The recommended action is “append to a daily note,” matching how Reflect’s own voice transcriber behaves. The Reflect action uses a graph ID and a specific formatting approach: the memo content is inserted into an open text box after a shift-return, with the “audio memos” backlinked parent bullet (square-bracket backlink) used so new entries attach to the correct parent note. The transcript also flags a common Zapier mistake—preselected fields can break the mapping, so the graph name must be typed manually.
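The formatting step above can be sketched as a small function: build the square-bracket backlink to the “audio memos” parent, then nest the transcript beneath it so Reflect attaches the entry to the right parent note. The exact bullet and indentation characters are assumptions; the source only specifies a backlinked `[[audio memos]]` parent bullet with the memo content under it.

```python
def format_daily_note_entry(transcript: str, parent: str = "audio memos") -> str:
    """Build the text appended to the Reflect daily note: a backlinked
    parent bullet with the memo transcript nested beneath it.
    The indentation and bullet style are illustrative assumptions."""
    lines = [f"- [[{parent}]]"]  # square-bracket backlink to the parent note
    for paragraph in transcript.splitlines():
        if paragraph.strip():
            lines.append(f"    - {paragraph.strip()}")
    return "\n".join(lines)

entry = format_daily_note_entry("Pick up groceries.\nCall the dentist.")
print(entry)
# - [[audio memos]]
#     - Pick up groceries.
#     - Call the dentist.
```

Nesting under a backlinked parent is what lets every transcription, whatever day it lands on, roll up under the single “audio memos” note.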
After publishing the Zap, a full end-to-end test is run: a watch recording uploads, Whisper Memos transcribes, Zapier forwards the text, and Reflect displays the result. The notes also mention that Reflect can group multiple audio memos under a single parent audio memo, and that Apple Watch entries can be merged with existing audio memo notes so past and new transcriptions accumulate together. The end result is a low-friction way to capture transcribed ideas while driving, exercising, or in other hands-busy moments, without needing the phone.
Cornell Notes
Apple Watch voice notes can be transcribed and delivered into Reflect daily notes automatically by chaining Apple Watch → Whisper Memos → Zapier → Reflect. The Apple Watch records via a shortcut button; after stopping, the audio uploads to Whisper Memos, and Zapier forwards the transcription to Reflect. Setup hinges on enabling Whisper Memos’ Zapier “catch all” integration on iPhone, then creating a Zap in Zapier with the trigger “Memo Created.” The Reflect action should append to the daily note and use the correct graph ID plus backlinked “audio memos” formatting so entries land under the right parent bullet. After publishing, a test confirms that Reflect receives plaintext transcriptions; real watch recordings then appear after roughly 30 seconds.
What is the end-to-end path for getting an Apple Watch recording into Reflect?
Why does Whisper Memos require special setup before Zapier can use it?
What Zapier trigger and event should be selected?
How should the Reflect step be configured so transcriptions land in the right place?
What formatting or field-mapping pitfalls does the setup warn about?
What should a successful test look like before using the Apple Watch?
Review Questions
- What two integrations must be enabled/connected before Zapier can receive Apple Watch transcriptions (and why)?
- Which Zapier trigger event name is used to detect new Whisper Memos entries?
- How does the Reflect action ensure new transcriptions attach under the correct “audio memos” parent note?
Key Points
1. Apple Watch recordings can be transcribed hands-free and routed into Reflect by using Whisper Memos plus Zapier.
2. Whisper Memos must have its Zapier “catch all” integration enabled on the iPhone, including creating an integration ID and using the provided API key.
3. Zapier needs a Zap with the Whisper Memos trigger set to the “Memo Created” event.
4. The Reflect action should append to the daily note and use the correct graph ID plus backlinked “audio memos” formatting so entries group correctly.
5. Zapier tests should be run before relying on watch recordings; successful tests produce plaintext output in Reflect.
6. Manual entry of graph-related fields matters—preselected/mismatched Zapier fields can break the mapping.
7. End-to-end delivery takes about 30 seconds in the described setup, since audio must move through Whisper Memos and Zapier before appearing in Reflect.