Electron in 2026: When It's Still the Right Choice
The Default Advice
The default advice in 2026 is clear: do not use Electron. Use Tauri. Use native. Use a web app. Electron is bloated, memory-hungry, and ships an entire browser engine for what could be a 5 MB binary.
I agree with this advice in most cases.
Falavra is an Electron app. I chose Electron deliberately, after shipping a native Swift app (DropVox) in the same month. This is not a post defending Electron out of ignorance or inertia. It is a post about understanding when the criticism does not apply and when the alternatives cannot do what you need.
What Falavra Does
Falavra is a desktop application for language learning through media. You point it at a video or audio file (or a YouTube URL), and it transcribes the content, segments it by sentence, and lets you review each sentence with playback controls, translation, and vocabulary extraction.
The technical requirements:
- Speech recognition using sherpa-onnx, an ONNX Runtime-based inference engine that runs Whisper and other speech models locally. The Node.js binding is sherpa-onnx-node.
- Local database using better-sqlite3 for storing transcriptions, vocabulary, and user progress.
- Video/audio downloading by shelling out to yt-dlp for YouTube content and ffmpeg for format conversion.
- Rich UI with waveform visualization, sentence highlighting, synchronized playback, and a vocabulary review interface.
- Offline operation. Everything runs locally. No cloud APIs, no subscriptions, no data leaving the machine.
These requirements dictated the technology choice.
Why Not Tauri
Tauri is the most common suggestion when someone says "I'm building a desktop app." It is lighter than Electron, uses the system webview instead of shipping Chromium, and the backend is written in Rust. I evaluated it seriously.
The problem is the backend. Tauri's Rust backend does not have access to the Node.js ecosystem. sherpa-onnx has a Node.js binding (sherpa-onnx-node) that wraps the C++ library with a JavaScript-friendly API. There is no equivalent Rust binding. I would have needed to write Rust FFI bindings to the sherpa-onnx C API myself, handle memory management across the FFI boundary, and maintain those bindings as sherpa-onnx evolves.
better-sqlite3 is another Node.js native addon with no Rust equivalent at the same quality level. Rust has rusqlite, which is excellent, but migrating to a different SQLite binding means rewriting the data layer entirely, not just porting syntax.
The yt-dlp integration is language-agnostic (it is a subprocess call), but the ffmpeg integration I use for audio extraction involves fluent-ffmpeg, a Node.js library with a mature API for composing complex ffmpeg pipelines.
Each individual dependency could be replaced. But replacing all of them simultaneously while learning Rust would have tripled the development time. Tauri's advantages in bundle size and memory consumption are real, but not worth a 3x development timeline for a first version.
Why Not Native
I already know Swift. I just shipped DropVox in it. Why not build Falavra natively?
Because the UI requirements are fundamentally different. DropVox is a menu bar utility with simple views -- a popover, a settings window, a floating drop zone. SwiftUI handles these elegantly.
Falavra has a complex, data-dense interface. A waveform visualizer with scrub controls. A sentence list with synchronized highlighting. A vocabulary panel with search, filtering, and spaced repetition statistics. A media player with subtitle-style overlay. Building this in SwiftUI would be possible but slower than building it in React, where I have 11 years of muscle memory for exactly this kind of interface.
And the native module problem remains. sherpa-onnx has a Swift/Objective-C binding, but it is less mature than the Node.js binding. The documentation is thinner. The community using it is smaller. I would be debugging ML integration issues with fewer resources to draw on.
The honest calculation: Electron lets me use my fastest UI stack (React + TypeScript + Tailwind) with my required dependencies (sherpa-onnx-node, better-sqlite3, yt-dlp). The productivity advantage was decisive.
What Electron Gives You
Full Node.js in the Main Process
This is the core advantage and the one that matters most for Falavra. The main process is a real Node.js environment. It can require native addons, spawn child processes, access the file system without restrictions, and use the full breadth of the npm ecosystem.
// Main process: spawn yt-dlp to download a YouTube video
import { spawn } from 'child_process';

function downloadVideo(url: string, outputPath: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const ytdlp = spawn('yt-dlp', [
      '--format', 'bestaudio',
      '--output', outputPath,
      '--no-playlist',
      url
    ]);
    // 'error' fires when yt-dlp cannot be spawned at all (e.g. not on PATH)
    ytdlp.on('error', reject);
    ytdlp.on('close', (code) => {
      if (code === 0) resolve();
      else reject(new Error(`yt-dlp exited with ${code}`));
    });
  });
}
// Main process: transcribe audio using sherpa-onnx-node
import { readFileSync } from 'fs';
import { OfflineRecognizer } from 'sherpa-onnx-node';

function transcribe(audioPath: string): string {
  const recognizer = new OfflineRecognizer(config); // config (model paths, etc.) defined elsewhere
  const stream = recognizer.createStream();
  const waveData = readWavFile(readFileSync(audioPath)); // pure-JS WAV reader, discussed later
  stream.acceptWaveform({
    sampleRate: waveData.sampleRate,
    samples: waveData.samples
  });
  recognizer.decode(stream);
  return stream.result.text;
}
// Main process: query the local database
import Database from 'better-sqlite3';

const db = new Database(dbPath);
const sentences = db.prepare(`
  SELECT * FROM sentences
  WHERE transcription_id = ?
  ORDER BY start_time
`).all(transcriptionId);
Three different native integrations -- a subprocess, a native addon, and a native database -- all in the same process, all using their best-in-class Node.js implementations. No FFI wrappers. No language bridges. No compromises on library quality.
Typed IPC
Communication between the main process and the renderer uses Electron's IPC (Inter-Process Communication). With TypeScript, you can type the entire channel:
// shared/ipc-types.ts
export interface IpcChannels {
  'transcribe': {
    args: [audioPath: string, language: string];
    return: TranscriptionResult;
  };
  'download-video': {
    args: [url: string];
    return: { outputPath: string };
  };
  'get-sentences': {
    args: [transcriptionId: string];
    return: Sentence[];
  };
}
// main process
ipcMain.handle('transcribe', async (_, audioPath: string, language: string) => {
  return await transcriptionService.transcribe(audioPath, language);
});

// renderer process
const result = await window.electron.invoke('transcribe', audioPath, 'en');
// result is typed as TranscriptionResult
This pattern gives you type safety across the process boundary. If the main process handler returns the wrong shape, TypeScript catches it at compile time. In practice, this makes IPC feel like calling a regular async function.
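The generic wrapper behind that typed invoke is small. Here is a sketch of one way to build it -- makeTypedInvoke and the stub channel map below are illustrative, not Falavra's actual bridge; in a real app the raw function would be ipcRenderer.invoke, exposed through the preload script:

```typescript
// Sketch of a typed invoke wrapper (hypothetical names, not Falavra's code).
// `rawInvoke` stands in for Electron's ipcRenderer.invoke so the pattern
// can be demonstrated without an Electron runtime.
type RawInvoke = (channel: string, ...args: unknown[]) => Promise<unknown>;

interface IpcChannels {
  'transcribe': { args: [audioPath: string, language: string]; return: { text: string } };
  'get-sentences': { args: [transcriptionId: string]; return: string[] };
}

function makeTypedInvoke(rawInvoke: RawInvoke) {
  return <C extends keyof IpcChannels>(
    channel: C,
    ...args: IpcChannels[C]['args']
  ): Promise<IpcChannels[C]['return']> =>
    rawInvoke(channel, ...args) as Promise<IpcChannels[C]['return']>;
}

// Stub backend so the wrapper is demonstrable outside Electron.
const invoke = makeTypedInvoke(async (channel, ...args) =>
  channel === 'transcribe' ? { text: `stub:${args[0]}:${args[1]}` } : []
);
```

With this in place, a wrong channel name, a missing argument, or a misused return value is a compile error in the renderer, not a runtime surprise crossing the process boundary.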
Mature Tooling
electron-builder handles packaging, code signing, notarization, DMG creation, and auto-updates. I wrote about the manual macOS distribution pipeline for DropVox -- the signing order, the notarytool invocations, the Sparkle integration. For Falavra, electron-builder does all of that with configuration:
{
  "build": {
    "appId": "dev.helsky.falavra",
    "mac": {
      "category": "public.app-category.education",
      "hardenedRuntime": true,
      "gatekeeperAssess": false,
      "entitlements": "build/entitlements.mac.plist",
      "notarize": {
        "teamId": "TEAMID"
      }
    },
    "dmg": {
      "background": "build/dmg-background.png",
      "icon": "build/dmg-icon.icns",
      "contents": [
        { "x": 150, "y": 190 },
        { "x": 450, "y": 190, "type": "link", "path": "/Applications" }
      ]
    }
  }
}
That JSON replaces about 200 lines of shell scripts and workflow configuration. It is not perfect -- I will discuss the edge cases shortly -- but for the common path, it works.
A Decade of Production Use
Electron has been used in production by VS Code, Slack, Discord, Figma (until recently), Notion, and hundreds of other applications. The runtime is battle-tested. The bugs are documented. The workarounds are known. The community has solutions for obscure problems.
When I hit an issue with native module loading on macOS 15, I found the answer in a GitHub issue from 2024 with a clear workaround. When I needed to configure the V8 memory settings for large audio processing, there were multiple blog posts and Stack Overflow answers explaining the options. This ecosystem depth has real value.
The Real Downsides
I will not pretend the criticism is unfounded. Electron has genuine costs.
Memory
Falavra at idle consumes 200-300 MB of RAM. That is the Chromium renderer, the V8 engine, the Node.js main process, and the framework overhead. DropVox, doing similar work in native Swift, idles at 30-50 MB.
The 200 MB baseline is not catastrophic on modern machines with 16 or 32 GB of RAM. But it is noticeable. It shows up in Activity Monitor. Users who monitor resource usage will see it and form an opinion.
During active transcription with sherpa-onnx running, Falavra peaks at 800 MB to 1.2 GB depending on the model size. This is the combined cost of the ONNX runtime, the audio data in memory, and the Electron overhead. A native implementation would likely use 400-600 MB for the same workload.
Bundle Size
The Falavra DMG is approximately 280 MB. This includes Chromium, Node.js, the React application, the sherpa-onnx native module, and the ML model files. DropVox's DMG is 18 MB.
Most of this is Chromium, which is roughly 120 MB compressed. You ship an entire browser engine regardless of how simple your UI is. For Falavra, where the UI is genuinely complex, this is a reasonable cost. For a simpler app, it would be absurd.
The V8 Memory Cage
This is the most technical and most frustrating downside I encountered. V8 (Chrome's JavaScript engine) has a security feature called the memory cage that restricts where JavaScript ArrayBuffers can be allocated. External ArrayBuffers -- allocated by native code and exposed to JavaScript -- must be within the V8 memory cage.
sherpa-onnx-node's readWave() function returns audio data as an external ArrayBuffer allocated by the C++ layer. This ArrayBuffer is outside the V8 memory cage. In Electron 28+, accessing it causes an immediate crash with no useful error message.
The fix was to stop using sherpa-onnx's built-in WAV reader entirely and write a pure JavaScript implementation:
function readWavFile(buffer: Buffer): { samples: Float32Array; sampleRate: number } {
  const view = new DataView(buffer.buffer, buffer.byteOffset, buffer.byteLength);

  // Parse WAV header
  const numChannels = view.getUint16(22, true);
  const sampleRate = view.getUint32(24, true);
  const bitsPerSample = view.getUint16(34, true);

  // Find the data chunk
  let offset = 36;
  while (offset < buffer.length - 8) {
    const chunkId = String.fromCharCode(
      view.getUint8(offset),
      view.getUint8(offset + 1),
      view.getUint8(offset + 2),
      view.getUint8(offset + 3)
    );
    const chunkSize = view.getUint32(offset + 4, true);
    if (chunkId === 'data') {
      offset += 8;
      break;
    }
    offset += 8 + chunkSize;
  }

  // Convert to Float32Array (allocated by V8, inside the memory cage).
  // Only the first channel is kept; the speech models expect mono input.
  const bytesPerSample = bitsPerSample / 8;
  const numSamples = Math.floor((buffer.length - offset) / bytesPerSample / numChannels);
  const samples = new Float32Array(numSamples);
  for (let i = 0; i < numSamples; i++) {
    const sampleOffset = offset + i * numChannels * bytesPerSample;
    if (bitsPerSample === 16) {
      samples[i] = view.getInt16(sampleOffset, true) / 32768;
    } else if (bitsPerSample === 32) {
      samples[i] = view.getFloat32(sampleOffset, true);
    }
  }
  return { samples, sampleRate };
}
This allocates the Float32Array in JavaScript, inside the V8 memory cage, and copies the data from the Buffer. It works, but it is slower than the native reader and uses more memory temporarily (the Buffer and the Float32Array coexist briefly).
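A reader like this is easy to verify without touching Electron at all: synthesize a minimal WAV in memory and check that the samples round-trip. The sketch below condenses the reader to the 16-bit mono case (readWav16) and pairs it with a hypothetical makeTestWav helper; neither is Falavra's production code:

```typescript
// Condensed 16-bit mono version of the pure-JS WAV reader, for testing.
function readWav16(buffer: Buffer): { samples: Float32Array; sampleRate: number } {
  const view = new DataView(buffer.buffer, buffer.byteOffset, buffer.byteLength);
  const numChannels = view.getUint16(22, true);
  const sampleRate = view.getUint32(24, true);
  let offset = 36;
  while (offset < buffer.length - 8) {
    const id = buffer.toString('ascii', offset, offset + 4);
    const size = view.getUint32(offset + 4, true);
    if (id === 'data') { offset += 8; break; }
    offset += 8 + size;
  }
  const numFrames = Math.floor((buffer.length - offset) / (2 * numChannels));
  const samples = new Float32Array(numFrames);
  for (let i = 0; i < numFrames; i++) {
    samples[i] = view.getInt16(offset + i * numChannels * 2, true) / 32768;
  }
  return { samples, sampleRate };
}

// Hypothetical helper: build a canonical 44-byte-header mono PCM WAV in memory.
function makeTestWav(pcm: number[], sampleRate: number): Buffer {
  const data = Buffer.alloc(44 + pcm.length * 2);
  data.write('RIFF', 0); data.writeUInt32LE(36 + pcm.length * 2, 4);
  data.write('WAVE', 8); data.write('fmt ', 12);
  data.writeUInt32LE(16, 16);             // fmt chunk size
  data.writeUInt16LE(1, 20);              // PCM format
  data.writeUInt16LE(1, 22);              // mono
  data.writeUInt32LE(sampleRate, 24);
  data.writeUInt32LE(sampleRate * 2, 28); // byte rate
  data.writeUInt16LE(2, 32);              // block align
  data.writeUInt16LE(16, 34);             // bits per sample
  data.write('data', 36); data.writeUInt32LE(pcm.length * 2, 40);
  pcm.forEach((s, i) => data.writeInt16LE(s, 44 + i * 2));
  return data;
}
```

Because everything here is plain Buffers and typed arrays, this kind of check runs in a regular Node test suite, with no Electron renderer involved.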
I spent two days debugging this crash. The error message in the Electron console was "FATAL ERROR: v8::ArrayBuffer::Detach Only ArrayBuffers allocated by V8 can be detached." No stack trace pointing to sherpa-onnx. No indication that the issue was a native module returning an external buffer. Just a crash.
This is the kind of issue you only encounter in Electron because it is the only framework where native C++ code and JavaScript share memory through V8's specific allocation model.
Native Module Rebuilds
Electron uses a specific version of Node.js internally, and native modules must be compiled against that version's ABI (Application Binary Interface). If you install sherpa-onnx-node or better-sqlite3 with regular npm install, they compile against your system Node.js. Electron will refuse to load them.
The fix is electron-rebuild, which recompiles all native modules against Electron's Node.js:
npx electron-rebuild
This must run after every npm install that adds or updates a native module. It must run on the target platform (you cannot rebuild macOS modules on Linux). And it must match the exact Electron version in your package.json.
When it works, it is invisible. When it fails, the error messages are often about missing symbols or ABI mismatches that require understanding C++ linking to debug.
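The usual guard against forgetting the rebuild is a postinstall hook, so it runs automatically after every install. A minimal sketch of the relevant package.json fragment (script contents will vary with your setup, and newer projects use the renamed @electron/rebuild package):

```json
{
  "scripts": {
    "postinstall": "electron-rebuild"
  }
}
```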
When Electron Is Wrong
I have now shipped both native and Electron apps. Based on that experience, here is when Electron is the wrong choice:
Simple utilities. If your app is a menu bar icon, a settings window, and a background service, use native (Swift for macOS, C# for Windows) or Tauri. The 200 MB memory overhead and 120 MB Chromium bundle are not justified.
Battery-critical applications. Chromium is not power-efficient compared to native rendering. If your app runs continuously and users are on laptops, the battery drain is meaningful. DropVox went native specifically because a menu bar app must have negligible battery impact.
Apps without Node.js needs. If your app does not use native Node.js modules, does not spawn child processes, and does not need the npm ecosystem for its core functionality, there is no reason to ship Node.js. Tauri gives you the web-based renderer with a smaller footprint.
Performance-critical rendering. If your app needs to render at 60fps consistently (games, video editors, audio DAWs), Chromium's rendering pipeline adds latency that native rendering avoids. The difference is measurable on high-refresh-rate displays.
When Electron Is Right
And here is when the criticism does not apply:
Heavy native module usage. When your core functionality depends on Node.js native addons that do not have equivalents in other languages, Electron is the only framework that gives you those modules with zero friction.
Complex, data-dense UIs. When the interface has dozens of interactive components, complex state management, real-time updates, and responsive layouts, web technology with React is genuinely faster to develop than native UI frameworks. Not because native frameworks cannot do it, but because the ecosystem of UI libraries, state management tools, and component patterns in React is deeper.
Cross-platform with shared codebases. When you need macOS and Windows (and maybe Linux) from a single codebase, Electron provides this with minimal platform-specific code. Native development means maintaining two or three separate codebases.
Node.js ecosystem integration. When your app is a desktop interface for functionality that already exists in the Node.js ecosystem -- yt-dlp wrappers, ffmpeg pipelines, ML runtimes, database tools -- Electron lets you use those tools directly instead of porting them.
My Assessment
I shipped DropVox (native Swift) and Falavra (Electron) within the same month. DropVox is faster, lighter, and more power-efficient. Falavra has a richer UI, deeper functionality, and access to tools that do not exist outside Node.js.
Neither choice was wrong. Both were deliberate.
The technology discourse around Electron in 2026 is dominated by people who have strong opinions about what other developers should use. The Tauri advocates point at memory usage and say Electron is wasteful. The native advocates point at bundle size and say Electron is lazy. The web advocates say desktop apps should just be PWAs.
They are all correct in their specific context and wrong as general advice. The right tool depends on your requirements, your team's skills, your timeline, and your dependencies. Not the zeitgeist.
If you need Node.js native modules, complex web-based UI, and cross-platform support, Electron remains the best option in 2026. Not because it is perfect. Because nothing else can do what it does.
If you do not need those things, use something else. I did, for DropVox, and it was the right call.
The nuance is in knowing which situation you are in.
I write about desktop development, native apps, and the decisions behind choosing one technology over another at helrabelo.dev. For questions or to compare notes on Electron vs native, find me on LinkedIn.