Why Open-Source Clients Matter for Terminal Sharing Security
When a terminal sharing tool claims to protect your data with end-to-end encryption, you face a fundamental question: how do you know?
The tool’s website might feature padlock icons and reassuring language about “military-grade security.” Their documentation might describe encryption protocols in convincing detail. But unless you can see the actual code running on your machine, you’re trusting marketing claims rather than verifiable facts.
Open-source clients change this dynamic. When the source code is publicly available, security becomes auditable rather than aspirational. You can trace exactly what happens to your data from the moment you type a command to the moment it leaves your machine. Claims about encryption aren’t promises—they’re implementation details you can verify.
The Trust Problem with Closed-Source Software
Every piece of software you run on your computer could, in principle, do anything. It could read your files, capture your keystrokes, exfiltrate your credentials, or phone home with your browsing history. You don’t know what it actually does because you can’t see the code.
With closed-source software, you trust the vendor. You trust that their claims match their implementation. You trust that their developers don’t make mistakes. You trust that their build pipeline hasn’t been compromised. You trust that they won’t be compelled to add surveillance capabilities by a government. You trust that a malicious insider won’t slip in a backdoor.
These aren’t paranoid concerns. The SolarWinds attack in 2020 compromised build systems to inject malicious code into software updates, affecting over 18,000 organisations including US government agencies. Users trusted SolarWinds; their trust was exploited.
For terminal sharing specifically, the stakes are high. Your terminal session might include credentials, API keys, proprietary code, or commands that reveal internal infrastructure. A malicious or compromised terminal sharing tool could capture everything.
What Open-Source Enables
When a tool’s client is open-source, the trust model inverts. Instead of trusting the vendor, you trust the code—which you (or someone you trust) can examine.
The practical benefits stack up. Security researchers can audit the implementation for vulnerabilities. Cryptographers can verify that encryption algorithms are used correctly. Privacy advocates can confirm that data handling matches stated policies. And the developer community can spot suspicious behaviour that a closed review process might miss.
This collective scrutiny tends to improve security over time. Bugs get reported and fixed publicly. Questionable design decisions face community criticism. The knowledge that others are watching creates pressure to do things right.
Consider the difference when evaluating encryption claims. A closed-source tool says “we use AES-256 encryption.” You have no way to verify this. An open-source tool makes the same claim, and you can search the codebase for encryption function calls, examine how keys are generated and stored, trace the data flow from plaintext to ciphertext, and confirm that encryption actually happens before data leaves your machine.
The Audit Trail: Vulnerabilities Found Through Open-Source Review
Open-source’s security value isn’t theoretical. Major vulnerabilities have been discovered through public code review—sometimes saving the internet from catastrophic failures.
The Heartbleed vulnerability in OpenSSL exemplifies both the risks and benefits of open-source. The bug, introduced in December 2011, allowed attackers to read up to 64KB of server memory per request, potentially exposing private keys, session tokens, and user credentials. It affected an estimated 66% of web servers and triggered mass revocation and reissuing of TLS certificates across the internet.
Heartbleed existed in open-source code for over two years before discovery. Critics point to this as evidence that “many eyes” don’t guarantee security. But consider the counterfactual: had OpenSSL been closed-source, the same bug would have existed, potentially for longer, and discovery would have depended entirely on the vendor’s internal processes rather than independent researchers.
More importantly, Neel Mehta at Google discovered Heartbleed through line-by-line code audit—exactly the kind of review that open-source enables. The fix was developed and deployed rapidly because anyone could understand the problem and contribute solutions. The disclosure process, while imperfect, allowed affected organisations to patch before mass exploitation. None of this would have been possible with closed-source code.
Log4Shell, discovered in late 2021, showed similar dynamics. The vulnerability in the open-source Log4j library allowed remote code execution through a JNDI lookup feature that had existed since 2013. When Chen Zhaojun at Alibaba Cloud Security Team discovered it, the community mobilised rapidly. Within days, patches were available and guidance was published. The open nature of the code meant that anyone could understand the vulnerability, assess their exposure, and implement mitigations.
The Spectrum of Trust Models
Not all open-source is equivalent, and the client-server architecture of terminal sharing tools creates nuances worth understanding.
Fully open-source tools publish code for both client and server components. tmate is BSD-licensed with server code available at github.com/tmate-io/tmate-ssh-server. upterm’s client and server are the same binary with different execution modes, all Apache-2.0 licensed. With these tools, you can audit the entire system and run your own infrastructure if you want complete control.
Open-source client with closed-source server represents a middle ground. The client running on your machine is auditable, so you can verify what data leaves your device and how it’s encrypted. The server is opaque, but if encryption is truly end-to-end, the server handles only ciphertext anyway. Signal illustrates the underlying principle: its clients are open-source (under the GPLv3 and AGPLv3), and although Signal also publishes server code, users have no way to verify what actually runs on the production servers. This works for E2EE systems because the server’s inability to read content is enforced cryptographically, not by policy.
Fully closed-source tools require complete trust in the vendor. You cannot verify encryption claims, audit data handling, or confirm that the tool behaves as documented. This doesn’t mean such tools are insecure—many are professionally developed with internal security practices—but it does mean you’re trusting rather than verifying.
For terminal sharing specifically, the open-source client model makes particular sense. Your primary concern is what happens on your machine: what gets captured, what gets encrypted, and what gets transmitted. An open-source client lets you answer these questions definitively, regardless of server architecture.
Practical Verification Techniques
Knowing that code is open-source is one thing; actually verifying it is another. Several techniques make this practical even if you’re not a professional security researcher.
Reading the source is the most direct approach. Modern terminal sharing clients are typically thousands, not millions, of lines of code. You can search for relevant patterns: encryption function calls, network transmission code, key generation and storage. Even without understanding every line, you can verify that encryption libraries are used and that data passes through them before transmission.
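As a sketch of this kind of search, the following Python script walks a checked-out source tree and flags lines that mention common cryptographic identifiers. The pattern list is illustrative, not exhaustive; adapt it to the libraries the project actually claims to use.

```python
import re
from pathlib import Path

# Illustrative patterns only: identifiers commonly seen around encryption
# code (cipher names, key-exchange primitives, key-derivation functions).
CRYPTO_PATTERNS = re.compile(
    r"aes|chacha20|poly1305|x25519|hkdf|encrypt|keypair",
    re.IGNORECASE,
)

def scan_source(root: str, suffixes=(".rs", ".go", ".ts", ".py")):
    """Yield (path, line number, line) for lines mentioning crypto terms."""
    for path in Path(root).rglob("*"):
        if path.suffix not in suffixes or not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if CRYPTO_PATTERNS.search(line):
                yield path, lineno, line.strip()

# Example: list every crypto-related line in a client repo checkout.
# for path, lineno, line in scan_source("./client-src"):
#     print(f"{path}:{lineno}: {line}")
```

A session or two spent following these hits tells you whether the data path actually routes through an encryption primitive before any networking code.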
Network traffic inspection provides external verification. Tools like Wireshark capture packets on your network interface. If a tool claims end-to-end encryption, transmitted data should appear random—high entropy with no readable strings. Seeing cleartext terminal content in packet captures definitively disproves encryption claims. mitmproxy can intercept HTTPS traffic (with appropriate certificate installation) to inspect what’s being sent to remote servers.
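The “should appear random” test can be made quantitative with Shannon entropy. A minimal sketch, using random bytes as a stand-in for a real packet capture: well-encrypted payloads approach 8 bits of entropy per byte, while terminal text sits far lower.

```python
import math
import secrets
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: near 8.0 for ciphertext, much lower for text."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Stand-ins for captured payloads (a real check would read Wireshark output).
ciphertext_like = secrets.token_bytes(4096)
plaintext_like = b"export AWS_SECRET_ACCESS_KEY=..." * 128

print(round(shannon_entropy(ciphertext_like), 2))  # close to 8.0
print(round(shannon_entropy(plaintext_like), 2))   # well below 8.0
```

High entropy alone doesn’t prove correct encryption (compressed data also scores high), but low entropy on a supposedly encrypted channel is a definitive red flag.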
Reproducible builds go further by verifying that the binary you run matches the published source code. The ideal case: you compile the client yourself from audited source and run your own build. More practically, some projects provide reproducible build configurations that let you verify the official binary was built from the claimed source code.
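The comparison itself is simple: hash your own build and the official binary and check that they match. The sketch below assumes the project’s build is actually reproducible (pinned toolchain, fixed timestamps, deterministic flags); without that, byte-identical output is not expected.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_official(official_binary: str, local_build: str) -> bool:
    """True when your local build is byte-identical to the published binary."""
    return sha256_of(official_binary) == sha256_of(local_build)
```

If the hashes differ, diff the build environments before assuming foul play: embedded timestamps and paths are the usual culprits.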
Dependency auditing extends verification to libraries the client uses. The client code might be perfect, but if it depends on a compromised library, the system is compromised. Tools like npm audit, cargo audit, and pip-audit check dependencies against known vulnerability databases. Software composition analysis tools provide deeper examination for high-security environments.
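At its core, this class of tool compares pinned dependency versions against an advisory database, as in the sketch below. The advisory entry here is hypothetical, purely for illustration; real tools like cargo audit and pip-audit query curated databases such as RustSec and OSV.

```python
# HYPOTHETICAL advisory data, for illustration only (not a real CVE).
ADVISORIES = {
    # name: (highest affected version, advisory id)
    "examplelib": ((1, 4, 2), "HYPO-2024-0001"),
}

def parse_version(v: str) -> tuple:
    """'1.3.0' -> (1, 3, 0), so versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

def audit(dependencies: dict) -> list:
    """Return findings for dependencies at or below an affected version."""
    findings = []
    for name, version in dependencies.items():
        if name in ADVISORIES:
            max_affected, advisory_id = ADVISORIES[name]
            if parse_version(version) <= max_affected:
                findings.append(f"{name} {version}: {advisory_id}")
    return findings

print(audit({"examplelib": "1.3.0", "otherlib": "2.0.1"}))
# → ['examplelib 1.3.0: HYPO-2024-0001']
```

The real tools add considerable sophistication (semver ranges, yanked releases, transitive dependencies), but the comparison above is the essential operation.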
When Scrutiny Revealed Problems
Open-source’s value shows most clearly when community scrutiny catches issues that internal review missed.
In August 2023, the popular .NET mocking library Moq added a dependency called SponsorLink without clear disclosure. Community members examining the change discovered that SponsorLink scanned local git configurations for developer email addresses, hashed them, and transmitted them to an external CDN. This data collection happened automatically when developers built projects using Moq—over 476 million downloads to that point.
The outcry was immediate because the code was visible. Developers could see exactly what SponsorLink did. The original code was obfuscated, but decompilation revealed its behaviour. Had Moq been closed-source, this privacy violation might have continued indefinitely.
Browser telemetry studies by Douglas Leith at Trinity College Dublin demonstrated similar dynamics. His 2020-2021 research showed that Chrome shares details of pages you visit with Google through features like omnibox autocomplete, while Edge reports browsing-related data to Microsoft tied to persistent device identifiers. This wasn’t discovered through leaked documents or whistleblowers—it was found by examining open-source Chromium code and analysing network traffic. The research led to increased adoption of privacy-focused forks like LibreWolf and ungoogled-chromium, which exist precisely because the codebase is open enough to modify.
VS Code provides another instructive example. Microsoft’s code editor is based on the open-source VS Code project but includes proprietary telemetry and extensions. Community examination of the codebase revealed the extent of this telemetry, leading to the creation of VSCodium, a build of VS Code without Microsoft’s proprietary additions. Users who want the editor without the telemetry can now verify they’re getting it.
The Open-Source Client Advantage for Terminal Sharing
Terminal sharing tools handle particularly sensitive data. Commands you type might include passwords passed as arguments, database credentials in connection strings, API keys in environment variables, or proprietary code visible in your editor. The tool sees everything your terminal sees.
An open-source client provides assurance that this sensitive data is handled appropriately. You can verify that encryption happens before transmission. You can confirm that credentials aren’t logged or sent separately. You can check that the tool doesn’t exfiltrate data beyond what’s necessary for its stated purpose.
klaas exemplifies this approach. The client code is published under the MIT license on GitHub. Users concerned about security can read the Rust source code to see exactly how sessions are encrypted and transmitted. The encryption implementation isn’t hidden behind claims—it’s visible for inspection.
This doesn’t mean you must personally audit every tool you use. Most developers won’t read the source of every dependency. But the possibility of auditing creates accountability. Developers know their code might be examined. Security researchers can investigate suspicious behaviour. And when questions arise, answers are available in the codebase rather than locked behind vendor communications.
Making the Assessment
When evaluating a terminal sharing tool’s security, consider these questions:
Is the client open-source? Can you access the code running on your machine, or are you trusting compiled binaries?
Are security claims verifiable? When the tool says “end-to-end encrypted,” can you trace the encryption in the source code, or is it just marketing language?
What’s the project’s security history? Have vulnerabilities been responsibly disclosed and fixed? Is there evidence of security audits?
What’s the dependency situation? Does the client rely on well-maintained libraries, or obscure packages that might be compromised?
Can you build from source? If extreme assurance is required, can you compile the client yourself to eliminate trust in the build pipeline?
These questions don’t have binary answers. A closed-source tool from a reputable vendor with published security audits might be more trustworthy than an open-source project with no review history. Context matters. But all else being equal, the ability to verify security claims is strictly better than the alternative.
Terminal sharing requires trusting something with access to your most sensitive workflows. Open-source clients let you make that trust decision based on evidence rather than faith.
Questions? Join our GitHub Discussions or reach out on 𝕏 @klaas_sh.