OxiDock
By Sidharth Singh (https://x.com/sid10singh)
Welcome to OxiDock, a cross-platform VPS file browser written in Rust and React, designed to let you connect to remote servers over SSH/SFTP and manage files as naturally as using a local file manager.
OxiDock is not a web-based file manager that requires a server-side daemon. It embeds a native SSH client directly inside the application via russh, establishes connections from the user's device, and communicates with the React frontend through Tauri 2's type-safe IPC layer. There is no middleman, no proxy server, no browser tab; just a native desktop or mobile app talking directly to your VPS.
At its core, OxiDock provides directory browsing, file previews, uploads, downloads, image viewing with caching, and SSH key management, all with performance instrumentation and a two-tier caching strategy that makes navigation feel instant.
What Makes OxiDock Different?
Most remote file browsers fall into one of two categories: web UIs that require a companion daemon running on the server (Filebrowser, Cloud Commander), or terminal-based tools that sacrifice usability for simplicity (scp, rsync, Midnight Commander over SSH). OxiDock sits in neither camp:
- Native SSH/SFTP via russh – no shelling out to ssh or scp, no external binaries, no PATH dependencies
- Lazy SFTP channel pooling via tokio::sync::OnceCell – one channel created per connection, reused for every operation
- Two-tier caching – frontend directory prefetch (up to 20 child dirs in parallel) plus backend mtime-aware image LRU on disk
- True cross-platform – single Rust + React codebase targeting Linux, macOS, Windows, and Android
- Biometric-gated key storage on mobile – SSH private keys protected by fingerprint/face before access
The Rust layer handles all I/O, cryptography, and session management. The React layer handles all rendering, navigation state, and user interaction. They communicate exclusively through Tauri's invoke() mechanism: serialized arguments in, serialized results out, no shared memory.
Why OnceCell Over Reconnecting
An SFTP session is not a simple socket. Creating one requires three round trips with the server: opening an SSH channel, requesting the sftp subsystem on that channel, and completing the SFTP protocol handshake. On a high-latency connection to a VPS across the ocean, this can easily cost 100-300ms per operation.
The naive approach creates a new SFTP channel for every file operation:
async fn list_dir(handle: &client::Handle<ClientHandler>, path: &str) -> AppResult<Vec<FileEntry>> {
// 3 round trips on every single call
let channel = handle.channel_open_session().await?;
channel.request_subsystem(true, "sftp").await?;
let sftp = SftpSession::new(channel.into_stream()).await?;
let entries = sftp.read_dir(path).await?;
// channel dropped here, wasted
// ...
}
If a user navigates through 10 directories, that is 30 unnecessary round trips and up to 3 seconds of pure channel setup overhead. Every file preview, every upload, every download repeats this cost.
OxiDock instead uses tokio::sync::OnceCell to lazily initialize a single SFTP channel per SSH connection and reuse it for the lifetime of that session:
pub struct SshSession {
handle: client::Handle<ClientHandler>,
pub(crate) host: String,
pub(crate) user: String,
sftp: OnceCell<SftpSession>,
}
impl SshSession {
pub(crate) async fn sftp(&self) -> AppResult<&SftpSession> {
self.sftp
.get_or_try_init(|| async {
let channel = self
.handle
.channel_open_session()
.await
.map_err(|e| AppError::Sftp(format!("Failed to open channel: {e}")))?;
channel
.request_subsystem(true, "sftp")
.await
.map_err(|e| {
AppError::Sftp(format!("Failed to request sftp subsystem: {e}"))
})?;
SftpSession::new(channel.into_stream())
.await
.map_err(|e| AppError::Sftp(format!("Failed to init SFTP session: {e}")))
})
.await
}
}
OnceCell::get_or_try_init guarantees that only one initialization runs at a time and that, once it succeeds, it never runs again, even if multiple tasks call sftp() concurrently; if initialization fails, the next caller simply retries it. Subsequent calls return a reference to the already-initialized session with negligible overhead: no channel setup, no contention, just a cheap check of the cell.
This is conceptually identical to connection pooling in database drivers: pay the connection cost once, amortize it across all queries. The difference is that SFTP's stateful channel model makes pooling more natural: a single channel can multiplex arbitrary file operations without contention at the protocol level.
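As a minimal usage sketch (illustrative only; it reuses the SftpSession methods shown later in this post, and the paths are made up), the cost lands entirely on the first sftp() call:
async fn browse_then_preview(session: &SshSession) -> AppResult<()> {
    // First call: lazily opens the channel (~100-300ms on a distant VPS)
    let sftp = session.sftp().await?;
    let _entries = sftp
        .read_dir("/var/www")
        .await
        .map_err(|e| AppError::Sftp(e.to_string()))?;
    // Second call: returns the already-initialized session from the OnceCell
    let sftp = session.sftp().await?;
    let _bytes = sftp
        .read("/var/www/index.html")
        .await
        .map_err(|e| AppError::Sftp(e.to_string()))?;
    Ok(())
}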
Core Architecture
OxiDock follows a clean split between the native Rust layer (SSH, SFTP, key storage, file I/O) and the React frontend (UI, state, navigation). All communication flows through Tauri's invoke() IPC.
React (MUI Material) ──invoke()──▶ Tauri commands (commands.rs)
        │
        ├── KeyStore ──▶ ssh_keys.json (disk)
        │
        └── SshSessionManager
                │
                └── russh SSH (authentication, channel mgmt)
                        │
                        └── russh-sftp (file operations, dir listing)
The Rust entry point wires everything together in lib.rs:
pub fn run() {
env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("oxidock=debug"))
.format_timestamp_millis()
.init();
tauri::Builder::default()
.plugin(tauri_plugin_fs::init())
.plugin(tauri_plugin_opener::init())
.plugin(tauri_plugin_dialog::init())
.plugin(tauri_plugin_process::init())
.setup(|app| {
let app_dir = app.path().app_data_dir().expect("Failed to get app data dir");
std::fs::create_dir_all(&app_dir).ok();
let vault_path = app_dir.join("ssh_keys.json");
let key_store = Arc::new(KeyStore::new(vault_path));
let session_mgr = Arc::new(SshSessionManager::new(key_store.clone()));
app.manage(key_store);
app.manage(session_mgr);
#[cfg(mobile)]
app.handle().plugin(tauri_plugin_biometric::init())?;
#[cfg(mobile)]
app.handle()
.plugin(tauri_plugin_mobile_onbackpressed_listener::init())?;
Ok(())
})
.invoke_handler(tauri::generate_handler![
commands::store_key,
commands::list_keys,
commands::delete_key,
commands::get_key,
commands::list_supported_key_types,
commands::ssh_connect,
commands::ssh_test_connection,
commands::ssh_disconnect,
commands::ssh_list_sessions,
commands::sftp_list_dir,
commands::sftp_read_file_preview,
commands::sftp_download_file,
commands::sftp_save_file,
commands::sftp_create_dir,
commands::sftp_upload_file,
commands::sftp_cache_image,
commands::open_file_externally,
commands::sftp_delete_file,
])
.run(tauri::generate_context!())
.expect("error while running tauri application");
}
KeyStore and SshSessionManager are constructed once in setup, wrapped in Arc, and registered via app.manage(). Tauri then injects them as State<'_, Arc<T>> into any command handler that requests them. Mobile-only plugins (biometric authentication, Android back-gesture listener) are conditionally loaded behind #[cfg(mobile)].
1. SshSessionManager
Role: Session lifecycle and authentication.
The SshSessionManager maintains a HashMap<String, Arc<SshSession>> protected by a tokio::sync::Mutex, keyed by UUID session IDs. It supports two authentication paths:
pub struct SshSessionManager {
sessions: Arc<Mutex<HashMap<String, Arc<SshSession>>>>,
key_store: Arc<KeyStore>,
}
Key-based authentication retrieves the PEM from the KeyStore, parses it via russh::keys::PrivateKey, negotiates the best RSA hash algorithm with the server, and authenticates:
pub async fn connect_with_key(
&self,
host: &str,
port: u16,
user: &str,
key_name: &str,
passphrase: Option<&str>,
) -> AppResult<String> {
let pem = self.key_store.retrieve_key_pem(key_name).await?;
let private_key = if let Some(pass) = passphrase {
PrivateKey::from_openssh(pem.as_bytes())
.and_then(|k| k.decrypt(pass))
.map_err(|e| AppError::Ssh(format!("Failed to decode key: {e}")))?
} else {
PrivateKey::from_openssh(pem.as_bytes())
.map_err(|e| AppError::Ssh(format!("Failed to decode key: {e}")))?
};
let mut handle = self.establish_connection(host, port).await?;
let hash_alg = handle.best_supported_rsa_hash().await.ok().flatten().flatten();
let key_with_hash = PrivateKeyWithHashAlg::new(Arc::new(private_key), hash_alg);
let auth_result = handle
.authenticate_publickey(user, key_with_hash)
.await
.map_err(|e| AppError::Ssh(format!("Auth failed: {e}")))?;
if !auth_result.success() {
return Err(AppError::Ssh("Authentication rejected by server".into()));
}
self.store_session(handle, host, user).await
}
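The establish_connection helper is not reproduced in this post; a minimal sketch of what it has to do, assuming russh's default client config and that ClientHandler can be constructed without arguments, looks like this:
async fn establish_connection(
    &self,
    host: &str,
    port: u16,
) -> AppResult<client::Handle<ClientHandler>> {
    // Open the TCP connection and run the SSH transport handshake
    let config = Arc::new(client::Config::default());
    client::connect(config, (host, port), ClientHandler {})
        .await
        .map_err(|e| AppError::Ssh(format!("Failed to connect to {host}:{port}: {e}")))
}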
Password authentication follows the same pattern but calls authenticate_password instead.
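A sketch of that path, mirroring the key-based variant above:
pub async fn connect_with_password(
    &self,
    host: &str,
    port: u16,
    user: &str,
    password: &str,
) -> AppResult<String> {
    let mut handle = self.establish_connection(host, port).await?;
    let auth_result = handle
        .authenticate_password(user, password)
        .await
        .map_err(|e| AppError::Ssh(format!("Auth failed: {e}")))?;
    if !auth_result.success() {
        return Err(AppError::Ssh("Authentication rejected by server".into()));
    }
    self.store_session(handle, host, user).await
}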
Test connections authenticate and immediately disconnect, which is useful for verifying server profiles before saving them without leaving orphaned sessions in the pool:
pub async fn test_connection_with_key(
&self, host: &str, port: u16, user: &str,
key_name: &str, passphrase: Option<&str>,
) -> AppResult<()> {
// ... authenticate ...
let _ = handle.disconnect(russh::Disconnect::ByApplication, "", "en").await;
Ok(())
}
Session storage generates a UUID v4 and inserts the SshSession into the map:
async fn store_session(
&self,
handle: client::Handle<ClientHandler>,
host: &str,
user: &str,
) -> AppResult<String> {
let session_id = Uuid::new_v4().to_string();
let session = Arc::new(SshSession {
handle,
host: host.to_string(),
user: user.to_string(),
sftp: OnceCell::new(),
});
let mut sessions = self.sessions.lock().await;
sessions.insert(session_id.clone(), session);
Ok(session_id)
}
The Mutex here protects only the session map, not individual sessions. Once a session is retrieved via get_session, operations on it (including the OnceCell-based SFTP channel) proceed without holding the map lock.
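A plausible sketch of get_session under that design (its exact body is not shown in this post): clone the Arc while holding the lock, then let the lock drop before any SFTP work starts:
pub async fn get_session(&self, session_id: &str) -> AppResult<Arc<SshSession>> {
    let sessions = self.sessions.lock().await;
    sessions
        .get(session_id)
        .cloned() // cheap Arc clone; the map lock is released when `sessions` goes out of scope
        .ok_or_else(|| AppError::SessionNotFound(session_id.to_string()))
}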
2. KeyStore – SSH Key Vault
Role: Persistent storage and retrieval of SSH private keys.
The KeyStore persists keys as a JSON file at app_data_dir()/ssh_keys.json. Each key is base64-encoded before storage, with metadata (name, auto-detected type, fingerprint, creation timestamp) stored alongside:
struct KeyRecord {
name: String,
key_type: KeyType,
fingerprint: String,
created_at: String,
key_pem_b64: String,
}
Key type detection inspects PEM headers and, for OpenSSH-format keys, decodes the base64 payload to find the algorithm identifier string:
pub fn detect_key_type(pem: &str) -> AppResult<KeyType> {
let trimmed = pem.trim();
if trimmed.starts_with("-----BEGIN RSA PRIVATE KEY-----") {
return Ok(KeyType::Rsa);
}
if trimmed.starts_with("-----BEGIN EC PRIVATE KEY-----") {
return Ok(KeyType::Ecdsa);
}
if trimmed.starts_with("-----BEGIN PRIVATE KEY-----") {
return Ok(KeyType::Pem);
}
if trimmed.starts_with("-----BEGIN OPENSSH PRIVATE KEY-----") {
let body: String = trimmed
.lines()
.filter(|l| !l.starts_with("-----"))
.collect();
if let Ok(decoded) = base64::engine::general_purpose::STANDARD.decode(body.as_bytes()) {
let payload = String::from_utf8_lossy(&decoded);
if payload.contains("ssh-rsa") { return Ok(KeyType::Rsa); }
if payload.contains("ssh-ed25519") { return Ok(KeyType::Ed25519); }
if payload.contains("ecdsa-sha2") { return Ok(KeyType::Ecdsa); }
}
return Ok(KeyType::Pem);
}
Err(AppError::UnsupportedKeyType(
"Key format not recognized. Supported: PEM, RSA, ECDSA, Ed25519".into(),
))
}
This supports legacy PEM headers (BEGIN RSA PRIVATE KEY, BEGIN EC PRIVATE KEY), generic PKCS#8 (BEGIN PRIVATE KEY), and modern OpenSSH format (BEGIN OPENSSH PRIVATE KEY) with embedded algorithm detection. The auto-detection runs before any write, so the vault never contains keys of unknown type.
File I/O is serialized through a Mutex<()>, a coarse lock that ensures load_index_sync and save_index_sync never interleave. This is sufficient because key operations are infrequent (user-initiated) and the vault file is small.
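Putting the pieces together, a hedged sketch of the store path (io_lock, compute_fingerprint, and the shape of the index are assumptions; detect_key_type, KeyRecord, load_index_sync, and save_index_sync are the names used above):
use base64::Engine as _;
pub async fn store_key(&self, name: &str, pem: &str) -> AppResult<()> {
    let key_type = detect_key_type(pem)?; // runs before any write
    let record = KeyRecord {
        name: name.to_string(),
        key_type,
        fingerprint: compute_fingerprint(pem)?, // hypothetical helper
        created_at: chrono::Utc::now().to_rfc3339(),
        key_pem_b64: base64::engine::general_purpose::STANDARD.encode(pem.as_bytes()),
    };
    // Coarse lock so loading and saving ssh_keys.json never interleave (assuming std::sync::Mutex<()>)
    let _guard = self.io_lock.lock().unwrap();
    let mut index = self.load_index_sync()?;
    index.insert(record.name.clone(), record); // assumes a name-keyed map
    self.save_index_sync(&index)
}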
3. SFTP Operations
Role: All remote file system interactions.
Every SFTP operation follows the same pattern: acquire the pooled SFTP session, perform the operation, log performance metrics:
pub async fn list_dir(session: &Arc<SshSession>, path: &str) -> AppResult<Vec<FileEntry>> {
let total_start = std::time::Instant::now();
let sftp_acquire_start = std::time::Instant::now();
let sftp = session.sftp().await?;
let sftp_acquire_ms = sftp_acquire_start.elapsed().as_secs_f64() * 1000.0;
let readdir_start = std::time::Instant::now();
let entries = sftp
.read_dir(path)
.await
.map_err(|e| AppError::Sftp(format!("Failed to read directory: {e}")))?;
let readdir_ms = readdir_start.elapsed().as_secs_f64() * 1000.0;
let mut files: Vec<FileEntry> = Vec::new();
for entry in entries {
let name = entry.file_name();
if name == "." || name == ".." { continue; }
let full_path = if path.ends_with('/') {
format!("{path}{name}")
} else {
format!("{path}/{name}")
};
let attrs = &entry.metadata();
let is_dir = attrs.is_dir();
let size = attrs.size.unwrap_or(0);
let modified = attrs.mtime.map(|t| {
chrono::DateTime::from_timestamp(t as i64, 0)
.map(|dt| dt.to_rfc3339())
.unwrap_or_default()
});
let is_image = if is_dir { false } else { is_image_ext(&name) };
files.push(FileEntry { name, path: full_path, is_dir, size, modified, is_image });
}
files.sort_by(|a, b| {
b.is_dir.cmp(&a.is_dir)
.then_with(|| a.name.to_lowercase().cmp(&b.name.to_lowercase()))
});
    let total_ms = total_start.elapsed().as_secs_f64() * 1000.0;
    log::info!(
        "[PERF] list_dir \"{}\" → total: {:.2}ms | sftp_acquire: {:.2}ms | read_dir: {:.2}ms | entries: {}",
        path, total_ms, sftp_acquire_ms, readdir_ms, files.len(),
    );
Ok(files)
}
Every operation emits structured performance logs with millisecond-precision timing. On the first call after connection, sftp_acquire_ms includes channel creation (~100-300ms). On subsequent calls, it drops to near-zero thanks to OnceCell.
File preview uses a heuristic to distinguish text from binary content:
let is_text = preview_data
.iter()
.all(|&b| b == b'\n' || b == b'\r' || b == b'\t' || (b >= 0x20 && b <= 0x7E) || b >= 0x80);
Text files are returned as UTF-8 strings; binary files are base64-encoded. This avoids requiring the frontend to handle raw byte arrays over IPC.
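A sketch of how the preview payload might be shaped for IPC (the FilePreview struct and its fields are assumptions, not the app's actual types):
use base64::Engine as _;
#[derive(serde::Serialize)]
pub struct FilePreview {
    pub is_text: bool,
    pub content: String, // UTF-8 text, or base64 for binary data
}
fn to_preview(preview_data: Vec<u8>) -> FilePreview {
    let is_text = preview_data
        .iter()
        .all(|&b| b == b'\n' || b == b'\r' || b == b'\t' || (b >= 0x20 && b <= 0x7E) || b >= 0x80);
    if is_text {
        FilePreview {
            is_text: true,
            content: String::from_utf8_lossy(&preview_data).into_owned(),
        }
    } else {
        FilePreview {
            is_text: false,
            content: base64::engine::general_purpose::STANDARD.encode(&preview_data),
        }
    }
}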
Recursive directory deletion boxes the recursive async call with Box::pin; without it, a directly recursive async fn would define a future of infinite size, which the compiler rejects:
async fn delete_dir_recursive(session: &Arc<SshSession>, dir_path: &str) -> AppResult<()> {
let entries = list_dir(session, dir_path).await?;
for entry in &entries {
if entry.is_dir {
Box::pin(delete_dir_recursive(session, &entry.path)).await?;
} else {
let sftp = session.sftp().await?;
sftp.remove_file(&entry.path).await
.map_err(|e| AppError::Sftp(format!("Failed to delete \"{}\": {e}", entry.path)))?;
}
}
let sftp = session.sftp().await?;
sftp.remove_dir(dir_path).await
.map_err(|e| AppError::Sftp(format!("Failed to remove directory \"{dir_path}\": {e}")))?;
Ok(())
}
4. Unified Error Handling
Role: Single error type across all Tauri commands.
AppError uses thiserror for ergonomic variant definitions and implements Serialize manually so errors cross the Tauri IPC boundary as human-readable strings:
#[derive(Debug, thiserror::Error)]
pub enum AppError {
#[error("SSH error: {0}")]
Ssh(String),
#[error("SFTP error: {0}")]
Sftp(String),
#[error("Key storage error: {0}")]
KeyStore(String),
#[error("Session not found: {0}")]
SessionNotFound(String),
#[error("IO error: {0}")]
Io(String),
#[error("Unsupported key type: {0}")]
UnsupportedKeyType(String),
#[error("{0}")]
Other(String),
}
impl Serialize for AppError {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
serializer.serialize_str(&self.to_string())
}
}
From implementations for std::io::Error and russh::Error allow ? propagation from low-level crate errors all the way up through command handlers without manual mapping at every call site.
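Those conversions are one-liners; a sketch consistent with the variants above:
impl From<std::io::Error> for AppError {
    fn from(e: std::io::Error) -> Self {
        AppError::Io(e.to_string())
    }
}
impl From<russh::Error> for AppError {
    fn from(e: russh::Error) -> Self {
        AppError::Ssh(e.to_string())
    }
}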
Two-Tier Caching Strategy
OxiDock implements caching at two layers (TypeScript in-memory caches for directory listings and image paths, and a Rust disk cache for full image files) so that navigating through directories and viewing images feels responsive even over high-latency connections.
Frontend: Directory Prefetch
When a directory listing completes, the frontend fires off parallel SFTP requests for up to 20 child subdirectories:
const dirCache = new Map<string, FileEntry[]>();
const inflightDirs = new Set<string>();
const MAX_PREFETCH_DIRS = 20;
export function prefetchChildren(entries: FileEntry[], sessionId: string): void {
const dirs = entries.filter((e) => e.is_dir).slice(0, MAX_PREFETCH_DIRS);
for (const dir of dirs) {
if (dirCache.has(dir.path) || inflightDirs.has(dir.path)) continue;
inflightDirs.add(dir.path);
invoke<FileEntry[]>("sftp_list_dir", { sessionId, path: dir.path })
.then((childEntries) => {
setDirCached(dir.path, childEntries);
})
.catch(() => {})
.finally(() => inflightDirs.delete(dir.path));
}
}
All prefetch work is fire-and-forget. Errors are silently swallowed: if a subdirectory is not accessible, the cache simply won't have an entry for it, and the next navigation will fetch it normally. The inflightDirs set prevents duplicate requests for the same path.
When the user clicks into a subdirectory that was prefetched, the cached listing is returned instantly from the Map, with zero network round trips. This transforms a sequential "click, wait, render" flow into "click, render" for the common case of drilling into visible subdirectories.
Frontend: Image Path LRU
A custom LRUMap extends Map to track which remote images have already been downloaded to local disk by the Rust backend:
class LRUMap<K, V> extends Map<K, V> {
private maxSize: number;
constructor(maxSize: number) {
super();
this.maxSize = maxSize;
}
override get(key: K): V | undefined {
if (!super.has(key)) return undefined;
const value = super.get(key)!;
super.delete(key);
super.set(key, value);
return value;
}
override set(key: K, value: V): this {
if (super.has(key)) {
super.delete(key);
}
super.set(key, value);
while (this.size > this.maxSize) {
const oldest = this.keys().next().value;
if (oldest !== undefined) {
super.delete(oldest);
}
}
return this;
}
}
Map iteration order in JavaScript is insertion order, so the oldest entry is always this.keys().next(). Promoting a key on get (delete + re-insert) moves it to the end. This gives O(1) LRU behavior without a separate linked list.
Backend: Disk Image Cache with mtime Freshness
When the frontend requests an image, the Rust backend checks a local disk cache before downloading:
pub async fn cache_image(
session: &Arc<SshSession>,
path: &str,
cache_dir: &std::path::Path,
remote_mtime: Option<u64>,
) -> AppResult<String> {
let ext = path.rsplit('.').next().unwrap_or("bin");
let safe_key = base64::Engine::encode(
&base64::engine::general_purpose::URL_SAFE_NO_PAD,
path.as_bytes(),
);
let cache_file = cache_dir.join(format!("{safe_key}.{ext}"));
// Check freshness: if cached and mtime matches, skip download
if cache_file.exists() {
if let Some(remote_mt) = remote_mtime {
if let Ok(meta) = std::fs::metadata(&cache_file) {
if let Ok(modified) = meta.modified() {
let cached_ts = modified
.duration_since(std::time::UNIX_EPOCH)
.map(|d| d.as_secs())
.unwrap_or(0);
if cached_ts >= remote_mt {
return Ok(cache_file.to_string_lossy().to_string());
}
}
}
} else {
return Ok(cache_file.to_string_lossy().to_string());
}
}
// Cache miss: download the full image
let sftp = session.sftp().await?;
let data = sftp.read(path).await
.map_err(|e| AppError::Sftp(format!("Failed to download image: {e}")))?;
tokio::fs::write(&cache_file, &data).await
.map_err(|e| AppError::Sftp(format!("Failed to write cached image: {e}")))?;
// Background LRU eviction
if !IMAGE_EVICTION_RUNNING.swap(true, Ordering::Relaxed) {
let dir = cache_dir.to_path_buf();
tokio::task::spawn_blocking(move || {
evict_cache_lru(&dir, IMAGE_CACHE_MAX_BYTES);
IMAGE_EVICTION_RUNNING.store(false, Ordering::Relaxed);
});
}
Ok(cache_file.to_string_lossy().to_string())
}
Cache filenames use URL-safe base64 encoding of the remote path, preserving the original file extension. Freshness is determined by comparing the cached file's filesystem mtime against the remote mtime reported by SFTP metadata. If the remote file hasn't changed, the download is skipped entirely.
LRU eviction runs in a background spawn_blocking task (to avoid blocking the async runtime with filesystem I/O) and is guarded by an AtomicBool flag to prevent concurrent eviction runs. The eviction sorts cached files by modification time and deletes the oldest until total size drops below the 200 MB cap:
static IMAGE_EVICTION_RUNNING: AtomicBool = AtomicBool::new(false);
const IMAGE_CACHE_MAX_BYTES: u64 = 200 * 1024 * 1024;
fn evict_cache_lru(cache_dir: &std::path::Path, max_bytes: u64) {
// ... collect files with sizes and mtimes ...
files.sort_by_key(|&(_, _, mtime)| mtime);
    if total_size <= max_bytes {
        return; // already under the cap; avoids underflow below
    }
    let to_free = total_size - max_bytes;
let mut freed: u64 = 0;
for (path, size, _) in &files {
if freed >= to_free { break; }
if std::fs::remove_file(path).is_ok() {
freed += size;
}
}
}
The full caching pipeline: frontend imageCache checks if a remote path has a known local path. If not, it calls sftp_cache_image (Tauri command), which checks the disk cache with mtime freshness. On a miss, it downloads via SFTP, writes to disk, triggers background eviction if needed, and returns the local path. The frontend then reads the local file via the Tauri FS plugin and renders it as a blob URL.
Cross-Platform Design
OxiDock targets desktop (Linux, macOS, Windows) and mobile (Android) from a single codebase using Tauri 2's platform abstraction.
Conditional Plugin Loading
Mobile-only plugins are loaded behind #[cfg(mobile)] gates in lib.rs:
#[cfg(mobile)]
app.handle().plugin(tauri_plugin_biometric::init())?;
#[cfg(mobile)]
app.handle()
.plugin(tauri_plugin_mobile_onbackpressed_listener::init())?;
On desktop, these lines are compiled out entirely: no runtime overhead, no dead code. The frontend wraps the biometric plugin in a useBiometric hook that gracefully falls back when the plugin is absent, so the same React components work on both platforms.
Android-Specific File Handling
File downloads and external opening require special treatment on Android, where apps cannot write to arbitrary filesystem paths and open_path is unsupported:
#[cfg(target_os = "android")]
{
let cache_dir = app.path().app_cache_dir().unwrap_or_default().to_string_lossy().to_string();
let open_path = if path.starts_with(&cache_dir) {
// Copy from private app cache to shared storage
let share_dir = std::path::PathBuf::from("/storage/emulated/0/Download/OxiDock");
std::fs::create_dir_all(&share_dir)?;
let file_name = std::path::Path::new(&path)
.file_name().unwrap_or_default().to_string_lossy().to_string();
let dest = share_dir.join(&file_name);
std::fs::copy(&path, &dest)?;
dest.to_string_lossy().to_string()
} else {
path
};
let file_url = format!("file://{}", open_path);
tauri_plugin_opener::open_url(&file_url, None::<&str>)?;
}
Cached images live in the app's private cache directory, which is inaccessible to other apps. To open an image in the system viewer, OxiDock first copies it to shared storage (/storage/emulated/0/Download/OxiDock) and then opens the file:// URI. On desktop, a direct open_path call suffices.
Similarly, sftp_save_file tries /storage/emulated/0/Download on Android first (the user-visible Downloads folder), falling back to platform-resolved directories via Tauri's path API. Filename collision handling appends a counter (photo (1).jpg, photo (2).jpg) to avoid overwriting existing files.
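The counter logic is simple to sketch (unique_dest is a hypothetical helper name, not necessarily what the app calls it):
fn unique_dest(dir: &std::path::Path, file_name: &str) -> std::path::PathBuf {
    let first = dir.join(file_name);
    if !first.exists() {
        return first;
    }
    // Split "photo.jpg" into "photo" + ".jpg"; names without a dot keep an empty extension
    let (stem, ext) = match file_name.rsplit_once('.') {
        Some((s, e)) => (s.to_string(), format!(".{e}")),
        None => (file_name.to_string(), String::new()),
    };
    let mut n = 1u32;
    loop {
        let candidate = dir.join(format!("{stem} ({n}){ext}")); // "photo (1).jpg", "photo (2).jpg", ...
        if !candidate.exists() {
            return candidate;
        }
        n += 1;
    }
}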
IPC Command Surface
All Tauri commands are registered in a single invoke_handler and dispatched by name from the frontend:
| Category | Command | Description |
|---|---|---|
| Keys | store_key | Store a new SSH key (auto-detects type) |
| Keys | list_keys | List all stored keys (metadata only) |
| Keys | delete_key | Delete a key by name |
| Keys | get_key | Retrieve raw PEM for a key |
| Keys | list_supported_key_types | Return supported key type enum |
| SSH | ssh_connect | Connect with key or password, return session ID |
| SSH | ssh_test_connection | Test auth without persisting session |
| SSH | ssh_disconnect | Drop a session by ID |
| SSH | ssh_list_sessions | List active sessions with metadata |
| SFTP | sftp_list_dir | List directory contents |
| SFTP | sftp_read_file_preview | Preview first N bytes of a file |
| SFTP | sftp_download_file | Download file as raw bytes |
| SFTP | sftp_save_file | Download and save to local filesystem |
| SFTP | sftp_create_dir | Create a remote directory |
| SFTP | sftp_upload_file | Upload bytes to a remote path |
| SFTP | sftp_cache_image | Cache a remote image to local disk |
| SFTP | sftp_delete_file | Delete a file or directory (recursive) |
| Misc | open_file_externally | Open a local file in the system viewer |
Each command is a thin async wrapper in commands.rs that extracts State, delegates to the appropriate module (key_store, ssh_manager, sftp_ops), and logs timing:
#[tauri::command]
pub async fn sftp_list_dir(
session_mgr: State<'_, Arc<SshSessionManager>>,
session_id: String,
path: String,
) -> AppResult<Vec<FileEntry>> {
let start = std::time::Instant::now();
let session = session_mgr.get_session(&session_id).await?;
let result = sftp_ops::list_dir(&session, &path).await;
log::info!(
"[CMD] sftp_list_dir \"{}\" β total_cmd: {:.2}ms",
path,
start.elapsed().as_secs_f64() * 1000.0,
);
result
}
Frontend Architecture
The frontend is a React 18 application using MUI v6 with Emotion for styling, bundled by Vite 6. There is no client-side router; navigation is entirely state-driven.
State-Driven Navigation
The App component manages a single activeSession state that determines what renders:
- activeSession === null and autoConnecting === false: Show the server list or key manager, controlled by a bottomTab index. A frosted-glass bottom dock provides tab switching.
- autoConnecting === true: Show a loading spinner with the default server's name and host.
- activeSession !== null: Show the FileBrowser component, connected to the active SSH session.
This avoids the complexity of a router for what is fundamentally a two-state application: connected or not connected.
The Glass Dock
When disconnected, the bottom navigation renders as a fixed, centered pill with a frosted-glass effect:
<Box
sx={{
position: "fixed",
bottom: 16,
left: "50%",
transform: "translateX(-50%)",
zIndex: 1200,
display: "flex",
justifyContent: "center",
gap: 1,
px: 3,
py: 1,
borderRadius: "9999px",
backdropFilter: "blur(24px)",
WebkitBackdropFilter: "blur(24px)",
bgcolor: "rgba(30, 30, 46, 0.55)",
border: "1px solid rgba(255, 255, 255, 0.08)",
boxShadow: "0 8px 32px rgba(0, 0, 0, 0.35)",
mb: "env(safe-area-inset-bottom, 0px)",
}}
>
Active tabs use a subtle background tint and 1.1x icon scale with transition: all 0.25s ease. The env(safe-area-inset-bottom) margin respects Android's gesture navigation bar and notched displays.
Android Back Gesture Stack
On Android, the hardware/gesture back button is intercepted via @kingsword/tauri-plugin-mobile-onbackpressed-listener and handled in priority order:
- Drawer open → close drawer
- Theme picker expanded → collapse
- FileBrowser can go back (preview open or deeper path) → delegate to FileBrowser.handleBack()
- Connected at root → disconnect
- App root, first back → show "Press back again to exit" snackbar
- App root, second back → exit(0) via @tauri-apps/plugin-process
This is implemented using refs to avoid stale closures in the event listener:
const drawerOpenRef = useRef(drawerOpen);
const activeSessionRef = useRef(activeSession);
drawerOpenRef.current = drawerOpen;
activeSessionRef.current = activeSession;
useEffect(() => {
let unlisten;
const setup = async () => {
try {
unlisten = await registerBackEvent(() => {
if (drawerOpenRef.current) { setDrawerOpen(false); return; }
if (activeSessionRef.current) {
const fb = fileBrowserBackRef.current;
if (fb?.canGoBack()) { fb.handleBack(); return; }
setActiveSession(null);
return;
}
if (rootBackCountRef.current === 0) {
rootBackCountRef.current = 1;
setExitSnackbar(true);
return;
}
exit(0);
});
} catch { /* Plugin not available on desktop */ }
};
setup();
return () => { unlisten?.unregister(); };
}, []);
The registerBackEvent call is wrapped in a try/catch so the same code runs on desktop (where the plugin is absent) without errors.
Theme System
OxiDock ships with Tokyo Night and Catppuccin (Mocha, Macchiato, Frappe, Latte) themes. ThemeContext wraps MUI's ThemeProvider and persists the selection to localStorage:
export function AppThemeProvider({ children }: { children: React.ReactNode }) {
const [themeName, setThemeName] = useState(() => {
try {
return localStorage.getItem('oxidock-theme') || defaultThemeName;
} catch {
return defaultThemeName;
}
});
const theme = useMemo(() => {
const def = themes[themeName] ?? themes[defaultThemeName];
return createTheme(def.options);
  }, [themeName]);
  // Context value consumed by the theme picker (assumed shape: current name plus setter)
  const value = useMemo(() => ({ themeName, setThemeName }), [themeName]);
  return (
<ThemeContext.Provider value={value}>
<ThemeProvider theme={theme}>
<CssBaseline />
{children}
</ThemeProvider>
</ThemeContext.Provider>
);
}
Themes are defined as MUI ThemeOptions objects with full palette overrides. The drawer includes a collapsible theme picker with grouped variants (Catppuccin sub-list) and a check mark on the active selection.
Security Considerations
OxiDock is built for personal VPS management, not enterprise deployment. Several design choices reflect this:
Host key verification is not implemented. The russh ClientHandler accepts all server public keys unconditionally:
async fn check_server_key(
&mut self,
_server_public_key: &russh::keys::PublicKey,
) -> Result<bool, Self::Error> {
Ok(true)
}
This makes first-time connections frictionless but provides no protection against MITM attacks. A production implementation would maintain a known_hosts file and prompt the user on first connection or key change.
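For comparison, a stricter handler could pin the expected host key and fail closed. This sketch assumes a hypothetical pinned_fingerprint field on ClientHandler and uses the fingerprint API of the key types russh re-exports; it is a direction, not OxiDock's code:
async fn check_server_key(
    &mut self,
    server_public_key: &russh::keys::PublicKey,
) -> Result<bool, Self::Error> {
    // SHA-256 fingerprint of the key the server just presented
    let presented = server_public_key.fingerprint(Default::default()).to_string();
    match &self.pinned_fingerprint {
        // Accept only if it matches the fingerprint trusted earlier
        Some(pinned) => Ok(pinned == &presented),
        // Unknown host: reject until the user explicitly approves the key
        None => Ok(false),
    }
}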
Keys are stored as base64 on disk, not encrypted at rest. The JSON vault at ssh_keys.json uses base64 encoding for transport safety, but the file itself is not encrypted. On mobile, key access is gated by biometric authentication (fingerprint/face) before the UI allows viewing or adding keys, but the underlying file is readable by any process with the app's data directory permissions.
Session IDs are UUID v4. They are cryptographically random and unguessable, but they exist only in memory β there is no session persistence across app restarts. Closing the app drops all sessions.