
proc

The complete toolkit for process management, async iterables, and high-performance data transforms in Deno.

Run child processes with a fluent API. Transform data between formats at 330 MB/s. Work with async iterables using the Array methods you already know. All with error handling that actually makes sense.

📚 Full Documentation | 🚀 Quick Start | 📊 Performance Guide

import { run, read, enumerate } from "jsr:@j50n/proc";
import { fromCsvToRows, toTsv } from "jsr:@j50n/proc/transforms";

// Run processes and capture output
const lines = await run("ls", "-la").lines.collect();

// Chain processes like a shell pipeline
const result = await run("cat", "data.txt")
  .run("grep", "error")
  .run("wc", "-l")
  .lines.first;

// Transform data between formats at high speed
await read("sales.csv")
  .transform(fromCsvToRows())
  .filter(row => parseFloat(row[3]) > 1000)
  .transform(toTsv())
  .writeTo("high-value.tsv");

// Work with async iterables using familiar Array methods
const commits = await run("git", "log", "--oneline")
  .lines
  .map(line => line.trim())
  .filter(line => line.includes("fix"))
  .take(5)
  .collect();

// Errors propagate naturally - handle once at the end
try {
  await run("npm", "test")
    .lines
    .filter(line => line.includes("FAIL"))
    .toStdout();
} catch (error) {
  console.error(`Tests failed: ${error.code}`);
}

Why proc?

Solves backpressure by design — Traditional push-based streams need careful coordination so a fast producer doesn't overwhelm a slow consumer. proc uses pull-based async iterators instead: the consumer requests each value at its own pace, so backpressure never arises. No buffering, no coordination, no memory pressure.
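
In practice, a plain for await loop is the consumer: nothing is produced until the loop asks for the next value. A minimal sketch, assuming .lines can be iterated directly with for await (big.log is a placeholder file):

import { run } from "jsr:@j50n/proc";

// Pull-based: each iteration requests one line, so a slow consumer
// paces the producer instead of being flooded by it.
for await (const line of run("cat", "big.log").lines) {
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulate slow work
  console.log(line);
}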

Errors that just work — Errors propagate through pipelines naturally, just like data. No edge cases, no separate error channels, no callbacks. One try-catch at the end handles everything.

Async iterables that feel like Arrays — Use map, filter, reduce, flatMap, take, drop, and more on any async iterable. No more wrestling with streams.
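
The same chain you would write against an Array works on an async iterable. A minimal sketch using enumerate:

import { enumerate } from "jsr:@j50n/proc";

// Square each number, keep the even squares, and sum them.
const total = await enumerate([1, 2, 3, 4])
  .map((n) => n * n)
  .filter((n) => n % 2 === 0)
  .reduce((sum, n) => sum + n, 0);

console.log(total); // 20 (4 + 16)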

Powerful process management — Run commands, pipe between processes, capture output, and control execution with a clean, composable API.

High-performance data transforms — Convert between CSV, TSV, JSON, and Record formats with streaming support. Or use the WASM-powered flatdata CLI for 330 MB/s throughput (7x faster than pure JavaScript).

Type-safe and ergonomic — Full TypeScript support with intuitive APIs that guide you toward correct usage.

Features at a Glance

🚀 Process Management

  • Run commands — run(), pipe(), result(), toStdout()
  • Chain processes — Shell-like pipelines with .run() (sketch after this list)
  • Capture output — Lines, bytes, or full output
  • Error handling — Natural propagation through pipelines
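
For instance, counting error lines without a shell; every call here also appears in the examples below (app.log is a placeholder):

import { run } from "jsr:@j50n/proc";

// Each .run() connects the previous process's stdout to the next
// process's stdin, like `cat app.log | grep ERROR | wc -l` in a shell.
const count = await run("cat", "app.log")
  .run("grep", "ERROR")
  .run("wc", "-l")
  .lines.first;

console.log(`${count?.trim()} error lines`);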

🔄 Async Iterables

  • Array-like methods — map, filter, reduce, flatMap, forEach, some, every, find
  • Slicing & sampling — take, drop, slice, first, last, nth (sketch after this list)
  • Concurrent operations — concurrentMap, concurrentUnorderedMap with concurrency control
  • Utilities — enumerate, zip, range, cache()
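
A quick sketch of slicing; take appears in the examples below, and drop is assumed here to take a count, mirroring take:

import { enumerate } from "jsr:@j50n/proc";

// Skip the header row, keep the next two data rows.
const page = await enumerate(["header", "row1", "row2", "row3"])
  .drop(1)
  .take(2)
  .collect();

console.log(page); // ["row1", "row2"]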

📊 Data Transforms

  • Format conversion — CSV ↔ TSV ↔ JSON ↔ Record
  • Streaming processing — Constant memory usage for any file size
  • LazyRow optimization — Up to 1.7x faster parsing
  • flatdata CLI — WASM-powered tool for 330 MB/s throughput

⚡ Performance

  • flatdata CLI: ~330 MB/s (WASM subprocess)
  • Record format: 60-93 MB/s (in-process)
  • JSON transforms: 70-98 MB/s
  • TSV transforms: 57-72 MB/s
  • CSV transforms: 10-27 MB/s (with LazyRow: 1.05-1.7x faster)

Installation

import * as proc from "jsr:@j50n/proc";

Or import specific functions:

import { run, enumerate, read } from "jsr:@j50n/proc";

Data Transforms (Optional)

Data transforms are in a separate module to keep the core library lightweight:

⚠️ Experimental (v0.24.0+): Data transforms are under active development. API may change as we improve correctness and streaming performance.

// Core library - process management and async iterables
import { run, enumerate, read } from "jsr:@j50n/proc";

// Data transforms - CSV, TSV, JSON, Record conversions
import { fromCsvToRows, toTsv } from "jsr:@j50n/proc/transforms";

See the Data Transforms Guide for details.

Key Concepts

Properties vs Methods: Some APIs are properties (.lines, .status, .first) and some are methods (.collect(), .map(), .filter()). Properties don’t use parentheses.
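
The distinction in practice:

import { run } from "jsr:@j50n/proc";

// Properties: no parentheses.
const firstLine = await run("git", "log", "--oneline").lines.first;

// Methods: called with parentheses.
const allLines = await run("git", "status").lines.collect();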

Resource Management: Always consume process output via .lines.collect(), .lines.forEach(), or similar. Unconsumed output causes resource leaks.
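
For example, even when only the side effect of a command matters, still drain its output (make is a placeholder command):

import { run } from "jsr:@j50n/proc";

// Consumed: every line is read, so process resources are released.
await run("make", "build").lines.forEach((line) => console.log(line));

// Leaked: the process starts, but its output is never consumed.
// run("make", "build");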

Error Handling: Processes that exit with non-zero codes throw ExitCodeError when you consume their output. Use try-catch to handle failures.
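
A minimal sketch; note the error surfaces when the output is consumed (here, at collect()), not when run() is called:

import { run } from "jsr:@j50n/proc";

try {
  // `false` always exits with code 1.
  await run("false").lines.collect();
} catch (error) {
  console.error(`process failed with code ${error.code}`);
}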

Enumeration: Despite its name, enumerate() only wraps an iterable in the fluent API; it doesn't add indices. Call .enum() on the result to get [item, index] tuples.

Quick Examples

Stream and process large compressed files

import { read } from "jsr:@j50n/proc";

// Read, decompress, and count lines - all streaming, no temp files!
const lineCount = await read("war-and-peace.txt.gz")
  .transform(new DecompressionStream("gzip"))
  .lines
  .count();

console.log(`${lineCount} lines`); // 23,166 lines

Transform data between formats

import { read } from "jsr:@j50n/proc";
import { fromCsvToRows, toJson } from "jsr:@j50n/proc/transforms";

// Convert CSV to JSON Lines with filtering
await read("sales.csv")
  .transform(fromCsvToRows())
  .filter(row => parseFloat(row[3]) > 1000)
  .map(row => ({
    id: row[0],
    customer: row[1],
    amount: parseFloat(row[3])
  }))
  .transform(toJson())
  .writeTo("high-value.jsonl");

Run a command and capture output

import { run } from "jsr:@j50n/proc";

const result = await run("git", "rev-parse", "HEAD").lines.first;
console.log(`Current commit: ${result?.trim()}`);

Handle errors gracefully

import { run } from "jsr:@j50n/proc";

try {
  // Errors propagate through the entire pipeline
  // No need for error handling at each step
  await run("npm", "test")
    .lines
    .map(line => line.toUpperCase())
    .filter(line => line.includes("FAIL"))
    .toStdout();
} catch (error) {
  // Handle all errors in one place
  if (error.code) {
    console.error(`Tests failed with code ${error.code}`);
  }
}

Transform async iterables

import { enumerate } from "jsr:@j50n/proc";

const data = ["apple", "banana", "cherry"];

const numbered = await enumerate(data)
  .enum()
  .map(([fruit, i]) => `${i + 1}. ${fruit}`)
  .collect();

console.log(numbered); // ["1. apple", "2. banana", "3. cherry"]

Process large files efficiently

import { read } from "jsr:@j50n/proc";

const errorCount = await read("app.log")
  .lines
  .filter(line => line.includes("ERROR"))
  .reduce((count) => count + 1, 0);

console.log(`Found ${errorCount} errors`);

Parallel processing with concurrency control

import { enumerate } from "jsr:@j50n/proc";

const urls = ["url1", "url2", "url3", /* ... */];

await enumerate(urls)
  .concurrentMap(async (url) => {
    const response = await fetch(url);
    return { url, status: response.status };
  }, { concurrency: 5 })
  .forEach(result => console.log(result));
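
When arrival order doesn't matter, concurrentUnorderedMap (from the feature list) yields each result as it completes, so one slow URL doesn't hold back finished ones. This sketch assumes it shares concurrentMap's signature:

import { enumerate } from "jsr:@j50n/proc";

const urls = ["url1", "url2", "url3"];

// Results are yielded in completion order, not input order.
await enumerate(urls)
  .concurrentUnorderedMap(async (url) => {
    const response = await fetch(url);
    return { url, status: response.status };
  }, { concurrency: 5 })
  .forEach((result) => console.log(result));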

Features

  • Process execution — run(), pipe(), result(), toStdout()
  • Array-like methods — map, filter, reduce, flatMap, forEach, some, every, find
  • Slicing & sampling — take, drop, slice, first, last, nth
  • Concurrent operations — concurrentMap, concurrentUnorderedMap with concurrency control
  • Data transforms — CSV, TSV, JSON, Record format conversions with streaming
  • Utilities — enumerate, zip, range, read (for files)
  • Caching — cache() to replay iterables (sketch after this list)
  • Writable iterables — Push values into async iterables programmatically
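
A sketch of cache(); that it hangs off the enumerable and replays buffered values to later consumers is an assumption drawn from the feature list above:

import { enumerate } from "jsr:@j50n/proc";

async function* expensive() {
  // Stand-in for values that are costly to produce.
  yield 1;
  yield 2;
  yield 3;
}

// A generator can normally be consumed only once; cache() buffers
// values on the first pass so later passes replay them.
const cached = enumerate(expensive()).cache();

console.log(await cached.collect()); // [1, 2, 3]
console.log(await cached.collect()); // [1, 2, 3] again, from the cache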

Documentation

Contributing

Contributions are welcome! See the contributor guide for details on:

  • Project architecture
  • Coding standards
  • Testing strategy
  • Documentation guidelines

License

MIT

Building Documentation

The WASM book (labs/wasm/docs/) builds to HTML, EPUB, and PDF outputs.

Prerequisites (Debian/Ubuntu)

# Rust toolchain
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# mdbook
cargo install mdbook

# Document generation tools
sudo apt install pandoc weasyprint ghostscript imagemagick

Build

./build-site.sh

Outputs:

  • labs/wasm/docs/book/ β€” HTML
  • labs/wasm/docs/book/book.epub β€” EPUB with cover
  • labs/wasm/docs/book/book.pdf β€” PDF with cover