# LogBus Docs
LogBus aims to be a "Swiss Army knife" for processing events. The primary use case is processing system & application logs, but it supports a broad set of other use cases. As of 2025, LogBus has some features not commonly found in other utilities in this space.

The target audience is dev & sysadmin teams that know some basic JavaScript. LogBus is written in TypeScript targeting the Deno runtime. It ships with many useful plugins and has a refreshingly simple custom plugin development story.
## Conceptual Overview
LogBus executes a pipeline that begins with one or more sources (inputs) and ends with one or more sinks (outputs). A pipeline is just a graph/network of stages. A stage consists of a plugin (aka module) and its configuration, a list of input channels to receive events from, and a list of output channels to emit events to. Events flow from sources through downstream stages until they reach a sink.

By default, the plugin module is derived from the stage name, but it can be specified via `module`. By default, a stage's output channel is its name, but that can be specified via `outputs`. A stage can be named using any acceptable YAML key except for these reserved names: `READY`, `SHUTDOWN`, `ERRORS`, `STATS`, `PRESSURE`, `RELIEF`. `LogEngine.error()` will emit errors to the `ERRORS` channel and `LogEngine.stats()` will emit stats to the `STATS` channel. A stage can use the `errors` & `stats` plugins to process those events.
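For instance, a stage whose name does not match a plugin can set `module` explicitly, and a stage can publish to a custom channel via `outputs`. A sketch (the stage name `app-logs` and channel name `raw-json` are hypothetical; the `read-globs` and `parse-json` plugins appear in the example below):

```yaml
pipeline:

  # Stage name does not match a plugin, so the module is given explicitly.
  app-logs:
    module: read-globs
    config:
      globs: [app/*.json]
    # Emit to a custom channel instead of the default "app-logs" channel.
    outputs: [raw-json]

  parse-json:
    inputs: [raw-json]
```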
To help explain, here is an example pipeline that listens for HTTP requests in one flow and sends JSON objects to that web server in another:
```yaml
pipeline:

  read-http:
    config:
      host: localhost
      port: 1337

  read-globs:
    config:
      globs: [in.json]

  parse-json:
    inputs: [read-globs]

  write-http:
    inputs: [parse-json]
    config:
      request: !!js/function >-
        function(event) {
          return {
            method: 'PUT',
            url: `http://localhost:1337/user/${event.id}?tag=hi&tag=mom`,
            headers: {
              'content-type': 'application/json',
            },
          }
        }

  errors:
    inputs: [ERRORS]
    config:
      interval: 5

  log:
    inputs: [read-http, errors]
```
```
$ logbus -c examples/http.yml
PIPELINE FLOWS:
🚰 read-globs → parse-json → write-http 🪣
🚰 read-http → log 🪣
🚰 errors → log 🪣
```
For a real-world example that demonstrates many of the features, see the journald-opensearch example, which is very close to the standard pipeline used by TFKS to wrangle its system & application logs.
## Getting Started
Executables for common platforms are available here. Check out some of the use cases for ideas. For quick iteration & spot-checking, it is recommended to start out with some sample data, a file source, and a stdout sink.
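A minimal starter along those lines might look like this, using the `read-globs`, `parse-json`, and `log` plugins from the example above (`sample.json` is a placeholder for your own data, and `log` is assumed here to print events to stdout):

```yaml
pipeline:

  read-globs:
    config:
      globs: [sample.json]

  parse-json:
    inputs: [read-globs]

  # Print parsed events for spot-checking.
  log:
    inputs: [parse-json]
```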
## Usage
The logbus executable takes a config file as input, which contains the pipeline definition and any template & plugin references. Pipelines can be checked with the `--check` CLI flag, which prints the pipeline in a more human-friendly format and detects dead-end stages (cycles are not yet detected; see Caveats).
## Caveats
### Cycles
There is currently no support for cycle detection, so a pipeline containing a loop will not be flagged.
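For example, a config like the following contains a cycle between two stages and will not be rejected (the stage names here are hypothetical):

```yaml
pipeline:

  stage-a:
    inputs: [stage-b]  # receives from stage-b ...

  stage-b:
    inputs: [stage-a]  # ... which receives from stage-a: a loop
```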
### Mixed / Out-of-Order Reading
EventEmitter is used to push events through the pipeline. It is possible to combine plugins in ways that will lead to undesirable results. For example, reading multiple inputs in a block fashion (e.g. with read-file) and then sending them to the same stage will result in blocks being processed in a non-deterministic order. The read-globs plugin does not suffer from this since it emits objects.
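The interleaving issue can be sketched with a plain EventEmitter. This is an illustration, not LogBus code: the reader names and block contents are made up, and `setTimeout` stands in for real I/O latency.

```typescript
import { EventEmitter } from "node:events";

const bus = new EventEmitter();
const received: string[] = [];
bus.on("event", (e: string) => received.push(e));

// A block-style reader: emits its blocks in order, but yields to the
// scheduler between blocks, so two readers race each other.
async function reader(name: string, blocks: string[]): Promise<void> {
  for (const block of blocks) {
    await new Promise((resolve) => setTimeout(resolve, Math.random() * 5));
    bus.emit("event", `${name}:${block}`);
  }
}

// Two sources feeding the same downstream stage: each source's own
// order is preserved, but the interleaving between sources varies
// from run to run.
await Promise.all([reader("a", ["1", "2", "3"]), reader("b", ["1", "2", "3"])]);
console.log(received.join(" "));
```

Running this a few times shows `a:*` and `b:*` events arriving in a different merged order each time, which is exactly what happens when two block-style sources feed one stage.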