Over twenty-five years of designing OSS systems for major North American telecom operators, I kept seeing the same pattern: these platforms never reached true automation and always required humans in the loop just to remain operational. The human involvement looked the same across every technological wave — business users forced to navigate heavy UIs, frameworks, and process layers, never disappearing from the equation.
If the industry has failed to eliminate manual operations and interface overhead, then the model itself must change.
The proposal: an extremely lightweight, machine-oriented OSS approach built around automated JSON flows — deterministic, transparent, and fully machine-driven. No human involvement. The way OSS should have worked from the beginning. This is part of my dothuma.ai technology stack, built on minimal, formally defined execution mechanisms.
jsonwf, presented here, is not an AI component. It is a deliberately simple, lightweight foundation for machine-driven OSS processes — introducing a way to build operational chains without heavy UIs, without frameworks, without human routine.
Every enterprise workflow engine solves the same problem with the same overhead. Apache Airflow needs a scheduler, a metadata database, a web server, and Python operators. AWS Step Functions requires state machine definitions, IAM roles, CloudWatch logging, and Lambda functions. MuleSoft needs connectors, flows, transformations, and a runtime cluster.
The infrastructure is heavier than the problem.
For most provisioning and fulfillment workflows the actual logic is simple — call these functions in this order, merge the results, handle errors. The complexity is in the scaffolding, not the work.
jsonwf removes the scaffolding.
A jsonwf workflow is a single JSON document. It contains the input data, the execution queue, and the accumulated results — all in one place.
```json
{
  "ne_name": "cisco7559",
  "vrf_name": "CUST_A", "vrf_customer": "acme", "rd": "65000:100",
  "if_name": "eth0.100", "vlan": 100,
  "fn": ["find_device", "create_vrf", "create_subinterface", "assign_interface_vrf"],
  "fn_completed": []
}
```
The engine iterates fn[], calls each function, merges the result back into the document, moves the function name to fn_completed[]. When fn[] is empty — the workflow is done.
```json
{
  "ne_name": "cisco7559",
  "vrf": {"vrf_name": "CUST_A", "vrf_customer": "acme", "rd": "65000:100"},
  "subinterface": {"if_name": "eth0.100", "vlan": 100, "encapsulation": 1},
  "fn": [],
  "fn_completed": ["find_device", "create_vrf", "create_subinterface", "assign_interface_vrf"],
  "status": "done",
  "done_at": "2026-04-08T00:01:21Z"
}
```
The document grew from order to provisioned result by accumulation. No external state. No database row. No orchestration context stored separately. The document is the audit trail.
The catalog: JSON templates defining the workflow structure per service type — defaults, constraints, and the fn[] sequence. Immutable — the recipe, not the order. In-flight workflows continue on the catalog version they started with. No migration, no breaking changes.
Instantiation: takes a catalog template and customer input, and produces the populated workflow document. The customer provides only the fields that differ from catalog defaults — delta only.
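A minimal sketch of that delta merge, assuming plain dicts for both template and input (the function name and the `status` default are illustrative, not part of jsonwf's published API):

```python
import copy

def instantiate(catalog_template, customer_input):
    """Build a workflow document: catalog defaults first, customer delta on top."""
    doc = copy.deepcopy(catalog_template)   # the catalog itself stays immutable
    doc.update(customer_input)              # customer overrides only what differs
    doc.setdefault("fn_completed", [])
    doc["status"] = "pending"
    return doc
```
`deepcopy` matters here: the engine mutates fn[] as it runs, and a shallow copy would let an in-flight workflow corrupt the shared catalog template.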
The engine: a single function, execute(doc). It iterates fn[], calls the mapper, merges the result into the document, and moves the name to fn_completed[]. Resume is built in: if execution stops, fn[] retains the remaining steps. Restart the engine with the same document.
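The whole loop fits in a few lines. A sketch, with error handling omitted and the mapper passed in explicitly (the real engine may wire it differently):

```python
from datetime import datetime, timezone

def execute(doc, mapper):
    """Run the workflow: pop functions off fn[], merge results back into doc."""
    doc["status"] = "running"
    while doc["fn"]:
        fn_name = doc["fn"][0]
        doc = mapper[fn_name](doc)           # function reads and writes the document
        doc["fn"].pop(0)                     # step completed, shrink the queue
        doc["fn_completed"].append(fn_name)  # audit trail grows
    doc["status"] = "done"
    doc["done_at"] = datetime.now(timezone.utc).isoformat()
    return doc
```
Because the loop only consumes fn[], restarting with a half-finished document resumes at exactly the right step — there is no separate checkpoint to reload.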
Error handling: per-function strategies — abort, warn_continue, retry(N), rollback. The document records every error with a timestamp and the strategy applied.
The mapper: a Python dict mapping function names to callables. Every function in fn[] must exist in the mapper before the engine starts. Adding a new capability means adding one entry. Local functions, API calls, database writes, external notifications — all look the same to the engine.
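A sketch of a mapper plus the pre-flight check. The stub bodies are placeholders, not the real provisioning calls, and `validate` is written here to match the contract described — the actual implementation may differ:

```python
def find_device(doc):
    doc["device_found"] = True                       # placeholder for an inventory lookup
    return doc

def alloc_range(doc):
    doc["allocated_range"] = {"cidr": doc["cidr"]}   # placeholder for an IPAM call
    return doc

MAPPER = {
    "find_device": find_device,
    "alloc_range": alloc_range,
}

def validate(doc, mapper):
    """Pre-flight: every name in fn[] must exist in the mapper before the engine starts."""
    missing = [fn for fn in doc["fn"] if fn not in mapper]
    if missing:
        raise KeyError(f"fn[] references unmapped functions: {missing}")
```
Failing at validation time, before the first step runs, means a typo in a catalog template never half-provisions a service.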
Every function takes one argument and returns one value: the document.
```python
def create_vrf(doc):
    result = api_call("create_vrf", {
        "ne_name": doc["ne_name"],
        "vrf_name": doc["vrf_name"],
        "vrf_customer": doc["vrf_customer"],
        "rd": doc.get("rd"),
    })
    doc["vrf"] = result
    return doc
```
No parameters. No side effects outside the document. Stateless. Retryable. Testable with a mock document.
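Because the contract is doc-in, doc-out, a unit test needs nothing but a dict. A sketch, with the API transport stubbed out (the stub body is mine, not the real call):

```python
def create_vrf(doc):
    # api_call stubbed for the test: echo back what would have been sent
    doc["vrf"] = {"vrf_name": doc["vrf_name"], "vrf_customer": doc["vrf_customer"]}
    return doc

# the mock document IS the test fixture — no engine, no framework
mock = {"ne_name": "cisco7559", "vrf_name": "CUST_A", "vrf_customer": "acme"}
out = create_vrf(dict(mock))

assert out["vrf"] == {"vrf_name": "CUST_A", "vrf_customer": "acme"}
```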
If execution fails at step 3 of 7:
```json
{
  "fn": ["step4", "step5", "step6", "step7"],
  "fn_completed": ["step1", "step2", "step3:warn"],
  "fn_errors": [{"fn": "step3", "error": "...", "strategy": "warn_continue"}],
  "status": "running"
}
```
Restart the engine with this document — it continues from step 4. No re-running completed steps. No reconstructing state from logs. The document remembers where it was.
The document is live. At any point — before, during, or after execution — a human or another system can reach in and modify it. Add a field the next function will need. Change a value. Inject a new function into fn[]. The engine picks up the modified state on the next step.
This is what makes jsonwf suitable for semi-automated workflows — the machine executes what it can, flags what needs human input, and the human modifies the document and resumes.
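A sketch of that interaction. `record_approval` and `approved_by` are hypothetical names for this example, not part of jsonwf itself:

```python
# A paused workflow: the machine has flagged that it needs human input.
doc = {
    "fn": ["assign_interface_vrf"],
    "fn_completed": ["find_device", "create_vrf"],
    "status": "running",
}

# A human (or another system) edits the live document...
doc["approved_by"] = "noc-operator"        # add a field the next function will need
doc["fn"].insert(0, "record_approval")     # inject a new function into fn[]

# ...and resumes. The engine picks up the modified queue on the next step.
assert doc["fn"] == ["record_approval", "assign_interface_vrf"]
```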
When fn[] is empty — status is set to done. Completion notification is not a special case. It is just another function in fn[]:
```json
{
  "fn": ["validate", "alloc_ip", "assign_interface", "notify_nms", "send_email"]
}
```
notify_nms reads doc["instance_id"] and posts to the NMS. send_email reads doc["customer_email"] and sends a confirmation. Any notification channel — Slack, webhook, SMS — is one mapper entry away.
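A sketch of such a mapper entry. The transport is stubbed with a comment, and `email_sent` is an assumed field name for this illustration:

```python
def send_email(doc):
    """Hypothetical notification step: one mapper entry, doc in, doc out."""
    message = (
        f"To: {doc['customer_email']}\n"
        f"Subject: provisioning complete\n\n"
        f"Service on {doc['ne_name']} is active."
    )
    # real delivery (SMTP, mail API) would go here
    doc["email_sent"] = {"to": doc["customer_email"], "body": message}
    return doc
```
Recording what was sent back into the document keeps the notification inside the audit trail, like every other step.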
A five-step provisioning chain, executed and verified:
Input:
```json
{
  "ne_name": "cisco7559",
  "parent_ip": "192.168.0.0", "parent_cidr": 24, "cidr": 28,
  "vrf_name": "TEST_VRF", "vrf_customer": "testco", "rd": "65000:200",
  "if_name": "eth1.200", "vlan": 200,
  "fn": ["find_device", "alloc_range", "create_vrf", "create_subinterface", "assign_interface_vrf"]
}
```
Result:
```json
{
  "fn": [],
  "fn_completed": ["find_device", "alloc_range", "create_vrf", "create_subinterface", "assign_interface_vrf"],
  "status": "done",
  "allocated_range": {"ip": "192.168.0.16", "cidr": 28},
  "vrf": {"vrf_name": "TEST_VRF", "vrf_customer": "testco"},
  "interface_vrf": {"if_name": "eth1.200", "vrf_name": "TEST_VRF"}
}
```
Five API calls. One document. No human in the loop. No external state.
jsonwf is not suitable for high-frequency event processing, long-running parallel workflows, or systems requiring distributed coordination. For those use cases Kafka, Airflow, or Step Functions are the right tools.
jsonwf is suitable for provisioning, fulfillment, and configuration workflows where the sequence is known, the document is the context, and simplicity is a requirement.
| field | type | description |
|---|---|---|
| fn | list | execution queue — functions to run |
| fn_completed | list | audit trail — completed functions |
| fn_errors | list | error log — failed functions with strategy and timestamp |
| status | string | pending / running / done / error / rolled_back |
| done_at | string | ISO timestamp when fn[] became empty |
| all other fields | any | domain data — read and written by functions |
| strategy | behavior |
|---|---|
| abort | clear fn[], set status=error, preserve fn_completed for audit |
| warn_continue | log warning, continue to next function |
| retry(N) | retry up to N times, then abort |
| rollback | set status=rolled_back, preserve fn_completed |
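The strategies can be sketched as a single handler. In this sketch the handler advances the queue itself on warn_continue, and retry bookkeeping is left as a comment; field names follow the document schema:

```python
from datetime import datetime, timezone

def handle(doc, fn_name, error, strategy):
    """Sketch of the per-function error strategies; not the actual jsonwf_error.py."""
    doc.setdefault("fn_errors", []).append({
        "fn": fn_name,
        "error": str(error),
        "strategy": strategy,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    if strategy == "abort":
        doc["fn"] = []                     # clear the queue
        doc["status"] = "error"            # fn_completed preserved for audit
    elif strategy == "warn_continue":
        doc["fn_completed"].append(f"{doc['fn'].pop(0)}:warn")   # log and move on
    elif strategy == "rollback":
        doc["fn"] = []
        doc["status"] = "rolled_back"
    # retry(N): leave fn[] untouched so the engine re-runs the step,
    # and abort once N recorded errors for fn_name have accumulated.
    return doc
```
Note that every branch writes its outcome into the document first — even a failed workflow remains a complete, self-describing record.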
```
jsonwf/src/
  jsonwf_engine.py     execute(doc) — main loop
  jsonwf_mapper.py     MAPPER dict + all domain functions
  jsonwf_validator.py  validate(doc, mapper) — pre-flight
  jsonwf_error.py      handle(doc, fn_name, error) — error strategies
```