# Execution Model

## Native workflow contract

The skill should expose two workflow paths:

### 1. Meeting orchestration

Inputs:
- topic
- participant roles
- optional background context
- max rounds
- optional human-in-the-loop mode

Core loop:
1. Load role configs.
2. Build a shared scratchpad.
3. Ask the PM router for the next step.
4. Dispatch the selected role.
5. Append the response to the scratchpad.
6. Allow user feedback when the workflow pauses.
7. Finish with a Markdown report.
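The steps above can be sketched as a small driver loop. This is a non-authoritative sketch: `MeetingState`, `run_meeting`, and the injected `route_pm` / `run_role` / `get_feedback` callables are hypothetical names chosen for illustration, not an existing API.

```python
from dataclasses import dataclass, field

@dataclass
class MeetingState:
    """Shared scratchpad plus round counter for one meeting."""
    topic: str
    scratchpad: list = field(default_factory=list)
    round: int = 0

def run_meeting(topic, roles, route_pm, run_role, max_rounds=3, get_feedback=None):
    """Core loop: ask the PM router for a step, dispatch it, record the output.

    route_pm, run_role, and get_feedback are injected callables so the loop
    stays testable without a live model client.
    """
    state = MeetingState(topic=topic)
    while state.round < max_rounds:
        role = route_pm(state, roles)            # step 3: PM picks the next role
        if role is None:                         # router signals the meeting is done
            break
        reply = run_role(role, state)            # step 4: dispatch the selected role
        state.scratchpad.append((role, reply))   # step 5: append to the scratchpad
        if get_feedback is not None:             # step 6: optional pause for feedback
            note = get_feedback(state)
            if note:
                state.scratchpad.append(("user", note))
        state.round += 1
    # step 7: finish with a Markdown report
    return "\n\n".join(f"## {who}\n{text}" for who, text in state.scratchpad)
```

Injecting the router and role runners keeps the loop itself free of client details, which matters for the testability goal stated later in this document.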

### 2. Single-role analysis

Inputs:
- role name
- topic
- optional background context

Core loop:
1. Load one role config.
2. Build a focused prompt.
3. Ask that role for a direct judgment.
4. Return the response as analysis output.
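A sketch of the single-role path, under the assumption that a role config carries at least a `name` field (the exact JSON schema is defined by the existing role files, not here). The `ask` callable stands in for the model client.

```python
def run_single_role(role_config, topic, context=None, ask=None):
    """Build a focused prompt for one role and return its direct judgment.

    ask is an injected callable (prompt -> str); role_config mirrors the
    fields assumed to exist in the role JSON files.
    """
    lines = [
        f"You are the {role_config['name']}.",
        f"Topic: {topic}",
    ]
    if context:
        lines.append(f"Background: {context}")
    lines.append("Give a direct judgment with your reasoning.")
    return ask("\n".join(lines))
```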

## Human-in-the-loop checkpoints

Recommended pause points:
- before selecting participants
- before round 1 if the topic is ambiguous
- after any role response if the user wants to redirect the meeting
- after final report if the user wants a second round
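One way to implement these pause points is a single checkpoint helper that takes an injected prompt function, so tests can script replies instead of blocking on `input()`. The point names below are assumptions mapping onto the list above.

```python
# Hypothetical pause-point names mirroring the checkpoint list above.
PAUSE_POINTS = {"participants", "round_1_ambiguous", "after_response", "after_report"}

def checkpoint(point, state, prompt_user, enabled=True):
    """Pause at a named point; return user feedback, or None to continue.

    prompt_user is injected (message -> str) so the workflow can run
    non-interactively (enabled=False) or under test.
    """
    if not enabled or point not in PAUSE_POINTS:
        return None
    reply = prompt_user(f"[{point}] Press Enter to continue or type feedback: ")
    return reply.strip() or None
```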

## Contract for router functions

The skill layer should keep these helpers isolated:

- `select_participants(plan)`
- `route_pm(plan, state, participants, client)`
- `run_role(role_config, prompt, client)`

Keeping these helpers isolated makes the orchestration layer easy to test and straightforward to replace later.
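The testability claim is concrete: because the client is a parameter of `run_role`, a stub can stand in for it. A sketch, assuming (hypothetically) that the client exposes a single `complete(prompt) -> str` method:

```python
class FakeClient:
    """Stub client that records prompts; assumes a complete(prompt) -> str API."""
    def __init__(self, reply="ok"):
        self.prompts = []
        self.reply = reply

    def complete(self, prompt):
        self.prompts.append(prompt)
        return self.reply

def run_role(role_config, prompt, client):
    """Matches the contract above: render the role prompt and call the client."""
    full_prompt = f"[{role_config['name']}] {prompt}"
    return client.complete(full_prompt)
```

Swapping `FakeClient` for a real client later requires no change to `run_role` itself, which is the point of the isolation.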

## Practical OpenClaw implementation strategy

Because this skill must stay native to OpenClaw, build the actual runtime as a small Python workflow layer inside the skill directory.

Recommended shape:

- `scripts/runtime.py`
  - session state
  - prompt assembly
  - role loading
  - PM routing
  - execution loop

- `scripts/handlers.py`
  - one handler for meeting orchestration
  - one handler for single-role analysis
  - one handler for user interruptions / feedback

- `scripts/prompts.py`
  - PM prompt template
  - role prompt template
  - feedback prompt template
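For `scripts/prompts.py`, the standard-library `string.Template` is enough. The template text and field names below are illustrative assumptions, not the skill's actual prompts.

```python
from string import Template

# Hypothetical templates; the real wording lives in scripts/prompts.py.
PM_TEMPLATE = Template(
    "You are the PM router.\nTopic: $topic\nTranscript so far:\n$transcript\n"
    "Reply with the next role to speak, or DONE."
)
ROLE_TEMPLATE = Template(
    "You are the $role.\nTopic: $topic\nTranscript so far:\n$transcript\n"
    "Give your perspective in under 200 words."
)

def render_pm(topic, transcript):
    """Fill the PM routing template with the current meeting state."""
    return PM_TEMPLATE.substitute(topic=topic, transcript=transcript)
```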

The runtime should not call the meeting project's CLI. It should only read role definitions from the existing JSON files.
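Reading the role definitions directly is a small amount of code. A sketch, assuming each role lives in its own JSON file with a `name` field (the directory path is a parameter, since the doc does not fix one):

```python
import json
from pathlib import Path

def load_roles(roles_dir):
    """Read every role-definition JSON file in roles_dir.

    Only reads the existing files on disk; never shells out to the
    meeting project's CLI.
    """
    roles = {}
    for path in sorted(Path(roles_dir).glob("*.json")):
        config = json.loads(path.read_text(encoding="utf-8"))
        # Fall back to the filename stem if a config omits "name".
        roles[config.get("name", path.stem)] = config
    return roles
```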
