Fixing Assumptions for Using AI as an Accelerator
Context
AI-assisted development is no longer limited to a few specialists; it is being woven into everyday work.
Yet in many settings, even after AI is introduced, the quality of decisions does not improve as expected, or confusion is amplified, depending on how it is used.
In most cases this is not due to a lack of capability on AI's part, but to the human side failing to sufficiently fix its assumptions.
Objective
The purpose of this topic is not to treat AI as an oracle that simply produces correct answers.
It is to organize the prerequisites for safely incorporating AI as an accelerator, so that humans keep owning decisions even under incomplete assumptions.
Minimal Inputs
To use AI as decision support, at minimum the following assumptions need to be fixed:
- SoT (Source of Truth): where the source of truth AI may refer to lives
- Constraints: boundary conditions AI must not cross
- Non-Changeables: assumptions excluded from proposals and generation
- Uncertainties: domains where speculation is, and is not, allowed
If these are not made explicit, AI output tends to lack consistency.
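As an illustration, the four minimal inputs could be captured in a small record that is checked before any AI work begins. This is a minimal sketch; every name here (`FixedAssumptions`, the field names, the example values) is hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FixedAssumptions:
    """Hypothetical container for the four minimal inputs."""
    sot: str                                # location of the source of truth AI may refer to
    constraints: tuple[str, ...]            # boundary conditions AI must not cross
    non_changeables: tuple[str, ...]        # assumptions excluded from proposals and generation
    speculation_allowed: tuple[str, ...]    # domains where speculation is allowed
    speculation_forbidden: tuple[str, ...]  # domains where it is not

    def is_explicit(self) -> bool:
        """All inputs must be non-empty before AI output can be trusted for consistency."""
        return all([
            self.sot,
            self.constraints,
            self.non_changeables,
            self.speculation_allowed or self.speculation_forbidden,
        ])
```

Making the check a precondition (rather than documentation) is the point: it forces the "assumptions are still ambiguous" state to be visible before generation starts.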
Working Model
Work using AI assumes the following working model:
- AI generates decision candidates
- Humans always own the decision to adopt or reject them
- Output is treated as a hypothesis
- Each hypothesis carries verification conditions and an expiration date
The moment AI output is treated as final, the locus of decision-making may no longer remain on the human side.
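The working model above can be sketched as a data shape: a hypothesis that carries its own verification condition and expiration date, and that can only be adopted through a human call site. All names are illustrative assumptions, not part of the source:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Hypothesis:
    """AI output held as a hypothesis, never as a finalized decision."""
    claim: str
    verification: str       # how a human checks the claim
    expires: date           # after this date the hypothesis must be re-examined
    adopted: bool = False   # set only through human review

    def is_stale(self, today: date) -> bool:
        """An expired hypothesis may not be relied on without re-verification."""
        return today > self.expires


def human_review(h: Hypothesis, accept: bool) -> Hypothesis:
    """Adoption or rejection always passes through an explicit human decision."""
    h.adopted = accept
    return h
```

The design choice worth noting is that nothing in the AI-facing path sets `adopted`; the field changes only in `human_review`, which keeps the decision-making subject on the human side.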
Tactics
Making AI function as an accelerator is not a matter of increasing generation accuracy. The interventions that matter are:
- First define the decision boundaries
- Separate the assumptions AI may handle from those it must not
- Make the criteria for adopting or rejecting output explicit
These are not operational procedures, but the design of the decision structure.
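One way to make a decision boundary operational rather than procedural is to filter AI proposals against the non-changeables before any human review. This is a deliberately naive sketch (substring matching); the function name and example values are hypothetical:

```python
def violates_boundaries(proposal: str, non_changeables: list[str]) -> bool:
    """True if an AI proposal touches an assumption excluded from generation."""
    text = proposal.lower()
    return any(term.lower() in text for term in non_changeables)


# Filter candidates before they ever reach a human reviewer.
NON_CHANGEABLES = ["public API", "billing schema"]
candidates = [
    "rename an internal helper",
    "drop a column from the billing schema",
]
accepted_for_review = [
    p for p in candidates if not violates_boundaries(p, NON_CHANGEABLES)
]
```

A real implementation would need something sturdier than substring matching, but the structure is the point: the boundary is encoded once, and every candidate is checked against it mechanically.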
Risks
AI utilization comes with its own characteristic failure modes:
- The plausibility of output substitutes for an actual decision
- Work looks like progress even while assumptions remain ambiguous
- It becomes unclear where decision responsibility lies
These are not problems created by AI introduction itself; they are existing Failure Patterns made visible or amplified.
Interaction with Failure Patterns
AI utilization connects particularly strongly with the following Failure Patterns:
- Context-Blind Change: output keeps being updated without the assumptions being fixed
- Test-Passing Illusion: "it worked" / "it passed" is treated as evidence of correctness
- Decision-less Agility: AI is used as an excuse to postpone decisions
Unless these are treated as preconditions to address, AI does not support decisions; it accelerates confusion.
Resulting Context
When AI is used with assumptions fixed, it behaves as follows:
- Accelerates search for decision candidates
- Presents choices that tend to be overlooked
- Shortens learning loops
AI is not a decision-maker. It is a device that greatly expands the capability of humans who take on decisions.