
Context-Blind Change

A Failure Pattern of Context Erosion and Unverifiable Correctness

Summary

Context-Blind Change is a Failure Pattern in which changes are made continuously while the assumptions, decision rationale, and constraints behind them are lost before they can be shared.

This Pattern does not condemn rapid change itself. Rather, it addresses situations of high uncertainty and demanded change frequency, where proceeding without documenting assumptions becomes the locally rational choice, and where "correctness" consequently becomes unverifiable.


Context

Software in operation undergoes continuous change, large and small: specification changes, feature additions, refactoring, and bug fixes.

Individual changes are usually locally rational, and reviews and tests pass. However, the assumptions and constraints on which a change was based often cannot be read back from the code or the tickets.

As this state persists, no one can explain the behavior or the constraints of the software as a whole.

Forces

The main forces that generate this Pattern are as follows:

  • Pressure for speed of change
    Pausing change is itself regarded as a risk, so proceeding with implementation is prioritized over confirming assumptions.

  • Assumptions becoming implicit
    Understanding presumed to be shared within the team goes undocumented, and the reasons for decisions remain outside the code and the diffs.

  • Chains of local optimization
    Each change looks correct in isolation, so the erosion of overall consistency goes unnoticed.

  • Compensating effect of support tools
    AI and automation tools keep changes working even when assumptions are never made explicit, removing the immediate pressure to record them.

Failure Mode

Because assumptions are not shared, the legitimacy of changes cannot be verified after the fact.

As a result, several forms of breakage proceed simultaneously:

  • The "Source of Truth" disperses into code fragments
    Specifications and rules are embedded in scattered implementation fragments, and it becomes impossible to judge which copy is authoritative.

  • Reasons for state transitions cannot be traced
    Why the system transitions into a given state cannot be determined, and inconsistencies between displayed and internal state become hard to detect.

  • Reasons for change cannot be traced
    What a change was meant to protect is lost, and subsequent changes are made on speculation.
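The dispersal of the "Source of Truth" described above can be made concrete with a small sketch. The module names, values, and the free-shipping rule below are hypothetical; the point is only that the same rule lives in two places, each change was locally rational, and the copies have drifted.

```python
# Hypothetical illustration: the same business rule duplicated in two
# modules. Each change passed review on its own, but the copies drifted,
# and nothing records which value is authoritative.

# orders.py -- changed when free shipping was introduced
FREE_SHIPPING_THRESHOLD = 50.0  # implicit assumption: totals are in USD

def shipping_fee(order_total: float) -> float:
    """Charge a flat fee below the threshold, nothing at or above it."""
    return 0.0 if order_total >= FREE_SHIPPING_THRESHOLD else 4.99

# checkout_banner.py -- lowered during a promotion, never synced back
BANNER_THRESHOLD = 40.0

def banner_text(order_total: float) -> str:
    """Tell the customer how far they are from free shipping."""
    remaining = BANNER_THRESHOLD - order_total
    if remaining > 0:
        return f"Spend {remaining:.2f} more for free shipping!"
    return "You qualify for free shipping!"

# A customer with a 45.00 order sees "You qualify for free shipping!"
# in the banner, yet is charged 4.99 at checkout.
print(banner_text(45.0))
print(shipping_fee(45.0))
```

Neither constant is wrong in isolation; what is missing is the shared context that would say which one is the standard.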

Consequences

  • Each fix tends to break something elsewhere
    (Part I: What Breaks — State / Boundary)

  • The scope of change impact cannot be explained, and decisions are delayed
    (Part I: What Breaks — Responsibility / Boundary)

  • Confirming the specification is replaced by reading the code
    (Part I: What Breaks — Data / Boundary)

  • AI and new members cannot assist effectively
    Because assumptions and constraints are not explicit, their usefulness as decision support is limited.
    (Part II: Why It Breaks — Context Erosion)

Countermeasures

The following are not a list of solutions but counter-patterns: minimal interventions that change the dynamics behind the Failure Mode.

  • Before describing what changed, articulate the assumptions and constraints the change is meant to protect
  • Do not document everything; record only the minimal context needed to make and review change decisions
  • Record decisions in a form that allows later verification of whether the change was correct
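One possible shape for such minimal context is a small structured record, attached to a commit or ticket, of what a change assumes and what it protects. This is a sketch, not a prescribed format; the field names and the example content are assumptions of this illustration.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class ChangeRecord:
    """Minimal context needed to verify a change after the fact."""
    change: str        # what was changed
    assumption: str    # the assumption the change relies on
    constraint: str    # the constraint the change is meant to protect
    recorded_on: date = field(default_factory=date.today)

    def summary(self) -> str:
        return (f"{self.change} "
                f"(assumes: {self.assumption}; protects: {self.constraint})")

# Hypothetical example entry.
record = ChangeRecord(
    change="Moved cache invalidation to the write path",
    assumption="Reads outnumber writes by roughly 10:1",
    constraint="Readers must not observe data staler than 5 seconds",
)
print(record.summary())
```

A plain-text decision record (an ADR, or a commit-message trailer) serves the same purpose; the essential property is that the assumption is stated where a later reader can check whether it still holds.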

Resulting Context

Changes still occur frequently, and not all decisions are determined in advance.

However, because assumptions are made visible, the appropriateness of a change can be verified after the fact.

As a result, changes are treated not as mere diffs, but as a learnable history of decisions.

See also

  • Specification-by-Absence
    A derived pattern in which changes accumulate without assumptions being made explicit, and undefined behavior hardens into de facto specification.

  • Test-Passing Illusion
    A structure in which, when assumptions are not shared, limited success tends to be treated as a proxy for correctness.


Appendix: Conceptual References

  • Pamela Zave, Michael Jackson, Four Dark Corners of Requirements Engineering, 1997.
  • David L. Parnas, On the Criteria To Be Used in Decomposing Systems into Modules, 1972.
  • Barry W. Boehm, Software Engineering Economics, 1981.
  • Gojko Adzic, Specification by Example: How Successful Teams Deliver the Right Software, 2011.