Skip to content

Context-Minimization Pattern NEW

Problem

User-supplied or tainted text lingers in the conversation, enabling it to influence later generations.

Solution

Purge or redact untrusted segments once they've served their purpose:

  • After transforming input into a safe intermediate (query, structured object), strip the original prompt from context.
  • Subsequent reasoning sees only trusted data, eliminating latent injections.
sql = LLM("to SQL", user_prompt)
remove(user_prompt)              # tainted tokens gone
rows = db.query(sql)
answer = LLM("summarize rows", rows)

How to use it

Customer-service chat, medical Q&A, any multi-turn flow where initial text shouldn't steer later steps.

Trade-offs

  • Pros: Simple; no extra models needed.
  • Cons: Later turns lose conversational nuance; may hurt UX.

References

  • Beurer-Kellner et al., ยง3.1 (6) Context-Minimization.