From Guessing to Knowing

The 5 Switches That Make AI Ask Before It Executes

1. Introduction

AI is brilliant at following a path—but hopeless at reading your mind.

Give it crystal-clear instructions and it’ll nail the outcome; give it ambiguity and it’ll fill the gaps with confident guesses.

The fix isn’t “better prompts”; it’s making curiosity a non-negotiable part of the process.

2. Problem Context

Most teams respond to AI’s misses by over-engineering the prompt — packing it with adjectives, edge cases, and every nuance they can think of.

It feels productive, but it’s really just front-loading assumptions. The AI still charges ahead without ever surfacing the gaps.

The real fix isn’t “more detail” in the request — it’s structuring the interaction so the AI is obligated to stop, surface its uncertainties, and resolve them before doing the work.

That shift, from verbose prompting to curiosity by design, turns an unpredictable executor into a disciplined collaborator.

3. The Five Curiosity Switches

  1. Questions-First Guardrail
    Don’t let the AI rush into answers. Instruct it to first surface clarifying questions — grouped by users, data, UX, edge cases, and success metrics. If a category has no questions, it must explain why. This forces broad coverage before it narrows in on solutions (see Sketch 1 after the list).
    Example: “Before proposing a solution, surface 7 clarifying questions grouped by: users, data, UX, edge cases, success metrics. If any group is empty, explain why.”
  2. Confidence Thresholds
    Require the AI to return a confidence score from 0–1 for its proposed answer. If that score is below 0.8, it must pause execution and ask targeted questions to raise it. This simple guardrail stops confident nonsense before it hits production (see Sketch 2 after the list).
    Example: “Return a confidence score (0–1). If <0.8, pause execution and ask what you need to raise it.”
  3. Schema-Driven Curiosity
    Give the AI a JSON Schema for the desired spec or output. If any required field is missing or marked “unknown,” it must generate questions specific to that field, never inventing defaults. This turns vague specs into structured interrogations (see Sketch 3 after the list).
    Example: “Here is the JSON Schema for the output. For every required field that is missing or marked ‘unknown’, ask a targeted question about that field. Never invent a default.”
  4. Assumptions Log + Approval
    Instruct the AI to list every assumption it’s making, with a rationale. Each assumption gets labeled as safe, risky, or blocking. Safe ones proceed; risky or blocking ones must be turned into questions for human review. PMs love this: it mirrors real ADR (Architecture Decision Record) discipline (see Sketch 4 after the list).
    Example: “List all assumptions with rationale. Label each as ‘safe’, ‘risky’, or ‘blocking’. Proceed only on safe; convert risky/blocking into questions.”
  5. Ambiguity Triggers
    Define the red flags that automatically trigger questions, such as:
    • Vague verbs (“optimize,” “improve”)
    • Unbounded nouns (“reports,” “notifications”)
    • Hidden constraints (latency, PII, SLAs)
    • Plurals with no cardinality (“users can have dashboards”)
    When the AI sees these, it knows to ask before it acts (see Sketch 5 after the list).
    Example: “If the request contains a vague verb, an unbounded noun, a plural without cardinality, or an unstated constraint such as latency, PII, or an SLA, ask a clarifying question about each before proceeding.”
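
Sketch 1 (Questions-First Guardrail). A minimal sketch of wiring the guardrail into code; complete() is a hypothetical stand-in for whichever LLM client you use, not a real API.

```python
# Questions-first guardrail: the model must interrogate before it executes.
QUESTION_GROUPS = ["users", "data", "UX", "edge cases", "success metrics"]

GUARDRAIL = (
    "Before proposing a solution, surface 7 clarifying questions grouped by: "
    + ", ".join(QUESTION_GROUPS)
    + ". If any group is empty, explain why. Do not propose a solution yet."
)

def complete(prompt: str) -> str:
    """Hypothetical stand-in for your LLM provider's client call."""
    raise NotImplementedError("wire in your provider's client here")

def clarify_first(task: str) -> str:
    # Prepend the guardrail so questions come back before any answer does.
    return complete(f"{GUARDRAIL}\n\nTask: {task}")
```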
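
Sketch 2 (Confidence Thresholds). A sketch of the 0.8 gate, assuming the model is told to reply in JSON; complete() is again a hypothetical LLM call, and the JSON keys are this example’s own convention.

```python
import json

THRESHOLD = 0.8  # pause whenever self-reported confidence falls below this

def complete(prompt: str) -> str:
    """Hypothetical stand-in for your LLM provider's client call."""
    raise NotImplementedError

def gated_answer(task: str) -> dict:
    raw = complete(
        "Respond as JSON with keys 'answer', 'confidence' (0-1), and "
        "'questions' (a list). If confidence < 0.8, leave 'answer' empty "
        "and list the questions you need answered to raise it.\n\n"
        f"Task: {task}"
    )
    result = json.loads(raw)
    if result["confidence"] < THRESHOLD:
        # Pause execution: route the model's questions to a human instead.
        return {"status": "needs_clarification", "questions": result["questions"]}
    return {"status": "ok", "answer": result["answer"]}
```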
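
Sketch 3 (Schema-Driven Curiosity). A self-contained sketch; the spec schema and its field names are invented for illustration. Every required field that is missing or marked “unknown” becomes a question, never a default.

```python
# Illustrative spec schema; these required fields are invented for the example.
SPEC_SCHEMA = {
    "type": "object",
    "required": ["audience", "data_source", "latency_budget_ms", "success_metric"],
}

def curiosity_check(spec: dict) -> list[str]:
    """Turn every missing or 'unknown' required field into a targeted question."""
    questions = []
    for field in SPEC_SCHEMA["required"]:
        value = spec.get(field)
        if value is None or value == "unknown":
            # Never invent a default; interrogate the gap instead.
            questions.append(f"The spec has no value for '{field}'. What should it be?")
    return questions

draft = {"audience": "enterprise admins", "data_source": "unknown"}
print(curiosity_check(draft))
# Asks about data_source, latency_budget_ms, and success_metric.
```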
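
Sketch 4 (Assumptions Log + Approval). A sketch of the triage step, assuming the model’s assumptions have already been parsed into structured records; the sample log entries are made up.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    rationale: str
    label: str  # "safe", "risky", or "blocking"

def triage(assumptions: list[Assumption]) -> list[str]:
    """Safe assumptions proceed; risky/blocking ones become review questions."""
    return [
        f"Please confirm: {a.text} (rationale: {a.rationale})"
        for a in assumptions
        if a.label in ("risky", "blocking")
    ]

log = [
    Assumption("Dashboards refresh hourly", "no freshness requirement was given", "risky"),
    Assumption("UI copy is in English", "all prior specs were English-only", "safe"),
]
print(triage(log))  # only the risky assumption comes back as a question
```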
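
Sketch 5 (Ambiguity Triggers). A sketch of a pre-flight scan for the red flags above; the word lists are illustrative starting points, not a complete taxonomy.

```python
import re

# Illustrative trigger lists; extend them with your own domain's red flags.
VAGUE_VERBS = {"optimize", "improve", "streamline", "enhance"}
UNBOUNDED_NOUNS = {"reports", "notifications", "dashboards"}

def ambiguity_triggers(request: str) -> list[str]:
    """Scan a request and return one clarifying question per red flag found."""
    words = set(re.findall(r"[a-z]+", request.lower()))
    questions = []
    for verb in sorted(VAGUE_VERBS & words):
        questions.append(f"'{verb}' is vague: which metric defines it, and by how much?")
    for noun in sorted(UNBOUNDED_NOUNS & words):
        questions.append(f"'{noun}' is unbounded: how many, of what kind, and for whom?")
    return questions

print(ambiguity_triggers("Optimize the reports for our users"))
# Flags "optimize" (vague verb) and "reports" (unbounded noun).
```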

4. Conclusion

You can’t train AI to “be thoughtful” by hoping for better behavior — it has to be engineered into the process.

When curiosity is wired into the workflow, the model stops guessing and starts gathering the context it needs to get it right.

Make it earn clarity before it earns execution, and you’ll find it acting less like a random idea generator and more like a disciplined product manager.

Dimitar Bakardzhiev
