AI: Use It, Don’t Get Lost

The 50% Jump: Where AI Really Fits in Delivery

AI can reliably take us about halfway to done. The trick is knowing when to lean in heavily (AI does the heavy lifting) and when to pull back (humans take control, with AI as a surgical assistant). 

The 50% Jump is a discipline: use AI to leap forward on the first half of delivery, then finish with proven engineering and governance.

Think of the book Dune and “Kfitzat Haderech,” which means “the shortening of the way”: a leap across distance without losing direction. This is the 50% Jump: AI helps you skip the slow start and the boring middle, while you complete the journey.

Don’t get lost. (AI Generated)

Why the 50% Jump works:

  • Developers move faster. In a controlled experiment, Copilot users finished tasks 55.8% faster than peers (arXiv).

  • Team-level impact. Microsoft and Accenture field studies show 8–22% more PRs per week with AI access (An MIT Exploration of Generative AI). DORA’s 2025 data links higher AI adoption with better productivity, higher job satisfaction, and less toil (DORA 2025).

  • Caveat: Security and quality risks rise if you skip reviews. Multiple studies show AI-generated code is more prone to insecure patterns when unchecked (arXiv). ⚠️

The Delivery Lifecycle, with AI “Hints”

We’ve formalized the 50% Jump into a 9-phase delivery loop, and for each phase, we mark how much to rely on AI:

  1. Business Requirements (BRD) — gather, validate, sign off.
    Balanced AI. Primarily proofreading and summarization, though AI can also surface insights such as potential gaps in design and testing.

  2. Review & Decomposition — break into epics, user stories, testing/acceptance criteria, and estimates.
    AI HEAVY. Let AI propose INVEST-quality stories, Given/When/Then acceptance criteria, test inventory, and LOE estimates.

  3. Design & Traceability — draft solution flows, security checks, traceability matrix.
    AI HEAVY. Use AI to sketch flows, suggest interfaces, and auto-build traceability links (and data provenance tags to ensure the team is aware of the source of the documents).

  4. Development — implement, peer review, unit test.
    AI Jump (50%). Generate scaffolding, boilerplate, and test stubs with AI. Switch back to human engineering for architecture, optimization, and secure design.
    Use AI surgically for field mappings, mocks, challenging bug fixes or refactoring, and doc polish.

  5. Testing — Linting, QA, integration, regression.
    Balanced AI. AI can generate additional edge test cases, but human QA owns the verdict.

  6. Documentation — user/admin guides, training material.
    AI HEAVY. Offload boilerplate and formatting to AI; humans verify accuracy.

  7. User Acceptance Testing (UAT) — client validation, feedback, sign-off.
    Minimal AI. Keep this phase human-led, but AI can help generate UAT scripts (even test data) tied to traceability.

  8. Production Deployment — deploy, smoke test, monitor.
    Minimal AI. Change management and release governance stay human-owned.

  9. Post-Go-Live Support — hyper-care, transition, retrospective.
    AI Assist. Use AI for log summarization and retrospective facilitation, not for incident resolution.
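The nine phases and their AI-reliance hints can be encoded as a simple checklist that a team could wire into its delivery tooling or gate reviews. A minimal sketch: the phase names and hint labels mirror the list above, while the data structure and `ai_hint` function are purely illustrative.

```python
# Minimal sketch: the 9-phase delivery loop with its AI-reliance "hints".
# Phase names and hints come from the lifecycle above; the structure and
# function names are illustrative, not a prescribed tool.

PHASES = [
    ("Business Requirements (BRD)", "Balanced AI"),
    ("Review & Decomposition", "AI Heavy"),
    ("Design & Traceability", "AI Heavy"),
    ("Development", "AI Jump (50%)"),
    ("Testing", "Balanced AI"),
    ("Documentation", "AI Heavy"),
    ("User Acceptance Testing (UAT)", "Minimal AI"),
    ("Production Deployment", "Minimal AI"),
    ("Post-Go-Live Support", "AI Assist"),
]

def ai_hint(phase_name: str) -> str:
    """Look up how heavily to lean on AI for a given phase."""
    for name, hint in PHASES:
        if name == phase_name:
            return hint
    raise KeyError(f"Unknown phase: {phase_name}")

print(ai_hint("Development"))  # AI Jump (50%)
```

A table like this makes the hints auditable: a pipeline gate can check that heavy-AI artifacts carry the review evidence the later guardrails require.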


Guardrails: the “Security Spine”

  • Before the Jump: classify BRDs, exclude PII, use enterprise AI that doesn’t train on your data.

  • During the Jump: apply the OWASP Top 10 for large language model (LLM) applications, lint AI outputs as you would a junior engineer’s work, and tag all AI-sourced artifacts.

  • After the Jump: keep your code reviews, SAST/DAST, and DORA metrics unchanged.
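Tagging AI-sourced artifacts can be as lightweight as a provenance marker in each file, checked during review. A minimal sketch, assuming a hypothetical `AI-Generated:` header convention (adapt the marker to whatever tagging your team adopts):

```python
# Minimal sketch of an AI-provenance check. The "AI-Generated:" marker
# convention is an assumption, not a standard; adapt to your own tagging.

AI_MARKER = "AI-Generated:"

def provenance(text: str):
    """Return the AI-provenance tag of an artifact, or None if untagged."""
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith(AI_MARKER):
            return stripped[len(AI_MARKER):].strip()
    return None

artifact = """\
AI-Generated: Copilot draft, human-reviewed
def add(a, b):
    return a + b
"""
print(provenance(artifact))  # Copilot draft, human-reviewed
```

A check like this can run in CI so that untagged AI output, or tagged output without review evidence, fails the build rather than slipping through.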

👉🏻 Speed from AI, assurance from engineering

How to Measure

  • AI-to-Human acceptance ratio of stories/tests/specs.

  • Pull Requests (PR) review duration and lead time — small, steady improvements are expected.

  • Requirements rework rate — should trend down if AI is used early.

  • Security exceptions per KLOC — must not worsen.
    KLOC = thousand lines of code.
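These metrics are straightforward to compute from data you already collect. A minimal sketch of the two ratio metrics, with hypothetical counts standing in for numbers you would pull from your tracker and SAST tooling:

```python
# Minimal sketch computing two of the metrics above. The counts are
# illustrative; source real numbers from your tracker and SAST reports.

def acceptance_ratio(ai_accepted: int, ai_proposed: int) -> float:
    """Share of AI-proposed stories/tests/specs accepted by humans."""
    return ai_accepted / ai_proposed if ai_proposed else 0.0

def exceptions_per_kloc(security_exceptions: int, lines_of_code: int) -> float:
    """Security exceptions normalized per thousand lines of code (KLOC)."""
    return security_exceptions / (lines_of_code / 1000)

print(acceptance_ratio(42, 60))        # 0.7
print(exceptions_per_kloc(3, 12_000))  # 0.25
```

Tracked per feature, the acceptance ratio shows where AI output is actually landing, and exceptions per KLOC confirms the speed is not being bought with security debt.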

Close

The 50% Jump is not about “AI everywhere.” It’s about knowing where AI is a multiplier (requirements, decomposition, docs) and where humans must own the outcome (security, design, release). Start with one feature, track the metrics, and scale where the data shows value.

Get the 50% Jump handout PDF:
