Before the First Line of Code

Apr 8, 2026

By RKeithElliott


A Note on Using AI Tools Responsibly
This paper discusses methods of using AI tools in a business context. The prompts and examples in this paper are designed to be practical and immediately usable. Before trying them, a few considerations are worth keeping in mind.

Use approved tools. Many organizations have policies governing which AI tools may be used for business purposes. Use only tools your organization has sanctioned and be sure to follow any applicable data handling guidelines.

Protect sensitive information. Project proposals often contain confidential financial, strategic, or personnel information. Do not paste sensitive content into consumer AI tools or any platform not approved for confidential business data.

Validate the output. AI analysis is a starting point, not a verdict. The value of the AI lens is in the questions it surfaces and the assumptions it exposes — not in treating its conclusions as definitive. Human judgment remains essential, particularly where the AI flags something that requires organizational context to interpret correctly.

The goal is better questions, not automated decisions. AI does not approve or reject projects. It interrogates reasoning. Any decision rightfully remains where it belongs: with the people who understand the business.

The Streetlight Effect
There is an old joke that goes something like this. A police officer finds a drunk man crawling on his hands and knees under a streetlight late at night. The drunk says he’s looking for his keys. The officer helps search, and after finding nothing, asks: “Are you sure you dropped them here?” The drunk replies: “No, I dropped them in the park — but the light is better here.”

The joke has become a kind of cultural meme for something known as “the streetlight effect”: a bias in decision making to study what is measurable or accessible rather than what is most relevant.

This paper asks the question “Is it possible our efforts at using AI productively in Enterprise IT suffer from the same effect?”

Specifically, the current AI conversation is heavily focused on coding assistance and developer productivity. The ROI framing is almost entirely about doing the same work faster. AI investment concentrates on coding partly because coding is measurable — lines of code, pull requests, and velocity metrics feel concrete even when they don’t translate to business value. The light is better here.

But relatively little attention is being paid to a different question: what if AI’s biggest value lies in preventing weak work from starting in the first place? Shouldn’t we also be looking for improvements upstream? The light is not as good, but maybe that’s where we will find the keys.

For CIOs, PMOs, enterprise architects, portfolio leaders, and transformation sponsors, the issue shouldn’t be whether AI can help write more code, but whether AI can help organizations commit resources more intelligently before delivery begins.

And, in the process, potentially save money. Because often, the most expensive line of code is the one that should never have been written.

Which raises the obvious question: how big is the problem? Let’s look at the data.

The Problem
Despite decades of improvement in tools and methods, IT projects [1] still fail at staggering rates.

By the Numbers: The Cost of IT Project Failure
In the U.S.
U.S. total cost of unsuccessful projects: $260 billion; operational failures from poor-quality software: $1.56 trillion. [2]
Standish Group — decades of tracking
One widely cited 2025 summary of the proprietary Standish CHAOS data reports only 29% of IT projects are considered successful (on time, on budget, full scope); 19% are outright failures; 52% are “challenged” — late, over budget, or missing key features. [3]
McKinsey
A 2012 study found 17% of large IT projects go so badly they threaten the company’s existence. [4]

Defining Failure
Software can fail in ways beyond simply displaying the “blue screen of death,” and so can software projects. Project failures fall into two broad categories, each encompassing all-too-familiar failure modes.

Development Failure
Late Cancellation: The project is cancelled mid-development after substantial sunk costs.
Scope Collapse: The project is delivered with so many features cut that it no longer delivers the expected benefits.
Budget/Schedule Failure: The project is delivered, but it is so late and/or over budget that the original business case is destroyed.
Technical Failure: It doesn’t work, or is unreliable, or it creates more problems than it solves.

Delivery Failure
Adoption Failure: The system works but users don’t use it, work around it, or revert to previous methods.
Benefits Realization Failure: The projected ROI, efficiency gains, or strategic outcomes never materialize, even though the system functions correctly.
Organizational Fit Failure: The system is built for a process or structure that changed during development, or that doesn’t reflect how work actually gets done.
Obsolescence at Delivery: The project took so long that by go-live, the business need has moved on or a better solution has emerged elsewhere.

Development and delivery failures can both be placed into a broad category we might refer to as “execution failures.” To address this category, organizations adopt strategies that target specific areas of project execution. But mitigation strategies for project execution failures have no effect on yet another, and more insidious, mode of failure.
