Respecting the Craft: How to Convince Software Engineers to Use AI Without Trivialising Their Work

The hardest part of AI adoption in software engineering is no longer the models. It is persuading proud, experienced engineers to let those models into their daily craft without feeling slower, deskilled, or disrespected.

My first serious conversation about AI with a skeptical engineer did not start with benchmarks; it started with a sigh. A senior developer on my team, brilliant and deeply respected, looked at me over a video call and said: "I tried this AI thing for a week. It kept hallucinating types, I had to double‑check everything, and honestly, I felt slower and dumber. Why are we pushing this so hard?"

It would have been easy, and tempting, to reply with a slide full of statistics promising 50% productivity gains. Instead, I said something else: "You might be right about your experience. Let's treat that as data, not disobedience. Then let's decide together where AI actually helps, and where it does not."

That response came from a leadership philosophy I have carried for years — empowerment, trust and daring to fail — forged from growing up in Maiduguri, studying computer science against the odds, and leading teams by granting them freedom even when it scared everyone around me.

Convincing software engineers to use AI is not a matter of rolling out a tool and enforcing a quota. It is a matter of honouring their craft while inviting them into a new one.

1. Start With the Inconvenient Evidence

In leadership, the quickest way to lose engineers' trust is to sell them a fantasy they can disprove in a single afternoon at their keyboard.

Recent work by METR, widely reported in the press, did something unusual: it measured how experienced open‑source developers performed with and without AI tools on real tasks in codebases they already knew deeply. Their expectation, before touching the tools, was that AI would make them roughly 20–24% faster. The reality was sobering: when allowed to use AI assistants like Cursor and Claude, they took about 19% longer.

Developers in that study still felt faster with AI. They coded less by hand, searched less, and spent more time prompting and editing. But the clock told a different story. They were trading raw speed for a less effortful, more pleasant workflow — more like editing an essay than facing a blank page.

[Figure: Expected vs Actual Effect of AI on Task Time for Experienced Developers. Conceptual chart based on METR's findings; y-axis: percentage change in task completion time; expected −24%, observed +19%.]
Figure 1. In one controlled study, experienced developers working in familiar codebases expected AI to speed them up by around a quarter — but were actually about a fifth slower.

At the same time, other studies and industry reports find substantial productivity gains from AI, especially for less experienced developers or when working on unfamiliar code. Atlassian's 2025 State of DevEx report, for example, suggests developers are saving over ten hours a week through AI tooling, while still losing a comparable amount of time to organisational friction such as poor documentation and unclear priorities.

The picture is therefore nuanced: AI is not universally good or bad. It is context‑sensitive. When we stand in front of engineers and pretend otherwise, we immediately undermine our credibility.

The first step in convincing engineers to use AI is to admit, calmly and publicly, that the skeptical engineers are sometimes right.

2. What Your Engineers Are Really Resisting

When an engineer tells you, "AI is slowing me down," they are rarely making a philosophical statement about progress. They are reporting a concrete experience of friction, risk and identity.

2.1. They are defending quality and maintainability

Experienced engineers are paid to protect the long‑term health of the codebase. Studies on GitHub Copilot adoption in open‑source projects show that while the overall volume of code can increase, the maintenance burden on core contributors often grows, with more rework falling on their shoulders.

When they see AI propose code that compiles but subtly violates invariants, they are not being conservative; they are doing their job.

2.2. They are protecting their craft identity

For many senior engineers, writing and reading code is not a chore; it is a craft, as meaningful as writing prose or composing music. They do not want AI "taking away" the part of work that they find most satisfying, leaving them with coordination and damage control.

Survey work on developers' mental models of AI suggests that experience shapes how they see these tools: juniors tend to view AI as a teacher, while seniors are more likely to view it as a junior colleague — or to assign it no role at all.

2.3. They are wary of managerial hype

Developers are not blind to the pressures executives face. They read the same headlines about AI "replacing" engineers. When adoption is framed as a cost‑cutting mandate rather than a craft‑enhancing opportunity, resistance is rational self‑defence.

2.4. They fear silent punishment for honest mistakes

Most importantly, engineers know that if an AI suggestion introduces a subtle production bug, the blame will attach to the human who approved it, not to the tool. In organisations where failure is not normalised, this risk makes adoption emotionally expensive.

Unless we address these deeper concerns — quality, identity, hype and blame — no amount of tooling training will produce genuine adoption.

3. Principles for Persuasion: Empowerment, Trust and Daring to Fail

My own leadership philosophy, shaped over years of working with high‑performing teams, can be summarised in three words: empowerment, trust and daring to fail. I have found that the same principles are essential in introducing AI to engineers.

3.1. Empower engineers as co‑designers, not passive recipients

Instead of announcing, "From next quarter, everyone must use AI for 30% of their work," invite a group of respected engineers — including skeptics — to design the AI adoption strategy with you. Give them real influence over:

  • Which tools to trial.
  • Which workflows to target first.
  • What metrics to track (speed, defect rates, satisfaction).
  • What guardrails to put in place.
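
One lightweight way to capture these decisions is a small, version‑controlled charter that the pilot group edits together, so the agreements stay explicit and reviewable like any other change. The sketch below is purely illustrative; the field names and values are assumptions, not a prescribed format.

```python
# ai_adoption_charter.py: hypothetical sketch of a co-designed adoption charter.
# Every field below is an assumption for illustration, not a mandated schema.

CHARTER = {
    "tools_on_trial": ["<assistant A>", "<assistant B>"],  # chosen by the group, not mandated
    "target_workflows": [
        "unit test generation for one service",
        "draft PR descriptions",
    ],
    "metrics": ["lead time", "escaped defects", "perceived effort", "satisfaction"],
    "guardrails": [
        "no unreviewed AI code on safety-critical paths",
        "AI usage data is never used for individual performance ranking",
    ],
    "review_after_weeks": 6,
}
```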

When engineers help design the system, they are far more willing to give it a fair chance — even if they start out doubtful.

3.2. Offer trust upfront, as a default

In my own teams, I do not wait for people to "earn" trust before giving them autonomy. I give trust early and clearly, and I make myself responsible for what happens next.

Applied to AI, this means:

  • Trusting engineers to decide where AI is currently helpful and where it is not.
  • Trusting them to say "no" to AI on safety‑critical paths until they have evidence.
  • Trusting them to design review processes suitable for AI‑assisted code.

If adoption feels like a test of loyalty, engineers will rightly resist it. If it feels like an invitation to exercise judgment, they lean in.

3.3. Normalise trying and failing with AI, in public

Just as I normalise failure when we ship new products — documenting what did not work and what we learned — I try to normalise failed experiments with AI.

This means celebrating honest write‑ups of cases where AI slowed a team down or produced subtle bugs, as long as the team caught them and shared the lessons. The unspoken message is: "You will not be punished for taking reasonable risks with AI inside the guardrails we set together."

4. Map Where AI Helps — and Where It Hurts

One practical way to move the conversation from ideology to evidence is to build a simple map of tasks in your organisation and discuss, explicitly, where AI is likely to help or hinder.

| Task type | AI often helps | AI often hurts or adds friction |
| --- | --- | --- |
| Exploring unfamiliar code | Summarising files, proposing refactor plans, suggesting starting points. | |
| Green‑field feature prototypes | Scaffolding boilerplate, generating tests, exploring alternative designs. | |
| Highly familiar, stable codebases | Minor gains for rote tasks. | Can slow experts down as they double‑check suggestions that add little value. |
| Performance‑critical or safety‑critical paths | Assisting with test generation and documentation. | Unreviewed AI suggestions are dangerous; even small errors have large consequences. |
| Documentation and learning materials | Drafting docs, examples, tutorials, code comments. | Over‑reliance can mask real gaps in understanding if not reviewed. |
Figure 2. AI's impact on productivity is strongly context‑dependent. Engineers are more willing to adopt AI when leadership openly acknowledges both sides of this table.

Used in a workshop, this table becomes a living artefact. Teams can annotate it with concrete examples from their own work, gradually building an internal evidence base instead of arguing from abstract principles.
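
One way to keep it living is to store the map as structured data next to the code, where anyone can add an annotation through a normal pull request. The snippet below is only a hypothetical shape for that artefact; the entries are placeholders a team would replace with its own observations.

```python
# task_map.py: hypothetical shape for the living task map from Figure 2.
# All entries are illustrative placeholders, not real observations.

TASK_MAP = {
    "exploring unfamiliar code": {
        "often_helps": ["summarising files", "proposing refactor plans"],
        "often_hurts": [],
        "team_notes": ["<link to a write-up of a concrete experience>"],
    },
    "performance- or safety-critical paths": {
        "often_helps": ["test generation", "documentation"],
        "often_hurts": ["unreviewed suggestions on hot paths"],
        "team_notes": ["<link to a near-miss caught in review>"],
    },
}
```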

5. Design AI Adoption Experiments With Engineers, Not For Them

To move from theory to practice, I have found the most effective pattern is a series of deliberate, time‑boxed experiments co‑designed with the people doing the work.

A simple adoption experiment
  1. Choose a narrow, meaningful workflow. For example: unit test generation for a specific service, or draft PR descriptions for a particular codebase.
  2. Co‑define success metrics. Not only speed, but also defect rates, perceived cognitive effort, and satisfaction.
  3. Run for 4–6 weeks. Allow volunteers (including skeptics) to use AI in that workflow, keeping a shared log of surprises, failures and wins, as sketched after this list.
  4. Review the evidence together. Hold a session where engineers present the results — not leadership — and decide collectively whether to scale, adjust, or abandon this use case.
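
To make the review session concrete, it helps to keep that shared log in a structured form so simple aggregates can sit alongside the stories. A minimal sketch, assuming a schema the team would define for itself:

```python
# experiment_log.py: minimal sketch of a shared pilot log (assumed schema).
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskRecord:
    used_ai: bool
    hours_to_merge: float       # wall-clock time from start to merged change
    defects_found_later: int    # escaped defects traced back to this change
    perceived_effort: int       # 1 (easy) .. 5 (draining), self-reported

def summarise(records: list[TaskRecord]) -> dict:
    """Compare AI-assisted and unassisted tasks on the metrics the team chose."""
    groups = {
        "with_ai": [r for r in records if r.used_ai],
        "without_ai": [r for r in records if not r.used_ai],
    }
    return {
        name: {
            "tasks": len(group),
            "avg_hours_to_merge": round(mean(r.hours_to_merge for r in group), 1),
            "avg_escaped_defects": round(mean(r.defects_found_later for r in group), 2),
            "avg_perceived_effort": round(mean(r.perceived_effort for r in group), 1),
        }
        for name, group in groups.items() if group
    }
```

Self‑reported effort matters as much as the clock here: the gap between feeling faster and being faster is exactly what the experiment is trying to surface.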

The role of leadership here is not to predetermine the answer, but to guarantee psychological safety and time for serious experimentation, just as we would for a new product bet.

  • Healthy signal: more AI experiments owned by teams. Engineers suggest and lead AI pilots, including ones that conclude "not useful here yet".
  • Unhealthy signal: more AI quotas from above. Adoption is tracked in dashboards, but no one can explain how AI changed real outcomes.

6. Teach Engineers to Treat AI as a Junior Colleague

One of the more useful insights from recent survey work is that experienced developers tend to conceptualise AI not as a teacher, but as a junior colleague — one who is fast, tireless, sometimes brilliant, and frequently wrong.

I often frame AI to my teams in exactly those terms:

  • Let it propose, but you decide. AI can draft, suggest, scaffold — but a human owns the final decision and the consequences.
  • Give it boring work first. Start with boilerplate, tests, documentation, migration scripts, not with the most delicate parts of the system.
  • Demand explanations. Ask the model to justify non‑obvious changes and to highlight assumptions, just as you would with a new hire.

This framing respects engineers' expertise instead of bypassing it. It also makes their role in an AI‑rich future clearer: less typing every bracket by hand, more exercising judgment about architectures, trade‑offs and ethics.
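
To make "demand explanations" a habit rather than a memory exercise, some teams bake it into the prompt they use when asking for non‑trivial changes. The template below is only an illustrative sketch; its wording and structure are assumptions, not a standard.

```python
# review_prompt.py: illustrative prompt template for treating the assistant
# like a junior colleague who must justify its work before anything merges.

REVIEW_PROMPT = """\
You proposed a change to {file_path}.

Before I review it, answer as you would in a code review:
1. Which behaviour changes, and which invariants could this affect?
2. What assumptions are you making about inputs, concurrency and error handling?
3. Which parts are you least confident about, and why?
4. What tests would demonstrate the change is safe?
"""

def build_review_prompt(file_path: str) -> str:
    """Fill in the template for a specific file; the answers go into the PR description."""
    return REVIEW_PROMPT.format(file_path=file_path)
```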

7. Avoid the Three Classic Mistakes

If you want to convince engineers to use AI, there are three patterns of behaviour that will reliably produce the opposite result.

7.1. Confusing visibility with value

Mandating that every pull request show a certain percentage of "AI‑generated code" is a recipe for gaming the system. The correct place to measure value is in long‑term outcomes: defect trends, lead time, developer satisfaction, incident rates — not in tool usage statistics.
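
If you want a dashboard at all, point it at those outcomes rather than at tool usage. A rough sketch, with obviously placeholder numbers, of the quarter‑over‑quarter comparison that is actually worth discussing:

```python
# outcome_trends.py: sketch of measuring value in long-term outcomes instead of
# counting "AI-generated" lines. All numbers are placeholders for illustration.

QUARTERS = {
    "before": {"lead_time_days": 4.0, "incidents": 8, "escaped_defects": 20, "dev_satisfaction": 3.4},
    "after":  {"lead_time_days": 3.5, "incidents": 7, "escaped_defects": 16, "dev_satisfaction": 3.8},
}

def percent_change(metric: str) -> float:
    """Quarter-over-quarter change; for cost metrics, negative means improvement."""
    before, after = QUARTERS["before"][metric], QUARTERS["after"][metric]
    return round(100 * (after - before) / before, 1)

for metric in ("lead_time_days", "incidents", "escaped_defects", "dev_satisfaction"):
    print(f"{metric}: {percent_change(metric):+}%")
```

Even then, the trends are a conversation starter rather than a verdict, which is another reason the engineers running the experiments should be the ones interpreting them.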

7.2. Using AI as a surveillance tool

Engineers will not adopt a tool that doubles as a monitoring mechanism for management. Logging to improve safety is one thing; logging to rank individuals is another. Make it explicit, in writing, that AI usage data will not be used for performance stack‑ranking.

7.3. Offloading organisational dysfunction onto AI

As Atlassian's research suggests, AI can save developers hours per week, and yet overall productivity remains flat because organisational issues — unclear goals, poor documentation, unnecessary meetings — consume the freed time.

If leadership refuses to address those systemic issues, no AI tool will make engineers feel genuinely helped. They will correctly conclude that AI is being used as a band‑aid for structural problems.

8. In the End: Convincing by Letting Go

The paradox of leading AI adoption is that the more we try to control engineers into using AI, the more resistance we create. The way through is, in some sense, the same approach that allowed me to build strong teams in the first place: give people real freedom, back it with trust, and make it safe to try and fail in full view of others.

We will convince engineers to use AI not by promising that it will always make them faster, but by proving that we will still value their judgment when it does not.

Empowerment, trust and daring to fail — translated into the practical work of bringing AI into the hands of the very people who understand its limits best.

If we can do that, sincerely and consistently, something important happens. Skepticism does not vanish — nor should it — but it becomes a productive force. Engineers use their doubts to design better experiments, stricter guardrails and more honest measures of success. They become partners, not obstacles, in discovering where AI is genuinely valuable in software engineering, and where the most human parts of the craft must remain fully in our hands.

Tags: AI Adoption · Software Engineering · Leadership · Developer Experience · Organisational Change