It’s all ChatGPT’s fault until it’s not anymore

A few months ago, we hid the fact that we were using ChatGPT. Today, we list ChatGPT, Claude, Perplexity, Manus, and Veo3 on our org chart as “junior” contributors.
When something goes wrong, the default reaction is:
- Dev: “Shoot, I missed that bug when Claude added the retrieval feature.”
- Marketing: “ChatGPT mixed up the facts from the meeting transcript.”
- Sales/Ops: “The AI didn’t leverage the full context of the call notes.”
The pattern is the same in every department: the blame lands on the AI, not on the prompt or on the process. It isn’t about the talent of the person executing the task. It’s about the ability to prompt an LLM effectively: how well we translate the context we have into a clear, actionable request.
When prompting is weak, the output is weak, and the team looks for someone (or something) to own the mistake. We’ve moved from:
- Hiding AI usage →
- Openly advocating AI‑in‑the‑Loop (AI‑ITL) →
- Treating the model as a junior employee.
Once the model is on the team, the same rigor we apply to human contributors must apply to it. If we don’t standardize, we end up with “AI‑slop” that erodes quality. So how do we operationalize AI‑in‑the‑Loop?
a. Choose the right platform
| Platform | When to use it | What it gives you |
|---|---|---|
| OpenAI Custom GPTs | You’re already on the OpenAI stack | Fine‑tuned prompts, built‑in guardrails, version control |
| Anthropic Claude Artifacts | You prefer Anthropic’s safety‑first model | Reusable prompt templates, context‑aware chaining |
| Workflow engines (lindy.ai, n8n.io, Make.com) | You need orchestration across multiple tools | Automate data ingestion, post‑processing, and hand‑offs |
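Whichever platform wins, it helps to keep the choice reversible. Below is a minimal sketch, assuming the official `openai` Python SDK; `ModelClient`, `OpenAIClient`, the `draft` helper, and the model string are illustrative names, not part of any vendor API.

```python
# A minimal sketch of keeping the platform choice behind a thin interface.
# Assumes the official `openai` Python SDK; ModelClient, OpenAIClient, draft,
# and the model string are illustrative names, not part of any vendor API.
from typing import Protocol


class ModelClient(Protocol):
    def complete(self, prompt: str) -> str:
        """Return the model's text output for a prompt."""
        ...


class OpenAIClient:
    """Adapter over the OpenAI Chat Completions API (`pip install openai`)."""

    def __init__(self, model: str = "gpt-4o") -> None:  # model name is an assumption
        from openai import OpenAI
        self._client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self._model = model

    def complete(self, prompt: str) -> str:
        response = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content or ""


def draft(client: ModelClient, prompt: str) -> str:
    """Downstream steps depend only on ModelClient, never on a specific vendor."""
    return client.complete(prompt)
```

Swapping in Anthropic or a workflow engine then only touches the adapter; the prompt library, specs, and review gates stay the same.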
b. Define the Unit of Work
Ask yourself: What exactly must be delivered?
- A document (spec, proposal, PRD)
- A URL (published article, knowledge‑base entry)
- A zipped bundle of design assets
- A video (demo, tutorial)
- A slide deck
For each unit, write an output specification (see the sketch after this list) that includes:
- Format (Markdown, PDF, MP4, etc.)
- Style guide (tone, branding, citation rules)
- Acceptance criteria (e.g., “no factual errors > 1%”)
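The output specification can live next to the prompt as plain data. Here is a minimal sketch in Python; the `OutputSpec` fields and the sample PRD spec are assumptions for illustration, not a standard schema.

```python
# A minimal sketch of an output specification as data; the field names and the
# sample PRD spec are illustrative, not a standard schema.
from dataclasses import dataclass, field


@dataclass
class OutputSpec:
    unit_of_work: str                      # e.g. "PRD", "proposal", "demo video"
    format: str                            # e.g. "Markdown", "PDF", "MP4"
    style_guide: str                       # tone, branding, citation rules
    acceptance_criteria: list[str] = field(default_factory=list)


prd_spec = OutputSpec(
    unit_of_work="PRD",
    format="Markdown",
    style_guide="Internal tone-of-voice guide; cite meeting transcripts by date.",
    acceptance_criteria=[
        "Every requirement traces back to a meeting note",
        "No unresolved TODO markers",
        "Factual claims checked against the source transcript",
    ],
)
```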
c. Map AI‑Ops to core business functions
| Business Area | Typical AI‑ITL Task | Desired Output |
|---|---|---|
| Sales | Drafting proposals from CRM data | Polished proposal PDF |
| Legal | Generating contract drafts | Editable Word document with clause checks |
| Backend Development | Writing boilerplate API code from specs | Git‑ready repository |
| Frontend Development | Producing component skeletons from design tokens | Ready‑to‑use React/TSX files |
| UX Design | Summarising user research into journey maps | Visually formatted Figma file |
| Project Documentation & PRDs | Collating meeting notes into structured docs | Markdown PRD with traceability matrix |
By cataloging each function, you can attach the right prompt template, version‑control workflow, and quality gate to every AI‑generated artifact. AI‑ITL is no longer a “nice‑to‑have” experiment—it’s a core production line.
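One way to make that catalog executable is a small registry that pins each business function to a versioned prompt template and a quality gate. The sketch below is illustrative; the repository paths, version tags, and toy quality‑gate checks are assumptions, not a prescribed layout.

```python
# A sketch of a catalog tying each business function to a versioned prompt
# template and a quality gate; paths, tags, and checks are illustrative.
from dataclasses import dataclass
from typing import Callable


@dataclass
class AiOpsEntry:
    prompt_template: str                   # path in the prompt repo, pinned to a git tag
    output_format: str
    quality_gate: Callable[[str], bool]    # True means the draft may proceed to review


CATALOG: dict[str, AiOpsEntry] = {
    "sales_proposal": AiOpsEntry(
        prompt_template="prompts/sales/proposal.md@v1.3",
        output_format="PDF",
        quality_gate=lambda draft: "pricing" in draft.lower(),
    ),
    "prd": AiOpsEntry(
        prompt_template="prompts/product/prd.md@v2.0",
        output_format="Markdown",
        quality_gate=lambda draft: draft.lstrip().startswith("# "),
    ),
}
```

Pinning templates to a tag is what later lets you trace a bad artifact to a specific prompt version instead of to a mysterious “AI.”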
If we treat this production line casually, we risk:
- Inconsistent quality (the dreaded “AI slop”)
- Escalated blame cycles that damage morale
- Regulatory or compliance gaps when AI‑generated content is unchecked
Conversely, a disciplined AI‑Ops framework gives you:
- Predictable, audit‑ready outputs
- Faster onboarding (new hires can trust the same prompt libraries)
- Clear ownership—when something fails, you can trace it to a prompt version, not to a mysterious “AI.”
Closing Thought
If we’re going to keep AI on our team, we must manage it the way we manage any junior employee: give it a clear job description, provide the tools to succeed, and hold it to the same standards we hold our people to.
- Assess all current workflows where you delegate tasks to an LLM.
- Document the Unit‑of‑Work and acceptance criteria for each.
- Choose a platform (Custom GPT, Claude Artifacts, or a workflow engine).
- Build reusable prompt libraries and version‑control them like code.
- Implement a review gate, human or automated, so every output is checked against its spec before it ships (a minimal sketch follows).
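A minimal sketch of an automated review gate, assuming the spec boils down to required sections and a length cap; the checks and names are illustrative and would normally be derived from the output specification.

```python
# A minimal sketch of an automated review gate; the checks are illustrative and
# would normally be derived from the output specification.
def review_gate(draft: str, required_sections: list[str], max_chars: int = 20_000) -> list[str]:
    """Return a list of violations; an empty list means the draft may ship."""
    violations: list[str] = []
    for section in required_sections:
        if section.lower() not in draft.lower():
            violations.append(f"Missing required section: {section}")
    if len(draft) > max_chars:
        violations.append(f"Draft exceeds {max_chars} characters")
    return violations


ai_draft = "# Problem\n...\n# Acceptance criteria\n..."
issues = review_gate(ai_draft, required_sections=["Problem", "Acceptance criteria"])
if issues:
    # Route the draft back to the prompt owner with the specific violations attached.
    raise ValueError("Review gate failed: " + "; ".join(issues))
```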