AI Is a Tool. Treat It Like One.
Tags: AI · Consulting · Career · Database Engineering · Soft Skills · Junior Engineers
I built a bartender app on a Saturday afternoon.
Nothing fancy — a Flask app backed by SQLite that tracked every bottle in my home bar. Beers, spirits, cocktails, a rotating display menu. I pulled it together with AI assistance and it worked. First weekend, running on my local machine, doing exactly what I needed it to do.
Then I started thinking about what it would take to run this thing properly. Not at home on my MacBook — actually deployed, reliable, something I'd stake my reputation on. And that's when I hit the wall.
The app had a hardcoded secret key sitting right in the source code. The database was a flat SQLite file with no error handling — one bad query and the whole thing would crash with no graceful fallback. There were three versions of the same files scattered across the project because AI had iterated with me and left the old ones behind. The deployment story was nonexistent.
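A minimal sketch of what closing those gaps could look like. The function names, table schema, and SECRET_KEY variable here are illustrative assumptions, not the app's actual code:

```python
import os
import sqlite3


def load_secret_key():
    """Read the secret key from the environment instead of the source tree."""
    key = os.environ.get("SECRET_KEY")
    if not key:
        # Fail fast at startup rather than shipping a hardcoded default.
        raise RuntimeError("SECRET_KEY is not set; refusing to start")
    return key


def get_inventory(db_path):
    """Query the bottle list; return an empty list instead of crashing
    when the database is missing, locked, or corrupted."""
    try:
        conn = sqlite3.connect(db_path)
        try:
            conn.row_factory = sqlite3.Row
            return conn.execute("SELECT name, category FROM bottles").fetchall()
        finally:
            conn.close()
    except sqlite3.Error:
        return []
```

None of this is sophisticated. It's the difference between code that runs once on your laptop and code someone else can operate.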
It worked. I couldn't fully support it.
That gap — between "it works" and "I can support it" — is the most important thing I want every engineer to understand about AI right now.
The Carpenter Didn't Fear the Power Saw
Let's start with what AI is, because a lot of the anxiety around it comes from treating it as something it isn't.
AI is a tool. A genuinely powerful one — but a tool. When power tools came along, carpenters didn't resist them because they were afraid of being replaced. The good ones picked them up, learned what they could do, and got better and faster at their jobs. The ones who refused fell behind.
The same thing is happening now. AI tools are accelerating output in ways that weren't possible a few years ago. If you're not using them, you're doing more work than you need to. Use them. Don't be afraid of them.
But here's what the carpenter analogy also teaches us: the power saw doesn't make you a carpenter. You still need to know what you're cutting, why, and what happens if you get it wrong. The tool amplifies your skill — it doesn't replace it.
Don't Let AI Build What You Can't Support
This is the one I feel most strongly about, and it comes directly from my Saturday afternoon with Flask and SQLite.
AI is remarkably good at generating code that works. What it can't do is transfer understanding. When I ran that bartender app at home, everything was fine because the stakes were zero. If it crashed, I rebooted it. If the database got corrupted, I rebuilt it. No one was affected but me.
Now imagine that same dynamic in a production environment. A junior engineer uses AI to build a database migration script. It runs correctly in testing. They push it to production. It works — until something unexpected happens at 2 a.m. on a Tuesday, and no one on the team can explain what the script actually does, why it made the choices it made, or where to start looking.
This isn't hypothetical. It's the COBOL problem playing out in real time.
There are mainframe shops right now looking at AI as a workforce solution for their COBOL codebases. Bring in new developers and have AI help them read and modify decades-old COBOL they've never seen. The logic is understandable. The risk is real.
A developer who didn't write the code and doesn't understand it is not a replacement for someone who does — regardless of what generated it. AI can help a junior engineer produce senior-looking code. It cannot give them the ten years of production incidents that taught the senior engineer what questions to ask.
The standard I apply: if your entire team cannot support what AI helped you build — if they cannot read it, debug it, explain it to a customer, and troubleshoot it at 2 a.m. — it should not go to production. Full stop.
Make it a prerequisite. Before any AI-assisted code ships, have the engineer who's responsible for it walk through it out loud. Not to prove they wrote it. To prove they understand it.
Sensitive Data Has a Definition — Know It
One of the most common questions I get from teams adopting AI tools is some version of: "What can we actually put into the prompt?"
The answer starts with a clear definition of what sensitive data actually means in your context, because the default assumption — "nothing important" — is usually wrong.
Customer PII: Names, email addresses, phone numbers, account identifiers. This one is obvious, but it's violated constantly when engineers paste query results into AI tools to debug a performance problem.
Connection strings and credentials: Production database URLs, API keys, passwords, service account tokens. These should never leave your environment in any form.
Proprietary business logic: The schema of your production database, your pricing model, your internal data structures. This is less obvious, but pasting your entire schema into a public AI tool means that schema now exists outside your control.
Regulated data: Anything covered by HIPAA, PCI, SOX, GDPR, or any other compliance framework your organization operates under. This category isn't optional — violations have legal consequences.
The practical test I use: if this data appeared in a news article, would your company have a problem? If yes, it doesn't go into a public AI tool.
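Part of that test can even be automated as a pre-prompt check. This is a rough sketch, and the pattern names and regexes are illustrative assumptions rather than any standard; a real policy needs review by your security team:

```python
import re

# Illustrative patterns only. A real deployment would cover far more
# categories and be maintained as policy, not as an engineer's snippet.
SENSITIVE_PATTERNS = [
    ("email", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),
    ("connection string", re.compile(r"\b\w+://[^\s]+:[^\s]+@[^\s]+")),
    ("api key", re.compile(r"\b(?:sk|pk|AKIA)[A-Za-z0-9_-]{16,}\b")),
]


def flag_sensitive(text):
    """Return the labels of any sensitive patterns found in a prompt draft."""
    return [label for label, pattern in SENSITIVE_PATTERNS if pattern.search(text)]
```

A check like this won't catch everything, which is the point: it flags the obvious leaks so the human review can focus on the subtle ones.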
Enterprise AI deployments with proper data agreements change this calculus — but only if you've actually read and understood the agreement. "We use Microsoft Copilot so it's fine" is not a data governance policy.
Use AI to Learn. Don't Ship What You Haven't Learned.
Here's where I'll contradict myself slightly, and I think it's worth sitting with the contradiction.
AI is one of the best learning tools I've encountered in my career. If I want to understand how something works — a new framework, an unfamiliar database feature, a deployment pattern I haven't used — I can use AI to build a working example, ask it to explain what it did and why, and iterate until I actually understand it. That's valuable. That's how I should have approached my Flask app if I'd intended to run it seriously.
The distinction that matters is what happens after you learn.
I used AI to build a home bar tracking app in an afternoon. I didn't deeply understand Flask Blueprints, or SQLite connection handling, or how Jinja2 templating decisions affect performance at scale. And that was fine — because it never had to run in production, and when I struggled with parts of it, the cost was entirely mine.
For junior engineers, this is where I want to be direct: the bumps and bruises that come from building something yourself, watching it fail, and figuring out why — those are not just pain. They are knowledge transfer. When a senior engineer tells you to watch out for connection pool exhaustion, or to never trust a backup you haven't tested, or to check your indexes before you tune your queries — that knowledge came from something going wrong. AI is going to smooth over a lot of those moments.
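To make one of those lessons concrete: for a SQLite app like mine, "never trust a backup you haven't tested" translates into verifying the copy rather than assuming it. A sketch using Python's standard sqlite3 backup API, with paths and checks that are illustrative:

```python
import sqlite3


def backup_and_verify(src_path, dest_path):
    """Copy a SQLite database, then prove the copy is readable.
    A backup you haven't checked is a hope, not a backup."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        src.backup(dest)  # online backup API; safe while src is in use
        # Verify against the copy, not the original.
        status = dest.execute("PRAGMA integrity_check").fetchone()[0]
        return status == "ok"
    finally:
        src.close()
        dest.close()
```

The lesson isn't the ten lines of code. It's that someone, at some point, restored a backup that turned out to be garbage, and you get to inherit that knowledge without the outage.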
I'm not saying that's entirely bad. I'm saying be aware of what you're trading. The engineer who has never had a production incident is not the same as the engineer who has had three and learned from all of them. AI can help you go faster. It cannot give you the scar tissue.
Use AI to learn a new skill. Build the thing. Break it. Understand what broke and why. Then — and only then — decide whether you and your team are ready to support it in production.
The Summary, Without the Fluff
Four things I believe about AI as an engineering tool:
Don't ship what you can't support. If AI built it and you can't explain it, it doesn't go to production. This applies to junior engineers, senior engineers, and consultants — including me.
Use it like a power tool. It makes good engineers faster. It doesn't make non-engineers into engineers. Pick it up, learn what it can do, and use it.
Know your data boundaries. Define what sensitive means in your organization and treat that line as non-negotiable.
Learn with it, carefully. Build things you're not comfortable with. Let AI help you understand new territory. Just don't conflate "it works in my sandbox" with "I can run this in production."
The engineers I trust most with production systems are the ones who can tell you exactly what their code does and what happens when it doesn't. AI doesn't change that standard. It just gives us more ways to build things we don't fully understand yet.
That's both the opportunity and the risk.
If your team is navigating AI adoption in a database or cloud environment — figuring out what to build, what to ship, and what questions to ask before you go to production — this is exactly the kind of work we do. Get in touch at sixcolumnsolutions.com.