You’re Hiring in AI? Here’s How Not to Embarrass Yourself
From fine-tuned confusion to finally knowing who does what in AI — and why guessing job titles just won’t cut it anymore.
I’m Markéta, an HR professional from the wild world of startups, with experience as an HR advisor in VC. To keep myself learning, I write a newsletter covering various topics in HR and people ops.
If you’re interested, subscribe and join me on this journey!🧡
It crept in slowly… and then all at once.
One day I was minding my own business — the next I was in a hiring meeting casually saying “fine-tuned model” like I hadn’t just Googled it in the bathroom five minutes earlier.
Let’s be honest:
“Basic tech literacy” somehow turned into “please understand the difference between vector embeddings, LLM APIs, and supervised learning… and hire accordingly.”
And if you're hiring for anything even vaguely AI-adjacent, you can't afford to get the roles mixed up anymore.
Because here’s the thing:
An AI Engineer is not the same as a Machine Learning Engineer.
And half the people calling themselves Data Scientists? Just BI analysts with stronger LinkedIn branding.
I learned this the hard way — by confidently saying things I didn’t fully grasp, and immediately regretting it.
So here’s how I ended up sorting out the basics for myself —
What each role actually does, how to spot the right person, and why titles alone won’t save you.
⚠️ Why Getting the Role Right Actually Matters
In fast-moving teams, you’re not hiring for theory — you’re hiring to solve something right now.
And when you bring in the wrong kind of “AI person,” things go sideways fast:
You thought you were hiring someone to build a working chatbot — but they want to redesign transformer architecture from scratch.
You needed dashboards and clear insights — and they pitch fine-tuning a BERT variant.
You got excited about someone who “knows AI” — turns out, they’ve never deployed a model or even touched an API.
The title on the CV might sound right.
But if the skill set doesn’t match the actual problem, you’re not just wasting time — you’re throwing off the entire roadmap.
🧭 Quick Map of Roles
🧠 Role-by-Role Breakdown
AI Engineer
They turn LLMs into actual product features. Think smart tools, assistants, or chatbots that actually work. They write prompts, connect APIs, and prototype like lightning. They don’t build models — they build with them.
✅ Good sign: They can walk you through how a feature works, from user input to API response, without opening a whiteboard.
🚩 Red flag: Fluent in GPT jargon, clueless about rate limits or what breaks in production.
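To make “they write prompts, connect APIs” concrete, here’s a toy sketch of the layer an AI Engineer typically owns: shaping user input into a prompt and handing it to an existing model. The product, wording, and `llm_client` name are all made up for illustration — this is not any real API.

```python
# Toy illustration of the "prompt + API" layer an AI Engineer lives in:
# wrap raw user input in instructions, then pass it to an existing model.

def build_support_prompt(user_question: str, tone: str = "friendly") -> str:
    """Wrap a raw user question in instructions plus one worked example."""
    return (
        f"You are a {tone} support assistant for an HR-software startup.\n"
        "Answer in two sentences or fewer.\n\n"
        "Example:\n"
        "Q: How do I reset my password?\n"
        "A: Click 'Forgot password' on the login page and follow the email link.\n\n"
        f"Q: {user_question}\n"
        "A:"
    )

prompt = build_support_prompt("Where can I download my payslip?")
# In a real product, this string would go to a hosted model, e.g.:
#   response = llm_client.complete(prompt)   # hypothetical client, not real code
print(prompt)
```

Notice there’s no model training anywhere — the skill is in the wrapping, the API plumbing, and knowing what breaks in production.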
Machine Learning Engineer
They design, train, and deploy models — often from scratch. Heavy on pipelines, data flow, and optimization. More engineering than experimentation, more backend than UI.
✅ Good sign: Has taken a model all the way to production and can explain why it’s still running (or not).
🚩 Red flag: Lives in Jupyter. Ships nothing.
🧪 Quick note: Jupyter Notebook is like a digital lab notebook for coding. It’s where data scientists and ML engineers test code, run experiments, and visualize results — all in one place. Super useful for exploration. But if someone *only* works in Jupyter and never ships to production? That’s a research habit, not a shipping one.
Data Scientist
They live for patterns. They explore, analyze, test, and explain. Sometimes they build simple models — but mostly, they turn messy data into clear direction. The good ones ask the right questions before running a query.
✅ Good sign: Knows when to model and when a basic graph is enough.
🚩 Red flag: Everything’s “AI” — even pivot tables.
Data Engineer
They make sure your data is clean, accessible, and doesn’t fall apart at 2 a.m. They build pipelines, clean chaos, and feed the rest of the AI stack. If your data is garbage, it’s usually because you don’t have one of these.
✅ Good sign: Thinks in flows, cares about documentation, asks how the data will be used.
🚩 Red flag: Delivers 200 raw tables and considers the job done.
ML Researcher
They push the edge. Read papers, write papers, invent things, tweak architectures. Ideal for deep tech and innovation teams — not so much for shipping MVPs.
✅ Good sign: Knows when cutting-edge is worth it (and when it’s not).
🚩 Red flag: Wants to re-train GPT from scratch… when you just needed a working summarizer.
MLOps Engineer
They make sure your models actually work after they’re deployed. Think: versioning, monitoring, rollback plans, retraining. They’re the reason your AI doesn’t crash at 3 a.m.
✅ Good sign: Talks about alerts, latency, and rollback strategy with confidence — and actual examples.
🚩 Red flag: Can’t tell you if a model’s accuracy has dropped… or why.
🔍 AI Engineer vs. ML Engineer: The Most Confused Pair in Tech
Let’s clear this one up, because this is the mistake most teams make first.
AI Engineer ≠ ML Engineer.
AI Engineers don’t train models — they use existing ones (like GPT) and turn them into real product features. Think product-minded builders.
ML Engineers do the opposite — they train, tweak, and deploy custom models, often from structured data.
One builds with models. The other builds the models.
So if you’re expecting a chatbot demo by Friday and you hired someone who wants to optimize batch sizes in PyTorch... yeah, it’s gonna be a long week.
📚 Glossary (for Curious Non-Tech Folks)
Just in case:
Inference – when the model gives you an answer
Fine-tuning – adapting a pre-trained model to your data
Embeddings – how models represent content as numbers
RAG – combining a model with your internal data
Prompt engineering – writing prompts that get useful results
Hallucination – when the model confidently gives wrong answers
MLOps – keeping models stable in real-world apps
Zero-shot / Few-shot – doing a task with little or no training data
Latent space – how the model “maps” knowledge
Temperature – how creative or random the model output gets
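To make one of those glossary terms tangible: temperature rescales a model’s raw scores (logits) before they become probabilities. The numbers below are invented for the demo — no real model involved — but the softmax math is the standard mechanism.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw scores into probabilities; lower temperature sharpens the top pick."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate next words
cold = softmax_with_temperature(logits, temperature=0.5)  # more deterministic
hot = softmax_with_temperature(logits, temperature=2.0)   # more creative/random
```

With low temperature the top candidate dominates (safe, repetitive answers); with high temperature the probabilities flatten out (more varied, more surprising output).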
🔗Further Reading (If You’re Actually Serious About This)
Emerging Architectures for LLM Applications – a16z
For bold and motivated HR, product, or leadership folks who want to understand how real LLM-powered apps are actually built. Smart and strategic.

Elements of AI – University of Helsinki
This is the course if you never want to feel completely lost again. It’s not light reading — but it builds real understanding from the ground up.

AI Engineer vs. ML Engineer – Michael Lyam
Still not sure who does what? This breakdown is as clear as it gets.

Understanding MLOps – Refonte
Great overview of what it actually means to do MLOps — and what skills to look for in candidates.

MLOps Engineer vs. ML Platform Engineer – Yardstick
A useful comparison of two roles that sound almost identical — but aren’t.
🎯 Final Word: Don’t Hire for the Acronym. Hire for the Problem.
There’s no such thing as a general “AI person.”
There are engineers, analysts, researchers, builders, and operators. All with totally different strengths.
Before you post that job, ask yourself:
Do I need to build with an LLM? → AI Engineer
Do I need to train a new model? → ML Engineer
Do I need insights from our data? → Data Scientist
Do I need to clean up our data mess? → Data Engineer
Do I need models that don’t fall apart in production? → MLOps Engineer
Do I want to push the edge of what’s possible? → ML Researcher