When “Just Ask” Becomes Enough — The End of Prompt Engineering as We Knew It

I used to tweak my prompts like a mad scientist — agonizing over every comma, re-ordering every adjective, trying to find that “magic formulation” that would make the AI actually get it. I’d write:

“Write a short story about a lonely robot on Mars, in the style of 1950s sci-fi, with emotional depth and a hopeful ending.”

Maybe I’d get something decent. Maybe a jumble. Often, I’d try again. And again. Because some people online claimed there was a formula — a perfect prompt blueprint that would always work.

Then, just a few weeks ago, I saw the headline: “We don’t need prompt engineering anymore — just say what you mean.”

It seemed unbelievable. But I tried it anyway.

Instead of the carefully-tuned prompt, I simply wrote:

“Create five possible plot-summaries for a Mars-robot story.”

No style constraints. No “in the 1950s sci-fi tone.” Nothing “special.”

The output? Instantly — not one, but five meaningfully different story ideas. Some gritty, some wistful, some hopeful, some haunting. Not the same recycled plot over and over. Real variety. Real potential.

That’s when it hit me: maybe the whole “prompt engineering” hustle was — at least often — overhyped.


📌 What Changed Under the Hood

Recently, a research technique dubbed Verbalized Sampling shook up assumptions about prompting. According to its proponents, instead of micromanaging prompts with elaborate phrasing, we can simply shift to intent-driven requests — letting the model infer what we really want, without needing to translate that into some “perfect” prompt syntax.

In practice, that’s as simple as changing:

“Write a descriptive paragraph about a haunted house in the style of Gothic horror, with vivid imagery.”

to

“Generate three ideas for a haunted-house scene.”

Suddenly, models that once delivered formulaic or “safe” responses began producing diverse, creative, and surprising outputs. No complicated prompt logic, no fine-tuning needed, no layered instructions.


Why This Feels Like a Paradigm Shift

  • Accessibility for Everyone. If AI can respond well to simple, intent-based commands, then using it becomes less about mastering arcane prompt tricks — and more like having a conversation. That means almost anyone can get good results. No “prompt-engineer” badge required.
  • Speed & Experimentation — Like Never Before. Instead of spending minutes (or hours) refining wording, you can rapidly iterate: ask, get output, tweak intent, repeat. That’s leaner, faster, and frankly more fun.
  • More Creativity, Less Formula. For creative tasks — stories, brainstorming, design ideas — this approach yields richer diversity. It feels less like feeding a machine a template, and more like collaborating with a creative partner.

In short: it’s less about how well you write prompts, and more about how clearly you can think through your idea.


But Hold On — It’s Not Quite the Death of Prompting

I’m not claiming “prompt engineering is dead” — but I am saying the role is evolving. Here’s why:

  • As much as intent-based prompting works, some tasks still benefit from precision, constraints, context, and structure — especially in technical, factual, or high-stakes domains.
  • The broader set of techniques around optimizing AI usage — sometimes referred to as PromptOps — is rising. This is about managing prompts at scale: version control, testing, monitoring, refinement. It’s less glamorous but more sustainable for real-world, production-level use.
  • And, for people who want maximum control — for example, guaranteeing tone, structure, style, or compliance — there is still value in careful prompting.
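To make the PromptOps idea concrete, here is a minimal sketch of a versioned prompt registry in Python. Everything here (the `PromptRegistry` class, the `plot_ideas` prompt) is hypothetical, and real teams typically use dedicated tooling, but the core idea is the same: prompts are stored, versioned, and retrievable rather than pasted ad hoc.

```python
# Hypothetical sketch of a versioned prompt registry for PromptOps-style workflows.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PromptVersion:
    template: str
    created_at: str

class PromptRegistry:
    """Stores named prompt templates with full version history."""

    def __init__(self):
        self._prompts = {}  # name -> list of PromptVersion

    def register(self, name, template):
        versions = self._prompts.setdefault(name, [])
        versions.append(PromptVersion(template, datetime.now(timezone.utc).isoformat()))
        return len(versions)  # 1-based version number

    def get(self, name, version=None):
        """Return the latest template, or a specific version if requested."""
        versions = self._prompts[name]
        return (versions[-1] if version is None else versions[version - 1]).template

registry = PromptRegistry()
registry.register("plot_ideas", "Create five possible plot summaries for a {topic} story.")
v = registry.register("plot_ideas", "Generate {n} plot summaries for a {topic} story.")
print(v)  # 2 -- latest version number
print(registry.get("plot_ideas").format(n=3, topic="Mars-robot"))
```

With a registry like this, you can compare prompt versions side by side and roll back when a “tweak” quietly degrades output quality.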

In other words: prompt engineering as a flamboyant art is fading — but the underlying craft of clear communication remains more important than ever.


What This Means for You (and Me)

If you’re using AI — casually or professionally — here’s how I’d rethink it:

  • Don’t sweat the “perfect prompt.” Start with simple, intent-driven requests. Evaluate results. Iterate.
  • If you’re building AI-backed workflows, think in terms of PromptOps — versioned prompts, testing, monitoring, reuse.
  • Reserve structured prompting for when you need precision — for example, writing legal drafts, crafting marketing copy, ensuring compliance, or delivering consistent tone.
  • Embrace AI as a collaborator — not a machine you trick with clever phrasing, but a tool you communicate with.

Because the day you can just say what you mean — and get what you want — is the day prompting becomes as natural as talking. And that, I think, is worth celebrating.

What “Getting Your Data Ready for AI” Really Means — And Why It Matters

You’ve heard the hype: AI is poised to transform everything from marketing to healthcare to logistics. Yet behind every dazzling AI demo or predictive model lies a far less glamorous — but far more critical — foundation: your data.

In many ways, the difference between an AI project that soars and one that flops isn’t about the algorithms, the compute power, or the latest fancy model. It’s about whether your data is ready.

Here’s why data readiness is the unsung hero of AI — and what it takes to truly prepare your data so AI can deliver on its promise.


Why “Raw Data” Is Usually the Wrong Starting Point

Think about every system, spreadsheet, database, or log that’s built up over years in an organization. Chances are high that much of that data is messy: inconsistent formatting, missing fields, outdated entries, duplications, mismatched naming conventions, and information stored in multiple silos.

Feeding that kind of data straight into AI — especially machine learning or analytics-based systems — is a recipe for disaster. As described in discussions of data preprocessing, raw data often contains noise, errors, and structural issues that can throw off models, skew predictions, or even embed bias.

In short: the old adage stands — “garbage in, garbage out.”

That’s why one of the most important phases of any AI initiative is not designing the neural net — it’s preparing the data. As many experts note, data preparation often consumes the majority of time and resources in AI projects.


What “AI-Ready Data” Really Means: Beyond Clean Sheets

When people say “AI-ready data,” they often mean more than just “tidy CSV files.” The most reliable, useful, scalable data for AI is built around several key pillars:

✅ Technical Cleanliness and Structure

  • Consistent formatting: dates use a standard format; currency and numeric fields use unified encoding; categorical values (like “NY”, “New York”, “N.Y.”) are standardized.
  • No duplicates, no missing critical values, no corrupt or malformed entries. Cleaning and data cleansing — removing or correcting invalid data — is the cornerstone of data hygiene.
  • Proper data types and normalization: numerical values where they belong, categorical variables encoded suitably (for example with one-hot encoding or label encoding), text data cleaned and normalized (if used).
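As a small illustration of that encoding step, here is one-hot encoding written in plain Python. This is a simplified sketch; in practice you would typically reach for pandas or scikit-learn.

```python
# Illustrative one-hot encoding of a categorical column, stdlib only.
def one_hot(values):
    """Map each categorical value to a binary vector over the sorted vocabulary."""
    categories = sorted(set(values))
    index = {cat: i for i, cat in enumerate(categories)}
    vectors = []
    for v in values:
        vec = [0] * len(categories)
        vec[index[v]] = 1  # exactly one position is "hot" per value
        vectors.append(vec)
    return categories, vectors

cats, encoded = one_hot(["NY", "CA", "NY", "TX"])
print(cats)     # ['CA', 'NY', 'TX']
print(encoded)  # [[0, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
```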

🧠 Business Context and Semantic Clarity

  • The data must carry business meaning: labels, definitions, and metadata should clarify what each field represents. For example: is “date” a transaction date? A logging timestamp? A user signup date? Context matters.
  • Include metadata and lineage where possible: know where data came from (which system, when), what transformations have been applied, and how different pieces relate to each other. This is especially vital for unstructured or semi-structured data (logs, text, events, sensor data).
  • Embed business rules, constraints, and domain-specific logic when relevant — because AI doesn’t inherently understand your business. Without that context, even “clean” data can lead to meaningless or misleading insights.

🧪 Validation, Governance & Ongoing Maintenance

  • Continuous validation: even after initial cleaning and structuring, data changes. New sources get added; inputs vary; users evolve their behavior. Without ongoing checks, data drift or noise creeps in — undermining even the best models.
  • Governance and compliance: data must be treated with care — privacy, security, access control, versioning, audit trails. Especially if dealing with sensitive or regulated data, you need governance frameworks in place before AI sees it.
  • Scalable architecture & accessibility: data should live in accessible, well-organized storage — data warehouses, data lakes, or proper data pipelines rather than siloed spreadsheets. This ensures the data remains usable across teams and systems over time.

In other words: “AI-ready” means clean, contextualized, validated, accessible — and aligned with business meaning.


From Chaos to Clarity: A Practical Roadmap

If you were building a house, you wouldn’t skip laying a foundation just to start decorating. Treat preparing data for AI as building that foundation. Here’s a practical roadmap you can follow, whether you’re a solo developer, a data scientist, or part of a business team.

1. Start with Clear Objectives & Use Cases

Before touching a dataset, ask: What problem am I trying to solve with AI?

  • Are you building a recommendation system? Forecasting sales? Doing anomaly detection? Or powering a language-based chatbot?
  • Each use case demands different data — customer behavior, transaction histories, logs, usage metrics, text data, etc. Pull only what’s relevant.
  • Don’t build AI “just because.” Let the business need drive the data — not the other way around.

This alignment ensures you aren’t preparing massive, noisy datasets “just in case” — but building with purpose.

2. Inventory & Audit Your Data Sources

Map out where your data lives. Databases, legacy systems, spreadsheets, logs, third-party APIs, user-generated content — list them all.

  • For each source: what data types are there? Structured? Unstructured? Semi-structured?
  • What metadata do you have (timestamps, source IDs, user IDs, origin)?
  • What’s the current state: clean, messy, incomplete, duplicated?

This audit tells you whether you’re starting with garbage — and roughly how much work the cleanup will take.

3. Clean, Normalize & Transform

With inventory in hand, begin the process of cleaning and normalizing data:

  • Remove duplicates, handle missing values appropriately (drop rows, impute values, or reject depending on context), standardize formats (dates, strings, numbering).
  • Normalize categories: ensure consistent labeling (e.g., “NY,” “N.Y.,” “New York” — pick one).
  • For textual or unstructured data: clean noise, standardize encoding, tokenize if needed, possibly add metadata (e.g., capture source, timestamp, user).
  • Convert data into usable types: categorical encoding, numerical normalization, text preprocessing, feature engineering if needed.

At this phase, you’re essentially performing “data wrangling” (a.k.a. data munging) — transforming messy real-world inputs into clean, structured datasets that can be consumed reliably.
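A toy version of such a wrangling pass might look like this in Python. The field names, alias table, and date formats are illustrative assumptions, not a prescription:

```python
# A toy data-wrangling pass: dedupe, standardize labels, normalize dates.
# Field names and the alias table are illustrative assumptions.
from datetime import datetime

STATE_ALIASES = {"NY": "New York", "N.Y.": "New York"}

def clean_records(records):
    seen = set()
    cleaned = []
    for rec in records:
        state = STATE_ALIASES.get(rec["state"].strip(), rec["state"].strip())
        # Parse either "2024-03-01" or "03/01/2024" into ISO format.
        raw = rec["date"].strip()
        for fmt in ("%Y-%m-%d", "%m/%d/%Y"):
            try:
                date = datetime.strptime(raw, fmt).date().isoformat()
                break
            except ValueError:
                continue
        else:
            continue  # reject rows with unparseable dates
        key = (rec["id"], state, date)
        if key in seen:
            continue  # drop duplicates that only differ in surface formatting
        seen.add(key)
        cleaned.append({"id": rec["id"], "state": state, "date": date})
    return cleaned

rows = [
    {"id": 1, "state": "N.Y.", "date": "03/01/2024"},
    {"id": 1, "state": "New York", "date": "2024-03-01"},  # duplicate after cleaning
    {"id": 2, "state": "TX", "date": "not-a-date"},        # rejected
]
print(clean_records(rows))
```

Note the three behaviors: labels are standardized before deduplication (so “N.Y.” and “New York” collapse into one record), dates are normalized to ISO format, and unparseable rows are rejected rather than silently kept.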

4. Add Metadata and Context — Make it Meaningful

Clean numbers alone don’t make good AI inputs if they lack context. You need to add meaningful metadata:

  • What does each field represent? Date? Transaction? Event? Log?
  • What’s the relation between fields — e.g., user → transaction → timestamp → product?
  • What business rules apply? For example: a “return” flag means different things depending on time since purchase; discount codes may only reflect promotions; customer IDs may need anonymization.

This context transforms data from generic values into business-aware records that AI can use to yield insights relevant to your real world.

5. Validate, Test & Monitor — Before and After Deployment

Even the best-cleaned data can degrade over time. New data may arrive with different formats; sources may change; users may input unexpected values.

  • Build validation pipelines: run checks for missing values, duplicates, out-of-range entries, inconsistent formats, unexpected categories.
  • Use version control or data lineage tracking: know which version of data was used for training, when it was modified, and by whom. This helps with reproducibility, auditing, and debugging.
  • Monitor data drift and quality over time — especially after deploying models. A model trained on clean, well-structured data can quickly become unreliable if fed messy, evolving inputs.
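A validation pipeline doesn’t have to start complicated. Here is a minimal sketch in Python; the required fields and value range are illustrative assumptions you would replace with your own rules:

```python
# Minimal validation pass: returns a list of human-readable issues.
# The required fields and allowed range are illustrative assumptions.
def validate(records, required=("id", "amount"), amount_range=(0, 10_000)):
    issues = []
    seen_ids = set()
    for i, rec in enumerate(records):
        for field in required:
            if rec.get(field) is None:
                issues.append(f"row {i}: missing {field}")
        amount = rec.get("amount")
        if isinstance(amount, (int, float)) and not (amount_range[0] <= amount <= amount_range[1]):
            issues.append(f"row {i}: amount {amount} out of range")
        if rec.get("id") in seen_ids:
            issues.append(f"row {i}: duplicate id {rec['id']}")
        seen_ids.add(rec.get("id"))
    return issues

problems = validate([
    {"id": 1, "amount": 50},
    {"id": 1, "amount": -5},   # duplicate id, out-of-range amount
    {"id": 2},                 # missing amount
])
print(problems)
```

Run on a schedule against fresh data, even a check this small will surface drift and schema changes long before a model silently degrades.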

6. Ensure Governance, Security & Ethical Compliance

Data readiness isn’t just technical — it’s also organizational.

  • Classify data by sensitivity: public, internal, confidential, restricted. Apply appropriate access controls, encryption, anonymization/pseudonymization when needed.
  • Maintain audit trails, ownership logs, and data-access policies. Know who changed what, and when.
  • Especially for regulated domains (healthcare, finance, personal data), ensure compliance with privacy laws, data-handling standards, and ethical guidelines before using the data in AI systems.

7. Build a Scalable Architecture / Pipeline — Don’t Treat It as a One-Off

If your AI project is more than a one-time experiment — which it should be — build data infrastructure that supports ongoing operations:

  • Centralize data storage in a data warehouse, data lake, or data lakehouse rather than using ad-hoc spreadsheets or file-shares.
  • Automate data ingestion, cleaning, transformation, validation, and delivery into ML pipelines. Use ETL/ELT tools, scheduled jobs or data-pipeline frameworks.
  • If building machine learning systems — think in terms of a feature store: a centralized repository of curated, preprocessed “features” that models can rely on. This increases reuse, consistency, and collaboration across teams.

What Happens When Data Isn’t Ready — And Why Many AI Projects Fail

Putting off data preparation might seem like a shortcut — after all, why spend weeks cleaning data when you could be playing with models instead? But in practice, this shortcut almost always leads to failure — or disappointing results.

  • Inaccurate or misleading predictions: models might learn from noise, errors, or inconsistencies. Garbage in, garbage out.
  • Bias, unfairness, or unpredictability: messy data often hides skewed distributions, missing values, duplicated records — all of which can introduce bias or unstable behavior.
  • Poor interpretability and trust: without metadata and context, stakeholders can’t understand where insights come from — which undermines trust in AI outputs.
  • Maintenance nightmares and technical debt: one-off scripts and “dirty” pipelines become brittle, hard to reproduce, and easy to break when data or formats change.
  • Wasted time and resources: the model might train fast — but yield poor results. Or it might require constant debugging. Either way, you end up pouring hours into chasing problems caused by bad data.

More often than not, the root cause of a failed AI pilot isn’t the algorithm — it’s the data.


The Competitive Advantage of Being Data-Ready

Here’s the upside: organizations that invest early and properly in data readiness often get disproportionately high returns.

  • Faster time to value: once data pipelines and governance are in place, new experiments — new models — can be spun up quickly with confidence.
  • Greater reliability & trust: clean, contextualized, validated data leads to robust, explainable, and repeatable AI outputs. Stakeholders can trust insights and act on them.
  • Scalability: as data grows or use cases expand, having a solid infrastructure (feature stores, data lakes, pipelines) makes it easier to onboard new data sources or build new AI systems.
  • Cost-efficiency over time: yes, preparing data takes effort. But it avoids the far greater cost of debugging, retraining, failures, or incorrect decisions based on poor insights.

In short: those who treat data as a strategic asset — not a side detail — make AI a sustainable, value-generating capability, not a gamble.


Final Thoughts: Treat Data Preparation Like Building a House — Not Painting Walls

When I think about “getting data ready for AI,” I like to imagine building a house. Some people rush to pick paint colors and furniture — but you wouldn’t skip pouring the foundation or checking the plumbing. Without them, your house might collapse or leak.

Similarly: don’t rush to deploy flashy AI models. First, build a foundation.

  • Clean your data.
  • Add context.
  • Build pipelines.
  • Put governance in place.
  • Monitor, maintain, and scale.

Once you do that, AI doesn’t just become possible — it becomes powerful, trustworthy, and repeatable.

Because at the end of the day: AI isn’t magic — it’s data, done right.

Build Your Own Small Language Model — Because Sometimes “Small” Is All You Need

In the age of mega-models with hundreds of billions of parameters, it’s easy to get swept up by grandiosity. But what if what you really need is something much smaller: a compact, purpose-built language model that runs on modest hardware, privately stores your data, and does just enough? That’s the idea behind a small language model (SLM) — and building one from scratch might be more within reach than you think.

Here’s why SLMs matter — and how to build one — step by step.


🎯 Why SLMs? The Case for “Small but Mighty”

Large language models (LLMs) grab headlines — but they come with tradeoffs: massive compute needs, high cost, long training times, and often, privacy/compliance headaches if you’re using external APIs.

SLMs, by contrast, offer a different set of advantages:

  • Efficiency & accessibility — SLMs can run on a single GPU or even a CPU (especially with quantization), drastically lowering hardware requirements.
  • Cost-effectiveness & speed — With fewer parameters and lighter architecture, training or fine-tuning SLMs is faster, cheaper, and often practical even on personal computers.
  • Customizability & privacy — Because you control the data and the model, it’s easier to tailor the SLM to a specific domain (e.g., internal docs, code base, specialized jargon) — without sending sensitive data to external servers.
  • Simplicity & focus — Instead of aiming for a universal “jack-of-all-trades” LLM, you can build an SLM optimized for a narrow, well-defined task — which often means better performance for that task.

As one guide notes, SLMs aren’t about replicating the full power of GPT-like titans — they’re about providing practical, usable AI where it matters.

That said — building an SLM isn’t trivial. It requires careful design, good data, and some engineering discipline.


🧩 What Counts as an “SLM”? Defining the Scope

There’s no universal threshold, but typically an SLM is a language model with significantly fewer parameters than state-of-the-art LLMs — often from a few million to a few hundred million.

For example:

  • A minimal SLM might have 10–15 million parameters, aiming for extremely light usage and narrow tasks.
  • “Mid-size” SLMs might reach tens or hundreds of millions of parameters — enough to handle general language tasks (generation, summarization, Q&A), albeit with lower capacity than full-blown LLMs.

Because of their lighter footprint, SLMs are often trained or fine-tuned on domain-specific datasets (internal docs, code repositories, specialized corpora) — making them highly customized for the task at hand.


🔬 Building Blocks: The Core Steps to Build an SLM

Here’s a practical roadmap — inspired by tutorials and community efforts — for building a small language model from scratch (or near-scratch).

1. Define the Goal & Scope

Before writing a single line of code, ask:

  • What will the SLM be used for? A chatbot over internal documentation? Code completion? Domain-specific summarization? FAQ answering?
  • What kind of text & tasks does it need to handle? Short messages? Long-form text? Specialized vocabulary? Code?
  • What are your constraints: hardware (GPU vs CPU), time, data availability, privacy/compliance needs, acceptable latency, inference environment.

Starting narrow helps — the smaller and more specific your use case, the more realistic it is to build a robust SLM with limited resources.

2. Collect & Curate Your Dataset

Your “model intelligence” only comes from data. For an SLM designed for a specialized domain, it’s often best to compile a bespoke dataset relevant to exactly what you want the model to understand. That might mean:

  • Internal documents, manuals, code repos, FAQs, support tickets, reports, transcripts — depending on your domain.
  • Scraping or exporting existing textual data (web pages, logs, markdown files, PDFs — whatever is relevant).
  • For generative tasks, you might also craft or collect diverse “examples” or “templates” (stories, question-answer pairs, dialogues, instructions) to give the model richer patterns.

Once you collect raw text, cleaning and normalization are essential: remove HTML or markup, strip extraneous whitespace, normalize encodings, eliminate weird characters or artifacts — aim for consistent, clean, human-readable text.

Also consider splitting the data into training vs validation (or test) sets, so you can later check whether your model generalizes or just memorizes.

3. Tokenization — Bridge from Text to Numbers

Language models don’t understand raw text; they work on sequences of tokens (subword units, bytes, or characters), represented as numbers. That’s why a tokenizer is a critical component.

  • If you’re fine-tuning an existing model, you can reuse its tokenizer (e.g. from a standard pre-trained model).
  • If you’re building from scratch, you might train your own tokenizer (e.g. a Byte-Pair Encoding tokenizer, or SentencePiece) over your dataset — to better match your domain’s vocabulary and usage patterns.
  • Once the tokenizer is ready, encode your entire dataset into token-ID sequences (and optionally save them in efficient binary format for fast loading).

This tokenizer + tokenization step is more than boilerplate — it shapes how well your SLM “understands” the language in your domain.
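To make the text-to-numbers step tangible, here is a toy word-level tokenizer in Python. Real SLMs almost always use subword tokenizers (BPE, SentencePiece), so treat this as a simplified stand-in for the same pipeline: build a vocabulary, encode text to IDs, decode IDs back to text.

```python
# Toy word-level tokenizer: a stand-in for the real subword pipeline.
class WordTokenizer:
    def __init__(self, corpus, unk="<unk>"):
        # Build a sorted vocabulary from the training corpus; ID 0 is reserved
        # for the unknown token.
        words = sorted({w for line in corpus for w in line.lower().split()})
        self.unk = unk
        self.vocab = {unk: 0, **{w: i + 1 for i, w in enumerate(words)}}
        self.inverse = {i: w for w, i in self.vocab.items()}

    def encode(self, text):
        return [self.vocab.get(w, self.vocab[self.unk]) for w in text.lower().split()]

    def decode(self, ids):
        return " ".join(self.inverse[i] for i in ids)

tok = WordTokenizer(["the robot woke on mars", "the red dust settled"])
ids = tok.encode("the robot saw mars")
print(ids)
print(tok.decode(ids))  # "the robot <unk> mars" -- unseen words map to <unk>
```

The `<unk>` fallback is the behavior worth noticing: any word outside the training vocabulary collapses into a single unknown token, which is exactly the loss of information subword tokenization is designed to soften.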

4. Choose Your Model Architecture

You have two main paths here:

  • Fine-tune an existing model: start from a pre-trained (or “off-the-shelf”) language model (small or mid-size) — e.g. a small Transformer — then re-train it (fine-tuning) on your curated dataset. This is often the most practical, resource-efficient route.
  • Train from scratch: define a minimalist architecture (e.g. a small Transformer with a few layers, small hidden size, fewer attention heads), and train it end-to-end from your data. This gives maximal control, but requires more effort and possibly more compute.

Many practical guides and open-source notebooks follow a custom Transformer-from-scratch approach: simple multi-head self-attention blocks, feed-forward layers, positional encodings, token embeddings.

For example, one publicly available notebook uses a lightweight dataset (short stories), BPE tokenization, binary dataset storage, and a minimal custom Transformer — all runnable on a single GPU.
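The heart of those self-attention blocks is scaled dot-product attention: softmax(QKᵀ/√d)·V. Here is a single-head version in plain Python, stripped of learned projections, masking, and batching, purely to show the computation (real implementations use PyTorch or a similar tensor library):

```python
# Single-head scaled dot-product attention in plain Python (illustration only;
# real models use tensor libraries and learned projection matrices).
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Q, K, V: lists of vectors (seq_len x d). Returns (outputs, weights)."""
    d = len(K[0])
    outputs, weights = [], []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        # Output is the attention-weighted mix of the value vectors.
        out = [sum(wj * v[dim] for wj, v in zip(w, V)) for dim in range(len(V[0]))]
        outputs.append(out)
        weights.append(w)
    return outputs, weights

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
outs, ws = attention(Q, K, V)
print(ws)  # each query attends most to its matching key
```

Each row of the weight matrix sums to 1, and each output is a convex mix of the value vectors: that mixing, stacked with feed-forward layers and embeddings, is essentially what the from-scratch notebooks build.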

5. Training the Model — Patience, Engineering & Monitoring

With data, tokenizer, and architecture in place, training begins. Here’s what you should pay attention to:

  • Batching & memory management: especially on limited hardware, you need to tune batch size, sequence length, memory allocation carefully — many toy SLM setups do “just enough” to fit in GPU/CPU memory.
  • Hyperparameter tuning: learning rate, number of layers, hidden size, number of heads, context window, dropout, etc — all affect the balance between model capacity, generalization, overfitting, and resource use.
  • Validation & early stopping: use your held-out validation set (or split) to detect overfitting, check model progress (loss curves, perplexity), and ensure the model doesn’t just memorize but generalizes.
  • Iteration & refinement: building a good SLM rarely works well on the first try. You may need to tweak data cleaning, tokenization, context length, or architecture design to get stable, coherent output.
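The early-stopping logic from the list above is simple enough to sketch directly. Here the per-epoch validation losses are precomputed to keep the example self-contained; in a real loop each loss would come from evaluating the model on the held-out set:

```python
# Early-stopping sketch: stop once validation loss stops improving.
def train_with_early_stopping(val_losses, patience=2):
    """val_losses: per-epoch validation losses (precomputed here for illustration).
    Returns the best epoch and the best loss seen."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0  # improvement: reset patience
        else:
            waited += 1
            if waited >= patience:
                break  # no improvement for `patience` epochs: stop training
    return best_epoch, best

stop_epoch, best_loss = train_with_early_stopping([2.1, 1.7, 1.5, 1.6, 1.65, 1.64])
print(stop_epoch, best_loss)  # 2 1.5
```

In practice you would also checkpoint the model weights at each improvement, so the model you keep is the one from the best epoch, not the last one.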

Training a custom SLM is often more time-consuming than you expect — but the resulting control, privacy, and custom alignment to your domain can make it worthwhile. Many articles note that training a production-ready custom SLM can take months of work for a small team — though simpler experiments or domain-specific prototypes can be done much faster.

6. Evaluate, Test & Fine-Tune / Iterate

Once you have a trained SLM, don’t assume it’s “ready.” Evaluate it thoroughly:

  • Test with prompts typical of your intended use — not just generic “hello world” text.
  • Check for coherence, consistency, hallucinations, domain-appropriateness, bias or errors.
  • If needed — fine-tune further on more data, or add “instruction tuning” / domain-specific examples to improve behavior (especially for chatbots or Q&A use cases). Some community guides even walk through alignment and instruction-tuning for custom SLMs.
  • Prepare for maintenance: as your underlying data evolves (new documents, updated info), you may need to retrain or re-fine-tune periodically to keep the model relevant.
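One concrete evaluation metric mentioned earlier is perplexity: the exponential of the average negative log-likelihood the model assigns to held-out tokens (lower is better). A minimal computation, assuming you can extract per-token probabilities from your model:

```python
# Perplexity from the probabilities a model assigned to each held-out token.
import math

def perplexity(token_probs):
    """token_probs: probability the model gave to each actual next token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that is uniformly uncertain over 4 choices has perplexity 4.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
# A model that is always certain and correct has perplexity 1.
print(perplexity([1.0, 1.0]))
```

Tracking this number on the same held-out set across training runs gives you a consistent, prompt-independent way to compare model versions.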

⚠️ What SLMs Can’t (Easily) Do — Tradeoffs and Limitations

SLMs are powerful — but there are tradeoffs, and it’s important to know what you’re giving up.

  • Limited capacity & generality. Because they have fewer parameters, SLMs often struggle with very complex language tasks, deep reasoning, long-term dependencies, or very diverse domains. They work best when scoped narrowly.
  • Domain-specific bias / overfitting. If your dataset is too narrow or not sufficiently representative, the model may overfit to quirks, produce repetitive or shallow outputs, or fail to generalize beyond narrow patterns.
  • Need for quality data & good tokenization. Garbage in → garbage out. Without careful data cleaning, normalization, adequate tokenization, and thoughtful preprocessing, results will suffer — more so than with large, pre-trained models.
  • Training & engineering complexity. Building from scratch means you need familiarity with ML tooling (e.g. PyTorch/TensorFlow), model design, training loops, memory management — even for a small model. For many, starting with fine-tuning an existing model may be more practical.
  • Maintenance & drift. Over time, domain knowledge may evolve, data may change, or user expectations shift — requiring retraining or continuous updates to keep the SLM relevant.

In other words: SLMs trade breadth and generality for efficiency, privacy, and specificity. For many real-world tasks, that’s exactly what you want — but you’re unlikely to get “universal intelligence.”


🧰 When an SLM Makes Sense — Use Cases That Play to Its Strengths

SLMs shine in scenarios where:

  • You have domain-specific, self-contained data (internal documentation, code repos, specialized vocabulary).
  • You care about privacy or compliance — especially if data can’t leave your infrastructure.
  • You want low-cost, fast inference — perhaps running on modest hardware or embedded systems.
  • Your tasks are relatively narrow and well-defined — e.g., FAQ bots, code completion, domain-specific note summarization, internal knowledge base search, small-scale automation.
  • You prefer full control over the model, rather than relying on black-box APIs or external services.

In these contexts, an SLM isn’t just a compromise — it can be the optimal solution. As one primer puts it: when a simple tool (like scissors) does the job better than a chainsaw, that’s all you need.


🧠 The Bigger Picture — SLMs Are Part of a Growing Ecosystem

The recent boom in generative AI hasn’t excluded smaller models — in fact, there’s growing recognition that “size isn’t always the point.” Lightweight, efficient, privacy-aware, domain-specific models are becoming more relevant as organizations realize they don’t always need massive, general-purpose LLMs.

Techniques like knowledge distillation, pruning, quantization, and domain-specific fine-tuning are expanding the capabilities of SLMs — enabling them to punch above their weight while remaining efficient.
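Of those techniques, quantization is the easiest to see in miniature. Here is a toy symmetric int8 scheme in plain Python; real frameworks handle this per-tensor or per-channel with calibrated scales, so this only shows the core idea of trading precision for size:

```python
# Toy symmetric int8 quantization: map floats into [-127, 127] via one scale.
def quantize_int8(weights):
    # Scale so the largest-magnitude weight maps to +/-127
    # (falls back to 1.0 for an all-zero input).
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.52, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, max_err)
```

Each 32-bit float becomes an 8-bit integer plus one shared scale, roughly a 4x size reduction, at the cost of a bounded rounding error of at most half the scale per weight.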

Meanwhile, educational and open-source resources — from minimal Transformer-from-scratch notebooks to community guides — make the path to building your own SLM accessible even without a large compute budget.

In short: SLMs are no longer “toy projects.” For many real-world tasks, they’re a smart, pragmatic, and powerful choice.


✅ Final Thoughts: If You Think You Need a Language Model — Ask First, “Do I Need It Big?”

Before you rush to fine-tune GPT-class models or build massive architecture, stop and ask: What do I really need?

If what you need is domain-specific knowledge, privacy, cost-efficiency, and reasonable performance — an SLM might be more than enough.

Building a small language model is not trivial — it’s a craft: you gather the right data, clean it, tokenize it wisely, choose suitable architecture, train patiently, evaluate rigorously, and maintain thoughtfully.

But in return, you get:

  • A model that lives under your control.
  • A tool tailored to your domain.
  • Efficient and cost-effective execution.
  • Ownership over both data and behavior.

So yes — the age of giant, monolithic LLMs isn’t the only path forward. Sometimes, what you really want is something small, nimble, and purpose-built. And that’s why building an SLM from scratch is worth considering.

Why I Do This: Training Companies to Use AI — Not Just Admire It

Over the last few years, I’ve worked as a tutor / trainer for a broad spectrum of organizations — from small startups with 5 people to large enterprises with hundreds or thousands of employees. The journey of taking a company from “What is that AI thing, again?” to “We deploy models, use AI in daily workflows, and trust our data-driven decisions” is rarely smooth — but when it works, it creates real, lasting value.

And that’s why I always tell my clients: AI isn’t magic. It’s a people skill, a culture shift, and a process investment. You can’t just license a fancy tool and flip a switch — successful adoption begins (and ends) with training people.

Here’s how I structure that journey — and why companies who treat AI training as a core part of their growth strategy end up ahead.


Getting Everyone Speaking the Same Language: AI Literacy First

One of the first things I do, regardless of company size, is run an “AI literacy bootcamp.”

  • Every team — engineering, sales, marketing, operations, HR — gets the same foundational overview of what AI/ML is, what it isn’t, what it can realistically do today, and where it often fails. This includes basic concepts (data, models, predictions, inference, bias, limitations).
  • The goal: build a shared baseline understanding so nobody confuses hype with reality. This levels the playing field — people stop feeling intimidated or skeptical, and instead start seeing concrete possibilities.

Why does this matter? Because if only a few “techie” folks understand AI, then adoption remains siloed, projects stall, and broader teams don’t buy in. I’ve seen many companies waste money that way.

Having everyone fluent in basic AI concepts also helps break down resistance. When non-technical staff realize AI isn’t “some black-box magic,” but a set of tools — they start asking useful questions, contribute ideas, and even influence business-relevant use-cases.


Tailoring Training to the Company’s Business Needs — Not Just Teaching ML for ML’s Sake

It’s tempting (and common) for companies to say: “We want everyone trained in machine learning.” But in reality, they rarely need everyone to be a data scientist — they need people who know how AI applies to their role.

When I onboard a client, I first lead a workshop:

  1. We map current workflows, pain points, and opportunities.
  2. We ask: Which tasks could AI realistically improve? Where is data abundant? Where is human judgement critical?
  3. We set clear, measurable goals: e.g. “reduce report-preparation time by 50% using ML,” or “use AI to triage support tickets and reduce average response time to under 2 hours.”
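A measurable goal like the ticket-triage example above doesn’t have to stay abstract. As a rough illustration (the tickets, labels, and category names here are invented for the sketch), a tiny Naive Bayes classifier built from scratch shows the core idea that a pilot team would later replace with a proper NLP stack:

```python
from collections import Counter, defaultdict
import math

def tokenize(text: str) -> list[str]:
    return text.lower().split()

class TicketTriage:
    """Tiny multinomial Naive Bayes text classifier (illustration only)."""

    def __init__(self) -> None:
        self.word_counts: dict[str, Counter] = defaultdict(Counter)
        self.label_counts: Counter = Counter()

    def train(self, tickets: list[tuple[str, str]]) -> None:
        for text, label in tickets:
            self.label_counts[label] += 1
            self.word_counts[label].update(tokenize(text))

    def predict(self, text: str) -> str:
        words = tokenize(text)
        total = sum(self.label_counts.values())
        vocab = len({w for c in self.word_counts.values() for w in c})
        best_label, best_score = "", float("-inf")
        for label, label_n in self.label_counts.items():
            # log prior + log likelihoods with add-one (Laplace) smoothing
            score = math.log(label_n / total)
            counts = self.word_counts[label]
            n = sum(counts.values())
            for w in words:
                score += math.log((counts[w] + 1) / (n + vocab))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Invented training data: (ticket text, category)
triage = TicketTriage()
triage.train([
    ("cannot log in password reset", "account"),
    ("login page error after password change", "account"),
    ("invoice charged twice refund", "billing"),
    ("payment failed card declined invoice", "billing"),
])
print(triage.predict("my card was declined on the invoice"))  # prints "billing"
```

The point of a sketch this small is pedagogical: non-technical staff can read it end to end, see that “AI triage” is counting and probability rather than magic, and then discuss sensibly what a production version would need (more data, evaluation, human review of low-confidence predictions).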

Then — and only then — I design a custom learning pathway. For example:

  • For marketing teams: brief sessions on generative-AI tools for content creation, customer-segmentation models, AI-driven A/B testing.
  • For operations: training on data-analysis tools, basic predictive modeling, anomaly detection workflows.
  • For engineering/data teams: deeper modules on model training, evaluation, versioning, deployment, monitoring.

The advantage: companies end up learning what they need, not what seems neat or trendy.

This approach aligns with research showing that upskilling works best when it’s tied directly to business-aligned goals, not generic “AI literacy” ambitions.


Building a Culture of Learning — And Reducing Fear of AI

A recurring challenge in my experience: many employees — especially non-technical ones — are fearful. Not of learning AI, but of what it might mean for their jobs.

To combat that, I focus on psychological safety and empowerment:

  • I present AI as a tool to amplify human effort, not replace it — helping people do more, faster, or with better precision.
  • I include hands-on, low-stakes workshops: sandboxed environments where people can experiment with AI tools (e.g. generative text tools, simple data-analysis dashboards) without fear of messing up real projects.
  • I encourage peer learning and collaboration: people from different teams share their ideas, successes, and concerns. The goal: create internal “AI ambassadors” — staff who get excited and lead small-scale experiments.

This social and cultural side of training is critical. According to best practices for organizational AI adoption, fostering willingness to learn and giving people autonomy over their learning journey is a key success factor.


From Training to Practice: Embedding AI into Daily Workflows

Training alone won’t transform a company — integration does. That means after the training sessions, I work with leadership and individual teams to embed AI tools into actual workflows.

This often involves:

  • Defining where AI can have quick wins (e.g. automating data cleaning, report generation, content drafts, ticket triage).
  • Setting up pilot projects to test AI-driven workflows.
  • Creating governance guidelines: how to use AI ethically, how to validate model outputs, when to use human oversight.
  • Training on maintenance and monitoring, especially if models are deployed in production: someone (or a team) needs to own evaluation, retraining, data drift detection, performance tracking.
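The “data drift detection” item above is exactly the kind of task a newly trained team can own. Here is a minimal Python sketch of one common drift check, the Population Stability Index (PSI); the bin count, thresholds, and simulated data are illustrative assumptions to be tuned per use case:

```python
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    # Bin edges are fixed from the baseline (training-time) distribution.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    # Proportions per bin; a small epsilon avoids log(0) and division by zero.
    eps = 1e-6
    base_pct = base_counts / base_counts.sum() + eps
    curr_pct = curr_counts / curr_counts.sum() + eps
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Simulated example: production data has drifted away from training data.
rng = np.random.default_rng(42)
train_feature = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live_feature = rng.normal(0.5, 1.0, 10_000)   # shifted production distribution

# Common rule of thumb (an assumption, not a standard):
# < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift.
print(f"PSI = {psi(train_feature, live_feature):.3f}")
```

A check like this, run on each input feature on a schedule and wired to an alert, is a realistic first monitoring responsibility for the team that owns a deployed model.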

Embedding AI as part of “business as usual” — not a one-off experiment — helps ensure value is sustained.

This approach follows what many experts recommend: treat AI adoption as a change-management problem, not just a tech upgrade.


The Long Haul: Continuous Learning & Adaptation

One of the biggest mistakes I see: companies treat AI training as a one-time event. They run a few workshops, check “done,” and move on. Inevitably, that leads to stagnation.

Instead, I encourage clients to view AI upskilling as continuous, for these reasons:

  • AI evolves fast. New tools, better models, shifting best practices — what was cutting-edge a year ago may already be outdated. Staying static means falling behind.
  • Business needs evolve. New challenges, changing markets, new data sources — companies need agility. A workforce trained only once loses relevance quickly.
  • Scale and growth. As companies get bigger, more employees join and new departments are added — all needing onboarding. Having a repeatable training infrastructure (a “training pipeline,” not just a one-off workshop) is vital.

The organizations that thrive — in my experience — are those that commit to building a culture of learning: regular refresher courses, internal knowledge-sharing sessions, AI hackathons, open forums for experimentation, and evolving governance.

This aligns with modern thinking: successful AI upskilling is less about a one-off training push and more about building organizational resilience through ongoing education and adaptability.


What I’ve Learned as a Trainer: Mistakes, Pitfalls & What Works

After hundreds of training engagements, some patterns stand out — and some lessons were hard-earned.

Mistake 1: Starting with technology, not problems.

Companies often ask: “Teach us machine learning.” But without a clear sense of why — what problem they are trying to solve — training becomes academic. Worse: models are built for the sake of building models, but never see production.

Mistake 2: Training only technical staff.

If only engineers or data scientists are trained — while sales, marketing, operations remain untouched — adoption stalls. Many opportunities happen outside code: think content generation, customer insights, process automation, or decision support. Skipping non-technical teams leaves value on the table.

Mistake 3: Treating AI as a “set and forget.”

Without maintenance, monitoring, and governance, AI deployments decay. Data drifts, models degrade, and hype collides with reality. Training that fails to emphasize long-term upkeep wastes time and money.

What works:

  • Start with use cases and business value. Let problems drive training.
  • Build cross-functional learning paths, tailored to roles, needs, and technical comfort.
  • Foster experimentation and psychological safety — let people play, fail, try again, invent.
  • Invest in governance, ethics, and maintenance, not just model-building.
  • Promote ongoing learning, not single-event training; embed AI literacy into company culture.

Why This Matters Now — and What’s at Stake

AI isn’t a niche tool anymore. As more companies — big and small — explore automation, insight generation, predictive analytics, generative content, and decision support, the gap between those who use AI effectively and those who don’t is becoming a competitive divide.

  • Companies that invest in training gain productivity, agility, insight and scale. Employees spend less time on repetitive tasks, more time on strategic and creative work.
  • Organizations that ignore training—or treat it as a checkbox—risk wasted investments, failed projects, or even worse: decision-making based on flawed or outdated AI outputs.
  • Importantly: as automation rises, ethical use of AI becomes critical. Companies that instill responsibility, transparency, and human-centric oversight in training are better positioned to avoid bias, compliance issues, and reputational risk.

In short: training your people is not optional — it’s foundational. And done right, it’s what turns AI from a shiny experiment into a sustainable advantage.


Final Thoughts: I Don’t Just Build Models — I Help Build Capable Teams

When I look back at the companies I’ve worked with — some small, some massive — the ones that succeed are never the ones that splurged on the fanciest tech. They’re the ones that:

  • started with humility;
  • invested in people;
  • recognized that AI is a tool, not a magic wand;
  • committed to continuous learning, not one-off training;
  • integrated AI into workflows, culture, and decision-making.

As a trainer, my goal is never to create AI experts at every level. My goal is to build AI-capable organizations — teams where AI is understood, used responsibly, and evolves with the business.

Because at the end of the day, AI’s true power doesn’t lie in algorithms or compute — it lies in people who know how to use it.