Beyond Prompting: Using Agent Skills in Data Science
A data scientist published a detailed methodology on Towards Data Science showing how to convert repetitive analytical workflows into reusable AI agent skills using a SKILL.md file format. The approach moves beyond one-off AI prompting toward structured, repeatable automation.
According to Towards Data Science, a data scientist documented how eight years of weekly visualization work became the foundation for a formalized, repeatable AI workflow built around a concept called agent skills. The article, published on April 17, 2026, on the popular Medium-hosted data science platform, walks through a concrete, instructional methodology for packaging institutional knowledge into machine-readable instructions that AI coding tools can execute consistently, week after week, without the user typing out fresh prompts each time.
Why This Matters
This is the article that the AI agent community has needed for a long time. Most coverage of AI in data science stops at "write better prompts," which is essentially telling professionals to get better at talking to a tool rather than building systems that work. The agent skills framework described here treats AI as an execution engine for formal procedures, not a conversation partner, and that distinction matters enormously for anyone running the same analytical workflows 50 times a year. Data science teams collectively waste thousands of hours on repetitive analytical tasks that are genuinely automatable, and codified, shareable skill files are a credible answer to that problem.
The Full Story
The central premise is straightforward: if you have done the same analytical task long enough to understand every decision point in it, you have enough knowledge to write it down in a way that an AI agent can follow. The author spent eight years running weekly data visualizations, which means that workflow was not experimental or exploratory. It was a known, well-defined procedure with predictable inputs and outputs, exactly the kind of task that breaks down under ad hoc prompting but thrives under structured automation.
The technical vehicle for this transformation is the SKILL.md file format. A SKILL.md file is essentially a plain-text document that captures the complete logic of a workflow in a way that AI coding tools such as Claude Code can interpret and execute reliably. The format is part of the broader Model Context Protocol, or MCP, framework, which provides a standardized communication layer between AI agents and external services or tools. Think of MCP as a universal adapter that lets AI systems plug into workflows without requiring custom integration work every single time.
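The article does not reproduce the author's actual file, but a minimal SKILL.md might look something like the sketch below, following the YAML-frontmatter-plus-instructions shape that Anthropic's Agent Skills documentation describes. The skill name, paths, and thresholds here are hypothetical:

```markdown
---
name: weekly-dataset-analysis
description: Profile the incoming weekly dataset before any charts are generated.
---

# Weekly dataset analysis

1. Load the newest CSV from `data/incoming/` (hypothetical path).
2. Report the row count, column names, and inferred types.
3. Flag any column where more than 5% of values are missing.
4. Write the summary to `reports/analysis.md` so the visualization
   skill can consume it as its input.
```

The frontmatter gives the agent a name and a one-line trigger description; the body is the procedure itself, written at the level of detail you would hand to a new teammate.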
The author structured the visualization workflow into two distinct skills. The first skill handles dataset analysis, reading the incoming data and identifying what it contains. The second skill handles the actual visualization generation, taking the analysis output and producing the charts or graphs. Splitting the workflow this way is a deliberate engineering choice. Smaller, focused skills are easier to test, easier to debug, and easier to reuse across different projects than one monolithic instruction block.
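The article does not include the author's code, but the dataset-analysis step of the first skill might reduce to something like this stdlib-only sketch; the column names and the missing-value summary are illustrative, not the author's actual implementation:

```python
import csv
import io

def profile_dataset(csv_text: str) -> dict:
    """Summarize a CSV: row count, column names, per-column missing share."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    columns = list(rows[0].keys()) if rows else []
    missing = {
        col: sum(1 for r in rows if not r[col].strip()) / len(rows)
        for col in columns
    } if rows else {}
    return {"rows": len(rows), "columns": columns, "missing_share": missing}

# Hypothetical weekly input with one missing sales value.
sample = "region,sales\nEMEA,100\nAPAC,\n"
print(profile_dataset(sample))
# → {'rows': 2, 'columns': ['region', 'sales'],
#    'missing_share': {'region': 0.0, 'sales': 0.5}}
```

Keeping this step separate means its output (a small, structured summary) becomes the stable contract that the visualization skill consumes, which is exactly why the two-skill split is easier to test than one monolithic block.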
What makes this approach genuinely useful for working data scientists is the reproducibility angle. When a workflow lives inside a SKILL.md file, it becomes a document. It can be version-controlled, shared with teammates, tested against known outputs, and updated when the underlying process changes. That is a meaningful departure from the current reality at most organizations, where the institutional knowledge about how a specific report gets produced lives entirely inside one person's head or inside a messy collection of old prompt drafts.
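Because the workflow is now a versioned document with predictable outputs, it can also be pinned. One plausible pattern (not from the article) is a golden-snapshot check that a team runs whenever the skill file changes; the output values below are made up for illustration:

```python
import hashlib
import json

def digest(obj) -> str:
    """Stable fingerprint of a skill's structured output."""
    payload = json.dumps(obj, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# Pretend these came from last week's approved run and today's rerun.
golden_output = {"rows": 2, "charts": ["sales_by_region.png"]}
current_output = {"rows": 2, "charts": ["sales_by_region.png"]}

assert digest(current_output) == digest(golden_output), "skill output drifted"
print("skill output matches golden snapshot")
```

A failing assertion here is a signal to a human reviewer that either the skill regressed or the golden snapshot needs a deliberate update, which is the same discipline teams already apply to ordinary code.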
The article is careful to frame agent skills as complementary to human judgment rather than a replacement for it. The data scientist still decides which analyses matter, how to interpret results, and when the workflow needs to change. The skill handles mechanical execution. That framing is important because it sets realistic expectations about what this approach actually automates.
Key Details
- The article was published on April 17, 2026, through Towards Data Science on Medium.
- The author's weekly visualization habit spans 8 years of consistent execution.
- The workflow was divided into 2 separate agent skills: one for data analysis and one for visualization generation.
- The SKILL.md file format is the core technical artifact for encoding agent skills.
- The methodology is built on the Model Context Protocol framework for AI-to-tool communication.
- Claude Code is specifically named as a compatible AI coding tool for executing agent skills.
What's Next
Expect to see more SKILL.md repositories appearing on GitHub over the next few months as data teams begin formalizing their own recurring workflows using this pattern. The Model Context Protocol has been gaining adoption steadily through early 2026, and practical, instructional content like this accelerates that adoption by giving professionals a concrete starting point rather than a theoretical framework. Data science teams that invest in building a shared skill library now will have a meaningful productivity advantage over teams still relying entirely on prompt-by-prompt interactions.
How This Compares
Anthropic has been pushing the Model Context Protocol hard since its initial release, and Claude Code is the most visible consumer-facing tool built on that foundation. But the agent skills methodology described in this article is not just a Claude feature. It represents a workflow philosophy that applies across AI tools and platforms, including OpenAI's Codex and similar code-generation environments. The difference between this approach and what OpenAI has demonstrated is focus. OpenAI's recent Codex announcements have emphasized exploratory coding assistance, while this methodology is explicitly designed for known, repeatable procedures. Those are different problems, and the solutions are not interchangeable.
Compare this also to the broader low-code automation movement represented by tools like Zapier and Make. Those platforms have spent years helping non-technical users build repeatable workflows through visual interfaces. Agent skills do something similar but at a higher level of technical sophistication, targeting professionals who can write structured documentation and who need AI to execute analytical code, not just trigger API calls. Guides on AI agent workflow automation have consistently shown that the hardest part of building reliable agent systems is not the AI capability, it is the structured documentation of what the AI is supposed to do. SKILL.md is a direct answer to that problem.
Looking at the broader AI agents news from Q1 2026, the pattern is clear: the industry has moved past debating whether AI can write code and into the harder question of how to make AI-generated work consistent and maintainable over time. Agent skills, reusable skill libraries, and standardized file formats like SKILL.md are the engineering discipline that makes that consistency possible. This article is a practical on-ramp to that discipline.
FAQ
Q: What is a SKILL.md file and how does it work? A: A SKILL.md file is a plain-text document that describes a complete workflow in structured language that AI coding tools can read and execute. It captures the decision logic, steps, and output specifications of a recurring task so that an AI agent like Claude Code can run that task reliably without the user writing new instructions each time.
Q: How is this different from just writing a detailed prompt? A: A prompt is a one-time instruction you type before a single task. An agent skill is a reusable, documented procedure that lives in a file, can be version-controlled, shared with teammates, and executed repeatedly across different datasets or time periods. The skill encodes expertise permanently rather than requiring the user to reconstruct it each session.
Q: What kinds of data science tasks are best suited for agent skills? A: Recurring tasks with predictable steps are the best fit, things like weekly reporting, monthly dashboards, regular data quality checks, or periodic visualizations. Tasks that require novel analytical thinking each time are less suitable. If you have done the same workflow more than a dozen times and the core logic does not change, it is a strong candidate for formalization as an agent skill.
The shift from prompting to structured agent skills is one of the more practical developments in applied AI this year, and data scientists who adopt it early will spend less time repeating themselves and more time doing work that actually requires their expertise. Subscribe to the AI Agents Daily newsletter for daily updates on AI agents, tools, and automation.
Get stories like this daily
Free briefing. Curated from 50+ sources. 5-minute read every morning.


