<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Skills on Ante Kapetanovic</title><link>https://antekapetanovic.com/tags/skills/</link><description>Recent content in Skills on Ante Kapetanovic</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 30 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://antekapetanovic.com/tags/skills/index.xml" rel="self" type="application/rss+xml"/><item><title>Replacing the Semantic Scholar MCP with a Skill?</title><link>https://antekapetanovic.com/blog/mcp-vs-skill/</link><pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate><guid>https://antekapetanovic.com/blog/mcp-vs-skill/</guid><description>&lt;p&gt;In the &lt;a href="https://antekapetanovic.com/blog/mcp-vs-vanilla-agent/"&gt;previous post&lt;/a&gt;, the Semantic Scholar MCP beat a vanilla agent on a reference extraction task with 100% recall vs. 85%, 3x fewer tool calls, and zero side effects. But since everybody&amp;rsquo;s buzzing how &lt;em&gt;MCP is dead, skill is all you really need,&lt;/em&gt; yada yada yada&amp;hellip; in this post I&amp;rsquo;ll show is this really true. Can a skill actually do the same thing, but cheaper?&lt;/p&gt;
&lt;h2 id="what-changed"&gt;What Changed&lt;/h2&gt;
&lt;p&gt;So, I built a &lt;code&gt;/scholar&lt;/code&gt; skill. It&amp;rsquo;s extremely simple: three Python scripts that call the same Semantic Scholar API using stdlib &lt;code&gt;urllib&lt;/code&gt;, run via &lt;code&gt;uv&lt;/code&gt;. No async framework, no rate limiter, no circuit breaker, no nothing. The agent (backed by a frontier model, of course) is more than capable of retrying if a call fails.&lt;/p&gt;</description></item></channel></rss>