This article is a continuation of my previous one, “Spec Kit to Delivery Discipline - A SDLC guide.” In that article, I looked at the Spec Kit from the perspective of the process and delivery disciplines. This time, I wanted to make a more practical comparison and really test the two approaches: Spec Kit alone versus Spec Kit with extensions.
So I ran the same experiment in both scenarios and compared the outputs side by side.
The experiment clearly showed that the version with extensions produced the better overall result.
The most interesting metrics are the Halstead complexity metrics, which estimate how easily the code can be understood (cognitive effort) and the predicted bug density. For the same problem, the extension-based solution required almost 50% less effort to understand.
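For readers unfamiliar with Halstead metrics, here is a minimal sketch of the classic formulas for volume, difficulty, effort, and the delivered-bugs estimate. The operator and operand counts below are purely hypothetical illustrations, not the actual numbers from the experiment:

```python
import math

def halstead(n1, n2, N1, N2):
    """Classic Halstead metrics.

    n1: distinct operators, n2: distinct operands,
    N1: total operators,    N2: total operands.
    """
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)   # "size" of the implementation
    difficulty = (n1 / 2) * (N2 / n2)         # proneness to error
    effort = difficulty * volume              # cognitive effort to understand
    bugs = volume / 3000                      # classic delivered-bugs estimate
    return {"volume": volume, "difficulty": difficulty,
            "effort": effort, "bugs": bugs}

# Hypothetical counts for two solutions of the same problem:
base = halstead(n1=20, n2=35, N1=120, N2=90)
ext = halstead(n1=15, n2=30, N1=70, N2=55)
print(f"effort reduction: {1 - ext['effort'] / base['effort']:.0%}")
```

In practice you would not count operators by hand; tools such as `radon` (for Python) compute these metrics directly from source files.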
Both approaches produced a working solution, so the question was not only “does it work?” but also “which one gives a better engineering outcome?” Based on the results, the version with extensions came out ahead with a total weighted score of 9.00, compared to 7.19 for the base Spec Kit approach.
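To make the idea of a "total weighted score" concrete, here is a minimal sketch of how such an aggregate can be computed. The category names and weights below are illustrative assumptions, not the actual rubric used in the assessment:

```python
# Illustrative rubric -- categories and weights are assumptions,
# not the actual scoring scheme from the repository.
weights = {"tests": 0.3, "documentation": 0.2,
           "cognitive_complexity": 0.3, "code_quality": 0.2}

def weighted_score(scores: dict) -> float:
    """Weighted average of per-category scores on a 0-10 scale."""
    return sum(weights[category] * score for category, score in scores.items())

base_speckit = weighted_score({"tests": 6, "documentation": 6,
                               "cognitive_complexity": 7, "code_quality": 9})
with_extensions = weighted_score({"tests": 9, "documentation": 9,
                                  "cognitive_complexity": 9, "code_quality": 8})
print(base_speckit, with_extensions)
```

Note how this kind of aggregation captures the pattern described below: one approach can win a single category while still losing the overall score.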
What is also interesting is where the difference appeared. The extension-based version was stronger in the areas that matter a lot in real delivery work: it had more tests, more documentation, and lower cognitive complexity, making the solution easier to understand and reason about. On the other hand, the base version performed slightly better on a few isolated code-quality checks, but not enough to change the overall picture.
The key takeaway: success is not just working code, but code that is complete, maintainable, and delivery-ready.
Extensions led to a more mature, usable result—better suited to real engineering needs.
If you are interested, I invite you to take a look at the repository and the comparison report. I shared the full outputs, methodology, and findings there so you can review everything yourself.
https://github.com/vunvulear/speckit-assessment
