Posts

Showing posts from March, 2026

Same Prompt, Three Workflows: What Happens When BMAD Joins SpecKit

In my previous article, I looked at SpecKit without extensions and SpecKit with extensions, trying to understand how much structure really helps when we use AI to generate code. This post is a follow-up to that work. I kept the same prompt, tools, and evaluation method, but added a third approach: BMAD (BMad Agentic Development). From the beginning, BMAD felt different. SpecKit guides the AI through clear workflows. BMAD, on the other hand, feels like a small virtual team that thinks first, plans more, and then writes code. This difference shows clearly in the output. What impressed me most was simplicity. Even though BMAD did not win on all linting scores, the code was much easier to read and reason about. The Halstead cognitive metrics showed a big gap that classic linters do not really capture. In simple terms, the BMAD code is easier on a human brain. Testing was another strong signal. BMAD produced the highest number of tests and almost 99% coverage, while also having the ...

I compared Spec Kit with and without extensions. Here is what I found

This article is a continuation of my previous one, “Spec Kit to Delivery Discipline - An SDLC guide.” In that article, I looked at Spec Kit from the perspective of process and delivery disciplines. This time, I wanted to make a more practical comparison and really test the two approaches: Spec Kit alone versus Spec Kit with extensions. So I ran the same experiment in both scenarios and compared the outputs side by side. The experiment clearly showed that the version with extensions produced the better overall result. The most interesting metrics are the Halstead complexity metrics, which focus on how easily the code can be understood (cognitive effort) and on predicted bug density. When using the extensions to solve the same problem, the effort required to understand the code is almost 50% less. Both approaches produced a working solution, so the question was not only “does it work?” but also “which one gives a better engineering outcome?” Based on the results, the ve...
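The Halstead metrics referenced above are straightforward to compute from operator and operand counts, which is why they capture "cognitive effort" in a way linters do not. As a minimal sketch using the classic Halstead formulas (the counts below are made up for illustration, not the actual numbers from the experiment):

```python
import math

def halstead_metrics(n1, n2, N1, N2):
    """Classic Halstead metrics from token counts.

    n1: distinct operators, n2: distinct operands,
    N1: total operators,    N2: total operands.
    """
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)   # program "size" in bits
    difficulty = (n1 / 2) * (N2 / n2)         # how error-prone it is to write
    effort = difficulty * volume              # the cognitive-effort figure
    bugs = volume / 3000                      # one common delivered-bug estimate
    return {"volume": volume, "difficulty": difficulty,
            "effort": effort, "estimated_bugs": bugs}

# Hypothetical counts for two solutions of the same problem:
baseline = halstead_metrics(n1=20, n2=35, N1=120, N2=90)
with_ext = halstead_metrics(n1=14, n2=25, N1=70, N2=55)
print(f"effort ratio: {with_ext['effort'] / baseline['effort']:.2f}")
```

The formulas are the standard Halstead definitions; tools such as radon compute these counts automatically, which is how a comparison like the one above is typically produced.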

Spec Kit to Delivery Discipline - An SDLC guide

If we already accept that Spec Kit is a valuable foundation, the more important conversation is how to use its extension ecosystem in a disciplined way. In my view, the real opportunity is not in adding more tooling. It lies in forming a delivery model where each extension has a clear purpose across the lifecycle. The overall stack should strengthen control, traceability, and delivery confidence. For a standard enterprise-oriented setup, I see the most effective combination as the following (for March 2026):
- Requirements / Discovery: DocGuard improves specification quality, structure, and traceability
- Planning / Backlog: Jira Integration connects requirements and task breakdowns to the delivery management tool
- Verification / Validation: Verify validates implementation against the specification
- Verification / Delivery Control: Verify Tasks detects tasks marked as complete without actual implementation
- Maintenance / Drift Control: Spec Sync keeps specification and implementation aligned ...

A 4-week GitHub Copilot learning journey for development teams

For many development teams, the challenge is not only to start using AI tools but to do so in a practical and safe way during real delivery work. For a .NET team working with Visual Studio, GitHub Copilot, backend services, Windows-based applications, and Azure, the real value lies in AI becoming part of the normal engineering workflow. This approach should be treated as the development of a core team capability, not as an informal learning exercise. Week 1: Start with confidence and basic habits. In the first week, the main goal is to help the team feel comfortable with GitHub Copilot inside Visual Studio. Developers should try the most common ways to work with it: inline suggestions, chat, code explanations, and small code generation. For a .NET team, this can be very practical. A developer can ask Copilot to explain what a service class is doing, generate a DTO, add XML comments, or summarise how a component calls an Azure backend service. This first week is important be...

AI Adoption: a practical 6-week plan

Giving people AI tools is the easy part. Getting them to use those tools every day is the hard part. Too many companies buy licences, announce the tool, then expect adoption to happen by magic. People are busy, they don’t have time to learn, and often they don’t know where to start or what “good” looks like for their role. If you want real adoption, learning must happen in the flow of work: short, practical, role-focused and, most importantly, hands-on. The goal: make people confident in using AI tools for daily work, not just for demos or toy examples. The plan (6 weeks, minimal overhead):
- Week 0: Sponsor & plan (30–60 min). Get a leader to agree to protected learning time and pick a handful of real use cases. Leadership backing is small but crucial.
- Week 1: Role-based awareness (60–90 min). Run short sessions for each role: developers, QA, product, PM, support, and show three concrete examples for the role. Keep it practical: real tasks, not abstract slides.
- Weeks 2–4: Study g...

AI adoption is not about tools, it’s about enablement

Many companies are giving employees access to AI tools, but few are creating real adoption. I see this pattern more and more across organisations. Companies roll out AI tools like Copilot or ChatGPT, but commonly lack the training, structure, and support to help teams use AI in daily work. The main obstacle is time. Teams are busy, so learning new tools is deprioritised. Many need examples and guidance on applying AI to their roles. Access alone is not sufficient. Instead, use simple mechanisms for learning by doing. Study groups and weekly sessions help share ideas. Having an AI-savvy person work with a team shows practical uses, like an engineer demonstrating Copilot in Visual Studio. For real AI adoption, companies need to invest in both tools and enablement. A simple prompt for leaders:
- Don’t stop at buying licences.
- Create role-based AI awareness sessions.
- Start small study groups inside teams.
- Give people protected time to learn.
- Use early adopters to coach teams hands-on.
- C...

Moving faster in cloud transformation without cutting governance

 Many organisations begin cloud transformation by asking, “How do we move faster?” A better question is often, “What is slowing us down in the first place?” In my experience, Azure transformation programmes rarely slow down because teams are too cautious. More often, they slow down because the foundations are weak. Decisions are inconsistent. Too much is treated as bespoke work. What looks like speed at the beginning often becomes rework later. Teams push workloads forward but then need to come back and fix identity, networking, security, subscription design, resilience, or operational readiness. This is why I believe strongly that speed does not come from shortcuts. It comes from clarity, repeatability, and doing the important basics early. On Azure, this starts with a strong landing zone. When management groups, subscriptions, Azure Policy, RBAC, connectivity, monitoring, and security baselines are established early, delivery teams can move with much more confidence. They are not...

Windsurf changed the way I work with Home Assistant

I started using Windsurf as a practical assistant for my Home Assistant setup at home, and after a short time, it became clear to me that this is far more than a nice AI demo. It changed the way I do maintenance, debugging, and small improvements. The biggest value is not only the technical help, but the speed of interaction and the fact that I can keep full control while still moving much faster. Before going into details, here is the short version of the impact:
- Reduced my Home Assistant maintenance effort by around 80%
- Helped me clean up around 90 accumulated errors in about one hour
- Made debugging much faster because it can inspect logs over SSH
- Helped me add new automations and features with much less friction
- Kept the process safe, because it has read-only access only
At home, I use Windsurf as a Home Assistant assistant, not to replace me, but to speed me up. What I like most is the way we interact. I do not write long prompts. Usually, I just say things like “check the logs...

How I Used GitHub Copilot to Automate an Azure DevOps Migration

 The primary goal of this work was to assess whether an AI system could define and support the entire migration process, from design through script creation to execution. This effort was not limited to faster code generation. The experiment aimed to determine whether AI could practically support all migration stages, including analysis, process structuring, task automation, documentation, and execution support. For this experiment, I migrated from Azure DevOps Server to Azure DevOps Services using Microsoft’s official Data Migration Tool. This scenario was selected for its complexity, which includes technical dependencies, validation points, identity management, infrastructure setup, and post-migration verification. Such migrations are prone to errors if the process is unclear or not repeatable. The objective was to automate as much of the end-to-end migration flow as possible. Typically, such migrations require several days for planning, scripting, testing, documentation, troubles...