
The scope of a PoC

Let us talk about what the scope of a PoC should be and what you should or should not include in one.

Purpose of PoC
First, we need to define what the purpose of a PoC is. The main purpose is to demonstrate that the principles covered in the technical documents actually work (that they are not just theory and diagrams).

Reusability
It already feels like déjà vu to hear people say that they want to reuse the PoC output in the main project. This happens because the PoC scope is often too big and does not cover only the ideas that need to be demonstrated.
When a PoC covers more than 15% of the implementation effort, you might have a problem. That is not a PoC anymore; it is a PILOT, a system with limited functionality that goes into production. A pilot might have many restrictions, from NFRs to the business use cases that are covered, but it has some part that works.
You never want to invest more in a PoC than is necessary, and you should always throw the output code away. Even if it is working code, it should never reach the pilot or production. Yes, the technical team might look over the PoC implementation for inspiration, but nothing more than that.

Scope
It is very common to add things to the PoC scope that are general truths. A general truth is already well documented and proven by others.
A good example is using a PoC to check that Azure AD can be used as the authentication mechanism inside an ASP.NET MVC application. You already know that it works, and you will find plenty of official documentation. Yes, maybe the team does not have experience with it, but that does not mean it will not work.
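For context, the whole Azure AD wiring in an ASP.NET MVC application is a few lines of standard OWIN OpenID Connect configuration, along the lines of the sketch below (the client and tenant IDs are placeholders for your own app registration):

    using Microsoft.Owin.Security.Cookies;
    using Microsoft.Owin.Security.OpenIdConnect;
    using Owin;

    public partial class Startup
    {
        public void ConfigureAuth(IAppBuilder app)
        {
            // Standard cookie + OpenID Connect middleware for Azure AD sign-in.
            app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
            app.UseCookieAuthentication(new CookieAuthenticationOptions());
            app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
            {
                ClientId = "<application-client-id>",
                Authority = "https://login.microsoftonline.com/<tenant-id>"
            });
        }
    }

This is exactly the kind of setup that the official samples already cover end to end.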


Challenge the PoC Scope
Let us take for example the following PoC scope:

  • Import data from a 3rd party provider
  • Store data inside CosmosDB
  • Validate data and move it to Azure SQL
  • Expose data inside an ASP.NET MVC application using OData
  • Use Azure AD for authentication in front of OData Service
  • Use data access restrictions to OData services based on user role
  • Create a simple web application based on Angular
  • Display data that was read from OData Service

What do you think: is the scope of this PoC valid?
I personally would say that it is not. Most of the items included in the PoC are general truths, and you already know they will work. Let's take a look at them one by one.
Import data from a 3rd party provider: This is a valid PoC item and should be kept. Even if it is more on the integration side, you want to validate that you can communicate with the external 3rd party and extract data from it.
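In practice this item can be as small as the sketch below; the endpoint and the API key are hypothetical stand-ins for the real provider:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class ThirdPartyImportPoc
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                // Hypothetical provider endpoint and key - the PoC only has to prove
                // that we can authenticate against the 3rd party and pull data out.
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", "<api-key>");

                var response = await client.GetAsync("https://provider.example.com/api/records");
                response.EnsureSuccessStatusCode();

                string payload = await response.Content.ReadAsStringAsync();
                Console.WriteLine($"Pulled {payload.Length} bytes from the provider.");
            }
        }
    }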
Store data from the 3rd party inside CosmosDB: You need to keep the data somewhere, but what is the purpose of this inside the PoC? Taking the other items into account, the data is kept only so it can be validated and moved to Azure SQL. In that case, the validation can be done on the fly, and there is no need to keep the data inside CosmosDB for the PoC.
Validate data and move it to Azure SQL: The validation is a valid point to show how the data transformation is done, but you only need to cover one data type; there is no need to cover all of them. You can keep the data inside Azure SQL for the PoC, but a binary file would work just as well. You are demonstrating the validation part, not the storage; it is clear that you can store data inside Azure SQL.
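A minimal sketch of the on-the-fly idea, with a hypothetical record shape, a single validation rule, and a plain file standing in for the storage:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    class Record
    {
        public int Id { get; set; }
        public string Email { get; set; }
    }

    class ValidationPoc
    {
        static void Main()
        {
            // Data as it would arrive from the 3rd party import.
            var imported = new List<Record>
            {
                new Record { Id = 1, Email = "a@b.com" },
                new Record { Id = 2, Email = "not-an-email" }
            };

            // Validate while the data streams through - no CosmosDB staging needed.
            var valid = imported.Where(r => r.Email.Contains("@")).ToList();

            // For the PoC, a local file proves the pipeline as well as Azure SQL would.
            File.WriteAllLines("valid-records.txt", valid.Select(r => $"{r.Id};{r.Email}"));

            Console.WriteLine($"{valid.Count} of {imported.Count} records passed validation.");
        }
    }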
Expose data inside an ASP.NET MVC application using OData: This can stay in scope as long as you expose only one entity and you want to validate that you can restrict access to data based on user role. Otherwise, OData can simply be used to offer access to entities. Another case where you would want to include OData in the PoC is when you have a complex data structure and you want to be sure that you can expose only a part of it in the way you want.
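If it stays in scope, one entity and one controller are enough. A minimal sketch using Web API OData, where Order is a hypothetical PoC entity (the OData route and EDM model registration in WebApiConfig are omitted for brevity):

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.AspNet.OData;

    // Hypothetical PoC entity - the single shape we actually need to expose.
    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    public class OrdersController : ODataController
    {
        private static readonly List<Order> Orders = new List<Order>
        {
            new Order { Id = 1, Total = 10m }
        };

        // [EnableQuery] gives $filter, $select, $top and friends on this one entity.
        [EnableQuery]
        public IQueryable<Order> Get()
        {
            return Orders.AsQueryable();
        }
    }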
Use Azure AD for authentication in front of the OData service: If this were the only item inside the PoC, it would be a general truth. Combined with the next requirement, it might make sense to include it, even though the next item can also be seen as a general truth.
Use data access restrictions to OData services based on user role: We know that inside an ASP.NET MVC application you can restrict access based on user role. This is also close to a general truth, but let's say it might be included in a PoC.
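The restriction itself is a single attribute on the controller. A minimal sketch, reusing the Order entity from the previous sketch; "DataReader" is a hypothetical role name used only to prove the mechanism:

    using System.Linq;
    using System.Web.Http;
    using Microsoft.AspNet.OData;

    // Callers without the DataReader role are rejected before any code runs.
    [Authorize(Roles = "DataReader")]
    public class SecuredOrdersController : ODataController
    {
        [EnableQuery]
        public IQueryable<Order> Get()
        {
            return new[] { new Order { Id = 1, Total = 10m } }.AsQueryable();
        }
    }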
Create a simple web application based on Angular: What is its purpose? You can validate the authentication and the role-based restriction with simple C# code; there is no need to create an Angular application just for that. The only reason would be to define the base template of the application, and in that case you are already building the pilot.
Display data that was read from the OData service inside Angular: As long as you don't use a custom UI control or something similar, it is pretty clear that this will work, and it doesn't make sense to include it.
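As mentioned for the Angular item, simple C# code is enough to validate both the authentication and the role restriction. A minimal sketch, where the endpoint URL and the token acquisition are placeholders:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class AuthCheckPoc
    {
        static async Task Main()
        {
            // Placeholder: acquire the token from Azure AD (e.g. via ADAL/MSAL).
            string accessToken = "<access-token>";

            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", accessToken);

                // Hypothetical local endpoint exposing the secured OData service.
                var response = await client.GetAsync("https://localhost:44300/odata/SecuredOrders");

                // 200 proves the role is honoured; 401/403 proves the restriction works.
                Console.WriteLine($"Status: {(int)response.StatusCode} {response.StatusCode}");
            }
        }
    }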

As we can see, most of the things that were included are general truths, and you do not need to cover them inside a PoC.

PoC Optimization
There are some things you can do to optimize the PoC. Even if you don't plan to use EF, it might be useful to extract data from SQL using EF for the PoC. You can use the designer to generate the model from the database, and you would not need to write a line of code. The same goes for OData: you can expose content that is read using EF, giving you the option to write zero lines of code to read data from Azure SQL and expose it as OData. This is applicable as long as you don't have NFRs in scope, like performance when reading data from storage.
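A sketch of this "almost zero code" path, where PocEntities and Customer stand in for whatever the EF designer generates from the existing database:

    using System.Linq;
    using Microsoft.AspNet.OData;

    public class CustomersController : ODataController
    {
        // PocEntities is the context generated by the EF designer (database-first),
        // not hand-written code; Customer is a generated entity class.
        private readonly PocEntities db = new PocEntities();

        [EnableQuery]
        public IQueryable<Customer> Get()
        {
            return db.Customers;
        }
    }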

What you should remember

  • Focus inside the PoC only on the things you need to prove work.
  • Do not validate inside a PoC things that you already know are general truths.
  • Avoid defining the design of the application during the PoC.
  • Do not reuse PoC code in production.

Comments

  1. I would say that sometimes just a better name should be found - very often, PoC means "opportunity to learn about some new technology".. :) So it is used as a time-boxed way to later answer questions like: "how long would _you_ need to complete that?" or "how hard is for _you_ to use that technology?" :)

