
The scope of a PoC

Let us talk about what the scope of a PoC should be and what you should or should not include in one.

Purpose of PoC
First, we need to define what the purpose of a PoC is. The main purpose is to demonstrate that the principles covered in the technical documents actually work (that it is not just theory and diagrams).

Reusability
It already feels like deja vu for me to hear people say that they want to reuse the PoC output in the main project. This happens because the PoC scope is often too big and does not cover only the ideas that need to be demonstrated.
When a PoC covers more than 15% of the implementation effort, you might have a problem. That is not a PoC anymore, it is a PILOT: a system with limited functionality that goes into production. The Pilot might have many restrictions, from the NFRs it meets to the business use cases it covers, but some part of it works.
You never want to invest more in a PoC than is necessary, and you should always throw the output code away. Even if it is working code, it should never reach the Pilot or production. Yes, the technical team might look over the PoC implementation for inspiration, but nothing more than that.

Scope
It is very common to add to the PoC scope things that are general truths. A general truth is something that is already well documented and proven by others.
A good example is checking in a PoC that Azure AD can be used as the authentication mechanism inside an ASP.NET MVC application. You already know that it works and you will find plenty of official documentation. Yes, maybe the team does not have experience with it, but that does not mean it will not work.
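
To show why this is a general truth: wiring Azure AD into an ASP.NET MVC application is little more than the OWIN middleware configuration from the official samples. Below is a minimal sketch of that configuration; the client id, tenant and redirect URI are placeholders, not values from a real project.

    // Startup.Auth.cs - minimal OWIN setup for Azure AD sign-in (all values are placeholders)
    using Microsoft.Owin.Security.Cookies;
    using Microsoft.Owin.Security.OpenIdConnect;
    using Owin;

    public partial class Startup
    {
        public void ConfigureAuth(IAppBuilder app)
        {
            app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
            app.UseCookieAuthentication(new CookieAuthenticationOptions());

            app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
            {
                ClientId = "<application-client-id>",                        // placeholder
                Authority = "https://login.microsoftonline.com/<tenant-id>", // placeholder
                RedirectUri = "https://localhost:44300/"                     // placeholder
            });
        }
    }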


Challenge the PoC Scope
Let us take, for example, the following PoC scope:

  • Import data from a 3rd party provider
  • Store data inside CosmosDB
  • Validate data and move it to Azure SQL
  • Expose data inside an ASP.NET MVC application using OData
  • Use Azure AD for authentication in front of OData Service
  • Use data access restrictions to OData services based on user role
  • Create a simple web application based on Angular
  • Display data that was read from OData Service

What do you think: is the scope of this PoC valid?
I personally would say that it is not. Most of the items included in the PoC are general truths and you already know that they will work. Let's take a look at them one by one.
Import data from a 3rd party provider: This is a valid PoC item and it should be kept. Even if it is more on the integration side, you want to validate that you can communicate with the external 3rd party and extract data from it.
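
To give a rough idea of how small this item can stay, a sketch like the one below is enough to prove the communication: one authenticated call that pulls a sample of data. The endpoint URL and the API key are hypothetical.

    // Minimal integration check against a hypothetical 3rd party API
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    public static class ThirdPartyImportPoc
    {
        private static readonly HttpClient Client = new HttpClient();

        public static async Task<string> FetchSampleAsync()
        {
            // Hypothetical credential - replace with whatever the provider requires
            Client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "<api-key>");

            // One representative call is enough to prove that we can extract data
            var response = await Client.GetAsync("https://api.provider.example/v1/records?page=1");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }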
Store data from the 3rd party inside CosmosDB: You need to keep the data somewhere, but what is the purpose of this inside the PoC? Taking the other items into account, the data is kept only to validate it and move it to Azure SQL. In this case, the validation can be done on the fly and there is no need to keep the data inside CosmosDB for the PoC.
Validate data and move it to Azure SQL: The validation is a valid point to show how the data transformation is done, but you need to cover only one data type. There is no need to cover all of them. You can keep the data inside Azure SQL for the PoC, but a binary file would work just as well. You are demonstrating the validation part, not the storage; it is clear that you can store data inside Azure SQL.
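
A sketch of what "one data type, validated on the fly" can look like is below; the Record shape and the validation rules are hypothetical, and a flat file stands in for the real storage, because the storage is not the thing being proven.

    // Hypothetical record type with on-the-fly validation; a flat file replaces Azure SQL in the PoC
    using System;
    using System.IO;

    public class Record
    {
        public string Id { get; set; }
        public decimal Amount { get; set; }
        public DateTime Timestamp { get; set; }
    }

    public static class RecordValidator
    {
        public static bool IsValid(Record r) =>
            !string.IsNullOrWhiteSpace(r.Id) &&
            r.Amount >= 0 &&
            r.Timestamp <= DateTime.UtcNow;

        // Persisting to a file is enough to demonstrate the validation step
        public static void Persist(Record r, string path) =>
            File.AppendAllText(path, $"{r.Id},{r.Amount},{r.Timestamp:O}{Environment.NewLine}");
    }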
Expose data inside an ASP.NET MVC application using OData: This can be added to the scope as long as you expose only one entity and you want to validate that you can restrict access to data based on user role. Otherwise, it is a general truth that OData can be used to offer access to entities. Another case where you would want to include OData in the PoC is when you have a complex data structure and you want to be sure that it is possible to expose only a part of it in the way that you want.
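
For a single entity, the OData part of the PoC can stay as small as the sketch below (ASP.NET Web API with Microsoft.AspNet.OData); the Record entity and the RecordStore source are hypothetical.

    // Exposing one entity set over OData - enough to demonstrate the query and security story
    using System.Linq;
    using Microsoft.AspNet.OData;

    public class RecordsController : ODataController
    {
        [EnableQuery]
        public IQueryable<Record> Get() => RecordStore.All();   // hypothetical in-memory source
    }

    // Route registration (WebApiConfig.Register):
    //   var builder = new ODataConventionModelBuilder();
    //   builder.EntitySet<Record>("Records");
    //   config.MapODataServiceRoute("odata", "odata", builder.GetEdmModel());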
Use Azure AD for authentication in front of the OData service: If this were the only item in the PoC, it would be a general truth. Combined with the next requirement, it might make sense to include it in the PoC, even though the next item can also be seen as a general truth.
Use data access restrictions on the OData services based on user role: We know that inside an ASP.NET MVC application you can restrict access based on user role. This is more or less a general truth, but let's say that it might be included in a PoC.
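
If the previous two items stay in the scope, the only thing worth demonstrating is the role check in front of the OData controller. A minimal sketch, assuming a hypothetical "DataReader" application role, could look like this:

    // Role-based restriction on the OData service; the role name is a placeholder
    using System.Linq;
    using System.Web.Http;
    using Microsoft.AspNet.OData;

    [Authorize(Roles = "DataReader")]        // only callers with this role can query the entity set
    public class RecordsController : ODataController
    {
        [EnableQuery]
        public IQueryable<Record> Get() => RecordStore.All();
    }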
Create a simple web application based on AngularJS: What is the purpose of it? You can validate the authentication and role-based access using simple C# code. There is no need to create an Angular application just for this, unless you want to define the base application template, and in that case it is a Pilot.
Display data that was read from the OData service inside AngularJS: As long as you don't use a custom UI controller or something like that, it is pretty clear that this will work and it doesn't make sense to include it.
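
To make the "simple C# code" point from the AngularJS item concrete, here is the kind of code that can replace the web application when validating authentication and role-based access; the endpoint URL is a placeholder and the access token is assumed to be acquired the usual way (for example with ADAL/MSAL, as in the official Azure AD samples).

    // Smoke test of the secured OData endpoint from plain C#
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    public static class ODataSmokeTest
    {
        public static async Task RunAsync(string accessToken)
        {
            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", accessToken);

                // A user without the required role should get 401/403 here
                var response = await client.GetAsync("https://localhost:44300/odata/Records");
                Console.WriteLine($"{(int)response.StatusCode} {response.StatusCode}");
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }
    }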

As we can see, most of the things that were included were general truths and you do not need to cover them inside a PoC.

PoC Optimization
There are some things that you can do to optimize the PoC. Even if you don't plan to use EF, for the PoC it might be useful to extract data from SQL using EF. You can use the database-first designer and you would not need to write a line of code. The same goes for OData: you can expose content that is read using EF, giving you the option to write zero lines of code to read data from Azure SQL and expose it as OData. This is applicable as long as the scope does not include NFRs such as read performance from the storage.
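
As a sketch of that "zero lines of code" path: a database-first EF context generated by the designer, plugged into a convention-based OData controller. RecordsDbContext and its Records set are assumed to be generated by the EF designer from the existing Azure SQL database.

    // EF database-first context exposed directly over OData - no hand-written data access code
    using System.Linq;
    using Microsoft.AspNet.OData;

    public class RecordsController : ODataController
    {
        private readonly RecordsDbContext _db = new RecordsDbContext();   // generated by the EF designer

        [EnableQuery]   // OData query options ($filter, $top, ...) are translated to SQL by EF
        public IQueryable<Record> Get() => _db.Records;

        protected override void Dispose(bool disposing)
        {
            if (disposing) _db.Dispose();
            base.Dispose(disposing);
        }
    }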

What you should remember

  • Focus the PoC only on the things that you need to prove work.
  • Do not validate inside a PoC things that you already know are general truths.
  • Avoid defining the design of the application during the PoC.
  • Do not reuse PoC code in production.

Comments

  1. I would say that sometimes just a better name should be found - very often, PoC means "opportunity to learn about some new technology".. :) So it is used as a time-boxed way to later answer questions like: "how long would _you_ need to complete that?" or "how hard is it for _you_ to use that technology?" :)

