
How should we treat virtual methods exposed in APIs

A few days ago I was asked to investigate why an application stopped working as expected after a framework upgrade, and once I found the cause of the problem I decided I had to share it with you.
A small framework that we were using defined an abstract base class, which in turn contained a few virtual methods.
public abstract class FooBase
{
    public virtual void DoAction1()
    {
        ...
    }
    ...
}
The implementation in our system looked like this:
public class MyCustomFoo : FooBase
{
    ...
    public override void DoAction1()
    {
         // Some custom action
         ... 
    }
}
The problem with MyCustomFoo is that the DoAction1() method never calls the base-class method. This would not be an issue as long as the developer who wrote the code reimplemented that functionality himself. On the old framework version this was fine, but the new version slightly changes the behavior and strictly requires the base-class method to be called.
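To make the failure mode concrete, here is a minimal, self-contained sketch of the corrected override. The method bodies are my assumptions, since the original example elides them with "..."; the key point is that calling base.DoAction1() preserves the behavior the new framework version depends on.

```csharp
using System;
using System.Collections.Generic;

public abstract class FooBase
{
    // Records what ran, so the call order is observable in this sketch.
    public static readonly List<string> Log = new List<string>();

    // Hypothetical body; the original post elides it with "...".
    public virtual void DoAction1()
    {
        Log.Add("FooBase.DoAction1");
    }
}

public class MyCustomFoo : FooBase
{
    public override void DoAction1()
    {
        // Preserve the base behavior the framework requires.
        base.DoAction1();

        // Some custom action.
        Log.Add("MyCustomFoo.DoAction1");
    }
}
```

With this version, new MyCustomFoo().DoAction1() runs the base logic first and then the custom action; omitting the base.DoAction1() call is exactly what broke the application after the upgrade.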
The question that came up in my case was: who is to blame?
From some points of view I would say the developer who implemented the MyCustomFoo class. He should have made sure the base-class method was also called, preserving the original functionality.
At the same time, the method was marked as virtual, which means that whoever marked it virtual allows the person doing the override to change how that functionality is implemented. But let's not forget that when you override a method you should not alter the existing behavior.
The new framework version should have been modified in a way that does not alter the functionality at all, but some changes can also lead to changes in the virtual methods.
In that case, the new version should be accompanied by a document describing the API changes that were made.
What do you think? In a case like this, who bears the blame and is responsible for the problem?

The second part of this discussion: http://vunvulearadu.blogspot.ro/2012/06/how-should-we-treat-virtual-methods.html.

Comments

  1. I think the idea of "blame" has no place here. When a design is made, it is made based on what is needed at that moment, not on what might change in the future. If something changes, then it is the responsibility of whoever makes the change to update the design as well.

    In this case, the method should be non-virtual, so that the base implementation always runs, and that base implementation should in turn call another method which is virtual. The usual pattern is to end the name of such a method in Override, so DoAction1Override. Of course, this is more a matter of code practices within the project.
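    The suggestion above can be sketched roughly like this (a non-authoritative sketch: the name DoAction1Override follows the comment, and the method bodies are placeholders I made up). The public method is non-virtual, so the mandatory part always runs, and derived classes can only customize the virtual hook:

```csharp
using System;
using System.Collections.Generic;

public abstract class FooBase
{
    // Records what ran, so the call order is observable in this sketch.
    public static readonly List<string> Log = new List<string>();

    // Non-virtual: the framework's mandatory part always executes.
    public void DoAction1()
    {
        Log.Add("mandatory base behavior");
        DoAction1Override();
    }

    // The only extensibility point offered to derived classes.
    protected virtual void DoAction1Override()
    {
    }
}

public class MyCustomFoo : FooBase
{
    protected override void DoAction1Override()
    {
        Log.Add("custom action");
    }
}
```

    With this shape, the derived class can no longer accidentally skip the base behavior, no matter how the override is written.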

  2. Like Siderite, I think it is better to make a DoSomethingCore method virtual and have the rest of the DoSomething code execute regardless of what the derived class wants. After all, the .NET philosophy is not to rely on the good intentions and attentiveness of programmers, but to make sure the right thing happens (see the GC).

    1. That is, assuming the people who expose the API actually think about this.
      Part 2 - http://vunvulearadu.blogspot.ro/2012/06/how-should-we-treat-virtual-methods.html

  3. I would say that if this really was a 'framework', whoever defined the base class should have made very clear in the documentation (or, even better, through unambiguous naming) what contract must be satisfied by that class and the classes derived from it - that way, whoever does the override knows what the clients of that class hierarchy expect.

    In a framework, once that contract is established, it will be very hard to change in the future without introducing breaking changes (as in the example above) - if it is only for internal use, of course, it matters less.

    In a framework, when a method is made virtual, the author must make sure that whoever overrides it understands exactly what is expected from that extensibility point - in cases like this, where the base-class functionality is a must, the template method pattern is used, as Andrei also said.


