
What not to do when measuring the execution speed of a section of code

When testing the execution speed of an application or of a part of its code, we must not forget the following:
Measure only the code that needs to be measured
When we need to check the execution speed of a portion of code, we should try to exclude, as much as possible, the lines of code that do not interest us and should not be measured.
For example, in the code below the intent is to time only the section where the data is processed, without the time needed to read the data and initialize other objects.
public void DoTest()
{
    // Start monitoring.

    var items = ReadData();
    var container = new Container();
    container.Init();

    // Process the data.
    ...

    Save(items);

    // Stop monitoring.
}
The correct version would be:
public void DoTest()
{
    var items = ReadData();
    var container = new Container();
    container.Init();

    // Start monitoring.
    // Process the data.
    // Stop monitoring.

    Save(items);
}
Disable debug messages
All messages of the kind Debug.WriteLine("..."), Trace.WriteLine("...") or Logger.WriteLine("...") should be disabled. They consume resources and time, and depending on disk access they can affect the final timings, so you can end up with very different results across multiple runs.
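These calls are cheap to silence: Debug.WriteLine is marked with [Conditional("DEBUG")], so the compiler strips it from Release builds automatically, while Trace.WriteLine survives unless the TRACE symbol is removed or the listeners are cleared. A minimal sketch of both cases (a custom Logger class like the one mentioned above would need its own on/off switch):

using System.Diagnostics;

public static class QuietBenchmark
{
    public static void Example()
    {
        // Compiled only when the DEBUG symbol is defined;
        // Release builds drop this call entirely.
        Debug.WriteLine("Visible only in Debug builds.");

        // TRACE is defined by default even in Release, so Trace messages
        // survive; clearing the listeners silences them for the benchmark.
        Trace.Listeners.Clear();
        Trace.WriteLine("Goes nowhere once the listeners are cleared.");
    }
}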
Don't run speed tests in Debug
When you test in Debug, besides the code itself there are countless debug symbols and calls that interact with the debugger. Very often, code compiled in Release runs tens of times faster.
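One way to avoid benchmarking a Debug build by accident, sketched here under the assumption that you control the benchmark's entry point, is to fail fast when the DEBUG symbol is defined or a debugger is attached:

using System;
using System.Diagnostics;

public static class BuildGuard
{
    public static void EnsureReleaseRun()
    {
#if DEBUG
        // The DEBUG symbol is defined, so this is a Debug build.
        throw new InvalidOperationException("Benchmark with a Release build.");
#else
        // Even a Release build runs slower with a debugger attached.
        if (Debugger.IsAttached)
            throw new InvalidOperationException("Detach the debugger before benchmarking.");
#endif
    }
}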
Don't use DateTime.Now
To compute the execution time, it is not recommended to use DateTime.Now. If, for example, the thread is suspended by the OS, delays can appear that affect the final time; DateTime.Now also follows the system clock, which can be adjusted while the code runs. To measure execution time you can use Stopwatch, from System.Diagnostics. It has the Start() and Stop() methods for controlling the timer. To obtain the duration we can read the Elapsed, ElapsedMilliseconds or ElapsedTicks properties. If we want to check whether the timer is running, we can use the IsRunning property.
public void DoTest()
{
    var items = ReadData();
    var container = new Container();
    container.Init();

    var stopwatch = new Stopwatch();
    stopwatch.Start();
    // Process the data.
    stopwatch.Stop();

    Console.WriteLine(string.Format("Duration: {0}", stopwatch.Elapsed));

    Save(items);
}
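As a side note, Stopwatch also exposes the static StartNew() factory and reports whether it is backed by a high-resolution performance counter; a small sketch:

using System;
using System.Diagnostics;

public static class StopwatchDetails
{
    public static void Show()
    {
        // StartNew() creates the Stopwatch and starts it in one call.
        var watch = Stopwatch.StartNew();

        // IsHighResolution and Frequency describe the underlying counter.
        Console.WriteLine("High resolution: {0}", Stopwatch.IsHighResolution);
        Console.WriteLine("Ticks per second: {0}", Stopwatch.Frequency);

        watch.Stop();
        Console.WriteLine("Measurement overhead: {0} ticks", watch.ElapsedTicks);
    }
}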
Measure several times and compute the average execution time
Because many other processes run on the same processor, from the OS to the antivirus, the times we obtain can differ. It is recommended to run a test at least 1000 times, if possible, and to compute the arithmetic mean. The data obtained can be saved, and at the end the information can be processed.
static void CalculareTimpMediu()
{
    var items = ReadData();

    var watch = new Stopwatch();
    const int cycles = 1000;
    var times = 0D;

    for (var i = 0; i < cycles; i++)
    {
        watch.Reset();
        watch.Start();

        // Process the data.

        watch.Stop();
        times += watch.ElapsedMilliseconds;
    }

    items.Clear();
    Console.WriteLine("Duration: " + (times / cycles));
}
There are countless applications that can help us measure execution speed, but for a simple test we can do everything manually. We can build a small helper library of our own. It is best to keep the intermediate timings in memory and write them to disk only at the end. If you store the timings or other test data in arrays, it is a good idea to allocate the required space for them from the very beginning.
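A minimal sketch of such a helper (TimingRecorder and its members are hypothetical names, not a standard API) that pre-allocates the array of timings and writes to disk only once, at the end:

using System;
using System.Diagnostics;
using System.IO;

public sealed class TimingRecorder
{
    private readonly Stopwatch _watch = new Stopwatch();
    private readonly double[] _timings; // pre-allocated, no resizing while measuring
    private int _count;

    public TimingRecorder(int expectedRuns)
    {
        _timings = new double[expectedRuns];
    }

    public void Start()
    {
        _watch.Restart();
    }

    public void Stop()
    {
        _watch.Stop();
        _timings[_count++] = _watch.Elapsed.TotalMilliseconds;
    }

    // Flush everything to disk in one go, only after measuring is done.
    public void SaveTo(string path)
    {
        using (var writer = new StreamWriter(path))
        {
            for (var i = 0; i < _count; i++)
                writer.WriteLine(_timings[i]);
        }

        var sum = 0D;
        for (var i = 0; i < _count; i++)
            sum += _timings[i];
        Console.WriteLine("Average: {0} ms", _count == 0 ? 0 : sum / _count);
    }
}

Keeping the results allows you to compute more than the mean later, such as the minimum or the spread between runs, without touching the measured loop.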
