
Security - GET and POST

In today's post we will look at GET and POST from a security perspective. We will try to identify why we should use POST and not GET in some situations, and when using GET or POST makes no difference.

Why do I consider this topic so important?
At different security reviews and penetration tests I often see recommendations against using GET.

GET and POST overview
The main difference between GET and POST is the way parameters are sent. With GET, all parameters are sent in the query string and are visible. In contrast, with POST, the parameters can be placed in the body of the message, where they are not directly visible.
GET:

GET /playground?name=Tom&age=20 HTTP/1.1
Host: foo.com

POST:

POST /playground HTTP/1.1
Host: foo.com

name=Tom&age=20

As we can see in the example above, with POST the parameters are found in the body of the request, not in the query string.
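The two styles can be reproduced with a few lines of Python using only the standard library; this is a minimal sketch (foo.com is just the placeholder host from the example above):

```python
from urllib.parse import urlencode, urlparse, parse_qs

params = {"name": "Tom", "age": "20"}
encoded = urlencode(params)          # "name=Tom&age=20"

# GET: parameters travel in the URL, visible in logs and browser history
get_url = "https://foo.com/playground?" + encoded

# POST: the URL stays clean, the parameters go into the request body
post_url = "https://foo.com/playground"
post_body = encoded.encode("ascii")  # body bytes for the POST request

# The query string of the GET URL can be parsed back into parameters
print(parse_qs(urlparse(get_url).query))  # {'name': ['Tom'], 'age': ['20']}
```

Note that the encoding of the parameters themselves is identical; only their location in the request changes.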
If we take a look at w3schools we will notice that the main differences between GET and POST are:

Feature                      GET   POST
Can be cached                Yes   No
Remains in browser history   Yes   No
Can be bookmarked            Yes   No
Has length restrictions      Yes   No

It is pretty clear from the table above that GET works well when we need traceability, bookmarking and navigation features, especially when we want search engine optimization. POST is great for scenarios where we need to submit data, especially because a browser will not automatically resubmit the same POST request without asking the user for confirmation first.

GET and POST over HTTPS
Over HTTPS both types of request are secure as long as the tunnel between the two endpoints is not compromised. Even though with GET the arguments are sent in the query string, the request itself is encrypted. The only thing that can be seen from outside is the request endpoint (IP address). Everything else is encrypted.
Does this mean that GET is secure? No. It means that over a secure channel like HTTPS it is almost the same thing whether we use GET or POST.

Proxies and 3rd party listeners
All traffic over HTTP (GET or POST) can be logged and cached by any listener. Even if we don't expose sensitive information in query parameters, a system or person could learn how the system works and where its weak points are. An attacker can be in discovery mode, identifying which endpoints are exposed, what kind of features they offer and what kind of values each endpoint accepts.

Web Accelerators
In general a web accelerator will follow and prefetch all GET links by default. If we expose an insert or delete command over GET, we can end up with behavior that we really don't want.
For POST, a web accelerator cannot prefetch the request automatically, because it would need the body data.
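The convention that keeps prefetchers out of trouble is that only "safe" HTTP methods (those with no expected side effects) may be fetched automatically. A small sketch of the check a well-behaved accelerator applies (the function name is mine):

```python
# HTTP methods defined as "safe" (no side effects expected) by the HTTP spec
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def may_prefetch(method: str) -> bool:
    """A well-behaved accelerator only prefetches safe methods."""
    return method.upper() in SAFE_METHODS

print(may_prefetch("GET"))   # True
print(may_prefetch("POST"))  # False
```

This is exactly why exposing a delete action over GET is dangerous: the accelerator sees a safe-looking method and happily triggers the side effect.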

Cache
Besides browsers, there are other systems between clients and servers that can cache data. The best example is a reverse proxy that can cache all GET responses and serve them from its own cache.
If we don't want data to be cached, changing the requests to POST could be a solution, but it is simpler to keep GET and specify on the response that we don't want that specific resource cached (for example with a Cache-Control: no-store header).
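The decision a reverse proxy makes can be sketched roughly like this. This is a simplified model (real proxies implement the full HTTP caching rules, which are considerably more detailed):

```python
def is_cacheable(method: str, response_headers: dict) -> bool:
    """Simplified model of a shared cache's decision for a response."""
    if method.upper() != "GET":
        return False  # typical shared caches only store GET responses
    cache_control = response_headers.get("Cache-Control", "").lower()
    # 'no-store' keeps it out of any cache; 'private' out of shared caches
    if any(d in cache_control for d in ("no-store", "no-cache", "private")):
        return False
    return True

print(is_cacheable("GET", {}))                             # True
print(is_cacheable("GET", {"Cache-Control": "no-store"}))  # False
print(is_cacheable("POST", {}))                            # False
```

So the server stays in control: one response header is enough to keep a GET response out of intermediate caches.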

Browser history over HTTPS
Even if we are using HTTPS, all requests sent by a browser will appear in the browser history. It is important to know that these requests (URL and query parameters) are visible only in the browser. From outside the system all content is encrypted and query parameters cannot be sniffed.

Secure over HTTP
Over HTTP, both types of request are the same from a security perspective. All the content is in clear text and can be sniffed by anybody. Over HTTPS all the content above the TCP/IP level is encrypted. The good part is that more and more browsers warn the user when a form is submitted over plain HTTP.

Altering Requests
It is true that it is easier to alter GET requests, because we only need to change the query parameters. But it is not complicated to alter a POST request either, if we take into account that nowadays almost all browsers have built-in developer tools.

Malicious Links
Usually malicious links are sent as GET requests. It is pretty hard to send a link in an email that will trigger a POST to a specific site.

Conclusion
Yes, it is true that with GET more information is visible (including the endpoint), but the discussion is not always only about security. Let's imagine that we have a proxy between the browser and the server. Even if we are doing a POST and the parameters are in the body, nothing stops the proxy from reading the content of the body. If the content is encrypted over HTTPS, the proxy will not be able to access this information, whether we use GET or POST.
When you don't know if you should use GET or POST, ask yourself whether the information that is sent can be shared with others and whether the request alters data. If it can be shared and does not alter data, then GET could be a solution for you. Otherwise, POST is your best friend.

POST is more secure if we think about the computer from which the requests are made (history, bookmarks, cache), but over the wire there is no difference.

Comments

  1. A useful list.

    Anyway, the first criterion when choosing between GET and POST is their original intended use:
    - GET - only data retrieval, no side effects
    - POST - add new resource, or (when PUT can't be used), modify an existing resource
    If these are followed, many issues can be avoided, starting from trivial CSRF attacks.

    1. Yes, this is true.
      But during a security audit these things are very important. Auditors will prefer POST over GET every time, even if it is not in line with REST conventions.

