
Shared Access Signature and Access Level on Blob, Tables and Queues

Some months ago I wrote some posts about Shared Access Signature (SAS). Yesterday I received a simple question that comes up when we start to use SAS over Windows Azure Storage (blobs, tables or queues).
When I’m using Shared Access Signature over a blob, should I change the public access level?
People can have the feeling that from the moment you start using SAS over a container or a blob, others will no longer be able to access the content in the classic way. SAS does not change the public access level; because of this, if your blob is public, people will be able to access it with a SAS token or with a normal URL.
To control access to a container or to a blob using only SAS, you need to set the access level of the content to private. This can be done from different places (Windows Azure Portal, different storage clients or from code). Having a container or blob with the access level set to private means that only people with account credentials (or a valid SAS token) will be able to access the content.
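As a minimal sketch, setting the access level of a container to private from code with the Windows Azure Storage client library for .NET could look like this (the connection string and the container name 'mycontainer' are placeholders):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// 'connectionString' is a placeholder - use your own storage account connection string.
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");

container.SetPermissions(new BlobContainerPermissions
{
    // Off = no anonymous access; the content can be reached only with
    // account credentials or a valid SAS token.
    PublicAccess = BlobContainerPublicAccessType.Off
});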
I recommend you to use different containers for the content that needs to be public and for the content that needs to be private (the private content being accessed using SAS). In this way content management will be easier. Also, try to generate SAS tokens per blob and not per container, when possible, as in the sketch below.
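Continuing with the container reference from the previous snippet, generating a read-only SAS token for a single blob, valid for one hour, could look like this (the blob name 'report.pdf' is only an example):

using System;
using Microsoft.WindowsAzure.Storage.Blob;

// 'report.pdf' is only an example blob name.
CloudBlockBlob blob = container.GetBlockBlobReference("report.pdf");

string sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
});

// The URL that can be shared with the client (blob URI + SAS query string).
string blobUrlWithSas = blob.Uri + sasToken;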
Using the storage account name and access key, anybody can access our storage account, even if we are using SAS. From the moment we start using SAS, our clients should not have access to our storage account credentials. Also, using the storage account credentials, anybody can change our SAS configuration.
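On the client side, only the blob URI and the SAS token (received, for example, from our own web service) are needed; the storage account name and access key never leave our side. A possible sketch, where 'blobUri' and 'sasToken' are placeholders:

using System;
using System.IO;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;

// The client receives only 'blobUri' and 'sasToken' (placeholders here),
// never the storage account name and access key.
StorageCredentials sasCredentials = new StorageCredentials(sasToken);
CloudBlockBlob blob = new CloudBlockBlob(new Uri(blobUri), sasCredentials);

// This works only because the SAS token grants Read permission.
blob.DownloadToFile(@"c:\temp\report.pdf", FileMode.Create);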

In conclusion, we could say that from the moment we start using SAS, we should switch the access level of our blobs and containers to private.
