Tag Archives: SQL

Microsoft SQL 2012 Developer: Creating Partitioned Tables

SQL Server 2012 includes T-SQL extensions that allow for partitioned tables and indexes. Here are the specific steps you perform to create and maintain these database objects.

To create a partitioned table, complete the following tasks:

  1. Select an appropriate partition key.
  2. Create a partition function.
  3. Create a partition scheme.
  4. Create the table and identify its partition scheme.
Key Terms

Partition key: A single column in a table whose values determine the partition on which the data resides.
Partition function: A function that specifies how to partition the table by defining partition key boundaries for each subset of data.
Partition scheme: A mapping of individual partitions to filegroups.

Creating a Partition Function

With a partition key selected, the next step is to define the partitions. The CREATE PARTITION FUNCTION statement specifies how to divide the table by identifying specific boundaries for each partition. The basic syntax of the CREATE PARTITION FUNCTION statement is:
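In outline, the statement looks like this; the placeholder names stand in for your own identifiers:

    CREATE PARTITION FUNCTION partition_function_name ( input_parameter_type )
    AS RANGE [ LEFT | RIGHT ]
    FOR VALUES ( boundary_value [ , ...n ] );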

Note the following syntax details: the input_parameter_type specifies the data type of the partition key, not the name of the partitioning column. The LEFT and RIGHT keywords specify the side of a partition boundary on which a boundary_value belongs; LEFT is the default if you do not explicitly state a side.

LEFT and RIGHT Boundaries

The best way to understand the difference between LEFT and RIGHT partition boundaries is to look at some examples. Consider the following two partition functions:
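A minimal sketch of two such functions follows; the names pfLeft and pfRight and the int key type are assumptions, with boundary values matching the table below:

    CREATE PARTITION FUNCTION pfLeft (int)
    AS RANGE LEFT FOR VALUES (100, 1000, 10000, 100000);

    CREATE PARTITION FUNCTION pfRight (int)
    AS RANGE RIGHT FOR VALUES (100, 1000, 10000, 100000);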

 

The table below shows the specific partition boundaries for both LEFT and RIGHT versions of the partition function.

Partition    Column Values (LEFT)        Column Values (RIGHT)
1            <= 100                      < 100
2            > 100 AND <= 1,000          >= 100 AND < 1,000
3            > 1,000 AND <= 10,000       >= 1,000 AND < 10,000
4            > 10,000 AND <= 100,000     >= 10,000 AND < 100,000
5            > 100,000                   >= 100,000

The table above shows partition range values defined according to LEFT and RIGHT boundaries.

A partition function that you define using LEFT boundaries indicates that the boundary value exists in the partition to the value’s left. Similarly, RIGHT boundaries place values in the partition to the value’s right.

Because all partition key values need to map to a partition, SQL Server actually creates n+1 partitions for every partition function. Notice that both functions specify only four boundary values. However, values less than the minimum and greater than the maximum are part of the entire domain of values and therefore must map to a partition.

TIP: If you need to specify acceptable minimum or maximum values, apply a CHECK constraint on the table to reject inappropriate partition key values. For example, to prevent partition key values greater than 100,000, define the partition function by using RANGE LEFT and placing a CHECK constraint on the table to reject values greater than 100,000.
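A minimal sketch of that tip, assuming a hypothetical dbo.Orders table partitioned on an int OrderKey column:

    -- Reject partition key values greater than 100,000 (names are hypothetical)
    ALTER TABLE dbo.Orders
    ADD CONSTRAINT CK_Orders_OrderKey_Max CHECK (OrderKey <= 100000);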

In addition, a CHECK constraint on the table effectively leaves either the leftmost or the rightmost partition empty—a benefit if you add or remove partitions later.

Although not ideal, SQL Server allows partition key columns to contain NULL values and handles them using these rules:

  • By default, all records that have a partition key value of NULL exist on the leftmost partition.
  • If a partition function contains a NULL boundary value and is defined using RANGE RIGHT, the leftmost partition remains empty and rows that contain NULL values exist on the next partition.

Keep these considerations in mind when you design partition functions.

Frank Tillinghast is a senior consultant with MTOW Software Solutions. He is a Microsoft Certified Solution Developer and has been developing applications for over fifteen years. Most of his time is spent consulting for companies nationwide with troubled projects or mentoring projects to successful completion. When he is not developing software or mentoring clients, Frank is teaching other developers. He has taught thousands of application developers how to create business solutions with Visual Studio .NET, VB.NET, ASP.NET, Visual C#, and SQL Server.

Isolation Levels and Transactions – Microsoft SQL

An isolation level defines how much a transaction is isolated from changes made by other, concurrent transactions. The current isolation level setting determines the side effects that can occur due to uncommitted changes that other transactions make. SQL Server supports all four of the isolation levels defined in the SQL ISO standard as well as two others related to row versioning:

  • The default READ COMMITTED isolation level is in effect with autocommit transactions. This ensures that every transaction meets the requirements of the ACID test without requiring extra code. READ COMMITTED prevents concurrent transactions from reading uncommitted data or interfering with the transaction until it is complete. This level results in locks on resources that can affect concurrency and the volume of transactions that a server can handle.
  • The READ UNCOMMITTED isolation level lets other transactions read changed data that has not yet been committed by other concurrent transactions, called dirty reads. This isolation level does not issue as many locks as READ COMMITTED, so it has a lighter touch on resources and allows higher transaction rates.
  • The REPEATABLE READ isolation level is similar to READ COMMITTED in that the transaction cannot read changed data not yet committed from other transactions. But it goes further: this isolation level prevents other transactions from modifying data read by the current transaction until it completes. The benefit is that a transaction at this isolation level can repeatedly read the data and get the same results, even if there are other transactions pending.
  • The SERIALIZABLE isolation level is similar to the REPEATABLE READ isolation level, but takes it up a notch: it also prevents other transactions from inserting new rows into the ranges the current transaction has read. It ensures that if a query is reissued inside the same transaction, existing rows won’t change and new rows won’t suddenly appear. SERIALIZABLE employs range locks that prevent edits, deletions, or insertions until the transaction is complete.
  • The SNAPSHOT isolation level essentially saves a snapshot of the data at the start of the transaction to ensure that any data read during the life of the transaction will be the same as at the start of the transaction, no matter what changes other transactions have made to the data in the meantime (the current transaction will see its own data changes). This isolation level generally doesn’t require locks on data until the current transaction attempts to update data.
  • The READ COMMITTED SNAPSHOT isolation level is similar to SNAPSHOT except that instead of providing a snapshot of the data when the transaction started, READ COMMITTED SNAPSHOT provides one when the statement started. The behavior of this isolation level is similar to READ COMMITTED except that it doesn’t acquire shared locks and changes aren’t blocked when resources are exclusively locked.

NOTE Isolation levels are described in the “Isolation Levels in the Database Engine” topic in SQL Server Books Online (BOL). As of this writing, Microsoft has not updated this topic for SQL Server 2012, but the SQL Server 2008 version is still valid. This is the case for some of the other BOL references included in this chapter.

You can set the isolation level of the current session using the SET TRANSACTION ISOLATION LEVEL <isolation level name> statement, as well as with a table hint for a specific query using the WITH clause.
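For example, a minimal sketch of both approaches (dbo.Orders and its columns are hypothetical):

    -- Set the isolation level for the current session
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

    -- Or override locking behavior for a single query with a table hint
    SELECT OrderID, OrderDate
    FROM dbo.Orders WITH (READUNCOMMITTED);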

Isolation levels can affect what data your code reads when there are other concurrent transactions executing. The choice of isolation level you make can significantly impact the amount of activity an instance of SQL Server can support, so if you have performance issues on a heavily used database you might want to tweak the isolation levels used by your code.

This post is an excerpt from the online courseware for our Microsoft SQL Server 2012 Developer course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

User-Defined Function Overview – Microsoft SQL

User-defined functions (UDFs) were introduced in SQL Server 2000 to provide coding functionality similar to what is available in general-purpose programming languages. UDFs provide a level of flexibility and reusability that would be impossible to duplicate by using only stored procedures, triggers, and views. These functions, like those in other programming languages, are reusable code modules that you can invoke with parameters, that perform some kind of calculation or action, and that return various kinds of data.

SQL Server supports three kinds of user-defined functions, each with its own specific features and benefits:

  • Scalar functions are what most developers think of as user-defined functions, because they are most like the functions available in other programming languages and the built-in Transact-SQL (T-SQL) functions. A scalar function can take parameters, contain complex processing, and return a single value of almost any SQL data type.
  • Inline table-valued functions are similar to views in that they each consist of a single SELECT statement and each returns a table object. However, inline table-valued functions can take parameters, whereas views cannot. Like views, you can use inline table-valued functions in statements that update, insert, or delete data, and you can join to them. An inline table-valued function is essentially a view with parameters.
  • Multi-statement table-valued functions are similar to stored procedures, but they return an in-memory table object (a set of rows and columns that you can select from) instead of a static result set. This makes them a powerful view/stored procedure hybrid, because you can perform complex processing in them as in a stored procedure and select data from them as in a view. However, the tables that these functions return are not updatable.
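A minimal sketch of the first two kinds (all object, column, and parameter names are hypothetical):

    -- Scalar function: returns a single value
    CREATE FUNCTION dbo.DiscountedPrice (@Price money, @Discount decimal(4,2))
    RETURNS money
    AS
    BEGIN
        RETURN @Price * (1 - @Discount);
    END;
    GO

    -- Inline table-valued function: a single SELECT, essentially a view with parameters
    CREATE FUNCTION dbo.OrdersByCustomer (@CustomerID int)
    RETURNS TABLE
    AS
    RETURN
    (
        SELECT OrderID, OrderDate
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID
    );
    GO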

TIP: Another way to categorize the types of functions available in SQL Server is as scalar and table-valued, based on the type of data returned. With that categorization, there are then two types of table-valued functions.

UDFs are not allowed to have side effects, which is a bit of a limitation on their use. This means that they are not allowed to make any lasting changes in the database, such as to make changes to data or to alter the schema of any object. But there are more subtle actions that cause changes and therefore are not allowed in UDFs. For example, the RAND function, which generates random numbers, retains state information about the last random number it generated, which changes the state of the function and requires SQL Server to store that state somewhere.

User-defined functions combine some of the best elements of stored procedures and views. Many tasks that you previously performed by using views and stored procedures you can now handle better with UDFs. You can invoke these functions from views, stored procedures, other user-defined functions, batches, and scripts, as well as from external applications.

NOTE: SQL Server 2000 did not permit nondeterministic functions in the body of a user-defined function. A deterministic function, like LEN, always returns the same result anytime it is called with the same argument values. A nondeterministic function, such as GETDATE(), can return a different value on each invocation. However, in SQL Server 2005 and later, things have loosened up: GETDATE() is now permissible in UDFs.

Common Language Runtime UDFs

You can create each type of UDF in SQLCLR code as well as Transact-SQL. Transact-SQL is recommended for functions that rely on set-related data manipulation and SQLCLR functions are recommended for functions that require intensive computations or string manipulations, neither of which T-SQL is well suited for. In some cases, the .NET Framework contains base classes that supply functionality that would be difficult to duplicate in Transact-SQL, such as string processing that uses the RegEx class. However, if your functions involve fairly straightforward data manipulation, Transact-SQL is the best choice. This chapter will focus on T-SQL UDFs.

This post is an excerpt from the online courseware for our Microsoft SQL Server 2012 Developer course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

The Stages of Query Compilation

Compiling a query involves three stages: parsing, algebrization, and optimization. The process transforms the logical T-SQL code you write into the physical operators that define the steps that SQL Server will perform to execute the query and return results. The figure below shows a very high-level view of the entire process.

[Figure: The overall query compilation process in SQL Server.]

When SQL Server receives a batch of T-SQL statements from a client application (which includes Management Studio), it first checks to see if a plan exists in the plan cache for the query. If the query was recently executed by this or another session, then the query doesn’t need compilation. The plan is passed to the execution engine, and the process is done.

If no match is found in the plan cache, SQL Server parses the query to check the correct T-SQL syntax, including the use of valid SQL identifiers for object names—although not yet to verify that all referenced objects exist—as well as the spelling and use of keywords.

Then for each statement in the batch, the query processor sees if there is an existing plan guide and if it is in the cache. A plan guide is an object that you can define to specify a query plan for a query. If there is a match that is in the plan cache, the processor uses the plan guide for that statement and sends it off for execution. If there is no match, the query is parameterized, which essentially creates a version of the query with separate parameters, and it once again checks to see if the parameterized query is in the plan cache. If there is a version in the plan cache, it is sent to the execution engine.

Otherwise, if there is no matching plan guide or parameterized query in the plan cache, the query processor has to go through the process of creating an execution plan for the query. The first step is algebrization, also called normalization. This step creates a logical tree, sometimes called a parse tree, that represents the logical steps needed to execute the query. A primary task of algebrization is binding, which checks whether referenced tables and columns exist, loads metadata about those objects, and adds information about implicit conversions needed. The next step is optimization, which, as its name implies, performs any refinements that can make the execution plan more efficient. Then the plan is added to the plan cache and sent to the execution engine.

TIP: Not all execution plans are cached. For example, you can define a stored procedure with the WITH RECOMPILE option, which causes SQL Server to generate a new execution plan every time that code executes. In that case, SQL Server doesn’t bother caching the plan since there is no point.
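A minimal sketch of such a procedure (the procedure and table names are hypothetical):

    CREATE PROCEDURE dbo.GetRecentOrders
    WITH RECOMPILE  -- plan is rebuilt on every execution and never cached
    AS
        SELECT OrderID, OrderDate
        FROM dbo.Orders
        WHERE OrderDate >= DATEADD(DAY, -7, GETDATE());
    GO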

Caching execution plans is one of the major ways that SQL Server dramatically increases the efficiency of query execution. Creating an execution plan can be a time-consuming operation, so if most queries can use a cached plan the server will be able to process far more queries.

Even if an execution plan for a query is in the plan cache, SQL Server may decide to recompile the query anyway and throw out the old plan. This process is called recompilation, and it occurs for two broad reasons: correctness and optimality.

  • Correctness: The query processor might determine that an existing cached plan could return incorrect results. In that case, it recompiles the statement or batch. There are numerous reasons why a plan might return incorrect results, including schema changes; added, changed, or dropped indexes; statistics updates; use of WITH RECOMPILE; changed environment settings (usually SET statements); or an explicit call to the sp_recompile system stored procedure.
  • Optimality: SQL Server tracks data distributions with statistics, including information about the number of rows in a table as well as changes to the data in individual columns. The query processor uses a threshold value to decide whether it should recompile a query. Essentially, if the data has changed too much, it could affect the choice of the optimal execution plan and so SQL Server recompiles it.

The benefit of recompilation, even though it is a hit against performance, is that a query with an execution plan that is incorrect or suboptimal is likely to perform far worse than with a correct, optimal plan. It could even return incorrect results in extreme cases. So the performance hit of recompilation is usually well worth the performance benefits.

NOTE: In SQL Server 2005 and later, individual statements can be recompiled instead of the entire batch, an enormous performance boost for long batches or complicated stored procedures and other code modules that can take some time to compile.

It is important to understand that the query processor is not looking for the absolute best execution plan possible. A complex query might have many thousands of execution plans that return correct results, and it might take hours to find the best plan that saves mere milliseconds off query execution time. One of the tasks that the optimizer does is to look for a trivial plan, which has a known and constant cost, involves no parameters, and only uses system functions. An example of a query with a trivial plan is:
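For instance (dbo.Region stands in for any relatively small table):

    SELECT * FROM dbo.Region;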

Because this query returns all rows and all columns from a relatively small table, no amount of analysis is likely to find one method of scanning the table that is substantially better, in terms of performance, than another. There just aren’t that many ways to perform this query. If the optimizer finds a trivial plan, it returns that execution plan and does no further work.

This post is an excerpt from the online courseware for our Microsoft SQL Server 2012 Developer: LocalDB Databases course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

LINQ and Relational Data

At its most basic level, LINQ provides the ability to query any data source that supports the IEnumerable or generic IEnumerable<T> interface. The data you want to query in an application can come from a variety of sources. The data may reside in in-memory objects; if so, you can use LINQ to Objects. The data may reside in XML; if so, you can use LINQ to XML.

The primary source of data in most applications is a relational database, such as SQL Server. If you have built database applications with Visual Studio, you are familiar with using ADO.NET and building SQL statements to query and modify data in a relational database.

In this chapter, you will see how to use LINQ to SQL to query and modify data in a SQL Server database. You may find yourself thinking that ADO.NET works great, so why do I need another way to work with relational data? What advantage does LINQ to SQL offer me?

LINQ provides a consistent model for querying all types of data. With LINQ, a query that works with objects looks similar to a query that works with XML. It also looks similar to a query that works with relational data. If you know how to use LINQ to Objects to write a query, you already know most of what you need to write a query by using LINQ to SQL.

LINQ to SQL is part of the ADO.NET family of technologies. It abstracts away the various ADO.NET classes such as SqlConnection, SqlCommand, SqlDataAdapter, SqlDataReader, etc. You get the same functionality by writing less code. You can also easily mix LINQ to SQL with existing ADO.NET code.

LINQ to SQL also bridges the gap between object programming and traditional database programming. As a .NET developer, you build applications based on objects. The .NET Framework consists of classes. To use the functionality that a class provides, you create an instance of the class and then use its properties, methods, and events.

To query a database, you build a SQL statement as a string. You can then use ADO.NET classes to send the query to the database. You can also store the results by using the ADO.NET DataSet and DataTable classes. Is this object programming? Yes and no. The use of ADO.NET is object programming; however, the data model is based on tables and rows, not on objects.

To model the data as objects, you can create classes for each table in the database. For example, you could create a Customer class with properties for company name, address, city, region, and so on. When you query the Customer table, you store the results in one or more instances of the Customer class.

LINQ to SQL provides a runtime infrastructure for managing relational data as objects. To use LINQ to SQL, you map the structure of a relational database to an object model. Each table in the database maps to a class. This class is an entity class. At runtime, the .NET Framework translates LINQ queries into SQL and sends them to the database. When the database returns results, the runtime populates the objects with the returned data.

Once you create your object model, you can use LINQ to SQL to query and change data in a SQL Server database.

Note: Microsoft built LINQ to SQL to support any relational database. However, the implementations of LINQ to SQL that ship with the .NET Framework 3.5 and 4.0 support only SQL Server. It is possible that Microsoft will support other databases in future releases.

 

This post is an excerpt from the online courseware for our Windows 8 Using XAML: Views, Resources, and Toasts course written by expert Ken Getz.

Ken Getz is a Visual Studio expert with over 25 years of experience as a successful developer and consultant. He is a nationally recognized author and speaker, as well as a featured instructor for LearnNowOnline.

Roots of SQL Server Data Tools

Microsoft has long supported application database development with features built into Visual Studio, ranging from support for data access to the ability to create a connection to a database and manipulate database objects and data in simple ways. Various add-ons over the years expanded those basic abilities, but even so, developers constantly had to bounce between Visual Studio and either Enterprise Manager (in earlier versions of SQL Server) or Management Studio (in more recent versions, including SQL Server 2012) to perform data-related tasks, such as creating database tables, setting security features, or executing stored procedures. Neither Visual Studio nor the SQL Server tools fully supported database development.

Then several years ago, Microsoft released its first version of the Team System Edition of Visual Studio 2008. This was a huge package of group software collaboration tools, including a long overdue successor to its Visual Source Safe source code control product. Over time, Team System spawned Architecture, Testing, and Development editions. But for a long time, one piece was missing: a database edition.

Microsoft soon rectified that, and released the product code named Data Dude. Officially, it is Visual Studio 2008 Team System Database Edition. Definitely a case where the code name was way cooler than the official name!

Data Dude, er, Database Edition, was a set of tools for managing the full database development lifecycle. Software developers were intimately familiar with the process, since it mirrors the code, build, deploy, rinse, repeat cycle common in software development. (And it can handle agile development methodologies as well.) Database administrators had to get used to some things, but could most certainly be part of the process.

Microsoft had several important high-level goals for the product:

  • Provide integrated change management, just like source control systems. This lets you go back in time to earlier versions of the database project and also provides controlled access to the project components to various developers for collaborative, simultaneous development.
  • Manage database projects by allowing you to monitor progress and perform code reviews.
  • Find and understand differences between versions. This way you can easily find the differences that cause problems in order to resolve them quickly.
  • Make changes and see their effectiveness, without necessarily needing to build an entire project.
  • Support isolated development environments, so that each developer can work on their part of the project and test the changes before checking them in and distributing them to the rest of the team.
  • Test solutions to validate the design and code. You can use Team System’s support for unit tests to make sure that the design, code, and data don’t introduce any breaking changes.
  • Simplify deployment, of both the entire database and changes. This supports a variety of deployment scenarios during development and after the database goes into production, including finding the changes made to a test version of the database relative to a production version of the database.
  • Facilitate collaborative development of databases, so that team members can work together productively.

Database Edition was a huge step forward to make the process of creating database projects similar to creating application projects, particularly with its collaboration tools, support for source control, and test integration. But even with this step, things just weren’t as deeply integrated as they could be, in three primary ways:

  • Database Edition wasn’t fully integrated with the database server. It enabled developers to perform more database-specific actions than earlier tools, but you still had to pop over to Management Studio too often.
  • Database developers had one tool to create database objects, and another tool, the Business Intelligence Development Studio (BIDS), for BI projects, adding a significant third tool to the mix in addition to Visual Studio and Management Studio.
  • Database Edition made many simple things very easy, but once you went past the simple stuff, things got very hard. This is a rather subjective assessment, we realize, but there just seemed to be too many workarounds for what should have been easy.

The Database Edition was a great set of tools, particularly relative to what came before it. But with SQL Server 2012, Microsoft introduced what has so far proven to be the best set of tools yet, truly integrating database development tools into Visual Studio 2010 and 2012.

This post is an excerpt from the online courseware for our SQL Server 2012: Advanced SQL CLR course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

Introduction to Views

You can use a SQL SELECT statement to read data from one or more tables in
a relational database. This is such a common operation that SQL provides a
way to save a SELECT statement as a persistent database object, called a view.
A view is simply a saved SELECT statement, which you can also think of as a
virtual table because the view object is used like a table. The view does not
create a copy of the underlying data. Instead, when you use a view SQL Server
dynamically reads that data from the underlying tables. A view can select data
from a single table or multiple tables, and can even span multiple databases
and servers.
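A minimal sketch (the view, table, and column names are hypothetical):

    -- Save a SELECT statement as a view
    CREATE VIEW dbo.vwActiveCustomers
    AS
    SELECT CustomerID, CompanyName, City
    FROM dbo.Customers
    WHERE IsActive = 1;
    GO

    -- Query the view just as you would a table
    SELECT CompanyName FROM dbo.vwActiveCustomers WHERE City = N'Portland';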

You can query views the same way you query tables – you SELECT from a
view and get a result back. You can also perform updates against views,
although with some limitations that you’ll learn about later in this chapter. You
can also create nested viewsviews that contain other views.
A view is not a stored procedure, even though there are some similarities. Both
can return a result set, but a view has no parameters and there is no procedural
logic, just a SELECT statement, although the statement can be as complicated
as you care to make it. Nor does a view have all the features of a user-defined
function.

For a feature that is so simple in concept, views pack a lot of power. Let’s dive into them and see how they can provide a handy way to access data. You’ll also learn about their limitations, to help you decide when it is better to use SELECT statements directly. Let’s start with a discussion about the advantages of views and a quick look at the security issues of views.

Advantages of Views

You can use views to provide data to client applications. Some of the
advantages include:

  • You can give columns in views friendly names to simplify database structures so that users can easily query them.
  • You can use views to customize data with different filters and sorts for different users or different applications.
  • The end user or client application doesn’t have to know anything about the complexity, join syntax, or aggregates in the query underlying the view.
  • You can omit unnecessary or sensitive data from the view. In other words, you don’t have to include all the columns in the view results from the tables that comprise the view’s source data.
  • Views can provide a backward compatible interface to emulate a table that no longer exists or that has had its schema modified.
  • You can export data to another application using a view.
  • You can use views to partition data based on whatever criteria you specify.

Once you understand how views work and how they differ from plain SELECT statements, you’ll start finding a wide variety of ways to put views to use.

Views and Security

One of the major reasons to use a view instead of executing SELECT
statements directly is to avoid granting permissions to users on the underlying
tables and other database objects. You can grant a user or role permission on a
view alone, thereby limiting a user’s access to the raw data. As long as the
owner of the view and the underlying tables are the same principal, the user
will have access to the data even without explicit rights to the underlying data.

Another way to describe this is that you can revoke all permissions to tables and grant permissions on views that select only certain columns or rows from those tables. Users will then be able to work with the views based on the permissions granted to the views. This enables more granular control over exposure of your data, letting you limit data modification so that it conforms to the view criteria.
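A minimal sketch of that pattern (the role, table, and view names are hypothetical):

    -- Users in SalesRole can read only what the view exposes
    REVOKE SELECT ON dbo.Customers FROM SalesRole;
    GRANT SELECT ON dbo.vwActiveCustomers TO SalesRole;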


This post is an excerpt from the online courseware for our SQL 2012 Developer: Views course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

SQL Security Fundamentals: Changing the Execution Context


You don’t always want the caller’s permissions to be used to validate permissions in a broken ownership chain. Sometimes you want to execute the code as though it was being executed by another user entirely, by using that other user’s permissions to validate permissions on all of the accessed objects.

This is called switching the execution context of the code. This lets you take advantage of SQL Server’s granular permissions, keeping tight control over the permissions to underlying objects, while giving various users the ability to execute the code.

In SQL Server, when you define any kind of user-defined functions (except inline table-valued functions), stored procedures, and data manipulation triggers, you can use the EXECUTE AS clause as part of the definition of the code to run the code under the security context of the specified user.

Four EXECUTE AS options are available:

  • EXECUTE AS CALLER: The default for backward compatibility. The code executes in the context of the caller of the code, who must have permissions both to execute the code and to access underlying objects. The actual behavior depends on whether the ownership chain is broken or unbroken.
  • EXECUTE AS USER = ‘username’ and EXECUTE AS LOGIN = ‘loginname’: The code executes in the context of the specified user or login, so that the user must have permissions on underlying objects. In this case, the caller must either:
    • Have execute permission on the code.
    • Be a sysadmin or db_owner, have CONTROL SERVER permissions on the server or CONTROL permissions on the database, or have IMPERSONATE permission for the username.
  • EXECUTE AS SELF: A shortcut notation for the current user who is creating the procedure. This is equivalent to EXECUTE AS USER = [myUserName]. The SQL Server catalog stores the actual user ID of the person who writes the code.
  • EXECUTE AS OWNER: Another variation of execution under the security context of a specific user, in this case the owner of the code at the time of execution. If the owner changes after the code is created in the database, it is the current owner when the code executes.
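For example, a minimal sketch using EXECUTE AS OWNER (the procedure and table names are hypothetical):

    -- Runs under the owner’s security context, not the caller’s
    CREATE PROCEDURE dbo.GetPayrollSummary
    WITH EXECUTE AS OWNER
    AS
        SELECT EmployeeID, SUM(Amount) AS TotalPaid
        FROM dbo.Payroll
        GROUP BY EmployeeID;
    GO

Callers then need only EXECUTE permission on the procedure, not SELECT permission on the underlying table.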

TIP: Any time the security context of the code changes through EXECUTE AS, the creator of the code must have the IMPERSONATE permission for the user specified in the clause. You don’t ever need to have this permission to impersonate yourself, however, such as for EXECUTE AS SELF.


This post is an excerpt from the online courseware for our SQL Server 2012: Security Fundamentals course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

Understanding Logins with SQL Server

SQL Server logins are not part of Windows; they are saved in and managed by SQL Server. A user who connects to SQL Server via a SQL Server login is prompted for a login name and password. If you select the Windows Only mode of authentication, all SQL Server logins will be disabled and users will be able to connect only by using their Windows logins.

Mixed mode with SQL Server logins is much more flexible—for example, it supports users on Windows 9x computers—but it is also less secure. SQL Server logins and passwords are saved in the system tables in SQL Server, which are file-based. Users who gain access to these files could conceivably hack administrative logins and passwords.

If you configure your SQL Server to support SQL Server logins, there is one built-in SQL Server login that you need to watch out for: the sa login.

Beware of the sa Login

You may have noticed a login named “sa” hanging around with the other logins in the Logins node in Object Explorer. The sa, or system administrator, login is included mainly for backward compatibility with older versions of SQL Server. The sa account is mapped to the sysadmin fixed server role, and anyone who uses sa is a full system administrator, with irrevocable rights over the entire SQL Server instance and all the databases in it.

You can’t modify or delete the sa login. If you select Mixed Mode authentication when you install SQL Server, you’re prompted for a password for the sa user. Unless you set a password, anyone can log in as sa with no password, and play “let’s administer the server.” Needless to say, this is the last thing you want your users doing. Use the sa login only as a backdoor if other system administrators are unavailable or have forgotten their Windows passwords. If that happens, you probably need new admins!

WARNING! Never, EVER, use the sa login for access to a database in an application. Doing so could give a hacker administration-level control over your database server if the hacker is able to get control of the application. This has long been an easy way to attack servers and is a horrible practice. Instead, set up a custom Windows or SQL Server login for the application to use, and give that login the absolute minimum permissions necessary to run the application (which is another application of the principle of least privilege).

Password Policy and Enforcement

In versions of SQL Server before 2005, there was no easy way for a system administrator to enforce password policies that could help make a system more secure. For example, SQL Server had no way to force users to create strong passwords of a minimum length and a mix of alphanumeric and other characters. If someone wanted to create a login with a single letter for a password, you couldn’t configure SQL Server to prevent it. Likewise, there was no way to cause passwords to expire on a regular basis, such as every three months. Some people rightly saw this as a major reason not to use SQL Server logins.

SQL Server now can hook into the password policies of Windows Server 2003, Windows Vista, or later versions. The passwords are still stored in SQL Server, but SQL Server makes a call into the NetValidatePasswordPolicy() Windows API method, which was first introduced in Windows Server 2003. This API function applies the Windows password policy to SQL Server logins and returns a value that indicates whether the password is valid. SQL Server calls this function when a user creates, sets, or resets a password.
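A minimal sketch of creating a login that opts into these checks (the login name and password are hypothetical):

    CREATE LOGIN AppUser
    WITH PASSWORD = N'CorrectHorse!42',
         CHECK_POLICY = ON,       -- validate against the Windows password policy
         CHECK_EXPIRATION = ON;   -- enforce password expiration as well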



This post is an excerpt from the online courseware for our SQL Server 2012: Security Fundamentals course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

SQL Server 2012: Using Snapshot Isolation

As discussed in our previous article, concurrency that is based solely on locking can cause blocking and deadlocking problems. However, reading data with READ UNCOMMITTED is not a solution for applications that require row-level accuracy when retrieving data; dirty data is generally not a good thing. To solve this problem, SQL Server includes a row versioning mechanism called snapshot isolation, which is intended to improve performance by avoiding the reader-writer blocking scenarios of the previous example.

Read operations do not request shared locks on the data, so transactions that modify data do not block transactions that read data, and transactions that read data do not block transactions that write data, as they normally would under the default READ COMMITTED isolation level.

The term “snapshot” reflects the fact that all queries in the transaction see the same version, or snapshot, of the database, based on the state of the database at the moment the transaction begins. No locks are acquired on the underlying data rows or data pages in a snapshot transaction, which permits transactions to execute without being blocked by a prior incomplete transaction.

NOTE: Two write operations do block each other even while running under row versioning-based isolation levels, because two write operations cannot modify the same data at the same time.

A unique transaction sequence number identifies each transaction. Whenever a transaction modifies a row, an image of the row before modification is copied into a page in tempdb. If multiple transactions modify a row, multiple versions of the row are linked in a version chain.

When a user or application retrieves data, it automatically gets the version of each row that was last committed as of the moment its transaction began.

Snapshot Isolation Caveats

Snapshot isolation is not recommended for every application. Here are several reasons why READ COMMITTED might be a better choice for your application:

  • READ COMMITTED consumes less tempdb space than snapshot isolation.
  • Snapshot isolation does not work with distributed transactions that span servers.
  • Snapshot isolation is vulnerable to update conflicts that do not apply to READ COMMITTED. When a transaction that’s running under snapshot isolation reads data that is then modified by another transaction, an update by the snapshot transaction to the same data causes an update conflict and the transaction terminates and rolls back. This is not an issue with READ COMMITTED.

Enabling Snapshot Isolation

You enable snapshot isolation per database by turning on the ALLOW_SNAPSHOT_ISOLATION database option.

This allows snapshot isolation when it is explicitly invoked, but the default READ COMMITTED transaction isolation level remains in effect for implicit transactions that do not specify snapshot isolation.

To replace READ COMMITTED entirely, turn on the READ_COMMITTED_SNAPSHOT option in addition to the ALLOW_SNAPSHOT_ISOLATION option. Snapshot isolation will then be used for all transactions.
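A minimal sketch of both options, assuming a hypothetical database named SalesDb:

    ALTER DATABASE SalesDb SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- Optionally make versioned reads the behavior of READ COMMITTED as well
    ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON;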


This post is an excerpt from the online courseware for our SQL Server 2012: T-SQL Working with Data course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.