Monthly Archives: September 2013

LINQ and Relational Data

At its most basic level, LINQ provides the ability to query any data source that supports the IEnumerable or generic IEnumerable&lt;T&gt; interface. The data you want to query in an application can come from a variety of sources. The data may reside in in-memory objects; if so, you can use LINQ to Objects. The data may reside in XML; if so, you can use LINQ to XML.

The primary source of data in most applications is a relational database, such as SQL Server. If you have built database applications with Visual Studio, you are familiar with using ADO.NET and building SQL statements to query and modify data in a relational database.

In this chapter, you will see how to use LINQ to SQL to query and modify data in a SQL Server database. You may find yourself thinking that ADO.NET works great, so why do I need another way to work with relational data? What advantage does LINQ to SQL offer me?

LINQ provides a consistent model for querying all types of data. With LINQ, a query that works with objects looks similar to a query that works with XML. It also looks similar to a query that works with relational data. If you know how to use LINQ to Objects to write a query, you already know most of what you need to write a query by using LINQ to SQL.

LINQ to SQL is part of the ADO.NET family of technologies. It abstracts away the various ADO.NET classes such as SqlConnection, SqlCommand, SqlDataAdapter, SqlDataReader, etc. You get the same functionality by writing less code. You can also easily mix LINQ to SQL with existing ADO.NET code.

LINQ to SQL also bridges the gap between object programming and traditional database programming. As a .NET developer, you build applications based on objects. The .NET Framework consists of classes. To use the functionality that a class provides, you create an instance of the class and then use its properties, methods, and events.

To query a database, you build a SQL statement as a string. You can then use ADO.NET classes to send the query to the database. You can also store the results by using the ADO.NET DataSet and DataTable classes. Is this object programming? Yes and no. The use of ADO.NET is object programming; however, the data model is based on tables and rows, not on objects.

To model the data as objects, you can create classes for each table in the database. For example, you could create a Customer class with properties for company name, address, city, region, and so on. When you query the Customer table, you store the results in one or more instances of the Customer class.

LINQ to SQL provides a runtime infrastructure for managing relational data as objects. To use LINQ to SQL, you map the structure of a relational database to an object model. Each table in the database maps to a class. This class is an entity class. At runtime, the .NET Framework translates LINQ queries into SQL and sends them to the database. When the database returns results, the runtime populates the objects with the returned data.

Once you create your object model, you can use LINQ to SQL to query and change data in a SQL Server database.
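For example, assuming an entity class named Customer mapped to the Customer table and a designer-generated NorthwindDataContext (names assumed here for illustration), a LINQ to SQL query reads just like any other LINQ query:

```csharp
// A sketch only: NorthwindDataContext and Customer are assumed to be
// generated by the LINQ to SQL designer from the Northwind database.
using (var db = new NorthwindDataContext())
{
    var customers =
        from c in db.Customers
        where c.City == "London"
        orderby c.CompanyName
        select c;

    foreach (Customer c in customers)
        Console.WriteLine("{0}: {1}", c.CompanyName, c.City);
}
```

At runtime, LINQ to SQL translates the query into a parameterized SELECT statement, sends it to SQL Server, and materializes the results as Customer objects.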

Note: Microsoft built LINQ to SQL to support any relational database. However, the implementation of LINQ to SQL that ships with the .NET Framework 3.5 and 4.0 supports only SQL Server. It is possible that Microsoft will support other databases in future releases.

 

This post is an excerpt from the online courseware for our Windows 8 Using XAML: Views, Resources, and Toasts course written by expert Ken Getz.

Ken Getz is a Visual Studio expert with over 25 years of experience as a successful developer and consultant. He is a nationally recognized author and speaker, as well as a featured instructor for LearnNowOnline.

Roots of SQL Server Data Tools

Microsoft has long supported application database development with features built into Visual Studio, ranging from support for data access to the ability to create a connection to a database and manipulate database objects and data in simple ways. Various add-ons over the years have expanded those basic abilities, but even so, developers were constantly having to bounce between Visual Studio and either Enterprise Manager (in earlier versions of SQL Server) or Management Studio (in more recent versions, including SQL Server 2012) to perform data-related tasks, such as creating database tables, setting security features, or executing stored procedures. Neither Visual Studio nor the SQL Server tools fully supported database development.

Then several years ago, Microsoft released the first version of the Team System Edition of Visual Studio 2008. This was a huge package of group software collaboration tools, including a long-overdue successor to its Visual SourceSafe source code control product. Over time, Team System spawned Architecture, Testing, and Development editions. But for a long time, one piece was missing: a database edition.

Microsoft soon rectified that, and released the product code named Data Dude. Officially, it is Visual Studio 2008 Team System Database Edition. Definitely a case where the code name was way cooler than the official name!

Data Dude, er, Database Edition, was a set of tools for managing the full database development lifecycle. Software developers were intimately familiar with the process, since it mirrors the code, build, deploy, rinse, repeat cycle common in software development. (And it can handle agile development methodologies as well.) Database administrators had to get used to some things, but could most certainly be part of the process.

Microsoft had several important high-level goals for the product:

  • Provide integrated change management, just like source control systems. This lets you go back in time to earlier versions of the database project, and also provides controlled access to the project components to various developers for collaborative, simultaneous development.
  • Manage database projects by allowing you to monitor progress and perform code reviews.
  • Find and understand differences between versions. This way you can easily find the differences that cause problems in order to resolve them quickly.
  • Make changes and see their effectiveness, without necessarily needing to build an entire project.
  • Support isolated development environments, so that each developer can work on their part of the project and test the changes before checking them in and distributing them to the rest of the team.
  • Test solutions to validate the design and code. You can use Team System’s support for unit tests to make sure that the design, code, and data don’t introduce any breaking changes.
  • Simplify deployment, of both the entire database and changes. This supports a variety of deployment scenarios during development and after the database goes into production, including finding the changes made to a test version of the database relative to a production version of the database.
  • Facilitate collaborative development of databases, so that team members can work together productively.

Database Edition was a huge step forward to make the process of creating database projects similar to creating application projects, particularly with its collaboration tools, support for source control, and test integration. But even with this step, things just weren’t as deeply integrated as they could be, in three primary ways:

  • Database Edition wasn’t fully integrated with the database server. It enabled developers to perform more database-specific actions than earlier tools, but you still had to pop over to Management Studio too often.
  • Database developers had one tool to create database objects, and another tool—the Business Intelligence Development Studio (BIDS)—for BI projects, adding a significant third tool to the mix in addition to Visual Studio and Management Studio.
  • Database Edition made many simple things very easy, but once you went past the simple stuff things got very hard. This is a rather subjective assessment, we realize, but there just seemed to be too many workarounds for what should have been easy.

The Database Edition was a great set of tools, particularly relative to what came before it. But with SQL Server 2012, Microsoft introduced what has so far proven to be the best set of tools yet, truly integrating database development tools into Visual Studio 2010 and 2012.

This post is an excerpt from the online courseware for our SQL Server 2012: Advanced SQL CLR course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

HTML Helper Classes

HTML helpers are implemented in the System.Web.Mvc.HtmlHelper and HtmlHelper&lt;T&gt; classes. Views and partial views in MVC have an Html property that exposes an HtmlHelper instance, making it easy to use helpers in a view. If you examine the interface of the HtmlHelper class in the documentation, you’ll find that it doesn’t implement most of the helper methods as native methods on the class. Instead, most of the helpers are extension methods implemented in the System.Web.Mvc.Html namespace, and HtmlHelper merely functions as an anchor point for those helpers. Although HtmlHelper is most useful as the source of the helper extension methods, it has a number of its own native methods, of which Table 1 lists a few.

  • AntiForgeryToken: Adds a hidden form field with an anti-forgery token, used to defend against cross-site request forgery (CSRF) attacks. You can also extend this by passing in a salt value, domain, and path.
  • AttributeEncode: HTML-encodes an attribute value.
  • EnableClientValidation: Turns on client-side validation, causing the helpers to generate the necessary client-side code.
  • Encode: HTML-encodes a string or object value passed to the method.
  • HttpMethodOverride: Returns the value of a hidden field that identifies an override method for the HTTP data-transfer method used by the client.

Table 1. Some useful native methods of HtmlHelper.
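As an illustration, the AntiForgeryToken helper is typically paired with the ValidateAntiForgeryToken attribute on the receiving controller action. A sketch (not from the course listing):

```cshtml
@* In the view: emit a hidden __RequestVerificationToken field inside the form.
   Pair this with [ValidateAntiForgeryToken] on the receiving POST action. *@
@using (Html.BeginForm())
{
    @Html.AntiForgeryToken()
    <input type="submit" value="Save" />
}
```

If the posted token is missing or doesn’t validate, the attribute rejects the request before your action code runs.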

The native HTML helpers included with HtmlHelper and its extension methods are useful, but cover only the most commonly used HTML elements. You’ll probably find yourself wanting more, and it is relatively easy to write your own helper methods, or even throw away the built-in helpers and use your own custom helper methods instead. Then, instead of referencing the System.Web.Mvc.Html namespace in your project, you reference the namespace containing your custom helper extension methods. This is just another way that you’re in control in MVC.

Types of HTML Helper Methods
MVC includes three broad classes of HTML helper methods: untyped helper methods, strongly-typed helper methods, and templated helpers. The untyped helper methods first appeared in the initial release of MVC, version 1, and generally take a string with the name of the property to which it is bound at runtime. The following code is an example of using a regular HTML helper to display a label, text box, and validation message to edit a Name property of the view model object, using the Label, TextBox, and ValidationMessage helper
methods.
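A minimal sketch of such markup, assuming a view model with a Name property:

```cshtml
@* Untyped helpers: the property is identified only by the string "Name",
   which is resolved against the model at runtime. *@
@Html.Label("Name")
@Html.TextBox("Name")
@Html.ValidationMessage("Name")
```

Because the binding is just a string, a misspelled property name isn’t caught until the view executes.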

A strongly-typed helper has “For” appended to the name of the method and takes a lambda expression that binds the helper to a model property at design time. This is in contrast to a regular helper that takes a string parameter. Strongly-typed helpers have the benefit of catching many kinds of problems at compile time instead of runtime, and IntelliSense fully supports them when you write code. The following code is an example of using strongly-typed helpers for the same Name property of the previous code, using the LabelFor,
TextBoxFor, EditorFor and ValidationMessageFor methods. Notice that each takes a lambda expression to bind the helper at design time.

@Html.LabelFor(model => model.Name)
@Html.TextBoxFor(model => model.Name)
@Html.ValidationMessageFor(model => model.Name)
@Html.EditorFor(model => model.Notes)
@Html.ValidationMessageFor(model => model.Notes)

Using a lambda expression lets you pass both the field that the helper is binding to and the value of the field. The strongly-typed helper method uses this information to intelligently determine how to display the data. It really is a rather elegant way to pass the information to the helper method.

TIP: Try misspelling a field name to see what happens. Specifically, misspell one of the field names in one of the TextBox helper methods in the Edit view of the AWLTProducts project, such as changing “Name” to “Nam.” You’ll find that when you edit a product’s data you will receive a compilation error, since the Product model does not contain a definition for “Nam”.

Code that uses strongly-typed methods looks a bit more complex and requires more typing than the untyped versions, but only a little. And because the fields you are using are strongly typed, IntelliSense can help you where it couldn’t with just strings, as shown in Figure 12. Another benefit of using strongly-typed helper methods is that refactoring tools can update the views if you change field names.

Figure 12. IntelliSense provides a list of the properties of the Product object.

Templated helpers take the methods to an entirely different level, using built-in and custom templates to give you fine control over the generated HTML. You can even throw an entire custom object at a templated helper, and the helper will use reflection to render all its properties—including nested properties—based on the templates defined for each data type, reverting to a text box for any data type for which it has no template. The following code is an example of using a templated helper to edit an entire view model, not just a single property of the object, using the EditorForModel method.
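A sketch of what that call looks like; a single line is enough, because the helper reflects over the whole model:

```cshtml
@* Renders an editor for every property of the view model, using the
   editor template registered for each property's data type. *@
@Html.EditorForModel()
```

You can override the output per type or per property by adding custom templates under the view folder’s EditorTemplates directory.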


This post is an excerpt from the online courseware for our MVC4.0: Working with Data course written by expert James Curtis.

James Curtis is a .NET developer who primarily works in the UX space. He has worked on and for several large projects alongside Microsoft Consulting. James has spoken at several code camps about UX development for ASP.NET and SharePoint. He is an active participant in the development community, tweeting and blogging about several topics in the UX area. James is an active consultant and is also assisting several start-ups, contributing his UX experience.

Social Tagging in SharePoint 2010

As you read earlier, folksonomy is a community-driven mechanism for categorizing data where the keywords are driven by end users instead of designated taxonomists. SharePoint 2010 now offers support for folksonomy via Social Tagging. Social Tagging allows end users to tag pages with Keywords for future reference or to share with colleagues.

When a user tags a page, SharePoint searches for a Keyword or Term that matches the tag. If a match occurs then the existing Keyword or Term is used. If, on the other hand, a matching Keyword or Term is not found, then SharePoint creates a new Keyword and stores it in the special Keywords Term Set. By default, everyone can add new Terms to this special Term Set. As you would expect, the Keywords Term Set uses an open submission policy, which allows all users to contribute entries.

NOTE     Terms in the Keywords Term Set are called Keywords, not Terms.

Keywords do not support any type of hierarchy or relationship. Keywords can only exist under the Keywords Term Set and only in one flat structure.

When tagging a page, the user has the option of marking the tag either public or private. If the tag is marked as private, others will not be able to see that the page was tagged, but the Keyword will still be created in the Keyword Store and available for others to use. Also, if a page has been tagged by someone and the tag was marked as public, anyone else who tags that page will see that tag as a suggestion. Because of this, users should be properly trained on how to use tags to prevent tags of a sensitive or confidential nature from being inadvertently created.

Over time, certain Keywords may gain wide acceptance. In such cases, it may be useful to promote them by moving them from the Keywords Term Set to another, more formal Term Set. In this way, a Term can be promoted from a folksonomy model (Keyword) to a taxonomy model (Term).

If Social Tagging is used in conjunction with SharePoint’s My Sites feature, then in addition to tagging pages it is also possible to add notes about a page. Notes are stored on a user’s My Site for later retrieval and are viewable by everyone.

 

This post is an excerpt from the online courseware for our Microsoft SharePoint 2010: Enterprise Content Management course written by expert John Underwood.

John Underwood is a technical evangelist for a SharePoint consulting company, with 30 years of programming experience, including 20 years on the Microsoft platform and 10 years with .NET. John also has extensive experience using, configuring, and programming SharePoint.

 

 

Generic Lists in LINQ

LINQ allows you to query over any type of generic list. The most common list type you’ll use is List&lt;T&gt;, but LINQ allows you to work with any of the following list types, all in the System.Collections.Generic namespace unless otherwise specified:

  • List<T>
  • LinkedList<T>
  • Queue<T>
  • Stack<T>
  • HashSet<T>
  • System.Collections.ObjectModel.Collection<T>
  • System.ComponentModel.BindingList<T>

Just to prove the point, the sample project includes the QueryGenericList method. This method performs similar work to the earlier QueryArray method, this time showing both query syntax and functional syntax. Listing 2 shows the entire procedure.


Listing 2. The QueryGenericList method shows off several different techniques, including query vs. functional syntax.

Running this procedure displays the output shown in the Figure below.

Figure. The QueryGenericList procedure provides a list of files.

The QueryGenericList procedure starts by retrieving the array of files, and copying the data into a generic list:
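That step might look something like this (variable names assumed; the folder path comes from elsewhere in the sample):

```csharp
// Retrieve the array of files and copy it into a generic list.
FileInfo[] files = new DirectoryInfo(folder).GetFiles();
List<FileInfo> fileList = new List<FileInfo>(files);
```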

Next, the code uses standard query syntax to retrieve from the list all the files whose length is less than 1000 bytes, ordered first by length in descending order, and then for files with matching lengths, by file name. The query retrieves an anonymous type containing the Name and Length properties of the file:
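A sketch of that query in standard query syntax:

```csharp
// Files under 1000 bytes, largest first, ties broken by name,
// projected into an anonymous type.
var smallFiles =
    from file in fileList
    where file.Length < 1000
    orderby file.Length descending, file.Name
    select new { file.Name, file.Length };
```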

Finally, the procedure demonstrates the equivalent query, created using function syntax instead. This query returns the same data in the same order, but does it by calling extension methods with lambda expressions defining their behavior:
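In functional syntax, the equivalent query might read:

```csharp
// The same query, built from extension methods and lambda expressions.
var smallFilesFunctional = fileList
    .Where(file => file.Length < 1000)
    .OrderByDescending(file => file.Length)
    .ThenBy(file => file.Name)
    .Select(file => new { file.Name, file.Length });
```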

Note a few things about this query definition:

  • The Where function accepts a lambda expression that returns true or false for each item in the collection. Items for which the lambda expression returns false are excluded from the collection.
  • The OrderByDescending method (and its “cousin,” the OrderBy method) accepts a lambda expression that returns information indicating how to sort the data. In this case, the sorting occurs based on the Length property of each FileInfo object.
  • Using the standard query syntax, you can indicate ordering by multiple columns by simply including the columns separated with a comma. In functional syntax, you must use the ThenBy or ThenByDescending method to indicate a secondary sort. As a parameter to the method, pass a lambda expression which, again, indicates the sort order. In this case, the secondary sort uses the Name property of the input FileInfo object.
  • The Select method accepts a lambda expression that identifies which field or fields you want the query to return. In this case, the query returns an anonymous type containing the Name and Length properties of the file.

NOTE Remember, all the techniques you’ve seen in this example apply to any type of LINQ query, not just LINQ to Objects, and not just queries that work with generic lists.

TIP: The System.Linq.Enumerable class provides a large number of extension methods that add behavior to queryable objects, much like the Select, Where, OrderBy, ThenBy, OfType, and other methods you’ve already seen. Later sections in this chapter introduce many of these methods. For more information, review the Microsoft documentation for the System.Linq.Enumerable class and its many methods.

 

This post is an excerpt from the online courseware for our Microsoft LINQ Using Visual C# 2010 course written by expert Ken Getz.

Ken Getz is a Visual Studio expert with over 25 years of experience as a successful developer and consultant. He is a nationally recognized author and speaker, as well as a featured instructor for LearnNowOnline.

SQL 2012 Developer: Aggregates

Aggregates

One of the more exciting types of SQLCLR code modules is custom aggregates. Earlier versions of SQL Server didn’t offer an easy way to extend the rather short list of T-SQL aggregates, such as SUM, AVG, MAX, MIN, and COUNT. You still can’t use T-SQL to create custom aggregates, but you can write your own in SQLCLR. It’s not something you’ll need to do often, but it can greatly simplify application coding in situations where you need it. 

Unlike other SQLCLR code modules, you can define only a single custom aggregate within a class. The class must be public and requires four non-static methods, which SQL Server calls as it processes the data to aggregate:

  • Init initializes storage values. SQL Server calls this once at the beginning of the query that uses the aggregate.
  • Accumulate does the work of combining values. SQL Server calls it once for each row that is aggregated, and it must support multiple instances of the aggregation code that is executing so that SQL Server can utilize multiple threads to make operations more efficient.
  • Merge is called at the end of the operation to merge the aggregations of multiple instances into one final value.
  • Terminate is the last method called and returns the final aggregated value.

You also have to deal with serialization issues because instances of the aggregation class have to be moved between threads. This complicates the code only a little, since the .NET Framework has rich support for serialization. Most commonly, you’ll implement the IBinarySerialize interface to let .NET do all the serialization work, but there are other options if you have a special need.

Consider a sample aggregation. A common requirement in applications is to create a concatenation of strings within a table, delimited by a single character such as a semicolon. Some applications require information in this format for processing. You’ll create a Concatenate aggregation that you can use to concatenate all string values in a field. For example, once you create the aggregation, you could run the following T-SQL code to produce a single string with all product names from the Northwind Products table that have a category ID of 4, which is Dairy Products.
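Assuming the aggregate has been registered in the database as dbo.Concatenate (the registration name is an assumption), such a query might look like this, using the Northwind Products table:

```sql
-- Concatenate all product names in the Dairy Products category (ID 4)
-- into a single delimited string.
SELECT dbo.Concatenate(ProductName) AS DairyProducts
FROM Products
WHERE CategoryID = 4;
```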

This code produces the following string (with line breaks added to fit on the printed page):

The first part of the SQLCLR code defines the aggregation class and decorates it with Serializable and SqlUserDefinedAggregate attributes. The SqlUserDefinedAggregate attribute needs the following parameters, most of which the query processor uses to perform the aggregation properly.

  • Format specifies the serialization format, either Format.Native or Format.UserDefined. You’ll nearly always need to use UserDefined, although SQL Server has enhanced support for the Native format.
  • IsInvariantToDuplicates indicates whether the aggregation result is the same if any of the items are duplicates. The Concatenate aggregation returns a different result if duplicates exist (they’ll appear multiple times in the resulting string), so set this value to false.
  • IsInvariantToNulls indicates whether the aggregation result is the same if any of the items are nulls. Concatenate ignores null values, so set this value to true because the same string will result whether there are nulls or not.
  • IsNullIfEmpty indicates whether the aggregation result is null if no items were aggregated, such as when the WHERE clause returns no records to aggregate. If no strings exist, Concatenate returns a null, so set this to true.
  • MaxByteSize is required by the UserDefined serialization format and indicates the largest aggregation value that can be returned. This is set to 8000 bytes in Concatenate.

Concatenate is implemented in the following code as a structure; it implements the IBinarySerialize interface so that SQL Server can properly use multiple instances of the aggregate, and it is decorated with the parameters listed above.

SQL Server calls the Init() method before aggregation begins. This is the point where you should initialize any variables used in the aggregation. Concatenate uses a StringBuilder object in an sb variable to hold the aggregated values, so that variable is instantiated here.

SQL Server calls the Accumulate method when it has a new value to add to the aggregation. Concatenate works only on string type data, so the method receives a value of type SqlString. If the value is null, it returns immediately without doing any aggregation task. Otherwise, it calls the Append method of the StringBuilder variable to concatenate the string.

The Merge method is used to aggregate multiple instances of the Concatenate object. In this case it simply concatenates the value passed to it by the SQL Server query processor to the current instance’s aggregation value. The method is passed an object of type Concatenate and calls the object’s StringBuilder’s ToString method to get the current aggregation value.

Often you’ll need to do more work here; for example, when the resulting aggregation needs to be in alphabetical order. In that case, you may want to use a sortable .NET list object to store the strings internally while processing the aggregate. But here a simple append is sufficient, since the order of the items in the final string is not important.

The last of the four required aggregation methods is Terminate. SQL Server calls this after it processes the last value that it aggregates. This method needs to return the final aggregated value, so it first checks whether the current string is of zero length. If it is, it returns a null. Otherwise it strips off the final semicolon and returns the value. It also clears the sb object in case this instance of Concatenate is used for another aggregation.

The aggregation object also has two other methods, required because the object implements the IBinarySerialize interface. The Read method is called when the object is deserialized and restores the value of the sb variable. Write puts the current string value of sb into the serialization object w.
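Putting the pieces described above together, a sketch of the complete aggregate might look like the following (member names and minor details are assumed, not taken from the course listing):

```csharp
using System;
using System.Data.SqlTypes;
using System.IO;
using System.Text;
using Microsoft.SqlServer.Server;

[Serializable]
[SqlUserDefinedAggregate(
    Format.UserDefined,
    IsInvariantToDuplicates = false,
    IsInvariantToNulls = true,
    IsNullIfEmpty = true,
    MaxByteSize = 8000)]
public struct Concatenate : IBinarySerialize
{
    private StringBuilder sb;

    // Called once before aggregation begins.
    public void Init()
    {
        sb = new StringBuilder();
    }

    // Called once per row; ignores nulls, otherwise appends the value.
    public void Accumulate(SqlString value)
    {
        if (value.IsNull) return;
        sb.Append(value.Value).Append(';');
    }

    // Called to combine partial results from parallel instances.
    public void Merge(Concatenate other)
    {
        sb.Append(other.sb.ToString());
    }

    // Called last; strips the trailing delimiter and returns the result.
    public SqlString Terminate()
    {
        if (sb == null || sb.Length == 0) return SqlString.Null;
        string result = sb.ToString(0, sb.Length - 1);
        sb.Clear();
        return new SqlString(result);
    }

    // IBinarySerialize members move the state between instances.
    public void Read(BinaryReader r)
    {
        sb = new StringBuilder(r.ReadString());
    }

    public void Write(BinaryWriter w)
    {
        w.Write(sb.ToString());
    }
}
```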

This simple but useful example shows what is necessary to create a SQLCLR aggregate. Other custom aggregates may require more complex code to implement all the required features, and you may have to use multiple class-level variables to keep track of multiple values during the course of calculation, but it really isn’t all that hard to do once you understand the basic principles. You’ll need to have a good handle on how .NET does serialization. But these techniques really become valuable when you implement your own
user-defined types in SQLCLR, which open up whole worlds of possibilities in SQL Server.

 

This post is an excerpt from the online courseware for our SQL Server 2012: Advanced SQL CLR course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

SharePoint 2010 Search

The need for enterprise search is a key driver for implementations of Microsoft SharePoint 2010 Search. The out-of-the-box SharePoint Server search features are useful, but getting the most benefit from SharePoint Server search requires configuration and customization.

The SharePoint Server object model makes it possible for application developers to implement powerful custom functionality, but in many cases you can meet requirements by configuring a site in the browser. This chapter focuses on customization of search by using the out-of-the-box features. SharePoint Server comes with two search site definitions you can use as a basis: Basic Search Center and Enterprise Search Center. Both site definitions create pages that contain Search web parts. You can use these web parts to extend sites based on the site definitions. You can also use them to create custom search pages in sites based on any site definition simply by enabling the correct set of features.

SharePoint Server does not simply provide the capability to perform searches. It also enables you to tune and improve your search results to provide the most relevant information to your users. When you consider the potential reduction in the cost of time spent looking for information and the cost of duplicated effort, it is easy to understand why customizing search is worthwhile. To this end, SharePoint Server includes reporting and optimization tools for search. With this information, you can define different types of search by defining search scopes. For example, you can create a scope to support search pages that enable users to find people within a geographic location or find documents of a particular type.

SharePoint 2010 also adds a new search feature known as refiners. Refiners, and the accompanying Refinement Panel Web Part, provide users with a quick look at the kinds of matches they are getting. They also provide a meaningful way for users to whittle down the results by key areas such as document type, author, and origin of the search result.

You can also promote specific content based on its importance or relevance. You can specify the “best bets” for searches based on specific keywords. You can improve the quality of your keywords and best bets based on what you learn by analyzing the site’s usage reports.

Basic Search Center

As the name implies, the Basic Search Center site template provides basic search functionality. A new site based on this template has several application pages, including:

  • default.aspx for entering search queries.
  • results.aspx for showing search results.

Basic Search Center supports minimal customization, and does not permit the addition of new pages. Basic Search Center works with all versions of SharePoint 2010. (For a comparison of search capabilities in versions of SharePoint 2010 visit http://go.appdev.com/?id=SXEG).

Enterprise Search Center

The Enterprise Search Center (formerly known as Search Center with Tabs) is designed to provide greater scalability and customization than the Basic Search Center template. Enterprise Search Center is available with SharePoint Server 2010 Standard and Enterprise editions. In order to use the Enterprise Search Center, the SharePoint Server Enterprise Site Collection feature and the SharePoint Server Publishing Infrastructure feature must be enabled (see the figure below).


Figure. Required features for using Enterprise Search Center.

When it comes to customization, there are two significant differences between Basic Search Center and Enterprise Search Center. First, Enterprise Search Center includes a Pages library where you can create, customize, and publish search pages. Second, Enterprise Search Center includes the ability to provide tabbed search pages. The out-of-the-box template includes tabs for All Sites and People. You may modify the tabs to include custom pages, scopes, and so on.

 

This post is an excerpt from the online courseware for our Microsoft SharePoint 2010: Enterprise Content Management course written by expert John Underwood.

John Underwood is a technical evangelist for a SharePoint consulting company, with 30 years of programming experience, including 20 years on the Microsoft platform and 10 years with .NET. John also has extensive experience using, configuring, and programming SharePoint.

Schemas and Naming in SQL Server

Whenever you execute a data selection query in SQL Server, you’ll be accessing one or more database objects, so it is important that you understand how things are named. In SQL Server, database object names use a convention that can contain four parts, any of which can be blank, except the object name:

  • The server_name specifies a linked server name or remote server name. A blank implies the current server.
  • The database_name specifies the database name. A blank implies the current database context.
  • The schema_name specifies the name of the schema that contains the object. A blank implies the default schema for the current user or the dbo schema if no other default schema is assigned to the current user.
  • The object_name specifies the name of the object.

In most situations, it is not necessary to use all four parts. However, the recommendation is to use the schema name with the object name, as shown in the following two examples. The first example, for the Northwind database, will work with or without dbo, because the server uses dbo when no schema is explicitly defined and no default schema is explicitly assigned to the current user. The second query, for the AdventureWorks2012 database, will fail if the schema name Sales is omitted, unless the user has Sales set as her default schema; in this case, the Store table was created in the Sales schema.
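The two examples might look like this (a sketch using the standard sample databases; column names assumed from their shipped schemas):

```sql
-- Northwind: works with or without the schema name,
-- because dbo is the server's fallback schema.
SELECT CompanyName FROM dbo.Customers;

-- AdventureWorks2012: fails without the Sales schema name,
-- unless Sales is the current user's default schema.
SELECT Name FROM Sales.Store;
```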

It might be tempting to deal with schemas by keeping all database objects assigned to dbo and avoid creating or assigning any other schemas. However, schemas can be a useful way of creating multiple namespaces in a database, just as namespaces make it easier for .NET programmers to keep track of classes. The AdventureWorks2012 database provides a good example of using schemas as namespaces. You can also assign permissions on a schema that grant the permission to all objects within it, which makes schemas a powerful security tool for protecting data access. 

NOTE

In versions before SQL Server 2005, a schema was created automatically for each database user. When a user created an object, the object was automatically created in that user’s
schema—unless the user was a database owner, in which case the object was created in the dbo schema. In more recent versions of SQL Server, each user does not automatically have a schema. Schemas are created independently of users; users must explicitly be assigned rights to a schema and can be assigned a default schema. Many users can have rights to use any schema and many users can have the same schema as their default.

 

This post is an excerpt from the online courseware for our SQL Server 2012: Select, Where and Clauses course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.