Microsoft SQL Server 2008: Matrix Features

With the release of SQL Server 2008 Reporting Services, Microsoft merged the Table, Matrix, and List data regions into a single unified data region: the tablix. So when you drag a matrix region to a report, even though it says Matrix on the Toolbox, you are actually dragging a tablix onto the report surface.

The major benefit of the unified Tablix data region is its added flexibility over the separate data regions it replaces. You can create more flexible matrix reports, as well as hybrid reports that blend features of both tabular and matrix layouts. The next few sections illustrate some of this newfound flexibility of the SQL Server 2008 Tablix region in producing matrix reports.

Adding Adjacent Dynamic Columns to a Matrix

Thanks to the Tablix data region, you can create matrix reports that display multiple dynamic columns. This was impossible prior to SQL Server 2008. With the introduction of the Tablix data region, however, it’s fairly simple.

To add a second (or third, etc.) dynamic column to an existing matrix (tablix) region, click on the arrow to the right of an existing column group in the Grouping pane and select Add Group|Adjacent Before or Add Group|Adjacent After from the popup menu as shown in Figure 1.



Figure 1. Adding an adjacent column group.

A Two-Dynamic Column Example: The rptTwoDynamicCols Report

The rptTwoDynamicCols report contains two adjacent column groups. The first group, like most of the sample reports in this chapter, is based on the OrderYear field. The adjacent column group is based on the CategoryName field. The rptTwoDynamicCols report is shown in Design view in Figure 2. The Tablix data region on the report contains an extra header row that was added above the column groups by right-clicking on one of the column group cells and selecting Insert Row|Outside Group – Above.


Figure 2. The rptTwoDynamicCols report contains two adjacent column groups.

The report is shown in Firefox in Figure 3.


Figure 3. The rptTwoDynamicCols report in Firefox.

Adding Adjacent Static Columns to a Matrix

With SQL Server 2008 Reporting Services, you aren’t limited to creating a report that is either a tabular report or a matrix report. Reporting Services now allows you to add elements of one report type to another. For example, you can take an existing matrix report and add a static column adjacent to it.

To add a static column to the left or right of an existing matrix region, right-click on the column selector of an existing column group, and select Insert Column|Outside Group – Left or Insert Column|Outside Group – Right, respectively, as shown in Figure 4. After adding the column, you can either drag a field from the Report Data pane and drop it on the new blank column, or hover over the blank column and click on the field list icon to select from the list of dataset fields.


Figure 4. Adding a static column to the right of a dynamic column.

A Matrix/Table Hybrid Example: The rptDynamicAndStaticCols Report

The rptDynamicAndStaticCols report contains a static column to the right of the dynamic OrderYear column. This column contains a text box that is bound to the LastOrder field from the dsSales2 dataset. The report is shown in Design view in Figure 5 and in the Firefox browser in Figure 6.



Figure 5. The rptDynamicAndStaticCols report in Design view.


Figure 6. The completed rptDynamicAndStaticCols report adds a static column, LastOrder, to the right of a dynamic column, OrderYear.

Adding a Percentage to a Matrix Report

Adding a percentage to a dynamic column in a matrix report was close to impossible prior to SQL Server 2008. Now, however, it’s a pretty simple process. The basic trick is to insert a new column on the design surface that is inside the dynamic column group. By selecting Inside Group, you are ensuring that the new column, which will host the percentage calculation, repeats with each instance of the dynamic column.
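
As a sketch of the calculation itself, the percentage cell typically divides the cell's total by a broader total using a scoped aggregate. The field name (SalesAmount) and the scope name (matrix1_CategoryName, standing in for your row group) below are hypothetical placeholders; substitute the names from your own report:

```vb
=Sum(Fields!SalesAmount.Value) / Sum(Fields!SalesAmount.Value, "matrix1_CategoryName")
```

Setting the text box's Format property to P1 displays the result as a percentage with one decimal place.
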

This post is an excerpt from the online courseware for our Microsoft SQL Server 2008 Reporting Services course written by expert Paul Litwin.

Paul Litwin is a developer specializing in ASP, ASP.NET, Visual Basic, C#, SQL Server, and related technologies. He is an experienced trainer, has written articles for MSDN Magazine and PC World, and is the author of several books including ASP.NET for Developers (SAMS) and Access 2002 Enterprise Developer’s Handbook (SYBEX). Paul is a Microsoft MVP, the chair of the Microsoft ASP.NET Connections conference, and a member of the INETA Speakers Bureau.


Windows 8 Using XAML: Introducing Badges

As you have seen, tiles act as a Windows Store app’s interface on the Windows Start screen. These tiles can display static or “live” data, depending on the functionality you add to the application. Sending notifications to the tiles to update their content is covered in a separate section; in this section, you’ll learn about creating the badge that can appear in the lower-right corner of any tile. This badge is a separate entity from the tile content, and you create and update the badge separately.

Badge Overview

A badge on a tile displays summary or status information for the application, and that information must be specific to your particular application. In other words, it would be confusing and irrelevant to display information about anything other than the application associated with the tile.

A badge on a tile can take on one of only two forms:

  • A numeric value between 1 and 99; numbers greater than 99 appear as 99+.
  • A glyph (a small image) drawn from a set of predefined glyphs.

Badges can appear on either wide or square tiles, and badges always appear in the lower right corner of the tile (lower-left corner, for RTL languages).

You might use a badge to indicate any of the following sample scenarios:

  • Network connection in an online game.
  • User status in a messaging app.
  • Number of unread email messages.
  • Number of new posts in a social media app.

Consider these things when designing an application that includes a badge on the application's tile:

  • Badges can only display numeric values between 1 and 99. Setting the badge value to 0 clears the badge, and any value greater than 99 displays as 99+.
  • Badges can display a limited number of glyphs (plus a special glyph value, None, which displays nothing). You cannot extend the list, and Windows supplies all the glyphs that a badge can display.
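
A badge update is simply a small XML payload sent to the tile, and both forms use the same badge element. The values below are illustrative (alert is one of the system-supplied glyph names):

```xml
<!-- Numeric badge: the tile shows 7 (values over 99 display as 99+) -->
<badge value="7"/>

<!-- Glyph badge: the tile shows the system-supplied alert glyph -->
<badge value="alert"/>
```

Setting value to 0 (or to the glyph name none) clears the badge from the tile.
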

As an example, Figure 1 shows a sample tile for the Windows Store. This tile displays the number of apps that require updating.

Figure 1. The Windows Store tile, with a badge.

Figure 2 shows a sample application tile that displays a glyph badge. This glyph is one of a small set of available glyphs.

Figure 2. The sample app displays a badge showing a glyph.

NOTE Samples in this chapter assume that you have installed Visual Studio 2012 Update 1 (or later). If you are running the original release of Visual Studio 2012, some of the steps will not function correctly.

This post is an excerpt from the online courseware for our Windows 8 Using XAML: Tiles, Badges, Print, and Charms course written by expert Ken Getz.

Ken Getz is a featured instructor for several of our Visual Studio courses. He is a Visual Basic and Visual C# expert and has been recognized multiple times as a Microsoft MVP. Ken is a seasoned instructor, successful consultant, and the author or co-author of several best-selling books. He is a frequent speaker at technical conferences like Tech-Ed, VSLive and DevConnections, and he has written for several of the industry’s most-respected publications including Visual Studio Magazine, CoDe Magazine and MSDN Magazine.


Using Visual Studio 2010 to Create BCS Applications

There are two ways to use Visual Studio to create BCS applications. The first is to build custom BCS models with the Business Data Connectivity Model project template; the second is to use Visual Studio to migrate declarative models built with SharePoint Designer for deployment via solution packages.

Business Data Connectivity Model

Visual Studio 2010 includes the Business Data Connectivity Model project template, which you can use to create a .NET assembly shim to any data store for use by BCS. Solutions based on the project template consist of a feature that installs the model in BCS, an XML configuration file that is the model, and .NET classes that do the work of reading and writing data.

The XML model contains all of the information required to work with the .NET classes including method and type descriptors. This means that the associated .NET class’s methods and parameters must match the model.

At this point in the chapter you may have the strong impression that Microsoft really wants people to buy licenses to SharePoint Server if they need BCS. If so, it will not surprise you to discover that you must do some extra work to use this project template with SharePoint Foundation to support deployment to BCS.

Migrating Declarative Models to Visual Studio

You can use the Business Data Connectivity Model project template as a basis to migrate declarative models created in SharePoint Designer. Begin by using SharePoint Designer to export the model. Then create a Business Data Connectivity Model project and remove the default template items. Finally, add the exported model and replace the missing SharePoint Server specific feature receiver to deploy the model to SharePoint Foundation.

This post is an excerpt from the online courseware for our Microsoft SharePoint 2010 for Developers course written by expert Doug Ware.

Doug Ware is a SharePoint expert and an instructor for many of our SharePoint 2007 and SharePoint 2010 courses. A Microsoft MVP several times over, Doug is the leader of the Atlanta .NET User Group, one of the largest user groups in the Southeast U.S., and is a frequent speaker at code camps and other events. In addition to teaching and writing about SharePoint, Doug stays active as a consultant and has helped numerous organizations implement and customize SharePoint.


SQL Server 2008: Refining Attribute Relationships

Once you create a basic cube with dimensions, hierarchies, and measure groups, you will need to refine your cube design to optimize performance. One critical step in optimizing performance is to refine the attribute relationships, especially those in your natural hierarchies. When you build relationships between the attributes that form the levels of a hierarchy, SSAS can use an aggregation that was stored at one level to build aggregations for another level. For example, in a time dimension, a relationship exists between the semester and year levels, and SSAS has stored the aggregation for the semester level. When you query the year level, SSAS can add the two semester totals to determine the result for the year, thus improving query speeds.

Once you create a hierarchy in the Dimension Designer in BIDS, you can change to the Attribute Relationships tab to manage the relationships between the attributes being used in the hierarchies.

The Attribute Relationships tab has three sections: the design surface (which holds a relationship diagram), the Attributes pane, and the Attribute Relationships pane. In the design pane, you can drag and drop attributes to define the required attribute relationships. You should start with the key level and build from there. To create a relationship between the Employee key and Title attributes, you would drag the Employee key and drop it on the Title attribute. An arrow will appear both on the design surface and in the Attribute Relationships pane to represent the relationship, as pictured in Figure 1.


Figure 1. Attribute relationships are indicated by arrows.

In the Dimension Designer, as shown in Figure 2, you will notice that the arrow between Employee and Birth Date and the arrow between Employee and Start Date are solid black. This indicates a rigid relationship.


Figure 2. Rigid relationships are indicated by a solid black arrow.

Attributes whose relationships are not likely to change over time should be defined as rigid relationships. The examples in the previous paragraph, Birth Date and Start Date, should not change over time for a given employee; an employee's original start date should not change once the employee has started work. Rigid relationships allow SSAS to better optimize aggregations during incremental updates. Aggregations for rigid relationships are maintained during an incremental update, while aggregations for flexible relationships are dropped and must be reprocessed.

NOTE Aggregations and dimension processing are beyond the scope of this chapter. For more information about aggregations, see the SQL Server Books Online topic Aggregations and Aggregation Designs; for more information on dimension processing, see the SQL Server Books Online topics Processing (Analysis Services – Multidimensional Data) and Processing Options and Settings.

Flexible relationships may change over time; for example, an employee's title may change when they are promoted. By default, all relationships are flexible. To change a relationship from flexible to rigid, right-click on the relationship arrow in either the diagram or the Attribute Relationships area, and then select Flexible or Rigid on the Relationship Type submenu. Additionally, when you right-click the relationship, you can select Edit Attribute Relationship to modify both the attributes and the relationship type.

This post is an excerpt from the online courseware for our SQL Server 2008 Analysis Services course written by expert Ann Weber.

Ann Weber has been an author, instructor and consultant for over 12 years. She is an expert in SQL Server, and has her MCITP, MCSE and MCT certifications. Ann works with all facets of SQL Server including administration, writing queries, development, SSAS, SSIS and SSRS. Ann has developed several courses and other learning materials for SQL Server.


Microsoft SQL Server 2008: Creating Groups

It’s difficult to create a very useful report without needing to group the data in some way. A report without any groups is either very simple—and there’s nothing wrong with a simple report—or very disorganized.
Groups are a great way to organize the data in a report into a more manageable assemblage of information. If you need to create subtotals or other statistics, you will likely need to create groups.

NOTE While the focus of this chapter—and all of the examples—is tabular reports and row groups created within a Table data region, some of the principles also apply to matrix and list reports, as well as hybrid reports that have attributes of tabular, matrix, and list reports.

The Grouping Pane

While previous versions of Reporting Services supported grouping, SQL Server 2008 Reporting Services has brought report groups to the forefront with the addition of the Grouping pane to the design surface. From the Grouping pane, you can easily view and manage your groups. You can see the Grouping pane at the bottom of the report design surface in Figure 1.

Figure 1. The Grouping pane appears at the bottom of the report design surface.

NOTE This chapter will focus on the row groups that are part of tabular reports. Elsewhere in this course you will find a discussion of Column Groups that are used on matrix reports.

The Details Group

By default, Reporting Services adds a details group—labeled (Details) in the Row Groups pane—to every Table and List data region. (Matrix data regions do not contain a details group and Chart and Gauge data regions do not use the Grouping pane.) The details group is unique in that it is a group that is not based on a grouping expression. Instead, it represents the detail rows in a Table or List data region.

Adding a Row Group

You can add a new row group to a Tablix data region either by dragging a field from the Report Data window and dropping it on the Grouping pane, or by using the Grouping pane's popup menu.

Dragging and Dropping

Drag a field from the Report Data window and drop it onto the Row Groups area of the Grouping pane to create a new group. The key to getting the group into the correct place in the group hierarchy for the report is to carefully position the mouse cursor before letting go of the mouse button. As you hover over the existing groups, Reporting Services will draw a blue line to indicate where the new group will be inserted as shown in Figure 2.

Figure 2. The new group will be created as a child of the Country group and as a parent of the details group.

When you create a group using drag and drop, you can only create parent or child groups; you cannot create an adjacent group using this technique. Nor can you control the presence of group header and footer rows, or create a group based on an expression. If you need any of these group options, you’ll want to employ the Grouping pane menus to create your group.

Using the Grouping Pane Menus

To add a row group to a report using the Grouping pane menus, click on the down arrow to the right of an existing group and select Add Group from the menu as shown in Figure 3. A submenu will present several choices including Parent Group, Child Group, Adjacent Before and Adjacent After. (Creating adjacent groups will be discussed in more detail in the next section.)

Figure 3. Adding a row group using the Grouping pane.

After selecting the type of group that you want to create, Reporting Services displays the Tablix group dialog box that is shown in Figure 4.

Figure 4. The Tablix group dialog box.

To finish creating the group, select the Group by field using the drop-down list or click the fx button to group on an expression instead. Don't forget to check the Add group header and Add group footer check boxes as appropriate before clicking OK, because Reporting Services makes it difficult to recreate the group header and footer rows once you have dismissed this dialog box.

Adding Row Groups without the Grouping Pane

As an alternative to using the Grouping pane, you can right-click on a tablix row to add groups to a report. Just click on a row selector of a detail or existing group row and select Add Group from the popup menu. Depending on the context when you right-click on a row, some grouping options may be disabled or invisible. In general, you'll have better success creating groups using the Grouping pane.

Adding an Adjacent Row Group

Rather than add a group that is a child or parent of an existing group, you can add a group that is adjacent (that is, a sibling of) an existing group by selecting Adjacent Before or Adjacent After from the Add Group popup menu (see Figure 3).

Adding an adjacent row group is similar to adding a second tablix region to your report. The major difference is that each tablix region can be bound to a different dataset, whereas all of the groups within a tablix share the same dataset.

When you add an adjacent row group, you may be surprised to find that the new group will not have any detail rows. Fortunately, you can add a child details group to the adjacent row group.

This post is an excerpt from the online courseware for our Microsoft SQL Server 2008 Reporting Services course written by expert Paul Litwin.

Paul Litwin is a developer specializing in ASP, ASP.NET, Visual Basic, C#, SQL Server, and related technologies. He is an experienced trainer, has written articles for MSDN Magazine and PC World, and is the author of several books including ASP.NET for Developers (SAMS) and Access 2002 Enterprise Developer’s Handbook (SYBEX). Paul is a Microsoft MVP, the chair of the Microsoft ASP.NET Connections conference, and a member of the INETA Speakers Bureau.


Microsoft SQL Server 2008: Checkpoints

As you learn about Integration Services, you’ll be able to tackle larger and more complex ETL projects, with dozens or even hundreds of tasks moving data among many data stores and performing multiple transformations on it along the way. You may also have individual tasks that take hours to run because of the volume of data they have to move and process, or because of slow resources such as network speeds.

A package might fail after many tasks have completed successfully, or after an hours-long task completes. If you have to re-run the entire package after fixing the problem, you'll again have to wait patiently for hours while earlier tasks duplicate their work before you even get to the point where the package failed on the first run. That can be a painful experience, and the local database administrator will likely not be pleased that you're taking up so many resources for so long, repeatedly.

To get around these kinds of problems, Integration Services packages are restartable using a feature called checkpoints. When you implement checkpoints, the package creates a checkpoint file that tracks the execution of the package. As each task completes, Integration Services writes state information to the file, including the current values of variables that are in scope. If the package completes without errors, Integration Services deletes the checkpoint file. If the package fails, the file contains complete information about which tasks completed and which failed, as well as a reference to where the error occurred. After you fix the error, you execute the package again and the package restarts at the point of failure—not at the beginning—with the same state it had at failure. Checkpoints are an incredibly useful feature, especially in long-running packages.

Checkpoints are not enabled on a package by default. You have to set three package-level properties to configure checkpoints:

  • CheckpointFileName: Specifies the name and location of the checkpoint file. You must set this property, but the name can be any valid Windows filename, with any extension.
  • CheckpointUsage: Determines how the package uses the checkpoint file while the package executes. It has three settings:
    • Always: The package will always use the checkpoint file and will fail if the file does not exist.
    • IfExists: The package will use the checkpoint file if it exists to restart the package at the previous point of failure. Otherwise, execution begins at the first Control Flow task. This is the usual setting for using checkpoints.
    • Never: The package will not use the checkpoint file even if it exists. This means that the package will never restart, and will only execute from the beginning.
  • SaveCheckpoints: Specifies whether the package should write checkpoints to the file.

This combination of properties provides flexibility: you can configure checkpoints for the package, then turn their use on and off before execution without losing the checkpoint configuration.

In order for checkpoints to work, a task failure has to cause the package to fail. Otherwise, the package will continue executing beyond the failure, recording more data in the checkpoint file for subsequent tasks. So you must also set the FailPackageOnFailure property to true for each task where you want to make it possible to restart the package using a checkpoint. If it is set to false for a task and the task fails, Integration Services doesn't write any data to the checkpoint file. Because the checkpoint data is incomplete, the next time you execute the package it will start from the beginning.
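
Putting the pieces together, a typical restartable package might use property settings along these lines (the file path is hypothetical; substitute your own):

```text
' Package-level properties
CheckpointFileName = C:\ETL\Checkpoints\LoadWarehouse.chk
CheckpointUsage    = IfExists
SaveCheckpoints    = True

' On each task that should be a valid restart point
FailPackageOnFailure = True
```

With these settings, the first run starts from the beginning (no checkpoint file exists); a failed run leaves the file behind, and the next run resumes at the failed task.
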

TIP: Checkpoints only record data for Control Flow tasks. This includes a Data Flow task, but it does not save checkpoint data for individual steps in a Data Flow. Therefore a package can restart at a Data Flow task, but not within a Data Flow itself. In other words, you cannot restart a package using a checkpoint to execute only part of a Data Flow, just the entire Data Flow.

At the start of the package, Integration Services checks for the existence of the checkpoint file. If the file exists, Integration Services scans its contents to determine the starting point in the package. Integration Services writes to the checkpoint file while the package executes. The contents of the checkpoint file are stored as XML and include the following information:

  • Package ID: A GUID stamped onto the file at the beginning of the execution phase.
  • Execution Results: A log of each task that executes successfully in order of execution. Based on these results, Integration Services knows where to begin executing the package the next time.
  • Variable Values: Integration Services saves the values of package variables in the checkpoint file. When execution begins again, Integration Services reads the variable values from the checkpoint file and sets them on the package.

This post is an excerpt from the online courseware for our Microsoft SQL Server 2008 Integration Services course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.


Transaction Support in Integration Services

A transaction is a core concept of relational database systems. It is one of the major mechanisms through which a database server protects the integrity of data, by making sure that the data remains internally consistent. Within a transaction, if any part fails, you can have the entire set of operations roll back so that no changes are persisted to the database. SQL Server has always had rich support for transactions, and Integration Services hooks into that support.

A key concept in relational database transactions is the ACID test. To ensure predictable behavior, all transactions must possess the four ACID properties:

  • Atomic: A transaction must work as a unit, which is either fully committed or fully abandoned when complete.
  • Consistent: All data must be in a consistent state when the transaction is complete. All data integrity rules must be enforced and all internal storage mechanisms must be correct when the transaction is complete.
  • Isolated: All transactions must be independent of the data operation of other concurrent transactions. Concurrent transactions can only see data before other operations are complete or after other transactions are complete.
  • Durable: After the transaction is complete, the effects are permanent even in the event of system failure.

Integration Services ensures reliable creation, updating, and insertion of rows through the use of ACID transactions. For example, if an error occurs in a package that uses transactions, the transaction rolls back the data that was previously inserted or updated, thereby keeping database integrity. This eliminates orphaned rows and restores updated data to its previous value to ensure that the data remains consistent. No partial success or failure exists when tasks in a package have transactions enabled. They fail or succeed together.

Tasks can use the parent container’s transaction isolation or create their own. The properties that are required to enable transactions are as follows:

  • TransactionOption: Set this property of a task or container to enable transactions. The options are:
    • Required: The task or container enrolls in the transaction of the parent container if one exists; otherwise it creates a new transaction for its own use.
    • Supported: The task uses a parent’s transaction, if one is available. This is the default setting.
    • Not Supported: The task does not support and will not use a transaction even if the parent is using one.
  • IsolationLevel: This property determines the safety level, using the same scheme you can use in a SQL Server stored procedure. The options are:
    • Serializable: The most restrictive isolation level of all. It ensures that if a query is reissued inside the same transaction, existing rows won’t look any different and new rows won’t suddenly appear. It employs a range of locks that prevents edits or insertions until the transaction is completed.
    • Read Committed: Ensures that shared locks are issued when data is being read and prevents “dirty reads.” A dirty read consists of data that is in the process of being edited, but has not been committed or rolled back. However, other transactions can change data before the end of your transaction, resulting in nonrepeatable reads.
    • Read Uncommitted: The least restrictive isolation level, the opposite of Read Committed, which allows “dirty reads” of the data. It ignores locks that other operations may have issued and does not create any locks of its own. This is called a “dirty read” because the underlying data may change within the transaction and this query would not be aware of it.
    • Snapshot: Reads data as it was when the transaction started, ignoring any changes since then. As a result, it doesn’t represent the current state of the data, but it represents a consistent state of the database as of the beginning of the transaction.
    • Repeatable Read: Prevents others from updating data until a transaction is completed, but does not prevent others from inserting new rows. The inserted rows are known as phantom rows, because they are not visible to a transaction that was started prior to their insertion. This is the minimum level of isolation required to prevent lost updates, which occur when two separate transactions select a row and then update it based on the selected data. The second update would be lost since the criteria for update would no longer match.
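
For example, to have a container run its child tasks in a single transaction of its own, you might use settings along these lines (property names as they appear in the Properties window):

```text
' On the container that should own the transaction
TransactionOption = Required
IsolationLevel    = Serializable

' On each task inside the container
TransactionOption = Supported
```

With Required on the container and Supported on its children, a failure in any task rolls back the work of all of them.
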

Integration Services supports two types of transactions. The first is Distributed Transaction Coordinator (DTC) transactions, which let you include multiple resources in the transaction. For example, you might have a single transaction that involves data in a SQL Server database, an Oracle database, and an Access database. This type of transaction can span connections, tasks, and packages. The down side is that it requires the DTC service to be running and tends to be very slow.

The other type of transaction is a native transaction, which uses SQL Server's built-in support for transactions within its own databases. This uses a single connection to a database and T-SQL commands to manage the transaction.

Integration Services supports a great deal of flexibility with transactions. It supports a variety of scenarios, such as a single transaction within a package, multiple independent transactions in a single package, transactions that span packages, and others. You’ll be hard pressed to find a scenario that you can’t implement with a bit of careful thought using Integration Services transactions.

This post is an excerpt from the online courseware for our Microsoft SQL Server 2008 Integration Services course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.


Windows 8: Interacting with Tiles

Windows 8 displays tiles on its Start screen, making it easy for users to interact with your applications. These tiles represent your applications and make it possible for users to start them. But tiles do much more. You can just display static information on a tile, but that's not really the intent. Tiles allow you to display information pertinent to your application, making it possible to present new, real-time information to the user.

In general, you can display text and images, plus a status badge, on a tile. You can update the tile’s content on a schedule, in reaction to an event, or at any time. Be aware that even if the tile supports live data, the user can elect to display only static content, so that no updates appear on the tile. In Figure 1, the WeatherBug and ABC News apps show live data.
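Under the hood, a tile update is an XML payload that names one of the built-in tile templates and supplies its text and image content. As a rough illustration, the sketch below uses Python only to assemble such a payload; a real app would hand the XML to the Windows tile-update APIs, and the helper function is hypothetical, though "TileWideImageAndText01" is one of the standard Windows 8 template names.

```python
import xml.etree.ElementTree as ET

def build_tile_xml(text: str, image_src: str) -> str:
    """Assemble a wide-tile update payload.

    "TileWideImageAndText01" is a standard Windows 8 tile template
    (one image plus one line of text); this helper itself is a
    hypothetical stand-in used only for illustration.
    """
    tile = ET.Element("tile")
    visual = ET.SubElement(tile, "visual")
    binding = ET.SubElement(visual, "binding",
                            template="TileWideImageAndText01")
    ET.SubElement(binding, "image", id="1", src=image_src)
    line = ET.SubElement(binding, "text", id="1")
    line.text = text
    return ET.tostring(tile, encoding="unicode")

payload = build_tile_xml("72 and sunny", "ms-appx:///images/weather.png")
print(payload)
```

The same payload shape, with a different template name, covers square tiles and text-only updates.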


Figure 1. WeatherBug and ABC News apps display live data.

Tile Variations

Tiles come in two sizes: square and wide. Either size can display text, images, and/or a badge. You can (and probably should) include both square and wide tile configurations in your application package, so that users can switch between the two sizes at will. Users can mix and match square and wide tiles, and a wide tile is exactly the size of two square tiles side by side, separated by a 5-pixel gutter, as shown in Figure 2.


Figure 2. Tiles come in two sizes.

In addition to text and images, tiles can display badges that contain specific information. For example, in Figure 3, the Mail app displays a badge with a count of the unread email messages. Badges can consist of numbers or a limited set of specific icons.



Figure 3. Note the badge on the Mail app.

NOTE: Creating badge notifications is beyond the scope of this chapter.

Secondary Tiles

Besides the main tile associated with an application, an application can display one or more secondary tiles. Secondary tiles allow users to promote specific content and internal links from within the application to the Start screen. A secondary tile can display and/or link to specific content, such as:

  •  Information about specific friends
  •  Weather reports for specific locations
  •  Stock reports for specific stocks

Not all applications support creating secondary tiles, but many do. For example, Figure 4 shows a secondary tile for the Weather app showing weather conditions at a specific location.


Figure 4. A secondary tile in the Weather app.

This post is an excerpt from the online courseware for our Windows 8 Tiles, Badges, Print and Charms course written by expert Ken Getz.

Ken Getz is a Visual Studio expert with over 25 years of experience as a successful developer and consultant. He is a nationally recognized author and speaker, as well as a featured instructor for LearnNowOnline.


SQL 2012: Developer: NULLs and SqlBoolean

When integrating T-SQL with the CLR, remember to declare variables, parameters, and return values using the data types exposed through the System.Data.SqlTypes namespace. Doing so guarantees behavior that is consistent with T-SQL.

As described in the previous section, the outcome of performing arithmetic, bitwise, and logical comparisons between two variables when one or both values is NULL can be inconsistent. The ANSI_NULLS option in T-SQL demonstrates how different the results can be; and as you saw in the simple Visual Basic .NET example, not using the SqlTypes data types leads to the same confusion.

Fortunately, there is the SqlBoolean data type. Exposed as part of the SqlTypes namespace, the SqlBoolean data type can represent three distinct states—true, false, and unknown. In addition, the comparison of two SqlTypes data types always returns a SqlBoolean, which again ensures consistent behavior.

The SqlBoolean data type exposes three important properties:

  • IsTrue: Indicates whether the comparison produces a TRUE value.
  • IsFalse: Indicates whether the comparison produces a FALSE value.
  • IsNull: Indicates whether the comparison produces an unknown (NULL) result.
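To see how three-valued logic differs from an ordinary Boolean, here is a minimal Python sketch that mimics the behavior of SqlBoolean and SqlTypes comparisons. The class and function names are hypothetical stand-ins for illustration only, not the .NET types themselves.

```python
class ThreeValued:
    """Sketch of three-valued logic modeled on SqlBoolean.

    The real type lives in System.Data.SqlTypes; this class is a
    hypothetical stand-in illustrating the three states.
    """
    def __init__(self, value):
        self.value = value  # True, False, or None (unknown/NULL)

    @property
    def is_true(self):
        return self.value is True

    @property
    def is_false(self):
        return self.value is False

    @property
    def is_null(self):
        return self.value is None

def sql_equals(x, y):
    """Compare like SqlTypes: any NULL operand yields unknown."""
    if x is None or y is None:
        return ThreeValued(None)
    return ThreeValued(x == y)

result = sql_equals(5, None)  # comparing a value with NULL yields unknown
print(result.is_true, result.is_false, result.is_null)  # False False True
```

Exactly one of the three properties is true for any comparison, which is what makes the outcome predictable even when NULLs are involved.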

Keeping these concepts in mind, look at the Visual Basic .NET code behind the SqlBooleans button on the switchboard form.


The code makes the following comparison:


The code compares intX, a SqlInt32 assigned the value 5, with intY, another SqlInt32 explicitly assigned a NULL value. The result is a SqlBoolean whose properties, such as blnResult.IsTrue, contain the outcome of the comparison.
Figure 1 shows the MsgBox that displays the outcome of this routine.


Figure 1. SqlBooleans provide consistency when you work with NULL values.

WARNING! Remember that a SqlBoolean data type represents three states: IsTrue, IsFalse, and IsNull. IsNull returns TRUE when either side of the comparison is unknown.

Frank Tillinghast is a senior consultant with MTOW Software Solutions. He is a Microsoft Certified Solution Developer and has been developing applications for over fifteen years. Most of his time is spent consulting for companies nationwide with troubled projects or mentoring projects to successful completion. When he is not developing software or mentoring clients, Frank is teaching other developers. He has taught thousands of application developers how to create business solutions with Visual Studio .NET, VB.NET, ASP.NET, Visual C#, and SQL Server.


SharePoint 2010: Developer: Sandbox Solution Overview

SharePoint 2010 includes a secure, isolated environment for user-deployable Web solution packages: the user code sandbox. To deploy a sandbox solution, a user (usually the site owner) uploads a WSP to a special library, named the solution gallery, that is part of every SharePoint 2010 site. Once the solution is added to the solution gallery, the site owner can activate it by clicking a button on the ribbon.

The sandbox makes it easy to create code that runs on a SharePoint server as Web Parts, pages, and event handlers. Code in the sandbox runs under a restricted set of privileges out of process from the web application in a monitored host process called the Windows SharePoint Services User Code Host Service.

The SharePoint object model includes facilities to allow communication between the main worker process and the user code host. The SPUserCodeWebPart Web Part is one of these facilities. SPUserCodeWebPart provides the ability to host controls running in the sandbox on a page running in the main worker process.

In addition to a reduced set of privileges, the sandbox also provides a limited and safe subset of the SharePoint object model. This prevents sandbox code from making any changes outside the current site and from executing with explicit elevation of privileges via the SPSecurity namespace.

Why Use Sandbox?

The sandbox environment gives farm operators the ability to enable customization for users without providing administrative access to the farm. This power comes with a number of safeguards to protect the overall stability and security of the farm.

The sandbox protects the farm from poorly written or malicious code. This includes protection from:

  • Unhandled exceptions
  • Processor intensive operations
  • Unauthorized manipulation of web application and farm infrastructure
  • Elevation of privilege

SharePoint Central Administration and the solution gallery both give farm administrators visibility into solution health and resource usage. Administrators can define quotas to block defective solutions and can manually block execution of specific solutions for any reason.

How the Sandbox Works

The Windows SharePoint Services User Code Host Service provides a partial trust AppDomain to host sandboxed processes. The service consists of three parts:

  • SPUCHostService.exe
  • SPUCWorkerProcessProxy.exe
  • SPUCWorkerProcess.exe

SPUCHostService manages one or more SPUCWorkerProcess instances via SPUCWorkerProcessProxy. This architecture makes it possible to scale the user code sandbox across multiple servers in the farm. Solutions in the sandbox use a special version of Microsoft.SharePoint.dll located in the UserCode\Assemblies folder in the SharePoint root.

The host service also allows configuration of server affinity: you can specify that sandboxed code runs on the same machine as the request, or that requests to run sandboxed code are handled by any available server running the Sandboxed Code Service. Regardless of the configuration, the host service runs sandboxed code within SPUCWorkerProcess.exe, which is the process to which you attach the debugger when debugging sandbox code.

This post is an excerpt from the online courseware for our Microsoft SharePoint 2010 for Developers course written by expert Doug Ware.

Doug Ware is a SharePoint expert and an instructor for many of our SharePoint 2007 and SharePoint 2010 courses. A Microsoft MVP several times over, Doug is the leader of the Atlanta .NET User Group, one of the largest user groups in the Southeast U.S., and is a frequent speaker at code camps and other events. In addition to teaching and writing about SharePoint, Doug stays active as a consultant and has helped numerous organizations implement and customize SharePoint.
