Monthly Archives: March 2014

Microsoft SQL Server 2008: Checkpoints

As you learn about Integration Services, you’ll be able to tackle larger and more complex ETL projects, with dozens or even hundreds of tasks moving data among many data stores and performing multiple transformations on it along the way. You may also have individual tasks that take hours to run because of the volume of data they have to move and process, or because of slow resources such as network speeds.

A package might fail after many tasks have completed successfully, or just after an hours-long task finishes. If you have to re-run the entire package after fixing the problem, you'll have to wait for hours while the earlier tasks duplicate their work before execution even reaches the point where the package failed the first time. That can be a painful experience, and the local database administrator will likely not be pleased that you're repeatedly tying up so many resources for so long.

To get around these kinds of problems, Integration Services packages are restartable using a feature called checkpoints. When you implement checkpoints, the package creates a checkpoint file that tracks the execution of the package. As each task completes, Integration Services writes state information to the file, including the current values of variables that are in
scope. If the package completes without errors, Integration Services deletes the checkpoint file. If the package fails, the file contains complete information about which tasks completed and which failed, as well as a reference to where the error occurred. After you fix the error, you execute the package again and the package restarts at the point of failure—not at the beginning—with the same state it had at failure. Checkpoints are an incredibly useful feature, especially in long-running packages.

Checkpoints are not enabled on a package by default. You have to set three package-level properties to configure checkpoints:

  • CheckpointFileName: Specifies the name and location of the checkpoint file. You must set this property, but the name can be any valid Windows filename, with any extension.
  • CheckpointUsage: Determines how the package uses the checkpoint file while the package executes. It has three settings:
    • Always: The package will always use the checkpoint file and will fail if the file does not exist.
    • IfExists: The package will use the checkpoint file if it exists to restart the package at the previous point of failure. Otherwise, execution begins at the first Control Flow task. This is the usual setting for using checkpoints.
    • Never: The package will not use the checkpoint file even if it exists. This means that the package will never restart, and will only execute from the beginning.
  • SaveCheckpoints: Specifies whether the package should write checkpoints to the file.

This combination of properties lets you configure checkpoints for the package once, then turn their use on and off before execution without losing the checkpoint configuration.
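The restart behavior these properties enable can be sketched generically. The following is illustrative Python only, not the SSIS implementation: the file name, JSON format, and task list are invented for the sketch, and the comments map each step to the property it stands in for.

```python
import json
import os

CHECKPOINT_FILE = "package.checkpoint"  # hypothetical name; SSIS accepts any valid filename


def run_package(tasks, fail_on=None):
    """Run tasks in order, skipping any recorded as complete in the checkpoint file."""
    completed = []
    if os.path.exists(CHECKPOINT_FILE):           # CheckpointUsage = IfExists
        with open(CHECKPOINT_FILE) as f:
            completed = json.load(f)
    for name in tasks:
        if name in completed:
            continue                              # restart skips already-finished tasks
        if name == fail_on:
            # FailPackageOnFailure = true: the failure stops the package,
            # leaving the checkpoint file with the completed tasks recorded.
            raise RuntimeError(f"task {name} failed")
        completed.append(name)
        with open(CHECKPOINT_FILE, "w") as f:     # SaveCheckpoints = true
            json.dump(completed, f)
    os.remove(CHECKPOINT_FILE)                    # success deletes the checkpoint file
    return completed
```

Running the sketch once with a simulated failure and then again models the fix-and-rerun cycle: the second run resumes at the failed task instead of repeating the earlier ones.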

In order for checkpoints to work, a task failure has to cause the package to fail. Otherwise, the package will continue executing beyond the failure, recording more data in the checkpoint file for subsequent tasks. So you must also set the
FailPackageOnFailure property to true for each task where you want to make it possible to restart the package using a checkpoint. If it is set to false for a task and the task fails, Integration Services doesn’t write any data to the
checkpoint file. Because the checkpoint data is incomplete, the next time you execute the package it will start from the beginning.

TIP: Checkpoints only record data for Control Flow tasks. This includes a Data Flow task, but it does not save checkpoint data for individual steps in a Data Flow. Therefore, a package can restart at a Data Flow task, but not within a Data Flow itself. In other words, you cannot restart a package using a checkpoint to execute only part of a Data Flow, only the entire Data Flow.

At the start of the package, Integration Services checks for the existence of the checkpoint file. If the file exists, Integration Services scans the contents of the checkpoint file to determine the starting point in the package. Integration
Services writes to the checkpoint file while the package executes. The contents of the checkpoint file are stored as XML and include the following information:

  • Package ID: A GUID stamped onto the file at the beginning of the execution phase.
  • Execution Results: A log of each task that executes successfully in order of execution. Based on these results, Integration Services knows where to begin executing the package the next time.
  • Variable Values: Integration Services saves the values of package variables in the checkpoint file. When execution begins again, those values are read from the checkpoint file and set on the package.

This post is an excerpt from the online courseware for our Microsoft SQL Server 2008 Integration Services course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

Transaction Support in Integration Services

A transaction is a core concept of relational database systems. It is one of the major mechanisms through which a database server protects the integrity of data by making sure that the data remains internally consistent. If any part of a transaction fails, the entire set of operations within it can roll back, so that no changes are persisted to the database. SQL Server has always had rich support for transactions, and Integration Services hooks into that support.

A key concept in relational database transactions is the ACID test. To ensure predictable behavior, all transactions must satisfy the four ACID properties:

  • Atomic: A transaction must work as a unit, which is either fully committed or fully abandoned when complete.
  • Consistent: All data must be in a consistent state when the transaction is complete. All data integrity rules must be enforced and all internal storage mechanisms must be correct when the transaction is complete.
  • Isolated: All transactions must be independent of the data operation of other concurrent transactions. Concurrent transactions can only see data before other operations are complete or after other transactions are complete.
  • Durable: After the transaction is complete, the effects are permanent even in the event of system failure.

Integration Services ensures reliable creation, updating, and insertion of rows through the use of ACID transactions. For example, if an error occurs in a package that uses transactions, the transaction rolls back the data that was previously inserted or updated, thereby keeping database integrity. This eliminates orphaned rows and restores updated data to its previous value to ensure that the data remains consistent. No partial success or failure exists when tasks in a package have transactions enabled. They fail or succeed together.
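The all-or-nothing behavior described above can be demonstrated with any transactional engine; the following sketch uses Python's built-in sqlite3 module rather than SSIS or SQL Server, purely to show a rollback undoing an earlier insert in the same transaction.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount INTEGER NOT NULL)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on exception
        conn.execute("INSERT INTO orders (id, amount) VALUES (1, 100)")
        # The second insert violates the NOT NULL constraint and fails.
        conn.execute("INSERT INTO orders (id, amount) VALUES (2, NULL)")
except sqlite3.IntegrityError:
    pass

# The first insert rolls back along with the failing one: no partial success.
rows = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

After the failed transaction, the table is empty; the successful first insert did not survive on its own, which is exactly the "fail or succeed together" behavior the text describes.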

Tasks can enroll in the parent container's transaction or create their own. The properties required to enable transactions are as follows:

  • TransactionOption: Set this property of a task or container to enable transactions. The options are:
    • Required: The task or container enrolls in the transaction of the parent container if one exists; otherwise it creates a new transaction for its own use.
    • Supported: The task uses a parent’s transaction, if one is available. This is the default setting.
    • NotSupported: The task does not support and will not use a transaction, even if the parent is using one.
  • IsolationLevel: This property determines the safety level, using the same scheme you can use in a SQL Server stored procedure. The options are:
    • Serializable: The most restrictive isolation level of all. It ensures that if a query is reissued inside the same transaction, existing rows won’t look any different and new rows won’t suddenly appear. It employs a range of locks that prevents edits or insertions until the transaction is completed.
    • Read Committed: Ensures that shared locks are issued when data is being read and prevents “dirty reads.” A dirty read is a read of data that is in the process of being edited but has not yet been committed or rolled back. However, data can still change before the end of the transaction, resulting in nonrepeatable reads and phantom rows.
    • Read Uncommitted: The least restrictive isolation level and the opposite of Read Committed: it allows “dirty reads” of the data. It ignores locks that other operations may have issued and does not create any locks of its own, so underlying data may change within the transaction without the query being aware of it.
    • Snapshot: Reads data as it was when the transaction started, ignoring any changes since then. As a result, it doesn’t represent the current state of the data, but it represents a consistent state of the database as of the beginning of the transaction.
    • Repeatable Read: Prevents others from updating data until a transaction is completed, but does not prevent others from inserting new rows. The inserted rows are known as phantom rows, because they are not visible to a transaction that was started prior to their insertion. This is the minimum level of isolation required to prevent lost updates, which occur when two separate transactions select a row and then update it based on the selected data. The second update would be lost since the criteria for update would no longer match.
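The lost update mentioned under Repeatable Read can be sketched in plain Python. This is a toy simulation of two clients sharing a row, not actual SQL Server locking; the dictionary and variable names are invented for the illustration.

```python
# Two clients each read the same row, compute a new value, and write it back.
# Without Repeatable Read's locking, the second write silently
# overwrites the first: a "lost update".
account = {"balance": 100}

# Client A and client B both read the current balance (no locks held).
read_a = account["balance"]
read_b = account["balance"]

# Client A deposits 50 based on its read.
account["balance"] = read_a + 50   # balance is now 150

# Client B deposits 25 based on ITS (now stale) read, losing A's deposit.
account["balance"] = read_b + 25   # balance is now 125, not the expected 175

lost_update = account["balance"] != 175
```

Under Repeatable Read, client A's read would hold a lock that blocks client B's update until A's transaction completes, so B would see the balance of 150 and the final value would be 175.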

Integration Services supports two types of transactions. The first is the Distributed Transaction Coordinator (DTC) transaction, which lets you include multiple resources in the transaction. For example, you might have a single transaction that involves data in a SQL Server database, an Oracle database, and an Access database. This type of transaction can span connections, tasks, and packages. The downside is that it requires the DTC service to be running and tends to be very slow.
The other type of transaction is a Native transaction, which uses SQL Server’s built-in support for transactions within its own databases. This uses a single connection to a database and T-SQL commands to manage the transaction.

Integration Services supports a great deal of flexibility with transactions. It supports a variety of scenarios, such as a single transaction within a package, multiple independent transactions in a single package, transactions that span packages, and others. You’ll be hard pressed to find a scenario that you can’t implement with a bit of careful thought using Integration Services transactions.

This post is an excerpt from the online courseware for our Microsoft SQL Server 2008 Integration Services course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

Windows 8: Interacting with Tiles

Windows 8 displays tiles on its start screen, making it easy for users to interact with your applications. These tiles represent your applications and let users start them, but tiles do much more. You can display only static information on a tile, but that’s not really the intent: tiles can surface new, real-time information pertinent to your application.

In general, you can display text and images, plus a status badge, on a tile. You can update the content of the tile regularly, in reaction to an event, or at any time. Beware that even if the tile displays active data, the user can elect to display only static data, and have no updates appear on the tile. In Figure 1, the WeatherBug and ABC News apps show active data.


Figure 1. WeatherBug and ABC News apps display live data.

Tile Variations

Tiles come in two sizes: square and wide. Either size can display text, images, and/or a badge. You can (and probably should) include both square and wide tile configurations in your application package, so that users can switch between the two sizes at will. Users can mix and match square and wide tiles, and a wide tile is just the right size to “match” two square tiles side by side, with a 5-pixel gutter, as shown in Figure 2.


Figure 2. Tiles come in two sizes.

In addition to text and images, tiles can display badges that contain specific information. For example, in Figure 3, the Mail app displays a badge with a count of the unread email messages. Badges can consist of numbers or a limited set of specific icons.



Figure 3. Note the badge on the Mail app.

NOTE: Creating badge notifications is beyond the scope of this particular chapter.

Secondary Tiles

Besides the main tile associated with an application, an application can display one or more secondary tiles. Secondary tiles allow users to promote specific content and internal links from within the application to the Start screen, displaying and/or linking to specific content such as:

  • Information about specific friends
  • Weather reports for specific locations
  • Stock reports for specific stocks

Not all applications support creating secondary tiles, but many do. For example, Figure 4 shows a secondary tile for the Weather app showing weather conditions at a specific location.


Figure 4. A secondary tile in the Weather app.

This post is an excerpt from the online courseware for our Windows 8 Tiles, Badges, Print and Charms course written by expert Ken Getz.

Ken Getz is a Visual Studio expert with over 25 years of experience as a successful developer and consultant. He is a nationally recognized author and speaker, as well as a featured instructor for LearnNowOnline.

SQL 2012: Developer: NULLs and SqlBoolean

When integrating T-SQL with the CLR, remember to declare variables, parameters, and return values using the data types exposed through the System.Data.SqlTypes namespace. Doing so guarantees behavior more consistent with T-SQL.

As described in the previous section, performing arithmetic, bitwise, and logical comparisons between two variables when one or both values is NULL can produce inconsistent results. The ANSI_NULLS option in T-SQL shows how different the results can be; and as you saw in the simple Visual Basic .NET example, not using the SqlTypes data types leads to the same confusion.

Fortunately, there is the SqlBoolean data type. Exposed as part of the SqlTypes namespace, the SqlBoolean data type can represent three distinct states—true, false, and unknown. In addition, the comparison of two SqlTypes data types always returns a SqlBoolean, which again ensures consistent behavior.

The SqlBoolean data type exposes three important properties:

  • IsTrue: Indicates whether the comparison produces a TRUE value.
  • IsFalse: Indicates whether the comparison produces a FALSE value.
  • IsNull: Indicates whether the comparison produces an unknown (NULL) result.

Keeping these concepts in mind, look at the Visual Basic .NET code behind the SqlBooleans button on the switchboard form.


The code compares intX, a SqlInt32 assigned the value 5, with intY, another SqlInt32 explicitly assigned a NULL value. The result is a SqlBoolean data type whose properties contain the outcome of the comparison, such as blnResult.IsTrue. Figure 1 shows the MsgBox that displays the outcome of this routine.


Figure 1. SqlBooleans provide consistency when you work with NULL values.

WARNING! Remember that a SqlBoolean data type represents three states: IsTrue, IsFalse, and IsNull. IsNull returns TRUE when either side of the comparison is NULL, because the result of such a comparison is unknown.
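The three-valued behavior of SqlBoolean can be mimicked in Python, using None to stand in for NULL. This is an analogy, not the SqlTypes API: the class, the comparison function, and their names are invented for the sketch.

```python
class SqlBool:
    """Toy three-valued result (True, False, or unknown), echoing SqlBoolean."""

    def __init__(self, value):
        self.value = value  # True, False, or None (unknown)

    @property
    def is_true(self):
        return self.value is True   # analogous to SqlBoolean.IsTrue

    @property
    def is_false(self):
        return self.value is False  # analogous to SqlBoolean.IsFalse

    @property
    def is_null(self):
        return self.value is None   # analogous to SqlBoolean.IsNull


def sql_gt(x, y):
    """Compare two nullable ints; any NULL operand yields an unknown result."""
    if x is None or y is None:
        return SqlBool(None)
    return SqlBool(x > y)


# Mirrors the example in the text: intX = 5, intY = NULL.
result = sql_gt(5, None)
```

Here `result.is_null` is true while both `result.is_true` and `result.is_false` are false, which is exactly the consistency guarantee the SqlBoolean type provides: a comparison involving NULL is reported as unknown rather than coerced into true or false.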

Frank Tillinghast is a senior consultant with MTOW Software Solutions. He is a Microsoft Certified Solution Developer and has been developing applications for over fifteen years. Most of his time is spent consulting for companies nationwide with troubled projects or mentoring projects to successful completion. When he is not developing software or mentoring clients, Frank is teaching other developers. He has taught thousands of application developers how to create business solutions with Visual Studio .NET, VB.NET, ASP.NET, Visual C#, and SQL Server.

SharePoint 2010: Developer: Sandbox Solution Overview

SharePoint 2010 includes a secure, isolated environment for user-deployable Web solution packages: the user code sandbox. To deploy a sandbox solution, a user (usually the site owner) uploads a WSP file to the solution gallery, a special library that is part of every SharePoint 2010 site. Once the solution is in the gallery, the site owner can activate it by clicking a button on the ribbon.

The sandbox makes it easy to create code that runs on a SharePoint server as Web Parts, pages, and event handlers. Code in the sandbox runs under a restricted set of privileges out of process from the web application in a monitored host process called the Windows SharePoint Services User Code Host Service.

The SharePoint object model includes facilities that allow communication between the main worker process and the user code host. One of these is the SPUserCodeWebPart Web Part, which can host controls running in the sandbox on a page running in the main worker process.

In addition to a reduced set of privileges, the sandbox also provides a limited and safe subset of the SharePoint object model. This prevents sandbox code from making any changes outside the current site and from executing with explicit elevation of privileges via the SPSecurity namespace.

Why Use Sandbox?

The sandbox environment gives farm operators the ability to enable customization for users without providing administrative access to the farm. This power comes with a number of safeguards to protect the overall stability and security of the farm.

The sandbox protects the farm from poorly written or malicious code. This includes protection from:

  • Unhandled exceptions
  • Processor intensive operations
  • Unauthorized manipulation of web application and farm infrastructure
  • Elevation of privilege

SharePoint Central Administration and the solution gallery both give farm administrators visibility into solution health and resource usage. Administrators can define quotas to block defective solutions, and they can manually block execution of specific solutions for any reason.

How the Sandbox Works

The Windows SharePoint Services User Code Host Service provides a partial trust AppDomain to host sandboxed processes. The service consists of three parts:

  • SPUCHostService.exe
  • SPUCWorkerProcessProxy.exe
  • SPUCWorkerProcess.exe

SPUCHostService.exe manages one or more SPUCWorkerProcess.exe instances via SPUCWorkerProcessProxy.exe. This architecture makes it possible to scale the user code sandbox across multiple servers in the farm. Solutions in the sandbox use a special version of Microsoft.SharePoint.dll located in the UserCode\Assemblies folder in the SharePoint root.

The host service also allows configuration of server affinity: you can specify that sandboxed code runs on the same machine as the request, or that requests to run sandboxed code are distributed to available servers running the Sandboxed Code Service. Regardless of the configuration, the host service runs sandbox code within SPUCWorkerProcess.exe, which is the process to which you attach the debugger to debug sandbox code.

This post is an excerpt from the online courseware for our Microsoft SharePoint 2010 for Developers course written by expert Doug Ware.

Doug Ware is a SharePoint expert and an instructor for many of our SharePoint 2007 and SharePoint 2010 courses. A Microsoft MVP several times over, Doug is the leader of the Atlanta .NET User Group, one of the largest user groups in the Southeast U.S., and is a frequent speaker at code camps and other events. In addition to teaching and writing about SharePoint, Doug stays active as a consultant and has helped numerous organizations implement and customize SharePoint.