Using Visual Studio 2010 to Create BCS Applications

There are two ways to use Visual Studio to create BCS applications. The first is to build custom BCS models with the Business Data Connectivity Model project template; the second is to use Visual Studio to migrate declarative models built with SharePoint Designer for deployment via solution packages.

Business Data Connectivity Model

Visual Studio 2010 includes the Business Data Connectivity Model project template, which you can use to create a .NET assembly shim to any data store for use by BCS. Solutions based on the project template consist of a feature that installs the model in BCS, an XML configuration file that is the model, and .NET classes that do the work of reading and writing data.

The XML model contains all of the information required to work with the .NET classes, including method and type descriptors. This means that the associated .NET class’s methods and parameters must match the model.
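
For example, if the model describes an entity with an ID and a name, the backing class and methods might look like the following minimal C# sketch. The entity, property, and method names here are hypothetical; the point is that each method and parameter must line up with a MethodInstance and the TypeDescriptors declared in the model:

    using System.Collections.Generic;

    // Hypothetical entity class matching the model's TypeDescriptors.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class CustomerService
    {
        // Matches the model's SpecificFinder method descriptor.
        public static Customer ReadItem(int id)
        {
            // Replace with a real lookup against your data store.
            return new Customer { Id = id, Name = "Contoso" };
        }

        // Matches the model's Finder method descriptor.
        public static IEnumerable<Customer> ReadList()
        {
            yield return ReadItem(1);
        }
    }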

At this point in the chapter you may have the strong impression that Microsoft really wants people to buy licenses to SharePoint Server if they need BCS. If so, it will not surprise you to discover that you must do some extra work to use this project template with SharePoint Foundation to support deployment to BCS.

Migrating Declarative Models to Visual Studio

You can use the Business Data Connectivity Model project template as a basis to migrate declarative models created in SharePoint Designer. Begin by using SharePoint Designer to export the model. Then create a Business Data Connectivity Model project and remove the default template items. Finally, add the exported model and replace the missing SharePoint Server-specific feature receiver so that the model can be deployed to SharePoint Foundation.

This post is an excerpt from the online courseware for our Microsoft SharePoint 2010 for Developers course written by expert Doug Ware.

Doug Ware is a SharePoint expert and an instructor for many of our SharePoint 2007 and SharePoint 2010 courses. A Microsoft MVP several times over, Doug is the leader of the Atlanta .NET User Group, one of the largest user groups in the Southeast U.S., and is a frequent speaker at code camps and other events. In addition to teaching and writing about SharePoint, Doug stays active as a consultant and has helped numerous organizations implement and customize SharePoint.


SQL Server 2008: Refining Attribute Relationships

Once you create a basic cube with dimensions, hierarchies, and measure groups, you will need to refine your cube design to optimize performance. One critical step in optimizing performance is to refine the attribute relationships, especially those in your natural hierarchies. When you build relationships between the attributes that form the levels of a hierarchy, SSAS can use an aggregation that was stored at one level to build aggregations for another level. For example, suppose that in a time dimension a relationship exists between the semester and year levels, and SSAS has stored the aggregation for the semester level. When you query the year level, SSAS can add the two semester totals to determine the result for the year, thus improving query speed.

Once you create a hierarchy in the Dimension Designer in BIDS, you can change to the Attribute Relationships tab to manage the relationships between the attributes being used in the hierarchies.

The Attribute Relationships tab has three sections: the design surface (which holds a relationship diagram), the Attributes pane, and the Attribute Relationships pane. On the design surface, you can drag and drop attributes to define the required attribute relationships. You should start with the key level and build from there. To create a relationship between the Employee key and Title attributes, you would drag the Employee key and drop it on the Title attribute. An arrow appears both on the design surface and in the Attribute Relationships pane to represent the relationship, as pictured in Figure 1.


Figure 1. Attribute relationships are indicated by arrows.

In the Dimension Designer, as shown in Figure 2, you will notice that the arrow between Employee and Birth Date and the arrow between Employee and Start Date are solid black. This indicates a rigid relationship.


Figure 2. Rigid relationships are indicated by a solid black arrow.

Attribute relationships that are not likely to change over time should be defined as rigid. The Birth Date and Start Date examples in the previous paragraph should not change over time for a given employee: an employee’s original start date should not change once the employee has started work. Rigid relationships allow SSAS to better optimize aggregations during incremental updates. Aggregations for rigid relationships are maintained during an incremental update, while aggregations for flexible relationships are dropped and must be reprocessed.

NOTE Aggregations and dimension processing are beyond the scope of this chapter. For more information about aggregations, see the SQL Server Books Online topic “Aggregations and Aggregation Designs”; for more information on dimension processing, see the SQL Server Books Online topics “Processing (Analysis Services – Multidimensional Data)” and “Processing Options and Settings.”

Flexible relationships may change over time; for example, an employee’s title may change when the employee is promoted. By default, all relationships are flexible. To change a relationship from flexible to rigid, right-click the relationship arrow in either the diagram or the Attribute Relationships pane, and then select Flexible or Rigid on the Relationship Type submenu. Additionally, when you right-click the relationship, you can select Edit Attribute Relationship to modify both the attributes and the relationship type.
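
You can also change the relationship type programmatically through Analysis Management Objects (AMO). The following C# sketch is illustrative only: the server, database, dimension, and attribute names are hypothetical, and it assumes a reference to the Microsoft.AnalysisServices assembly.

    using Microsoft.AnalysisServices;

    class RigidRelationshipExample
    {
        static void Main()
        {
            var server = new Server();
            server.Connect("localhost");    // hypothetical server name

            Database db = server.Databases["Adventure Works DW"];
            Dimension dim = db.Dimensions["Employee"];
            DimensionAttribute employee = dim.Attributes["Employee"];

            // Mark the Employee -> Start Date relationship as rigid.
            AttributeRelationship rel =
                employee.AttributeRelationships["Start Date"];
            rel.RelationshipType = RelationshipType.Rigid;

            dim.Update();                   // save the change to the server
            server.Disconnect();
        }
    }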

This post is an excerpt from the online courseware for our SQL Server 2008 Analysis Services course written by expert Ann Weber.

Ann Weber has been an author, instructor and consultant for over 12 years. She is an expert in SQL Server, and has her MCITP, MCSE and MCT certifications. Ann works with all facets of SQL Server including administration, writing queries, development, SSAS, SSIS and SSRS. Ann has developed several courses and other learning materials for SQL Server.


Microsoft SQL Server 2008: Creating Groups

It’s difficult to create a very useful report without needing to group the data in some way. A report without any groups is either very simple—and there’s nothing wrong with a simple report—or very disorganized.
Groups are a great way to organize data in a report into a more manageable assemblage of information. If you need to create subtotals or other statistics, you will likely need to create groups.

NOTE While the focus of this chapter—and all of the examples—is tabular reports and row groups created within a Table data region, some of the principles also apply to matrix and list reports, as well as hybrid reports that have attributes of tabular, matrix, and list reports.

The Grouping Pane

While previous versions of Reporting Services supported grouping, SQL Server 2008 Reporting Services has brought report groups to the forefront with the addition of the Grouping pane to the design surface. From the Grouping pane, you can easily view and manage your groups. You can see the Grouping pane at the bottom of the report design surface in Figure 1.

Figure 1. The Grouping pane appears at the bottom of the report design surface.

NOTE This chapter will focus on the row groups that are part of tabular reports. Elsewhere in this course you will find a discussion of Column Groups that are used on matrix reports.

The Details Group

By default, Reporting Services adds a details group—labeled (Details) in the Row Groups pane—to every Table and List data region. (Matrix data regions do not contain a details group and Chart and Gauge data regions do not use the Grouping pane.) The details group is unique in that it is a group that is not based on a grouping expression. Instead, it represents the detail rows in a Table or List data region.

Adding a Row Group

You can add a new row group to a Tablix data region either by dragging a field from the Report Data window and dropping it on the Grouping pane, or by using the Grouping pane’s popup menu.

Dragging and Dropping

Drag a field from the Report Data window and drop it onto the Row Groups area of the Grouping pane to create a new group. The key to getting the group into the correct place in the group hierarchy for the report is to carefully position the mouse cursor before letting go of the mouse button. As you hover over the existing groups, Reporting Services will draw a blue line to indicate where the new group will be inserted as shown in Figure 2.

Figure 2. The new group will be created as a child of the Country group and as a parent of the details group.

When you create a group using drag and drop, you can only create parent or child groups; you cannot create an adjacent group using this technique. Nor can you control the presence of group header and footer rows, or create a group based on an expression. If you need any of these group options, you’ll want to employ the Grouping pane menus to create your group.

Using the Grouping Pane Menus

To add a row group to a report using the Grouping pane menus, click on the down arrow to the right of an existing group and select Add Group from the menu as shown in Figure 3. A submenu will present several choices including Parent Group, Child Group, Adjacent Before and Adjacent After. (Creating adjacent groups will be discussed in more detail in the next section.)

Figure 3. Adding a row group using the Grouping pane.

After selecting the type of group that you want to create, Reporting Services displays the Tablix group dialog box that is shown in Figure 4.

Figure 4. The Tablix group dialog box.

To finish creating the group, select the Group by field using the drop-down list, or click the fx button to group on an expression instead. Don’t forget to check the Add group header and Add group footer check boxes as appropriate before clicking OK, because Reporting Services makes it difficult to recreate the group header and footer rows once you have dismissed this dialog box.

Adding Row Groups without the Grouping Pane

As an alternative to using the Grouping pane, you can right-click on a tablix row to add groups to a report. Just click on a row selector of a detail or existing group row and select Add Group from the popup menu.

Depending on the context when you right-click on a row, some grouping options may be disabled or invisible. In general, you’ll have better success creating groups using the Grouping pane.

Adding an Adjacent Row Group

Rather than add a group that is a child or parent of an existing group, you can add a group that is adjacent (that is, a sibling of) an existing group by selecting Adjacent Before or Adjacent After from the Add Group popup menu (see Figure 3).

Adding an adjacent row group is similar to adding a second tablix region to your report. The major difference is that each tablix region can be bound to a different dataset, whereas all of the groups within a tablix share the same dataset.

When you add an adjacent row group, you may be surprised to find that the new group will not have any detail rows. Fortunately, you can add a child details group to the adjacent row group.

This post is an excerpt from the online courseware for our Microsoft SQL Server 2008 Reporting Services course written by expert Paul Litwin.

Paul Litwin is a developer specializing in ASP, ASP.NET, Visual Basic, C#, SQL Server, and related technologies. He is an experienced trainer, has written articles for MSDN Magazine and PC World, and is the author of several books including ASP.NET for Developers (SAMS) and Access 2002 Enterprise Developer’s Handbook (SYBEX). Paul is a Microsoft MVP, the chair of the Microsoft ASP.NET Connections conference, and a member of the INETA Speakers Bureau.


Microsoft SQL Server 2008: Checkpoints

As you learn about Integration Services, you’ll be able to tackle larger and more complex ETL projects, with dozens or even hundreds of tasks moving data among many data stores and performing multiple transformations on it along the way. You may also have individual tasks that take hours to run because of the volume of data they have to move and process, or because of slow resources such as network speeds.

A package might fail after many tasks have completed successfully, or after an hours-long task completes. If you have to re-run the entire package after fixing the problem, you’ll again have to wait patiently for hours while earlier tasks duplicate their work before you even get to the point where the package failed on the first run. That can be a painful experience, and the local database administrator will likely not be pleased that you’re taking up so many resources for so long, repeatedly.

To get around these kinds of problems, Integration Services packages are restartable using a feature called checkpoints. When you implement checkpoints, the package creates a checkpoint file that tracks the execution of the package. As each task completes, Integration Services writes state information to the file, including the current values of variables that are in
scope. If the package completes without errors, Integration Services deletes the checkpoint file. If the package fails, the file contains complete information about which tasks completed and which failed, as well as a reference to where the error occurred. After you fix the error, you execute the package again and the package restarts at the point of failure—not at the beginning—with the same state it had at failure. Checkpoints are an incredibly useful feature, especially in long-running packages.

Checkpoints are not enabled on a package by default. You have to set three package-level properties to configure checkpoints:

  • CheckpointFileName: Specifies the name and location of the checkpoint file. You must set this property, but the name can be any valid Windows filename, with any extension.
  • CheckpointUsage: Determines how the package uses the checkpoint file while the package executes. It has three settings:
    • Always: The package will always use the checkpoint file and will fail if the file does not exist.
    • IfExists: The package will use the checkpoint file if it exists to restart the package at the previous point of failure. Otherwise, execution begins at the first Control Flow task. This is the usual setting for using checkpoints.
    • Never: The package will not use the checkpoint file even if it exists. This means that the package will never restart, and will only execute from the beginning.
  • SaveCheckpoints: Specifies whether the package should write checkpoints to the file.

This combination of properties provides flexibility in configuring checkpoints for the package, letting you turn checkpoint use on and off before execution without losing the checkpoint configuration.

In order for checkpoints to work, a task failure has to cause the package to fail. Otherwise, the package will continue executing beyond the failure, recording more data in the checkpoint file for subsequent tasks. So you must also set the
FailPackageOnFailure property to true for each task where you want to make it possible to restart the package using a checkpoint. If it is set to false for a task and the task fails, Integration Services doesn’t write any data to the
checkpoint file. Because the checkpoint data is incomplete, the next time you execute the package it will start from the beginning.
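
You normally set these properties in the package’s Properties window in BIDS, but you can also configure them through the Integration Services runtime API. Here is a minimal C# sketch, assuming an existing package at a hypothetical path:

    using Microsoft.SqlServer.Dts.Runtime;

    class CheckpointConfig
    {
        static void Main()
        {
            var app = new Application();
            Package pkg = app.LoadPackage(@"C:\ETL\LoadWarehouse.dtsx", null);

            // The three package-level checkpoint properties.
            pkg.CheckpointFileName = @"C:\ETL\LoadWarehouse.chk";
            pkg.CheckpointUsage = DTSCheckpointUsage.IfExists;
            pkg.SaveCheckpoints = true;

            // A task failure must fail the package for restart to work.
            foreach (Executable exec in pkg.Executables)
            {
                TaskHost task = exec as TaskHost;
                if (task != null)
                {
                    task.FailPackageOnFailure = true;
                }
            }

            app.SaveToXml(@"C:\ETL\LoadWarehouse.dtsx", pkg, null);
        }
    }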

TIP: Checkpoints only record data for Control Flow tasks. This includes a Data Flow task, but it does not save checkpoint data for individual steps in a Data Flow. Therefore a package can restart at a Data Flow task, but not within a Data Flow itself. In other words, you cannot restart a package using a checkpoint to execute only part of a Data Flow, just the entire Data Flow.

At the start of the package, Integration Services checks for the existence of the checkpoint file. If the file exists, Integration Services scans the contents of the checkpoint file to determine the starting point in the package. Integration
Services writes to the checkpoint file while the package executes. The contents of the checkpoint file are stored as XML and include the following information:

  • Package ID: A GUID stamped onto the file at the beginning of the execution phase.
  • Execution Results: A log of each task that executes successfully in order of execution. Based on these results, Integration Services knows where to begin executing the package the next time.
  • Variable Values: Integration Services saves the values of package variables in the checkpoint file. When execution begins again, the variable values are read from the checkpoint file and set on the package.

This post is an excerpt from the online courseware for our Microsoft SQL Server 2008 Integration Services course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.


Transaction Support in Integration Services

A transaction is a core concept of relational database systems. It is one of the major mechanisms through which a database server protects the integrity of data, by making sure that the data remains internally consistent. Within a transaction, if any part fails you can have the entire set of operations within the transaction roll back, so that no changes are persisted to the database. SQL Server has always had rich support for transactions, and Integration Services hooks into that support.

A key concept in relational database transactions is the ACID test. To ensure predictable behavior, all transactions must possess the four ACID properties:

  • Atomic: A transaction must work as a unit, which is either fully committed or fully abandoned when complete.
  • Consistent: All data must be in a consistent state when the transaction is complete. All data integrity rules must be enforced and all internal storage mechanisms must be correct when the transaction is complete.
  • Isolated: All transactions must be independent of the data operations of other concurrent transactions. A transaction can see data only as it was before other concurrent operations began or after they complete.
  • Durable: After the transaction is complete, the effects are permanent even in the event of system failure.

Integration Services ensures reliable creation, updating, and insertion of rows through the use of ACID transactions. For example, if an error occurs in a package that uses transactions, the transaction rolls back the data that was previously inserted or updated, thereby keeping database integrity. This eliminates orphaned rows and restores updated data to its previous value to ensure that the data remains consistent. No partial success or failure exists when tasks in a package have transactions enabled. They fail or succeed together.

Tasks can use the parent container’s transaction or create their own. The properties that are required to enable transactions are as follows (a code sketch follows the list):

  • TransactionOption: Set this property of a task or container to enable transactions. The options are:
    • Required: The task or container enrolls in the transaction of the parent container if one exists; otherwise it creates a new transaction for its own use.
    • Supported: The task uses a parent’s transaction, if one is available. This is the default setting.
    • Not Supported: The task does not support and will not use a transaction even if the parent is using one.
  • IsolationLevel: This property determines the safety level, using the same scheme you can use in a SQL Server stored procedure. The options are:
    • Serializable: The most restrictive isolation level of all. It ensures that if a query is reissued inside the same transaction, existing rows won’t look any different and new rows won’t suddenly appear. It employs a range of locks that prevents edits or insertions until the transaction is completed.
    • Read Committed: Ensures that shared locks are issued when data is being read and prevents “dirty reads.” A dirty read reads data that is in the process of being edited but has not been committed or rolled back. However, data can still change before the end of the transaction, resulting in nonrepeatable reads or phantom rows.
    • Read Uncommitted: The least restrictive isolation level, which is the opposite of READ COMMITTED, allows “dirty reads” of the data. Ignores locks that other operations may have issued and does not create any locks of its own. This is called “dirty read” because underlying data may change within the transaction and this query would not be aware of it.
    • Snapshot: Reads data as it was when the transaction started, ignoring any changes since then. As a result, it doesn’t represent the current state of the data, but it represents a consistent state of the database as of the beginning of the transaction.
    • Repeatable Read: Prevents others from updating data until a transaction is completed, but does not prevent others from inserting new rows. The inserted rows are known as phantom rows, because they are not visible to a transaction that was started prior to their insertion. This is the minimum level of isolation required to prevent lost updates, which occur when two separate transactions select a row and then update it based on the selected data. The second update would be lost since the criteria for update would no longer match.
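
Here is a minimal C# sketch that sets both properties through the Integration Services runtime API (the package path is hypothetical): the package starts a serializable transaction, and each task joins it.

    using System.Data;
    using Microsoft.SqlServer.Dts.Runtime;

    class TransactionConfig
    {
        static void Main()
        {
            var app = new Application();
            Package pkg = app.LoadPackage(@"C:\ETL\LoadWarehouse.dtsx", null);

            // The package creates the transaction; children can enroll in it.
            pkg.TransactionOption = DTSTransactionOption.Required;
            pkg.IsolationLevel = IsolationLevel.Serializable;

            // Each task joins the parent's transaction if one exists.
            foreach (Executable exec in pkg.Executables)
            {
                TaskHost task = exec as TaskHost;
                if (task != null)
                {
                    task.TransactionOption = DTSTransactionOption.Supported;
                }
            }
        }
    }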

Integration Services supports two types of transactions. The first is Distributed Transaction Coordinator (DTC) transactions, which let you include multiple resources in the transaction. For example, you might have a single transaction that involves data in a SQL Server database, an Oracle database, and an Access database. This type of transaction can span connections, tasks, and packages. The downside is that it requires the DTC service to be running and tends to be very slow.
The other type of transaction is a Native transaction, which uses SQL Server’s built-in support for transactions within its own databases. This uses a single connection to a database and T-SQL commands to manage the transaction.

Integration Services supports a great deal of flexibility with transactions. It supports a variety of scenarios, such as a single transaction within a package, multiple independent transactions in a single package, transactions that span packages, and others. You’ll be hard pressed to find a scenario that you can’t implement with a bit of careful thought using Integration Services transactions.

This post is an excerpt from the online courseware for our Microsoft SQL Server 2008 Integration Services course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.


Windows 8: Interacting with Tiles

Windows 8 displays tiles on its start screen, making it easy for users to interact with your applications. These tiles represent your applications and make it possible for users to start them. But tiles do much more. You can simply display static information on a tile, but that’s not really the intent. Tiles allow you to display information pertinent to your application, including new, real-time information for the user.

In general, you can display text and images, plus a status badge, on a tile. You can update the content of the tile regularly, in reaction to an event, or at any time. Beware that even if the tile displays active data, the user can elect to display only static data, and have no updates appear on the tile. In Figure 1, the WeatherBug and ABC News apps show active data.


Figure 1. WeatherBug and ABC News apps display live data.
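
Updates are sent through the notification classes in the Windows.UI.Notifications namespace. The following C# sketch is a minimal example; the template choice and message text are arbitrary:

    using Windows.Data.Xml.Dom;
    using Windows.UI.Notifications;

    static class TileHelper
    {
        // Send a simple text update to the application's tile.
        public static void UpdateTile(string message)
        {
            // Start from one of the built-in tile templates.
            XmlDocument xml = TileUpdateManager.GetTemplateContent(
                TileTemplateType.TileWideText03);

            // Fill in the template's text element.
            xml.GetElementsByTagName("text")[0].InnerText = message;

            var notification = new TileNotification(xml);
            TileUpdateManager.CreateTileUpdaterForApplication()
                .Update(notification);
        }
    }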

Tile Variations

Tiles come in two sizes: square and wide. Either size can display text, images, and/or a badge. You can (and probably should) include both square and wide tile configurations in your application package, so that users can switch between the two sizes at will. Users can mix and match square and wide tiles, and a wide tile is just the right size to “match” two square tiles side by side, with a 5-pixel gutter between them, as shown in Figure 2.


Figure 2. Tiles come in two sizes.

In addition to text and images, tiles can display badges that contain specific information. For example, in Figure 3, the Mail app displays a badge with a count of the unread email messages. Badges can consist of numbers or a limited set of specific icons.

Figure 3. Note the badge on the Mail app.

NOTE: Creating badge notifications is beyond the scope of this particular chapter.

Secondary Tiles

Besides the main tile associated with an application, an application can display one or more secondary tiles. These secondary tiles allow users to promote specific content and internal links from within the application to the Start screen. These secondary tiles can display and/or link to specific content, such as:

  • Information about specific friends
  • Weather reports for specific locations
  • Stock reports for specific stocks

Not all applications support creating secondary tiles, but many do. For example, Figure 4 shows a secondary tile for the Weather app showing weather conditions at a specific location.


Figure 4. A secondary tile in the Weather app.
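
An application creates a secondary tile by constructing a SecondaryTile object and asking the user to approve the pin. A minimal C# sketch follows; the tile ID, names, arguments, and logo URI are all hypothetical:

    using System;
    using Windows.UI.StartScreen;

    static class SecondaryTileHelper
    {
        // Ask the user to pin a secondary tile for a specific location.
        public static async void PinWeatherTile()
        {
            var tile = new SecondaryTile(
                "Weather.Fairbanks",        // tile ID, unique within the app
                "Fairbanks",                // short name
                "Weather in Fairbanks",     // display name
                "location=Fairbanks",       // arguments passed on activation
                TileOptions.ShowNameOnLogo,
                new Uri("ms-appx:///Assets/Logo.png"));

            // Windows shows a confirmation flyout; the user has the final say.
            bool pinned = await tile.RequestCreateAsync();
        }
    }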

This post is an excerpt from the online courseware for our Windows 8 Tiles, Badges, Print and Charms course written by expert Ken Getz.

Ken Getz is a Visual Studio expert with over 25 years of experience as a successful developer and consultant. He is a nationally recognized author and speaker, as well as a featured instructor for LearnNowOnline.


SQL 2012: Developer: NULLs and SqlBoolean

When integrating T-SQL with the CLR, remember to declare variables, parameters, and return values using the data types exposed through the System.Data.SqlTypes namespace. Doing so guarantees behavior that more closely matches T-SQL.

As described in the previous section, the outcome of performing arithmetic, bitwise, and logical comparisons between two variables when one or both values is NULL can be inconsistent. The ANSI_NULLS option in T-SQL shows how different the results can be, and as you saw in the simple Visual Basic .NET example, not using the SqlTypes data types leads to the same confusion.

Fortunately, there is the SqlBoolean data type. Exposed as part of the SqlTypes namespace, the SqlBoolean data type can represent three distinct states—true, false, and unknown. In addition, the comparison of two SqlTypes data types always returns a SqlBoolean, which again ensures consistent behavior.

The SqlBoolean data type exposes three important properties:

  • IsTrue: Indicates whether the comparison produces a TRUE value.
  • IsFalse: Indicates whether the comparison produces a FALSE value.
  • IsNull: Returns true when the comparison between the variables produces an unknown or NULL result.

Keeping these concepts in mind, look at the Visual Basic .NET code behind the SqlBooleans button on the switchboard form. The code compares intX—a SqlInt32 assigned the value 5—with intY, another SqlInt32 explicitly assigned a NULL value. The result is a SqlBoolean data type with properties that contain the outcome of the comparison, such as blnResult.IsTrue.
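
The course shows the routine in Visual Basic .NET; the following C# sketch is a rough equivalent of what the text describes, with the variable names taken from the description:

    using System;
    using System.Data.SqlTypes;

    class SqlBooleanExample
    {
        static void Main()
        {
            SqlInt32 intX = 5;                 // assigned the value 5
            SqlInt32 intY = SqlInt32.Null;     // explicitly NULL

            // Comparing SqlTypes always yields a SqlBoolean.
            SqlBoolean blnResult = (intX == intY);

            // One operand is NULL, so the outcome is unknown:
            Console.WriteLine(blnResult.IsTrue);   // False
            Console.WriteLine(blnResult.IsFalse);  // False
            Console.WriteLine(blnResult.IsNull);   // True
        }
    }
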
Figure 1 shows the Msgbox that displays the outcome of this routine.


Figure 1. SqlBooleans provide consistency when you work with NULL values.

WARNING! Remember that a SqlBoolean data type represents three states—IsTrue, IsFalse, and IsNull. IsNull returns TRUE when either side of the comparison is unknown.

Frank Tillinghast is a senior consultant with MTOW Software Solutions. He is a Microsoft Certified Solution Developer and has been developing applications for over fifteen years. Most of his time is spent consulting for companies nationwide with troubled projects or mentoring projects to successful completion. When he is not developing software or mentoring clients, Frank is teaching other developers. He has taught thousands of application developers how to create business solutions with Visual Studio .NET, VB.NET, ASP.NET, Visual C#, and SQL Server.


SharePoint 2010: Developer: Sandbox Solution Overview

SharePoint 2010 includes a secure, isolated environment for user-deployable Web solution packages—the user code sandbox. To deploy a sandbox solution, a user (usually the site owner) uploads a WSP to the solution gallery, a special library that is part of every SharePoint 2010 site. Once the solution is in the gallery, the site owner can activate it by clicking a button on the ribbon.

The sandbox makes it easy to create code that runs on a SharePoint server as Web Parts, pages, and event handlers. Code in the sandbox runs under a restricted set of privileges out of process from the web application in a monitored host process called the Windows SharePoint Services User Code Host Service.

The SharePoint object model includes facilities to allow communication between the main worker process and the user code host. The SPUserCodeWebPart Web Part is one of these facilities. SPUserCodeWebPart provides the ability to host controls running in the sandbox on a page running in the main worker process.

In addition to a reduced set of privileges, the sandbox also provides a limited and safe subset of the SharePoint object model. This prevents sandbox code from making any changes outside the current site and from executing with explicit elevation of privileges via the SPSecurity namespace.

Why Use Sandbox?

The sandbox environment gives farm operators the ability to enable customization for users without providing administrative access to the farm. This power comes with a number of safeguards to protect the overall stability and security of the farm.

The sandbox protects the farm from poorly written or malicious code. This includes protection from:

  • Unhandled exceptions
  • Processor intensive operations
  • Unauthorized manipulation of web application and farm infrastructure
  • Elevation of privilege

SharePoint Central Administration and the solution gallery both provide visibility to farm administrators of solution health and resource needs. Administrators can define quotas to block defective solutions and have the ability to manually block execution of specific solutions for any reason.

How the Sandbox Works

The Windows SharePoint Services User Code Host Service provides a partial trust AppDomain to host sandboxed processes. The service consists of three parts:

  • SPUCHostService.exe
  • SPUCWorkerProcessProxy.exe
  • SPUCWorkerProcess.exe

SPUCHostService.exe manages one or more SPUCWorkerProcess.exe instances via SPUCWorkerProcessProxy.exe. This architecture makes it possible to scale the user code sandbox across multiple servers in the farm. Solutions in the sandbox use a special version of Microsoft.SharePoint.dll located in the UserCode\Assemblies folder in the SharePoint root.

The host service also allows configuration of server affinity—you can specify that sandbox code runs on the same machine as the request, or that requests to run sandboxed code are handled by any available server running the Sandboxed Code Service. Regardless of the configuration, the host service runs sandbox code within SPUCWorkerProcess.exe, which is the process to which you attach the debugger when you debug sandbox code.

This post is an excerpt from the online courseware for our Microsoft SharePoint 2010 for Developers course written by expert Doug Ware.

Doug Ware is a SharePoint expert and an instructor for many of our SharePoint 2007 and SharePoint 2010 courses. A Microsoft MVP several times over, Doug is the leader of the Atlanta .NET User Group, one of the largest user groups in the Southeast U.S., and is a frequent speaker at code camps and other events. In addition to teaching and writing about SharePoint, Doug stays active as a consultant and has helped numerous organizations implement and customize SharePoint.


SSRS 2008: Reporting Services and SharePoint

SQL Server 2008 Reporting Services supports two different installation options:

  • Native mode. The Report Server controls all report execution, rendering, and management.
  • SharePoint integrated mode. Report Server runs as part of a SharePoint server farm. SharePoint provides front-end access to content, management, and security. The Report Server takes care of report execution and rendering.

In order to use SharePoint integrated mode, you must have installed Windows SharePoint Services 3.0 or Office SharePoint Server 2007. You also need to configure Report Server for SharePoint integrated mode.

TIP: An alternative to deploying Reporting Services in SharePoint Integrated mode is to deploy Reporting Services in Native mode, but to use the Reporting Services Web parts to find and view reports from SharePoint.

Why Use SharePoint Integrated Mode?

SharePoint Integrated mode makes the most sense when your organization has made a strong investment in SharePoint and you want to leverage that investment when creating, executing, and managing reports.

A number of Reporting Services features are only available when working in SharePoint Integrated mode. When in integrated mode, you can:

  • Manage reports, data sources, and report history using SharePoint.
  • Use the document management and collaboration features of SharePoint with reports.
  • Make use of SharePoint managed authentication and authorization for reports. This includes the use of Forms authentication.
  • Distribute reports outside of a firewall using SharePoint.
  • Deliver reports using a SharePoint delivery extension.
  • Create custom integrations between a SharePoint site and Reporting Services using the Reporting Services Web services API.

A number of Reporting Services features are not supported when the Report Server is configured for Integrated mode. These include:

  • Report Manager
  • My Reports
  • Linked reports
  • The rs.exe command-line utility

NOTE While there is much in common across Native and Integrated modes, where there is a difference, this course will cover the Native mode approach.

TIP: See SQL Server 2008 Books Online and your SharePoint documentation for guidance on SharePoint-specific features and interfaces.

Paul Litwin is a developer specializing in ASP, ASP.NET, Visual Basic, C#, SQL Server, and related technologies. He is an experienced trainer, has written articles for MSDN Magazine and PC World, and is the author of several books including ASP.NET for Developers (SAMS) and Access 2002 Enterprise Developer’s Handbook (SYBEX). Paul is a Microsoft MVP, the chair of the Microsoft ASP.NET Connections conference, and a member of the INETA Speakers Bureau.


Windows Presentation Foundation Using Visual C# 2010: Introducing Binding

As you write applications, you often need to update the value of one element with information from another element. Often, you need to display information coming from a collection of objects in a list or combo box. Or perhaps you need to work with data coming from a data source, such as a database. In all these cases, the simplest solution is to bind data from a data source to a target element; in other words, you need binding capabilities.

Data binding makes it simple for applications to display and interact with data. You use a connection, or binding, between the data source and the target; this binding object allows data to flow between the source and the target objects. Once you have created a binding, any change you make to data in the data source appears in the target. If you have enabled two-way data binding, you can also make changes in the target and have those changes appear back in the original data source.

When might you use binding? You might use binding to:

  • Display the number of characters in a text box, in a TextBlock control.
  • Display the value of a Slider control in a TextBlock control.
  • Display a list of customers in a ListBox control.
  • Display a customer’s photo in an Image control.
  • Display and edit a customer’s name in a TextBox control.

Obviously, there are infinite reasons to use binding, and XAML makes it easy.

Connecting Sources and Targets

Every time you use binding, you must supply a source for the data, and a target (normally, a dependency property of some user interface element). Figure 1 demonstrates the basic concepts of binding in XAML. As you can see, the binding source can be any property of any CLR object. The binding object connects this source to the binding target, which must be a dependency property of some object that inherits from FrameworkElement (all the standard controls inherit from this base class, and just about every property of these controls is a DependencyProperty object).


Figure 1. The Binding object connects the binding source and target.

Although you can generally work blissfully unaware of this fact, when you use a binding in XAML you’re actually creating an instance of the Binding class and setting various properties. XAML provides a markup extension that hides this fact, as you’ll see, but all the same, you’re working with a Binding class instance.

TIP: If you want to create a binding in code, you’ll always need to create an
instance of the Binding class, and set its various properties to match the
properties you would otherwise set in XAML markup.

The flow shown in Figure 1 indicates that binding can work in multiple directions. By setting the Mode property of the Binding instance, you can indicate whether you want the data to move in one direction whenever changes are made to the source, in one direction only once, in two directions, or not at all.
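
As a concrete illustration, here is a minimal C# sketch that wires up one of the scenarios listed earlier (displaying a Slider’s value in a TextBlock) entirely in code; the method and parameter names are hypothetical:

    using System.Windows.Controls;
    using System.Windows.Data;

    static class BindingExample
    {
        // Bind textBlock.Text to slider.Value, source-to-target only.
        public static void WireSliderBinding(Slider slider, TextBlock textBlock)
        {
            var binding = new Binding("Value")  // path into the source object
            {
                Source = slider,                // the binding source
                Mode = BindingMode.OneWay       // flow from source to target
            };

            // TextBlock.Text is the dependency property acting as the target.
            textBlock.SetBinding(TextBlock.TextProperty, binding);
        }
    }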

Value Converters

The Value Converter block in Figure 1 represents an instance of a class that implements the System.Windows.Data.IValueConverter interface. You’ll often need to use a value converter, especially if you want to do all of your binding declaratively, in the XAML markup. Imagine scenarios like these, in which value converters become necessary (a sketch of a converter follows the list):

  • You want to select a customer from a ListBox, and display the combined FirstName and LastName fields in a TextBlock.
  • You want to select a number value, and use that value to set the BorderThickness of a Border control. You can’t simply bind the number to the property, because the BorderThickness property of a Border is a Thickness object.
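
Here is a minimal C# sketch of a converter for the second scenario; the class name is hypothetical, and a real converter would need error handling:

    using System;
    using System.Globalization;
    using System.Windows;
    using System.Windows.Data;

    // Converts a number into a Thickness so it can bind to BorderThickness.
    public class NumberToThicknessConverter : IValueConverter
    {
        public object Convert(object value, Type targetType,
            object parameter, CultureInfo culture)
        {
            double width = System.Convert.ToDouble(value);
            return new Thickness(width);    // uniform thickness on all sides
        }

        public object ConvertBack(object value, Type targetType,
            object parameter, CultureInfo culture)
        {
            return ((Thickness)value).Left; // recover the number from one side
        }
    }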

There are, obviously, infinite reasons to use a value converter, but unless you perform data binding declaratively (in the XAML markup, that is), there’s no reason to use one. If you would like to simplify your applications and reduce the amount of code you must maintain, declarative binding is a truly useful tool, and it requires value converters in order to be fully useful.

This post is an excerpt from the online courseware for our Windows Presentation Foundation Using Visual C# 2010 course written by expert Ken Getz and Robert Green.

Ken Getz is a Visual Studio expert with over 25 years of experience as a successful developer and consultant. He is a nationally recognized author and speaker, as well as a featured instructor for LearnNowOnline.

Robert Green is a Visual Studio expert and a featured instructor for several of our Visual Basic and Visual C# courses. He is currently a Technical Evangelist in the Developer Platform and Evangelism (DPE) group at Microsoft. He has also worked for Microsoft on the Developer Tools marketing team and as Community Lead on the Visual Basic team. Robert also has several years of consulting experience focused on developer training, and is a frequent speaker at technology conferences including TechEd, VSLive, VSConnections, and Advisor Live.
