Tag Archives: Don Kiely

MVC 6: Your Web Development World is About to Get Rocked!

If you missed the live stream of our latest webinar on 7/22/2015, never fear! Check out the link below to get yourself up to speed with MVC 6. The webinar is led by instructor and skijoring enthusiast Don Kiely, and I should warn you in advance… it will rock your world!

For years, Microsoft did its best to keep its ASP.NET and MVC technology up to speed with the Internet's rapid changes. However, it's been nearly 15 years since ASP.NET first hit the web, and those changes eventually became too dramatic to patch over.

Enter Visual Studio 2015, Microsoft’s way of throwing out the “old ASP.NET manual” and rebuilding things from the ground up. Visual Studio 2015 was released on 7/20/2015, along with the .NET Framework 4.6 and updated web development tools. With MVC being part of ASP.NET, it has strapped itself in and come along for the update adventure!

This massive update sees MVC boasting new features like broad support for modern web technologies, as well as editor updates for JSON, HTML, and JavaScript. It is also now much easier (and more fun!) to write robust web applications with HTML5 and CSS.

So this is a big deal?

Absolutely! Not only is the jump from ASP.NET to ASP.NET 5 as significant as switching from Classic ASP to ASP.NET was, but this bad boy is also open source! MVC, Web API, and Web Pages have been combined into a single unified open source programming model, which is hosted on GitHub. There are new tag helpers, view components, simpler dependency management, dependency injection, and more!
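
To give a flavor of the new programming model, here is a minimal, hedged sketch of an ASP.NET 5 Startup class that wires up MVC 6 and registers a service with the built-in dependency injection. The IGreetingService/GreetingService types are hypothetical names invented for this example, and the namespace names shifted across the ASP.NET 5 beta releases, so treat this as an illustration rather than copy-and-paste code.

```csharp
using Microsoft.AspNet.Builder;                  // beta-era namespaces; renamed in later releases
using Microsoft.Framework.DependencyInjection;

// Hypothetical service used only to illustrate dependency injection.
public interface IGreetingService
{
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name) => $"Hello, {name}!";
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();                                           // the unified MVC/Web API/Web Pages model
        services.AddTransient<IGreetingService, GreetingService>();  // built-in dependency injection
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}

// In a Razor view, the new tag helpers replace many HTML helpers, e.g.:
// <a asp-controller="Home" asp-action="Index">Home</a>
```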

As you can see, there is suddenly a lot to learn about the all-new Visual Studio 2015. With our new webinar, you will get a high-level look at the newest MVC 6 and ASP.NET 5 features, as well as a jumpstart on building web applications using the new Microsoft stack. Instructor Don Kiely will walk you through these new and exciting changes and features, and have you ready to be a Visual Studio superstar in no time.

Check out the link below (it’s free)…just be sure to turn the volume up to eleven!

http://www.learnnowonline.com/webinars

NOTE: Both MVC 6 and ASP.NET 5 are included with this release; however, you cannot yet use them for production purposes.

The RC1 (release candidate 1) for ASP.NET 5 is slated for release in November 2015 and will carry a "Go Live" license, indicating that Microsoft is ready to fully support ASP.NET 5 and MVC 6 and is confident that users will be able to run them in production applications.

A helpful roadmap of release dates can be viewed here:

https://github.com/aspnet/Home/wiki/Roadmap

About the Author:


Zach Young manages the LearnNowOnline customer support department. In addition to making strange but surprisingly delicious smoothies, Zach divides his time between the LearnNowOnline recording studio, providing sales demos for new and existing clients, and ensuring that each customer is taken care of. In his spare time, Zach enjoys globetrotting with his wife, playing and recording music, and attempting to get the required 1.21 gigawatts for Doc Brown’s DeLorean.

 

“Cool” New Courses for Entity Framework

We’ve joined forces with our Yeti instructor, Don Kiely, to create two new Entity Framework 6.1 courses for you. (I wonder how he can type when he’s that “Frozen?” I guess I’ll just have to “Let It Go.”)

No, Don doesn’t normally look this way in the winter. And no, he didn’t have a run-in with Elsa. His friend Tracey Martinson was not too frozen to take this picture of Don after he went for a run when the temperature was -18°F (which is -27.8°C, or 245 K). In between running, caring for his sled dogs, and never being asked “Do You Want to Build a Snowman?”, Don has created these exciting new Entity Framework courses:

Entity Framework 6.1: SQL Server Features – Now available
In this course you’ll learn about a few of Entity Framework’s “For the First Time in Forever” additions that support SQL Server features. You’ll start with a look at Entity Framework’s support for hierarchyID or, rather, its missing support. Then you’ll jump into one of the best new features Entity Framework has added in a long time, enums, which you can use to protect the integrity of your data. Next you’ll explore Entity Framework’s support for spatial data, which covers location-aware applications and data. You’ll wrap up with a look at table-valued functions and their support in Entity Framework.
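
To make the enum and spatial support a little more concrete, here is a minimal, hedged sketch using Entity Framework 6.x. The Order, OrderStatus, and StoreContext names are hypothetical, invented for this example; the course covers the real details.

```csharp
using System.Data.Entity;          // EF 6.x
using System.Data.Entity.Spatial;  // DbGeography for spatial data

// Hypothetical model for illustration only.
public enum OrderStatus
{
    Pending = 0,
    Shipped = 1,
    Cancelled = 2
}

public class Order
{
    public int OrderId { get; set; }

    // Enum properties are stored as integers, but your code works with
    // named values, which helps protect the integrity of the data.
    public OrderStatus Status { get; set; }

    // Spatial support: DbGeography maps to SQL Server's geography type.
    public DbGeography DeliveryLocation { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

// Example query using the enum instead of a "magic number":
// var shipped = context.Orders.Where(o => o.Status == OrderStatus.Shipped).ToList();
```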

Entity Framework 6.1: Code-First Development – Coming 2/2/15
You will begin by learning how code-first works by default, which will probably suffice for most applications early in their development cycle. But when you’re ready to deploy the application to a production server, or need more flexibility (in a “Fixer Upper” kind of way), you’ll need to understand how Entity Framework creates a database. You’ll see how to create a code-first model, create a database from it, and build an application that uses it to maintain data in the database. You’ll also learn how to customize the database using data annotations, as well as the DbModelBuilder API, which lets you write code instead of using data annotations. Lastly, you’ll look at code-first migrations, a newer feature of code first that goes beyond simply deleting and recreating the database when the model changes.
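
As a taste of what the DbModelBuilder API looks like, here is a minimal, hedged sketch. The Customer entity and ShopContext are hypothetical names for illustration; the fluent calls shown (ToTable, Property, HasMaxLength, IsRequired) are standard Entity Framework 6 configuration methods.

```csharp
using System.Data.Entity;  // EF 6.x

// Hypothetical code-first model for illustration only.
public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    // The DbModelBuilder API expresses the same kinds of rules as data
    // annotations, but keeps the configuration out of the entity classes.
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Customer>().ToTable("Customers");

        modelBuilder.Entity<Customer>()
                    .Property(c => c.Name)
                    .HasMaxLength(100)
                    .IsRequired();
    }
}
```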


Be sure to check out all of our Entity Framework courses, including these two new additions. By the way, here is a picture of Don all thawed out. I doubt he thinks “Reindeer are Better than People,” because he may not have met one yet. Let’s hope that “In Summer” up in Alaska, Don’s runs won’t have that frozen look.

My apologies to Disney for using Frozen songs as puns.

About the Author



Brian Ewoldt is the Project Manager for LearnNowOnline. Brian joined the team in 2008 after 13 years of working for the computer gaming industry as a producer/project manager. Brian is responsible for all production of courses published by LearnNowOnline. In his spare time, Brian enjoys being with his family, watching many forms of racing, racing online, and racing Go-Karts.

 

Entity Framework’s Entity Data Model


Don Kiely recently presented an interesting webinar on the Entity Framework Data Model—complete with his sled dogs in the background supporting him all the way.

Broadcasting from his home in Alaska, Don kicked off the webinar by explaining why Entity Framework’s Entity Data Model is the key link between the entity data objects in your application and the backend data store where data resides. Don went on to describe how the Entity Framework uses the model to generate .NET entity classes and APIs that provide powerful data access features to an application. Don then reached down into the guts of the XML that makes up the three Entity Data Models—conceptual, storage, and mapping—to give us a good understanding of how Entity Framework implements many of its features. Don’s dogs chimed in from time to time, unable to wait for the Q&A to show their enthusiasm for the topic.

If you missed Don (and his dogs), catch the webinar replay here. Also, take a moment to register now for our next event titled “What’s new in iOS8 and Xamarin” presented by Wally McClure.

 

About the Author



Brian Ewoldt is the Project Manager for LearnNowOnline. Brian joined the team in 2008 after 13 years of working for the computer gaming industry as a producer/project manager. Brian is responsible for all production of courses published by LearnNowOnline. In his spare time, Brian enjoys being with his family, watching many forms of racing, racing online, and racing Go-Karts.

 

Entity Framework 6.1 Fundamentals

New from our instructor in the land of the midnight sun are two courses covering the fundamentals of Entity Framework 6.1. That instructor is Don Kiely… and between high-adventure trips, skijoring, saving sled dogs, dodging moose, and running marathons, Don has found the time to work with us to create two excellent new courses.

According to Microsoft, Entity Framework (EF) is an object-relational mapper that enables .NET developers to work with relational data using domain-specific objects. It eliminates the need for most of the data-access code that developers usually need to write.
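
As a quick illustration of that idea, here is a minimal, hedged sketch using Entity Framework 6: you query strongly typed domain objects with LINQ and let EF generate the SQL. The Student and SchoolContext types are hypothetical names invented for this example.

```csharp
using System.Data.Entity;  // EF 6.x
using System.Linq;

// Hypothetical entity and context for illustration only.
public class Student
{
    public int StudentId { get; set; }
    public string LastName { get; set; }
}

public class SchoolContext : DbContext
{
    public DbSet<Student> Students { get; set; }
}

public static class EfIntroDemo
{
    public static void Run()
    {
        using (var context = new SchoolContext())
        {
            // LINQ against domain objects; EF translates the query to SQL,
            // so there is no hand-written data-access code here.
            var smiths = context.Students
                                .Where(s => s.LastName == "Smith")
                                .ToList();
        }
    }
}
```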

In our first new course, Entity Framework 6.1: Introduction, Don covers the basics from data access issues to the EF API and tools. In the second course, Entity Framework 6.1: Data Model, as the title suggests, Don digs in to the entity data model. These courses total over four hours of video training and are now available. Learn more

Entity Framework 6.1: Introduction

Watch trailer – Entity Framework 6.1: Introduction

Entity Framework 6.1: Data Model

Watch trailer – Entity Framework 6.1: Data Model

Watch for more EF courses from Don coming in the near future. In the meantime, I invite you to attend Don’s upcoming webinar on the Entity Framework Entity Data Model. He will be broadcasting live from Alaska beginning at 1pm CST on Wednesday, September 10th. (Don’t be surprised if you hear his dogs barking at moose in the background!) Register now

About the Author

Brian Ewoldt is the Project Manager for LearnNowOnline. Brian joined the team in 2008 after 13 years of working for the computer gaming industry as a producer/project manager. Brian is responsible for all production of courses published by LearnNowOnline. In his spare time, Brian enjoys being with his family, watching many forms of racing, racing online, and racing Go-Karts.

Watch Now: SSIS Data Flows and Components

Watch the recording of our SSIS event now.*

Last week we held our SSIS 2012/14: Data Flows and Components live learning event, presented by Don Kiely. Our thanks to Don for sharing his expertise on this popular topic.

If you missed the live event, you can watch the recording* now. You’ll learn about data flow pipelines, and how to get data from a source, transform it along the way, and store it in various data store destinations.

Our next live learning event, Making Sense of One ASP.NET featuring Mike Benkovich, will be held on Wednesday, June 11th at 11:00 a.m. CDT. Register now

* To view the event recording in Chrome or Firefox, please have the Windows Media Player extension installed. Or, right-click on the link and select Copy Link Address; then open Windows Media Player, click File > Open URL…, paste the link address in the text box, and click the OK button.

About the Author

Brian Ewoldt is the Project Manager for LearnNowOnline. Brian joined the team in 2008 after 13 years of working for the computer gaming industry as a producer/project manager. Brian is responsible for all production of courses published by LearnNowOnline. In his spare time, Brian enjoys being with his family, watching many forms of racing, racing online, and racing Go-Karts.

Join Us for a Look at SSIS Data Flows and Components

SQL Server expert Don Kiely

I am pleased to announce our upcoming webinar SSIS 2012/14: Data Flows and Components featuring expert Don Kiely.

Here is Don’s description of what he will be covering in this event:

The Data Flow task is a special Control Flow task that moves data from a data source to a data destination, optionally transforming the data in various ways as it moves. It is so important and complex that, unlike any other Control Flow task, the Data Flow task has its own designer in SQL Server Data Tools. This is where you are likely to spend most of your time when developing any non-trivial Integration Services package that moves data rather than just performs other Control Flow tasks. The Data Flow task is the single most important task in a Control Flow and performs the majority of the work in an ETL (Extract, Transform, and Load) process.

During this LearnNowOnline live learning event, you’ll learn about data flow pipelines, how to get data from a source, transform it along the way, and store it in various data store destinations.

This event takes place on Wednesday, May 21st from 1:00pm – 2:30pm CDT. I will be the moderator and hope to see you there!

Register now

Can’t make it? No worries. Watch our blog next week to access the recording of the event.

About the Author

Brian Ewoldt is the Project Manager for LearnNowOnline. Brian joined the team in 2008 after 13 years of working for the computer gaming industry as a producer/project manager. Brian is responsible for all production of courses published by LearnNowOnline. In his spare time, Brian enjoys being with his family, watching many forms of racing, racing online, and racing Go-Karts.

SSRS 2012: Preview Performance for Report Builder

When you work in Design view in Report Builder, you are not working with real data, even if you created a data set and attached it to a data region. Report Builder uses that data set design to discern the schema for the data, but works only with a representation of that data. That’s why you’ll want to preview a report repeatedly as you design it, to confirm that the report looks the way you envisioned with actual data.

When you click the Run button in Design view, Report Builder reads the actual data from the data store and renders the report so you can view it with actual data. It connects to the data source you specified, caches the data, and then combines the data and layout to render the report. You can switch between design and preview as often as necessary.

This is convenient for developing a report, but it can be a painfully slow process. If the data set uses a complex query that takes time to execute in the database, for example, you might have a significant wait for the report preview. In older versions of Reporting Services, you just had to wait patiently.

However, newer versions of Report Builder greatly enhance the report preview process by using edit sessions when you’re connected to a report server. The edit session creates a data cache on the report server that it retains for your next report preview. This way you have to wait for the data only once; subsequently, the report preview appears almost instantaneously. As long as you don’t make any changes to the data set or any report changes that affect the data, report previewing uses the cached data. If you ever need to use fresh data, you can preview the report and click the Refresh button in the Report Builder’s preview toolbar, as shown in Figure 1.


Figure 1. Refresh button in preview mode in Report Builder.

Report Builder creates an edit session the first time you preview the report; the session lasts for two hours by default and resets to two hours every time you preview the report. The data cache can hold a maximum of five data sets. If you need more than that, or you use a number of different parameter values when you preview the report, the data cache may need to refresh more often, which slows preview performance.

You cannot access the underlying edit sessions that Report Builder uses to enhance preview performance, and the only properties you can tweak to affect preview behavior are the length of an edit session and the number of data sets in the cache. But actions you take can affect whether Report Builder is able to use the cached data, so it is helpful to have a basic understanding of what affects the edit session’s use of cached data.

TIP: To change the cache expiration timeout or the number of data sets the cache stores, use the Advanced page of the Server Properties dialog box for the Reporting Services instance from Management Studio.

The following changes cause Report Builder to refresh the cache, which causes a slower report preview:

  • Adding, changing, or deleting any data set associated with the report, including changes to its name or any properties.
  • Adding, changing, or deleting any data source, including changes to any properties.
  • Changing the language of the report.
  • Changing any assemblies or custom code in the report.
  • Adding, changing, or deleting any query parameters in the report, or any parameter values.

This list suggests that Report Builder refreshes the cache conservatively, that is, any time there might be an effect on the data used by the report. But changes to the report layout or data formatting do not cause the cached data to refresh.

TIP: Adding or deleting columns in a table or matrix does not refresh the cache. All of the fields in a data set are available to the report, whether you use them or not, so these actions do not affect the data set.

This post is an excerpt from the online courseware for our SSRS 2012 Developer course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

Microsoft SQL Server 2008: Checkpoints

As you learn about Integration Services, you’ll be able to tackle larger and more complex ETL projects, with dozens or even hundreds of tasks moving data among many data stores and performing multiple transformations on it along the way. You may also have individual tasks that take hours to run because of the volume of data they have to move and process, or because of slow resources such as network speeds.

A package might fail after many tasks have completed successfully, or after one of those hours-long tasks completes. If you have to re-run the entire package after fixing the problem, you’ll again have to wait patiently for hours while earlier tasks duplicate their work before you even get to the point where the package failed on the first run. That can be a painful experience, and the local database administrator will likely not be pleased that you’re taking up so many resources for so long, repeatedly.

To get around these kinds of problems, Integration Services packages are restartable using a feature called checkpoints. When you implement checkpoints, the package creates a checkpoint file that tracks the execution of the package. As each task completes, Integration Services writes state information to the file, including the current values of variables that are in scope. If the package completes without errors, Integration Services deletes the checkpoint file. If the package fails, the file contains complete information about which tasks completed and which failed, as well as a reference to where the error occurred. After you fix the error, you execute the package again and the package restarts at the point of failure—not at the beginning—with the same state it had at failure. Checkpoints are an incredibly useful feature, especially in long-running packages.

Checkpoints are not enabled on a package by default. You have to set three package-level properties to configure checkpoints:

  • CheckpointFileName: Specifies the name and location of the checkpoint file. You must set this property, but the name can be any valid Windows filename, with any extension.
  • CheckpointUsage: Determines how the package uses the checkpoint file while the package executes. It has three settings:
    • Always: The package will always use the checkpoint file and will fail if the file does not exist.
    • IfExists: The package will use the checkpoint file if it exists to restart the package at the previous point of failure. Otherwise, execution begins at the first Control Flow task. This is the usual setting for using checkpoints.
    • Never: The package will not use the checkpoint file even if it exists. This means that the package will never restart, and will only execute from the beginning.
  • SaveCheckpoints: Specifies whether the package should write checkpoints to the file.

This combination of properties provides flexibility in configuring checkpoints for the package, and lets you turn checkpoint use on and off before execution without losing the checkpoint configuration.

In order for checkpoints to work, a task failure has to cause the package to fail. Otherwise, the package will continue executing beyond the failure, recording more data in the checkpoint file for subsequent tasks. So you must also set the FailPackageOnFailure property to true for each task where you want to make it possible to restart the package using a checkpoint. If it is set to false for a task and the task fails, Integration Services doesn’t write any data to the checkpoint file. Because the checkpoint data is incomplete, the next time you execute the package it will start from the beginning.
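
The courseware configures these properties in BIDS, but as a rough illustration, here is a minimal, hedged sketch that sets the same checkpoint properties through the managed SSIS runtime API (Microsoft.SqlServer.Dts.Runtime). The package path and file names are hypothetical, and error handling is omitted.

```csharp
using Microsoft.SqlServer.Dts.Runtime;  // SSIS managed runtime API

public static class CheckpointDemo
{
    public static void Run()
    {
        var app = new Application();

        // Hypothetical package path for illustration only.
        Package package = app.LoadPackage(@"C:\ETL\LoadWarehouse.dtsx", null);

        // The three package-level checkpoint properties described above.
        package.CheckpointFileName = @"C:\ETL\LoadWarehouse.chk";
        package.CheckpointUsage = DTSCheckpointUsage.IfExists;
        package.SaveCheckpoints = true;

        // Each task that should act as a restart point must also fail the package.
        foreach (Executable executable in package.Executables)
        {
            TaskHost taskHost = executable as TaskHost;
            if (taskHost != null)
            {
                taskHost.FailPackageOnFailure = true;
            }
        }

        package.Execute();
    }
}
```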

TIP: Checkpoints only record data for Control Flow tasks. This includes a Data Flow task, but it does not save checkpoint data for individual steps in a Data Flow. Therefore a package can restart at a Data Flow task, but not within a Data Flow itself. In other words, you cannot restart a package using a checkpoint to execute only part of a Data Flow, just the entire Data Flow.

At the start of the package, Integration Services checks for the existence of the checkpoint file. If the file exists, Integration Services scans the contents of the checkpoint file to determine the starting point in the package. Integration Services writes to the checkpoint file while the package executes. The contents of the checkpoint file are stored as XML and include the following information:

  • Package ID: A GUID stamped onto the file at the beginning of the execution phase.
  • Execution Results: A log of each task that executes successfully in order of execution. Based on these results, Integration Services knows where to begin executing the package the next time.
  • Variable Values: Integration Services saves the values of package variables in the checkpoint file. When execution begins again, those values are read from the checkpoint file and set back on the package.

This post is an excerpt from the online courseware for our Microsoft SQL Server 2008 Integration Services course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

Transaction Support in Integration Services

A transaction is a core concept of relational database systems. It is one of the major mechanisms through which a database server protects the integrity of data, by making sure that the data remains internally consistent. Within a transaction, if any part fails you can have the entire set of operations within the transaction roll back, so that no changes are persisted to the database. SQL Server has always had rich support for transactions, and Integration Services hooks into that support.

A key concept in relational database transactions is the ACID test. To ensure predictable behavior, every transaction must satisfy the ACID properties, which means:

  • Atomic: A transaction must work as a unit, which is either fully committed or fully abandoned when complete.
  • Consistent: All data must be in a consistent state when the transaction is complete. All data integrity rules must be enforced and all internal storage mechanisms must be correct when the transaction is complete.
  • Isolated: All transactions must be independent of the data operation of other concurrent transactions. Concurrent transactions can only see data before other operations are complete or after other transactions are complete.
  • Durable: After the transaction is complete, the effects are permanent, even in the event of system failure.

Integration Services ensures reliable creation, updating, and insertion of rows through the use of ACID transactions. For example, if an error occurs in a package that uses transactions, the transaction rolls back the data that was previously inserted or updated, thereby keeping database integrity. This eliminates orphaned rows and restores updated data to its previous value to ensure that the data remains consistent. No partial success or failure exists when tasks in a package have transactions enabled. They fail or succeed together.

Tasks can use the parent container’s transaction isolation or create their own. The properties that are required to enable transactions are as follows:

  • TransactionOption: Set this property of a task or container to enable transactions. The options are:
    • Required: The task or container enrolls in the transaction of the parent container if one exists; otherwise it creates a new transaction for its own use.
    • Supported: The task uses a parent’s transaction, if one is available. This is the default setting.
    • NotSupported: The task does not support and will not use a transaction, even if the parent is using one.
  • IsolationLevel: This property determines the safety level, using the same scheme you can use in a SQL Server stored procedure. The options are:
    • Serializable: The most restrictive isolation level of all. It ensures that if a query is reissued inside the same transaction, existing rows won’t look any different and new rows won’t suddenly appear. It employs a range of locks that prevents edits or insertions until the transaction is completed.
    • Read Committed: Ensures that shared locks are issued when data is being read and prevents “dirty reads.” A dirty read consists of data that is in the process of being edited, but has not been committed or rolled back. However, other transactions can still change the data before the end of your transaction, resulting in nonrepeatable reads or phantom rows.
    • Read Uncommitted: The least restrictive isolation level, the opposite of Read Committed, which allows “dirty reads” of the data. It ignores locks that other operations may have issued and does not create any locks of its own. These reads are called “dirty” because the underlying data may change within another transaction, and this query would not be aware of it.
    • Snapshot: Reads data as it was when the transaction started, ignoring any changes since then. As a result, it doesn’t represent the current state of the data, but it represents a consistent state of the database as of the beginning of the transaction.
    • Repeatable Read: Prevents others from updating data until a transaction is completed, but does not prevent others from inserting new rows. The inserted rows are known as phantom rows, because they are not visible to a transaction that was started prior to their insertion. This is the minimum level of isolation required to prevent lost updates, which occur when two separate transactions select a row and then update it based on the selected data. The second update would be lost since the criteria for update would no longer match.

Integration Services supports two types of transactions. The first is Distributed Transaction Coordinator (DTC) transactions, which let you include multiple resources in the transaction. For example, you might have a single transaction that involves data in a SQL Server database, an Oracle database, and an Access database. This type of transaction can span connections, tasks, and packages. The downside is that it requires the DTC service to be running and tends to be very slow.

The other type of transaction is a Native transaction, which uses SQL Server’s built-in support for transactions within its own databases. This uses a single connection to a database and T-SQL commands to manage the transaction.

Integration Services supports a great deal of flexibility with transactions. It supports a variety of scenarios, such as a single transaction within a package, multiple independent transactions in a single package, transactions that span packages, and others. You’ll be hard pressed to find a scenario that you can’t implement with a bit of careful thought using Integration Services transactions.
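
As with checkpoints, these properties are normally set in the BIDS Properties window, but here is a minimal, hedged sketch of setting them through the managed SSIS runtime API. The package path is hypothetical and error handling is omitted.

```csharp
using System.Data;                       // IsolationLevel
using Microsoft.SqlServer.Dts.Runtime;   // SSIS managed runtime API

public static class TransactionDemo
{
    public static void Run()
    {
        var app = new Application();

        // Hypothetical package path for illustration only.
        Package package = app.LoadPackage(@"C:\ETL\LoadWarehouse.dtsx", null);

        // The package starts a transaction; child containers can enlist in it.
        package.TransactionOption = DTSTransactionOption.Required;
        package.IsolationLevel = IsolationLevel.ReadCommitted;

        // Each task or container uses the parent's transaction if one exists.
        foreach (Executable executable in package.Executables)
        {
            DtsContainer container = executable as DtsContainer;
            if (container != null)
            {
                container.TransactionOption = DTSTransactionOption.Supported;
            }
        }

        package.Execute();
    }
}
```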

This post is an excerpt from the online courseware for our Microsoft SQL Server 2008 Integration Services course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

SQL Server 2008: The BIDS Interface and Components

SQL Server Integration Services is not the only member of the Microsoft Business Intelligence suite to use BIDS for development. Analysis Services and Reporting Services developers also use BIDS to create their work. Using project templates supplied by Microsoft, Analysis Services developers build cubes, data mining models, and other analytical objects, while report developers build reports and report models in BIDS.

BIDS supports any or all of these project types within the same solution, as shown in Figure A.


Figure A. The BIDS Solution Explorer window showing multiple SQL Business Intelligence project types.

TIP: You can create as many Integration Services packages as you want in a single solution. Modularizing a complex ETL workflow into multiple packages makes it easier to develop and debug the individual components. When all the packages are working correctly, you can group them to execute all at once using an Execute Package task.

Creating a new Integration Services project—rather than just opening a package as a standalone file—exposes some additional sections of the interface, such as access to the Data Sources and Data Source Views folders in Solution Explorer, as you can see in the Integration Services Project in Figure A.

Items that you add to a Project are visible in the Solution Explorer window and are accessible to any part of the project.

  • Data Sources contain a connection string, and are available in Solution Explorer in the Data Sources folder. They are created and maintained at the project level, outside of any package definition, but can be referenced by a package’s Connection Manager. Connection Managers depend on Data Sources in Integration Services and can be used to share connection definitions among multiple packages in the same solution.

TIP: When you use Data Sources, you can change the data connection strings for multiple packages at once. Adding Connection Managers directly to a package creates local data links, which you must edit per package.

  • Data Source Views are based on Data Sources, but support further filtering of the database schema. By creating package objects based on Data Source Views, you streamline working with lists of database objects.
  • The Miscellaneous folder can contain any support files you need, such as documentation, flat files that contain data for import, etc.

You access Integration Services functions in BIDS or Visual Studio through the SSIS menu, the tabbed designer windows, and their related toolboxes:

  • SSIS Menu: Choose options for setting up package logging, configurations, variables, and other options, as shown in Figure B.


Figure B. The SSIS menu.

  • Tabbed Designer: Lay out the logic of the package by dragging tasks from the toolbox to the designer to control the overall flow of processing steps. For example, you could download files via FTP before importing them into a database table.
  • Toolboxes: The toolboxes associated with some of the designer tabs let you drag and drop components that perform the many potential actions of a complete Integration Services solution. If you are familiar with Web or Windows forms development in .NET, you’ll be right at home with adding components to a package.

In this course, you will learn about the BIDS tools that enable you to build powerful ETL solutions in Integration Services. The major components of the BIDS interface include:

  • Control Flow designer
  • Data Flow designer
  • Connection Managers
  • Event Handler designer
  • Package Explorer
  • Progress pane



This post is an excerpt from the online courseware for our Microsoft SQL Server 2008 Integration Services course written by expert Don Kiely. 

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.