Author Archives: Marty S.

What’s new in MVC 5.2


Microsoft’s very successful ASP.NET MVC framework, built on the model-view-controller pattern, has been its flagship framework for developing next-generation Web applications, and Microsoft continues to improve it: version 5.2 was released just over two months ago. If you’re still hanging on to MVC 4, you’re missing out on many new and exciting features, and Microsoft has made the upgrade path easier than ever.

So what’s so exciting about MVC 5? Let me start by hitting you with some of the big improvements in this latest release. If you want even more information or want to see demonstrations of these new features, please check out our MVC 5.2 courses with expert Eric Greene.

One ASP.NET

In MVC 5, Microsoft introduced a new project type called One ASP.NET. Its goal is to save Web developers time by reducing the clutter of single-focused Web templates that had been steadily accumulating in Visual Studio. One ASP.NET creates a more “a la carte” model for creating applications: the developer starts with core functionality, then adds components as various features are required. This allows developers to combine ASP.NET Web Forms, MVC, Web API, and other project templates into a single project rather than being restricted to only one of them.

Bootstrap

From the brilliant minds of the Twitter software engineers came a CSS and JavaScript framework that has quickly become one of the most popular tools for front-end development. Bootstrap provides user interface tools and controls that allow developers to build rich Internet applications that auto-respond to changing screen sizes and devices. It takes away the drudgery of constantly tinkering with the CSS and JavaScript code necessary to get your site to perform professionally for all of your users.

Microsoft now includes Bootstrap templates in MVC 5 so you can take advantage of all its features right out of the box. In fact, Bootstrap is now the default HTML/CSS/JavaScript framework bundled with ASP.NET MVC. Bootstrap is managed via NuGet, which means it can be upgraded automatically as the technology advances. You can discover more about Bootstrap by taking a look at our Bootstrap 3.1 courses with expert Adam Barney.

ASP.NET Identity

Before ASP.NET MVC 5, Microsoft had promoted its Membership Provider to handle security, authentication, and roles for your Web applications. But with ASP.NET Identity, they completely rebuilt their security solution to include a whole new range of features. It still contains all the core functionality for authentication and authorization, but it also extends to new forms like two-factor authentication (2FA) and external authentication. With 2FA, you can require multiple forms of authentication, such as the Google Authenticator app or SMS text messaging. External authentication allows you to work with many existing third-party providers like Google, Facebook, and Twitter. Your users can access your site using credentials from these and other providers, freeing you from the responsibility of managing credentials and not forcing your users to memorize yet another password.
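To give a sense of what this looks like in code, here is a sketch of wiring up Google sign-in in an MVC 5 project’s OWIN startup class (the namespace, client ID, and client secret are placeholders):

```csharp
using Microsoft.Owin;
using Microsoft.Owin.Security.Google;
using Owin;

[assembly: OwinStartup(typeof(MyApp.Startup))]

namespace MyApp
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Cookie middleware, two-factor providers, and the like are
            // configured here as well (see the project template's
            // Startup.Auth.cs for the full set).

            // Register Google as an external authentication provider.
            app.UseGoogleAuthentication(new GoogleOAuth2AuthenticationOptions
            {
                ClientId = "your-client-id",        // placeholder
                ClientSecret = "your-client-secret" // placeholder
            });
        }
    }
}
```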

New Filters

Authorization filters have been around for quite a while in ASP.NET and have been a staple for most developers who need to set up security for their Web applications. Authentication filters, on the other hand, are new to MVC 5. These new filters allow programming logic to run before the authorization filter, giving developers the ability to better identify and control users entering their site. For example, developers can now assign a new authentication principal (the object that represents the user’s identity and roles) to a user logging in prior to the authorization filter, giving them better control at the individual action/controller level. Think of the authorization filter as providing a more global security model, one that covers the site as a whole, while the authentication filter provides a more specific security model that can be applied at a more localized level.
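Here is a minimal sketch of such a filter; the principal it assigns is purely illustrative:

```csharp
using System.Security.Principal;
using System.Web.Mvc;
using System.Web.Mvc.Filters;

// Runs before any authorization filters on the actions it decorates.
public class CustomAuthenticationAttribute : FilterAttribute, IAuthenticationFilter
{
    public void OnAuthentication(AuthenticationContext filterContext)
    {
        // Inspect the request and, if appropriate, swap in a new principal.
        filterContext.Principal = new GenericPrincipal(
            new GenericIdentity("demo-user"),   // illustrative identity
            new[] { "Visitor" });               // illustrative role
    }

    public void OnAuthenticationChallenge(AuthenticationChallengeContext filterContext)
    {
        // Runs after authorization; modify the result here to challenge
        // the user (e.g., redirect to a login page) when access is denied.
    }
}
```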

Another new filter enhancement is filter overrides. Filter overrides allow you to define filters that apply to most of your application, either globally or at the controller level, and then override or turn off those filters at the controller or action level.
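Here is a sketch of the idea: a controller-wide authorization rule that one action opts out of with the new [OverrideAuthorization] attribute:

```csharp
using System.Web.Mvc;

[Authorize(Roles = "Admin")]            // applies to every action below...
public class ReportsController : Controller
{
    public ActionResult Monthly()       // ...so this requires the Admin role
    {
        return View();
    }

    [OverrideAuthorization]             // ...but this action opts out
    [Authorize]                         // and allows any authenticated user
    public ActionResult Summary()
    {
        return View();
    }
}
```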

Upgrading from MVC 4

Microsoft has made upgrading easy and painless for the developer. In a nutshell, most applications will simply need to update their NuGet packages, plus make a couple of web.config changes, and they will be off and running. The NuGet services manage all the individual components, or packages, that your Web application utilizes, like Razor and Bootstrap, and make sure that they are all on the latest releases relative to your version of MVC. Keep in mind that in addition to the move to MVC 5, there are minor releases coming out as well. At the time of this writing, there have been 5.1 and 5.2 releases, but by the time you read this there may be a 5.3 available and beyond. Regardless, migrations at this level are equally straightforward.
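For most projects, the package update itself boils down to a single Package Manager Console command:

```powershell
# Pull ASP.NET MVC and its dependencies up to the latest 5.x release
Update-Package Microsoft.AspNet.Mvc
```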

Keep in mind that in many cases the migration forward is a one-way proposition. With each upgrade, your application is exposed to more and more features and functionality, which means you can’t go back once you start using it. But hey, why would you go back, right?

Finally, it’s not just ASP.NET MVC that is gaining new features—ASP.NET Web API, Razor, SignalR, Entity Framework, NuGet and many others are also improving. LearnNowOnline can help you keep up with the latest releases so you can be the best Web developer you can be. Check out our complete course list.

 

About the Author


Martin Schaeferle is the Vice President of Technology for LearnNowOnline. Martin joined the company in 1994 and started teaching IT professionals nationwide to develop applications using Visual Studio and Microsoft SQL Server. He has been a featured speaker at various conferences including Microsoft Tech-Ed, DevConnections and the Microsoft NCD Channel Summit. Today, he is responsible for all product and software development as well as managing the company’s IT infrastructure. Martin enjoys staying on the cutting edge of technology and guiding the company to produce the best learning content with the best user experience in the industry. In his spare time, Martin enjoys golf, fishing, and being with his wife and three teenage children.

A New Angle on Web Development

There is a relatively new open source JavaScript framework that is currently taking the web development community by storm. I’m speaking of AngularJS, which Black Duck’s Open Hub Web site (formerly Ohloh) currently lists as one of the most active open source projects. Although it launched in early 2010, it is really in the last couple of years that it has grown to rock-star status and become a go-to framework for many web developers.

So what’s so powerful about AngularJS? The power is in how easily it binds data to the objects on a Web page. It is essentially an MVC framework that can efficiently create dynamic views in a Web browser. AngularJS is built to perform all those complex, low-level DOM manipulation commands so you don’t have to. But doesn’t jQuery do that? Sure, but AngularJS takes jQuery to the next level, and it can be used in tandem with jQuery or as a complete replacement. AngularJS also provides built-in AJAX support and, unlike jQuery, built-in support for RESTful services.

AngularJS also has the unfair advantage of being heavily supported by Google with many Google developers actively working to improve the framework. That has led to a huge community that is actively engaged with the open source project on GitHub. So with or without Google, it is destined to stay on top as one of the best frameworks to use—so use it with confidence.

Ok, so what does AngularJS look like? Let’s look at a simple example of data binding using AngularJS. We will build a Web page that converts Fahrenheit to Celsius. The first step is to reference the Angular JavaScript file in the <head> tag, along these lines (the exact version and CDN path may differ):
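```html
<head>
  <!-- Any Angular 1.x build works for this example; the CDN path is illustrative -->
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.16/angular.min.js"></script>
</head>
```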

Next, add HTML along these lines to the body:
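```html
<div ng-app ng-init="fTemp = 32">
  Fahrenheit: <input type="number" ng-model="fTemp" />
  <p>Celsius: {{ (fTemp - 32) * 5 / 9 }}</p>
</div>
```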

What’s going on here? In the first DIV tag we provide scope for the block of HTML that will leverage AngularJS by including the “ng-app” attribute. AngularJS uses the ng namespace because, spoken aloud, “ng” sounds like “angular.” Ok, moving on.

We then see the “ng-init” attribute, which initializes a variable named “fTemp” and sets it to 32. The ng-model attribute in the <input> tag binds the textbox value to fTemp, and that variable is then used in the calculation of Celsius. Next you come across the double curly braces, “{{ }}”, which tell AngularJS to evaluate the expression between them. In this case, the expression calculates the equivalent Celsius value based on the current Fahrenheit setting.

When the page renders, you see the Fahrenheit textbox with its initial value of 32 next to a computed Celsius value of 0.

And if I change the value of Fahrenheit, the Celsius value instantly changes as well.
Although this is a simple AngularJS example, I hope you can see the power behind it. It wouldn’t take much additional code to bind this Web page to a RESTful service that returns the current temperature in your area. And it’s not just weather data – you could hook up dynamic pages to your company’s data and manipulate it easily on the fly without getting buried in all the DOM-related calls or dealing with all the nuances of different browsers.

Check out our AngularJS courses for yourself and see John Culviner break down AngularJS so you can leverage the power in your Web sites.


Hadoop…Pigs, Hives, and Zookeepers, Oh My!


If there is one aspect of Hadoop that I find particularly entertaining, it is the naming of the various tools that surround Hadoop. In my 7/3 post, I introduced Hadoop, the reasons for its growing popularity, and the core framework features. In this post, I will introduce you to the many different tools, and their clever names, that augment Hadoop and make it more powerful. And yes, the names in the title of this blog are actual tools.

Pig
The power behind Pig is that it provides developers with a simple scripting language that performs rather complex MapReduce queries. Originally developed by a team at Yahoo, Pig is named for its ability to devour any amount and any kind of data. Its scripting language (yes, you guessed it: Pig Latin) provides the developer with a set of high-level commands for all kinds of data manipulation, like joins, filters, and sorts.

Unlike SQL, Pig is a more procedural, script-oriented query language; SQL, by design, is more declarative. The benefit of a procedural design is that you have more control over the processing of your data. For example, you can inject user code at any point within the process to control the flow.
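To give you a feel for the language, here is a small, illustrative Pig Latin script (the file path and schema are hypothetical) that counts successful page hits in a web log:

```pig
-- Load tab-delimited web log records, keep successful requests,
-- and count hits per page.
logs    = LOAD '/data/weblogs' USING PigStorage('\t')
          AS (ip:chararray, page:chararray, status:int);
ok      = FILTER logs BY status == 200;
by_page = GROUP ok BY page;
hits    = FOREACH by_page GENERATE group AS page, COUNT(ok) AS total;
sorted  = ORDER hits BY total DESC;
DUMP sorted;
```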

Hive
To complement Pig, Hive provides developers a declarative query language similar to SQL. For many developers who are familiar with building SQL statements for relational databases like SQL Server and Oracle, Hive will be significantly easier to master. Originally developed by a team at Facebook, it has quickly become one of the most popular methods of retrieving data from Hadoop.

Hive uses a SQL-like implementation called HiveQL, or HQL. Although it doesn’t strictly conform to the SQL-92 standard, it does provide many of the same commands. The key language limitation relative to the standard is the lack of transactional support. Hive supports both ODBC and JDBC, so developers can work from many different programming languages like Java, C#, PHP, Python, and Ruby.
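For comparison, here is what a similar query might look like in HiveQL (the table and column names are hypothetical):

```sql
-- Top ten pages by successful hits
SELECT page, COUNT(*) AS total_hits
FROM weblogs
WHERE status = 200
GROUP BY page
ORDER BY total_hits DESC
LIMIT 10;
```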

Oozie
To tie these query languages together for complex tasks requires an advanced workflow engine. Enter Oozie—a workflow scheduler for Hadoop that allows multiple queries from multiple query languages to be assembled into a convenient automated step-by-step process. With Oozie, you have total control over the flow to perform branching, decision-making, joining, and more. It can be configured to run at specific times or intervals and reports back logging and status information to the system. Oozie workflows can also accept user input parameters to add additional control. This allows developers to tweak the flow based on changing states or conditions of the system.

Sqoop
When deploying a Hadoop solution, one of the first steps is populating the system with data. Although data can come from many different sources, the most likely is a relational database like Oracle, MySQL, or SQL Server. For moving data to and from relational databases, Apache’s Sqoop is a great tool to use. The name is derived from combining “SQL” and “Hadoop,” signifying the connection between SQL and Hadoop data.

Part of Sqoop’s power comes from the built-in intelligence that optimizes the transfer of data on both the SQL side and the Hadoop side. It can query the SQL table’s schema to determine the structure of the incoming data, translate it into a set of intelligent data classes, and configure MapReduce to import the data efficiently into a Hadoop data store like HBase. Sqoop also gives the developer more granular control over the transfer by allowing imports of subsets of the data; for example, Sqoop can be told to import only specific columns within a table instead of the whole table.
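A typical import, including the column-level control just described, might look like this (the connection details and table name are hypothetical):

```bash
# Import two columns of a customers table from MySQL into HDFS;
# Sqoop reads the table's schema and runs the transfer as a MapReduce job.
sqoop import \
  --connect jdbc:mysql://dbserver/sales \
  --username etl_user -P \
  --table customers \
  --columns "id,name" \
  --target-dir /data/customers
```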

Sqoop was even chosen by Microsoft as their preferred tool for moving SQL Server data into Hadoop.

Flume
Another popular data source for Hadoop, outside of relational data, is log or streaming data. Web sites, in particular, have a propensity to generate massive amounts of log data, and more and more companies are discovering how valuable that data is for understanding their audience and its buying habits. So another challenge for the Hadoop community to solve was how to move log-based data into Hadoop. Apache tackled that challenge and released Flume (yes, think of a log flume).

The flume metaphor symbolizes the fact that this tool deals with streaming data, like water down a rushing river. Unlike Sqoop, which typically moves static data, Flume must manage constant changes in data flow and adjust to handle very busy times; Web data may arrive at an extremely high rate during a promotion, for example. Flume is designed to scale itself to handle these changes in rate. It can also receive data from multiple streaming sources, even beyond Web logs, and does so with guaranteed delivery.
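Flume agents are defined in a simple properties file that wires a source, a channel, and a sink together. Here is an illustrative single-agent configuration (names and paths are hypothetical) that tails a web server log into HDFS; production setups typically use a durable file channel, rather than the memory channel shown here, when guaranteed delivery matters:

```properties
# agent1: tail an Apache access log and land the events in HDFS
agent1.sources  = weblog
agent1.channels = mem
agent1.sinks    = hdfs-out

agent1.sources.weblog.type     = exec
agent1.sources.weblog.command  = tail -F /var/log/httpd/access_log
agent1.sources.weblog.channels = mem

agent1.channels.mem.type = memory

agent1.sinks.hdfs-out.type      = hdfs
agent1.sinks.hdfs-out.hdfs.path = /data/weblogs
agent1.sinks.hdfs-out.channel   = mem
```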

Zookeeper
There are many more tools I could cover, but I’m going to wrap it up with one of my favorite tool names—Zookeeper. This tool comes into play when dealing with very large Hadoop installations. At some point in the growth of the system, as more and more computers are added to the cluster, there is an increasing need to manage and optimize the various nodes involved.

Zookeeper collects information about all nodes and organizes them in a hierarchy similar to how your operating system will create a hierarchy of all the files on your hard drive to make them easier to manage. The Zookeeper service is an in-memory service making it extremely fast, although it is limited by available RAM which may affect its scalability. It replicates itself across many of the nodes in the Hadoop system so that it maintains high availability and does not create a weak-link situation.

Zookeeper becomes the main hub that client machines connect to in order to obtain health information about the system as a whole. It is constantly monitoring all the nodes and logging events as they happen. With Zookeeper’s organized map of the system, it makes what could be a cumbersome task of checking on and maintaining each of the nodes individually a more enjoyable and manageable experience.

Summary
I hope this gives you a taste of the many support tools that are available for Hadoop, as well as illustrates the community’s commitment to this project. As technology goes, Hadoop is in the very early stages of its lifespan, and its components and tools are constantly changing. For more information about these and other tools, be sure to check out our new Hadoop course.


The Power of Hadoop


Even by high-tech standards, Hadoop went from obscurity to fame in a miraculously short amount of time. It had to… the pressures driving the development of this technology were too great. If you are not familiar with Hadoop, let’s start by looking at the void it is trying to fill.

Companies, up until recently—say the last five to ten years or so—did not have the massive amounts of data to manage as they do today. Most companies only had to manage the data relating to running their business and managing their customers. Even those with millions of customers didn’t have trouble storing data using your everyday relational database like Microsoft SQL Server or Oracle.

But today, companies are realizing that with the growth of the Internet and of self-service, software-as-a-service (SaaS) Web sites, there are now hundreds of millions of potential customers, all voluntarily providing massive amounts of valuable business intelligence. Think of storing something as simple as a Web log that records every click of every user on your site. How does a company store and manipulate this data when it is generating potentially trillions of rows of data every year?

Generally speaking, the essence of the problem Hadoop is attempting to solve is that data is coming in faster than hard drive capacities are growing. Today we have 4 TB drives available, which can be assembled in SAN or NAS devices to easily reach 40 TB volumes, or maybe even 400 TB volumes. But what if you needed a 4,000 TB, or 4 petabyte (PB), volume? The costs quickly become too high for most companies to absorb…until now. Enter Hadoop.

Hadoop Architecture
One of the keys to Hadoop’s success is that it operates on everyday, commodity hardware. A typical company has a backroom with hardware that has long since passed its prime, and those old computers can be packed full of relatively inexpensive hard drives (the total capacity doesn’t need to match from computer to computer) and used within a Hadoop cluster. Need to expand capacity? Add more computers or hard drives. Hadoop can pool all the hard drives into one giant volume for storing all types of data, from web logs to large video files. It is not uncommon for Hadoop to be used to store rows of data that are over 1 GB per row!

The file system that Hadoop uses is called the Hadoop Distributed File System, or HDFS. It is a highly fault-tolerant file system that focuses on high availability and fast reads, and it is best used for data that is written once and read often. When writing data, HDFS leverages all the hard drives in the system, because bottlenecks stem from reading and writing against a single drive. The more hard drives used simultaneously during reads and writes, the faster the system operates as a whole.

HDFS stores data in small file blocks which are spread across all the hard drives available within a cluster. The block size is configurable and optimized to the data being stored. HDFS also replicates blocks over multiple drives, across multiple computers, and even across multiple network subnets. This allows hard drives or computers to fail (and they will) without disrupting the system. It also allows Hadoop to be strategic about which replicated block it reads: Hadoop analyzes which computers and hard drives are currently being utilized, along with network bandwidth, and picks the copy it can retrieve fastest. The result is a system that is very quick to respond to requests.

MapReduce
Despite the relatively odd name, MapReduce is the cornerstone of Hadoop’s data retrieval system. It is an abstracted programming layer on top of HDFS, responsible for simplifying how data is read back to the user. Its purpose is similar to SQL’s in that it lets programmers focus on building intelligent queries without getting involved in the underlying plumbing that implements or optimizes them. The “Map” task sorts and filters the requested information and emits it as an intermediate, pseudo result set. The “Reduce” task then summarizes that data, such as counting and summing certain columns.

Both tasks are analyzed by the Hadoop engine and broken into many pieces (a divide-and-conquer model), which are all processed in parallel by individual workers. The result is the ability to process petabytes of data in a matter of hours.
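The canonical illustration of the model is a word count job. Here is a minimal sketch using Hadoop’s Java MapReduce API (class names are illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map: emit (word, 1) for every word in the input split.
class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            word.set(token);
            context.write(word, ONE);
        }
    }
}

// Reduce: sum the counts emitted for each word.
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```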

MapReduce is a programming model originally developed at Google; Hadoop’s version is an open source implementation, and the model has since been ported to many programming languages. You can find out more about MapReduce by visiting http://mapreduce.org.

In my next post, I’ll take a look at some of the other popular components around Hadoop, including advanced analytical tools like Hive and Pig. In the meantime, if you’d like to learn more about Hadoop, check out our new course.

Apache Hadoop, Hadoop, Apache, the Apache feather logo, and the Apache Hadoop project logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and other countries.


8 Key Players in Your SharePoint Rollout, Part 2


In my 5/12/2014 post I took a look at one of the main reasons many SharePoint installations fail—lack of user buy-in. One of the best ways to get buy-in is through SharePoint education. Then in my 5/29/2014 post, I began to look at some of the primary roles within a company that are involved in planning and implementing SharePoint. I covered how targeted and structured training within these roles can create an environment where communication can flow freely, resulting in SharePoint deployments with a high rate of success.

In this post, let’s take a look at the remaining roles within a typical SharePoint deployment, and why they also need a solid understanding of SharePoint in order to obtain buy-in and thereby ensure a high level of success.

Developers

Developers are given the task of implementing the business logic that controls the document flow within SharePoint. This should be the most obvious place to spend training dollars, but surprisingly many companies don’t believe it necessary. They feel SharePoint development is no different from any other Web development, so why bother? Unbeknownst to them, they have now greatly increased their chances of stepping on one of the biggest landmines in a SharePoint deployment: code from an uneducated developer. SharePoint provides a very powerful framework that gives developers a huge amount of leeway in how they can extend it. Not taking the time to understand the pros and cons of all the options can jeopardize the security, stability, and maintainability of a SharePoint installation.

SharePoint can also suffer from poor coding practices. There are many development tools and concepts that can be leveraged to extend SharePoint, from C# to MVC, from JavaScript to Entity Framework. Each area can introduce a weak spot if developers are not up to speed on the latest coding practices or versions. Companies that want to maximize their chance of a successful deployment should make sure that their development teams have the right knowledge, so they can make the best decisions and build components and workflows that are rock solid.

Designers

Depending on the size of the company, the design of the SharePoint site might be controlled by a team other than developers. Designers are responsible for the look and feel of the site and likely do not have a strong programming background. They may control areas like images, color, fonts, logos, layout, and branding that are implemented throughout the site.

Since part of the success of any SharePoint deployment is getting your employees to use it, attention to design and the user experience cannot be overlooked. Your design team needs to become familiar with SharePoint and understand how people will use it, so they can then design a solution that is easy to use and increases productivity. Any solution that creates a burden on performing even the simplest of tasks will not be adopted.

Administrators

Another key role in the deployment of any SharePoint installation is the administrator. This person is the infrastructure guru who is ultimately responsible for allocating internal resources and installing all the services necessary to get SharePoint up and running. The administrator will, of course, be guided by the detailed plans laid out by the infrastructure architect. Clearly this is a role that needs a firm understanding of SharePoint. Bad decisions by the administrator could lead to security breaches, loss of documents, degraded performance, and/or site outages. Each of these could break the trust of its users, leading to a slow adoption curve or even no adoption at all.

Site Owners

Once SharePoint is installed and operational, the task of configuring SharePoint falls to the site owner. In many smaller installations, the site owners and champions will be the same person. Since the champion role requires a much deeper understanding of SharePoint, and therefore much more training, many larger companies may elect to limit the number of champions to what they need, and instead have additional site owners.

To make SharePoint more manageable, companies will break up SharePoint in many ways (by department, region, floor, rolling dice, etc.) since it is impractical for one person to manage it at the global level. By dicing the site up into pieces, individual site owners can customize the look and feel, as well as security, to meet the direct needs of that group.

Site owners are like mini-administrators. They have full control over their little piece of SharePoint and are responsible for creating and managing their site or sites. This may include the type of templates and document libraries used, as well as creating users and assigning access rights. There are still needs that would require going to the company administrator…for example, if their site runs low on storage space.

Even at this level, education and training is very important because these site owners need to understand how to do the tasks necessary so their users have a positive and engaging experience. This is the last group to influence SharePoint before it goes live.

Power Users and Business Users

Now that your SharePoint is live, the education needs don’t stop. You’ll likely have hundreds or even thousands of employees who can now take advantage of the power of SharePoint. But will they use it if they don’t understand it? Often users tend to get intimidated by SharePoint. They have been doing things one way for so long that it is difficult to trust that a new way would be better. The quickest way to gain trust and increase engagement with SharePoint is through training—successful SharePoint deployments always include training for their general users. That way they can feel comfortable working in this new environment right off the bat, and can more easily trust that this new way of doing things will be a better and more productive way than before.

In Summary

Creating a successful SharePoint deployment requires a conscious buy-in to the solution that starts at the top of the organization chart and runs all the way down. Any member of the team who doesn’t understand or doesn’t trust the solution will be a chink in the armor, and too many chinks will cause the solution to stall, falter, or fail. To get everyone’s buy-in, the best prescription is education. By training the top, you can be sure that the design and necessary resources will meet the needs of the business. By training architects, developers, and administrators, you can be assured that the installation is rock solid and performs well. By training at the user level, you can be confident that the solution will be adopted and the company will reap the benefits.

Finally, I want to give a shout-out to one of our indispensable SharePoint gurus and instructors, Philip Wheat, who assisted me in putting together some of the content for this blog series.


8 Key Players in Your SharePoint Rollout

In my previous blog article, Is Your SharePoint Rollout Doomed to Fail?, I took a look at one of the main reasons many SharePoint installations struggle—the lack of user buy-in. Without complete buy-in on your SharePoint solution from everyone from the CEO on down, you might as well put your IT budget on Black-13, spin the roulette wheel and hope for the best.

Assuming you’re not the gambling type, just how do you tackle the training of your company in SharePoint? Who are the key players that require their own unique educational approach? In this post, we will begin to take a look at a typical SharePoint rollout, the roles involved, and what each role should know.

CEO/Executives

Unfortunately, many companies fail to include one of the key roles in any SharePoint rollout: upper management. Don’t get me wrong, I’m not suggesting they are purposely kept in the dark; it is more about the level of engagement. Your CEO sets the tone for the company, and everyone else tends to follow his or her lead. If the CEO doesn’t completely understand the value, or the ROI, of the SharePoint solution, they will more than likely take a wait-and-see attitude toward the project…especially if the solution is sold and managed by the IT department. This attitude will trickle down, and soon you will find yourself with a SharePoint site that no one uses or even cares to use. Why bother? No one has any “skin in the game,” as they say.

Proper education of your executive team is important so they understand how their company will benefit by implementing SharePoint. Once they are on board, they will insist that each department be on board as well, and so on. So, does the CEO need to become a SharePoint developer? Of course not. But they need to see the big picture and understand the challenges that your SharePoint project will overcome.

Ok, you have upper management’s buy-in. Who’s next?

Architects

There can be up to four architects required for a SharePoint implementation, depending on the size of your company. Smaller organizations might consolidate the architect roles into just one or two.

The two most important architects are the:

  • Business architect – This person is focused on the business needs of the company and the business problems that the SharePoint implementation is trying to solve.
  • Technical architect – This person is focused on the technology requirements. The technical architect needs to work with the business architect to make sure the organization has the network infrastructure and resources necessary to support the SharePoint implementation.

The other two architects who should be involved are the process architect and the infrastructure architect. Once the business and technical architects iron out a plan, the process and infrastructure architects start working on how to implement it using the available resources.

  • Process architect – This person develops the business logic to support the plan and may even get into where the business logic resides, such as workflows, custom applications, templates, etc.
  • Infrastructure architect – This person works on the network and server requirements. Do we need more servers? Can we provide adequate security? How can we ensure high availability?

Do these architects need to understand SharePoint? Absolutely! But here’s the key: most companies don’t go far enough in getting the people in these roles sufficiently up to speed on all that SharePoint has to offer. Remember, SharePoint is a framework, which means it comes with a nearly infinite number of uses and many ways to implement it. A common mistake is not thoroughly investigating the options available, and therefore going down a road that is misinterpreted as the only road available. It is common for a SharePoint implementation to be crippled right out of the chute due to poor architecture.

The cure…training, of course. Architects who have gone through a detailed, structured training program are more likely to work well together and come up with solutions that lead to a successful implementation of SharePoint. And in the end, this will draw out the architects’ buy-in, which you needed all along.

Champions

Different companies label this role differently, but for the sake of this blog I’m going to call it the champion. Most companies do not have, nor do they really need, an abundance of architects. But what you’ll find is that once the solution is deployed, everyone wants access to the architects. SharePoint is not a trivial solution, and once things roll out, there are few people who really understand it at a high level. And unless you have a cloning device tucked away in your back pocket, you’re going to need more people to support the solution.

Champions are the ones that understand SharePoint at a high level and are usually trusted with administrator rights on the servers. They are then available to assist departments with site creation, security, major functional changes in business logic, and the like. It is also common for companies to assign the role of SharePoint Site Owner to these people as well, depending on the overall size of the company. These roles have a lot in common.

Clearly this role also requires getting up to speed in SharePoint. Just like with the architects, the people in this role will need a detailed understanding of SharePoint so they can effectively build and configure sites that are aligned with the goals of the company. With champions on board, your chances of a successful SharePoint rollout are greatly increased.

Next Steps

These roles provide you with a solid foundation to begin building your SharePoint implementation. Education plays a critical role here because it allows efficient communication to occur between all your major departments. When everyone understands the power and wide range of features that SharePoint brings to the table, great ideas and great solutions have a chance to come forward.

In my next blog, I will dig into five more roles that are critical to a successful SharePoint rollout: Administrator, Developer, Designer, Business/Power User, and SharePoint Site Owner. Stay tuned…


Is Your SharePoint Rollout Doomed to Fail?

Why do so many SharePoint rollouts wind up dying on the vine? It certainly doesn’t come from a lack of good intentions. Everyone wants to be more efficient, and why not? Any software that promises you can accomplish more in less time will always get our business.

Don’t get me wrong, SharePoint is a great and powerful framework, and many companies have experienced tremendous success with it. But with all its successes, why do more than 60% of SharePoint implementations wind up stalled, struggling, or failing, according to a 2013 AIIM report? The report goes on to say that the biggest ongoing issue with SharePoint is user adoption.


User adoption means that everyone who will be utilizing SharePoint “buys in” to the project. They understand their role and the benefits of the software, and they are committed to making the project a success.

Getting user buy-in

So how do you get user buy-in for your SharePoint project? There are several things you can do to encourage buy-in and usage, but one sure-fire way is to begin SharePoint education early on for everyone…and I mean everyone.

Many companies go into a SharePoint rollout with buy-in only from the IT department. Everyone else, from the CEO, to the sales team, to marketing, is simply along for the ride. When the software is rolled out, not only will the team not know how to use it, but it may not even meet their needs if they were never interviewed and included in the planning.

Education is your best deterrent of this situation. Start at the top and get the leadership team on board. Make sure they understand how SharePoint can be used to streamline processes and workflows, ultimately saving time and money for the company. Once they see the value SharePoint can bring to their company, your leaders will require buy-in from the heads of all departments.

Going on to educate and include the entire team in your SharePoint project, you will earn their buy-in as well. It will also lead to better communication between the IT department and department leaders, ensuring that your SharePoint solution is designed and built with each department’s goals in mind.

IT…the biggest point of failure?

One of the major liabilities in the success of a SharePoint rollout is the developer team. They will likely buy in to the project, but if they don’t thoroughly understand SharePoint, the framework, and all its components, the result can be poor code, system instability, poor maintainability, and sites that cannot be updated when Microsoft launches its next release.

Case in point: when Microsoft released SharePoint 2013, they introduced the SharePoint App Model, which is required for any components that integrate with their SharePoint cloud solution (SharePoint Online). Companies are now realizing that moving to the cloud is not a matter of “if,” but “when.” If their developers are building SharePoint components that cannot integrate, they are going to find themselves in a very expensive situation. Ensuring your development team has a deep knowledge of SharePoint, including the latest tools and advancements, is absolutely critical to the long-term success of your project.

Next steps

Without a doubt, education across your entire organization is critical for a successful SharePoint rollout. But how do you begin to identify the key players? Where do you start in educating your team? In the future, I’ll take a look at the primary roles involved in a typical rollout and go through the key rollout stages along the way. Stay tuned…


SSRS 2012: Preview Performance for Report Builder

When you work in Design view in Report Builder, you are not working with real data, even if you created a data set and attached it to a data region. Report Builder uses the data set design to discern the schema of the data, but uses only a representation of that data. That’s why you’ll want to preview a report repeatedly as you design it, to confirm that the actual data looks the way you envisioned.

When you click the Run button in Design view, Report Builder reads the actual data from the data store and renders the report so you can view it with actual data. It connects to the data source you specified and caches it, then combines the data and layout to render the report. You can switch between design and preview as often as necessary.

This is convenient for developing a report, but it can be a painfully slow process. If the data set uses a complex query that takes time to execute in the database, for example, you might have a significant wait for the report preview. In older versions of Reporting Services, you just had to wait patiently.

However, newer versions of Report Builder greatly enhance the report preview process by using edit sessions when you’re connected to a report server. The edit session creates a data cache on the report server that it retains for your next report preview. This way you have to wait for the data only once; subsequently, the report preview appears almost instantaneously. As long as you don’t make any changes to the data set or any report changes that affect the data, report previewing uses the cached data. If you ever need to use fresh data, you can preview the report and click the Refresh button in the Report Builder’s preview toolbar, as shown in Figure 1.


Figure 1. Refresh button in preview mode in Report Builder.

Report Builder creates an edit session the first time you preview the report; the session lasts for two hours by default, and resets to two hours every time you preview the report. The data cache can hold a maximum of five data sets. If you need more or use a number of different parameter values when you preview the report, the data cache may need to refresh more often, which slows preview performance.

You cannot access the underlying edit sessions that Report Builder uses to enhance preview performance, and the only properties you can tweak to affect preview behavior are the length of an edit session and the number of data sets in the cache. But actions you take can affect whether Report Builder is able to use the cached data, so it is helpful to have a basic understanding of what affects the edit session’s use of cached data.

TIP: To change the cache expiration timeout or the number of data sets the cache stores, use the Advanced page of the Server Properties dialog box for the Reporting Services instance from Management Studio.

The following changes cause Report Builder to refresh the cache, which causes a slower report preview:

  • Adding, changing, or deleting any data set associated with the report, including changes to its name or any properties.
  • Adding, changing, or deleting any data source, including changes to any properties.
  • Changing the language of the report.
  • Changing any assemblies or custom code in the report.
  • Adding, changing, or deleting any query parameters in the report, or any parameter values.

This list suggests that Report Builder refreshes the cache conservatively, that is, any time there might be an effect on the data used by the report. But changes to the report layout or data formatting do not cause the cached data to refresh.

TIP: Adding or deleting columns in a table or matrix does not refresh the cache. All of the fields in a data set are available to the report, whether you use them or not, so these actions do not affect the data set.

This post is an excerpt from the online courseware for our SSRS 2012 Developer course written by expert Don Kiely.

Don Kiely is a featured instructor on many of our SQL Server and Visual Studio courses. He is a nationally recognized author, instructor and consultant who travels the country sharing his expertise in SQL Server and security.

Windows 8 Using XAML: Introducing Badges

As you have seen, tiles act as a Windows Store app’s interface on the Windows Start screen. These tiles can display static or “live” data, depending on the functionality you add to the application. Sending notifications to tiles to update their content is covered in an earlier section; in this section, you’ll learn about creating the badge that can appear in the lower-right corner of any tile. This badge is a separate entity from the tile content, and you create and update the badge separately.

Badge Overview

A badge on a tile displays summary or status information for the application, and that information must be specific to your particular application. In other words, it would be confusing and irrelevant to display information about anything other than the application associated with the tile.

A badge on a tile can take on one of only two forms:

  • A numeric value between 1 and 99; numbers greater than 99 appear as 99+.
  • A glyph (a small image); one of a set of pre-defined glyphs.

Badges can appear on either wide or square tiles, and badges always appear in the lower right corner of the tile (lower-left corner, for RTL languages).

You might use a badge to indicate any of the following sample scenarios:

  • Network connection in an online game.
  • User status in a messaging app.
  • Number of unread email messages.
  • Number of new posts in a social media app.

Consider these things when designing an application that includes a badge on the application’s tile:

  • Badges can only display numeric values between 1 and 99. Setting the value of the badge to 0 clears the badge, and setting the value to a number greater than 99 appears as 99+ on the badge.
  • Badges can display a limited number of glyphs (plus a special glyph value, None, which displays nothing). You cannot extend the list, and Windows supplies all the glyphs that a badge can display.
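To make this concrete, here is a minimal C# sketch that sets a numeric badge on the app’s tile with the WinRT BadgeUpdateManager API, and then clears it:

```csharp
using Windows.Data.Xml.Dom;
using Windows.UI.Notifications;

public static class BadgeHelper
{
    // Display a number (1-99) on the app's tile badge.
    public static void ShowCount(int count)
    {
        XmlDocument xml =
            BadgeUpdateManager.GetTemplateContent(BadgeTemplateType.BadgeNumber);
        var badge = (XmlElement)xml.SelectSingleNode("/badge");
        badge.SetAttribute("value", count.ToString());

        BadgeUpdateManager.CreateBadgeUpdaterForApplication()
                          .Update(new BadgeNotification(xml));
    }

    // Remove the badge from the tile entirely.
    public static void Clear()
    {
        BadgeUpdateManager.CreateBadgeUpdaterForApplication().Clear();
    }
}
```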

As an example, Figure 1 shows a sample tile for the Windows Store. This tile displays the number of apps that require updating.

Figure 1. The Windows Store tile, with a badge.

Figure 2 shows a sample application tile that displays a glyph badge. This glyph is one of a small set of available glyphs.

Figure 2. The sample app displays a badge showing a glyph.

NOTE: Samples in this chapter assume that you have installed Visual Studio 2012 Update 1 (or later). If you are running the original release of Visual Studio 2012, some of the steps will not function correctly.

This post is an excerpt from the online courseware for our Windows 8 Using XAML: Tiles, Badges, Print, and Charms course written by expert Ken Getz.

Ken Getz is a featured instructor for several of our Visual Studio courses. He is a Visual Basic and Visual C# expert and has been recognized multiple times as a Microsoft MVP. Ken is a seasoned instructor, successful consultant, and the author or co-author of several best-selling books. He is a frequent speaker at technical conferences like Tech-Ed, VSLive and DevConnections, and he has written for several of the industry’s most-respected publications including Visual Studio Magazine, CoDe Magazine and MSDN Magazine.

Using Visual Studio 2010 to Create BCS Applications

There are two ways to use Visual Studio to create BCS applications. The first is to build custom BCS models with the Business Data Connectivity Model project template; the second is to use Visual Studio to migrate declarative models built with SharePoint Designer for deployment via solution packages.

Business Data Connectivity Model

Visual Studio 2010 includes the Business Data Connectivity Model project template, which you can use to create a .NET assembly shim to any data store for use by BCS. Solutions based on the project template consist of a feature to install the model in BCS, an XML configuration file that is the model, and .NET classes that do the work of reading and writing data.

The XML model contains all of the information required to work with the .NET classes including method and type descriptors. This means that the associated .NET class’s methods and parameters must match the model.

At this point in the chapter you may have the strong impression that Microsoft really wants people to buy licenses to SharePoint Server if they need BCS. If so, it will not surprise you to discover that you must do some extra work to use this project template with SharePoint Foundation to support deployment to BCS.

Migrating Declarative Models to Visual Studio

You can use the Business Data Connectivity Model project template as a basis to migrate declarative models created in SharePoint Designer. Begin by using SharePoint Designer to export the model. Then create a Business Data Connectivity Model project and remove the default template items. Finally, add the exported model and replace the missing SharePoint Server specific feature receiver to deploy the model to SharePoint Foundation.

This post is an excerpt from the online courseware for our Microsoft SharePoint 2010 for Developers course written by expert Doug Ware.

Doug Ware is a SharePoint expert and an instructor for many of our SharePoint 2007 and SharePoint 2010 courses. A Microsoft MVP several times over, Doug is the leader of the Atlanta .NET User Group, one of the largest user groups in the Southeast U.S., and is a frequent speaker at code camps and other events. In addition to teaching and writing about SharePoint, Doug stays active as a consultant and has helped numerous organizations implement and customize SharePoint.