Tag Archives: Martin Schaeferle

What’s new in MVC 5.2


Microsoft’s very successful model-view-controller architecture, or MVC, has been its flagship framework for developing next-generation Web applications, and Microsoft continues to improve it: version 5.2 was released just over two months ago. If you’re still hanging on to MVC 4, you’re missing out on many new and exciting features, and Microsoft has made the upgrade path easier than ever.

So what’s so exciting about MVC 5? Let me start by hitting you with some of the big improvements in this latest release. If you want even more information or want to see some of these new features demonstrated, please check out our MVC 5.2 courses with expert Eric Greene.

One ASP.NET

In MVC 5, Microsoft introduced a new project type called One ASP.NET. Its goal is to save Web developers time by reducing the clutter of single-purpose Web templates that had been steadily accumulating in Visual Studio. One ASP.NET creates a more “a la carte” model for building applications: the developer starts with core functionality and then adds components as various features are required. This allows developers to combine ASP.NET Web Forms, MVC, Web API, and other project templates in a single project rather than being restricted to just one of them.

Bootstrap

From the brilliant minds of Twitter’s software engineers came a CSS and JavaScript framework that has quickly become one of the most popular tools for front-end development. Bootstrap provides user interface tools and controls that let developers build rich Internet applications that automatically adapt to changing screen sizes and devices. It takes away the drudgery of constantly tinkering with the CSS and JavaScript needed to make your site perform professionally for all of your users.

Microsoft now includes Bootstrap templates in MVC 5, so you can take advantage of all its features right out of the box. In fact, Bootstrap is now the default HTML/CSS/JavaScript framework bundled with ASP.NET MVC. Bootstrap is managed through NuGet, which means it can be upgraded automatically as the technology advances. You can discover more about Bootstrap by taking a look at our Bootstrap 3.1 courses with expert Adam Barney.
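
If you haven’t seen Bootstrap markup before, here’s a minimal sketch of its grid system (the headings and copy are placeholders): with Bootstrap 3’s container, row, and col-md-* classes, the two columns below sit side by side on desktops and automatically stack on phones.

```html
<!-- Two columns that stack on phones and split 50/50 on medium screens and up -->
<div class="container">
  <div class="row">
    <div class="col-md-6">
      <h2>Our Courses</h2>
      <p>This column fills half the row on desktops.</p>
    </div>
    <div class="col-md-6">
      <h2>Our Experts</h2>
      <p>On a phone, this column drops below the first one.</p>
    </div>
  </div>
</div>
```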

ASP.NET Identity

Before ASP.NET MVC 5, Microsoft promoted its Membership Provider to handle security, authentication, and roles for your Web applications. But with ASP.NET Identity, it completely rebuilt its security solution to include a whole new range of features. Identity still contains all the core functionality for authentication and authorization, but it also extends to support newer approaches like two-factor authentication (2FA) and integrated authentication. With 2FA, you can require an additional form of authentication, such as a code from Google Authenticator or an SMS text message. Integrated authentication allows you to work with many existing third-party providers like Google, Facebook, and Twitter. Your users can access your site using credentials from these and other providers, freeing you from the responsibility of managing credentials and sparing your users from memorizing yet another password.
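
To give you a feel for how integrated authentication is wired up, here’s a sketch of the OWIN startup code an MVC 5 project typically uses to enable Google sign-in. It assumes the Microsoft.Owin.Security.Google package that ships with the MVC 5 templates, and the client ID and secret are placeholders you’d obtain from Google.

```csharp
using Microsoft.Owin.Security.Google;
using Owin;

public partial class Startup
{
    public void ConfigureAuth(IAppBuilder app)
    {
        // ... cookie authentication is configured here ...

        // Let users sign in with their Google account instead of a
        // locally stored password. Both values are placeholders.
        app.UseGoogleAuthentication(new GoogleOAuth2AuthenticationOptions
        {
            ClientId = "your-client-id.apps.googleusercontent.com",
            ClientSecret = "your-client-secret"
        });
    }
}
```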

New Filters

Authorization filters have been around for quite a while in ASP.NET and have been a staple for most developers who need to set up security for their Web applications. Authentication filters, on the other hand, are new to MVC 5. These new filters run before the authorization filters, giving developers the ability to better identify and control users entering their site. For example, developers can now assign a new authentication principal (the object that represents the user’s identity and roles) to a user logging in before the authorization filters run, giving them better control at the individual action or controller level. Think of authorization filters as providing a more global security model, one that covers the site as a whole, while authentication filters provide a more specific security model that can be applied at a more localized level.
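
Here’s a minimal sketch of what such a filter looks like. IAuthenticationFilter and its two methods are the actual MVC 5 contract; the principal-swapping logic is purely hypothetical.

```csharp
using System.Security.Principal;
using System.Web.Mvc;
using System.Web.Mvc.Filters;

// A hypothetical authentication filter attribute for MVC 5.
public class CustomAuthenticationAttribute : FilterAttribute, IAuthenticationFilter
{
    public void OnAuthentication(AuthenticationContext filterContext)
    {
        // Runs before authorization filters: swap in a principal that
        // later [Authorize] checks will see. Name and role are placeholders.
        var identity = new GenericIdentity("demo-user");
        filterContext.Principal = new GenericPrincipal(identity, new[] { "PowerUser" });
    }

    public void OnAuthenticationChallenge(AuthenticationChallengeContext filterContext)
    {
        // Runs after the action result; typically used to issue a challenge,
        // such as redirecting to a login page, when authorization failed.
    }
}
```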

Another new filter enhancement is filter overrides. Filter overrides allow you to define filters that apply to most of your application, at the global or controller level, and then override or turn off those filters for a specific controller or action.
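
A short hypothetical example: the controller below locks every action down to administrators, while one action uses the new [OverrideAuthorization] attribute to discard that inherited rule and apply a looser one of its own.

```csharp
using System.Web.Mvc;

[Authorize(Roles = "Admin")]          // applies to every action below...
public class ReportsController : Controller
{
    public ActionResult Sensitive()   // ...including this one
    {
        return View();
    }

    [OverrideAuthorization]           // discard the controller-level rule
    [Authorize(Roles = "User")]       // and require only the User role here
    public ActionResult Summary()
    {
        return View();
    }
}
```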

Upgrading from MVC 4

Microsoft has made upgrading easy and painless for the developer. In a nutshell, most applications will simply need to update their NuGet packages, plus make a couple of web.config changes, and they will be off and running. The NuGet services manage all the individual components, or packages, that your Web application uses, like Razor and Bootstrap, and make sure they are all on the latest releases relative to your version of MVC. Keep in mind that in addition to the move to MVC 5, there are minor releases coming out as well. At the time of this writing, there have been 5.1 and 5.2 releases, but by the time you read this there may be a 5.3 available and beyond. Regardless, migrations at this level are equally straightforward.
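
As a rough sketch, the upgrade from the Visual Studio Package Manager Console boils down to something like this (verify the current version number on nuget.org before running it):

```powershell
# Move the core MVC package (and its Razor dependency) to 5.2; NuGet
# updates the dependent packages to matching versions automatically.
Update-Package Microsoft.AspNet.Mvc -Version 5.2.0

# Afterward, edit web.config: the webpages:Version app setting moves
# from 2.0.0.0 to 3.0.0.0, and the assembly binding redirects should
# point at System.Web.Mvc 5.x and System.Web.Razor 3.x.
```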

Keep in mind that in many cases the migration forward is a one-way proposition. With each upgrade, your application is exposed to more and more features and functionality, which means you can’t go back once you start using it. But hey, why would you go back, right?

Finally, it’s not just ASP.NET MVC that is gaining new features—ASP.NET Web API, Razor, SignalR, Entity Framework, NuGet and many others are also improving. LearnNowOnline can help you keep up with the latest releases so you can be the best Web developer you can be. Check out our complete course list.


About the Author


Martin Schaeferle is the Vice President of Technology for LearnNowOnline. Martin joined the company in 1994 and started teaching IT professionals nationwide to develop applications using Visual Studio and Microsoft SQL Server. He has been a featured speaker at various conferences including Microsoft Tech-Ed, DevConnections, and the Microsoft NCD Channel Summit. Today, he is responsible for all product and software development as well as managing the company’s IT infrastructure. Martin enjoys staying on the cutting edge of technology and guiding the company to produce the best learning content with the best user experience in the industry. In his spare time, Martin enjoys golf, fishing, and being with his wife and three teenage children.

Hadoop…Pigs, Hives, and Zookeepers, Oh My!

zookeeper

If there is one aspect of Hadoop that I find particularly entertaining, it is the naming of the various tools that surround Hadoop. In my 7/3 post, I introduced Hadoop, the reasons for its growing popularity, and the core framework features. In this post, I will introduce you to the many different tools, and their clever names, that augment Hadoop and make it more powerful. And yes, the names in the title of this blog are actual tools.

Pig
The power behind Pig is that it provides developers with a simple scripting language for performing rather complex MapReduce queries. Pig was originally developed by a team at Yahoo and named for its ability to devour any amount and any kind of data. Its scripting language (yes, you guessed it, called Pig Latin) gives the developer a set of high-level commands for all kinds of data manipulation like joins, filters, and sorts.

Unlike SQL, Pig is a more procedural, script-oriented query language; SQL, by design, is more declarative. The benefit of a procedural design is that you have more control over the processing of your data. For example, you can inject user code at any point within the process to control the flow.
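
To give you a flavor of the language, here’s a small hypothetical Pig Latin script; the file name and field layout are invented, but LOAD, FILTER, GROUP, FOREACH, and DUMP are core Pig operators.

```pig
-- Hypothetical web-log analysis: total bytes served per user.
logs    = LOAD 'weblogs.tsv' USING PigStorage('\t')
          AS (user:chararray, url:chararray, bytes:int);
big     = FILTER logs BY bytes > 10000;       -- keep only large responses
by_user = GROUP big BY user;
totals  = FOREACH by_user GENERATE group AS user, SUM(big.bytes) AS total;
DUMP totals;                                  -- print results to the console
```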

Hive
To complement Pig, Hive provides developers a declarative query language similar to SQL. For many developers who are familiar with building SQL statements for relational databases like SQL Server and Oracle, Hive will be significantly easier to master. Originally developed by a team at Facebook, it has quickly become one of the most popular methods of retrieving data from Hadoop.

Hive uses a SQL-like language called HiveQL, or HQL. Although it doesn’t strictly conform to the SQL-92 standard, it does provide many of the same commands. The key language limitation relative to the standard is that there is no transactional support. Hive offers both ODBC and JDBC drivers, so developers can work with it from many different programming languages like Java, C#, PHP, Python, and Ruby.
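
Since C# and ODBC are called out above, here’s a rough sketch of querying Hive from C#. It assumes a Hive ODBC driver is installed and a DSN named "Hive" points at your cluster; the table and column names are hypothetical.

```csharp
using System;
using System.Data.Odbc;

class HiveQueryDemo
{
    static void Main()
    {
        // Assumes a Hive ODBC driver and a DSN named "Hive" are configured.
        using (var conn = new OdbcConnection("DSN=Hive;"))
        {
            conn.Open();
            // HiveQL reads like ordinary SQL; the table is hypothetical.
            var sql = "SELECT host, COUNT(*) AS hits FROM weblogs GROUP BY host";
            using (var cmd = new OdbcCommand(sql, conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader.GetString(0), reader.GetInt64(1));
            }
        }
    }
}
```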

Oozie
To tie these query languages together for complex tasks requires an advanced workflow engine. Enter Oozie—a workflow scheduler for Hadoop that allows multiple queries from multiple query languages to be assembled into a convenient automated step-by-step process. With Oozie, you have total control over the flow to perform branching, decision-making, joining, and more. It can be configured to run at specific times or intervals and reports back logging and status information to the system. Oozie workflows can also accept user input parameters to add additional control. This allows developers to tweak the flow based on changing states or conditions of the system.

Sqoop
When deploying a Hadoop solution, one of the first steps is populating the system with data. Although data can come from many different sources, the most likely source is a relational database like Oracle, MySQL, or SQL Server. For moving data to and from relational databases, Apache’s Sqoop is a great tool to use. The name is derived from combining “SQL” and “Hadoop,” signifying the connection between SQL and Hadoop data.

Part of Sqoop’s power comes from the built-in intelligence that optimizes the transfer of data on both the SQL side and the Hadoop side. It can query the SQL table’s schema to determine the structure of the incoming data, translate it into a set of intelligent data classes, and configure MapReduce to import the data efficiently into a Hadoop data store like HBase. Sqoop also gives developers more granular control over the transfer by allowing them to import subsets of the data; for example, Sqoop can be told to import only specific columns of a table instead of the whole table.
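
For instance, a hypothetical import of just two columns from a SQL Server table might look like this (the server, database, table, and column names are invented; the flags are standard Sqoop options):

```sh
# Import only the id and email columns of the customers table into HDFS.
sqoop import \
  --connect "jdbc:sqlserver://dbserver;databaseName=sales" \
  --username loader -P \
  --table customers \
  --columns "id,email"
```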

Sqoop was even chosen by Microsoft as their preferred tool for moving SQL Server data into Hadoop.

Flume
Another popular data source for Hadoop, outside of relational data, is log or streaming data. Web sites, in particular, have a propensity to generate massive amounts of log data, and more and more companies are discovering how valuable this data is for understanding their audience and its buying habits. So another challenge for the Hadoop community to solve was how to move log-based data into Hadoop. Apache tackled that challenge and released Flume (yes, think of a log flume).

The flume metaphor reflects the fact that this tool deals with streaming data, like water rushing down a river. Unlike Sqoop, which typically moves static data, Flume must manage constant changes in data flow and adjust to handle very busy periods. For example, Web data may arrive at an extremely high rate during a promotion. Flume is designed to scale itself to handle these changes in rates. It can also receive data from multiple streaming sources, even beyond Web logs, and does so with guaranteed delivery.

Zookeeper
There are many more tools I could cover, but I’m going to wrap it up with one of my favorite tool names: Zookeeper. This tool comes into play when dealing with very large Hadoop installations. At some point in the growth of the system, as more and more computers are added to the cluster, there is an increasing need to manage and optimize the various nodes involved.

Zookeeper collects information about all nodes and organizes them in a hierarchy, similar to how your operating system creates a hierarchy of all the files on your hard drive to make them easier to manage. Zookeeper runs in memory, which makes it extremely fast, although available RAM limits how much it can track. It replicates itself across many of the nodes in the Hadoop system so that it maintains high availability and does not become a single point of failure.

Zookeeper becomes the main hub that client machines connect to in order to obtain health information about the system as a whole. It is constantly monitoring all the nodes and logging events as they happen. With Zookeeper’s organized map of the system, it makes what could be a cumbersome task of checking on and maintaining each of the nodes individually a more enjoyable and manageable experience.

Summary
I hope this gives you a taste of the many support tools available for Hadoop, and illustrates the community’s commitment to this project. Hadoop is still in the very early stages of its lifespan, and its components and tools are constantly changing. For more information about these and other tools, be sure to check out our new Hadoop course.


The Power of Hadoop


Even in the context of other high-tech innovations, Hadoop went from obscurity to fame in a remarkably short amount of time. It had to… the pressures driving the development of this technology were too great. If you are not familiar with Hadoop, let’s start by looking at the void it is trying to fill.

Until recently—say, the last five to ten years—companies did not have the massive amounts of data to manage that they do today. Most companies only had to manage the data related to running their business and serving their customers. Even those with millions of customers had little trouble storing data in an everyday relational database like Microsoft SQL Server or Oracle.

But today, companies are realizing that with the growth of the Internet and of self-service, software-as-a-service (SaaS) Web sites, there are now hundreds of millions of potential customers all voluntarily providing massive amounts of valuable business intelligence. Think of storing something as simple as a Web log that records every click of every user on your site. How does a company store and manipulate this data when it generates potentially trillions of rows every year?

Generally speaking, the essence of the problem Hadoop attempts to solve is that data is coming in faster than hard drive capacities are growing. Today we have 4 TB drives available, which can be assembled in SAN or NAS devices to easily reach 40 TB volumes, or maybe even 400 TB. But what if you needed a 4,000 TB, or 4 petabyte (PB), volume? The costs quickly become too high for most companies to absorb…until now. Enter Hadoop.

Hadoop Architecture
One of the keys to Hadoop’s success is that it runs on everyday commodity hardware. A typical company has a back room full of hardware that has passed its prime. Using old and outdated computers, one can pack them full of relatively inexpensive hard drives (the total capacity doesn’t need to be the same in each computer) and use them within a Hadoop cluster. Need to expand capacity? Add more computers or hard drives. Hadoop can combine all the hard drives into one giant volume for storing all types of data, from Web logs to large video files. It is not uncommon for Hadoop to store rows of data that exceed 1 GB per row!

The file system that Hadoop uses is called the Hadoop Distributed File System, or HDFS. It is a highly fault-tolerant file system that focuses on high availability and fast reads, and it is best used for data that is written once and read often. HDFS leverages all the hard drives in the system when writing data, because Hadoop knows that bottlenecks stem from writing and reading to a single hard drive. The more hard drives used simultaneously during reads and writes, the faster the system operates as a whole.

HDFS stores data in small file blocks which are spread across all the hard drives available within a cluster. The block size is configurable and can be optimized for the data being stored. HDFS also replicates blocks over multiple drives, across multiple computers, and even across multiple network subnets. This allows hard drives or computers to fail (and they will) without disrupting the system. It also allows Hadoop to be strategic about which replicated blocks it reads: Hadoop analyzes which computers and hard drives are currently being utilized, along with network bandwidth, to pick the copy it can retrieve fastest. This produces a system that is very quick to respond to requests.
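
Both the block size and the replication factor live in the cluster’s hdfs-site.xml. The property names below are the current ones (older releases used dfs.block.size); the values are purely illustrative.

```xml
<!-- hdfs-site.xml: illustrative values only -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>           <!-- keep three copies of every block -->
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>   <!-- 128 MB blocks -->
  </property>
</configuration>
```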

MapReduce
Despite the relatively odd name, MapReduce is the cornerstone of Hadoop’s data retrieval system. It is an abstracted programming layer on top of HDFS, responsible for simplifying how data is read back to the user. Its purpose is similar to SQL’s, in that it lets programmers focus on building intelligent queries without getting involved in the underlying plumbing that implements and optimizes them. The “Map” part of the name refers to the task of sorting and filtering the requested information and returning it as an intermediate result set. The “Reduce” task then summarizes that data, for example by counting rows or summing columns.

These two tasks are analyzed by the Hadoop engine and broken into many pieces (a divide-and-conquer model), which are processed in parallel by individual workers across the nodes. The result is the ability to process petabytes of data in a matter of hours.
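
To illustrate the model itself, and not the Hadoop API, here’s a tiny conceptual word count in C#: the “map” step turns each line into (word, 1) pairs, and the “reduce” step sums the counts per word. This per-key, divide-and-conquer shape is exactly the work Hadoop parallelizes across nodes.

```csharp
using System;
using System.Linq;

class WordCount
{
    static void Main()
    {
        var lines = new[] { "the quick brown fox", "the lazy dog" };

        // Map: emit a (word, 1) pair for every word in every line.
        var pairs = lines.SelectMany(
            line => line.Split(' ').Select(word => new { word, count = 1 }));

        // Reduce: group the pairs by word and sum the counts in each group.
        var totals = pairs.GroupBy(p => p.word)
                          .Select(g => new { word = g.Key, total = g.Sum(p => p.count) });

        foreach (var t in totals)
            Console.WriteLine("{0}: {1}", t.word, t.total);
    }
}
```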

MapReduce was originally developed at Google, which published the model in a widely cited paper, and it has since been implemented in many programming languages and frameworks, including the open source Hadoop. You can find out more about MapReduce by visiting http://mapreduce.org.

In my next post, I’ll take a look at some of the other popular components around Hadoop, including advanced analytical tools like Hive and Pig. In the meantime, if you’d like to learn more about Hadoop, check out our new course.

Apache Hadoop, Hadoop, Apache, the Apache feather logo, and the Apache Hadoop project logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and other countries.


8 Key Players in Your SharePoint Rollout, Part 2


In my 5/12/2014 post I took a look at one of the main reasons many SharePoint installations fail—lack of user buy-in. One of the best ways to get buy-in is through SharePoint education. Then in my 5/29/2014 post, I began to look at some of the primary roles within a company that are involved in planning and implementing SharePoint. I covered how targeted and structured training within these roles can create an environment where communication can flow freely, resulting in SharePoint deployments with a high rate of success.

In this post, let’s take a look at the remaining roles within a typical SharePoint deployment, and why they also need a solid understanding of SharePoint in order to obtain buy-in and thereby take the steps necessary to ensure a high level of success.

Developers

Developers are given the task of implementing the business logic that controls the document flow within SharePoint. This should be the most obvious place to spend training dollars, but surprisingly, many companies don’t believe it’s necessary. They feel SharePoint development is no different from any other Web development, so why bother? Unbeknownst to them, they have now greatly increased their chances of stepping on one of the biggest landmines in SharePoint deployment: code from an untrained developer. SharePoint provides a very powerful framework that gives developers a huge amount of leeway in how they extend it. Not taking the time to understand the pros and cons of all the options can jeopardize the security, stability, and maintainability of a SharePoint installation.

SharePoint can also suffer from poor coding practices. There are many development tools and concepts that can be leveraged to extend SharePoint, from C# to MVC, from JavaScript to Entity Framework. Each area can introduce a weak spot if developers are not up to speed on the latest coding practices or versions. Companies that want to maximize their chance of a successful deployment should make sure their development teams have the right knowledge, so they can make the best decisions and build components and workflows that are rock solid.

Designers

Depending on the size of the company, the design of the SharePoint site might be controlled by a team other than developers. Designers are responsible for the look and feel of the site and likely do not have a strong programming background. They may control areas like images, color, fonts, logos, layout, and branding that are implemented throughout the site.

Since part of the success of any SharePoint deployment is getting your employees to use it, attention to design and the user experience cannot be overlooked. Your design team needs to become familiar with SharePoint and understand how people will use it, so they can then design a solution that is easy to use and increases productivity. Any solution that creates a burden on performing even the simplest of tasks will not be adopted.

Administrators

Another key role in the deployment of any SharePoint installation is the administrator. This person is the infrastructure guru who is ultimately responsible for allocating internal resources and installing all the services necessary to get SharePoint up and running. The administrator will, of course, be guided by the detailed plans laid out by the infrastructure architect. Clearly, this is a role that needs a firm understanding of SharePoint. Bad decisions by the administrator could lead to security breaches, loss of documents, degraded performance, or site outages. Each of these could break the trust of its users, leading to a slow adoption curve or even no adoption at all.

Site Owners

Once SharePoint is installed and operational, the task of configuring SharePoint falls to the site owner. In many smaller installations, the site owners and champions will be the same person. Since the champion role requires a much deeper understanding of SharePoint, and therefore much more training, many larger companies may elect to limit the number of champions to what they need, and instead have additional site owners.

To make SharePoint more manageable, companies will break up SharePoint in many ways (by department, region, floor, rolling dice, etc.) since it is impractical for one person to manage it at the global level. By dicing the site up into pieces, individual site owners can customize the look and feel, as well as security, to meet the direct needs of that group.

Site owners are like mini-administrators. They have full control over their little piece of SharePoint and are responsible for creating and managing their site or sites. This may include the type of templates and document libraries used, as well as creating users and assigning access rights. There are still needs that would require going to the company administrator…for example, if their site runs low on storage space.

Even at this level, education and training are very important, because these site owners need to understand how to do the tasks necessary to give their users a positive and engaging experience. This is the last group to influence SharePoint before it goes live.

Power Users and Business Users

Now that your SharePoint is live, the education needs don’t stop. You’ll likely have hundreds or even thousands of employees who can now take advantage of the power of SharePoint. But will they use it if they don’t understand it? Often users tend to get intimidated by SharePoint. They have been doing things one way for so long that it is difficult to trust that a new way would be better. The quickest way to gain trust and increase engagement with SharePoint is through training—successful SharePoint deployments always include training for their general users. That way they can feel comfortable working in this new environment right off the bat, and can more easily trust that this new way of doing things will be a better and more productive way than before.

In Summary

Creating a successful SharePoint deployment requires a conscious buy-in to the solution that starts at the top of the organization chart and runs all the way down. Any member of the team who doesn’t understand or doesn’t trust the solution will be a chink in the armor. Too many chinks will cause the solution to stall, falter, or fail. To get everyone’s buy-in, the best prescription is education. By training the top, you can be sure that the design and necessary resources will meet the needs of the business. By training architects, developers, and administrators, you can be assured that the installation is rock solid and performs well. By training at the user level, you can be confident that the solution will be adopted and the company will reap the benefits.

Finally, I want to give a shout-out to one of our indispensable SharePoint gurus and instructors, Philip Wheat, who assisted me in putting together some of the content for this blog series.


8 Key Players in Your SharePoint Rollout

In my previous blog article, Is Your SharePoint Rollout Doomed to Fail?, I took a look at one of the main reasons many SharePoint installations struggle—the lack of user buy-in. Without complete buy-in on your SharePoint solution from everyone from the CEO on down, you might as well put your IT budget on Black-13, spin the roulette wheel and hope for the best.

Assuming you’re not the gambling type, just how do you tackle the training of your company in SharePoint? Who are the key players that require their own unique educational approach? In this post, we will begin to take a look at a typical SharePoint rollout, the roles involved, and what each role should know.

CEO/Executives

Unfortunately, many companies fail to include one of the key roles in any SharePoint rollout: upper management. Don’t get me wrong, I’m not suggesting they are purposely kept in the dark; it is more about the level of engagement. Your CEO sets the tone for the company, and everyone else tends to follow his or her lead. If the CEO doesn’t completely understand the value, or the ROI, of the SharePoint solution, they will more than likely take a wait-and-see attitude toward the project…especially if the solution is sold and managed by the IT department. This attitude will trickle down, and soon you will find yourself with a SharePoint site that no one uses or even cares to use. Why bother? No one has any “skin in the game,” as they say.

Proper education of your executive team is important so they understand how their company will benefit by implementing SharePoint. Once they are on board, they will insist that each department be on board as well, and so on. So, does the CEO need to become a SharePoint developer? Of course not. But they need to see the big picture and understand the challenges that your SharePoint project will overcome.

Ok, you have upper management’s buy-in. Who’s next?

Architects

There can be up to four architects required for a SharePoint implementation, depending on the size of your company. Smaller organizations might consolidate the architect roles into just one or two.

The two most important architects are the:

  • Business architect – This person is focused on the business needs of the company and the business problems that the SharePoint implementation is trying to solve.
  • Technical architect – This person is focused on the technology requirements. The technical architect needs to work with the business architect to make sure the organization has the network infrastructure and resources necessary to support the SharePoint implementation.

The other two architects who should be involved are the process architect and the infrastructure architect. Once the business and technical architects iron out a plan, the process and infrastructure architects start working on how to implement it using the available resources.

  • Process architect – This person develops the business logic to support the plan and may even get into where the business logic resides, such as workflows, custom applications, templates, etc.
  • Infrastructure architect – This person works on the network and server requirements. Do we need more servers? Can we provide adequate security? How can we ensure high availability?

Do these architects need to understand SharePoint? Absolutely! But here’s the key: most companies don’t go far enough in getting the people in these roles sufficiently up to speed on all that SharePoint has to offer. Remember, SharePoint is a framework, which means it offers countless uses and many ways to implement it. A common mistake is not thoroughly investigating the options available, and therefore going down a road that is misinterpreted as the only road available. It is common for a SharePoint implementation to be crippled right out of the chute due to poor architecture.

The cure…training, of course. Architects who have gone through a detailed, structured training program are more likely to work well together and come up with solutions that lead to a successful implementation of SharePoint. And in the end, this will draw out the architects’ buy-in, which you needed all along.

Champions

Different companies label this role differently, but for the sake of this blog I’m going to refer to it as champion. Most companies do not have, nor do they really need, an abundance of architects. But what you’ll find is that once the solution is deployed, everyone wants access to the architects. SharePoint is not a trivial solution, and once things roll out, there are few people who really understand it at a high level. And unless you have a cloning device tucked away in your back pocket, you’re going to need more people to support the solution.

Champions are the ones that understand SharePoint at a high level and are usually trusted with administrator rights on the servers. They are then available to assist departments with site creation, security, major functional changes in business logic, and the like. It is also common for companies to assign the role of SharePoint Site Owner to these people as well, depending on the overall size of the company. These roles have a lot in common.

Clearly this role also requires getting up to speed in SharePoint. Just like with the architects, the people in this role will need a detailed understanding of SharePoint so they can effectively build and configure sites that are aligned with the goals of the company. With champions on board, your chances of a successful SharePoint rollout are greatly increased.

Next Steps

These roles provide you with a solid foundation to begin building your SharePoint implementation. Education plays a critical role here because it allows efficient communication to occur between all your major departments. When everyone understands the power and wide range of features that SharePoint brings to the table, great ideas and great solutions have a chance to come forward.

In my next blog, I will dig into five more roles that are critical to a successful SharePoint rollout: Administrator, Developer, Designer, Business/Power User, and SharePoint Site Owner. Stay tuned…
