Hadoop…Pigs, Hives, and Zookeepers, Oh My!


If there is one aspect of Hadoop that I find particularly entertaining, it is the naming of the various tools that surround Hadoop. In my 7/3 post, I introduced Hadoop, the reasons for its growing popularity, and the core framework features. In this post, I will introduce you to the many different tools, and their clever names, that augment Hadoop and make it more powerful. And yes, the names in the title of this blog are actual tools.

Pig
The power behind Pig is that it provides developers with a simple scripting language that performs rather complex MapReduce queries. It was originally developed by a team at Yahoo and named for its ability to devour any amount and any kind of data. The scripting language (yes, you guessed it, called Pig Latin) provides the developer with a set of high-level commands to do all kinds of data manipulation like joins, filters, and sorts.

Unlike SQL, Pig is a more procedural, script-oriented query language. SQL, by design, is more declarative. The benefit of a procedural design is that you have more control over the processing of your data. For example, you can inject user code at any point within the process to control the flow.
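The contrast is easier to see with a concrete sketch. This is not Pig Latin itself, just a rough Python analogy (the dataset and field names are invented for illustration): each step produces a named intermediate result, which is the procedural style Pig Latin encourages, and user code can slot in between any two steps.

```python
# Procedural, Pig-style pipeline: each step yields a named intermediate
# result, so custom code can be injected between any two steps.
# (Illustrative analogy only -- not actual Pig Latin syntax.)

users = [
    {"id": 1, "name": "Ann", "age": 34},
    {"id": 2, "name": "Bob", "age": 17},
]
clicks = [
    {"user_id": 1, "url": "/home"},
    {"user_id": 1, "url": "/buy"},
    {"user_id": 2, "url": "/home"},
]

# Step 1: FILTER -- keep adult users only.
adults = [u for u in users if u["age"] >= 18]

# Step 2: JOIN -- match clicks to the filtered users.
joined = [
    {"name": u["name"], "url": c["url"]}
    for u in adults
    for c in clicks
    if c["user_id"] == u["id"]
]

# Step 3: any user code can run here before further processing.
print(joined)
```

In a declarative language like SQL, the filter and join would be one opaque statement; here each intermediate relation has a name you can inspect or transform.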

Hive
To complement Pig, Hive provides developers a declarative query language similar to SQL. For many developers who are familiar with building SQL statements for relational databases like SQL Server and Oracle, Hive will be significantly easier to master. Originally developed by a team at Facebook, it has quickly become one of the most popular methods of retrieving data from Hadoop.

Hive uses a SQL-like implementation called HiveQL or HQL. Although it doesn’t strictly conform to the SQL ’92 standard, it does provide many of the same commands. The key language limitation relative to the standard is the lack of transactional support. HQL supports both ODBC and JDBC, so developers can leverage many different programming languages like Java, C#, PHP, Python, and Ruby.
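Because HQL so closely mirrors standard SQL, a familiar aggregate query makes a good illustration. The sketch below runs the query against an in-memory SQLite database from Python purely as a stand-in for a Hive session; the table and data are invented, but the SELECT statement is essentially what you would submit as HQL.

```python
import sqlite3

# Stand-in for a Hive session: the SELECT below is essentially the
# same statement you would submit as HiveQL against a Hive table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (url TEXT, user_id INTEGER)")
conn.executemany(
    "INSERT INTO page_views VALUES (?, ?)",
    [("/home", 1), ("/home", 2), ("/buy", 1)],
)

# Declarative query: say *what* you want, not how to compute it.
rows = conn.execute(
    "SELECT url, COUNT(*) AS views "
    "FROM page_views GROUP BY url ORDER BY views DESC"
).fetchall()
print(rows)  # [('/home', 2), ('/buy', 1)]
```

The engine, not the programmer, decides how to scan, group, and sort; with Hive, that "how" becomes a set of MapReduce jobs.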

Oozie
To tie these query languages together for complex tasks requires an advanced workflow engine. Enter Oozie—a workflow scheduler for Hadoop that allows multiple queries from multiple query languages to be assembled into a convenient automated step-by-step process. With Oozie, you have total control over the flow to perform branching, decision-making, joining, and more. It can be configured to run at specific times or intervals and reports back logging and status information to the system. Oozie workflows can also accept user input parameters to add additional control. This allows developers to tweak the flow based on changing states or conditions of the system.
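A minimal Oozie workflow is an XML document that chains actions together. The fragment below is illustrative only (the workflow name, node names, and script path are invented): it runs a single Pig script and then transitions to either the end node or a failure node.

```xml
<workflow-app name="daily-report" xmlns="uri:oozie:workflow:0.5">
  <start to="run-pig"/>
  <action name="run-pig">
    <pig>
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <script>clean_logs.pig</script>
    </pig>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Pig action failed</message>
  </kill>
  <end name="end"/>
</workflow-app>
```

Real workflows add more actions and decision nodes between `start` and `end`, with parameters like `${jobTracker}` supplied at submission time.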

Sqoop
When deploying a Hadoop solution, one of the first steps is populating the system with data. Although data can come from many different sources, the most likely is a relational database like Oracle, MySQL, or SQL Server. For moving data to and from relational databases, Apache’s Sqoop is a great tool to use. The name is derived from combining “SQL” and “Hadoop,” signifying the connection between SQL and Hadoop data.

Part of Sqoop’s power comes from the intelligence built in to optimize the transfer of data on both the SQL side and the Hadoop side. It can query the SQL table’s schema to determine the structure of the incoming data, translate it into a set of intelligent data classes, and configure MapReduce to import the data efficiently into a Hadoop data store like HBase. Sqoop also gives the developer more granular control over the transfer by allowing them to import subsets of the data; for example, Sqoop can be told to import only specific columns within a table instead of the whole table.

Sqoop was even chosen by Microsoft as their preferred tool for moving SQL Server data into Hadoop.

Flume
Another popular data source for Hadoop, outside of relational data, is log or streaming data. Web sites, in particular, have a propensity to generate massive amounts of log data, and more and more companies are finding out how valuable this data is for understanding their audience and its buying habits. So another challenge for the Hadoop community to solve was how to move log-based data into Hadoop. Apache tackled that challenge and released Flume (yes, think of a log flume).

The flume metaphor symbolizes the fact that this tool is dealing with streaming data, like water rushing down a river. Unlike Sqoop, which typically moves static data, Flume needs to manage constant changes in data flow and be able to adjust to handle very busy times. For example, Web data may be coming in at an extremely high rate during a promotion. Flume is designed to scale itself to handle these changes in rate. Flume can also receive data from multiple streaming sources, even beyond Web logs, and does so with guaranteed delivery.

Zookeeper
There are many more tools I could cover, but I’m going to wrap it up with one of my favorite tool names—Zookeeper. This tool comes into play when dealing with very large Hadoop installations. At some point in the growth of the system, as more and more computers are added to the cluster, there will be an increasing need to manage and optimize the various nodes involved.

Zookeeper collects information about all nodes and organizes them in a hierarchy, similar to how your operating system creates a hierarchy of all the files on your hard drive to make them easier to manage. The Zookeeper service runs in memory, making it extremely fast, although it is limited by available RAM, which may affect its scalability. It replicates itself across many of the nodes in the Hadoop system so that it maintains high availability and does not create a weak-link situation.
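Zookeeper’s namespace resembles a file system: each node (a “znode”) is addressed by a slash-separated path and can hold a small piece of data. Here is a toy in-memory sketch of that idea (not the real Zookeeper client API; the paths and payloads are invented):

```python
# Toy model of Zookeeper's hierarchical namespace: znodes are
# slash-separated paths mapping to small payloads, kept in memory.
# (Illustration only -- the real client API is much richer.)
znodes = {}

def create(path, data):
    """Create a znode; like Zookeeper, the parent must already exist."""
    parent = path.rsplit("/", 1)[0]
    if parent and parent not in znodes:
        raise KeyError("parent znode %r does not exist" % parent)
    znodes[path] = data

create("/app", b"")
create("/app/workers", b"")
create("/app/workers/node-1", b"healthy")

# Clients read cluster state by path, much as they would read files.
print(znodes["/app/workers/node-1"])  # b'healthy'
```

The real service adds watches, ephemeral nodes, and replication on top of this basic path-to-data model.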

Zookeeper becomes the main hub that client machines connect to in order to obtain health information about the system as a whole. It constantly monitors all the nodes and logs events as they happen. With Zookeeper’s organized map of the system, the potentially cumbersome task of checking on and maintaining each of the nodes individually becomes a far more manageable experience.

Summary
I hope this gives you a taste of the many support tools that are available for Hadoop, as well as illustrates the community’s commitment to this project. As technologies go, Hadoop is in the very early stages of its lifespan, and its components and tools are constantly changing. For more information about these and other tools, be sure to check out our new Hadoop course.

About the Author

Martin Schaeferle is the Vice President of Technology for LearnNowOnline. Martin joined the company in 1994 and started teaching IT professionals nationwide to develop applications using Visual Studio and Microsoft SQL Server. He has been a featured speaker at various conferences including Microsoft Tech-Ed, DevConnections and the Microsoft NCD Channel Summit. Today, he is responsible for all product and software development as well as managing the company’s IT infrastructure. Martin enjoys staying on the cutting edge of technology and guiding the company to produce the best learning content with the best user experience in the industry. In his spare time, Martin enjoys golf, fishing, and being with his wife and three teenage children.


Bootstrap Fundamentals with Adam Barney

Watch the trailer for Bootstrap 3.1: Introduction and Installing

The LearnNowOnline production department has been busy working on four new courses for Bootstrap. So why would you need a course about how to use a bootstrap, you ask? How hard can it be? Actually, if you are going to create a web site about how to strap on your boots, you may want to use Bootstrap. Okay, enough with the bad play on words! The Bootstrap I’m talking about here is a free collection of web site creation tools. It was developed by engineers at Twitter as a framework to create uniformity in the development of websites.

Our new Bootstrap courses are presented by Adam Barney. Adam is a Microsoft C# MVP, consultant, speaker, author, and instructor. Starting in the 4th grade on a Commodore 64, Adam has continually pushed himself to learn more about the construction of software, and he loves sharing that knowledge and passion with others. He is the instructor of our Microsoft Silverlight 5 courses, and we’re happy to have him back sharing his wisdom once again with these four new Bootstrap courses:

Bootstrap 3.1: Introduction and Installing (now available)
This course will take you through the process of acquiring Bootstrap – you’ll see how we can download Bootstrap distributions and sources or use a CDN or front-end package manager like Bower to get Bootstrap.

Bootstrap 3.1: Base CSS (now available)
This course will show you what basic CSS classes come with Bootstrap and how to apply them to your site to make it look great.

Bootstrap 3.1: Components (coming soon)
In this course you’ll learn about creating some different input elements, page navigation, emphasizing content in subtle ways with labels and badges, or in big ways with jumbotrons. You’ll also check out how you can visually separate content from its surroundings — both text-based content in wells, panels and alerts, and images in thumbnails.

Bootstrap 3.1: JavaScript (coming soon)
In this course, you’ll see that Bootstrap comes loaded with a ton of great JavaScript plugins to help you create powerful and engaging sites for customers and users. You’ll learn how to capture users’ attention with modal dialogs and create an auto-updating navigation element that remains fixed in the browser window as you scroll through your site. You’ll also learn how to add some interactive elements to your site as we show and hide content dynamically using the tab and collapse plugins, and provide additional contextual content to users in the form of tooltips and popovers.

One last item of note…yes, Bootstrap 3.2 has just been released, but have no fear. All topics in these courses still apply to Bootstrap 3.2.

About the Author

Brian Ewoldt is the Project Manager for LearnNowOnline. Brian joined the team in 2008 after 13 years of working for the computer gaming industry as a producer/project manager. Brian is responsible for all production of courses published by LearnNowOnline. In his spare time, Brian enjoys being with his family, watching many forms of racing, racing online, and racing Go-Karts.


Watch “Getting Started with AngularJS”


This week expert John Culviner presented the webinar “Getting Started with AngularJS.” John covered how AngularJS can be used to create interactive web pages, and how it makes this process quicker and easier than most JavaScript frameworks. He presented some very valuable material during the 90-minute presentation and Q&A session. If you missed this event, you can watch the replay now. In fact, replays of all of our past events are now available in the LearnNowOnline webinar archive – be sure to check it out.

Our next webinar is Wednesday, July 23rd and will be presented by Doug Ortiz. Doug will be demonstrating the power of Power Pivot Dashboards and how you can add graphics to your company’s data so your audience can more easily see what all that data means. Register now


Agile/Scrum Essentials for Practitioners

Geof Lory, Agile/Scrum instructor

This last week we had Geof Lory in our production studio recording our new Agile/Scrum Essentials for Practitioners course. Geof is from the Twin Cities and has been introducing Agile and Lean principles into organizations since the mid-90s.

So what is the Agile/Scrum methodology of project management? For starters, it’s unlike the standard waterfall project management methodology where you assign tasks with a given order and time frame for completion. (For example, Task A is done before task B, and so on until the project is complete.) This process involves a project manager whose job it is to plan and assign each task within a given time frame.

The Agile/Scrum methodology does not have a traditional project manager, but rather a project team that divides the project management duties among its members. Instead of a task-driven schedule, an iterative approach is taken. Each part of the project has a series of meetings to help in the development process. There is a kick-off meeting to start the project, daily meetings (or Scrums), and Sprint meetings. Each type of meeting has a specific role within the project.

The Agile/Scrum method started in the software development industry as an answer to the quickly changing technology needs of clients. Over time, the Agile/Scrum method has been adopted by other industries because of the flexibility it gives the project team in adjusting to a changing feature list or other unforeseen circumstances.

Our new Agile/Scrum Essentials for Practitioners course is scheduled to be released later this month. Watch our web site for more information.


The Power of Hadoop


Even in the context of other high-tech innovations, Hadoop went from obscurity to fame in a miraculously short amount of time. It had to…the pressures driving the development of this technology were too great. If you are not familiar with Hadoop, let’s start by looking at the void it is trying to fill.

Companies, up until recently—say the last five to ten years or so—did not have the massive amounts of data to manage as they do today. Most companies only had to manage the data relating to running their business and managing their customers. Even those with millions of customers didn’t have trouble storing data using an everyday relational database like Microsoft SQL Server or Oracle.

But today, companies are realizing that with the growth of the Internet and of self-service, software-as-a-service (SaaS) Web sites, there are now hundreds of millions of potential customers all voluntarily providing massive amounts of valuable business intelligence. Think of storing something as simple as a Web log that records every click of every user on your site. How does a company store and manipulate this data when it is generating potentially trillions of rows of data every year?

Generally speaking, the essence of the problem Hadoop is attempting to solve is that data is coming in faster than hard drive capacities are growing. Today we have 4 TB drives available, which can be assembled in SAN or NAS devices to easily reach 40 TB or maybe even 400 TB volumes. But what if you needed a 4,000 TB, or 4 petabyte (PB), volume? The costs quickly become incredibly high for most companies to absorb…until now. Enter Hadoop.

Hadoop Architecture
One of the keys to Hadoop’s success is that it operates on everyday, commodity hardware. A typical company has a back room with hardware that has long since passed its prime. Using old and outdated computers, one can pack them full of relatively inexpensive hard drives (the total capacity doesn’t need to be the same in each computer) and use them within a Hadoop cluster. Need to expand capacity? Add more computers or hard drives. Hadoop can leverage all the hard drives into one giant volume available for storing all types of data, from Web logs to large video files. It is not uncommon for Hadoop to be used to store rows of data that are over 1 GB per row!

The file system that Hadoop uses is called the Hadoop Distributed File System, or HDFS. It is a highly fault-tolerant file system that focuses on high availability and fast reads, and it is best used for data that is written once and read often. It leverages all the hard drives in the system when writing data, because Hadoop knows that bottlenecks stem from writing and reading against a single hard drive. The more hard drives used simultaneously during the writing and reading of data, the faster the system operates as a whole.

The HDFS file system stores data in file blocks that are spread across all the hard drives available within a cluster. The block size is configurable and optimized to the data being stored. HDFS also replicates the blocks over multiple drives, across multiple computers, and even across multiple network subnets. This allows hard drives or computers to fail (and they will) without disrupting the system. It also allows Hadoop to be strategic about which blocks it accesses during a read: Hadoop will choose to read certain replicated blocks when it determines it can retrieve the data faster using one computer over another. Hadoop analyzes which computers and hard drives are currently being utilized, along with network bandwidth, to strategically pick the next hard drive to read a block from. This produces a system that is very quick to respond to requests.
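The mechanics can be sketched roughly as follows (the block size, replication factor, and round-robin placement here are simplifications invented for illustration; real HDFS uses much larger blocks and a rack-aware placement policy):

```python
# Toy sketch of HDFS-style block placement: split a file into
# fixed-size blocks, then place each block's replicas on distinct
# nodes. (Sizes and the placement policy are simplified.)
BLOCK_SIZE = 64          # bytes here; HDFS blocks are tens of megabytes
REPLICATION = 3
NODES = ["node-%d" % i for i in range(5)]

def place_blocks(data):
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    placement = {}
    for b, _ in enumerate(blocks):
        # Round-robin stand-in for HDFS's rack-aware placement policy.
        placement[b] = [NODES[(b + r) % len(NODES)] for r in range(REPLICATION)]
    return placement

layout = place_blocks(b"x" * 200)   # 200 bytes -> 4 blocks
print(layout[0])  # ['node-0', 'node-1', 'node-2']
```

Because every block lives on three different nodes, a reader can pick whichever replica is on the least-busy machine, and a failed node loses no data.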

MapReduce
Despite the relatively odd name, MapReduce is the cornerstone of Hadoop’s data retrieval system. It is an abstracted programming layer on top of HDFS, responsible for simplifying how data is read back to the user. It has a purpose similar to SQL’s in that it allows programmers to focus on building intelligent queries without getting involved in the underlying plumbing responsible for implementing or optimizing them. The “Map” task sorts and filters the requested information into intermediate results, and the “Reduce” task summarizes that data—for example, counting or summing certain columns.

These two tasks are analyzed by the Hadoop engine and broken into many pieces (a divide-and-conquer model), which are all processed in parallel by individual worker nodes. The result is the ability to process petabytes of data in a matter of hours.
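The classic illustration is a word count. The sketch below simulates both phases in a single Python process, so it is purely illustrative: map emits (word, 1) pairs, and reduce sums the counts per word. In a real cluster, the map calls and the per-key reduces each run in parallel on different nodes.

```python
from collections import defaultdict

# Map phase: each input record becomes a list of (key, value) pairs.
def map_phase(line):
    return [(word, 1) for word in line.split()]

# Shuffle: group values by key (the framework does this between phases).
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: summarize each key's values -- here, a simple sum.
def reduce_phase(key, values):
    return key, sum(values)

lines = ["big data big deal", "big data"]
mapped = [pair for line in lines for pair in map_phase(line)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'big': 3, 'data': 2, 'deal': 1}
```

Nothing in the map or reduce functions depends on any other record, which is exactly what lets the framework scatter them across thousands of workers.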

MapReduce is a programming model originally developed at Google that has since been ported to many programming languages; Hadoop provides a popular open source implementation. You can find out more about MapReduce by visiting http://mapreduce.org.

In my next post, I’ll take a look at some of the other popular components around Hadoop, including advanced analytical tools like Hive and Pig. In the meantime, if you’d like to learn more about Hadoop, check out our new course.

Apache Hadoop, Hadoop, Apache, the Apache feather logo, and the Apache Hadoop project logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and other countries.


Hadoop and Power Pivot courses in the works

Doug Ortiz, Power Pivot instructor

We here in the LearnNowOnline production department have been busy with back-to-back shoots for two popular technologies that help you manage and analyze your data.

Barry Solomon was here covering Hadoop. Hadoop is an open source framework used to store and manage large data files. By large, I mean data files that get into terabytes in size. With the many new ways of collecting data these days, it becomes more and more important that companies have a way to manage these large amounts of data. In the course, Hadoop works from a VM that is then used to connect to the data files. The course we’re working on will cover the “why” and “how” of Hadoop.

Last week we began production on our Power Pivot fundamentals courses with Doug Ortiz. Once you have all that data mentioned above, how do you use it effectively? Power Pivot helps a company analyze that data. It is an add-in for Excel and can also be run in a SharePoint environment. Power Pivot gives users the ability to import millions of rows of data from several database sources. Once the data is in Excel, a company can then use Excel’s tools to sort the data and apply analytical capabilities such as Data Analysis Expressions (DAX).

Our new courses for Hadoop and Power Pivot are scheduled to be released in July. Watch our web site for more information.


Cross Platform Development with Xamarin


This week Wally McClure presented the webinar Cross Platform Development with Xamarin. He shared great information about using Xamarin to create mobile applications for iPhones and Android-based phones using Portable Class Libraries. If you missed the webinar and are interested, you can view the replay now. In fact, recordings of all of our past webinars are now available on our web site on our webinar archive page – check it out!

Our next free webinar, Getting Started with AngularJS with John Culviner, will take place on Wednesday, July 9th. This session will cover why AngularJS is quickly becoming one of the most popular frameworks for building HTML and JavaScript applications. Register now


Making Sense of One ASP.NET


Last week we held our Making Sense of One ASP.NET webinar, presented by expert Mike Benkovich. If you missed the live event, you can watch the recording now. In this session, Mike took us through the new features of .NET 4.5.1 and how you can use them to build responsive, connected, and modern web solutions with ASP.NET. Thanks for a great event, Mike!

Our next webinar, Cross Platform Development with Xamarin with Wallace McClure, will be held on Wednesday, June 25th at 11:00 a.m. CDT. Register now


8 Key Players in Your SharePoint Rollout, Part 2


In my 5/12/2014 post I took a look at one of the main reasons many SharePoint installations fail—lack of user buy-in. One of the best ways to get buy-in is through SharePoint education. Then in my 5/29/2014 post, I began to look at some of the primary roles within a company that are involved in planning and implementing SharePoint. I covered how targeted and structured training within these roles can create an environment where communication can flow freely, resulting in SharePoint deployments with a high rate of success.

In this post, let’s take a look at the remaining roles within a typical SharePoint deployment, and why they also need a solid understanding of SharePoint in order to obtain buy-in and thereby take the steps necessary to ensure a high level of success.

Developers

Developers are given the task of implementing the business logic that controls the document flow within SharePoint. Typically this should be the most obvious place to spend training dollars, but surprisingly many companies don’t believe it necessary. They feel SharePoint development is no different from any other Web development, so why bother? Unbeknownst to them, they have now greatly increased their chances of stepping on one of the biggest landmines in SharePoint deployment—code from an uneducated developer. SharePoint provides a very powerful framework that gives developers a huge amount of leeway in how they can extend it. Not taking the time to understand the pros and cons of all options can jeopardize the security, stability, and maintainability of a SharePoint installation.

SharePoint can also suffer from poor coding practices. There are many development tools and concepts that can be leveraged to extend SharePoint, from C# to MVC, from JavaScript to Entity Framework. Each area can introduce a weak spot if developers are not up to speed on the latest coding practices or versions. Companies that want to maximize their chance of a successful deployment should make sure that their development teams have the right knowledge so they can make the best decisions and build components and workflows that are rock solid.

Designers

Depending on the size of the company, the design of the SharePoint site might be controlled by a team other than developers. Designers are responsible for the look and feel of the site and likely do not have a strong programming background. They may control areas like images, color, fonts, logos, layout, and branding that are implemented throughout the site.

Since part of the success of any SharePoint deployment is getting your employees to use it, attention to design and the user experience cannot be overlooked. Your design team needs to become familiar with SharePoint and understand how people will use it, so they can then design a solution that is easy to use and increases productivity. Any solution that creates a burden on performing even the simplest of tasks will not be adopted.

Administrators

Another key role in the deployment of any SharePoint installation is the administrator role. This person is the infrastructure guru who is ultimately responsible for allocating internal resources and installing all the services necessary to get SharePoint up and running. The administrator will, of course, be guided by the detailed plans laid out by the infrastructure architect. Clearly this is a role that needs to have a firm understanding of SharePoint. Bad decisions by the administrator could lead to security breaches, loss of documents, degraded performance, and/or site outages. Each of these could break the trust of its users, leading to a slow adoption curve or even no adoption at all.

Site Owners

Once SharePoint is installed and operational, the task of configuring SharePoint falls to the site owner. In many smaller installations, the site owners and champions will be the same person. Since the champion role requires a much deeper understanding of SharePoint, and therefore much more training, many larger companies may elect to limit the number of champions to what they need, and instead have additional site owners.

To make SharePoint more manageable, companies will break up SharePoint in many ways (by department, region, floor, rolling dice, etc.) since it is impractical for one person to manage it at the global level. By dicing the site up into pieces, individual site owners can customize the look and feel, as well as security, to meet the direct needs of that group.

Site owners are like mini-administrators. They have full control over their little piece of SharePoint and are responsible for creating and managing their site or sites. This may include the type of templates and document libraries used, as well as creating users and assigning access rights. There are still needs that would require going to the company administrator…for example, if their site runs low on storage space.

Even at this level, education and training is very important because these site owners need to understand how to do the tasks necessary so their users have a positive and engaging experience. This is the last group to influence SharePoint before it goes live.

Power Users and Business Users

Now that your SharePoint is live, the education needs don’t stop. You’ll likely have hundreds or even thousands of employees who can now take advantage of the power of SharePoint. But will they use it if they don’t understand it? Often users tend to get intimidated by SharePoint. They have been doing things one way for so long that it is difficult to trust that a new way would be better. The quickest way to gain trust and increase engagement with SharePoint is through training—successful SharePoint deployments always include training for their general users. That way they can feel comfortable working in this new environment right off the bat, and can more easily trust that this new way of doing things will be a better and more productive way than before.

In Summary

Creating a successful SharePoint deployment requires a conscious buy-in to the solution that starts from the top of the organization chart all the way down. Any member of the team who doesn’t understand or doesn’t trust the solution will be a chink in the armor. Too many chinks will cause the solution to stall, falter, or fail. To get everyone’s buy-in, the best prescription is education. By training the top, you can be sure that the design and necessary resources will meet the needs of the business. By training architects, developers, and administrators, you can be assured that the installation is rock solid and performs well. By training at the user level, you can be confident that the solution will be adopted and the company will reap the benefits.

Finally, I want to give a shout-out to one of our indispensable SharePoint gurus and instructors, Philip Wheat, who assisted me in putting together some of the content for this blog series.


Training or reference material?


Just like all of us learn in different ways, each of us also has different uses and needs for our technology training resources. When it’s time to find a training or reference tool, what is right for you?

Do you go for the least expensive and save a few bucks? Do you look for volume and choose the solution with the most hours of training? Do you choose a training tool or a reference tool…or can one solution give you both?

In most cases, the cheapest or largest learning solution will not be the answer you need. For example, if you need to learn a technology from start to finish, a YouTube video isn’t going to get the job done. If you need some quick help with a project, taking a week-long class is going to be overkill.

I have heard many people say, “I paid for a training solution, but once I started using it, it wasn’t able to solve my problem.” That’s why I think it’s always best to look beyond price and volume of training initially, and instead take a few moments to clarify your needs and what issues you are trying to solve. Ask yourself these simple questions:

  • What is your learning style? (see my April 28 and May 15 posts)
  • Are you new to development/programming, or are you experienced?
  • Are you looking to solve problems or learn specific new skills?
  • Do you need to learn a new technology from scratch?

Your answers to these questions are going to point you in the direction of the right learning resource. Ideally the ultimate learning tool has it all. It supports your learning style(s); it’s versatile for all skill levels; and it’s flexible enough to serve as both an in-depth training resource for new skills and technologies, as well as on-the-spot reference material to help you with your day-to-day questions.

There are a lot of training tools out there, but for Developers and IT Pros, I truly think we’ve got the best. We have built our learning material to be an excellent resource no matter what your learning style, and have designed our solution so you can learn a topic or technology from intro to advanced, as well as use it as ongoing reference material. And as your needs change (you become more experienced, you have new projects and technologies to tackle, you get a new job), our up-to-date content and versatile delivery will continue to help you get the job done.

Give us a try and see what you think.

In future blogs I will discuss what makes the best reference material and the best training material. Stay tuned…

About the Author

Craig Jensen is the President and CEO of LearnNowOnline. Craig has led the company’s change from instructor-led classroom training, to self-study CD/DVD training, to the award winning online learning solutions offered today. Craig is passionate about helping individuals and businesses of all sizes solve their problems through practical learning and technology. He is involved in setting direction for the company, including selecting training content for development with resources to support all learning styles. He is also involved in The CEO Roundtable organization in the Twin Cities as well as the Minnesota High Tech organization. In his spare time, Craig loves to travel, golf, and partake in water sports of all kinds.
