Tag Archives: .net

What are the Most In-Demand Programming Languages?

If you're a programmer, it's a good idea to occasionally read up on job reports that illustrate the most popular languages on the job market. Doing so accomplishes two things: one, it allows you to keep pace with your fellow programmers; and two, it may make you consider a change in your area of proficiency. C#, for example, is in demand, and as a result you may consider a C# video tutorial to help get you up to speed.

That said, don’t take these reports as the entire gospel truth. For example, look at where the data is collected. Some studies, for example, look at programming job openings on Twitter. This is all well and good, but as this article notes, Twitter is disproportionately used by start-ups. In other words, blue chip companies post their programming jobs elsewhere, which can skew the results.

This reality underscores the importance of knowing where you want to be. If you're aiming for a start-up, then you probably won't be surprised if like-minded surveys tout JavaScript. But if you're angling for a blue chip company, technologies like .NET are likely in greater demand.


Thousands of developers worldwide use LearnNowOnline to gain the technical skills they need to succeed on the job and advance their career.

ObjectContext’s SavingChanges Event

ObjectContext’s SavingChanges event lets you validate or change data before Entity Framework sends it to the database. Entity Framework raises this event immediately before it creates the SQL INSERT, UPDATE, and DELETE statements that will persist the changes to the database.

One of the issues with using Entity Framework with SQL Server is that the range of DateTime values in .NET is different from that of the DateTime type in SQL Server. So when you have a non-nullable DateTime field in a table, you have to assign a value to the corresponding property in an entity. This is exactly the case with the Modified property, associated with the ModifiedDate field in all of the tables of the AdventureWorksLT database. If you create a new instance of any entity and don't explicitly set a valid value for the Modified property, .NET sets its minimum value, which doesn't work for a SQL Server DateTime type. (It would, however, work with SQL Server 2008 and later's DateTime2 type, but AdventureWorksLT doesn't use this type for its date/time fields.) So either you explicitly set the Modified property to DateTime.Now when you create or update any entity, or you'll get a rather cryptic exception about some unnamed DateTime type being out of range.
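You can see the range mismatch concretely by comparing the minimum values of the two types. This quick check uses SqlDateTime from System.Data.SqlTypes, which mirrors SQL Server's datetime range:

```csharp
using System;
using System.Data.SqlTypes;

class DateTimeRangeDemo
{
    static void Main()
    {
        // .NET's DateTime can represent dates back to year 1...
        Console.WriteLine(DateTime.MinValue.Year);          // 1
        // ...but SQL Server's datetime type bottoms out at 1753.
        Console.WriteLine(SqlDateTime.MinValue.Value.Year); // 1753
    }
}
```

An entity whose Modified property is left at DateTime.MinValue therefore can't be saved to a datetime column.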

NOTE: It is interesting that while the ModifiedDate fields in the database have a default value of GETDATE()—the T-SQL equivalent of DateTime.Now—there is nothing that updates the value when a record is changed. So you'll need to supply a value for the Modified property when creating a new instance of an entity, to avoid the datetime overflow problem, and when modifying an entity, so that the field reflects the last time any data in the record changed.

This is a perfect use of the SavingChanges event. The one trick is that the view entities—CategoryList, ProductAndDescription, and ProductModelCatalogDescription—don't have a Modified property, so the code has to be selective about which entities it touches. SavingChanges makes this easy through the way you access the set of inserted, updated, or deleted entities: the ObjectContext's GetObjectStateEntries method. The method takes one or more EntityState enumeration values OR'd together, and returns a collection of entities in the selected states. You can select the Added, Deleted, Modified, and Unchanged states, but in this case you're only interested in Added and Modified; there is no reason to update Modified for an entity you are deleting or one that remains unchanged.


TIP: There is one other EntityState enumeration value, Detached. This state means that ObjectContext isn’t managing state for that entity. This value is not relevant in the SavingChanges event because there won’t be any work to do for detached entities.

The AWLT project in AWLT.sln has a Context.cs code file with a partial AWLTEntities class to customize the context object. The class implements the following AWLTEntities_SavingChanges method. The code uses the ObjectStateManager property of ObjectContext to get a reference to the ObjectStateManager for the context. It then uses the GetObjectStateEntries method, with the Added and Modified EntityState enumeration values, to populate the entities object with a collection of modified entities. Then the code loops through the collection, and uses reflection to update the value of the Modified property of each entity. The update code is wrapped in a try block and a do-nothing catch just in case an entity slips through that doesn’t have a Modified property.
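The archive doesn't reproduce the method itself, but the reflection technique it describes can be sketched in isolation. The helper below is an assumption-laden sketch, not the actual AWLT code: it sets a Modified property via reflection and quietly skips any entity that lacks one, which is the behavior described for the view entities.

```csharp
using System;

static class ModifiedStamper
{
    // Set the entity's Modified property to the current time.
    // The call is wrapped in a try block with a do-nothing catch,
    // so entities without a writable Modified property (such as
    // the view entities) are simply skipped.
    public static bool Stamp(object entity)
    {
        try
        {
            var prop = entity.GetType().GetProperty("Modified");
            prop.SetValue(entity, DateTime.Now, null);
            return true;
        }
        catch
        {
            // No writable Modified property; nothing to do.
            return false;
        }
    }
}

// A demo entity with the property the stamper looks for.
class DemoProduct
{
    public DateTime Modified { get; set; }
}
```

In the real handler you would call something like Stamp on the Entity of each ObjectStateEntry returned by ObjectStateManager.GetObjectStateEntries(EntityState.Added | EntityState.Modified), and wire the handler up in OnContextCreated with SavingChanges += AWLTEntities_SavingChanges.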

You also have to wire up the SavingChanges event handler, and the OnContextCreated partial method is the perfect place to do that. The following code in the partial AWLTEntities class takes care of this task in the code.

The ModifyEntities method in Program.cs puts the SavingChanges event handler to use. It executes a query that retrieves all customers named Ann, and updates each one with a Jr. suffix. Then it saves the changes to the database, and turns around and resets the suffixes to null (which is what they all were originally). It then refreshes the collection of customers, just to make sure nothing was cached in memory, and writes out each customer, including the new value of the Modified property. Figure 1 shows the result of running the application.

 

Figure 1. The result of running the ModifyEntities method.

 

Learning A New Programming Language


 

Microsoft's .NET languages are some of the easiest to learn and some of the easiest to use to develop fully functional software applications. Visual Basic has long been used as a training language: it's easy to learn, but not particularly robust. C++, on the other hand, is the granddaddy of the modern programming language. Object-oriented, with sophisticated memory management, it has long been a professional language of choice for software development. But it's very difficult to learn.

A popular choice is Visual C#. It combines the object-oriented power of C++ with the simplicity of Visual Basic. It’s very easy to learn.

There are several ways to learn a new programming language. Programming books come loaded with samples and often include actual code on a CD or as downloads. Don't have enough time for a book? Just choose a Visual Studio 2010 tutorial video and start building some software.


Just When “Metro” Started Making Sense…


 

Microsoft has always had a knack for branding, and then rebranding; shifting focus, and then rolling it back. I remember back, oh, about a decade or so, when Microsoft was first pushing the ".NET" term. It seemed that everything was becoming ".NET this" and ".NET that". Remember .NET Servers? It was clear Microsoft got slightly carried away with the naming, but in the end, once the dust settled and things got cleaned up, the .NET term was cemented and exists to this day. Furthermore, it was important to have because it represented a clear shift in Microsoft's direction, and if you didn't adapt to the new development practices, you were left on the side of the road to perish.

So what about the term "Metro"? Microsoft had a vision, dating back to before the Zune and Xbox, for the future of its UI. It was driven by key cultural changes going on in the marketplace: touch-centric computing, constant connectivity, and increasing mobility. Given the magnitude of the changes required for Windows to meet this vision, the term "Metro" was created to provide a distinct differentiator between the old and the new. Microsoft's formal definition of Metro is:

Metro is our design language. We call it metro because it’s modern and clean. It’s fast and in motion. It’s about content and typography and it’s entirely authentic.

With this new mantra, Microsoft introduced additional terms to further define the Metro experience.

These terms were all used to describe any application written to run on WinRT; as opposed to an application written to run on Windows 7 or the Windows 8 desktop (your basic Win32 app). For more than a year, Microsoft painstakingly worked at getting developers and users alike to understand this new vision. Then, moments before RTW, Microsoft changed the game. Metro, it turned out, was merely a code name and not the official name going forward… What? Huh?

Microsoft was quick to release the official terms going forward:

  • Metro style applications became Windows 8 style applications
  • Metro design became Windows 8 design
  • Metro user interface became Windows 8 user interface

So what does this mean? When describing your application, you will need to be clear about the platform it was built to run on. When you call it a Windows 8 application, you are identifying it as able to run on WinRT. If it was built for the desktop, you must refer to it as a Desktop application. Make sense? Keep in mind that since Windows 8 apps run on WinRT, they will also run on Microsoft's new Surface tablets. So essentially, building Windows 8 applications is analogous to building Microsoft Surface tablet applications.

As with most Microsoft branding rollercoaster experiences, I'm expecting that most of the bumps will be ironed out over the next year—it's all part of the ride. Which brings up another question: what happens when Windows 9 ships? Will we still use the term Windows 8 user interface, or will it change? Hmmm.

The .NET Objects 411


 

ADO.NET has long provided a variety of generic data objects you can use to access data in a variety of data stores. A few of the most common objects that you’ve probably used since the initial debut of the .NET Framework in the early part of the millennium include:

  • SqlConnection and OleDbConnection: Represent a connection to a SQL Server database and an OLE DB data source, respectively.
  • SqlDataAdapter and OleDbDataAdapter: Represent a set of data commands and a database connection, used to read and update a database.
  • DataSet: An in-memory data cache with one or more DataTables, used to access and update data as a generic data container with various data-related behaviors.
  • DataTable: An in-memory cache of a single table of data, kind of a lightweight DataSet.
  • SqlDataReader and OleDbDataReader: Provide a forward-only stream of data from a SQL Server or OLE DB data source. These objects don't actually "contain" data, but provide a fast way to retrieve data for immediate processing or caching in other objects in memory.

Together, these objects provide a means for accessing data in a data store, manipulating it, and persisting updates back to the data store. The two data containers, DataSet and DataTable, can hold any two-dimensional data (rows and columns) that you read from a data store or generate in memory as the application executes. They have properties and behaviors that let you perform tasks such as getting a list of the fields in a DataTable (whether a standalone object or part of a DataSet), reading each row of data as a whole or field by field, comparing updated field values in a row to the original values read from the data store, tracking changes in values and new and deleted rows, and saving changes back to the database. These objects are remarkably versatile, and developers have built untold numbers of applications using them.
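As a reminder of what working with these generic containers looks like, here is a small self-contained example; the table and column names are invented for illustration:

```csharp
using System;
using System.Data;

class DataTableDemo
{
    // Build a standalone DataTable in memory and populate it.
    public static DataTable BuildCustomers()
    {
        var customers = new DataTable("Customer");
        customers.Columns.Add("CustomerID", typeof(int));
        customers.Columns.Add("FirstName", typeof(string));
        customers.Rows.Add(1, "Ann");
        customers.Rows.Add(2, "Bob");
        return customers;
    }

    static void Main()
    {
        foreach (DataRow row in BuildCustomers().Rows)
        {
            // Note the string-based field lookup and the cast from
            // System.Object that this style of access requires.
            int id = (int)row["CustomerID"];
            string name = (string)row["FirstName"];
            Console.WriteLine($"{id}: {name}");
        }
    }
}
```

In a real application the DataTable would typically be filled by a SqlDataAdapter rather than populated by hand.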

As useful as these objects are, there are a number of problems that developers had to overcome, making data code unnecessarily complex, including:

  • Strong coupling between an application and database: The application code that reads data using a DataAdapter or DataReader needs to know the structure of the data coming from the database in response to some kind of query. This means that any change to the database schema requires a change to your application code, and the problem is magnified as more applications use the database. There are ways to mitigate this issue, but they mean more complex code.
  • Loose typing: DataSets and DataTables primarily rely on strings to identify the table (recordset) and field you want to access or update. The objects have to be able to handle any kind of data contained within a field—strings, dates, numbers of various kinds, etc.—and therefore return generic objects of type System.Object. You often have to perform an expensive conversion to a specific, strong data type before using it, another way that the application is strongly coupled to the database. DataReaders have type-specific methods to retrieve specific data types from a field, which again causes strong coupling.
  • Object interactions: A DataSet is able to contain relationships that define how the data in various DataTables are related to each other, something like what foreign keys accomplish in a relational database. But extracting related data requires some often complex, unintuitive code, and sorting and filtering can be a challenge as well.

As you can see, there are various problems with using the generic ADO.NET objects for data access, even though developers have used these objects for years. The problem is that every one of those non-trivial applications has had to deal with these objects’ limitations, over and over and over again. But for about a decade, it was the best we had, short of manually developing custom entity objects that encapsulate all the data access code but require massive amounts of custom code.

Entity objects are a big improvement over generic data objects. Instead of instantiating a DataSet object that contains DataTables for Customers, Orders, and OrderDetails, you can instantiate Customer, Order, and OrderDetail objects with custom properties relevant to each entity, along with navigation properties that provide easy access to related entities, and behaviors germane to the entity types. An application can then make use of these entity objects in a very object-oriented fashion, making it far easier to work with the underlying data, all nicely encapsulated in, for example, a customer object that acts the way a native customer object should.
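A sketch of what such entity classes might look like follows. These simplified, hand-written classes are illustrative only, not Entity Framework's generated code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified entity classes; a real ORM generates richer versions
// of these and fills them from the database.
public class Customer
{
    public int CustomerID { get; set; }
    public string FirstName { get; set; }
    public DateTime Modified { get; set; }

    // Navigation property: related orders, no manual joins needed.
    public List<Order> Orders { get; } = new List<Order>();

    // Behavior germane to the entity type.
    public decimal TotalSpent() => Orders.Sum(o => o.Total);
}

public class Order
{
    public int OrderID { get; set; }
    public decimal Total { get; set; }
}
```

With these in place, code like `ann.Orders.Add(new Order { Total = 25m })` followed by `ann.TotalSpent()` reads naturally, with no string-based field lookups or casts.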

The interface of an entity object can closely mirror the tables and fields in a relational database, such as with an entity object for each relevant table and properties for all or most fields in the underlying table. Or the object can be completely different with, perhaps, a single entity object that uses data from multiple tables in the database, with properties that bear no resemblance to the fields in the various underlying tables. Or something between these two extremes, whatever makes the entity objects most useful for applications that use them. No matter how you design the objects, somewhere deep within them the objects have to have a mechanism for mapping the database schema to the properties of the entity objects. Entity Framework has all the features it needs to manage this mapping for you.

There are many benefits to using entity objects instead of generic data objects:

  • Strong typing: Each property of an entity object can be the specific type needed for the data it exposes. Strings can be strings, dates can be dates, numbers can be whichever of any of several types to most closely match the data, and so on. You no longer have to do any type conversions when using data in the application; the entity object takes care of that when reading and writing the data from and to the database.
  • Compile-time checking: Related to strong typing, the generic data objects mostly return data as objects, so your application code has to manage conversion and you have to make sure it's right. With entity objects, the compiler can take care of this task for you, dramatically reducing the potential for runtime bugs.
  • Persistence ignorance: You can design these entity objects to encapsulate the persistent data store so that the application doesn’t need to know anything about how and where data is stored, or even its structure. This is known as persistence ignorance, in which information about persistence is segregated from business logic. It disconnects the data storage from the application, making both resilient to changes in the other, and is a major benefit for using entity objects.
  • Productivity: A benefit of both strong typing and compile-time checking, as well as the inherent design of entity objects, is that application code is far easier and faster to write. You don’t have to worry about writing and rewriting code to connect to the database or manage the flow of data between objects and the database, as you do with the generic data objects. Even better, IntelliSense in Visual Studio can assist you in writing clean, correct code, taking advantage of the implementation of business objects.

Entity data objects provide a far richer way to access data and manage its flow between the application and database than the generic ADO.NET data objects. You can do a better job modeling the real world with them, and they can have behaviors that automatically know how to perform various actions.

4 Benefits of Object-Relational Mapping (ORM)

ldn-bbluetry

Object-relational mapping, in the purest sense, is a programming technique for converting data between incompatible type systems: specifically, between a relational data store and the objects of an object-oriented programming language. You can use an ORM framework to persist model objects to a relational database and retrieve them, and the ORM framework will take care of converting the data between the two otherwise incompatible forms. Most ORM tools rely heavily on metadata about both the database and the objects, so that the objects need to know nothing about the database and the database doesn't need to know anything about how the data is structured in the application. ORM provides a clean separation of concerns in a well-designed data application, and the database and application can each work with data in its native form.

TIP: Nicknames and acronyms used for “object-relational mapping” include ORM, OR/M, and O/R mapping. Although ORM seems to be the term most commonly used in the .NET world, you’ll often see the others in books and articles. We’ll stick with ORM, mostly because it is the easiest to type!

The key feature of ORM is the mapping it uses to bind an object to its data in the database. Mapping expresses how an object and its properties and behaviors are related to one or more tables and their fields in the database. An ORM uses this mapping information to manage the process of converting data between its database and object forms, and generating the SQL for a relational database to insert, update, and delete data in response to changes the application makes to data objects.
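To make the idea of mapping concrete, here is a deliberately tiny illustration of mapping metadata driving SQL generation. Real ORMs such as Entity Framework keep far richer metadata (types, keys, relationships) and generate SQL internally; all the names below are invented:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class CustomerMapping
{
    // Property-to-column mapping for a hypothetical Customer entity.
    public const string Table = "SalesLT.Customer";

    public static readonly List<(string Property, string Column)> Columns =
        new List<(string, string)>
        {
            ("Id",       "CustomerID"),
            ("Name",     "FirstName"),
            ("Modified", "ModifiedDate"),
        };

    // Generate a parameterized UPDATE statement from the mapping,
    // keeping the application code free of table and column names.
    public static string UpdateSql()
    {
        var sets = Columns
            .Where(c => c.Property != "Id")
            .Select(c => $"{c.Column} = @{c.Property}");
        return $"UPDATE {Table} SET {string.Join(", ", sets)} " +
               "WHERE CustomerID = @Id";
    }

    static void Main()
    {
        Console.WriteLine(UpdateSql());
    }
}
```

If the database schema changes, only the mapping needs to be updated; the generated SQL follows automatically.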

ORM performs the rather amazing task of managing the application’s interactions with the database. Once you’ve used an ORM’s tools to create mappings and objects for use in an application, those objects completely manage the application’s data access needs. You won’t have to write any other low-level data access code. Strictly speaking, you could still write low-level data access code to supplement the ORM data objects, but this adds a significant layer of complexity to an application that we’ve rarely found necessary when using a robust ORM tool. It is better to stick to one or the other and keep the application simpler and more maintainable.

There are a number of benefits to using an ORM for developing database applications; here are four:

  1. Productivity: The data access code is usually a significant portion of a typical application, and the time needed to write that code can be a significant portion of the overall development schedule. When using an ORM tool, the amount of code is unlikely to be reduced—in fact, it might even go up—but the ORM tool generates 100% of the data access code automatically based on the data model you define, in mere moments.
  2. Application design: A good ORM tool designed by very experienced software architects will implement effective design patterns that almost force you to use good programming practices in an application. This can help support a clean separation of concerns and independent development that allows parallel, simultaneous development of application layers.
  3. Code Reuse: If you create a class library to generate a separate DLL for the ORM-generated data access code, you can easily reuse the data objects in a variety of applications. This way, each of the applications that use the class library need have no data access code at all.
  4. Application Maintainability: All of the code generated by the ORM is presumably well-tested, so you usually don’t need to worry about testing it extensively. Obviously you need to make sure that the code does what you need, but a widely used ORM is likely to have code banged on by many developers at all skill levels. Over the long term, you can refactor the database schema or the model definition without affecting how the application uses the data objects.

One potential downside to using an ORM is performance. It is very likely that the data access code generated by the ORM is more complex than you’d typically write for an application. This is because most ORMs are designed to handle a wide variety of data-use scenarios, far more than any single application is ever likely to use. Complex code generally means slower performance, but a well-designed ORM is likely to generate well-tuned code that minimizes the performance impact. Besides, in all but the most data-intensive applications the time spent interacting with the database is a relatively small portion of the time the user spends using the application. Nevertheless, we’ve never found a case where the small performance hit wasn’t worth the other benefits of using an ORM. You should certainly test it for your data and applications to make sure that the performance is acceptable.

There are a number of ORM tools available for .NET applications (see the “List of object-relational mapping software” topic in Wikipedia in the .NET section for an exhaustive list). Before Microsoft introduced Entity Framework, the open source NHibernate was probably the dominant ORM tool. NHibernate is ported from Hibernate, a Java ORM tool that has been available for years. But because Microsoft now bundles Entity Framework with the .NET Framework and incorporates extensive support for it in Visual Studio, Entity Framework has become the dominant ORM in the Microsoft development world.

Start training on .NET!