June 2008 - Posts

Random Grammar/Style Question
26 June 08 03:04 PM | Scott Mitchell

When writing my Security Tutorials for www.asp.net, I often wrote sentences like the following: “To log in to the site, ...” Sometimes, though, I'd write it as, “To log into the site, ...” and other times I'd use, “To login to the site...”

I can't say with certainty whether any one of these three is grammatically correct or whether any is incorrect. My guess is that in the sentences above, “To login to the site...” is incorrect because, according to Dictionary.com, log in is a verb, while login is a noun. In other words, you would only use the word login in a sentence like, “Your login consists of a username, password, and PIN.” That leads me to believe that the correct form is, “To log in to the site,” but I'm sure someone out there can make a case for “To log into...”

In any case, I should have picked a particular approach and used it consistently, rather than varying styles across the tutorial series. If it's any consolation, I assure you that the variance was purely subconscious.

All that being said, what do you prefer?

  • To log in to the site...
  • To log into the site...
  • To login to the site...

The more I read them and think about it, the more I believe the last one to be grammatically incorrect.

Another thing I noticed is that when I read them there is a very subtle pronunciation difference among the three sentences, although I don't know how clearly that difference translates into the spoken word. When pronouncing login I run the “g” and “in” together, like I'm speaking 1.5 syllables instead of two. “Log in to” is pronounced as three distinct words with the briefest of pauses between each, whereas “log into” is pronounced as two distinct words: “log” and “into,” with no pause between “in” and “to” (again, almost as if blurring the two words together into 1.5 syllables).

The Economics Behind Writing Subsequent Editions (for Computer Trade Books)
24 June 08 05:06 PM | Scott Mitchell

The economics behind the college textbook must be interesting. I've not written any textbooks, so my comments here are based more on assumption than knowledge, but what always intrigued me - whilst a college student, at least - was how authors would release different editions of books and how teachers would require students to buy the most recent edition (or whichever edition the class was using). I can understand updated editions for cutting-edge fields, like the biosciences and computer-related topics, but has the knowledge or instruction of introductory-level calculus changed any since Newton and Leibniz's time? And if not, then why does a book like Calculus and Analytic Geometry - a high school-level calculus book - have nine editions? What has changed so significantly since the eighth edition to warrant a ninth?

As a student, I was always envious of those authors who wrote a new edition. I figured it must be easy money.

  1. Correct a few typos from the previous edition,
  2. Replace some of the sample problems,
  3. Have professors (or school board administrators) require that all students use the most recent version, and
  4. PROFIT!

Sure, writing the first edition might take an inordinate amount of time and energy and effort, thereby rendering the profit per unit time less than ideal, but once you got past writing the initial edition, each subsequent edition had an incredible ROI. Not only that, but with textbooks selling in the $50-$150 range (compared to the $9.95 you pay for a mass market Stephen King novel), those professors must be raking in the dough.

(As an aside, I'd be interested in any insight from authors or publishers or agents who have experience in this niche market. What are the royalties like for professors? Are they in the 10-15% range, like for computer trade authors, or are they higher (or lower)? How many copies does a successful textbook sell? I imagine that writing textbooks is like any other profession - you have a very small handful of extremely successful people - e.g., the authors whose book becomes the de facto standard for teaching a common subject across many universities or high schools and who can profit handsomely from future editions - but the vast majority of textbook authors could have earned more had they worked at a restaurant for the hours they spent writing, editing, and reviewing their book. In other words, I assume it's very similar to the field of writing computer trade books, except that in computer trade book land, an 'extremely successful' author likely could make more money working a regular 9-5 job in industry than she could writing books full time.)

Despite having authored seven books on ASP and ASP.NET, I haven't really had much opportunity to work on 'second editions.' The challenge with computer technologies is that they change so radically and so quickly that the 'second edition' is really about a brand new technology with many new and exciting features, which requires virtually rewriting the previous edition in its entirety. For example, my first two books were on ASP, my third on ASP.NET. ASP and ASP.NET are two very different technologies, and they differ in fundamental and important ways. I don't think I've seen a single book with a consistent title gracefully move from ASP to ASP.NET.

But what about ASP.NET? We've had five versions of the .NET Framework - 1.0, 1.1, 2.0, 3.0, and 3.5 - but only two of the transitions involved enough changes to warrant a new edition (namely, 1.0/1.1 to 2.0 and 2.0 to 3.5). And the changes from 1.x to 2.0 were profound enough that new editions spanning those versions required many new chapters. Granted, the move from 2.0 to 3.5 was less radical and offered an excellent opportunity for established ASP.NET authors to release a new edition with much less energy and effort than is needed to start a book anew or was required when transitioning from 1.x to 2.0. (I'll have more thoughts on this in a future blog post, when I write about my latest book, Teach Yourself ASP.NET 3.5 in 24 Hours.)

The point of it all is that, at least in the ASP.NET world, writing subsequent editions is not as easy as you might imagine. Yes, it's easier than writing a book from scratch, since you already have an outline down and can reuse certain content, but it's not as easy as (I imagine) authors in other fields have it when producing a new edition. This is something to keep in mind if you're deciding whether to start writing computer trade books. If your plan is to write an initial book at an economic loss and to make up that loss with future editions, chances are you'll need to reevaluate that plan. As I said in my first blog entry on the economics of writing computer trade books, “If your dream is to become a rich man, don't write computer trade books.” :-)

Not NotNorthwind
12 June 08 10:44 AM | Scott Mitchell

Scott Hanselman, one of my favorite bloggers, has expressed his deep disdain for Northwind.

I'm just sick of Northwind. Sick to death of the Northwind Database. You know, this is the Products, Categories, Suppliers, yada yada yada sample database that you've been seeing in Microsoft demos since the beginning of time. (FYI, the beginning of time was about 1997. ;) ) ... When I'm showing some technology that is talking to a Database or to POCO (Plain Ol' CLR Objects) I still need good sample data to pull from. Thus, the Northwind Virus continues. And I hate it with the heat of a thousand suns.

Sure, there are other Microsoft-endorsed sample databases (most notably, AdventureWorks), but Northwind, despite its age and limitations, is still the de facto database for articles, demos, and talks involving Microsoft technologies.

What's wrong with Northwind? Scott focuses on his emotional distaste and doesn't really provide any logical or rational reasons as to why he hates Northwind so passionately. Northwind certainly has its shortcomings - it's at times overly simple (very small amounts of data in each table, for example). The two main shortcomings that get under my skin are:

  • It is, literally, very dated. The date/time values in the database are from the mid-1990s, for instance, which is a little odd and discomforting when demoing an application that allows users to filter orders by date and having to enter sample dates within the year 1996.
  • The images in the Categories table are stored as grainy, low-quality 16-color BMPs that include an OLE header that must be stripped out. I detail these pains in Displaying Binary Data in the Data Web Controls; a rough sketch of the header stripping appears just after this list. The short of it is that the category images are ugly and use a poor image file format for the web with antiquated image quality settings.
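
To make that second gripe concrete, here is a minimal, hypothetical sketch of an ASP.NET HTTP handler that serves up a category image. The handler name, the query string parameter, and the Web.config connection string named Northwind are all invented for illustration; the 78-byte header size is the one discussed in the article linked above.

```csharp
// Illustrative sketch only: serve a Northwind category image, stripping the
// legacy OLE header before the bytes form a valid BMP file.
using System;
using System.Data.SqlClient;
using System.Web;

public class CategoryImageHandler : IHttpHandler
{
    private const int OleHeaderLength = 78;   // size of the legacy OLE header

    public void ProcessRequest(HttpContext context)
    {
        int categoryId = int.Parse(context.Request.QueryString["id"]);

        // Assumes a connection string named "Northwind" in Web.config.
        string connStr = System.Configuration.ConfigurationManager
                             .ConnectionStrings["Northwind"].ConnectionString;

        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
                   "SELECT Picture FROM Categories WHERE CategoryID = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", categoryId);
            conn.Open();

            byte[] raw = (byte[])cmd.ExecuteScalar();

            // Skip the OLE header; the remaining bytes are the actual image.
            context.Response.ContentType = "image/bmp";
            context.Response.OutputStream.Write(
                raw, OleHeaderLength, raw.Length - OleHeaderLength);
        }
    }

    public bool IsReusable { get { return true; } }
}
```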

Scott proposes that the community band together and develop a new sample database:

I suspect, though, that if we (the community) took a few weeks, did some Skype conference calls, assigned some tasks, brainstormed and did it, we could come up with NotNorthwind. The Lazy Web, the Web of Clay Shirkey, .NET Flash Mobs included, could create a sample database, (we can argue about whether to start in the middle or in the db in the first meeting) as well as some good examples of things like NHibernate, LINQ to SQL or Whatever.

I don't know if this makes the most sense or if it's the best use of time and energy and effort. As Steve Smith points out, the reason Northwind 'works' is because virtually anyone who has attended a Microsoft talk knows what Northwind is already. There's no need to spend 5-10 minutes explaining the data model of some new, community-created database. Steve explains:

The first stated requirement for NotNorthWind is this:

  • Complex enough to be called Real World but simple enough that someone could "get it" in 5-10 minutes

That alone is enough for me, as a presenter, to suggest that perhaps this is not a good idea. In the course of such presentations, which usually have 75 minutes or so allocated to them and very little tolerance for going over, I don't have an extra 5-10 minutes per presentation to stop everything and explain what the heck I'm using as my data for this thing. ... Enter NorthWind, the HTTP standard of databases, understood by virtually all Microsoft developers without need for preamble. It just works. With the words, "I'm using Northwind for my database." I now have the complete understanding of 95% of the people in the room - we're all on the same page - and I can continue with the actual point of the presentation or demo, which is not, has not, and probably will never be, "why this database isn't Northwind."

But that isn't Northwind's only selling point. Other benefits include:

  • It has few enough tables to not overwhelm a person new to the data model, yet enough that interesting and real-world examples can still be drawn from it.
  • It has stored procedures and views. Granted, there are only a handful of sprocs and views, and none of them is very interesting, but at least there are some, so demos can use these features if needed.
  • It has Unicode characters (in the Products.ProductName column).
  • It has examples of storing binary data directly in the database in the form of the category images, although the images leave a lot to be desired.
  • There are foreign key constraints in place.
  • It models a common business scenario that everyone can wrap their heads around: products, categories, suppliers, employees, customers, orders, and order details (a quick demo sketch follows this list).
  • And, perhaps most importantly, it is a Microsoft-approved database. In short, Microsoft has the license to use the product names, supplier names, and employee names in Northwind in their literature and technical papers. When reviewing some sort of database viewer for my Toolbox column in MSDN Magazine, I cannot show a screen shot of the tool displaying any old database. Rather, I have to use one of the Microsoft-approved databases: AdventureWorks or Northwind.
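
And that familiarity translates into demos that need essentially no setup. As a concrete illustration - with a connection string that simply assumes a local SQL Server Express instance with Northwind attached - a throwaway console demo might look like this:

```csharp
// Minimal demo sketch: list Northwind's categories in CategoryID order.
// The connection string is an assumption about a local setup.
using System;
using System.Data.SqlClient;

class NorthwindDemo
{
    static void Main()
    {
        string connStr = @"Data Source=.\SQLEXPRESS;Initial Catalog=Northwind;Integrated Security=True";

        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
                   "SELECT CategoryID, CategoryName FROM Categories ORDER BY CategoryID", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Prints Beverages, Condiments, Confections, and so on.
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
            }
        }
    }
}
```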

I'll be honest, I like Northwind. I don't love it, but I don't hate it, and I certainly don't hate it with the heat of a thousand suns. :-) I used Northwind extensively in my Working with Data tutorials, and I use it in LUG talks and in the classroom. I've used it so often I have many of the values memorized. For example, I can recite from memory the categories in order of CategoryID - Beverages, Condiments, Confections, Dairy, Grains, Meat, Produce, Seafood. I know that Chai Tea is the first product, and Chang the second. I know the big boss man is Andrew Fuller. You could say I have a sort of affinity for this database, its products, categories, employees, and customers. Yes, it is far from perfect and could use some updating with regards to the date/time values and the category pictures, but those warts aside, it does a good job at what it was designed to do.

Two New Master Page Tutorials Published
10 June 08 03:46 PM | Scott Mitchell

As I noted in an earlier blog post, in May the first of my Master Pages tutorials went live on www.asp.net. Two new master page tutorials were put online today:

  • URLs in Master Pages [VB | C#] - one challenge with master pages is that the master page and linked resources - image files, hyperlinks, stylesheet files, and so on - may exist in different folders, thereby breaking relative URLs. This tutorial includes tips on how to declaratively and programmatically overcome this challenge.
  • Control ID Naming in Content Pages [VB | C#] - both master pages and ContentPlaceHolder controls introduce a new naming container, which adds additional text to the rendered HTML elements' ids. This introduces challenges when referencing the controls through client-side script, and it also complicates programmatically referencing the controls in the server-side code-behind class. Read this tutorial for workarounds and remedies to these issues; a rough sketch of both workarounds follows this list.
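
The tutorials cover these techniques in depth; purely as a hypothetical illustration (the "MainContent" and "UserNameTextBox" IDs are invented), the server-side workarounds look roughly like this in a master page's code-behind:

```csharp
// Hypothetical sketch of the two workarounds described above. The control
// IDs ("MainContent", "UserNameTextBox") are invented for illustration.
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class Site : MasterPage
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // URLs: the ~ syntax resolves against the application root, so the
        // resulting URL is correct regardless of the content page's folder.
        string logoUrl = ResolveUrl("~/Images/Logo.png");

        // Control IDs: content controls sit inside the ContentPlaceHolder's
        // naming container, so the master page must locate them with
        // FindControl rather than referencing them by ID directly.
        ContentPlaceHolder cph = (ContentPlaceHolder)FindControl("MainContent");
        TextBox userName = (cph == null)
            ? null
            : cph.FindControl("UserNameTextBox") as TextBox;

        if (userName != null)
        {
            // ClientID is the mangled id the browser actually sees (e.g.,
            // "ctl00_MainContent_UserNameTextBox"), which is what any
            // client-side script must use.
            Page.ClientScript.RegisterStartupScript(GetType(), "focus",
                "document.getElementById('" + userName.ClientID + "').focus();",
                true);
        }
    }
}
```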

There will be a total of 10 tutorials. The next batch looks at interaction between content and master pages.

Like my past tutorials, these tutorials are all available in C# and VB versions, include a complete working source code download, and are available to download as PDF, as well.

Enjoy! - http://asp.net/learn/master-pages/

What Do You Use to Read / Consume Blogs, News Sites, and Other RSS Feeds?
09 June 08 12:39 PM | Scott Mitchell

When I first started blogging and reading others' blogs, I tried out two stand-alone desktop applications for keeping up to date with my favorite bloggers and other news sites: RssBandit and FeedDemon.

While both RssBandit and FeedDemon have slick UIs and are easy to use, I haven't used either for several years. My main gripe was not with the programs, per se, but with the model itself: I didn't like having a separate program for reading blog entries. For starters, it wedded my blog subscriptions to a single computer. Second, it meant yet another program I'd have to launch at startup and yet another icon cluttering up my task tray. Back in October 2005 I wrote a blog entry lamenting stand-alone blog readers: The Future of Third-Party Offline Aggregators? Are RssBandit and its Kin Dead Weight?

There are a number of popular offline aggregators available. By 'offline' I mean that these aggregators can be used while not connected to the Internet. ... The future of aggregators, in my opinion, lies with those that are either online ... or are part of the experience of existing 'everyman' applications (i.e., email or web browsing) and, preferably, are preinstalled with the software. The online aggregators seem to make a lot more sense, having a number of advantages over their offline kin:

  • Not bound to a particular computer - I can be at home, at the office, or on vacation - my subscriptions travel with me.
  • Can utilize the 'social network' - services like Findory make it easy for me to get recommended news and blog items based on my clickthroughs. Services like del.icio.us allow me to share my online habits/sites/subscriptions with others with like interests. I can see what the most popular feeds are, or explore the subscriptions of those whose interests match mine.
  • Easier to 'install' and 'uninstall' - want to install My Yahoo! on your computer? Fire up the ol' browser and enter http://my.yahoo.com - couldn't be easier. And uninstalling's as easy as not visiting the site again.
  • No resource consumption - doesn't matter if I subscribe to one feed or a hundred - the disk space and bandwidth consumed on my computer stay constant when using an online service.

Another advantage of online blog readers (or any online application, for that matter) is ubiquitous upgrades. When Microsoft releases a new version of Office, it is applied only to the computers of those people who buy and install the upgrade. When Microsoft releases a new version of Hotmail, however, the update is applied to all users instantaneously. This leads to more rapid application updates, features, and bug fixes.

Since my blog post in 2005, we have seen better integration of RSS feed support in the 'everyman' applications. Both IE and Firefox have RSS subscription capabilities (albeit rather primitive support), as does Outlook 2007. And virtually every online portal website lets users subscribe to RSS feeds. Third-party offline blog readers are always going to be at the far end of the long tail, especially with the commodity-like status of RSS aggregators these days. I don't think third-party offline readers will ever entirely die off, but they will be used only by a select and small crowd of experienced computer enthusiasts who prefer them over more mainstream or online options for some very specific reasons. And, for most people, those benefits, whatever they may be, are not strong enough to outweigh the cost of downloading the application, installing it, setting it up, and learning how to use it.

I'm curious - what do you use to consume blogs and other RSS feeds? Do you use a stand-alone program, or something that's integrated with Outlook? Do you use an online service?

These days, I use Google Reader to subscribe to and keep up to date with the myriad of blogs, news sites, sport sites, and online comics I follow. Google Reader gives me one spot - accessible anywhere in the world - where I can catch up on and manage my RSS subscriptions. Google Reader also has the early stages of social networking baked in. You can share particular blog items and see your friends' shared items. And Google Reader can offer recommendations on feeds you may like based on what feeds people with similar interests have subscribed to.

June's Toolbox Column Online
01 June 08 01:56 PM | Scott Mitchell

My Toolbox column in the June 2008 issue of MSDN Magazine is available online. The June issue examines:

  • Browser Compatibility Testing Tools - the myriad browser versions, operating systems, color depth/screen resolution combinations, and plugins like Flash make thorough browser testing a difficult process. It's especially difficult for smaller developer shops to maintain the IT infrastructure needed to test all of these permutations. Fortunately, there are a couple of online services that assist in this endeavor. This review looks at two: BrowserShots (a free service) and BrowserCam (a pay-per-month service).
  • Typemock Isolator - one common challenge in writing unit tests is modeling external dependencies like databases, configuration files, and remote services. It can be difficult and/or time-consuming to set up the external dependency for a test, configure its state, and then return the external dependency to its original state after the test. Rather than working directly with such dependencies, one option is to use mocks, which are local, in-memory objects that 'play the part' of the external dependency (see the sketch after this list). You might hear a mock object say, 'No, I'm not a database, but I play one in unit tests.' Typemock Isolator is a tool for creating and using mock objects within your unit tests.
  • Blogs of Note - The Old New Thing. In his blog, Microsoftie Raymond Chen pulls back the curtain and explains some of the reasons why Windows and other Microsoft software and tools are the way they are. From why you have to click the Start button to shut down your computer, to why the registry is called a 'hive,' Raymond's blog is a fun, witty journey through the private history of Microsoft software.
  • The Bookshelf - Pro ASP.NET 3.5 in C# 2008 by Matthew MacDonald and Mario Szpuszta. Here is an excerpt from the book review:
Since the .NET Framework 2.0, subsequent versions have added new features while keeping the core functionality in place. This poses a dilemma for authors writing books on post-2.0 versions of the .NET Framework: do you focus on just the new features or do you create a book that covers the new features plus the stuff that's been around since version 2.0? At nearly 1,500 pages, it's eminently clear that Matthew MacDonald and Mario Szpuszta, coauthors of Pro ASP.NET 3.5 in C# 2008 (Apress, 2007), chose the latter. The book's 33 chapters cover it all, from creating the simplest Web Forms to using cutting-edge features like AJAX and Silverlight™.
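
Typemock Isolator generates these stand-ins for you; to illustrate the underlying idea only - this is a hand-rolled mock, not Typemock's API, and the interface and classes are invented - a mock-based test setup might look like this:

```csharp
// Hand-rolled illustration of the mock idea - not Typemock's API.
// The interface, classes, and values are invented for illustration.
using System.Collections.Generic;

// The external dependency (a database, say) abstracted behind an interface.
public interface IOrderRepository
{
    IList<decimal> GetOrderTotals(int customerId);
}

// The code under test depends only on the interface, not on a real database.
public class CustomerStats
{
    private readonly IOrderRepository repository;

    public CustomerStats(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public decimal TotalSpent(int customerId)
    {
        decimal sum = 0;
        foreach (decimal total in repository.GetOrderTotals(customerId))
            sum += total;
        return sum;
    }
}

// The mock: a local, in-memory object that 'plays the part' of the database,
// so the test needs no external setup, state configuration, or teardown.
public class MockOrderRepository : IOrderRepository
{
    public IList<decimal> GetOrderTotals(int customerId)
    {
        return new List<decimal> { 10m, 20m, 12.5m };   // canned data
    }
}

// In a unit test:
//   CustomerStats stats = new CustomerStats(new MockOrderRepository());
//   Assert.AreEqual(42.5m, stats.TotalSpent(1));
```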

Enjoy! - http://msdn.microsoft.com/en-us/magazine/cc546581.aspx

As always, if you have any suggestions for products or books to review for the Toolbox column, please send them to toolsmm@microsoft.com.
