Terse Markup and CSS for Aligned Form Labels and Inputs
18 April 11 09:46 PM | Scott Mitchell | 3 comment(s)

Like many ASP.NET developers, I am most comfortable working with C# and VB. I know just enough HTML and CSS to be dangerous. I know enough to implement the overarching page layout without using <table>s and instead to use CSS to position, size, and float the elements on the page, but when it comes to certain user interface designs within a page I’m quick to use <table>s. For instance, if asked to create a contact form like the one pictured below, my first inclination would be to use a trusty <table>.


Recently, my friend and colleague Dave Yates showed me a website he had helped design and implement, StonehengeStyle. I was particularly interested when Dave showed me the contact page, which contained very clean, terse, readable markup without the use of <table>s.

First, each right-aligned text label – Name, Phone, and so on – is implemented as a <label> element. The inputs themselves are simply <input>, <textarea>, or <select> elements. For example, the markup for the Name, Phone, and Email Address inputs follows. (Note: I removed the required indicator for brevity; view the contact form’s markup in your browser for the complete markup.)

<label for="name">Name</label> 
<input name="name" type="text" id="name" class="textfield" /> 

<label for="phone">Phone</label> 
<input name="phone" type="text" id="phone" class="textfield" /> 

<label for="email">Email Address</label> 
<input name="email" type="text" id="email" class="textfield" /> 

Couldn’t be simpler, right? No filler <br /> or <p> elements, no <table>s cluttering things up. Heck, no <div>s, even.

The layout of the <label> and <input> elements is handled in the CSS via the following rules. First, all <label> elements are styled such that they are 180 pixels wide with 3 pixel padding on the top, 10 pixel padding on the right, and 2 pixel padding on the bottom. Their text is right-aligned and they clear left floating elements, which is why each <label> appears beneath the one above it.

label {
    clear: left;
    float: left;
    padding: 3px 10px 2px;
    text-align: right;
    width: 180px;
}

The textfield, textarea, and selectlist CSS classes, which are assigned to the <input>, <textarea>, and <select> elements in this contact form, specify a width of 250 pixels with a bottom margin of 8 pixels.

.textfield, .textarea, .selectlist {
    font-family: Arial,Helvetica,sans-serif;
    font-size: 12px;
    margin: 0 0 8px;
    width: 250px;
}

And that’s all there is to it! Pretty simple and straightforward.

This may not be terribly exciting for seasoned web developers or designers, but for someone like me, with just a working knowledge of HTML and CSS, this markup/CSS pattern is a gem and is how I’ve started doing my contact forms and other similar in-page layouts.

Happy Programming!

I’ve Written My Last Article for 4GuysFromRolla
29 March 11 11:32 PM | Scott Mitchell | 118 comment(s)

Warning! This blog post is long and rife with navel-gazing.

In 1998 I started an ASP resource site, 4GuysFromRolla.com. Toward the tail end of the dotcom boom I sold 4Guys to Internet.com, but continued working as the editor and primary contributor for the site, writing a new article each week. This arrangement continued until just recently. My last article for 4Guys has been written – Use MvcContrib Grid to Display a Grid of Data in ASP.NET MVC.

The Beginnings

My first exposure to web programming came in 1998 working at Empower Trainers and Consultants, a mid-sized consulting and training firm with locations in Kansas City, St. Louis, and Nashville. At the time I was an inexperienced, nervous, 19-year-old sophomore at the University of Missouri-Rolla (UMR) who had landed an 8-month internship with Empower at their Kansas City location. My first assignment was to add some new features to the internal timekeeping tool, a custom-built, data-driven web application powered by SQL Server and ASP. At the time I had done some rudimentary HTML development, but had zero experience with JavaScript, ASP, and SQL.

Needless to say, I found ASP enthralling. The ability to quickly create an application that could be shared with the world amazed me then as it continues to amaze me to this day. At the time there weren’t many online resources for learning more about ASP. As my internship drew to an end I decided that once I got back to school I would start my own site rich with ASP information.

Upon returning to university I cajoled three good friends into starting a website, 4GuysFromRolla.com. The idea was that the site would boast four sections:

  • ASP Information
  • Programming Information
  • Linux Information
  • Humor

If you couldn’t guess, we were four witty computer nerds (with an emphasis on the nerd part).

In September 1998 4GuysFromRolla.com went live. Over time, the other three guys lost interest and moved on to other projects. By the time I graduated in May 2000, 4GuysFromRolla.com was run by one guy from Rolla and focused strictly on ASP.

Sale to Internet.com

The dotcom boom reached its fever pitch in 2000. Companies were paying $5,000 a month for a little 125x125 banner to appear on the 4Guys homepage and $500 for a two sentence text ad to appear in the weekly newsletter, not to mention the thousands of dollars per month companies were dropping to have their animated 468x60 banners in the rotation to appear at the top of each article. The spending frenzy also extended to the acquisitions side, as numerous ASP resource sites were gobbled up by larger players.

In late 2000 I decided to “cash out.” 4Guys was sold to Internet.com.

I wrestled with the decision on whether to sell the site or not for a long time. On one hand, 4Guys was my baby and I had poured uncounted hours into it over the previous three years. Having seen how sites like 15Seconds.com fared after their acquisition, I knew that selling 4Guys would be akin to signing its death warrant. When a larger company buys a smaller site it’s not uncommon for the original founders to exit stage right, either immediately or in the very near term. When that happens, and when the acquirer starts to turn the screws in an attempt to better monetize their purchase, the inevitable happens – the site withers on the vine, traffic languishes, and the death knell is sounded. On the other hand, by late 2000 I think it was pretty apparent to everyone that the dotcom boom was coming to an end.

In the end, I decided to sell. The sales price reflected more than five years of dotcom boom revenue, which I deduced would be more like ten or more years of revenue once the boom ended. At age 22, five to ten years is an unimaginable window of time. I wondered, Would I be interested in writing about ASP ten years hence? Would I even be using ASP or web-based technologies? Since the answers to those questions were “maybe,” I decided to take the bird in the hand over the two in the bush.

Of course, here we are, 11 years later, and I am still actively involved in ASP.NET and the ASP.NET community and, until recently, was still writing for 4Guys. If I had it to do over again (and knowing what I know now), I would not have sold the site. Hindsight is 20/20. But that’s not to say that I regret the decision to sell the site – I don’t. In fact, I still hold that it was the right decision at the time given the unknowns.

The Buying Eyeballs Business Model

The dotcom boom heralded an interesting time in the history of the web. At its peak, billions of dollars were spent buying traffic, or “eyeballs,” as it was commonly referred to back then. In 2000, companies like Internet.com and DevX (among many others) were buying technology resource sites not for their content or talent, but for their existing traffic. This was a workable business model at the time due to the high rates advertisers were paying. Unfortunately, it was not sustainable once the bottom dropped out of advertising.

In 2009, Internet.com and its hundreds of technology-focused websites were sold to QuinStreet for $18 million. I continued working on 4Guys for QuinStreet (until recently). Unfortunately, QuinStreet’s purchase was a continuation of the buying eyeballs business model as evidenced by the lack of investment in the purchased web properties. 4Guys retained its dated look and feel as even more ads were squeezed onto the page.

Sites like 4Guys were sold by Internet.com to QuinStreet for pennies on the dollar. Even at such a steep discount, the question remains: did QuinStreet overpay? Time will tell.

Withering On the Vine

After the sale of 4Guys to Internet.com in 2000 I continued on as the site’s editor and primary contributor, authoring an article each week. Despite my continued work on the site, 4Guys started to lose prominence in the ASP.NET community. There were many times I talked to a developer at a user group or at a conference who would say something nice like, “I taught myself classic ASP from your articles on 4GuysFromRolla.com - I used to go there all the time.” The message was always the same – a meaningful compliment with a reflection on the current state of the site embedded in it: I used to visit 4Guys.

There are probably a lot of different reasons why the importance and relevance of 4GuysFromRolla diminished over the years. Some of the reasons I’ve arrived at include:

  • My predominant use of VB code samples (rather than C#). In recent years, I started writing more C#-focused articles, as well as articles with code samples in both VB and C#, but the majority of articles on 4Guys are VB-only. And my switch to a more C#-friendly style came long after C# had become the de facto .NET language.
  • Increased attempts at monetization. More ads, bigger ads, flashier ads, and more annoying ads all made the site more difficult and less enjoyable to use.
  • A dated look and feel. If you couldn’t guess, the 4GuysFromRolla.com website hasn’t had a site redesign since 2002. It just looks old and dated. I’d like to think that the quality and quantity of content can make up for such aesthetic issues, but I understand why visitors would find the site appearance off-putting and why that might make them less likely to return, especially if there was similar content to be found elsewhere, which brings me to the next three factors…
  • The Google. Google turned the Internet upside down. Prior to Google, when faced with a particular problem people would go to a particular site and start hunting (or searching) for a solution. Once Google made search fast, easy, and accurate – something I think happened in the early 2000s – user behavior shifted radically. Now Google was where people went to find answers to their questions. Just ask Jeff Atwood, who notes that: “Currently, 83% of our total traffic [to Stackoverflow] is from search engines, or rather, one particular search engine.” And that search engine, if you couldn’t guess, is Google.
  • A stronger online presence from Microsoft. In the late 90s and early 2000s, Microsoft offered a substandard web presence for their web technologies. There was technical documentation buried somewhere on Microsoft’s website, some articles on their MSDN site here and there, as well as articles from MSDN Magazine that were available online. But everything was scattered and hard to find. Microsoft finally got it right in the mid-2000s when they made MSDN easier and quicker to search and separated out their core technologies into standalone sites – www.asp.net, www.iis.net, etc. This move sucked an appreciable amount of traffic from community-founded sites like 4GuysFromRolla.
  • The proliferation of blogs. Blogs are another technology that made resource sites like 4GuysFromRolla.com less relevant. Intelligent developers with something interesting or useful to share didn’t need to get their thoughts published on your site – instead, they could start their own blog. The explosion of blogs outpaced the demand for information, cutting into everyone’s traffic and relevance.

Of all the reasons listed above, only one falls on my shoulders, namely my slow move away from VB to C#. But perhaps there are other factors that are my fault that my ego is blinding me to. I do believe that the quality of writing that has appeared on 4Guys has improved over the years. When I read some of the articles I wrote while I was still in school (1998-2000) I cringe. Also, I posit that the articles’ topics are (relatively) timely and of interest to ASP.NET developers. (To be fair, I was a bit late to jQuery and ASP.NET MVC, but once I jumped on that bandwagon I wrote quite a bit on said topics.)

The increased attempts at monetization and the dated look and feel fall on Internet.com’s and QuinStreet’s shoulders. The last three factors were out of everyone’s control and affected all websites, not just those in my little corner of the web. And those macro changes, while perhaps detrimental to the growth of a site like 4Guys, are net gains for the Internet (and humanity) as a whole.

Neither QuinStreet nor Internet.com has ever provided me with traffic numbers so I don’t have any hard data to back up my thoughts on this, but my presumption is that 4Guys is still used by hundreds of thousands of developers around the world each month, but that it’s become less and less relevant as time has gone on. Today, I imagine that most people reach 4Guys from a Google search or from a link posted on an old messageboard or newsgroup thread. Few visit the site to see what new content is available or because a coworker told them that it’s a great website for ASP.NET developers of any stripe.

Yes, there are still many who find a solution to their problem on 4Guys, but few say, “How do I do X? I bet 4Guys has the answer!”

Some Fun Facts

Is it just me, or is this blog post getting a little depressing? How about some fun 4Guys trivia.

For those who have never been to Rolla, it is about an hour and a half west of St. Louis, located square in the middle of nowhere. The university in Rolla focuses on engineering and the sciences and the student body is predominantly male. Many people wonder how I had the time to write nearly 750 articles while a student at UMR. The answer is that I went to school in the middle of nowhere with no girls - free time was not something that was hard to find!

When we started 4Guys, one of the other 4Guys created the site design. It had a black background with gray text and these bubbles that spanned the top and right of each page with links to each of the four sections. Together, we redesigned the site in 1999 to give it a more professional look. It was at this time that 4Guys adopted teal as its primary color. After acquiring the site, Internet.com did a redesign in 2002. The redesign made the site a bit more graphics heavy and added some curved doodads here and there. I always found the 4Guys logo that Internet.com’s design team created to be hilarious.

The guy on the left looks depressed and ostracized from the group. The guy on the right wants nothing more than a big group hug. And those two guys in the middle? They look like a couple of real a-holes. Too cocky and arrogant to console their melancholy friend on the left, and too cool for school to hug the guy on the right. Jerks.

So Farewell…

My time with 4Guys has now come to an end. It was a fun and unforgettable run. I fondly remember huddled around a computer monitor with the other three guys from Rolla as we tried to decide on a domain name. I remember the excitement of landing my first advertiser and of depositing that first check. And I won’t forget the many emails from fellow developers who wrote in to thank me for an article that helped them solve a vexing problem. But most of all, my memories will center around writing the 4Guys article each week – drumming up a topic, banging out some code, and then putting that code into prose.

Having written a 4Guys article each of the preceding 650 or so weeks, it will be odd not to do so this week. Or next week. Or ever again.

Farewell, old girl, it was a good run.

Just to be clear, I am not retiring! I am a writer, that’s what I do. You’ll continue to see articles from me on this blog and on sites like DotNetSlackers.com and ASPAlliance.com. And I am always looking for additional engagements – if you have a need for a technical writer or prolific ASP.NET author, please don’t hesitate to check out my resume and drop me a line.

The Average Number of Words and Points in Boggle
04 March 11 03:21 AM | Scott Mitchell | 2 comment(s)

Boggle is a word game trademarked by Parker Brothers and Hasbro that involves several players trying to find as many words as they can in a 4x4 grid of letters. At the end of the game, players compare the words they found. During this comparison I've always wondered about the missed words. Was there some elusive 10-letter word that no one unearthed? Did we only discover 25 solutions when there were 200 or more?

To answer these questions I created a Boggle solver web application (back in 2008) that prompts a user for the letters in the Boggle board and then recursively explores the board to locate (and display) all available solutions. This Boggle solver is available online - fuzzylogicinc.net/Boggle. My family uses it every time we get together and play Boggle. For more information on how it works and to get your hands on the code, check out my article, Creating an Online Boggle Solver. In November 2010, I updated the code to make it more Ajax-friendly; see Updating My Online Boggle Solver Using jQuery Templates and WCF for details.
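The recursive exploration the solver performs can be illustrated with a small sketch. To be clear, this is not the article’s actual C# solver – just a hypothetical JavaScript depth-first search over a letter grid, with a plain Set standing in for the real dictionary:

```javascript
// Sketch of recursive Boggle solving: from each cell, do a depth-first
// search over the eight neighbors, never revisiting a cell on the current
// path, and collect any 3+ letter path that spells a dictionary word.
function solveBoggle(board, dictionary) {
  const size = board.length;            // square grid, e.g. 4x4
  const words = new Set(dictionary);
  const found = new Set();

  function explore(row, col, prefix, visited) {
    if (row < 0 || row >= size || col < 0 || col >= size) return;
    const key = row * size + col;
    if (visited.has(key)) return;       // each tile used at most once per word
    const word = prefix + board[row][col];
    if (word.length >= 3 && words.has(word)) found.add(word);
    visited.add(key);
    for (let dr = -1; dr <= 1; dr++)
      for (let dc = -1; dc <= 1; dc++)
        if (dr !== 0 || dc !== 0) explore(row + dr, col + dc, word, visited);
    visited.delete(key);                // backtrack
  }

  for (let r = 0; r < size; r++)
    for (let c = 0; c < size; c++) explore(r, c, "", new Set());
  return [...found];
}
```

A real solver would also prune paths that are not a prefix of any dictionary word (e.g. with a trie); without that optimization this sketch is only practical for small boards.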

While playing a game of Boggle I found myself wondering how many total available words are present in a typical game of Boggle, and how many total points are available. It soon dawned on me that I could answer these questions using my Boggle solver application and the Monte Carlo method. My Boggle solving engine has a GenerateBoard method that randomly assembles a legal Boggle board. By generating tens of thousands of random Boggle boards, running them through my solver, and recording the number of words and total points available I could arrive at a good approximation for the average number of words and points in a given game of Boggle. (Scroll to the bottom of this blog entry if all you care about is the average number of words and points per game.)

I started by creating a new class in my Boggle solving Class Library, which I named GameLogger. This class has a single method, LogGame, which takes as inputs the BoggleBoard object that was used to find the solutions and the BoggleWordList object that comprises the set of solutions for the accompanying BoggleBoard. This method then inserts a record into a database table (boggle_Boards) that logs:

  • The BoardID, which is a 16-character string that uniquely identifies the board. Namely, it contains one character for each letter in the board.
  • The NumberOfSolutions, which is the number of words found for the board.
  • The Score, which is the cumulative score of all of the words on the board. In Boggle, three- and four-letter words score 1 point, five-letter words score 2, six-letter words score 3, seven-letter words score 5, and words eight letters or longer score 11.
  • The MinimumWordLength, which specifies the minimum number of letters needed to form a valid solution. Boggle’s rules permit words three or more letters in length, but my family often plays a variation that permits only four letter or longer words.
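As a quick illustration of the scoring rule above, here is a hypothetical helper (not part of the actual class library) that maps a word’s length to its point value:

```javascript
// Boggle scoring: 3-4 letters = 1 point, 5 = 2, 6 = 3, 7 = 5, 8+ = 11.
function boggleScore(word) {
  const len = word.length;
  if (len < 3) return 0;   // too short to count at all
  if (len <= 4) return 1;
  if (len === 5) return 2;
  if (len === 6) return 3;
  if (len === 7) return 5;
  return 11;
}
```

The board’s Score is then simply the sum of boggleScore over every solution.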

The code for my Monte Carlo simulator is brain-dead simple – create a new Boggle game, solve it, then log it, and do this until I tell you to stop.

while (true)
{
    var gt = GameTiles.OfficialBoggleGameTiles();
    var board = new BoggleBoard(3, gt.GenerateBoard());

    var solutions = board.Solve();

    GameLogger.LogGame(board, solutions);
}

Let the above code run for 5 minutes and you’ve got tens of thousands of solved, random Boggle boards in the database from which you can now ascertain average number of words and score.
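The Monte Carlo idea itself – average a statistic over many random trials – is worth a tiny illustration. This is a toy sketch, not the actual simulator: a small seeded generator and a stand-in statistic replace the real board generator and solver, but the shape of the estimate is the same.

```javascript
// Monte Carlo estimation sketch: run many random trials and average the
// statistic of interest. A deterministic linear congruential generator
// stands in for the random board generator so results are repeatable.
function lcg(seed) {
  let s = seed >>> 0;
  return () => (s = (s * 1664525 + 1013904223) >>> 0) / 4294967296;
}

function estimateAverage(trials, statistic, rand) {
  let total = 0;
  for (let i = 0; i < trials; i++) total += statistic(rand);
  return total / trials;
}

// Toy statistic: uniform on [0, 100); its true mean is 50. In the real
// simulator the statistic would be "number of words on a random board."
const rand = lcg(42);
const avg = estimateAverage(10000, r => r() * 100, rand);
```

As the trial count grows, avg converges toward the true mean, which is exactly why a few minutes of random boards is enough to pin down the averages reported below.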

The following query returns the average number of solutions and score for games allowing words with 3 or more letters:

SELECT AVG(CAST(NumberOfSolutions AS decimal)), AVG(CAST(Score AS decimal))
FROM dbo.boggle_Boards
WHERE MinimumWordLength = 3

Which comes out to:

Average # of words: 66.82
Average score: 93.25

If we compute the average number of words and points for games requiring four or more letters we get, expectedly, lower results. For such games you can expect, on average, 42.12 words and 68.30 points.

So, next time you break out Boggle, keep in mind that, on average, there are nearly 67 words hiding there, ready for you to find. And after time expires, be sure to use my Boggle solver to see all the words that were present!

Note: Of course, these results are based on the dictionary that my Boggle solver uses. My dictionary may permit certain words that yours denies (or vice versa), in which case these averages would be skewed. Unfortunately, I don’t recall where I got my dictionary file; I downloaded it from some website many years ago. I know many word-based games use the Enhanced North American Benchmark Lexicon as their dictionary. I am not using it but would like to integrate it at some point in the future…

My Latest Articles From Around the Web
01 March 11 04:01 AM | Scott Mitchell

In addition to my regular articles on 4GuysFromRolla.com, I’ve recently authored a number of articles that have appeared on other websites:

  • Use ASP.NET and DotNetZip to Create and Extract ZIP Files - This article shows how to use DotNetZip to create and extract ZIP files in an ASP.NET application, and covers advanced features like password protection and encryption. (DotNetZip is a free, open source class library for manipulating ZIP files and folders.)
  • Creating a Login Overlay - Traditionally, websites that support user accounts have their visitors sign in by going to a dedicated login page where they enter their username and password. This article shows how to implement a login overlay, which is an alternative user interface for signing into a website.
  • 5 Helpful DateTime Extension Methods - This article presents five DateTime extension methods that I have used in various projects over the years. The complete code, unit tests, and a simple demo application are available for download. Feel free to add these extension methods to your projects!

To keep abreast of my latest articles - and to read my many insightful witticisms Smile - follow me on Twitter @ScottOnWriting.

Select a textbox’s text on focus using jQuery
01 February 11 03:23 AM | Scott Mitchell | 2 comment(s)

A fellow ASP.NET developer asked me today how he could have the text in a TextBox control automatically selected whenever the TextBox received focus.

In short, whenever any textbox on the page receives focus you want to call its select() function. (The JavaScript select() function is the function that actually selects the textbox’s text, if any.) Implementing this functionality requires just one line of JavaScript code, thanks to jQuery:

$("input[type=text]").focus(function() { $(this).select(); });

In English, the above line of code says, “For any <input> element with a type="text" attribute, whenever it is focused call its select() function.” If you only wanted certain textboxes on the page to auto-select their text on focus you’d update the selector syntax accordingly. For example, the following modification only auto-selects the text for those textboxes that use a CSS class named autoselect:

$("input[type=text].autoselect").focus(function() { $(this).select(); });

That’s all there is to it! You can view the complete script and try a working demo at http://jsfiddle.net/ScottOnWriting/Kq7A4/2/

One final comment: if one or more of the textboxes you want to auto-select exist within an UpdatePanel control then consider using jQuery’s live() function. The live() function maintains the event bindings even when the HTML is regenerated due to a partial page postback; for more information, see my article – Rebinding Client-Side Events After a Partial Page Postback. For more information on jQuery, see Using jQuery with ASP.NET.

EDIT [2011-03-29]: To get this to work in Safari / Chrome you will need to add a mouseup event handler and disable the default event, as the onmouseup event is causing the textbox to be unselected. For more details, see this Stackoverflow post: Selecting text on focus using jQuery not working in Safari and Chrome.

$("input[type=text]").focus(function() { $(this).select(); })
                     .mouseup(function(e) { e.preventDefault(); });
Removing Gaps and Duplicates from a Numeric Column in Microsoft SQL Server
18 January 11 08:35 PM | Scott Mitchell | 1 comment(s)

Here’s the scenario: you have a database table with an integral numeric column used for sort order or some other non-identifying purpose. Let’s call this column SortOrder. There are many rows in this table. Every row should have a unique, sequentially increasing value in its SortOrder column, but this may not be the case – there may be gaps and/or duplicate values in this column.

For example, consider a table with the following schema and data:




EmployeeId   Name       SortOrder
----------   --------   ---------
1            Scott      1
2            Jisun      8
3            Alice      7
4            Sam        7
5            Benjamin   3
6            Aaron      9
7            Alexis     4
8            Barney     5
9            Jim        5

Note how the SortOrder column has some gaps and duplicates. Ideally, the SortOrder column values for these nine rows would be 1, 2, 3, …, 9, but this isn’t the case. Instead, the current values (in ascending order) are: 1, 3, 4, 5, 5, 7, 7, 8, 9.

Our task is to take the existing SortOrder values and get them into the ideal format. That is, after our modifications, the table’s data should look like so:




EmployeeId   Name       SortOrder
----------   --------   ---------
1            Scott      1
2            Jisun      8
3            Alice      6
4            Sam        7
5            Benjamin   2
6            Aaron      9
7            Alexis     3
8            Barney     4
9            Jim        5

Note how there are now no gaps or duplicates in SortOrder.

The Solution: Ranking Functions, Multi-Table UPDATE Statements and Common Table Expressions (CTEs)

Microsoft SQL Server 2005 added a number of ranking functions that simplify assigning ranks to query results, such as associating a sequentially increasing row number with each record returned from a query or assigning a rank to each result. For example, the following query – which uses SQL Server’s ROW_NUMBER() function – returns the records from the Employees table with a sequentially increasing number associated with each record:

SELECT Name, SortOrder,
       ROW_NUMBER() OVER (ORDER BY SortOrder) AS NoGapsNoDupsSortOrder
FROM Employees

The above query would return the following results. Note how the data is sorted by SortOrder. There’s also a new, materialized column (NoGapsNoDupsSortOrder) that returns sequentially increasing values.




Name       SortOrder   NoGapsNoDupsSortOrder
--------   ---------   ---------------------
Scott      1           1
Benjamin   3           2
Alexis     4           3
Barney     5           4
Jim        5           5
Alice      7           6
Sam        7           7
Jisun      8           8
Aaron      9           9

What we need to do now is take the value in NoGapsNoDupsSortOrder and assign it to the SortOrder column. If we had the above results in a separate table we could perform such an UPDATE, as SQL Server makes it possible to update records in one database table with data from another table. (See HOWTO: Update Records in a Database Table With Data From Another Table.)

While the results in the above grid are not in a table (but are rather the results from a query), the good news is that we can treat those results as if they were results in another table using a Common Table Expression (CTE). CTEs, which were introduced in SQL Server 2005, can be thought of as a one-off view; that is, a view that is created, defined, and used in a single SQL statement.

Putting it all together, we end up with the following UPDATE statement:

WITH OrderedResults(EmployeeId, NoGapNoDupSortOrder) AS
(
    SELECT EmployeeId,
           ROW_NUMBER() OVER (ORDER BY SortOrder) AS NoGapNoDupSortOrder
    FROM Employees
)
UPDATE Employees
SET SortOrder = OrderedResults.NoGapNoDupSortOrder
FROM OrderedResults
WHERE Employees.EmployeeId = OrderedResults.EmployeeId AND
      Employees.SortOrder <> OrderedResults.NoGapNoDupSortOrder

The above query starts by defining a CTE named OrderedResults that returns two column values: EmployeeId and NoGapNoDupSortOrder. It then updates the Employees table, setting its SortOrder column value to the NoGapNoDupSortOrder value where the Employees table’s EmployeeId value matches the OrderedResults CTE’s EmployeeId value (and where the SortOrder does not already equal the NoGapNoDupSortOrder).
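For readers more comfortable outside of SQL, the renumbering logic can be sketched in plain JavaScript. This is only an illustration of what ROW_NUMBER() OVER (ORDER BY SortOrder) assigns; note that I break ties by EmployeeId here to make the result deterministic, whereas SQL Server makes no such guarantee unless you add a tie-breaker to the ORDER BY:

```javascript
// Renumbering sketch: sort rows by the old SortOrder (ties broken by id)
// and assign the sequential position 1..N as the new SortOrder.
const rows = [
  { id: 1, name: "Scott",    sortOrder: 1 },
  { id: 2, name: "Jisun",    sortOrder: 8 },
  { id: 3, name: "Alice",    sortOrder: 7 },
  { id: 4, name: "Sam",      sortOrder: 7 },
  { id: 5, name: "Benjamin", sortOrder: 3 },
  { id: 6, name: "Aaron",    sortOrder: 9 },
  { id: 7, name: "Alexis",   sortOrder: 4 },
  { id: 8, name: "Barney",   sortOrder: 5 },
  { id: 9, name: "Jim",      sortOrder: 5 },
];

[...rows]
  .sort((a, b) => a.sortOrder - b.sortOrder || a.id - b.id)
  .forEach((row, i) => { row.sortOrder = i + 1; }); // 1-based, gap-free
```

After this runs, the rows carry the same gap-free, duplicate-free values shown in the “after” table above.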

For more information on CTEs, ranked results, and updating one table (Employees) with data from another table or CTE (OrderedResults), check out the following resources:

Happy Programming!

Customizing ELMAH’s Error Emails
06 January 11 10:40 PM | Scott Mitchell | 3 comment(s)

ELMAH (Error Logging Modules and Handlers) is my ASP.NET logging facility of choice. It can be added to a new or running ASP.NET site in less than a minute. It’s open source and its creator, Atif Aziz, remains actively involved with the project and can be found answering questions about ELMAH, from Stackoverflow to ELMAH’s Google Discussion group. What’s not to love about it?

ELMAH’s sole purpose is to log and notify developers of errors that occur in an ASP.NET application. Error details can be logged to any number of log sources – SQL Server, MySQL, XML, Oracle, and so forth. Likewise, when an error occurs ELMAH can notify developers by sending the error details to one or more email addresses.

The notification email message sent by ELMAH is pretty straightforward: it contains the exception type and message, the date and time the exception was generated, the stack trace, a table of all server variables, and the Yellow Screen of Death that was generated by the error.


Prior to sending the notification email, ELMAH’s ErrorMailModule class raises its Mailing event. If you create an event handler for this event you can inspect details about the error that just occurred and modify the email message that is about to be sent. In this way you can customize the notification email, perhaps setting the priority based on the error or cc’ing certain email addresses if the error has originated from a particular page on the website.

To create an event handler for the Mailing event, open (or create) the Global.asax file and add the following syntax:

void ErrorMailModuleName_Mailing(object sender, Elmah.ErrorMailEventArgs e)

In the above code snippet, replace ErrorMailModuleName with the name you assigned the ErrorMailModule HTTP Module. This module may be defined in one or two places: the <system.web>/<httpModules> section and/or the <system.webServer>/<modules> section. The following Web.config snippet shows both sections:

<httpModules>
    <add name="ErrorMail" type="Elmah.ErrorMailModule, Elmah" />
</httpModules>

<modules>
    <add name="ErrorMail" type="Elmah.ErrorMailModule, Elmah" preCondition="managedHandler" />
</modules>

The name for the module in both sections should be the same - in the above snippet the name is ErrorMail. Consequently, to create an event handler for ELMAH’s Mailing event with the above configuration we would use the syntax:

void ErrorMail_Mailing(object sender, Elmah.ErrorMailEventArgs e)

Note that the Mailing event handler’s second input parameter is of type ErrorMailEventArgs. This class provides two helpful properties:

  • Error – the error that was just logged by ELMAH. This property is of type Elmah.Error and has properties like Exception, Message, User, Time, and so on.
  • Mail – the MailMessage object that is about to be sent. This gives you an opportunity to modify the outgoing email.

The following Mailing event handler shows how you could adjust the notification email based on the type of exception that occurred. Here, if the error that just occurred was an ApplicationException then the notification email is set to High priority, its subject is changed to “This is a high priority item!”, and mitchell@4guysfromrolla.com is cc’d.

void ErrorMail_Mailing(object sender, Elmah.ErrorMailEventArgs e)
{
    if (e.Error.Exception is ApplicationException ||
        (e.Error.Exception is HttpUnhandledException &&
            e.Error.Exception.InnerException != null &&
            e.Error.Exception.InnerException is ApplicationException))
    {
        e.Mail.Priority = System.Net.Mail.MailPriority.High;
        e.Mail.Subject = "This is a high priority item!";
        e.Mail.CC.Add("mitchell@4guysfromrolla.com");
    }
}

Note that if an unhandled ApplicationException is what prompted ELMAH to record the error then by this point the original exception will have been wrapped in an HttpUnhandledException. So in the if statement above I check to see if the error’s Exception property is an ApplicationException or if it is an HttpUnhandledException exception with an InnerException that is an ApplicationException. If either of those conditions hold then I want to customize the notification email.

Happy Programming!

Filed under:
jQuery Usage Among Top Sites
06 January 11 12:11 AM | Scott Mitchell

If you use jQuery on your website two things to consider are:

  1. What version of jQuery to use, and
  2. How the jQuery library should be referenced from your website

Concerning the first question… Ideally everyone would use the latest and greatest version of jQuery. With each new version, the guys and gals building jQuery fix bugs, add new and useful features, and improve the library’s performance. But with any updated product there are potentially breaking changes with each new release, so upgrading carries with it some friction in the form of regression testing your script (and any plug-ins you are using). So then the question really becomes, when do the benefits of the new version outweigh the cost of upgrading – and that is a question you’ll have to answer for yourself.

Concerning the second question… Rather than hosting the jQuery library locally, public facing websites should use a Content Delivery Network (CDN). In his blog post, 3 reasons why you should let Google host jQuery for you, Dave Ward provides an excellent summary of what a CDN is and why you should use one:

A CDN — short for Content Delivery Network — distributes your static content across servers in various, diverse physical locations. When a user’s browser resolves the URL for these files, their download will automatically target the closest available server in the network. …

Potentially the greatest (yet least mentioned) benefit of using … a CDN is that your users may not need to download jQuery at all.

No matter how aggressive your caching, if you’re hosting jQuery locally then your users must download it at least once. A user may very well have dozens of identical copies of jQuery-1.3.2.min.js in their browser’s cache, but those duplicate files will be ignored when they visit your site for the first time.

On the other hand, when a browser sees multiple subsequent requests for the same … hosted version of jQuery, it understands that these requests are for the same file. … This means that even if someone visits hundreds of sites using the same hosted version of jQuery, they will only have to download it once.

(As an aside, a common concern I hear from clients when I suggest using a CDN is the fear that if the CDN goes offline then their site will break. The good news is that you can use a CDN as your primary source for jQuery and provide a local, fall-back version should the CDN be down. This Stackoverflow question - Best way to use Google's hosted jQuery, but fall back to my hosted library on Google fail – shows a couple of ways to accomplish this.)
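A minimal sketch of that fallback pattern looks like the following. (The local path and jQuery version here are placeholders of my own, not taken from the Stackoverflow answers.)

```html
<!-- Request jQuery from the Google CDN first. -->
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript"></script>
<script type="text/javascript">
    // If the CDN request failed, window.jQuery is still undefined,
    // so fall back to a locally hosted copy (path is hypothetical).
    window.jQuery || document.write('<script src="/scripts/jquery-1.4.2.min.js" type="text\/javascript"><\/script>');
</script>
```

The key idea is simply to test for the global jQuery object after the CDN `<script>` tag and emit a second `<script>` tag only when it is missing.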

Dave Ward’s jQuery CDN Survey

In September 2010, Dave Ward ran an interesting experiment – he wrote some software that crawled the 200,000 most popular websites (as reported by Alexa) and examined how they referenced the jQuery library (if at all), answering questions like:

  • Did they use a CDN?
  • If so, which CDN?
  • And so on.

In 6,953 reasons why I still let Google host jQuery for me Dave shares his results, which I’ve summarized here:

  • Only one top 1,000 ranked Alexa site uses the Microsoft jQuery CDN (Microsoft.com)
  • 47 of the top 1,000 ranked Alexa sites use the Google CDN
  • 6,953 of the top 200,000 sites use the Google CDN

The lesson to take away from Dave’s study is that you should use the Google CDN to host the jQuery library because using that CDN gives you the greatest likelihood that your visitors already have that version of jQuery in their browser cache.

Repeating Dave’s Study

I decided to repeat Dave’s study to see what interesting unreported information lay in the data. So I whipped up my own application to crawl the 13,247 top-rated Alexa sites, using the Html Agility Pack to grab all <script> elements with a src attribute, saving the src path if it contained the substring “jquery”. Before showing you my data, let me repeat the same considerations/warnings Dave noted with regard to the accuracy of his survey:

I’ll be the first to admit that my approach is fraught with inaccuracies:

  • Alexa – Alexa itself isn’t a great ranking mechanism. It depends on toolbar-reported data and individual rankings must be taken with a grain of salt. However, I believe that aggregate trends across its top 200,000 sites represents a useful high-level view.
  • HTTP errors – About 10% of the URLs I requested were unresolvable, unreachable, or otherwise refused my connection. A big part of that is due to Alexa basing its rankings on domains, not specific hosts. Even if a site only responds to www.domain.com, Alexa lists it as domain.com and my request to domain.com went unanswered.
  • jsapi – Sites using Google’s jsapi loader and google.load() weren’t counted in my tally, even though they do eventually reference the same googleapis.com URL. Both script loading approaches do count toward the same critical mass of caching, but my crawler’s regex doesn’t catch google.load().
  • Internal usage – It’s not uncommon for sites to pare their landing pages down to the absolute bare minimum, only introducing more superfluous JavaScript references on inner pages that require them. Since I only analyzed root documents, I undercounted any sites taking that approach and using the Google CDN to host jQuery on those inner pages.

That’s a very thorough way of saying, These results are not definitive, but are meant to give a general overview or understanding of the jQuery and CDN usage landscape.

I leave you with what I found to be some interesting statistics…

Resurrecting the Microsoft CDN Bit By Bit

For die-hard supporters of the Microsoft CDN, you’ll be happy to know that there is now more than one top 1,000 ranked site that uses the CDN! In addition to Microsoft.com (rank 24), SparkStudios.com (rank 359) and XBox.com (rank 650) now also use the Microsoft CDN. However, none of these three sites (nor www.asp.net – rank 1,226) use the suggested CDN URL:

The CDN used to use the microsoft.com domain name and has been changed to use the aspnetcdn.com domain name. This change was made to increase performance because when a browser referenced the microsoft.com domain it would send any cookies from that domain across the wire with each request. By renaming to a domain name other than microsoft.com, performance can be increased by as much as 25%. Note ajax.microsoft.com will continue to function but ajax.aspnetcdn.com is recommended.

Only two sites in my survey actually use the ajax.aspnetcdn.com domain – wwwwwwwwwww.net (rank 7,542) and mmajunkie.com (rank 8,057). A total of 20 websites in my survey use ajax.microsoft.com.

The Ten Most Popular Websites That Host jQuery at Google’s CDN

Here are the ten most popular sites that use jQuery hosted at the Google CDN, along with which version of jQuery they use:

Alexa Rank   Domain              jQuery Version
10           twitter.com         1.3.0
98           fileserve.com       1.3.2
111          taringa.net         1.4.2
117          twitpic.com         1.4.2
123          xtendmedia.com      1.4.1
145          stumbleupon.com     1.4.2
174          guardian.co.uk      1.4.2
175          stackoverflow.com   1.4.2
187          imgur.com           1.4.1
204          reference.com       1.4.2

jQuery Version Popularity

The following graph shows the popularity of different versions of jQuery. The bar height represents the total number of sites in my survey that use the particular jQuery version, whereas the red portion indicates the number that host jQuery on the Google CDN.


Note that I determined the jQuery version by examining the URL itself rather than searching for the version inside the actual jQuery file. Google’s CDN uses URLs that embed the version number; Microsoft’s CDN embeds the version number in the file name. For those that hosted jQuery locally (or with some other CDN), I searched both the URL and file name for a version string. The vast majority of sites that self-host jQuery did not include any version identification in the URL or filename (e.g., the file was hosted at a path like /scripts/jquery.min.js) and therefore aren’t represented in this graph. However, I think the pattern here can be extrapolated to those sites where the version number isn’t part of the URL/file name. Namely, versions 1.3.2 and 1.4.2 are the most used.
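My crawler’s actual version-detection code isn’t shown here, but the URL/file-name sniffing described above can be sketched as a one-line regular expression. (The function name and pattern are my own illustration, not the survey code.)

```javascript
// Extract a jQuery version number from a script URL, if one is embedded in it.
// Handles the Google CDN style (.../jquery/1.4.2/jquery.min.js) and the
// file-name style (.../jquery-1.4.2.min.js); returns null when no version
// appears in the URL (e.g., /scripts/jquery.min.js).
function jQueryVersionFromUrl(url) {
    var match = url.match(/jquery[\/-]?(\d+\.\d+(?:\.\d+)?)/i);
    return match ? match[1] : null;
}

console.log(jQueryVersionFromUrl("http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js")); // → 1.4.2
console.log(jQueryVersionFromUrl("/scripts/jquery.min.js")); // → null
```

As the second call shows, self-hosted paths without a version string simply fall out of the tally, which is why the graph under-represents them.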

For the record, there are only two sites in my survey that use version 1.3.0 – Twitter.com (rank 10) and MagicBricks.com (rank 4,324).

jQuery Usage in Aggregate

Of the 13,247 sites surveyed, more than 35% of the sites (4,689) use jQuery…

Of these 4,689 sites…

  • Only 18% of these sites use the Google or Microsoft CDNs…
    • 22 use the Microsoft CDN
    • 826 use the Google CDN
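As a quick sanity check on those aggregate figures (the variable names are mine; the numbers are from the survey above):

```javascript
// Aggregate numbers reported by the survey.
var surveyed = 13247;      // top-rated Alexa sites crawled
var usingJQuery = 4689;    // sites referencing jQuery
var microsoftCdn = 22;     // jQuery sites using the Microsoft CDN
var googleCdn = 826;       // jQuery sites using the Google CDN

var pctUsingJQuery = usingJQuery / surveyed * 100;               // ≈ 35.4%
var pctOnCdn = (microsoftCdn + googleCdn) / usingJQuery * 100;   // ≈ 18.1%

console.log(pctUsingJQuery.toFixed(1) + "% of surveyed sites use jQuery; " +
            pctOnCdn.toFixed(1) + "% of those use the Google or Microsoft CDN");
```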
Filed under:
Resetting Form Field Values in an ASP.NET WebForm
23 December 10 02:14 AM | Scott Mitchell | 3 comment(s)

A recent question on Stackoverflow.com asked if there was a general method to clear a form in ASP.NET. The person asking the question had a form with many TextBox and DropDownList controls and wanted some way to be able to “reset” all of those values; specifically, the TextBoxes would be cleared out and the DropDownLists would have their first item selected.

At first blush, this seems like a job for the reset button. HTML has long supported the ability to reset a form by clicking a reset button, which is a button of type reset.

<input type="reset" value="Reset Form!" />

The reset button’s functionality can also be invoked from JavaScript by calling the form object’s reset function. The following snippet of script checks to see if there is at least one form in the document and, if so, calls its reset function:

if (document.forms && document.forms[0])
    document.forms[0].reset();

The Problem with Reset…

However, there is a potential issue when resetting form values in an ASP.NET WebForms application using this approach. The issue arises because the reset button (or reset function) does not clear out textboxes and return drop-down lists to their first value; instead, it returns the form’s fields to their initial values. That is, if you have a page with a textbox whose markup contains a value attribute, like:

<input type="text" value="Hello, World!" ... />

If a user then changes the textbox’s text to “Goodbye!” and clicks the reset button, the textbox does not go blank – rather, it reverts to “Hello, World!” In an ASP.NET WebForms application, any postback – such as one triggered by a DropDownList whose AutoPostBack property is set to True – causes the page’s markup to be re-rendered, and the text values and DropDownList selections made by the user are remembered because the re-rendered markup includes the user-entered values. Long story short, if a user enters values into textboxes (or drop-downs), a postback occurs, and the user then clicks the reset button, the form fields are reset to their values immediately after the postback and not to empty fields.

jQuery to the Rescue!

The good news is that with just a couple of lines of jQuery code we can implement the reset functionality we desire, regardless of postbacks. The following two lines of script set the value of all textboxes on the page to an empty string and set the selectedIndex of all drop-down lists on the page to be 0, which selects the first item:

$("input[type=text]").val('');
$("select").attr('selectedIndex', 0);

That’s all there is to it! You could tinker with the selector syntax to limit the affected textboxes and drop-downs to those in a specific <div> or form or whatnot; likewise, you could add additional lines of code if you need to reset checkboxes, radio buttons, or other input fields.

A Server-Side Approach…

If the client-side approach doesn’t cut it for you, you can opt to reset form fields using server-side logic. You could, of course, set the Text property of each TextBox control to an empty string and clear the selections of all DropDownLists, but a more general approach is possible using recursion. Each ASP.NET control has a Controls property that contains its child controls. Put together, the controls in an ASP.NET page form a control hierarchy. We can recurse through this control hierarchy, examining each control and modifying any TextBoxes and DropDownLists we come across.

The following code snippet illustrates such a recursive method, ClearInputs. Note that ClearInputs is passed in a ControlCollection object. This collection is enumerated and checked for TextBoxes and DropDownLists. If a TextBox is found, its Text property is set to string.Empty; if a DropDownList is found its ClearSelection method is invoked. Finally, the ClearInputs method is called again and passed the current control’s Controls collection for it to be examined.

void ClearInputs(ControlCollection ctrls)
{
    foreach (Control ctrl in ctrls)
    {
        if (ctrl is TextBox)
            ((TextBox)ctrl).Text = string.Empty;
        else if (ctrl is DropDownList)
            ((DropDownList)ctrl).ClearSelection();

        ClearInputs(ctrl.Controls);
    }
}

To reset all TextBox and DropDownList values you’d call this method like so:

ClearInputs(Page.Controls);
To reset the TextBoxes and DropDownLists in a particular control (such as a Panel), you’d call ClearInputs passing in that control’s Controls collection.

Happy Programming!

Filed under:
Checking All CheckBoxes in a GridView Using jQuery
04 December 10 02:41 AM | Scott Mitchell | 3 comment(s)

How do I love thee, jQuery? Let me count the ways.

In May 2006 I wrote an article on 4GuysFromRolla.com titled Checking All CheckBoxes in a GridView Using Client-Side Script and a Check All CheckBox, in which I showed how to add a column of checkboxes to a GridView along with a checkbox in that column’s header that enabled the user to check/uncheck all checkboxes in one fell swoop. This check/uncheck all functionality was accomplished using JavaScript.

While the JavaScript presented in the article worked then (and still works today, of course), it is a less than ideal approach for a couple of reasons.

  • First, each checkbox in the grid is programmatically assigned a client-side onclick event handler in the GridView’s DataBound event handler, which calls a function that determines whether to check or uncheck the checkbox in the header – having a client-side event handler defined directly in an HTML element violates the design goal of unobtrusive JavaScript.
  • Second, because programmatically assigned client-side attributes are not remembered across postbacks and because these client-side attributes are only assigned when data is bound to the grid, the script is lost when there is a postback that doesn’t cause the grid to have its data re-bound to it. Long story short, the check/uncheck all functionality stops working after such postbacks. I provide a workaround for this, but it’s extra steps, extra script, and another thing that you have to remember to do.
  • Third, the solution entails quite a bit of script, much more than is necessary using modern JavaScript libraries.

When I authored the article jQuery had not yet been released. Fortunately, today we have jQuery (and other JavaScript libraries) at our fingertips. jQuery is a free, open-source JavaScript library created by John Resig. In a nutshell, jQuery allows us to accomplish common client-side tasks with terse, readable script. With jQuery, I can rewrite the entire GridView check/uncheck all functionality with zero lines of server-side code and a scant 25 or so lines of JavaScript.

To demonstrate jQuery’s power, consider a GridView with the following markup:

<asp:GridView ID="gvProducts" runat="server" ...>
    <Columns>
        <asp:TemplateField>
            <HeaderTemplate>
                <asp:CheckBox runat="server" ID="chkAll" />
            </HeaderTemplate>
            <ItemTemplate>
                <asp:CheckBox runat="server" ID="chkSelected" />
            </ItemTemplate>
        </asp:TemplateField>
        ...
    </Columns>
</asp:GridView>

Note the TemplateField – this is where the two CheckBox controls live. The CheckBox control in the HeaderTemplate (chkAll) is the check/uncheck all checkbox, while the CheckBox control in the ItemTemplate (chkSelected) is the checkbox that appears in each of the grid’s data rows.

Now, I need script that does the following:

  1. When one of the chkSelected checkboxes is checked or unchecked, I need to determine whether the all option needs to be checked or unchecked. In the case where all chkSelected checkboxes are checked, I want to check chkAll. Likewise, in the case when any chkSelected checkbox is unchecked, I want to uncheck chkAll.
  2. When chkAll is checked or unchecked, I need to check or uncheck all chkSelected checkboxes.

To address the first concern I created a function named CheckUncheckAllCheckBoxAsNeeded. This function determines the total number of chkSelected checkboxes in the grid and the number of checked chkSelected checkboxes. If the two numbers match then chkAll is checked, otherwise it’s unchecked.

function CheckUncheckAllCheckBoxAsNeeded() {
    var totalCheckboxes = $("#<%=gvProducts.ClientID%> input[id*='chkSelected']:checkbox").size();
    var checkedCheckboxes = $("#<%=gvProducts.ClientID%> input[id*='chkSelected']:checkbox:checked").size();

    if (totalCheckboxes == checkedCheckboxes) {
        $("#<%=gvProducts.ClientID%> input[id*='chkAll']:checkbox").attr('checked', true);
    }
    else {
        $("#<%=gvProducts.ClientID%> input[id*='chkAll']:checkbox").attr('checked', false);
    }
}

This function is executed whenever one of the chkSelected checkboxes is checked or unchecked. This event wiring is handled in the $(document).ready event handler. Also, the CheckUncheckAllCheckBoxAsNeeded function is called right off the bat in case the grid’s checkboxes are already all checked when the page loads.

$(document).ready(function () {
    $("#<%=gvProducts.ClientID%> input[id*='chkSelected']:checkbox").click(CheckUncheckAllCheckBoxAsNeeded);

    CheckUncheckAllCheckBoxAsNeeded();

    ...
});

Finally, we need to check/uncheck all chkSelected checkboxes when chkAll is checked or unchecked. This logic is also in the $(document).ready event handler (where the ellipsis is positioned in the above snippet).

$("#<%=gvProducts.ClientID%> input[id*='chkAll']:checkbox").click(function () {
    if ($(this).is(':checked'))
        $("#<%=gvProducts.ClientID%> input[id*='chkSelected']:checkbox").attr('checked', true);
    else
        $("#<%=gvProducts.ClientID%> input[id*='chkSelected']:checkbox").attr('checked', false);
});

Pretty neat and a whole heck of a lot simpler than the technique I initially showcased in Checking All CheckBoxes in a GridView Using Client-Side Script and a Check All CheckBox. A more detailed look at this code, along with a downloadable working example, will be on 4Guys within the next couple of weeks.

UPDATE [2010-12-07]: A 4Guys article that provides much more detail and screen shots and a downloadable demo is now available: Checking All Checkboxes in a GridView Using jQuery. Also, special thanks to Elijah Manor, who offered a number of suggestions on how to improve and tighten up my jQuery script.

Happy Programming!

Filed under:
Just Where Is WebResource.axd?
28 October 10 10:25 PM | Scott Mitchell | 7 comment(s)

I stumbled upon and answered a question at Stackoverflow this morning – Where is WebResource.axd? – and thought it might be worth elaborating a bit on the question and answer here, on my blog.

But first, imagine you are developing a Web control or a library or a framework that requires certain external resources, such as images, JavaScript, and CSS. When developing your control/library/framework you may have such external content sitting in a particular folder in your application, but when you get ready to package your control/library/framework you want the end product to be a single assembly (that is, a single DLL file), and not an assembly plus a folder for images and a folder for JavaScript files and a folder for CSS files. In other words, you do not want to require that your users – other web developers – have to add a bunch of folders and images/JavaScript/CSS files to their website to start using your control/library/product; rather, you want everything to work once the developer drops your assembly into their Bin folder.

Such functionality is possible by using embedded resources. An embedded resource is a resource file – like an image, JavaScript, or CSS file – that is embedded within the compiled assembly. This allows a control/library/framework developer to embed any external resources into the assembly, thereby shipping the entire package as a single file. Consider ASP.NET’s validation controls. These controls require that certain JavaScript functions be present in order for them to perform client-side validation. When using the validation controls you don’t need to add any JavaScript files to your website; instead, the JavaScript used by these controls is embedded in one of the built-in ASP.NET assemblies. But if the external resources are embedded in the assembly, how do you get them out of the assembly and onto a web page?

The answer is WebResource.axd. WebResource.axd is an HTTP Handler that is part of the .NET Framework that does one thing and one thing only – it is tasked with getting an embedded resource out of a DLL and returning its content. What DLL to go to and what embedded resource to take are specified through the querystring. For instance, a request to www.yoursite.com/WebResource.axd?d=EqSMS…&t=63421… might return a particular snippet of JavaScript embedded in a particular assembly. The d querystring parameter contains encrypted information that specifies the assembly and resource to return; the t querystring parameter is a timestamp and is used to only allow requests to that resource using that URL for a certain window of time.

To see WebResource.axd in action, create an ASP.NET Web page that includes some validation controls, visit the page in a browser, and then do a View/Source. You will see a number of <script> tags pulling in JavaScript from WebResource.axd like so:

<script src="/YourSite/WebResource.axd?d=fs7zUa...&amp;t=6342..." type="text/javascript"></script>

<script src="/YourSite/WebResource.axd?d=EqSMSn...&amp;t=6342..." type="text/javascript"></script>

Here, WebResource.axd is pulling out embedded JavaScript from an assembly and returning that JavaScript to the browser. If you plug those URLs into your browser’s Address bar you’ll see the precise JavaScript returned.

Ok, so now that we know what WebResource.axd is and what it does the next question is, where is it? Clearly, there’s no file named WebResource.axd in your website – what’s going on here? Here’s my answer from the Stackoverflow question:

.axd files are typically implemented as HTTP Handlers. They don't exist as an ASP.NET web page, but rather as a class that implements the IHttpHandler interface. If you look in the root Web.config (%WINDIR%\Microsoft.NET\Framework\version\Config\Web.config) you'll find the following entry:

<add path="WebResource.axd" 
     verb="GET" 
     type="System.Web.Handlers.AssemblyResourceLoader" 
     validate="True" />

This entry says, "Hey, if a request comes in for WebResource.axd then use the HTTP Handler AssemblyResourceLoader in the System.Web.Handlers namespace."

The code for this class is a bit lengthy, so I can't post it here, but you can use a disassembler like the free Reflector to view this class's source code. You could probably get the original source code (with comments) by using the NetMassDownloader tool.

So there you have it. WebResource.axd is an HTTP Handler built into the .NET Framework that retrieves embedded resources from assemblies.

To learn more about WebResource.axd and how to go about embedding resources in an assembly, refer to my article, Accessing Embedded Resources through a URL using WebResource.axd.

Happy Programming!

Filed under:
Returning Dynamic Types from an Ajax Web Service Using C# 4.0
26 October 10 06:40 AM | Scott Mitchell | 4 comment(s)

Over at 4Guys I’m authoring a series of articles showing different techniques for accessing server-side data from client script. The most recent installment (Part 2) shows how to provide server-side data through the use of an Ajax Web Service and how to consume that data using either a proxy class created by the ASP.NET Ajax Library or by communicating with the Ajax Web Service directly using jQuery.

When returning data from a service it’s not uncommon to create a specialized Data Transfer Object (or DTO), and in Part 2 I create two such DTO classes. Here’s the basic design pattern:

  1. Create a DTO class that has properties that model the data you want to return. For instance, the following DTO class can be used to transmit CategoryID, CategoryName, and Description information about one or more categories:

     public class CategoryDTO
     {
         public int CategoryID { get; set; }
         public string CategoryName { get; set; }
         public string Description { get; set; }
     }

  2. In the service, get the data of interest from your object layer. In the demo for this article series I use Linq-To-Sql as my data access layer and object model. Here is code from the Ajax Web Service’s GetCategories method that retrieves information about the categories in the system:

     [WebMethod]
     public CategoryDTO[] GetCategories()
     {
         using (var dbContext = new NorthwindDataContext())
         {
             var results = from category in dbContext.Categories
                           orderby category.CategoryName
                           select category;
             ...
         }
     }

  3. Now the results need to be mapped from the domain object to the DTO. This can be done manually or by using a library like AutoMapper. In the above example, we would iterate through the results to create an array of CategoryDTO objects, which is what would be returned.

If you are using C# 4.0 you can choose to live in a looser-typed world. Rather than having the Ajax Web Service return a strongly-typed value (namely, an array of CategoryDTO objects) you could instead opt to have a more ethereal return type – dynamic! Having a return type of dynamic allows you to return an anonymous type, meaning you don’t need to create a DTO nor do you need to map the domain object to the DTO. Instead, you’d just create an anonymous type, like so:

public dynamic GetCategories()
{
    using (var dbContext = new NorthwindDataContext())
    {
        var results = from category in dbContext.Categories
                      orderby category.CategoryName
                      select new
                      {
                          category.CategoryID,
                          category.CategoryName,
                          category.Description
                      };

        return results.ToArray();
    }
}

Note the dynamic keyword as the method’s return type. Also note that results is a query that returns an enumeration of anonymous types, each of which has three properties – CategoryID, CategoryName, and Description. The call to the ToArray method executes the query and returns the array of anonymous types as the method’s output. Because the anonymous type’s properties have the same names as the CategoryDTO class’s properties, the client-side script calling this method can work with the returned anonymous type using the exact same code as with the strongly-typed return type.

Happy Programming!

Enumerating Through XML Elements Using LINQ-to-XML
28 September 10 09:25 PM | Scott Mitchell | 1 comment(s)

4Guys reader Dan D. recently emailed me with an inquiry surrounding my article series, Building a Store Locator ASP.NET Application Using Google Maps API, specifically on how to access a different set of XML elements within the XML data returned from the Google Maps API’s geocoding service. Google’s geocoding service is offered as a URL that, when requested, returns information about a particular address. For instance, if you point your browser to http://maps.google.com/maps/api/geocode/xml?address=1600+Pennsylvania+Ave,+Washington+D.C.&sensor=false you should see an XML response that indicates whether the address is valid, the formatted address, the components that make up the address, and geographical information about the address, including the latitude and longitude coordinates.

This geocoding service is used by the Store Locator application in two ways:

  1. To validate the user-entered address. If the user enters an ambiguous address, like Springfield, then the geocoding service will return possible matches. These are displayed to the user, allowing her to choose which address she meant.
  2. To determine the latitude and longitude coordinates of the user-entered address. These coordinates are used to retrieve those stores that are nearby.

The Store Locator application includes a method named GetGeocodingSearchResults that, when called, makes an HTTP request to the geocoding service and returns the results as an XElement object, one of the key components of LINQ-to-XML.

Dan’s question follows:

I have a question with regards to accessing the elements contained within the address_components[] array.  Specifically, I would like to return the long and short names for locality and country.  I was wondering if you could post a small article on how to iterate through the XML array components loaded into the XElement.

The address_components[] array Dan refers to is the set of <address_component> elements returned by the geocoding service. Again, visit http://maps.google.com/maps/api/geocode/xml?address=1600+Pennsylvania+Ave,+Washington+D.C.&sensor=false. Note how there are multiple <address_component> elements detailing the type and long and short names for each component of the address. For the address 1600 Pennsylvania Ave, Washington D.C. there are the following address components:

   <address_component>
    <long_name>Pennsylvania Ave NW</long_name>
    <short_name>Pennsylvania Ave NW</short_name>
    ...
   </address_component>
   <address_component>
    <long_name>District of Columbia</long_name>
    <short_name>District of Columbia</short_name>
    ...
   </address_component>
   <address_component>
    <long_name>District of Columbia</long_name>
    ...
   </address_component>
   <address_component>
    <long_name>United States</long_name>
    ...
   </address_component>

Note that each <address_component> element has a <long_name> and <short_name> child element, and one or more <type> child elements.

To simply iterate through each <address_component> element we could use the following code:

var results = GoogleMapsAPIHelpersCS.GetGeocodingSearchResults("...");

var addressComponents = results.Element("result").Elements("address_component");
foreach (var component in addressComponents)
{
    var longName = component.Element("long_name").Value;
    var shortName = component.Element("short_name").Value;

    var types = new List<string>();
    foreach (var type in component.Elements("type"))
        types.Add(type.Value);

    // At this point you can do whatever it is you want to do 
    // with the longName, shortName, and types information for
    // this component...
    if (types.Contains("locality") || types.Contains("country"))
        Response.Write(string.Format("<p>LongName = {0}, ShortName = {1}, Types = {2}</p>",
                                     longName, shortName,
                                     string.Join(", ", types.ToArray())));
}
Here, we reference the set of <address_component> elements using results.Element("result").Elements("address_component"), where results is the XElement object returned from the GetGeocodingSearchResults method. The Element("result") call gets a reference to the <result> XML element, while Elements("address_component") gives us the enumerable collection of <address_component> elements, which we can then loop through.

Inside the loop we get the values of the <long_name> and <short_name> XML elements and then loop through the set of <type> elements, adding the value of each to a List of strings (types). Finally, we can do what Dan is interested in doing – determine if the address component is for the locality or country and, if so, do something with the long and short names. Here, I simply display them via a Response.Write statement.

Another option is to use LINQ to create an anonymous type that models the information of interest. The following statement creates a variable named addressComponents2 that is an enumeration of anonymous objects that have three properties: LongName, ShortName, and Types, which contain the values of the <long_name>, <short_name>, and <type> elements for each <address_component>.

var results = GoogleMapsAPIHelpersCS.GetGeocodingSearchResults("...");

var addressComponents2 =
        from component in results.Element("result").Elements("address_component")
        select new
        {
            LongName = component.Element("long_name").Value,
            ShortName = component.Element("short_name").Value,
            Types = (from type in component.Elements("type")
                     select type.Value).ToArray()
        };

We can now filter the results using the Where method:

var filteredAddressComponents = addressComponents2
                                    .Where(addr => addr.Types.Contains("locality") ||
                                                   addr.Types.Contains("country"));

And now enumerating over filteredAddressComponents returns just those address components for the locality or country types. The following loop walks through each of these and emits the LongName, ShortName, and Types property values. Note how these are actual properties and not strings, meaning we have strong typing, which brings with it the benefits of IntelliSense and compile-time support.

// At this point you can use a foreach loop to 
// walk through the various components
foreach (var addr in filteredAddressComponents)
    Response.Write(string.Format("<p>LongName = {0}, ShortName = {1}, Types = {2}</p>",
                                addr.LongName, addr.ShortName,
                                string.Join(", ", addr.Types)));

Happy Programming!

Use jQuery to Open “External” URLs in a New Browser Window
15 September 10 03:24 AM | Scott Mitchell | 4 comment(s)

As any web developer knows, the HTML anchor element (<a>), when used in the following form:

<a href="http://www.scottonwriting.net">Click Me!</a>

creates a hyperlink with the text “Click Me!” that, when clicked, whisks the user to the specified href value, in this case my blog, ScottOnWriting.NET. By default, clicking a link opens the specified URL in the user’s existing browser window; however, using the <a> tag’s target attribute it is possible to open the URL in a new window. Adding target="_blank" to the <a> element will cause the browser to open the link in a new browser window:

<a target="_blank" href="http://www.scottonwriting.net">Click Me!</a>

Some websites like to have all links to “external” web pages open in a new browser window, while having “internal” links open in the same browser window. I use the words external and internal in quotes here because their definitions can depend on the website. Some websites would consider any URL that specifies a hostname in the href to be “external” – such as http://www.4guysfromrolla.com/ScottMitchell – while URLs that lack a hostname would be “internal” – such as /sowblog/archive/2010/09.aspx. Other websites might want links to partner websites to be considered “internal,” even though they include a hostname.

I recently worked on a project where the client wanted this kind of behavior. He had hundreds of existing web pages, each with dozens of links, all of which lacked a target attribute. He didn’t want to have to go through the pages and links, one at a time, adding the target attribute where needed. To help address this problem I wrote a very simple jQuery plugin that can be used to automatically add a target attribute to “external” URLs.

WARNING: I know just enough JavaScript and jQuery to be dangerous, so please don’t presume my plugin is in any way an example of best practices. In fact, if you have any feedback or suggestions on how to improve it, please let me know in the comments!

The plugin defines a single function, UrlTarget([whiteList], [targetName]). The following line of code (which you’d place in $(document).ready, presumably) will add a target="_blank" attribute to all “external” links. Without specifying a whiteList, all URLs that start with http:// or https:// are considered external, whereas all that don’t are considered internal:

$("a").UrlTarget();
If you want certain hostnames to be considered “internal,” simply specify one or more regular expressions in an array as the whiteList. If the hostname for a hyperlink matches any of the regular expressions then it is considered “internal” and the target attribute is not added. For instance, to have all URLs that point to 4GuysFromRolla.com or ScottOnWriting.NET considered “internal,” you’d specify the following whiteList value:

$("a").UrlTarget([/4guysfromrolla\.com$/i, /scottonwriting\.net$/i]);
If you specify a targetName value, the target attribute added to external URLs is assigned that targetName. If this input parameter is omitted then the target value “_blank” is used. Also, note that if a hyperlink with an external URL already has its target attribute set then it is not overwritten by UrlTarget. Likewise, if a hyperlink with an internal URL has a target attribute set, it is not removed.
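
Under the hood, the decision the plugin has to make for each hyperlink boils down to a hostname test. The sketch below is my own guess at that logic, not the plugin’s actual source; the isExternal name and the hostname parsing are assumptions:

```javascript
// Decide whether an href should be treated as "external".
// whiteList is an optional array of regular expressions; if the URL's
// hostname matches any of them, the link is treated as "internal".
function isExternal(href, whiteList) {
  // Only absolute http:// and https:// URLs are candidates for "external".
  if (!/^https?:\/\//i.test(href)) {
    return false;
  }

  // Extract the hostname: strip the scheme, then cut at the first
  // path, querystring, or fragment delimiter.
  var hostname = href.replace(/^https?:\/\//i, "").split(/[\/?#]/)[0];

  // A hostname matching any whiteList pattern is considered "internal".
  if (whiteList) {
    for (var i = 0; i < whiteList.length; i++) {
      if (whiteList[i].test(hostname)) {
        return false;
      }
    }
  }

  return true;
}
```

A jQuery plugin would then iterate over the matched <a> elements, calling a function like this on each href and setting the target attribute only when the link is external and no target is already present.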

To use my plugin you’ll need to download the script at http://scottonwriting.net/sowblog/CodeProjectFiles/urlTarget.js, save it to your website, and then reference it via a <script> tag. I’ve got a demo online available at http://scottonwriting.net/sowblog/CodeProjectFiles/JQueryLinksDemo.htm, which has the following JavaScript:

<script type="text/javascript" src="jquery.min.js"></script>
<script type="text/javascript" src="urlTarget.js"></script>

<script type="text/javascript">
        $(document).ready(function () {
            $("a").UrlTarget();
        });
</script>
Note that both the jQuery and urlTarget.js libraries must be referenced.

Happy Programming!

Adding a RESTful Service to My Boggle Solver
11 September 10 02:47 AM | Scott Mitchell | 5 comment(s)

This blog post has been deprecated. Please see Updating My Online Boggle Solver Using jQuery Templates and WCF for an updated discussion on the solver service, the data it returns, and how to call it from a web page.

My immediate and extended family enjoys playing games, and one of our favorites is Boggle. Boggle is a word game trademarked by Parker Brothers and Hasbro that involves several players trying to find as many words as they can in a 4x4 grid of letters. At the end of the game, players compare the words they found. During this comparison I've always wondered what words we may have missed. Was there some elusive 10-letter word that no one unearthed? Did we only discover 25 solutions when there were 200 or more?

To answer these questions I created a Boggle solver web application (back in 2008) that prompts a user for the letters in the Boggle board and then recursively explores the board to locate (and display) all available solutions. This Boggle solver is available online - fuzzylogicinc.net/Boggle. My family uses it every time we get together and play Boggle. For more information on how it works and to get your hands on the code, check out my article, Creating an Online Boggle Solver.

Recently, I’ve been working on some projects that have involved creating RESTful web services using WCF. Since it was a Friday, I decided to have a little fun and add a RESTful interface to the Boggle solver. This was actually quite easy to do and took all of 5 minutes.

Creating the Boggle Solver Service

I started by adding a new item to my website of type WCF Service, naming it Solver.svc. This created three files:

  • Solver.svc
  • ISolver.cs
  • Solver.cs

In the contract (ISolver.cs) I added a single method, Solve, that accepts two inputs: a string representing the board and a string indicating the minimum number of letters for a word to be considered a solution. (Boggle rules allow for words of three letters or more, but house rules only count words that are four letters or longer.) I then used the WebGet attribute to indicate that the board and length input parameters would be specified via the querystring fields BoardID and Length, and that the resulting output should be formatted as JSON.

[ServiceContract]
public interface ISolver
{
    [OperationContract]
    [WebGet(UriTemplate = "?BoardID={board}&Length={length}", ResponseFormat = WebMessageFormat.Json)]
    BoggleWordDTO[] Solve(string board, string length);
}

Note that the Solve method returns an array of BoggleWordDTO objects. This is a new class I created to represent the data to transmit from the service. This class has two properties:

  • Word – a string value that represents a word found in the Boggle board, and
  • Score – the score for that solution. According to the official rules, three- and four-letter words are worth 1 point, five-letter words are worth 2, six-letter words are worth 3, seven-letter words are worth 5, and words of eight letters or longer are worth 11.
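
Those scoring rules amount to a simple length lookup. Sketched in JavaScript for illustration (the actual solver computes scores in C#):

```javascript
// Official Boggle scoring by word length: 3-4 letters = 1 point,
// 5 = 2, 6 = 3, 7 = 5, and 8 or more = 11. Shorter strings score 0.
function boggleScore(word) {
  var len = word.length;
  if (len < 3) return 0;
  if (len <= 4) return 1;
  if (len === 5) return 2;
  if (len === 6) return 3;
  if (len === 7) return 5;
  return 11;
}
```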

The Solve method implementation (Solver.cs) is pretty straightforward. It starts with a bit of error checking to ensure that the passed in board and letter information is kosher. Next, it creates a BoggleBoard object, specifying the minimum number of letters for a solution and the board contents. Then the BoggleBoard object’s Solve method is invoked, which performs the recursion and computes the set of solutions (as an object of type BoggleWordList). The solutions are then converted into an array of BoggleWordDTO objects, which is then returned to the client.

public BoggleWordDTO[] Solve(string board, string length)
{
    // (Error checking of the board and length inputs omitted for brevity.)

    // Create the BoggleBoard
    BoggleBoard bb = new BoggleBoard(
                        board[0].ToString(), ..., board[15].ToString());

    // Solve the Boggle board
    var solutions = bb.Solve();

    // Populate and return an array of BoggleWordDTO objects
    return solutions
                .Select(s => new BoggleWordDTO()
                {
                    Word = s.Word,
                    Score = s.Score
                })
                .ToArray();
}
Because the service is configured to return the data using JSON, the results are serialized into a JSON array.

In addition to creating the Solver-related files and writing the code I noted, I also had to add <system.serviceModel> configuration to Web.config to permit HTTP access to the service and to enable ASP.NET compatibility. The reason I had to enable ASP.NET compatibility is that the dictionary used by the solver is a text file stored on disk, and the solver gets the path to that text file using Server.MapPath (namely, HttpContext.Current.Server.MapPath("…")). Without ASP.NET compatibility, HttpContext.Current is null when the service attempts to solve, and the call to Server.MapPath blows up. Also, I had to specify the Factory attribute in the <%@ ServiceHost %> directive of the Solver.svc file.

[UPDATE: 2010-09-10] Ben Amada posted a helpful comment pointing me to the existence of the HostingEnvironment.MapPath method, which does the same work as Server.MapPath but doesn’t require an HttpContext object. I updated this code accordingly. I also updated the code that cached the dictionary in memory, replacing the use of HttpContext.Current.Cache with HttpRuntime.Cache, which I probably should have been using all along. The code has been updated. Thanks, Ben!

Using the Boggle Solver Service

To use the service, just point your browser (or your code/script) to:  http://fuzzylogicinc.net/Boggle/Solver.svc?BoardID=board&Length=length. The board value should be entered as a string of the characters in the Boggle board, starting from the upper left corner and reading to the right and down. For example, if you had the board:

r e i b
t m f w
i r a e
r h s t 

You would use a board value of reibtmfwiraerhst. The length value should be a number between 3 and 6, inclusive.

So, to find all solutions to the above board that are four or more letters, you’d visit: http://fuzzylogicinc.net/Boggle/Solver.svc?BoardID=reibtmfwiraerhst&Length=4
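
Building the BoardID value from a grid is just a matter of concatenating the rows in reading order. A small sketch (toBoardId is my own helper name, not part of the service):

```javascript
// Flatten a 4x4 Boggle grid into the BoardID querystring value by
// reading each row left to right, top row first.
function toBoardId(grid) {
  return grid.map(function (row) { return row.join(""); }).join("");
}

var grid = [
  ["r", "e", "i", "b"],
  ["t", "m", "f", "w"],
  ["i", "r", "a", "e"],
  ["r", "h", "s", "t"]
];

// → "reibtmfwiraerhst"
var boardId = toBoardId(grid);
var url = "http://fuzzylogicinc.net/Boggle/Solver.svc?BoardID=" +
          boardId + "&Length=4";
```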

Doing so would return the following (abbreviated) JSON:

[{"Score":1,"Word":"fame"}, {"Score":1,"Word":"mite"}, {"Score":1,"Word":"time"}, {"Score":2,"Word":"timer"}, ... ]
The above JSON represents an array of objects, where each object has two properties, Score and Word.

So how can this service be used? Well, with a bit of JavaScript you can call the service from a browser and display the results dynamically. I’ve included a rudimentary example in the download (which you can find at the end of this blog post) that prompts the user to enter the 16 characters for the board and the minimum number of letters. It then uses jQuery’s getJSON function to make a call to the service and get the data back. The JSON array is then enumerated and a series of list items are constructed, showing each solution in a bulleted list.

Here is the web page when you visit it and enter a Boggle board and the minimum number of letters (but before you click the “Find All Words!” button).


Clicking the “Find All Words!” button executes the following jQuery script:

$.getJSON("Solver.svc",
    {
        "BoardID": $("#board").val(),
        "Length": $("#length").val()
    },
    function (words) {
        var output = "No solutions exist!";

        if (words.length > 0) {
            output = "<h2>" + words.length + " Solutions!</h2><ul>";

            var score = 0;

            $.each(words, function (index, word) {
                score += word.Score;
                output += "<li>" + word.Word + " (" + word.Score + " points)</li>";
            });

            output += "</ul><p>Total score = " + score + " points!</p>";
        }

        $("#solutions").html(output);
    });
Note that the above script calls the Solver.svc service passing in the BoardID and Length querystring parameters. The textbox where the user enters the board has an id of board while the minimum letter drop-down list has an id of length. The function defined in the call is what is executed when the result comes back successfully. Here, the jQuery each function is used to enumerate the array and build up a string of HTML in a variable named output that produces a bulleted list of solutions. The total number of solutions and total number of points is also included in output. Finally, the contents of output are dumped to a <div> on the page with an id of solutions.

Here’s the page after clicking the “Find All Words!” button. Nothing fancy, of course, and not nearly as useful or eye-pleasing as the website’s results page, but it does illustrate one way you can use the Boggle Solver.svc service.


Download the Code!

You can download the complete Boggle solver engine, web application, and WCF RESTful service from http://aspnet.4guysfromrolla.com/code/BoggleSolver.zip.

Happy Programming, and Happy Boggling!
