January 2005 - Posts

A Guide to Orlando, FL, the ASP.NET Connections Spring 2005 Site
31 January 05 11:50 AM | Scott Mitchell

As I blogged about earlier, I'll be speaking at this year's ASP.NET Connections conference in March 2005. This year's conference is located in Orlando, Florida at the Grand Cypress Hyatt Hotel. Last year, when TechEd was in San Diego (where I live), I blogged about things to do in San Diego: sights to see, restaurants, tourist attractions, and so on. About a week ago regular ScottOnWriting.NET reader Mark pinged me, mentioning he was from Orlando. I asked him to provide a similar writeup for those of us trekking to Orlando for the conference in March, and he was kind enough to oblige... Here's the 411 from Orlando, in Mark's words...


The ASP.NET Connections conference is being hosted in southwest Orlando, where I've been living for the last seven years. Orlando is a rapidly growing and beautiful city famous for its theme parks, but there is more to it than just the theme parks. The weather is usually sunny and pleasant for about 7 to 8 months out of the year, but from the end of May to the beginning of October is the hot and humid thunderstorm (and more recently hurricane) season. The weather is moderate at the end of March: usually upper 70s for daytime highs and mid 50s for nighttime lows. However, Florida is prone to one last cold front in March that can sometimes drop lows into the 30s, so check the forecast before you pack.

If you'll be attending the conference, you'll no doubt enjoy being in Lake Buena Vista which is right next to Disney World and is bustling with daytime and night time activity. There are a few sites to check out in Lake Buena Vista within walking distance:

  • The Grand Cypress Hotel - This is the location of the conference; make sure to at least check out the maze of pools and the waterslide.
  • Orlando Ale House - If you have a chance on the Sunday before or Thursday after, walk right out of the entrance of the hotel to this big sports bar with great beer prices. It's a lively place when college basketball's March Madness is underway.
  • Downtown Disney - Check out the website for all the info. Personally, I like Universal City Walk much better, as I describe in the next section; however, if you are bringing kids I think they'll enjoy it more here. If you have the money, check out the La Nouba show. I would skip paying for Pleasure Island and just enjoy the free places (longer walk).
  • Bahama Breeze - A casual Caribbean-themed restaurant with excellent food and a huge outdoor balcony where you can enjoy the drinks, weather, friends, and calypso music (longer walk).

And these attractions are all within walking distance of the hotel where ASP.NET Connections is being hosted! If you are a little more adventurous, you can catch a cab or take your rental car and visit:

  • Theme Parks - OK, I might as well get these out of the way. Disney, Seaworld and Universal are all very close to the Grand Cypress Hotel. If you are going with just adults that like rides, check out Islands of Adventure at Universal and then enjoy an evening at City Walk which you will walk through to get to the park.
  • Restaurant Row - If you are serious about food, this is the place to come. My favorite is Seasons 52 followed by Bonefish Grill, but this whole area has a number of medium to high end restaurants and it is hard to go wrong with any of them. Get there by 6 PM to avoid long waits. Note: Seasons 52 has open seating at the bar tables (About 5 miles from the hotel).
  • Bay Hill Invitational - The last day of the tournament is on the 20th, so for those of you who get in town early and love golf, it's a fun place to spend the day and watch the best players in the world compete (About 5 miles from the hotel).
  • Tibet-Butler Preserve - For you nature lovers out there, this park has trails where you can observe a number of Florida critters including bald eagles in their natural state on a beautiful chain of lakes (About 5 miles from the hotel).
  • Disney's Fantasia Mini Golf (unofficial link that describes it best) - Most of the holes have sand traps and many require 2 shots just to have a chance at the hole. Just keeping the ball on the course is a huge challenge for a non golfer like me (About 3 miles from the hotel).

Please see my map for the locations of the above-mentioned sites and a few more.

Orlando should keep you plenty entertained - I hope you enjoy your stay here!


Thanks for the info on Orlando, Mark. Hope to see you all at the ASP.NET Connections conference this year.

ASP.NET 2.0 Beta 2 Pushed Back (Again)
27 January 05 04:19 PM | Scott Mitchell

In case you hadn't heard, Microsoft has (again) pushed back the expected release date for ASP.NET 2.0 Beta 2. I remember back when Beta 2 was slated for Q4 2004, which then got moved to Q1 2005, and now it's looking like Q2 2005. From the article (emphasis mine):

Microsoft Corp. is expected to roll out the much-anticipated second beta release of its Visual Studio 2005 development platform at the end of March or early April, sources close to the company said.

According to sources, Microsoft is scheduled to release Beta 2 of Visual Studio 2005, code-named Whidbey, on March 31. Sources also said Microsoft is expected to release the release candidate version of Visual Studio 2005 around the time of the Microsoft PDC (Professional Developers Conference) in Los Angeles in mid-September.

Beta 2 is scheduled to ship with a “Go Live” license, which will allow developers to create public Web sites and applications using ASP.NET 2.0. For that reason it's essential that Beta 2 be as rock solid as a beta can possibly be. If you think about it, Beta 2 for ASP.NET 1.0 was very solid, all things considered. There were only a handful of breaking changes between Beta 2 and RTM for 1.0, and since its release there have been only a couple of service packs and one new version (1.1), which was primarily an upgrade to VS.NET.

In any event, I'm waiting on the public Beta 2 release to start working on a new book. I've been hoping for an early release, as I'm itching to get started, but a delay is probably best as I have some other projects that need my attention in the near future.

Visual Studio Hacks
26 January 05 07:03 PM | Scott Mitchell

Last year James Avery approached me about providing some hacks for his upcoming O'Reilly hacks book, Visual Studio Hacks. I was honored to be asked, and created five such hacks on the following topics:

  • Halting on exceptions
  • Using the Reflector Add-In
  • Setting breakpoints
  • Spell checking your code/comments, and
  • Generating metrics on your code (LOC, code complexity, and so on)

These are just five of the many hacks. The majority were authored by James Avery and numerous other contributors - see James's announcement for a full list of the contributors and additional information on this upcoming book.

So what is this book about? Who is the target audience? James describes it best:

I decided to write a book for 80% of the developers who use Visual Studio. This book is for the everyday developer who already knows how to use Visual Studio, but wants to learn about tips and tools that can help them in their everyday development tasks.

The book is available on Amazon.com for pre-order but likely won't ship until early March. If you use Visual Studio - and I know you do! - consider picking up a copy of this book.


This is the first book I've contributed to where I've not been a primary or sole contributor. Basically, when you contribute only a small percentage of a book, most publishers will just hire you on as a “work for hire,” paying a flat rate for your contribution. I don't know the details of the other contributors' arrangements, but I'm wagering that James - the main editor/contributor - is the only one receiving a percentage of the gross.

These types of writing contributions - a single chapter or a small number of hacks - are a great way for prospective authors to ease into the scene and test whether writing is of interest to them. As one who has written a handful of books, I find this type of job nice every now and then, as it's the only opportunity I have to churn out a short amount of work over the course of a few days and be 100% done! No lengthy author review; no formulating the TOC; no endless nights trying to hammer out a chapter to meet a deadline.

Even when only contributing a subset of a book's material, serving as the editor/project manager can sometimes amount to more work than just writing the damn book yourself. I don't know what James's experiences were on this title, but I recall my first ASP.NET book, ASP.NET: Tips, Tutorials, and Code. This book had seven authors; I authored only three of the book's chapters (granted, one of the chapters was approximately 100 pages), but served as the program manager. Essentially, I created the TOC and doled out the writing responsibilities, served as the first technical editor/QA guy, and was responsible for maintaining, as best I could, a consistent theme/writing style across all chapters. This book took more time and energy, and caused more stress, than any other book I've worked on. I think this stemmed from the following reasons:

  • There was some chaos in the list of contributing authors. Four or so of the original authors ended up dropping off due to other commitments.
  • The book's code was written and fully tested on ASP.NET Beta 1. If you used ASP.NET in the early betas you'll remember that there were some big changes from Beta 1 to Beta 2, meaning virtually all of the code samples had to be rewritten and retested, along with the supporting prose.
  • Editing/QAing/managing people is not my forte, nor my idea of a good time. I learned this through this project. I like to write and program. Doling out work, reminding people to meet deadlines, dealing with various people-related issues, serving as a go-between for the authors and Sams editors - I found out that these are things I'm neither good at nor particularly enjoy.

It's not surprising, then, that since then I've stuck to writing my own, single-author books. This contribution for Visual Studio Hacks was a nice foray off the normal path since it did not involve any of the tasks I find most difficult/stressful. Hopefully James had a more enjoyable and less trying time in his role in writing/editing Visual Studio Hacks than I did with ASP.NET: Tips, Tutorials, and Code.

Stopping Comment Spam in .Text Using Triggers
24 January 05 11:22 AM | Scott Mitchell

With Google's recent rel="nofollow" initiative designed to reduce blog comment spam, I've been reading up more on comment spam, techniques to fight it, and so on. One great resource is the Comment Spam blog, which contains a blacklist of comment spammer URLs and a plugin for blocking comment spam for those with a Movable Type blog.

As discussed earlier, there are a number of techniques for fighting comment spam - moderation, filters, captchas, comment URL munging, and so on. Being a typical developer, I'm motivated in any endeavor to choose the tactic that requires the least amount of work from yours truly. Clearly this rules out moderation. I have doubts that URL munging, which Google's initiative is fostering, will stop comment spam. It will reduce its effectiveness, granted, but spammers are a stubborn and evil bunch, and I don't see them just throwing in the towel.

My preferred technique for stopping comment spam before it starts is to use filters. If you have access to your .Text blog's SQL Server, you can set up filtering fairly simply. My first attempt at this was to modify the blog_InsertEntry stored procedure and add hard-coded IF statements that would short-circuit the INSERT into the database table if the comment text contained any offending URLs. Basically this came down to something like:

IF CHARINDEX('bannedURL1', @Text) = 0
   AND CHARINDEX('bannedURL2', @Text) = 0
   ...
   AND CHARINDEX('bannedURLN', @Text) = 0
BEGIN
   ... run stored procedure statements ...
END

The logic here: if the text did not contain an instance of any of the banned URLs, the stored procedure's statements would run. Otherwise, none of the sproc's instructions were executed, resulting in nothing being committed to the database.

The problem with this approach was that the banned URLs were hard-coded in the sproc, so if a new spammer came along, I'd have to add another clause to that ever-growing IF statement. This violated my “do as little work as possible” creed. In reading up on comment spam, I came across the blog entry Preventing .Text Blog Spam Using Triggers. This technique did three things that I liked:

  1. It used triggers rather than mucking with the stored procedure. The benefit of this, as I see it, is that it makes the comment-spam-stopping piece orthogonal to the .Text application. That is, it's a component that could be added or removed without requiring any change to the core .Text sprocs, tables, etc.
  2. Rather than using hard-coded checks, the trigger queried a table that contained offending substrings. This way, updating the list of banned URLs could be done through a simple Web-based interface.
  3. In addition to checking for banned substrings, the trigger approach also counted up the number of links in the text body and blocked comments that had 15 or more links. (Many times comment spammers will leave dozens of links in one post.)

I made a few changes to the script. As provided, it had a table that allowed one to specify banned substrings - that is, you could remove comments containing the word Viagra, for example. I decided this was more flexibility than needed, so I changed the table and logic to allow only URLs to be specified. Additionally, I had to change the INSERT statements in the trigger since I use an older version of .Text.

The one downside of this approach (or any filter approach of this manner) is that each blogger has his own list of banned URLs. If a new comment spammer Web site comes about and this comment spammer starts spreading around his or her links, each blogger will have to waste time adding the URL to his or her own blacklist. What we need is a distributed model, where everyone can contribute to a global blacklist and .Text blogs will automatically update their table of banned URLs periodically, as well as update the global table. I don't know of a way to fully automate this process, but it wouldn't be difficult to create a WinForms desktop client that would allow a user to add new entries to the blacklist and, optionally, update/download the data from the global blacklist. Do you think there would be a demand for this? Would enough people use this to make it worthwhile?
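
The guts of such a client might boil down to something like the following sketch. To be clear, this is my own illustration, not an existing tool; the blacklist URL, the BannedURLs table/column names, and the connection string are all hypothetical placeholders.

using System;
using System.Data.SqlClient;
using System.IO;
using System.Net;

public class BlacklistSync
{
    public static void Main()
    {
        // Download the (hypothetical) global blacklist: one banned URL per line.
        WebRequest request = WebRequest.Create("http://example.com/globalBlacklist.txt");
        string blacklist;
        using (StreamReader reader = new StreamReader(request.GetResponse().GetResponseStream()))
        {
            blacklist = reader.ReadToEnd();
        }

        // Merge each entry into the local banned-URLs table that the trigger consults.
        using (SqlConnection conn = new SqlConnection("...your connection string..."))
        {
            conn.Open();
            foreach (string line in blacklist.Split('\n'))
            {
                string bannedUrl = line.Trim();
                if (bannedUrl.Length == 0) continue;

                // Insert only if the URL isn't already in the local table.
                SqlCommand cmd = new SqlCommand(
                    "IF NOT EXISTS (SELECT * FROM BannedURLs WHERE URL = @URL) " +
                    "INSERT INTO BannedURLs (URL) VALUES (@URL)", conn);
                cmd.Parameters.Add(new SqlParameter("@URL", bannedUrl));
                cmd.ExecuteNonQuery();
            }
        }
    }
}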

My Latest MSDN Article is Now Online
21 January 05 09:43 AM | Scott Mitchell

It's been a few months since my last MSDN Online article appeared in the ASP.NET DevCenter, but I'm happy to announce that the dry spell has ended with the publication of my latest article, Creating Dynamic Data Entry User Interfaces. This article's primary focus is to serve as the source for information on using dynamically added Web controls in an ASP.NET page, a common question in the newsgroups and online forums. (So, admittedly, the title is a bit misleading...)

To illustrate the lessons learned, the last third or so of the article examines an application that allows custom data entry user interfaces to be created based on some criteria. It allows an administrator-type user to specify what questions a particular type of user needs to answer. Then, on the data entry page, the data entry forms are dynamically loaded based on the visiting user.
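
For those who haven't bumped into the dynamic-controls question before, the crux of it is that controls created in code must be re-created on every request, postbacks included, early enough in the page lifecycle. A minimal sketch (my own illustration, with a hypothetical PlaceHolder named phDynamic - not code from the article):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class DynamicEntryPage : Page
{
    // Declared in the .aspx page as <asp:PlaceHolder id="phDynamic" runat="server" />.
    protected PlaceHolder phDynamic;

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);

        // Re-add the control on every visit so its ViewState and
        // postback data are restored correctly.
        TextBox txtDynamic = new TextBox();
        txtDynamic.ID = "txtDynamic";
        phDynamic.Controls.Add(txtDynamic);
    }
}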

My next MSDN Online article will be on skmFAQs.NET, a packaged application for putting FAQs online. (I'm still looking for users who are interested in giving skmFAQs.NET a whirl and providing usability feedback and suggestions!!)

I'll Be Speaking at ASP.NET Connections Spring 2005
20 January 05 10:42 AM | Scott Mitchell

I'll be presenting three sessions at the ASP.NET Connections Spring 2005 conference this year, which will be held in Orlando, Florida. The complete schedule, with synopses, is available online at http://www.devconnections.com/shows/asp/default.asp?c=1&s=57. The titles of my three talks are:

  • Working with HTTP Handlers and Modules
  • Syndicating and Consuming RSS Content
  • Working with Client-Side Script

If you are going to be attending DevConnections this spring, please be sure to drop on by my sessions, if for no other reason than the fact that I have the goofiest looking photo among all the speakers. (The prize for most serious pose goes to Russ Nemhauser; the prize for looking the most happy without looking like that smile was forced goes to Christian Weyer.)

The End of Comment Spam?
19 January 05 09:24 AM | Scott Mitchell

What is Comment Spam?
Comment spam is an evil and real problem for blogs. The premise goes as follows: evil, vile spammers use old blog entries to post comments that are littered with links to their porn/gambling/diploma/pharmaceutical sites, so that Google/MSN/Yahoo! spider the site, find these links, add them to their dictionaries, spider them, improve their page rank, etc., etc. Technologies that are designed to make posting easier, such as CommentAPI, just help automate the comment spam posting by these ne'er-do-wells.

Past Techniques for Stopping Comment Spam
Until recently, the main approaches for stopping comment spam have been:

  • Moderation - a post doesn't appear on a blog until the blog owner reviews and approves it. The advantage of this is that only on-topic, non-spam/non-inflammatory posts are displayed; the disadvantage is that the blog owner must now take the time to micro-manage approval of messages.
  • Use of a Captcha - a captcha is a test that most humans can pass, but current computer programs cannot. We've all seen these: typically a sequence of wavy letters that you must type into a textbox before proceeding. The downside to captchas is that, to my knowledge, the CommentAPI specification does not support them, so you can only utilize captchas when entering comments through the Web interface. (There's a Captcha control for .Text blogs, as discussed here.)
  • Banning Certain Substrings from Comments - another approach, which is the one I use here on ScottOnWriting.NET, is to simply restrict certain substrings from appearing in the comment. There are varying degrees of complexity that can be applied here. I simply have a set of static strings I search for and add to them when a particularly nasty comment spammer starts causing trouble. Other solutions actually utilize a global blacklist of URLs used by comment spammers, such as http://www.jayallen.org/comment_spam/blacklist.txt.
  • Munging the URLs in Comments - since comment spammers post their URLs to improve their rank in the search engines, one can remove the impetus for spamming by removing the desired benefit. One way to accomplish this is to munge the URLs in a comment from something like http://www.somesite.com/BuyViagra.htm to redirect.aspx?http://www.somesite.com/BuyViagra.htm, or to utilize Google's redirect link (which doesn't impact PageRank): http://www.google.com/url?sa=D&q=URL, as discussed here. (A sketch of such a redirect page follows this list.)
  • Require Authentication to Post Comments - many online forums use this technique, requiring that a user have an account before being able to post. The theory here is that if someone starts posting spam or off-topic, inflammatory posts, they can be banned and their obnoxious posts deleted. Sure, a motivated spammer can create a new account, but they have to go through the process of using a new email address, filling out an account creation form, and verifying their account by clicking on some link received in an email. The major downsides to this are (1) that CommentAPI (to my knowledge) doesn't support any sort of authentication piece, and (2) those who want to post to your blog need to create an account. Similarly, if another blogger takes the same approach, they'll need to create another account over there. And so on and so on for every blogger that requires authentication.
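
Here is the promised sketch of the redirect-page flavor of URL munging. This is my own bare-bones illustration (1.x-style code-behind; the class and page names are hypothetical):

using System;
using System.Web;
using System.Web.UI;

// Code-behind for a hypothetical redirect.aspx. A munged comment link such as
// redirect.aspx?http://www.somesite.com/BuyViagra.htm puts the original URL
// after the '?'; this page just bounces the visitor there, so the search
// engines credit the redirect page rather than the spammer's site.
public class RedirectPage : Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        string target = Request.Url.Query.TrimStart('?');
        if (target.Length > 0)
            Response.Redirect(target);
    }
}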

None of these solutions is really a panacea; the true fix for comment spam is to have some centralized user store and to have blogs require folks to authenticate against this store in order to post. I blabbed on more about this idea in a past blog entry, Improving the Blog Commenting Experience.

A New Alternative to Fighting Comment Spam
Yesterday Google announced a new attribute for anchor tags that, if present, indicates that its spiders won't follow the URL, thereby negating the benefits of comment spamming (much like URL munging removes the benefits, except this approach, IMO, is simpler). Basically, if you add rel="nofollow" to an HREF, Google won't spider the link (e.g., <a href="Blah.aspx" rel="nofollow">This won't be spidered!</a>).
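
For blog engine authors, applying the attribute is nearly a one-liner. Here's a rough sketch of the idea (my own illustration, not any particular engine's code); a real implementation would want to respect anchors that already carry a rel attribute:

using System.Text.RegularExpressions;

public class NoFollowFilter
{
    // Stamp rel="nofollow" onto every anchor tag in a comment's HTML
    // before it is rendered, so search engines won't follow the links.
    public static string Apply(string commentHtml)
    {
        return Regex.Replace(commentHtml, "<a ", "<a rel=\"nofollow\" ",
            RegexOptions.IgnoreCase);
    }
}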

Will this measure stop comment spam? It depends, primarily, on how many search engines support this and, more importantly, how many blog engines support this. The good news is that not only will Google respect the rel="nofollow" attribute, but so will MSN Search and Yahoo! Also, a large number of blog engines have promised to utilize this technique, including:

  • LiveJournal
  • SixApart
  • Blogger
  • MSN Spaces
  • Community Server (the evolution of .Text)

Even if the vast majority of blog engines start using the rel="nofollow" attribute, spammers may still run rampant in the hope that some blogs won't support it. Think of it this way - how much stuff have you purchased from a spammer, yet how many spams a day do you get? In the end, I think Google/MSN Search/Yahoo!'s addition of the rel="nofollow" attribute is a very positive step in the right direction. But one would have to be a bit naive to think it spells the end of comment spam, meaning we'll still need to use one or more of the techniques I discussed previously until we finally have some global authentication/user store available that everyone agrees to use...

How to Manage ListServs - Use GMail
18 January 05 12:36 PM | Scott Mitchell

A little over six months ago I started using GMail, Google's popular email service. (I have some invites, btw, if you have not yet received one and are interested; if so, drop me a line...) I still use Outlook for my personal and business-related emailing, contact management, scheduling, etc., but I've since moved all of my ListServ email over to GMail. GMail is great for ListServ activity because of its:

  • Large disk quota. Even after six months of relatively heavy ListServ traffic my GMail size has reached only 119 MB, or 12% of the gigabyte GMail provides.
  • Grouping messages in a thread view. I never liked using Outlook for ListServs because of the fragmented message stream that continuously poured into my Inbox. Yes, I had incoming messages automatically filtered into folders, but I might get an email on one list from Person X about Topic Y, then the next email, chronologically, might be from Person Z on Topic A, then the next from Person B on Topic Y, and so forth. Basically it was hard to follow a thread. (Yes, I know that Outlook lets you group messages by Subject, but I found more often than not it would leave out messages or misorder them, likely due to mail clients or SMTP/POP servers that tinkered with the subject line.)

    GMail's grouping of messages in threads does break every now and then, as I suspect some clients leave out the Thread-Index header that GMail (I believe) uses to group threads. But for the most part, the threading is done very well, in an easy-to-use Web-based interface.
  • Searching is fast, thorough, and accurate. Searching email in GMail is as easy and powerful as searching the Web through Google. No surprise here, but compare this to Outlook's default search, which is SLOOOOOW. (For my Outlook searching I use Lookout, although Google Desktop Search and other technologies provide similarly fast searching of email and other contents.)

I found that with GMail I was able to subscribe to and manage many more ListServs than with Outlook, for the reasons mentioned above. I wish Google could do the same for USENET; yes, they have Google Groups, but the USENET posts are delayed by several hours.

Presentation Style: Lots of Slides or Few?
14 January 05 09:26 AM | Scott Mitchell

I'll be speaking at the Spring 2005 ASP.NET Connections Conference this year and am putting the final touches on my three sessions. As I've blogged about before, I'm quite verbose in my writing, and that translates a bit to my PowerPoint presentations - that is, I use a lot of slides. A lot more than some others use.

In some 60-90 minute presentations I've seen, the presenter has, maybe, 10 slides. Each slide has a series of very high-level bullet points and the speaker delves into each bullet point with, perhaps, three or five minutes of talking. There may also be lengthy demos that don't have any corresponding slides, but take up another five or ten minutes of the talk.

Compare that to my typical approach. For a 90 minute talk I might have 50-75 slides, obviously containing a finer level of detail than the aforementioned style. Oftentimes I embed my demo in slides - I still go to Visual Studio .NET, use a browser, etc., but in the slides I have code snippets, screenshots, and so forth. The benefit of this (at least in my eyes - what do you think?) is that the slide deck is self-contained in a way. When someone goes back to work at the end of the conference (or user group talk or training session or what have you), they can use the slides as a reference that actually has some level of detail. My concern is that too many slides prove too distracting, that people might not like the near-constant slide flipping (I've got to average about 60 seconds per slide).

Thoughts? Comments? What's your preferred presentation style?

Enhanced Client-Side Script Features in ASP.NET 2.0
13 January 05 03:33 PM | Scott Mitchell

There are two flavors of client-side script:

  • Functions and code that run immediately - such code is placed within <script> blocks on a page.
  • Code that runs in response to some event - this code can be wired up to an HTML element's event handler (e.g., <input type="button" onclick="alert('foo');" ... />) or specified in code, like window.onload = somefunction.

In ASP.NET 1.x there exist a handful of methods for emitting script in <script> blocks: RegisterClientScriptBlock(), RegisterStartupScript(), RegisterArrayDeclaration(), and so on. For tying script to an HTML element's event handler, Web controls expose an Attributes collection for this purpose (myButton.Attributes["onclick"] = "alert('foo');";, for example). For much more on this topic check out my article Working with Client-Side Script.
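
To make the 1.x model concrete, here's a quick sketch (assuming a Button control named myButton, as in the snippet above):

// Emit a <script> block once per page; the key ("popup") guards against
// the same block being registered twice.
if (!Page.IsClientScriptBlockRegistered("popup"))
{
    Page.RegisterClientScriptBlock("popup",
        "<script language=\"javascript\">function popup() { alert('foo'); }</script>");
}

// Tie client-side script to a Web control's event handler via Attributes.
myButton.Attributes["onclick"] = "popup();";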

I recently started digging into ASP.NET 2.0 in earnest, as I blogged about before. The ASP.NET team has added some great improvements to the client-side scripting capabilities of ASP.NET in ASP.NET 2.0. For one, they moved the RegisterXXX() methods out of the Page class and into a ClientScriptManager class. An instance of this ClientScriptManager class is exposed through the Page class's ClientScript property, meaning your ASP.NET code now looks like:

ClientScript.RegisterClientScriptBlock(...);

As opposed to:

Page.RegisterClientScriptBlock(...);

(The old syntax still works for backwards compatibility, but VS 2005 will give you a warning...)

The method signatures of the RegisterXXX() methods have changed, too. In ASP.NET 1.x they took in two string parameters, a key and the actual script to emit. In ASP.NET 2.0 they also take in a Type instance, which in most cases will just be set to the type of the control injecting the script. This allows two different controls to inject script blocks with the same key without overriding one another's scripts. There's also an optional final Boolean parameter that, if set to True, will automatically add the opening and closing <script> block tags.
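
In other words, the 2.0 version of the earlier example looks something like this (signature as of the 2.0 betas; here I pass the page's own type):

// The Type + key pair uniquely identifies the script block; the final
// Boolean tells ASP.NET to wrap the script in <script> tags for us.
ClientScript.RegisterClientScriptBlock(this.GetType(), "popup",
    "function popup() { alert('foo'); }", true);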

In addition to the ASP.NET 1.x RegisterXXX() methods, 2.0 introduces a couple of new ones to boot. There's the RegisterClientScriptInclude() method, which adds a <script src="url" type="text/javascript"></script> tag. (The benefit of includes is that they reduce the total weight of the page, since browsers can cache the external JavaScript file...) Want something to happen when the form submits? Use the RegisterOnSubmitStatement() method.
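
A quick sketch of the two (the script path and keys are placeholders of my own, and the exact overloads may shift between beta builds):

// Emit a <script src="..."> include tag for an external JavaScript file.
ClientScript.RegisterClientScriptInclude("myLibrary", "scripts/library.js");

// Run a statement when the form submits; returning false cancels the submit.
ClientScript.RegisterOnSubmitStatement(this.GetType(), "confirmSubmit",
    "return confirm('Submit the form?');");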

A number of Web controls have also received some script-related upgrades. The Button/LinkButton/ImageButton controls now have an OnClientClick property, which is basically a shortcut for Attributes["onclick"], allowing you to do:

myButton.OnClientClick = "alert('foo');";

All controls also have a Focus() method that injects a bit of script to give focus to the control on document load.
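
For example (with a hypothetical TextBox named txtUserName):

// ASP.NET emits the client-side script needed to set the focus on load.
txtUserName.Focus();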

The coolest script-related feature in ASP.NET 2.0, though, is script callbacks. With script callbacks you can have a user action trigger script that makes an HTTP request back to the Web server, invoking a particular method. The callback method on the server runs and returns a string, which gets passed to a specified JavaScript function. The end result is that you can update the page in some manner with data/logic coming from the Web server in a very non-invasive way, since it doesn't involve a full postback. Sites like GMail and Google Suggest use these techniques to provide enhanced usability. Implementing script callbacks is amazingly easy with ASP.NET 2.0 - all the plumbing is taken care of for you; you just need to write the server-side callback method, inject the appropriate JavaScript to initiate the callback, and write the JavaScript function to handle the returned data. More info is available in Dino Esposito's Custom Script Callbacks in ASP.NET article.
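
The server-side half boils down to implementing ICallbackEventHandler. A bare-bones sketch, using the single-method shape of the interface from the beta builds (the details may change before the final release):

using System;
using System.Web.UI;

public class CallbackDemo : Page, ICallbackEventHandler
{
    // Runs on the server when the client-side script initiates a callback;
    // the returned string is handed to the JavaScript function named when
    // the callback reference was generated (via the page's
    // GetCallbackEventReference() method).
    public string RaiseCallbackEvent(string eventArgument)
    {
        // Compute something server-side without a full postback.
        return "Server time: " + DateTime.Now.ToLongTimeString();
    }
}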

In fact, I whipped up a little demo for a talk I'll be doing later on in the year. You can download the code sample here.

Tracking Book Sales
11 January 05 11:46 AM | Scott Mitchell

If you ever write a book (or more) you'll find yourself very interested in how it's selling. In part because good sales indicate that you did a good job writing the book, but in perhaps a larger part because your royalties are directly correlated to the number of units sold. Sure, every month or three (depending on the publisher) you'll get a royalty statement that has the precise sales figures for your title(s), but who wants to wait a month or more for such information?

I recently read a blog post by Steve Anglin on the APress Blog where Steve talked about the use of the Amazon.com sales rank as a useful metric for measuring a book's sales, likening Amazon.com's sales rank to the big board on a stock exchange:

... the Amazon.com rankings are sales rankings that can be utilized as the "share price" of a book where 1 is the best on up to the worst. The 5 star reviewer ratings are done by reviewers who could be likened to equity analysts who grade the book like a stock.

Unfortunately, it's not all inclusive. Most books that are bought and sold on AMZ are also bought and sold in other markets or on other sites, etc. whereas stocks on the NYSE are not traded on the NASDAQ. Likewise, stocks on the NASDAQ are not traded on the NYSE here in the US.

There was even once a site, AmazonScan.com, that allowed you to enter a book's Amazon.com ID (ASIN) and it would track the sales rank over time. (The site appears to no longer be up and running.)

The Amazon.com sales rank provides one metric of a book's success, but I have found it not to be a very accurate one. While Amazon.com has a much broader audience than, say, Nerdbooks.com, that audience, I believe, still isn't as wide as the brick and mortar stores'. For example, imagine that you are not the most computer savvy user. Sure, you may have used Amazon.com to buy the latest potboiler, or to buy a book from an author you're familiar with, but if you needed a book on a topic that was new to you and you wanted to learn, would you head to Amazon.com, or the Borders down the street? I'd wager the latter: you'd want to flip through the pages and browse the rows of books in a more intuitive and easier way than Amazon.com can offer.

Furthermore, how many people buy the latest potboilers or “best-sellers” from Amazon.com vs. buying those more obscure, audience-specific books (like books on ASP.NET)? In a recent interview, Amazon.com founder Jeff Bezos admits that “relative to the industry as a whole, we're disproportionately weighted toward harder-to-find titles.” I think this further skews the accuracy of the Amazon.com sales rank.

Finally, some empirical evidence. While I've found a strong, positive correlation between actual sales and Amazon.com sales rank among my ASP/ASP.NET books, that correlation doesn't hold for my latest book, Creating Your Own Website (Using What You Already Know), a book geared toward computer newbies who want to build their own website. Creating Your Own Website... has a horrid Amazon.com sales rank - 213,754 as of the time of this writing - while ASP.NET Data Web Controls Kick Start, ASP.NET Tips, Tutorials, and Code, and Teach Yourself ASP.NET in 24 Hours all have better sales ranks, yet each has either sold fewer copies or is on pace to sell fewer.

I imagine the Amazon.com sales rank is a decent metric when comparing books that are very tightly aligned in audience demographic, but it quickly falls apart when comparing books aimed at different audiences or trying to gauge the success of a book whose intended audience is less likely to use Amazon.com over a brick and mortar outlet.

Form with Validators Not Submitting on a Rebuilt ASP.NET 1.1 Box
08 January 05 10:36 AM | Scott Mitchell

I got a call from a colleague I had done some work for who was having some problems with some pages I had whipped up for him a few months back. In between then and now, the Web server had died and was rebuilt. After the rebuild, though, some pages' submit buttons stopped working altogether. One would go to a page, fill in the form values, and click the Submit button and... nothing would happen. No error. No message on the screen. No sending to a blank page. No postback. Just nothing.

You can imagine how much fun our conversation became when I visited the URL my colleague was having a problem with and... everything worked perfectly for me. I could not repro any of the behavior he was experiencing. After a few confused and frustrating minutes I realized that he was probably using IE and here I was using FireFox, so I switched to IE and was able to repro the problem. This clued me into the source of the problem being the client-side validation used by the page, since this client-side validation was not emitted to “downlevel” browsers like FireFox.

I visited the page and attached the Visual Studio .NET debugger to do script debugging on IE. I found that the problem was in the form tag, which contained:

<form ... onsubmit="if (!ValidatorOnSubmit()) return false;" ...>

Now, if a form's onsubmit event handler returns false it short-circuits the form submission process, and stepping through with the debugger showed that the return statement was executing, so that popped out as the problem. But why was it returning false when it shouldn't have been? To better understand what was going on, I poked through the ValidatorOnSubmit() function, whose code is shown below:

function ValidatorOnSubmit() {
    if (Page_ValidationActive) {
        return ValidatorCommonOnSubmit();
    }
    return true;
}

The ValidatorCommonOnSubmit() method, I found, was located in WebUIValidation.js. Looking at that function I found the following code:

function ValidatorCommonOnSubmit() {
    event.returnValue = !Page_BlockSubmit;
    Page_BlockSubmit = false;
}

I am not a JavaScript expert in the least, but this function looked a bit odd. Shouldn't it be returning a value? Perhaps, I reasoned, the event.returnValue was setting the return value of the function. But then why was ValidatorOnSubmit() returning false? I decided to Google “validatoronsubmit returning false”, which turned up a single result: a blog entry by Thomas Freudenberg that described the exact same problem and a workaround.

In a nutshell, a hotfix for the .NET Framework 1.1 causes this problem by putting the WebUIValidation.js file and the onsubmit event handler in the form tag out of sync. If you have the ValidatorCommonOnSubmit() function like the one shown above, the onsubmit event handler should just be onsubmit="ValidatorOnSubmit();". If, however, you have the onsubmit event handler as shown above (onsubmit="if (!ValidatorOnSubmit()) return false;"), then the ValidatorCommonOnSubmit() function should return the value of !Page_BlockSubmit (the value of Page_BlockSubmit before it's set to false).

Basically the hotfix puts the rendered onsubmit out of sync with the WebUIValidation.js file. So if you have installed the hotfix - as my colleague had when rebuilding the Web server - then you will need to update the ValidatorCommonOnSubmit() function in WebUIValidation.js so that it looks something like:

function ValidatorCommonOnSubmit()
{
    event.returnValue = !Page_BlockSubmit;
    // Capture the return value before Page_BlockSubmit is reset below.
    var ret_Val = !Page_BlockSubmit;
    Page_BlockSubmit = false;
    return ret_Val;
}

Starting to Really Dig Into ASP.NET 2.0
05 January 05 09:24 AM | Scott Mitchell

Over the past couple of weeks I've really started to dig into ASP.NET 2.0 (finally). Prior to this, I had skimmed through articles and books on ASP.NET 2.0, but hadn't spent more than an hour actually playing with Visual Studio 2005 or picking through the ASP.NET 2.0 classes with Reflector. In any event, the more I use 2.0 the more I appreciate its improvements over 1.x. The main improvements, IMO, are not the new classes/Web controls/ASP.NET 2.0 features themselves - virtually all of these could be done in ASP.NET 1.x with sufficient elbow grease. What's really got me excited is Visual Studio 2005. VS 2005 is what I've always wanted VS.NET to be:

  • I have IntelliSense in Web.config, in the <script> blocks in an ASP.NET Web page, and even in the page-level directives (e.g., <%@ Page ... %>)
  • I don't need to have IIS on my machine to create and debug an ASP.NET Web application. No virtual directories are created when I create a new ASP.NET Web site through VS 2005, and there's no plethora of files I didn't ask for.
  • Switching between HTML view and Design view doesn't rearrange my markup. Also, through the HTML view I can click on an HTML element and have its properties loaded in the Properties pane.
  • MasterPages/User Controls have rich design-time support, something that was sorely lacking in VS.NET. (Although one hiccup I found in Beta 1 is that MasterPages with CSS positioning don't necessarily provide a good WYSIWYG experience in the Designer. For example, a two-column CSS layout using the technique discussed here renders as expected in the browser, but in the Designer the left column is laid out above the right column, not side-by-side.)

Of course my experience with VS 2005 thus far has been with just small pages. I've yet to really put it to the test, so there may be some glaring issues with large pages. I should eventually find out, though: I have a couple of clients who (for some reason) are insistent on having large data entry pages with, at times, literally dozens of form fields on a single page... perhaps once they move to ASP.NET 2.0, though, they'll be excited about using the new wizard functionality.

Speaking of ASP.NET 2.0, you could probably guess that's what I've been working on as of late by a quick examination of the latest 4Guys articles. Last week's article was A Sneak Peek at Working with Data in ASP.NET 2.0 and today's article is A Sneak Peek at MasterPages in ASP.NET 2.0. (These are another two exciting features in 2.0, albeit both are possible in 1.x. The DataSource controls discussed in the first article really speed up creating data-driven Web pages, while MasterPages (and their rich support in VS 2005) allow for easily defining a site-wide design template that can be applied to all pages and easily altered on a whim.)
