September 2005 - Posts

RssFeed Version 1.9 Available
27 September 05 02:58 PM | Scott Mitchell | with no comments

A new version of RssFeed - everybody's favorite ASP.NET server control for displaying RSS feeds - has been released. The new version 1.9 adds two notable improvements:

  1. The RSS feed "slurping" code/logic has been decoupled from the server control-related code, and
  2. Support for the <enclosure> element has been added. In RSS version 2.0 this element is used to associate some external document with the item, and is commonly used in podcast syndication.

Item #1 is the one I'm most excited about. Prior to this version, the code for sucking down the RSS content was embedded deep within the server control. But why? This prevented developers from using my control if they wanted complete control over how the data was displayed... or if they didn't want the data displayed at all. For example, imagine that you wanted to periodically grab a feed, see if any of its items' descriptions contain a particular substring, and then save those results to a database (or email someone, or whatever). Previously this was not possible with RssFeed - now it's a snap.

Want to grab an RSS feed from a remote server? Just do:

RssEngine engine = new RssEngine();
RssDocument doc = engine.GetDataSource(url);

Want to enumerate the items in the returned RSS feed looking for descriptions that contain the term "ASP.NET"? Simple:

foreach (RssItem item in doc.Items)
    if (item.Description.IndexOf("ASP.NET") >= 0) ...

In fact, you could use this logic to strip out those entries that don't have the term ASP.NET in them and then bind that modified RssDocument instance to the RssFeed control by assigning it to the control's DataSource property and calling the DataBind() method. Check out a demo of the RssEngine class.
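To illustrate, here's a rough sketch of that filter-then-bind approach. It assumes the RssDocument.Items collection supports Count and RemoveAt(), and that an RssFeed control named myRssFeed is declared on the page - those names and details are illustrative, not necessarily the exact API:

```csharp
// Sketch: prune an RssDocument down to items mentioning "ASP.NET" and bind
// the result to an RssFeed control. Assumes Items supports Count/RemoveAt
// and that a control named myRssFeed exists on the page (hypothetical names).
RssEngine engine = new RssEngine();
RssDocument doc = engine.GetDataSource("http://example.com/rss.xml");

// Walk backwards so removals don't shift the indices still to be visited.
for (int i = doc.Items.Count - 1; i >= 0; i--)
{
    if (doc.Items[i].Description.IndexOf("ASP.NET") < 0)
        doc.Items.RemoveAt(i);
}

myRssFeed.DataSource = doc;
myRssFeed.DataBind();
```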

The enclosure support now makes it easy to display a podcast using RssFeed, including a "Download" column that links to the item's enclosure. You can see a listing of the KPBS News podcast for San Diego at this live demo. Note how those items that have an associated media file specified via an <enclosure> tag have a Download link in the right column. Click it and start listening to the podcast!

To top it all off, there's even a 4Guys article detailing this new version: Displaying RSS Feeds - A Look at Version 1.9. You can download RssFeed and check out the online (or offline) documentation at the official RssFeed homepage; there are also a slew of live demos available.

Google Video, Personalized Search, and MSAdSense - Oh My!
26 September 05 09:32 PM | Scott Mitchell | with no comments

One of my favorite attributes of the web is the abbreviated length of the software cycle. With desktop software there's this whole mess of burning bits to a CD and shipping them to some physical store, which requires that users actually take the time to go to said store. Even for desktop software distributed via the Internet, there are still platform limitations and bulky downloads. All of these factors that impede deployment and adoption of desktop applications are removed with web-based apps. It seems like almost once a week there's some news coming out of Yahoo!, MSN, or (more often than not) Google.

Case in point: a slew of "search site" news came rolling out over the past few days. One of the coolest items is Google Video's commercial-free airing of Everybody Hates Chris, a new comedy series on UPN about Chris Rock's childhood. Furthermore, Google Video, which used to require a downloadable program (and was therefore limited to playing only on Windows boxes), now operates as a Flash movie in the browser. That means cross-browser and cross-platform support, as well as the removed nuisance of having to download a program. Also, the Flash movie automatically resizes to your browser's resolution, automatically buffers the video, and (on my DSL connection) starts playing nearly instantly. You can also seamlessly jump around the stream with little to no delay. Nice!

If that's not enough, Google also recently improved their Personalized Search feature. When logged in and searching, each search result has a "Remove this Result" link that you can click to filter out that result for your particular search phrase, for all search phrases, or, heck, you can remove the entire domain. A useful tool for removing splogs and other spammy sites that clutter up the results page with useless information. This is not only helpful in cleaning up your own results; the information will likely be aggregated and used by Google to help improve their search engine's heuristics.

Finally, word is out that Microsoft will be releasing their own AdSense-like program. This program, called adCenter, has been in beta since March of this year, although only available in France and Singapore. However, the beta will, I expect, be opening up in the US sooner rather than later, seeing as there are a slew of adCenter-related articles on all the tech news sites. It'll be interesting to see how Microsoft fares in this market, which is already cornered by the likes of Google and Yahoo. More competition is always better, of course, and hopefully Microsoft can introduce some innovative tools and metrics that force Google and Yahoo to improve their functionality and feature set. Viva la capitalism! :-)

For a computer nerd who likes trying out new programs, it really is an exciting time we live in. As one who doesn't have cable and will, hopefully, never need/get it, I'm looking forward to a day when the computer can more effectively replace the television. Seeing Google Video's advances and capabilities in airing a sitcom makes me think we're that much closer.

Bound for the MVP Global Summit on Wednesday
26 September 05 05:36 PM | Scott Mitchell | with no comments

On Wednesday of this week I'll be heading up to the Microsoft MVP Global Summit in Seattle/Redmond and am staying at The W. If you're going to be at the conference and would like to say hello or meet and shoot the breeze, drop me a line and perhaps we can schedule a time.

I went to this conference last year, had a great time, and was able to put a lot of faces to names that I otherwise only see on messageboards/emails/blogs. Here's hoping it will be as fun and engaging an experience as it was last year. Hope to see you there!

Latest MSDN Online Article - Building a ContentRotator ASP.NET Server Control
21 September 05 10:07 AM | Scott Mitchell | 1 comment(s)

My latest MSDN Online article is now available from the ASP.NET DevCenter: Building a ContentRotator ASP.NET Server Control. This article is on ASP.NET 1.x (gasp!) and looks at building a custom, compiled server control that can be used to randomly display various content. The content displayed can be vanilla HTML markup or dynamic content through the means of User Controls.

The impetus for this article was, in part, to resurrect the memory of classic ASP's AdRotator control. (Actually the AdRotator control is included in ASP.NET but gets no press.... curse the bursting of the dot com bubble!)

Enjoy!

Fixing "The Following Add-Ins Failed to Load" Error in Visual Studio .NET
19 September 05 09:35 PM | Scott Mitchell | with no comments

I am giving a talk on Web Services this week and one technology I discuss in virtually every Web Services talk is the Web Services Enhancement (WSE) Toolkit. One of the nice things about the WSE Toolkit is that it integrates with Visual Studio .NET - right-click on the Web Service or client project and there's a WSE 2.0 Settings option in the context menu that brings up a GUI to edit the WSE-related settings. However, in running through the talk/demo code today, I noticed that my copy of VS.NET on my laptop wasn't displaying the option in the GUI.

My first attempt at fixing this was to simply uninstall and reinstall the WSE Toolkit. Upon restarting VS.NET after this process, I got a dialog box with the warning, "The Following Add-Ins Failed to Load - WSE 2.0." Urg. I did a bit of searching and discovered that one way to fix this was to "repair" the VS.NET installation. So, off I went to the Add/Remove Programs screen, selected the VS.NET installation, and clicked the Repair/Reinstall option. This prompted me for the VS.NET CDs and, about 20 minutes later, wrapped up doing whatever in the world it was doing.

Fortunately, this fixed the problem for me. I have no idea what the problem was, exactly, but opting to repair VS.NET is what ended up working for me. Hope this helps someone else in a similar situation. For those more familiar with VS.NET Add-In specifics, any guesses/ideas as to what the problem was and, perhaps, a more direct, quicker workaround?

Google Blog Search
16 September 05 09:48 AM | Scott Mitchell | with no comments

Google's added yet another beta product to their lineup, this time it's the Google Blog Search. This new search service competes directly with other blog search engines, such as IceRocket and Technorati. I've been using IceRocket to search the “blogosphere” as of late, and they seem to have very few splog sites in their results. Furthermore, IceRocket sorts the results chronologically (whereas Google sorts by relevancy by default) and has neat little tools like the Blog Trends Tool. However, I do find that IceRocket's response time can be a bit slow at times; that is, doing a search or going to the next page of results might take a couple seconds, whereas with Google it's instantaneous. Both IceRocket and Google Blog Search provide an RSS (or Atom) feed of the search results.

What's most disappointing with Google Blog Search (and IceRocket, to a lesser degree) is the predominance of splog entries. If you do a search on anything remotely spammy - lasik, cialis, texas holdem, etc. - the majority of the results are going to be splog sites. Mark Cuban points the finger at Blogger.com in his post A splog here, a splog there, pretty soon it ads up... and we all lose:

What makes the problem particularly frustrating is that it doesn’t cost anything to setup a blog on what is probably the most common blog host, blogger.com from Google. It’s fast, its easy, it’s free and it can be automated. [Note from Scott: you can make a new blog entry in Blogger.com by simply sending an email message to a specified address...] So blogs are coming at us left and right. We are killing off thousands a day, but they keep on coming. Like Zombies. It’s straight from Night of the Living Dead. Brain dead splogs. Coming at us by the thousands.

Blogger is by far the worst offender. Google seems to be working hard to adjust their relevancy indexes to exclude splog from having influence on search rankings, but they don’t seem to be doing anything more than removing reported splogs. Kind of like going after the zombies one at a time with a shovel. Can we get some help on this Google?

Keep in mind that Mark is one of the owners/investors in IceRocket... He also has a great entry on Google's Blog Search, comparing it to IceRocket and listing the major concerns he has with Google's latest offering: Welcome to the show Google BlogSearch.

Hopefully Google will figure out a good compromise, one that eliminates the vast, vast majority of splog sites but that doesn't nullify any (or many) legit sites.

There are also a number of features currently missing from Google's Blog Search that will, I'm certain, be added eventually. Some of these include:

  • No integration with Google Search History
  • No “Blogs” tab atop the Google search results (akin to the Images, News, Groups headings)
  • No way to submit my blog's RSS/Atom feed. According to Google Blog Search Help, “If your blog publishes a site feed in any format and automatically pings an updating service (such as Weblogs.com), we should be able to find and list it. Also, we will soon be providing a form that you can use to manually add your blog to our index, in case we haven't picked it up automatically. Stay tuned for more information on this.”
  • No means of categorization. It would be nice to be able to drill into blogs by topic rather than just having to do a keyword search.
  • Lack of meta-statistics on the blogosphere. A buzz index like IceRocket provides, or other metadata that can be gleaned from Google's massive index, would be most appreciated.
FeedBurner Summary
15 September 05 12:33 PM | Scott Mitchell | with no comments

In August I moved over my RSS feed from the default .Text RSS feed source (Rss.aspx) to using FeedBurner's free service. Part of the challenge in this process was having existing subscribers automatically switch from using Rss.aspx to using FeedBurner's generated feed (http://feeds.feedburner.com/ScottOnWriting). I ended up getting everything to work by retooling Rss.aspx to send an HTTP 301 status code to aggregators, which instructed them to update their information using the new feed URL. For more on the reasons why I switched to FeedBurner along with how I made the needed changes in .Text to send an HTTP 301, refer to FeedBurner and Changing a Blog's Feed URL.
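The actual change lives inside .Text's Rss.aspx handling, but the core of the redirect can be sketched in a few lines of ASP.NET 1.x code. The FeedBurner URL is the real one; the surrounding page code is illustrative:

```csharp
// Rss.aspx.cs (illustrative sketch): permanently redirect aggregators
// to the FeedBurner feed via an HTTP 301.
private void Page_Load(object sender, System.EventArgs e)
{
    Response.StatusCode = 301;
    Response.Status = "301 Moved Permanently";
    Response.AddHeader("Location", "http://feeds.feedburner.com/ScottOnWriting");
    Response.End();
}
```

Well-behaved aggregators see the 301, update their stored feed URL, and never ask the old Rss.aspx endpoint again.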

I've now been using the FeedBurner service for coming on three weeks, and wanted to share a quick review of the service. In my previous blog entry I mentioned the three motivating factors that prompted me to switch to FeedBurner were:

  1. Subscription statistics - FeedBurner provides a number of free statistics, including number of subscribers, number of requests, and aggregator breakdown.
  2. Someone else handles the bandwidth - currently requests to the RSS feed on ScottOnWriting.NET consume roughly 1.5 GB of traffic per week, or 6 GB of traffic per month (in total, ScottOnWriting does about 11 GB of traffic per month). That's a lot of 1s and 0s that would be nice to offload to another party. I don't believe the pre-0.94 version of .Text I was using supported conditional HTTP GETs (although, if I'm not mistaken, the "official" 0.94 release does); had I been using a version that supported conditional GETs, this bandwidth requirement would be an order of magnitude lower, I'd wager - perhaps just a GB for the month. (To clarify, while FeedBurner does make requests to the blog's RSS URL, it caches the results for a period of time, thereby reducing the bandwidth demands on my server.)
  3. FeedBurner has a couple of neat "publicizing" tools - FeedBurner includes a number of tools to easily make links to add your blog to My Yahoo!, My MSN, NewsGator Online, and so on. Additionally, there are nifty little tools you can use to "show off" how many folks subscribe to your blog, a la:

The free FeedBurner service provides three traffic metrics:

  • Feed Circulation, which shows how many folks subscribe to your blog, how many requests there were to your RSS feed, and how many click throughs there were. The circulation data can be broken down by day or by hour, and shown in terms of the current week, the current date, the current month, and so on. Here's a bar graph that shows the circulation for ScottOnWriting.NET since I started using FeedBurner in late August:


  • Readership - the readership stats allow me to see what aggregators are being used to subscribe to my site's content, along with a breakdown of bots and browser-based aggregators. As the following graphic shows, the most popular aggregator that's reporting itself (or that FeedBurner knows of) is Bloglines, followed by RssBandit (my aggregator of choice).


  • Item Stats - this shows the click throughs on an item-by-item basis over a specified date range. For example, the most popular blog entry of mine since late August (at least in terms of subscribers "clicking through") is How Big is Too Big a ViewState? with 171 click throughs.

The second motive of mine for using FeedBurner was to have someone else (namely FeedBurner) bear the bandwidth costs. (Thanks, guys!) Anywho, in the three weeks prior to using FeedBurner the daily bandwidth for ScottOnWriting.NET averaged 388 MB. Since moving to FeedBurner my daily bandwidth average has dropped to 211 MB. FeedBurner alone is saving me 177 MB per day, which is more than 5 GB per month. Sweet!
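For the curious, a conditional HTTP GET (the feature the pre-0.94 .Text lacked) works roughly like the following sketch. GetFeedLastModified() is a hypothetical helper, and real implementations typically handle ETag/If-None-Match as well:

```csharp
// Illustrative conditional GET for a feed page (ASP.NET 1.x era): if the
// aggregator's If-Modified-Since date is current, answer 304 Not Modified
// and skip sending the feed body entirely.
private void Page_Load(object sender, System.EventArgs e)
{
    DateTime lastModified = GetFeedLastModified(); // hypothetical helper

    string since = Request.Headers["If-Modified-Since"];
    if (since != null)
    {
        try
        {
            if (lastModified <= DateTime.Parse(since))
            {
                Response.StatusCode = 304;
                Response.End();
                return;
            }
        }
        catch (FormatException) { /* ignore an unparsable date */ }
    }

    // Stamp the response so the aggregator can send the date back next time.
    Response.AddHeader("Last-Modified", lastModified.ToString("r"));
    // ... render the feed as usual ...
}
```

Since most requests from a polling aggregator find nothing new, answering 304 for those requests is what cuts the feed's bandwidth by an order of magnitude.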

One of my initial concerns with FeedBurner was that once you were using FeedBurner you were "locked in." That is, I worried that if, down the road, I wanted to switch back to hosting the RSS feed on my site (or use some FeedBurner competitor, or if FeedBurner went out of business), I'd be SOL - how would I get the subscribers to my FeedBurner feed to switch to a different feed? I'd need to HTTP 301 my FeedBurner feed, and since that's hosted with FeedBurner, they have the ultimate say as to whether or not that would be possible.

This fear was assuaged by a blog post by FeedBurner cofounder and CTO Eric Lunt. In the post Eric mentions that it is possible to have your FeedBurner feed use an HTTP 301 and spells out their business rules for implementing this (namely, different actions are taken by the feed as time progresses... the HTTP 301 isn't used indefinitely; it eventually unwinds to, basically, a 404). From Eric's entry:

... when you start directing subscribers to FeedBurner, you may, in the future (way way way in the future) change your mind and want those subscribers pointing back to your original feed. You would probably also like this to happen automatically, and you would probably like some fallbacks for subscribers who don't get redirected for some reason. To date, there has been no simple way to do this. Steve Gillmor first raised this point with us during an interview late last year, and it has also been discussed more recently. We think we have the best feed management service, we think that providing publishers with the ability to do whatever they want is always the right answer, and most importantly, we think your subscribers are your subscribers, not ours or anybody else's.

So, beginning today [June 10, 2005], we're providing a detailed service for publishers who choose to leave FeedBurner. When you delete your FeedBurner feed, we have added an option to redirect your feed. If you select this, we begin a one month process of transitioning your subscribers back to your source feed.

That's reassuring to know. If you couldn't guess, I highly recommend FeedBurner. The stats and bandwidth savings make this free service an invaluable one, and the ability to leave FeedBurner cleanly and crisply takes away any potential downside in switching over your feed.

For more entries on customizing .Text be sure to check out the Blog Enhancements category.

How Big is Too Big a ViewState?
07 September 05 09:14 AM | Scott Mitchell | with no comments

When creating ASP.NET pages one thing that usually doesn't get looked at too intensely by developers is the page's ViewState weight (I've been guilty of this myself). While there are various mechanisms to reduce the ViewState bloat in a page, the ultimate (uneloquently worded) question is, “How big is too big a ViewState?” Dino Esposito chimes in with some metrics in his blog entry ViewState Numbers [emphasis Dino's]:

You should endeavour to keep a page size around 30 KB, to the extent that is possible of course. For example, the Google’s home page is less than 4 KB. The home page of ASP.NET counts about 50 KB. The Google’s site is not written with ASP.NET so nothing can be said about the viewstate; but what about the viewstate size of the home of the ASP.NET site? Interestingly enough, that page has only 1 KB of viewstate. On the other hand, this page on the same site (ASP.NET) is longer than 500 KB of which 120 KB is viewstate.

The ideal size for a viewstate is around 7 KB; it is optimal if you can keep it down to 3 KB or so. In any case, the viewstate, no matter its absolute size, should never exceed 30% of the page size.

I think these are good metrics to live by when building apps targeted for the Internet. ViewState enacts a "double hit" regarding page load time. First, the user must download the ViewState bytes. Then, when posting back, that same ViewState must be uploaded back to the web server (sent in the POST body). And then, when receiving the resulting markup from the postback, the ViewState (possibly modified) is sent back down again. So during a postback there's a double hit: if it takes x seconds to download the ViewState of the page, then when the user posts back it will take roughly x to upload the ViewState and another x to download it again - 2x in all. Cripes! (This is part of the reason AJAX is so appealing in Internet situations, although AJAX carries with it its own slew of issues.)

When you're building intranet apps, where you know your users will be connecting over a LAN, the page and ViewState sizes are not as important, as they impact the user experience much less. Most of my "real-world" projects have been created for the intranet setting, so I've not had to fret over ViewState size as much as others may have.

For those projects where a trim ViewState size is paramount, one common question is how to quickly determine the ViewState size (and, perhaps, what junk is actually being stored in there). For the ViewState size, I usually just do a View/Source and then highlight the ViewState content. (In UltraEdit - my text editor of choice - the number of bytes selected is shown in the toolbar.) To determine the contents of ViewState there are tools like Fritz Onion's ViewState decoder (for ASP.NET 1.x and 2.0) and Nikhil Kothari's Web Development Helper (for 2.0). I also provide code for a web-based ViewState decoder (for ASP.NET 1.x) in my article, Understanding ASP.NET ViewState.
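If you'd rather not count bytes by hand, here's one rough way to automate the measurement: capture the page's rendered output and measure the __VIEWSTATE hidden field. This is a sketch for an ASP.NET 1.x page (it needs using System.IO and System.Text.RegularExpressions at the top of the code-behind):

```csharp
// Sketch: override Render to log the ViewState size to the trace output.
protected override void Render(HtmlTextWriter writer)
{
    // Render the page into a buffer instead of straight to the response.
    StringWriter buffer = new StringWriter();
    base.Render(new HtmlTextWriter(buffer));
    string html = buffer.ToString();

    // The ViewState travels in the __VIEWSTATE hidden form field.
    Match m = Regex.Match(html, "name=\"__VIEWSTATE\" value=\"([^\"]*)\"");
    if (m.Success)
        Trace.Write("ViewState size: " + m.Groups[1].Value.Length + " chars");

    // Send the buffered markup on to the client unchanged.
    writer.Write(html);
}
```

With page tracing enabled, the figure shows up in the trace output on every request, which makes it easy to watch the ViewState grow as you add controls.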

And the Point of this Comment Spam Would Be?
03 September 05 04:00 PM | Scott Mitchell | with no comments

As I've blogged about before, ScottOnWriting.NET receives its fair share of comment spam, 99% of which is stopped through pattern matching and URL counts. The vast, vast majority of comment spam I receive has some purpose: it advertises a website. Such comments exist to bolster the site's search engine placement and/or to attract visitors from my blog to their site. This makes sense and makes fighting these types of comment spam relatively easy - just create a database of 'bad' URLs and filter comments accordingly.

Over the past week, however, my blog has been receiving, on average, about a dozen comment spams a day that don't fit the profile of typical comment spam. Rather than being some advertisement, the comment is merely a female name. That's it. No email address, no URL, no phone number, no instructions on how to earn that degree or overcome sexual inadequacy... nope, just a single word for the subject and body.

Clearly this is the work of a bot of some kind (or a very meticulous, bored person), as the names are progressing in alphabetical order (I'm now receiving names starting with 'E'). I've been very proactive about deleting these through the blog admin interface upon receiving them, but this is, clearly, an annoying, repetitive task. I am going to attempt to stop this flavor of comment spam by adding a new filter that will remove any comment that is just a single word, but that's not the $64,000 question. What I'm more interested in is why someone would do this. Is it something personal? Do they dislike me or this blog? Do I know them? Is it just a test script that they forgot to turn off? Seriously, what does this accomplish for anyone? It's only a slight inconvenience for me (soon to not be one once I add the filter), which makes me think it's somewhat targeted... but who knows.
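The single-word filter itself is trivial; something along these lines would do (hypothetical methods, not actual .Text code):

```csharp
// Hypothetical check: reject a comment whose subject and body are each a
// single word, matching the pattern of the name-only comment spam.
private bool IsSingleWordSpam(string subject, string body)
{
    return IsSingleWord(subject) && IsSingleWord(body);
}

private bool IsSingleWord(string text)
{
    if (text == null)
        return false;
    text = text.Trim();
    // "One word" here means non-empty text with no internal whitespace.
    return text.Length > 0 && text.IndexOf(' ') < 0;
}
```

Plug a check like this into the same pipeline that already does the pattern matching and URL counts, and these comments never make it into the database.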

I Googled for "blog comment spam female names" to see if anyone else has experienced such an automated spew of comment spam, but was unable to find any matches. So maybe this is an isolated, annoying instance. If the guy or gal who's comment-spamming me with these female names is reading this, please stop. Pretty please.
