October 2005 - Posts

Come Learn About ASP.NET 2.0!
30 October 05 11:54 AM | Scott Mitchell

If you live in the San Diego area and are interested in learning about ASP.NET 2.0, I am teaching my quarterly ASP.NET Programming II class at UCSD Extension starting this Wednesday, November 2nd. The class runs for six weeks, with each session meeting from 5:30-9:30 PM.

If you are interested, you can sign up online at http://extension.ucsd.edu/studyarea/index.cfm?vAction=singleCourse&vCourse=CSE-40739. (One thing to note - that page says that there are six meetings from Nov 2nd through Dec 7th, but the class slated for the Wednesday immediately before Thanksgiving is cancelled and will likely be made up on Dec. 14th.)

Hope to see you in class! :-)

Google Base - Search Crap Easier!
26 October 05 02:41 PM | Scott Mitchell

The 'blogosphere' is abuzz with Google's latest pre-pre-Alpha release, Google Base (the first pre- is in there because the product isn't even released yet, save for http://base.google.com, which is up sometimes, down other times, and doesn't really do anything other than generate a fever pitch among bloggers). From a good ArsTechnica article on this pre-pre-Alpha service: “Google Base is Google's database into which you can add all types of content. We'll host your content and make it searchable online for free.” In a nutshell, this service will purportedly allow Sally Housecoat and Joe Meatball to upload their own content to Google Base, which is sort of going to be this online database.

To me, it appears as if the aim of Google Base is to be that big ol' database in the sky. “Have some data? Go ahead and stick it up in that cloud up there. Need to search that data? Here are the web service APIs, knock yourself out.” Users will be able to tag their uploaded information with labels, geographic information, and other categories, and the results, one would imagine, would be integrated into other Google services, such as Google Maps, Froogle, Google Local, and so on. (For example, one could ask, “Show me all garage sales going on in my zip code in the next two weeks.”)

Some are calling it the craigslist and eBay killer; some see this as the potential end of classified ads in newspapers.

I am not so excited about this product, for a couple of reasons. First, it's pre-pre-Alpha... If this is at all like the Google Reader released a month or so ago, then even when Google Base is officially released, it'll still need a lot of work. Second, and more importantly, they're allowing users to add to this database. If they just let any ol' person add any old thing, the quality of Google Base will quickly approach zero. Look at USENET - except in moderated or very specific forums visited only by a small set of people, a large percentage of the stuff posted there is crap. Google Base will need some sort of moderation or community involvement to keep this data pure. And how many people are going to keep using Google Base when they do a search for garage sales in their area and show up only to find that the sellers moved the sale to next weekend but forgot to update Google Base?

Let's just say I'm pretty pessimistic when it comes to any service that basically trusts the general public to add to its catalog, and I'd hope Google would know this better than anyone else... a-hem. Let's call this the Scott Mitchell theorem: “The quality of any piece of information is inversely proportional to how many people contribute to it.”

RssFeed - Now With Authentication Capabilities
25 October 05 12:29 PM | Scott Mitchell

Made a minor upgrade today to RssFeed, my open-source, custom, compiled ASP.NET server control for displaying RSS content in an ASP.NET page. Specifically, version 1.9.3 includes a new property, Credentials, that can be assigned an object that implements ICredentials. Using this new property, RssFeed can now display feeds that are protected using authentication schemes, such as Basic or Digest.

To slurp down an RSS feed that requires authentication, you'd use code like:

CredentialCache myCache = new CredentialCache();

// Pass "Basic" or "Digest" as the authentication type, matching the
// scheme that protects the feed.
myCache.Add(new Uri(URL_to_RSS_feed), "Basic",
            new NetworkCredential(username, password));

RssFeedID.Credentials = myCache;
RssFeedID.DataSource = URL_to_RSS_feed;
RssFeedID.DataBind();

That's all there is to it! The RssEngine class's GetDataSource() method includes an override that takes in an ICredentials object, which is what the RssFeed class uses under the covers when the RSS feed source is specified as a URL.

Also added to the code base was a new exception type, FeedException, which is thrown when a web-related error occurs while accessing an RSS feed from a URL. The existing FeedTimeoutException exception, which is thrown when the timeout for accessing the remote feed expires, was refactored to derive from this new base class.
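Since FeedTimeoutException derives from FeedException, a page developer can handle the timeout case specifically and any other feed-retrieval failure generally. A quick sketch (the exception names come straight from the code base; the handling shown is just one sensible approach):

try
{
    RssFeedID.DataSource = URL_to_RSS_feed;
    RssFeedID.DataBind();
}
catch (FeedTimeoutException)
{
    // The timeout for accessing the remote feed expired.
}
catch (FeedException)
{
    // Some other web-related error occurred while retrieving the feed.
}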

Enjoy!

A Neat Idea and the RIGHT Implementation
23 October 05 01:15 AM | Scott Mitchell

Earlier I blogged about using GMail as a backup email store, by using an Outlook rule to automatically forward all incoming messages to a 'backup' GMail account. This particular means of implementation had many disadvantages, as I blogged about here. There were some great suggestions/solutions provided in the replies, particularly by Ivan and hurcane.

The short of it is, I went with the recommended implementation - configuring all email accounts to auto-forward all messages to a single GMail account, and then using GMail's free POP3 access to suck down the messages from GMail to Outlook. Here's how I set things up:

  • Configured each account to auto-forward my email to GMail. I have four such accounts - one with my ISP, one for DataWebControls.com, one for 4Guys, and one for my business email. Fortunately, I was able to configure all of these accounts to forward to GMail. (In the comments of a past blog entry I had mentioned that I couldn't set up my WebHost4Life.com account to auto-forward emails... Ivan pointed out that WebHost4Life.com DID support this feature - he was right, I was wrong. I was looking in the web-based email, but I had to go into the control panel to set up the auto-forwarding...)
  • Added filters in GMail for each of the four incoming accounts. Essentially, the filter was like, “for all emails coming in from mitchell@datawebcontrols.com, apply the DataWebControls label.” I did not archive the incoming messages, because...
  • Configured GMail's POP3 access to auto-archive messages once they've been retrieved via POP. This allows me to log into GMail and quickly see which messages have been downloaded to Outlook and which have not. Very useful if the computer is shut down and I am on the road, as I can see both past received mail and current mail received since hitting the road... Thanks, Ivan, for this tip!

One of my concerns with going to a GMail-centered “InBox” was that GMail's spam filters wouldn't be as good as my personalized SpamBayes Add-In at flagging spam. I'm not as concerned about GMail missing spam, since when I POP the content down, SpamBayes catches 99% of the spam... my main concern was that GMail would falsely flag good mail as spam (since spam mail isn't POPed down). So far I've found two false positives in the Spam folder (out of ~250 correctly identified spam messages).

There are a number of things I really like about using GMail as a centralized mail store, the main ones being the ability to read and search through messages while on the road, as well as having a centralized, online-accessible backup of my Inbox.

I have the following concerns:

  • I used to have four incoming email accounts in Outlook, each with a different email address associated with it. When replying, the appropriate email address would be used. With my centralized GMail account, all email comes down through one account, so all replies are sent from a single email address - mitchell@4guysfromrolla.com - regardless of which account the message came from. This is an annoyance, but not a deal breaker. Hopefully there's some way to finagle Outlook into using the proper From email address...
  • I'm also unsure whether GMail accepts the same set of email attachments that my other email addresses accepted. If not, how does GMail handle this? For example, say that Bob used to be able to send me emails with attachments of type .foo, but now GMail prohibits those. If Bob sends mitchell@4guysfromrolla.com a .foo attachment, will GMail send back a “We don't accept messages of this type” notice to the original sender (Bob), or to the account that auto-forwarded the email? I imagine to Bob, but this may still confuse poor Bob because just last week he could send me .foo attachments.
  • GMail's quota - I know 2.6 GB, or whatever it is now, is pretty high, but I get a lot of email. How long before this fills up? This may be moot because it appears that GMail doesn't count archived emails against your quota. If this is the case, then the quota is a non-issue since POPed messages are archived.

I hope this works out well. I'm always a bit reluctant to tinker with my email setup because I use email so heavily and depend on it for communicating with clients, my editor, friends, and colleagues. I hope this move is a step in the right direction, since I think it will make email easier to access when on the road as well as provide an offsite backup. We shall see.

What I hope to do eventually is make a little utility program that will, each night, zip up the files changed since the previous day and email them to a backup GMail account. This would provide an offsite backup of important files, with a history, accessible from anywhere with an Internet connection. Ah, so many projects, so little time! :-)
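A rough sketch of what the core of such a utility might look like - all paths, addresses, and credentials below are placeholders, and it leans on .NET's System.IO.Compression and System.Net.Mail APIs:

using System;
using System.IO;
using System.IO.Compression;
using System.Net;
using System.Net.Mail;

class NightlyBackup
{
    static void Main()
    {
        string sourceDir = @"C:\ImportantFiles";
        string staging = Path.Combine(Path.GetTempPath(), "backup-staging");
        string zipPath = Path.Combine(Path.GetTempPath(),
            "backup-" + DateTime.Now.ToString("yyyy-MM-dd") + ".zip");

        // Copy files changed in the last 24 hours to a staging folder.
        // (This sketch flattens directories and ignores name collisions.)
        DateTime cutoff = DateTime.Now.AddDays(-1);
        Directory.CreateDirectory(staging);
        foreach (string file in Directory.GetFiles(sourceDir, "*", SearchOption.AllDirectories))
        {
            if (File.GetLastWriteTime(file) > cutoff)
                File.Copy(file, Path.Combine(staging, Path.GetFileName(file)), true);
        }

        // Zip the staged files and email the archive to the backup account.
        if (File.Exists(zipPath)) File.Delete(zipPath);
        ZipFile.CreateFromDirectory(staging, zipPath);

        MailMessage message = new MailMessage("me@example.com", "my.backup@gmail.com",
            "Nightly backup " + DateTime.Now.ToShortDateString(), "Changed files attached.");
        message.Attachments.Add(new Attachment(zipPath));

        SmtpClient client = new SmtpClient("smtp.gmail.com", 587);
        client.EnableSsl = true;
        client.Credentials = new NetworkCredential("my.backup@gmail.com", "password");
        client.Send(message);
    }
}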

A Neat Idea, but a Poor Implementation
20 October 05 08:57 AM | Scott Mitchell

Last week I blogged about using GMail as an online email backup service. Essentially, I added an Outlook rule that auto-forwarded all incoming messages to my GMail account. The idea was that I could then check this account when away from my desktop computer to see my latest messages. I think the idea is grand and useful, but using Outlook rules to accomplish this is far less than ideal.

This morning I turned off the Outlook rules due to the following three annoyances:

  1. Outlook forwarded every incoming message to GMail. Problem is, 90+% of the email I get is spam. SpamBayes shuttles about 99% of the spam I get to a Junk Email folder, so I don't see it in my Inbox. However, my GMail account is flooded with spam, and the GMail filters only catch a small percentage of the deluge of crapola.
  2. Since every message is forwarded by Outlook, every message has the little 'forwarded' icon next to it (versus the unread message icon). Just an aesthetics thing, but annoying, since I use that as a visual cue to quickly determine how the message has been handled (unread, replied to, or forwarded). Furthermore, the forwarded messages sent to GMail have my signature, the subject line prepended with FW:, and so on. More aesthetic issues...
  3. My Sent Mail folder exploded in size. Since I get hundreds of emails a day and each and every one was forwarded to my GMail account, I ended up with a Sent Mail folder that has a lot of bloat, including forwards of all those spams mentioned in complaint #1. Furthermore, Google Desktop Search - which I use to quickly search my Inbox - returns a bunch of crap results from the Sent Mail folder.

So I've suspended forwarding my incoming mail to GMail. I think the idea is a good one, but it needs to be implemented at the POP server level, and only after aggressive spam filtering has taken place. Ideally, there would be some service that would frequently poll my computer for changes to 'important files' and, if I am online (which I always am, unless the DSL is down), back up those changes to some online repository (like GMail) from which I could easily restore later, or view/download the files from some remote location. (Say I'm on the road and forgot to move an important file over to the laptop. I could just hop onto the online service and pull down the file, or any past revision of the file that was backed up.)

Speaking of GMail uses, Arjan Zuidhof left the following comment in my blog entry about using GMail as an email backup service:

Scott, since you mentioned the Knowledge Base tip some time ago, I've used GMail for this. It took an hour to happily subscribe to all the newsletters and mailing lists I had stopped subscribing to once blogs and RSS arrived. Since then, more than 10,000 items have piled up there, taking about 5-6% of my space. However, I find that in practice I'm rarely using GMail to find stuff. Google itself has indexed the net in such a superb way that this extra knowledge base is actually a bit superfluous. In the end, everything in my inbox is also indexed by Google, because it's available at some URL.

Arjan, I've found the same thing - why bother searching the GMail KB when Google search can do such a good job on its own? I still think GMail is a great resource for effectively managing listservs, and if you are on private, non-indexed listservs, then the archiving/searching functionality is quite useful. However, for general, public listservs, you're right - using GMail as a knowledge base, while cool in theory, is really not too useful in execution.

Provider Model Information Out the Wazoo
18 October 05 03:13 PM | Scott Mitchell

Earlier I blogged about my 122-page long GridView tutorial on MSDN, titling that entry GridView Examples Out the Wazoo. Today, while writing my latest 4Guys article - A Look at ASP.NET 2.0's Provider Model - I stumbled across Kent Sharkey's latest blog entry, The Provider Manifesto, which links to over 120 pages of provider information available online at MSDN.

The Introduction to the Provider Model serves as a good starting place and is followed by more focused articles on working with the various 2.0 providers.

There's also the official Provider Toolkit homepage on MSDN where, I imagine, links to these articles will appear soon. And of course there's my most recent offering (the 4Guys article), but, sadly, it is about 118 pages shorter than Microsoft's output. (I also have a PowerPoint presentation on some of the new ASP.NET 2.0 features, which includes a look at the provider model; this talk was presented at the day-long .NET 2.0 University lecture hosted back in September.)

Enjoy!

Another Great Use for GMail
14 October 05 05:13 PM | Scott Mitchell

I've blogged about potential uses for GMail before - Using Email as a Knowledge Base, How to Manage ListServs Using GMail, Keeping Track of TODO Items, and so on - and I offer yet another one. This idea came by way of Boing Boing: Archiving Email on GMail. The gist of the idea is to use GMail's abundant disk quota and Internet accessibility as a means to back up email.

I've just implemented a couple of Rules in Outlook to auto-forward all incoming emails to a separate GMail account set up exclusively to serve as a backup of my email accounts. Since I leave my home computer running 24x7, Outlook continuously downloads email from my various POP3 accounts and, now, will continuously forward those emails to my GMail archive. The benefit, as I see it, is that I can be away from home and easily check my email messages - both ones I've received since leaving and old ones.

I accomplished this by creating a Rule in Outlook for each of my POP3 accounts, auto-forwarding the email to the GMail account with a suffix appended to the To address via the + addressing scheme (e.g., account+4guys@gmail.com, as discussed in this blog entry) to help with labeling/filtering on GMail. Now if I'm on a laptop-free vacation, or just out and need to read an old email or check for new messages, all I need is an Internet connection!

My only concern is that the Outlook Rules engine doesn't integrate with SpamBayes, so it forwards every message I get, rather than just those that pass the SpamBayes check. That means I'll be getting gobs of spam and other assorted crapola in my GMail account. I suspect GMail's spam filters will pick up a good chunk of these, but early results suggest a LOT still get through. I'm sure I could write some sort of VBA macro or something that would help reduce this, but the Rules approach was quick and easy, so that'll do for now.

Creating Random Passwords in ASP.NET 2.0
12 October 05 09:10 AM | Scott Mitchell

One of the major design goals of ASP.NET 2.0 was to identify common page developer scenarios and to provide customizable, extensible platform support. One such area in ASP.NET 2.0 is the Membership API, which makes it a breeze to work with user accounts. In the System.Web.Security namespace you'll find the Membership class, which is designed to “validate user credentials and manage user settings.”

One method in the Membership class is GeneratePassword(length, numberOfNonAlphanumeric), which, as its name implies, generates a random password of length length with at least numberOfNonAlphanumeric characters that are... non-alphanumeric. This method uses a cryptographically strong random number generator to grab a random byte array of the desired length. It then maps these bytes to appropriate alphanumeric and non-alphanumeric characters. Finally, it ensures that at least numberOfNonAlphanumeric non-alphanumeric characters have been injected into the random password; if not, it hunts for alphanumeric characters and replaces them with randomly selected non-alphanumeric characters until the threshold is met.
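In ASP.NET 2.0, then, generating a strong random password is a one-liner. For example, to create a ten-character password with at least three non-alphanumeric characters (assuming the System.Web.Security namespace is imported):

// Returns a random 10-character password containing at least
// three non-alphanumeric characters.
string newPassword = Membership.GeneratePassword(10, 3);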

But what if you're still using ASP.NET 1.x and you need to generate a random password? What do you do? Well, why not use Reflector to view the GeneratePassword() method's source code and simply port that back to ASP.NET 1.x code? That's precisely what I did in my most recent 4Guys article, Generating Random Passwords with ASP.NET. (In addition to looking at using GeneratePassword() the article also looks at a “quick and dirty” random password generating technique using GUIDs.)
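As a teaser, the “quick and dirty” GUID-based approach amounts to something like the following sketch (the article's actual code may differ):

// Quick and dirty: take the first few characters of a new GUID.
// Note that the "N" format yields only hex digits (0-9, a-f), so each
// character carries far less randomness than GeneratePassword()'s output.
string quickPassword = Guid.NewGuid().ToString("N").Substring(0, 8);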

The Future of Third-Party Offline Aggregators? Are RssBandit and its Kin Dead Weight?
05 October 05 06:58 PM | Scott Mitchell

There are a number of popular offline aggregators available. By 'offline' I mean that these aggregators can be used while not connected to the Internet. For example, my aggregator of choice is RssBandit, which is an offline aggregator. There's also the likes of FeedDemon, SharpReader, and a whole slew of other choices. But what does the future hold for such offline aggregators?

The future of aggregators, in my opinion, belongs to those that are either online - My Yahoo!, BlogLines, Start.com, Google's personalized homepage, Findory, Rojo, and so on - or are part of the experience of existing 'everyman' applications (i.e., email or web browsing) and, preferably, come preinstalled with the software. The online aggregators seem to make a lot more sense, having a number of advantages over their offline kin:

  • Not bound to a particular computer - I can be at home, at the office, or on vacation - my subscriptions travel with me.
  • Can utilize the 'social network' - services like Findory make it easy for me to get recommended news and blog items based on my clickthroughs. Services like del.icio.us allow me to share my online habits/sites/subscriptions with others with like interests. I can see what the most popular feeds are, or explore the subscriptions of those whose interests match mine.
  • Easier to 'install' and 'uninstall' - want to install My Yahoo! on your computer? Fire up the ol' browser and enter http://my.yahoo.com - couldn't be easier. And uninstalling's as easy as not visiting the site again.
  • No resource consumption - doesn't matter if I subscribe to one feed or a hundred - the disk space and bandwidth consumed on my computer stay constant when using an online service.

Of course, the major disadvantage of online aggregators is that they require the user to be online. While broadband is becoming more ubiquitous, it's not universal, so those who can only get online in bursts will, obviously, prefer offline aggregators, as they can download the content while online and then peruse it offline. (Similar to the benefits of USENET over online forums.) But these third-party aggregators are going to be crowded out of the marketplace once this feature becomes standard in email/news clients. When Outlook Express makes it a cinch to subscribe to RSS feeds and view them offline, what point is there to SharpReader or any other offline reader?

Granted, these third-party apps can provide new features with a much quicker release schedule than Microsoft or any other large software company, but who's going to use them other than just a fringe population of super-geeks? I like RssBandit. I still use RssBandit. But I have a hard time seeing RssBandit (or any other offline aggregators) having much relevance in the aggregator space in the near future. I'm honestly close to just switching over permanently to online aggregators.

Am I mistaken here? There are some applications that are better suited for the web, some that are better suited for the desktop, and some that have their place both on the web and on the desktop. I think the only place aggregators have on the desktop is for offline access, and I don't see space for offline players outside of, perhaps, an offering from Microsoft and an offering from one other competitor. I mean, how many people do you know that don't use a Microsoft product for offline email access? I wish the best for today's third-party aggregators, but can't see many (if any) of them having any sort of non-trivial install base a few years out.

Create Your Own Website, 2nd Edition
04 October 05 12:23 PM | Scott Mitchell

About a year ago I blogged about my most recent book, Create Your Own Website (Using What You Already Know). It was my first stab at a book aimed at computer novices, and it examined how to build your own website using Mozilla's Composer, a free WYSIWYG HTML editor. The book has sold well, and even though I promised myself I'd stick to books geared toward developers, I let my editor talk me into writing a 2nd edition.

Whereas the first edition focuses solely on building your own website from the ground up, the 2nd edition turns its attention more to using free online services to help. The 2nd edition still has two chapters on using Mozilla Composer to create a website from the ground up, but also examines:

  • Selling items online with eBay Stores
  • Publishing content online with the help of Blogger, and
  • Sharing and ordering photos with SnapFish

These three additional chapters took about a month and change to write, edit, and author-review.

As I mentioned in my blog post about the 1st edition, you are strongly encouraged to purchase multiple copies for your non-computer savvy friends and family members. :-)
