ASP.NET Talk

How Big is Too Big a ViewState?
07 September 05 09:14 AM | Scott Mitchell | with no comments

When creating ASP.NET pages one thing that usually doesn't get looked at too intensely by developers is the page's ViewState weight (I've been guilty of this myself). While there are various mechanisms to reduce the ViewState bloat in a page, the ultimate (uneloquently worded) question is, “How big is too big a ViewState?” Dino Esposito chimes in with some metrics in his blog entry ViewState Numbers [emphasis Dino's]:

You should endeavour to keep a page size around 30 KB, to the extent that is possible of course. For example, the Google’s home page is less than 4 KB. The home page of ASP.NET counts about 50 KB. The Google’s site is not written with ASP.NET so nothing can be said about the viewstate; but what about the viewstate size of the home of the ASP.NET site? Interestingly enough, that page has only 1 KB of viewstate. On the other hand, this page on the same site (ASP.NET) is longer than 500 KB of which 120 KB is viewstate.

The ideal size for a viewstate is around 7 KB; it is optimal if you can keep it down to 3 KB or so. In any case, the viewstate, no matter its absolute size, should never exceed 30% of the page size.

I think these are good metrics to live by when building apps targeted for the Internet. ViewState enacts a “double hit” regarding page load time. First, the user must download the ViewState bytes. Then, when posting back, that same ViewState must be uploaded back to the web server (sent in the body of the POST request). And then, when receiving the resulting markup from the postback, the ViewState (possibly modified) is sent back down again. So a postback incurs a double hit: if it takes x seconds to download the ViewState of the page, a postback will take at least 2x - x to upload the ViewState and another x to download it back again. Cripes! (This is part of the reason AJAX is so appealing in Internet situations, although AJAX carries with it its own slew of issues.)
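
One mechanism for trimming that cost is to not send the ViewState over the wire at all, keeping it on the web server instead. Here's a minimal sketch of the idea using a custom base page - my illustration, not production code; it assumes a single web server (or sticky sessions), and a real implementation would key the Session entry per page:

using System;
using System.Web.UI;

// Sketch: store ViewState in Session rather than in the page's hidden form field,
// so only an empty stub travels between browser and server.
public class ServerSideViewStatePage : Page
{
    private const string ViewStateKey = "__PageViewState";

    protected override void SavePageStateToPersistenceMedium(object state)
    {
        Session[ViewStateKey] = state;
        // Emit an (empty) __VIEWSTATE field so the page framework still finds
        // the field it expects on postback.
        RegisterHiddenField("__VIEWSTATE", String.Empty);
    }

    protected override object LoadPageStateFromPersistenceMedium()
    {
        return Session[ViewStateKey];
    }
}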

When you're building intranet apps, where you know your users will be connecting over a LAN, page size and ViewState size matter less since they have far less impact on the user experience. Most of my “real-world” projects have been created for the intranet setting, so I've not had to fret over ViewState size as much as others may have.

For those projects where a trim ViewState size is paramount, one common question is how to quickly determine the ViewState size (and, perhaps, what junk is actually being stored in there). For the ViewState size, I usually just do a View/Source and then highlight the ViewState content. (In UltraEdit - my text editor of choice - the number of bytes selected is shown in the toolbar.) To determine the contents of ViewState there are tools like Fritz Onion's ViewState decoder (for ASP.NET 1.x and 2.0) and Nikhil Kothari's Web Development Helper (for 2.0). I also provide code for a web-based ViewState decoder (for ASP.NET 1.x) in my article, Understanding ASP.NET ViewState.
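
If you'd rather have the page report its own ViewState weight, here's a rough sketch (again, mine) of a base page that serializes the state itself just to measure it, writing the figure to the trace output before letting the normal persistence mechanism run:

using System.IO;
using System.Web.UI;

// Sketch: log the (approximate) serialized ViewState size to the page's trace output.
public class ViewStateSizePage : Page
{
    protected override void SavePageStateToPersistenceMedium(object state)
    {
        // Serialize the state ourselves just to measure it...
        LosFormatter formatter = new LosFormatter();
        StringWriter writer = new StringWriter();
        formatter.Serialize(writer, state);
        Trace.Write("ViewState", "Approximate size (characters): " + writer.ToString().Length);

        // ...then let the standard mechanism persist it as usual.
        base.SavePageStateToPersistenceMedium(state);
    }
}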

Review of QuickWebSoft.com's ColorPicker Web Control
31 August 05 03:00 PM | Scott Mitchell | with no comments

At the beginning of this month I started work on a project that allowed users to customize the appearance of the site to a great degree. One such customization was in the colors the site used: background colors, menu colors, text colors, popup box colors, and so on. Rather than limiting the user to picking from a drop-down list of color names, or forcing them to know the hex color combination (which, honestly, only a Web developer would even know about), my client wanted users to be able to select a color via a color picker control.

The Windows world abounds with color picker controls, but there are fewer ASP.NET-based color pickers, as my research found. I started by asking a question on the ASP.NET Forums: Recommend a Color Picker ASP.NET Web Control? I got a couple of suggestions, but wasn't too thrilled with either of them. Eventually I found QuickWebSoft.com's ASP.NET ColorPicker control. My client liked the functionality, cross-browser support (tested on IE, Opera, and FF for Windows and Safari for Apple), and price ($49 for a single server license), so I ended up using that particular color picker control. (There are some live demos you can try out at http://www.quickwebsoft.com/ColorPicker/ColorPicker.aspx#Demo.)

Overall, the ColorPicker control was easy to set up and start using. It took just a minute or two to get the requisite resource files copied to the aspnet_client folder, move the assembly to the /bin directory, and add the control to the VS.NET Toolbox. Once that was accomplished, I could simply drag and drop the control onto the page and determine the value entered by the user through the ColorName, ColorValue, and/or Color properties of the control. Couldn't be easier.

There were a couple of downsides to the control, however, that were quickly apparent. One issue was that the Enabled property didn't work; that is, even when the control's Enabled property was set to False, the user could still pick a color. Another issue was that each ColorPicker instance emitted the client-side JavaScript it needed, even though this script was identical for all ColorPickers on the page that used the same palette data. This script was around 10 KB. On one page I had 12 color pickers, which led to over 120 KB of client-side script, of which only about 10 KB was unique!
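
For what it's worth, the usual way an ASP.NET control avoids this sort of duplication is the page's script-registration API, which emits a given script block once per page no matter how many control instances ask for it. Here's a quick sketch of the pattern (a made-up control of my own, not QuickWebSoft's code):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical control demonstrating the once-per-page script registration
// pattern (ASP.NET 1.x API shown).
public class SharedScriptControl : WebControl
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);

        const string key = "SharedScriptControl_CommonScript";
        if (!Page.IsClientScriptBlockRegistered(key))
        {
            // The shared ~10 KB of palette/helper script would be emitted here, once.
            Page.RegisterClientScriptBlock(key,
                "<script type=\"text/javascript\">/* shared helper script */</script>");
        }
    }
}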

I emailed QuickWebSoft.com's support with these two issues and they acknowledged the problems and informed me that these issues would be fixed in a future version. What was highly unexpected, however, was that this next version was sitting in my Inbox the next morning. I must have sparked an all night coding session with my requests! :-) (One thing to be a tad wary of: you can tell that this is likely a one-man shop. Support emails usually take about 12 hours to get a reply, and there are no online support forums where you can ask questions or where answers to your questions may already reside...)

The ColorPicker works well for simple use cases, but in more involved cases I had to exert quite a bit of effort to get it to 'play nice.' For example, the ColorPicker has a server-side event, ColorPicked, that you can have fired when a color is selected. However, if you want some client-side action to take place when a color is picked you're pretty much on your own. Sure, there's a property you can set that will run any JavaScript you assign to it, but you still have to write the client-side script that reads or sets the color value, which, while not terribly complex, still took a good half hour to hammer out correctly. Another gripe is that the palette files ColorPicker ships with (or that can be downloaded from the website) contain only the 216 web-safe colors (or fewer!). For this application I wanted to allow users to pick from a wider array of colors, much like the standard MS Office color picker, where there's a fine gradient. Sure, you can create your own palette files, but this project's time and budget did not account for that. One last gripe - there isn't an option to include a textbox into which a user can simply type a color. For example, if a user knows that their logo's background color is, say, #AA7601, they may want to just type that in rather than having to peck through the color picker (or, worse yet, find that the color isn't available in the ColorPicker's palette). The main challenge in adding a textbox was wiring up all the client-side events to ensure that after entering a value in the textbox, the color would be displayed properly, and when picking a color from the color picker, the color value would be inserted automatically into the textbox.

Overall, I do recommend QuickWebSoft.com's ColorPicker if you need a simple color picker control for a very affordable price. If you need to do any great deal of customization of your color picker, or need to be able to tinker around with it in a client-side setting, you might want to take sufficient time trying out this and other options before settling on one.

Upcoming One-Day Lecture on .NET 2.0 (San Diego, CA)
30 August 05 10:12 AM | Scott Mitchell | with no comments

On Saturday, September 10th, University of California - San Diego Extension will be putting on a one-day lecture serving as an introduction to .NET version 2.0! We'll be discussing what's new in version 2.0 along with tips for migrating your existing applications. The event will be held in the Salk building on UC-San Diego's beautiful campus.

The day of lectures includes four talks and a speaker's panel / wrap-up discussion. The speakers at this event include:

These talks cover smart clients, ASP.NET 2.0, the .NET Framework version 2.0, and language migration. The day-long lecture runs from 9:00 AM to 3:00 PM with lunch included. All this for a scant $95!!

If you're interested in learning more about this event or are interested in enrolling, visit http://extension.ucsd.edu/studyarea/index.cfm?vCourse=CSE-40940. Hope to see you there!

FeedBurner and Changing a Blog's Feed URL
28 August 05 05:23 PM | Scott Mitchell | with no comments

This weekend I moved over my RSS feed - previously http://ScottOnWriting.NET/sowBlog/Rss.aspx - to a feed managed by FeedBurner (http://feeds.feedburner.com/ScottOnWriting). FeedBurner serves as a sort of feed URL proxy. Basically you give FeedBurner a link to your RSS feed and it creates a feed based on that feed. You then point your subscribers to the FeedBurner feed and FeedBurner serves up your site's content, maintains statistics on who's subscribing to your blog, and so on.

I decided to move to FeedBurner to realize three benefits (keep in mind that ScottOnWriting.NET (still) runs off of an old version of Scott Watermasysk's .Text blogging engine, as I've yet to upgrade to Community Server; previous to today, I was actually using a pre-0.94 version, but today "upgraded" to the official 0.94 release downloadable from the .Text GotDotNet Workspace):

  1. Subscription statistics - FeedBurner provides a number of free statistics, including number of subscribers, number of requests, and aggregator breakdown.
  2. Someone else handles the bandwidth - currently requests to the RSS feed on ScottOnWriting.NET consume roughly 1.5 GB of traffic per week, or 6 GB of traffic per month (in total, ScottOnWriting does about 11 GB of traffic per month). That's a lot of 1s and 0s that would be nice to offload to another party. (I don't believe the pre-0.94 version of .Text I was using supported conditional HTTP GETs, although if I'm not mistaken the "official" 0.94 release does; had I been using a version that supported conditional GETs this bandwidth requirement would be an order of magnitude lower, I'd wager, perhaps just a GB for the month. See the sketch after this list for what conditional GET support looks like.) (To clarify, while FeedBurner does make requests to the blog's RSS URL, it caches the results for a period of time, thereby reducing the bandwidth demands for my server.)
  3. FeedBurner has a couple of neat “publicizing” tools - FeedBurner includes a number of tools to easily make links to add your blog to My Yahoo!, MyMSN, NewsGator Online, and so on. Additionally, there are nifty little tools you can use to “show off” how many folks subscribe to your blog.
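
Since I mentioned conditional GETs above, here's a rough sketch - mine, not .Text's actual code - of what supporting them looks like in an ASP.NET feed handler: honor the aggregator's If-Modified-Since header and answer 304 Not Modified when nothing new has been published since its last visit. GetLastPostDate() and WriteFeed() are hypothetical stand-ins for the real blog engine calls.

using System;
using System.Web;

// Sketch of conditional HTTP GET support for a feed endpoint.
public class FeedHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        DateTime lastModified = GetLastPostDate();

        string ifModifiedSince = context.Request.Headers["If-Modified-Since"];
        if (ifModifiedSince != null)
        {
            try
            {
                DateTime since = DateTime.Parse(ifModifiedSince);
                if (lastModified <= since)
                {
                    // Nothing new since the aggregator's last visit: send no body.
                    context.Response.StatusCode = 304;
                    return;
                }
            }
            catch (FormatException)
            {
                // Malformed header - fall through and send the full feed.
            }
        }

        context.Response.ContentType = "text/xml";
        context.Response.AppendHeader("Last-Modified", lastModified.ToString("r"));
        WriteFeed(context.Response.Output);
    }

    private DateTime GetLastPostDate()
    {
        // Hypothetical: the real handler would pull this from the blog database.
        return new DateTime(2005, 8, 28);
    }

    private void WriteFeed(System.IO.TextWriter output)
    {
        // Hypothetical: the real handler would render the RSS XML here.
        output.Write("<rss version=\"2.0\"></rss>");
    }
}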

When changing over your RSS feed URL the main challenge is making sure that your existing subscriber base starts to use the new feed URL. There are, to my knowledge, two ways this can be done, with the first of the two being the ideal way:

  1. Have your old feed URL emit an HTTP 301 status code - The HTTP 301 status code is a message from the server to the client saying, “Hey, this resource has been permanently moved to URL xyz.” The client, then, can make a new request to the specified URL; moreover, if the client keeps a database of feed URLs, this message tells it that it's time to update that database and use the new location. If I'm not mistaken, virtually all modern aggregators support HTTP 301 status codes and will automatically update a site's feed URL to use the newly specified location.
  2. Tell people of your new feed URL - if you do not have control over your blog website you may not be able to take the steps needed to replace the current feed URL with an HTTP 301 status code. In this case, the only approach I know of to inform users of the new feed URL is simply through word of mouth. That is, you'll just have to post on your blog an entry telling users to update their aggregators. As Kent Sharkey has noted, though, the results may be somewhat disappointing.

Since I run ScottOnWriting.NET myself (well, through a web hosting company), I have control over these matters. The only challenge, then, was getting .Text to play nice. In .Text version 0.94 the site's RSS feed comes from a file named Rss.aspx. This file, though, does not actually exist; rather, in the Web.config file all requests are handed off to a .Text HTTP Handler. When a request comes in for Rss.aspx, .Text generates the appropriate output.

To get Rss.aspx replaced with an HTTP 301 status code, the first step is to create an Rss.aspx file in your blog's root directory. The code needed for this page is alarmingly simple - all you want to do is return an HTTP 301 specifying the new feed URL, like so:

<script runat="server" language="C#">
void Page_Load(object sender, EventArgs e)
{
    // Issue a permanent redirect pointing subscribers at the new feed URL.
    Response.Status = "301 Moved Permanently";
    Response.AddHeader("Location", "http://feeds.feedburner.com/ScottOnWriting");
}
</script>

(Of course replace the http://feeds.feedburner.com/ScottOnWriting Location header value with the URL of your new RSS feed...)

Creating this file is not enough. In fact, even after creating this file if you visit Rss.aspx through your browser you'll still see the complete RSS feed rather than being auto-redirected to the specified URL. This is because the ASP.NET engine is handing off the request to the .Text HTTP Handler rather than handling the request itself. If you look at the <httpHandlers> section in the Web.config file you'll find an entry like:

<add verb="*" path="*.aspx" type="Dottext.Framework.UrlManager.UrlReWriteHandlerFactory,Dottext.Framework" />

This entry says, “Any request for an ASP.NET page should be handled by the Dottext.Framework.UrlManager.UrlReWriteHandlerFactory class (an HTTP Handler factory in the Dottext.Framework assembly).” This includes requests for Rss.aspx. Hence we need to add the following line to the <httpHandlers> section:

<add verb="*" path="Rss.aspx" type="System.Web.UI.PageHandlerFactory" />

That tells the ASP.NET engine to take care of requests to Rss.aspx. At first I naively thought that I was done, but I had just unwittingly set up an infinite loop! When a request comes in to Rss.aspx, it sends back a 301 status code to the client, saying, “No, no, no, you want to go to this FeedBurner URL.” This is what we want to tell people coming through a browser or aggregator, but remember that FeedBurner also needs to know the URL of the site's feed, which, at this point, I had set simply as Rss.aspx! So when FeedBurner periodically checked to see if a new version of my feed was available, it requested Rss.aspx, which told it to check its own FeedBurner URL, which is built from Rss.aspx, which says to check the FeedBurner URL, which... you get the point.

Instead, what I needed to do was rename .Text's Rss.aspx to something else, like RssFromText.aspx and instruct FeedBurner to use this alternate, “secretive” feed URL. With this setup, a user who already subscribes to ScottOnWriting.NET through Rss.aspx will automatically be switched over to FeedBurner. FeedBurner's RSS content will be populated from RssFromText.aspx, which will be generated from .Text, reflecting the most recent blog entries. No more infinite loops!

To accomplish this I had to edit blog.config to tell .Text that it should use RssFromText.aspx as its RSS feed URL as opposed to Rss.aspx. This involved updating the appropriate <HttpHandler> line like so:

<HttpHandler Pattern = "(?:\/RssFromText.aspx)$" Type = "Dottext.Framework.Syndication.RssHandler, Dottext.Framework" HandlerType = "Direct" />

With this addition, requests to Rss.aspx now receive an HTTP 301 status code, but FeedBurner can still slurp down the site's content through RssFromText.aspx.

Next, you'll probably want to update the link to the site's feed in the My Links section (since this will point users to Rss.aspx, but you want them to go directly to the FeedBurner link). (In actuality, this step is probably optional since even if you leave it as Rss.aspx, when they attempt to view that page through a browser or slurp it through an aggregator, they'll get the 301 status code and auto-redirect to the FeedBurner URL... but still, for completeness let's change this link.) To accomplish this, simply edit the MyLinks.ascx file in the ~/Skins/skin_name/Controls/ directory. With version 0.94 you'll find two HyperLink controls that .Text automatically looks for and whose NavigateUrl properties it sets to Rss.aspx - these controls have the IDs Syndication and XMLLink. Even if you explicitly set the NavigateUrl properties to your FeedBurner URL, .Text will overwrite them and the links will be rendered as Rss.aspx. If you try to simply remove these HyperLink controls you'll find that (at least with 0.94) .Text will barf. What I did was simply set their Visible property to False. I then added two HyperLink Web controls of my own that referenced the new feed URL.

There's one more facet that should be changed, although I've not made the change since (to my understanding) you'd need to actually hack the .Text source code, recompile, and re-deploy. In the <head> portion of the web pages in your blog you'll find a tag that's used for RSS feed auto-discovery:

<link rel="alternate" href="http://scottonwriting.net/sowblog/rss.aspx" type="application/rss+xml" title="RSS" >

Ideally the href attribute would contain the URL of your new feed... but I didn't feel like going through the headache of pecking through the source, making a change, testing, and so forth. So I just left it as-is, figuring in the worst case someone will “discover” my feed to be Rss.aspx, which will automatically be updated to the FeedBurner syndication URL as soon as their aggregator makes its first request to Rss.aspx.

The FeedBurner service looks pretty cool upon first glance. Once this new feed URL gets some use and I get some metrics in FeedBurner's database, I plan on sharing some of the stats... it'll be interesting to see what the most popular aggregator out there is for those who are ASP.NET developers (the primary audience of this blog, I imagine), among other data points.

Enhancements to skmLinkButton
24 August 05 12:45 AM | Scott Mitchell | with no comments

In a previous blog entry I talked about skmLinkButton, a simple, open-source enhancement to the built-in ASP.NET LinkButton Web control. skmLinkButton includes properties to display text in the browser's status bar when mousing over the link, as well as properties to easily add a client-side confirm messagebox when clicking the link.

I recently received an email inquiry in my Inbox from loyal 4Guys reader David P., who asked:

I'm writing to you because I think you may be able to help me. I enjoyed your article about overriding Linkbutton very much, but I have a further need that you may have a solution to. A LinkButton is rendered like <a href="javascript:__doPostBack(.....)" onClick="alert('pressed')">Click me</a>.

My problem is that if I SHIFT+CLICK the LinkButton [or right-click on it and opt to open in a new window], a new window opens [with the JavaScript, javascript:__doPostBack(.....), in the Address bar, resulting in a client-script error and a confused user.] ... I want to avoid that. I've found a solution:

<a href='#' onClick="__doPostBack(....);">Click me</a>

My problem is that I don't know which attribute to modify so that the Href value is moved to the onClick event... It must be done in an inherited control or codebehind.

(To see an example of David's problem, click here. The hyperlink's markup is simply <a href="javascript:var x = 4;" target="_blank">click here</a>. Note that clicking the link opens a new window that displays the JavaScript in the address bar and returns an error. This mimics the behavior of an ASP.NET LinkButton when it is opened in a new window.)

I tinkered with skmLinkButton to do what David requested. The end result can be read about in this week's 4GuysFromRolla.com article, Stopping JavaScript Errors When Opening a LinkButton in a New Window. There are live demos to tinker around with over at http://scottonwriting.net/demos/skmLinkButtonDemo.aspx. You can download the complete source code, along with an example ASP.NET page and a pre-compiled assembly, at http://aspnet.4guysfromrolla.com/code/skmLinkButton.zip.
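
The gist of the change - and this is just a rough sketch of the technique, not the actual skmLinkButton source (download that from the link above) - is to emit the __doPostBack call in the anchor's onclick attribute and put a harmless "#" in its href:

using System.Web.UI;
using System.Web.UI.WebControls;

// Rough sketch only: render the __doPostBack call in onclick so that
// shift-clicking or "Open in New Window" doesn't put JavaScript in the
// address bar. (The real skmLinkButton handles more cases than this.)
public class SafeLinkButton : LinkButton
{
    protected override void Render(HtmlTextWriter writer)
    {
        writer.AddAttribute(HtmlTextWriterAttribute.Id, ClientID);
        writer.AddAttribute(HtmlTextWriterAttribute.Href, "#");
        // "return false" keeps the browser from following the "#" href.
        writer.AddAttribute("onclick",
            Page.GetPostBackEventReference(this) + "; return false;");
        if (CssClass.Length > 0)
            writer.AddAttribute(HtmlTextWriterAttribute.Class, CssClass);

        writer.RenderBeginTag(HtmlTextWriterTag.A);
        writer.Write(Text);
        writer.RenderEndTag();
    }
}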

Requiring HTTPS for Certain ASP.NET Pages
17 August 05 12:38 AM | Scott Mitchell | with no comments

I'm currently working on a project that has SSL capabilities. Users reach the site through a partner site, and a user's session uses SSL only if the partner site also used secure communications. There's also an administrative section that, regardless of whether the user entered using SSL, must be accessed via SSL. How, then, do you ensure that when a user visits a particular page or subset of pages they do so over a secure channel?

There are a couple of techniques that I'm aware of. Probably the most sound way is to make the setting through IIS. In the IIS administration console you can right-click on the folder or website you want to require be accessed only through SSL and go to Properties. Next, tab over to the Directory Security or File Security tab (depending on whether you're configuring a directory or file) and, in the Secure Communications section, click the Edit button. This will bring up the Secure Communications dialog box; from there you can check the "Require secure channel" checkbox and, voila, the directory or file can now only be visited through HTTPS.

Once you have made this setting, if a client attempts to visit such a configured resource through HTTP (rather than HTTPS) they'll receive an HTTP Error 403.4 - Forbidden: SSL is required to view this resource.

You can also require SSL programmatically through your ASP.NET pages' code-behind classes, which can be useful if you don't have direct access to the web server to make the settings described above. (Realize that setting the "Require secure channel" option through IIS has the advantage that it requires a secure channel for all types of resources - ASP.NET pages, HTML pages, images, and so on. Making this setting through ASP.NET - either in a code-behind class or in an HTTP Module - will only require SSL for resource requests that IIS hands off to the ASP.NET worker process.)

With the ASP.NET code-behind technique you basically add a bit of code to your code-behind class (or, better yet, a base class or HTTP Module) that checks to see if the request is through a secure channel; if not, it redirects the user to the same URL but through HTTPS. An example can be seen in this blog entry: 443 <--> 80 - Seamlessly moving requests in and out of SSL.

In my project I ended up using both techniques, actually. For the ASP.NET pages in the administrative interface I used a technique very similar to the one described in the blog entry. My main difference, though, was that I added a check to see if the incoming request was coming through localhost. If it was, then I didn't sweat the HTTP --> HTTPS translation; I just stayed with HTTP. (I did this because locally I do not have an SSL cert; sure, I could easily create and set one up, but why bother?) I went with the programmatic approach because the user might already 'legally' be on the site through HTTP and then click a link to go to the admin page. I guess I could have gone back and ensured that all admin links were fully qualified with URLs starting with https://, but instead I opted to let the person hit the admin page through HTTP only to be auto-redirected to the same page through HTTPS. I used the IIS approach in a couple of places where I had .htm files that needed to be protected. Also, I am using ELMAH on the site and wanted to ensure that its error log viewing page (elmah/default.aspx) could only be viewed through SSL, so potentially sensitive information couldn't 'accidentally' be sent over an insecure channel.
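
For reference, the check boils down to something like the following - a simplified sketch, not the exact code from the blog entry linked above:

using System;
using System.Web.UI;

// Sketch: base page that forces HTTPS, except for local (no SSL cert) testing.
public class RequireSslPage : Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);

        bool isLocalhost = Request.Url.Host.ToLower() == "localhost";
        if (!Request.IsSecureConnection && !isLocalhost)
        {
            // Re-request the same URL, but over a secure channel.
            string secureUrl = "https://" + Request.Url.Host + Request.RawUrl;
            Response.Redirect(secureUrl, true);
        }
    }
}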

In closing, here are some additional resources I found on this topic that are worth reading:

DVXP's TEdit.NET Control
10 August 05 12:31 AM | Scott Mitchell | with no comments

My last blog entry provided a list of commercial, third-party ASP.NET server controls I have used in past projects, along with a mini-review of each. Today I added another notch to the proverbial commercial ASP.NET server control headboard: DVXP's TEdit.NET.

TEdit, which stands for Table Edit, is a Web control designed to make viewing/editing/inserting/deleting data from a database table as easy as point-and-click. And it does a pretty good job at that. In past projects I've always extended the DataGrid in some manner to show, edit, delete, etc. data from a database table. TEdit.NET makes this job much easier, but does introduce a bit of a learning curve for those who have always used the DataGrid.

First, rather than specifying the “structure” of the grid in the ASP.NET page's declarative syntax, a separate XML configuration file is used. Furthermore, with TEdit.NET you need to provide either a table name, view, or stored procedure that has the data to bind to the grid. If, like me, you've spent the time to build up a rich middle tier with an assortment of business objects and classes that provide the data to the presentation tier, you're SOL, as far as I can tell - with TEdit.NET there's a tight coupling to the data model.

While this is an annoyance and a potential maintenance issue for large ASP.NET applications, I think TEdit.NET is a killer app for small applications. Once you get it set up you get, without writing a lick of code, the ability to:

  • Sort data
  • Search on data
  • Page through data
  • Edit data
  • Insert data
  • Delete data
  • Cache data

Granted, the GridView and DataSource controls in ASP.NET 2.0 provide a lot of this out of the box, but with TEdit.NET I've found it amazingly easy to make an impressively nice-looking and very intuitive table editor with minimal effort.

TEdit.NET's usage and syntax is a bit different from the DataGrid's, so, as I mentioned earlier, there is a bit of a learning curve for those of us who have always used DataGrids to provide access to backend data. For example, there are different events and different “patterns” used for formatting the data in the grid, iterating through the records of the grid, accessing a particular record's column values, and so on. If you are familiar with DataTables, though, you'll likely find this learning curve to be quite flat, as TEdit.NET basically provides programmatic access to the DataTable that serves as the foundation of TEdit.NET's data.

If you have a smallish ASP.NET application where you need to create professional-looking interfaces for editing backend data, I'd highly recommend TEdit.NET. At $249.00 for a single server license, I think you'll find this control pays for itself in time saved within the first week.

Third-Party, Commercial ASP.NET Components I've Used
04 August 05 11:49 AM | Scott Mitchell | with no comments

Over a number of ASP.NET projects I've done for companies I've used a variety of third-party, commercial ASP.NET components to accomplish a bevy of common tasks. Oftentimes it makes sense for a client to plunk down the money for a pre-built component that I can plug into his application rather than paying me to create the needed functionality from scratch. I thought I'd share the list of third-party components I've used in the past, along with a (very) short review of each. I'd also be interested to hear what third-party ASP.NET components you've used in the past, and what you thought of them. (Of course, for a more lengthy list of third-party ASP.NET components - some commercial, some free - check out the ASP.NET Control Gallery.)

  • r.a.d. menu - despite having created my own free, open-source ASP.NET menu component (skmMenu), I have used r.a.d. menu in a number of applications. (The reasons for using this over skmMenu have varied. In some cases the client had already purchased r.a.d. menu; r.a.d. menu also offers better cross-browser support than skmMenu and can be customized more easily to provide a more professional look. There have been projects, though, in which I have used skmMenu, so don't let my use of 'competitor' menu components deter you from checking out skmMenu!) I like r.a.d. menu, as it's always been easy to set up and get working with. I've used it in situations involving both rather static, boring menus and ones where the entire menu structure was dynamically generated based on various parameters (the logged in user, the current data being worked on, the state of the system, and so on).
  • r.a.d. spell - while I've really liked r.a.d. menu, unfortunately I've not been nearly as impressed with r.a.d. spell. r.a.d. spell provides a client-side spell checker, and while it's relatively easy to set up and looks slick, I've had many random problems reported from various users on sites where I've used r.a.d. spell. Complaints ranged from dictionary suggestions that seemed 'off' to cryptic, client-side error messages when attempting to spell check.
  • Peter's Date Package and Professional Validation and More (VAM) - I'm a big fan of both of these products from Peter Blum. In one project I use the DateTextBox, CurrencyTextBox, DecimalTextBox, and IntegerTextBox like nobody's business. The end users love them, as they use client-side JavaScript to restrict the data being entered and have various bells and whistles (such as the nice-looking drop-down calendar in the DateTextBox, little up and down arrows to increment/decrement the value in the IntegerTextBox, DecimalTextBox, and CurrencyTextBox, and so on). I've yet to explore the true depth of the extra validation controls and capabilities, but expect those features to become more useful as this project evolves. But, seriously, the 'masked' TextBoxes alone made the entire purchase totally worthwhile. (In fact, in an earlier blog entry I carried on about my affection for Peter Blum's controls.)
  • aspNet Email - I had a client who needed to blast out customized emails to thousands of registered users on his site, and he contacted me for advice. I recommended Dave Wanta's aspNet Email component, which is not only easy to setup and blindingly fast in shooting emails out the door, but has a MailMerge() method that made writing the entire application about a thirty minute endeavor.
  • Tall PDF - I've not worked with this component in great enough detail to give it much of a review. I was able to accomplish what I needed with it - building up a rather simple, two-page report in a PDF document - and didn't have too hard of a time doing it. The major disappointment was that the resolution of the PDF file seemed a bit low - that is, I could not import an image (specifically a graph that needed to appear in the report) unless it was less than roughly 460 pixels wide and 620 pixels high. Kind of a bummer, because I had to shrink down the dynamically generated graph in order to get it to fit, thereby losing some of the detail. But other than that annoyance, no complaints.
  • Dundas Chart for .NET - I haven't used Dundas in a client's application, but I was given a free copy from the kind folks at Dundas about half a year ago and have used the component in a couple of my own, non-public web applications. I was impressed by the look and feel and the ease with which I could chart data. One such 'private' application I've used this component in is one that I use to track my weight and caloric intake. Here's a graph showing my weight over the last six months or so. Creating this chart basically involved just a half dozen lines of code, and its appearance can be easily configured through the Visual Studio .NET Designer. Personally I think it's a pretty snazzy looking chart considering my artistic skills!

I'm sure there are a handful of controls I'm forgetting, but the above list gives a smattering of the commercial ASP.NET components I'm currently using and have used in the past.

Care to add a comment about one of the commercial controls listed above? Want to add a mini-review of a 3rd-party ASP.NET component you've used? If so, simply add a comment.

Watch Your Cookies!
18 July 05 05:14 PM | Scott Mitchell | with no comments

RFC 2109 - HTTP State Management Mechanism - provides the standard for how browsers should handle cookies. Near the end of the RFC, the following recommendation is made:

Practical user agent implementations have limits on the number and size of cookies that they can store. In general, user agents' cookie support should have no fixed limits. They should strive to store as many frequently-used cookies as possible. Furthermore, general-use user agents should provide each of the following minimum capabilities individually, although not necessarily simultaneously:

  • at least 300 cookies
  • at least 4096 bytes per cookie (as measured by the size of the characters that comprise the cookie non-terminal in the syntax description of the Set-Cookie header)
  • at least 20 cookies per unique host or domain name

In KB article 306070, Number and Size Limits of a Cookie in Internet Explorer, Microsoft states that they “comply” with the RFC by providing the bare minimum cookie requirements. Urm, hello, Microsoft... these are the minimum capabilities. Perhaps the goal here was not to have different cookie-handling code for IE based on the device (for example, a handheld device or cell phone might need to place such size/length restrictions on cookies), but it seems a bit annoying that such limitations should be placed on IE 6 when being used on my desktop, which has dozens of gigabytes just waiting to hold cookie data. (There is a workaround for the 20 cookie limit - namely, use a cookie dictionary. See the KB article for more info.)
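
The cookie dictionary workaround the KB article mentions is just a multi-valued cookie - several values packed into one cookie so that you burn a single one of IE's 20 per-domain slots instead of several. A quick sketch (the cookie and key names here are made up):

using System;
using System.Web;
using System.Web.UI;

// Sketch: write and read a multi-value ("dictionary") cookie from a page.
public class CookieDictionaryDemo : Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        // Write several values into one cookie rather than one cookie apiece.
        HttpCookie prefs = new HttpCookie("UserPrefs");
        prefs.Values["Theme"] = "Olive";
        prefs.Values["FontSize"] = "Large";
        prefs.Values["LastPage"] = "Default.aspx";
        prefs.Expires = DateTime.Now.AddDays(30);
        Response.Cookies.Add(prefs);

        // Reading a subkey back out on a later request:
        HttpCookie cookie = Request.Cookies["UserPrefs"];
        string theme = (cookie == null) ? null : cookie.Values["Theme"];
        Trace.Write("Prefs", "Theme: " + theme);
    }
}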

Anyway, this 20 cookie limit came back to bite me in the behind. The scenario: a website using forms-based authentication and a Web page using Telerik's r.a.d. spell control. Each r.a.d. spell control instance emits a session-based cookie for the purpose of 'securing the dialog.' I'm not 100% clear what that means or what, exactly, is being secured. Initially I assumed that it had something to do with the client-side spell checking dialog, ensuring that some cross-site script or some other attack couldn't be used to read or modify the contents while being spell checked. That was my guess, at least, until I found that there's a UseSession property that will store whatever is needed to ensure the 'security' in a session variable. Hrm, so it's a server-side check... I'm confused, but I digress.

Anywho, the UseSession property is False by default, so each time a user visits a page that has a spell checker control - or a page with many spell checker controls - new cookies get added to the browser. You can probably see where I'm heading with this - one particular page had over 40 spell checking control instances on it, which proceeded to write 40+ cookies to the client. Since IE can only remember 20, it conveniently decided to nix my forms-based authentication ticket cookie. The end result? Whenever I visited this page, after posting back I'd immediately be kicked back to the logon page because the authentication ticket was getting overwritten by the slew of r.a.d. spell cookies. Meh.

The workaround, in my case, was to simply set the UseSession property to True, thereby avoiding the plethora of r.a.d. spell cookies and thereby preserving my forms-based authentication ticket.

It took a bit of researching to figure this particular one out, and it wouldn't have been possible without Fiddler, a great tool for inspecting the incoming & outgoing HTTP streams. (I mentioned Fiddler a few weeks back in my Scott Hanselman's Recommended Tools blog entry.) Thanks to Fiddler I was able to see the mess of cookies being added by r.a.d. spell and the corresponding loss of my forms-based authentication ticket cookie. In addition, I set a breakpoint in the Global.asax's AuthenticateRequest event handler and added Request.IsAuthenticated to the Watch window. On postback, then, I would hit the breakpoint and see that I had 'lost' my authenticated status.

In any event, if you find yourself being 'automagically' logged out of your site, perhaps it may be due to using IE as your browser and having too many cookies from your domain. Hope this helps someone save the time that I invested.

Happy Programming!

Database Projects in Visual Studio .NET
13 July 05 12:07 AM | Scott Mitchell | with no comments

Until about two years ago, I was completely ignorant about Visual Studio .NET's Database projects. That is, until my wife let me in on the secret. Actually, it was kind of funny: she came home from work one day excited after she had learned about the Database project type in VS.NET and wanted to show me how to create/setup/use such a project. We were about to head out of the house, so I said, “Later,” and over the next few weeks she mentioned it occasionally, but there never was a good time.

Finally, we both had some free time and I asked her to show me the Database project. Since then I've been using them in every single data-driven ASP.NET Web application project I've created, and have immensely enjoyed the benefits. Many thanks, my dear.

I recently wrote an article on 4Guys about the advantages of Database projects and how to get started using them. I invite you to read this new article, Database Projects in Visual Studio .NET. Additionally, be sure to check out the Visual Studio .NET Database Projects sample chapter from Database Access with Visual Basic .NET.

To summarize the advantages of Database projects, I quote from my article on 4Guys:

  • Source control on database objects - if you are using source control (and you most definitely should be), the scripts managed through the Database project can be added to your source control provider. This means that any changes to your database objects will be recorded by your source control provider, thereby providing the myriad of advantages that source control affords (rolling back to older versions, a complete history of changes, etc.).
  • A centralized development experience - rather than having to poke through SQL Enterprise Manager you can manage your database-related objects through the same IDE that you are using to manage the pages and components in your ASP.NET application.
  • An improved text-editor - Visual Studio .NET's text editor is head and shoulders above SQL Enterprise Manager's built-in text-editing experience. Additionally, with SQL Enterprise Manager many of the dialog boxes that are used to create/edit database objects are modal, thereby making it impossible to examine other facets of the database when creating/editing a database object. Not so when doing it through Visual Studio .NET.
  • Ease of deployment - if you need to quickly replicate your database's structure having a Database project makes it as easy as right-clicking on the Database project's objects and selecting the 'Run' context-menu option.
Scott Hanselman's Recommended Tools
28 June 05 09:58 PM | Scott Mitchell | with no comments

The other day I stumbled across Scott Hanselman's 2005 Ultimate Developer and Power Tools List. In his blog entry, Scott lists a whole slew of recommended tools and utilities covering a wide array of facets. Definitely worth checking out.

There are a few tools Scott mentioned that I wanted to give a shout out to, as well:

  • Reflector - this tool has given me an understanding and comprehension of the .NET Framework and ASP.NET internals that (IMO) no book could ever provide. Learning some factoid by reading about it doesn't stick in the ol' noggin nearly as well as figuring out the same bit of information by spending 15 minutes plowing through the actual source code. Highly, highly, highly, highly, highly recommended.
  • Firefox Extensions - extensions make Firefox one sweet browser. The most useful to me in a day-to-day setting are GoogleBar, IEView, and WebDeveloper. For having to dig into the HTTP internals for a request, the Live HTTP Headers extension is invaluable. (Ditto for Fiddler for IE.)
  • Peter Blum's Validation and More and Date Package - a client of mine bought these controls on a whim and it was the best investment he could have made. It's probably saved me a couple dozen hours, netting the client several thousand dollars in savings... all this for a few hundred dollars.
  • Andy Smith's MetaBuilders - a number of very useful, free, open-source server controls and custom DataGrid column classes that have also saved my clients oodles of money over the past couple of years.
  • ELMAH - Atif Aziz's Error Logging Module and Handlers component, for which I co-authored an article on MSDN Online (Using HTTP Modules and Handlers to Create Pluggable ASP.NET Components). This is another thing I use in every ASP.NET project - takes about two minutes to setup and the rewards are priceless. Plus you can't beat the fact that ELMAH is free and open-source, so you can enhance it as needed.

One tool not on Scott's list that is invaluable to me is UltraEdit32. (Granted, Scott lists a similar buffed-up text editor, Notepad2, which has the benefit over UltraEdit of being freeware. I've not used Notepad2, so I can't compare it feature-wise to UltraEdit.) I also want to give a shout-out to FileZilla, a free FTP program that, since I started using it, I could not imagine life without. (I was using WS_FTP before... ick.)

For more tools and utilities be sure to check out Scott's list. There are also some past blog entries that touch on similar topics, such as Programs You Can't Live Without and What Programs Do You Have Running 24x7?

I'm No Longer Cutting Edge
27 June 05 08:43 PM | Scott Mitchell | with no comments

I'm no longer cutting edge. Well, I guess I've never really been super cutting edge, but since the ASP 3.0 days I've always been on the forefront of new ASP-related goodness coming down the Microsoft pipeline... I began working with classic ASP 3.0 months before it officially shipped with Windows 2000 / IIS 5.0, when I started on my first book. Right as I was finishing my second (and final) book on classic ASP I was invited up to Redmond to learn about this new technology, ASP+ (which later became ASP.NET). Those were the days.

Here we are now, getting closer to having an RTM version of 2.0, and I'm so far behind. I've looked into 2.0, played around with it, even written some tutorials on the GridView, but I've not created an end-to-end application with ASP.NET 2.0 or done anything remotely close to it to prepare myself for my upcoming book, which I've yet to start and whose priority seems to be diminishing daily. I feel so outdated.

A large part of this is due to the fact that I do quite a bit more consulting work these days than I used to do in the past and my clients are happy with ASP.NET 1.x (not that I've tried to convince them to switch their existing, working ASP.NET 1.x systems to beta software). I will master ASP.NET 2.0 here sooner than later, but what concerns me is that I'll need to know both ASP.NET 1.x and 2.0 quite intimately. The 4Guys visitors and my students will be interested in 2.0 content, but my clients will want me to keep expanding their existing, functional 1.x systems.

There was a time when I knew classic ASP and VBScript like a second language. It's been years since I've last created or edited a classic ASP page, thankfully. Without the practice with ASP/VBScript, however, this “second language” has fallen into disrepair. I had to write some VBScript the other day and was able to whip it out in 15 minutes or so, but about 13 minutes of that was poking around the Microsoft Scripting docs.

I know there's a lot less difference between ASP.NET 2.0 and 1.0 than between classic ASP and ASP.NET, but nevertheless I find myself worrying that I won't be able to have a mastery of both versions like I know I will need. I'm concerned that as I learn and use ASP.NET 2.0 more and more that my ASP.NET 1.x skills will erode, where it will take twice as long to tackle some given ASP.NET 1.x task than it does now, and that's not fair to my clients. I don't mind change in the least - if I did I'd be in the wrong field - I'm just hoping that I can continue to thrive in the 1.x and 2.0 worlds simultaneously.

Tip: When Adding Dynamic Controls, Specify an ID
03 June 05 12:32 PM | Scott Mitchell | with no comments

One area where ASP.NET developers seem to have the most difficulty is working with dynamic controls. This difficulty is understandable, as there are a plethora of subtleties in getting everything to work right. I've actually authored a number of articles on working with dynamic controls:

The other day I was talking to a colleague who was having some problems with dynamic controls. His site was designed as a single “master” page that had on it navigation controls (skmMenu, to be precise) and a PlaceHolder. Whenever a skmMenu menu item was clicked, this colleague wanted to dynamically load a corresponding user control into the PlaceHolder.

His first attempt at knocking this out was to do the following:

  1. In his “master” page he had a method called LoadPageControls() that was invoked from the Page_Init event handler. This LoadPageControls() method looked at a Session variable - if the session variable was set to a user control path, that user control was loaded into the PlaceHolder (using Page.LoadControl(path) - see An Extensive Examination of User Controls for more info on dynamically loading user controls). If, on the other hand, the session variable was not set (such as on the first page load, before the user had clicked an item from the skmMenu), a default user control was loaded.
  2. When a skmMenu menu item is clicked the MenuItemClicked event fires. This colleague then created an event handler for this event, set the Session variable to the menu item's CommandName property and recalled LoadPageControls(), thereby loading in the appropriate user control based on the menu item clicked. And since this information was saved in the Session, if the dynamically loaded user control caused a postback, for example, upon reloading the “master” page the appropriate user control would be re-added, thereby remembering the user control to display across postbacks. (Side note: this information could have also been serialized to view state. I don't know if the Session option was chosen out of ignorance over using ViewState or if it was chosen because he wanted the user to be able to return to the “master” page sometime later and have the last loaded menu item for that user brought back up. One issue with Session, though, is that this “master” page will not work for those who have cookies disabled in their browser.)

This approach worked well, or so it seemed. The user could click a skmMenu menu item and its appropriate user control would be loaded up in the PlaceHolder. If the user control caused a postback - say it contained a Button Web control - even when the postback was invoked, everything worked as expected - the page was reloaded and the appropriate user control was displayed.

There was, however, one problem. One dynamically loaded user control contained an editable DataGrid. If a user clicked the Edit button in the DataGrid, nothing happened. If he clicked it a second time, the DataGrid row became editable. There was a one-click pause, so to speak. Setting a breakpoint in the code, my colleague was able to determine that the event handler wasn't being invoked until the second (and all subsequent) clicks.

Anytime you are working with dynamic controls and events seem to go missing, the first thing to do is a View/Source. View the rendered HTML of the page that's sent to the browser before you do the action that doesn't trigger the server-side event (but should), and compare it to the HTML of the page right before the action does trigger the server-side event. Specifically, pay attention to the IDs of the Web controls, since the ID is what is passed back during postback to indicate which control triggered the postback...

When examining the HTML after the editable DataGrid was loaded the first time, the DataGrid's ID was something like _ctrl1_DataGridID. After clicking the Edit button (which did not make the row editable), the returned HTML differed in that the DataGrid's ID was now _ctrl0_DataGridID. Note the change from 1 to 0 in the ID. Since the ID differed from the first time the DataGrid user control was loaded to all subsequent postbacks, the event wasn't picked up that first time, when the ID was still in flux.

Once we had identified that this was the problem (as it usually is when an event goes missing the first time, but not on subsequent tries), the next step was to figure out why this was happening. The problem could be traced back to the chain of events that unfolded when the skmMenu menu item was clicked. When the “master” page is visited for the very first time, the LoadPageControls() method fires during the Init event and, since there is no Session variable set, the default user control is loaded. Now, when the skmMenu menu item to load the editable DataGrid is clicked, the page posts back and the LoadPageControls() method runs first, and it says, “Hey, I still have no Session variable,” so it loads the default user control. Then, later in the page lifecycle, the MenuItemClicked event handler runs, which sets the Session variable and then recalls the LoadPageControls() method, which says, “Ah, yes, here is this Session variable, let me load the associated user control.”

The problem is, the PlaceHolder has had two controls added to its Controls collection, hence the _ctrl1 in the DataGrid's ID! (This happened even though the PlaceHolder's Controls collection was Clear()ed each time the LoadPageControls() method was called...) When the Edit button was clicked for the first time, the page posted back and the LoadPageControls() method was called from the Init event. This time it said, “I do have a Session variable set, so let me add the DataGrid user control.” Note that in this sequence only one control is added to the PlaceHolder instead of the two that were added in the previous page lifecycle. Hence we get the _ctrl0 in the DataGrid's ID the second (and all subsequent) times. Furthermore, in this lifecycle the _ctrl1 ID was what was sent back in the post headers, so the Page class couldn't tie the event that caused the postback back to the DataGrid, which is why the DataGrid's Edit button wasn't firing the DataGrid's EditCommand event on the first click.

The solution? Simply give a specific, named ID to the user control being dynamically added. That is, my colleague's code, before this change, looked like (very rough, I'm leaving out the Session variable check):

PlaceHolderID.Controls.Clear()
Dim c as Control = Page.LoadControl(Session("pathToUC"))
PlaceHolderID.Controls.Add(c)

Everything worked fine and dandy once the code was changed to:

PlaceHolderID.Controls.Clear()
Dim c as Control = Page.LoadControl(Session("pathToUC"))
c.ID = "someStaticString"
PlaceHolderID.Controls.Add(c)

With that change, the DataGrid's ID was always the same thing, something like someStaticString_DataGridID. As you can see, working with dynamic controls can introduce all sorts of hard to find and diagnose subtleties. To be able to debug dynamic control scenarios, the following things are paramount:

  • Patience! :-) This is probably true for all debugging, but more so for debugging dynamic controls.
  • A solid, air-tight, profound understanding of the ASP.NET page lifecycle. You need to know what events fire when, what methods run in response, and what happens when controls are added to the control hierarchy mid-way through the page's lifecycle. Some articles worth reading for more information on this include:
  • A good understanding of the ASP.NET event model, and the ability to dig through the rendered HTML for a page and spot differences that may be causing problems.

Happy Programming!

Little Known, Invaluable Methods and Properties in the .NET Framework Base Class Library
24 May 05 04:32 PM | Scott Mitchell | with no comments

I've decided to create a running article series on little known, but invaluable methods/properties in the .NET Framework BCL, starting with this week's 4Guys article enumerating some of the more handy methods for working with file paths. Over the coming weeks I'm planning on adding additional topics and associated methods/properties that don't get much press in the ASP.NET developer world. The next topics I'm planning on addressing are:

  • Working with Colors, and
  • Parsing Strings

In the meantime, I am interested in any suggestions you might have for such methods and properties and/or general categories. As Julia Lerman has said before in her presentations on ASP.NET topics, “There's more to .NET than System.Web.” I couldn't agree more, and this ongoing article series is an attempt to help highlight some of the more useful elements in the BCL that ASP.NET developers can benefit from.

Read Little Known, Invaluable Methods and Properties in the .NET Framework Base Class Library.

More On Why I Don't Use DataSets in My ASP.NET Applications
16 May 05 08:38 PM | Scott Mitchell | with no comments

A couple of weeks ago I wrote an article on 4Guys titled Why I Don't Use DataSets in My ASP.NET Applications, along with an accompanying blog entry. As of May 16th, 2005, that blog entry has racked up over sixty comments from readers, ranging from gung-ho agreement to cogent arguments against my thesis. In any event, I thought it would be worthwhile to highlight some of the more eloquent feedback along with correcting some apparent misconceptions. So I introduce to you my latest 4Guys article, More On Why I Don't Use DataSets in My ASP.NET Applications.
