
Client Side Validation of Dynamically Loaded Input Fields in MVC3

In MVC3 it’s possible to enable client-side validation of input fields by marking up the associated model properties with validation attributes (such as [Required] or [StringLength(50)]) and letting unobtrusive JavaScript do the wiring. The result is validation code that runs straight away on the client side to give the user feedback, as well as on the server side to stop bad data from being entered. This approach is a double-edged sword: on one hand you get client- and server-side validation for free, but on the other you’re limited to a few kinds of input validation. The other catch is that if you dynamically load form fields into your page, they will not be validated automatically.

To understand why, let me tell you a story that follows your models, views and controllers through the life of your page.

  1. The incoming request is routed to your controller.
  2. The controller loads up a model whose properties are marked up with validation attributes.
  3. The model is passed to the view.
  4. When the view engine starts pumping out the resulting HTML, it sees the validation attributes and the unobtrusive JavaScript setting, and adds extra data-val-* attributes to your inputs.
  5. The request is fulfilled and the page content is loaded into the browser.
  6. Once the page has loaded, jquery.validate.js and jquery.validate.unobtrusive.js run, and the validation script attaches event handlers to the fields on the page.

It’s at this point that validation works as expected.

However, if you make an AJAX request that loads a different form or additional fields into the page, the new content will not get client-side validation.

This is because the content we loaded via the AJAX request never gets hooked up the way the fields that were present on the initial page load did.
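
Under the hood, the unobtrusive script does its wiring exactly once. With the scripts that shipped around the MVC3 era, jquery.validate.unobtrusive.js finishes with something roughly like this:

$(function () {
  // On document ready, parse the whole document once and attach
  // validation handlers to the form fields that exist at that moment.
  // Fields added to the DOM later are never seen by this pass.
  jQuery.validator.unobtrusive.parse(document);
});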

So to get around the issue, all we really need to do is hook up the fields that were dynamically loaded into the page. This can be achieved by adding a bit of JavaScript where you dynamically load your new fields:

$("#contentid").load("/ContentUrl", postData, function (responseText, textStatus, xmlhttpRequest) {
  if (textStatus == "success") {
    jQuery.validator.unobtrusive.parse("#contentid");
  } else if (textStatus == "error") {
    $("#contentid").html(responseText);
  }
});

The important bit in this snippet is the call to jQuery.validator.unobtrusive.parse; this hooks the validation functions up to the control events that unobtrusive validation needs. Now that you can validate input, don’t be a validation nazi.
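
A final gotcha worth knowing about (a quirk of the MVC3-era scripts rather than of the snippet above): jquery.validate caches a validator object on the form the first time it is parsed, and it will not pick up rules for fields added to that form afterwards. If your dynamically loaded fields land inside a form that was already on the page, you may need to clear the cached data and re-parse the whole form. A minimal sketch, assuming your #contentid element sits inside the form being extended:

// Find the form that now contains the dynamically loaded fields.
var $form = $("#contentid").closest("form");
// Throw away the validation data cached on the initial page load...
$form.removeData("validator");
$form.removeData("unobtrusiveValidation");
// ...so this parse call sees the form as new and hooks up every field.
jQuery.validator.unobtrusive.parse($form);

The trade-off is that the whole form gets re-parsed rather than just the new fields, but for a typical form that costs next to nothing.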

Computing is Cheap

Over a decade ago I used to run a Linux server (Red Hat at first, later Gentoo) that did some routing, stored files, and hosted my website and a Squid proxy. The Internet link was a terrible upload-capped cable connection, with a dynamic DNS service to update the host name whenever the IP address changed.

The server was a bit of a beast with 4 hard drives in it, so I imagine it drew a fair amount of power. Using current electricity rates ($0.19 per kWh × 24 hrs × 31 days × 0.250 kW), I estimate running the server would cost around $35 a month. And that does not factor in the cost of the connection.
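
For the sceptics, here is the same back-of-the-envelope sum as a quick script (the 0.250 kW figure is the assumed average draw from the estimate above, not a measurement):

// Rough monthly running cost of the old server.
var ratePerKwh = 0.19;      // dollars per kWh
var averageDrawKw = 0.250;  // assumed average power draw
var hoursPerMonth = 24 * 31;
var monthlyCost = ratePerKwh * averageDrawKw * hoursPerMonth;
console.log("$" + monthlyCost.toFixed(2)); // $35.34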

This long weekend I got myself a Virtual Private Server for around $7 a month (helped along by the currently favourable AUD exchange rate).

Here is the comparison:

  • The upload speed is 2 Mbps rather than 128 kbps.
  • The disk space is a modest 5 GB rather than 160 GB.
  • Software these days seems to be pretty memory intensive and needed tweaking to run efficiently in 128 MB; the old server, I think, had 512 MB.
  • The CPU is 2.4 GHz compared to 350 MHz, but I am sharing it with other virtual machines.

Computing power is cheap and virtualisation is making it cheaper.

My History of the Web

The web has rapidly evolved over the last 10 to 15 years. Recently there has been a lot of talk about the Internet or Facebook generation. So I thought it might be interesting to tell the kiddies about ancient web history, but purely from my own point of view (rather than Wikipedia’s).

When I first started out on the Internet, it was at libraries or other people’s homes. When I finally got the internet at home it was over a 33.6K modem. Imagine that, folks – the internet being that slow, and tying up the phone line while you were on it.

My first website was created in an image editor called CorelDRAW, which could save the picture as ‘html’. It was hosted on a URL like this: www.isp.com/~myispusername. The image took ages to download and there was only one page, but designing it was fun.

From there I progressed to WYSIWYG editors like FrontPage, HomeSite or HotDog. These tools let you create static HTML. Hosting at the time was done on a free server like GeoCities, which would give you a URL like www.geocities.com/crazycitynametogroupcontent/yourusername and put banner ads on your pages. The trend at the time was to put up an ‘under construction’ image, then replace it when you were finished with a page full of flaming and spinning animated GIFs. I swear it’s true: half the internet was either under construction, spinning or on fire.

After animated GIFs went out of fashion, the next craze was to steal JavaScript snippets to put on your site that didn’t really do anything useful, but were cool.

Then came the age of guestbooks and hit counters. Everyone needed a guestbook and/or hit counter service so that friends and family could prove they had visited your web page.

After guestbooks and WYSIWYG editors, we all started to learn HTML, because many of the WYSIWYG editors showed the HTML alongside the design surface. We also wanted a navigation bar that made it easy to move between multiple pages, without having to design the navigation bar on every page. So we resorted to iframes.

Then we all realised that iframes meant the URL in the address bar was not very useful. By this time, thanks to our guestbooks and counters, we were getting cluey about CGI Perl scripts and probably even PHP. Now we could use these CGI technologies, with HTML tables for layout, to create a navigation bar on every page without having to write the HTML for it many times over. And just imagine the excitement of being able to dynamically create content on your web page! However, not many hosts allowed server-side scripting – not for free, anyway.

It was around this time that broadband arrived, so with an always-on connection and the availability of dynamic DNS services it became possible to host your home page on your own server. No ads, no hosting fees, all the server-side scripting you want, and… a domain name!

Because we were now running our own web servers, server-side web applications became much easier to cobble together, and we could actually create something useful, like shared calendars or photo albums.

Now, I missed the next craze because I was too cheap or sensible to buy Windows for a server: since scripts were nasty non-object-oriented demons, ASP.NET WebForms abstracted all that pesky webby stuff away from the developer. Sadly, the HTML it created was nearly as bad as in the WYSIWYG age, and sometimes abstractions get in the way.

Then Google started making web applications that were powerful and really nice to use, so now any idiot on the internet can do more with free services than the elite gurus who wrote or hacked together scripts and hosted them on their own servers at home.

Around the same time, hosting became free or close to it, and the cost of electricity to run your own server made the exercise seem redundant.

Now, with ASP.NET MVC, .NET developers are following Java and Ruby developers in remembering how the web works: how to write HTML, how to lay out content with CSS, and how to do actually useful stuff with JavaScript or jQuery (for cross-browser compatibility).

And that is how it happened, kiddies.