Accessing the browser’s stylesheets with CSSelection

October 20th, 2010 § 3 comments § permalink

When writing applications that modify the DOM there’s one thing to keep in mind: your code will not be the only code touching the DOM. Defensive code is good code.

With defensive programming in mind, there sometimes exists a need to peek into the CSS that is loaded into the browser. Say an application you're writing depends on user-based styles that are dynamically loaded in. It'd be nice to inject those styles into the document CSS and know you're not overwriting anything. Perhaps your code is meant to take an existing style and modify it.

To this end I present a work-in-progress of my first jQuery plugin, CSSelection.

Code: View CSSelection on GitHub

Download jquery.CSSelection.js

CSSelection starts by accepting a jQuery selector. With that selector you can read the current styles applied to it, add new styles, create the selector if it does not exist or even remove it from the stylesheet. I tried to keep the interface predictable to jQuery users at the expense of a bit of extra code. Passing no arguments retrieves the current rules, if any, of the selector. Passing the string "remove" does as expected to the first matching selector found in the stylesheets. Finally, passing an object literal with rules will either add them to the stylesheet or modify the matching selector.

While this code is used in production I'll still call it beta, as bugs are cropping up in IE9 and more are sure to arrive in future revs of Safari and Firefox. Enjoy!

Usage examples

How to create a rule for HTML elements:

$('p').CSSelection({rules: {'font-size': '120%', 'color': 'red', 'background-color': '#fff'}});

Modifying existing CSS rule/add new rule:

$('.important').CSSelection({rules: {'text-decoration': 'underline overline', 'background': 'yellow'}});

Delete a CSS rule:

$('.annoyingDecoration').CSSelection('remove');

Get attributes for an existing rule:

var attrs = $('.wickedAwesomeClass').CSSelection();
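For the curious, the lookup at the heart of a plugin like this boils down to walking the browser's stylesheet collection. Here's a rough, simplified sketch of that idea (not CSSelection's actual source; findRule is my name), written as a standalone function that you'd feed document.styleSheets in a browser:

```javascript
// Simplified sketch of finding the rule for a selector. This is not
// CSSelection's actual source; findRule is an illustrative name. In a
// browser you would pass in document.styleSheets.
function findRule(styleSheets, selector) {
	var want = selector.toLowerCase();
	for (var i = 0; i < styleSheets.length; i++) {
		// Standards browsers expose cssRules; older IE exposes rules.
		var rules = styleSheets[i].cssRules || styleSheets[i].rules || [];
		for (var j = 0; j < rules.length; j++) {
			var sel = rules[j].selectorText;
			if (sel && sel.toLowerCase() === want) {
				return rules[j];
			}
		}
	}
	return null; // no match; the caller can create the rule instead
}
```

When the lookup comes back null, adding the rule is the flip side: insertRule on a stylesheet in standards browsers, addRule in old IE.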

The subtle art of ranges – Intro

September 21st, 2010 § Comments Off on The subtle art of ranges – Intro § permalink

You’re starting to see it more and more. You select some text on a website and — whoosh — something happens. Sometimes you expect the action. Oftentimes you don’t. Regardless, these things are possible due to DOM ranges.

And DOM ranges are terrible.

I don’t mean they are a terrible idea. I mean they’re terrible to work with. Of course there is no single, standard API to work with. Of course Internet Explorer adds another level of challenge. And of course you’re going to at some point have to know how these things work if you expect to be a JavaScript ninja.

There are many reasons to deal with ranges. Some lame services will hijack your selection and inject their own tags. If you’ve ever needed to hack around a WYSIWYG editor then ranges will be on the docket. Or say you’re leading a team that allows users to select text on a page and do things like highlight it, attach a note to it or even place a bookmark at that point in the page. That’d be my job.

The default range object seems good enough for a lot of tasks. Where it starts to show its weakness is in interacting with the DOM beyond just the range.

You want to take what a user has selected and wrap it in a decorative <span>? That should be easy, right? What happens if that selection starts in the middle of an <li>, moves down a few paragraphs, enters a <div> and ends a few levels deeper in the node tree? Well, you can’t just wrap that in a single element.

The browsers all try to be helpful. Really, they do. If your range starts in the middle of an <em> tag but ends outside of it the range you get back from the browser will automatically split the tag properly if you wrap it. That’s nice. But if you remove that wrapper now you have a split tag that equates to more child nodes in memory than in code. That makes for bad assumptions later.

During my research into and development of this project I’ve found a stunning lack of comprehensive information (much less example code) for DOM ranges. Obviously PPK is the place to start but his level of detail is light and — at best — beginner. I’ve worked with some guys at Google who are on the Closure Library team and are ridiculously smart. Even they didn’t plan for the basic needs I have. Or it could be I’m doing it wrong.

In any case, I’ll be working on a series of entries to this blog that will go through what I’ve learned, show some code, talk about common pitfalls, propose some best practices and hopefully shine some light on the dark corner that is DOM ranges.

Viewing VMware Fusion guest Cassini/Development Server on host

April 25th, 2010 § 3 comments § permalink

Preamble

Using a Mac is great. You can pry mine from my cold, dead hands. However, there are times I need to run things in a Windows VM. It’s easiest to do as much work as possible in the VM, but there’s the unavoidable need to connect remote machines to the guest’s server.

In my specific case I am part of a team developing a C# app, so that means Visual Studio 2008 and its built-in development server, Cassini. Microsoft added this lovely hard-coded trick to Cassini so that it listens only to requests from the loopback, localhost. This is supposedly to prevent developers from shipping apps with Cassini built in. I don’t care about the reason. I care that I can’t connect to my development VM from my Mac or any other remote machine. There is a fairly simple solution to that, though.

My setup

  • OS X 10.6.3
  • VMware Fusion 3.0.2
    • Windows 7 Professional
    • Visual Studio 2008
    • VPN’d connection to the office

Yes, it’s wonky to VPN in my VM but I can’t VPN in my Mac and that’s a whole other story.

What you’ll need

  • Fiddler (free; installed in the guest VM)

That’s it.

The Concept

Fiddler is a debugging proxy but it also has the ability to be a reverse proxy. We’re going to use that feature to take requests to the guest VM on Fiddler’s port of 8888 and automatically reroute them to Cassini’s port.

The Execution

First things first: my particular setup is probably unique and the tedious steps I’m about to list will likely never be needed by the average developer. However, if I need them, someone else does as well, so this is bound to help some lucky Googler.

  1. Make sure your VM’s network connection is running in Bridged mode
  2. Start your Cassini server by going to Visual Studio in your guest VM and pressing ctrl+F5
  3. Fire up Fiddler and open Tools->Options
  4. In the Connections tab make sure “Allow remote computers to connect” is checked and hit OK to close the options window
  5. Press ctrl+r to open the rules (defaults to opening it in notepad.exe)
  6. Search for the OnBeforeRequest function and add the following condition:

    if (oSession.host.toLowerCase() == "192.168.1.108:8888") oSession.host = "localhost:5867";

    Just modify the 192.168.1.108 IP to be your VM’s IP and change 5867 to be Cassini’s port

  7. Fire up cmd.exe and renew your IP by running ipconfig /release followed by ipconfig /renew

You’re done!

Testing your work

Now that Fiddler is set up to route requests to Cassini, you can go to any machine on your local network, type in your VM’s IP followed by Fiddler’s port, and Fiddler will accept your request, route it to Cassini and return the results. In case you need to copy and paste something, the path in my browser is simply

http://192.168.1.108:8888

Ways to improve this

There’s an additional rule that can be added to Fiddler. In OnBeforeResponse, add the condition oSession.utilReplaceInResponse("localhost:5867", "192.168.1.108:8888"). That does what it sounds like. You may have noted the magic numbers in the Fiddler rules. These will suck to maintain. You can right-click your web solution, go to Properties, click the Web tab and assign a specific port. If you go to your Windows network settings and force in an IP you’ll never have to edit the rules file again. However, these are hacks around a weak implementation.
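One small mitigation for those magic numbers is to pull them into constants at the top of the rules file. Here's a sketch of how both rules could look inside Fiddler's CustomRules.js (VM_HOST, CASSINI_HOST, rerouteRequest and rerouteResponse are my names; oSession and utilReplaceInResponse come from Fiddler):

```javascript
// Sketch: the two rerouting rules with the magic numbers pulled into
// constants. The constant and function names are mine; oSession is the
// session object Fiddler hands to OnBeforeRequest/OnBeforeResponse.
var VM_HOST = "192.168.1.108:8888";  // your VM's IP plus Fiddler's port
var CASSINI_HOST = "localhost:5867"; // Cassini's host and port

// Call from OnBeforeRequest: send inbound requests on to Cassini.
function rerouteRequest(oSession) {
	if (oSession.host.toLowerCase() == VM_HOST) {
		oSession.host = CASSINI_HOST;
	}
}

// Call from OnBeforeResponse: rewrite any localhost URLs Cassini put
// in the body so links keep working on remote machines.
function rerouteResponse(oSession) {
	oSession.utilReplaceInResponse(CASSINI_HOST, VM_HOST);
}
```

Now a port or IP change is a one-line edit instead of a hunt through the rules file.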

A potentially better route would be to ditch Cassini altogether and use UltiDev, a free .NET server. I may still; it should get around these headaches fairly well.

The best option would be to run IIS in your VM. For an as-yet-undiagnosed reason my web solution will not allow me to run it through IIS. I hope you, casual reader, have more luck.

Closing

There have to be more devs out there running a Windows VM on their Mac, needing to test their web apps in OS X, on their iPod Touch/iPhone/iPad, BlackBerry, yada yada. Yes, this could all be solved by using the dev or QA server, but those aren’t updated automatically and I can’t be bothered to make a build every time I want to tweak some JavaScript. I’m marking this down as an exercise in learning about VM networking types, reverse proxy functionality and the power of releasing and renewing your DHCP lease.

Uninstalling Actual Technologies’ ODBC drivers

January 4th, 2010 § 2 comments § permalink

This article is quite old. Please don’t use it for…anything. Thanks!

Since my upgrade to Snow Leopard I’ve been fighting a few very particular issues in my development environment. It should be noted that I think nearly all regular consumers and a large percentage of developers will be satisfied with the development environment provided by Apple. However, some developers need more.

Though able to run in 64-bit, I was running Apache in 32-bit mode under my Leopard install. This is because the ODBC drivers I was running from Actual Technologies only supported 32-bit Apache. In Leopard land that was fine. Now that I’m on Snow Leopard and everything else is 64-bit, I thought my ODBC drivers ought to be as well.

My problem is that I had the 32-bit Actual Technologies drivers, compiled in the FreeTDS drivers for MSSQL support when compiling PHP, and then installed a 64-bit beta build from Actual Technologies. Somewhere between the Snow Leopard upgrade and all these other installs, things have gone a bit wonky. The best bet is to clean out all ODBC drivers and start anew.

Small hiccup: Actual Technologies does not provide an uninstaller. Well, not one that you’d be able to find. I emailed their support and received a link to a script bundled up in a dmg. By request I’m not going to post a link to the script or the script itself, and I think that’s fair. Looking at the script, it is no more than a series of rm -rf commands that blindly erase whole directories, including any existing iODBC data and settings you may have from other products or drivers.

I’m going to run the uninstaller, recompile FreeTDS, rebuild PHP and see where that gets me. Hopefully not towards a fresh Snow Leopard install.

Making friends with the DOM

November 19th, 2009 § Comments Off on Making friends with the DOM § permalink

Any JavaScript developer who spends more than a blink of an eye working with the DOM has probably come to despise the API. It’s not that there’s a dearth of features (well….); I think it’s the verbosity that causes modern application development woes. For the one-off script that adds an onclick or modifies some text the DOM API is fine and dandy. Try scaling up to developing an application or, perhaps, a framework and things get annoying and messy right quick.

When Tommy finally gave jQuery a try and discovered there were no native DOM manipulation tools, he decided to roll his own plug-in he calls FluentDom. The code is clean and quality, as I’d expect from him, but the interface drives me bonkers.

I’m a fan of configuration over convention, a concept which, combined with my MooTools history, has led me to favor object literals as configuration “objects”. When I use MooTools to create a new element, the only arguments are the element type and an object literal with any configuration options. This is where Tommy’s code style differs: in FluentDom you set each attribute with a specific method, and calls can be chained together. This isn’t to say that FluentDom couldn’t be used with a configuration object; it’s just not as clean an implementation as I prefer.

In a message to him I said this:

The benefit to setting attributes via an object literal is that I can use one set of preset attribute values and pass that around. Quite handy if you’re doing something like iterating and creating a list of similar elements but particular ones might have subtle differences. Also easier to maintain and extend. Given your code it’d be a trivial modification to do this during a create call.
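To make that concrete, here's a plain-JavaScript sketch of the reuse I'm describing, with no DOM involved (buildAttrs is an illustrative name, not part of FluentDom or MooTools): one preset object drives a whole list of similar elements, with per-item tweaks merged on top.

```javascript
// Sketch of reusing a preset attribute object. buildAttrs is an
// illustrative name, not part of FluentDom or MooTools; it returns the
// attribute map an element would be created with.
function buildAttrs(base, overrides) {
	var attrs = {}, key;
	for (key in base) attrs[key] = base[key];             // copy the preset
	for (key in overrides || {}) attrs[key] = overrides[key]; // per-item tweaks win
	return attrs;
}

// One preset passed around while iterating over similar elements:
var preset = {'class': 'menu-item', 'tabindex': '0'};
var home = buildAttrs(preset, {'id': 'home'});
var about = buildAttrs(preset, {'id': 'about', 'class': 'menu-item active'});
```

Each result could then go straight into a create call, e.g. new Element('li', home) in MooTools.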

Which do readers prefer: specific, chainable method calls or an ambiguous configuration object?

PHP and Segmentation Fault (11)

November 16th, 2009 § Comments Off on PHP and Segmentation Fault (11) § permalink

Despite research that feels all-inclusive, reading other bloggers’ posts and the collective groans on Marc’s message board (not to mention my own now-invalid feeling of accomplishment), I am still being bested by the infamous “child pid xxxxx exit signal Segmentation fault (11)” issue.

From what I can tell, the segmentation fault occurs when I execute odbc_fetch_array. I cannot tell you what a joy it is to try and debug a script that dies inside a built-in function. PHP 5.3, how I wanted to love you. How I so very much despise you now.

If anybody out there in the tubes has any helpful, constructive ideas I sure would appreciate some feedback in the comments.

Deleting dynamically generated elements

November 11th, 2009 § Comments Off on Deleting dynamically generated elements § permalink

A coworker, Tommy, posed an interesting question to me the other day. How would one handle dynamically adding elements to a form, especially with regard to giving the user ability to remove those elements? We both admitted to implementing this a few times and he described to me his method of iterating through elements until the right node was found and then removing it.

As a habit I avoid DOM traversals whenever possible, so I suggested an alternate solution. Instead of having a controller function look for some kind of unique ID or scan the DOM, why not push control down to the individual element and make each removal handler a closure holding a reference to its own node? That way the code to remove an element is only ever concerned with its own instance. No crawling the DOM, no messy lookups, no muss no fuss.

If that didn’t make sense, perhaps this snippet will.

var addControls = function(element)
{
	// Build a small "-" button to act as the remove control.
	var remove = document.createElement('input');
	remove.setAttribute('type', 'button');
	remove.setAttribute('value', '-');

	// The handler is a closure over element, so it can remove its own
	// node directly: no IDs, no lookups, no DOM traversal.
	remove.onclick = function()
	{
		if (confirm('Are you sure?'))
			element.parentNode.removeChild(element);
	};

	element.appendChild(remove);
};

addControls(someNodeThatYouAlreadyFound);

Pretty simple. Questions? Let em rip in the comments.

Getting back to my roots

October 29th, 2009 § 1 comment § permalink

Or: There are legit reasons to write your own library

Around the office it’s no secret that I’m a fan of the MooTools JavaScript library. For those of you who can remember back before jQuery, Moo.fx (from whence MooTools came) was a tiny, feature-rich library for animations and effects. It was a welcome relief from having to write effects and deal with the cross-browser issues that invariably crop up. Compared to the heavyweight (and necessary) pairing of the Scriptaculous and Prototype libraries, Moo.fx was a boon for byte-pinching developers and infrastructure managers.

Over the years, as many libraries popped up and fell by the wayside, jQuery came along and showed us the difference between a library and a framework. Perhaps the most important result of competition between libraries was performance. I won’t hold MooTools above any other library arbitrarily, but when they created SlickSpeed it was a great way to compare performance between libraries and let developers decide which library best fit their needs. John Resig came along and blasted out Sizzle and – BAM – kicked up the speed of jQuery by a wide margin. The healthy competition rages on, and it just means better speed, compatibility and features for users of JavaScript libraries.

With the availability of tested, proven, mature libraries why would anybody ever bother writing their own native JavaScript code any more? I know that the last time I write document.getElementById() can’t come soon enough. Still, there are reasons to stick to pure, native JavaScript.

For starters, no matter how elegant or optimal, a library’s code will never be as efficient as native JavaScript. Sure, it may delegate to native functions and the cost could end up being a millisecond or two. That’s fine. Then again, do you want to be the one tracing back performance issues when your code locks the browser on var header = $('header'); because you hit some kind of weird edge case?

I had a better reason than performance. I’m a realist. I don’t have the time or patience to write the fastest code in the west. I write code to solve problems. Functionality first, optimization second. That doesn’t mean that I write slow or hacky code. I just want to get the problem solved first, and sometimes it’s easier to write something in straight JavaScript instead of finding a MooTools class or a jQuery plugin, making sure it doesn’t pollute the namespace, assuring that it runs reliably and trying to configure it for my specific needs, even though it was never meant for them.

Sadly, it has been a few years since I’ve been able to really stretch my JavaScript legs and run. From a CTO who banned the use – and all but the mention of – AJAX to projects that were more business rules than user interface I haven’t had just cause to go nuts on the client side. There have been occasional forays into widgets and code snippets but nothing serious. Any real geek should realize that a dearth of reasons to code passionately causes bad things to happen in a programmer’s brain.

The straw that broke the camel’s back was a manager walking over to me, asking me if JavaScript could do XYZ, to which I boldly assured him it could. I sat down, cracked open a fresh TextMate skeleton project and stared blankly at the screen. I had no idea how to accomplish the task, where to look for guidance or even if I could get it done. This could not stand. I needed a core JavaScript refresher. More than that, I needed to cover topics I had never mastered like, say, wholly p0wning the DOM.

It was at that moment that I decided to write my own JavaScript library. Not to have more features. Not to be the fastest. Not to be the most popular. Simply to better understand the language and environment in which it is most often run. A refresher on what I know, a chance to learn new tricks and an opportunity to better learn how to architect something bigger than a snippet.

Things are progressing nicely, and when there is a stable base and interface I plan on releasing the code open and free. I doubt it will blow the minds of any seasoned JavaScript experts, but already I have some tricks that I haven’t yet found in either MooTools or jQuery. It has been frustrating, fun and frivolous – all the best attributes of any personal code project.

I’ll be clear: I do not recommend writing your own library, framework or other massive code base. Not on a whim, at least. It’s not worth the time to do something that is only half as good as what already exists.

However, you cannot call yourself an expert at anything without knowing how the guts work. Knowing how fuel injection works does not make me a champion race car driver.

And that’s how I’ll end this chapter. JavaScript programmers are not race car drivers.

Compiling Subversion 1.6 on a Mac

October 12th, 2009 § Comments Off on Compiling Subversion 1.6 on a Mac § permalink

If it wasn’t already obvious from my previous posts, I’ve been bitten by the dangerous and anti-productive bug that is building your own apps from source. However, in the case of Subversion I feel the need to build from source instead of using the installer, as I prefer my local apps to go into /usr/local, not /opt.

Word of warning: My personal needs for SVN do not include the Berkeley DB, Neon or other more advanced features. If you require those features, you can run ./configure --help to review the possible arguments and what is required to use them.

  1. Download the latest Subversion source. I chose 1.6.5.
  2. Decompress the package. You can click on the tarball or use Terminal to run this: tar xvjf subversion-1.6.5.tar.bz2
  3. Now run the following commands in terminal:
    
    cd subversion-1.6.5
    ./configure --prefix=/usr/local --with-ssl --with-zlib=/usr/lib
    make
    sudo make install

That’s it! You’re done and can test your version by running the following command in Terminal: svn --version --quiet.

Using Remote Disc on non-MacBook Air machines

September 27th, 2009 § 1 comment § permalink

Just a quick tip on how to use the “Remote Disc” capabilities on any Mac that isn’t an Air. For some reason Apple will let you share a drive but not read it over the network unless you’re on an Air. Yet another quirk from Cupertino. In any case, the solution is easy.

Open up Terminal and run these two commands to update your settings. If you’re afraid of Terminal, just copy and paste these. Trust me, they’re harmless.

defaults write com.apple.NetworkBrowser EnableODiskBrowsing -bool true
defaults write com.apple.NetworkBrowser ODSSupported -bool true

Then open up Activity Monitor and force quit Finder. It will automatically restart. Open up a Finder window and marvel at the glory of sharing an OS X install disc over the network. Particularly handy when your MacBook Pro won’t read a disc.

I’m off to install Snow Leopard on my home/studio MacBook Pro. Over the wireless. Why does this sound like a bad idea? Hmm….