144. Scale

On this week’s show, Paul talks to Joe Stump from Digg about building scalable websites, we review the best apps for web designers, and we investigate services for sending bulk emails.

Download this show.

Launch our podcast player

News and events

How much should you charge?

If you are starting your freelance career, the number one question you will have is ‘how much should I charge?’ It is an important question, and yet strangely it is not something you are taught at college. Perhaps they don’t teach it because it is a damn hard question to answer. It is certainly something we have avoided talking about on this show.

Fortunately, an article entitled ‘Six things to consider when setting your freelance rate’ was released this week. Although the article does not give a magic number, it does provide six valuable insights that will inform your final decision. These include…

  • Young freelancers and recent grads almost always ask for too little.
  • You can do things your clients can’t.
  • Your rate influences your perceived value.
  • You don’t get to keep it all.
  • An hour worked is not an hour billed.
  • The higher you start, the less you’ll need to increase.

I couldn’t agree more with everything said in this article. However, the point that really resonated with me was ‘You don’t get to keep it all’. Your rate has to cover not only your billable hours but also the cost of sales and marketing, as well as your various overheads. The article links to a superb rates calculator that helps you work out your chargeable rate based on these costs. Definitely worth checking out.
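To make that point concrete, here is a rough back-of-the-envelope sketch in PHP. Every figure in it is invented purely for illustration; the rates calculator linked from the article will do a far better job with your real numbers.

    <?php
    // Rough freelance rate sketch. All figures are invented for illustration;
    // substitute your own income target, overheads and billable estimates.

    $targetIncome   = 30000; // what you want to take home per year
    $overheads      = 8000;  // hardware, software, insurance, accountant, office
    $salesMarketing = 4000;  // the cost of actually winning the work

    $workingDays   = 230;    // a year minus weekends, holidays and sick days
    $hoursPerDay   = 7.5;
    $billableRatio = 0.6;    // only ~60% of a working day is actually billable

    $billableHours = $workingDays * $hoursPerDay * $billableRatio;
    $hourlyRate    = ($targetIncome + $overheads + $salesMarketing) / $billableHours;

    printf("Billable hours per year: %d\n", $billableHours);
    printf("Minimum hourly rate: %.2f\n", $hourlyRate);

Run with any recent PHP, that prints a minimum hourly rate of roughly 40, which is the point of the exercise: the figure is a good deal higher than simply dividing your target income by a forty hour week would suggest.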

A plethora of accessibility posts

With the arrival of WCAG 2.0, we are seeing a resurgence of interest in accessibility. This has led to a plethora of accessibility posts over the last few weeks. These include…

  • Writing good ALT text – This is a simple post about the use of the ALT attribute. It suggests two rules of thumb when writing ALT text. First, if you were describing the document to someone over the phone, would you mention the image or its content? If you would, the image probably needs alternative text. Second, does the alternative text of the images in the document still make sense if you turn off the display of images in your web browser? Simple advice, but well worth remembering.
  • Designing for Dyslexia – This is a series of three in-depth articles on the subject of dyslexia. It asks what dyslexia is and how we as web designers can improve our sites to accommodate the needs of dyslexic users. It’s interesting stuff. Part 1 defines what dyslexia is. Part 2 looks at some requirements that conflict with those of users who have visual impairments. Part 3 suggests some specific things you can do to improve the legibility of your designs.
  • Accessible forms using WCAG 2.0 – This extensive post aims to provide web developers and others with practical advice about the preparation of accessible HTML forms. It compares the WCAG 1.0 accessibility requirements relating to forms with those contained in WCAG 2.0. Important stuff, but not a 5 minute read!
  • Too much accessibility – The RNIB explains how the LEGEND tag can cause more harm than good if it is not concise and relevant. The reason? LEGEND text isn’t read just once at the start of the FIELDSET; screen readers repeat it at the beginning of every single label within that FIELDSET. There is a short markup sketch after this list.
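To illustrate both the ALT and the LEGEND advice, here is a small, hypothetical template fragment. It is not taken from any of the articles above; the file names and field names are invented.

    <?php // Hypothetical PHP template fragment illustrating the advice above. ?>

    <!-- A meaningful image: you would mention it over the phone, so describe it -->
    <img src="/img/visits-2008.png"
         alt="Line chart showing visits doubling between January and December 2008" />

    <!-- A purely decorative image: an empty alt lets screen readers skip it -->
    <img src="/img/divider.png" alt="" />

    <form action="/contact" method="post">
      <fieldset>
        <!-- Keep the LEGEND short: screen readers repeat it before every
             label inside the FIELDSET, so a long legend quickly becomes painful -->
        <legend>Your details</legend>

        <label for="name">Name</label>
        <input type="text" id="name" name="name"
               value="<?php echo htmlspecialchars(isset($_POST['name']) ? $_POST['name'] : ''); ?>" />

        <label for="email">Email</label>
        <input type="text" id="email" name="email" />
      </fieldset>
    </form>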

A business case for deleting content

I find myself using the word ‘simplify’ a lot when I talk to clients these days. So many website owners constantly want to add features or content to their site. In reality, however, we should be removing things from our already bloated websites, not adding to them. This is particularly true for large institutional websites, but it also applies to smaller sites.

Take, for example, the Headscape website. When we started the redesign process for our site, I sat down and really thought through what information prospective clients wanted. The answer was very little. In the end, our large, text-heavy website was reduced to a single page. That is the power of simplicity.

Gerry McGovern summed it up perfectly this week in his post entitled ‘A business case for deleting content’. He wrote:

The more you delete, the more you simplify. The more you simplify, the more you increase the chances of your customers succeeding on your website.

We may think that we cannot delete content because ‘somebody might want it’ or because we believe ‘it will help our search engine ranking’. However, bloated sites bring complexity, and with complexity comes confusion. The more content there is on your site, the less chance a user has of finding the content they need.

12 principles for keeping your code clean

We finish today with a great post for those who need help with their HTML code. Whether you are a student learning HTML or a designer who is more comfortable in Photoshop than Coda, this is a very useful article.

The post provides 12 excellent tips for keeping your code clean. These include…

  • Use a strict doctype
  • Set your character set and encode those characters
  • Indent your code
  • Keep your CSS and JavaScript external
  • Nest your tags properly
  • Eliminate unnecessary divs
  • Use better naming conventions
  • Leave typography to the CSS
  • Add a class/ID to your BODY tag
  • Validate
  • Order your code logically
  • Just do what you can!

The article explores each of these points in depth and clearly communicates current best practice in coding HTML. Well worth the read, even if only as a reminder.
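As a quick illustration, here is a minimal, hypothetical page skeleton that pulls a few of those tips together: a strict doctype, a declared character set, external CSS and JavaScript, and a class on the BODY tag. The file names are invented.

    <?php /* Minimal, hypothetical skeleton illustrating a few of the tips above. */ ?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
    <head>
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
      <title>Example page</title>
      <!-- Keep CSS and JavaScript external rather than inline -->
      <link rel="stylesheet" type="text/css" href="/css/screen.css" media="screen" />
      <script type="text/javascript" src="/js/site.js"></script>
    </head>
    <!-- A class or id on BODY gives the stylesheet a hook for section-specific rules -->
    <body class="articles">
      <div id="content">
        <h1>Clean, minimal markup</h1>
        <p>Typography lives in the stylesheet, not in FONT tags.</p>
      </div>
    </body>
    </html>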


Interview: Joe Stump on Building a Scalable Site

Paul: Ok, so joining me today is Joe Stump from Digg. Good to have you on the show Joe.

Joe: Oh, good to be here.

Paul: Have we had you on the show before?

Joe: Ah, not that I’m aware of.

Paul: Oh, wow, well we need to rectify that then. It’s good to have you on. Well, I have to say, this interview was arranged by Ryan, who is our producer. He’s a developer and I’m a designer, and he suggested we get you on the show (not that I wouldn’t want you on the show, obviously) to talk about scaling websites. Now, I’m going to be out of my depth very quickly here, so you’re going to have to be very gentle with me Joe.

Joe: Sure

Paul: So, in fact, it was so bad that as I sat down to write questions I thought, "I don’t know what I’m doing here", so I went and talked to some of the developers at Headscape, and I asked some of the Boagworld listeners, and so we’ve got a little selection of questions for you. Hopefully we can learn a little bit about how you go about doing things at Digg. Let’s start off: what’s your job title, and what is it that you do at Digg?

Joe: Ah, I have a really fancy job title that doesn’t mean a lot of anything, but ah, my official job title is "Lead Architect" and um, I think what best describes it, is that I manage the technical implementation on the code side.

Paul: OK

Joe: So, Digg’s broken up into a lot of different arenas on the tech side. We’ve got R&D, which is headed up by Anton Kast, we’ve got operations, which is headed up by Scott Baker, and then under that are the people that I work with most closely in implementing solutions for Digg. Ron Gorodetzky is our lead systems engineer, Tim Ellis, also known as Timeless, is our chief DB wonk, and Mike Newton is our lead network guy. So I think us four kind of steer the technical implementation along. The managers, ah, they manage, and handle the strategy and partners and stuff like that.

Paul: You managed to say the word manager with real disdain.

Joe: Oh, no actually, I have a great manager, John Quinn, he’s our VP of engineering, he’s by far the best direct manager I’ve probably ever worked with. Yeah, he’s really good.

Paul: OK, well let’s go back in time a little bit and start with this: when was the point when you realized you were going to have scaling issues with Digg? When did you start thinking about the whole subject of scaling?

Joe: Um, well Digg was pretty big when I came on board, so Digg was about 10 – 12 million uniques when I joined on.

Paul: Wow.

Joe: And I think we’d just cleared 35 million last month. So scaling was obviously an issue, but the big difference is that sites generally go through a few different levels of scaling. The first one is, "I’m just going to throw it on a virtual server, or an Amazon server", where you’re basically just seeing if things are going to stick to the wall, and then they do. So the first thing you normally do is start breaking services off onto separate boxes: I want to put my DB on one box, my web server on another box, and maybe memcached on each of them.

Then you hit read saturation on that one DB server, so you go to the next level of scaling, which is where Digg was when I started, where you start dangling a whole bunch of read slaves off of your DB master. For those who are not familiar with the master/slave terms, you send all your writes to one database server, that disseminates those writes to a whole bunch of slaves, and then you send all your read traffic to those slaves. So that’s where Digg was when I started: HTTP traffic spread across a whole bunch of web servers, read traffic spread across a whole bunch of slaves, and then one master taking the writes.

And we’re now going through what I think is the third phase, where you hit write saturation on your master, which is a bigger problem, because you then need to start sending write traffic to multiple masters. We’re going with a strategy that’s common at Facebook and Flickr and those kinds of websites, called horizontal partitioning, where you put some of your records on one server and the other records on another. So, for users, you can say all users whose names start with A through J go on this database server, and K through Z live on this other database server. We’re in the middle of implementing the first swipe at that, and we’ll be moving pretty aggressively to where everything is federated and partitioned across a whole bunch of servers.
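To make the A-to-J / K-to-Z idea a little more concrete, here is a heavily simplified sketch of what picking a partition by username might look like in PHP. This is not Digg’s code; the server names, credentials and schema are invented.

    <?php
    // Simplified horizontal partitioning sketch (not Digg's real implementation).
    // Users are split across two hypothetical masters by the first letter of
    // their username, as in the A-J / K-Z example above.

    function dsnForUser($username)
    {
        $first = strtolower(substr($username, 0, 1));

        return ($first >= 'a' && $first <= 'j')
            ? 'mysql:host=db-users-1.example.com;dbname=digg'
            : 'mysql:host=db-users-2.example.com;dbname=digg';
    }

    function getUser($username)
    {
        // In reality you would hold a pool of connections, one per shard.
        $db = new PDO(dsnForUser($username), 'digg', 'secret');

        $stmt = $db->prepare('SELECT * FROM users WHERE username = ?');
        $stmt->execute(array($username));

        return $stmt->fetch(PDO::FETCH_ASSOC);
    }

In practice the mapping usually lives in a lookup table or a consistent hash rather than a hard-coded range, so partitions can be rebalanced later, but the principle is the same: the application decides which database a record lives on before it ever runs a query.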

Paul: OK, one of the questions which kinda came up, which kinda relates to that, is, once you start spreading things across multiple servers, how do you handle things like user sessions, which have obviously got to be persistent.

Joe: Aha, so there are a couple of ways to handle that. There are two ways, probably, that you can do it easily. One of them is if you have what they call "session affinity" on your load balancer, so the load balancer will say, "Oh, well this person, last time I had them here, they went to server A, so we’ll send them back to that server." The session then only ever lives on one box. That’s one way to do it. We don’t do that here; we have a custom session handler in PHP which stores the session in memcache, and that allows you to…

Paul: Can you just clarify what memcache is, for idiots like me who don’t know?

Joe: Sure, memcache is a distributed caching system. Basically, what it allows you to do is expose a machine’s RAM over the network, and cache stuff in another machine’s RAM across the network.

Paul: Ah, OK

Joe: Yeah, it’s insanely fast. It was developed by Brad Fitzpatrick at Danga back in the day, and it’s heavily used by anyone who’s scaling with LAMP, and even a lot of people who aren’t. They all use memcached.
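For anyone who has never touched it, this is roughly what using memcached from PHP looks like with the pecl memcached extension. The cache-aside pattern below (check the cache, fall back to the database, then populate the cache) is a generic illustration, not Digg’s code, and the host names, keys and query are invented.

    <?php
    // Minimal cache-aside sketch using the pecl "memcached" extension.
    // Hosts, keys and the query are invented for illustration.

    $cache = new Memcached();
    $cache->addServers(array(
        array('cache1.example.com', 11211),
        array('cache2.example.com', 11211),
    ));

    $db = new PDO('mysql:host=db-slave.example.com;dbname=digg', 'digg', 'secret');

    function getStory($id, Memcached $cache, PDO $db)
    {
        $key   = 'story:' . $id;
        $story = $cache->get($key);

        if ($story === false) {
            // Cache miss: fall back to the database, then prime the cache.
            $stmt = $db->prepare('SELECT * FROM stories WHERE id = ?');
            $stmt->execute(array($id));
            $story = $stmt->fetch(PDO::FETCH_ASSOC);

            $cache->set($key, $story, 300); // keep it for five minutes
        }

        return $story;
    }

    $story = getStory(42, $cache, $db);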

Paul: Wow

Joe: So, yeah, we store all of our session data in memcached. PHP creates a unique session id and we just stuff the session data in under that key in memcached, and we can distribute that across, I don’t know, 50 or 60 memcached servers and whatnot.
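A custom handler along the lines Joe describes might look something like the sketch below. It is not Digg’s handler: the pecl memcached extension also ships its own session handler (session.save_handler = memcached in php.ini), which is the simpler route; this hand-rolled version (PHP 5.4+) just shows what is going on underneath. Host names and key prefixes are invented.

    <?php
    // Sketch of a memcached-backed session handler, not Digg's actual code.

    class MemcachedSessionHandler implements SessionHandlerInterface
    {
        private $cache;
        private $ttl;

        public function __construct(Memcached $cache, $ttl = 1440)
        {
            $this->cache = $cache;
            $this->ttl   = $ttl;
        }

        public function open($savePath, $sessionName) { return true; }
        public function close()                       { return true; }

        public function read($id)
        {
            $data = $this->cache->get('sess:' . $id);
            return $data === false ? '' : $data;
        }

        public function write($id, $data)
        {
            return $this->cache->set('sess:' . $id, $data, $this->ttl);
        }

        public function destroy($id)
        {
            $this->cache->delete('sess:' . $id);
            return true;
        }

        public function gc($maxlifetime)
        {
            return true; // memcached expires stale sessions on its own
        }
    }

    $cache = new Memcached();
    $cache->addServer('cache1.example.com', 11211);

    session_set_save_handler(new MemcachedSessionHandler($cache), true);
    session_start();

Because the session now lives in the cache tier rather than on any one web server, the load balancer can send each request to whichever box is least busy, which is exactly the property Joe is after.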

Paul: So how many servers do you guys have, it must be a staggering number by now.

Joe: Um, yeah, it’s kinda funny, every time I ask Ron that, he’s always like "Ah, I don’t know"

Paul: Laughs

Joe: Because we really can’t. I mean, I couldn’t give you a specific number, because on any given day we’ll pull or push into production a dozen servers. So, hundreds. There are definitely hundreds in production.

Paul: I mean, with that many servers, you’re obviously talking about taking servers on and offline and all that kind of thing. But what about making updates to the site? When Daniel comes up with some stupid idea for a new feature that he wants on the site, and you’ve gone through the process of making it work, you’ve then got to push it live.

Joe: Aha

Paul: How does that work? How do you go about pushing something like that live when there are so many servers involved.

Joe: So we have Ron Gorodetzky, our lead systems engineer. Us developers have a bunch of M4 make files, so when you check the code out you basically run make install and it, for lack of a better word, builds or compiles the website into a cohesive package, and then Ron pushes that to each server. I think he is still doing it via rsync, but I know we are migrating over to Puppet, so it may happen via Puppet soon. The production side of things is handled completely by operations, so I couldn’t tell you specifically how it happens, but generally we make a tag of the website and tell Ron, "We need you to push 9.4.15" or something like that, and he does a checkout, builds it, and pushes it to all of the different servers.

Paul: So do you actually have to take the site offline to do those updates? How do you minimize the downtime involved in that?

Joe: Oh, well there are a bunch of different ways. Um, we don’t normally bring the website down for pushes; it depends on the size and complexity of the push. Day to day pushes, and we probably push a minimum of once a day, just little bug fixes and stuff like that, generally happen in the middle of the day, and nobody notices. It’s no big deal. The outages these days are completely dependent on database activity, you know, database fixes and stuff like that. And the way we’re getting around that is by using a different method of partitioning called vertical partitioning. Like, for instance, I think our comment Diggs table, of who’s dugg a comment up or down, has something like 15 billion records in it.

Paul: Wow

Joe: Yeah, if you were to alter that table, you’d probably crash MySQL, and even if you didn’t it would probably take a week to alter it. So instead we create another table. We have, like, comments, and then comments_diggs, and then we create another one called something like comments_diggs_beta, and that has a couple of extra fields in it. And so we’ll say, OK, when we push the code, the first comment id that was added to the new table was 15,000,000,001, so then you start at 15,000,000,000 and work your way back through the old table. So we do some of that live as well. For the next push that we’re doing, we’re using a migration table, which will tell us how far along each record is and which records we’ve actually migrated, and stuff like that. And then we’re going to use this package called Gearman, which is a queuing system that we’ve had in production for a while, and we’re basically turning our servers into a giant botnet, so we’ll backfill all those partitions quickly.
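The backwards backfill Joe describes, where new writes go straight to the new table and a script migrates the historical rows in batches from the last old id downwards, might look something like this in outline. It is not Digg’s migration code; the table names, column names and batch size are invented.

    <?php
    // Outline of a backwards backfill. New code writes to comments_diggs_beta
    // from id 15,000,000,001 onwards; this script copies the historical rows
    // in batches, working back from 15,000,000,000. Names are invented.

    $db        = new PDO('mysql:host=db-master.example.com;dbname=digg', 'digg', 'secret');
    $batchSize = 10000;
    $cursor    = 15000000000; // last id written by the old code

    while ($cursor > 0) {
        $from = max(0, $cursor - $batchSize);

        // Copy one batch into the new table; its extra columns take their
        // default values. INSERT IGNORE makes the script safe to re-run.
        $stmt = $db->prepare(
            'INSERT IGNORE INTO comments_diggs_beta (id, comment_id, user_id, vote, created)
             SELECT id, comment_id, user_id, vote, created
             FROM comments_diggs
             WHERE id > ? AND id <= ?'
        );
        $stmt->execute(array($from, $cursor));

        $cursor = $from;
        usleep(100000); // back off briefly so the live site is not starved
    }

A migration table like the one Joe mentions would simply record the value of $cursor after every batch, so the job can be stopped and resumed, or handed out to a queue of workers.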

Paul: Wow, with that kind of amount of data it must create huge problems even adding a new piece of functionality to the site, to actually code it in a way that’s not going to have a momentous impact on the database. This must be something that’s constantly on your mind, I guess?

Joe: Yeah, I’ll tell you a really funny story that highlights that perfectly. We have these little green badges that are on the Digg button, and they indicate that a friend of yours has dugg that story, and when you hover it shows the last four friends to digg it or something. That’s a pretty gnarly query against a very big table, and we’ve actually had to, what I would call, dial it down a bit, so that it only shows up on stories that are less than 48 hours old, and it won’t show up if there are more than 500 diggs or something. So the feature is fairly usable, but it’s not like… well, before, if someone went to the Top in 365 Days page it was basically crashing our servers.

So we’ve been rewriting that, and the way we’re rewriting it is: if you digg something, we take that digg and we drop it into each of your followers’ buckets. We make a bucket for each story for each person, and any time one of their friends diggs it, we drop that digg into their bucket. But the problem is, Kevin Rose is followed by 40,000 people, so every time he diggs something, I need to drop 40,000 things into 40,000 different buckets. And we did the math, and just to get that feature up and running in a fast, sane manner, so that it will scale and we can bring it back at full capacity so everyone can use it all the time, we need 1.25 GB of storage and we need to be able to sustain 3,000 writes per second, just to keep that small feature online.
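The bucket idea Joe outlines is usually called fan-out on write. A heavily simplified sketch of it is below; it is not Digg’s implementation, just an illustration, and it borrows the Gearman queue mentioned earlier for the oversized fan-outs. All the names, thresholds and key formats are invented, and it assumes the pecl memcached and gearman extensions.

    <?php
    // Heavily simplified fan-out-on-write sketch for "friends who dugg this"
    // badges. Not Digg's code: it just drops a digg into each follower's
    // bucket, and defers very large fan-outs (a Kevin Rose-sized following)
    // to a background Gearman worker.

    function recordDigg($userId, $storyId, array $followerIds,
                        Memcached $cache, GearmanClient $queue)
    {
        if (count($followerIds) > 1000) {
            // Too many followers to fan out inside the web request.
            $queue->doBackground('fanout_digg', json_encode(array(
                'user_id'  => $userId,
                'story_id' => $storyId,
            )));
            return;
        }

        fanOut($userId, $storyId, $followerIds, $cache);
    }

    function fanOut($userId, $storyId, array $followerIds, Memcached $cache)
    {
        foreach ($followerIds as $followerId) {
            // One bucket per (follower, story) pair, listing friends who dugg it.
            $key    = sprintf('friend_diggs:%d:%d', $followerId, $storyId);
            $bucket = $cache->get($key);
            $bucket = is_array($bucket) ? $bucket : array();

            $bucket[] = $userId;
            $cache->set($key, $bucket, 86400); // keep for a day
        }
    }

Reading the badge is then a single cheap lookup in the viewer’s own bucket, which is the whole point: the expensive work is done once, at write time, instead of on every page view.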

Paul: So that really kind of illustrates that a relatively small feature that someone comes up with, has massive ramifications from your point of view.

Joe: Yeah, this is something that I always talk about. Even back when I was doing consulting, I’d have clients come to me and it would be, "Check out this awesome design", and I would be like, "That design’s awesome, but that little feature down there, that’s going to cost, you know, $100,000 in servers and 500 man hours." But designers think in sizes and shapes, so Daniel always jokes around and says, "Well, I can make it purple if that will make it easier for you", you know, it’s like…

Paul: Laughs

Joe: Laughs. Well, that doesn’t make it easier!

Paul: Well, we’re going to get you and Daniel back on the show to talk about this whole designer/developer relationship, so you can state your side of it now and get your side in first, before he defends himself.

Joe: Sounds like a plan.

Paul: So are you at the point now where you’ve got an architecture that’s kind of infinitely scalable, or are you going to have to go back to the drawing board if Digg just goes even more, you know off the scale than it already is?

Joe: Yeah, well we’re actually at the drawing board right now.

Paul: Yeah?

Joe: Yeah, Ron, myself, and some of the other senior peeps, about 8 or 10 months ago, started putting together… well, we knew that we were going to start to have serious limitations, especially since we were going to be scaling internationally. You know, there is a problem with latency, with you guys across the pond hitting the west coast, and things like that. So we want to be in multiple data centers; we want to be actively serving traffic from multiple data centers. So we need to horizontally partition, we need to make sure we can do that, and we need to live in two different data centers. We need to be able to survive one data center disappearing.

So we spent basically a week in front of the whiteboard and created this thing called IDDB, which is kind of an elastic storage engine built on top of MySQL and MemcacheDB, and we’re going to be putting the first bits and pieces of that into production, probably over the next month or so. And really, the partitioning itself isn’t the difficult thing to figure out. The difficult things are spanning multiple data centers, and also that we’ll have a couple of hundred gigabytes of data and I need to spray that across, you know, a couple of dozen different servers without bringing the site down. We have 3 or 4 million users, and I need to take all of their records, and all of their auxiliary records (there’s your user record, and there’s also a bunch of cruft related to it), and migrate all of that to different partitions. But I need to do that while the site’s still up and running, and I need to do it without breaking the site for you. So that’s the more complex problem at this point.
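There are several ways to do that kind of live migration. The sketch below shows one common, generic approach, not necessarily what IDDB does: move users in small batches, record in a directory table which partition each user now lives on, and have the application consult that directory before every query, so old and new locations both keep working during the move. Every name in it is invented.

    <?php
    // Generic live-migration sketch (not IDDB). A "directory" database maps
    // each user to the partition that now holds their data; users with no
    // entry still live on the legacy master.

    function dsnForUser($userId, PDO $directory)
    {
        $stmt = $directory->prepare(
            'SELECT partition_dsn FROM user_partitions WHERE user_id = ?'
        );
        $stmt->execute(array($userId));
        $dsn = $stmt->fetchColumn();

        return $dsn !== false ? $dsn : 'mysql:host=db-legacy.example.com;dbname=digg';
    }

    function copyRow(PDO $db, $table, array $row)
    {
        $cols  = implode(', ', array_keys($row));
        $marks = implode(', ', array_fill(0, count($row), '?'));

        $stmt = $db->prepare("INSERT IGNORE INTO $table ($cols) VALUES ($marks)");
        $stmt->execute(array_values($row));
    }

    function migrateUser($userId, PDO $legacy, PDO $target, PDO $directory, $targetDsn)
    {
        // Copy the user record and its auxiliary rows to the new partition.
        foreach (array('users', 'user_settings', 'friends') as $table) {
            $rows = $legacy->prepare("SELECT * FROM $table WHERE user_id = ?");
            $rows->execute(array($userId));

            foreach ($rows->fetchAll(PDO::FETCH_ASSOC) as $row) {
                copyRow($target, $table, $row);
            }
        }

        // Flip the pointer: from now on this user's reads and writes go to
        // the new partition. A real system also has to deal with writes that
        // arrive mid-copy, for example by locking the user or double-writing.
        $stmt = $directory->prepare(
            'REPLACE INTO user_partitions (user_id, partition_dsn) VALUES (?, ?)'
        );
        $stmt->execute(array($userId, $targetDsn));
    }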

Paul: I mean, you talk about spreading across multiple data centers so that if one of those data centers goes down the site doesn’t go down with it. How are you currently handling redundancy? How are you making sure the site stays up at the minute?

Joe: Right now, our only single point of failure is our actual data center, so if the data center falls off the planet then we’ve got problems. But we’ve got a general architecture where we have a couple of load balancers up front, and those two have what we call a "heartbeat" between them, so if one of them falls off, the other instantly takes over its traffic. That balances traffic across, I couldn’t even tell you, dozens and dozens of web servers, and of course the load balancer does health checks on those, so if any of them falls over it will drop it out of the pool. Behind that we have, I think, four masters. I guess our masters are technically single points of failure, but we have multiple masters, and we have dozens of read slaves hanging off of them. I think right now it takes about 10 minutes to bring a new master into production if one fails. And then, to store our files, we have a thing called MogileFS, which is a distributed WebDAV storage engine of sorts, and we can lose multiple nodes on that without any problem as well.

Paul: Yeah, so at the moment it’s a physical problem that you have, that if your data center gets hit by an earthquake or whatever, then you have problems. Please tell me it’s not in San Francisco?

Joe: It’s not in San Francisco.

Paul: Ha ha, yeah, you’re not actually going to say where it is are you?

Joe: No I can say we have one on the west coast, and we have one on the east coast.

Paul: Oh, well that’s at least something. Um, so far we’ve concentrated a lot on scaling the technology, but there’s another side to this as well, when you get something like Digg that has grown incredibly rapidly over a very short length of time, and that is scaling the team behind it. You know, I don’t know how many developers were working on Digg when you joined, but there will certainly be a heck of a lot more now. I’m quite interested in how you went about growing the team, and how you deal with that kind of rapid growth: making sure everyone knows what they’re working on, not overwriting each other’s work, and all the complexity that goes alongside that. How have you dealt with that?

Joe: Sure. I guess, to put things into context a little bit, Kurt Wilms and I were both hired on the same day and were respectively employees 18 and 19, and on the developer side I think there were 7 of us. Now we’re in the low 20s as far as developers go, and in the mid 80s as far as total employees, and that’s been in a year and nine months.

As far as scaling the teams goes, some of the building blocks were well in place by the time I got here, like the source repository and stuff like that. But I think the crucial things we’ve done since I came on board… we had some coding standards that were out there, but they weren’t really enforced, and we didn’t really have any frameworks in place. When Jay, our CEO, was asking where we find more senior developers, I told Jay that’s not the right question; the right question is how do we get the code and the technology into a position where we don’t have to hire really senior people.

So I think the keys to that are, first, we do have pretty strict coding standards and we enforce those rigorously, through a continuous integration environment and code reviews. Every piece of code that gets pushed to production has to be reviewed, and that’s literally 4 or 5 coders sitting in a room going line by line through change sets and stuff like that. That sounds really time consuming, but without question, on every code review I’ve sat in on we’ve found at least one show-stopper bug. So those have been crucial in getting things put together.

The other thing we did as we grew is break the team up into smaller teams, so we have a development team of 20 to 25 developers, but that’s broken up into teams of between 3 and 5. This really helps in a couple of areas. One, it enforces code ownership. Everybody has this problem: I code this, then I go and work on something completely different, and six months later I come back to this code and I’ve forgotten it. I don’t remember any of that. Whereas if you’re always working in the same area of the site, not only do you remember a lot more, you’re a lot more familiar with it, and you also feel more of a sense of ownership over it. You’re not just a hired gun who comes in, ploughs through a feature and moves on to something else. You have your own territory that you need to keep track of and keep really nice and whatnot.

And then we have a bunch of core frameworks that we’ve built. We have a small application framework, we have an AJAX framework, and now we have this data access layer that we’ve been building up called IDDB. I think those are pretty crucial too. It’s difficult to find coders who are intimately aware of memcached and the race conditions that exist in memcached. We have to be very selective about what fields we add indexes on in MySQL, and we have to be very selective about how we store things; normalization flies totally out of the window if you come from a DBA background, and there are a lot of concepts like that. They are not bad developers by any means; they do great AJAX work and great application-level PHP work. But if you don’t have frameworks in place so they don’t have to worry about the super-complex stuff, it’s going to be really difficult to hire, and really difficult to get a lot of stuff running in parallel.

Paul: Wow.

Joe: Yeah, and then another one of the things we’ve adopted is agile Scrum. So we’re doing sprints, and we’re running those in parallel across all the teams. Right now we have four major projects going on.

Paul: OK. So you talk about a sense of ownership there, and the developers are split into specific areas. Does that not get boring for a developer, having to work on the same bit of code for a long time, or do you rotate people?

Joe: Well, we don’t currently rotate people; the team structure has been in place for about four or five months now. And you know, most of the work they get is fairly immediate, and we move major projects around. Like Facebook Connect: the Tools and Integration team are doing Facebook Connect now, and after this they will maybe work on a new version of the API, and so on. It’s split out into wide swaths of the site, so we have Site Apps, which does all the different applications on the site; then we have Tools and Integration, which is the external projection of Digg; then we have Core Apps, which is things like search and R&D stuff like the recommendation engine and whatnot; and then we have Core Infrastructure.

Paul: So it’s probably enough to be interesting?

Joe: Yeah, we have pretty broad teams, and also, when we put people on those teams, even if someone has an amazing core infrastructure background… like, one of our UI guys used to be really heavy into core infrastructure stuff when he worked at Quest and managed massive warehouses, but he says, "That’s not what gets me up in the morning any more. JavaScript UI interfaces are." So we try to put people on the teams where their passions lie. And that’s another thing that people need to recognize: not all developers are driven by, or interested in, the same things. We have some of what I would call UI or front-end developers who could care less about PHP, and we have PHP guys who could care less about JavaScript. So I think recognizing strengths and weaknesses, and capitalizing on those, is pretty important too.

Paul: OK, one last question to finish off. The kind of things that you’ve been talking about are fascinating to hear, the challenges you have to face with the size of Digg and the amount of traffic you have to handle. But obviously a lot of people listening to this podcast aren’t at that stage; they are not working on massive projects like that. So I’m really interested in what advice you would have for those who are just beginning to suffer from scalability problems, or who feel that it’s coming and it’s something they need to be paying attention to, but not on the enormous scale that you have to deal with. What things can they do right now to prevent problems down the line?

Joe: Um, I think the easiest thing you can do is re-think the LAMP acronym, because that stack is actually no longer really that stack. I would take Linux and Apache out of it: the Linux part doesn’t matter, as long as you’re running on a Unix, and as far as Apache goes, Lighty (lighttpd) and nginx are much better at getting a lot more money out of your box, as far as scalability goes. Then there are two things that sound hard but are really easy. One of them is installing and using memcached, and the other is installing and using a queuing system of some sort. I recently went through this with a little side project called "Please Dress Me", which AJ, Gary Vaynerchuk and I did.

Paul: Oh, yes yes.

Joe: And we had a very small virtual server at Media Temple that survived pretty crushing blows from TechCrunch, Digg and Boing Boing with almost no load. Memcached is so second nature to me at this point that I was just like, "Oh, well I’m just going to cache everything in memcached", and it was literally just this RAM-spewing machine. It didn’t actually hit the database. Actually, my sysadmin at Media Temple was like, "Something’s really weird, MySQL is only doing like one query a second, and you’re doing like 500 requests per second from Boing Boing." So, yeah, I’m cached. Memcached takes literally 10 minutes to install, and probably another hour or two to implement.
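The other half of Joe’s advice, the queuing system, looks like this in its simplest form. The sketch below is a minimal Gearman worker that could process the kind of background job shown in the fan-out example earlier; it assumes the pecl gearman extension, and the host name and job name are invented.

    <?php
    // Minimal Gearman worker sketch, illustrating the "install a queuing
    // system" advice. It processes the hypothetical 'fanout_digg' jobs queued
    // by the web tier, so slow work never happens inside a page request.

    $worker = new GearmanWorker();
    $worker->addServer('queue1.example.com', 4730);

    $worker->addFunction('fanout_digg', function (GearmanJob $job) {
        $payload = json_decode($job->workload(), true);

        // The slow bit happens here: look up the followers and write their
        // buckets (see the fan-out sketch earlier in the transcript).
        error_log(sprintf(
            'Fanning out digg by user %d on story %d',
            $payload['user_id'],
            $payload['story_id']
        ));

        return 'done';
    });

    // Block forever, picking jobs off the queue as they arrive.
    while ($worker->work()) {
    }

Anything that does not have to happen before the page is returned, such as sending email, resizing images or fanning out writes, can be pushed onto a queue like this, which keeps page response times flat even when the work behind them is heavy.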

Paul: Wow, that sounds excellent, that’s really good advice. Joe, thank you so much for coming on the show, and I can’t wait to get you and Daniel fighting with one another in the not too distant future. I’m hoping to get a good violent argument about designers and developers, just because I can.

Joe: Laughs.

Paul: I was a little bit disappointed when you guys sat down at Future of Web Design; you were far too nice to one another, compared to the evening before when you’d had a bit to drink and you were talking in the restaurant. That’s the kind of conversation I want, that real vicious session.

Joe: OK, I’ll make sure that Daniel and I get liquored up before coming on then.

Paul: Yeah, that’s the answer. Ok, thank you so much Joe, that’s really good advice, and we’ll talk to you soon.

Joe: Thanks Paul, bye.

Thanks goes to Nathan O’Hanlon for transcribing this interview.


Listeners’ feedback

Top web designer applications

Often this section of the show consists of questions for Marcus and myself. However, for a change, I thought we should ask the questions. Via the forum, the Boagworld site and Twitter, I recently asked you to vote for your ‘favourite web designer application’. The response was overwhelming and you can see the complete list of suggestions on UserVoice. However, here are the top 5…

  1. Firebug – Firebug is a Firefox add-on that puts a wealth of development tools at your fingertips while you browse. You can edit, debug and monitor CSS, HTML and JavaScript live in any web page. A less popular suggestion was the Web Inspector in Safari.
  2. Web Developer toolbar – The Web Developer toolbar is a Firefox add-on that provides a variety of web development tools. You can disable CSS and JavaScript, visually highlight elements, manage cookies and much more. A less popular alternative was the IE Developer Toolbar.
  3. Adobe Photoshop – Professional image-editing and graphics-creation software from Adobe, providing a large library of effects, filters and layers. This is the grandfather of such applications and many (like myself) use it out of habit more than anything else. Less popular suggestions included Fireworks, Illustrator and Inkscape.
  4. Coda – Coda is a one-window development environment for the Mac. It includes FTP, a text editor, browser preview and even a terminal window. A beautifully designed app, it appeals to the more visual web designer. Simple, easy to use and elegant.
  5. TextMate – TextMate is a powerful text editor for the Mac with an extensive plug-in architecture. From its syntax highlighting for a near endless list of languages to its support for numerous version control systems, TextMate is probably the most powerful text editor out there.

If you disagree with the Boagworld listeners’ top five, or want to see the other entries, then head on over to UserVoice and vote for yourself.

Sending out bulk emails

My second listener contribution comes from the forum. It is a question from Richard about sending bulk email.

Richard writes: I need to send out bulk emails to approx 200k registered (opted in) users on a monthly basis.

Does anyone have any recommendations for an outsourced bulk email provider?

As with the previous contribution I want to focus on your responses rather than my own. This is what the Boagworld community had to say…

Jamie was the first of a number of people to recommend Campaign Monitor. Judging by the feedback from the forum, they offer an excellent service but are expensive compared to others.

As well as recommending Campaign Monitor, Nick mentions Silverpop, which he described as ‘an enterprise affair’. Apparently it is not as polished as Campaign Monitor but is considerably more powerful.

Phil recommended two more: MailChimp and Mad Mimi. He hasn’t used them personally, but the prices look good and he says the user interfaces appear polished.

Doug doesn’t recommend a specific service but does refer Richard to a post on Creative Tips which provides a comprehensive review of Campaign Monitor, MailChimp, AWeber, and Constant Contact.

If you have suggestions for Richard or would just like to share your experiences of using bulk email services then contribute to the thread in the boagworld forum.

