Why Ruby?

March 22, 2013

I've been a Microsoft developer for decades now. I weaned myself on various flavors of home computer Microsoft BASIC, and I got my first paid programming gigs in Microsoft FoxPro, Microsoft Access, and Microsoft Visual Basic. I have seen the future of programming, my friends, and it is terrible CRUD apps running on Wintel boxes!

Of course, we went on to build Stack Overflow in Microsoft .NET. That's a big reason it's still as fast as it is. So one of the most frequently asked questions after we announced Discourse was:

Why didn't you build Discourse in .NET, too?

Let me be clear about something: I love .NET. One of the greatest thrills of my professional career was getting the opportunity to place a Coding Horror sticker in the hand of Anders Hejlsberg. Pardon my inner fanboy for a moment, but oh man I still get chills. There are maybe fifty world class computer language designers on the planet. Anders is the only one of them who also built Turbo Pascal and Delphi. It is thanks to Anders' expert guidance that C# started out as such a remarkably well designed language – literally what Java should have been on every conceivable level – and has continued to evolve in remarkably practical ways over the last ten years, leveraging the strengths of other influential dynamically typed languages.

[image: Turbo Pascal]

All that said, it's true that I intentionally chose not to use .NET for my next project. So you might expect to find an angry, righteous screed here about how much happier I am leaving the oppressive shackles of my Microsoft masters behind. Free at last, free at last, thank God almighty I'm free at last!

Sorry. I already wrote that post five years ago.

Like any pragmatic programmer, I pick the appropriate tool for the job at hand. And as much as I may love .NET, it would be an extraordinarily poor choice for a 100% open source project like Discourse. Why? Three reasons, mainly:

  1. The licensing. My God, the licensing. It's not so much the money as the infernal, mind-bending, tax-code-level complexity involved in making sure all your software is properly licensed: determining what 'level' and 'edition' you are licensed at, who is licensed to use what, which servers are licensed … wait, what? Sorry, I passed out there for a minute when I was attacked by rabid licensing weasels.

    I'm not inclined to make grand pronouncements about the future of software, but if anything kills off commercial software, let me tell you, it won't be open source software. It needn't bother. Commercial software will gleefully strangle itself to death on its own licensing terms.

  2. The friction. If you want to build truly viable open source software, you need people to contribute to your project, so that it is a living, breathing, growing thing. And unless you can download all the software you need to hack on your project freely from all over the Internet, no strings attached, there's just … too much friction.

    If Stack Overflow taught me anything, it is that we now live in a world where the next brilliant software engineer can come from anywhere on the planet. I'm talking places this ugly American programmer has never heard of, where they speak crazy nonsense moon languages I can't understand. But get this. Stand back while I blow your mind, people: these brilliant programmers still code in the same keywords we do! I know, crazy, right?

    Getting up and running with a Microsoft stack is just plain too hard for a developer in, say, Argentina, or Nepal, or Bulgaria. Open source operating systems, languages, and tool chains are the great equalizer, the basis for the next great generation of programmers all over the world who are going to help us change the world.

  3. The ecosystem. When I was at Stack Exchange we strove mightily to make as much of our infrastructure open source as we could. It was something that we made explicit in the compensation guidelines, this idea that we would all be (partially) judged by how much we could do in public, and try to leave behind as many useful, public artifacts of our work as we could. Because wasn't all of Stack Exchange itself, from the very first day, built on your Creative Commons contributions that we all share ownership of?

    You can certainly build open source software in .NET. And many do. But it never feels natural. It never feels right. Nobody accepts your patch to a core .NET class library no matter how hard you try. It always feels like you're swimming upstream, in a world of small and large businesses using .NET that really aren't interested in sharing their code with the world – probably because they know it would suck if they did, anyway. It is just not a native part of the Microsoft .NET culture to make things open source, especially not the things that suck. If you are afraid the things you share will suck, that fear will render you incapable of truly and deeply giving back. The most, uh, delightful… bit of open source communities is how they aren't afraid to let it "all hang out", so to speak.

    So as a result, for any given task in .NET you might have – if you're lucky – a choice of maybe two decent-ish libraries. Whereas in any popular open source language, you'll easily have a dozen choices for the same task. Yeah, maybe six of them will be broken, obsolete, useless, or downright crazy. But hey, even factoring in some natural open source spoilage, you're still ahead by a factor of three! A winner is you!

As I wrote five years ago:

I'm a pragmatist. For now, I choose to live in the Microsoft universe. But that doesn't mean I'm ignorant of how the other half lives. There's always more than one way to do it, and just because I chose one particular way doesn't make it the right way – or even a particularly good way. Choosing to be provincial and insular is a sure-fire path to ignorance. Learn how the other half lives. Get to know some developers who don't live in the exact same world you do. Find out what tools they're using, and why. If, after getting your feet wet on both sides of the fence, you decide the other half is living better and you want to join them, then I bid you a fond farewell.

I don't live in the Microsoft universe any more. Right, wrong, good, evil, that's just how it turned out for the project we wanted to build.

[image: "I'm OK with this"]

However, I'd also be lying if I didn't mention that I truly believe the sort of project we are building in Discourse represents how most software will be built in the future. If you squint a little, I think you can see a future, not too far in the distance, where .NET is a specialized niche outside the mainstream.

But why Ruby? Well, the short and not very glamorous answer is that I had narrowed it down to either Python or Ruby, and my original co-founder Robin Ward has been building major Rails apps since 2006. So that clinched it.

I've always been a little intrigued by Ruby, mostly because of the absolutely gushing praise Steve Yegge had for the language way back in 2006. I've never forgotten this.

For the most part, Ruby took Perl's string processing and Unix integration as-is, meaning the syntax is identical, and so right there, before anything else happens, you already have the Best of Perl. And that's a great start, especially if you don't take the Rest of Perl.

But then Matz took the best of list processing from Lisp, and the best of OO from Smalltalk and other languages, and the best of iterators from CLU, and pretty much the best of everything from everyone.

And he somehow made it all work together so well that you don't even notice that it has all that stuff. I learned Ruby faster than any other language, out of maybe 30 or 40 total; it took me about 3 days before I was more comfortable using Ruby than I was in Perl, after eight years of Perl hacking. It's so consistent that you start being able to guess how things will work, and you're right most of the time. It's beautiful. And fun. And practical.
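
The consistency he's describing is easy to demonstrate: the CLU-style iterators show up in Ruby as blocks, and the same block shape works across the entire standard library. A few illustrative lines (mine, not Yegge's):

    # Blocks are the universal idiom: collections, ranges, strings, and
    # even file I/O all take the same { |item| ... } shape.
    [3, 1, 2].sort.each { |n| puts n }                  # prints 1, 2, 3
    squares = (1..5).map { |n| n * n }                  #=> [1, 4, 9, 16, 25]
    "hello world".scan(/\w+/) { |w| puts w.capitalize } # Hello, World
    File.foreach(__FILE__) { |line| print line }        # echoes this script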

Steve is one of those polyglot programmers I respect so much that I basically just take whatever his opinion is, provided it's not about something wacky like gun control or feminism or T'Pau, and accept it as fact.

I apologize, Steve. I'm sorry it took me 7 years to get around to Ruby. But maybe I was better off waiting a while anyway:

  • Ruby is a decent performer, but you really need to throw fast hardware at it for good performance. Yeah, I know, interpreted languages are what they are, and caching, database, network, blah blah blah. Still, we obtained the absolute fastest CPUs you could buy for the Discourse servers, 4.0 GHz Ivy Bridge Xeons, and performance is just … good on today's fastest hardware. Not great. Good.

    Yes, I'll admit that I am utterly spoiled by the JIT compiled performance of .NET. That's what I am used to. I do sometimes pine away for the good old days of .NET when we could build pages that serve in well under 50 milliseconds without thinking about it too hard. Interpreted languages aren't going to reach those performance levels. But I can only imagine how rough Ruby performance had to be back in the dark ages of 2006, when CPUs and servers were five times slower than they are today! I'm so very glad that I am hitting Ruby now, with the strong wind of many solid years of Moore's law at our backs.

  • Ruby matured nicely with the 2.0 language release, which shipped less than a month after Discourse was announced. So, yes, the downside is that Ruby is slow. But the upside is that there is a lot of low hanging performance fruit in Ruby-land. Like … a lot a lot. On Discourse we got an across-the-board 20% performance improvement just by upgrading to Ruby 2.0, and we nearly doubled our performance by raising the default Ruby garbage collection limit (see the sketch after this list). From a future performance perspective, Ruby is nothing but upside.

  • Ruby isn't cool any more. Yeah, you heard me. It's not cool to write Ruby code any more. All the cool people moved on to slinging Scala and Node.js years ago. Our project isn't cool, it's just a bunch of boring old Ruby code. Personally, I'm thrilled that Ruby is now mature enough that the community no longer needs to bother with the pretense of being the coolest kid on the block. That means the rest of us who just like to Get Shit Done can roll up our sleeves and focus on the mission of building stuff with our peers rather than frantically running around trying to suss out the next shiny thing.
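
(A note on that garbage collection tweak: out of the box, Ruby triggers a collection after a fairly small amount of allocation, which hurts allocation-heavy web apps. Raising the malloc limit through an environment variable makes the collector run less often. Here is a minimal sketch for observing the effect yourself; the 90 MB value is illustrative, not an official Discourse setting:

    # gc_check.rb – count GC runs during an allocation-heavy loop.
    # Compare the two invocations:
    #   ruby gc_check.rb
    #   RUBY_GC_MALLOC_LIMIT=90000000 ruby gc_check.rb   # illustrative value
    before = GC.stat[:count]            # total collections so far
    200_000.times { "throwaway" * 50 }  # churn through short-lived strings
    puts "GC runs during loop: #{GC.stat[:count] - before}"

With the higher limit, the second run should report noticeably fewer collections.)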

And of course the Ruby community is, and always has been, amazing. We never want for great open source gems and great open source contributors. Now is a fantastic time to get into Ruby, in my opinion, whatever your background is.

(However, it's also worth mentioning that Discourse is, if anything, even more of a JavaScript project than a Ruby on Rails project. Don't believe me? Just go to try.discourse.org and view source. A Discourse forum is not so much a website as it is a full-blown JavaScript application that happens to run in your browser.)

Even when done willingly and in the best interests of the project, it's still a little scary to totally change your programming stripes overnight after two decades. I've always believed that great programmers learn to love more than one language and programming environment – and I hope the Discourse project is an opportunity for everyone to learn and grow, not just me. So go fork us on GitHub already!


Civilized Discourse Construction Kit

February 5, 2013

Occasionally, startups will ask me for advice. That's a shame, because I am a terrible person to ask for advice. The conversation usually goes something like this:

We'd love to get your expert advice on our thing.

I probably don't use your thing. Even if I tried your thing out and I gave you my so-called Expert advice, how would it matter? Anyway, why are you asking me? Why don't you ask your community what they think of your thing?

And if you don't have a community of users and customers around your thing, well, there's your problem right there. Go fix that.

Like I said, I don't get asked for advice too often. But for what it's worth, it is serious advice. And the next question they ask always strikes fear into my heart.

You're so right! We need a place for online community around our thing. What software should we use?

This is the part where I start playing sad trombone in my head. Because all your software options for online community are, quite frankly, terrible. Stack Exchange? We only do strict, focused Q&A there and you'd have to marshal your proposal through Area 51. Get Satisfaction, UserVoice, Desk, etcetera? Sorry, customer support isn't the same as community. Mailing lists? Just awful.

Forum software? Maybe. Let's see, it's 2013, has forum software advanced at all in the last ten years?

[images: Straight Dope forums in 2000 and in 2012]

I'm thinking no.

Forums are the dark matter of the web, the B-movies of the Internet. But they matter. To this day I regularly get excellent search results on forum pages for stuff I'm interested in. Rarely a day goes by that I don't end up on some forum, somewhere, looking for some obscure bit of information. And more often than not, I find it there.

There's an amazing depth of information on forums.

  • A 12-year-old girl who finds a forum community of rabid enthusiasts willing to help her rebuild a Fiero from scratch? Check.
  • The most obsessive breakdown of Lego collectible minifig kits you'll find anywhere on the Internet? Check.
  • Some of the most practical information on stunt kiting in the world? Check.
  • The only place I could find with scarily powerful squirt gun instructions and advice? Check.
  • The underlying research for a New Yorker article outing a potential serial marathon cheater? Check.

I could go on and on. As much as existing forum software is inexplicably and terrifyingly awful after all these years, it is still the ongoing basis for a huge chunk of deeply interesting information on the Internet. These communities are incredibly passionate about incredibly obscure things. They aren't afraid to let their freak flag fly, and the world is a better place for it.

At Stack Exchange, one of the tricky things we learned about Q&A is that if your goal is to have an excellent signal to noise ratio, you must suppress discussion. Stack Exchange only supports the absolute minimum amount of discussion necessary to produce great questions and great answers. That's why answers get constantly re-ordered by votes, that's why comments have limited formatting and length and only a few display by default, and so forth. Almost every design decision we made was informed by our desire to push discussion down, to inhibit it in every way we could. Spare us the long-winded diatribe, just answer the damn question already.

After spending four solid years thinking of discussion as the established corrupt empire, and Stack Exchange as the scrappy rebel alliance, I began to wonder – what would it feel like to change sides? What if I became a champion of random, arbitrary discussion, of the very kind that I'd spent four years designing against and constantly lecturing users on the evils of?

I already built an X-Wing; could I build a better Tie Fighter?

[image: TIE fighter]

If you're wondering what all those sly references to Tie Fighters were about in my previous blog posts and tweets, now you know. All hail the Emperor, and by the way, what's your favorite programming food?

Today we announce the launch of Discourse, a next-generation, 100% open source discussion platform built for the next decade of the Internet.

[image: Discourse logo]

The goal of the company we formed, Civilized Discourse Construction Kit, Inc., is exactly that – to raise the standard of civilized discourse on the Internet by seeding it with better discussion software:

  • 100% open source and free to the world, now and forever.
  • Feels great to use. It's fun.
  • Designed for high-resolution tablets and advanced web browsers.
  • Built-in moderation and governance systems that let discussion communities protect themselves from trolls, spammers, and bad actors – even without official moderators.

Our amazingly talented team has been working on Discourse for almost a year now, and although like any open source software it's never entirely done, we believe it is already a generation ahead of any other forum software we've used.

I greatly admire what WordPress did for the web; to say that we want to be the WordPress of forums is not a stretch at all. We're also serious about this eventually being a viable open-source business, in the mold of WordPress. And we're not the only people who believe in the mission: I'm proud to announce that we have initial venture capital funding from First Round, Greylock, and SV Angel. We're embarking on a five year mission to improve the fabric of the Internet, and we're just getting started. Let a million discussions bloom!

So now, when someone says to me …

You're so right! We need a place for community around our thing. What software should we use?

I can reply without hesitation.

And hopefully, so can you.


The End of Ragequitting

January 21, 2013

When Joel Spolsky, my business partner on Stack Overflow and Stack Exchange, asked me what I wanted to do after I left Stack Exchange, I distinctly remember mentioning Aaron Swartz. That's what Aaron was to us hackers: an exemplar of the noble, selfless behavior and positive action that all hackers aspire to – but very few actually achieve.

And now, tragically, Aaron is gone at the tender age of 26. He won't be achieving anything any more.

I never knew Aaron, but I knew Aaron.

[image: Aaron Swartz at Stack Overflow]

Most of all, I am disappointed.

I'm deeply disappointed in myself, for not understanding just how bitterly unfair the government charges were against Aaron. Perhaps the full, grotesque details couldn't be revealed for a pending legal case. But we should have been outraged. I am gutted that I did not contribute to his defense in any way, either financially or by writing about it here. I blindly assumed he would prevail, as powerful activists on the side of fairness, openness, and freedom are fortunate enough to often do in our country. I was wrong.

I'm disappointed in our government, for going to such lengths to make an example of someone who was so obviously a positive force. Someone who actively worked to change the world for the better in everything he did, starting from the age of 12. There was no evil in this man. And yet the absurd government case against him was cited by his family as directly contributing to his death.

I'm frustrated by the idea that martyrdom works. The death of Aaron Swartz is now turning into an effective tool for change, a rallying cry, proving the perverse lesson that nobody takes an issue seriously until a great person dies for the cause. The idea that killing himself was a more viable strategy than going on to prevail in this matter, and so many more in his lifetime, makes me incredibly angry.

But also, I must admit that I am a little disappointed in Aaron. I understand that depression is a serious disease that can fell any person, however strong. But he chose the path of the activist long ago. And the path of the activist is to fight, for as long and as hard as it takes, to effect change. Aaron had powerful friends, a powerful support network, and a keen sense of moral cause that put him in the right. That's how he got that support network of powerful friends and fellow activists in the first place.

It is appropriate to write about Aaron on Martin Luther King day, because he too was a tireless activist for moral causes.

I hope you are able to see the distinction I am trying to point out. In no sense do I advocate evading or defying the law, as would the rabid segregationist. That would lead to anarchy. One who breaks an unjust law must do so openly, lovingly, and with a willingness to accept the penalty. I submit that an individual who breaks a law that conscience tells him is unjust, and who willingly accepts the penalty of imprisonment in order to arouse the conscience of the community over its injustice, is in reality expressing the highest respect for law.

Let's be clear that the penalty in Aaron's case was grossly unfair, bordering on corrupt. I've been a part of exactly one trial, but I can't even imagine having the full resources of the US Government brought to bear against me, with extreme prejudice, for a year or more. His defense was estimated to cost millions. The idea that such an engaged citizen would be forever branded a felon – serving at least some jail time and stripped of the most fundamental citizenship right, the ability to vote – must have weighed heavily on Aaron. And Aaron was no stranger to depression, having written about it on his blog many times, even penning a public will of sorts all the way back in 2002.

I think about ragequitting a lot.

Rage Quit, also seen as RageQuit in one word, is Internet slang commonly used to describe the act of suddenly quitting a game or chatroom after either an argument, extreme frustration, or loss of the game.

At least one user ragequits Stack Exchange every six months, because our rules are strict. Some people don't like rules, and can respond poorly when confronted by the rules of the game they choose to play. It came up often enough that we had to create even more rules to deal with it. I was forced to think about ragequitting.

I was very angry with Mark Pilgrim and _why for ragequitting the Internet, because they also took all their content offline – they got so frustrated that they took their ball and went home, so nobody else could play. How incredibly rude. Ragequitting is childish, a sign of immaturity. But it is another thing entirely to play the final move and take your own life. To declare the end of this game and all future games, the end of ragequitting itself.

I say this not as a person who wishes to judge Aaron Swartz. I say it as a fellow gamer who has also considered playing the same move quite recently. To the point that I – like Aaron himself, I am sure – was actively researching it. But the more I researched, the more I thought about it, the more it felt like what it really was: giving up. And the toll on friends and family would be unimaginably, unbearably heavy.

What happened to Aaron was not fair. Not even a little. But this is the path of the activist. The greater the injustice, the greater the wrong undone when you ultimately prevail. And I am convinced, absolutely and utterly convinced, that Aaron would have prevailed. He would have gone on to do so many other great things. It is our great failing that we did not provide Aaron the support network he needed to see this. All we can do now is continue the mission he started and lobby for change to our corrupt government practices of forcing plea bargains.

It gets dark sometimes. I know it does. I'm right there with you. But do not, under any circumstances, give anyone the satisfaction of seeing you ragequit. They don't deserve it. Play other, better moves – and consider your long game.


Web Discussions: Flat by Design

December 13, 2012

It's been six years since I wrote Discussions: Flat or Threaded? and, despite a bunch of evolution on the web since then, my opinion on this has not fundamentally changed.

If anything, my opinion has strengthened based on the observed data: precious few threaded discussion models survive on the web. Putting aside Usenet as a relic and artifact of the past, it is rare to find threaded discussions of any kind on the web today; for web discussion communities that are more than ten years old, the vast majority are flat as a pancake.

I'm game for trying anything new, I mean, I even tried Google Wave. But the more I've used threaded discussions of any variety, the less I like them. I find precious few redeeming qualities, while threading tends to break crucial parts of discussion like reading and replying in deep, fundamental, unfixable ways. I have yet to discover a threaded discussion design that I can tolerate long term.

A part of me says this is software Darwinism in action: threaded discussion is ultimately too complex to survive on the public Internet.

[image: Hacker News threading]

Before threaded discussion fans bring out their pitchforks and torches, I fully acknowledge that aspects of threading can be useful in certain specific situations. I will get to that. I know I'm probably wasting my time even attempting to say this, but please: keep reading before commenting. Ideally, read the whole article before commenting. Like Parappa, I gotta believe!

Before I defend threaded discussion, let's enumerate the many problems it brings to the table:

  1. It's a tree.

    Poems about trees are indeed lovely, as Joyce Kilmer promised us, but data of any kind represented as a tree … isn't. Rigid hierarchy is generally not how the human mind works, and the strict parent-child relationship it enforces is particularly terrible for fluid human group discussion. Browsing a tree is complicated, because you have to constantly think about what level you're at, what's expanded, what's collapsed … there's always this looming existential crisis of where the heck am I? Discussion trees force me to spend more time mentally managing the two-dimensional tree than following the underlying discussion.

  2. Where did that reply go?

    In a threaded discussion, replies can arrive any place in the tree at any time. How do you know if there are new replies? Where do you find them? Only if you happen to be browsing the tree at the right place at the right time. It's annoying to follow discussions over time when new posts keep popping up anywhere in the middle of the big reply tree. And God help you if you accidentally reply at the wrong level of the tree; then you're suddenly talking to the wrong person, or maybe nobody at all. It absolutely kills me that there might be amazing, insightful responses buried somewhere in the middle of a reply chain that I will never be able to find.

  3. It pushes discussion off your screen.

    So the first reply is indented under the post. Fair enough; how else would you know that one post is a reply to another post? But this indentation game doesn't ever end. Reply long and hard enough and you've either made the content column impossibly narrow, or you've pushed the content to exit, stage right. That's how endless pedantic responses-to-responses ruin the discussion for everyone. When we play the "indent everything to the right" game, everyone loses. It is natural to scroll down on the web, but it is utterly unnatural to scroll right. Indentation takes the discussion in the wrong direction.

  4. You're talking to everyone.

    You think because you clicked "reply" and your post is indented under the person you're replying to, that your post is talking only to that person? That's so romantic. Maybe the two of you should get a room. A special, private room at the far, far, far, far, far right of that threaded discussion. This illusion that you are talking to one other person ends up harming the discussion for everyone by polluting the tree with these massive narrow branches that are constantly in the way.

    At an absolute minimum you're addressing everyone else in that discussion, but in reality, you're talking to anyone who will listen, for all time. Composing your reply as if it is a reply to just one person is a quaint artifact of a world that doesn't exist any more. Every public post you make on the Internet, reply or not, is actually talking to everyone who will ever read it. It'd be helpful if the systems we used for discussion made that clear, rather than maintaining this harmful pretense of private conversations in a public space.

  5. I just want to scroll down.

    Reddit and, to a lesser extent, Hacker News are probably the best known examples of threaded comments applied to a large audience. While I find Reddit so much more tolerable than the bad old days of Digg, I can still barely force myself to wade through the discussions there, because it's so much darn work. As a lazy reader, I feel I've already done my part by deciding to enter the thread; after that, all I should need to do is scroll or swipe down.

    Take what's at the top of Reddit right now. It's a cool picture; who wouldn't want to meet Steve Martin and Morgan Freeman? But what's the context? Who is this kid? How did he get so lucky? To find out, I need to collapse and suppress dozens of random meaningless tangents, and the replies-to-tangents, by clicking the little minus symbol next to each one. So that's what I'm doing: reading a little, deciding that tangent is not useful or interesting, and clicking it to get rid of it. Then I arrive at the end and find out that information wasn't even in the topic, or at least I couldn't find it. I'm OK with scrolling down to find information and/or entertainment, to a point. What I object to is the menial labor of collapsing and expanding threaded portions of the topic as I read. Despite what the people posting them might think, those tangents aren't so terribly important that they're worth making me, and every other reader, act on them.

Full bore, no-holds-barred threading is an unmitigated usability disaster for discussion, everywhere I've encountered it. But what if we didn't commit to this idea of threaded discussion quite so wholeheartedly?

The most important guidance for non-destructive use of threading is to put a hard cap on the level of replies that you allow. Although Stack Exchange is not a discussion system – it's actually the opposite of a discussion system, which we have to explain to people all the time – we did allow, in essence, one level of threading. There are questions and answers, yes, but underneath each of those, in smaller type, are the comments.

[image: Stack Exchange threading]

Now there's a bunch of hard-core discussion sociology here that I don't want to get into, like different rules for comments, special limitations for comments, only showing the top n comments by default, and so forth. What matters is that we allow one level of replies and that's it. Want to reply to a comment? You can, but it'll be at the same level. You can go no deeper. This is by design, but remember: Stack Exchange is not a discussion system. It's a question and answer system. If you build your Q&A system like a discussion system, it will devolve into Yahoo Answers, or even worse, Quora. Just kidding, Quora. You're great.
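
In data model terms, the cap is tiny. Here's a minimal sketch – with hypothetical names, not Stack Exchange's actual schema – of how replies past the cap flatten to the same level instead of nesting deeper:

    MAX_DEPTH = 1

    Comment = Struct.new(:id, :parent, :body) do
      def depth
        parent ? parent.depth + 1 : 0
      end
    end

    def reply_to(target, body)
      # A reply to something already at the cap attaches to the target's
      # parent, so it lands at the same level rather than nesting deeper.
      parent = target.depth >= MAX_DEPTH ? target.parent : target
      Comment.new(rand(10_000), parent, body)
    end

    question = Comment.new(1, nil, "How do I frobnicate?")
    comment  = reply_to(question, "What have you tried?") # depth 1
    reply    = reply_to(comment, "The docs cover this.")  # still depth 1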

Would Hacker News be a better place for discussion if they capped reply level? Would Reddit? From my perspective as a poor, harried reader and very occasional participant, absolutely. There are many chronic problems with threaded discussion, but capping reply depth is the easiest way to take a giant step in the right direction.

Another idea is to let posts bring their context with them. This is one of the things that Twitter, the company that always does everything wrong and succeeds anyway, gets … shockingly right out of the gate. When I view one of my tweets, it can stand alone, as it should. But it can also bring some context along with it on demand:

[image: Twitter threading]

Here you can see how my tweet can be expanded with a direct link or click to show the necessary context for the conversation. But it'll only show three levels: the post, my reply to the post, and replies to my post. This idea that tweets – and thus, conversations – should be mostly standalone is not well understood, but it illustrates how Twitter got the original concept so fundamentally right. I guess that's why they can get away with the terrible execution.

I believe selective and judicious use of threading is the only way it can work for discussion. You should be wary of threading as a general purpose solution for human discussions. Always favor simple, flat discussions instead.


The Organism Will Do Whatever It Damn Well Pleases

December 1, 2012

In the go-go world of software development, we're so consumed with learning new things, so fascinated with the procession of shiny new objects that I think we sometimes lose sight of our history. I don't mean the big era-defining successes. Everyone knows those stories. I'm talking about the things we've tried before that … didn't quite work out. The failures. The also-rans. The noble experiments. The crazy plans.

I'm all for reinventing the wheel, because it's one of the best ways to learn. But you shouldn't even think about reinventing a damn thing until you've exhaustively researched every single last wheel, old or new, working or broken, that you can lay your hands on. Do your homework.

That's why I love unearthing stories like The Lessons of Lucasfilm's Habitat. It's basically World of Warcraft … in 1985.

Habitat is "a multi-participant online virtual environment," a cyberspace.

[image: Habitat]

Each participant ("player") uses a home computer (Commodore 64) as an intelligent, interactive client, communicating via modem and telephone over a commercial packet-switching network to a centralized, mainframe host system. The client software provides the user interface, generating a real-time animated display of what is going on and translating input from the player into messages to the host. The host maintains the system's world model enforcing the rules and keeping each player's client informed about the constantly changing state of the universe.

This was the dark ages of home computing. In 1985, that 64k of memory in a Commodore 64 was a lot. The entirety of Turbo Pascal 3.02 for DOS, released a year later in 1986, was barely 40k.

The very concept of a multiplayer virtual world of any kind – something we take for granted today, since every modern website is essentially a multiplayer game now – was incredibly exotic. Look at the painstaking explanation Lucasfilm had to produce to even get folks to understand what the heck Habitat was, and how it worked.

The technical information in The Lessons of Lucasfilm's Habitat is incredibly dated, as you'd expect, and barely useful even as trivia. But the sociological lessons of Habitat cut to the bone. They're as fresh today as they were in 1985. Computers have radically changed in the intervening 27 years, whereas people's behavior hasn't. At all. This particular passage hit home:

Again and again we found that activities based on often unconscious assumptions about player behavior had completely unexpected outcomes (when they were not simply outright failures). It was clear that we were not in control. The more people we involved in something, the less in control we were. We could influence things, we could set up interesting situations, we could provide opportunities for things to happen, but we could not predict nor dictate the outcome. Social engineering is, at best, an inexact science, even in proto-cyberspaces. Or, as some wag once said, "in the most carefully constructed experiment under the most carefully controlled conditions, the organism will do whatever it damn well pleases."

Even more specifically:

Propelled by these experiences, we shifted into a style of operations in which we let the players themselves drive the direction of the design. This proved far more effective. Instead of trying to push the community in the direction we thought it should go, an exercise rather like herding mice, we tried to observe what people were doing and aid them in it. We became facilitators as much as designers and implementors. This often meant adding new features and new regions to the system at a frantic pace, but almost all of what we added was used and appreciated, since it was well matched to people's needs and desires. As the experts on how the system worked, we could often suggest new activities for people to try or ways of doing things that people might not have thought of. In this way we were able to have considerable influence on the system's development in spite of the fact that we didn't really hold the steering wheel -- more influence, in fact, than we had had when we were operating under the delusion that we controlled everything.

That's exactly what I was trying to say in Listen to Your Community, But Don't Let Them Tell You What to Do. Unfortunately, because I hadn't read this essay until a few months ago, I figured this important lesson out 25 years later than Randy Farmer and Chip Morningstar. So many Stack Overflow features were the direct result of observing what the community was doing, then attempting to aid them in it:

  • We noticed early in the Stack Overflow beta that users desperately wanted to reply to each other, and were cluttering up the system with "answers" that were, well, not answers to the question. Rather than chastise them for doing it wrong – stupid users! – we added the commenting system to give them a method of annotating answers and questions for clarifications, updates, and improvements.

  • I didn't think it was necessary to have a place to discuss Stack Overflow. And I was … kind of a jerk about it. The community was on the verge of creating a phpBB forum instance to discuss Stack Overflow. Faced with a nuclear ultimatum, I relented, and you know what? They were right. And I was wrong.

  • The community came up with an interesting convention for handling duplicate questions, by manually editing a blockquote into the top of the question with a link to the authoritative question that it was a duplicate of. This little user editing convention eventually became the template for the official implementation.

I could go on and on, but I won't bore you. I'd say that for every three features we introduced on Stack Overflow, at least two of them came more or less directly from observing the community, then trying to run alongside them, building tools that helped them do what they wanted to do with less fuss and effort. That was my job for the last four years. And I loved it, until I had to stop loving it.

Randy Farmer, one of the primary designers of Habitat at Lucasfilm, went on to work on a bunch of things that you may recognize: with Douglas Crockford on JSON, The Sims Online, Second Life, Yahoo 360°, Yahoo Answers, Answers.com, and so forth. He eventually condensed some of his experience into a book, Building Web Reputation Systems, which I bought in April 2011 as a Kindle edition. I didn't know who Mr. Farmer was at the time. I just saw a new O'Reilly book on an area of interest, and I thought I'd check it out.

[image: Building Web Reputation Systems cover]

As the co-founder of Stack Overflow, I know a thing or two about web reputation systems! Out of curiosity, I looked up the author on my own site. And I found him, with a tiny reputation. So I sent this friendly jibe on Twitter:

pff, look at @frandallfarmer's tiny rep! look at it!

But the last laugh was Randy's, as it should be, because I didn't realize he had over 6,000 reputation on rpg.stackexchange.com. Turns out, Randy Farmer was already an avid Stack Exchange user. And, as you might guess given his background, a rather expert Stack Exchange user at that. The Stack Exchange ruleset is complex, strict, and requires discipline to understand. Kind of like … maybe a certain role-playing game, if you will.

[image: Advanced Dungeons & Dragons]

Randy is the sort of dad who had his first edition Dungeons & Dragons books bound into a single leather tome and handed it down to his son as a family heirloom. How awesome is that?

If we've learned anything in the last 25 years since Habitat, it is that people are the source of, and solution to, all the problems you'll run into when building social software. Are you looking to (dungeon) master the art of guiding and nudging your online community through their collective adventure, without violating the continuity of your own little universe? If so, you could do a whole heck of a lot worse than reading Building Web Reputation Systems and following @FRandallFarmer on Twitter.


For a Bit of Colored Ribbon

November 26, 2012

For the last year or so, I've been getting these two page energy assessment reports in the mail from Pacific Gas & Electric, our California utility company, comparing our household's energy use to those of the houses around us.

Here are the relevant excerpts from the latest report; click through for a full-page view of each page.

[image: PG&E report, page 1]

[image: PG&E report, page 2]

These poor results are particularly galling because I go far out of my way to Energy Star all the things, I use LED light bulbs just about everywhere, we set our thermostat appropriately, and we're still getting crushed. I have no particular reason to care about this stupid energy assessment report showing our household using 33% more energy than similar homes in our neighborhood. And yet… I must win this contest. I can't let it go.

  • Installed a Nest 2.0 learning thermostat.
  • I made sure every last bulb in our house that gets any significant use is LED. Fortunately there are some pretty decent $16 LED bulbs on Amazon now offering serviceable 60 watt equivalents at 9 watts, without too many early adopter LED quirks (color, dimming, size, weight, etc).
  • I even put appliance LED bulbs in our refrigerator and freezer.
  • Switched to a low-flow shower head.
  • Upgraded to a high efficiency tankless water heater, the Noritz NCC1991-SV.
  • Nearly killed myself trying to source LED candelabra bulbs for the fixture in our dining room which has 18 of the damn things, and is used quite a bit now with the twins in the house. Turns out, 18 times any number … is still kind of a large number. In cash.

(Most of this has not helped much on the report. The jury is still out on the Nest thermostat and the candelabra LED bulbs, as I haven't had them long enough to judge. I'm gonna defeat this thing, man!)

I'm ashamed to admit that it's only recently I realized that this technique – showing a set of metrics alongside your peers – is exactly the same thing we built at Stack Overflow and Stack Exchange. Notice any resemblance on the user profile page here?

[image: Stack Overflow user page]

You've tricked me into becoming obsessed with understanding and reducing my household energy consumption. Something that not only benefits me, but also benefits the greater community and, more broadly, benefits the entire world. You've beaten me at my own game. Well played, Pacific Gas & Electric. Well played.
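
The mechanic itself reduces to a single comparison: your number, shown next to your peer group's. A sketch with made-up figures, not PG&E's actual methodology:

    yours     = 620.0                     # your kWh this month
    neighbors = [410, 455, 470, 480, 510] # similar homes, kWh
    median    = neighbors.sort[neighbors.size / 2]
    percent   = ((yours - median) / median * 100).round
    puts "Your home used #{percent}% more energy than similar homes."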

[image: @davetron5000 tweet]

This peer motivation stuff, call it gamification if you must, really works. That's why we do it. But these systems are like firearms: so powerful they're kind of dangerous if you don't know what you're doing. If you don't think deeply about what you're incentivizing, why you're incentivizing it, and the full ramifications of all emergent behaviors in your system, you may end up with … something darker. A lot darker.

The key lesson for me is that our members became very thoroughly obsessed with those numbers. Even though points on Consumating were redeemable for absolutely nothing, not even a gold star, our members had an unquenchable desire for them. What we saw as our membership scrabbled over valueless points was that there didn't actually need to be any sort of material reward other than the points themselves. We didn't need to allow them to trade the points in for benefits, virtual or otherwise. It was enough of a reward for most people just to see their points wobble upwards. If only we had been able to channel that obsession towards something with actual value!

Since I left Stack Exchange, I've had a difficult time explaining what exactly it is I do, if anything, to people. I finally settled on this: what I do, what I'm best at, what I love to do more than anything else in the world, is design massively multiplayer games for people who like to type paragraphs to each other. I channel their obsessions – and mine – into something positive, something that they can learn from, something that creates wonderful reusable artifacts for the whole world. And that's what I still hope to do, because I have an endless well of obsession left.

Just ask PG&E.


Touch Laptops

November 19, 2012

I'm a little embarrassed to admit how much I like the Surface RT. I wasn't expecting a lot when I ordered it, but after a day of use, I realized this was more than Yet Another Gadget. It might represent a brave new world of laptop design. How can you not love a laptop that lets you touch Zardoz to unlock it?

[image: Zardoz Surface unlock screen]

(I'll leave the particular unlock gestures I chose to your imagination. Good luck hacking this password, Mitnick!)

I have an ultrabook I like, but the more I used the Surface, the more obsolete it seemed, because I couldn't touch anything on the screen. I found touch interactions on Surface highly complementary to the keyboard. Way more than I would have ever believed, because I lived through the terror that was Pen Computing. If you need precision, you switch to the mouse or touchpad – but given the increasing prevalence of touch-friendly app and web design, that's not as often as you'd think. Tablets are selling like hotcakes, and every day the world becomes a more touch friendly place, with simpler apps that more people can understand and use on basic tablets. This is a good thing. But this also means it is only a matter of time before all laptops must be touch laptops.

I've become quite obsessed, er, enamored with this touch laptop concept. I've used the Surface a lot since then. I own two, including the touch and type covers. I also impulsively splurged on a Lenovo Yoga 13, which is a more traditional laptop form factor.

[image: Yoga 13 rotation]

One of the primary criticisms of the Surface RT is that, since it is an ARM based Tegra 3 device, it does not run traditional x86 apps. That's likely also why it comes with a bundled version of Office 2013. Well, the Yoga 13 resolves that complaint, because it's a Core i5 Ivy Bridge machine. But there is a cost for this x86 compatibility:

             Surface RT        Surface Pro       Yoga 13
  weight     1.5 lb            2.0 lb            3.4 lb
  volume     27 in³            39 in³            78 in³
  runtime    8 hr              6? hr             5.5 hr
  display    10.6" 1366×768    10.6" 1920×1080   13.3" 1600×900
  memory     2 GB / 32 GB      4 GB / 64 GB      4 GB / 128 GB
  price      $599              $999              $999

The size comparison isn't entirely fair, since the Yoga is a 13.3" device, and the Surface is a 10.6" device. But Surface Pro has x86 internals and is otherwise as identical to the Surface RT as Microsoft could possibly make it, and it's still 44% larger and 33% heavier. Intel inside comes at a hefty cost in weight, battery life, and size.

You do get something for that price, though: compatibility with the vast library of x86 apps, and speed. The Yoga 13 is absurdly fast by tablet standards. Its Sunspider score is approximately 150 ms, compared to my iPad 4 at 738 ms, and the Surface RT at 1036 ms. Five hours of battery life might not seem like such a bad tradeoff for six times the performance.

I like the Yoga 13 a lot, and it is getting deservedly good reviews. Some reviewers think it's the best Windows 8 laptop available right now. It is a fine replacement for my ultrabook, and as long as you fix the brain-damaged default drive partitioning, scrape off the handful of stickers on it, and uninstall the few pre-installed craplets, it is eminently recommendable. You can also easily upgrade it from 4 GB to 8 GB of RAM for about $40.

But there were things about the practical use of a touch laptop, subtle things that hadn't even occurred to me until I tried to sit down and use one for a few hours, that made me pause:

  1. The screen bounces when you touch it. Maybe I just have hulk-like finger strength, but touching a thin laptop screen tends to make it bounce back a bit. That's … exactly what you don't want in a touch device. I begin to understand why the Surface chose its "fat screen, thin keyboard" design rather than the traditional "thin screen, fat keyboard" of a laptop. You need the inertia on the side you're touching. The physics of touching a thin, hinged laptop screen are never going to be particularly great. Yes, on the Yoga I can wrap the screen around behind the keyboard, or even prop it up like a tent – but this negates the value of the keyboard which is the biggest part of the touch laptop story! If I wanted a keyboardless tablet, I'd use one of the four I have in the house already. And the UPS guy just delivered a Nexus 10.

  2. A giant touchpad makes the keyboard area too large. On a typical laptop, a Texas size touchpad makes sense. On a touch laptop, giant touchpads are problematic because they push the screen even farther away from your hand. This may sound trivial, but it isn't. A ginormous touchpad makes every touch interaction you have that much more fatiguing to reach. I now see why the Surface opted for a tiny touchpad on its touch and type covers. A touchpad should be a method of last resort on a touch laptop anyway, because touch is more convenient, and if you need true per-pixel precision work, you'll plug in a mouse. Have I mentioned how convenient it is to have devices that accept standard USB mice, keyboards, drives, and so on? Because it is.

  3. Widescreen is good for keyboards, but awkward for tablets. A usable keyboard demands a certain minimum width, so widescreen it is; all touch laptops are going to be widescreen by definition. You get your choice between ultra wide or ultra tall. The default landscape mode works great, but rotating the device and using it in portrait mode makes it super tall. On a widescreen device, portrait orientation becomes a narrow and highly specialized niche. It's also very rough on lower resolution devices; neither the 1366×768 Surface RT nor the 1600×900 Yoga 13 really offer enough pixels on the narrow side to make portrait mode usable. You'd need a true retina class device to make portrait work in widescreen. I began to see why the iPad was shipped with a 4:3 display and not a 16:9 or 16:10 one, because that arrangement is more flexible on a tablet. I frequently use my iPad 4 in either orientation, but the Yoga and Surface are only useful in landscape mode except under the most rare of circumstances.

  4. About 11 inches might be the maximum practical tablet size. Like many observers, I've been amused by the race to produce the largest possible phone screen, resulting in 5" phablets that are apparently quite popular. But you'll also note that even the most ardent Apple fans seem to feel that the 7" iPad mini is an inherently superior form factor to the 10" iPad. I think both groups are fundamentally correct: for a lot of uses, the 3.5" phone really is too small, and the 10" tablet really is too big. As a corollary to that, I'd say anything larger than the 10.6" Surface is far too large to use as a tablet. Attempting to use the 13.3" Yoga as a tablet is incredibly awkward, primarily because of the size. Even if the weight and volume were pushed down to imaginary Minority Report levels, I'm not sure I would want a 13.3" tablet on my lap or in my hands. There must be a reason the standard letter page size is 8½ × 11", right?

  5. All-day computing, or, 10 hours of battery life. The more devices I own, the more I begin to appreciate those that I can use for 8 to 10 hours before needing to charge them. There is truly something a little magical about that 10 hour battery life number, and I can now understand why Apple seemed to target 9-10 hours of battery life in their initial iPad and iPhone designs. A battery life of 4 to 6 hours is nothing to sneeze at, but … I feel anxiety about carrying the charger around, whether I've charged recently or not, and I worry over screen brightness and other battery maximization techniques. When I can safely go 8 to 10 hours, I figure that even if I use the heck out of the device – as much as any human being reasonably could in a single day – I'll still safely make it through and I can stick it in a charger before I go to bed.

To appreciate just how extreme portrait mode is on a widescreen tablet, experience it yourself:

[images: Yoga 13 in landscape and portrait]

This isn't specific to touch laptops; it's a concern for all widescreen devices. I have the same problem with the taller iPhone 5. Because I now have to choose between super wide or super tall, it is a less flexible device in practice.

The Yoga 13, if representative of the new wave of Windows 8 laptops, is a clear win even if you have no intention of ever touching your screen:

  • It boots up incredibly fast, in a few seconds.
  • It wakes and sleeps incredibly fast, nearly instantaneously.
  • The display is a high quality IPS model.
  • A rotating screen offers a number of useful modes: presentation, (giant) tablet, standard laptop.
  • Touchpad and keyboard work fine; at the very least, they're no worse than the typical PC laptop to me.
  • Does the prospect of using Windows 8 frighten and disturb you? No worries, smash Windows+D on your keyboard immediately after booting and pretend you're using Windows 7.5. Done and done.

It's a nice laptop. You could do far worse, and many have. In the end, the Yoga 13 is just a nice laptop with a touchscreen slapped on it. But the more I used the Yoga the more I appreciated the subtle design choices of Surface that make it a far better touch laptop. I kept coming back to how much I enjoyed using the Surface as the platonic ideal of what touch laptops should be.

Yes, it is a bummer that the only currently available Surface is ARM based and does not run any traditional Windows apps. It's easy to look at the x86 performance of the Yoga 13 and assume that Windows on ARM is a cute, temporary throwback to Windows NT on Alpha or MIPS that will never last, and understandably so. Do you see anyone running Windows on Alpha or MIPS CPUs today? But I'm mightily impressed with the Tegra 3 SOC (system-on-a-chip) that runs both the Surface RT and the Nexus 7. Upcoming Tegra releases, all named after superheroes, promise 75 times the performance of Tegra 2 by 2014. I can't quite determine how much faster Tegra 3 was than Tegra 2, but even if the 2014 parts are "only" ten times faster than Tegra 3, that's … amazing.

I think we're beginning to uncover the edges of a world where lack of x86 compatibility is no longer the kiss of death it used to be. It's unclear to me that Intel can ever reach equivalent performance per watt with ARM; Intel's ultra-low-end Celeron 847 is twice as fast as the ARM A15, but it's also 17 watts TDP. In a land of ARM chips that pull an absolute maximum of 4 watts at peak, slapping Intel Inside will instantly double the size and weight of your device – or halve its battery life, your choice. Intel's been trying to turn the battleship, but with very limited success so far. Haswell, the successor to the Ivy Bridge CPUs in the Surface Pro and Yoga 13, only gets down to 10 watts. And Intel's long neglected Atom line, thanks to years of institutional crippling to avoid cannibalizing Pentium sales, is poorly positioned to compete with ARM today.

Still, I would not blame anyone for waiting on the Surface Pro. A high performance, HD touch laptop in the Surface form factor that runs every x86 app you can throw at it is a potent combination … even if it is 44% larger and 33% heavier.


A SSD in Your Pocket

November 13, 2012

I woke up a few days ago and realized I was still carrying the same 32 GB USB flash drive on my keychain that I purchased in 2010. I thought to myself, this is an unacceptable state of affairs. Totally. Unacceptable.

It's been a few years since I seriously looked at USB flash drive performance. Premium USB flash drives typically eke out about 10-20 MB per second, strongly favoring reads, but I've recently purchased a number of inexpensive 4 GB USB drives that barely got to 4 MB per second. That's OK, since they were only intended as cheap floppy, CD, and DVD replacements. Based on that experience, I wasn't expecting much improvement in the status quo.

USB 3.0 is finally becoming somewhat prevalent on PCs and Macs, so I figured I'd:

  • Switch to a current-generation USB 3.0 flash drive.
  • Bump up to 64 GB storage this generation, one step over the 32 GB model I currently carry.
  • Optimistically hope against hope that they've gotten fast enough by now to get anywhere near USB 2.0 throughput limits.

I checked around and the Patriot Supersonic Magnum got good reviews. The price seemed about right at $75 for a 64 GB device. So I bought one. I plugged it in to one of the USB 3.0 ports on my PC and …

Usb-drive-read

Usb-drive-write

Holy. Crap.

237 MB/s reads and 143 MB/s writes? Yes please!

Needless to say, this thing handily saturates a USB 2.0 connection at around 27-30 MB/sec, but plug it into one of those blue USB 3.0 ports on newer Macs or PCs and prepare to feel like the "blown away" guy in the Maxell ad.

I haven't run a full set of benchmarks on this guy, but the only downside I've noticed so far is that it's a bit wider than my previous USB flash drive. That makes it slightly more to carry, and it might not fit some USB ports, depending on what's plugged in next to it.
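
If you want to sanity check a drive of your own, a crude sequential throughput test is easy to sketch in Python. This is a rough measurement at best – the file path is a placeholder, and real benchmark tools control for OS caching where this sketch does not, so treat the read figure as an upper bound:

    import os
    import time

    PATH = "E:/throughput-test.bin"   # placeholder: a file on the drive under test
    SIZE = 256 * 1024 * 1024          # 256 MB test file
    CHUNK = 4 * 1024 * 1024           # 4 MB blocks

    # Sequential write: stream random data to the drive, then fsync
    # so the OS cache doesn't flatter the result.
    buf = os.urandom(CHUNK)
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(SIZE // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    print(f"write: {SIZE / (time.time() - start) / 1e6:.0f} MB/s")

    # Sequential read of the same file.
    start = time.time()
    with open(PATH, "rb") as f:
        while f.read(CHUNK):
            pass
    print(f"read: {SIZE / (time.time() - start) / 1e6:.0f} MB/s")

    os.remove(PATH)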

Patriot-magnum-64gb

Now I feel like a total dork for continuing to carry around a 2010 era flash drive that I thought had decent performance at 20 MB/sec. Forget that noise. Now we can darn near carry pocket solid state hard drives on our keychains! Nobody told me, man!

So now I'm telling you. Enjoy.


Do You Wanna Touch

November 1, 2012

Traditional laptops may have reached an evolutionary dead-end (or, more charitably, a plateau), but it is an amazing time for things that … aren't quite traditional laptops.

The Nexus 7 is excellent, the Nexus 10 looks fantastic, I can't wait to get my hands on the twice-as-fast iPad 4, the new Chromebooks are finally decent and priced right, and then there's the Microsoft Surface RT. In short, it is a fantastic time to be a computer nerd.

Revenge of the nerds

I love computers, always have, always will. My strategy with new computing devices is simple: I buy 'em all, then try living with them. The devices that fall away from me over time – the ones that gather dust, or that I forget about – are the ones I eventually get rid of. So long, Kindle Fire! I knew that the Nexus 7 was really working for me when I gave mine to my father as a spontaneous gift while he was visiting, then missed it sorely when waiting for the replacement to arrive.

As I use these devices, I've grown more and more sold on the idea that touch is going to dominate the next era of computing. This reductionism is inevitable and part of the natural evolution of computers. Remove the mouse. Remove the keyboard. Remove the monitor. Reducing a computer to its absolute minimum leads us inexorably, inevitably to the tablet (or, if a bit smaller, the phone). All you're left with is a flat, featureless slate that invites you to touch it. Welcome to the future, here's your … rectangle.

tablets

I've stopped thinking of touch as some exotic, add-in technology contained in specialized devices. I belatedly realized that I love to touch computers. And why not? We constantly point and gesture at everything in our lives, including our screens. It's completely natural to want to interact with computers by touching them. That's why the more unfortunate among us have displays covered in filthy fingerprints.

Although I love my touch devices, one thing I've noticed is that they are a major disincentive to writing actual paragraphs. On-screen keyboards get the job done, but if I have to scrawl more than a Twitter-length reply to someone on a tablet or phone, it's so much effort that I just avoid doing it altogether, postponing it indefinitely until I can be in front of a keyboard. By the time that happens I've probably forgotten what I wanted to say in the first place, or that I even needed to reply at all. Multiply that by millions or billions, and you have a whole generation technologically locked into a backwater of minimal communication. Yelp, for example, does not allow posting reviews from their mobile app because when they did, all they got was LOL OMG raspberry poop Emoji.

Omg-raspberry-poop

It's not good. In fact, it's a little scary. I realize that there are plenty of ways of creating content that don't involve writing, but writing is pretty damn fundamental to communication and civilization as we know it. Anything that adds a significant barrier to the act of placing words on a page is kind of dangerous – and a major regression from the world where every computer had a keyboard in front of it, inviting people to write and communicate with each other. So the idea that billions of people in the future will be staring at touchscreen computers, Instagramming and fingerpainting their thoughts to each other, leaves me with deeply mixed feelings. As Joey Hess said:

If it doesn't have a keyboard, I feel that my thoughts are being forced out through a straw.

When I pre-ordered the Microsoft Surface RT, I wasn't expecting much. This is a version one device from a company that has never built a computer before, running a brand new and controversial operating system. On paper, it doesn't seem like a significant change from all the other tablets on the market, and its primary differentiating feature – the touch keyboard – can be viewed as merely flipping a regular laptop over, so the "fat" side is on the display rather than the keyboard.

Laptop vs. Surface

Surface is just like the first iPad in that it has all the flaws and rough edges you'd expect in a version one device. But it is also like the first iPad in that there is undeniably the core of something revelatory and transformative here – a vision of the future of computing that doesn't sacrifice either keyboard or touch.

Reviewers think Surface is intended to be a tablet killer, but it isn't. It's a laptop killer. After living with the Surface RT for a few days now, I'm convinced that this form factor is the replacement and way forward for the stagnant laptop. I can't even remember the last time I was this excited about a computer. The more I use it, the more I think that touch plus keyboard is the future of all laptops.

How wonderful it is to flip open the Surface and quickly type a 4 paragraph email response when I need to. How wonderful it is to browse the web and touch whatever I want to. And switching between the two modes of interaction – sometimes typing, sometimes touching – is completely natural. Remember when I talked about two-fisted computing, referring to the mouse and keyboard working in harmony? With Surface, I found that also applies to touch. In spades.

The Surface RT in my lap

This isn't a review, per se, but let me get into a few specifics:

  • Yes, it is ridiculous that the keyboard cover is not included in the base Surface, as the near-perfect integration of keyboard with touch is the whole story here. Don't even consider buying a Surface without the touch keyboard cover. Within an hour or so I was hitting 80% of my regular typing speed on it, and it's firm enough to be used on a lap without too much loss of accuracy. Astonishingly, the tiny fabric touchpad is quite good, better than the ones I've used on many laptops. Which probably says more about the sad state of the PC ecosystem than it does about Surface, but still.
  • Yeah, yeah, it doesn't run x86 apps. So your beloved copy of Windows Landscape Designer 1998 won't run on Surface RT. You'll need to wait a few months for Surface Pro to do that, but you'll pay the Intel Premium™ in price, battery life, and size. Rumor has it that Intel will get their act together with Haswell, and finally be competitive with ARM in price, performance, and power consumption, but I'll believe that when I see it.
  • The hardware design is beyond reproach; I'd even argue it's better than Apple-quality hardware design. Unless you're required by God to hate all things touched by Microsoft, there's no way you could handle a Surface and not think that this is a genuinely well made thing.
  • The default Surface mail application is an embarrassment, and everyone associated with it should be fired. Android and iOS both have decent default mail apps, as well they should, because email is bedrock. Not getting this right really hurts. If Microsoft doesn't put their "hey dummies, all you have to do is copy Sparrow already" A Team on it soon, they'll be sorry.
  • Many of the native applications currently available run poorly on Surface RT due to lack of optimization and testing for the ARM platform versus x86. Probably not terribly different from the iPad 1 on launch day, but it remains to be seen how quickly that will get resolved.
  • The web browser is stellar and a model of how the Internet should work on a tablet. You are almost always in fullscreen mode, swiping around with nothing but content on your screen, the way it should be. However, back button performance is bizarrely slow, and the way IE10 handles web hovers is poor, much worse than Mobile Safari and Chrome. Try upvoting a comment on Stack Overflow to see what I mean.

Notice how the 2010 iPad 1 is already obsolete? Expect the same thing with the Surface RT. It's a fascinating glimpse into the future, but it'll be totally, utterly obsolete in 2 years. Do not buy this device expecting longevity. Buy it because you want to see tomorrow today.

The received wisdom about touchscreen interaction with computers was that it didn't work. That you'd get "gorilla arm". That's why we had to have special tablet devices. But Surface proves that's not true; typing and touching are spectacularly compatible, at least for laptops. And I'm beginning to wonder about my desktop a little, because lately I'm starting to think I wanna touch that, too.


The Future of Markdown

October 25, 2012

Markdown is a simple little humane markup language based on time-tested plain text conventions from the last 40 years of computing.

Meaning, if you enter this…
Lightweight Markup Languages
============================

According to **Wikipedia**:

> A [lightweight markup language](http://is.gd/gns)
is a markup language with a simple syntax, designed 
to be easy for a human to enter with a simple text 
editor, and easy to read in its raw form. 

Some examples are:

* Markdown
* Textile
* BBCode
* Wikipedia

Markup should also extend to _code_: 

    10 PRINT "I ROCK AT BASIC!"
    20 GOTO 10

…you get this!

Lightweight Markup Languages

According to Wikipedia:

A lightweight markup language is a markup language with a simple syntax, designed to be easy for a human to enter with a simple text editor, and easy to read in its raw form.

Some examples are:

  • Markdown
  • Textile
  • BBCode
  • Wikipedia

Markup should also extend to code:

10 PRINT "I ROCK AT BASIC!"
20 GOTO 10

You can think of Markdown as a radically simplified and far more human readable form of HTML. I have grown to love Markdown over the last few years. If you're a programmer of any shape, size, or color, you can't really avoid using Markdown, as it's central to both GitHub and Stack Overflow. For that matter, my new project uses Markdown, too.

Markdown is a wonderful tool, but it does suffer a bit from lack of project leadership. The so-called "spec" is anything but, and there are dozens of different flavors of Markdown out there, all with differences in the way they behave. While they are broadly compatible, Stack Overflow and GitHub have both tweaked Markdown in ways that can trip you up if you're familiar with one but not the other; compare GitHub Flavor with Stack Overflow Flavor.

That's why I was so excited to get this email from David Greenspan a few days ago:

I'm the creator of EtherPad (a collaborative WYSIWYG editor), now working at Meteor. At Meteor, we're trying to "pave the web" for developers by writing better components. For example, we just released universal login buttons that talk over WebSockets and are wired into the users table of the app's database. Since Markdown is increasingly ubiquitous for writing content, it's going to be part of the Meteor toolchain. I wouldn't be surprised if we end up releasing a component like Stack Overflow's editor, with the full "Meteor" standard of code quality, so that no one has to roll their own again. Today, we use Markdown in our API docs generation, and we're going to be writing more and more content in it -- which is a scary thought.

I think you and I share some concern (horror?) about Markdown's lack of spec and tests. The code is ugly to boot. Extending or customizing Markdown is tricky (we already have some hacks and they are terrible), and I worry about "bit rot" of content if the format doesn't have a spec. I'm evaluating the possibility of starting over with a new implementation coupled with a real spec and test suite, and I've been thinking a lot about how to parse a language like Markdown in a principled way. I'm pretty fearless about parsers, by the way; I wrote a full ECMAScript parser in a week as a side project.

I want this new language – working name "Rockdown" – to be seen as Markdown with a spec, and therefore only deviate from Markdown's behavior in unobtrusive ways. It should basically be a replacement that paves over the problems and ambiguities in Markdown. I'm trying to draw a line between what behavior is important to preserve and what behavior isn't.

I was excited because, like David, I freaking love Markdown. I love it so much that I want to see it succeed and flourish over the next 20 years. I believe the best way to achieve that goal is for the most popular sites using Markdown to band together and take ownership of Markdown as a standard. I propose that Stack Exchange, GitHub, Meteor, Reddit, and any other company with lots of traffic and a strategic investment in Markdown, all work together to come up with an official Markdown specification, and standard test suites to validate Markdown implementations. We've all been working at cross purposes for too long, accidentally fragmenting Markdown while popularizing it.

Like any dutiful and well-meaning suitor, we first need to ask permission for this courtship from the parents. So I'm asking you, John Gruber: as the original creator of Markdown, will you bless this endeavor? Also, as a totally unrelated aside, have I mentioned what a huge Yankees fan I am? Derek Jeter is one of the all-time greats.

Yankees_logo

I realize that the devil is in the details, but for the most part what I want to see in a Markdown Standard is this:

  1. A standardization of the existing core Markdown conventions, as documented by John Gruber, in a formal language specification.
  2. Saner defaults for the three most common real world "gotchas" in Markdown: intra-word emphasis (off), auto-hyperlinking (on), automatic return-based linebreaks (on).
  3. A formal set of tests anyone can use to validate a Markdown implementation (see the sketch after this list).
  4. Some cleanup and tweaks for ambiguous edge cases that exist in Markdown due to the lack of a formal specification.
  5. A registry of known flavor variants, with some future lobbying to add only the most widely and strongly supported variants (I am thinking of the GitHub style code blocks, which are quite nice) to future versions of Markdown.
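
To make item 3 a little more concrete, here's a minimal sketch in Python of what a spec-driven test harness could look like. The expected HTML is purely illustrative – it encodes the three default behaviors proposed in item 2 – and `render` is a stand-in for whatever Markdown implementation is under test, not any real library's API:

    # Minimal sketch of a Markdown conformance harness. The expected
    # HTML below is illustrative, not taken from any ratified spec.

    SPEC_CASES = [
        ("intra-word emphasis off",
         "a_big_deal",
         "<p>a_big_deal</p>"),
        ("auto-hyperlinking on",
         "visit http://example.com now",
         '<p>visit <a href="http://example.com">http://example.com</a> now</p>'),
        ("return-based linebreaks on",
         "line one\nline two",
         "<p>line one<br>\nline two</p>"),
    ]

    def validate(render):
        """Run every spec case through `render` and return the failures."""
        failures = []
        for name, source, expected in SPEC_CASES:
            actual = render(source).strip()
            if actual != expected:
                failures.append((name, expected, actual))
        return failures

    # Usage: plug in any implementation and see how far it sits from
    # the (hypothetical) standard, e.g.:
    #   import markdown
    #   print(validate(markdown.markdown))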

And that's it, really. I don't want to extend Markdown by adding tons of crazy new functionality, or radically change the way it currently works, or anything like that. I'd be opposed to such changes. I just want to solidify and standardize the simple, useful version of Markdown that is working so well for everyone right now. I want there to be an unambiguous, basic standard that everyone using Markdown can expect to work in the same way across all web sites in the world when they begin typing.

Markdown mark

I'd really prefer not to fork the language; I'd much rather collectively help carry the banner of Markdown forward into the future, with the blessing of John Gruber and in collaboration with other popular sites that use Markdown.

So … who's with me?


Judging Websites

October 16, 2012

I was invited to judge the Rails Rumble last year, but was too busy to participate. When they extended the offer again this year, I happily accepted.

The Rails Rumble is a distributed programming competition where teams of one to four people, from all over the world, have 48 hours to build an innovative web application, with Ruby on Rails or another Rack-based Ruby web framework. After the 48 hours are up, a panel of expert judges will pick the top ten winners.

I received an email notifying me that judging begins today, so I cracked my knuckles, sat down in front of my three monitors (all the better to judge with!) and … saw that there were around 340 entries.

Rails rumble entries

That's when I started to get a little freaked out about the math. Perhaps we can throw out 5% of the entrants as obviously incomplete or unfinished. That leaves 323 entries to judge. Personally, I'm not comfortable saying I judged a competition unless I actually looked at each one of the entries, so at an absolute minimum I have to click through to each webapp. Once I do, I can't imagine properly evaluating the webapp without spending at least 30 seconds looking at the homepage.

Let's be generous and say I need 10 seconds to orient myself and account for page load times, and 30 seconds to look at each entry. That totals three and a half hours of my, y'know, infinitely valuable time. In which I could be finding a cure for cancer, or clicking on LOLcats. I still felt guilty about only allocating half a minute per entry; is it fair to the contestants if I make my decision based on 30 seconds of scanning their landing page and maybe a few desultory clicks?

But then I had an epiphany: yes, deciding in 30 seconds is totally completely unfair, but that's also exactly how it works in the real world. Users are going to click through to your web site, look at it for maybe 30 seconds, and either decide that it's worthy, or reach for the almighty back button on their browser and bug out. Thirty seconds might even be a bit generous. In one Canadian study, users made up their mind about websites in under a second.

Researchers led by Dr. Gitte Lindgaard at Carleton University in Ontario wanted to find out how fast people form first impressions. They flashed web pages onto the screen for 500 milliseconds and for 50 milliseconds, and had participants rate the pages on various scales. The ratings at both exposures were consistent across participants, although the longer display produced more reliable results. Yet in as little as 50 milliseconds, participants formed judgments about the images they glimpsed. The "halo effect" of that emotional first impression carries over to cognitive judgments of a web site's other characteristics, including usability and credibility.

The opportunity cost to switch websites is one tiny little click of the mouse or tap of the finger. What I learned from judging the Rails Rumble most of all is that your website's front page needs to be kind of awesome. It is never the complete story, of course, but do not squander your first opportunity to make an impression on a visitor. It may be the only one you get.

I'm not sure I was learning much about these apps while I judged, and for that I am truly sorry. But along the way I accidentally learned a heck of a lot about what makes a great front page for a web application. So I'd like to share that with you, and all future Rails Rumble entrants:

  1. Load reasonably fast.

    I've talked about performance as a feature before; the sooner the front page of your site loads, the sooner I can decide whether or not I am interested. If you are slow, I will resent you for it, and the slower you are, the more I will resent you – not just for keeping me from finding out about you, but for keeping me from moving on to the next thing. I need to be an efficient informavore, and that means moving quickly. Above all else, load fast.

  2. What the %#!@^ is this thing?

    The first challenge you have is not coding your app. It is explaining what problem your app solves, and why anyone in the world would possibly care about that. You need an elevator pitch on your front page: can you explain to a complete stranger, in 30 seconds, why your application exists? Yes, writing succinctly and clearly is an art, but keep pounding on that copy, keep explaining it over and over and over until you have your explanation polished to the fine sheen of a diamond. When you're confident you could walk up to any random person on the street, strike up a conversation about what you're working on, and not have their eyes glaze over in boredom and/or fear – that's when you're ready. That's the text you want on your home page.

  3. Show me an example.

    OK, so you're building the ultimate tool for cataloging and sharing Beanie Babies on Facebook. Awesome, let me be an angel investor in your project so I can get me a piece of those sweet, sweet future billions. The idea is sound. But everyone knows that ideas are worthless, whereas execution is everything. I have no clue what the execution of your idea is unless you show it to me. At the very least throw up some screenshots of what it would look like if I used your webapp, with some juicy real world examples. And please, please, please, for the love of God please, do not make me sign up, click through a video, watch a slideshow, or any of that nonsense. Only emperors and princes have that kind of time, man. Show, don't tell.

  4. Give me a clear, barrier-free call to action.

    In the rare cases where the app passes the above three tests with flying colors, I'm invested: I am now willing to spend even more of my time checking it out. What do I do next? Where do I go? Your job is to make this easy for me. I call this "the put a big-ass giant obvious fluorescent lime green button on your home page" rule. You can have more than one, but I'd draw the line at two. And make the text on the button descriptive, like Start sharing your favorite Beanie Babies → or Build your dream furry costume →. If you require login at this point, I strongly urge you to skip that barrier and have a live sample I can view without logging in at all, just to get a taste of how things might work. If you're really, really slick you will make it seamless to go from an unregistered to a registered state without losing anything I've done.

  5. Embrace your audience, even if it means excluding other audiences.

    Even if you nail all the above, you might not fit into my interest zone through absolutely no fault of your own. If you built the world's most innovative and utterly disruptive Web 5.0 Pokédex, there are a lot of people who won't care one iota about it, because they're not really into Pokemon. This is not your fault and it is certainly not their fault. You need to embrace the idea that half of all success is knowing your core audience, and not trying to water it down so much that it appeals to "everyone". Don't patronize me by trying to sell me on the idea that everyone should care about babies, or invoicing, or sports, or being a student, or whatever. Only the people who need to care will care, and that's who you are talking to. So have the confidence to act like it.

I realize that Rails Rumble apps only have a mere 48 hours to build an entire app from scratch. I am not expecting a super professional amazing home page on every one of the entries, nor did I judge it that way. But I do know that a basic sketch of a homepage design is the first thing you should work on in any webapp, because it serves as the essential starting design document and vision statement. Unless you start with a basic homepage that meets the above 5 rules, your app won't survive most judges, much less the herds of informavores running wild on the Internet.


Building Servers for Fun and Prof... OK, Maybe Just for Fun

October 15, 2012

In 1998 I briefly worked for FiringSquad, a gaming website founded by Doom and Quake champion Thresh aka Dennis Fong and his brother Kyle. I can trace my long-standing interest in chairs and keyboards to some of the early, groundbreaking articles they wrote. Dennis and Kyle were great guys to work with, and we'd occasionally chat on the phone about geeky hardware hotrodding stuff, like the one time they got so embroiled in PC build one-upmanship that they were actually building rack-mount PCs … for their home.

So I suppose it is inevitable that I'd eventually get around to writing an article about building rack-mount PCs. But not the kind that go in your home. No, that'd be as nuts as the now-discontinued Windows Home Server product.

Mommy, Why is There a Server in the House

Servers belong in their native habitat, the datacenter. Which can be kind of amazing places in their own right.

Facebook-datacenter-1u-racks

The above photo is from Facebook's Open Compute Project, which is about building extremely energy efficient datacenters. And that starts with minimalistic, no-frills 1U server designs, where 1U is the smallest unit of vertical space in a standard server rack.

I doubt many companies are big enough to even consider building their own datacenter, but if Facebook is building their own custom servers out of commodity x86 parts, couldn't we do it too? In a world of inexpensive, rentable virtual machines, like Amazon EC2, Google Compute Engine, and Azure Cloud, does it really make sense to build your own server and colocate it in a datacenter?

It's kind of tough to tell exactly how much an Amazon EC2 instance will cost you since it varies a lot by usage. But if I use the Amazon Web Services simple monthly calculator and select the Web Application "common customer sample", that provides a figure of $1,414 per month, or $17k/year. If you want to run a typical web app on EC2, that's what you should expect to pay. So let's use that as a baseline.

The instance types included in the Web Application customer sample are 24 small (for the front end), and 12 large (for the database). Here are the current specs on the large instance:

  • 7.5 GB memory
  • 2 virtual cores with 2 EC2 Compute Units each
  • 850 GB instance storage
  • 64-bit platform
  • I/O Performance: High

You might be wondering what the heck an EC2 Compute Unit is; it's Amazon's way of normalizing CPU performance. By their definition, what we get in the large instance is akin to a 2008-era dual-core 2.4 GHz Xeon CPU. Yes, you can pay more and get faster instances, but switching the instances from small to high-CPU and from large to high-MEM more than doubles the bill, to $3,302 per month or $40k/year.

Assuming you subscribe to the theory of scaling out versus scaling up, building a bunch of decent bang-for-the-buck commodity servers is what you're supposed to be doing. I avoided directly building servers when we were scaling up Stack Overflow, electing to buy pre-assembled hardware from Lenovo instead. But this time, I decided the state of hardware has advanced sufficiently since 2009 that I'm comfortable cutting out the middleman in 2012 and building the servers myself, from scratch. That's why I just built four servers exactly like this:

(If you are using this as a shopping list, you will also need 4-pin power extensions for the case, and the SuperMicro 1U passive heatsink. The killer feature of SuperMicro motherboards – the thing that makes them all server-y in the first place – is the built-in hardware KVM-over-IP. That's right: unless the server is literally unplugged, you can remote in and install an operating system, tweak the BIOS, power it on and off, and so on. It works. I use it daily.)

Parts for building 1U server

Based on the above specs, this server has comparable memory to the High-Memory Double Extra Large Instance, comparable CPU power to the High-CPU Extra Large Instance, and comparable disk performance to the High I/O Quadruple Extra Large Instance. This is a very, very high end server by EC2 standards. It would be prohibitively expensive to run this hardware in the Amazon cloud. But how much will it cost us to build? Just $2,452. Adding 10% for taxes, shipping, and so on, let's call it $2,750 per server. One brand new top-of-the-line server costs about as much as two months of EC2 web application hosting.

Of course, that figure doesn't include the cost in time to build and rack the server, the cost of colocating the server, and the ongoing cost of managing and maintaining the server. But I humbly submit that the one-time cost of paying for three of these servers, plus the cost of colocation, plus a bunch of extra money on top to cover provisioning and maintenance and support, will still be significantly less than $17,000 for a single year of EC2 web application hosting. Every year after the first year will be gravy, until the servers are obsolete – which even conservatively has to be at least three years out. Perhaps most importantly, these servers will offer vastly better performance than you could get from EC2 for your web application – at least without paying astronomical amounts of money for the privilege.
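
For the skeptics, the three year arithmetic is easy to check. The EC2 and parts figures come from above; the colocation rate is an assumed placeholder, so substitute a real quote from your own datacenter:

    # Three years of self-hosting vs. three years of EC2, using the
    # figures above. colo_per_month is an ASSUMED placeholder.

    parts_per_server = 2452
    per_server = parts_per_server * 1.10   # +10% taxes and shipping, ~$2,697
    server_count = 3
    colo_per_month = 500                   # assumption -- quotes vary widely

    ec2_per_month = 1414                   # Amazon's "Web Application" sample

    self_hosted = server_count * per_server + colo_per_month * 36
    ec2 = ec2_per_month * 36

    print(f"self-hosted, 3 years: ${self_hosted:,.0f}")   # $26,092
    print(f"EC2, 3 years:         ${ec2:,.0f}")           # $50,904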

Newly built rackmount 1U server

(If you are concerned about power consumption, don't be. I just measured the power use of the server using my trusty Kill-a-Watt device: 31 watts (0.28 amps) at idle, 87 watts (0.75 amps) under never-gonna-happen artificial 100% CPU load. The three front fans in the SuperMicro case are plugged into the motherboard and only spin up at boot and under extreme load. It's shockingly quiet in typical use for a 1U server.)

I realize that to some extent we're comparing apples and oranges. Either you have a perverse desire to mess around with hardware, or you're more than willing to pay exorbitant amounts of money to have someone else worry about all that stuff (and, to be fair, give you levels of flexibility, bandwidth, and availability that would be impossible to achieve even if you colocate servers at multiple facilities). $51,000 over three years is enough to pay for a lot of colocation and very high end hardware. But maybe the truly precious resource at your organization is people's time, not money, and that $51k is barely a rounding error in your budget.

Anyway, I want to make it clear that building and colocating your own servers isn't (always) crazy, it isn't scary, heck, it isn't even particularly hard. In some situations it can make sense to build and rack your own servers, provided …

  • you want absolute top of the line server performance without paying thousands of dollars per month for the privilege
  • you are willing to invest the time in building, racking, and configuring your servers
  • you have the capital to invest up front
  • you desire total control over the hardware
  • you aren't worried about the flexibility of quickly provisioning new servers to handle unanticipated load
  • you don't need the redundancy, geographical backup, and flexibility that comes with cloud virtualization

Why do I choose to build and colocate servers? Primarily to achieve maximum performance. That's the one thing you consistently just do not get from cloud hosting solutions unless you are willing to pay a massive premium, per month, forever: raw, unbridled performance. I'm happy to spend money on nice dedicated hardware because I know that hardware is cheap, and programmers are expensive.

But to be totally honest with you, mostly I build servers because it's fun.


Todon't

October 4, 2012

What do you need to do today? Other than read this blog entry, I mean.

Have you ever noticed that a huge percentage of Lifehacker-like productivity porn site content is a breathless description of the details of Yet Another To-Do Application? There are dozens upon dozens of the things to choose from, on any platform you can name. At this point it's getting a little ridiculous; per Lifehacker's Law, you'd need a to-do app just to keep track of all the freaking to-do apps.

The to-do appgasm

I've tried to maintain to-do lists at various points in my life. And I've always failed. Utterly and completely. Even turning it into a game, like the cleverly constructed Epic Win app, didn't work for me.

Eventually I realized that the problem wasn't me. All my to-do lists started out as innocuous tools to assist me in my life, but slowly transformed, each and every time, into thankless, soul-draining exercises in reductionism. My to-do list was killing me. Adam Wozniak nails it:

  1. Lists give the illusion of progress.
  2. Lists give the illusion of accomplishment.
  3. Lists make you feel guilty for not achieving these things.
  4. Lists make you feel guilty for continually delaying certain items.
  5. Lists make you feel guilty for not doing things you don't want to be doing anyway.
  6. Lists make you prioritize the wrong things.
  7. Lists are inefficient. (Think of what you could be doing with all the time you spend maintaining your lists!)
  8. Lists suck the enjoyment out of activities, making most things feel like an obligation.
  9. Lists don't actually make you more organized long term.
  10. Lists can close you off to spontaneity and exploration of things you didn't plan for. (Let's face it, it's impossible to really plan some things in life.)

For the things in my life that actually mattered, I've never needed any to-do list to tell me to do them. If I did, then that'd be awfully strong evidence that I have some serious life problems to face before considering the rather trivial matter of which to-do lifehack fits my personality best. As for the things that didn't matter in my life, well, those just tended to pile up endlessly in the old to-do list. And the collective psychic weight of all these minor undone tasks was caught up in my ever-growing to-do katamari ball, where it continually weighed on me, day after day.

Yes, there's that everpresent giant to-do list, hanging right there over your head like a guillotine, growing sharper and heavier every day.

Like a crazy hoarder I mistake the root cause of my growing mountain of incomplete work. The hoarder thinks he has a storage problem when he really has a 'throwing things away problem'. I say I am 'time poor' as if the problem is that poor me is given only 24 hours in a day. It's more accurate to say… what exactly? It seems crazy for a crazy person to use his own crazy reasoning to diagnose his own crazy condition. Maybe I too easily add new projects to my list, or I am too reluctant to exit from unsuccessful projects. Perhaps I am too reluctant to let a task go, to ship what I've done. They're never perfect, never good enough.

And I know I'm not alone in making the easy claim that I am 'time poor'. So many people claim to be time poor, when really we are poor at prioritizing, or poor at decisiveness, or don't know how to say 'no' (…to other people, to our own ideas).

If only I had a hidden store of time, or if only I had magical organisation tools, or if only I could improve my productive throughput, then, only then would I be able to get things done, to consolidate the growing backlogs and todo lists into one clear line of work, and plough through it like an arctic ice breaker carving its way through a sheet of ice.

But are you using the right guillotine? Maybe it'd work better if you tried this newer, shinier guillotine? I'd like to offer you some advice:

  1. There's only one, and exactly one, item anyone should ever need on their to-do list. Everything else is superfluous.
  2. You shouldn't have a to-do list in the first place.
  3. Declare to-do bankruptcy right now. Throw out your to-do list. It's hurting you.
  4. Yes, seriously.
  5. Maybe it is a little scary, but the right choices are always a little scary, so do it anyway.
  6. No, I wasn't kidding.
  7. Isn't Hall and Oates awesome? I know, rhetorical question. But still.
  8. Look, this is becoming counterproductive.
  9. Wait a second, did I just make a list?

Here's my challenge. If you can't wake up every day and, using your 100% original equipment God-given organic brain, come up with the three most important things you need to do that day – then you should seriously work on fixing that. I don't mean install another app, or read more productivity blogs and books. You have to figure out what's important to you and what motivates you; ask yourself why that stuff isn't gnawing at you enough to make you get it done. Fix that.

Tools will come and go, but your brain and your gut will be here with you for the rest of your life. Learn to trust them. And if you can't, do whatever it takes to train them until you can trust them. If it matters, if it really matters, you'll remember to do it. And if you don't, well, maybe you'll get to it one of these days. Or not. And that's cool too.


The PC is Over

October 1, 2012

MG Siegler writes:

The PC is over. It will linger, but increasingly as a relic.

I now dread using my computer. I want to use a tablet most of the time. And increasingly, I can. I want to use a smartphone all the rest of the time. And I do.

The value in the desktop web is increasingly an illusion. Given the rate at which these mobile devices are improving, a plunge is rapidly approaching.

Don’t build an app based on your website. Build the app that acts as if websites never existed in the first place. Build the app for the person who has never used a desktop computer. Because they’re coming. Soon.

Realize that MG Siegler is a journalist, and a TechCrunch air-quotes journalist at that, so he's well versed in hyperbole. You might say he's a billion times better at hyperbole than the average blogger. In his own way, he is a creator, I suppose: he creates hype.

But he's not entirely wrong here.

I've noticed the same pattern in my own computing habits. As I wrote in The Last PC Laptop, it's becoming more and more difficult to justify any situation where a traditional laptop is your best choice – even a modern, svelte, fancypants laptop.

Desktops, on the other hand, are perfectly justifiable. That is, if you want three monitors, eight blazingly fast CPU cores, 64 GB of memory, and fire-breathing multi-GPU configurations. If you need absurd, obscene amounts of power, a desktop computer is the way to go. And it's probably cheaper than you think, because desktops are all built from the same interchangeable pool of parts. It's also a lot more fun than laptops, because willingness to tinker combined with lust for ostentatious power is the essence of hot rodding.

And it is freakin' awesome.

Hot-rod

But even as an inveterate PC hot-rodder, I've noticed that in the last few years I've started to lose interest in the upgrade treadmill of ever faster CPUs with more cores, more sophisticated GPUs, more bandwidth, more gigabytes of RAM. Other than solid state drives, which gave us a badly needed order of magnitude improvement in disk speeds, when was the last time you felt you needed to upgrade a powerful desktop or laptop computer? If I dropped an SSD into a high end 2009 personal computer, do you honestly think you could tell the difference, in real world non-gaming desktop usage, between it and one from today?

Because I'm not sure I could.

Imagine the despair of a hot-rodder who regularly sees the streets awash in boring Chrysler K-Cars and Plymouth minivans with more ponies under the hood than a sweet custom rig he built just two years ago.

I think we're way past the point of satisfying the computing performance needs of the typical user. I'd say we hit that around the time dual CPU cores became mainstream, perhaps 2008 or so. What do you do when you have all the computing performance anyone could ever possibly need, except for the freakish one-percenters, the video editors and programmers? Once you have "enough" computing power, for whatever value of "enough" we can agree to disagree on, the future of computing is, and always has been, to make the computers smaller and cheaper. This is not some new trend that MG Siegler revealed unto the world from his journalistic fortress of solitude.

Mainframe-mini-micro

We've already seen this before in the transition from mainframes that fit in a building, to minicomputers that fit in a room, to microcomputers that fit on your desk. Now we're ready for the next stage: computers that don't just fit in your lap, they fit in your hand. The name of the game is no longer to make computers more powerful, but to radically reduce their size and power consumption without compromising the performance too much.

Laptop-tablet-phone

I mentioned how boring the performance scene has gotten for laptops and desktops. It's so boring that I can't be bothered to dig up representative benchmarks. Let's just assume that, outside of SSDs, there have been at best cost-of-living inflation type improvements in desktop and laptop benchmarks since 2008. Now contrast that with the hyperbolic performance improvement in the iPhone since 2008:

iPhone-performance-2008-2012

In case the graph didn't make it clear, in the last four years of iPhone, we've seen a factor of 20 improvement in Browsermark and a factor of four improvement in GeekBench. In the smartphone world, performance is compounding at roughly 40 percent a year in the worst case, and more than doubling every year in the best.
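
Here's the quick arithmetic behind those annualized rates, if you want to check them – the only inputs are the four-year improvement factors from the chart:

    # Annualized growth implied by an N-year improvement factor:
    # rate = factor ** (1 / years)

    years = 4
    for benchmark, factor in [("Browsermark", 20), ("GeekBench", 4)]:
        rate = factor ** (1 / years)
        print(f"{benchmark}: {rate:.2f}x per year")

    # Browsermark: 2.11x per year
    # GeekBench: 1.41x per year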

Ironically enough, these results were printed in PC magazine. I'd like to draw your attention to two little letters in the title of said magazine. The first one is Pee, and the second one is Cee. That's right, PC Magazine is now in the business of printing the kind of smartphone performance benchmarks that are enough to make any hotrodder drool. What does that have to do with PCs? Well, it has everything to do with PCs, actually.

I have an iPhone 5, and I can personally attest that it is crazy faster than the old iPhone 4 I upgraded from. Once you add in 4G, LTE, and 5 GHz WiFi support, it's so fast that – except for the obvious size limitations of a smaller screen – I find myself not caring that much if I get the "mobile" version of websites any more. Even before the speed, I noticed the dramatically improved display. AnandTech says that if the iPhone 5 display was a desktop monitor, it would be the best one they had ever tested. Our phones are now so damn fast and capable as personal computers that I'm starting to wonder why I don't just use the thing I always have in my pocket as my "laptop", plugging it into a keyboard and display as necessary.

So maybe MG Siegler is right. The PC is over … at least in the form that we knew it. We no longer need giant honking laptop and desktop form factors for computers any more than we need entire rooms and floors of a building to house mainframes and minicomputers.


Somebody is to Blame for This

September 27, 2012

This is not a post about programming, or being a geek. In all likelihood, this is not a post you will enjoy reading. Consider yourselves warned.

I don't remember how I found this Moth video of comedian Anthony Griffith.

It is not a fun thing to watch, especially as a parent. Even though I knew that before I went in, I willingly chose to watch this video. Then I watched it again. And again. And again. I watched it five times, ten times. I am all for leaning into the pain, but I started to wonder if maybe I was addicted to the pain. I think my dumb programmer brain was stuck in an endless loop trying to make sense out of what happened here.

But you don't make sense of a tragedy like this. You can't. There are no answers.

My humor is becoming dark, and it's biting, and it's becoming hateful. And the talent coordinator is seeing that there's a problem, because NBC is all about nice, and everything is going to be OK. And we're starting to buck horns because he wants everything light, and I want to be honest and tell life, and I'm hurting, and I want everybody else to hurt. Because somebody is to blame for this!

The unbearable grief demands that someone must be to blame for this unimaginably terrible thing that is happening to you, this deeply, profoundly unfair tragedy. But there's nobody. Just you and this overwhelming burden you've been given. So you keep going, because that's what you're supposed to do. Maybe you get on stage and talk about it. That's about all you can do.

So that's what I'm going to do.

Five weeks ago, I was selected for jury duty in a medical malpractice trial.

This trial was the story of a perfectly healthy man who, in the summer of 2008, was suddenly killed by a massive blood clot that made its way to his heart, after a surgery to repair a broken leg. Like me, he would have been 41 years old today. Like me, he married his wife in the summer of 1999. Like me, he had three children; two girls and a boy. Like me, he had a promising, lucrative career in IT.

I should have known I was in trouble during jury selection. When they called your name, you'd come up from the juror pool – about 50 people by my estimation – and sit in the jury box while both lawyers asked you some questions to determine if you'd be a fair and impartial juror for this trial. What I hadn't noticed at the time, because she was obscured by a podium, is that the wife was sitting directly in front of the jury. I heard plenty of people get selected and make up some bogus story about how they couldn't possibly be fair and impartial, just to get out of this five week obligation. And it worked, if they stuck to their story. But sitting there myself, in front of the wife of this dead man, I just couldn't do it. I couldn't bring myself to lie when I saw on her face that her desire not to be there was a million times more urgent than mine.

Now, I'm all for civic duty, but five weeks in a jury seemed like a bit more than my fair share. Even worse, I was an alternate juror, which meant all of the responsibility of showing up every day and listening, but none of the actual responsibility of contributing to the eventual verdict. I was expecting crushing boredom, and there was certainly plenty of that.

On day one, during opening remarks, we were treated to multiple, giant projected photographs of the three happy children with their dead father – directly in front of the very much still alive wife. She had to leave the courtroom at one point.

The first person we heard testimony from was this man's father, who was and is a practicing doctor. He was there when his son was rushed to the emergency room. He was allowed to observe as the emergency room personnel worked, so he described to the jury the medical process of treatment, his son thrashing around on the emergency room table being intubated, his heart stopping and being revived. As a doctor, he knows what this means.

On day two, we heard from the brother-in-law, also a doctor, and close friend of the family. He described coming home from the hospital to explain to the children that their father was dead, that he wasn't coming home. The kids were not old enough to understand what death means, so for a year afterward, every time they drove by the hospital, they would ask to visit their dad.

I did not expect to learn what death truly was in a courtroom in Martinez, California, at age 41. But I did. Death is a room full of strangers listening to your loved ones describe, in clinical detail and with tears in their eyes, your last moments. Boredom, I can deal with. This is something else entirely.

As a juror, you're ordered not to discuss the trial with anyone, so that you can form a fair and impartial opinion based on the shared evidence that everyone saw in the courtroom together. So I'm taking all this in and I'm holding it down, like I'm supposed to. But it's hard. I feel like becoming a parent has opened emotional doors in me that I didn't know existed, so it's getting to me.

Sometime later, the wife finally testifies. She explains that on the night of the incident, her husband finally felt well enough after the surgery on his right leg to read a bedtime story to their 4 year old son. So she happily leaves father and son to have their bedtime ritual together. Later, the son comes rushing in and tells her there's something wrong with dad, and the look on his face is enough to let her know that it's dire. She finds him collapsed on the floor of their son's room and calls 911.

A week later, I was putting our 4 year old son Henry to bed. I didn't realize it at the time, but this was the first time I had put him to bed since the trial started. Henry isn't quite old enough to have a stable sleep routine, so sometimes bedtime goes well, and sometimes it doesn't. It went well that particular night, so I'm happy lying there with him in the bed waiting for his breathing to become regular so I know he's fully asleep. And then the next thing I know I'm breaking down. Badly. I'm desperately trying to hold it together because I don't want to scare him, and he doesn't need to know about any of this. But I can't stop thinking about what it would feel like for my wife to see pictures of me with our children if I died. I can't stop thinking about what it would feel like to watch Henry die on an emergency room table at age 38. I can't stop thinking about what it would feel like to explain to someone else's children that their father is never coming home again. Most of all, I can't stop thinking about the other 4 year old boy who will never stop blaming himself because he saw his Dad collapse on the floor of his room, and then never saw him again for the rest of his life.

Somebody is to blame for this. Somebody must be to blame for this.

Now I urgently want this trial to be over. I'm struggling to understand the purpose of it all. Nothing we see or do in this courtroom is bringing a husband and father back from the dead. The plaintiff could be home with her children. The parade of doctors and hospital staff making their way through this courtroom could be helping patients. The jurors could be working at their jobs. My God how I would love to be doing my job rather than this, anything in the world other than this. A verdict for either party has immense cost. Nobody is in this courtroom because they want to be here. So why?

I don't know these people. I don't care about these people. I mean, it's in my job description as a juror: I am fair and impartial because I don't care what happens to them. But finally I realized that this trial is part of our ride.

We get on the ride because we know there will be thrills and chills. Nobody gets on a rollercoaster that goes in a straight line. That's what you sign up for when you get on the ride with the rest of us: there will be highs, and there will be lows. And those lows – whether they are, God forbid, your own, or someone else's – are what make the highs so sweet. The ride is what it is because the pain of those valleys teaches us.

Sharing this tragic, horrible, private thing that happened to these poor people is how we cope. Watching this play out in public, among your peers, among other fellow human beings, is what it takes for all of us to survive and move on. We're here in this courtroom together because we need to be here. It's part of the ride.

I've heard and seen things in that courtroom I think I will remember for the rest of my life. It's been difficult to deal with, though I am sure it is the tiniest reflected fraction of what you and your family went through. I am so, so sorry this happened to you. But I want to thank you for sharing it with me, because I now know that I am to blame. We're all to blame.

That's what makes us human.


The Last PC Laptop

September 23, 2012

I've been chasing the perfect PC laptop for over a decade now.

Though I've tolerated lugging around five to seven pound machines because I had to, laptops were always about portability first and foremost to me. I quickly gravitated to so-called ultraportable laptops as soon as they became available. My first was the 2003 Dell Inspiron 300M, the first laptop I found that delivered a decent 3-ish pound package without too many compromises. How I loved this little thing.

Inspiron-300m

But there was a downside to that 2003-era ultraportability – the default battery in the system provided about 2 hours of runtime. Switching to the larger battery extended that to a much more respectable 5.5 hours, but it also added a pound to the system and protruded from the rear a bit.

I've pursued the same dream of reasonable power with extreme portability ever since, with varying degrees of success. The PC industry isn't exactly known for its design leadership, and it can be downright schizophrenic at times. So if you were a fan of laptops that were actually thin and light and portable, it's been rough going for a long time. 2007's Dell XPS M1330 was a brief bright spot, but honestly, it's only in the last few months I've found an ultraportable that lived up to my expectations, one that I feel confident in recommending. That laptop is the Asus Zenbook Prime UX31A.

ASUS Zenbook Prime UX31A

Having lived with this laptop for about two months now, I can safely say it is without question the best PC laptop I've ever owned. Consider the Tech Report review and the Engadget review, both rave. Here's what you need to know:

  • Retina-esque 1920x1080 resolution in an amazingly high quality 13.3" IPS display
  • Intel's latest 17 watt Ivy Bridge processor with (finally!) decent integrated graphics
  • 128 GB SSD with fast 6Gbps interface
  • Just under 3 pounds
  • Decent 6 hour runtime
  • Classy brushed metal case and cover

All of this for about $1,050 at the time of writing. If you're suffering through a sub-par TN display on your current laptop, the awesome IPS display is almost worth an upgrade on its own. After switching to bargain Korean IPS displays on the desktop, I'm desperately hoping my poor eyeballs never have to endure another awful TN LCD display for the rest of my life.

This is a machine that pleasantly surprised me at every turn. The keyboard is solid feeling with a dimmable backlight, and the Achilles heel of all PC laptops, the trackpad, is about as good as it ever gets on PCs. Which is to say, still not great. Even the power adapter is classy, although highly derivative of Apple's. While this is substantially closer to the ideal ultraportable hardware I've had in my brain since 2003, it still exhibits some of the same problems I experienced with that Inspiron 300M almost 10 years ago:

  • An operating system pre-loaded with useless craplets and pointless bloatware, all in the name of hypothetical value add by the vendor and/or marketing subsidies.
  • Several branding stickers I had to peel off the machine after I opened the box. (Note that the press photos for a machine never include these ugly stickers. Go figure.)
  • A trackpad that works kinda-sorta OK, but never quite inspires enough confidence that I can stop carrying an external mouse around in my laptop bag with me.

The first thing I did when I got the laptop was wipe it and install the Windows 8 preview, and soon after updated it to the final Windows 8 release. Despite all the grousing about the tablet-centric nature of Windows 8 – some of which is warranted, but can easily be ignored entirely – I am an unabashed fan of the operating system. It is a big improvement over Windows 7 in my day to day use. The more I use Windows 8, the more I believe it's the biggest step forward in Windows since Windows 95. So what I've put together here is probably the closest thing to the Platonic ideal of Wintel laptop hardware you can buy in mid-2012.

(In the interests of full disclosure, I actually own two of these: one for my wife and one for me. Because I am an inveterate hotrodder, I had to have more memory and a larger, faster SSD. So I bought the UX32VD model, which has a discrete Nvidia 620M GPU and, most importantly, can be upgraded internally. I dropped in a Samsung 830 512 GB SSD and an 8 GB DIMM. This led to a slightly oddball final configuration of 10 GB RAM and an internal embedded 32 GB SSD plus the 512 GB SSD. It hurts battery life by at least an hour, too. You should also know that the teeny-tiny Torx screws on the back of this laptop are not to be trifled with. Bring your jeweler's loupe. In case it wasn't already abundantly clear, let me spell it out for you: going this route is not recommended unless you are as crazy as I am. The base model is really nice! Trust me!)

If pressed, I might admit the combination of ASUS Zenbook Prime hardware and modern Windows 8 amenities lives up to the whole Intel "Ultrabook" marketing schtick. But I'm not sure that's enough any more.

Every time I leave the house – heck, every time I leave the room – I have to decide what kind of computer I'm going to take with me, if any. Besides the ultraportable laptops, I now own an iPhone 5, several retina iPads, and a Nexus 7. I'm sure there are many more of these devices on the way. In the calculus of deciding what kind of computing device I want with me, even the most awesome ultraportable laptop I can find is no longer enough. Consider:

  • Want 10 hours of real world battery life? Even when doing actual work that would ramp the CPU up? Many tablets and phones can achieve that magical 10 hour battery life figure, but it will be a long, long time before you reliably get that out of any ultraportable laptop. Personally, I blame x86.

  • Want to start doing stuff immediately? Even Windows 8, which has radically improved wake times, is laughably slow to start up compared to tablets and phones which are practically instant-on by design.

  • Want the smallest, most portable device you can get away with? It's unlikely that will be a laptop, even an ultraportable, because of the implied keyboard and connectivity ports, plus the big screen and hinge. There is no form factor more compact than the touchscreen tablet. And you've got to take your phone along in any case, because that's how your family and loved ones will contact you, right? Have you seen the iPhone 5 benchmarks? It's faster than most tablets!

  • Want to be always connected to the Internet? Sure you do; how else can you get to Stack Overflow and Stack Exchange for all of life's essential questions? Then you probably need some kind of cellular support, for 3G or 4G or LTE or whatever the telephone companies are calling high speed Internet access these days. That is quite rare on traditional laptops, but obviously common on phones and much easier to find on tablets.

  • Want easy access? Just try opening a laptop on a crowded subway train or bus. Or with, say, 3 toddlers running around your house. I dare you. But phones and 7" tablets offer easy one handed operation; you can whip them out and fill whatever time you have available, whereas cracking open a laptop feels like a sizable commitment in time and space to doing something.

My laptop is increasingly a device I only take when I know I'll need to do a lot of typing, and/or I'll need a lot of screen space to work. But even a phone could do that if it had decent support for bluetooth keyboards and external displays, couldn't it? And even a few programmers, the audience who would most need all the power and flexibility of laptops, are switching to tablets.

I've waited 12 years for the PC industry to get its collective act together and, if nothing else, successfully copy Apple's laptop hardware designs. Now that they (mostly) have, I wonder: is it too late? Has the market irrevocably shifted underneath them while they were so busy pumping out endless refinements to generic x86 boxes? I love this new laptop, and in many ways it is the perfect ultraportable hardware I dreamed of having in 2003. But every time I power it up and use it, I feel a little sad. I can't shake the feeling that this might end up being the last PC laptop I ever own.


Computer Crime, Then and Now

September 12, 2012

I've already documented my brief, youthful dalliance with the illegal side of computing as it existed in the late 1980s. But was it crime? Was I truly a criminal? I don't think so. To be perfectly blunt, I wasn't talented enough to be any kind of threat. I'm still not.

There are two classic books describing hackers active in the 1980s who did have incredible talent. Talents that made them dangerous enough to be considered criminal threats.

The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage
Ghost in the Wires: My Adventures as the World's Most Wanted Hacker

Cuckoo describes arguably the first clearly malicious computer crime, circa 1986, and certainly the first known case of computer hacking as international espionage. I read this when it was originally published in 1989, and it's still a gripping investigative story. Cliff Stoll is a visionary writer who saw how trust in computers and the emerging Internet could be vulnerable to real, actual, honest-to-God criminals.

I'm not sure Kevin Mitnick did anything all that illegal, but there's no denying that he was the world's first high profile computer criminal.

Kevin Mitnick FBI wanted poster

By 1994 he made the FBI's 10 Most Wanted list, and there were front page New York Times articles about his pursuit. If there was ever a moment that computer crime and "hacking" entered the public consciousness as an ongoing concern, this was it.

The whole story is told in minute detail by Kevin himself in Ghost in the Wires. There was a sanitized version of Kevin's story presented in the Wizzywig comix, but this is the original, directly from the source, and it's well worth reading. I could barely put it down. Kevin has been fully reformed for many years now; he wrote several books documenting his techniques and now consults with companies to help improve their computer security.

These two books cover the genesis of all computer crime as we know it. Of course it's a much bigger problem now than it was in 1985, if for no other reason than that there are far more computers, far more interconnected, today than anyone could have possibly imagined in those early days. But what's really surprising is how little has changed in the techniques of computer crime since 1985.

The best primer on modern – and by that I mean year 2000 and later – computer crime is Kingpin: How One Hacker Took Over the Billion-Dollar Cybercrime Underground. Modern computer crime is more like the classic sort of crime you've seen in black and white movies: it's mostly about stealing large sums of money. But instead of busting it out of bank vaults Bonnie and Clyde style, it's now done electronically, mostly through ATM and credit card exploits.

Kingpin: How One Hacker Took Over the Billion-Dollar Cybercrime Underground

Written by Kevin Poulsen, another famous reformed hacker, Kingpin is also a compelling read. I've read it twice now. The passage I found most revealing is this one, written after the protagonist's release from prison in 2002:

One of Max’s former clients in Silicon Valley tried to help by giving Max a $5,000 contract to perform a penetration test on the company’s network. The company liked Max and didn’t really care if he produced a report, but the hacker took the gig seriously. He bashed at the company’s firewalls for months, expecting one of the easy victories to which he’d grown accustomed as a white hat. But he was in for a surprise. The state of corporate security had improved while he was in the joint. He couldn’t make a dent in the network of his only client. His 100 percent success record was cracking.

Max pushed harder, only becoming more frustrated over his powerlessness. Finally, he tried something new. Instead of looking for vulnerabilities in the company’s hardened servers, he targeted some of the employees individually.

These “client side” attacks are what most people experience of hackers—a spam e-mail arrives in your in-box, with a link to what purports to be an electronic greeting card or a funny picture. The download is actually an executable program, and if you ignore the warning message …

All true; no hacker today would bother with frontal assaults. The chance of success is minuscule. Instead, they target the soft, creamy underbelly of all companies: the users inside. Max, the hacker described in Kingpin, bragged "I've been confident of my 100 percent [success] rate ever since." This is the new face of hacking. Or is it?

One of the most striking things about Ghost in the Wires is not how skilled a computer hacker Kevin Mitnick is (although he is undeniably great), but how devastatingly effective he is at tricking people into revealing critical information in casual conversations. Over and over again, in hundreds of subtle and clever ways. Whether it's 1985 or 2005, the amount of military-grade security you have on your computer systems matters not at all when someone using those computers clicks on the dancing bunny. Social engineering is the most reliable and evergreen hacking technique ever devised. It will outlive us all.

For a 2012 era example, consider the story of Mat Honan. It is not unique.

At 4:50 PM, someone got into my iCloud account, reset the password and sent the confirmation message about the reset to the trash. My password was a 7 digit alphanumeric that I didn’t use elsewhere. When I set it up, years and years ago, that seemed pretty secure at the time. But it’s not. Especially given that I’ve been using it for, well, years and years. My guess is they used brute force to get the password and then reset it to do the damage to my devices.

I heard about this on Twitter when the story was originally developing, and my initial reaction was skepticism that anyone had bothered to brute force anything at all, since brute forcing is for dummies. Guess what it turned out to be. Go ahead, guess!

Did you by any chance guess social engineering … of the account recovery process? Bingo.

After coming across my [Twitter] account, the hackers did some background research. My Twitter account linked to my personal website, where they found my Gmail address. Guessing that this was also the e-mail address I used for Twitter, Phobia went to Google’s account recovery page. He didn’t even have to actually attempt a recovery. This was just a recon mission.

Because I didn’t have Google’s two-factor authentication turned on, when Phobia entered my Gmail address, he could view the alternate e-mail I had set up for account recovery. Google partially obscures that information, starring out many characters, but there were enough characters available, m••••n@me.com. Jackpot.

Since he already had the e-mail, all he needed was my billing address and the last four digits of my credit card number to have Apple’s tech support issue him the keys to my account.

So how did he get this vital information? He began with the easy one. He got the billing address by doing a whois search on my personal web domain. If someone doesn’t have a domain, you can also look up his or her information on Spokeo, WhitePages, and PeopleSmart.

Getting a credit card number is trickier, but it also relies on taking advantage of a company’s back-end systems. … First you call Amazon and tell them you are the account holder, and want to add a credit card number to the account. All you need is the name on the account, an associated e-mail address, and the billing address. Amazon then allows you to input a new credit card. (Wired used a bogus credit card number from a website that generates fake card numbers that conform with the industry’s published self-check algorithm.) Then you hang up.

Next you call back, and tell Amazon that you’ve lost access to your account. Upon providing a name, billing address, and the new credit card number you gave the company on the prior call, Amazon will allow you to add a new e-mail address to the account. From here, you go to the Amazon website, and send a password reset to the new e-mail account. This allows you to see all the credit cards on file for the account — not the complete numbers, just the last four digits. But, as we know, Apple only needs those last four digits.

Phobia, the hacker Mat Honan documents, was a minor who did this for laughs. One of his friends is a 15-year-old hacker who goes by the name of Cosmo; he's the one who discovered the Amazon credit card technique described above. And what are teenage hackers up to these days?

Xbox gamers know each other by their gamertags. And among young gamers it’s a lot cooler to have a simple gamertag like “Fred” than, say, “Fred1988Ohio.” Before Microsoft beefed up its security, getting a password-reset form on Windows Live (and thus hijacking a gamer tag) required only the name on the account and the last four digits and expiration date of the credit card on file. Derek discovered that the person who owned the “Cosmo” gamer tag also had a Netflix account. And that’s how he became Cosmo.

“I called Netflix and it was so easy,” he chuckles. “They said, ‘What’s your name?’ and I said, ‘Todd [Redacted],’ gave them his e-mail, and they said, ‘Alright your password is 12345,’ and I was signed in. I saw the last four digits of his credit card. That’s when I filled out the Windows Live password-reset form, which just required the first name and last name of the credit card holder, the last four digits, and the expiration date.”

This method still works. When Wired called Netflix, all we had to provide was the name and e-mail address on the account, and we were given the same password reset.

The techniques are eerily similar. The only difference between Cosmo and Kevin Mitnick is that they were born in different decades. Computer crime is a whole new world now, but the techniques used today are almost identical to those used in the 1980s. If you want to engage in computer crime, don't waste your time developing ninja level hacking skills, because computers are not the weak point.

People are.


I Was a Teenage Hacker

August 8, 2012

Twenty-four years ago today, I had a very bad day.

On August 8, 1988, I was a senior in high school. I was working my after-school and weekend job at Safeway as a cashier when the store manager suddenly walked over and said I'd better stop ringing up customers and talk to my mother on the store phone right now. Mom told me to come home immediately because, well, there were police at the front door asking for me with some legal papers in hand.

He did unlawfully between June 7, 1988 and June 8, 1988 use a computer or computer network without authority and with the intent to temporarily or permanently remove computer data, in violation of Section 18.2-152.4 of the 1950 Code of Virginia, as amended.

Like I said, definitely not a good day. The only sliver of good news was that I was still 17 at the time, so I enjoyed the many protections that the law provides to a minor. Which I shall now throw away by informing the world that I am a dirty, filthy, reprehensible adult criminal. Thanks, law!

One of the problems you had in the pre-Internet 1980s as a hardcore computer geek was that all the best bulletin boards and online services were kind of expensive. Either because you had to pay an hourly fee to access them, like CompuServe, or because they were a long distance modem call. Or both. Even after the 1984 AT&T breakup, long distance at around 20-30 cents a minute was a far, far cry from today's rates. (Does anyone actually even worry about how much voice calls cost any more, to anywhere in the world? This, my friends, is progress.)

Remember, too, that this is back when 9600 baud modems were blazing, state-of-the-art devices. For perspective, the ultra-low-power Bluetooth radio on your phone is about 80 times faster. If you wanted to upload or download any warez software, that meant potentially hours on your modem at rates of around $20/hour. Adjusted for inflation, that's closer to $40 in 2012 dollars. My family wasn't well off enough to afford a second telephone line, so most of my calling was done late at night, both because the rates were lower and so that I wouldn't be monopolizing the telephone. Nothing was worse than the dreaded "mom picked up the phone" disconnect from an elite, difficult-to-access BBS with limited slots.

One way or another, I eventually got involved with the seedier side of the community, even joining a lesser Apple // pirate group. Probably my main claim to fame is that while trolling BBSes, I personally discovered and recruited a guy who turned out to be an amazing cracker. He was so good he eventually got recruited away.

Psi-5-trading-company

I was, at best, a footnote to a footnote to a footnote in Apple // history. This was mainly a process of self-discovery for me. I learned I was the type of geek who doesn't even bother attending his high school prom, partially because I was still afraid of girls even as a high school senior, yes, but mainly because I was so addicted to computers and playing my tiny role in these nascent online communities. I was, and am, OK with that. This is the circuitous path of 30 years that led me to create Stack Overflow. And there's more, so much more, but I can't talk about it yet.

But addicted, I think, is too weak a word for what I felt about being a part of these oddball, early online home computer communities. It was more like an all-consuming maniacal blood lust. So obtaining access to free, unlimited long distance calling rapidly became an urgent priority in my teenage life. I needed it. I needed it so bad. I had to have it to talk on the phone to the other members of my motley little crew, who were spread all over the USA, as well as for calling BBSes.

I can't remember exactly how I found it, probably on one of the BBSes, but I eventually discovered a local 804 area code number for "calling cards" that accepted a 5 digit PIN, entered via touch-tone phone. Try over and over, and you might find some valid PIN codes that let you attain the holy grail of free long distance calling. Only one small problem: it's a crime. But, at least to my addled teenage brain, this was a victimless crime, one that I had to commit. The spice must flow!

All I had to do was write software to tell the modem to dial over and over and try different combinations. Because I was a self-taught programmer, this was no problem. But because I was an overachieving self-taught programmer, I didn't just write a program. No, I went off and built a full-blown toolkit in AppleBasic, with complete documentation and the best possible text user interface I could muster, and then uploaded it to my favorite BBSes so every other addict could get their online modem fix, too. I called it The Hacking Construction Set, and I spent months building it. I didn't just gold-plate it, I platinum-plated this freaking thing, man. (Yes, I know the name isn't really correct. I read as many 2600 textfiles as the next guy. This is mere phreaking, not hacking, but I guess I was shooting for poetic license. Maybe you could use the long distance dialing codes to actually hack remote machines, or something.)
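
The dialing logic itself was brute force in the most literal sense: try every possible PIN, note the ones the switch accepts. Here's a minimal sketch of the idea in modern Java, purely for illustration – the Modem interface and every name in it are invented here, and the original was AppleBasic driving real modem hardware:

// A hypothetical sketch of the brute force dialing loop described above.
// Nothing here is the original code; the Modem interface is invented.
interface Modem {
    void dial(String number);
    void sendTouchTones(String digits);
    boolean heardConfirmationTone();   // did the switch accept the code?
    void hangUp();
}

static java.util.List<String> findValidCodes(Modem modem, String accessNumber) {
    java.util.List<String> valid = new java.util.ArrayList<>();
    for (int pin = 0; pin <= 99999; pin++) {
        String candidate = String.format("%05d", pin);   // zero-padded 5-digit PIN
        modem.dial(accessNumber);
        modem.sendTouchTones(candidate);
        if (modem.heardConfirmationTone()) {
            valid.add(candidate);                        // record the working code
        }
        modem.hangUp();
    }
    return valid;
}

At 100,000 possible combinations and one phone call per attempt, you can see why this sort of thing ran unattended, late at night, for a very long time.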

I never knew if anyone else ever used my little program to dial for calling codes. It certainly worked for me, and I tried my level best to make it work for all the possible dialing situations I could think of. It even had an intro screen with music and graphics of my own creation. But searching now, for the first time in 24 years, I found my old Hacking Construction Set disk image on an Apple ROM site. It even has real saved numbers in the dialing list! Someone was using my illicit software!

Hacking-construction-set

If you're curious, fire up your favorite Apple // emulator and give the disk image a spin. Don't forget to connect your modem. There's full-blown documentation accessible from the main menu. Which, re-reading it now, was actually not half bad, if I do say so myself:

Hacking-construction-set-docs
Hacking-construction-set-docs-2

I used to regularly call BBSes in Florida, California, and Missouri? That's news to me; I haven't seen any of this stuff in over 24 years! All I did was upload a disk image to a few BBSes in 1986. After all that time, to discover that someone used and loved my little bit of software still gives me a little thrill. What higher praise is there for a software developer?

About that trouble. Using my own software got me in trouble with the law. And deservedly so; what I wrote the software to do was illegal. I hired a local lawyer to represent me (who, as I recall, was missing a hand; his prosthetic was almost impossible not to look at). It was quite clear at preliminary hearings that the Chesterfield County court system did not see any computer crime cases, and they had absolutely no idea what to make of me, or what this was all about. All they saw was a smart kid with a bit of bad judgment who loved computers and was headed to the University of Virginia, not to a life as a career criminal. So the case was dismissed for the cost of lawyer's fees. Which, for the record, I had to pay myself, using my income as a Safeway cashier.

This was definitely a wake-up call for me. In the summer of 1988, I was about to graduate from high school, and I thought I'd try being just a regular guy at college, with less of the obsessive focus on computers that had gotten me into trouble with the law, and perhaps spread my wings to other interests. Who knows, maybe even girls!

That didn't last long. Because after all these years, I must confess I've grown to love my own bad judgment. It's led me to the most fascinating places.


Today is Goof Off at Work Day

August 2, 2012

When you're hired at Google, you only have to do the job you were hired for 80% of the time. The other 20% of the time, you can work on whatever you like – provided it advances Google in some way. At least, that's the theory.

Google's 20 percent time policy is well known in software engineering circles by now. What's not as well known is that this concept dates all the way back to 1948 at 3M.

In 1974, 3M scientist Art Fry came up with a clever invention. He thought if he could apply an adhesive (dreamed up by colleague Spencer Silver several years earlier) to the back of a piece of paper, he could create the perfect bookmark, one that kept place in his church hymnal. He called it the Post-It Note. Fry came up with the now iconic product (he talks to the Smithsonian about it here) during his "15 percent time," a program at 3M that allows employees to use a portion of their paid time to chase rainbows and hatch their own ideas. It might seem like a squishy employee benefit. But the time has actually produced many of the company's best-selling products and has set a precedent for some of the top technology companies of the day, like Google and Hewlett-Packard.

There's not much documentation on HP's version of this; when I do find mentions of it, it's always referred to as a "convention", not an explicit policy. Robert X. Cringely provides more detail:

Google didn’t invent that: HP did. And the way the process was instituted at HP was quite formal in that the 10 percent time was after lunch on Fridays. Imagine what it must have been like on Friday afternoons in Palo Alto with every engineer working on some wild-ass idea. And the other part of the system was that those engineers had access to what they called “lab stores” — anything needed to do the job, whether it was a microscope or a magnetron or a barrel of acetone could be taken without question on Friday afternoons from the HP warehouses. This enabled a flurry of innovation that produced some of HP’s greatest products including those printers.

Maybe HP did invent this, since they've been around since 1939. Dave Raggett, for example, apparently played a major role in inventing HTML on his 10% time at HP.

Although the concept predates Google, they've done more to validate it as an actual strategy and popularize it in tech circles than anyone else. Oddly enough, I can't find any mention of the 20% time benefit listed on the current Google jobs page, but it's an integral part of Google's culture. And for good reason: notable 20 percent projects include GMail, Google News, Google Talk, and AdSense. According to ex-employee Marissa Mayer, as many as half of Google's products originated from that 20% time.

At Hewlett-Packard, 3M, and Google, "many" of their best and most popular products came from the thin sliver of time they granted employees to work on whatever they wanted. What does this mean? Should we all be goofing off more at work and experimenting with our own ideas? That's what the book The 20% Doctrine explores.

The 20% Doctrine: How tinkering, goofing off, and breaking the rules at work drive success in business

Closely related to 20% time is the Hack Day. Hack Days carve out a specific 24-hour block from the schedule, encouraging large groups to come together to work collaboratively (or in friendly competition) during that period. Chad Dickerson instituted one of the first at Yahoo in 2005.

The Friday before, I had organized the first internal Hack Day at Yahoo! with the help of a loosely-organized band of people around the company. The “hack” designation for the day was a tip of the hat to hacker culture, but also a nod to the fact that we were trying to fix a system that didn’t work particularly well. The idea was really simple: all the engineers in our division were given the day off to build anything they wanted to build. The only rules were to build something in 24 hours and then show it at the end of the period. The basic structure of the event itself was inspired by what we had seen at small startups, but no one had attempted such an event at a large scale at an established company.

The first Yahoo! Hack Day was clearly a success. In a company that was struggling to innovate, about seventy prototypes appeared out of nowhere in a single 24-hour period and they were presented in a joyfully enthusiastic environment where people whooped and yelled and cheered. Sleep-deprived, t-shirt-clad developers stayed late at work on a Friday night to show prototypes they had built for no other reason than they wanted to build something. In his seminal book about open source software, The Cathedral and the Bazaar, Eric Raymond wrote: “Every good work of software starts by scratching a developer’s personal itch.” There clearly had been a lot of developer itching around Yahoo! but it took Hack Day to let them issue a collective cathartic scratch.

Atlassian's version, a quarterly ShipIt Day, also dates back to 2005. Interestingly, they also attempted to emulate Google's 20% time policy with mixed results.

Far and away, the biggest problem was scheduling time for 20% work. As one person put it, “Getting 20% time is incredibly difficult amongst all the pressure to deliver new features and bug fixes.” Atlassian has frequent product releases, so it is very hard for teams to schedule ‘down time’. Small teams in particular found it hard to afford time away from core product development. This wasn’t due to Team Leaders being harsh. It was often due to developers not wanting to increase the workload on their peers while they did 20% work. They like the products they are developing and are proud of their efforts. However, they don’t want to be seen as enjoying a privilege while others carry the workload.

I think there's enough of a track record of documented success that it's worth lobbying for something like Hack Days or 20% time wherever you work. But before you do, consider if you and your company are ready:

  1. Is there adequate slack in the schedule?

    You can't realistically achieve 20% time, or even a single measly hack day, if there's absolutely zero slack in the schedule. If everyone around you is working full-tilt boogie as hard as they can, all the time, that's … probably not healthy. Sure, everyone has crunch times now and then, but if your work environment feels like constant crunch time, you'll need to deal with that first. For ammunition, try Tom DeMarco's book Slack.

  2. Does daydreaming time matter?

    If anyone gets flak for not "looking busy", your company's work culture may not be able to support an initiative like this. There has to be buy-in at the pointy-haired-boss level that time spent thinking and daydreaming is a valid part of work. Daydreaming is not the antithesis of work; on the contrary, creative problem solving requires it.

  3. Is failure accepted?

    When given the freedom to "work on whatever you want", the powers that be have to really mean it for the work to matter. Mostly that means providing employees the unfettered freedom to fail miserably at their skunkworks projects, sans repercussion or judgment. Without failure – and lots of it – there can be no innovation or true experimentation. The value of (quickly!) learning from failures and moving on is enormous.

  4. Is individual experimentation respected?

    If there isn't a healthy respect for individual experimentation versus the neverending pursuit of the Next Thing on the collective project task list, these initiatives are destined to fail. You have to truly believe, as a company, and as peers, that crucial innovations and improvements can come from everyone at the company at any time, in bottom-up fashion – they aren't delivered from on high at scheduled release intervals in the almighty Master Plan.

Having some official acknowledgement that time spent working on whatever you think will make things better around these here parts is not merely tolerated, but actively encouraged, might go a long way towards making work feel a lot less like work.


The IPS LCD Revolution

July 26, 2012

When I wrote about TN LCD panels 5 years ago, I considered them acceptable, despite their overall mediocrity, mostly due to the massive price difference.

Unfortunately, the vast majority of LCDs on the market now are TN. You can opt to pay a little bit more for one of the few models with *VA – if there are any available in the size you want. *-IPS is widely considered the best all around LCD display technology, but it is rapidly being pushed into the vertical "pro" graphics designer market due to the big jump in price. It's usually not an option, unless you're willing to pay more than twice as much for a monitor.

But when the $499 iPad 3 delivered an amazingly high resolution IPS panel of almost reference quality, I found myself a whole lot less satisfied with the 27" TN LCDs on my desktop. And on my laptop. And everywhere else in my life.

I'll spare you all the exposition and jump to the punchline. I am now the proud owner of three awesome high resolution (2560x1440) 27" IPS LCDs, and I paid less than a thousand dollars for all three of them.

Three Korean LCDs

(If you're curious about the setup, I use Ergotron monitor arms to fit everything in there.)

I won't deny that it is a little weird, because everything is in Korean. I replaced the Korean 3-prong power cord in the power brick with a regular US power cord I had lying around. But a monitor is a monitor, and the IPS panel is stunning. The difference between TN and IPS is vast in every measurable dimension. No bad pixels on these three panels, either. Although, as my friend Scott Wasson of Tech Report fame says, "every pixel on a TN panel is a bad pixel".

How is this possible? You can thank Korea. All three of these monitors were ordered from Korean eBay vendors, where a great 27" IPS LCD goes for the equivalent of around $250 in local currency. They tack on $100 for profit and shipping to the USA, then they're in business. It's definitely a grey market, but something is clearly out of whack, because no domestic monitor of similar quality and size can be had for anything under $700.

I wanted to get this out there, because I'm not sure how long this grey market will last, and these monitors are truly incredible deals. Heck, it's worth it just to get out of the awful TN display ghetto most of us are stuck in. Scott Wasson got the exact same model of Korean LCD I did, and his thorough review concludes:

Even with those last couple of quirks uncovered, I still feel like I won this thing in a drawing or something. $337 for a display of this quality is absolutely worth it, in my view. You just need to keep your eyes open to the risks going into the transaction, risks I hope I've illustrated in the preceding paragraphs. In many ways, grabbing a monitor like this one on the cheap from eBay is the ultimate tinkerer's gambit. It's risky, but the payoff is huge: a combination of rainbow-driven eye-socket ecstasy and the satisfying knowledge that you paid less than half what you might pay elsewhere for the same experience.

There are literally dozens of variants of these Korean 27" LCDs, but the model I got is the FSM-270YG. Before you go rushing off to type ebay.com in your browser address bar, remember that these are bare-bones monitors being shipped from Korea. They work great, don't get me wrong, but they are the definition of no-frills:

  • Build quality is acceptable, but it's hardly Jony Ive Approved™.
  • These are glossy panels. Some other variants offer matte, if that's your bag.
  • They only support basic dual-link DVI inputs, and nothing – I mean nothing – else.
  • There is no on-screen display. The only functional controls are power and brightness (this one caught me out; you must hold down the brightness adjustment for many, many seconds before you see a change.)

Although the noise-to-signal ratio is off the charts, it might be worth visiting the original overclock.net thread on these inexpensive Korean monitors. There's some great info buried in there, if you can manage to extract it from the chaos. And if you're looking for a teardown of this particular FSM-270YG model (minus the OSD, though), check out the TFT Central review.

In the past, I favored my wallet over my eyes, and chose TN. I now deeply regret that decision. But the tide is turning, and high quality IPS displays are no longer extortionately expensive, particularly if you buy them directly from Korea. Is it a little risky? Sure, but all signs point to the risk being fairly low.

In the end, I decided my eyes deserve better than TN. Maybe yours do too.


But You Did Not Persuade Me

July 23, 2012

One of my favorite movie scenes in recent memory is from The Last King of Scotland, a dramatized "biography" of the megalomaniacal dictator Idi Amin, as seen through the eyes of a fictional Scottish personal physician.

Idi Amin: I want you to tell me what to do!
Garrigan: You want ME to tell YOU what to do?
Amin: Yes, you are my advisor. You are the only one I can trust in here. You should have told me not to throw the Asians out, in the first place!
Garrigan: I DID!
Amin: But you did not persuade me, Nicholas. You did not persuade me!

If you haven't watched this movie yet, you should. It is amazing. (For trivia buffs, this is the video clip that prompted me to write YouTube vs. Fair Use. The kind folks at vive.ly originally offered to host this fair use video clip, and I took them up on that offer, because I still can't find anywhere to host it.)

What I love about this tour de force of a scene – beyond the incredible acting – is that it illustrates just how powerful a force persuasion really is. In the hands of a madman or demagogue, dangerously powerful. Hopefully you don't deal with too many insane dictators on a daily basis, but the reason this scene works so well is the unavoidable truth it exposes: to have any hope of influencing others, you must be able to persuade them.

Steve Yegge is as accomplished a software engineer as I can think of. I was amazed to hear him tell us repeatedly and at length on a podcast that the one thing every software engineer should know is not how to write amazing code, but how to market themselves and their project. What is marketing except persuasion?

Marc Hedlund, who founded Wesabe and is now the VP of Engineering at Etsy, thinks of himself not as a CEO or boss, but as the Lobbyist-in-Chief. I believe that could be re-written as Persuader-in-Chief with no loss of meaning or nuance.

I was recently asked how I run our development team. I said, “Well, basically I blog about something I think we should do, and if the blog post convinces the developers, they do it. If not, I lobby for it, and if that fails too, the idea falls on the floor. They need my approval to launch something, but that’s it. That’s as much ‘running things’ as I do, and most of the ideas come from other people at this point, not from me and my blog posts. I’ve argued against some of our most successful ideas, so it’s a good thing I don’t try to exert more control.”

I’m exaggerating somewhat; of course I haven’t blogged about all of our ideas yet. But I do think of myself as Lobbyist-in-Chief, and I have lots of good examples of cases where I failed to talk people into an idea and it didn’t happen as a result. One person I said this to asked, “So who holds the product vision, then?” and I replied, “Well, I guess I do,” but really that’s not right. We all do. The product is the result of the ideas that together we’ve agreed to pursue. I recruit people based on their interest in and enthusiasm about the ideas behind Wesabe, and then set them loose, and we all talk and listen constantly. That’s how it works — and believe it or not, it does work.

So how do we persuade? Primarily, I think, when we lead by example. Even if that means getting down on your knees and cleaning a toilet to show someone else how it's done. But maybe you're not a leader. Maybe you're just a lowly peon. Even as a peon, it's still possible to persuade your team and those around you. A commenter summarized this grassroots method of persuasion nicely:

  • His ideas were, on the whole, pretty good.
  • He worked mostly bottom-up rather than top-down.
  • He worked to gain the trust of others first by dogfooding his own recommendations before pushing them on others.
  • He was patient and waited for the wheels to turn.

Science and data are among the best ways to be objectively persuasive, but remember that data alone isn't the reductionist end of every single topic. Beware the 41 shades of blue pitfall.

Yes, it’s true that a team at Google couldn’t decide between two blues, so they’re testing 41 shades between each blue to see which one performs better. I had a recent debate over whether a border should be 3, 4 or 5 pixels wide, and was asked to prove my case. I can’t operate in an environment like that. I’ve grown tired of debating such minuscule design decisions. There are more exciting design problems in this world to tackle.

If I measure by click data alone, all Internet advertising should have breasts in it. Incorporate data, by all means. But you need to tell a bigger, grander, more inspiring story than that to be truly persuasive.

I re-read Letter from a Birmingham Jail every year because I believe it is the single best persuasive essay I've ever read. It is remarkably persuasive without ever resorting to anger, incivility, or invective. Read it now. But do more than just read; study it. How does it work? Why does it work? Does it cite any data? What techniques make this essay so incredibly compelling?

Letter-from-birmingham-jail

Nobody ever changed anything by remaining quiet, idly standing by, or blending into the faceless, voiceless masses. If you ever want to effect change, in your work, in your life, you must learn to persuade others.


New Programming Jargon

July 20, 2012

Stack Overflow – like most online communities I've studied – naturally trends toward increased strictness over time. It's primarily a defense mechanism, an immune system of the sort a child develops after first entering school or daycare and being exposed to the wide, wide world of everyday sneezes and coughs with the occasional meningitis outbreak. It isn't always a pleasant process, but it is, unfortunately, a necessary one if you want to survive.

Consider this question from two years ago:

New programming jargon you coined?

What programming terms have you coined that have taken off in your own circles (i.e. have heard others repeat it)? It might be within your own team, workplace or garnered greater popularity on the Internet.

Write your programming term, word or phrase in bold text followed by an explanation, citation and/or usage example so we can use it in appropriate context.

Don't repeat common jargon already ingrained in the programming culture like: kludge, automagically, cruft, etc. (unless you coined it).

This question serves in the spirit of communication among programmers through sharing of terminology with each other, to benefit us by its propagation within our own teams and environments.

Is this even a question, really? How many answers does it have?

Three hundred and eighty-six!

A question that invites 386 different "answers" isn't a question at all. It's an opinion survey, a poll, a List of X. I suppose you could argue that reading through all those responses would teach you something about programming, but it was pretty clear that the bulk of the responses were far more about laughs and GTKY (Getting to Know You) than learning. That's why it was eventually deleted by experienced Stack Overflow community members. Although it is somewhat borderline in terms of learning, and I didn't personally vote to delete it, I tend to agree that it was correctly deleted. Though opinions vary.

I won't bore you with the entire history, our so-called "war on fun", and the trouble with popularity. Ultimately, Stack Overflow is a college, not a frat house. All the content on the site must exist to serve the mission of learning over entertainment – even if that means making difficult calls about removing some questions and answers that fail to meet those goals, plus or minus 10 percent.

In terms of programmer culture, though, there is precedent in the form of The Jargon File. Unfortunately, we don't have a good designated place for deleted "too fun" questions to live, but all Stack Exchange content is licensed under Creative Commons in perpetuity. Which means, with proper attribution, we can give it a permanent home on our own blogs. So I did. I've collected the top 30 Stack Overflow New Programming Jargon entries below, as judged by the Stack Overflow community. Enjoy.*

1. Yoda Conditions

zneak

Yoda-conditions

Using if(constant == variable) instead of if(variable == constant), like if(4 == foo). Because it's like saying "if blue is the sky" or "if tall is the man".
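
The joke ordering has a practical payoff, too. In C, mistyping if (foo == 4) as if (foo = 4) compiles and silently assigns, while if (4 = foo) is a compile error; and in Java, putting the literal first sidesteps null dereferences. A minimal sketch – the method and variable names here are invented for illustration:

// Yoda order as a null guard in Java: the string literal can never be null,
// so the comparison cannot throw a NullPointerException.
static boolean isAdmin(String role) {    // role may be null
    // return role.equals("admin");      // conventional order: NPE when role is null
    return "admin".equals(role);         // Yoda order: simply false when role is null
}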

2. Pokémon Exception Handling

woot4moo

Pokemon

For when you just Gotta Catch 'Em All.

try {
}
catch (Exception ex) {
   // Gotcha!
}

3. Egyptian Brackets

computronium

Egyptian

You know the style of brackets where the opening brace goes on the end of the current line, e.g. this?

if (a == b) {
    printf("hello");
}

We used to refer to this style of brackets as "Egyptian brackets". Why? Compare the position of the brackets with the hands in the picture. (This style of brackets is used in Kernighan and Ritchie's book The C Programming Language, so it's known by many as K&R style.)

4. Smug Report

aaronaught

Pathreport-med

A bug submitted by a user who thinks he knows a lot more about the system's design than he really does. Filled with irrelevant technical details and one or more suggestions (always wrong) about what he thinks is causing the problem and how we should fix it.

Also related to Drug Report (a report so utterly incomprehensible that whoever submitted it must have been smoking crack), Chug Report (where the submitter is thought to have had one too many), and Shrug Report (a bug report with no error message or repro steps and only a vague description of the problem; usually contains the phrase "doesn't work").

5. A Duck

kyoryu

Duck-wireframe

A feature added for no other reason than to draw management attention and be removed, thus avoiding unnecessary changes in other aspects of the product.

I don't know if I actually invented this term or not, but I am certainly not the originator of the story that spawned it.

This started as a piece of Interplay corporate lore. It was well known that producers (a game industry position, roughly equivalent to PMs) had to make a change to everything that was done. The assumption was that subconsciously they felt that if they didn't, they weren't adding value.

The artist working on the queen animations for Battle Chess was aware of this tendency, and came up with an innovative solution. He did the animations for the queen the way that he felt would be best, with one addition: he gave the queen a pet duck. He animated this duck through all of the queen's animations, had it flapping around the corners. He also took great care to make sure that it never overlapped the "actual" animation.

Eventually, it came time for the producer to review the animation set for the queen. The producer sat down and watched all of the animations. When they were done, he turned to the artist and said, "That looks great. Just one thing – get rid of the duck."

6. Refuctoring

Jason Gorman

Bottle-smashing

The process of taking a well-designed piece of code and, through a series of small, reversible changes, making it completely unmaintainable by anyone except yourself.

7. Stringly Typed

Mark Simpson

Cat-string-values

A riff on strongly typed. Used to describe an implementation that needlessly relies on strings when programmer- and refactor-friendly options are available.

For example:

  • Method parameters that take strings when other more appropriate types should be used.
  • On the occasion that a string is required in a method call (e.g. network service), the string is then passed and used throughout the rest of the call graph without first converting it to a more suitable internal representation (e.g. parse it and create an enum, then you have strong typing throughout the rest of your codebase).
  • Message passing without using typed messages etc.

Excessively stringly typed code is usually a pain to understand and detonates at runtime with errors that the compiler would normally find.
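
For a concrete contrast, here's a minimal Java sketch – the OrderStatus type and method names are invented for illustration – of the same check done stringly and strongly:

enum OrderStatus { PENDING, SHIPPED, DELIVERED }

// Stringly typed: a typo like "pnding" compiles fine and fails silently at runtime.
static boolean canCancelStringly(String status) {
    return "pending".equals(status);
}

// Strongly typed: parse the string once at the system boundary...
static OrderStatus parseStatus(String raw) {
    return OrderStatus.valueOf(raw.trim().toUpperCase());   // fails fast on junk input
}

// ...and the compiler catches bad states everywhere else.
static boolean canCancel(OrderStatus status) {
    return status == OrderStatus.PENDING;
}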

8. Heisenbug

unknown

Heisenbug

A computer bug that disappears or alters its characteristics when an attempt is made to study it. (Wikipedia)
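
The classic breeding ground for one is an unsynchronized data race, where the act of observing perturbs thread timing. A contrived Java sketch, mine rather than the original answer's:

// Without volatile or synchronization, the reader thread may never see the
// update and spin forever -- yet adding a println inside the loop often
// "fixes" it, because the I/O perturbs timing and inhibits JIT optimizations.
public class Heisenbug {
    static boolean ready = false;   // deliberately not volatile
    static int value = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { }              // may spin forever
            System.out.println(value);      // may legally print 0 or 42
        });
        reader.start();
        value = 42;
        ready = true;
        reader.join();
    }
}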

9. Doctype Decoration

Zurahn

Charlie-brown-christmas-tree

When web designers add a doctype declaration but don't bother to write valid markup.

<!DOCTYPE html>
<BLINK>Now on sale!</BLINK>

10. Jimmy

Gord

Jimmy

A generalized name for the clueless/new developer.

Found as we were developing a framework component that required minimal knowledge of how it worked for the other developers. We would always phrase our questions as: "What if Jimmy forgets to update the attribute?"

This led to the term: "Jimmy-proof" when referring to well designed framework code.

11. Higgs-Bugson

gingerbreadboy

Higgs-boson-guy

A hypothetical bug predicted to exist based on a small number of possibly related event log entries and vague anecdotal reports from users, but it is difficult (if not impossible) to reproduce on a dev machine because you don't really know if it's there, and if it is there what is causing it. (see Higgs-Boson)

12. Nopping

Stanislav

Statue-napping

I'm writing a scifi novel from the POV of an AI, and their internal language has a lot of programming jargon in it. One of the more generalizable terms is "nopping", which comes from assembler NOP for no-operation. It's similar to 'nap', but doesn't imply sleep, just zoning out. "Stanislav sat watching the screensaver and nopped for a while."

13. Unicorny

Yehuda Katz

Stack-overflow-unicorn

An adjective to describe a feature that's so early in the planning stages that it might as well be imaginary. We cribbed this one from Yehuda Katz, who used it in his closing keynote at last year's Windy City Rails to describe some of Rails' upcoming features.

14. Baklava Code

John D. Cook

Baklava

Code with too many layers.

Baklava is a delicious pastry made with many paper-thin layers of phyllo dough. While thin layers are fine for a pastry, thin software layers don’t add much value, especially when you have many such layers piled on each other. Each layer has to be pushed onto your mental stack as you dive into the code. Furthermore, the layers of phyllo dough are permeable, allowing the honey to soak through. But software abstractions are best when they don’t leak. When you pile layer on top of layer in software, the layers are bound to leak.

15. Hindenbug

Mike Robinson

Oh-the-huge-manatee

A catastrophic data destroying bug. "Oh the humanity!"

Also related to Counterbug (a bug you present when presented with a bug caused by the person presenting the bug) and Bloombug (a bug that accidentally generates money).

16. Fear Driven Development

Arnis L.

Youre-fired

When project management adds more pressure (fires someone, moves deadlines forward, subtracts resources from the project, etc).

17. Hydra Code

Nick Dandoulakis

800px-Hercules_slaying_the_Hydra

Code that cannot be fixed. Like the Hydra of legend, every new fix introduces two new bugs. It should be rewritten.

18. Common Law Feature

anonymous

Common-law-marriage

A bug in the application that has existed so long that it is now part of the expected functionality, and user support is required to actually fix it.

19. Loch Ness Monster Bug

russau

Loch-ness-monster

I've started Loch Ness Monster bug for anything not reproducible / only sighted by one person. I'm hearing a lot of people in the office say it now. (Possible alternates: Bugfoot, Nessiebug.)

20. Ninja Comments

schar

Ninja-comments

Also known as invisible comments, secret comments, or no comments.

21. Smurf Naming Convention

sal

Brainy-smurf

When almost every class has the same prefix. IE, when a user clicks on the button, a SmurfAccountView passes a SmurfAccountDTO to the SmurfAccountController. The SmurfID is used to fetch a SmurfOrderHistory which is passed to the SmurfHistoryMatch before forwarding to either SmurfHistoryReviewView or SmurfHistoryReportingView. If a SmurfErrorEvent occurs it is logged by SmurfErrorLogger to ${app}/smurf/log/smurf/smurflog.log

22. Protoduction

Chris Pebble

Uno_motorcycle_segway

A prototype that ends up in production. Heard this from a tech at the Fermi lab. He said he didn't coin the term but had heard it used a number of times at Fermi.

23. Rubber Ducking

wesgarrison

Sesamstrasse_ernie_bert

Sometimes, you just have to talk a problem out. I used to go to my boss and talk about something and he'd listen and then I'd just answer my own question and walk out without him saying a thing. I read about someone that put a rubber duck on their monitor so they could talk to it, so rubberducking is talking your way through a problem.

24. Banana Banana Banana

juliet

Dancing-banana

Placeholder text indicating that documentation is in progress or yet to be completed. Mostly used because FxCop complains when a public function lacks documentation.

/// <summary>
/// banana banana banana
/// </summary>
public CustomerValidationResponse Validate()

Other food-related jargon: Programmer Fuel (Mountain Dew, coffee, Mate, anything which gets you well-caffeinated), Hot Potato (Http and Https respectively. Same number of syllables, but more fun to say), Cake (Marty's noob cake broke the build), Chunky Salsa (based on the chunky salsa rule, a single critical error or bug that renders an entire system unusable, especially in a production environment).

25. Bicrement

evilteach

Plus-two

Adding 2 to a variable.

26. Reality 101 Failure

Loren Pechtel

Feature-fail

The program (or more likely a feature of a program) does exactly what was asked for, but when it's deployed it turns out that the problem was misunderstood, and it's basically useless.

27. Mad Girlfriend Bug

Jeduan Cornejo

Mad-girlfriend-cartoon

When you see something strange happening, but the software is telling you everything is fine.

28. Megamoth

zolomon

Mothra

Stands for MEGA MOnolithic meTHod. Often contained inside a God Object, and usually stretches over two screens in height. Megamoths of greater size than 2k LOC have been sighted. Beware of the MEGAMOTH!

29. Hooker Code

NullPointerException

Muppet-pimps

Code that is problematic and causes application instability (application "goes down" often). "Did the site go down again? Yeah, Jim must still have some hooker code in there."

30. Jenga Code

sumit

Hasbro-jenga

When the whole thing collapses after you alter a block of code.

This is just the top 30, what I consider to be the most likely candidates for actual new programming jargon based on community upvotes, not just "funny thing that another programmer typed on a webpage and I felt compelled to upvote for hilarity". Because that would be Reddit. If you're itching to see even more, there are plenty more answers to read – three hundred and fifty-six more, to be precise. Longtime Stack Overflow user Greg Hewgill maintains an archive of old deleted Stack Overflow questions, but this one hasn't quite made it in there yet. In the meantime, try Stack Printer, or if you have the requisite 10k rep on Stack Overflow, you can view the full soft-deleted question on the site.

* But don't enjoy it too much. We will be watching you.

Posted by Jeff Atwood    155 Comments

Coding Horror: The Book

July 10, 2012

If I had to make a list of the top 10 things I've done in my life that I regret, "writing a book" would definitely be on it. I took on the book project mostly because it was an opportunity to work with a few friends whose company I enjoy. I had no illusions going in about the rapidly diminishing value of technical books in an era of pervasive high speed Internet access, and the book writing process only reinforced those feelings.

In short, do not write a book. You'll put in mountains of effort for precious little reward, tangible or intangible. In the end, all you will have to show for it is an out-of-print dead tree tombstone. The only people who will be impressed by that are the clueless and the irrelevant.

As I see it, for the kind of technical content we're talking about, the online world of bits completely trumps the offline world of atoms:

  • It's forever searchable.
  • You, not your publisher, will own it.
  • It's instantly available to anyone, anywhere in the world.
  • It can be cut and pasted; it can be downloaded; it can even be interactive.
  • It can potentially generate ad revenue for you in perpetuity.

And here's the best part: you can always opt to create a print version of your online content, and instantly get the best of both worlds. But it only makes sense in that order. Writing a book may seem like a worthy goal, but your time will be better spent channeling the massive effort of a book into creating content online. Every weakness I listed above completely melts away if you redirect your effort away from dead trees and spend it on growing a living, breathing website presence online.

A few weeks ago, Hyperink approached me with a concept of packaging the more popular entries on Coding Horror, its "greatest hits" if you will, into an eBook. They seemed to have a good track record doing this with other established bloggers, and I figured it was time to finally practice what I've been preaching all these years. So you can now download Effective Programming: More Than Writing Code for an introductory price of $2.99. It's available in Kindle, iPad, Nook, and PDF formats.

Blog to Book - Effective Programming: More Than Writing Code (Jeff Atwood)   Blog to Book - How to Stop Sucking and Be Awesome Instead (Jeff Atwood)

(As of March 2013, the first book was apparently popular enough to warrant a second volume, How to Stop Sucking and Be Awesome Instead)

I've written about the ongoing tension between bits and atoms recently, and I want to be clear: I am a fan of books. I'm just not necessarily a fan of writing them. I remain deeply cynical about current book publishing models, which feel fundamentally broken to me. No matter the price of the book, outside of J.K. Rowling, you're basically buying the author a drink.

As the author, you can expect to make about a dollar on every copy that sells. The publisher makes several times that, so they make a nice profit with as few as, say, five thousand copies sold. Books that sell ten or fifteen thousand are rare, and considered strong sellers. So let's say you strike gold. After working on your book for a year or more, are you going to be happy with a payday of ten to fifteen grand?

Incidentally, don't expect your royalty check right away. The publisher gets paid first, by the bookstores, and the publisher may then hold on to your money for several months before they part with any of it. Yes, this is legal: it's in the publisher's contract. Not getting paid may be a bummer for you, but it's a great deal for the publisher, since they make interest on the float (all the money they owe to their authors) - which is another profit stream. They'll claim one reason for the delay is the sheer administrative challenge of cutting a check within three months (so many authors to keep track of! so many payments!)... a less ridiculous reason is that they have to wait to see whether bookstores are going to return unsold copies of your book for a full refund.

Here's one real world example. John Resig sold 4,128 copies of Pro JavaScript, for which he earned a grand total of $1.87 per book once you factor in his advance. This is a book that still sells new on Amazon for $29.54.

Resig-book-check

Tellingly, John's second book seems permanently unfinished. It's been listed as "in progress" since 2008. Can't say I blame him. (Update: John explains.)

When I buy books, I want most of that money to go to the author, not the publishing middlemen. I'd like to see a world where books are distributed electronically for very little cost, and almost all the profits go directly to the author. I'm not optimistic this will happen any time soon.

I admire people willing to write books, but I honestly think you have to be a little bit crazy to sit down and pound out an entire book these days. I believe smaller units of work are more realistic for most folks. I had an epic email discussion with Scott Meyers about the merits of technical book publishing versus blogging in 2008, and I don't think either of us budged from our initial positions. But he did launch a blog to document some of his thoughts on the matter, which ended with this post:

My longer-term goal was to engage in a dialogue with people interested in the production of fast software systems such that I could do a better job with the content of [my upcoming book]. Doing that, however, requires that I write up reasonable initial blog posts to spur discussion, and I've found that this is not something I enjoy. To be honest, I view it as overhead. Given a choice between doing background research to learn more about a topic (typically reading something, but possibly also viewing a technical presentation, listening to a technical podcast, or exchanging email with a technical expert) or writing up a blog entry to open discussion, I find myself almost invariably doing the research. One reason for this is that I feel obliged to have done some research before I post, anyway, and I typically find that once I'm done with the research, writing something up as a standalone blog entry is an enterprise that consumes more time than I'm willing to give it. It's typically easier to write the result up in the form of a technical presentation, then give the presentation and get feedback that way.

Overhead? I find this attitude perplexing; the research step is indeed critical, but no less important than writing up your results as a coherent blog entry. If you can't explain the results of your research to others, in writing, in a way they can understand, you don't understand it. And if you aren't willing to publish your research in the form of a simple web page that anyone in the world can visit and potentially learn from, why did you bother doing that research in the first place? Are you really maximizing the value of your keystrokes?

More selfishly, you should always finish by writing up your results purely for your own self-improvement, if nothing else. As Steve Yegge once said: "I have many of my best ideas and insights while blogging." Then you can take all that published writing, fold in feedback and comments from the community, add some editorial embellishment on top, and voilà – you have a great book.

Of course, there's no getting around the fact that writing is just plain hard. Seth Godin's advice for authors still stands:

Lower your expectations. The happiest authors are the ones that don't expect much.

Which, I think, is also good life advice in general. Maybe the easiest way to lower your expectations as an author is to attempt one or two blog entries a week, keep going as long as you can, and see where that takes you.

Posted by Jeff Atwood    65 Comments

Betting the Company on Windows 8

July 9, 2012

I'd argue that the last truly revolutionary version of Windows was Windows 95. In the subsequent 17 years, we've seen a stream of mostly minor and often inconsequential design changes in Windows – at its core, you've got the same old stuff: a start menu, a desktop with icons, taskbar at the bottom, overlapping windows, toolbars, and pull-down menus.

Win95-small Win7-small-desktop

Windows 7 may be bigger, prettier, and more refined – finally, a proper sequel to Windows XP – but it's also safe. Rote. Familiar. Maybe a little too safe.

Windows 95 was a big deal because it innovated, because it was a break from the status quo. It sold 40 million copies in a year. It marked the coming of age of the Wintel beige box PC hegemony, and in the process dealt a near death blow to Apple and its rapidly aging System 7 OS.

But we all know how that story ends – with the iPhone in 2007, and most of all the iPad in 2010, Apple popularized the idea of simple touch computing surfaces that are now defining the Post-PC Era. The best way to predict the future is to invent it. And to their credit, Apple did; that is why their star is ascendant. Kind of absurdly scarily ascendant, actually.

It's not like Microsoft isn't investing in R&D. The Surface table looked amazing. Unfortunately, it was also trapped in a ridiculous, giant coffee table form factor that no regular person could afford or even want. That's too bad, because the Surface table was actually … kind of amazing. I've only ever seen one, in the lobby of a Seattle hotel in 2008. I went in skeptical, but when I actually got to try the Surface table, I came away impressed. It was a fascinating and intuitive multi-touch experience … that virtually nobody will ever get to experience or use. The iPad also offers a fascinating and intuitive multi-touch experience; let's compare:

a multi-touch Surface Table priced at $10,000 that, statistically speaking, nobody will ever be able to see or afford

… versus …

a multi-touch iPad in the hands of every consumer with $500 in their pocket

Now guess which of these companies is currently worth umpteen bazillion dollars. Go on, guess! No, it's not Webvan, you jokers.

After using the retina iPad for a while, I was shocked just how much of my everyday computing I can pull off on a tablet. Once you strip away all the needless complexities, isn't a tablet the simplest form of a computer there can be? How could it get any simpler than a tablet? Is this the ultimate and final form of computing? I wonder. It's a display in your hands, with easy full-screen applications that have simple oversize click targets to poke your finger at, and no confusing file systems to puzzle over or power-draining x86 backwards compatibility to worry about. Heck, maybe a tablet is better than traditional PCs, because it sidesteps all the accumulated cruft and hacks the PC ecosystem has accreted over the last 30 years.

If you're Microsoft, this is the point at which you should be crapping your pants in abject fear.

It is nothing less than the first stages of the heat death of the PC ecosystem, the formation of a tidal wave that will flow inexorably forward from this point. But you can't say they didn't see it coming. Bill Gates, of all people, saw this coming all the way back in 1995, the same year Windows 95 was released.

One scary possibility being discussed by Internet fans is whether they should get together and create something far less expensive than a PC which is powerful enough for Web browsing. This new platform would optimize for the datatypes on the Web. Gordon Bell and others approached Intel on this and decided Intel didn't care about a low cost device so they started suggesting that General Magic or another operating system with a non-Intel chip is the best solution.

To be honest, I had almost written Microsoft off at this point, consigned to the "whatever abomination IBM is now" enterprisey deadpool. It's not like they would disappear, necessarily, but they no longer had a viable horse in the race for the future of consumer computing devices. In these darkest of hours, I was actually considering … switching to OS X.

That is, until I tried Windows 8, and until I watched Microsoft unveil Surface. No, not the huge table one, the new one that's roughly the size (and one hopes, the price) of the iPad. I was expecting Yet Another Incremental Improvement to Windows, but I got something else altogether.

Microsoft-surface

It took a little longer than originally anticipated, but what's 17 years between friends?

Windows 8 is, in my humble opinion, the most innovative version of Windows Microsoft has released since Windows 95. Maybe ever. And it's good. Really good! I can't remember the last time I was this excited about a Windows release – except back when I was kind of obsessively running betas of Windows 95, waiting for it to ship. Don't judge me, man!

What's good about Windows 8? A ton of stuff.

  • Excellent, beautiful, "live tile" Metro multi-touch tablet optimized interface, as honed from two prior Windows Phone releases.
  • Integrated app store with updates for Metro apps. Yes, it actually works.
  • Fantastic new overlay notification system.
  • Noticeably faster to boot, faster to shut down, faster to sleep. It's just faster.
  • Awesome new Task Manager. I am seriously in love with this thing.
  • Updated Office 2010 style "ribbon" Explorer UI.
  • New copy dialog with graph of transfer rates over time, along with a visible moving average.
  • Lower system requirements and smaller footprint than Windows 7.

That's just a list off the top of my head. But don't take my word for it. Download the free Release Preview and try Windows 8 yourself.

Now, I will warn you that Windows 8 definitely has a wee bit of Jekyll and Hyde going on, because it smushes together two radically different paradigms: the old school mouse and keyboard centric desktop UI, and the new school tablet and touch centric Metro UI. It can be disconcerting to get kicked abruptly from one to the other. It's different, so there's a learning curve. (Protip: using your mouse scroll wheel in a Metro panel scrolls sideways. Don't forget the hover corners, or the right click, either.) But I have to say, this choice seems, at least so far, to be a bit saner approach than the super hard totally incompatible iOS/OSX divide in Apple land.

I expect that most people will decide early on whether they prefer treating their computer like a traditional laptop, or a tablet, and stick to their guns. Fortunately, the tablet stuff in Windows 8 doesn't get in the way. Even if only used as a glorified Start Menu, the Metro interface works surprisingly well – just start typing and match what you want to launch.

What's even more amazing is that Microsoft is actually pricing the upgrade sanely. Can you believe it's only $40 to upgrade to Windows 8 from XP, Vista, or Windows 7? It's like someone at Microsoft woke up and finally listened to what I've desperately been trying to tell them for years.

In the post PC era, Microsoft is betting the company on Windows 8, desperately trying to serve two masters with one operating system. The traditional mouse and keyboard desktop is no longer the default; it is still there, but slightly hidden from view, as the realm of computer nuts, power users, and geeks. For everyone else, the Metro UI puts an all new, highly visual touch and tablet friendly face on the old beige Wintel box. Will Microsoft succeed? I'm not sure yet. But based on what I've seen so far of Windows 8, its pricing, and the new Surface hardware – I'm cautiously optimistic.

Posted by Jeff Atwood    108 Comments

The PHP Singularity

June 29, 2012

Look at this incredible thing Ian Baker created. Look at it!

The PHP hammer

What you're seeing is not Photoshopped. This is an actual photo of a real world, honest to God double-clawed hammer. Such a thing exists. Isn't that amazing? And also, perhaps, a little disturbing?

That wondrous hammer is a delightful real-world acknowledgement of the epic blog entry PHP: A Fractal of Bad Design.

I can’t even say what’s wrong with PHP, because – okay. Imagine you have uh, a toolbox. A set of tools. Looks okay, standard stuff in there.

You pull out a screwdriver, and you see it’s one of those weird tri-headed things. Okay, well, that’s not very useful to you, but you guess it comes in handy sometimes.

You pull out the hammer, but to your dismay, it has the claw part on both sides. Still serviceable though, I mean, you can hit nails with the middle of the head holding it sideways.

You pull out the pliers, but they don’t have those serrated surfaces; it’s flat and smooth. That’s less useful, but it still turns bolts well enough, so whatever.

And on you go. Everything in the box is kind of weird and quirky, but maybe not enough to make it completely worthless. And there’s no clear problem with the set as a whole; it still has all the tools.

Now imagine you meet millions of carpenters using this toolbox who tell you “well hey what’s the problem with these tools? They’re all I’ve ever used and they work fine!” And the carpenters show you the houses they’ve built, where every room is a pentagon and the roof is upside-down. And you knock on the front door and it just collapses inwards and they all yell at you for breaking their door.

That’s what’s wrong with PHP.

Remember the immediate visceral reaction you had to the double-clawed hammer? That's exactly the reaction most sane programmers have to their first encounter with the web programming language PHP.

This has been going on for years. I published my contribution to the genre in 2008 with PHP Sucks, But It Doesn't Matter.

I'm no language elitist, but language design is hard. There's a reason that some of the most famous computer scientists in the world are also language designers. And it's a crying shame none of them ever had the opportunity to work on PHP. From what I've seen of it, PHP isn't so much a language as a random collection of arbitrary stuff, a virtual explosion at the keyword and function factory. Bear in mind this is coming from a guy who was weaned on BASIC, a language that gets about as much respect as Rodney Dangerfield. So I am not unfamiliar with the genre.

Except now it's 2012, and fellow programmers are still writing long screeds bemoaning the awfulness of PHP!

What's depressing is not that PHP is horribly designed. Does anyone even dispute that PHP is the worst designed mainstream "language" to blight our craft in decades? What's truly depressing is that so little has changed. Just one year ago, legendary hacker Jamie Zawinski had this to say about PHP:

I used to think that PHP was the biggest, stinkiest dump that the computer industry had taken on my life in a decade. Then I started needing to do things that could only be accomplished in AppleScript.

Is PHP so broken as to be unworkable? No. Clearly not. The great crime of PHP is its utter banality. Its continued popularity is living proof that quality is irrelevant; cheap and popular and everywhere always wins. PHP is the Nickelback of programming languages. And, yes, out of frustration with the status quo I may have recently referred to Rasmus Lerdorf, the father of PHP, as history's greatest monster. I've told myself a million times to stop exaggerating.

The hammer metaphor is apt, because at its core, this is about proper tooling. As presciently noted by Alex Papadimoulis:

A client has asked me to build and install a custom shelving system. I'm at the point where I need to nail it, but I'm not sure what to use to pound the nails in. Should I use an old shoe or a glass bottle?

How would you answer the question?

  1. It depends. If you are looking to pound a small (20lb) nail in something like drywall, you'll find it much easier to use the bottle, especially if the shoe is dirty. However, if you are trying to drive a heavy nail into some wood, go with the shoe: the bottle will shatter in your hand.

  2. There is something fundamentally wrong with the way you are building; you need to use real tools. Yes, it may involve a trip to the toolbox (or even to the hardware store), but doing it the right way is going to save a lot of time, money, and aggravation through the lifecycle of your product. You need to stop building things for money until you understand the basics of construction.

What we ought to be talking about is not how terrible PHP is – although its continued terribleness is a particularly damning indictment – but how we programmers can culturally displace a deeply flawed tool with a better one. How do we encourage new programmers to avoid picking up the double clawed hammer in favor of, well, a regular hammer?

This is not an abstract, academic concern to me. I'm starting a new open source web project with the goal of making the code as freely and easily runnable by anyone in the world as possible. Despite the serious problems with PHP, I was forced to consider it. If you want to produce free-as-in-whatever code that runs on virtually every server in the world with zero friction or configuration hassles, PHP is damn near your only option. If that doesn't scare you, then check your pulse, because you might be dead.

Everything goes with PHP sauce! Including crushing depression.

Therefore, I'd like to submit a humble suggestion to my fellow programmers. The next time you feel the urge to write Yet Another Epic Critique of PHP, consider that:

  1. We get it already. PHP is horrible, but it's used everywhere. Guess what? It was just as horrible in 2008. And 2005. And 2002. There's a pattern here, but it's subtle. You have to look very closely to see it. On second thought, never mind. You're probably not smart enough to figure it out.

  2. The best way to combat something as pervasively and institutionally awful as PHP is not to point out all its (many, many, many) faults, but to build compelling alternatives – and to make sure those alternatives are equally pervasive, and as easy to set up and use as possible.

We've got a long way to go. One of the explicit goals of my next project is to do whatever we can to buff up a … particular … open source language ecosystem such that it can truly compete with PHP in ease of installation and deployment.

From my perspective, the point of all these "PHP is broken" rants is not just to complain, but to help educate and potentially warn off new coders starting new codebases. Some fine, even historic work has been done in PHP despite the madness, unquestionably. But now we need to work together to fix what is broken. The best way to fix the PHP problem at this point is to make the alternatives so outstanding that the choice of the better hammer becomes obvious.

That's the PHP Singularity I'm hoping for. I'm trying like hell to do my part to make it happen. How about you?

Posted by Jeff Atwood    213 Comments

Concluding the Great MP3 Bitrate Experiment

June 27, 2012

And now for the dramatic conclusion to The Great MP3 Bitrate Experiment you've all been waiting for! The actual bitrates of each audio sample are revealed below, along with how many times each was clicked per the goo.gl URL shortener stats between Thursday, June 21st and Tuesday, June 26th.

Limburger   ~160kbps VBR   10,265
Cheddar     320kbps CBR     7,183
Gouda       raw CD          6,159
Brie        ~192kbps VBR    5,508
Feta        128kbps CBR     5,567

During that six day period, my overall Amazon CloudFront and S3 bill for these downloaded audio samples was $103.72 for 800 GB of data, across 200k requests.

Based on the raw click stats, it looks like a bunch of folks clicked on the first and second files, then lost interest. Probably because of, y'know, Starship. Still, it's encouraging to note that the last two files were each clicked about 5.5k times by those who toughed it out to the very end. Of those listeners, 3,512 went on to contribute results. Not bad at all! I mean, considering I made everyone listen to what some people consider to be one of the best-slash-worst "rock" songs of all time. You guys are troopers, taking one in the ear for the team in the name of science. That's what I admire about you.

I belatedly realized after creating this experiment that there was an easy way to cheat. Simply compress all the samples with FLAC, then sort by filesize.

10,836,505   We+Built+This+City+-+Excerpt+(Feta).flac
11,054,288   We+Built+This+City+-+Excerpt+(Limburger).flac
11,294,757   We+Built+This+City+-+Excerpt+(Brie).flac
11,731,999   We+Built+This+City+-+Excerpt+(Cheddar).flac
11,816,415   We+Built+This+City+-+Excerpt+(Gouda).flac

The higher the bitrate, apparently, the less compressible the audio is with lossless FLAC compression. It's a small difference in absolute file size, but it's enough to sort the samples exactly in order of bitrate. At least you can independently verify that I wasn't tricking anyone in this experiment; each sample was indeed different, and the bitrates are what I said they were.
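
Scripting that cheat takes only a few lines. Here's a minimal sketch in Python, assuming the reference flac command-line encoder is on your PATH and the five WAV samples sit in the current directory (the file names are illustrative):

import glob, os, subprocess

# Losslessly compress each sample; lower-bitrate sources have less
# audio detail left in them, so they compress to smaller FLAC files.
for wav in glob.glob("*.wav"):
    subprocess.run(["flac", "--best", "--force", wav], check=True)

# Sort the results by size: smallest file = lowest original bitrate.
for f in sorted(glob.glob("*.flac"), key=os.path.getsize):
    print(f"{os.path.getsize(f):>12,}  {f}")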

But you guys and gals wouldn't do that, because you aren't dirty, filthy cheaters, right? Of course not. Let's go over the actual results. Remember each sample was ranked in a simple web form from 1 to 5, where 1 is worst quality, and 5 is highest quality.

Mp3-experiment-results-graph

The summary statistics for the 3,512 data points:

                          Avg    Std dev
160kbps VBR (Limburger)   3.49   1.38
320kbps CBR (Cheddar)     3.30   1.34
raw CD audio (Gouda)      3.34   1.26
192kbps VBR (Brie)        3.27   1.29
128kbps CBR (Feta)        2.95   1.40

(If you'd like to perform more detailed statistical analysis, download the Excel 2010 spreadsheet with all the data and have at it.)

Even without busting out hard-core statistics, I think it's clear from the basic summary statistics graph that only one audio sample here was discernibly different from the rest – the 128kbps CBR. And by different I mean "audibly worse". I've maintained for a long, long time that typical 128kbps MP3s are not acceptable quality. Even for the worst song ever. So I guess we can consider this yet another blind listening test proving that point. Give us VBR at an average bitrate higher than 128kbps, or give us death!

But what about the claim that people with dog ears can hear the difference between the higher bitrate MP3 samples? Well, first off, it's incredibly strange that the first sample – encoded at a mere 160kbps – does better on average than everything else. I think it's got to be bias from appearing first in the list of audio samples. It's kind of an outlier here for no good reason, so we have to almost throw it out. More fuel for the argument that people can't hear a difference at bitrates above 128kbps, and even if they do, they're probably imagining it. If we didn't throw out this result, we'd have to conclude that the 160kbps sample was somehow superior to the raw CD audio, which is … clearly insane.

Running T-Test and Analysis of Variance (it's in the spreadsheet) on the non-insane results, I can confirm that the 128kbps CBR sample is lower quality with an extremely high degree of statistical confidence. Beyond that, as you'd expect, nobody can hear the difference between a 320kbps CBR audio file and the CD. And the 192kbps VBR results have a barely statistically significant difference versus the raw CD audio at the 95% confidence level. I'm talking absolutely wafer thin here.
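
If you'd rather verify outside of Excel, the same kind of check takes a few lines of Python with SciPy. A sketch, with made-up placeholder ratings standing in for the real per-listener columns in the spreadsheet:

from scipy import stats

# One rating (1-5) per listener for two of the samples; these values
# are placeholders, not the actual 3,512 data points.
feta_128_cbr = [3, 2, 4, 1, 3, 2, 3, 4, 2, 3]
gouda_raw_cd = [4, 3, 5, 3, 4, 3, 4, 5, 3, 4]

# Welch's two-sample t-test: is the mean rating for 128kbps CBR
# genuinely lower than for raw CD audio?
result = stats.ttest_ind(feta_128_cbr, gouda_raw_cd, equal_var=False)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A tiny p-value means the gap is very unlikely to be random noise.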

Anyway, between the anomalous 160kbps result and the blink-and-you'll-miss-it statistical difference between the 192kbps result and the raw CD audio, I'm comfortable calling this one as I originally saw it. The data from this experiment confirms what I thought all along: for pure listening, the LAME defaults of 192kbps variable bit rate encoding do indeed provide a safe, optimal aural bang for the byte – even dogs won't be able to hear the difference between 192kbps VBR MP3 tracks and the original CD.

Posted by Jeff Atwood    83 Comments

The Great MP3 Bitrate Experiment

June 21, 2012

Lately I've been trying to rid my life of as many physical artifacts as possible. I'm with Merlin Mann on CDs:

Can't believe how quickly CDs went from something I hate storing to something I hate buying to something I hate merely existing.

Although I'd extend that line of thinking to DVDs as well. The death of physical media has some definite downsides, but after owning certain movies once on VHS, then on DVD, and finally on Blu-Ray, I think I'm now at peace with the idea of not owning any physical media ever again, if I can help it.

My current strategy of wishing my physical media collection into a cornfield involves shipping all our DVDs to Second Spin via media mail, and paying our nephew $1 per CD to rip our CD collection using Exact Audio Copy and LAME as a summer project. The point of this exercise is absolutely not piracy; I have no interest in keeping both digital and physical copies of the media I paid for the privilege of owning – er, temporarily licensing. Note that I didn't bother ripping any of the DVDs because I hardly ever watched them; mostly they just collected dust. But I continue to love music and listen to my music collection on a daily basis. I'll donate all the ripped CDs to some charity or library, and if I can't pull that off, I'll just destroy them outright. Stupid atoms!

CDs, unlike DVDs or even Blu-Rays, are considered reference quality. That is, the uncompressed digital audio data contained on a CD is a nearly perfect representation of the original studio master, for most reasonable people's interpretation of "perfect", at least back in 1980. So if you paid for a CD, you might be worried that ripping it to a compressed digital audio format would result in an inferior listening experience.

I'm not exactly an audiophile, but I like to think I have pretty good ears. I've recommended buying $200+ headphones and headphone amps for quite a while now. By the way: still a good investment! Go do it! Anyhow, previous research and my own experiments led me to write Getting the Best Bang for Your Byte seven years ago. I concluded that nobody could really hear the difference between a raw CD track and an MP3 using a decent encoder at a variable bit rate averaging around 160kbps. Any bit rate higher than that was just wasting space on your device and your bandwidth for no rational reason. So-called "high resolution audio" was recently thoroughly debunked for very similar reasons.

Articles last month revealed that musician Neil Young and Apple's Steve Jobs discussed offering digital music downloads of 'uncompromised studio quality'. Much of the press and user commentary was particularly enthusiastic about the prospect of uncompressed 24 bit 192kHz downloads. 24/192 featured prominently in my own conversations with Mr. Young's group several months ago.

Unfortunately, there is no point to distributing music in 24-bit/192kHz format. Its playback fidelity is slightly inferior to 16/44.1 or 16/48, and it takes up 6 times the space.

There are a few real problems with the audio quality and 'experience' of digitally distributed music today. 24/192 solves none of them. While everyone fixates on 24/192 as a magic bullet, we're not going to see any actual improvement.

The authors of LAME must have agreed with me, because the typical, standard, recommended, default way of encoding any old audio input to MP3 …

lame --preset standard "cd-track-raw.wav" "cd-track-encoded.mp3"

… now produces variable bit rate MP3 tracks at a bitrate of around 192kbps on average.

Encspot-omigod-disc-3

(Going down one level to the "medium" preset produces nearly exactly 160kbps average, my 2005 recommendation on the nose.)

Encoders have only gotten better since the good old days of 2005. Given the many orders of magnitude improvement in performance and storage since then, I'm totally comfortable with throwing an additional 32kbps in there, going from 160kbps average to 192kbps average just to be totally safe. That's still a minuscule file size compared to the enormous amount of data required for mythical, aurally perfect raw audio. For a particular 4 minute and 56 second music track, that'd be:

Uncompressed raw CD format                   51 MB
Lossless FLAC compression                    36 MB
LAME insane encoded MP3 (320kbps)            11.6 MB
LAME standard encoded MP3 (192kbps avg)      7.1 MB
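
Those figures are easy to sanity-check, by the way. CD audio is 44,100 samples per second, 16 bits (2 bytes) per sample, two channels; multiply that out over the track length and you land right around the numbers above:

seconds = 4 * 60 + 56                  # 4 minute, 56 second track
cd_bytes = seconds * 44_100 * 2 * 2    # sample rate * 2 bytes * 2 channels
mp3_bytes = seconds * 192_000 // 8     # 192kbps average VBR MP3

print(cd_bytes / 1_000_000)    # ~52.2 -> the raw CD figure
print(mp3_bytes / 1_000_000)   # ~7.1  -> the LAME standard MP3 figure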

Ripping to uncompressed audio is a non-starter. I don't care how much of an ultra audio quality nerd you are, spending 7× or 5× the bandwidth and storage for completely inaudible "quality" improvements is a dagger directly in the heart of this efficiency-loving nerd, at least. Maybe if you're planning to do a lot of remixing and manipulation it might make sense to retain the raw source audio, but for typical listening, never.

The difference between the 320kbps track and the 192kbps track is more rational to argue about. But it's still 1.6 times the size. Yes, we have tons more bandwidth and storage and power today, but storage space on your mobile device will never be free, nor will bandwidth or storage in the cloud, where I think most of this stuff should ultimately reside. And all other things being equal, wouldn't you rather be able to fit 200 songs on your device instead of 100? Wouldn't you rather be able to download 10 tracks in the same time instead of 5? Efficiency, that's where it's at. Particularly when people with dog ears wouldn't even be able to hear the difference.

But Wait, I Have Dog Ears

Of course you do. On the Internet, nobody knows you're a dog. Personally, I think you're a human being full of crap, but let's drop some science on this and see if you can prove it.

On-the-internet-nobody-knows-youre-a-dog

When someone tells me "Dudes, come on, let's steer clear of the worst song ever written!", I say challenge accepted. Behold The Great MP3 Bitrate Experiment!

As proposed on our very own Audio and Video Production Stack Exchange, we're going to do a blind test of the same 2 minute excerpt of a particular rock audio track at a few different bitrates, ranging from 128kbps CBR MP3 all the way up to raw uncompressed CD audio. Each sample was encoded (if necessary), then exported to WAV so they all have the same file size. Can you tell the difference between any of these audio samples using just your ears?
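
For the curious, that leveling step is simple to script. A rough sketch using Python to drive the lame encoder – the file names here are made up for illustration:

import subprocess

def make_sample(raw_wav, out_wav, lame_args):
    # Encode at the bitrate under test, then decode back to WAV so every
    # sample has the same size and format - file size can't give it away.
    subprocess.run(["lame", *lame_args, raw_wav, "temp.mp3"], check=True)
    subprocess.run(["lame", "--decode", "temp.mp3", out_wav], check=True)

make_sample("excerpt-raw.wav", "feta.wav", ["-b", "128"])             # 128kbps CBR
make_sample("excerpt-raw.wav", "brie.wav", ["--preset", "standard"])  # ~192kbps VBR

The raw CD sample skips the encode step entirely, of course.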

1. Listen to each two minute audio sample

(update: experiment concluded; links removed.)

Limburger
Cheddar
Gouda
Brie
Feta

2. Rate each sample for encoding quality

Once you've given each audio sample a listen – with only your ears please, not analysis software – fill out this brief form and rate each audio sample from 1 to 5 on encoding quality, where one represents worst and five represents flawless.

Yes, it would be better to use a variety of different audio samples, like SoundExpert does, but I don't have time to do that. Anyway, if the difference in encoding bitrate quality is as profound as certain vocal elements of the community would have you believe it is, that difference should be audible in any music track. To those who might argue that I am trolling audiophiles into listening to one of the worst-slash-best rock songs of all time … over and over and over … to prove a point … I say, how dare you impugn my honor in this manner, sir. How dare you!

I wasn't comfortable making my generous TypePad hosts suffer through the bandwidth demands of multiple 16 megabyte audio samples, so this was a fun opportunity to exercise my long dormant Amazon S3 account, and test out Amazon's on-demand CloudFront CDN. I hope I'm not rubbing any copyright holders the wrong way with this test; I just used a song excerpt for science, man! I'll pull the files entirely after a few weeks just to be sure.

You'll get no argument from me that the old standby of 128kbps constant bit rate encoding is not adequate for most music, even today, and you should be able to hear that in this test. But I also maintain that virtually nobody can reliably tell the difference between a 160kbps variable bit rate MP3 and the raw CD audio, much less 192kbps. If you'd like to prove me wrong, this is your big chance. Like the announcer in Smash TV, I say good luck – you're gonna need it.

So which is it – are you a dog or a man? Give the samples a listen, then rate them. I'll post the results of this experiment in a few days.

Posted by Jeff Atwood    144 Comments

Because Everyone (Still) Needs a Router

June 18, 2012

About a year and a half ago, I researched the state of routers: about as unsexy as it gets but essential to the stability, reliability, and security of your Internet connection. My conclusion?

This is boring old plain vanilla commodity router hardware, but when combined with an open source firmware, it is a massive improvement over my three year old, proprietary high(ish) end router. The magic router formula these days is a combination of commodity hardware and open-source firmware. I'm so enamored of this one-two punch combo, in fact, I might even say it represents the future. Not just of the everyday workhorse routers we all need to access the Internet – but the future of all commodity hardware.

I felt a little bad about that post, because I quickly migrated from the DD-WRT open source firmware to OpenWRT, and then finally settled on Tomato. I guess that's open source for you: too many choices, with nobody to really tell you what's going to work reliably on your particular hardware. But the good news is that I've been running Tomato quite happily with total stability for about a year now – primarily because it is gloriously simple, but also because it has the most functional quality of service (QoS) implementation.

Tomato-qos

Why does functional Quality of Service matter so very much in a router? Unless you have an Internet connection that's only used by your grandmother to visit her church's website on Sundays, QoS is the difference between a responsive Internet and one that's brutally dog slow.

Ever sat in an internet shop, a hotel room or lobby, a local hotspot, and wondered why you can't access your email? Unknown to you, the guy in the next room or at the next table is hogging the internet bandwidth to download the Lord Of The Rings Special Extended Edition in 1080p HDTV format. You're screwed - because the hotspot router does not have an effective QoS system. In fact, I haven't come across a shop or an apartment block locally that has any QoS system in use at all. Most residents are not particularly happy with the service they [usually] pay for.
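
At its core, QoS is just opinionated packet scheduling: small, latency-sensitive traffic gets to jump the queue ahead of bulk transfers. Here's a toy sketch of the idea in Python – a cartoon of what firmware like Tomato does under the hood, emphatically not its actual implementation:

import heapq

# Toy QoS scheduler: every packet gets a class priority, and the router
# always sends the highest-priority (lowest number) packet next, so one
# bulk download can't starve everyone's interactive traffic.
queue = []

def enqueue(priority, packet):
    heapq.heappush(queue, (priority, packet))

enqueue(2, "bittorrent chunk #1")  # low priority: bulk transfer
enqueue(0, "DNS lookup")           # high priority: tiny, latency-sensitive
enqueue(2, "bittorrent chunk #2")
enqueue(1, "HTTPS request")        # medium priority: interactive browsing

while queue:
    _, packet = heapq.heappop(queue)
    print("sending:", packet)      # DNS, then HTTPS, then the bulk chunks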

When I switched from DD-WRT and OpenWRT to Tomato, I had to buy a different router, because Tomato only supports certain router hardware, primarily Broadcom. The almost universal recommendation was the Asus RT-N16, so that's what I went with.

Asus RT-N16

And it is still an excellent choice. If you just want a modern, workhorse single band wireless N router that won't break the bank, but has plenty of power and memory to run Tomato, definitely try the Asus RT-N16. It's currently available for under $80 (after $10 rebate). Once you get Tomato on there, you've got a fine combination of hardware and software. Take it from this SmallNetBuilder user review:

I'm a semigeek. Some of the stuff on this site confuses me. But I figured out enough to get this router and install Tomato USB. Great combination. Have not had any problems with the router. Love all the features that Tomato gives me. Like blocking my son's iPod after 7 PM. Blocking certain websites. Yeah, I know you can do that with other routers but Tomato made it easy. Also love the QoS features. Netflix devices get highest bandwidth while my wife's bittorrent gets low.

Review was too heavily slanted against the Asus software, which I agree is crap. I bought the router for its hardware specs. Large memory. Fast processor. Gigabyte lan. 2 USB ports.

What's not to love? Well, the dual band thing, mainly. If you want a truly top of the line router with incredible range, and simultaneous dual band 2.4 GHz and 5 GHz performance bragging rights, fortunately there's the Asus RT-N66U.

Asus RT-N66U

This is, currently at least, the state of the art in routers. It has a faster CPU and twice the memory (256 MB) of the RT-N16. But at $190 it is also over twice the price. Judge for yourself in the SmallNetBuilder review:

As good as the RT-66U is, our wireless performance results once again show that no router is good in every mode that we test. But that said, the Dark Knight clearly outperformed both the NETGEAR WNDR4500 and Cisco Linksys E4200V2 in most of our two and three-stream tests. And it's the only router in recent memory able to reach to our worst-case/lowest-signal test location on the 5 GHz band, albeit with barely-usable throughput. Still, this is an accomplishment in itself.

If you're going to spend close to $200 for a wireless router, you should get a lot for your money. The Dark Knight seems to deliver wireless performance to justify its high price and has routing speed fast enough to handle any service a consumer is likely to have, even our friends in Europe and Asia.

Its only weakness? Take a guess. Oh wait, no need to guess – it's the same "weakness" the RT-N16 shares: the sketchy Asus firmware it ships with out of the box. That's why we get our Tomato on, people! There is complete and mature support for the RT-N66U in Tomato; for a walkthrough on how to get it installed (don't be shy, it's not hard), check out Shadow Andy's TomatoUSB firmware flashing guide.

Does having nice router hardware with a current open source firmware matter? Well, if your livelihood depends on the Internet like mine does, then I certainly think so.

Internet-serious-business

At the very least, if you or someone you love is also an Internet fan and hasn't given any particular thought to what router they use, maybe it's time to start checking into that. Now if you'll excuse me, I'm going to go donate to the Tomato project.

Posted by Jeff Atwood    65 Comments

How to Talk to Human Beings

June 14, 2012

I hesitate to say everyone should have a child, because becoming a parent is an intensely personal choice. I try my best to avoid evangelizing the experience, but the deeper in I get, the more I believe that nothing captures the continued absurdity of the human condition better than having a child does.

After becoming a parent, the first thing you'll say to yourself is, my God, it is a miracle any of us even exist, because I want to freakin' kill this kid at least three times a day. But then your child will spontaneously hug you, or tell you some stupid joke that they can't stop laughing at, or grab for your hand while crossing the street and then … well, here we all are, aren't we? I'm left wondering if I'll ever be able to love other people – or for that matter myself – as much as I love my children. Unconditional, irrational, nonsensical love. That's humanity in a nutshell.

Parenting is by far the toughest job I've ever had. It makes my so-called career seem awfully quaint in comparison.

the first 9 months is the hardest

My favorite part of the parenting process, though, is finally being able to talk to my kids. When the dam breaks and all that crazy stuff they had locked away in those tiny brains for the first two years comes uncontrollably pouring out. Finding out what they're thinking about and what kind of people they are at last. Watching them discover and explore the surface of language is utterly fascinating. After spending two years trying to guess – with extremely limited success – what they want and need, truly, what greater privilege is there than to simply ask them? Language: Best. Invention. Ever. I like it so much I'm using it right now!

Language also allows kids to demonstrate just what crazy little roiling balls of id they (and by extension, we) all are on the inside. Kids don't know what it means to be mad, to be happy, to be sad. They have to be taught what emotions are, how to handle them, and how to deal in a constructive way with everything the world is throwing at them. You'll get a ringside seat to this process not as a passive observer, but as their coach and spirit guide. They have no coping mechanisms except the ones we teach them. The difference between a child who freaks out at the slightest breeze, and a child who can confidently navigate an unfamiliar world? The parents.

See, I told you this was going to be tough.

There are of course innumerable books on parenting and child-rearing, most of which I have no time to read because by the time I'm done being a parent for the day, I'm too exhausted to read more about it. And, really, who wants to read about parenting when you're living the stuff 24/7? Except on Parenting Stack Exchange, of course. However, there is one particular book I happened to discover that was shockingly helpful, even after barely ten pages in. If you ever need to deal with children aged 2 to 99, stop reading right now and go buy How to Talk So Kids Will Listen & Listen So Kids Will Talk.

How to Talk So Kids Will Listen & Listen So Kids Will Talk

We already own three copies. And you're welcome.

What's so great about this book? I originally found it through A.J. Jacobs, who I mentioned in Trust Me, I'm Lying. Here's how he describes it:

The best marriage advice book I’ve read is a paperback called How to Talk So Kids Will Listen & Listen So Kids Will Talk. As you might deduce from the title, it wasn’t meant as a marriage advice book. But the techniques in this book are so brilliant, I use them in every human interaction I can, no matter the age of the conversant. It’s a strategy that was working well until today.

The book was written by a pair of former New York City teachers, and their thesis is that we talk to kids all wrong. You can’t argue with kids, and you shouldn’t dismiss their complaints. The magic formula includes: listen, repeat what they say, label their emotions. The kids will figure out the solution themselves.

I started using it on Jasper, who would throw a tantrum about his brothers monopolizing the pieces to Mouse Trap. I listened, repeated what he said, and watched the screaming and tears magically subside. It worked so well, I decided, why limit it to kids? My first time trying it on a grown-up was one morning at the deli. I was standing behind a guy who was trying unsuccessfully to make a call on his cell.

“Oh come on! I can’t get a signal here? Dammit. This is New York.”
He looked at me.
“No signal?” I say. “Here in New York?” (Repeat what they say.)
“It’s not like we’re in goddamn Wisconsin.”
“Mmmm.” (Listen. Make soothing noises.)
“We’re not on a farm. It’s New York, for God’s sake,” he said.
“That’s frustrating,” I say. (Label their emotions.)
He calmed down.

This book taught me that, as with so many other things in life, I've been doing it all wrong. I thought it was my job as a parent to solve problems for my children, to throw myself on life's figurative grenades to protect them. Consider the following illustrated examples from the book.

How to Talk So Kids Will Listen, cartoon about empathy

Notice how she cleverly lets the child reach an alternative solution himself, rather than providing the "solution" to him on a silver platter as the all-seeing, all-knowing omniscient adult. This honestly would never have occurred to me, because, well, if we're out of Toastie Crunchies, then we are out of freaking Toastie Crunchies!

How to Talk So Kids Will Listen, cartoon about description

I've learned to fall back whenever possible to simply describing things or situations instead of judging or pontificating. I explain the consequences of potential actions rather than jumping impatiently to "don't do that".

How to Talk So Kids Will Listen & Listen So Kids Will Talk is full of beautiful little insights on human interaction like this, and I was surprised to find how often what I thought was a good parenting behavior was working against us. Turns out, children aren't the only ones who have trouble dealing with their emotions and learning to communicate. I haven't just improved my relationship with my kids using the practical advice in this book, I've improved my interactions with all human beings from age 2 to 99.

Kids will teach you, if you let them. They'll teach you that getting born is the easy part. Anyone can do that in a day. But becoming a well-adjusted human being? That'll take the rest of your life.

Posted by Jeff Atwood    63 Comments

How to Stop Sucking and Be Awesome Instead

June 1, 2012

I've been fortunate to have some measure of success in my life, primarily through this very blog over the last eight years, and in creating Stack Overflow and Stack Exchange over the last four years. With the birth of our twin girls, I've had a few months to pause and reflect on those experiences. What did I do right? What did I do wrong? How would I do things differently next time? What advice should I give other people based on my own life experiences?

The short answer is that I wouldn't.

There are too many paths forward in life; I barely feel qualified to make decisions about what to do in my own life, much less recommend strategies for others in theirs. On some level I feel like Jared Fogle, who lost 245 pounds eating nothing but Subway subs. Maybe that worked for him, but how does that make it a valid diet strategy for the rest of the world? In other words, what I did worked for me, but I'm crazy.

That's never stopped anyone else from handing out terrible life advice hand over fist, though. So I figure, why not? Who wants to live forever?

Flashgordon-vultan

Under pressure to make some sense of what I've been doing with my life for the last eight years, I put together a small presentation which I delivered yesterday at this year's Atlassian summit.

How to Stop Sucking and Be Awesome Instead

If you're reading this abstract, you're not awesome enough. Attend this session to unlock the secrets of Jeff Atwood, world famous blogger and industry leading co-founder of Stack Overflow and Stack Exchange. Learn how you too can determine clear goals for your future and turn your dreams into reality through positive-minded conceptualization techniques.* Within six to eight weeks, you'll realize the positive effects of Jeff Atwood's wildly popular Coding Horror blog in your own life, transporting you to an exciting new world of wealth, happiness and political power.

* May or may not also include working hard on things that matter for the rest of your life.

I hope you can forgive me for the title, and I guess the rest of the abstract, and probably the entirety of the presentation too, but I find it's easier to be serious when I'm not being entirely serious. At any rate, it's complicated.

Here's what I've seen work:

1. Embrace the Suck

2. Do It in Public

3. Pick Stuff That Matters

The slides explain. When put on the spot, under duress, I have selectively doled out this advice to a few people over the years – and miraculously, I've seen them succeed using these rules, too.

Better-safe-than-sorry

(I put a lot of additional explanatory detail in the slide notes that you'll only see if you download the full presentation.)

Mostly, I think it's the fear that gets us, in all its forms. Fear of not achieving. Fear of not keeping up. Fear of looking dumb. Fear of being inadequate. Fear of being exposed. Fear of failure. The only thing preventing us from being awesome is our own fear of sucking.

So that's why I say we embrace it. Who wants to live forever?

Posted by Jeff Atwood    44 Comments

So You Want to be a Programmer

May 25, 2012

I didn't intend for Please Don't Learn to Code to be so controversial, but it seemed to strike a nerve. Apparently a significant percentage of readers stopped reading at the title.

So I will open with my own story. I think you'll find it instructive.

My mom once told me that the only reason she dated my father was that her mother told her to stay away from that boy, he's a bad influence.

If she had listened, I would not exist.

True story, folks.

I'd argue that the people who need to learn to code will be spurred on most of all by honesty, not religious faith in the truthiness of code as a universal good. Go in knowing both sides of the story, because there are no silver bullets in code. If, after hearing both the pros and cons, you still want to learn to code, then by all means learn to code. If you're so easily dissuaded by hearing a few downsides to coding, there are plenty of other things you could spend your time learning that are more unambiguously useful and practical. Per Michael Lopp, you could learn to be a better communicator. Per Gina Trapani, you could learn how to propose better solutions. Slinging code is just a tiny part of the overall solution in my experience. Why optimize for that?

On the earliest computers, everyone had to be a programmer because there was no software. If you wanted the computer to do anything, you wrote code. Computers in the not so distant past booted directly to the friendly blinking cursor of a BASIC interpreter. I view the entire arc of software development as a field where we programmers spend our lives writing code so that our fellow human beings no longer need to write code (or even worse, become programmers) to get things done with computers. So this idea that "everyone must know how to code" is, to me, going backwards.

Grace-hopper-and-the-univac

I fully support a push for basic Internet literacy. But in order to be a competent driver, does everyone need to know, in detail, how their automobile works? Must we teach all human beings the basics of being an auto mechanic, and elevate shop class to the same level as English and Mathematics classes? Isn't knowing how to change a tire, and when to take your car in for an oil change, sufficient? If your toilet is clogged, you shouldn't need to take a two week in depth plumbing course on toiletcademy.com to understand how to fix that. Reading a single web page, just in time, should be more than adequate.

What is code, in the most abstract sense?

code (kōd) …

    1. a. A system of signals used to represent letters or numbers in transmitting messages.
       b. A system of symbols, letters, or words given certain arbitrary meanings, used for transmitting messages requiring secrecy or brevity.
    2. A system of symbols and rules used to represent instructions to a computer…

The American Heritage Dictionary of the English Language

Is it punchcards? Remote terminals? Emacs? TextMate? Eclipse? Visual Studio? C? Ruby? JavaScript? In the 1920s, it was considered important to learn how to use slide rules. In the 1960s, it was considered important to learn mechanical drawing. None of that matters today. I'm hesitant to recommend any particular approach to coding other than the fundamentals as outlined in Code: The Hidden Language of Computer Hardware and Software, because I'm not sure we'll even recognize coding in the next 20 or 30 years. To kids today, perhaps coding will eventually resemble Minecraft, or building levels in Portal 2.

But everyone should try writing a little code, because it somehow sharpens the mind, right? Maybe in the same abstract way that reading the entire Encyclopedia Britannica from beginning to end does. Honestly, I'd prefer that people spend their time discovering what problems they love and find interesting, first, and researching the hell out of those problems. The toughest thing in life is not learning a bunch of potentially hypothetically useful stuff, but figuring out what the heck it is you want to do. If said research and exploration leads to coding, then by all means learn to code with my blessing … which is worth exactly what it sounds like, nothing.

So, no, I don't advocate learning to code for the sake of learning to code. What I advocate is shamelessly following your joy. For example, I received the following email yesterday.

I am a 45 year old attorney/C.P.A. attempting to abandon my solo law practice as soon as humanly possible and strike out in search of my next vocation. I am actually paying someone to help me do this and, as a first step in the "find yourself" process, I was told to look back over my long and winding career and identify those times in my professional life when I was doing something I truly enjoyed.

Coming of age as an accountant during the PC revolution (when I started my first "real" job at Arthur Andersen, we were still billing clients to update depreciation schedules manually), I spent a lot of time learning how to make computers, printers, and software (VisiCalc anyone?) work. This quasi-technical aspect of my work reached its apex when I was hired as a healthcare financial analyst for a large hospital system. When I arrived for my first day of work in that job, I learned that my predecessor had bequeathed me only a one page static Excel spreadsheet that purported to "analyze" a multi-million dollar managed care contract for a seven hospital health system. I proceeded to build my own spreadsheet but quickly exceeded the database functional capacity of Excel and had to teach myself Access and thereafter proceeded to stretch the envelope of Access' spreadsheet capabilities to their utmost capacity – I had to retrieve hundreds of thousands of patient records and then perform pro forma calculations on them to see if the proposed contracts would result in more or less payment given identical utilization.

I will be the first to admit that I was not coding in any professional sense of the word. I did manage to make Access do things that MS technical support told me it could not do but I was still simply using very basic commands to bend an existing application to my will. The one thing I do remember was being happy. I typed infinitely nested commands into formula cells for twelve to fourteen hours a day and was still disappointed when I had to stop.

My experience in building that monster and making it run was, to date, my most satisfying professional accomplishment, despite going on to later become CFO of another healthcare facility, a feat that should have fulfilled all of my professional ambitions at that time. More than just the work, however, was the group of like-minded analysts and IT folks with whom I became associated as I tried, failed, tried, debugged, and continued building this behemoth of a database. I learned about Easter Eggs and coding lore and found myself hacking into areas of the hospital mainframe which were completely off-limits to someone of my paygrade. And yet, I kept pursuing my "professional goals" and ended up in jobs/careers I hated, doing work I loathed.

Here's a person who a) found an interesting problem, b) attempted to create a solution to the problem, which naturally c) led them to learning to code. And they loved it. This is how it's supposed to work. I didn't become a programmer because someone told me learning to code was important, I became a programmer because I wanted to change the rules of the video games I was playing, and learning to code was the only way to do that. Along the way, I too fell in love.

All that to say that as I stand at the crossroads once more, I still hear the siren song of those halcyon days of quasi-coding during which I enjoyed my work. My question for you is whether you think it is even possible for someone of my vintage to learn to code to a level that I could be hired as a programmer. I am not trying to do this on the side while running the city of New York as a day job. Rather, I sincerely and completely want to become a bona fide programmer and spend my days creating (and/or debugging) something of value.

Unfortunately, calling yourself a "programmer" can be a career-limiting move, particularly for someone who was a CFO in a previous career. People who work with money tend to make a lot of money; see Wall Street.

But this isn't about money, is it? It's about love. So, if you want to be a programmer, all you need to do is follow your joy and fall in love with code. Any programmer worth their salt immediately recognizes a fellow true believer, a person as madly in love with code as they are, warts and all. Welcome to the tribe.

And if you're reading this and thinking, "screw this Jeff Atwood guy, who is he to tell me whether I should learn to code or not", all I can say is: good! That's the spirit!


The Eternal Lorem Ipsum

May 19, 2012

If you've studied design at all, you've probably encountered Lorem Ipsum placeholder text at some point. Anywhere there is text, but the meaning of that text isn't particularly important, you might see Lorem Ipsum.

Tintin-lipsum

Most people recognize it as Latin. And it is. But it is arbitrarily rearranged and not quite coherent Latin, extracted from a book Cicero wrote in 45 BC. Here's the complete quote, with the bits and pieces that make up Lorem Ipsum highlighted.

Nemo enim ipsam voluptatem, quia voluptas sit, aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos, qui ratione voluptatem sequi nesciunt, neque porro quisquam est, qui dolorem ipsum, quia dolor sit amet, consectetur, adipisci[ng] velit, sed quia non numquam [do] eius modi tempora inci[di]dunt, ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum iure reprehenderit, qui in ea voluptate velit esse, quam nihil molestiae consequatur, vel illum, qui dolorem eum fugiat, quo voluptas nulla pariatur?

At vero eos et accusamus et iusto odio dignissimos ducimus, qui blanditiis praesentium voluptatum deleniti atque corrupti, quos dolores et quas molestias excepturi sint, obcaecati cupiditate non provident, similique sunt in culpa, qui officia deserunt mollitia animi, id est laborum et dolorum fuga.

But what does it all mean? Here's an English translation with the same parts highlighted.

Nor again is there anyone who loves or pursues or desires to obtain pain of itself, because it is pain, but occasionally circumstances occur in which toil and pain can procure him some great pleasure. To take a trivial example, which of us ever undertakes laborious physical exercise, except to obtain some advantage from it? But who has any right to find fault with a man who chooses to enjoy a pleasure that has no annoying consequences, or one who avoids a pain that produces no resultant pleasure?

On the other hand, we denounce with righteous indignation and dislike men who are so beguiled and demoralized by the charms of pleasure of the moment, so blinded by desire, that they cannot foresee the pain and trouble that are bound to ensue; and equal blame belongs to those who fail in their duty through weakness of will, which is the same as saying through shrinking from toil and pain.

Of course the whole point of Lorem Ipsum is that the words aren't supposed to mean anything, so attempting to divine its meaning is somewhat … unsatisfying, perhaps by design. Lorem Ipsum is a specific form of what is generally referred to somewhat cheekily as "Greeking":

Greeking is a style of displaying or rendering text or symbols, not always from the Greek alphabet. Greeking obscures portions of a work for the purpose of either emphasizing form over details or displaying placeholders for unavailable content. The name is a reference to the phrase "Greek to me", meaning something that one cannot understand, so that it might as well be in a foreign language.

So when you need filler or placeholder text, you naturally reach for Lorem Ipsum as the standard. The theory is that, since it's unintelligible, nobody will attempt to read it, but instead focus on other aspects of the design. If you put readable text in the design, people might think the text is important to the design, that the text represents the sort of content you expect to see, or that the text somehow itself needs to be copyedited and updated and critiqued.
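
If you'd rather generate greeked filler than copy and paste it, a few lines of code will do. Here's a toy sketch of the idea – mine, not from any of the sites below – that shuffles words from the classic passage into pseudo-sentences, unintelligible by construction:

    import random

    # Words drawn from the classic Lorem Ipsum passage.
    WORDS = ("lorem ipsum dolor sit amet consectetur adipiscing elit sed do "
             "eiusmod tempor incididunt ut labore et dolore magna aliqua").split()

    def greek(sentences=3, words_per_sentence=8):
        """Build plausible-looking but unreadable filler, one shuffled sentence at a time."""
        out = []
        for _ in range(sentences):
            picks = random.choices(WORDS, k=words_per_sentence)
            out.append(picks[0].capitalize() + " " + " ".join(picks[1:]) + ".")
        return " ".join(out)

    print(greek())  # e.g. "Tempor sed dolore ipsum amet ut magna elit. …"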

(Regular readers of this blog may remember that I am fond of using Alice in Wonderland in this manner, when I need a bit of text to demonstrate something in a post.)

Lorem-ipsum

However, not everyone agrees that relying on a standard boilerplate greeked placeholder text is appropriate, even going so far as to call for the death of Lorem Ipsum. I think it depends what you're trying to accomplish. I once noted that it's better to use real content to avoid Blank Page Syndrome, for example.

There are quite a few websites that helpfully offer up the classic Lorem Ipsum text in various eminently copy-and-pastable forms.

Classic Lorem Ipsum

Beyond that, if you just want a bunch of, uh, interesting text to fill an area, there are a lot – and I mean a lot – of websites to choose from. So many, in fact, that I was a little overwhelmed trying to index them all. I've tried to broadly categorize the ones I did find, below. If you know of more, feel free to leave a comment and I'll update the list.

Novelty

Clever English Tricks

Literature

Professions

Social Networks

TV, Movies and Media

Possibly NSFW

Regional

This is a lot to go through. If I had to pick a favorite, I'd say Fillerati because it's all dignified and stuff. But I think truer to the spirit of Lorem Ipsum are definitely the homophonic transformations, which consistently blow my mind every time I attempt to read them. Isn't that the implied goal of any properly greeked text? You were one deliciously perverse professor of romance languages, Howard L. Chace.

In today's Pinteresting world, images are arguably more important than text. But what is the Lorem Ipsum of images? Is there even one? I guess you could just slap some Lorem Ipsum text in an image, but where is the fun in that? Anyway, there are also plenty of websites offering up placeholder images of various types to go along with your Lorem Ipsum placeholder text.

Images

I'm not sure the world needs any more Lorem Ipsum-alikes than we already have at this point. Like the market for ironic t-shirts, the Internet has ensured that our placeholder greeked text needs have not merely been met but vastly exceeded for the foreseeable future. But after discovering all the creative things people have done with Lorem Ipsum, and text placeholders in general, it's sure tempting to dream yet another one up, isn't it?


Please Don't Learn to Code

May 15, 2012

The whole "everyone should learn programming" meme has gotten so out of control that the mayor of New York City actually vowed to learn to code in 2012.

Bloomberg-vows-to-code

A noble gesture to garner the NYC tech community vote, for sure, but if the mayor of New York City actually needs to sling JavaScript code to do his job, something is deeply, horribly, terribly wrong with politics in the state of New York. Even if Mr. Bloomberg did "learn to code", with apologies to Adam Vandenberg, I expect we'd end up with this:

10 PRINT "I AM MAYOR"
20 GOTO 10

Fortunately, the odds of this technological flight of fancy happening – even in jest – are zero, and for good reason: the mayor of New York City will hopefully spend his time doing the job taxpayers paid him to do instead. According to the Office of the Mayor home page, that means working on absenteeism programs for schools, public transit improvements, the 2013 city budget, and … do I really need to go on?

To those who argue programming is an essential skill we should be teaching our children, right up there with reading, writing, and arithmetic: can you explain to me how Michael Bloomberg would be better at his day to day job of leading the largest city in the USA if he woke up one morning as a crack Java coder? It is obvious to me how being a skilled reader, a skilled writer, and at least high school level math are fundamental to performing the job of a politician. Or at any job, for that matter. But understanding variables and functions, pointers and recursion? I can't see it.

Look, I love programming. I also believe programming is important … in the right context, for some people. But so are a lot of skills. I would no more urge everyone to learn programming than I would urge everyone to learn plumbing. That'd be ridiculous, right?

Advice-for-plumbers

The "everyone should learn to code" movement isn't just wrong because it falsely equates coding with essential life skills like reading, writing, and math. I wish. It is wrong in so many other ways.

  • It assumes that more code in the world is an inherently desirable thing. In my thirty year career as a programmer, I have found this … not to be the case. Should you learn to write code? No, I can't get behind that. You should be learning to write as little code as possible. Ideally none.

  • It assumes that coding is the goal. Software developers tend to be software addicts who think their job is to write code. But it's not. Their job is to solve problems. Don't celebrate the creation of code, celebrate the creation of solutions. We have way too many coders addicted to doing just one more line of code already.

  • It puts the method before the problem. Before you go rushing out to learn to code, figure out what your problem actually is. Do you even have a problem? Can you explain it to others in a way they can understand? Have you researched the problem, and its possible solutions, deeply? Does coding solve that problem? Are you sure?

  • It assumes that adding naive, novice, not-even-sure-they-like-this-whole-programming-thing coders to the workforce is a net positive for the world. I guess that's true if you consider that one bad programmer can easily create two new jobs a year. And for that matter, most people who already call themselves programmers can't even code, so please pardon my skepticism of the sentiment that "everyone can learn to code".

  • It implies that there's a thin, easily permeable membrane between learning to program and getting paid to program professionally. Just look at these new programmers who got offered jobs at an average salary of $79k/year after attending a mere two and a half month bootcamp! Maybe you too can teach yourself Perl in 24 hours! While I love that programming is an egalitarian field where degrees and certifications are irrelevant in the face of experience, you still gotta put in your ten thousand hours like the rest of us.

I suppose I can support learning a tiny bit about programming just so you can recognize what code is, and when code might be an appropriate way to approach a problem you have. But I can also recognize plumbing problems when I see them without any particular training in the area. The general populace (and its political leadership) could probably benefit most of all from a basic understanding of how computers, and the Internet, work. Being able to get around on the Internet is becoming a basic life skill, and we should be worried about fixing that first and most of all, before we start jumping all the way into code.

Please don't advocate learning to code just for the sake of learning how to code. Or worse, because of the fat paychecks. Instead, I humbly suggest that we spend our time learning how to …

  • Research voraciously, and understand how the things around us work at a basic level.
  • Communicate effectively with other human beings.

These are skills that extend far beyond mere coding and will help you in every aspect of your life.


This Is All Your App Is: a Collection of Tiny Details

May 7, 2012

Fair warning: this is a blog post about automated cat feeders. Sort of. But bear with me, because I'm also trying to make a point about software. If you have a sudden urge to click the back button on your browser now, I don't blame you. I don't often talk about cats, but when I do, I make it count.

We've used automated cat feeders since 2007 with great success. (My apologies for the picture quality, but it was 2007, and camera phones were awful.)

Old-petmate-feeders

Feeding your pets using robots might sound impersonal and uncaring. Perhaps it is. But I can't emphasize enough how much of a daily lifestyle improvement it really is to have your pets stop associating you with ritualized, timed feedings. As my wife so aptly explained:

I do not miss the days when the cats would come and sit on our heads at 5 AM, wanting their breakfast.

Me neither. I haven't stopped loving our fuzzy buddies, but this was also before we had one, two, three children. We don't have a lot of time for random cat hijinks these days. Anyway, once we set up the automated feeders in 2007, it was a huge relief to outsource pet food obsessions to machines. They reliably delivered a timed feeding at 8am and 8pm like clockwork for the last five years. No issues whatsoever, other than changing the three D batteries about once a year, filling the hopper with kibble about once a month, and an occasional cleaning.

Although they worked, there were still many details of the automated feeders' design that were downright terrible. I put up with these problems because I was so happy to have automatic feeders that worked at all. So when I noticed that the 2012 version of these feeders appeared to be considerably updated, I went ahead and upgraded immediately on faith alone. After all, it had been nearly five years! Surely the company had improved their product a bit since then … right? Well, a man can dream, can't he?

New-petmate-feeders

When I ordered the new feeders, I assumed they would be a little better than what I had before.

Petmate-lebistro-old-and-new

The two feeders don't look so radically different, do they? But pay attention to the details.

  • The food bowl is removable. It drove me crazy that the food bowl in the old version was permanently attached, and tough to clean as a result.
  • The food bowl has rounded interior edges. As if cleaning the non-removable bowl of our old version wasn't annoying enough, it also had sharp interior edges, which tended to accrete a bunch of powdered food gunk in there over time. Very difficult to clean properly.
  • The programming buttons are large and easy to press. In the old version, the buttons were small watch-style soft rubber buttons that protruded from the surface. The tactile feedback was terrible, and they were easy to mis-press because of their size and mushiness.
  • The programming buttons are directly accessible on the face of the device. For no discernible reason whatsoever, the programming buttons in the old version were under a little clear plastic protective "sneeze guard" flap, which you had to pinch up and unlock with your thumb before you could do any programming at all. I guess the theory was that a pet could somehow accidentally brush against the buttons and do … something … but that seems incredibly unlikely. But most of all, unnecessary.
  • The programming is easier. We never changed the actual feed schedule, but just changing the time for daylight savings was so incredibly awkward and contorted we had to summarize the steps from the manual on a separate piece of paper as a "cheat sheet". The new version, in contrast, makes changing the time almost as simple as it should be. Almost.
  • There is an outflow cover flap. By far the number one physical flaw of the old feeder: the feed slot invites curious paws, and makes it all too easy to fish out kibble on demand. You can see in my original photo that we had to mod the feed slot to tape (and eventually bolt) a wire soap dish cover over it so the cats wouldn't be able to manually feed. The new feeder has a perfectly aligned outflow flap that I couldn't even dislodge with my finger. And it works; even our curious-est cat wasn't able to get past it.
  • The top cover rotates to lock. On the old feeder, the top cover to the clear kibble storage was a simple friction fit; dislodging it wasn't difficult, and the cats did manage to do this early on with some experimentation. On the new feeder, the cover is slotted, and rotates to lock against the kibble storage securely. This is the same way the kibble feeder body locks on the base (on both old and new feeders), so it's logical to use this same "rotate to lock into or out of position" design in both places.
  • The feed hopper is funnel shaped. The old feed hopper was a simple cylinder, and held less in the same space as a result. When I transferred the feed over from the old full models (we had literally just filled them the day before) to the updated ones, I was able to add about 15-20 percent more kibble despite the device being roughly the same size in terms of floor space.
  • The base is flared. Stability is critical; depending how adventurous your cats are, they may physically attack the feeders and try to push them over, or hit them hard enough to trigger a trickle of food dispensing. A flared base isn't the final solution, but it's a big step in the right direction. It's a heck of a lot tougher to knock over a feeder with a bigger "foot" on the ground.
  • It's off-white. The old feeder, like the Ford Model T, was available in any color customers wanted, so long as it was black. Which meant it did a great job of not blending in with almost any decor, and also showed off its dust collection like a champ. Thank goodness the new model comes in "linen".

These are, to be sure, a bunch of dumb, nitpicky details. Did the old version feed our cats reliably? Yes, it did. But it was also a pain to clean and maintain, a sort of pain that I endured weekly, for reasons that made no sense to me other than arbitrarily poor design choices. But when I bought the new version of the automated feeder, I was shocked to discover that nearly every single problem I had with the previous generation was addressed. I felt as if the Petmate Corporation™ was actually listening to all the feedback from the people who used their product, and actively refined the product to address our complaints and suggestions.

My point, and I do have one, is that details matter. Details matter, in fact, a hell of a lot. Whether in automatic cat feeders, or software. As my friend Wil Shipley once said:

This is all your app is: a collection of tiny details.

This is still one of my favorite quotes about software. It's something we internalized heavily when building Stack Overflow. Getting the details right is the difference between something that delights, and something customers tolerate.

Your software, your product, is nothing more than a collection of tiny details. If you don't obsess over all those details, if you think it's OK to concentrate on the "important" parts and continue to ignore the other umpteen dozen tiny little ways your product annoys the people who use it on a daily basis – you're not creating great software. Someone else is. I hope for your sake they aren't your competitor.

The details are hard. Everyone screws up the details at first, just like Petmate did with the first version of this automatic feeder. And it's OK to screw up the details initially, provided …

  • you're getting the primary function more or less right.
  • you're listening to feedback from the people who use your product, and actively refining the details of your product based on their feedback every day.

We were maniacal about listening to feedback from avid Stack Overflow users from the earliest days of Stack Overflow in August 2008. Did you know that we didn't even have comments in the first version of Stack Overflow? But it was obvious, based on user feedback and observed usage, that we desperately needed them. There are now, at the time I am writing this, 1,569 completed feature requests; that's more than one per day on average.

Imagine that. Someone who cares about the details just as much as you do.


Buying Happiness

May 3, 2012

Despite popular assertions to the contrary, science tells us that money can buy happiness. To a point.

Recent research has begun to distinguish two aspects of subjective well-being. Emotional well-being refers to the emotional quality of an individual's everyday experience — the frequency and intensity of experiences of joy, stress, sadness, anger, and affection that make one's life pleasant or unpleasant. Life evaluation refers to the thoughts that people have about their life when they think about it. We raise the question of whether money buys happiness, separately for these two aspects of well-being. We report an analysis of more than 450,000 responses to the Gallup-Healthways Well-Being Index, a daily survey of 1,000 US residents conducted by the Gallup Organization. […] When plotted against log income, life evaluation rises steadily. Emotional well-being also rises with log income, but there is no further progress beyond an annual income of ~$75,000.

For reference, the federal poverty level for a family of four is currently $23,050. Once you reach a little over three times the poverty level in income ($75,000 ÷ $23,050 ≈ 3.3), you've achieved peak happiness, at least as far as money alone can reasonably get you.

This is something I've seen echoed in a number of studies. Once you have "enough" money to satisfy the basic items at the foot of Maslow's Hierarchy of Needs pyramid – that is, you no longer have to worry about food, shelter, security, and perhaps having a bit of extra discretionary money for the unknown – stacking even more money up doesn't do much, if anything, to help you scale the top of the pyramid.

Maslows-hierarchy-of-needs

But even if you're fortunate enough to have a good income, how you spend your money has a strong influence on how happy – or unhappy – it will make you. And, again, there's science behind this. The relevant research is summarized in If money doesn't make you happy, then you probably aren't spending it right (pdf).

Most people don't know the basic scientific facts about happiness — about what brings it and what sustains it — and so they don't know how to use their money to acquire it. It is not surprising when wealthy people who know nothing about wine end up with cellars that aren't that much better stocked than their neighbors', and it should not be surprising when wealthy people who know nothing about happiness end up with lives that aren't that much happier than anyone else's. Money is an opportunity for happiness, but it is an opportunity that people routinely squander because the things they think will make them happy often don't.

You may also recognize some of the authors on this paper, in particular Dan Gilbert, who also wrote the excellent book Stumbling on Happiness that touched on many of the same themes.

What is, then, the science of happiness? I'll summarize the basic eight points as best I can, but read the actual paper (pdf) to obtain the citations and details on the underlying studies underpinning each of these principles.

1. Buy experiences instead of things

Things get old. Things become ordinary. Things stay the same. Things wear out. Things are difficult to share. But experiences are totally unique; they shine like diamonds in your memory, often more brightly every year, and they can be shared forever. Whenever possible, spend money on experiences such as taking your family to Disney World, rather than things like a new television.

2. Help others instead of yourself

Human beings are intensely social animals. Anything we can do with money to create deeper connections with other human beings tends to tighten our social connections and reinforce positive feelings about ourselves and others. Imagine ways you can spend some part of your money to help others – even in a very small way – and integrate that into your regular spending habits.

3. Buy many small pleasures instead of few big ones

Because we adapt so readily to change, the most effective use of your money is to bring frequent change, not just "big bang" changes that you will quickly grow acclimated to. Break up large purchases, when possible, into smaller ones over time so that you can savor the entire experience. When it comes to happiness, frequency is more important than intensity. Embrace the idea that lots of small, pleasurable purchases are actually more effective than a single giant one.

4. Buy less insurance

Humans adapt readily to both positive and negative change. Extended warranties and insurance prey on your impulse for loss aversion, but because we are so adaptable, people experience far less regret than they anticipate when their purchases don't work out. Furthermore, having the easy "out" of insurance or a generous return policy can paradoxically lead to even more angst and unhappiness, because people have deprived themselves of the emotional benefit of full commitment. Thus, avoid buying insurance, and don't seek out generous return policies.

5. Pay now and consume later

Immediate gratification can lead you to make purchases you can't afford, or may not even truly want. Impulse buying also deprives you of the distance necessary to make reasoned decisions. It eliminates any sense of anticipation, which is a strong source of happiness. For maximum happiness, savor (maybe even prolong!) the uncertainty of deciding whether to buy, what to buy, and the time waiting for the object of your desire to arrive.

6. Think about what you're not thinking about

We tend to gloss over details when considering future purchases, but research shows that our happiness (or unhappiness) largely lies in exactly those tiny details we aren't thinking about. Before making a major purchase, consider the mechanics and logistics of owning this thing, and where your actual time will be spent once you own it. Try to imagine a typical day in your life, in some detail, hour by hour: how will it be affected by this purchase?

7. Beware of comparison shopping

Comparison shopping focuses us on attributes of products that arbitrarily distinguish one product from another, but have nothing to do with how much we'll enjoy the purchase. They emphasize characteristics we care about while shopping, but not necessarily what we'll care about when actually using or consuming what we just bought. In other words, getting a great deal on cheap chocolate for $2 may not matter if it's not pleasurable to eat. Don't get tricked into comparing for the sake of comparison; try to weight only those criteria that actually matter to your enjoyment of the experience.

8. Follow the herd instead of your head

Don't overestimate your ability to independently predict how much you'll enjoy something. We are, scientifically speaking, very bad at this. But if something reliably makes others happy, it's likely to make you happy, too. Weight other people's opinions and user reviews heavily in your purchasing decisions.

Happiness is a lot harder to come by than money. So when you do spend money, keep these eight lessons in mind to maximize whatever happiness it can buy for you. And remember: it's science!


Trust Me, I'm Lying

May 1, 2012

We reflexively instruct our children to always tell the truth. It's even encoded into Boy Scout Law. It's what adults do, isn't it? But do we? Isn't telling the truth too much and too often a bad life strategy – perhaps even dangerous? Is telling children to always tell the truth even itself the whole truth?

Trust-me-im-lying

One of the most thought provoking articles on the topic, and one I keep returning to, year after year, is I Think You're Fat. It's about the Radical Honesty movement, which proposes that adults follow their own advice and always tell the truth. No matter what.

The [Radical Honesty] movement was founded by a sixty-six-year-old Virginia-based psychotherapist named Brad Blanton. He says everybody would be happier if we just stopped lying. Tell the truth, all the time. This would be radical enough – a world without fibs – but Blanton goes further. He says we should toss out the filters between our brains and our mouths. If you think it, say it. Confess to your boss your secret plans to start your own company. If you're having fantasies about your wife's sister, Blanton says to tell your wife and tell her sister. It's the only path to authentic relationships. It's the only way to smash through modernity's soul-deadening alienation. Oversharing? No such thing.

Yes. I know. One of the most idiotic ideas ever, right up there with Vanilla Coke and giving Phil Spector a gun permit. Deceit makes our world go round. Without lies, marriages would crumble, workers would be fired, egos would be shattered, governments would collapse.

And yet … maybe there's something to it. Especially for me. I have a lying problem. Mine aren't big lies. They aren't lies like "I cannot recall that crucial meeting from two months ago, Senator." Mine are little lies. White lies. Half-truths. The kind we all tell. But I tell dozens of them every day. "Yes, let's definitely get together soon." "I'd love to, but I have a touch of the stomach flu." "No, we can't buy a toy today – the toy store is closed." It's bad. Maybe a couple of weeks of truth-immersion therapy would do me good.

The author, A.J. Jacobs, is a great writer who made a cottage industry out of treating himself like a guinea pig, attempting to become the smartest man in the world, spending a year living exactly as the Bible tells us to, and becoming the fittest person on Earth. Based on the strength of this article, I bought two of his books; experiments like Radical Honesty are right up his alley.

Radical honesty itself isn't exactly a new concept. It's been parodied in any number of screwball Hollywood comedies such as Liar, Liar (1997) and The Invention of Lying (2009). But there's a big difference between milking this concept for laughs and exploring it as an actual lifestyle among real human beings. Among the ideas raised in the article, which you should go read now, are:

  • Telling someone that something they created is terrible: is that cruelty, because they have no talent, or is it compassion, so they can know they need to improve it?
  • Does a thought in your head that you never express to anyone represent your truth? Should you share it? This is particularly tricky for men, who think about sex twice as much as women.
  • How much mental energy do you expend listening to a conversation trying to determine how much of what the other person is saying is untrue? Wouldn't it be less fatiguing if everything they said was, by definition, the truth? And when you're talking, always telling the truth means you never have to decide just how much truth to tell, how to hedge, massage, and spin the truth to make it palatable.
  • In a hypothetical future when every action we take is public and broadcast to the world, is that exposing the real truth of our lives? Should we become more honest today to ready ourselves for this inevitable future?
  • Always telling the truth can be thrilling, a form of risk taking, as you intentionally violate taboos around politeness that exist solely for the sake of avoiding conflict.
  • Total honesty can lead to new breakthroughs in communication, where politeness prevented you from ever reaching the root, underlying causes of discontent or unhappiness.
  • Honesty is more efficient. Rather than spending a lot of time sending messages back and forth artfully dancing around the truth, go directly there.
  • If people see you are willing to be honest with them, they tend to return the favor, leading to a more useful relationship.

What we often don't acknowledge is that the truth is kind of scary. That's why we have a hard time being honest with ourselves, much less those around us. Reading through all these ambiguous situations that A.J. put himself through, you start to wonder if you understand what truth is, or what it means to decide that something is "true". After summarizing the article in bullet form, I'm surprised there are so many points in favor of honesty, maybe even radical honesty.

But uncompromisingly committing to the whole truth, and nothing but the truth, has a darker side.

My wife tells me a story about switching operating systems on her computer. In the middle, I have to go help our son with something, then forget to come back.

"Do you want to hear the end of the story or not?" she asks.

"Well...is there a payoff?"

"F**k you."

It would have been a lot easier to have kept my mouth closed and listened to her. It reminds me of an issue I raised with Blanton: Why make waves? "Ninety percent of the time I love my wife," I told him. "And 10 percent of the time I hate her. Why should I hurt her feelings that 10 percent of the time? Why not just wait until that phase passes and I return to the true feeling, which is that I love her?"

Blanton's response: "Because you're a manipulative, lying son of a bitch."

Rather than embrace the truth, as Radical Honesty would have us do, Adrian Tan advises us to be wary of the truth.

Most of you will end up in activities which involve communication. To those of you I have a second message: be wary of the truth. I’m not asking you to speak it, or write it, for there are times when it is dangerous or impossible to do those things. The truth has a great capacity to offend and injure, and you will find that the closer you are to someone, the more care you must take to disguise or even conceal the truth. Often, there is great virtue in being evasive, or equivocating. There is also great skill. Any child can blurt out the truth, without thought to the consequences. It takes great maturity to appreciate the value of silence.

I think he's right. But Radical Honesty isn't altogether wrong, either. Let me be clear: Radical Honesty, as a lifestyle, is ridiculous and insane. Advocating telling the truth 100% of the time, no matter what, is harmful extremism. But it's also wonderfully seductive as a concept, because it illustrates how needlessly afraid most of us are of truth – even truths that could potentially help us. Radical Honesty teaches us to be more brave. That is, when it's not destroying our lives and the lives of everyone around us.

Ask yourself:

  • What is the purpose of this truth?
  • What effect will sharing this truth have on the other person, on yourself, on the world?
  • What change will come about, positive or negative, from choosing to voice a particular truth at a particular time?

I believe the true lesson of Radical Honesty is that truth, real truth, is honesty with a purpose. Ideally a noble purpose, but any purpose at all other than "because I could" will suffice. By all means, be brave; embrace the truth. But if your honesty has no purpose, if you can't imagine any positive outcome from being honest, I suggest you're better off keeping it to yourself.

Or even lying.


Geekatoo, the Geek Bat-Signal

April 27, 2012

To understand this story, you need to understand that grandchildren are like crack cocaine to grandparents. I'm convinced that if our parents could somehow snort our children up their noses to get a bigger fix, they would. And when your parents live out of state, like ours do, access to the Internet isn't just important. No. It is life threatening.

Like Gator in Jungle Fever, grandparents just gotta get their fix of the grandkids every month. And if they don't, if their Internet is broken for any reason, you're going to get an earful via telegraph and facsimile and registered letter until you fix it.

one rule: never get high on your own supply.

Either way, they're gonna get high. On your kids.

My mom is no exception. So when her computer suddenly stopped working, and she couldn't get updates on her three grandkids, I got frantic calls. Which is odd, because everything had been working fine for a few years now. Once Henry was born in 2009, I set her up with a netbook that had Skype and Firefox set to auto update and she'd been able to video chat with us regularly, no problem at all, since then. So what happened?

My first thought was to hell with it, I'll just buy her a new iPad online via the Apple Store. I'm a big fan of the retina display, and surely the touchy-feely iPad would be more resistant to whatever problem she was having than a netbook, what with its archaic "operating system" and "updates" and "keyboard" and "mouse".

With some urging from my wife (I married well), cooler heads prevailed. What if her problem had nothing to do with the computer, but her Internet connection in some way? Then I'd just be trading one set of problems for another with the iPad. I have no idea how things are set up over there, thousands of miles away. I needed help. Help from a fellow geek who lives nearby and is willing to drive out and assist my poor mom.

My mom doesn't live near where I grew up any more, so I have no friend network there. All I could think of was Geek Squad. I've seen the trucks in our neighborhood, and they've been around a while, so I checked out their website. Maybe they'd work?

Geek-squad-service

When I can buy my mom a new iPad for $399, the idea of paying $299 just to have someone come out and fix her old stuff starts to feel like a really bad idea. But I suppose it's a preview of our disposable computer future, because it's increasingly cheaper to buy a new one than it is to bother fixing the old one. This is the stuff that my friend and iFixit founder Kyle Wiens' nightmares are made of. I'm sorry, Kyle. But it's coming.

I posted my discontent on Twitter, as I am wont to do, and received an interesting recommendation for a site I'd never heard of – Geekatoo.

Geekatoo-logo

I was intrigued, first because the site didn't appear to suck, which is more than I can say for about half the links I click on, and second because it appealed to my geek instincts. I could post a plea for help for my mom, and a fellow geek, one of my kind who happened to be local, would be willing to head out and assist. I could send out the geek bat-signal! But I was still skeptical. My mom lives in Charlotte, North Carolina, which, while not exactly the sticks, isn't necessarily a big tech hub city, either. I figured I had nothing to lose at this point, so I posted the request titled "Mom Needs Tech Support" with the info I had.

Much to my surprise, I got two great bids within 24 hours, geeks with good credentials, and I picked the first one. The estimate was two hours for $45, and he was on-site helping my mom within two days of the time I posted.

Geekatoo-case

It turns out that my wife's intuition was correct: the cable internet installer had inexplicably decided to connect my mother's computer to a neighbor's wireless, instead of setting up a WiFi access point for her. So when that neighbor moved away, calamity ensued.

And the results? Well, I think they speak for themselves.

Thank you Jeff you are the best son ever!!!!!!!!!

My mom, as usual, exaggerates about her only son. I am far, far from the best son ever. But any website that can make me look like a hero to my mom, and keep my fellow Super User geeks gainfully employed doing superhero work on my behalf gets a huge thumbs up from me.

Needless to say, strongly recommended. If you need reliable local tech support that won't break the bank, and you want to support both your family and your local geek community at the same time, check out Geekatoo.


Will Apps Kill Websites?

April 23, 2012

I've been an eBay user since 1999, and I still frequent eBay as both buyer and seller. In that time, eBay has transformed from a place where geeks sell broken laser pointers to each other, into a global marketplace where businesses sell anything and everything to customers. If you're looking for strange or obscure items, things almost nobody sells new any more, or grey market items for cheap, eBay is still not a bad place to look.

At least for me, eBay still basically works, after all these years. But one thing hasn't changed: the eBay website has always been difficult to use and navigate. They've updated the website recently to remove some of the more egregious cruft, but it's still way too complicated. I guess I had kind of accepted old, complex websites as the status quo, because I didn't realize how bad it had gotten until I compared the experience on the eBay website with the experience of the eBay apps for mobile and tablet.

eBay Website

Ebay-web

eBay Mobile App

Ebay-iphone-app

eBay Tablet App

Ebay-ipad-app

Unless you're some kind of super advanced eBay user, you should probably avoid the website. The tablet and mobile eBay apps are just plain simpler, easier, and faster to use than the eBay website itself. I know this intuitively from using eBay on my devices and computers, but there are also usability studies with data to prove it. To be fair, eBay is struggling under the massive accumulated design debt of a website originally conceived in the late 90s, whereas their mobile and tablet app experiences are recent inventions. It's not so much that the eBay apps are great, but that the eBay website is so very, very bad.

The implied lesson here is to embrace constraints. Having a limited, fixed palette of UI controls and screen space is a strength. A strength we used to have in early Mac and Windows apps, but seem to have lost somewhere along the way as applications got more powerful and complicated. And it's endemic on the web as well, where the eBay website has been slowly accreting more and more functionality since 1999. The nearly unlimited freedom that you get in a modern web browser to build whatever UI you can dream up, and assume as large or as small a page as you like, often ends up being harmful to users. It certainly is in the case of eBay.

If you're starting from scratch, you should always design the UI first, but now that we have such great mobile and tablet device options, consider designing your site for the devices that have the strictest constraints first, too. This is the thinking that led to the mobile first design strategy. It helps you stay focused on a simple and uncluttered UI that you can scale up to bigger and beefier devices. Maybe eBay is just going in the wrong direction here: design simple things that scale up, not complicated things you need to scale down.

Above all else, simplify! But why stop there? If building the mobile and tablet apps first for a web property produces a better user experience – why do we need the website, again? Do great tablet and phone applications make websites obsolete?

Why are apps better than websites?

  1. They can be faster.
    No browser overhead of CSS and HTML and JavaScript hacks, just pure native UI elements retrieving precisely the data they need to display what the user requests.

  2. They use simple, native UI controls.
    Rather than imagineering whatever UI designers and programmers can dream up, why not pick from a well understood palette of built-in UI controls on that tablet or phone, all built for optimal utility and affordance on that particular device?

  3. They make better use of screen space.
    Because designers have to fit just the important things on 4 inch diagonal mobile screens, or 10 inch diagonal tablet screens, they're less likely to fill the display up with a bunch of irrelevant noise or design flourishes (or, uh, advertisements). Just the important stuff, thanks!

  4. They work better on the go and even offline.
    In a mobile world, you can't assume that the user has a super fast, totally reliable Internet connection. So you learn to design apps that download just the data they need at the time they need to display it, and have sane strategies for loading partial content and images as they arrive. That's assuming they arrive at all. You probably also build in some sort of offline mode, too, when you're on the go and you don't have connectivity.
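
That last point is worth making concrete. The "sane strategy" is usually some variant of cache-and-fall-back: use the network when you have it, and the last good copy when you don't. Here's a minimal sketch in Python (the URL and cache file name are invented for illustration):

    import json, os, urllib.request

    CACHE = "listings.json"  # hypothetical local copy of the last successful fetch

    def fetch_listings(url):
        """Fetch fresh data when we can; fall back to the cached copy when offline."""
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                data = json.load(resp)
            with open(CACHE, "w") as f:
                json.dump(data, f)      # refresh the offline copy
        except OSError:                 # no connectivity, DNS failure, timeout …
            if not os.path.exists(CACHE):
                raise                   # no network and no cache: nothing to show
            with open(CACHE) as f:
                data = json.load(f)     # stale beats blank
        return data

Stale data beats a blank screen, which is the bet every decent mobile app makes on your behalf.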

Why are websites better than apps?

  1. They work on any device with a browser.
    Websites are as close to universal as we may ever get in the world of software. Provided you have an HTML5-compliant browser, you can run an entire universe of "apps" on your device from day zero, just by visiting a link, exactly the same way everyone has on the Internet since 1995. You don't have to hope and pray a development community emerges and is willing to build whatever app your users need.

  2. They don't have to be installed.
    Applications, unlike websites, can't be visited. They aren't indexed by Google. Nor do applications magically appear on your device; they must be explicitly installed. Even if installation is a one-click affair, your users will have to discover the app before they can even begin to install it. And once installed, they'll have to manage all those applications like so many Pokemon.

  3. They don't have to be updated.
    Websites are always on the infinite version. But once you have an application installed on your device, how do you update it to add features or fix bugs? How do users even know if your app is out of date or needs updating? And why should they need to care in the first place?

  4. They offer a common experience.
    If your app and the website behave radically differently, you're forcing users to learn two different interfaces. How many different devices and apps do you plan to build, and how consistent will they be? You now have a community divided among many different experiences, fragmenting your user base. But with a website that has a decent mobile experience baked in, you can deliver a consistent, and hopefully consistently great, experience across all devices to all your users.

I don't think there's a clear winner, only pros and cons. But apps will always need websites, if for nothing else other than a source of data, as a mothership to phone home to, and a place to host the application downloads for various devices.

And if you're obliged to build a website, why not build it out so it offers a reasonable experience on a mobile or tablet web browser, too? I have nothing against a premium experience optimized to a particular device, but shouldn't all your users have a premium experience? eBay's problem here isn't mobile or tablets per se, but that they've let their core web experience atrophy so badly. I understand that there's a lot of inertia around legacy eBay tools and long time users, so it's easy for me to propose radical changes to the website here on the outside. Maybe the only way eBay can redesign at all is on new platforms.

Will mobile and tablet apps kill websites? A few, certainly. But only the websites stupid enough to let them.


Make Your Email Hacker Proof

April 17, 2012

It's only a matter of time until your email gets hacked. Don't believe me? Just read this harrowing cautionary tale.

When [my wife] came back to her desk, half an hour later, she couldn’t log into Gmail at all. By that time, I was up and looking at e‑mail, and we both quickly saw what the real problem was. In my inbox I found a message purporting to be from her, followed by a quickly proliferating stream of concerned responses from friends and acquaintances, all about the fact that she had been “mugged in Madrid.” The account had seemed sluggish earlier that morning because my wife had tried to use it at just the moment a hacker was taking it over and changing its settings—including the password, so that she couldn’t log in again.

The greatest practical fear for my wife and me was that, even if she eventually managed to retrieve her records, so much of our personal and financial data would be in someone else’s presumably hostile hands that we would spend our remaining years looking over our shoulders, wondering how and when something would be put to damaging use. At some point over the past six years, our [email] correspondence would certainly have included every number or code that was important to us – credit card numbers, bank-account information, medical info, and any other sensitive data you can imagine.

Now get everyone you know to read it, too. Please. It's for their own good.

Your email is the skeleton key to your online identity. When you lose control of your email to a hacker – not if, but when you lose control of your email to a hacker – the situation is dire. Email is a one stop shop for online identity theft. You should start thinking of security for your email as roughly equivalent to the sort of security you'd want on your bank account. It's exceedingly close to that in practice.

The good news, at least if you use GMail, is that you can make your email virtually hacker-proof today, provided you own a cell phone. The fancy geek technical term for this is two-factor authentication, but that doesn't matter right now. What matters is that until you turn this on, your email is vulnerable. So let's get started. Not tomorrow. Not next week. Right. Freaking. Now.

Go to your Google Account Settings

Google-account-settings

Make sure you're logged in. Expand the little drop-down user info panel at the top right of most Google pages. From here, click "Account" to view your account settings.

Google-enable-two-factor-auth

On the account settings page, click "edit" next to 2-step verification and turn it on.

Have Your Cell Phone Ready

GMail will walk you through the next few steps. You just need a telephone that can receive SMS text messages. Enter the numeric code sent through the text message to proceed.

Google-text-email-verification

Now Log In With Your Password and a PIN

Now your password alone is no longer enough to access your email.

Google-two-factor-login

Once this is enabled, accessing your email always requires the password, and a code delivered via your cell phone. (You can check the "remember me for 30 days on this device" checkbox so you don't have to do this every time.) With this in place, even if would-be hackers discover your super sekrit email password, they can't do anything useful with it! To access your email, they'd need to somehow gain control of your cell phone, too. I can't see that happening unless you're in some sort of hostage situation, and at that point I think email security is the least of your problems.

What If I Lose My Cell Phone?

Your cell phone isn't the only way to get the secondary PIN you need to access your email. On the account page there are multiple ways to generate verification codes, including adding a secondary backup phone number, and downloading mobile applications that can generate verification codes without a text message (but that requires a smart phone, naturally).
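
Those code-generator apps aren't magic, by the way; they implement the open TOTP standard (RFC 6238, built on RFC 4226's HOTP), which derives a short-lived numeric code from a shared secret and the current time. Here's a minimal sketch in TypeScript against Node's built-in crypto module – the 20-byte secret is a made-up placeholder, not a real key:

import { createHmac } from "crypto";

function totp(secret: Buffer, unixSeconds: number, digits = 6, stepSeconds = 30): string {
  // Counter = number of 30-second steps since the Unix epoch, as 8 big-endian bytes.
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(unixSeconds / stepSeconds)));
  const hmac = createHmac("sha1", secret).update(counter).digest();
  // Dynamic truncation: the low 4 bits of the last byte pick an offset;
  // read 31 bits from there, then keep the last `digits` decimal digits.
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code = hmac.readUInt32BE(offset) & 0x7fffffff;
  return String(code % 10 ** digits).padStart(digits, "0");
}

const secret = Buffer.from("12345678901234567890"); // hypothetical shared secret
console.log(totp(secret, Date.now() / 1000)); // six digits, changes every 30 seconds

Your phone and Google both hold the same secret, so both can compute the same six digits independently – no network connection required.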

Google-backup-email-codes

The account page also offers the never-fails-always-works option: printing out the single-use backup verification codes on a piece of paper. Go do this now. Right now! And keep those backup codes with you at all times. Put them in your wallet, purse, man-purse, or whatever it is that travels with you most often when you get out of bed.

Backup-verification-codes

What About Apps That Access Email?

Applications or websites that access your email, and thus necessarily store your email address and password, are also affected. They have no idea that they now need to enter a PIN, too, so they'll all be broken. You'll need to generate app-specific passwords for your email. To do that, visit the accounts page.

Google-enabling-apps

Click on authorizing applications & sites, then enter a name for the application and click the Generate Password button.

Google-generated-app-password

Let me be clear about this, because it can be confusing: enter that specially generated password in the application, not your master email password.

This effectively creates a list of passwords specific to each application. So you can see the date each one was last used, and revoke each app's permission to touch your email individually as necessary without ever revealing your primary email password to any application, ever. See, I told you, there is a method to the apparent madness.

But I Don't Use Gmail

Either nag your email provider to provide two-factor authentication, or switch over. Email security is critically important these days, and switching is easy(ish). GMail has had fully secure connections for quite a while now, and once you add two-factor authentication to the mix, that's about as much online email safety as you can reasonably hope to achieve short of going back to snail mail.

Hey, This Sounds Like a Pain!

I know what you're thinking. Yes, this is a pain in the ass. I'll fully acknowledge that. But you know what's an even bigger pain in the ass? Having your entire online identity stolen and trashed by a hacker who happens to obtain your email password one day. Remember that article I exhorted you to read at the beginning? Oh, you didn't read it? Go freaking read it now!

Permit me to channel Jamie Zawinski one last time: "OMG, entering these email codes on every device I access email from would be a lot of work! That sounds like a hassle!" Shut up. I know things. You will listen to me. Do it anyway.

I've been living with this scheme for a few months now, and I've convinced my wife to do the same. I won't lie to you; it hasn't all been wine and roses for us either. But it is inconvenient in the same way that bank vaults and door locks are. The upside is that once you enable this, your email becomes extremely secure, to the point that you can (and I regularly do) email yourself highly sensitive data like passwords and logins to other sites you visit so you can easily retrieve them later.

If you choose not to do this, well, at least you've educated yourself about the risks. I hope you're extremely careful with your email password, and that you change it regularly to something complex. Otherwise, you're making life all too easy for the hackers who make a fabulous living from stealing and permanently defacing online identities just like yours.

[advertisement] Hiring developers? Post your open positions with Stack Overflow Careers and reach over 20MM awesome devs already on Stack Overflow. Create your satisfaction-guaranteed job listing today!
Posted by Jeff Atwood    120 Comments

Learn to Read the Source, Luke

April 16, 2012

In the calculus of communication, writing coherent paragraphs that your fellow human beings can comprehend and understand is far more difficult than tapping out a few lines of software code that the interpreter or compiler won't barf on.

That's why, when it comes to code, all the documentation probably sucks. And because writing for people is way harder than writing for machines, the documentation will continue to suck for the foreseeable future. There's very little you can do about it.

Except for one thing.

Read-the-source-luke

You can learn to read the source, Luke.

The transformative power of "source always included" in JavaScript is a major reason why I coined – and continue to believe in – Atwood's Law. Even if "view source" isn't built in (but it totally should be), you should demand access to the underlying source code for your stack. No matter what the documentation says, the source code is the ultimate truth, the best and most definitive and up-to-date documentation you're likely to find. This will be true forever, so the sooner you come to terms with this, the better off you'll be as a software developer.

I had a whole entry I was going to write about this, and then I discovered Brandon Bloom's brilliant post on the topic at Hacker News. Read closely, because he explains the virtue of reading source, and in what context you need to read the source, far better than I could:

I started working with Microsoft platforms professionally at age 15 or so. I worked for Microsoft as a software developer doing integration work on Visual Studio. More than ten years after I first wrote a line of Visual Basic, I wish I could never link against a closed library ever again.

Using software is different than building software. When you're using most software for its primary function, it's a well worn path. Others have encountered the problems and enough people have spoken up to prompt the core contributors to correct the issue. But when you're building software, you're doing something new. And there are so many ways to do it, you'll encounter unused bits, rusty corners, and unfinished experimental code paths. You'll encounter edge cases that have been known to be broken, but were worked around.

Sometimes, the documentation isn't complete. Sometimes, it's wrong. The source code never lies. For an experienced developer, reading the source can often be faster… especially if you're already familiar with the package's architecture. I'm in a medium-sized co-working space with several startups. A lot of the other CTOs and engineers come to our team for guidance and advice on occasion. When people report a problem with their stack, the first question I ask them is: "Well, did you read the source code?"

I encourage developers to git clone anything and everything they depend on. Initially, they are all afraid. "That project is too big, I'll never find it!" or "I'm not smart enough to understand it" or "That code is so ugly! I can't stand to look at it". But you don't have to search the whole thing, you just need to follow the trail. And if you can't understand the platform below you, how can you understand your own software? And most of the time, what inexperienced developers consider beautiful is superficial, and what they consider ugly, is battle-hardened production-ready code from master hackers. Now, a year or two later, I've had a couple of developers come up to me and thank me for forcing them to sink or swim in other people's code bases. They are better at their craft and they wonder how they ever got anything done without the source code in the past.

When you run a business, if your software has a bug, your customers don't care if it is your fault or Linus' or some random Rails developer's. They care that your software is bugged. Everyone's software becomes my software because all of their bugs are my bugs. When something goes wrong, you need to seek out what is broken, and you need to fix it. You fix it at the right spot in the stack to minimize risks, maintenance costs, and turnaround time. Sometimes, a quick workaround is best. Other times, you'll need to recompile your compiler. Often, you can ask someone else to fix it upstream, but just as often, you'll need to fix it yourself.

  • Closed-software shops have two choices: beg for generosity, or work around it.
  • Open source shops with weaker developers tend to act the same as closed-software shops.
  • Older shops tend to slowly build the muscles required to maintain their own forks and patches and whatnot.

True hackers have come to terms with a simple fact: If it runs on my machine, it's my software. I'm responsible for it. I must understand it. Building from source is the rule and not an exception. I must control my environment and I must control my dependencies.

Nobody reads other people's code for fun. Hell, I don't even like reading my own code. The idea that you'd settle down in a deep leather chair with your smoking jacket and a snifter of brandy for a fine evening of reading through someone else's code is absurd.

But we need access to the source code. We must read other people's code because we have to understand it to get things done. So don't be afraid to read the source, Luke – and follow it wherever it takes you, no matter how scary looking that code is.

[advertisement] Stack Overflow Careers matches the best developers (you!) with the best employers. You can search our job listings or create a profile and even let employers find you.
Posted by Jeff Atwood    54 Comments

Books: Bits vs. Atoms

April 10, 2012

I adore words, but let's face it: books suck.

More specifically, so many beautiful ideas have been helplessly trapped in physical made-of-atoms books for the last few centuries. How do books suck? Let me count the ways:

  • They are heavy.
  • They take up too much space.
  • They have to be printed.
  • They have to be carried in inventory.
  • They have to be shipped in trucks and planes.
  • They aren't always available at a library.
  • They may have to be purchased at a bookstore.
  • They are difficult to find.
  • They are difficult to search within.
  • They can go out of print entirely.
  • They are too expensive.
  • They are not interactive.
  • They cannot be updated for errors and addendums.
  • They are often copyrighted.

What's the point of a bookshelf full of books other than as an antiquated trophy case of written ideas trapped in awkward, temporary physical relics?

Brian-dettmer-book

Books should not be celebrated. Words, ideas, and concepts should be celebrated. Books were necessary to store these things, simply because we didn't have any other viable form to contain them. But now we do.

Words Belong on the Internet

At the risk of stating the obvious, if your goal is to get a written idea in front of as many human beings as efficiently as possible, you shouldn't be publishing dead tree books at all. You should be editing a wiki, writing a blog, or creating a website. That's why the Encyclopedia Britannica officially went out of print in 2012, after a 244-year print run. In the straight-up match between paper and Web, the Encyclopedia Britannica lost. Big time.

The EB couldn’t cover enough: 65,000 topics compared to the almost 4M in the English version of Wikipedia.

Topics had to be consistently shrunk or discarded to make room for new information. E.g., the 1911 entry on Oliver Goldsmith was written by no less than Thomas Macaulay, but with each edition, it got shorter and shorter. EB was thus in the business of throwing out knowledge as much as it was in the business of adding knowledge.

Topics were confined to rectangles of text. This is of course often a helpful way of dividing up the world, but it is also essentially false. The “see also’s” and the attempts at synthetic indexes and outlines (the Propædia) helped, but they were still highly limited, and cumbersome to use.

This is why the book scanning efforts of Google Books and The Internet Archive are so important – to unlock the knowledge trapped in all those books and place it online so the entire world can benefit.

In the never-ending human quest for communication, bits have won decisively over atoms. But bits haven't completely replaced atoms for publishing quite yet; that will take a few more decades.

An Argument for the eBook

While the Internet is perfectly adequate for basic printed text juxtaposed with images and tables, it is a far cry from the beautiful, complex layout and typography of modern books. Sometimes the medium is part of the message. That's what led computer scientists to create PostScript and TeX, systems of representing the printed page in code as pure mathematics that can scale infinitely, or at least to the best possible resolution of the particular device you're viewing it on. Packaging written content into a special file format preserves these beautiful layouts so you can read the text as originally designed by the author.

It's also fair to argue that writers should be fairly compensated for their work. Clearly nobody is going to pay 5 cents per web page. But there's a long established commercial model of packaging a set of writing together into a coherent format, or "book", and selling that.

You can't always rely on the Internet being available. What if you have no Internet connectivity, or intermittent connectivity? You could periodically harvest a bunch of related web pages and package the current versions into a file. And that file can be stored and cached locally on laptops, phones, and servers all over the world. Local files have built-in, persistent offline availability.

No, the Internet will not kill the book. But it will change the book's form permanently; books are no longer pages printed with atoms, they're files printed with bits: eBooks.

The Trouble with Bits

The road from atoms to bits is not an easy one, and we're only at the beginning of this journey. eBooks are vastly more flexible than printed books, but they come with their own set of tradeoffs:

  • They always require a reading device.
  • They cannot be loaned to friends.
  • They cannot be resold to others.
  • They cannot be donated to libraries.
  • They may be encumbered with copy protection.
  • They may be in a format your reader cannot understand.
  • They may refuse to load for any reason the publisher deems necessary.
  • They may have incomplete or broken or obsolete layout.
  • They may have low-resolution bitmapped images that are inferior to print.
  • They may be a substantially worse reading experience than print except on very high resolution reading devices.

Book-error

The copy protection issue alone is deeply troubling; with eBooks, book publishers now have an unprecedented level of control over when, where, and how you can read their books. In the world of atoms, once the book is shipped out, the publisher cedes all control to the reader. Once you've bought that physical book, you can do with it whatever you will: read it, burn it, photocopy it (for personal use), share it, resell it, loan it, donate it, even throw it at passers-by as a makeshift weapon. But in the world of bits, the publisher has an iron grip over their eBook, which isn't so much sold to you as "licensed" for your use, maybe even only for specific devices like an Amazon Kindle or an Apple iPad. And they can silently remove the book from your device at their whim.

In the brave new world of eBooks, book publishers are waking up drunk with newfound power. And honestly I can't say I blame them. After centuries of publishers having virtually no control at all over the books they publish, they've now been granted near total control.

How Much Do eBooks Cost?

Consider one of my favorite books, the classic Don't Make Me Think. How much does it cost to buy, as an eBook or otherwise?

Amazon print (new)     $22.88
Amazon print (used)    $13.98
Amazon eBook           $14.16
Publisher eBook        $25.60
Apple eBook            $33.16

Except for Amazon, all the eBooks are more expensive than the print version. This … makes no sense. How can the bits in the digital version, which require no printing, no shipping, no physical storage whatsoever, be more expensive than the atoms?

What Do eBooks Look Like?

What you actually end up reading when you buy the eBook can vary wildly. Here are pages 80 and 81 of my print copy of Don't Make Me Think. I attempted to take a photograph of the book, then realized it's incredibly difficult to take a decent picture of two pages of a book for a photography noob like myself, so I manually scanned the pages in instead.

Dont-make-me-think-page-80-81-scanned-small

If you buy the eBook from the publisher, you get a PDF which appears to be based on the exact same data used to print the book. Pages 80 and 81 are nearly identical to print, with page numbers, footnotes, layout and typography completely intact. (There are some unrelated minor differences on page 81 because the print version is from the second edition.)

Dont-make-me-think-pages-80-81-small

But when you buy the eBook from Amazon, you get a proprietary eBook format which contains very little of the original formatting. Pages 80 and 81 are quite different. The footnotes are gone. The title font and font colors are lost. The layout and spacing are completely off, and to my eye the page frankly looks a little broken.

Dont-make-me-think-page-80-81-kindle-small

When you buy the book from Apple, you get yet another proprietary eBook format. For comparison, here's page 3 of Don't Make Me Think from the publisher's PDF, which as we've previously established is very nearly the same as print.

Dont-make-me-think-page-3-small

I downloaded the sample chapter of Don't Make Me Think from Apple's iBooks, and it appears to be an even worse representation than Amazon's. I have all the same criticisms of Amazon's eBook format here – page 3 has broken layout, no footnotes, missing title fonts and colors, plus now it takes four, yes, four pages to read that very same single print page.

Dont-make-me-think-page-3-ibooks-all

So eBooks Suck, Too?

With Don't Make Me Think, I intentionally chose a book that highlights the remaining gap between atoms and bits in books. I've read dozens of other eBooks on Kindle and iPad, and generally the experience is good. For books that are entirely text, with very little layout, the various eBook formats do a great job. This may very well be a majority of books in the world. All eBook formats handle text and basic fonts perfectly fine. But then, so does the Internet. If an eBook can't outperform the Internet at layout, it loses one of the strongest arguments in its favor.

Still, there's no way Amazon's or Apple's current eBook versions of Don't Make Me Think are suitable replacements for the print version. Worse, you won't even know what you'll be missing unless you download a sample and compare it with the print version, as I have. That's disappointing, because part of the joy a book brings to the words inside comes from expertly packaging those words into a whole experience. If an eBook can't capture the nuance of the layout at least as well as a hoary old PDF does, again, why bother?

We, as readers, are easily giving up as much as we're getting in the transition from books made of atoms to eBooks made of bits. To make it worthwhile, I believe publishers need to do two things:

  1. eBooks should be inexpensive. Because I can't loan them (with rare exceptions), because I can't resell them, because I can't buy a cheaper used copy, because I'm only licensed to read them at all on "supported" readers under whatever terms the publishers will allow me to, an eBook simply has less utility and value to me. Right now, eBooks are far less flexible than physical books and therefore a worse value. Yet they are far cheaper to produce and sell for everyone involved. The pricing absolutely has to reflect this. If I can get a used copy of a book for less than the eBook, no sale. If I can get a new copy of a book for less than the eBook, no sale and screw you.

  2. eBooks should be a near-perfect replica of the print book. With the advent of the iPad 3, it is finally possible for eBook readers to provide nearly the same visual fidelity as the print book. I don't want to spend money on an eBook with broken, inferior formatting and typography and layout compared to the print edition. Give me an eBook that I can potentially hand down to my children with the same confidence I could give them a print book, 30 years from now, and know that I am not totally compromising the experience.

Because I love words, I want to love eBooks. I want to buy lots and lots of eBooks. But unless the publishers are willing to treat eBooks with the same respect and care that they give to their printed books – and most importantly of all, adjust their pricing to reflect the brave new economy of bits, and not an antiquated economy of atoms – they're destined to eventually suffer the same fate as the Encyclopedia Britannica.

[advertisement] How are you showing off your awesome? Create a Stack Overflow Careers profile and show off all of your hard work from Stack Overflow, Github, and virtually every other coding site. Who knows, you might even get recruited for a great new position!
Posted by Jeff Atwood    95 Comments

Speed Hashing

April 6, 2012

Hashes are a bit like fingerprints for data.

Fingerprint-as-hash

A given hash uniquely represents a file, or any arbitrary collection of data. At least in theory. This is a 128-bit MD5 hash you're looking at above, so it can represent at most 2^128 unique items, or 340 trillion trillion trillion. In reality the usable space is substantially less; you can start seeing significant collisions once you've filled half the square root of the space, but the square root of an impossibly large number is still impossibly large.

Back in 2005, I wondered about the difference between a checksum and a hash. You can think of a checksum as a person's full name: Eubediah Q. Horsefeathers. It's a shortcut to uniqueness that's fast and simple, but easy to forge, because security isn't really the point of naming. You don't walk up to someone and demand their fingerprints to prove they are who they say they are. Names are just convenient disambiguators, a way of quickly determining who you're talking to for social reasons, not absolute proof of identity. There can certainly be multiple people in the world with the same name, and it wouldn't be too much trouble to legally change your name to match someone else's. But changing your fingerprint to match Eubediah's is another matter entirely; that should be impossible except in the movies.

Secure hashes are designed to be tamper-proof

A properly designed secure hash function changes its output radically with tiny single bit changes to the input data, even if those changes are malicious and intended to cheat the hash. Unfortunately, not all hashes were designed properly, and some, like MD5, are outright broken and should probably be reverted to checksums.
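
You can see that avalanche effect for yourself. Here's a quick sketch in TypeScript against Node's built-in crypto module; the input strings are arbitrary, and note that 'r' and 'R' conveniently differ by exactly one bit in ASCII:

import { createHash } from "crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// One flipped bit in the input scrambles the entire output.
console.log(sha256("coding horror"));
console.log(sha256("coding horroR"));

The two hex strings come out looking completely unrelated, which is exactly what makes tampering detectable. MD5's failure isn't here – it's that researchers found ways to manufacture two different inputs that nonetheless collide: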

As we will explain below, the algorithm of Wang and Yu can be used to create files of arbitrary length that have identical MD5 hashes, and that differ only in 128 bytes somewhere in the middle of the file. Several people have used this technique to create pairs of interesting files with identical MD5 hashes:

  • Magnus Daum and Stefan Lucks have created two PostScript files with identical MD5 hash, of which one is a letter of recommendation, and the other is a security clearance.
  • Eduardo Diaz has described a scheme by which two programs could be packed into two archives with identical MD5 hash. A special "extractor" program turns one archive into a "good" program and the other into an "evil" one.
  • In 2007, Marc Stevens, Arjen K. Lenstra, and Benne de Weger used an improved version of Wang and Yu's attack known as the chosen prefix collision method to produce two executable files with the same MD5 hash, but different behaviors. Unlike the old method, where the two files could only differ in a few carefully chosen bits, the chosen prefix method allows two completely arbitrary files to have the same MD5 hash, by appending a few thousand bytes at the end of each file.
  • Didier Stevens used the evilize program (below) to create two different programs with the same Authenticode digital signature. Authenticode is Microsoft's code signing mechanism, and although it uses SHA1 by default, it still supports MD5.

If you could mimic another person's fingerprint or DNA at will, you could do some seriously evil stuff. MD5 is clearly compromised, and SHA-1 is not looking too great these days.

The good news is that hashing algorithms (assuming you didn't roll your own, God forbid) were designed by professional mathematicians and cryptographers who knew what they were doing. Just pick a hash of a newer vintage than MD5 (1991) and SHA-1 (1995), and you'll be fine – at least as far as collisions and uniqueness are concerned. But keep reading.

Secure hashes are designed to be slow

Speed of a checksum calculation is important, as checksums are generally working on data as it is being transmitted. If the checksum takes too long, it can affect your transfer speeds. If the checksum incurs significant CPU overhead, that means transferring data will also slow down or overload your PC. For example, imagine the sort of checksums that are used on video standards like DisplayPort, which can peak at 17.28 Gbit/sec.

But hashes aren't designed for speed. In fact, quite the opposite: hashes, when used for security, need to be slow. The faster you can calculate the hash, the more viable it is to use brute force to mount attacks. Unfortunately, "slow" in 1990 and 2000 terms may not be enough. The hashing algorithm designers may have anticipated the predictable increases in CPU power via Moore's Law, but they almost certainly did not see the radical increases in GPU computing power coming.

How radical? Well, compare the results of CPU powered hashcat with the GPU powered oclHashcat when calculating MD5 hashes:

Radeon 7970       8213.6 M c/s
6-core AMD CPU      52.9 M c/s

The GPU on a single modern video card produces over 150 times the number of hash calculations per second compared to a modern CPU. If Moore's Law anticipates a doubling of computing power every 18 months, that's like peeking 10 years into the future. Pretty amazing stuff, isn't it?

Hashes and passwords

Let's talk about passwords, since hashing and passwords are intimately related. Unless you're storing passwords incorrectly, you always store a user's password as a salted hash, never as plain text. Right? Right? This means if your database containing all those hashes is compromised or leaked, the users are still protected – nobody can figure out what their password actually is based on the hash stored in the database. Yes, there are of course dictionary attacks that can be surprisingly effective, but we can't protect users dead-set on using "monkey1" for their password from themselves. And anyway, the real solution to users choosing crappy passwords is not to make users remember ever more complicated and longer passwords, but to do away with passwords altogether.

This has one unfortunate ramification for password hashes: very few of them were designed with such massive and commonly available GPU horsepower in mind. Here are my results on my current PC, which has two ATI Radeon 7970 cards generating nearly 16000 M c/s with MD5. I used oclHashcat-lite with the full range of a common US keyboard – that is, including uppercase, lowercase, numbers, and all possible symbols:

all 6 character password MD5s    47 seconds
all 7 character password MD5s    1 hour, 14 minutes
all 8 character password MD5s    ~465 days
all 9 character password MD5s    fuggedaboudit
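
(A quick sanity check on that first row, assuming the roughly 95 printable characters on a US keyboard and ~16 billion hashes per second: 95^6 ≈ 7.4 × 10^11 candidate passwords, and 7.4 × 10^11 ÷ 1.6 × 10^10 per second ≈ 46 seconds. The math lines up with the table.)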

The process scales nearly perfectly as you add GPUs, so you can cut the time in half by putting four video cards in one machine. It may sound crazy, but enthusiasts have been doing it since 2008. And you could cut it in half again by building another PC with four more video cards, splitting the attack space. (Keep going if you're either crazy, or working for the NSA.) Now we're down to a semi-reasonable 117 days to generate all 8 character MD5s. But perhaps this is a worst-case scenario, as a lot of passwords have no special characters. How about if we try the same thing using just uppercase, lowercase, and numbers?

all 6 character password MD5s     3 seconds
all 7 character password MD5s     4 minutes
all 8 character password MD5s     4 hours
all 9 character password MD5s     10 days
all 10 character password MD5s    ~625 days
all 11 character password MD5s    fuggedaboudit

If you're curious about the worst case scenario, a 12 character all lowercase password is attainable in about 75 days on this PC. Try it yourself; here's the script I used:

rem oclHashcat-lite brute force run: MD5 (--hash-type 0), all 6 character
rem passwords drawn from lowercase, digits, symbols, and uppercase (?l?d?s?u).
rem The 32 a's are a dummy 32-hex-digit target hash; substitute the MD5 you're testing.
set BIN=oclHashcat-lite64
set OPTS=--gpu-accel 200 --gpu-watchdog 0 --outfile-watch 0 --restore-timer 0 --pw-min 6 --pw-max 6 --custom-charset1 ?l?d?s?u
 
%BIN% %OPTS% --hash-type 0 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa ?1?1?1?1?1?1?1?1?1?1?1?1?1

Just modify the pw-min, pw-max and the custom-charset as appropriate. Or, if you're too lazy to try it yourself, browse through the existing oclHashcat benchmarks others have run. This will also give you some idea how computationally expensive various known hashes are on GPUs relative to each other, such as:

MD5         23070.7 M/s
SHA-1        7973.8 M/s
SHA-256      3110.2 M/s
SHA-512       267.1 M/s
NTLM        44035.3 M/s
DES           185.1 M/s
WPA/WPA2      348.0 k/s

What about rainbow tables?

Rainbow tables are huge pre-computed lists of hashes, trading massive amounts of disk space (and potentially memory) for raw calculation speed. They are now utterly and completely obsolete. Nobody who knows what they're doing would bother. They'd be wasting their time. I'll let Coda Hale explain:

Rainbow tables, despite their recent popularity as a subject of blog posts, have not aged gracefully. Implementations of password crackers can leverage the massive amount of parallelism available in GPUs, peaking at billions of candidate passwords a second. You can literally test all lowercase, alphabetic passwords which are ≤7 characters in less than 2 seconds. And you can now rent the hardware which makes this possible to the tune of less than $3/hour. For about $300/hour, you could crack around 500,000,000,000 candidate passwords a second.

Given this massive shift in the economics of cryptographic attacks, it simply doesn’t make sense for anyone to waste terabytes of disk space in the hope that their victim didn’t use a salt. It’s a lot easier to just crack the passwords. Even a “good” hashing scheme of SHA256(salt + password) is still completely vulnerable to these cheap and effective attacks.

But when I store passwords I use salts so none of this applies to me!

Hey, awesome, you're smart enough to not just use a hash, but also to salt the hash. Congratulations.

$saltedpassword = sha1(SALT . $password);

I know what you're thinking. "I can hide the salt, so the attacker won't know it!" You can certainly try. You could put the salt somewhere else, like in a different database, or put it in a configuration file, or in some hypothetically secure hardware that has additional layers of protection. In the event that an attacker obtains your database with the password hashes, but somehow has no access to or knowledge of the salt, hiding it could theoretically work.

This will provide the illusion of security more than any actual security. Since you need both the salt and the choice of hash algorithm to generate the hash, and to check the hash, it's unlikely an attacker would have one but not the other. If you've been compromised to the point that an attacker has your password database, it's reasonable to assume they either have or can get your secret, hidden salt.

The first rule of security is to always assume and plan for the worst. Should you use a salt, ideally a random salt for each user? Sure, it's definitely a good practice, and at the very least it lets you disambiguate two users who have the same password. But these days, salts alone can no longer save you from a person willing to spend a few thousand dollars on video card hardware, and if you think they can, you're in trouble.

I'm too busy to read all this.

If you are a user:

Make sure all your passwords are 12 characters or more, ideally a lot more. I recommend adopting pass phrases, which are not only a lot easier to remember than passwords (if not to type) but also ridiculously secure against brute forcing purely due to their length.

If you are a developer:

Use bcrypt or PBKDF2 exclusively to hash anything you need to be secure. These new hashes were specifically designed to be difficult to implement on GPUs. Do not use any other form of hash. Almost every other popular hashing scheme is vulnerable to brute forcing by arrays of commodity GPUs, which only get faster and more parallel and easier to program for every year.
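
A minimal sketch of what that looks like in practice – PBKDF2 with a random per-user salt, in TypeScript against Node's built-in crypto module (bcrypt would need a third-party package). The iteration count here is purely illustrative; tune it as high as your login latency budget allows:

import { pbkdf2Sync, randomBytes, timingSafeEqual } from "crypto";

const ITERATIONS = 100000; // illustrative; raise it as hardware gets faster

function hashPassword(password: string): string {
  const salt = randomBytes(16); // unique random salt per user
  const hash = pbkdf2Sync(password, salt, ITERATIONS, 32, "sha256");
  return `${ITERATIONS}:${salt.toString("hex")}:${hash.toString("hex")}`;
}

function verifyPassword(password: string, stored: string): boolean {
  const [iterations, saltHex, hashHex] = stored.split(":");
  const hash = pbkdf2Sync(password, Buffer.from(saltHex, "hex"), Number(iterations), 32, "sha256");
  return timingSafeEqual(hash, Buffer.from(hashHex, "hex")); // constant-time compare
}

// Store only the returned record; never store the password itself.
const record = hashPassword("correct horse battery staple");
console.log(verifyPassword("correct horse battery staple", record)); // true

Storing the iteration count alongside the salt and hash means you can raise it later for new accounts without breaking existing ones.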

[advertisement] Hiring developers? Post your open positions with Stack Overflow Careers and reach over 20MM awesome devs already on Stack Overflow. Create your satisfaction-guaranteed job listing today!
Posted by Jeff Atwood    63 Comments

Preserving The Internet... and Everything Else

April 2, 2012

In Preserving Our Digital Pre-History I nominated Jason Scott to be our generation's digital historian in residence. It looks like a few people must have agreed with me, because in March 2011, he officially became an archivist at the Internet Archive.

The-internet-archive

Jason recently invited me to visit the Internet Archive office in nearby San Francisco. The building alone is amazing; when you imagine the place where they store the entire freaking Internet, this enormous former Christian Science church seems … well, about right.

It's got a built-in evangelical aura of mission, with new and old computer equipment strewn like religious totems throughout.

Internet-archive-and-jason-scott

Doesn't it look a bit like the place where we worship servers, with Jason Scott presiding over the invisible, omnipresent online flock? It's all that and so much more. Maybe the religious context is appropriate, because I always thought the Internet Archive's mission – to create a permanent copy of every Internet page ever created, as it existed at the time – was audacious bordering on impossible. You'd need to be a true believer to even consider the possibility.

The Internet Archive is about the only ally we have in the fight against pernicious and pervasive linkrot all over the Internet. When I go back and review old Coding Horror blog entries I wrote in 2007, it's astonishing just how many of the links in those posts are now, after five years, gone. I've lost count of all the times I've used the Wayback Machine to retrieve historical Internet pages I once linked to that are now permanently offline – pages that would have otherwise been lost forever.

The Internet Archive is a service so essential that its founding is bound to be looked back on with the fondness and respect that people now have for the public libraries seeded by Andrew Carnegie a century ago … Digitized information, especially on the Internet, has such rapid turnover these days that total loss is the norm. Civilization is developing severe amnesia as a result; indeed it may have become too amnesiac already to notice the problem properly. The Internet Archive is the beginning of a cure – the beginning of complete, detailed, accessible, searchable memory for society, and not just scholars this time, but everyone.

Stewart Brand

Without the Internet Archive, the Internet would have no memory. As the world's foremost expert on backups I cannot emphasize enough how significant the Internet Archive is to the world, to any average citizen of the Internet who needs to source an old hyperlink. Yes, maybe it is just the world's largest and most open hard drive, but nobody else is doing this important work that I know of.

Let's Archive Atoms, Too

While what I wrote above is in no way untrue, it is only a small part of the Internet Archive's mission today. Where I always thought of the Internet Archive as, well, an archive of the bits on the Internet, they have long since broadened the scope of their efforts to include stuff made of filthy, dirty, nasty atoms. Stuff that was never on the Internet in the first place.

The Internet Archive isn't merely archiving the Internet any more, they are attempting to archive everything.

All of this, in addition to boring mundane stuff like taking snapshots of the entire Internet every so often. That's going to take, uh … a lot of hard drives. I snapped a picture of a giant pile of 3 TB drives waiting to be installed in one of the storage rooms.

Lots-of-hard-drives

The Internet Archive is a big organization now, with 30 employees in the main San Francisco office you're seeing above, and 200 staff all over the world. With a mission of such overwhelming scope and scale, they're going to need all the help they can get.

The Internet Archive Needs You

The Internet Archive is a non-profit organization, so you could certainly donate money. If your company does charitable donations and cares at all about the Internet, or free online access to human knowledge, I'd strongly encourage them to donate to the Internet Archive as well. I made sure that Stack Exchange donated every year.

But more than money, what the Internet Archive needs these days is … your stuff. I'll let Jason explain exactly what he's looking for:

I'm trying to acquire as much in the way of obscure video, obscure magazines, unusual pamphlets and printed items of a computer nature or even of things like sci-fi, zines – anything that wouldn't normally find itself inside most libraries. Hence my computer magazines collection – tens of thousands of issues in there. I'd love to get my hands on more.

Also as mentioned, I love, love, love shareware CDs. Those are the most bang for the buck with regards to data and history that I want to get my hands on.

Being the obsessive, conscientious geeks that I know you are, I bet you have a collection of geeky stuff exactly like that somewhere in your home. If so, the best way you can help is to send it in as a contribution! Email jscott@archive.org about what you have, and if you're worried about rejection, don't be:

There's seriously nothing we don't want. I don't question. I take it in, I put it in items. I am voracious. Omnivorous. I don't say no.

The Internet Archive has an impossible mission on an immense scale. It is an unprecedented kind of open source archiving, not driven by Google or Microsoft or some other commercial entity with ulterior motives, but a non-profit organization motivated by nothing more than the obvious common good of building a massive digital Library of Alexandria to preserve our history for future generations. Let's do our part to help support the important work of the Internet Archive in whatever way we can.

[advertisement] Stack Overflow Careers matches the best developers (you!) with the best employers. You can search our job listings or create a profile and even let employers find you.
Posted by Jeff Atwood    22 Comments

Visualizing Code to Fail Faster

March 29, 2012

In What You Can't See You Can't Get I mentioned in passing how frustrated I was that the state of the art in code editors and IDEs has advanced so little since 2003. A number of commenters pointed out the amazing Bret Victor talk Inventing on Principle. I hadn't seen it, but thanks for mentioning it, because I definitely should have. Maybe you haven't seen it either?

It's a bit long at 54 minutes, but worth viewing in its entirety. What Bret shows here is indeed exactly the sort of thing we should be doing, but aren't.

In some ways we've actually regressed from my ancient Visual Basic 6.0 days, when you'd get dynamically notified about errors as you typed, not just when you compiled or ran unit tests. The idea that you should be able to type (or gesture, or speak) and immediately see the result of that change is simple, but extremely powerful. It's speed of iteration in the small. That's essentially the basis for my argument that showing markup and rendered output side-by-side, and dynamically updating them as you type, is vastly superior for learning and experimentation compared to any attempt at WYSIWYG.

But Bret goes further than that – why not show the effects of predicted changes, and change over time? Time is the missing element in a static display of code and rendered output; how do we show that?

Braid-jump-code

Again, watch the video because it's easier to see in action than it is to explain. But maybe you'd like to play with it yourself? That's sort of the point, isn't it? As I wrote in 2007:

I yearn for the day when web pages are regularly illustrated with the kind of beautiful, dynamic visualizations that Ben Fry creates.

That day, I'm happy to report, seems to have arrived. Bret's article, Up and Down the Ladder of Abstraction is extremely interactive in plain old boring HTML 5.

Interactive-ladder-abstraction

Yes, it's artsy, and yes, these are mostly toy projects, but this isn't entirely abstract art house visualization nonsense. Designing tools that let you make rapid changes and see the effects of those changes as soon as possible can be transformative.

Paul realized that what we needed to be solved was not, in fact, human powered flight. That was a red herring. The problem was the process itself, and along with it the blind pursuit of a goal without a deeper understanding of how to tackle deeply difficult challenges. He came up with a new problem that he set out to solve: how can you build a plane that could be rebuilt in hours, not months? And he did. He built a plane with Mylar, aluminum tubing, and wire.

The first airplane didn't work. It was too flimsy. But, because the problem he set out to solve was creating a plane he could fix in hours, he was able to quickly iterate. Sometimes he would fly three or four different planes in a single day. The rebuild, retest, relearn cycle went from months and years to hours and days.

Eighteen years had passed since Henry Kremer opened his wallet for his vision. Nobody could turn that vision into an airplane. Paul MacCready got involved and changed the understanding of the problem to be solved. Half a year later, MacCready's Gossamer Condor flew 2,172 meters to win the prize. A bit over a year after that, the Gossamer Albatross flew across the channel.

Don't get me wrong, we're failing plenty fast with our existing tools. But I can't shake the feeling that we could fail even faster if we optimized our IDEs and code editors to better visualize the effects of our changes in real time as we make them.

[advertisement] How are you showing off your awesome? Create a Stack Overflow Careers profile and show off all of your hard work from Stack Overflow, Github, and virtually every other coding site. Who knows, you might even get recruited for a great new position!
Posted by Jeff Atwood    27 Comments

The End of Pagination

March 27, 2012

What do you do when you have a lot of things to display to the user, far more than can possibly fit on the screen? Paginate, naturally.

Pagination-examples

There are plenty of other real world examples in this 2007 article, but I wouldn't bother. If you've seen one pagination scheme, you've seen them all. The state of the art in pagination hasn't exactly changed much – or at all, really – in the last 5 years.

I can understand paginating when you have 10, 50, 100, maybe even a few hundred items. But once you have thousands of items to paginate, who the heck is visiting page 964 of 3810? What's the point of paginating so much information when there's a hard practical limit on how many items a human being can view and process in any reasonable amount of time?

Once you have thousands of items, you don't have a pagination problem. You have a search and filtering problem. Why are we presenting hundreds or thousands of items to the user? What does that achieve? In a perfect world, every search would result in a page with a single item: exactly the thing you were looking for.

U2-google

But perhaps you don't know exactly what you're looking for: maybe you want a variety of viewpoints and resources, or to compare a number of similar items. Fair enough. I have a difficult time imagining any scenario where presenting a hundred or so items wouldn't meet that goal. Even so, the items would naturally be presented in some logical order so the most suitable items are near the top.

Once we've chosen a suitable order and a subset of relevant items … do we really need pagination at all? What if we did some kind of endless pagination scheme, where we loaded more items into the view dynamically as the user reaches the bottom? Like so:

It isn't just oddball disemvowelled companies, either. Twitter's timeline and Google's image search use a similar endless pagination approach. Either the page loads more items automatically when you scroll down to the bottom, or there's an explicit "show more results" button.

Pagination is also friction. Ever been on a forum where you wished like hell the other people responding to the thread had read all four pages of it before typing their response? Well, maybe some of them would have if the next page buttons weren't so impossibly small, or better yet, not there at all because pagination was automatic and seamless. We should be actively removing friction where we want users to do more of something.

I'm not necessarily proposing that all traditional pagination be replaced with endless pagination. But we, as software developers, should avoid mindlessly generating a list of thousands upon thousands of possible items and paginating it as a lazy one-size-fits-all solution. This puts all the burden on the user to make sense of the items. Remember, we invented computers to make the user's life easier, not more difficult.

Even when you do paginate, there's a balance to be struck, as Google's research tells us:

User testing has taught us that searchers much prefer the view-all, single-page version of content over a component page containing only a portion of the same information with arbitrary page breaks.

Interestingly, the cases when users didn’t prefer the view-all page were correlated with high latency (e.g., when the view-all page took a while to load, say, because it contained many images). This makes sense because we know users are less satisfied with slow results. So while a view-all page is commonly desired, as a webmaster it’s important to balance this preference with the page’s load time and overall user experience.

Traditional pagination is not particularly user friendly, but endless pagination isn't without its own faults and pitfalls, either:

  • The scroll bar, the user's moral compass of "how much more is there?" doesn't work in endless pagination because it is effectively infinite. You'll need an alternate method of providing that crucial feedback, perhaps as a simple percent loaded text docked at the bottom of the page.
  • Endless pagination should not break deep linking. Even without the concept of a "page", users should be able to clearly and obviously link to any specific item in the list.
  • Clicking the browser forward or back button should preserve the user's position in the endless scrolling stream, perhaps using pushState (see the sketch after this list).
  • Pagination may be a bad user experience, but it's essential for web spiders. Don't neglect to accommodate web search engines with a traditional paging scheme, too, or perhaps a Sitemap.
  • Provide visible feedback when you're dynamically loading new items in the list, so the user can tell that new items are coming, and their browser isn't hung – and that they haven't reached the bottom yet.
  • Remember that the user won't be able to reach the footer (or the header) any more, because items keep appearing as they scroll down in the river of endless content. So either move to static headers and footers, or perhaps use the explicit "load more" button instead of loading new content automatically.
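
To make those guidelines concrete, here's a rough browser-side sketch in TypeScript. The /items?after= endpoint, the JSON shape, and the element ids are all hypothetical; an IntersectionObserver fires when a visible "loading" row nears the viewport:

const list = document.querySelector<HTMLUListElement>("#items")!;
const sentinel = document.querySelector<HTMLDivElement>("#loading")!; // visible progress row
let lastId = "";
let done = false;

async function loadMore(): Promise<void> {
  const resp = await fetch(`/items?after=${encodeURIComponent(lastId)}`);
  const items: { id: string; html: string }[] = await resp.json();
  if (items.length === 0) { done = true; sentinel.textContent = "No more items."; return; }
  for (const item of items) {
    const li = document.createElement("li");
    li.id = `item-${item.id}`; // stable ids keep deep links like #item-42 working
    li.innerHTML = item.html;
    list.appendChild(li);
    lastId = item.id;
  }
  // Record the position so the back button can restore the user's place.
  history.replaceState({ lastId }, "", `#item-${lastId}`);
}

// Load the next batch when the sentinel scrolls within 200px of the viewport.
new IntersectionObserver(entries => {
  if (!done && entries.some(e => e.isIntersecting)) void loadMore();
}, { rootMargin: "200px" }).observe(sentinel);

You'd still want a crawlable paged view or a Sitemap alongside this, since search spiders won't scroll.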

For further reading, there's some excellent Q&A on the topic of pagination at ux.stackexchange.

Above all else, you should strive to make pagination irrelevant because the user never has to look at more than a few items to find what they need. That's why I suspect Google hasn't done much with this technique in their core search result pages; if they aren't providing great results on page 1, it doesn't really matter what kind of pagination they use because they're not going to be in business much longer. Take that lesson to heart: you should be worried most of all about presenting a relevant list of items to the user in a sensible order. Once you've got that licked, then and only then should you think about your pagination scheme.

[advertisement] What's your next career move? Stack Overflow Careers has the best job listings from great companies, whether you're looking for opportunities at a startup or Fortune 500. You can search our job listings or create a profile and let employers find you.
Posted by Jeff Atwood    69 Comments

What You Can't See You Can't Get

March 23, 2012

I suppose What You See Is What You Get has its place, but as an OCD-addled programmer, I have a problem with WYSIWYG as a one size fits all solution. Whether it's invisible white space, or invisible formatting tags, it's been my experience that forcing people to work with invisible things they cannot directly control … inevitably backfires. A lot.

I have a distinctly Ghostbusters attitude to this problem.

Ghostbusters-logo

I need to see these invisible things, so that I can zap them with my proton pack. I mean, er, control them. And more importantly, understand them; perhaps even master them.

I recently had the great privilege of meeting Ted Nelson, who gave me an in-person demo of his ZigZag project and his perpetually in-progress since 1960 Xanadu project, currently known as Xanadu Space. But one thing he mentioned as he gave the demo particularly intrigued me. Being Ted Nelson, of course he went much further than my natural aversion to invisible, hidden markup in content – he insisted that markup and content should never be in the same document. Far more radical.

I want to discuss what I consider one of the worst mistakes of the current software world, embedded markup; which is, regrettably, the heart of such current standards as SGML and HTML. (There are many other embedded markup systems; an interesting one is RTF. But I will concentrate on the SGML-HTML theology because of its claims and fervor.)

There is no one reason this approach is wrong; I believe it is wrong in almost every respect.

I propose a three-layer model:

  1. A content layer to facilitate editing, content linking, and transclusion management.
  2. A structure layer, declarable separately. Users should be able to specify entities, connections and co-presence logic, defined independently of appearance or size or contents; as well as overlay correspondence, links, transclusions, and "hoses" for movable content.
  3. A special-effects-and-primping layer should allow the declaration of ever-so-many fonts, format blocks, fanfares, and whizbangs, and their assignment to what's in the content and structure layers.

It's an interesting, albeit extremely hand-wavy and complex, alternative. I'm unclear how you would keep the structure layer in sync with the content layer if someone is editing the content. I don't even know if there are any real world examples of this three layer approach in action. (And as usual, feel free to correct me in the comments if I've missed anything!)

Instead, what we do have are existing, traditional methods of intermixing content and markup à la HTML or TeX.

PDF vs. TeX

When editing, there are two possible interfaces:

  1. WYSIWYG where the markup layer is magically hidden so, at least in theory, the user doesn't ever have to know about markup and can focus entirely on the content. It is an illusion, but it is simple enough when it's working. The downside is that the abstraction – this idea that the markup is truly "invisible" – is rarely achieved in practice and often breaks down for anything except the most basic of documents. But it can be good enough in a lot of circumstances.

  2. Two windows where the markup is fully visible in one window, and shown as a live rendered preview in the other window, updated as you type, either side-by-side or top-and-bottom. Users have a dynamic sandbox where they can experiment and learn how markup and content interact in the real world, rather than having it all swept under the rug. Ultimately, this results in less confusion for intermediate and advanced users. That's why I'm particularly fond of this approach, and it is what we use on Stack Exchange. The downside is that it's a bit more complex, depending on whether or not you use humane markup, and it certainly takes a bit more screen space and thinking to process what's going on. (A bare-bones sketch of this approach follows this list.)
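
Wiring up that second approach is simple. A bare-bones sketch in TypeScript – the element ids are hypothetical, and render() is a toy stand-in for a real markup engine like Markdown:

const input = document.querySelector<HTMLTextAreaElement>("#markup")!;
const preview = document.querySelector<HTMLDivElement>("#preview")!;

function render(markup: string): string {
  // Toy renderer: escape HTML first, then handle **bold** and *italic* only.
  return markup
    .replace(/&/g, "&amp;").replace(/</g, "&lt;")
    .replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>")
    .replace(/\*(.+?)\*/g, "<em>$1</em>");
}

// Re-render the preview pane on every keystroke.
input.addEventListener("input", () => {
  preview.innerHTML = render(input.value);
});

All the interesting work lives in a better render() and in keeping the two panes scroll-synced; the basic wiring is a dozen lines.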

What I didn't realize is that there's actually a third editing option: keep the markup visible, and switch rapidly back and forth between the markup and rendered view with a single keystroke. That's what the Gliimpse project reveals:

Please watch the video. The nearly instantaneous and smooth transition that Gliimpse demonstrates between markup and preview has to be seen to be appreciated. The effect is a bit like Exposé on the Mac, or Switcher on PC. I'm not sure how I feel about this, mainly because I don't know of any existing IDEs that even attempt to do anything remotely like it.

But I'd sure like to try it. As a software developer, it's incredibly frustrating to me that we have generational improvements in games like Skyrim and Battlefield 3 that render vastly detailed, dynamic worlds at 60 frames per second, yet our source code editors are advancing only in tiny incremental steps, year after year.

Posted by Jeff Atwood    36 Comments

Welcome to the Post PC Era

March 19, 2012

What was Microsoft's original mission?

In 1975, Gates and Allen form a partnership called Microsoft. Like most startups, Microsoft begins small, but has a huge vision – a computer on every desktop and in every home.

The existential crisis facing Microsoft is that they achieved their mission years ago, at least as far as the developed world is concerned. When was the last time you saw a desktop or a home without a computer? 2001? 2005? We're long since past the point where Microsoft's original BHAG was met, and even exceeded. PCs are absolutely ubiquitous. When you wake up one day to discover that you've completely conquered the world … what comes next?

Apparently, the Post PC era.

Microsoft never seemed to recover from the shock of achieving their original 1975 goal. Or perhaps they thought that they hadn't quite achieved it, that there would always be some new frontier for PCs to conquer. But Steve Jobs certainly saw the Post PC era looming as far back as 1996:

The desktop computer industry is dead. Innovation has virtually ceased. Microsoft dominates with very little innovation. That's over. Apple lost. The desktop market has entered the dark ages, and it's going to be in the dark ages for the next 10 years, or certainly for the rest of this decade.

If I were running Apple, I would milk the Macintosh for all it's worth – and get busy on the next great thing. The PC wars are over. Done. Microsoft won a long time ago.

What's more, Jobs did something about it. Apple is arguably the biggest (and in terms of financials, now literally the biggest) enemy of general purpose computing with the iPhone and iPad. These days, their own general purpose Mac operating system, OS X, largely plays second fiddle to the iOS juggernaut powering the iPhone and iPad.

Here's why:

Apple-cumulative-sales

The slope of this graph is the whole story. The complicated general purpose computers are at the bottom, and the simpler specialized computers are at the top.

I'm incredibly conflicted, because as much as I love the do-anything computer …

  • I'm not sure that many people in the world truly need a general purpose computer that can do anything and install any kind of software. Simply meeting the core needs of browsing the web and email and maybe a few other basic things covers a lot of people.
  • I believe the kitchen-sink-itis baked into the general purpose computing foundations of PCs, Macs, and Unix makes them fundamentally incompatible with our brave new Post PC world. Updates. Toolbars. Service Packs. Settings. Anti-virus. Filesystems. Control panels. All the stuff you hate when your Mom calls you for tech support? It's deeply embedded in the culture and design of every single general purpose computer. Doing potentially "anything" comes at a steep cost in complexity.
  • Very, very small PCs – the kind you could fit in your pocket – are starting to have the same amount of computing grunt as a high end desktop PC of, say, 5 years ago. And that was plenty, even back then, for a relatively inefficient general purpose operating system.

But the primary wake up call, at least for me, is that the new iPad finally delivered an innovation that general purpose computing has been waiting on for thirty years: a truly high resolution display at a reasonable size and price. In 2007 I asked where all the high resolution displays were. Turns out, they're only on phones and tablets.

iPad 2 display vs iPad 3 display

That's why I didn't just buy the iPad 3 (sorry, The New iPad). I bought two of them. And I reserve the right to buy more!

iPad 3 reviews that complain "all they did was improve the display" are clueless bordering on stupidity. Tablets are pretty much by definition all display; nothing is more fundamental to the tablet experience than the quality of the display. These are the first iPads I've ever owned (and I'd argue, the first worth owning), and the display is as sublime as I always hoped it would be. The resolution and clarity are astounding, a joy to read on, and give me hope that we might one day achieve near print resolution in computing. The new iPad screen is everything I've wanted on my desktops and laptops for the last 5 years, but could never get.

Don't take my word for it. Consider what Bill Hill, screen reading pioneer and inventor of ClearType, has to say about it:

The 3rd Generation iPad has a display resolution of 264ppi. And still retains a ten-hour battery life (9 hours with wireless on). Make no mistake. That much resolution is stunning. To see it on a mainstream device like the iPad - rather than a $13,000 exotic monitor - is truly amazing, and something I've been waiting more than a decade to see.

It will set a bar for future resolution that every other manufacturer of devices and PCs will have to jump.
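
As a quick sanity check on that 264ppi figure: pixel density follows directly from a panel's resolution and diagonal size. Here's a minimal sketch in Python, using the published 9.7-inch panel specs for both iPad generations:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal pixel count divided by diagonal length."""
    return math.hypot(width_px, height_px) / diagonal_inches

# Published specs for the 9.7" panels.
print(round(ppi(1024, 768, 9.7)))   # iPad 2 -> 132
print(round(ppi(2048, 1536, 9.7)))  # iPad 3 -> 264
```

Doubling the linear resolution quadruples the total pixel count; that's the entire jump the comparison photo above is trying to convey.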

And the display calibration experts at DisplayMate have the measurements and metrics to back these claims up, too:

… the new iPad’s picture quality, color accuracy, and gray scale are not only much better than any other Tablet or Smartphone, it’s also much better than most HDTVs, laptops, and monitors. In fact with some minor calibration tweaks the new iPad would qualify as a studio reference monitor.

Granted, this is happening on tiny 4" and 10" screens first due to sheer economics. It will take time for it to trickle up. I shudder to think what a 24 or 27 inch display using the same technology as the current iPad would cost right now. But until the iPhone and iPad, near as I can tell, nobody else was even trying to improve resolution on computer displays – even though all the existing HCI research tells us that higher resolution displays are a deep fundamental improvement in computing.

At the point where these simple, fixed function Post-PC era computing devices are not just "enough" computer for most folks, but also fundamentally innovating in computing as a whole … well, all I can say is bring on the post-PC era.


Posted by Jeff Atwood    99 Comments

Rubber Duck Problem Solving

March 13, 2012

At Stack Exchange, we insist that people who ask questions put some effort into their question, and we're kind of jerks about it. That is, when you set out to ask a question, you should …

  • Describe what's happening in sufficient detail that we can follow along. Provide the necessary background for us to understand what's going on, even if we aren't experts in your particular area.
  • Tell us why you need to know the answer. What led you here? Is it idle curiosity or somehow blocking you on a project? We don't require your whole life story, just give us some basic context for the problem.
  • Share any research you did towards solving your problem, and what you found, if anything. And if you didn't do any research – should you even be asking?
  • Ultimately, this is about fairness: if you're going to ask us to spend our valuable time helping you, it's only fair that you put a reasonable amount of your valuable time into crafting a decent question. Help us help you!

We have a great How to Ask page that explains all of this, which is linked generously throughout the network. (And on Stack Overflow, due to massive question volume, we actually force new users to click through that page before asking their first question. You can see this yourself by clicking on Ask Question in incognito or anonymous browser mode.)

What we're trying to prevent, most of all, is the unanswerable drive-by question. Those help nobody, and left unchecked they can ruin a Q&A site, turning it into a virtual ghost town. On Stack Exchange, questions that are so devoid of information and context that they can't reasonably be answered will be actively closed, and if they aren't improved, eventually deleted.

As I said, we're kinda jerks about this rule. But for good reason: we're not-so-subtly trying to help you help yourself, by teaching you Rubber Duck problem solving. And boy does it ever work. I've gotten tons of feedback over the years about how people, in the process of writing up their thorough, detailed question for Stack Overflow or another Stack Exchange site, figured out the answer to their own problem.

Rubber-duckies

It's quite common. See for yourself:

How can I thank the community when I solve my own problems?

I've only posted one question so far, and almost posted another. In both cases, I answered my own questions at least partially while writing it out. I credit the community and the process itself for making me think about the answer. There's nothing explicit in what I'm writing that states quite obviously the answer I needed, but something about writing it down makes me think along extra lines of thought.

Why is it that properly formulating your question often yields you your answer?

I don't know how many times this has happened:

  • I have a problem
  • I decide to bring it to stack overflow
  • I awkwardly write down my question
  • I realize that the question doesn't make any sense
  • I take 15 minutes to rethink how to ask my question
  • I realize that I'm attacking the problem from a wrong direction entirely.
  • I start from scratch and find my solution quickly.

Does this happen to you? Sometimes asking the right question seems like half the problem.

Beginning to ask a question actually helps me debug my problem myself

Beginning to ask a question actually helps me debug my problem myself, especially while trying to formulate a coherent and detailed enough question body in order to get decent answers. Has this happened to anybody else before?

It's not a new concept, and every community seems to figure it out on their own given enough time, but "Ask the Duck" is a very powerful problem solving technique.

Bob pointed into a corner of the office. "Over there," he said, "is a duck. I want you to ask that duck your question."

I looked at the duck. It was, in fact, stuffed, and very dead. Even if it had not been dead, it probably would not have been a good source of design information. I looked at Bob. Bob was dead serious. He was also my superior, and I wanted to keep my job.

I awkwardly went to stand next to the duck and bent my head, as if in prayer, to commune with this duck. "What," Bob demanded, "are you doing?"

"I'm asking my question of the duck," I said.

One of Bob's superintendents was in his office. He was grinning like a bastard around his toothpick. "Andy," he said, "I don't want you to pray to the duck. I want you to ask the duck your question."

I licked my lips. "Out loud?" I said.

"Out loud," Bob said firmly.

I cleared my throat. "Duck," I began.

"Its name is Bob Junior," Bob's superintendant supplied. I shot him a dirty look.

"Duck," I continued, "I want to know, when you use a clevis hanger, what keeps the sprinkler pipe from jumping out of the clevis when the head discharges, causing the pipe to..."

In the middle of asking the duck my question, the answer hit me. The clevis hanger is suspended from the structure above by a length of all-thread rod. If the pipe-fitter cuts the all-thread rod such that it butts up against the top of the pipe, it essentially will hold the pipe in the hanger and keep it from bucking.

I turned to look at Bob. Bob was nodding. "You know, don't you," he said.

"You run the all-thread rod to the top of the pipe," I said.

"That's right," said Bob. "Next time you have a question, I want you to come in here and ask the duck, not me. Ask it out loud. If you still don't know the answer, then you can ask me."

"Okay," I said, and got back to work.

I love this particular story because it makes it crystal clear how the critical part of rubber duck problem solving is to totally commit to asking a thorough, detailed question of this imaginary person or inanimate object. Yes, even if you end up throwing the question away because you eventually realize that you made some dumb mistake. The effort of walking an imaginary someone through your problem, step by step and in some detail, is what will often lead you to your answer. But if you aren't willing to put the effort into fully explaining the problem and how you've attacked it, you can't reap the benefits of thinking deeply about your own problem before you ask others to.

If you don't have a coding buddy (but you totally should), you can leverage the Rubber Duck problem solving technique to figure out problems all by yourself, or with the benefit of the greater Internet community. Even if you don't get the answer you wanted, forcing yourself to fully explain your problem – ideally in writing – will frequently lead to new insights and discoveries.

Posted by Jeff Atwood    40 Comments

How to Hire a Programmer

March 5, 2012

There's no magic bullet for hiring programmers. But I can share advice on a few techniques that I've seen work, ones I've written about here and personally tried out over the years.

1. First, pass a few simple "Hello World" online tests.

I know it sounds crazy, but some people who call themselves programmers can barely program. To this day, I still get regular pings from people who tell me they had candidates fail the most basic programming test imaginable.

That's why extremely simple programming tests are step one of any sane interview process. These tests should happen online, and the goal is not to prove that the candidate is some kind of coding genius, but that they know what the heck programming is. Yes, it's sad and kind of depressing that this is even necessary, but if you don't perform this sanity check, trust me – you'll be sorry.
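
For reference, the archetype of this kind of sanity-check question is FizzBuzz. A minimal solution (sketched here in Python; any language the candidate prefers is fine) is the entire bar you're asking them to clear:

```python
# FizzBuzz: the canonical "can you program at all?" screening test.
for n in range(1, 101):
    if n % 15 == 0:        # divisible by both 3 and 5
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```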

Some services that do online code screening (I am sure there are more, but these are the ones I know about) are Interview Zen and codility.

2. Ask to see their portfolio.

Any programmer worth their salt should have a portfolio of the things they've worked on. It doesn't have to be fancy. I'm just looking for a basic breadcrumb trail of your awesomeness that you've left on the Internet to help others. Show me a Stack Overflow profile where I can see what kind of communicator and problem solver you are. Link me to an open-source code repository of your stuff. Got a professional blog? A tumblr? A twitter? Some other word I've never heard of? Excellent, let's have a look. Share applications you've designed, or websites you worked on, and describe what parts were yours.

Just seeing what kind of work people have done, and what sort of online artifacts they've created, is tremendously helpful in getting a sense of what people do and what they're good (or bad) at.

3. Hire for cultural fit.

Like the folks at GitHub, I find that cultural fit is often a stronger predictor of success than mad programming chops.

We talk about [philosophy] during the hiring process, which we take very seriously. We want any potential GitHubber to know what they’re getting into and ensure it’s a good fit. Part of that is having dinner and talking about stuff like the culture, philosophy, mistakes we’ve made, plans, whatever.

Early on we made a few hires for their skills with little regard to how they’d fit into the culture of the company or if they understood the philosophy. Naturally, those hires didn’t work out. So while we care about the skills of a potential employee, whether or not they “get” us is a major part too.

I realize that not every business has a community around what they do, but if you do have a community you should try like hell to hire from it whenever possible. These are the folks who were naturally drawn to what you do, who were pulled into the gravitational well of your company completely of their own accord. The odds of these candidates being a good cultural fit are abnormally high. That's what you want!

Did a few of your users build an amazing mod for your game? Did they find an obscure security vulnerability and try to tell you about it? Hire these people immediately!

4. Do a detailed, structured phone screen.

Once you've worked through the above, it's time to give the candidate a call. Bear in mind that the phone screen is not for chatting, it's for screening. The call should be technical and structured, so both of you can get out immediately if it clearly isn't a fit. Getting the Interview Phone Screen Right covers the basics, but in summary (a rough sketch of sample answers follows the list):

  1. A bit of on-the-fly coding. "Find the largest int value in an int array."
  2. Some basic design. "Design a representation to model HTML."
  3. Scripting and regular expressions. "Give me a list of the text files in this directory that contain phone numbers in a specific format."
  4. Data structures. "When would you use a hashtable versus an array?"
  5. Bits and bytes. "Why do programmers think asking if Oct 31 and Dec 25 are the same day is funny?"
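
To calibrate the difficulty, here's roughly the shape of answer you'd hope to hear for items 1, 3, and 5, sketched in Python (the specific phone number format in item 3 is my own invention for illustration; the question doesn't prescribe one):

```python
import re
from pathlib import Path

# 1. On-the-fly coding: largest value in an int array, via linear scan.
def largest(values):
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

# 3. Scripting and regular expressions: text files in a directory that
#    contain phone numbers in one specific format (here, 555-123-4567).
PHONE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")

def files_with_phone_numbers(directory):
    return [p for p in Path(directory).glob("*.txt")
            if PHONE.search(p.read_text(errors="ignore"))]

# 5. Bits and bytes: "Oct 31 == Dec 25" because octal 31 is decimal 25.
assert 0o31 == 25
assert largest([3, -1, 42, 7]) == 42
```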

What you're looking for is not magical perfect answers, necessarily, but some insight into how this person solves problems, and whether they know their stuff (plus or minus 10 percent). The goal is to make sure that the candidates that do make it to the next step are not wasting their time or yours. So don't be shy about sticking to your guns and ending the call early if there are too many warning flags.

5. Give them an audition project.

So the candidate breezed through the hello world programming tests, has an amazing portfolio, is an excellent cultural fit, and also passed the phone screen with flying colors. Time to get them in for a face-to-face interview, right? Not so fast there cowboy!

I've seen candidates nail all of the above, join the company, and utterly fail to Get Things Done. Have I mentioned that hiring programmers is hard?

If you want to determine beyond the shadow of a doubt if someone's going to be a great hire, give them an audition project. I'm not talking about a generic, abstract programming problem, I'm talking about a real world, honest-to-God unit of work that you need done right now today on your actual product. Something you would give to a current employee, if they weren't all busy, y'know, doing other stuff.

This should be a regular consulting gig with an hourly rate, and a clearly defined project mission statement. Select a small project that can ideally be done in a few days, maybe at most a week or two. Either the candidate can come in to the office, or they can work remotely. I know not every business has these bite-sized units of work that they can slice off for someone outside the company (but trying desperately to get inside it) to take on. I'd argue that if you can't think of any way to make an audition mini-project work for a strong hiring candidate, perhaps you're not structuring the work properly for your existing employees, either.

If the audition project is a success, fantastic – you now have a highly qualified candidate that can provably Get Things Done, and you've accomplished something that needed doing. To date, I have never seen a candidate who passes the audition project fail to work out. I weigh performance on the audition project heavily; it's as close as you can get to actually working the job without being hired. And if the audition project doesn't work out, well, consider the cost of this little consulting gig a cheap exit fee compared to an extensive interview process with 4 or 5 other people at your company. Worst case, you can pass off the audition project to the next strong candidate.

(A probationary period of conditional employment can also work, and is conceptually quite similar. You could hire with a 6-8 week review "go or no go" decision everyone agrees to in advance.)

6. Get in a room with us and pitch.

Finally, you should meet candidates face-to-face at some point. It's inevitable, but the point of the earlier steps is that you should be 95% certain that a candidate would be a great hire before they ever set foot in an interview room.

I'm far from an expert on in person interviews, but I don't like interview puzzle questions, to put it mildly.

Instead, I have my own theory about how we should interview programmers: have the candidate give a 15 minute presentation on their area of expertise. I think this is a far better indicator of success than a traditional interview, because you'll quickly ascertain …

  • Is this person passionate about what they are doing?
  • Can they communicate effectively to a small group?
  • Do they have a good handle on their area of expertise?
  • Would your team enjoy working with this person?

The one thing every programmer should know, per Steve Yegge, is how to market yourself, your code, and your project. I wholeheartedly agree. Now pitch me!

7. None of this is guaranteed.

Please take this list at face value. I've seen these techniques work, and I've occasionally seen them not work. Adapt this advice to your particular situation, keep what you think makes sense, and ignore the rest (although I'd strongly advise you to never, ever skip step #1). Even in the best of circumstances, hiring human beings is hard. A job opportunity may not work out for reasons far beyond anyone's control. People are, as they say, complicated.

If you think of work as a relationship, one you'll spend 40 hours a week (or more) in for the rest of your life, it behooves everyone involved to "date smart". Both the company and the candidate should make a good faith best effort to determine if there's a match. Your goal shouldn't be merely to get a job, or hire someone for a job, but to have fun and create a love connection. Don't rush into anything unless it feels right on both sides.

(as an aside, if you're looking for ways to attract programmers, you can't go wrong with this excellent advice from Samuel Mullen.)

Posted by Jeff Atwood    102 Comments

Should All Web Traffic Be Encrypted?

February 23, 2012

The prevalence of free, open WiFi has made it rather easy for a WiFi eavesdropper to steal your identity cookie for the websites you visit while you're connected to that WiFi access point. This is something I talked about in Breaking the Web's Cookie Jar. It's difficult to fix without making major changes to the web's infrastructure.

In the year since I wrote that, a number of major websites have "solved" the WiFi eavesdropping problem by making encrypted HTTPS web traffic either an account option or mandatory for all logged-in users.

For example, I just noticed that Twitter, transparently to me and presumably all other Twitter users, switched to an encrypted web connection by default. You can tell because modern browsers show a lock icon in the address bar (or turn part of it green, for extended validation certificates) when the connection is encrypted.

Twitter-https-encryption-indicators

I initially resisted this as overkill, except for obvious targets like email (the skeleton key to all your online logins) and banking.

Yes, you can naively argue that every website should encrypt all their traffic all the time, but to me that's a "boil the sea" solution. I'd rather see a better, more secure identity protocol than ye olde HTTP cookies. I don't actually care if anyone sees the rest of my public activity on Stack Overflow; it's hardly a secret. But gee, I sure do care if they somehow sniff out my cookie and start running around doing stuff as me! Encrypting everything just to protect that one lousy cookie header seems like a whole lot of overkill to me.
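
For what it's worth, the web does have a crude tool aimed squarely at that one lousy cookie header: the Secure attribute, which tells the browser to never send the cookie over an unencrypted connection. A minimal sketch in Python (the session value is hypothetical); note that it only helps if logged-in pages are actually served over HTTPS in the first place:

```python
# Marking the identity cookie Secure (and HttpOnly) so the browser will
# only ever transmit it over an encrypted connection.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "s3cret-token"    # hypothetical session value
cookie["session_id"]["secure"] = True    # never sent over plain HTTP
cookie["session_id"]["httponly"] = True  # invisible to page JavaScript
print(cookie.output())
# -> Set-Cookie: session_id=s3cret-token; HttpOnly; Secure
```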

Of course, there's no reason to encrypt traffic for anonymous, not-logged-in users, and Twitter doesn't. You get a plain old HTTP connection until you log in, at which point they automatically switch to HTTPS encryption. Makes sense.
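
That pattern is simple to sketch. Here's a minimal illustration using Flask (my choice purely for illustration; nothing here reflects how Twitter actually implements it): anonymous requests pass through untouched, while any request carrying a logged-in session is redirected to the HTTPS version of the same URL.

```python
from flask import Flask, redirect, request, session

app = Flask(__name__)
app.secret_key = "replace-me"  # hypothetical cookie-signing key

@app.before_request
def https_for_logged_in_users():
    # Anonymous traffic stays on plain HTTP; authenticated sessions are
    # permanently redirected to the encrypted version of the same URL.
    if "user_id" in session and not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1),
                        code=301)
```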

It was totally painless for me, as a user, and it makes stealing my Twitter identity, or eavesdropping on my Twitter activity (as fascinating as I know that must sound), dramatically more difficult. I can't really construct a credible argument against doing this, even for something as relatively trivial as my Twitter account, and it has some definite benefits. So perhaps Twitter has the right idea here; maybe encrypted connections should be the default for all web sites. As tinfoil hat as this seemed to me a year ago, now I'm wondering if that might actually be the right thing to do for the long-term health of the overall web, too.

ENCRYPT ALL THE THINGS

Why not boil the sea, then? Let us encrypt all the things!

HTTPS isn't (that) expensive any more

Yes, in the hoary old days of the 1999 web, HTTPS was quite computationally expensive. But thanks to 13 years of Moore's Law, that's no longer the case. It's still more work to set up, yes, but consider the real world case of Gmail:

In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.

HTTPS means The Man can't spy on your Internet

Since all the traffic between you and the websites you log in to would now be encrypted, the ability of nefarious evildoers to …

  • steal your identity cookie
  • peek at what you're doing
  • see what you've typed
  • interfere with the content you send and receive

… is, if not completely eliminated, drastically limited. Regardless of whether you're on open public WiFi or not.

Personally, I don't care too much if people see what I'm doing online since the whole point of a lot of what I do is to … let people see what I'm doing online. But I certainly don't subscribe to the dangerous idea that "only criminals have things to hide"; everyone deserves the right to personal privacy. And there are lots of repressive governments out there who wouldn't hesitate to spy on what their citizens do online, or worse. Much, much worse. Why not improve the Internet for all of them at once?

HTTPS goes faster now

Security always comes at a cost, and encrypting a web connection is no different. HTTPS is inevitably going to be slower than a regular HTTP connection. But how much slower? It used to be that encrypted content wouldn't be cached in some browsers, but that's no longer true. And Google's SPDY protocol, intended as a drop-in replacement for HTTP, even goes so far as to bake encryption in by default, and not just for better performance:

[It is a specific technical goal of SPDY to] make SSL the underlying transport protocol, for better security and compatibility with existing network infrastructure. Although SSL does introduce a latency penalty, we believe that the long-term future of the web depends on a secure network connection. In addition, the use of SSL is necessary to ensure that communication across existing proxies is not broken.

There's also SSL False Start, which requires a modern browser but reduces the painful latency inherent in the expensive, yet necessary, handshaking required to get encryption going. SSL encryption of HTTP will never be free, exactly, but it's certainly a lot faster than it used to be, and getting faster every year.

Bolting on encryption for logged-in users is by no means an easy thing to accomplish, particularly on large, established websites. You won't see me out there berating every public website for not offering encrypted connections yesterday because I know how much work it takes, and how much additional complexity it can add to an already busy team. Even though HTTPS is way easier now than it was even a few years ago, there are still plenty of tough gotchas: proxy caching, for example, becomes vastly harder when the proxies can no longer "see" what the encrypted traffic they are proxying is doing. Most sites these days are a broad mashup of content from different sources, and technically all of them need to be on HTTPS for a properly encrypted connection. Relatively underpowered and weakly connected mobile devices will pay a much steeper penalty, too.

Maybe not tomorrow, maybe not next year, but over the medium to long term, adopting encrypted web connections as a standard for logged-in users is the healthiest direction for the future of the web. We need to work toward making HTTPS easier, faster, and most of all, the default for logged in users.

Posted by Jeff Atwood    49 Comments