Belshe leaves Microsoft

REDMOND, WA – Friday, September 29, 2006 marked the final day at Microsoft for Mike Belshe, a development manager in the IS Client group in Silicon Valley.  The end of Belshe’s tenure marks a turning point for Microsoft, as Allchin, Valentine, Kennedy and Gates also hang up their hats.

The recent announcement was not a surprise.  Markets had adjusted to the Microsoft (MSFT) news over the past few months, and the stock has soared.  Prior to the news, the stock traded at $22 in June, but now tops $27.35.  “We’re not surprised by the market reaction,” said Steve Ballmer, CEO of Microsoft.  “Mike’s salary was pretty high, and with revenues of only $10B per quarter, our profits were in jeopardy.  He was doing a stellar job, but it was really hard for us to provide investors the return they expect.”  Investors are ecstatic, and happy to have the money back in the bank.

When asked, all that Belshe had to say was, “I really enjoyed the folks I worked with at Microsoft.  I wish them all the best, and I’m super excited that the stock price is up.  Like everyone else, I expect great things from a great team of people.  And Microsoft has got a great team.” 

Working in Seattle

I just read a really old blog post from Steven Sinofsky.  He’s a pretty big deal at Microsoft, and he writes a great blog with all kinds of advice for college grads about what it’s like to work at Microsoft.  If you are thinking of joining Microsoft, definitely check out his blog.

But this particular article struck me as somewhat comical.  I never thought of the weather as much of a selling point for a job, much less as a selling point for Seattle!  Steven fairly posts the cold, hard facts about the weather – that it rains for 155 days per year in Seattle.  (That’s 42% of the days.)  They even get streaks where it rains every day for 3 or more weeks at a time!  But then he goes on to pitch that the climate there is actually a moderate one, going so far as to state, “Most people are surprised to learn that it really doesn’t rain all that much in Seattle.”  Huh?  I’m not sure how much rain is “all that much” by his measure, but I guess we all have our own opinions.  To me, that is great salesmanship, Steve!  Believe in your product!

Truth be told, if you are going to work for Microsoft, definitely move to Redmond.  The campus is great, the people are great, and you’ll have a blast working on great technology.  But make no mistake – with the exception of a very short summer – the weather is wet.  Steven claims that “No matter where you live, people can find a reason to complain about the weather (or is it weather forecasting?)”, but he’s just lived in Seattle too long.  Here in California, we don’t dwell on the weather, except to debate whether we should bike to work or surf.

Desktop Applications Cost Too Much

Ryan Stewart this week was Looking at the Strategy of Rich Internet Applications. He hits a lot of points there, but also takes a stab at the fundamental differences between Web Applications and Desktop Applications. Clearly, if you read other parts of his blog, you’ll know that Ryan very much believes the desktop has a richer experience to offer than the web does.

As a Microsoft employee, I’m supposed to say, “he’s right”. After all, we Microsofties have a vested interest in proving this true – if users don’t need the desktop anymore, we’re in serious trouble! I’ve personally racked my brain trying to prove he’s right, hoping he’s right. But alas, I think he will ultimately be proven wrong. I’m not saying that Web Applications have knocked out desktop apps yet. They certainly have not. But I also have no doubt that they ultimately will. Web Applications are fundamentally cheaper than Desktop Applications to build, install, administer, maintain and support.

The Problem with Desktop Software:

1) The Cost of Deploying Software
It is incredibly expensive to maintain software deployed to every machine. Take any large corporation, and they’ve got a fleet of IT professionals whose only job is to make sure that the deployed software still works. Can you imagine managing 100,000 desktops that are in use by 100,000 different people, each sitting at desks in varied locations? Users break it, and the IT guy has to fix it. If the software weren’t deployed in the first place, this cost would go away.

2) The Cost of Conflicting Software
It’s the Operating System’s job to manage the resources of the hardware. To date, we have yet to see an Operating System that can prevent conflicts between two software packages. It boggles my mind, but somehow installing an Antivirus program can affect the way your Email client works. This is a sad fact that each and every one of us has experienced. In today’s world, installing two software packages on a single box means you’ve got bugs. It’s impossible to test every combination, and we software professionals inherently suck at it.

3) The Cost of Patching Software
Patching software is tricky. You need to communicate to the user that you have a patch, how important the patch is, what the fixes are, what the side effects are, as well as any gotchas. You may need to upgrade the user’s data formats, depending on how much the new version differs from the old one. Microsoft does patching better than anyone with its Microsoft Update product, but it’s taken years to get right, and it takes more process to update a product than it does to get the Space Shuttle off the ground. Why? Well, if you are going to update 500,000,000 desktops, you’d better damn well know it works. Frankly, unless Microsoft can both reduce the process cost and also make this technology available at zero cost to every other software maker, patching of most software will continue to be a serious gamble for the end-user. And even if the patch does work, don’t forget about the patch causing new instances of Cost #2 mentioned above.

4) The Cost of Supporting every Platform
Once you deploy your software, your new applications need to support the old ones. This adds combinatorial expense to building software. If you don’t think it’s too bad to support Windows 98, Windows 2000, Windows XP, Windows Server 2003, and Vista, how about writing an application that needs to work with 4 different versions of Office (Office 2000, Office XP, Office 2003, and Office 2007) on each of those OSes? (Now you’ve got 20 combinations.) But wait! Don’t forget that the world is changing from 32-bit to 64-bit architectures, so you’ll need to build both 32-bit and 64-bit versions of your code for each of them! Now you’re looking at building software to be tested on 40 different platforms. Seriously, who for a minute thinks that software makers don’t take shortcuts here? Maybe now you realize why your Windows Server 2003 breaks so much (it gets tested the least).

For platform support, I haven’t even mentioned those noble apps that want to build for both the Macintosh and the Windows environments. Doing that is such a daunting task that nobody expects applications to be concurrently released on both anymore. That’s just crazy talk.
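To make the arithmetic concrete, here is roughly where that 40 comes from, sketched in a few lines of TypeScript. (It’s a simplification – not every OS shipped a 64-bit edition – but the combinatorial blow-up is the point.)

```typescript
// Every combination of OS, Office version, and CPU architecture becomes
// its own test target.
const operatingSystems = [
  "Windows 98", "Windows 2000", "Windows XP", "Windows Server 2003", "Windows Vista",
];
const officeVersions = ["Office 2000", "Office XP", "Office 2003", "Office 2007"];
const architectures = ["32-bit", "64-bit"];

const testTargets = operatingSystems.flatMap((os) =>
  officeVersions.flatMap((office) =>
    architectures.map((arch) => `${os} / ${office} / ${arch}`)
  )
);

console.log(testTargets.length); // 5 * 4 * 2 = 40 combinations to test
```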

5) The Cost of Integrating Web based and Desktop based software
Building Web-based applications means you have to build out a server infrastructure and employ a set of technologies that is fundamentally different from what you use when building desktop-based applications. Unfortunately for the desktop apps, almost all modern apps need some sort of server-side infrastructure to build the latest features. Both Quicken and Money, which are classic Desktop Applications, now integrate with sophisticated server-side applications for tracking your investments, doing online trading, and more. Over time, it will prove too costly to build both the Desktop portion and the Web portion of these applications. Software providers will need to consolidate. Unfortunately, you can’t move the web content (real-time stock quotes, news, banking services, etc.) onto the desktop. So, if you want to consolidate your technologies, the only way to do it is to move to the Web-based application.

6) The Cost of Going Mobile
Jonathan Schwartz (CEO, Sun Microsystems) wrote about this just the other day. While here in America we haven’t gone as crazy about mobile as other countries have, there is no doubt it is coming. Which application is better positioned to go mobile? The Desktop-based application or the Web-based one? Desktop apps need to be completely rewritten to work on mobile devices.

7) The Cost of Synchronizing Data
Once you’ve managed to deploy your desktop app, you start to use it. You write a few Word documents, save away some QuickBooks data, and get some good work done. But then you need to travel to Phoenix. Yikes! Now you need a laptop so you can take your data with you. But wait – you left your laptop in the taxi, and now you need to get the client’s phone number so you can tell him you’ll be late. Shoot – that was on the laptop too! The problem is that you haven’t synchronized your data between all your desktop-based software packages. So, in addition to the desktop and laptop, you’ll now be buying services and software from one of the mobile carriers to try to sync all this data for you. Getting expensive!

The Solution is Web-Based

OK – so if you’ve read this far, you may not yet be convinced of the inevitable doom for our desktop applications. Just to make sure nobody says I left anything out, let’s recap how the Web-based application fares against each of the 7 costs above.

1) Cost of Deploying Software. In the Web-based model, the IT department does not deploy anything except the browser itself. Once that is deployed, new applications can be added without deployment costs to the desktops. (Server-side deployment or data deployment, such as hosting email data, still exists, but it exists in the desktop arena as well with server applications like Exchange.)

2) Cost of Conflicting Software. The web is designed around a set of pages which are partitioned. This partitioning ensures that unrelated applications don’t conflict. (e.g. Yahoo can’t change a page on their site which breaks Microsoft’s site)
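This partitioning is enforced by the browser’s same-origin policy. A minimal sketch of what that looks like from script (the URLs here are just placeholders):

```typescript
// If the page running this script is served from, say, yahoo.com, the
// browser will not let it reach into a page served from microsoft.com.
const frame = document.createElement("iframe");
frame.src = "https://www.microsoft.com/";
document.body.appendChild(frame);

frame.onload = () => {
  try {
    // Cross-origin access throws a security error; the two "applications"
    // simply cannot collide with each other.
    const foreignTitle = frame.contentWindow!.document.title;
    console.log(foreignTitle);
  } catch (err) {
    console.log("Blocked by the same-origin policy:", err);
  }
};
```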

3) Cost of Patching Software. Patching exists in both models. However, in the desktop model, your patch has to work on any desktop, which could be running any platform, or have been modified by the user in any way. The user could have deleted registry keys, moved disks around, or added new gizmos like USB drives, printers, and network cards. In the Web-based world, the application provider controls all of these things. Further, the patch can be scheduled to run at times when the user is known not to be using the system. Because you know what you are patching, Web Applications patch much more easily. You only have to support the new version and the one prior; there is no need to support 10-year-old systems.

4) Cost of Supporting Every Platform. This problem does not exist in the Web Application world, except for supporting various browser features. IE, Firefox, and the Mac browsers each have somewhat different features, and this can be tricky to build software for. Nonetheless, it is infinitely simpler than the myriad of combinations created on the desktop.
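To give a concrete taste of those differences: older versions of IE expose XMLHttpRequest only through ActiveX, while Firefox and the Mac browsers provide it natively. A small sketch (the /api/status endpoint is just a placeholder):

```typescript
// Pick the right XMLHttpRequest implementation for the current browser.
function createXhr(): XMLHttpRequest {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest(); // Firefox, Safari, Opera, IE 7+
  }
  // IE 6 fallback; ActiveXObject is IE-only, hence the escape hatch to `any`.
  return new (window as any).ActiveXObject("Microsoft.XMLHTTP");
}

const request = createXhr();
request.open("GET", "/api/status", true);
request.onreadystatechange = () => {
  if (request.readyState === 4 && request.status === 200) {
    console.log(request.responseText);
  }
};
request.send();
```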

5) Cost of Integrating Web-Apps and Desktop Apps. Ironically, the Web-App world already does this. There is a very clear line between what is done on the client (HTML, JavaScript, etc.) and what is done on the server. Web Apps are designed around this split, and don’t usually need to modify the desktop at all.

6) Cost of Going Mobile. Web Apps need relatively small changes to work on mobile devices, and for the desperate, even generic mobile browsers can do a functional job.

7) Cost of Synchronizing Data. When your data lives on the server, every machine you log in from sees the same copy, so there is little left to synchronize. (The wrinkle of storing some data locally is discussed below.)

What it Takes for Web-Apps to Finally Conquer the Desktop

Alright. Now that we’ve established that Web Applications truly are cheaper to build and maintain, why haven’t they taken over already? Clearly something is missing.

Better UI & App Platform
HTML & JavaScript are pretty flexible, and it always amazes me what some people can do with them. But most UIs are pretty poor when compared to what the desktop can provide. Graphics rendering is pretty much unavailable, and accessibility and navigation metaphors are often broken.

We need a few more generations of markup to allow Web Apps to better utilize the client and create more consistent user interfaces.

Ability to Save Data Locally
Today, going to a web-based application means that you are storing your data on the Web. This is a big tradeoff in terms of security and bandwidth. I want my photos to be mine – but I want the application on the web.

I fully expect web browsers to be capable of doing this in the future. I also expect web browsers will be capable of storing data on USB or flash devices. Instead of each of us having a desktop with a big hard disk, we’ll have a set of small compact flashes that we can plug into our cameras, our phones, our computers, the kiosk at the airport, or all of the above.

Note that the ability to save data locally is specifically what weakens desktops for the “Cost of Going Mobile” and the “Cost of Synchronization”. It’s these private data stores which are costly, and using flash or USB devices re-introduces part of that. The difference, however, is that the application is able to write anywhere: instead of only being able to write to “C:\Documents and Settings\Joe\Microsoft\Foo Application”, applications will write to wherever the user wants the data. And if that is a mobile storage device, the data will go mobile, decreasing the costs of mobility and hopefully eliminating the need for much synchronization.
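Here is a toy sketch of the shape this could take, written against a simple browser key/value store. (The localStorage API used here is purely illustrative, and real photos would obviously need something richer than strings; the point is only that the web app, not the desktop, decides where the data goes.)

```typescript
// Save a small piece of user data locally, keyed by photo ID.
function saveCaption(photoId: string, caption: string): void {
  window.localStorage.setItem(`photo:${photoId}:caption`, caption);
}

// Read it back later, from the same browser, without a round-trip to the server.
function loadCaption(photoId: string): string | null {
  return window.localStorage.getItem(`photo:${photoId}:caption`);
}

saveCaption("IMG_0042", "Sunset over the bay");
console.log(loadCaption("IMG_0042")); // "Sunset over the bay"
```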

Ability to Provide Internet and Intranet Solutions
Moving the storage for the consumer is one thing, but companies will still need and want to control their email and other data. Web App providers will need to offer backends that can either be hosted by the provider on the Internet or run by the company’s IT department on the intranet. Let the customer decide.

More Bandwidth
We need more broadband penetration. If you don’t have broadband, you want your desktop apps. Sooner or later, this will be realized. Some thought we’d have enough bandwidth 10 years ago. Who knows, maybe it’s still 10 more years away.

Conclusion

For me, the conclusion is obvious. Users will ultimately choose the pains that come with remote-managing their data over the pains of doing system administration. It’s just easier to delegate system administration tasks (deployment, backups, etc.) than it is to do them yourself. As soon as the technology takes us far enough, we’ll jump.

Don’t conclude that I’m being absolute here. This is an evolution that will take many years. There will always be some desktops out there. High-performance games may demand it (or maybe dedicated consoles like the Xbox and PlayStation will take that over), and other vertical apps will demand it too. Developers will need their own boxes. Video editors and graphics designers will probably need their own machines to do their specialized work. For mainstream use, though, we’re heading toward pure web. And increasingly, even these specialized work environments will move to the web too.

Finally, Some External Resources

Paul Graham
I don’t quite agree with every word, but mostly I do agree. Keep in mind that Paul wrote this in 2001: “There is all the more reason for startups to write Web-based software now, because writing desktop software has become a lot less fun. If you want to write desktop software now you do it on Microsoft’s terms, calling their APIs and working around their buggy OS. And if you manage to write something that takes off, you may find that you were merely doing market research for Microsoft.”

If you want to have backward compatibility and support for environments as far back as 10 years old, and you are going to deploy hundreds of millions of copies of it, you are going to be left with something that seems like “calling their APIs and working around their buggy OS.” It’s not Microsoft that is the problem, it’s the nature of the beast.

Om Malik
Om teamed up with Niall Kennedy recently to discuss this topic, and they concluded that there is a lot of life left for Desktops. They are probably mostly right, but I think their long term vision is a little short term. Om created a poll on this topic, with 64% of respondents wanting “both desktop and web apps”.

Paul Kedrosky
Paul’s interesting viewpoint is to look at history, “Way back when there was a time when people would have said that editing text in WYSIWYG was a CPU-bound task that required a desktop application, but times have a-changed. I have no doubt that the same thing will happen, sooner rather than later, to many tasks, like audio-editing, that are currently deemed now-and-forever desktop apps.”

Peter Rip
The real problem with desktop apps is no one works at their desktop anymore.

Windows Live Writer Review

If you blog, you need to use Windows Live Writer.  I wrote about Writer once before, but didn’t do a full review.  This time, I am so hooked, I have to write more.

I recently switched over to WordPress from Movable Type, and with my first few posts I sincerely tried to use the WordPress AJAX, web-based editor.  But the web-based editor is so primitive compared to Writer that I had to switch back.

There are a few problems with the web editors.  First, you really do have to know HTML.  Although WordPress tries to be WYSIWYG, it has lots of bugs, and I often had to switch to HTML mode to make my posts look right.  Worse, sometimes I simply could not get the editor to format posts the way I wanted.  Everything was difficult, and I was constantly fighting with the editor rather than writing my post.  Finally, the WordPress and Movable Type editors give you a tiny little window to compose your post.  It is functional, but it hardly lets you see your post as it will appear on your final blog.

Writer solves all of these problems and goes way beyond.  The editor really is WYSIWYG.  It even uses the templates from your blog, so that as you type you see exactly how your entry will look.  Native, Word-like controls are a button press away to make your posts look great.

But the biggest feature for me is the way Writer seamlessly integrates image handling.  WordPress does okay, but both WordPress and Movable Type force you to do a lot of work to insert an image.  Movable Type even makes you go through a special upload process not too dissimilar from using FTP.

With Writer, images are incredibly easy.  First, you can just drag-and-drop content from the web into your post.  I prefer to insert from a file because you have more options for manipulating the image afterward.  Here are the three panes of options provided by Writer:

[Screenshots: the Image Layout, Image Properties, and Image Effects panes]

Imaging features include:

  • One button positioning to Left/Right/Center in your content
  • Add Drop Shadow or PhotoPaper backgrounds to the image
  • Automatically add margins to the image
  • Have the image link to a larger version of itself
  • Add watermarks to images
  • Add effects to images (black/white, emboss, blur, sepia, etc)

Outside of imaging and basic editing, Writer knocks out all the basic features too.  It can manage multiple blogs for you so that you can post to many sources from one simple app.  This is far better than having to navigate to multiple web-based admin screens.  Further, you can edit offline, perfect for composing those posts while you are on the plane or away from the net.  Writer even bundles a spell checker.

Finding drafts, browsing prior posts, and creating new posts are all a snap.  There is also a plugin API which I haven’t toyed with, but if you’ve got content, you may want to check it out so that other bloggers can more easily write about you.

All in all, it’s so much easier to work with Writer than it is to work with the Movable Type or WordPress web-based editors.  You’ll save a lot of time and everything will look a lot nicer too.

Putt your way to 63 Miles-Per-Gallon

The average MPG on my last tank was 63.0!  The EPA rating on my 2006 Honda Civic Hybrid is only 50 MPG.  How did I do that?

It takes a little patience, but the answer is simple.  Just drive 60 miles per hour the whole way, and you’ll keep that MPG rating up.  The car’s fuel economy is very sensitive to small increases in throttle at those speeds.

Achieving MPG at that level yields two conclusions for me:

First, people really are governed by metrics.  They say in management school that keeping metrics for how your employees are doing is key to optimizing behaviors, and I think it’s true.  Whether you are measuring bugs/day, support calls/week, or cars washed/hour, we all have a competitive element and a desire to rank at the top of the chart.  For me, when the only metric I had in my car was the speedometer, of course I wanted to get to a maximum speed.  How many times have you heard your coworkers brag about their top speed?  Now, however, my metric is MPG.  And if getting the highest MPG means driving 60MPH instead of 90MPH, well, I can do that too.  What if cars didn’t have speedometers and only had MPG gauges?  Would everyone start to optimize for MPG?

The second thought is that we should create a website for us MPG freaks – www.MPGHiScores.com – where drivers can post their MPG accomplishments.  The rules would be that “scores” must cover at least a 100-mile trip, and you must submit a photo of your odometer.

Doing Backups Ain’t Like The Movies

In the movies, you’ll watch the hero covertly enter someone else’s office, quickly hack into and navigate the computer system, drop a CD into the drive, and wait impatiently for 5-10 seconds while all of the data copies.

In reality, I fumble with the disks, can’t even get the DVD drive to open (it keeps closing automatically, almost as if it knows it’s toying with me), discover my Windows 2003 server doesn’t include software for doing a backup to DVD even if I wanted to, go through 2 or 3 reboots, have to figure out the obtuse software, write the first disk and fail, do it over, and finally, about 2 hours later, I have a simple backup.

If only life emulated art…

Google’s Electric Bill & Energy Efficient PCs

Over the last few days I’ve been talking to a few folks about the dual-core chips and how much power they can draw. High-end systems are drawing 150-200 watts each. That’s pretty staggering, and that’s with the dual-core processors drawing less than the previous generation of Pentium chips.

Today, the NYTimes writes about a whitepaper Google is presenting on improving PC energy efficiency. It may not matter much for a PC or two at home, but for a server farm, which can use 10,000 or more computers, it’s a really big deal. Further, while your home PC is usually idle, the goal of a server machine is to actually have it under fair load 24×7. And more load invariably means more power draw.

To put it into perspective, if you had a datacenter with 10,000 CPUs drawing 150 watts each (and no overhead equipment), and you paid $0.10 per kWh, you’d be facing an electric bill of about $108,000 per month. Again, according to the NYTimes, Google has around 450,000 machines. If they used these power-hungry chips, they’d be churning through $4.8M per month just for electricity. No wonder they wrote this whitepaper. No wonder they are moving out of California.
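Here is the back-of-the-envelope math behind those numbers, with the assumptions spelled out: 150 watts per machine, a 30-day month, and $0.10 per kWh. (These are my working assumptions, not figures from Google or the NYTimes.)

```typescript
// Rough monthly electricity cost for a farm of machines.
const wattsPerMachine = 150;
const hoursPerMonth = 24 * 30;
const dollarsPerKwh = 0.10;

function monthlyElectricBill(machineCount: number): number {
  const kWhPerMonth = (machineCount * wattsPerMachine * hoursPerMonth) / 1000;
  return kWhPerMonth * dollarsPerKwh;
}

console.log(monthlyElectricBill(10_000));  // 108000  -> ~$108K for a 10,000-CPU datacenter
console.log(monthlyElectricBill(450_000)); // 4860000 -> ~$4.86M for ~450,000 machines
```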

Just wait until the Tesla and other electric vehicles appear. We’re worried about oil prices now, but electricity demand is on the brink of massive increases.

See also: eWeek article, cNet “Power could cost more than servers”

Windows Live Soapbox

MSN has its own YouTube clone coming out called Soapbox. I haven’t been able to try it myself, but I thought that B2Day had a great comment about these types of services,

“It’s not the technology that makes these services so compelling, it’s the community who flocks to them.”

Microsoft will build the technology. But it will be a struggle for technology alone to overcome the social phenomenon of YouTube.

Does Innovation through Acquisition Count?

I hear a lot of people complain that Microsoft isn’t innovative. When you try to point out innovations that Microsoft has made, they claim that all the “good innovation” is via acquisition. Even if that were true, does it matter?

The nemesis of any startup is getting good distribution. It’s incredibly expensive and difficult to build great distribution channels. Microsoft, by contrast, has distribution like no other. If Microsoft buys a great product and gives it distribution, why shouldn’t Microsoft get credit for innovation? The fact is that the startup couldn’t get the distribution on their own. Microsoft provided it by buying the company and using its distribution. Isn’t that a form of innovation in its own right?
Microsoft deserves full credit.

Office 2007 Analogy

Marc Orchant compares Office 2007 to driving a new rental car. I think there’s some truth to it, and the analogy can be extended: you only rent a car when you don’t already have one (e.g. Office 2003). 🙂

I actually mostly like the new UI. It looks clean, and it is not hard to use. But a UI doesn’t compel me to upgrade. I don’t really need prettier menus. I don’t even think I need updates to Word or Excel or even PowerPoint. (Perhaps I lack vision on those products!)

What I do need is better email. Why is it that when I add a conference room to a meeting invite in Outlook, I have to manually mark it as a “resource”, or it won’t actually book the conference room? (So many people have experienced this – it could easily be comedy material for the Office Space sequel.) Or, if I send someone my phone number, why doesn’t it automatically add my phone number to the contact info? Or what about automatically showing me map locations for addresses? Or adding appointments to my calendar automatically? Why can’t my wife schedule time on my calendar at work?

Lastly, the .docx file type makes Word 2007 cumbersome to use for collaboration. Sure, it is an open XML format, which is a great step forward from proprietary formats, so it is with some reservation that I complain about this. But if you have Word 2007 and your colleague has Word 2003, I guarantee you’ll accidentally send a .docx file which he won’t be able to read. To solve this, he can install a .docx reader for his older version of Office, but more likely you’ll end up resending the file after manually converting it to a .doc file for him. It doesn’t seem right that the default course of action is to send a document that someone else can’t read. Why doesn’t Outlook figure that out before you send? After all, you’ve probably sent email to this colleague before, and Outlook puts its version number into every mail header, so your Outlook can know what version of Office your colleague has. Outlook could tell you, before you send it, that you are sending a Word-2007-specific file to a user who probably only has Word 2003, and automatically convert it. Now that would be pretty smart! I guess I wish the default were to save .doc, and not .docx. There’s probably an option in there for that somewhere. But the ribbon has so many buttons on it, I’m not sure quite where to look!
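Just to show how simple the check could be, here is a sketch of the pre-send warning I have in mind. Everything in it is hypothetical: Outlook exposes no such API, and lookupOfficeVersion() merely stands in for “look at the version stamped in mail already received from this person.”

```typescript
// Hypothetical lookup, pretending we have a cache built from the version
// headers of previously received mail.
function lookupOfficeVersion(recipient: string): number | null {
  const seenVersions: Record<string, number> = { "colleague@example.com": 2003 };
  return seenVersions[recipient] ?? null;
}

// Warn (or auto-convert) when a .docx is headed to someone on an older Office.
function warnBeforeSending(recipient: string, attachment: string): void {
  const version = lookupOfficeVersion(recipient);
  if (attachment.endsWith(".docx") && version !== null && version < 2007) {
    console.log(
      `${recipient} appears to run Office ${version}; ` +
      `convert ${attachment} to .doc before sending?`
    );
  }
}

warnBeforeSending("colleague@example.com", "quarterly-report.docx");
```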

On the good news front, Office 2007 does successfully tackle my #1 pet peeve with Outlook: fast email search. That is the best reason to upgrade.