
developerWorks  >  Lotus  >  Forums & community  >  Best Practice Makes Perfect

Best Practice Makes Perfect

A collaboration with Domino developers about how to do it and how to get it right in Domino

Nathan Freeman writes in a comment:

...the usage patterns of a successful application will tend to slow it down at a rate faster than Moore's Law will accelerate it.
There is no better example of this than a person's Notes Inbox.  As time goes by, the steady addition of new mail drags performance.  In the vast majority of real world situations, the performance degradation outpaces hardware updates, whether at the client, the server, or the network.
The history of the Notes platform in many implementations is one of non-scalable departmental applications thrown on bigger and bigger hardware, in an effort to find scalability out of something that was designed on the cheap and in a hurry in the first place.
There's something far more valuable than developer time.  It's user time.  A developer who spends an extra week or an extra month making sure an app is scalable and easy to use is investing wisely.  Banking on Moore's law ... is just a fool's errand.

[Image: A heavy burden of documents]

It's certainly worth discussing, and opinions can differ.

What I'm objecting to is a preoccupation with performance at the expense of developer productivity. I want efforts at maximizing performance targeted where they will do the most good -- i.e. where they will save end users lots of time. There's a definite trade-off of developer time versus user time, but it's not an even trade-off, even if you take any disparity in salaries into account. If you tie up a developer for weeks and months getting every last bit of performance out of an application, there's an opportunity cost. They're not working on other applications that could boost the productivity of other users.

What's more important -- getting it so 500 users can open a document in 1 second rather than 1.5, or fixing it so 500 users can collaborate to do a task in one day that used to take them five? The answer may not always be the same, because it depends how often the latter task needs to be done and whether the first group of users includes the president of the company, but I hope you see my point. Even aside from considerations of employee productivity, the new task may have business value (turning around their customers' requests faster) which also weighs in.

I agree that applications tend to get much slower as more documents are added, especially if they are poorly designed. It makes sense to think about performance and for developers to build habits that tend to give better performance, such as:

  • Avoid GetNthDocument.
  • Use keyword formulas that avoid doing a lookup in read mode.
  • Use Computed for Display fields rather than Computed if you don't really need the value stored.
These are things you can do as you go along without having to think much about it, especially if you use the Paste Information app or something like it to have your best-of-breed formulas and code snippets immediately to hand.
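To see why the first habit in the list matters, consider what GetNthDocument does: each call re-walks the collection from the start, so visiting n documents that way costs on the order of n² traversals, while GetFirstDocument/GetNextDocument walks the collection once. A rough Python sketch, simulating a document collection as a linked list (this is an illustration of the access pattern, not the actual Notes API):

```python
class Doc:
    """A minimal stand-in for a document in a collection."""
    def __init__(self, title, nxt=None):
        self.title, self.next = title, nxt

def build_collection(n):
    head = None
    for i in range(n, 0, -1):
        head = Doc(f"doc{i}", head)
    return head

def get_nth(head, n):
    """Mimics GetNthDocument: restart from the head on every call."""
    doc, steps = head, 0
    for _ in range(n - 1):
        doc = doc.next
        steps += 1
    return doc, steps

def visit_with_get_nth(head, count):
    total = 0
    for i in range(1, count + 1):
        _, steps = get_nth(head, i)
        total += steps
    return total          # quadratic: 0 + 1 + ... + (count - 1)

def visit_with_get_next(head):
    total, doc = 0, head
    while doc.next:
        doc = doc.next
        total += 1
    return total          # linear: count - 1 link traversals

head = build_collection(1000)
print(visit_with_get_nth(head, 1000))   # 499500 traversals
print(visit_with_get_next(head))        # 999 traversals
```

For 1,000 documents the difference is 499,500 traversals versus 999; the gap widens quadratically as the collection grows.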

Any big deployment should have performance goals and test whether they are met, using a realistically large number of test documents. But as Nathan points out, the development path of many Notes apps doesn't follow a formal IT process; someone buys a copy of "Notes Development for Dummies" and cobbles something together, and it works fairly well for their department for a while before an accumulation of documents drags it down, down, into the ground.

It might be useless to argue about what a novice developer should do around performance, because they're not going to do it anyway. But consider this: many applications never get off the ground. Someone creates one he thinks will be useful, but he hasn't really understood the work task, or the user interface he designed stinks, or people are unwilling to use it for some other reason. If he had put a week or a month into maximizing performance, that would be a week or a month wasted. Performance is never the problem for initial acceptance, unless it's a mass import of data from somewhere to start, and in that case the problem is obvious immediately.

Fortunately, Notes designs are malleable; once it's clear that an application is going to be useful, then is a good time to get IT support for formal performance testing and adjustments as necessary. That's when you know your developers' time is well spent working on performance. And of course, it means that good analysis tools are essential. Because the application wasn't created by someone with good habits, there's a need for a quick way to zip through it and find obvious problem areas -- such as the little list above. Fortunately, there are some excellent tools of this sort available from business partners (I suspect I shouldn't name names here).

And of course, an application doesn't necessarily get slower over time -- only if you let the documents accumulate. Adding archiving to a slow application can often rescue it from the doldrums, and that's relatively simple to do.

Andre Guirard | 9 May 2007 01:07:00 AM ET | Café Porch, Plymouth, MN, USA | Comments (5)


1) Well...
Nathan T. Freeman | 5/9/2007 7:06:22 AM

Andre, thanks for taking the time to respond to my point.

Data build-up is only one vector by which applications can slow. There's also user population: even with a constant amount of data, a single application can slow down when the user base grows by an order of magnitude, if the design is poor in the first place.

Just yesterday, I was looking at a Domino web app built by a Java developer, where one screen was doing 5 "NoCache" DbLookups against a view with an @Today in the selection formula. At 1 user, performance was... moderate. At 10, the system jumped to 45-second response time. Moore's Law isn't going to rescue that application.

I definitely don't believe in obsessive performance tuning. My own applications are littered with comment lines like 'could make things go faster with a better caching implementation in this property, but it's not worth the time right now.' But there's a baseline performance consideration that should go into every function you work on. There's a balance. And I would dearly love to someday be able to define it.

Regarding this line "once it's clear that an application is going to be useful, then is a good time to get IT support for formal performance testing and adjustments as necessary" -- what you're essentially referring to is refactoring a design after it's had a successful pilot.

The problem is: this never happens in the real world. I mean NEVER. It's more rare than good system documentation.

Look at a process which should naturally undergo radical refactoring: open source. With thousands of knowledgeable eyes peering into the details of an app, wouldn't you expect that pointing out its design issues would jump to the forefront and lead rapidly to better code?

Except it really doesn't. The tendency is to accept the feature set as delivered, because it does at least solve the problem, even if in a weak way.

The OpenNTF Blogsphere project went through this, and all of us who use it were fortunate enough to have Declan Lynch come along last year and dedicate the time to refactor what had grown into a powerful but cobbled-together organic mishmash of features. But such a process only ever happens when an app reaches the scale of PRODUCT rather than PROJECT. (In fact, if I do the "product vs project" session I'm thinking of next year at 'sphere, I'll make sure to illustrate refactoring versions as a highlight.)

There's a huge middle ground of applications that are in intranet use, that are run by thousands of users every day, that deal with processes handling billions of dollars, where a bad initial design is simply never corrected -- whether because any single day of pain isn't great enough, or the user base simply doesn't know how much they lose by the bad design. I have often wondered whether this class of applications represents a business partner profit opportunity. It certainly represents an opportunity for competitive migrations -- that's the low-hanging fruit for J2EE and SharePoint vendors!

The Best Practice has to be to figure out the balance. Maybe we need measurement tools to do this. Maybe an app-level analysis mode that returns usage-pattern details on the level of what Microsoft did for the Office 2007 research. Or maybe even better toolkits for devs so they simply don't have bad design ideas to start with.

I'm really not sure. But I'm afraid I can't agree that it's as simple as "let the hardware handle it."

Thanks again.

2) I have to agree with Nathan on this
Ben Langhinrichs | 5/9/2007 7:43:12 AM

While I completely understand your point, Andre, the reality is that it is very, very hard to get anybody to refactor after a pilot. It may be a pendulum-swing issue, but there are many applications, projects and even products that fail because there is too little emphasis on performance and scalability. Yes, we are all familiar with the developer who wants to obsess over every nanosecond, but I think you would find that those are far less common now than ten or twenty years ago. Now, the mantra is increasingly to let the machine get faster, which leads to the massive bloat in products such as Vista, Microsoft Office, and WebSphere. The minimum specs for these products are irrational, and indications of a generation that cumulatively assumes that the machine will get faster, bigger, etc. and that they don't have to do the work.

To cite one quick example, I posted a question about my OpenSesame project and the speed at which it exported rich text to ODF. It was handling about 20 documents a second, and I asked if that was fast enough. I just wasn't satisfied, so I spent a few days on it, and now it is handling about 100 a second. Perhaps I did a miserable job the first time; I could have gotten it out there and seen how people felt about the speed without worrying, but it would have been exponentially harder to risk a major rewrite of any portion of a production product, so I would have been more tempted to leave it alone. Of course, it won't matter for people exporting a few hundred documents, but if anybody tries it on a million, the original speed would have made it completely infeasible.

So, while I used to caution people to not worry about performance until they proved that the app would work, I don't anymore. Too few are worrying about performance at all. Instead, I suggest making sure that any application can scale at least reasonably before putting it out in front of users, because there won't be time or money to make it scale after that. Fortunately, I have the luxury of determining my own priorities with my software development, so I can do that easily. I understand that it is harder for many people who have to answer to somebody else.

3) Bloat
Russ Mayes | 5/9/2007 10:21:27 AM

I am with Nathan and Ben on this. It is almost unheard of to have the opportunity to go back for performance improvements. Much more likely is that you'll go back to an application to add new features. If the application is already not built for scalability, the new features are going to make the problems worse through code bloat. It's much more difficult, but I've learned (the hard way!) that you have to get the stuff under the hood right from the beginning.

4) Completely off-topic
Ben Langhinrichs | 5/9/2007 12:16:46 PM

Andre - Your comment counting is not working. Right now, it shows one comment and there are already three, but I have noticed that it is low on other threads as well. Just an FYI.

5) More on Ben’s points
Erik Brooks | 5/13/2007 11:18:09 PM

Ben raises a very valid point. There are far fewer developers coding for machine efficiency than there were years ago. I think the main reason is that many people simply don't have the knowledge of how things work at the assembly and hardware level.

Most "developers" these days can't tell you how a B-tree works, why a ReDim Preserve is so expensive, or what a semaphore is. Many don't even think to throw in an "Exit For" when they're iterating through a set looking for a match and find one.
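The "Exit For" point is language-independent: stop scanning as soon as the match is found instead of finishing the pass. A minimal Python sketch, where `break`/`return` plays the role of Exit For:

```python
def find_first(items, predicate):
    """Scan until the first match, then stop -- no full pass."""
    checked = 0
    for item in items:
        checked += 1
        if predicate(item):
            return item, checked   # early exit, like Exit For
    return None, checked

data = list(range(10_000))
match, checked = find_first(data, lambda x: x == 7)
print(match, checked)   # 7 8 -- 8 comparisons instead of 10,000
```

On average the early exit halves the work for uniformly distributed matches; for matches near the front, as here, the saving is nearly the whole loop.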

True Computer Scientists (any worth their salt, at least) understand these concepts. People with heavy C/C++ backgrounds (as there were 10-20 years ago) tend to understand as well. Few I.S./M.I.S. majors do.

I know many people who can say "more memory makes your computer faster", but they don't understand *why* -- the crucial relationship between page faults, seek times of milliseconds vs. nanoseconds, etc.

Modern programming languages have so many things that "help" developers -- automated garbage collection being a great example. These features further prevent developers from needing to understand these concepts and help hide things going on "under the hood".

While obsessing over every clock cycle and dirty cache might be overkill, some basic scalability considerations are definitely crucial to initial design if you want to stay *within* Moore's Law. If you don't know the difference between a linear-scaling algorithm and a geometric-scaling one, your app might be doomed once in production.
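The linear-versus-geometric distinction is easy to make concrete by counting basic operations as n grows. An illustrative Python sketch (names are my own, chosen for the example) comparing two ways to de-duplicate a list: a nested scan that does quadratic work, and a hash-set version that does linear work:

```python
def dedupe_quadratic(items):
    """O(n^2): each element is compared against everything kept so far."""
    ops, result = 0, []
    for x in items:
        for y in result:
            ops += 1
            if y == x:
                break
        else:
            result.append(x)
    return result, ops

def dedupe_linear(items):
    """O(n): one hash probe per element."""
    ops, seen, result = 0, set(), []
    for x in items:
        ops += 1
        if x not in seen:
            seen.add(x)
            result.append(x)
    return result, ops

items = list(range(2000))        # all distinct: worst case for the nested scan
_, quad_ops = dedupe_quadratic(items)
_, lin_ops = dedupe_linear(items)
print(quad_ops, lin_ops)         # 1999000 vs 2000
```

Both return identical results, but at 2,000 items the quadratic version already does a thousand times more comparisons; double the data and the ratio doubles again. That is exactly the failure mode of an app that "ran fine for 10 users."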

Another problem I see is that a fair number of "developers" can cobble together some basic application features, but can't - or don't - write load-testing scripts.

I purposefully perform initial development and testing on a very slow machine -- an old P-II 400 MHz -- to help emphasize CPU load.

Say I'm comparing two different algorithms for a particular procedure. If my test rig were identical to one of our smoking-fast production servers, I might see a run time of .15 seconds for one method versus .20 for the other. Either one seems "blazingly fast" to the eye.

On the old P-II the same code executes 10-15 times slower, and the difference becomes very noticeable. Simply looking at CPU usage graphs in Task Manager lets me know which way to go.
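Erik's slow-machine trick amplifies constant factors until they are visible. The same idea can be approximated in software by repeating each candidate enough times that small per-call differences dominate timer noise. A sketch using Python's standard `timeit` module (the two toy methods are stand-ins for whatever pair of algorithms is being compared):

```python
import timeit

def method_a(n=1000):
    return sum(i * i for i in range(n))       # generator expression

def method_b(n=1000):
    return sum([i * i for i in range(n)])     # builds a temporary list

# Repeat each candidate many times so tiny per-call differences add up --
# the software analogue of running once on a deliberately slow machine.
# min() of several repeats is the conventional noise-resistant estimate.
t_a = min(timeit.repeat(method_a, number=2000, repeat=3))
t_b = min(timeit.repeat(method_b, number=2000, repeat=3))
print(f"A: {t_a:.3f}s  B: {t_b:.3f}s")
```

A ratio between the two timings is more trustworthy than eyeballing sub-second wall-clock numbers, for the same reason the P-II made a .05-second gap "very noticeable."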

I do agree, Andre, that Notes applications are often more malleable than others. But I think Nathan and Ben are correct that in many business cases the developer's time is better spent tackling some other "big item" than reworking an internal function for more speed. Only when things become completely unbearable does such refactoring even begin to be considered. (e.g. the app ran fine for 10 users but now it's got 200 and it's chunking big-time, occasionally timing out, etc.) I actually find that most refactoring tends to happen as a side-effect of adding a new feature -- the developer is in there and says "hmmm... might as well clean this up a bit..."
