
Best Practice Makes Perfect

A collaboration with Domino developers about how to do it and how to get it right in Domino

[Elevator cartoon]

Kevin Pettitt writes:

...all those lookups to the form configuration document could be expensive, but I have made liberal use of @GetDocField, so that the only @DbLookup used simply returns the UNID of the form configuration document, for use by the @GetDocField calls. I'm certainly open to alternative viewpoints, but the intent here was a compromise between the potential caching problems associated with profile documents and extremely expensive @DbLookups.

I've run across this attitude before, of wanting to avoid profile documents because of perceived caching issues, and it puzzles me. What caching issues? Profile documents are cached, okay? If someone changes the contents of one, other users won't see the change until they get out of the application and back in. For a Domino web application, it may take a while for all the server processes to catch on that the profile has changed.

Like the elevator in the cartoon, this doesn't make them useless -- it just means you have to choose when to apply them. The application Kevin is talking about requires storing a list of fields that would be automatically filled in for users based on their previous input. (The field values themselves would be stored in a personal profile document, for which caching is not an issue since only one user uses it. The question is where we store the list of field names, which must be available to all users).

In this case, the data doesn't change often, and if someone has a slightly out-of-date copy, it's no big deal. A more perfect application for profile documents is difficult to imagine; they're easy to use, and they perform better than the alternatives.
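For reference, basic profile access from LotusScript looks something like this (a minimal sketch; the profile name "AppSettings" and the item name "SupportEmail" are placeholders):

Sub Initialize
    ' Minimal sketch of profile-document access. "AppSettings" and
    ' "SupportEmail" are placeholder names.
    Dim session As New NotesSession
    Dim db As NotesDatabase
    Dim profile As NotesDocument
    Set db = session.CurrentDatabase

    ' GetProfileDocument returns the cached profile document,
    ' creating a blank one if none exists yet.
    Set profile = db.GetProfileDocument("AppSettings")
    Print "Support address: " & profile.GetItemValue("SupportEmail")(0)

    ' Writes save like any other document, but other sessions may
    ' keep seeing their cached copy for a while.
    Call profile.ReplaceItemValue("SupportEmail", "support@example.com")
    Call profile.Save(True, False)
End Sub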

I think people try to use profile documents inappropriately, with data that changes often and is changed by many users -- to store the latest sequential number assigned to a document, say -- and it doesn't work, and they get gun-shy about it. I think it's always better to think about why something didn't work, and figure out how it does work, so that you can avoid only the non-working scenarios, rather than, as they say, throwing the baby out with the bathwater.*



* No actual babies were harmed in the composition of this blog entry.

Andre Guirard | 10 August 2007 10:12:01 AM ET | Plymouth, MN, USA | Comments (13)


 Comments

1) Don’t get me wrong, I love to use profile docs, but...
Kevin Pettitt | 8/10/2007 1:07:53 PM

...there's more going on in my SuperNTF form config document than this. Actually, I haven't added this feature yet, so it's all still somewhat academic.

Among the things the form configuration doc stores is information about which features to enable for a particular form (i.e., which hidden subforms to load), and soon it will also store all the validation formula logic for the form. The issue of concern is not so much the frequency of change (not often) as the speed with which changes take effect once made. I'm trying to avoid situations where some important change is made and, if it doesn't take effect immediately, causes customer "panic".

Note that I am already planning to implement profile docs for keyword lookups in SuperNTF, but even there I will offer developers the option of coding a lookup formula using @GetProfileField or @DbLookup. I even build the formula syntax for them in a "code helper" tab on the keyword form. I'll just need to add a formula for the profile scenario. Developers could then decide to use @DbLookup for instances of either high update frequency or low tolerance for lag time.
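To make the two options concrete, here's a rough sketch of the kind of formulas such a code helper might assemble (the view, profile, and key names are all placeholders):

Sub BuildLookupFormulas
    ' Hypothetical sketch of a "code helper" assembling both lookup
    ' variants for a keyword set. "KeywordsView" and "KeywordProfile"
    ' are placeholder names.
    Dim keyName As String
    keyName = "Status"

    ' Always current, but pays for a view lookup on every evaluation:
    Dim viewFormula As String
    viewFormula = |@DbLookup(""; ""; "KeywordsView"; "| & keyName & |"; 2)|

    ' Cached per session: cheap, but slower to pick up changes:
    Dim profileFormula As String
    profileFormula = |@GetProfileField("KeywordProfile"; "| & keyName & |")|

    Print viewFormula
    Print profileFormula
End Sub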

So Andre, while your points are completely valid, there are mitigating concerns in this instance (which may yet prove unfounded). But since I believe the performance cost of using @GetDocField instead of @GetProfileField is fairly small for the number of fields in question, it seemed a suitable compromise. I can't remember where I recently read about @GetDocField and performance, but another blogger did some testing, and one notable result was how speedy @GetDocField was. My memory could be faulty, though.

The other question lurking in the back of my mind relates to potential profile document corruption. I seem to recall this being a big concern back in the 4.6/5 days, but I have not personally experienced a problem since then. Is that sort of thing a non-issue these days (if it ever was)?

2) caching
John Vaughan | 8/10/2007 1:20:12 PM

The caching thing is a real problem during development, especially with $$Return fields -- they seem to get cached for particular document instances. It can really drive one batty.

Is there a way to flush the cache on a server? I've tried restarting the HTTP task, to no avail.

3) re: Don’t get me wrong, I love to use profile docs, but...
Andre Guirard | 8/10/2007 1:48:40 PM

BTW, Kevin, I'm not trying to pick on you; your app is just a good excuse to talk about things I wanted to talk about here anyway.

I'm trying to avoid situations where some important change is made and, if it doesn't take effect immediately, causes customer "panic".

Then why not modify the design when the configuration changes are saved? Store the customizable logic in a script library; it's very easy to update the contents using the DXL importer reading information from a DOM parser. Of course, it requires Designer access, but controlling who has access to modify the behavior of the app is what Designer access is for. (Or you could write a server agent with that access to run on demand, and invoke it when configuration changes are saved.)
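For the curious, a minimal sketch of that DXL round trip follows. The library name "ConfigLogic" and the %CONFIG% marker are hypothetical, error handling is omitted, and the import option constant should be verified against your release:

Sub UpdateConfigLibrary(db As NotesDatabase, newLogic As String)
    ' Hedged sketch: splice configuration-supplied logic into a script
    ' library via a DXL export/import round trip. Requires Designer
    ' access to the database.
    Dim session As New NotesSession
    Dim nc As NotesNoteCollection
    Dim libNote As NotesDocument
    Dim noteID As String

    ' Locate the script library design note by its $TITLE item.
    Set nc = db.CreateNoteCollection(False)
    nc.SelectScriptLibraries = True
    Call nc.BuildCollection
    noteID = nc.GetFirstNoteID
    Do While noteID <> ""
        Set libNote = db.GetDocumentByID(noteID)
        If libNote.GetItemValue("$TITLE")(0) = "ConfigLogic" Then Exit Do
        noteID = nc.GetNextNoteID(noteID)
    Loop

    ' Export the library as DXL into a stream.
    Dim stream As NotesStream
    Dim exporter As NotesDXLExporter
    Dim dxl As String
    Set stream = session.CreateStream
    Set exporter = session.CreateDXLExporter(libNote, stream)
    Call exporter.Process
    stream.Position = 0
    dxl = stream.ReadText

    ' Replace a marker token in the library source with the new logic.
    dxl = Replace(dxl, "%CONFIG%", newLogic)

    ' Re-import, replacing the existing design element.
    Dim importer As NotesDXLImporter
    Set importer = session.CreateDXLImporter(dxl, db)
    importer.DesignImportOption = DXLIMPORTOPTION_REPLACE_ELSE_CREATE
    Call importer.Process
End Sub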

BTW, what measures do you plan to take to ensure that any changes immediately replicate to every server and local replica -- if changes must take effect for everyone immediately?

...I will offer developers the option of coding a lookup formula using @GetProfileField or @DbLookup.

Here, I think the measure might be missing the point. Coding the lookup formula is fairly simple and mechanical. Making an appropriate decision about where to load the information from, and whether to use caching, requires thought and experience. The latter is the part that people could use the most help with.

The other question lurking in the back of my mind relates to potential profile document corruption. I seem to recall this being a big concern back in the 4.6/5 days, but I have not personally experienced a problem since then. Is that sort of thing a non-issue these days (if it ever was)?

I haven't heard of any problems -- anyone else?

It is, I believe, still possible for duplicate profile documents to get created when an incomplete replica of an application is used. I.e., the replica on server A has a profile document, a replica is created on B, and either the profile document is not selected by a selective replication formula, or the replica on B is used before the profile has replicated to it. When the application uses @GetProfileField or related functions, a blank profile is created on B, and that eventually replicates everywhere, causing fear and suffering.

In later versions, you can test for the presence of a profile document (in LotusScript) without creating one if it doesn't exist. You could check on opening a database whether it contains the expected profile and if not, bail.
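For example (a sketch assuming Notes/Domino 6 or later, where GetProfileDocCollection is available; the profile name is a placeholder):

Function ProfileExists(db As NotesDatabase, profileName As String) As Boolean
    ' Unlike GetProfileDocument, GetProfileDocCollection does not
    ' create a blank profile when none exists, so it is safe as a
    ' pure existence test.
    Dim col As NotesDocumentCollection
    Set col = db.GetProfileDocCollection(profileName)
    ProfileExists = (col.Count > 0)
End Function

' In the database script's Postopen event, for example:
' If Not ProfileExists(Source.Database, "AppSettings") Then
'     Messagebox "Configuration profile missing -- contact the administrator."
' End If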

4) it’s hard to do...
Charles Robinson | 8/10/2007 2:16:57 PM

"I think it's always better to think about why something didn't work, and figure out how it does work, so that you can avoid only the non-working scenarios..."

What are the "non-working scenarios"? They aren't documented. My personal experience with profile documents, back when I started Notes development in 1999 (R5), was that they were flaky and unreliable. I understand how they're supposed to work, but since I couldn't make them behave consistently when I started with Notes, I avoid them even now. They could work flawlessly now, but I'll probably never know. @DbLookup has *always* worked, and always will, so why change to something that has a history of being flaky?

5) Remembering when I was a novice Notes Developer...
Kevin Pettitt | 8/10/2007 4:00:55 PM

"Then why not modify the design when the configuration changes are saved? Store the customizable logic in a script library; it's very easy to update the contents using the DXL importer reading information from a DOM parser."

Easy for whom? I understand the steps involved in making this work would probably not be hard for an intermediate or senior developer to follow, but I suspect only a fairly low percentage of even these would find building it from scratch easy (yes, this includes me). Given that SuperNTF is designed first and foremost to allow even novices to follow along, I would only try to include something like this if it could be "black boxed" sufficiently that they would never have to touch (i.e., break) it.

"BTW, what measures to you plan to insure that any changes immediately replicate to every server and local replica -- if changes must take effect for everyone immediately?"

Immediate may be too strong a word, but basically I want to avoid a situation where a database administrator/power user makes a change, then tries to use it, and can't figure out why it didn't take. If normal replication lag is part of the user expectation in a multi-server scenario, then I wouldn't expect a call on that. If it were an issue, though, the idea would be to simply trigger a replication manually... although come to think of it, I suppose I could offer to replicate immediately in the QuerySave so that config changes would push out faster.
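Something along those lines might look like this -- a sketch only; the hub server name is a placeholder, and PostSave (which fires after the document is actually written) is arguably a safer event than QuerySave for this:

Sub Postsave(Source As Notesuidocument)
    ' Sketch: after a configuration document is saved, push the change
    ' to the hub right away instead of waiting for the schedule.
    ' "Hub01/Acme" is a placeholder name; Replicate blocks until the
    ' replication finishes.
    Dim session As New NotesSession
    Dim db As NotesDatabase
    Set db = session.CurrentDatabase
    If Not db.Replicate("Hub01/Acme") Then
        Messagebox "Replication did not complete; the change will go out with the next scheduled replication."
    End If
End Sub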

"Making an appropriate decision about where to load the information from, and whether to use caching, requires thought and experience. The latter is the part that people could use the most help with."

Excellent point, and all the more reason for me to get around to adding more inline/pop up help on such forms.

As Charles points out, some of these sorts of design decisions are based simply on the principle of "if the more reliable approach doesn't have a significant downside, then go with it". In some circumstances, though, the "more reliable" approach can be used to excess and create noticeable performance issues -- in a very busy database with thousands of edits daily, maybe they do become noticeable. For now, I have aimed to get the best performance I can without overcomplicating the design to the point where the novice gets lost easily. My aim is certainly not perfect, though, so you are more than welcome to pick on me ;-).

6) Caching data
Karl-Henry Martinsson | 8/11/2007 11:25:42 PM

Just this past week, I implemented caching in one of my applications. The application is fairly large and complex; it has too many views, too many documents, and is modified too frequently, so the view indexes are constantly being rebuilt. I have been reducing the number of views, and things are working better.

Certain lookups are done very frequently, either against views or using db.FTSearch(), but the data is very static. Some lookups are against the database itself (getting the user's limit for payments), some are against the NAB (manager name, office location, and an internal ID used throughout our systems).

What I did was to build code (a class) to do the following:

When the database is first opened, read the data from the different places and create a local XML file with the data.

When I am accessing one of the functions (thankfully I have been breaking my code out into many functions), it first checks whether the data requested is for the current user. If so, it checks the last-updated time (also stored in the XML file); if it is older than 6 hours, it reloads the data from the server. Then the value is returned.

If it is not for the current user (which happens sometimes when a manager or a scheduled agent performs actions), the normal lookup is done. This seems to speed things up a lot.
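Simplified, the freshness check looks something like this (a sketch using a plain two-line text file instead of XML; the cache path and the RefreshLimitFromServer helper are stand-ins for the real lookup):

Function GetCachedLimit() As String
    ' Simplified sketch of the freshness test described above, using a
    ' two-line text file (timestamp, then value) instead of XML.
    Const MAX_AGE_HOURS = 6
    Dim cachePath As String
    Dim fileNum As Integer
    Dim savedAt As String
    Dim cachedValue As String

    cachePath = Environ("TEMP") & "\payment_limit_cache.txt"

    If Dir$(cachePath) <> "" Then
        fileNum = Freefile
        Open cachePath For Input As #fileNum
        Line Input #fileNum, savedAt
        Line Input #fileNum, cachedValue
        Close #fileNum
        ' Serve the cached value while it is still fresh enough.
        If Abs(Now - Cdat(savedAt)) * 24 < MAX_AGE_HOURS Then
            GetCachedLimit = cachedValue
            Exit Function
        End If
    End If

    ' Stale or missing: do the real lookup, then rewrite the cache file.
    cachedValue = RefreshLimitFromServer()
    fileNum = Freefile
    Open cachePath For Output As #fileNum
    Print #fileNum, Format$(Now, "General Date")
    Print #fileNum, cachedValue
    Close #fileNum
    GetCachedLimit = cachedValue
End Function

Function RefreshLimitFromServer() As String
    ' Stub standing in for the real view or NAB lookup.
    RefreshLimitFromServer = "10000"
End Function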

My question is: is it faster to read the data from a profile document or from a file on disk, especially in a database that is somewhat sluggish already?

7) Designer access
Ben Poole | 8/12/2007 4:57:56 PM

Without wishing to digress too much, one thing to pick up on is application access.

My experience of most corporate set-ups is that it's rare to have production applications with anything higher than Editor access enabled for anyone other than administrators and server groups. For example, as a developer I never have Designer access to anything, so whilst DXL and the like are cool, in reality many of the nifty DXL tricks we could use have to languish in background agents or some such (which invariably DO have Designer access).

Most of the stuff I do nowadays is web-based, and profile documents are rarely much use when it comes to storing application details, as you simply cannot rely on the cache updating. I've even seen applications with profile documents that "revert" to old data. Like Charles said, if we had actual data on what profile documents should and shouldn't be used for, that would help us make the appropriate design decisions.

8) Profile docs on the web
Erik Brooks | 8/13/2007 11:45:35 PM

Nathan Freeman and I were working on a web-only project back in 1999 (in my newbie days of Notes dev), and we needed to track user-specific information as users migrated from page to page -- i.e., a "session cache" of sorts.

He recommended using profile docs, and they seemed to be exactly what we were after. Heck, they even let you specify a username parameter! They sounded perfect.

But then you would see this behavior:

- Set profile field to "A"

- Reload web page -- "A" displays

- Set profile field to "B"

- Reload web page -- "A" displays

- Reload web page -- "B" displays

- Reload web page -- "B" displays

- Reload web page -- "A" displays

- Reload web page -- "B" displays

Not only was Domino caching the data, but it was *inconsistently* caching it. A "dbcache flush" at the server console would fix the problem, but that obviously wouldn't work for often-changed data.

Nowadays, your web options for tracking frequently updated data are:

(1) writing fields to docs

(2) cookies (cool, but don't track anything REALLY important there)

(3) crazy CGI variables in the URL (a bear to keep persistent in Domino with any sort of server-generated HTML).

Profile docs still have their place, though. Their heavy caching by the web server makes them great for reading rarely-changing configuration information such as external filepaths, and the terse syntax of @GetProfileField() definitely beats @DbLookup.

9) Losing profile documents
Theo Heselmans | 8/14/2007 4:42:57 AM

I use profile documents a lot, and like them.

Almost every db I create uses a 'dbprofile' to store common, quickly accessible data.

André, because you asked:

"I haven't heard of any problems -- anyone else?":

I discovered a (repeatable) way of losing the dbprofile; it is quite annoying and was happening regularly.

It always involved dbs where, upon opening, a script would check some fields in the dbprofile.

- Replicate a db with a dbprofile doc

- Interrupt the replication, so the db is not yet fully initialized (some users do this)

- Open the replica: the db postopen script sees there is no dbprofile yet, and creates a blank one.

- If the user then continues with the replication, the 'main' dbprofile is overwritten, losing all its data.

I ended up creating a backup and restore procedure for the profile docs.
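A sketch of what such a procedure might look like (the "ProfileBackup" form name and the "BackupOf" item are placeholders, and a real version would clean up the bookkeeping items on restore):

Sub BackupProfile(db As NotesDatabase, profileName As String)
    ' Mirror the profile's items into an ordinary document, which
    ' replicates (and can be restored) like any other document.
    Dim profile As NotesDocument
    Dim backup As NotesDocument
    Set profile = db.GetProfileDocument(profileName)
    Set backup = db.CreateDocument
    Call profile.CopyAllItems(backup, True)
    Call backup.ReplaceItemValue("Form", "ProfileBackup")
    Call backup.ReplaceItemValue("BackupOf", profileName)
    Call backup.Save(True, False)
End Sub

Sub RestoreProfile(db As NotesDatabase, backup As NotesDocument)
    ' Copy the backed-up items over the (possibly blank) profile.
    ' Note: this also copies the "Form" and "BackupOf" bookkeeping
    ' items, which a real version would remove afterwards.
    Dim profile As NotesDocument
    Set profile = db.GetProfileDocument(backup.GetItemValue("BackupOf")(0))
    Call backup.CopyAllItems(profile, True)
    Call profile.Save(True, False)
End Sub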

10) I think they’re great
Martijn de Jong | 8/14/2007 5:55:25 AM

@2 It's hidden in the comments above already, but the command to flush the cache on the server is "dbcache flush".

Another handy and slightly related server command gets rid of the name lookup cache (you've changed the password of one of your test users because you forgot the old one, but the new password doesn't get picked up). That command is: "show nlcache reset"

I think profile documents are great and I use them a lot, but as said before, you need to know when to use them. Good examples of where to use them:

- Database profiles with email addresses for feedback, file paths to other servers, URLs to websites, and more of this rarely changing but important-to-keep-flexible information. Most larger organisations won't even accept a design where these kinds of values are hardcoded.

- Language documents. For multilingual applications I use @GetProfileField(profilename; fieldname; uniqueKey) a lot, where I use the uniqueKey for the short code of the language. It works very well, and it's easy to add another language to your database. Performance is still good, whereas using @DbLookups for this would kill the performance. (A LotusScript equivalent is sketched below.)
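In LotusScript, the same pattern might look like this (a sketch; "LanguageProfile" is a placeholder profile name):

Function Translate(db As NotesDatabase, lang As String, key As String) As String
    ' The second argument of GetProfileDocument is the unique key --
    ' here the language short code, e.g. "en" or "nl". One keyed
    ' profile document per language holds all of that language's strings.
    Dim profile As NotesDocument
    Set profile = db.GetProfileDocument("LanguageProfile", lang)
    Translate = profile.GetItemValue(key)(0)
End Function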

11) Profile docs in Domino Web development courses
Fabian Robok | 8/14/2007 8:28:13 AM

The official Domino 6/6.5 (and, I think, 7 as well) Web development course used to have an example of a web shop where the user's basket would be stored in profile docs. Apart from the overall quality of the sample database (sub-zero), this sometimes worked perfectly and sometimes not at all.

And I'm talking about an environment where the machines were restored from ghost images most of the time, so conditions should have been almost 100% identical.

In Notes client apps, I started using profile documents rather heavily from Notes 6 on (I didn't even try in R5, because of the bad reputation they had in 4.5 and 4.6). In web apps, I'd rather stay away from them. Calling WebQuerySave (WQS) agents that send "dbcache flush" to the console doesn't look like a great idea to me.

12) Added Theo and Martijn’s ideas to SuperNTF Request List
Kevin Pettitt | 8/15/2007 6:37:44 PM

Theo's profile doc backup/restore idea and Martijn's use of profile docs for multilingual databases are now enshrined here: { Link }

Feel free to throw in any additional info, links, etc. by responding to those request docs.

Thanks for the great ideas!
