Part 2 of the Lotus Sametime deployment series at IBM discusses the business value of, and the planning that went into, migrating to and installing Lotus Sametime. Geography, server load, number of users, number of clusters needed, and contact lists were all carefully considered for this deployment.
< Previous | Contents | Next
One geographic location
One community and three vpuserinfo.nsf databases
Calculating a user's home server
IBM is the largest user of Sametime in the world, yet client-facing teams had not been able to use IBM as a reference for Sametime chat capabilities. The legacy environment was complex and unsupported, requiring frequent manual intervention to meet Service Level Agreements (SLAs). Additionally, a high degree of customization hindered traditional support methods.
The upgraded environment is more easily managed and debugged should problems occur. Having the software up to date and the environment robust allows IBM to move on with the pilot work needed to deliver desktop video to all IBMers, which will reduce costs over time. Current SLAs are being achieved with less manual intervention, and client-facing selling teams are now able to use the internal implementation as a reference deployment with customers.
There are many things to consider when deciding upon an architecture, and what works for one company may not be the best architecture for another. The “Sametime 7.5.1 Best Practices for Enterprise Scale Deployment” Redbook discusses many factors to consider when deciding upon the architecture for your company. Many companies divide their clusters or users by geography, and this was considered within IBM as well. However, each geography would then have to host its own set of connected servers, which users would access by means of different URLs. For example, if IBM divided its users geographically between the US and Asia Pacific, we would need two URLs, such as us.acme.com and ap.acme.com. This would require client updates on all Sametime clients worldwide to point to the correct server name, and it would also increase hardware cost.

Since the hardware is located in one place, we decided to let the different time zones work in our favor so that fewer servers were needed. With servers set up in the Asia Pacific geography, we would need enough servers there to handle a load of 150k simultaneous users, plus enough US servers to handle another 150k simultaneous users. Because the servers are located centrally, we instead need only enough servers to handle a load of 200k users, kept running around the clock, which leads to lower hardware costs. And because the Sametime servers had already been housed in a single location for many years, we already knew what kind of network impact and network latency users would see; network impact is one of the reasons that some companies choose to host by geography.
In IBM's deployment, all servers are in one location to take full advantage of every server around the clock. Instead of some servers being heavily loaded only at certain times of day because of the different user groups worldwide, our user population is split roughly evenly across all of the servers and clusters, so the servers maintain a more consistent state and usage around the clock. Typically, users are sorted onto different servers and/or clusters via a Home Server attribute set in their LDAP person record. IBM did not use this method to identify a user's home server, because the Sametime deployment uses a shared LDAP directory that the Sametime deployment team does not control, and adding and populating a field on all user records within IBM was not feasible.
IBM determined that three clusters of three servers each was the optimum configuration for approximately 225,000 concurrent users per day.
Typically, most administrators divide their users among clusters. All of the servers within a cluster share information about users, such as contact lists, privacy lists, and alerts, and all of that user information is contained within a single database called vpuserinfo.nsf. Having only one cluster in IBM would mean that the vpuserinfo.nsf contacts database would be too big: every action, such as searching, loading, and so on, would take too long. IBM came up with the idea of dividing the user information into three files, one for each cluster, so that each cluster has its own vpuserinfo.nsf. By using three clusters, we were able to reduce the size of a single vpuserinfo.nsf file to a third of its original size. To split the file, Sametime development created a tool that took the single vpuserinfo.nsf and distributed all of the user information across the needed number of clusters, based on the same algorithm that is applied to users' logins. This allowed us to take an approximately 15 GB vpuserinfo.nsf and break it down into three vpuserinfo.nsf files of roughly 5 GB each.
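The actual splitting tool is internal to Sametime development and its exact algorithm is not documented here. As a rough sketch, assuming a simple deterministic hash of the lowercased login name (class name, sample logins, and hash choice are illustrative), partitioning users into per-cluster buckets could look like this:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: assigns each login name to one of N partitions
// with a deterministic hash, so a given user's records always land in the
// same cluster's vpuserinfo.nsf file.
public class VpUserInfoSplitter {

    // Map a login name to a partition index in [0, partitions).
    // Math.floorMod keeps the result non-negative even for negative hash codes.
    static int partitionFor(String login, int partitions) {
        return Math.floorMod(login.toLowerCase().hashCode(), partitions);
    }

    public static void main(String[] args) {
        int clusters = 3; // matches IBM's three-cluster layout
        List<String> logins = Arrays.asList(
                "jdoe@acme.com", "asmith@acme.com", "bng@acme.com");

        // Group logins by partition; each bucket corresponds to one
        // cluster's vpuserinfo.nsf.
        Map<Integer, List<String>> buckets = new HashMap<>();
        for (String login : logins) {
            buckets.computeIfAbsent(partitionFor(login, clusters),
                    k -> new ArrayList<>()).add(login);
        }
        System.out.println(buckets);
    }
}
```

The key property is determinism: the same login always hashes to the same partition, which is what lets the splitting of vpuserinfo.nsf agree with the routing applied later at login time.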
When a user logs on, the user is directed to a cluster and to one of the three servers in that cluster. If a user's data is on Cluster 2, then the user must log on to Cluster 2, but needs to log on to only one server in the cluster. Through an algorithm computed over the user's name, Sametime always knows which server the user belongs to. Sametime developers came up with a way to set the home server at the server layer: an algorithm determines the cluster and server that users are sent to as they log in, and after the first login, users are always sent to the same cluster. This happens by means of a custom class file specified in the LDAP settings of the Sametime configuration database (stconfig.nsf). Custom class files within the stconfig.nsf LDAP settings have been used in Sametime for many years. For more information on custom class files, refer to the section entitled “Use Java classes to customize LDAP directory searches” in the Sametime information center.
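The real custom class plugs into a Sametime LDAP-search extension point that is not shown here. The sketch below, with hypothetical hostnames, cluster count, and servers-per-cluster values, illustrates only the deterministic name-to-home-server mapping that such a class would perform at login:

```java
// Illustrative sketch only: the production custom class implements a
// Sametime LDAP-search interface; the naming scheme and counts below
// are hypothetical, not IBM's actual values.
public class HomeServerResolver {
    static final int CLUSTERS = 3;
    static final int SERVERS_PER_CLUSTER = 3;

    // The same login always hashes to the same slot, so the user is
    // always sent to the cluster that holds that user's vpuserinfo.nsf data.
    static String homeServerFor(String login) {
        int slot = Math.floorMod(login.toLowerCase().hashCode(),
                CLUSTERS * SERVERS_PER_CLUSTER);
        int cluster = slot / SERVERS_PER_CLUSTER + 1; // 1-based cluster number
        int server = slot % SERVERS_PER_CLUSTER + 1;  // 1-based server number
        return "st-cluster" + cluster + "-srv" + server + ".acme.com";
    }

    public static void main(String[] args) {
        System.out.println(homeServerFor("jdoe@acme.com"));
    }
}
```

Because the mapping is computed rather than stored, no Home Server attribute needs to exist in the shared LDAP directory, which is what made this approach workable for IBM.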
When a user logs in to the cluster, that user's information has already been replicated across the servers in the cluster, so any of the cluster's servers can serve the user.