General best practices
Follow these best practices when creating alerts for dashboard applications.
New for WebSphere Dashboard Framework 6.1.5: Run Alerts models in WebSphere Application Server during development
You can test your alerting application more conveniently by running alert models in WebSphere Application Server from the Designer user interface. Add the following value to the override.properties file located in the deployed_project_name/WEB-INF/config folder:
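The exact property value is not reproduced in this excerpt. As a purely hypothetical sketch of what such an override.properties entry looks like (the property key below is a placeholder, not the real key; consult the product documentation for the actual name):

```properties
# Hypothetical placeholder only -- the actual property key for enabling
# alert models in WebSphere Application Server is not shown in this excerpt.
bowstreet.alerts.runInAppServer=true
```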
This property allows you to create alert definitions in the Designer user interface by running ManageAlerts.model, located in the WEB-INF/models/solutions/alerting/admin folder. You can also view My Alerts by running MyAlertsPortlet.model, located in the WEB-INF/models/solutions/alerting folder.
Deploy your Alerts project to both the WebSphere Application Server and the Portal Server during development
Generally, a dashboard application that contains alerts is designed to run on WebSphere Portal. However, also deploying your project to WebSphere Application Server lets you connect Designer to the data source defined in WebSphere Application Server. This is helpful when you use the Alert Data builder to incorporate the result of an SQL call into the alerts engine.
Store alerts XML files in a project during development
By default, XML files generated for alerts become part of the deployed WAR file in the portal server's installedApps folder. Storing files in the project during development prevents them from being overwritten when WAR files are redeployed and allows you to edit the files in Designer during development and testing.
Here are the properties and default paths for the various types of alerting data. The paths shown here match the default paths created in the project by the Alerts Module feature set. Each path must be a complete path from the root of the drive where you installed IBM WebSphere Portlet Factory Designer. For the examples below, assume that the Eclipse workspace folder is c:\MyProjects\workspace and the project is named "MyFirstProject." The slashes in these paths are forward slashes (/), not backslashes (\), so that the values can be interpreted consistently across operating systems.
# Alert Definitions
# Generic Alerts
# Notifier Definitions
# User Alerts
# User Contexts
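The property keys themselves are not reproduced in this excerpt. As an illustrative sketch only (the key names below are hypothetical placeholders, not the actual property names), entries pointing the alert data types into the example project might resemble:

```properties
# Illustrative sketch only: the property keys are hypothetical placeholders.
# Paths use forward slashes and are complete from the drive root, matching
# the example workspace and project named above.
alerts.persistence.xml.alertDefinitionsFolder=c:/MyProjects/workspace/MyFirstProject/WebContent/WEB-INF/alerts/alertDefs
alerts.persistence.xml.userAlertsFolder=c:/MyProjects/workspace/MyFirstProject/WebContent/WEB-INF/alerts/userAlerts
```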
Enable caching to improve performance
Enable caching of alerts to improve performance. Edit the override.properties file located in the deployed_project_name/WEB-INF/config folder to include the following lines, depending on your type of persistence manager:
Database persistence manager
XML-file persistence manager
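The specific property lines are not reproduced in this excerpt. As a hypothetical sketch only (the key names below are placeholders; consult the product documentation for the actual caching properties for each persistence manager), enabling caching might take a form such as:

```properties
# Hypothetical placeholders -- not the actual property names.
alerts.cache.enabled=true
# Lifetime, in seconds, that generated alerts remain valid in the cache.
alerts.cache.lifetime=3600
```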
Use alerts logging to test and debug alert definitions
You will need to review the WEB-INF/logs/alertsEngine.txt log while testing and debugging alert definitions. Leave the default logging enabled for the alerts engine so that it generates a log on the server that contains runtime warning and error messages. You can change the name, location, and logging level by editing the set of alerts engine specific property values in the WEB-INF/config/log4j.properties file.
Enable tracing as a debugging tool
To help debug certain complex problems, enable tracing in the log by setting the following property in the WEB-INF/config/log4j.properties file.
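The property itself is not shown in this excerpt. The standard log4j 1.x way to raise a logger's level in log4j.properties is shown below as a sketch; the logger category name is an assumption, not confirmed by this document:

```properties
# Standard log4j 1.x syntax; the category name below is an assumed
# placeholder for the alerts engine logger.
log4j.logger.com.bowstreet.alerts=DEBUG
```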
Enable statistics logging to help pinpoint performance problems
To track server statistics, set the logging level to INFO in the WEB-INF/config/log4j.properties file. Statistics, such as those shown below, are stored in the WEB-INF/logs/serverStats.txt file.
Alerts Engine: 10 Latency: 568
Alerts Engine/Evaluate: 8 Latency: 707
Alerts Engine/Evaluate/JavaBasedAlert: 1 Latency: 62
Alerts Engine/Evaluate/IteratedAlert: 2 Latency: 2765
Alerts Engine/Evaluate/ModelBasedAlert: 1 Latency: 16
Alerts Engine/Evaluate/JEPBasedAlert: 2
Alerts Engine/Evaluate/EBITAAlert: 2 Latency: 23
Alerts Engine/Xml-File Persistence Manager: 2 Latency: 15
Alerts Engine/Xml-File Persistence Manager/Load User Contexts: 1
Alerts Engine/Xml-File Persistence Manager/Load Alert Defs: 1 Latency: 31
Alerts Engine/Cache Hits: 2
Alerts Engine/Cache Misses: 8
This output is useful when debugging your alert implementations because it tells you which alerts were evaluated and how long the evaluations took. For example, the line Alerts Engine/Evaluate/EBITAAlert: 2 Latency: 23 tells you that the evaluator for EBITAAlert was invoked two times and that these invocations took 23 milliseconds in total.
Similarly, the lines
Alerts Engine/Cache Hits: 2
Alerts Engine/Cache Misses: 8
tell you that the alerts engine satisfied two alert requests from the cache, avoiding two evaluator invocations, but had to invoke alert evaluators the other eight times to satisfy alert requests.
Designing Alerts for Performance
The Dashboard Framework alerts engine is a data-driven process. The engine uses one or more alert definitions and one or more data provider models to determine the set of alerts active at any point in time for any user. There are many factors that influence the overall performance of the alerts engine, but, in general, the two biggest factors are the amount of data required to generate alerts and how long those alerts can be cached by the engine.
Each alert definition uses a provider model to retrieve data from a back end. This data is typically in the form of a row set (rows of columns, usually from a database query). Each time the alerts engine needs to determine the set of active alerts for a definition, it may need to invoke the provider model and retrieve the available alerting data. If the engine invokes the provider model, it iterates over the returned rows, evaluating the alert for each row's unique set of columns. At the end of this evaluation process the engine has generated zero or more alerts from the row set. The majority of the time required to process an alert definition therefore depends on the performance of the back end and, perhaps more importantly, on the number of rows returned by the data provider.
Alerts can be cached by the alerts engine. When caching is enabled, the engine first looks in the cache to determine whether it has previously generated alerts that are still valid. If valid alerts are found in the cache, they are returned by the engine. On the other hand, if caching is disabled or cached alerts have expired, the engine is forced to invoke the data provider model to retrieve the current set of alerting data and re-evaluate each row. When the engine has completed the re-evaluation process, it caches the alerts if caching is enabled. The engine therefore does far less work when it can return cached alerts.
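The cache-then-evaluate flow described above can be sketched in outline. This is illustrative pseudologic in Python, not the actual engine implementation; the class, method names, and time-to-live mechanism are assumptions for the sake of the sketch:

```python
import time

class AlertsEngineSketch:
    """Illustrative sketch of the cache-then-evaluate flow described above."""

    def __init__(self, provider, caching_enabled=True, cache_lifetime_secs=3600):
        self.provider = provider              # callable returning rows of alerting data
        self.caching_enabled = caching_enabled
        self.cache_lifetime = cache_lifetime_secs
        self._cache = {}                      # definition id -> (timestamp, alerts)
        self.cache_hits = 0
        self.cache_misses = 0

    def get_alerts(self, definition_id, evaluate_row):
        # 1. If caching is enabled, return cached alerts that are still valid.
        if self.caching_enabled and definition_id in self._cache:
            cached_at, alerts = self._cache[definition_id]
            if time.time() - cached_at < self.cache_lifetime:
                self.cache_hits += 1
                return alerts
        # 2. Otherwise invoke the data provider and evaluate every returned row.
        self.cache_misses += 1
        rows = self.provider(definition_id)
        alerts = [a for a in (evaluate_row(r) for r in rows) if a is not None]
        # 3. Cache the freshly generated alerts for subsequent requests.
        if self.caching_enabled:
            self._cache[definition_id] = (time.time(), alerts)
        return alerts
```

A second request for the same definition within the cache lifetime is served entirely from the cache, skipping both the provider invocation and the per-row evaluation, which is why cached alerts are so much cheaper.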
Alert Design Best Practices
Design alert definitions that use small, bounded sets of aggregate data rather than "raw" low-level data of unbounded size. Since alerting performance is so strongly influenced by the amount of data required for evaluation, you can increase alerting performance by relying only on aggregated data. For example, when designing an alert that works with sales data, the best-performing option would likely be a data provider that returns summary sales data aggregated by region or office. The number of sales regions and offices is small and bounded, so no matter how many sales records are created, the amount of data retrieved by the alerts engine remains small and constant.
Enable caching in the alerts engine and design alert definitions so that generated alerts are cacheable.
Avoid creating alert definitions that use data with a short lifetime. For example, suppose you create an alert definition in which the generated alerts expire one minute after they are created and cached. This short lifetime could force the engine to invoke the data provider model every minute and retrieve all of the required data. A better approach is to choose data whose lifetime spans one or more days.