Successful development of a solution leveraging Lotus Expeditor micro broker depends on matching its features and scale to requirements in an appropriate fashion. This section outlines commonly used design considerations, patterns and topologies in micro broker scenarios to help underpin successful deployments with best practices.
Common goals and themes
At a high level, there are some common goals and themes that maximize the effectiveness of the micro broker in a solution.
Loose coupling between applications
The first is loose coupling between the application endpoints. This is where two communicating applications do not have tight dependencies on each other's interfaces but are connected via a messaging software component such as the micro broker.
Loose coupling is one of the key tenets of SOA, because business is ever changing: new business and technical requirements mean that services are added, removed, and moved around. Loose coupling has a number of benefits for the application, not least the flexibility to modify or change one or more applications in the system without disturbing the others. In a tightly coupled system, a change to an application has an inevitable knock-on effect, particularly if the change is to an alternative implementation or technology base. This is because the two applications are directly fused to each other using their native interfaces.
In this way, the integration between the two applications will inevitably have to change along with any proprietary interface or semantic differences. Within the enterprise this can have a number of consequences. Of primary concern is the risk of unnecessary disturbance to stable, running systems. If at all possible we want to minimize the impact to business-critical systems, since an outage will most likely have real business impact, for example, loss of sales, lack of reporting data or indeed direct financial losses. From a skills perspective, this also means that potentially each change to one system will require skills in another system, if only to manage the risk of disturbance. Furthermore, vendor lock-in becomes increasingly likely, since the cost of replacing one platform with another inhibits the ability to change vendors. The secondary issue that emerges from this tightly coupled state of affairs is that the cost of change becomes expensive, in some cases prohibitively so. This point-to-point integration also means that the applications have to provide qualities of service around reliable delivery and recovery rather than focusing on their business logic.
Exploiting standards like JMS and messaging with the micro broker promotes a loosely coupled view of the world since the interoperability of the applications is managed through standardized interfaces and semantics. Furthermore, the micro broker provides messaging qualities of service to deliver messages reliably between endpoints with transactional control to manage success and failure scenarios consistently. In other words, many qualities of service headaches are off-loaded from the application to the micro broker and the service is left to focus on the business logic that it wants to achieve.
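The decoupling described above can be pictured with a deliberately simplified sketch: an in-memory topic bus through which a publisher and a subscriber exchange data without ever referencing each other. The class and method names below are purely illustrative and are not part of the micro broker API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative sketch only: a minimal in-memory topic bus showing how a
// messaging intermediary decouples publishers from subscribers.
public class TinyTopicBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    // An application registers interest in a topic without knowing who publishes.
    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    // A publisher sends to a topic without knowing who (if anyone) is listening.
    public void publish(String topic, String payload) {
        for (Consumer<String> h : subscribers.getOrDefault(topic, List.of())) {
            h.accept(payload);
        }
    }
}
```

Either side can be replaced or re-implemented without the other noticing, which is exactly the property the micro broker provides (with, additionally, reliable delivery and transactional control that this toy example omits).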
This is particularly true in embedded systems, where sensor hardware and software often have highly proprietary low-level interfaces. The use of the micro broker and MQTT in this case provides cleaner, standardized interfaces between the components, but within the small disk and memory footprint required in such constrained environments.
Separation of concerns between the edge and the enterprise back-end
An extension of the loose coupling thought is the notion of using the micro broker to separate the concerns of the edge from the back-end infrastructure. The real power of the micro broker in architectural terms is that it can both provide the benefits of a loosely coupled framework for the constrained environment of the edge and serve as a controlled gateway between the edge and the enterprise. A common pattern for such separation is shown in the diagram below.
In the above example, a micro broker linking applications together using the JMS client at the edge is linked into the back-end using a single MQ link. The data flowing into the enterprise from the edge can also be prepared into a consumable format before transmission, and even filtered such that the enterprise sees only the data it really needs from the edge. This not only creates a cleaner architecture but also reduces infrastructure load (and therefore cost), both in terms of the traffic flowing over the network link and the processing power required in the data center. By creating a transformation class within the bridge, the micro broker provides a self-contained way of separating any filtering or transformation logic from the applications. This means that should the transformation logic need to change, the applications are left undisturbed; once again we have a loosely coupled architecture.
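As a rough illustration of the kind of logic such a transformation class might hold, the sketch below filters raw edge readings and reshapes them into a compact format before they cross the bridge. All names here are hypothetical; the real bridge transformation interface is described elsewhere in this book.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch only: a filter-and-transform step of the kind a bridge
// transformation class might apply before data leaves the edge.
public class EdgeBridgeTransform {
    // A raw edge reading: a sensor identifier and a value.
    public static final class Reading {
        final String sensorId;
        final double value;
        public Reading(String sensorId, double value) {
            this.sensorId = sensorId;
            this.value = value;
        }
    }

    // Forward only the readings the enterprise cares about (here, values over a
    // threshold), reformatted as compact CSV lines to reduce link traffic.
    public static List<String> prepareForEnterprise(List<Reading> raw, double threshold) {
        return raw.stream()
                  .filter(r -> r.value > threshold)        // drop uninteresting data at the edge
                  .map(r -> r.sensorId + "," + r.value)    // normalize to a consumable format
                  .collect(Collectors.toList());
    }
}
```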
A key facet of how the micro broker bridge connects the edge to the enterprise is its notion of transmission control and connection policies. We can consider the connectivity of the micro broker in two parts -- the network over which the edge clients connect and the network over which the bridge is connected to the back-end. In a typical edge scenario, the edge network is a closed loop, for example a local wireless connection in the branch of a bank, with the connection into the enterprise systems achieved via a dedicated link. Having a logical separation between the local network and the remote enterprise allows the edge to operate fairly autonomously and also to cater for scenarios where the link to the enterprise is broken and communication can no longer be achieved. When the bridge connection is closed (or broken), messages will be stored up inside the micro broker and moved over the bridge when the connection is available again. This bridged pattern means that so long as the local edge network is available, the edge applications and systems can function in isolation. This pattern is particularly advantageous where the enterprise link is unreliable or particularly expensive.
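The store-and-forward behaviour of the bridge can be sketched as follows. This is a simplified model for illustration only (the real micro broker persists messages and manages the connection itself); the class and method names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch only: messages accumulate locally while the enterprise
// link is down and drain over the bridge when the link returns.
public class StoreAndForwardBridge {
    private final Deque<String> pending = new ArrayDeque<>();
    private final List<String> delivered = new ArrayList<>(); // stands in for the remote broker
    private boolean linkUp = false;

    // Always accept the message locally, so the edge keeps working in isolation.
    public void send(String message) {
        pending.add(message);
        drainIfConnected();
    }

    // Called when the enterprise link comes up or goes down.
    public void setLinkUp(boolean up) {
        linkUp = up;
        drainIfConnected();
    }

    // Move stored messages over the bridge whenever the link is available.
    private void drainIfConnected() {
        while (linkUp && !pending.isEmpty()) {
            delivered.add(pending.poll());
        }
    }

    public List<String> delivered() { return delivered; }
    public int pendingCount() { return pending.size(); }
}
```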
Non-functional considerations for the constrained environment
It is also important to consider the typical non-functional requirements of the edge integration scenario when considering a micro broker deployment. Key to the success of the deployment is that the micro broker is used appropriately within the system. For example, the micro broker is not WebSphere Message Broker or WebSphere MQ, which are both enterprise messaging servers designed to run within a data-center environment for high-scale, high-throughput applications. Enterprise messaging servers typically bring with them the various systems management, administration and clustering capabilities required of those environments. In a micro broker edge scenario, we typically see much lower specification hardware and different throughput characteristics for the applications than we would for an enterprise messaging server. Hence, when the requirements for the solution start to demand enterprise messaging server style performance and scalability, this is a very good indicator that WebSphere Message Broker or WebSphere MQ might be a better alternative than the micro broker.
It should be noted that the WebSphere Premises Server provides data capture capabilities tailored for scenarios such as RFID. It provides business visibility of real world data to improve business process awareness and efficiency. It facilitates the integration and configuration of devices through an extensible interface framework, and reliably integrates sensor data into server-side business applications. WebSphere Premises Server supports multiple communication protocols, which means event messages from a wide range of sources (such as passive and active RFID, vehicle buses, healthcare devices, and many others) can be collected, analyzed, correlated and processed by a common set of services.
Deployment topology patterns
This section describes some of the primary deployment patterns for a micro broker solution. These exploit a range of the micro broker's capabilities from that of a simple stand-alone integration hub to more complex scenarios linking the micro broker with enterprise messaging servers and/or other micro brokers.
Stand-alone edge integration hub
This represents the simplest case scenario where the micro broker is acting purely as a reliable intermediary between applications.
An example of where this scenario might be used is when the micro broker is deployed as a small local "server" in an environment such as a retail branch, scientific laboratory or academic institution. The application does not require the micro broker to transmit data to an enterprise server over the bridge since its primary role is purely as a loosely coupled common ground between applications. The applications themselves may either already have their own mechanisms for transmission of data from the edge or simply have an edge-only scope (for example, the output may be reports printed on site).
Client connectivity considerations
The micro broker offers a number of possibilities in terms of the mechanics for connecting clients. As shown in earlier sections, client libraries are included with the micro broker providing both a JMS-compliant interface and an interface mapping onto the MQTT protocol. In addition, the buffered JMS client provides the capability for applications to continue to send messages even when the local network is down or fragile. Considering the following factors will help resolve the question of which particular client to use.
- Target environment for the client application. If the application is running on a constrained device platform, a good choice would be the MQTT client which is aimed at very small footprint applications. A good example of this would be a sensor device publishing readings.
- Functional requirements for the application. For example, if a point-to-point paradigm is the best fit for your application, this dictates the use of the JMS client. Generally speaking, JMS is the best choice of client interface for business applications in less constrained environments, both in terms of richness of semantics (queues and topics) and reusability with other messaging infrastructure.
- Network characteristics. If the network over which the client is connecting is fragile, then the JMS buffered client allows the application to continue to send messages during a network outage. An example scenario is a mobile point-of-sale application where the user may be wandering in and out of network coverage in a store or warehouse.
Further details of how to develop client applications themselves are covered both in the case studies and in section 3 of this book.
A variation on the stand-alone hub scenario is when the micro broker is used within the confines of a single machine as a means of achieving simple inter-process communication. This is particularly useful when heterogeneous programming languages are used for applications that need to interoperate (for example C and Java) or when two Java applications are running in separate Java Virtual Machines (JVMs).
A commonly used technique for achieving such integration is to use a socket-based protocol to normalize the communications between the environments over TCP/IP. The micro broker adds significant value since it provides messaging with the open MQTT protocol out-of-the-box. This means that the applications do not need to be concerned with defining their own integration protocols and semantics, they can simply use (or indeed write) an MQTT client appropriate to their environment. In addition to the supported Java clients in Lotus Expeditor there are a variety of third-party MQTT client implementations available on the internet.
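Part of what makes MQTT attractive as a ready-made protocol is the simplicity of its topic semantics: a subscription can use "+" to match a single topic level and "#" to match all remaining levels. The method below is an illustrative re-implementation of that matching rule for the sake of explanation; it is not micro broker source code.

```java
// Illustrative sketch only: MQTT-style topic filter matching.
// '+' matches exactly one topic level; '#' matches the remainder of the topic.
public class MqttTopicMatcher {
    public static boolean matches(String filter, String topic) {
        String[] f = filter.split("/", -1);
        String[] t = topic.split("/", -1);
        int i = 0;
        for (; i < f.length; i++) {
            if (f[i].equals("#")) {
                return true;                  // '#' absorbs all remaining levels
            }
            if (i >= t.length) {
                return false;                 // filter is longer than the topic
            }
            if (!f[i].equals("+") && !f[i].equals(t[i])) {
                return false;                 // literal level must match exactly
            }
        }
        return i == t.length;                 // all levels consumed on both sides
    }
}
```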
Edge integration hub bridged to another messaging server
This represents the "classic" micro broker edge integration topology. The micro broker acts as an integration hub for edge applications with the bridge connecting it to another messaging infrastructure which might be either another micro broker or most commonly an enterprise messaging server in the back-end. As per the diagram below, a single micro broker might be bridged to multiple endpoints.
A typical scenario for such a topology is a retail store integration solution consisting of:
- Retail hardware platform of point-of-sale terminals, printers and store controllers such as the IBM 4690 platform
- A small-scale branch server such as a desktop PC or a laptop
- In-store application software over and above that of the point-of-sale such as product label printing
- An enterprise messaging back-end system based on a product such as WebSphere Message Broker
As we have discussed earlier, use of the micro broker as the integration hub in the branch provides a flexible separation of concerns between the in-branch applications. The use of the micro broker JMS client and industry-standard payload formats such as those ratified by the Association for Retail Technology Standards (ARTS) provides a standards-based infrastructure for interoperability with third-party software. Data is sent down to stores from the back-end systems containing store-specific data such as pricing updates and sent back from the stores containing information such as the transaction log to support business reporting systems.
Reliable transfer of data from the edge
A further advantage of using the micro broker in this scenario is the provision of a reliable messaging protocol such as WebSphere MQ over the bridge for transferring data between the store and the back-office. This means that the integrity of the data flowing between the branch and the back-end can be assured, unlike with unreliable protocols such as File Transfer Protocol (FTP), a common technique in such scenarios. The disadvantage of unreliable transfer mechanisms compared with transactional, reliable message transfer is that in the event of a network failure or system crash, restoring the system to a defined state is more complex and generally requires more IT expertise in the branch to resolve; for example, files can be locked, or partially delivered, and so on. Furthermore, the longer the system takes to recover, the longer the back-office systems are starved of management information and the stores run the risk of having incorrect prices on shelves, directly impacting the financials of the enterprise.
Data transfer frequency
In addition to reliable protocols, the micro broker bridge can also manage the frequency with which data flows over the bridge. Many back-end systems, by virtue of their legacy, are typically batch-oriented and therefore geared towards handling larger quantities of data at specific times. For example, many retail systems expect a batch of transaction log data at the end of trading. Whilst the requirements may dictate that the application controls this batch, a bridge transmission control policy triggering at particular points of the day can be configured in this case, such that the time of transmission can be managed separately from the applications themselves. This approach means that if requirements emerge to move towards real-time systems, this transition can be managed without necessarily having to alter the applications. Alternatively, if the back-end systems are capable, a constant "trickle feed" of the data back to the enterprise is another common pattern, giving a more real-time experience in the enterprise systems. In this case an "always-on" policy can be configured for the bridge. Again, using the bridge transmission control policy enables the connection characteristics to be altered independently of the applications, for example should respite from the "always-on" policy be required in the back-end.
Choosing a protocol from the edge to the enterprise
When connecting the micro broker to another messaging server, as we have seen elsewhere in this book, there are a number of options available in terms of the type of connector used, and a number of factors will dictate the choice. In many cases, the incumbent enterprise messaging server will dictate the type of connector. For example, a third-party JMS provider will dictate the use of the JNDI bridge connector.
Another consideration is the richness of the target run-time environment of the micro broker. Most third-party JMS providers will dictate a full J2SE run-time environment for their client library, as does our own WebSphere MQ JMS client, which is a prerequisite of the MQ JMS connector for the bridge. If the micro broker is required to run in a restricted Java environment, this may preclude the use of such providers for direct connectivity into the enterprise and therefore mandate the use of the MQTT connector. In this case, an alternative option is to connect the micro broker using the MQTT bridge connector to the WebSphere Premises Server as an intermediate point between the edge and the back-end systems. On larger-scale installations, the installed site can be separated into two domains: that of the edge and that of the branch. The larger-scale Premises Server serves as the focal point for a given site (or indeed multiple sites), with the micro brokers providing a number of integration hubs at the very edge for connectivity to devices such as RFID tag readers and so on.
An example of this type of application might be a warehouse using RFID technology in loading bays, with the overall traffic through the warehouse tracked at a branch level. In this scenario we might reasonably push a micro broker edge integration hub down to each loading bay, filtering and collecting RFID tag reads and bridging into the Premises Server for aggregation into a branch-level view of the business. Note that the Premises Server may be physically deployed at a branch or centrally within the data center, depending on the size and scale of the branch or branches in question.
Aside from environmental factors, the functional requirements of both the edge and back-end from a messaging perspective are also a key factor. For example, if the enterprise systems are dependent on a system of queues, then the MQTT bridge connector will not suffice as it only supports the publish/subscribe paradigm. Similarly, if the enterprise systems leverage the richer message types of JMS, message headers and so on, then a JMS connection may be preferred. While existing enterprise systems could be modified to translate messages to and from the edge, the real value is achieved by integrating with the enterprise systems as they are.
Introducing the Lotus Expeditor integrator
In addition to the Lotus Expeditor client products containing the micro broker, the Lotus Expeditor integrator product is a specialized profile of Expeditor designed specifically for edge integration scenarios such as the retail branch integration scenario shown above. Unlike the Expeditor Client platforms, the integrator operates in "headless" mode, in that it does not provide the graphical user interface (GUI) components of the standard Lotus Expeditor client products. The integrator has the core Expeditor run-time and micro broker at its base and builds on top of them with a number of value-added services to accelerate the development of low-touch edge applications.
A core addition made by the integrator run-time is the provision of a number of input/output adapters for access to different resource types, from messaging destinations (queues and topics) to files on the local file system and FTP servers. These adapters serve both as a common means of reading and writing data and as triggers for event-based processing at the edge. A common scenario is that the presence of a file on a disk requires some processing to occur. In retail, this might be the transaction log file generated by a store controller. Equally, this might be a message from the enterprise containing a price update. The combination of these adapters with the micro broker and bridge facilitates the integration of existing file-oriented systems with the power and flexibility of a messaging infrastructure.
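The file-triggered pattern can be sketched as a polling adapter: each matching file that appears in a watched directory is handed to a handler (for example, a step that publishes its contents to the micro broker) and then removed so it is not processed twice. This is an illustrative sketch with hypothetical names; the integrator's real adapters are configured rather than hand-coded.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch only: a polling file input adapter. A file arriving in
// the watched directory triggers processing, as in the transaction log example.
public class FileTriggerAdapter {
    private final Path watchDir;
    private final String suffix;

    public FileTriggerAdapter(Path watchDir, String suffix) {
        this.watchDir = watchDir;
        this.suffix = suffix;
    }

    // One poll cycle: process and consume each matching file exactly once.
    public List<Path> pollOnce(Consumer<Path> handler) {
        List<Path> processed = new ArrayList<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(watchDir, "*" + suffix)) {
            for (Path file : files) {
                handler.accept(file);   // e.g. publish file contents to a topic
                Files.delete(file);     // consume so it is not processed again
                processed.add(file);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return processed;
    }
}
```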
Using the integrator, application logic to be triggered on such events is described using a simple flow language, to maximize reuse and expedite development of edge application functionality. The flow language specifies a series of steps to be executed in response to a particular trigger event. A number of flow activities that perform common edge integration tasks are pre-packaged with the integrator, and additional activities can be developed as plug-ins to the platform to meet the needs of specific applications.
A key requirement of edge scenarios is typically that as few IT skills as possible are needed at the edge site. This means that in the event of a failure, the edge application should recover to a known state wherever possible, to minimize the need for human intervention to restore the system. A good example of this is a small grocery store, where a solution demanding intensive IT skills will not be sustainable from a cost perspective. To this end, the integrator executes the application process flows within the context of a Java Transaction Architecture (JTA) transaction, such that if a step in the process fails, the work done up to that point by the flow can be rolled back to a last-known-good state. Furthermore, the integrator can be configured to emit Common Base Events (CBE) back to the enterprise, such that process failure (or indeed success) at the edge can be tracked and handled appropriately within the enterprise.
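The recover-to-a-known-state behaviour can be sketched as a flow of steps with compensations that are undone in reverse order when a later step fails. This is a simplified, hypothetical model in the spirit of a JTA-scoped flow, not the integrator's actual flow engine.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Illustrative sketch only: run steps as one unit of work, rolling back the
// completed steps in reverse order if any step fails.
public class TransactionalFlow {
    public interface Step {
        void execute() throws Exception;  // do the work
        void rollback();                  // undo it if a later step fails
    }

    // Returns true if every step committed, false if the flow was rolled back.
    public static boolean run(List<Step> steps) {
        Deque<Step> done = new ArrayDeque<>();
        for (Step step : steps) {
            try {
                step.execute();
                done.push(step);
            } catch (Exception e) {
                while (!done.isEmpty()) {
                    done.pop().rollback();  // restore the last-known-good state
                }
                return false;
            }
        }
        return true;
    }
}
```

Either way the system ends in a defined state (all steps done, or none), which is what allows an unattended edge site to recover without on-site IT intervention.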