WebSphere Enterprise Design Guidelines for Component Broker Applications

Introduction

Designing systems and applications for WebSphere Application Server Enterprise Edition (WAS EE) Component Broker (CB) is a bit different from what you're probably used to. One of the first things you may notice is that the word "application" takes on a whole new meaning. This new meaning reflects the shift from monolithic standalone applications to systems of reusable collaborating objects and components, distributed over a heterogeneous network. This design guide is intended to help you understand some of the special considerations that must be taken into account when designing systems and applications that are to be implemented using the WebSphere Application Server Enterprise Edition and its frameworks. You will also find a wealth of information in the Component Broker (CB) documentation. Where appropriate, this guide gives cross-references to relevant information appearing in the WAS EE documentation or elsewhere. This document is aimed at developers building applications using CB/NT that will be deployed on CB/390.

Once you've made the decision to use CB for the implementation of your application, many of your design decisions have been made for you. Because CB is a framework implementation of the CORBA II specification, you will have a distributed client/server model with a client proxy running on one machine and the Business Object implementation running on a different machine. Communication between them is done using IIOP. An added feature of this architecture is language independence. The client proxy can be written in Java, C++, or ActiveX, and the Business Object implementation can be written in Java or C++, with C++ implementations providing the best performance. Your big decision here will be which language to use and where.

As mentioned above, CB is a framework, and the CB Managed Object Framework (MOFW, not to be confused with the meta object framework) makes a lot of design decisions for you on the server side, but it also allows you to extend or customize the server architecture (within limits) to meet your specific needs. The Object Builder tool, along with the WAS EE documentation, will guide you through most of those customizations. Given that a fair portion of your design will be dictated to you by CORBA II and the MOFW, this guide discusses some of the issues you need to consider above and beyond the basics. We will begin with a quick review of the MOFW object layer, followed by a discussion of how to structure the server side of your application, then provide a few notes on structuring the client side of the application. We will then continue by sharing some common design patterns that have been used, some useful tips regarding the use of the CORBA services, and finally some general considerations to keep in mind when designing your applications.
Programming Model in a Nutshell
For a quick overview of the CB programming model, see A Nutshell Guide to the CB Programming Model. As mentioned in the introduction, CORBA allows us to have client applications that are written in a programming language that is different from the programming language used by the server implementations. We have one more wrinkle to consider; another choice to make. On the server side, we are restricted to using C++ everywhere except the Business Object (BO). Since the BO is the place where your programming efforts and talents will be directed, and the Managed Object (MO), Data Object (DO), and Persistent Object (PO) are generated for you by the Object Builder tool, CB gives you the ability to create and run BOs written in Java. This is primarily an implementation issue, but it's good to be aware of it at this point. You should decide early which implementation language to use for your BOs.

Server Application Structure
The basic components you have to work with on the server side are the MO, BO, DO, PO, and Procedural Adaptor Object (PAO). Of these components, the MO is really the one we're looking at from the application perspective. We talk about the BO a lot but, before all is said and done, we convert it into an MO so that it can interact with, and use, the services provided by the Managed Object Framework. In the end, a Business Object (BO), Composed Business Object (CBO), and Application Object (AO) are all kinds of Managed Objects. The DO, PO, and PAO are internal helper objects that help us implement the behavior of the Managed Object. Our focus here will be on how to use the Managed Objects to construct an application. Note: Sometimes, to help distinguish between the BO and CBO, we call the BO a Basic Business Object (BBO). When constructing applications, we use a layered approach. Using Rational Rose® we can create a class model illustrating the uses relationship between the three key layers.
Figure 1: Layered Application Architecture

Each box in the diagram represents a package of classes. Each package contains a set of interfaces and possible implementations. The packages are given suggestive names, and inside each package rectangle are the names of the interfaces exported from the package. This layered construction is a design pattern commonly used when building large, complex applications (see Buschmann).

Top Layer

The top layer consists of Application Objects (AOs). The primary responsibility of the AO is to encapsulate the behavior of a Use Case, and to provide a well-structured and efficient interface for client programs to use. An AO normally corresponds to all or part of a Use Case documented as part of the application analysis.
Some general responsibilities often assigned to Application Objects are:
Note: If you're already familiar with the Enterprise JavaBeans programming model, the Application Object plays the same role as the Session Object. We will discuss the notions of statefulness a little later.

Middle Layer

In the middle layer, we find the Composed Business Objects (CBOs) and some Basic Business Objects (BBOs), some of which might be exposed directly to clients, but usually not. You will want to use a CBO when you discover that you have some business entity whose state data comes from disparate back-end systems; for example, some from CICS, some from DB2, some from Oracle. The ground rule is that a BBO can only be associated with a single container. Therefore, when you have the situation described above, you can use the CBO to construct the higher-order object that brings them all together into a single interface. The purpose of the CBO is to provide an integrated layer representing the logical business entities that the Application Objects can use to fulfill the required functionality of the overall application. Composed Business Objects are documented in the Component Broker Application Development Tools Guide.

Lower Layer

The lower layers contain the BBOs. Typically, a BBO directly represents a business entity that is managed by a particular back-end. For example, a given BBO might represent a business entity whose persistent state is stored in a DB2 database, managed by a CB DB2 Container, whereas another BBO may represent a business entity whose persistent state is managed through a set of CICS transactions by a Procedural Application Adaptor (PAA) Container. Each BBO will have a structure largely determined by the existing transactions and data. Where only one back-end store is involved, it is normal to omit the CBO layer and structure the BBOs according to the logical needs of the application. The main advantages of splitting the server logic according to this layered structure are:
Figure 2: Layered Component Architecture

Each of the classes in this structure, whether an AO, CBO, or BBO, translates into a Managed Object. Each will normally have an interface, key, copy helper, BO implementation(s), DO interface, DO implementation, and MO. To support the modularity suggested here, each package can be placed into an IDL-level module in Object Builder.
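As a purely illustrative sketch of the 'uses' relationships between the layers (all names are hypothetical and the ORB plumbing is omitted), an AO method delegates to a CBO, which in turn pulls together BBOs backed by different containers:

    /** Hypothetical BBO backed by, say, a DB2 container. */
    interface CustomerAccountBBO {
        long balance();
    }

    /** Hypothetical BBO backed by, say, a CICS/PAA container. */
    interface CustomerProfileBBO {
        String preferredLanguage();
    }

    /** CBO presenting one logical business entity assembled from the two BBOs. */
    interface CustomerCBO {
        long balance();
        String preferredLanguage();
    }

    /** Middle layer: the CBO delegates to the BBOs it composes. */
    class CustomerCBOImpl implements CustomerCBO {
        private final CustomerAccountBBO account;   // entity in the DB2 container
        private final CustomerProfileBBO profile;   // entity behind the PAA container

        CustomerCBOImpl(CustomerAccountBBO account, CustomerProfileBBO profile) {
            this.account = account;
            this.profile = profile;
        }

        public long balance() { return account.balance(); }
        public String preferredLanguage() { return profile.preferredLanguage(); }
    }

    /** Top layer: an AO encapsulating a Use Case; clients talk to this layer. */
    class AccountSummaryAO {
        private final CustomerCBO customer;

        AccountSummaryAO(CustomerCBO customer) {
            this.customer = customer;
        }

        /** One client-visible operation, fulfilled entirely through the middle layer. */
        String summarize() {
            return customer.preferredLanguage() + ": balance=" + customer.balance();
        }
    }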
Application Object Design

A number of considerations affect the design of AOs. Normally, an AO is a transient BO. This means it is not backed by a persistent store and will vanish in the case of a server failure. Being a transient object, it will have a system-generated universally unique identifier (UUID) for its key. This means there is no good way to look it up, so the only way to get back to one, once it's been created, is by way of a direct reference, the IOR. In the case of CB/390 deployment, the UUID should be created on the server and returned to the client; currently, UUIDs created on the client side are not usable by CB/390. Transient AOs are workload managed on WAS EE/390, which means that there is no server affinity. Each invocation of the transient AO will get a new instance, possibly in a new server region. The OS/390 Workload Manager can be configured to create a single server region, but server recycling must also be turned off. These settings affect the scalability of the application, but they allow server affinity in CB/390. See the Component Broker Administration and Operations Guide.

Figure 3: Server Affinity

The main reason for the lack of server affinity is that CB/390 is tuned to use the built-in sysplex facilities for server replication, dynamic activation, load balancing across server regions, and connection management to MVS resource managers such as CICS, DB2, and IMS. See Getting Started with Component Broker.

How many Application Object instances?

Consider the question of how many instances of a given type of Application Object there should be in an application. The following answers are possible:
Client Programming Models in a Nutshell
When it comes to assembling all of these parts into a completed system, we have several options on the client side. The client model you decide to go with will depend on whether you need a thin client or can deal with a fat client, whether your security requirements are strong, minimal, or nonexistent, and whether you will be using Web-based access or a directly connected application. Choosing one client model does not exclude the others. Because the client is separated from the server implementation by an IIOP bridge, you can have several different kinds of clients using the same server implementations.

CORBA Clients

These are interactive applications in which client programs are written as stand-alone applications (Java, C++, or ActiveX) that communicate directly with the server implementations via IIOP. This model allows for stronger security, fatter clients, and easier understanding and maintainability. The user interface is developed as you would normally develop a user interface for a C++, Java, or ActiveX application. This is the default client programming model and is described thoroughly, with complete examples, in Getting Started with Component Broker. For details on configuring each of the client types, refer to the System Administration Guide, chapter 8.

Browser Clients

In this model, the client user interface is a Web browser. There are two options available to you:
For details on how to configure each of these, refer to the Component Broker System Administration Guide.

Design Patterns

"CB-lite"

Some projects choose to use only some of the facilities of CB, typically because of the pre-existence of customized adapters to back-end technologies that require direct interfacing to those adapters rather than going through CB's PAA technology. The most common technique is to use a transient BO to access a back-end when the CB PAA technology is not being used. CB/NT does, however, provide Application Adaptors for CICS, IMS, and MQ that support pessimistic and optimistic distributed two-phase commit protocols. See the Component Broker Procedural Application Adaptor Development Guide. In the case of CB/390, adaptors exist for CICS, IMS, and DB2; only sync level 2 pessimistic two-phase commit is currently supported. See the Component Broker Assembling Applications Guide and Integrating Component Broker With Existing OS/390 Applications.

Session context

In any interactive application, an important issue is how to implement "session contexts"; that is, contexts in which a user works. A session context is closely associated with a Use Case and can be thought of as a concrete realization of a Use Case in a running system. Usually such a context is associated with one or more windows on a screen. To make a context useful for the user, data must be imported into the context for the user to work on. Contexts may be sequential or concurrent, and nested or non-nested. Data can, and often will, be shared between contexts. Users may want to abandon contexts (for example, by pressing a Cancel button) or confirm them (for example, by pressing an OK button). Typically, for a given user there will be an outermost context representing the user's login session and, within it, nested contexts representing the various items of work being carried out.

From one perspective, contexts are a client-tier issue, because the client is the place where the user wants data to be immediately available to work on and does not want a client-server round trip to access every data item. On the other hand, the objects that contain the data required to make a context useful are normally instantiated on the server. Furthermore, HTML/HTTP clients cannot retain a context between pages except by using tricks such as cookies. In general, then, contexts are represented on both client and server, and the application designer must decide how to partition the overall behavior of a context between client and server in order to maximize usability for the end user.

The general case is where contexts are both concurrent and nested. Concurrent means that a given user may have several contexts open at the same time. Nested means that a child context imports data and/or object references from a parent context, and the parent context is then suspended (that is, not accessed or updated) until the child context completes, at which time updated data objects are delivered back to the parent context (but not made globally accessible). Conceptually, the easiest way to manage communication between all contexts is to treat them all as independent and to manage their coordination via server transactions. From a CB perspective, an Application Object is instantiated to represent the context on the server, with a corresponding context object on the client. The AO is responsible for gathering up the objects required for the user to do the work described by a given Use Case, for handling the communication between the server and the associated client, and for managing the updates to those objects by using CB's transaction facilities.
The simplest approach is for the Application Object to start a transaction when the context starts, and to commit or roll back the transaction when the context ends, thereby propagating all changes within the context to the database. This simple approach has a couple of major drawbacks:
The first drawback can be mitigated by the use of events or notifications. The second would be mitigated by an implementation of nested transactions in CB. At this time it is recommended that you stick to the simple model described above and avoid attempting to address either of these requirements.

Session state BO

What do you do when you have an Application Object that needs to remember session information or a session context? In this case, we can create a special Business Object to hold the information. We call this a session state BO. The session state BO can be transient or persistent; if it is persistent, then you can have recoverable sessions in the event of client or server failure. When using this pattern, you'll be limited to a 1:1 relationship between the Application Object and the client.

Handling "nested" transactions

What happens when a method starts a transaction, and then calls another method that also starts a transaction? CB only supports a 'flat' transactional model, so attempting to start a transaction when another is already running (for a given flow of control) is always an error; if you do this, an exception will be raised. There are several ways in which some (but not all) of the effects of nested transactions can be achieved. There are several cases. If a transaction is already running and a call is made to an AO which itself starts a transaction:
One option is for the AO to test the transaction state and, if a transaction is already running, simply perform its work within that transaction rather than starting a new one (the running transaction 'subsumes' the AO's work). Of course, it will be necessary to record that a new transaction was not started (either as a local variable in a method or as an attribute in the AO) so that the running transaction is not stopped (committed) when the AO's own logic is completed. If the transaction status is recorded in a local variable, then all logic must be in a single method, which limits the scope of the 'subsumed' transaction to a single call to the service AO. If the transaction status is recorded as an attribute, then multiple calls can be used; however, this approach will only work if the AO has only one flow of control, that is, if it is being used by only one client. One problem with this approach is that the BO instances and other resources (for example, the corresponding DB2 table rows) used by the AO are locked for the entire duration of the transaction. If those BO instances are required by other transactions, there is a risk of serialization and a significant reduction in throughput.

Alternatively, the AO can test the transaction state and, if a transaction is already running, suspend that transaction, start a new transaction, do the appropriate work, and commit the new transaction. Of course, it will be necessary to record the 'context' of the suspended transaction so that it can be resumed later. As above, this can be done either as a local variable in a method or as an attribute in the AO. If a local variable is used, then the 'suspend' and 'resume' calls must be in the same method; this limits the approach to a single AO call. If the context is retained as an attribute, then the AO must be used by only one client. This approach does not suffer from the problem of 'growing' a transaction that locks other BOs for a potentially long time; if the BOs and other resources used by the AO are shared by many users, then this approach will reduce the amount of transaction serialization and, therefore, increase throughput.

Using container policies - If you have a simple model, where a single method on an Application Object begins and ends the transactions, and it uses the BOs in a straightforward and efficient manner, then you can use the container policies to help you manage your transactions transparently. This technique is not a way of managing or simulating nested transactions, but rather a way to avoid the situation. Assume that the Application Objects will be the only ones to initiate transactions, on a per-method basis, and that the Business Objects that require a transactional context will run as long as there is an open context. All you need to do then is associate:
Optimized client-server communication

Typically, more data is shipped from server to client than the other way around. This is because a user normally requires a lot of data quickly to establish an understandable working set of objects, but typically creates a rather small amount of data once the work begins. The exception to this generalization is the creation scenario, where objects are initially being created. Potentially, each access to an attribute of a Business Object, Composed Business Object, or Application Object is an ORB round trip. This can be very expensive in terms of performance; therefore, our goal is to minimize the number of ORB trips. We can do this in several ways:
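One simple illustration (a sketch only; the Customer and CustomerData types are hypothetical stand-ins for an interface and a struct or copy helper you would define in IDL) is to provide a single operation that returns everything a view needs, instead of one remote accessor per attribute:

    /** Local value object (in IDL terms, a struct or copy helper) carrying the attributes. */
    class CustomerData {
        String name;
        String address;
        String phone;
    }

    /** Hypothetical client view of the remote Customer object. */
    interface Customer {
        String name();              // each accessor is a remote call
        String address();
        String phone();
        CustomerData getData();     // one call returning all three values
    }

    class CustomerView {
        /** Chatty version: three ORB round trips. */
        static String showChatty(Customer c) {
            return c.name() + ", " + c.address() + ", " + c.phone();
        }

        /** Batched version: a single ORB round trip. */
        static String showBatched(Customer c) {
            CustomerData d = c.getData();
            return d.name + ", " + d.address + ", " + d.phone;
        }
    }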
Validation

When doing validation, there are two major categories to consider.
Caching singleton objects

Caching is a well-known technique where data, or objects, are stored for future use, rather than being re-computed or re-fetched from a backing store or database. In general, caching can be applied widely in Component Broker applications to improve run-time performance. However, caching is not always effective: clearly, if a data item or object is used once only, then the performance will not be improved and may be made worse if the overheads of cache management are nontrivial. Extensive use of caching gives rise to increased dynamic memory usage, with its consequent heap management (or garbage collection) overheads. Caching can also result in increased virtual memory size and therefore more paging, thereby reducing the performance of the application. In general, caching techniques should be used with caution. Many Component Broker applications use managed objects that are never instantiated more than once. These objects may be used very frequently within an application, so caching these singleton objects for future use can give very significant performance improvements. Examples of singleton objects suitable for caching include:
There are several possible approaches to the caching of singleton objects:
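One such approach, sketched below with hypothetical types (the resolveAccountHome() call stands in for however your application actually locates the object, for example through a factory finder or the Naming Service), is to resolve the reference once and keep it in a static member:

    class AccountHomeCache {
        /** Placeholder for the real singleton interface (for example, a home). */
        interface AccountHome { }

        // Cached reference; volatile so a fully initialized reference is visible to all threads.
        private static volatile AccountHome cachedHome;

        static AccountHome home() {
            AccountHome result = cachedHome;
            if (result == null) {
                synchronized (AccountHomeCache.class) {
                    if (cachedHome == null) {
                        cachedHome = resolveAccountHome();   // expensive lookup, done once
                    }
                    result = cachedHome;
                }
            }
            return result;
        }

        private static AccountHome resolveAccountHome() {
            // The real lookup (factory finder, naming, and so on) is not shown here.
            throw new UnsupportedOperationException("lookup not shown");
        }
    }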
Note: The code generated by Object Builder for 1-n relationships, when the Home and Key pattern is selected, always creates a new Query Evaluator for each call. Significant performance improvements can sometimes be gained by rewriting the generated code to cache a query evaluator; using a BO implementation attribute declared as static seems to be the most effective way of achieving this.

Single BO interface, multiple BO implementation

The ability to create a single Business Object interface and have many possible implementations is one of the more powerful features of WAS EE. Why would you ever want to do this? You need to do this essentially any time you are migrating from one kind of implementation to another, where both implementations must exist simultaneously. This is not an uncommon scenario in today's world.
As you've probably guessed by now, managing this can become pretty tricky. The good part of the story is that we have the tools at hand to do it. Look at the next section on Host/Cell/Workgroup scope and also read up on Factories, Factory Finders, and Location Scopes in the Advanced Programming Guide.

Host/Cell/Workgroup scope

By using the Host, Cell, and WorkGroup constructs in conjunction with the Factory Finders and Location Scope objects that are part of the Life Cycle Service, one can place a particular Business or Application Object in a given scope and then create a Factory Finder that will look in ever-increasing scopes, from host to workgroup to cell, to locate a given object.

Example
You create a factory finder that will first look in the host scope; if the object is not found there, it looks in the workgroup scope, and if it is not found there, it searches the cell scope. You make sure you are the only one that has the object in your host scope, your workgroup has visibility only within the workgroup, and the world knows nothing of your workgroup.

Avoiding serialization

The CB frameworks and runtime environment are designed to be highly concurrent, to support high throughput in the presence of back-end resources such as databases and transactional systems. Clearly, shared resources that are accessed by multiple threads must use locks, often hidden inside transactions or sessions, to ensure correct behavior; such locks in general serialize two or more threads and reduce the effective concurrency of the application. Unnecessary serialization can significantly reduce the performance of a CB application and should be avoided where possible.

Key generators

The general issue of unique key generation, and how to organize it, is of some importance in the design of CB applications. Using a random-number approach to key generation is often a bad idea, at least when rand() is used, since it is all too easy to get the same key generated twice. One popular technique for the management of unique keys for persistent Business Objects is to use a key generator. This is an ordinary persistent Business Object whose responsibility is to generate unique IDs to be used as primary keys for other Business Objects. Key generators can cause undesired serialization if used carelessly. To avoid serialization on key generator objects we want either:
The second approach can work, but it adds a few constraints, so let's take a look at the first option. When the transactional unit of work is relatively long and the generation of the key is required as part of that transaction, we have a conflict with the first strategy. Usually in this case the key being generated is required for the creation of new objects which are created as part of the transaction. In order for the transaction to proceed and succeed, there must be a valid key. This gets further complicated if the business requirements dictate that the generated keys are sequential. One approach is to suspend the business transaction and generate a new key in a separate, very short transaction. The logic would be something like:
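A minimal sketch of that sequence for a Java implementation, assuming a hypothetical KeyGenerator Business Object and the standard CosTransactions Current interface obtained through resolve_initial_references (exception handling kept to the essentials):

    import org.omg.CORBA.ORB;
    import org.omg.CosTransactions.Control;
    import org.omg.CosTransactions.Current;
    import org.omg.CosTransactions.CurrentHelper;

    public class KeyAllocation {

        /** Hypothetical key generator; in CB this would be an ordinary persistent BO. */
        public interface KeyGenerator {
            long nextKey();
        }

        /**
         * Called only after the business logic has been validated: suspend the
         * long-running business transaction, allocate the key in its own very short
         * transaction, then resume the business transaction.
         */
        static long allocateKey(ORB orb, KeyGenerator generator) throws Exception {
            Current current = CurrentHelper.narrow(
                    orb.resolve_initial_references("TransactionCurrent"));

            Control businessTx = current.suspend();   // 1. park the business transaction
            try {
                current.begin();                      // 2. short key-allocation transaction
                long key;
                try {
                    key = generator.nextKey();
                } catch (RuntimeException e) {
                    current.rollback();               // don't consume a key on failure
                    throw e;
                }
                current.commit(false);                // 3. commit; false = no heuristic report
                return key;
            } finally {
                current.resume(businessTx);           // 4. always resume the business transaction
            }
        }
    }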
This approach only attempts to generate a key when we know the business logic is OK. Thus, we ensure that only required keys are generated, and only in the rare event of a server failure do we get non-sequential keys. An alternative approach is to provide functionally equivalent key generators that generate keys for specified ranges.

CORBA Services
CORBA specifies the interfaces for many functions required to run and manage distributed object systems. No implementation of the CORBA II specification implements all of the services completely. The strategy for all CORBA implementers has been to support first the functions that are absolutely required for a working system, and then to follow on with the others as they can. The CORBA services provided with CB give the system designer many powerful functions in a platform-independent manner. It is strongly recommended that the system designer review the sections in the Advanced Programming Guide on the Event, Life Cycle, Naming, and Query Services before starting the design of a system. The following sections point out a few things that you probably wouldn't readily glean from the standard documentation.

Relationships and queries

It should be noted that currently with WAS EE/390 there is a restriction that all Business Objects involved in a query, and the query evaluator itself, must be in the same server and therefore on the same machine. Clearly, this limits the circumstances in which large queries can be used to replace long navigation chains; it also limits the freedom to place Business Objects into multiple servers for performance and scalability advantage. It is likely that this restriction will be removed in future versions. Distributed query is supported with CB/NT v3.0. Also, note that any performance advantage of using complex queries will usually only be gained if the query is completely pushed down to the underlying database manager. Where significant query functionality, such as selections or joins, is implemented in object space, poor performance can be expected. In general, then, the approach of re-implementing lengthy object navigations as OO-SQL queries is really only suitable where all of the BOs involved are in the same server and are made persistent using the same database.

Identity

What happens when you end up with two references (IORs) to your bank account object? Were you just given a new bank account, or is it a reference to the same bank account you've been using for the past ten years? Because we are working with a distributed system using proxies to the actual implementation objects on a server, the answer to this question isn't as easy as you might think. To establish whether two references actually refer to the same object implementation, we use the Identity Service. The Identity Service provides methods, such as is_identical(), that can be used to compare two IORs to see if they are referring to the same implementation object. Object References (ORs) and Stringified Object References (SORs) are not reliably unique; for example, they include the machine name where the object is currently resident. Moving an object around, such as re-activating it on a different machine within a server group, will cause the OR or SOR to be different. The is_identical() method does execute on the server and is guaranteed to give you the correct result. The problem is that it's expensive from a performance perspective. For this reason there is the constant_random_id() method, which returns a first-order approximation; that is, it will tell you for sure that two IORs do not refer to the same object, but if you get a true result you have to do one more test using is_identical() to be sure.
If you have a collection of objects over which you want to do identity comparisons, and you want to avoid a series of remote is_identical() requests, you can cache the constant_random_id in the collection and use that as an approximation of whether two objects are identical. Compare the constant_random_id of both objects. If they are different, then you know they are not the same object. If they are the same, you still do not know for sure that they are the same object, because two different objects can end up with the same hash value. In that case, invoke is_identical() to find out for sure.
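In Java, assuming the objects expose the constant_random_id and is_identical operations described above (the exact interface and package come from your generated bindings; the CosObjectIdentity binding is shown here), the comparison looks like this:

    import org.omg.CosObjectIdentity.IdentifiableObject;

    public class IdentityCheck {

        /**
         * Cheap test first: different constant_random_id values can never belong to
         * the same object. Equal values are only a hint (a hash collision is possible),
         * so confirm with the remote is_identical() call only in that case.
         */
        static boolean sameObject(IdentifiableObject a, int cachedIdOfA,
                                  IdentifiableObject b, int cachedIdOfB) {
            if (cachedIdOfA != cachedIdOfB) {
                return false;               // definitely not the same object
            }
            return a.is_identical(b);       // one remote call to be certain
        }
    }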
General Considerations

This section of the Guide gives guidance on how to choose between the various capabilities that CB offers. Each part of this section covers an area of CB where the designer has to make choices about how to use CB functionality, and discusses how to make these choices.

Shared and Multi-copy Object Semantics

Persistent BOs

When persistent BOs (that is, BOs which are backed by a relational database) are used, there are two possibilities for the case where the 'same instance' of a BO is used in many transactions. Note that we are not considering the case where a persistent BO is used outside transactions; basically, this does not work in any useful way, and it should be avoided. If the caching service is in use (with either DB2 or Oracle), then we get multiple copies of 'the same' object instance, so-called multi-copy semantics. The (transactional) locks on the database sort out concurrent access to data, depending on the kind of locks (CS, RS, RR) being used. This can give good performance when multiple transactions share the same object/table/row. Note that this continues to work correctly even when two or more servers (on different machines, perhaps) share access to the same table/row. The caching service is not available on CB/390; in the case of CB/390, caching is delegated to the back-end resource manager, such as DB2 or IMS. See the Component Broker books Integrating Component Broker With Existing OS/390 Applications and Planning and Installation Guide. On the other hand, if we use a BO with embedded SQL (DB2 only), then we get single-copy (also called shared-copy) semantics, and only one transaction can use each BO instance at a time. This can cause serialization and reduce throughput, but the use of embedded SQL is quite zippy if serialization does not occur. For DB2 and embedded SQL, it is possible to gain performance when many transactions share a single BO instance by carefully suspending and resuming transactions. Also, we get multi-copy semantics when PAA-backed BOs are used. This means that multiple transactions (or multiple sessions) can use the same BO; the transactional concurrency characteristics are determined by the transactions/sessions in the back-end.

Read-only Persistent BOs

In the case where we have BOs backed by DB2 tables, using embedded SQL, where those tables are marked 'read-only', there is no special behavior used by CB. In the case where we have complex 'reference data' (which, for practical purposes, never changes) in databases, the right approach is usually to use the caching service. Whether it is optimistic or pessimistic makes no difference if the data is genuinely read-only; it is important to set the refresh interval suitably long. However, in cases where there is little chance of a cache hit (a situation exacerbated by the use of WLM, which results in a separate cache on each server), the embedded SQL approach can give better performance. Other factors from DB2 (sizes of tables, access paths, package files, and so on) can also make a difference here. Also, remember that in many applications the total time spent talking to DB2 is a small percentage of the time spent in the transaction, and thus we see little difference between caching and embedded SQL. You really have to try out some options against reasonably populated databases. In general, it seems best to start with caching and then go to embedded SQL if needed.
Finally, if you use embedded SQL and have a small, finite number of objects in this read-only state, then having them live in a 'never passivate' container can speed things up a bit more. The MOFW will then only do a re-retrieve on the table and not have to rebuild the MO assembly (MO, DO, mixin, key).

Transient BOs

For transient BOs (that is, BOs with no underlying data store; the data is held entirely within the MOFW), various cases are possible. We often treat transient BOs as Application Objects (AOs). For an AO with attributes, used in a transaction, we must get single-copy semantics, since the data is only 'in' the AO. Note that there is a risk of performance-degrading serialization on the AO. In general, we will want to have separate AOs for each client, and therefore for each transaction. For an AO with attributes which is not used in a transaction, but is otherwise used concurrently (by multiple threads for multiple clients, for example), there must be some locks around. The MOFW does not do any locking for you; this means you have to do the synchronization yourself. Generally, we should expect to see mutexes around the setting of private attributes. In Java BOs, we might see the BO using a separate Java class to hold the attributes, with those attributes changed only through synchronized methods. The internal version of the 'Big3' sample (useful for examining this in detail) does exactly this for static variables, and the approach is good for instance data as well. This keeps the management of the data almost entirely within the BO. One could also use a delegating DO and do the special synchronization in the getters and setters of the DO, so as to keep the BO logic clean. So, the unit of locking is up to you: it could be 'per attribute' or 'per method', as above. Or, you could lock the state of the whole object (that is, all attributes) on one thread, with all other threads blocked. In either case, there is a risk of serialization causing a reduction in performance. The bottom line is: CB provides per-transaction level safety in the MOFW; you need to think carefully about what is happening to shared state outside a transaction. For an AO without attributes, but with a primary key, the situation is unchanged; we get the same story as the two cases above. For an AO without attributes and with a UUID key, there is no real shared state, so threads and transactions can safely share the same object instance. There is no risk of serialization, no blocking, and no need for synchronization logic, since there is no state to protect.
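For the non-transactional, concurrently used AO just described, a minimal sketch (hypothetical attribute names) of the separate attribute-holder class with synchronized methods might look like this; the Java BO then delegates its attribute storage to an instance of this class, and the same idea applies to static data:

    /** Holds an AO's mutable attributes when it is used outside a transaction. */
    public class SessionAttributes {
        private String currentCustomerId;
        private int itemCount;

        // All access goes through synchronized methods, so concurrent client
        // threads never observe partially updated state.
        public synchronized void setCurrentCustomerId(String id) {
            this.currentCustomerId = id;
        }

        public synchronized String getCurrentCustomerId() {
            return currentCustomerId;
        }

        public synchronized void addItems(int count) {
            this.itemCount += count;
        }

        public synchronized int getItemCount() {
            return itemCount;
        }
    }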
Composed BOs (CBOs)

CBOs with their 'own' attributes behave in the same fashion as the corresponding BO and AO cases discussed above. A CBO with no state (attributes) of its own (all state is actually 'in' the BOs) is more interesting. In this case, multiple transactions can use the same CBO instance with no risk of serialization from the CBO itself; the serialization scenarios for the CBO are completely determined by the serialization characteristics of the 'underlying' BOs. For example, consider a scenario where we combine BOs from two back-ends (Oracle and DB2), where the BOs themselves use the caching service (to get the desired level of concurrency and transaction-level locking). We would not want BOs that support a desirably high level of concurrency when multiple transactions are used, only to lose that concurrency when we happen to combine those BOs using a CBO. Rather, for a completely 'transient' CBO, we get the serialization characteristics of the (worst of the) underlying BOs which are actually used in a particular case. If the CBO is a transient object that has references which are manifested at first touch or when the object is activated, then it is very much like the AO story discussed above. If we are going to change the references, then that would either have to be synchronized by our own code in the non-transactional case, or would be synchronized by the MOFW in the transactional case.

Transactions and sessions

In CB/NT, BOs may be associated with transactional or session containers. In CB/390 there is no session service; there are only transactional containers. In CB/390 all work is performed under a transaction initiated by:
As is to be expected, there are trade-offs in performance between the two approaches: client-controlled versus container-controlled transactions.

Figure 4: Client Controlled Transaction
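As a rough illustration of the client-controlled case (the Account and AccountHome types are hypothetical, apart from the findByPrimaryKeyString() finder, and the transaction is driven through the standard CosTransactions Current interface):

    import org.omg.CORBA.ORB;
    import org.omg.CosTransactions.Current;
    import org.omg.CosTransactions.CurrentHelper;

    public class ClientControlledTx {

        /** Hypothetical client-side view of the server object. */
        public interface Account {
            void debit(long amount);
            void credit(long amount);
        }

        /** Hypothetical home interface with the usual string-key finder. */
        public interface AccountHome {
            Account findByPrimaryKeyString(String key);
        }

        static void transfer(ORB orb, AccountHome home,
                             String fromKey, String toKey, long amount) throws Exception {
            Current current = CurrentHelper.narrow(
                    orb.resolve_initial_references("TransactionCurrent"));

            current.begin();                     // the client starts one transaction...
            try {
                Account from = home.findByPrimaryKeyString(fromKey);
                Account to = home.findByPrimaryKeyString(toKey);
                from.debit(amount);              // ...that covers all of these calls
                to.credit(amount);
                current.commit(false);           // one commit for the whole unit of work
            } catch (Exception e) {
                current.rollback();              // undo everything on any failure
                throw e;
            }
        }
    }

In the container-controlled scenario of Figure 5, the client makes the same calls with no begin() or commit(); each call runs under a transaction begun implicitly by the container.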
Figure 5: Container Controlled Transaction

This time the client starts with findByPrimaryKeyString() without controlling the transaction explicitly; the container begins the transaction implicitly.
As you can see, in the client-controlled transaction scenario there are only three TP monitor transactions, whereas in the container-controlled scenario every access to the same object is a separate transaction, mapping to one or more TP monitor transactions, which adds up to ten. Consider this in the light of the explicit versus implicit container transaction policies discussed previously in the section on nested transactions.

Fat transactions

About the PAA cache: a CICS transaction returns data for multiple objects of multiple object types. For example, given a CICS transaction that has a customer number as input, we get data for 1 customer object, N property objects, and M agreement objects in return. In CB R3.0 multiple objects can be returned, but they have to be of the same type and it must be in response to a query. Due to some limitations in the R3 design, the PAA will re-execute retrieve on each of the objects returned. The bottom line is that R3 gives more function than R2 (query over PAA objects), but the performance is not good because retrieve is executed too many times, and it still does not handle the multiple-object-type requirement. What we want is something like a database stored procedure which can return multiple occurrences of multiple different table rows. This data is held in the data cache service until objects that need the data are activated, either by subsequent findBy or query operations. Here are some thoughts on how one might go about constructing a PAA which fluffs up many BOs from a single CICS transaction, using the capabilities of CB 2.0.

Firstly, let's take a simplified example for illustration: imagine a Customer object. Each Customer has one or more Property objects (Houses, Farms, and so on) to which a service (for example, the supply of water) is provided. So, for each Customer, we have a tree of objects, the contents of which we might wish to present to a user (via the GUI of a pure-client program) in response to a single query involving the custID (key). We have existing CICS transactions which return a flattened version of this object structure (in a COMMAREA), including the attribute values. Of course, this supposes that there is a maximum number of Properties for a Customer. We want to build the corresponding BO structure by running a single CICS transaction; we want to make persistent any changes in the Customer object, or in any of the Property objects, by running a similar CICS transaction. Let's start at the bottom. I'll assume that the CICS transaction COMMAREA has places for the attributes of Customer (custID (key), name, and so on), and is then structured with a repeated group, with each group representing a single instance of a Property. The repeating group has places for the attribute values of the Property (propID, address, propertyType, and so on). The group repeats (say) 10 times in the COMMAREA; if not all Property instances are needed, then the propID key is null, zero, or something else easily recognized. Now we move to the PAO. We build, in the usual way using VisualAge for Java, a single PAO. This provides attributes corresponding exactly to the COMMAREA structure for the Customer, and then 10 sets of attributes for Property. (There might be other alternatives. The PAO could have fewer attributes; for example, one attribute per Property, which is a structure corresponding to the key and attributes of Property.
Or, we could have a single attribute for all Property objects, of the form 'sequence of structure', or perhaps 'array of structure', or even 'array of copy helper'.) So, we have a single PAO, importable into OB, which has all the data we need. Now, let's flip to the top. We can easily define the BO interface structures we want:
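For illustration, the interfaces for the running example might look like the following (shown as the Java view a programmer would work with; in CB these would be defined in IDL through Object Builder, and the names come from the example above):

    /** Customer interface for the running example (illustrative only). */
    interface Customer {
        String custID();
        String name();

        // 1-n relationship to Property, maintained by hand over a transient collection.
        void add(Property property);
        void remove(Property property);
        Property[] list();
    }

    /** Property interface for the running example (illustrative only). */
    interface Property {
        String propID();
        String address();
        String propertyType();

        Customer customer();    // back-reference to the owning Customer
    }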
Now we have the problem of gluing together the BO interfaces and the PAO. We define a BO implementation of Customer in the usual way, together with a DO interface; nothing special so far. We define a BO implementation of Property, together with a DO interface, and a transient DO implementation for Property. We define a Customer DO implementation, adding implementation attributes for all of the Property attributes, repeated 10 times, and declare that we will use PAA persistence. We 'connect' the Customer attributes to the corresponding PAO attributes in the usual way. We intercept the 'activation' call on the DO to ensure that, when the BO/DO pair for Customer is created, we also create the BO/DO pairs for all of the Property groups which do not have a null propID. We provide hand-written implementations of the 'add', 'remove', and 'list' methods, in the Customer BO implementation, for the relationship to Property, and use a transient reference collection to keep the Property objects for this relationship. We call the 'add' Customer BO method to install the relationship between the Customer and each of the Property objects; this call is made from the Customer DO implementation. We also set the customer attribute of each Property to refer to the Customer, in the same way. (Alternatively, we could intercept the activation call on the Customer BO (not the DO), and write the code to create the transient Property BOs there. This would involve promoting more of the PAO's interface into the DO interface (but not the BO, of course), but might otherwise be more convenient.) So, I've tried to sketch a situation where a Customer and a collection of associated Property objects are made available in response to a single findByPrimaryKeyString() call on the Customer home. Of course, this only works if we insist that we can never attempt a findByPrimaryKeyString() on the Property home; we can override this (in a specialized home) to raise an exception. I also have to arrange that the 'passivation' call on the DO (or perhaps the BO) is used to copy the state of the Customer and Property graph back into the PAO attributes before the data is placed in the COMMAREA and the CICS transaction is called.

Lazy initialization

Almost all programs require data structures, objects, or other resources to be initialized. Sometimes performance improvements can be gained by avoiding initialization until it is absolutely required (so-called deferred or lazy initialization), in cases where the initialization is rarely needed. Similarly, programs with complex flow can sometimes be improved when duplicate initialization calls are expensive. For CORBA object references, initialize them to a suitably narrowed nil (Object Builder provides this) and test them with is_nil(), rather than with == NULL.

Avoiding ORB call overhead

To share common business function between multiple Application Objects, one can use either composition or inheritance. Composition requires that inter-object calls (for managed objects) go through the ORB, and so incur the ORB overhead. An alternative is to use inheritance: build an application-specific base Application Object (probably abstract), with derived AOs. You can use the scoping operator to avoid ORB overheads. In practice, this cost is likely to be small, except in cases where the business logic is exceptionally complex and there are a large number of calls.
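A small sketch of the inheritance approach (all names hypothetical); the point is that the shared logic runs as an ordinary local call inside the derived AO rather than as an ORB request to a separate managed object:

    /** Hypothetical application-specific base class for Application Objects. */
    abstract class BaseApplicationObjectImpl {

        // Shared business function, inherited by every concrete AO implementation.
        protected long applyStandardDiscount(long amountInCents) {
            return amountInCents - (amountInCents / 10);   // 10 percent discount
        }
    }

    /** A concrete AO implementation reusing the shared logic through inheritance. */
    class OrderEntryAppObjectImpl extends BaseApplicationObjectImpl {

        public long priceOrder(long listPriceInCents) {
            // Local, in-process call: no ORB marshalling, no remote dispatch.
            return applyStandardDiscount(listPriceInCents);
        }
    }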
References

Component Broker, Procedural Application Adaptor Development Guide
OS/390 Component Broker Version 1.2, Administration and Operations Guide
OS/390 Component Broker Version 1.2, Assembling Applications Guide
Frank Buschmann, et al., Pattern-Oriented Software Architecture, Wiley, 1996
OS/390 Component Broker, Getting Started with Component Broker for OS/390
Component Broker, Integrating Component Broker With Existing OS/390 Applications
Component Broker, Programming Guide
OS/390 Component Broker, Planning and Installation Guide
Component Broker, Application Development Tools Guide
Component Broker, System Administration Guide

This information is provided by IBM Corporation. © Copyright IBM Corporation 1999-2000.