WebSphere Enterprise Deployment Guidelines for Component Broker Applications

Lifecycle Map

Analysis -> Design -> Build -> Deploy -> Maintain

Note: Increments and iterations are assumed throughout the lifecycle.

Introduction

The lifecycle map presented above is a bit simplistic for our needs, but it does get us into the ballpark. What is missing is the distinction between deployment for testing and deployment for production. When we talk about deployment, we usually mean deployment into the production environment, and that will be the primary focus of the following discussion, but we will also take a moment to consider deployment into a test environment.

What does it mean to deploy an application? What does it mean to deploy an application in an enterprise environment? What is an application in the WebSphere Application Server enterprise environment? Deploying an application is the process of taking a set of files, data, and executables and making them available to the end users. The larger the environment, the greater the number of users, and the greater the demand for continuous operation, the harder it is to deliver applications or updates without causing disruptions or making mistakes. It is this complex and demanding environment that we address when we talk about the enterprise environment.

There are two facets to deployment: the artifacts that are delivered and the process by which they are delivered.
The WebSphere Application Server Enterprise Edition (WAS EE) Component Broker (CB) addresses the artifacts and provides the System Management tool to help deliver them, whereas the Rational Unified Process addresses the application development process but stops short of describing a process for deploying the artifacts into a specific environment. There is a reason for this: the deployment process will be very specific to the organization doing the deployment. To understand how to deploy a CB application, you must first understand your organization and the tools and processes that already exist. Then you must understand the applications and components being delivered, and last, but not least, the artifacts being delivered and the CB System Management tool that is used to deliver them.
CB Artifacts
The artifacts that you have to deliver and the method of delivery will depend on the
overall architecture you settle on.
In the CB model, you define an Application, which is some combination of the DLLs, OBJs, JARs, and EXEs that deliver some defined functionality. You determine the granularity of the packaging. Typically it will be a set of closely coupled Managed Business Objects and any supporting files or executables they require.

When you extend your system of Managed Business Objects so that they are used in conjunction with a Web-based application, you will also have all of the additional artifacts that come with a Web-based application.

Test versus Production
Take a moment to consider what is the same and what is different between a test environment and a production environment.
It is strongly recommended that there be a physical separation between the test and production environments. The more you can make the test environment like the production environment, the more confidence you will have in the test results.

With CB you have the ability to use a single interface with multiple different implementations. The decision of which implementation to use for a given interface is made when creating the application package. This feature, used in conjunction with the CORBA Naming and Lifecycle services, allows you to install and run different versions simultaneously. The trick is getting to the implementation that executes the code you want: debug or production, version 1 or version 2. You can do this by creating Factory Finders with custom location scopes. Example:
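As an illustration only, the following minimal Java sketch shows how a client might reach a different implementation simply by resolving a differently scoped factory finder from the name tree. The name-tree layout ("factory finders/Version1", "factory finders/Version2"), the factory key ("PolicyHome"), and the scope names are hypothetical, and the CosLifeCycle stubs are assumed to be generated from the standard OMG IDL or supplied with the product; consult the Advanced Programming Guide for the actual CB interfaces.

    import org.omg.CORBA.ORB;
    import org.omg.CosLifeCycle.FactoryFinder;
    import org.omg.CosLifeCycle.FactoryFinderHelper;
    import org.omg.CosNaming.NameComponent;
    import org.omg.CosNaming.NamingContext;
    import org.omg.CosNaming.NamingContextHelper;

    public class ScopedFactoryLookup {

        // Resolve a factory through a factory finder configured with a custom
        // location scope ("Version1", "Version2", "Debug", ...).  The same
        // client code reaches a different implementation simply by naming a
        // different scope.
        public static org.omg.CORBA.Object findFactory(ORB orb, String scope)
                throws Exception {
            NamingContext root = NamingContextHelper.narrow(
                    orb.resolve_initial_references("NameService"));

            // Hypothetical layout: one factory finder bound per location scope.
            NameComponent[] finderPath = {
                new NameComponent("factory finders", ""),
                new NameComponent(scope, "")
            };
            FactoryFinder finder =
                    FactoryFinderHelper.narrow(root.resolve(finderPath));

            // "PolicyHome" is a hypothetical factory key; the interface it
            // returns is identical in every scope, only the implementation
            // behind it differs.
            NameComponent[] key = { new NameComponent("PolicyHome", "") };
            org.omg.CORBA.Object[] factories = finder.find_factories(key);
            return factories[0];
        }
    }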
The interfaces are the same; the client only needs to reference the appropriate factory finder. For more information, refer to "Factory Finders" in the Advanced Programming Guide.

Planning the Release
With an iterative model, all things are incremental and iterative. The realities of business environments require that software be deployed in a way that supports the ongoing operation of the business; deployment must not disrupt it. The traditional accommodation of this fact is to restrict application deployment to large, well-identified chunks of functionality. The iterative development model requires that we avoid a monolithic deployment and, at the same time, avoid excessive churn in the production environment. The solution is to approach the testing and integration of components with explicit consideration of use cases. Whether dealing with a simple use case or a more complex one, the deployment of Managed Business Objects must be guided by the use cases. Remember that a use case is a piece of functionality that has some measurable business value to the end user.

The deployment of enterprise applications must be managed with much more formality than would be required for more limited efforts. The size of the effort, the number of components, and the potential direct and secondary impacts are large, and the requirements specification, architecture, design, development, testing, deployment, and management are accomplished by various groups within the organization. Planning for a CB release involves identifying some basic information about the release.
Some basic tactical issues to be resolved will likely have different answers on OS/390 platforms than on UNIX and NT platforms. There will also be significant differences between deploying a new application into a pristine environment and deploying a new release of an existing application, or an application that shares data with existing applications. These issues must be identified and resolved as part of release planning.
The last area of planning for a release is to develop an implementation plan. The implementation plan will cover all of the remaining topics in this paper and clarify how the above issues will be resolved.

Packaging
Releasing and distributing a CB application is similar to releasing other applications in that all run-time artifacts must be gathered together for distribution and configuration on the target platform. Applications built with the Rational Unified Process and CB tooling result in a set of application artifacts stored in the Rational ClearCase control environment, related to the requirements in Rational RequisitePro® and to other work products such as test cases. If new issues or problems, discovered during the deployment process or later, are to be tracked properly back to the requirements, the build and packaging must preserve identification with the original requirements.

In a workstation environment, this is a matter of exporting a build package from ClearCase to the same type of directory structures as used during development. In the OS/390 environment, because RequisitePro does not exist there, a decision will have to be made about the control of the release package. If the organization requires that production source code be housed in a configuration and control environment other than Rational ClearCase, then the code extracted from ClearCase will be processed into the production lock-down control environment. The executable builds will be run from this environment. Make files, install scripts, and processes may have to change between the development environment and the production build. With CB, OBGEN is the command-line program used to build executables in batch mode. The executable code is written to a directory under the control of the production lock-down environment.

Physical Distribution
The method of deploying and installing in a distributed environment must accommodate the configuration control requirements of the operational organization. This means, for instance, that with multiple OS/390 sysplexes, the operational procedures may require specific approvals before executables can be copied from a build machine, or may require that the build be done on each sysplex. This is not an optimal approach for CB, or for distributed environments in general, so these rules and expectations should be clarified early.

Installation
Installing the CB application includes a number of distribution and configuration tasks. Processes may need to be run to populate directory and location services, depending on how those services are used in the application environment; the sketch below shows one such step.
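As an example of such a process, the following minimal Java sketch binds an application factory into the CORBA Naming Service under a well-known path as part of installation. The path segments ("ClaimsApp", "PolicyFactory") are hypothetical; the real layout comes from your naming design, and a CB installation would normally drive this through the System Management tooling rather than hand-written code.

    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NameComponent;
    import org.omg.CosNaming.NamingContext;
    import org.omg.CosNaming.NamingContextHelper;

    public class PopulateNameTree {

        // Bind an already-activated factory reference under a well-known path
        // so that clients can locate it after installation.
        public static void bindFactory(ORB orb, org.omg.CORBA.Object factoryRef)
                throws Exception {
            NamingContext root = NamingContextHelper.narrow(
                    orb.resolve_initial_references("NameService"));

            // Create (or reuse) the application context, then rebind the
            // factory so that re-running the install step is harmless.
            NameComponent[] appCtxName = { new NameComponent("ClaimsApp", "") };
            NamingContext appCtx;
            try {
                appCtx = NamingContextHelper.narrow(root.resolve(appCtxName));
            } catch (org.omg.CosNaming.NamingContextPackage.NotFound nf) {
                appCtx = root.bind_new_context(appCtxName);
            }
            NameComponent[] factoryName = { new NameComponent("PolicyFactory", "") };
            appCtx.rebind(factoryName, factoryRef);
        }
    }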
Training

Most organizations create a training installation for any significant application. This installation is often not as robust as the production one, but it should be built from the same executables. The training installation can be used as the final test of the production distribution process; that way, if there is a fundamental flaw, it can be detected before the production environment is affected. For internal users, the training environment should be available before they are expected to function in the production environment. For applications primarily exposed to external users, the training environment can be used as a sort of beta access area so that selected users can interact with the system before using the real production environment. The training environment should be as much like the production environment as possible, including access to training versions of all of the technologies and external systems the production environment uses.

Support
Support for a CB application resembles support for any interactive application. Good application design can vastly improve an end user's ability to work without assistance, but as with any new functionality, the support organization must be made aware of the new system. One of the important aspects of the Rational Unified Process is that issues, including user problems, can be related back to the requirements process for better management. There must be a mechanism for this, either by giving the support organization direct access to that environment or by providing a process for forwarding issues. Good use of feedback from the support organization can contribute to improvements in usability as well as corrections to the business functions. User support issues can also lead to the identification of performance-related architecture problems that may not be obvious from other performance analysis.

Installation Testing
The development process involves a great many cycles of building and testing code. The installation validation done for these test cycles is a starting point for a production validation. In the production world, the validation will be broader, both because a production deployment will involve executables from a number of build models and because it is necessary to validate the dependent environments, security, and user accessibility. It is also advisable to validate the basic performance of a deployment to ensure that control parameters and configurations are correct. Legacy transactions, databases, distributed clients, shared services, and system management processes should all be examined or exercised to confirm the installation.

Installation testing should be automated. Test scripts from the build and test phase can be modified and extended to assure coverage of all of the above areas. Automated test tooling run against the production environment should allow validation testing to be fast and reliable. A production installation involves modifying an environment that may be up and running, with incremental turning on and off of the new code, so testing must be prompt to assure that an error does not disrupt the system.
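For illustration, a minimal smoke-test sketch in Java follows. It verifies only that the name service is reachable, that a binding the installation was supposed to create resolves, and that the object behind it answers a round trip; the name-tree path is hypothetical, and a real validation suite would go on to drive the actual use cases with the automated test tooling.

    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NameComponent;
    import org.omg.CosNaming.NamingContext;
    import org.omg.CosNaming.NamingContextHelper;

    public class InstallSmokeTest {
        public static void main(String[] args) {
            int failures = 0;
            try {
                ORB orb = ORB.init(args, null);

                // 1. The name service must be reachable from the new installation.
                NamingContext root = NamingContextHelper.narrow(
                        orb.resolve_initial_references("NameService"));
                System.out.println("PASS: name service reachable");

                // 2. The entries the install step was supposed to create must
                //    resolve.  "ClaimsApp/PolicyFactory" is a hypothetical path.
                NameComponent[] path = {
                    new NameComponent("ClaimsApp", ""),
                    new NameComponent("PolicyFactory", "")
                };
                org.omg.CORBA.Object factory = root.resolve(path);

                // 3. A cheap liveness check: _non_existent() forces a round
                //    trip to the server hosting the object.
                if (factory._non_existent()) {
                    System.out.println("FAIL: factory bound but not reachable");
                    failures++;
                } else {
                    System.out.println("PASS: factory resolves and responds");
                }
            } catch (Exception e) {
                System.out.println("FAIL: " + e);
                failures++;
            }
            System.exit(failures == 0 ? 0 : 1);
        }
    }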
Acceptance Testing

Acceptance testing is a slight extension of installation testing. It actually starts well before the production deployment, but it is not final until the owning business user is satisfied with the operational system. As mentioned under Installation Testing, acceptance testing must be prompt. Usually, with proper preparation, the user or owner will be willing to treat participation in the installation validation as their acceptance testing. This will involve extending the installation testing with additional use cases that exercise more of the application than is necessary simply to assure the installation. Acceptance testing is important from an organizational view since the business user is ultimately responsible for the business processes of the system.

Migration
Migration in today's business world is a very traumatic thing. The requirement of continuing with business as usual while substantial new or changed functionality is being deployed is daunting. Migration may involve moving or transforming data, which is called schema migration. It may involve implementing code that has dependencies on specific versions of the run-time infrastructure, and it may involve shutting down and starting up processes in a live, online world. One problem with migration is that if a one-way, one-time migration is planned, any unexpected problem with the new system may leave the organization having either to back out the migration completely or to run with a crippled system while corrections are made. Neither of these is a good scenario.

When data structures are changing, a careful design of the application can provide a system that incrementally migrates the data. If data access is well controlled, the processes that access data can be extended to accommodate data in both the old and new forms. In this way, certain processes in the new system can include the processes that actually transform individual data element instances, and over time the data takes on the new form. This approach carries some risk from the additional processes and some overhead from always checking the form of the data. However, it allows a modern system to be transformed while it is running. At some point, it is advisable to run a background activity that goes through and fixes any remaining untransformed data so the accommodation processes can be retired (see the sketch at the end of this section). Data migration issues are key drivers of the need for data encapsulation.

When run-time dependencies impact migration, the situation is more difficult. CB is evolving to support overlapping versions and reduce version dependencies; however, this same situation could exist when legacy resources are accessed from CB applications. Any existing application systems that have dependencies on old run-time support should be migrated to newer run times before any effort is made to migrate running systems. If the infrastructure cannot be migrated ahead of time, there is usually no choice but to bring the systems down and go through a turnaround work cycle. This approach is traumatic and sensitive because a failure can create the need to completely restore the prior configuration, extending the outage.
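The following Java fragment is a minimal sketch of the incremental approach described above: a data-access layer that accepts both the old and the new record forms and converts old records lazily as they are read. The record layout and the in-memory store are hypothetical stand-ins for the real schema and database; a background sweeper would eventually convert whatever the normal workload never touches.

    import java.util.HashMap;
    import java.util.Map;

    public class CustomerStore {

        static final int OLD_FORMAT = 1;   // pre-release layout: single "name" field
        static final int NEW_FORMAT = 2;   // new layout: separate first and last name

        public static class CustomerRecord {
            int format;
            String name;        // populated only in the old format
            String firstName;   // populated in the new format
            String lastName;
        }

        private final Map table = new HashMap();   // stands in for the real database

        // Every read path tolerates both forms; old records are converted and
        // written back the first time they are touched, so the data migrates
        // incrementally while the system stays online.
        public CustomerRecord read(String customerId) {
            CustomerRecord rec = (CustomerRecord) table.get(customerId);
            if (rec != null && rec.format == OLD_FORMAT) {
                rec = toNewFormat(rec);
                table.put(customerId, rec);
            }
            return rec;
        }

        private CustomerRecord toNewFormat(CustomerRecord oldRec) {
            CustomerRecord rec = new CustomerRecord();
            rec.format = NEW_FORMAT;
            int space = oldRec.name.indexOf(' ');
            rec.firstName = (space < 0) ? oldRec.name : oldRec.name.substring(0, space);
            rec.lastName  = (space < 0) ? ""          : oldRec.name.substring(space + 1);
            return rec;
        }
    }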
Activation

Activation for online, high-availability systems requires the ability to incrementally add processes, as well as the ability to incrementally add users or client processes. The initial implementation of CB-based applications will involve establishing the application environment and gradually allowing user access. This can be done either by providing a middle-tier redirector for clients or by actually modifying the clients to address the new applications. The business needs to validate the new applications as it learns to manage the scaling, performance, security, and other operational aspects of the new environment. Because the business must keep running, it is best to plan ahead for an easy way not only to incrementally add users, but also to move users back to the old system if needed. The nature of the existing and new physical architectures, including the quantities, varieties, and technologies of the clients, will determine whether it is best to build some middle-tier processes to manage redirection or whether it can reasonably be done at the clients.

With migration to new functionality in an existing application, the problem can get a bit more complicated. Because the execution environment cannot arbitrarily manage different executable modules with the same name, migration of function in existing modules must be carefully designed. As in any component world, new methods that do not impact data structures or component interfaces can be deployed at will; the only issue is backing out those modules if something goes wrong, and that can be done by redeploying the old code. When data or interface structures change, there must be provision for parallel functions to coexist. As with the issue of adding and removing clients, new versions of components can be incrementally implemented in an environment if there is some sort of location service that can accommodate versioning and if the component interfaces expose version information. This implies a fundamental infrastructure and application design approach. There is not a generalized solution to this issue at this time; however, it can be addressed in the design of the system, as the sketch below illustrates.
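One possible shape for such a design, offered only as a sketch and not as a CB facility, is a middle-tier locator that resolves version-qualified bindings from the naming service and moves clients between versions by adjusting a single routing parameter. The bindings ("PolicyFactory.v1", "PolicyFactory.v2") and the routing rule are hypothetical; a real system might key the decision off user group, workstation, or a configuration table, which also provides the path back to the old version if something goes wrong.

    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NameComponent;
    import org.omg.CosNaming.NamingContext;
    import org.omg.CosNaming.NamingContextHelper;

    public class VersionedLocator {

        private final NamingContext root;
        private int percentOnNewVersion;   // 0..100, adjusted as confidence grows

        public VersionedLocator(ORB orb, int percentOnNewVersion) throws Exception {
            this.root = NamingContextHelper.narrow(
                    orb.resolve_initial_references("NameService"));
            this.percentOnNewVersion = percentOnNewVersion;
        }

        // A stable hash of the client id keeps each client on one version
        // between calls, while still letting the split be adjusted centrally.
        public org.omg.CORBA.Object locatePolicyFactory(String clientId)
                throws Exception {
            boolean useNew =
                    (clientId.hashCode() & 0x7fffffff) % 100 < percentOnNewVersion;
            NameComponent[] name = {
                new NameComponent("ClaimsApp", ""),
                new NameComponent(useNew ? "PolicyFactory.v2" : "PolicyFactory.v1", "")
            };
            return root.resolve(name);
        }

        // Rolling back is just a configuration change: set the percentage to
        // zero and new lookups go to the old version again.
        public void setPercentOnNewVersion(int percent) {
            this.percentOnNewVersion = percent;
        }
    }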
Conclusion

Deploying applications in the Rational Unified Process and CB environments is different from what you may be used to, but the primary differences do not radically change the development lifecycle. The Rational Unified Process places a strong emphasis on the use of requirements and use cases, and on tracking subsequent design, build, and test efforts, as well as issues, back against those requirements and use cases. From the deployment phase perspective, this means that deployment testing, user assistance, and system management must feed back issues just like any other interested party. The real differences in deploying in this environment are in the granularity and incremental nature of the applications. Because componentized systems are designed to allow flexible and incremental assembly of business processes, deployment of components will potentially impact a great many business processes. However, because components shield their clients from the details of their data and internal processes, and present clear interfaces to their clients, components can be deployed easily. The theoretical ease of this becomes a problem only if a component fails in a way that damages data or fails to keep an interface consistent.

This information is provided by IBM Corporation. © Copyright IBM Corporation 1999-2000