
Tuesday, January 15, 2019

System integration

The term integration appears in published articles, e-mail messages, correspondence, proposals, and even casual conversations. After many years of project practice, and many misunderstandings, failed meetings, and workshops, it can only be stated that the word has multiple and often misinterpreted meanings. For technical publications (research and trade), the term must be qualified with context, or it is impossible to have a meaningful conversation. Below, multiple alternative definitions (each reasonable in the literature for the appropriate context) are presented and explained in some detail.

Research limitations/implications - The paper is not exhaustive, since new definitions of integration may exist or may emerge.

Originality/value - The main contribution of the paper is that it provides clarity on a key term that is frequently used in enterprise systems research. The paper is useful to any researchers or practitioners who are focused on enterprise system implementation.

Keywords: Integration, Interface management, Applications, Information systems, Research

Paper type: General review

Introduction and importance

Integration is a common term in the enterprise systems literature. Seldom does a meeting occur in which the word is not used multiple times, often within quite technical contexts. Unfortunately, our experience is that individuals often have different understandings of what the word means. Loosely speaking, there is a general consensus that integration concerns making applications work together that were never intended to work together, by exchanging information through some kind of interface. That is certainly part of the story, but this paper argues that there is more to be said.

Since the earliest days of computing, the term integration has been used in both the trade and academic literature to describe a process, a condition, a system, and an end-state. Given that these competing labels have very different meanings, their indiscriminate usage is often obscure and invites confusion. For example, a muddled conflation of process and condition encourages circular definitions that possess little explanatory power.

Consider the following advertisement (Figure 1) from the Oracle Corporation and the corresponding quote from the Oracle CEO, Larry Ellison. Figure 1 is clearly an appeal for a type of integration that we call Big I: having all relevant data aligned with a single data model and stored only once. The implication is that you can place all of your data for the set of business processes listed in the middle column of Figure 1 inside the Oracle E-Business Suite and significantly reduce total cost of ownership (TCO). In fact, the advertisement claims that Oracle saved over $1 billion USD per year by implementing Big I. Likewise, there are the well-known problems of complexity and of managing data integrity across multiple data sources (Gulledge and Sommer, 2004). Consider Figure 2, from an unnamed company. Figure 2 shows a situation that is described in the literature as systems integration, i.e. the interfacing of systems so that they can pass information across a complex technology landscape.
We call this type of integration a form of Little I, and we note that this form of Little I (point-to-point interfaces) is an expensive proposition. Data must be constantly harmonized and cleansed across multiple data sources, and any change to one system can lead to complex and costly re-testing, or even re-design and re-coding, of interfaces.

Clearly, we have presented two extremes, and by and large both have been rejected by large organizations worldwide. Most organizations do not want to place all of their information in one application (e.g. Oracle, SAP, Microsoft, etc.) for a number of different reasons, but at the same time, no one wants the problems that are associated with implementations like the one shown in Figure 2. There are other options. In fact there are many options, and that is the point of this paper. All of the options (including the two above) are called integration. So what is integration? As one might guess, it depends on the context, and the usage must be qualified. Big I may not be achievable, and it may not even be appropriate. If Little I is appropriate, what type of Little I is appropriate, given the situation and the state of emerging technologies?

Figure 2. Interfacing system components to define an enterprise solution

This paper addresses those questions, and it also categorizes the most commonly used forms of Little I in the context of enterprise system implementation. This categorization and the associated discussion are essential; without them it is impossible to have a meaningful discussion about application integration.

Integration (Big I)

To establish a baseline, the following definition is proposed for integration.

Integration (Big I): integration implies that all relevant data for a particular bounded and closed set of business processes is processed in the same software application. Updates in one application module or component are reflected throughout the business process logic, with no complex external interfacing. Data are stored once and are directly shared by all business processes that are enabled by the software application.

This is a rather comprehensive and restrictive definition that revives memories of first-generation enterprise resource planning (ERP). The business process implications of Big I are discussed in some detail by Gulledge and Sommer (2003). To maintain clarity throughout this paper, the above definition will always be referred to as Big I. Big I is definitely the goal of management, especially for routine business processes. It implies one source of truth for those business processes that are enabled by core ERP solutions. The idea is simple: if all data are stored once and shared, then integrity issues are less likely to occur. The TCO is significantly lower, since interfaces across application components are not required. Furthermore, complexity is significantly reduced.
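As a loose illustration of the definition (and not any vendor's actual design), the short Python sketch below shows what "data stored once and directly shared" means in practice: three hypothetical modules work on the same record, so an update made by one is immediately visible to the others with no interface code and no reconciliation. All names and values are invented.

# Hypothetical sketch of Big I: one shared store, data recorded once,
# every "module" works directly on the same record -- no interfaces.

shared_store = {}  # single source of truth for the bounded set of processes

def create_order(order_id, material, qty):
    # order-management module writes the record once
    shared_store[order_id] = {"material": material, "qty": qty, "shipped": False}

def ship_order(order_id):
    # logistics module updates the very same record -- nothing to replicate
    shared_store[order_id]["shipped"] = True

def invoice_amount(order_id, unit_price):
    # billing module reads the same record and immediately sees the shipment
    order = shared_store[order_id]
    return order["qty"] * unit_price if order["shipped"] else 0.0

create_order("SO-1", "PUMP-100", 3)
ship_order("SO-1")
print(invoice_amount("SO-1", 250.0))  # 750.0, with no interfacing or reconciliation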
Figure 3 shows how Big I relates to Little I in a simple example related to US Army logistics. In this example, Army logistics processes are scoped within the SAP solution as Big I, i.e. there is no interfacing across the SAP components. However, some of the logistics business processes flow outside of the Army. In this case, we indicate the transportation processes that are part of the end-to-end logistics business processes, but they fall outside of the Army and are managed by the US Transportation Command (TRANSCOM). The systems that support this segment of the end-to-end process are not SAP, and they are not even owned by the Army. This is a classic composite application, and some form of Little I must be implemented in order to preserve the integrity of the business process logic.

Figure 3. An example of Big I and Little I in the same enterprise

Figure 3, even though it is a simple picture, says much about integration. First, it suggests that large and complex organizations are unlikely to place all of their business processes in a single application. While the assertions of Figure 1 are accurate, there are at least two reasons why single-instance ERP will not occur in most firms: (1) the internet opened more options for Little I, and (2) the culture and control of the internal and external system integration communities will not allow such consolidation. Like it or not, given the current state of technology, what we are going to have to live with is a mixture of Big I and Little I, at least as long as current trends continue. The reality of this situation is reinforced by the fact that the larger software providers are opening their products and making them more flexible for mix-and-match opportunities with Little I. This is evidenced by such products as the Oracle Data Hubs and SAP NetWeaver technologies. While it is true, just as Figure 1 shows, that the TCO could be reduced by moving to Big I, most organizations have neither the flexibility nor the desire to do that. However, this does not mean that Big I is dead. There will always be pockets of Big I, connected by Little I to other pockets of Big I.

This is not a technical assertion; it is directly related to common sense. For example, one would never rip a product like SAP core ERP apart and then interface it back together again. This is self-inflicted pain, and it can be avoided by simply implementing the product the way it was intended to be implemented. Preserve the integrity of the product by implementing Big I whenever possible, and use Little I to include those components that cannot be included in the integration domain. One would never dream of separating financials from materials in an SAP implementation and then interfacing them back together again. Even worse, it makes still less sense to stand up independent SAP solutions in different divisions of a company, each operating as its own fiefdom, with no enterprise orientation. We will revisit implementation options later, but before doing that, we must examine the options for Little I further. The choice of a particular Little I technology has significant implications for the types of mix-and-match options that are available for consideration.

Integration (Little I)

As previously mentioned, all forms of Little I are some form of interfacing, even though they are loosely called system integration. Much has been written on the subject, so we focus only on those types of Little I that are most relevant for the implementation of enterprise systems: point-to-point integration, database-to-database integration, data warehouse integration, enterprise application integration (EAI), application server integration, and business-to-business (B2B) integration.
Point-to-point integration

This is the most expensive form of integration. Point-to-point integration is the pairwise development of interfaces among systems. The data models of the source and target systems are known, and someone (e.g. a system integrator) develops the code for passing information back and forth. Sometimes accelerator products are used, a good example being the IBM MQSeries middleware products that are now included as a part of WebSphere. MQSeries still requires writing code at both the source and target systems. The approach to point-to-point integration is well known, most frequently involving changing both applications to use a middleware layer by rewriting the transaction-handling code to communicate across the two applications. The traditional model of interaction is through remote procedure calls. The largest problem with point-to-point integration is shown in Figure 4, a situation that Schafer (2002) attributes to a customer.

Figure 4. Example of point-to-point integration

As the number of interfaced components increases, the number of interfaces to be maintained increases dramatically (in the worst case, every pair of systems needs its own interface), and the TCO increases with it. As a real example, consider the financial interfaces to a Navy SAP solution shown in Figure 5. Figure 5 is a good example of the previously mentioned situation that can arise when financials are separated from materials or assets in an enterprise solution and must then be interfaced back to the ERP product, violating the integrity of the solution. While Figure 5 reflects reality and could not easily have been avoided, the SAP product was never intended to be implemented in this way. The integrity of the product is violated by destroying the Big I that is engineered into the product.

Figure 5. From Defense Finance and Accounting Services to the US Navy pilot SAP implementations

For all of the reasons mentioned above, point-to-point integration should be avoided and only used when there are no other options.
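To make the scaling problem concrete, the following back-of-the-envelope Python sketch assumes the worst case, in which every system must exchange data with every other system; the numbers are illustrative only.

# Worst-case point-to-point interfacing: every pair of systems needs its own
# hand-built interface, so n systems require n*(n-1)/2 interfaces.

def point_to_point_interfaces(n_systems: int) -> int:
    return n_systems * (n_systems - 1) // 2

for n in (3, 5, 10, 20):
    print(f"{n} systems -> {point_to_point_interfaces(n)} interfaces to build and maintain")

# 3 -> 3, 5 -> 10, 10 -> 45, 20 -> 190.  Each new system adds n-1 new
# interfaces, and a data-model change in one system can force re-testing of
# every interface that touches it.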
Database-to-database integration

This form of Little I requires the sharing of information at the database level, thereby providing interoperable applications. The basic replication solution leverages features built into many databases to move information between databases, as long as the same schema information is maintained on all sources and targets. There are companies that provide middleware to accelerate this process. Database replication software from companies such as Pervasive (Integration Architect) and DataMirror (Constellar Hub) permits moving information among many different database products with different schemas. Figure 6 shows the conceptual layout for this form of Little I.

Figure 6. Conceptual layout for database-to-database integration

While this integration procedure may work well for database applications, it does not work as well for enterprise applications. Most enterprise applications have multi-tiered architectures, where, even though the applications reside at a separate tier, the business process logic is bound to the master data. So, if one simply passes information at the database level, it is easy to create data integrity problems. Enterprise software vendors typically publish application program interfaces (APIs) that allow interfacing at the application level, and it is best to use these APIs. If you update the database without using the APIs, then you are violating the Big I that is engineered into the product, and integrity problems are a likely result. See the Anonymous (1999) article in Enterprise Development, where some of these difficulties are discussed in the context of interfacing with SAP's R/3 product. For enterprise implementations, this form of Little I should be avoided.

Data warehouse integration

This form of Little I is similar to database-to-database integration, but instead of replicating data across various databases, a single virtual database is used to map the data from any number of physical databases, which can be of various brands, models, or schemas. In other words, a new data warehouse is created, and information is aggregated from a number of sources, where it may be analyzed or used for report generation. The effectiveness of this approach depends on the sophistication of the tools that are used and on the quality of the data that are pulled from the various sources. Once the data are aggregated, reporting is straightforward; however, if business process logic must be applied to the aggregated data, then that logic must be created at the data warehouse level. The basic layout for data warehouse integration is shown in Figure 7.

Figure 7. Conceptual view of data warehouse integration

If the integration is at the database level, the same problems associated with database-to-database integration still apply. If the integration is at the application level, then data warehouse integration is similar to point-to-point integration, and the problems with that approach also apply. This form of integration is quite popular, even though it is expensive to maintain. The reason data warehouse integration is popular is that it allows all parties involved to maintain their individual stove-piped environments while sharing selected data in a neutral environment. In short, one is trading Big I for autonomy.

An example of a large data warehouse integration effort in the US Army is shown in Figure 8. The Logistics Integrated Database (LIDB) aggregates information from many stand-alone systems, with the objective of providing enterprise-level analytics. As the figure indicates, the input data are aggregated from many sources, and output data are pushed to many targets. Constant cleansing and harmonization are required in order to avoid integrity problems.

Figure 8. A conceptual view of the LIDB

Many enterprise solutions, like those from SAP and Oracle, use data warehouse solutions for reporting and enterprise analytics. However, this static view of enterprise data is not the same as Big I. Even if the concept is extended to include a federated query capability, with the data warehouse serving as a virtual repository of metadata, it is still no substitute for Big I. The big problem, as previously mentioned, is maintaining business process logic at the data warehouse level. While this option preserves organizational autonomy, it is indeed costly. The data that are pushed into the warehouse must be constantly monitored for quality, and changes to any one of the target or source systems create significant testing and/or additional coding problems.
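The following minimal Python sketch illustrates the data warehouse pattern described above; the source systems, field names, and quantities are invented for illustration. It shows why reporting over the aggregate is easy while the mapping and cleansing burden sits entirely at the warehouse level.

# Hypothetical data warehouse aggregation: two stove-piped source systems with
# different schemas are mapped into one warehouse structure for reporting.
# All names, fields, and values are invented for illustration.

wholesale_system = [   # source A: wholesale stock, keyed by NSN
    {"nsn": "1005-01-123", "on_hand": 40},
]
retail_system = [      # source B: unit-level stock, different field names
    {"stock_no": "1005-01-123", "serviceable": 30, "unserviceable": 5},
]

warehouse = []         # the aggregated, report-only copy of the data

def load_warehouse():
    # the mapping rules live here, at the warehouse level,
    # not in the source systems
    for row in wholesale_system:
        warehouse.append({"item": row["nsn"], "echelon": "wholesale",
                          "qty": row["on_hand"]})
    for row in retail_system:
        warehouse.append({"item": row["stock_no"], "echelon": "retail",
                          "qty": row["serviceable"] + row["unserviceable"]})

load_warehouse()
print(sum(r["qty"] for r in warehouse if r["item"] == "1005-01-123"))  # 75
# Reporting over the aggregate is easy, but the copy goes stale the moment a
# source changes, so constant cleansing and reconciliation are required.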
Enterprise application integration

EAI is the sharing of data and business process logic across heterogeneous or homogeneous instances through message-oriented middleware (MOM). EAI may be managed by package vendors (e.g. SAP and Oracle) or through solutions provided by third-party vendors (e.g. IBM, webMethods, etc.). EAI is sometimes called application-centric interfacing. EAI is used to connect multiple systems at the application or database level, using a form of middleware that is sometimes called a broker. The middleware moves information into and out of multiple systems, using pre-engineered connectors. The connectors are a source of competitive advantage for EAI software providers, because if a connector already exists for the target and source applications, the cost of interface development can be reduced.

The problems associated with point-to-point integration are reduced by adopting a hub-and-spoke model for sharing information. The EAI middleware allows one to write a single interface between each application and the middleware, instead of individually connecting each application to every other application. An example of a hub-and-spoke architecture is shown in Figure 9. Once the information is extracted, it is sent to a central server using some sort of messaging system, where the information is processed and routed to the target system. If there is a gap in the required business process logic, that logic can be created on the central server for execution. In theory, any-to-any document exchange is possible, taking into account the business process logic in the source and target systems.

Figure 9. Hub-and-spoke architecture for enterprise application integration

Using connectors, the EAI software processes messages from packaged applications, databases, and custom applications through a queuing engine. When an event occurs (e.g. a transaction in an ERP package or a database table update), a message about the event is published to the queue. Subscribers to the queue access the event envelope and analyze its content, and if the event is intended for processing in the target system, the envelope contains everything necessary for recreating the event there. The queuing engine ensures that all events are processed in the correct sequence, preserving transactional integrity.

Many companies provide pre-packaged EAI solutions, and the market is extremely competitive. The hub-and-spoke model using connectors has been operational for many years, and the products have reached a mature level. However, we note that EAI is still interfacing, and while it is a significant improvement over point-to-point integration, EAI can be costly to implement and costly to maintain. The main benefits flow from being able to use pre-configured connectors, while leveraging industry partnerships that yield certified interfaces. Significant consolidation has occurred in recent years among the companies that provide EAI solutions, as the larger software providers have moved in to provide EAI solutions that interact with their Big I products. For example, SAP now supports EAI as part of its NetWeaver solution, where previously SAP had relied on third-party providers like IBM and webMethods for EAI capabilities. It is also important to note that EAI is typically used inside the enterprise, as opposed to across enterprises; for this reason EAI is sometimes called application-centric interfacing. The objective is to interface processes and share data within the enterprise. The inter-enterprise model falls under a class of solutions called business-to-business (B2B) commerce, and this form of interfacing is discussed in a later section.
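The toy Python broker below (not any vendor's product) captures the hub-and-spoke mechanics just described: each application needs only one connector to the hub, events are published to a queue as self-describing envelopes, and subscribers recreate the events in sequence. All names are hypothetical.

# Toy hub-and-spoke EAI broker: each application needs exactly one connector
# to the hub instead of one interface per partner application.
from collections import deque

class Hub:
    def __init__(self):
        self.queue = deque()          # queuing engine: preserves event order
        self.subscribers = []         # target-system connectors

    def publish(self, source, event_type, payload):
        # the envelope carries everything needed to recreate the event
        self.queue.append({"source": source, "type": event_type, "payload": payload})

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def deliver(self):
        while self.queue:
            envelope = self.queue.popleft()   # in-sequence processing
            for handler in self.subscribers:
                handler(envelope)

hub = Hub()

# hypothetical target-system connector: only reacts to events it cares about
def crm_connector(envelope):
    if envelope["type"] == "customer_created":
        print("CRM recreates customer", envelope["payload"]["id"])

hub.subscribe(crm_connector)
hub.publish("ERP", "customer_created", {"id": "C-42", "name": "Acme"})
hub.deliver()   # -> CRM recreates customer C-42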
Application server integration

This is the most sophisticated form of Little I discussed in this paper. Think of application server integration as the creation of a single, centralized application (logical or physical) that can provide a common set of services to any number of other remote applications. These services are common business objects that are shared across enterprise applications. The sharing and reuse of services is the goal of distributed objects and application servers. Application server integration enables the enterprise by sharing services across the enterprise. The concept of application server integration is shown in Figure 10.

Figure 10. Application server integration concept

Modern systems invoke shared objects to share business logic and to interact with resources (such as databases, ERP systems, or queues). In modern ERP systems these shared objects may be more highly aggregated as wrapped transactions. For example, when configuring the SAP solution, one aligns transactions with process steps; a process step can be associated with one or more transactions. If the transactions associated with a process step are bundled together and wrapped as a web service, they may be shared with other SAP and non-SAP components. SAP calls this aggregated object an Enterprise Service, and it is the basis of SAP's Enterprise Services Architecture (SAP AG, 2004). Application integration occurs through the sharing of business logic, as well as through the back-end integration of many different applications and resources. The application server binds the data from a relational or object-relational database to the common shared objects.

The main advantage of application server integration is that the interfaced applications or components are tightly coupled to each other by sharing methods. By our assessment, application server integration is Little I, but given the limits of current technology it is the best approximation to Big I that we can provide. This is because the data integrity checks and the business logic bound to the objects are always shared and, therefore, never circumvented. The SAP example is not unique. Most of the major software vendors have a similar strategy. For example, Figure 11 shows the Oracle strategy for application server integration. The key component of Figure 11 for our discussion is in the right-center of the figure: the Oracle Application Server manages the shared objects, and at runtime TopLink manages persistence between Java objects and database tables. At the conceptual level, the integration approaches pursued by Oracle and SAP are similar. The widely recognized disadvantage of application server integration is that significant changes may have to be made to all source and target applications so that they can share the common objects.
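As a rough sketch of the idea (not SAP's or Oracle's actual interfaces), the Python fragment below wraps a hypothetical process step as a single shared service: every caller, local or remote, goes through the same object, so the integrity checks bound to it cannot be circumvented.

# Hypothetical shared service on an application server: the business logic and
# its integrity checks live in one place and are reused by every caller.

class OrderService:
    """A wrapped process step exposed as a single shared service."""

    def __init__(self):
        self._orders = {}        # stands in for the bound database tables

    def create_order(self, order_id, qty):
        # integrity checks are part of the shared object, so no caller,
        # local or remote, can bypass them
        if qty <= 0:
            raise ValueError("quantity must be positive")
        if order_id in self._orders:
            raise ValueError("duplicate order id")
        self._orders[order_id] = {"qty": qty}
        return self._orders[order_id]

# an internal module and an external (non-SAP, say) component both call the
# same service instead of writing to the tables directly
service = OrderService()
service.create_order("SO-9", 5)          # accepted
try:
    service.create_order("SO-10", -1)    # rejected by the shared logic
except ValueError as err:
    print("rejected:", err)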
