
IT Systems Within the Product Lifecycle


Industrial companies are characterized by developing, designing, and manufacturing physical goods. While technological leaders create increasingly complex products that are sold in bundles with non-material extensions like services and maintenance, the actual products are still of a physical nature and require intricate development and manufacturing processes in which expertise from several domains needs to be brought together.

For example, technical goods are regularly composed of mechanical, software-, and electronics-based components, as well as fluid or electric power modules. Each of those domains comes with specific tasks and needs specific IT support. Three examples illustrate this: CAD systems, simulation, and production control. As for CAD, mechanical design, electronic design, and factory layout planning all apply Computer Aided Design (CAD) systems to build Digital Mock-Ups (DMUs). The concrete functionality and data models of those CAD systems, however, vary strongly depending on the tasks they have to fulfill. As a consequence, industrial businesses use several types (and brands) of CAD systems in parallel, often one per domain.

Another example is simulation: The development and manufacturing of high-end products involves specific tasks such as finite element simulation for strength calculations, the simulation of product functionality, or manufacturing planning. Each of those tasks is supported by its own specific IT system. As a result, industrial businesses use a broad variety of heterogeneous IT systems.

A third example is production control: Manufacturing is increasingly digitalized with numeric control systems, digital actuators, and sensors. Specific steering and control tasks lead to specific IT systems. For example, manufacturing execution systems (MES) are increasingly used to collect machine and sensor data for right-time control and steering tasks. These systems have to fit the respective manufacturing processes and tools; therefore, industrial businesses apply separate MES in different manufacturing environments here as well.
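To make the right-time control idea concrete, the following is a minimal sketch, not taken from the text, of the kind of threshold check an MES might run over collected sensor readings; the sensor name, the limit, and the record layout are hypothetical illustrations.

```python
# Minimal sketch of a right-time control check over machine sensor readings,
# as an MES might perform it. Sensor names, thresholds, and the record
# layout are hypothetical, not taken from a real system.

SPINDLE_TEMP_LIMIT_C = 85.0  # hypothetical alert threshold

def check_reading(reading: dict) -> str | None:
    """Return an alert message if a reading violates a control limit."""
    if reading["sensor"] == "spindle_temperature_c" and reading["value"] > SPINDLE_TEMP_LIMIT_C:
        return f"ALERT machine={reading['machine_id']}: spindle at {reading['value']} °C"
    return None

readings = [
    {"machine_id": "M-07", "sensor": "spindle_temperature_c", "value": 71.2},
    {"machine_id": "M-07", "sensor": "spindle_temperature_c", "value": 88.9},
]

for r in readings:
    alert = check_reading(r)
    if alert:
        print(alert)  # in a real MES this would trigger a steering action
```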

Taking into account that most industrial businesses act as global players, the IT infrastructure regularly becomes even more heterogeneous, with different plants bringing in their own IT systems depending on their size and functions. In summary, the points mentioned above lead to a heterogeneous IT system landscape. And so far, business-oriented systems like ERP and CRM systems have not even been considered: Even in medium-sized industrial businesses, it is common to find a large number of such IT systems. As a result, relevant product, process, and machine data is distributed across the enterprise. For holistic decision support, it is indispensable to collect and semantically integrate this data.
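As an illustration of what semantic integration can mean at the data level, here is a minimal sketch that maps records from two hypothetical source systems onto one common schema; all system and field names are invented for the example (the ERP field names merely mimic common conventions).

```python
# Minimal sketch of semantic integration: records from two hypothetical
# source systems are mapped onto one common schema so they can be
# analyzed together. All system and field names are illustrative.

def from_erp(rec: dict) -> dict:
    return {"part_id": rec["MATNR"], "plant": rec["WERKS"], "cost": rec["STPRS"]}

def from_mes(rec: dict) -> dict:
    return {"part_id": rec["part_no"], "plant": rec["site"], "scrap_rate": rec["scrap_pct"] / 100}

erp_rows = [{"MATNR": "4711", "WERKS": "DE01", "STPRS": 12.40}]
mes_rows = [{"part_no": "4711", "site": "DE01", "scrap_pct": 1.8}]

# Merge on the shared keys of the common schema.
integrated = {}
for row in map(from_erp, erp_rows):
    integrated.setdefault((row["part_id"], row["plant"]), {}).update(row)
for row in map(from_mes, mes_rows):
    integrated.setdefault((row["part_id"], row["plant"]), {}).update(row)

print(list(integrated.values()))
# [{'part_id': '4711', 'plant': 'DE01', 'cost': 12.4, 'scrap_rate': 0.018}]
```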

Identification and Sensor Technologies 

Embedded, wirelessly interconnected, and mobile IT components that jointly provide new types of IT services have been discussed under the heading of “Ubiquitous Computing” (UC) for quite a while among scholars. It was the attention given to Radio Frequency Identification (RFID) technology that eventually propelled the diffusion of viable business applications of UC.

RFID is applied in a variety of scenarios, ranging from yard management and theft prevention for tools to tracking product flows in production. A crucial development has been the diffusion of standards, especially the “Electronic Product Code” (EPC) family of standards, which covers not only codes and physical interfaces but also middleware platforms and services for data exchange across enterprise borders.

Originating in the retail sector, EPC is also increasingly applied in the manufacturing industry. Beyond its initial focus on identification, RFID can be augmented by sensor technologies that measure environmental states such as temperature, humidity, acceleration, or strain.
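For illustration, EPC pure-identity URIs of the SGTIN scheme encode a company prefix, an item reference, and a serial number in a globally unique way. The following is a minimal sketch of parsing such a URI; the concrete numbers are made-up example values.

```python
# Minimal sketch: parsing an EPC pure-identity URI of the SGTIN scheme
# (urn:epc:id:sgtin:CompanyPrefix.ItemReference.SerialNumber) into its
# components. The concrete values below are made-up examples.

def parse_sgtin(uri: str) -> dict:
    prefix = "urn:epc:id:sgtin:"
    if not uri.startswith(prefix):
        raise ValueError("not an SGTIN pure-identity URI")
    company_prefix, item_reference, serial = uri[len(prefix):].split(".")
    return {
        "company_prefix": company_prefix,  # identifies the issuing company
        "item_reference": item_reference,  # identifies the product type
        "serial": serial,                  # identifies the individual item
    }

print(parse_sgtin("urn:epc:id:sgtin:0614141.112345.400"))
# {'company_prefix': '0614141', 'item_reference': '112345', 'serial': '400'}
```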

Direct effects of the application of RFID and sensor technology result from the automation of data capturing activities and encompass cost savings, faster availability of data, data quality improvements, and the avoidance of various mistakes and inefficiencies that result from erroneous manual data input. More interesting are the information effects that are an indirect consequence of the real-time data availability and the potentially higher resolution of automatic measurements of the presence, identity, and/or state of objects. This can even enable completely new ways of conducting processes or designing products (transformation effects). From a decision support perspective, the new data enables not only real-time process steering but also ex-post analysis of process instances, particularly if objects are identified at item level.

Among other things, this allows process analysis at shop floor level, e.g. if WIP items or transportation material (cases, pallets, containers, etc.) are tagged with pertinent RFID transponders and tracked with systems like MES. In the case of EPC-based applications, the definition of a globally unique identifier like the EPC even fosters data integration and analysis across enterprise borders. This is of particular interest in the realm of SCM, e.g. for pinpointing root causes of loss, faults, and damages, for identifying and analyzing routing options, or for evaluating lot sizes or transport modalities. Another relevant development towards a UC manufacturing environment with a BI impact results from the increasing number of network-attached and IT-controlled “smart” machines and the trend to collect, distribute, and archive machine data in digital form. UC data can enter the realm of BI either indirectly via operational systems (material management systems, warehouse management systems, SCM, ERP, MES, PPS, etc.), or it can bypass this layer by being fed more or less directly into the DWH environments (after going through basic filtering and data transportation steps with specific “edge ware” and middleware). Either way, UC data can become a rich source of insights both for the steering and iterative adjustment of processes and for the design of new ones.
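A typical basic filtering step in such “edge ware” is the removal of duplicate reads: a tag passing a reader gate is usually reported many times within a short interval. The sketch below illustrates this with a simple time-window deduplication; the window size and event layout are assumptions for the example.

```python
# Minimal sketch of an edge-level filtering step: collapse repeated RFID
# reads of the same tag at the same reader within a short time window,
# before the events are forwarded towards the DWH environment.
# The 5-second window and the event layout are illustrative assumptions.

DEDUP_WINDOW_S = 5.0

def deduplicate(events):
    """Keep an event only if the same (tag, reader) was not seen recently."""
    last_seen = {}
    for ev in sorted(events, key=lambda e: e["ts"]):
        key = (ev["epc"], ev["reader"])
        if key not in last_seen or ev["ts"] - last_seen[key] >= DEDUP_WINDOW_S:
            yield ev
        last_seen[key] = ev["ts"]

raw = [
    {"epc": "urn:epc:id:sgtin:0614141.112345.400", "reader": "gate-1", "ts": 0.0},
    {"epc": "urn:epc:id:sgtin:0614141.112345.400", "reader": "gate-1", "ts": 0.4},
    {"epc": "urn:epc:id:sgtin:0614141.112345.400", "reader": "gate-1", "ts": 9.0},
]
print(list(deduplicate(raw)))  # the 0.4 s duplicate is filtered out
```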

Unstructured Data 

The discussed digitization of the shop floor leads to a large amount of structured data, e.g. sensor data. However, numerous sources are not as structured and are therefore not readily processable by BI applications. Examples include reports, emails, or plain-text documentation. Even the results of BI-based analyses are usually at some point translated into an unstructured form (e.g., a PDF file) for purposes of distribution or archival; the handling of these procedures is still considered unsatisfactory in many larger organizations.

Even more challenging are non-text representations of information, e.g. pictures from optical sensors or drawings, which are also needed for decision support, especially within engineering tasks. This leads to the requirement of coupling “classical” BI infrastructures for management support with systems that are specifically designed to handle, refine, and analyze semi- and unstructured data. In general, semi- and unstructured data is either integrated into the information access layer (e.g. by means of interlinked documents), integrated into the data support and information generation layers by processing (existing or extracted) meta data, or distributed via components from the domain of knowledge management for knowledge storage and distribution.
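One common way to make such material usable is the meta-data route mentioned above: extract a few structured attributes from each document and feed those into the data layer. A minimal sketch follows, with invented document contents and invented extraction patterns.

```python
# Minimal sketch of the meta-data route for unstructured sources: extract
# a few structured attributes from free-text documents so that they can be
# indexed and linked in the data layer. Patterns and contents are invented.

import re

def extract_metadata(doc_id: str, text: str) -> dict:
    part_ids = re.findall(r"\bP-\d{4}\b", text)          # hypothetical part-number format
    dates = re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text)   # ISO dates mentioned in the text
    return {"doc_id": doc_id, "parts": sorted(set(part_ids)), "dates": dates}

report = "Inspection on 2013-06-12: hairline crack found on P-4711, P-4712 unaffected."
print(extract_metadata("QR-0815", report))
# {'doc_id': 'QR-0815', 'parts': ['P-4711', 'P-4712'], 'dates': ['2013-06-12']}
```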

Extending the Scope of Integrated Decision Support

In the following section, business scenarios are presented that highlight the potential of integrating the data sources discussed in section three, taking into account the concepts and technologies introduced in section two.
This leads to a BI with a much broader scope: First, product, process, and shop floor design phases are explicitly considered, necessitating dedicated product and process DWHs. Second, process steering and management become part of BI, which leads to components for Operational BI (OpBI) and Business Process Intelligence (BPI). Third, large and unstructured data sources are considered in more analysis scenarios.

Including Product and Shop Floor Design Phases 

Within the industrial product creation process and the subsequent phases of product usage and recycling, there are several decisions with strategic implications. For this reason, it is advisable to devote attention to decision support within the product lifecycle. The following two exemplary management tasks illustrate the typical decisions contained in the product lifecycle and the resulting information demand that needs to be covered by BI.

Management of Engineering Regulation and Standardization

The management of engineering regulation and standardization is usually part of the role of Knowledge Engineers (KEs). KEs have to deal with, for example, identifying relevant engineering knowledge, acquiring that knowledge, and encoding it as input for knowledge or expert systems, construction rules, (construction) scripts, or templates. A primary task of these engineers is the acquisition and association of (fragmented) information in order to regulate construction, with the objectives of arriving at a holistic view of the relevant information and of implementing a permanently active learning organization. These are prerequisites for supporting a “design for X” approach (e.g. design for assembly, design for logistics, or design to standards). As most business strategies require more than one “design for X” commitment (e.g. simultaneously demanding design for cost, design for quality, and design for assembly), these commitments very often conflict: Reaching a higher quality level (design for quality) requires a trade-off against the reduction of costs (design for cost). The KEs therefore have to figure out the impacts of changes across the different commitments, if possible based on historical data.
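To illustrate the kind of trade-off evaluation meant here, the following sketch scores design alternatives against several “design for X” objectives with a simple weighted sum; the weights, objectives, and all figures are invented for the example.

```python
# Minimal sketch of weighing conflicting "design for X" commitments:
# each design alternative is scored against several objectives with a
# weighted sum. Weights, objectives, and all figures are invented; real
# KEs would derive such scores from historical data.

WEIGHTS = {"cost": 0.4, "quality": 0.4, "assembly": 0.2}  # must sum to 1

alternatives = {
    "design_A": {"cost": 0.9, "quality": 0.6, "assembly": 0.7},  # cheap, mediocre quality
    "design_B": {"cost": 0.5, "quality": 0.9, "assembly": 0.8},  # expensive, high quality
}

def score(ratings: dict) -> float:
    return sum(WEIGHTS[obj] * ratings[obj] for obj in WEIGHTS)

for name, ratings in sorted(alternatives.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(ratings):.2f}")
```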

Examples of the data that needs to be collected and integrated for this include actual geometric data and its history of changes (as stored in CAD systems); data on actual and historic assemblies, e.g. with respect to the reuse of parts (stored in PDM/PLM systems); non-financial KPIs (e.g. timeliness) from different manufacturing sites (mostly extracted from MES); plan, actual, and historical data about resources in production (from PPS systems); and budgeted and actual financial key performance indicators (from ERP systems).
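One of the named sources, assembly data from PDM/PLM systems, directly supports a part-reuse figure. A minimal sketch with invented assembly structures and part numbers:

```python
# Minimal sketch of a part-reuse KPI computed from PDM/PLM assembly data:
# which share of the parts in a new assembly is reused from earlier ones?
# Assembly structures and part numbers are invented for the example.

historic_assemblies = {
    "ASM-2012-01": {"P-4711", "P-4712", "P-4713"},
    "ASM-2012-02": {"P-4711", "P-4720"},
}
new_assembly = {"P-4711", "P-4713", "P-4799"}

known_parts = set().union(*historic_assemblies.values())
reused = new_assembly & known_parts
reuse_rate = len(reused) / len(new_assembly)

print(f"reused parts: {sorted(reused)}, reuse rate: {reuse_rate:.0%}")
# reused parts: ['P-4711', 'P-4713'], reuse rate: 67%
```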

Maturity Stage Level Management 

Maturity Stage Level Management (MSLM) is important in the context of manufacturing engineering, quality management, and lifecycle management. A core task of MSLM, again, is the acquisition and association of (fragmented) information, here aiming at reporting a certain maturity stage in terms of key figures (e.g. warranty costs, loss claims) under consideration of different views (e.g. manufacturing site, region of use, and kind of defect), areas of responsibility (e.g. part managers, module managers, project managers), and hierarchical levels. The goal of MSL managers is to permanently enhance the maturity stage of products. The information demand of MSL managers includes actual and historical data from different business units as well as heterogeneous external data sources (e.g. market data). Examples include geometric and feature data and a history of changes (from CAD systems), actual and historic assemblies (from PDM/PLM systems), quality data (from quality systems), documents and spreadsheets, plan, actual, and historic financial data (from ERP systems), data concerning customer satisfaction, e.g. complaints (from CRM systems), as well as external data from retailers, service, or repair shops about failures and repairs.
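As a small illustration of such a key-figure report, the sketch below aggregates warranty costs under two of the views named above (manufacturing site and kind of defect); the figures are invented, and pandas is assumed to be available for the aggregation.

```python
# Minimal sketch of an MSLM key-figure report: warranty costs aggregated
# under two of the views named above (manufacturing site, kind of defect).
# All figures are invented; pandas is used for the aggregation.

import pandas as pd

claims = pd.DataFrame([
    {"site": "DE01", "defect": "electrical", "warranty_cost": 1200.0},
    {"site": "DE01", "defect": "mechanical", "warranty_cost": 300.0},
    {"site": "CN02", "defect": "electrical", "warranty_cost": 2100.0},
])

report = claims.pivot_table(index="site", columns="defect",
                            values="warranty_cost", aggfunc="sum", fill_value=0)
print(report)
```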
There are many tasks in the product lifecycle with similar characteristics, such as product and project management, management of product variants, or manufacturing engineering management: They all require information from the whole lifecycle and from the plethora of systems applied within it.

Including Process Steering and Management 

Forerunners in applying BI-based process management approaches in industrial environments can be found in the areas of logistics and SCM. This is not surprising, given that the core concepts of those functions are characterized by an overarching process view. A process DWH, potentially filled automatically by UC technologies, can be used both for steering and for analysis tasks. A relevant complication comes from the cross-border nature of many of these scenarios and the need to quickly include and exclude partners, react to changing transportation and inventory strategies, consider modifications of business models, and meet temporary demand for advanced analytic functionality (data mining, simulation, predictive analytics). From this point of view, logistics and SCM also illustrate the potential of Cloud BI.

Steering of Product Flows 

Providing comprehensive information on product flows is a task that is heavily characterized by integrating and aggregating data from a variety of involved partners (manufacturers; second, third, and fourth party logistics providers; wholesalers; retailers) and their respective systems. Ideally, the status of a supply network (e.g. inventories; the number, location, and status of moving goods and vehicles; service levels; throughput times) can be accessed with adequate accuracy, correctness, and timeliness, without manual or semi-manual data capturing and integration effort. This makes it possible to react to unexpected events (e.g. unavailable routes, losses and damages) and to find solutions (e.g. alternative routes, or redirecting oversupplies to retail outlets that face an out-of-stock situation). Such scenarios are relevant both for the internal logistics of a single enterprise and for complete supply networks. These are also examples of Operational BI.
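The following is a minimal sketch of one such Operational BI steering rule: if a location reports a shortage while another holds an oversupply, a redirection is proposed. Locations, targets, and figures are invented for the example.

```python
# Minimal sketch of an Operational BI steering rule: if a retail outlet
# reports an out-of-stock situation while another location holds an
# oversupply, propose a redirection. Locations and figures are invented.

stock = {"outlet_north": 0, "outlet_south": 40, "warehouse": 15}
target = {"outlet_north": 10, "outlet_south": 20, "warehouse": 10}

shortages = {loc: target[loc] - qty for loc, qty in stock.items() if qty < target[loc]}
surpluses = {loc: qty - target[loc] for loc, qty in stock.items() if qty > target[loc]}

for short_loc, need in shortages.items():
    for surplus_loc, extra in surpluses.items():
        move = min(need, surpluses[surplus_loc])
        if move > 0:
            print(f"redirect {move} units from {surplus_loc} to {short_loc}")
            need -= move
            surpluses[surplus_loc] -= move
```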

Analyzing Process Structures

Interlinked with the (ad-hoc) steering is the step of uncovering patterns behind already observed events and of pinpointing root causes of recurring issues, a prime example for the application of BPI solutions. Applications include the identification of problematic product configurations that lead to problems during transportation, of transportation routes that can be linked to quality impairments, or of bottlenecks causing increasing cycle times. This type of BPI application requires not only pertinent analysis tools but also data on both the process logic (for tracing back problems) and the business results (for evaluating the problem impact). Here, a higher granularity of the data corresponds with the ability to adequately narrow down cause-effect relationships. Again, scenarios can be found both in internal (production) logistics (where especially an MES can act as a rich source of relevant and interconnected process data) and in broader SCM approaches (which, however, require object traceability, e.g. based on RFID technologies).
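To make the analysis side tangible, here is a minimal sketch of a BPI-style evaluation: cycle times are computed per transportation route from item-level event data, and the slowest route is flagged as a bottleneck candidate. The event data is invented, and pandas is assumed to be available.

```python
# Minimal sketch of a BPI-style analysis: compute cycle times per
# transportation route from item-level event data and flag the slowest
# route as a bottleneck candidate. Event data is invented; pandas assumed.

import pandas as pd

events = pd.DataFrame([
    {"item": "A1", "route": "R1", "start": "2013-05-01", "end": "2013-05-03"},
    {"item": "A2", "route": "R1", "start": "2013-05-02", "end": "2013-05-05"},
    {"item": "B1", "route": "R2", "start": "2013-05-01", "end": "2013-05-09"},
])
events["cycle_days"] = (pd.to_datetime(events["end"])
                        - pd.to_datetime(events["start"])).dt.days

by_route = events.groupby("route")["cycle_days"].mean()
print(by_route)
print("bottleneck candidate:", by_route.idxmax())  # R2 in this toy data
```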

