
Automation World administrator | Tue, 27 June, 5:01 PM

[Industry Trends] [Manufacturing AI-(3)] Manufacturing AI cannot solve every on-site problem in the short term... Operational complexity must be reduced and standardized


Could an AI model emerge that, like ChatGPT, uses the vast amounts of data within the manufacturing industry to solve every problem? Probably not in the short term. Knowledge in the industrial sphere differs from general knowledge, both in accessibility and in character. And although there is a great deal of industrial data, the data each company actually uses is limited, so AI adoption and operation strategies must differ accordingly. Realizing this requires standardizing how AI is used and providing a simple, easy-to-use operating system; MLOps makes this possible. This article summarizes the presentation 'AI-led Intellectualization of Manufacturing Sites' given by Heo Young-shin, CBO of MakinaRocks, at the AI Convergence Business Development Conference held on May 10.

Image: Getty Images Bank

What role and impact does AI have in manufacturing innovation? AI can be applied in areas such as automation, predictive maintenance, quality control, facility optimization, and product development. The goal is to automate or streamline existing ways of working and to achieve cost and production efficiency.

ChatGPT's answers to this question are not new or surprising, but they are well organized, and it produces such expert-level answers quickly. This capability is causing a great stir in the industry and demonstrates how far AI technology itself has developed.

The question is whether ChatGPT's knowledge can be applied directly to problems at the manufacturing site. To solve such problems, the questions asked would have to be specific -- for example, what step to perform next given the values read from a device -- rather than general questions of the form above.

Implications of the emergence of OpenAI's ChatGPT

ChatGPT is certainly an innovative technology, but it is still difficult to solve manufacturing-site problems directly with only its current training data. OpenAI, the developer of ChatGPT, recently published a paper on GPT-4. An interesting finding was that feeding a huge amount of data into a giant Transformer model produced flexible intelligence at a level beyond expectations.

The creation of artificial intelligence that surpasses human intelligence is a story in a similar vein to OpenAI's stated goal. The concept of parameters matters for understanding how large the Transformer model used here is. A parameter is a unit for extracting data attributes, and the model behind ChatGPT consists of about 175 billion parameters -- in effect, it extracts 175 billion characteristics and makes probabilistic inferences over them. This parameter count dwarfs all earlier models.

To bring out the advantages of such a vast model, it must be filled with a large amount of training data; ChatGPT was trained on about 45 TB of processed data. The real significance of ChatGPT seems to lie less in the model itself than in the rapid development of AI technology and the possibility of combining it with processed data at this scale.

Using AI models like ChatGPT is not easy, but ChatGPT is a representative case that proved the technology's potential. It is significant that this attempt to use AI outlasted the short-lived attempts that preceded it.

The first step in using AI to monitor industrial facilities is to look at individual data. For example, if there are 100 parameters, each parameter is monitored by plotting its time-series trend. This data is used for preventive maintenance. However, it is unclear whether the case for preventive maintenance has been substantiated, because the key question in maintenance is how much the probability of failure is reduced, and at what fraction of the cost, by the prevention process.
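As a minimal sketch of this first step, per-parameter monitoring can be as simple as flagging readings that drift beyond a few standard deviations of a normal-operation baseline. The sensor, its values, and the alarm rule below are all invented for illustration:

```python
import statistics

def fit_baseline(series):
    """Summarize normal operation as (mean, population std dev)."""
    return statistics.fmean(series), statistics.pstdev(series)

def alarms(series, mean, std, n_sigma=3.0):
    """Return the indices of readings more than n_sigma deviations
    from the normal-operation baseline -- the simplest per-parameter rule."""
    return [i for i, x in enumerate(series) if abs(x - mean) > n_sigma * std]

# Hypothetical spindle-temperature trace recorded during normal operation.
baseline = [61.0, 60.8, 61.2, 60.9, 61.1, 61.0, 60.7, 61.3]
mean, std = fit_baseline(baseline)

# New readings with one obvious excursion at index 2.
new = [61.1, 60.9, 75.0, 61.0]
print(alarms(new, mean, std))  # [2]
```

In practice each of the 100 parameters would get its own baseline and rule, which is exactly what makes this first step labor-intensive and blind to interactions between parameters.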

The second step is to go beyond individual data and monitor the data as a complex. AI learns how the multidimensional data distribution changes by treating multiple variables jointly and comparing incoming data with the training data. A typical example is the AutoEncoder.
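The AutoEncoder idea can be sketched in miniature. The example below uses a linear one-layer autoencoder (equivalent to a PCA projection) instead of a trained neural network, but the principle is the same: compress the multivariate data through a bottleneck, reconstruct it, and flag points whose reconstruction error exceeds anything seen in normal operation. All data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal operation: two sensors that move together (one shared latent factor).
latent = rng.normal(size=(200, 1))
normal = np.hstack([latent, 2.0 * latent]) + 0.05 * rng.normal(size=(200, 2))

# "Encoder"/"decoder": project onto the top principal direction and back.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:1]  # 1-D bottleneck

def reconstruction_error(x):
    code = (x - mean) @ basis.T   # encode into the bottleneck
    recon = code @ basis + mean   # decode back to sensor space
    return float(np.sum((x - recon) ** 2))

threshold = max(reconstruction_error(row) for row in normal)

# Each sensor is individually in range, but their joint relationship is broken.
anomaly = np.array([1.0, -2.0])
print(reconstruction_error(anomaly) > threshold)  # True
```

The anomaly would pass any per-parameter threshold from step one; only the joint model catches it, which is precisely the point of the second step.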

The problem with this kind of model is that while it distinguishes abnormal from normal, it cannot specifically analyze whether data still within the normal range is close to failure, or whether the data is approaching failure over time.

MakinaRocks is developing a model that establishes a third step to address this problem: analyzing the quality of operation in the steady state. For example, finding and correcting robots that generate more load than necessary can reduce the risk of failure and shorten cycle times at the station.

Manufacturing Data Status and Forecast

Recently, a research institute predicted that global data production will reach 175 ZB by 2025. Even areas that currently lack data are expected to be flooded with data over time.

In particular, the manufacturing industry is expected to produce an overwhelming amount of data compared to other industries. So, can there be an AI model that uses vast amounts of data in the manufacturing industry to solve all problems like ChatGPT? Probably not in the short term.

Unlike general knowledge, knowledge in the industrial domain is hard to access because companies act as barriers, and its character differs as well: industrial knowledge tends to form disconnected, separate bodies of knowledge. This is evident in the fact that the meaning of sensor data extracted from the same device varies depending on how each company uses that data in training.

So although there is a great deal of industrial data, the data each company uses is limited, and many therefore conclude that the strategy for introducing and operating AI must differ from company to company.

The 'Data Centric Approach' (hereinafter DCA), which has attracted much attention recently, corresponds to this strategy. DCA starts from the idea of creating a complete dataset. However, it must overcome several challenges specific to industrial data: characteristics that change over time, discrepancies between existing data and the data that can actually be used (meta-information), mismatches between the cycle of state changes and the cycle of data collection and storage, and data resolution, which drives differences in cost.

Fundamentally, it is difficult to store all data as-is. In the end, you have to choose what to store and how to use it, and build a dataset that fits its purpose. From this point of view, DCA is better seen not as an approach to building flawless data but as a way to build a system that continuously improves how data and the AI models built on it are used. What matters is feeding the results of applying, training, deploying, and verifying models on real data back into an iteration cycle that improves both the data and the AI model.
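The iteration cycle described here -- train, verify against real data, fold the results back in -- can be sketched abstractly. The 'model' and 'data' below are trivial stand-ins for illustration, not a real training setup:

```python
# Hypothetical sketch of the train -> verify -> improve iteration cycle.

def train(dataset):
    # "Model" is just the mean of the data -- a placeholder for real training.
    return sum(dataset) / len(dataset)

def evaluate(model, live_data):
    # Verification step: mean absolute error against fresh field data.
    return sum(abs(x - model) for x in live_data) / len(live_data)

dataset = [10.0, 11.0, 9.0]               # initial training set
incoming = [[14.0, 15.0], [15.0, 16.0]]   # field data arriving per cycle

for batch in incoming:
    model = train(dataset)            # (re)train on what we have
    error = evaluate(model, batch)    # verify against live data
    dataset.extend(batch)             # fold field data back into the dataset
    print(f"model={model:.2f} error={error:.2f}")
```

Each pass through the loop both measures the deployed model against reality and enlarges the dataset for the next pass, which is the purpose DCA serves in this reading.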

An essential element for this is connecting real data during both development and operation. That, in turn, requires standardizing the use of AI and a simple, easy-to-use operating system. MLOps makes this process possible.

What is MLOps?

The first company to advocate MLOps was Google. Google does not treat MLOps as just a solution or tool, but as a "way of working": a culture and method of machine learning engineering that integrates the development and operation of machine learning systems.

Amazon Web Services (AWS) defines MLOps as tooling that helps data scientists and ML engineers increase productivity and maintain model performance in the operational phase, across large-scale training, testing, troubleshooting, deployment, and management.

In the end, MLOps comprises the tools and activities for developing and operating AI -- more specifically, activities that systematically raise productivity in AI development and operation.

What changes with the introduction of MLOps? Developing and applying AI models is usually divided into two stages: developing the model and using the developed model. In the development stage, data is extracted and data scientists study it to create a high-performance AI model. The developed model is then connected to data extracted from the actual target for application and use.

The advantage of AI is that it uses the data extracted from each target to understand that data's characteristics. Yet there are limitations: ML systems such as data loaders, stream data generation, containers, and user-function APIs are required, and data scientists, who are not software experts, often cannot be directly involved in building them.

Problems also arise when software engineers are directly involved in development. Because they do not know the characteristics of data science or of the models, the models created by data scientists fail to work, and operational problems follow. If the team moves on to the horizontal-deployment stage without addressing these problems, the problems keep recurring.

For instance, in a project with Hyundai Motor Company, a model was piloted on more than 250 robots. In the extreme case, machine learning engineers, software engineers, and systems experts would have to perform 250 separate installations. Even more important than installation are software-system errors outside the expected range: because of the nature of horizontal deployment, unless a problematic robot is simply scrapped, the system installed on it must be repaired.

Even once this process reaches the operational stage, complexity remains high. When a problem occurs, no single data scientist, software engineer, or field expert can solve it alone, because it is not easy to determine whether the problem lies in the model, the applied system, or the interpretation. Each has a distinct role, so all of these professionals must work together, and solving a problem takes a long time. It is therefore not the performance of AI but the cost incurred in using it that makes companies hesitate about introducing AI.

Using MLOps, by contrast, simplifies the process. After connecting a data source, you can deploy the resulting AI model against it. Once a data stream is created from the data source, each step of producing a trained model -- training on the data and connecting the trained model -- is completed through a pipeline built in software. In other words, managing everything through the pipeline is the heart of the MLOps scheme; every other process follows the same mechanism.
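A minimal illustration of this pipeline-centric idea: each stage is a plain function whose output feeds the next, and "managing through the pipeline" means managing that ordered list. All stage names and logic below are invented for illustration, not any real MLOps product's API:

```python
# Hypothetical sketch of pipeline-centric management.

def ingest(source):
    return list(source)                    # connect the data source

def preprocess(rows):
    return [r - 100.0 for r in rows]       # data stream: center on nominal value

def train(features):
    return sum(features) / len(features)   # stand-in for model training

def deploy(model):
    return {"model": model, "status": "deployed"}

PIPELINE = [ingest, preprocess, train, deploy]

def run(pipeline, data):
    for stage in pipeline:
        data = stage(data)                 # each stage feeds the next
    return data

print(run(PIPELINE, [95, 100, 105]))
```

Because every stage goes through the same `run` mechanism, swapping a data source, a preprocessing step, or a model is a change to one list entry rather than to a bespoke installation on each of 250 robots.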

Making good use of AI in manufacturing

Rather than expecting innovative technologies such as ChatGPT to solve problems in the manufacturing and industrial sectors, what the industry needs is a system foundation on which companies can productively develop and operate the AI they already have.

Through automation and standardization, the problems inherent in AI development and operation must be solved. The ultimate goal is to create a foundation for using AI models and to increase AI ROI by reducing operational complexity.


Source: HelloT (hellot.net)
