Wei Chen, Northwestern University, moderated the panel on automated machine calibration and toolpath correction.
Julia Greer, California Institute of Technology, described an approach to additive manufacturing (AM) that uses hydrogels and allows materials to be engineered at the micron or even nanometer scale. Hydrogel infusion AM starts with a hydrogel, a three-dimensional (3D) network of polymer chains whose structure can be controlled down to very small scales, infuses it with metal salts, and then subjects the resulting metal ion–filled material to a two-step thermal process. In this way one can make a variety of materials that would be difficult or impossible to produce with traditional methods.
Such materials have a number of advantageous properties; they may be much stronger than one would expect from their components, for example. In general, the technique provides a tremendous amount of control at the molecular level, which translates into control of the material’s macroscale properties. Greer closed by saying that nano-architected materials offer a future in which materials might be completely customized.
Following up on the previous statistics and analytics primer, Roshan Joseph, Georgia Institute of Technology, discussed the use of Gaussian processes to optimize design for AM, illustrated by an example of a successful application.
The challenge was to design and print aortic valves for heart patients whose biomechanical properties matched those of real biological valves. This optimization problem has many variables, and the two most direct approaches—building and testing many different valves to find which had the best properties, or building a finite element simulation and optimizing over it—were expensive and time-consuming. The same job can be done at much lower cost by running a limited number of experiments on a finite element method simulator and then applying Gaussian process modeling to the resulting data to obtain an approximate answer that is still sufficient for designing the heart valves.
Joseph explained how the Gaussian process works and said that one of its strengths is that it not only supplies an answer but also provides information about the uncertainties in the answer, given that the answer is an approximation.
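To make the idea concrete, the following is a minimal sketch of Gaussian process surrogate modeling with scikit-learn. The design variables and response are hypothetical stand-ins for the finite element simulator runs described in the talk, not Joseph’s actual valve model.

```python
# A minimal sketch of Gaussian process surrogate modeling with scikit-learn.
# The design variables and response below are hypothetical stand-ins for the
# finite element simulator outputs described in the talk.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Pretend each row is one simulator "experiment": a few design parameters
# (e.g., leaflet thickness, stiffness) and a scalar biomechanical response.
X_train = rng.uniform(0.0, 1.0, size=(20, 3))            # 20 simulator runs, 3 design variables
y_train = np.sin(3 * X_train[:, 0]) + X_train[:, 1] ** 2  # placeholder response

# Fit the GP surrogate to the limited set of simulator runs.
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.5, 0.5, 0.5])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Predict at new candidate designs; return_std gives the uncertainty in the
# approximate answer, the strength Joseph highlighted.
X_new = rng.uniform(0.0, 1.0, size=(5, 3))
mean, std = gp.predict(X_new, return_std=True)
for m, s in zip(mean, std):
    print(f"predicted response: {m:.3f} +/- {s:.3f}")
```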
Wing Kam Liu, Northwestern University, spoke about software under development that uses statistical methods to design AM processes, with a particular emphasis on the control of defects and microstructure. The software is called Convolution Hierarchical Deep-Learning Neural Networks–Artificial Intelligence (C-HiDeNN–AI), and the goal is to develop robust, fast, and accurate computer simulations for process design, process monitoring, process control, and quality monitoring. Current simulations are far too slow, Liu said. Process monitoring and control, such as keeping the temperature of the melt pool constant, require calculations on the order of milliseconds rather than the seconds that current simulations take.
Comparing the performance of C-HiDeNN–AI with a standard neural network for real-time control of the melt pool temperature within a layer being deposited by AM, Liu said that training was 70 to 110 times faster than for feedforward neural networks and that predictions were more accurate with a smaller dataset. This new statistics-based approach could prove valuable in the monitoring and control of AM processes.
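As a point of reference for what such a learned surrogate looks like, the sketch below trains a small feedforward-network regressor mapping process parameters to melt pool temperature. The inputs, data, and network are illustrative assumptions; this is a generic baseline of the kind Liu compared against, not C-HiDeNN–AI itself.

```python
# A minimal sketch of a feedforward-network surrogate for melt pool temperature.
# Inputs and data are hypothetical; this is not C-HiDeNN-AI.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Hypothetical process parameters: laser power (W) and scan speed (mm/s).
X = np.column_stack([
    rng.uniform(150, 400, size=200),   # laser power
    rng.uniform(500, 1500, size=200),  # scan speed
])
# Placeholder melt pool temperature: rises with power, falls with speed.
y = 1200 + 2.0 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 10, size=200)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# In a control loop, a surrogate like this would be queried in milliseconds to
# decide how to adjust power so the melt pool temperature stays constant.
print(model.predict([[300.0, 900.0]]))
```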
Vincent Paquit, Oak Ridge National Laboratory, spoke about how his group is using data to improve AM processes. Comparing AM to building something with LEGO blocks, he said, “Every time you’re adding one block to the set, you’re running a small experiment.” His group at Oak Ridge is trying to collect a comprehensive dataset that can describe what happens each time a block is added, with the goal of connecting the dots among processing, structure, properties, and performance.
Their approach is to construct a “pseudo digital twin,” which holds comprehensive data for each block: the design, the manufacturing process, the feedstock material, the post-processing steps, the testing and characterization, and so on. The next step is to extract from this dataset the information that is relevant to guide insights and answer questions.
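One simple way to picture such a record is as a structured object holding every category of data for one deposited unit. The field names in the sketch below are illustrative assumptions, not Oak Ridge’s actual schema.

```python
# A minimal sketch of one per-"block" record in a pseudo digital twin.
# Field names are illustrative assumptions, not the schema used at Oak Ridge.
from dataclasses import dataclass, field

@dataclass
class BuildRecord:
    """Data captured for one deposited unit (one 'LEGO block') of a build."""
    design: dict             # geometry and toolpath for this unit
    process: dict             # machine settings and in-situ sensor readings
    feedstock: dict           # powder or wire lot, composition, supplier data
    post_processing: list = field(default_factory=list)   # heat treatment, machining, ...
    characterization: dict = field(default_factory=dict)  # testing and imaging results

# Collecting such records across an entire build is what lets one connect
# processing, structure, properties, and performance after the fact.
record = BuildRecord(
    design={"layer": 42, "toolpath_id": "A17"},
    process={"laser_power_W": 285, "melt_pool_temp_C": 1410},
    feedstock={"alloy": "Ti-6Al-4V", "powder_lot": "unknown"},
)
print(record)
```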
Paquit then spoke about related topics, such as calibrating different machines so that the digital twin describes them all correctly, using the digital twin to optimize the manufacturing process, and the importance of instrumenting the AM system to obtain data during the manufacturing process with which to make decisions and better control the process.
Greer addressed the broad variability in the data she collects, which show wide variation in such things as dimensions and microstructure. “We don’t even know which data to collect,” she said. “We don’t even know what can be controlled, what can’t be controlled.” So it will be important to figure out which parameters most strongly determine her results and how those parameters can be controlled.
Chen asked Joseph about using the modular Bayesian approach on high-dimensional problems such as those encountered in modeling AM processes. Joseph acknowledged that this is a challenge with Gaussian processes; they become very expensive to train in high dimensions and with big data. So the best approach may be to find some way to first reduce the dimensionality of the problem. Research on the issue is ongoing.
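A minimal sketch of that idea, assuming a simple linear reduction (PCA) followed by a standard Gaussian process, is shown below with synthetic data. It illustrates the general workflow, not any specific method under investigation.

```python
# A minimal sketch of the "reduce dimensionality first" idea: project
# high-dimensional inputs with PCA before fitting a Gaussian process.
# The data are synthetic and the approach is only one simple option.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# 50 observations of a 100-dimensional process description.
X = rng.normal(size=(50, 100))
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.1, size=50)  # response driven by a few directions

# Compress to a handful of components, then train the GP in that reduced space.
model = make_pipeline(PCA(n_components=5), GaussianProcessRegressor())
model.fit(X, y)
print(model.predict(X[:3]))
```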
In response to a question about any surprises that had come up in the presenters’ research, Paquit described a study in which his group manufactured a number of components for tensile testing. In particular, they wanted to study the components’ failure modes, so they worked to create defects within the components that they believed would make them more prone to failure. However, they found that even when they used parameters that were supposed to generate defects, in certain cases the components did not fail.
One questioner asked about the need for interpretability and explainability of results produced by deep neural networks and other AI techniques. Paquit said that in his work using neural network techniques to analyze images, they never conclude anything from images alone; it is “a combination of different modalities that all contribute to the greater understanding of what’s going on,” he said. Because the AI results are not used in a stand-alone manner, their interpretability is not so important.
Another question concerned the use of AI to build models of AM processes: How does one deal with the fact that AI models must be trained on large amounts of data, so that changing the material or processing parameters in AM requires a whole new set of training data? Paquit suggested that practitioners should train modular “blocks” of models that can be combined to accommodate abstractions of different geometries, and that the data should be structured to support this approach. Liu talked about the value of incorporating physics into an AI model.
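One common way to incorporate physics, sketched below under illustrative assumptions (a simple Newton-cooling law and synthetic temperature data), is to add a penalty to the training loss whenever the model’s predictions violate a governing equation. This is offered only as an example of the general idea, not Liu’s formulation.

```python
# A minimal sketch of physics-informed fitting: a flexible curve is fit to
# noisy temperature data while penalizing violations of a simple cooling law,
# dT/dt = -k (T - T_ambient). The data, law, and constants are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

T_amb, k_true = 25.0, 0.8
t = np.linspace(0.0, 5.0, 40)
T_data = T_amb + 1500.0 * np.exp(-k_true * t) + rng.normal(0, 20.0, size=t.size)

def model(coeffs, t):
    # A plain polynomial: flexible, but knows no physics on its own.
    return np.polyval(coeffs, t)

def loss(coeffs, lam=5.0, k=0.8):
    T_pred = model(coeffs, t)
    data_loss = np.mean((T_pred - T_data) ** 2)
    # Physics penalty: finite-difference dT/dt should match -k (T - T_amb).
    dTdt = np.gradient(T_pred, t)
    physics_loss = np.mean((dTdt + k * (T_pred - T_amb)) ** 2)
    return data_loss + lam * physics_loss

result = minimize(loss, x0=np.zeros(6), method="Powell")
print("fitted polynomial coefficients:", result.x)
```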