
Citation: Yue Jing, Li Jian, Zhang Wen, Chen Zhangxin. The coupled deep neural networks for coupling of the Stokes and Darcy–Forchheimer problems. Chin. Phys. B, 2023, 32(1): 010201. doi: 10.1088/1674-1056/ac7554
Fluid flows between porous media and free-flow zones have extensive applications in hydrology, environmental science, and biofluid dynamics, and many researchers have derived suitable mathematical and numerical models for such flows. [1–3] The system can be viewed as a coupled problem in which two physical regimes interact across an interface. The simplest mathematical formulation for the coupled problem is the coupling of Stokes and Darcy flow with proper interface conditions; the most suitable and popular interface conditions are the Beavers–Joseph–Saffman conditions. [4] However, Darcy's law only provides a linear relationship between the pressure gradient and the velocity in the coupled model, which usually fails for complex physical problems. Forchheimer [5] conducted flow experiments in sand packs and recognized that for moderate Reynolds numbers (Re > 0.1 approximately), Darcy's law is not adequate; he found that the pressure gradient and the Darcy velocity should instead satisfy the Darcy–Forchheimer law. Because great attention has been paid to the coupled model, a large number of traditional mesh-based methods have been developed for the coupled Stokes and Darcy flow problems. [6–22]
Owing to its enormous potential in approximating complex nonlinear maps, [26–32] deep learning has attracted growing attention in many applications, such as image, speech, and text recognition, as well as scientific computing. [23–25] In recent decades, many works have exploited the function approximation capabilities of feed-forward fully connected neural networks to solve initial/boundary value problems. [40–43] The solution of a system of equations is obtained by minimizing a loss function, which typically consists of the residual error of the governing partial differential equations (PDEs) together with the initial/boundary values. Recently, Raissi et al. proposed physics-informed neural networks (PINNs), [44–46] which have been widely used. [47–52] Moreover, Sirignano and Spiliopoulos presented the deep Galerkin method [54] for solving high-dimensional PDEs. Additionally, some recent works have successfully solved second-order linear elliptic equations and high-dimensional Stokes problems. [33–36] Although several excellent works have applied deep learning to PDEs, [37] the use of deep learning for complicated coupled physical problems remains to be investigated.
Considering the performance of deep learning for solving PDEs, our contribution is to design the CDNNs as an efficient alternative model for complicated coupled physical problems. Any underlying physical law can be encoded naturally as prior information so that the networks obey the laws of physics. To satisfy the differential operators, boundary conditions, and divergence conditions, we train the neural networks on batches of randomly sampled points. The method takes only randomly sampled spatial coordinates as input, without considering the nature of the samples. Notably, we take the interface conditions as constraints for the CDNNs. The approach is parallel and solves multiple variables independently at the same time. In particular, the optimal solution is represented by the networks themselves instead of a linear combination of basis functions. Furthermore, we establish the convergence of the loss function under certain conditions and the convergence of the CDNNs to the exact solution. Several numerical experiments are conducted to investigate the performance of the CDNNs.
The article is organized as follows. Section 2 introduces the coupled model and the related methodology. Section 3 discusses the convergence of the loss function and the convergence of the CDNNs to the exact solution. Section 4 presents several numerical experiments, and Section 5 summarizes our conclusions.
Let Ω_S and Ω_D be two bounded and simply connected polygonal domains in ℝ², as shown in Fig. 1. Let n_S denote the unit normal vector pointing from Ω_S to Ω_D and n_D the unit normal vector pointing from Ω_D to Ω_S on the interface Γ; hence n_D = −n_S.
When inertial effects surpass viscous effects in a porous medium, the Darcy velocity u_D and the pressure p_D in Ω_D obey the Darcy–Forchheimer equations
$$
\frac{\mu}{\rho}\,\mathbf{K}^{-1}\boldsymbol{u}_D + \beta\,|\boldsymbol{u}_D|\,\boldsymbol{u}_D + \nabla p_D = \boldsymbol{f}_D, \qquad \nabla\cdot\boldsymbol{u}_D = g_D \quad \text{in } \Omega_D,
$$
where K is the permeability tensor, μ the viscosity, ρ the density, β the Forchheimer coefficient, f_D a source term, and g_D a prescribed divergence.
The fluid motion in Ω_S is governed by the Stokes equations
$$
-\nabla\cdot\mathbb{T}(\boldsymbol{u}_S, p_S) = \boldsymbol{f}_S, \qquad \nabla\cdot\boldsymbol{u}_S = 0 \quad \text{in } \Omega_S,
$$
where $\mathbb{T}(\boldsymbol{u}_S, p_S) = 2\nu\,\mathbb{D}(\boldsymbol{u}_S) - p_S\,\mathbb{I}$ is the stress tensor, $\mathbb{D}(\boldsymbol{u}_S) = \tfrac{1}{2}\big(\nabla\boldsymbol{u}_S + \nabla\boldsymbol{u}_S^{\mathrm{T}}\big)$ is the deformation tensor, ν > 0 is the kinematic viscosity, and f_S is the external force.
On the interface, we prescribe the following interface conditions:
$$
\boldsymbol{u}_S\cdot\boldsymbol{n}_S + \boldsymbol{u}_D\cdot\boldsymbol{n}_D = 0 \quad \text{on } \Gamma, \tag{7}
$$
$$
-\boldsymbol{n}_S\cdot\big(\mathbb{T}(\boldsymbol{u}_S, p_S)\,\boldsymbol{n}_S\big) = p_D \quad \text{on } \Gamma, \tag{8}
$$
$$
-\boldsymbol{t}\cdot\big(\mathbb{T}(\boldsymbol{u}_S, p_S)\,\boldsymbol{n}_S\big) = G\,\boldsymbol{u}_S\cdot\boldsymbol{t} \quad \text{on } \Gamma. \tag{9}
$$
Here, t represents the unit tangential vector along the interface Γ. Condition ( 7) represents continuity of the fluid velocity’s normal components, Eq. ( 8) represents the balance of forces acting across the interface, and Eq. ( 9) is the Beavers–Joseph–Saffman condition. [ 55] The constant G > 0 is given and usually obtained from experimental data.
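To indicate how these equations can enter a residual-based training objective, the following is a small PyTorch sketch of ours (not code from the paper): it evaluates the Darcy–Forchheimer residual and the normal-velocity interface condition (7) at sampled points via automatic differentiation, for a generic network net mapping (x, y) to the velocity components and pressure. The choices K = I, f_D = 0, and μ = ρ = β = 1 are illustrative assumptions matching the unit parameters used in the tests below.

```python
import torch

def grad(f, xy):
    """Gradient of a scalar field f with shape (N, 1) w.r.t. coordinates xy with shape (N, 2)."""
    return torch.autograd.grad(f, xy, grad_outputs=torch.ones_like(f), create_graph=True)[0]

def darcy_forchheimer_residual(net, xy, mu=1.0, rho=1.0, beta=1.0):
    """Residual of (mu/rho) K^{-1} u_D + beta |u_D| u_D + grad p_D - f_D with K = I and f_D = 0."""
    xy = xy.requires_grad_(True)
    out = net(xy)                                  # net returns (u1, u2, p) at each point
    u, p = out[:, :2], out[:, 2:3]
    return (mu / rho) * u + beta * u.norm(dim=1, keepdim=True) * u + grad(p, xy)

def normal_velocity_jump(stokes_net, darcy_net, xy_interface, n_S):
    """Residual of interface condition (7): u_S·n_S + u_D·n_D = 0, using n_D = -n_S."""
    u_S = stokes_net(xy_interface)[:, :2]
    u_D = darcy_net(xy_interface)[:, :2]
    return ((u_S - u_D) * n_S).sum(dim=1, keepdim=True)
```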
For notational brevity, we use the standard Sobolev spaces W^{k,2}(Ω) with norms ‖·‖_{W^{k,2}}. In particular, ‖v‖_k = ‖v‖_{W^{k,2}}, where k > 0 is a positive integer, and ‖v‖_0 denotes the norm on L²(Ω) or (L²(Ω))².
To solve the coupled Stokes and Darcy–Forchheimer problems, we propose the CDNNs illustrated in Fig. 2.
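As an informal illustration of this construction (a sketch of ours, not the authors' released code), the snippet below builds one fully connected sub-network per subdomain; each takes only the spatial coordinates (x, y) as input and outputs the two velocity components and the pressure of its zone. The depth, width, and tanh activation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SubNet(nn.Module):
    """Fully connected network mapping (x, y) -> (u1, u2, p) on one subdomain."""
    def __init__(self, hidden=50, layers=3):
        super().__init__()
        dims = [2] + [hidden] * layers + [3]      # input: (x, y); output: velocity (2) + pressure (1)
        blocks = []
        for i in range(len(dims) - 2):
            blocks += [nn.Linear(dims[i], dims[i + 1]), nn.Tanh()]
        blocks.append(nn.Linear(dims[-2], dims[-1]))
        self.net = nn.Sequential(*blocks)

    def forward(self, xy):
        return self.net(xy)

# Coupled deep neural networks: independent sub-networks, trained simultaneously and
# coupled only through the interface terms of the loss function.
stokes_net = SubNet()   # approximates (u_S, p_S) in the free-flow zone
darcy_net = SubNet()    # approximates (u_D, p_D) in the porous-medium zone
```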
The loss function of the CDNNs is built from the residuals of the governing equations, the divergence constraints, the boundary conditions, and the interface conditions (7)–(9), evaluated at randomly sampled points in the two subdomains, on their boundaries, and on the interface Γ. More generally, we use similar notation for multi-layer neural networks with an arbitrarily large number of hidden units.
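Continuing the sketches above (again our own illustration, not the authors' code), a minimal training loop assembles such a loss from mean-square residuals at freshly sampled random points and minimizes it with a stochastic gradient method (Adam here). The domain layout (Ω_D = (0, 1)² with the interface at y = 1), batch sizes, learning rate, and iteration count are illustrative assumptions.

```python
import torch

def sample_uniform(n, x_range, y_range):
    """Uniformly sample n points in the rectangle x_range × y_range."""
    x = torch.rand(n, 1) * (x_range[1] - x_range[0]) + x_range[0]
    y = torch.rand(n, 1) * (y_range[1] - y_range[0]) + y_range[0]
    return torch.cat([x, y], dim=1)

params = list(stokes_net.parameters()) + list(darcy_net.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
n_S = torch.tensor([[0.0, -1.0]])   # unit normal from the free-flow zone into the porous zone (assumed geometry)

for step in range(20000):
    # fresh random batches of interior and interface points at every iteration
    xy_D = sample_uniform(400, (0.0, 1.0), (0.0, 1.0))                      # porous-medium zone
    xy_G = torch.cat([torch.rand(100, 1), torch.ones(100, 1)], dim=1)       # interface y = 1

    loss = (darcy_forchheimer_residual(darcy_net, xy_D).pow(2).mean()
            + normal_velocity_jump(stokes_net, darcy_net, xy_G, n_S).pow(2).mean())
    # ... analogous mean-square terms for the Stokes residual, the divergence constraint,
    # the boundary conditions, and the remaining interface conditions would be added here.

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```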
In the next two subsections, we prove the convergence of the loss function of the neural networks and the convergence of the CDNNs to the exact solution.
In this subsection, we prove the convergence of the loss function of the CDNNs under certain conditions.
In the above subsection, we have proved the convergence of the loss function. In this subsection, it remains to discuss the convergence of the CDNNs to the exact solution. According to the Galerkin method, the neural networks satisfy the corresponding system of equations. Based on this system of equations, we give the following assumption and theorem to guarantee the convergence of the CDNNs to the exact solution.
This section presents several numerical tests to confirm the proposed theoretical results. We start with three examples with known exact solutions to test the efficiency of the proposed method, where the permeability in the third example is highly oscillatory. Then the fourth example, which has no exact solution, shows the application of the proposed method to a high-contrast permeability problem. The section concludes with a physical flow. Some of the numerical examples violate the interface conditions (8) and (9); in such cases we use the modified interface conditions (55) and (56), in which g_1 and g_2 are computed from the exact solution.
$$
-\boldsymbol{n}_S\cdot\big(\mathbb{T}(\boldsymbol{u}_S, p_S)\,\boldsymbol{n}_S\big) = p_D + g_1 \quad \text{on } \Gamma, \tag{55}
$$
$$
-\boldsymbol{t}\cdot\big(\mathbb{T}(\boldsymbol{u}_S, p_S)\,\boldsymbol{n}_S\big) = G\,\boldsymbol{u}_S\cdot\boldsymbol{t} + g_2 \quad \text{on } \Gamma. \tag{56}
$$
In this subsection, we study the performance of the CDNNs for the benchmark problem presented in Ref. [56], which has a known exact solution.
Similar to Ref. [56], we fix K to be the identity tensor in ℝ^{2×2} and set μ = ρ = β = ν = 1. Because the interface conditions (8) and (9) are violated by the exact solution, we exploit the modified interface conditions (55) and (56), where g_1 and g_2 can be computed from the exact solution. Figure 3(a) shows that the errors decrease as the number of hidden layers increases, and Fig. 3(b) reveals that the number of sampled points has no significant influence on the errors once it exceeds 10². Figure 4 displays the exact solution of the coupled Stokes and Darcy–Forchheimer problems together with the results of the CDNNs; the two are nearly identical. The point-wise errors are depicted in Fig. 5: the absolute errors are almost everywhere close to zero, which again indicates the closeness between the exact solution and the approximate solution. Table 1 lists the detailed results, which are consistent with our theory.
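For reference, the error metrics in the tables can be evaluated on a set of test points as in the following sketch (our own illustration; exact_fn stands for a user-supplied exact solution and is hypothetical, and relative L¹ and L² errors are computed here):

```python
import torch

def relative_errors(net, xy_test, exact_fn):
    """Relative L1 and L2 errors of a network prediction against a known exact solution."""
    with torch.no_grad():
        pred = net(xy_test)
    exact = exact_fn(xy_test)                      # exact_fn: callable returning the exact fields
    err_l1 = (pred - exact).abs().sum() / exact.abs().sum()
    err_l2 = (pred - exact).norm() / exact.norm()
    return err_l1.item(), err_l2.item()
```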
Table 1. Errors of the CDNNs for Test 1 with 400 sampled points.

| | U_S | P_S | U_D | P_D |
|---|---|---|---|---|
| 1 layer, err L¹ | 2.49 × 10⁰ | 9.15 × 10⁰ | 8.72 × 10⁰ | 2.74 × 10⁻¹ |
| 1 layer, err L² | 4.84 × 10⁰ | 9.42 × 10⁰ | 1.94 × 10⁰ | 2.99 × 10⁻¹ |
| 2 layers, err L¹ | 4.85 × 10⁻¹ | 3.59 × 10⁰ | 9.62 × 10⁻² | 4.03 × 10⁻² |
| 2 layers, err L² | 9.01 × 10⁻¹ | 3.21 × 10⁰ | 2.19 × 10⁻¹ | 4.18 × 10⁻² |
| 3 layers, err L¹ | 5.80 × 10⁻³ | 5.26 × 10⁻² | 1.01 × 10⁻² | 3.25 × 10⁻³ |
| 3 layers, err L² | 1.09 × 10⁻² | 4.66 × 10⁻² | 2.29 × 10⁻² | 3.38 × 10⁻³ |
In this example, we consider a second coupled Stokes and Darcy–Forchheimer problem with a known exact solution.
Naturally, the corresponding f_S, f_D, and g_D can be calculated from the exact solution. Note that this example satisfies the interface conditions (7)–(9). Guided by Test 1, we choose an appropriate number of sampled points and hidden layers for the second example. Figure 6 and Table 2 show the accuracy of the CDNNs for the coupled problems in detail.
Table 2. Errors of the CDNNs for Test 2 with 400 sampled points.

| | U_S | P_S | U_D | P_D |
|---|---|---|---|---|
| 1 layer, err L¹ | 1.82 × 10⁻² | 1.19 × 10⁻¹ | 1.57 × 10⁻² | 1.08 × 10⁻² |
| 1 layer, err L² | 3.66 × 10⁻² | 1.23 × 10⁻¹ | 4.62 × 10⁻² | 1.11 × 10⁻² |
| 2 layers, err L¹ | 2.27 × 10⁻⁴ | 1.74 × 10⁻³ | 1.04 × 10⁻⁴ | 4.67 × 10⁻⁵ |
| 2 layers, err L² | 4.21 × 10⁻⁴ | 2.00 × 10⁻³ | 3.18 × 10⁻⁴ | 5.55 × 10⁻⁵ |
| 3 layers, err L¹ | 1.65 × 10⁻⁴ | 1.01 × 10⁻³ | 1.13 × 10⁻⁴ | 7.69 × 10⁻⁵ |
| 3 layers, err L² | 3.37 × 10⁻⁴ | 1.50 × 10⁻³ | 3.44 × 10⁻⁴ | 8.32 × 10⁻⁵ |
In this subsection, we solve the coupled Stokes and Darcy–Forchheimer problems with a highly oscillatory permeability.
We report the relative errors in Table 3 to show the ability of the CDNNs to solve the coupled problems with highly oscillatory permeability (Fig. 7). Figure 8 reveals that the CDNNs handle such problems without losing accuracy.
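Since the precise permeability field of this test is not reproduced here, the sketch below only illustrates how a highly oscillatory scalar permeability can be supplied to the Darcy–Forchheimer residual of the earlier snippets; the particular field K(x, y) and the oscillation scale eps are hypothetical stand-ins, not the ones used in the paper.

```python
import math
import torch

def oscillatory_permeability(xy, eps=1.0 / 32.0):
    """A hypothetical, strictly positive, highly oscillatory scalar permeability field K(x, y)."""
    x, y = xy[:, :1], xy[:, 1:2]
    return 2.0 + torch.sin(2.0 * math.pi * x / eps) * torch.cos(2.0 * math.pi * y / eps)

def darcy_forchheimer_residual_K(net, xy, K_fn=oscillatory_permeability, mu=1.0, rho=1.0, beta=1.0):
    """Darcy-Forchheimer residual with a spatially varying permeability tensor K(x, y) I."""
    xy = xy.requires_grad_(True)
    out = net(xy)
    u, p = out[:, :2], out[:, 2:3]
    grad_p = torch.autograd.grad(p, xy, grad_outputs=torch.ones_like(p), create_graph=True)[0]
    return (mu / rho) / K_fn(xy) * u + beta * u.norm(dim=1, keepdim=True) * u + grad_p
```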
The problems studied so far all have exact solutions. In this example, we consider the coupled Stokes and Darcy–Forchheimer problems with no known exact solution and a high-contrast permeability.
Table 3. Relative errors of the CDNNs for Test 3 with 400 sampled points.

| | U_S | P_S | U_D | P_D |
|---|---|---|---|---|
| 1 layer, err L¹ | 3.59 × 10⁻¹ | 5.03 × 10⁰ | 5.52 × 10⁻² | 7.59 × 10⁻² |
| 1 layer, err L² | 6.79 × 10⁻¹ | 5.20 × 10⁰ | 1.73 × 10⁻¹ | 7.07 × 10⁻² |
| 2 layers, err L¹ | 8.42 × 10⁻⁴ | 9.41 × 10⁻³ | 1.15 × 10⁻³ | 1.32 × 10⁻³ |
| 2 layers, err L² | 1.65 × 10⁻³ | 1.05 × 10⁻² | 3.43 × 10⁻³ | 1.56 × 10⁻³ |
| 3 layers, err L¹ | 1.89 × 10⁻⁴ | 3.04 × 10⁻³ | 2.97 × 10⁻⁴ | 6.70 × 10⁻⁵ |
| 3 layers, err L² | 3.65 × 10⁻⁴ | 3.37 × 10⁻³ | 8.98 × 10⁻⁴ | 8.40 × 10⁻⁵ |
| 400 sampled points | Condition 1 | Condition 2 | Condition 3 |
|---|---|---|---|
| 1 layer | 6.49 × 10⁻² | 9.14 × 10⁻² | 3.03 × 10⁻² |
| 2 layers | 3.67 × 10⁻⁵ | 4.74 × 10⁻² | 6.27 × 10⁻⁴ |
| 3 layers | 3.37 × 10⁻⁵ | 7.53 × 10⁻³ | 1.44 × 10⁻⁵ |
We conclude this section with a physical flow, where Ω_S = (0, 1) × (1, 2), Ω_D = (0, 1)², and the interface Γ = {0 < x < 1, y = 1}. In Ω_S, the boundaries of the cavity are walls with a no-slip condition, except for the upper boundary, on which a uniform tangential velocity u_S(x, 2) = (1, 0)^T is imposed; this is the classical driven-cavity flow. On the Darcy boundary we enforce homogeneous Neumann and Dirichlet conditions on Γ_{D,N} = {x = 0 or y = 0} and Γ_{D,D} = {x = 1}, respectively. In addition, we set K to be the identity tensor in ℝ^{2×2}, μ = ρ = β = ν = 1, and f_D = 0, f_S = 0, g_D = 0. The results of the CDNNs are depicted in Fig. 11, and the velocity fields of the free-flow and porous-media zones are displayed in Fig. 12.
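For this driven-cavity configuration, the Stokes boundary data can enter the loss through an additional mean-square term; the following sketch (ours, with the geometry stated above and everything else illustrative) samples boundary points of Ω_S together with the prescribed velocities:

```python
import torch

def cavity_boundary_batch(n=100):
    """Sample points on the walls and lid of Omega_S = (0, 1) x (1, 2) with prescribed velocities."""
    s = torch.rand(n, 1)
    lid = torch.cat([s, torch.full((n, 1), 2.0)], dim=1)        # moving lid y = 2: u_S = (1, 0)
    left = torch.cat([torch.zeros(n, 1), 1.0 + s], dim=1)       # wall x = 0: u_S = (0, 0)
    right = torch.cat([torch.ones(n, 1), 1.0 + s], dim=1)       # wall x = 1: u_S = (0, 0)
    pts = torch.cat([lid, left, right], dim=0)
    target = torch.cat([torch.cat([torch.ones(n, 1), torch.zeros(n, 1)], dim=1),
                        torch.zeros(2 * n, 2)], dim=0)
    return pts, target

pts, target = cavity_boundary_batch()
bc_loss = (stokes_net(pts)[:, :2] - target).pow(2).mean()       # no-slip walls + uniform lid velocity
```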
In summary, we have proposed the CDNNs to study the coupled Stokes and Darcy–Forchheimer problems. Our method encodes the interface conditions of the coupled problems into the networks properly and can serve as an efficient alternative for complex coupled problems. The CDNNs avoid limitations of traditional methods, such as decoupling, grid construction, and the treatment of complicated interface conditions. Furthermore, the method is mesh-free and parallel, and it solves multiple variables independently at the same time. In particular, we prove the convergence of the loss function and the convergence of the CDNNs to the exact solution, and the numerical results are consistent with our theory. We leave the following issues for future work: (1) combining data-driven and model-driven approaches to solve high-dimensional coupled problems, (2) determining the required size of the networks through theoretical analysis, and (3) combining traditional numerical methods with deep learning to solve more complicated high-dimensional coupled problems.
Acknowledgements: Project supported in part by the National Natural Science Foundation of China (Grant No. 11771259), the Special Support Program to Develop Innovative Talents in the Region of Shaanxi Province, the Innovation Team on Computationally Efficient Numerical Methods Based on New Energy Problems in Shaanxi Province, and the Innovative Team Project of Shaanxi Provincial Department of Education (Grant No. 21JP013).

[1] Li J, Bai Y, Zhao X 2023 Modern Numerical Methods for Mathematical Physics Equations (Beijing: Science Press) p. 10 (in Chinese)
[2] Li J, Lin X, Chen Z 2022 Finite Volume Methods for the Incompressible Navier–Stokes Equations (Berlin: Springer) p. 15
[3] Li J 2019 Numerical Methods for the Incompressible Navier–Stokes Equations (Beijing: Science Press) p. 8
[4] Saffman P G 1971 Stud. Appl. Math. 50 93 doi: 10.1002/sapm.v50.2
[5] Forchheimer P 1901 Z. Ver. Deutsch. Ing. 45 1782
[6] Park E J 1995 SIAM J. Numer. Anal. 32 865 doi: 10.1137/0732040
[7] Kim M Y, Park E J 1999 Comput. Math. Appl. 38 113 doi: 10.1016/S0898-1221(99)00291-6
[8] Park E J 2005 Numer. Methods Partial Differ. Equ. 21 213 doi: 10.1002/num.20035
[9] Discacciati M, Miglio E, Quarteroni A 2002 Appl. Numer. Math. 43 57 doi: 10.1016/S0168-9274(02)00125-3
[10] Layton W J, Schieweck F, Yotov I 2003 SIAM J. Numer. Anal. 40 2195 doi: 10.1137/S0036142901392766
[11] Riviere B 2005 J. Sci. Comput. 22 479 doi: 10.1007/s10915-004-4147-3
[12] Riviere B, Yotov I 2005 SIAM J. Numer. Anal. 42 1959 doi: 10.1137/S0036142903427640
[13] Burman E, Hansbo P 2007 J. Comput. Appl. Math. 198 35 doi: 10.1016/j.cam.2005.11.022
[14] Gatica G N, Oyarzúa R, Sayas F J 2011 Math. Comput. 80 1911
[15] Girault V, Vassilev D, Yotov I 2014 Numer. Math. 127 93 doi: 10.1007/s00211-013-0583-z
[16] Lipnikov K, Vassilev D, Yotov I 2014 Numer. Math. 126 321 doi: 10.1007/s00211-013-0563-3
[17] Qiu C X, He X M, Li J, Lin Y P 2020 J. Comput. Phys. 411 109400 doi: 10.1016/j.jcp.2020.109400
[18] Li R, Gao Y L, Li J, Chen Z X 2018 J. Comput. Appl. Math. 334 111 doi: 10.1016/j.cam.2017.11.011
[19] He Y N, Li J 2010 Int. J. Numer. Methods Fluids 62 647 doi: 10.1002/fld.2035
[20] Liu X, Li J, Chen Z X 2018 J. Comput. Appl. Math. 333 442 doi: 10.1016/j.cam.2017.11.010
[21] Li J, Mei L Q, He Y N 2006 Appl. Math. Comput. 182 24 doi: 10.1016/j.amc.2006.03.030
[22] Zhu L P, Li J, Chen Z X 2011 J. Comput. Appl. Math. 235 2821 doi: 10.1016/j.cam.2010.12.001
[23] Krizhevsky A, Sutskever I, Hinton G E 2012 Commun. ACM 64 84 doi: 10.1145/3065386
[24] Hinton G, Deng L, Yu D, et al. 2012 IEEE Signal Process. Mag. 29 82 doi: 10.1109/MSP.2012.2205597
[25] He K M, Zhang X Y, Ren S Q, et al. 2016 Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 27–30, 2016, Las Vegas, NV, USA, p. 770
[26] Cotter N E 1990 IEEE Trans. Neural Networks 1 290 doi: 10.1109/72.80265
[27] Hornik K, Stinchcombe M, White H 1989 Neural Networks 2 359 doi: 10.1016/0893-6080(89)90020-8
[28] Hornik K, Stinchcombe M, White H 1990 Neural Networks 3 551 doi: 10.1016/0893-6080(90)90005-6
[29] Hornik K 1991 Neural Networks 4 251 doi: 10.1016/0893-6080(91)90009-T
[30] Cybenko G 1989 Math. Control Signals Syst. 2 303 doi: 10.1007/BF02551274
[31] Telgarsky M 2016 Proc. Mach. Learn. Res. 49 1517
[32] Mhaskar H, Liao Q L, Poggio T 2016 arXiv:1603.00988v4 [cs.LG]
[33] Khoo Y, Lu J F, Ying L X 2017 arXiv:1707.03351 [math.NA]
[34] Li J, Yue J, Zhang W, et al. 2022 J. Sci. Comput. doi: 10.1007/s10915-022-01930-8
[35] Li J, Zhang W, Yue J 2021 Int. J. Numer. Anal. Model. 18 427
[36] Yue J, Li J 2022 Int. J. Numer. Methods Fluids 94 1416 doi: 10.1002/fld.5095
[37] Yue J, Li J 2023 Appl. Math. Comput. 437 127514 doi: 10.1016/j.amc.2022.127514
[38] Fan Y W, Lin L, Ying L X, et al. 2018 arXiv:1807.01883 [math.NA]
[39] Wang M, Cheung S W, Chung E T, et al. 2018 arXiv:1810.12245 [math.NA]
[40] Li X 1996 Neurocomputing 12 327 doi: 10.1016/0925-2312(95)00070-4
[41] Lagaris I E, Likas A C, Fotiadis D I 1998 IEEE Trans. Neural Networks 9 987 doi: 10.1109/72.712178
[42] Lagaris I E, Likas A C, Papageorgiou D G 2000 IEEE Trans. Neural Networks 11 1041 doi: 10.1109/72.870037
[43] McFall K S, Mahan J R 2009 IEEE Trans. Neural Networks 20 1221 doi: 10.1109/TNN.2009.2020735
[44] Raissi M, Perdikaris P, Karniadakis G E 2017 arXiv:1711.10561 [cs.AI]
[45] Raissi M, Perdikaris P, Karniadakis G E 2017 arXiv:1711.10566 [cs.AI]
[46] Raissi M, Perdikaris P, Karniadakis G E 2019 J. Comput. Phys. 378 686 doi: 10.1016/j.jcp.2018.10.045
[47] Yang L, Meng X H, Karniadakis G E 2021 J. Comput. Phys. 425 109913 doi: 10.1016/j.jcp.2020.109913
[48] Rao C P, Sun H, Liu Y 2020 arXiv:2006.08472v1 [math.NA]
[49] Olivier P, Fablet R 2020 arXiv:2002.01029 [physics.comp-ph]
[50] Lu L, Meng X H, Mao Z P, et al. 2021 SIAM Rev. 63 208 doi: 10.1137/19M1274067
[51] Fang Z W, Zhan J 2020 IEEE Access 8 26328 doi: 10.1109/ACCESS.2019.2963390
[52] Pang G F, Lu L, Karniadakis G E 2019 SIAM J. Sci. Comput. 41 A2603 doi: 10.1137/18M1229845
[53] Zhu Y H, Zabaras N, Koutsourelakis P S, et al. 2019 J. Comput. Phys. 394 56 doi: 10.1016/j.jcp.2019.05.024
[54] Sirignano J, Spiliopoulos K 2018 J. Comput. Phys. 375 1339 doi: 10.1016/j.jcp.2018.08.029
[55] Beavers G S, Joseph D D 1967 J. Fluid Mech. 30 197 doi: 10.1017/S0022112067001375
[56] Zhao L, Chung E T, Park E J, Zhou G 2021 SIAM J. Numer. Anal. 59 1 doi: 10.1137/19M1268525
[57] Kovasznay L I G 1948 Math. Proc. Camb. Philos. Soc. 44 58 doi: 10.1017/S0305004100023999
[58] Bottou L 2012 Neural Networks: Tricks of the Trade (Lecture Notes in Computer Science), eds. Montavon G, Orr G B, Müller K R (Berlin: Springer) pp. 430–445