Yue Jing, Li Jian, Zhang Wen, Chen Zhangxin. The coupled deep neural networks for coupling of the Stokes and Darcy–Forchheimer problems[J]. Chinese Physics B, 2023, 32(1): 010201. doi: 10.1088/1674-1056/ac7554

The coupled deep neural networks for coupling of the Stokes and Darcy–Forchheimer problems

doi: 10.1088/1674-1056/ac7554
  • Received Date: 07 Apr 2022
    Available Online: 16 May 2023
  • Issue Publish Date: 01 Jan 2023
  • We present an efficient deep learning method, called coupled deep neural networks (CDNNs), for coupling of the Stokes and Darcy–Forchheimer problems. Our method properly compiles the interface conditions of the coupled problems into the networks and can serve as an efficient alternative for complex coupled problems. To impose the energy conservation constraints, the CDNNs use simple fully connected layers and a custom loss function that encodes both the model training process and the physical properties of the exact solution. The approach is beneficial for the following reasons. Firstly, we sample randomly and input only spatial coordinates, without being restricted by the nature of the samples. Secondly, our method is meshfree, which makes it more efficient than traditional methods. Finally, the method is parallel and can solve multiple variables independently at the same time. We present theoretical results that guarantee the convergence of the loss function and the convergence of the neural networks to the exact solution. Some numerical experiments are performed and discussed to demonstrate the performance of the proposed method.

     

  • Fluid flows between porous media and free-flow zones have extensive applications in hydrology, environmental science, and biofluid dynamics, and many researchers have derived suitable mathematical and numerical models for such fluid movement. [1–3] The system can be viewed as a coupled problem in which two physical systems interact across an interface. The simplest mathematical formulation for the coupled problem is the coupling of the Stokes and Darcy flows with proper interface conditions; the most suitable and popular interface conditions are the Beavers–Joseph–Saffman conditions. [4] However, Darcy's law only provides a linear relationship between the pressure gradient and the velocity in the coupled model, which usually fails for complex physical problems. Forchheimer [5] conducted flow experiments in sand packs and recognized that for moderate Reynolds numbers (Re > 0.1 approximately), Darcy's law is not adequate; he found that the pressure gradient and the Darcy velocity should instead satisfy the Darcy–Forchheimer law. Since then, great attention has been paid to the coupled model, and a large number of traditional mesh-based methods have been devoted to the coupled Stokes and Darcy flow problems. [6–22]

    Owing to its enormous potential in approximating complex nonlinear maps, [26–32] deep learning has attracted growing attention in many applications, such as image, speech, and text recognition, as well as scientific computing. [23–25] In recent decades, many works have built on the function approximation capabilities of feed-forward fully connected neural networks to solve initial/boundary value problems. [40–43] The solution to the system of equations is obtained by minimizing a loss function, which typically consists of the residual error of the governing partial differential equations (PDEs) along with initial/boundary values. Recently, Raissi et al. proposed physics-informed neural networks (PINNs), [44–46] which have been widely used. [47–52] Moreover, Sirignano and Spiliopoulos presented the deep Galerkin method [54] for solving high-dimensional PDEs. Additionally, some recent works have successfully solved second-order linear elliptic equations and high-dimensional Stokes problems. [33–36] Though several excellent works have applied deep learning to solve PDEs, [37] solving complicated coupled physical problems remains to be investigated.

    Considering the performance of deep learning for solving PDEs, our contribution is to design the CDNNs as an efficient alternative model for complicated coupled physical problems. Any underlying physical laws can be encoded naturally as prior information so that the networks obey the laws of physics. To satisfy the differential operators, boundary conditions, and divergence conditions, we train the neural networks on batches of randomly sampled points. The method only inputs randomly sampled spatial coordinates, without considering the nature of the samples. Notably, we take the interface conditions as constraints for the CDNNs. The approach is parallel and solves multiple variables independently at the same time. In particular, the optimal solution is represented by the networks themselves instead of by a linear combination of basis functions. Furthermore, we validate the convergence of the loss function under certain conditions and the convergence of the CDNNs to the exact solution. Several numerical experiments are conducted to investigate the performance of the CDNNs.

    The article is organized as follows. Section 2 introduces the coupled model and the related methodology. Section 3 discusses the convergence of the loss function $J(\bar{U})$ and the convergence of the CDNNs to the exact solution. Section 4 presents some numerical experiments to illustrate the efficiency of the CDNNs. The article ends with conclusions in Section 5.

    Let $\Omega_S$ and $\Omega_D$ be two bounded and simply connected polygonal domains in $\mathbb{R}^2$, as shown in Fig. 1. Let $\boldsymbol{n}_S$ denote the unit normal vector pointing from $\Omega_S$ to $\Omega_D$ and $\boldsymbol{n}_D$ the unit normal vector pointing from $\Omega_D$ to $\Omega_S$ on the interface $\Gamma$; then $\boldsymbol{n}_D = -\boldsymbol{n}_S$.

    1.  Coupled domain with interface Γ.
    When kinematic effects surpass viscous effects in a porous medium, the Darcy velocity $\boldsymbol{u}_D$ and the pressure gradient $\nabla p_D$ do not satisfy a linear relation. Instead, a nonlinear approximation, known as the Darcy–Forchheimer model, is considered. When it is imposed on the porous medium $\Omega_D$ with a homogeneous Dirichlet boundary condition on $\Gamma_D$, the equations can be written as
$$\nabla\cdot\boldsymbol{u}_D = f_D \quad \text{in } \Omega_D,$$
$$\frac{\mu}{\rho}\boldsymbol{K}^{-1}\boldsymbol{u}_D + \frac{\beta}{\rho}\,|\boldsymbol{u}_D|\,\boldsymbol{u}_D + \nabla p_D = \boldsymbol{g}_D \quad \text{in } \Omega_D,$$
$$p_D = 0 \quad \text{on } \Gamma_D,$$
where $\boldsymbol{K}$ is the permeability tensor, assumed to be uniformly positive definite and bounded, $\rho$ is the density of the fluid, $\mu$ is the viscosity, and $\beta$ is the Forchheimer coefficient, all assumed to be positive constants. In addition, $\boldsymbol{g}_D$ and $f_D$ are source terms. We remark that in this context we use a homogeneous Dirichlet boundary condition; in fact, we could also consider a homogeneous Neumann boundary condition, i.e., $\boldsymbol{u}_D\cdot\boldsymbol{n}_D = 0$ on $\Gamma_D$, and the arguments used in this paper remain true.
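As a concrete illustration, the Darcy–Forchheimer momentum relation can be evaluated pointwise. The following NumPy sketch (function name and sample values are ours, not from the paper) computes the residual $\frac{\mu}{\rho}\boldsymbol{K}^{-1}\boldsymbol{u}_D + \frac{\beta}{\rho}|\boldsymbol{u}_D|\boldsymbol{u}_D + \nabla p_D - \boldsymbol{g}_D$ at a batch of sampled points, assuming $\boldsymbol{K} = \boldsymbol{I}$ and $\mu = \rho = \beta = 1$:

```python
import numpy as np

def forchheimer_residual(u_D, grad_p_D, g_D, K_inv, mu=1.0, rho=1.0, beta=1.0):
    """Pointwise Darcy-Forchheimer momentum residual.

    u_D, grad_p_D, g_D : (N, 2) arrays of velocity, pressure gradient, source
    K_inv              : (2, 2) inverse permeability tensor
    """
    speed = np.linalg.norm(u_D, axis=1, keepdims=True)  # |u_D| at each point
    linear_term = (mu / rho) * u_D @ K_inv.T            # Darcy part
    nonlinear_term = (beta / rho) * speed * u_D         # Forchheimer correction
    return linear_term + nonlinear_term + grad_p_D - g_D

# With g_D manufactured from the chosen u_D and grad p_D, the residual vanishes:
u = np.array([[3.0, 4.0]])                  # |u| = 5
gp = np.array([[1.0, -1.0]])
K_inv = np.eye(2)
g = (u + 5.0 * u) + gp                      # mu/rho * u + beta/rho * |u| u + grad p
r = forchheimer_residual(u, gp, g, K_inv)
print(np.abs(r).max())                      # 0.0
```

When $\boldsymbol{g}_D$ is manufactured from a chosen velocity and pressure gradient, the residual vanishes identically, which is exactly the property penalized by the CDNN loss.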

    The fluid motion in $\Omega_S$ is described by the Stokes equations:
$$-\nu \Delta \boldsymbol{u}_S + \nabla p_S = \boldsymbol{f}_S \quad \text{in } \Omega_S,$$
$$\nabla\cdot\boldsymbol{u}_S = 0 \quad \text{in } \Omega_S,$$
$$\boldsymbol{u}_S = 0 \quad \text{on } \Gamma_S,$$
where $\nu > 0$ denotes the viscosity of the fluid.

    On the interface, we prescribe the following interface conditions:
$$\boldsymbol{u}_S\cdot\boldsymbol{n}_S = \boldsymbol{u}_D\cdot\boldsymbol{n}_S \quad \text{on } \Gamma, \qquad (7)$$
$$p_S - \nu\,\boldsymbol{n}_S\cdot\nabla\boldsymbol{u}_S\cdot\boldsymbol{n}_S = p_D \quad \text{on } \Gamma, \qquad (8)$$
$$-\nu\,\boldsymbol{t}\cdot\nabla\boldsymbol{u}_S\cdot\boldsymbol{n}_S = G\,\boldsymbol{u}_S\cdot\boldsymbol{t} \quad \text{on } \Gamma. \qquad (9)$$

    Here, $\boldsymbol{t}$ represents the unit tangential vector along the interface $\Gamma$. Condition (7) represents the continuity of the normal component of the fluid velocity, Eq. (8) represents the balance of forces acting across the interface, and Eq. (9) is the Beavers–Joseph–Saffman condition. [55] The constant $G > 0$ is given and is usually obtained from experimental data.
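For a horizontal interface such as those used in the numerical tests, $\boldsymbol{n}_S = (0,1)$ and $\boldsymbol{t} = (1,0)$, and the three interface residuals can be evaluated directly from the traces of the two solutions. A minimal sketch (the function name, the geometric assumption, and the sample traces are ours):

```python
import numpy as np

# Residuals of interface conditions (7)-(9) for a horizontal interface,
# assuming n_S = (0, 1) and t = (1, 0).
def interface_residuals(uS, uD, pS, pD, grad_uS, nu=1.0, G=1.0):
    """uS, uD: (N, 2) velocity traces; pS, pD: (N,) pressure traces;
    grad_uS: (N, 2, 2) with grad_uS[n, i, j] = d(uS_i)/d(x_j)."""
    nS = np.array([0.0, 1.0])
    t = np.array([1.0, 0.0])
    duS_n = grad_uS @ nS                        # (grad uS) n_S, shape (N, 2)
    mass = uS @ nS - uD @ nS                    # condition (7)
    force = pS - nu * (duS_n @ nS) - pD         # condition (8)
    bjs = -nu * (duS_n @ t) - G * (uS @ t)      # condition (9)
    return mass, force, bjs

# One sample point with matching normal velocities and zero velocity gradient:
uS = np.array([[2.0, 3.0]]); uD = np.array([[1.0, 3.0]])
pS = np.array([4.0]); pD = np.array([1.0])
grad_uS = np.zeros((1, 2, 2))
mass, force, bjs = interface_residuals(uS, uD, pS, pD, grad_uS)
print(mass[0], force[0], bjs[0])   # 0.0 3.0 -2.0
```

In the CDNN loss below Eq. (10), squared weighted norms of exactly these three residuals appear as the interface term.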

    For notational brevity, we set $\bar{u} = (\boldsymbol{u}_S, \boldsymbol{u}_D, p_S, p_D)$ and recall the classical Sobolev spaces
$$X_S^0 = \{ \boldsymbol{v}_S \in [H^1(\Omega_S)]^d : \boldsymbol{v}_S|_{\Gamma_S} = 0 \},$$
$$Y_D = \{ q_D \in W^{1,3/2}(\Omega_D) : q_D|_{\Gamma_D} = 0 \},$$
$$X_S = \{ \boldsymbol{v}_S \in X_S^0 : \operatorname{div} \boldsymbol{v}_S = 0 \},$$
where $H^k(\Omega) = \{ v \in L^2(\Omega) : D_w^\alpha v \in L^2(\Omega),\ \forall \alpha : |\alpha| \le k \}$, with the norms
$$\|v\|_k = (v, v)_k^{1/2} = \Big\{ \sum_{|\alpha|=0}^{k} \int_\Omega (D_w^\alpha v)^2\,{\rm d}x \Big\}^{1/2}, \qquad \|v\|_{W^{k,p}} = \Big\{ \sum_{|\alpha|\le k} \|D_w^\alpha v\|_{L^p}^p \Big\}^{1/p}.$$

    In particular, $\|v\|_k = \|v\|_{W^{k,2}}$. Here, $k > 0$ is a positive integer, $\|v\|_0$ denotes the norm on $L^2(\Omega)$ or $(L^2(\Omega))^2$, and $D_w^\alpha v$ is the generalized derivative of $v$. Moreover, $(\cdot,\cdot)_D$ represents the inner product in the domain $D$ and $\langle\cdot,\cdot\rangle$ represents the inner product on the interface $\Gamma$.

    To solve the coupling of the Stokes and Darcy–Forchheimer problems, we propose the CDNNs in Fig. 2, where $x$, $y$ represent the spatial coordinates in the different domains. Furthermore, we take as observations of the state variable
$$\bar{U}(x;\theta) = \big(U_S(x;\theta_1),\, U_D(x;\theta_2),\, P_S(x;\theta_3),\, P_D(x;\theta_4)\big),$$
which is the neural network solution to the coupled Stokes and Darcy–Forchheimer problems (1)–(9). Here, $(\theta_1, \theta_3)$ and $(\theta_2, \theta_4)$ are the stacked parameters of $\theta$ for the Stokes and Darcy–Forchheimer subproblems, respectively. The following constrained optimization procedure reconstructs the parameters $\theta$ by minimizing the loss function
$$J[\bar{U}] = J_{\Omega_S}[\bar{U}] + J_{\Omega_D}[\bar{U}] + J_{\Gamma}[\bar{U}],$$
where
$$J_{\Omega_S}(\bar{U}) = \|\boldsymbol{f}_S + \nu\Delta U_S(x;\theta_1) - \nabla P_S(x;\theta_3)\|^2_{0,\Omega_S,\omega_1} + \|\nabla\cdot U_S(x;\theta_1)\|^2_{0,\Omega_S,\omega_1} + \|U_S(x;\theta_1)\|^2_{0,\Gamma_S,\omega_2},$$
$$J_{\Omega_D}(\bar{U}) = \|f_D - \nabla\cdot U_D(x;\theta_2)\|^2_{0,\Omega_D,\omega_1} + \Big\|\frac{\mu}{\rho}\boldsymbol{K}^{-1} U_D(x;\theta_2) + \frac{\beta}{\rho}|U_D(x;\theta_2)|\,U_D(x;\theta_2) + \nabla P_D(x;\theta_4) - \boldsymbol{g}_D\Big\|^2_{0,\Omega_D,\omega_1} + \|P_D(x;\theta_4)\|^2_{0,\Gamma_D,\omega_2},$$
$$J_{\Gamma}(\bar{U}) = \|U_S(x;\theta_1)\cdot\boldsymbol{n}_S - U_D(x;\theta_2)\cdot\boldsymbol{n}_S\|^2_{0,\Gamma,\omega_3} + \|P_S(x;\theta_3) - \nu\,\boldsymbol{n}_S\cdot\nabla U_S(x;\theta_1)\cdot\boldsymbol{n}_S - P_D(x;\theta_4)\|^2_{0,\Gamma,\omega_3} + \|{-\nu}\,\boldsymbol{t}\cdot\nabla U_S(x;\theta_1)\cdot\boldsymbol{n}_S - G\,U_S(x;\theta_1)\cdot\boldsymbol{t}\|^2_{0,\Gamma,\omega_3}.$$

    2.  The structure of the CDNNs.

    It should be noted that $J(\bar{U})$ measures how well the approximate solution satisfies the differential operators, divergence conditions, boundary conditions, and interface conditions. Furthermore, the norm in Eq. (11) is defined as
$$\|f(y)\|^2_{0,Y,\omega} = \int_Y |f(y)|^2\,\omega(y)\,{\rm d}y,$$
where $\omega(y)$ is the probability density of the variable $y$ in the domain $Y$. In this study, the training data are sampled randomly from the interior domains, boundary domains, and interface according to the respective probability densities $\omega_1$, $\omega_2$, and $\omega_3$. In particular, if $J(\bar{U}) = 0$, then $\bar{U}$ is the solution to the coupled Stokes and Darcy–Forchheimer problems (1)–(9). Since it is infeasible to estimate $\theta$ by directly minimizing $J(\bar{U})$ when the integrals range over a higher-dimensional region, we apply a sequence of randomly sampled points from the domain instead of forming a mesh grid. The main steps of the CDNNs for the coupled Stokes and Darcy–Forchheimer equations are presented as Algorithm 1. Another noticeable point is that the term $\nabla_\theta G(\rho^{(n)}, \theta_n)$ is an unbiased estimate of $\nabla_\theta J(\bar{U}(\cdot;\theta_n))$, because population expectations can be estimated by sample means.
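The weighted norm above is what the random batches estimate: sampling $y \sim \omega$ turns $\|f\|^2_{0,Y,\omega}$ into an expectation, which a sample mean approximates without any mesh. A small sketch, assuming $Y = (0,1)^2$ and a uniform density $\omega$ (our illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo estimate of ||f||^2_{0,Y,omega} = E_{y~omega}[ |f(y)|^2 ]
# for Y = (0,1)^2 with omega the uniform density.
def mc_squared_norm(f, n_samples=100_000):
    y = rng.uniform(0.0, 1.0, size=(n_samples, 2))   # samples from omega
    return np.mean(f(y) ** 2)

# Example: f(x, y) = x + y, whose exact squared L2 norm on the unit
# square is 7/6; the sample mean lands close to it.
est = mc_squared_norm(lambda y: y[:, 0] + y[:, 1])
print(est)
```

This unbiasedness of the sample mean is the same fact invoked for the stochastic gradient $\nabla_\theta G(\rho^{(n)}, \theta_n)$.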

    According to the definition of the loss function $J(\bar{U})$, it measures how well $\bar{U}$ satisfies Eqs. (1)–(9). Neural networks are a set of algorithms for classification and regression tasks, inspired by biological neural networks in brains. There are various types of neural networks with different neuron connection forms and architectures. According to Ref. [29], if there is only one hidden layer and one output, the sets of functions implemented by the following networks with $m_1$, $m_2$, $m_3$, and $m_4$ hidden units for the coupling of the Stokes and Darcy–Forchheimer problems are
$$[U_S^{m_1}(\varphi)]^d = \Big\{ \Theta(x): \mathbb{R}^d \to \mathbb{R}^d \;\Big|\; \Theta_k(x) = \sum_{i=1}^{m_1} \beta_i\, \varphi\Big(\sum_{j=1}^d \sigma_{j,i} x_j + c_i\Big) \Big\},$$
$$[U_D^{m_2}(\zeta)]^d = \Big\{ \Lambda(x): \mathbb{R}^d \to \mathbb{R}^d \;\Big|\; \Lambda_k(x) = \sum_{i=1}^{m_2} \beta_i'\, \zeta\Big(\sum_{j=1}^d \sigma_{j,i}' x_j + c_i'\Big) \Big\},$$
$$P_S^{m_3}(\psi) = \Big\{ \Psi(x): \mathbb{R}^d \to \mathbb{R} \;\Big|\; \Psi(x) = \sum_{i=1}^{m_3} \beta_i''\, \psi\Big(\sum_{j=1}^d \sigma_{j,i}'' x_j + c_i''\Big) \Big\},$$
$$P_D^{m_4}(\gamma) = \Big\{ \Upsilon(x): \mathbb{R}^d \to \mathbb{R} \;\Big|\; \Upsilon(x) = \sum_{i=1}^{m_4} \beta_i'''\, \gamma\Big(\sum_{j=1}^d \sigma_{j,i}''' x_j + c_i'''\Big) \Big\},$$
where $\Theta(x) = (\Theta_1(x), \ldots, \Theta_d(x))$ and $\Lambda(x) = (\Lambda_1(x), \ldots, \Lambda_d(x))$; $\varphi$, $\zeta$, $\psi$, and $\gamma$ are the activation functions of the hidden units, assumed to be in $C^2(\Omega)$, bounded, and non-constant; $x_j$ is the input; $\beta_i$, $\beta_i'$, $\beta_i''$, $\beta_i'''$, $\sigma_{j,i}$, $\sigma_{j,i}'$, $\sigma_{j,i}''$, and $\sigma_{j,i}'''$ are the weights; and $c_i$, $c_i'$, $c_i''$, $c_i'''$ are the thresholds of the neural networks.
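A one-hidden-layer network of this form is straightforward to implement; the sketch below (shapes, random initialization, and $\varphi = \tanh$ are our illustrative choices) evaluates $\Psi(x) = \sum_i \beta_i\,\varphi(\sum_j \sigma_{j,i} x_j + c_i)$ for a batch of inputs:

```python
import numpy as np

rng = np.random.default_rng(1)

def shallow_net(x, sigma, c, beta, phi=np.tanh):
    """One-hidden-layer network Psi(x) = sum_i beta_i * phi(sum_j sigma_{j,i} x_j + c_i).

    x     : (N, d) inputs
    sigma : (d, m) input-to-hidden weights
    c     : (m,)   hidden thresholds
    beta  : (m,)   hidden-to-output weights
    """
    return phi(x @ sigma + c) @ beta

d, m = 2, 16                      # 2D input, 16 hidden units (cf. the experiments)
sigma = rng.normal(size=(d, m))
c = rng.normal(size=m)
beta = rng.normal(size=m)
x = rng.uniform(size=(5, d))
print(shallow_net(x, sigma, c, beta).shape)   # (5,)
```

A vector-valued component such as $\Theta$ simply stacks $d$ copies of this scalar map, one per output dimension.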

    More generally, we use the similar notation $[U_S(\varphi)]^d \times [U_D(\zeta)]^d \times P_S(\psi) \times P_D(\gamma)$ for multilayer neural networks with an arbitrarily large number of hidden units $m_1$, $m_2$, $m_3$, and $m_4$. In particular, the parameters and the activation function in each dimension of $[U_S^{m_1}(\varphi)]^d$ are the same as before. For convenience, we let $n$ denote the number of neurons in the numerical experiments. The parameters of the CDNNs can then be formalized as
$$\theta_1^k = (\beta_1, \ldots, \beta_n, \sigma_{11}, \ldots, \sigma_{dn}, c_1, \ldots, c_n),$$
$$\theta_2^k = (\beta_1', \ldots, \beta_n', \sigma_{11}', \ldots, \sigma_{dn}', c_1', \ldots, c_n'),$$
$$\theta_3 = (\beta_1'', \ldots, \beta_n'', \sigma_{11}'', \ldots, \sigma_{dn}'', c_1'', \ldots, c_n''),$$
$$\theta_4 = (\beta_1''', \ldots, \beta_n''', \sigma_{11}''', \ldots, \sigma_{dn}''', c_1''', \ldots, c_n'''),$$
where $k = 1, 2, \ldots, d$, $\theta_1 \in \mathbb{R}^{(2+d)nd}$, $\theta_2 \in \mathbb{R}^{(2+d)nd}$, $\theta_3 \in \mathbb{R}^{(2+d)n}$, and $\theta_4 \in \mathbb{R}^{(2+d)n}$.
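The dimension counts above can be checked directly: a scalar shallow network with $n$ hidden units in $d$ dimensions carries $n$ output weights, $dn$ input weights, and $n$ thresholds, i.e. $(2+d)n$ parameters, repeated $d$ times for a vector-valued component (the helper name is ours):

```python
# Parameter count for the shallow networks above: each scalar component has
# n output weights, d*n input weights, and n thresholds, i.e. (2 + d) * n
# parameters; a vector-valued network repeats this for each output.
def n_params(d, n, n_outputs=1):
    return (2 + d) * n * n_outputs

d, n = 2, 16
print(n_params(d, n, n_outputs=d))   # theta_1 / theta_2: (2+2)*16*2 = 128
print(n_params(d, n))                # theta_3 / theta_4: (2+2)*16   = 64
```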

    In the next two subsections, we prove that the neural networks $\bar{U}^n$ with $n$ hidden units for $U_S^n$, $U_D^n$, $P_S^n$, and $P_D^n$ satisfy the differential operators, boundary conditions, divergence conditions, and interface conditions arbitrarily well for sufficiently large $n$. Specifically, we confirm that there exists $\bar{U}^n \in [U_S(\varphi)]^d \times [U_D(\zeta)]^d \times P_S(\psi) \times P_D(\gamma)$ satisfying $J(\bar{U}^n) \to 0$ as $n \to \infty$. Moreover, $\bar{U}^n \to \bar{u}$ as $n \to \infty$, where $\bar{u}$ is the exact solution to the coupled equations (1)–(9).

    In this subsection, we prove that the CDNNs $\bar{U}$ can make the loss function $J(\bar{U})$ arbitrarily small.

    In the above subsection, we proved the convergence of the loss function. In this subsection, it remains to discuss the convergence of the CDNNs to the exact solution. Following the Galerkin method, the neural networks satisfy
$$\nabla\cdot U_D^n - f_D = 0 \quad \text{in } \Omega_D,$$
$$\frac{\mu}{\rho}\boldsymbol{K}^{-1} U_D^n + \frac{\beta}{\rho}|U_D^n|\,U_D^n + \nabla P_D^n - \boldsymbol{g}_D = 0 \quad \text{in } \Omega_D,$$
$$P_D^n = 0 \quad \text{on } \Gamma_D,$$
$$-\nu\Delta U_S^n + \nabla P_S^n - \boldsymbol{f}_S = 0 \quad \text{in } \Omega_S,$$
$$\nabla\cdot U_S^n = 0 \quad \text{in } \Omega_S,$$
$$U_S^n = 0 \quad \text{on } \Gamma_S,$$
$$U_S^n\cdot\boldsymbol{n}_S - U_D^n\cdot\boldsymbol{n}_S = 0 \quad \text{on } \Gamma,$$
$$P_S^n - \nu\,\boldsymbol{n}_S\cdot\nabla U_S^n\cdot\boldsymbol{n}_S - P_D^n = 0 \quad \text{on } \Gamma,$$
$$-\nu\,\boldsymbol{t}\cdot\nabla U_S^n\cdot\boldsymbol{n}_S - G\,U_S^n\cdot\boldsymbol{t} = 0 \quad \text{on } \Gamma.$$


    Based on the above system of equations, we give the following assumption and theorem to guarantee the convergence of the CDNNs to the exact solution.

    This section presents several numerical tests to confirm the proposed theoretical results. We start with three examples with known exact solutions to test the efficiency of the proposed method, where the permeability in the third example is highly oscillatory. The fourth example, with no exact solution, shows the application of the proposed method to a high-contrast permeability problem. The section concludes with a physical flow. The numerical examples considered may violate the interface conditions (8) and (9); [56] in this case, Eqs. (8) and (9) are replaced by
$$p_S - \nu\,\boldsymbol{n}_S\cdot\nabla\boldsymbol{u}_S\cdot\boldsymbol{n}_S = p_D + g_1 \quad \text{on } \Gamma, \qquad (55)$$
$$-\nu\,\boldsymbol{t}\cdot\nabla\boldsymbol{u}_S\cdot\boldsymbol{n}_S = G\,\boldsymbol{u}_S\cdot\boldsymbol{t} + g_2 \quad \text{on } \Gamma, \qquad (56)$$
and the variational formulation changes only slightly: Eq. (51) now includes the two terms $-\langle g_1, V_S\cdot\boldsymbol{n}_S\rangle_\Gamma - \langle g_2, V_S\cdot\boldsymbol{t}\rangle_\Gamma$ on the right-hand side. In addition, we use 16 neurons in each hidden layer and apply the relative $L^1$ error ($\mathrm{err}_{L^1} := \|r - R\|_{L^1}/\|r\|_{L^1}$) and the relative $L^2$ error ($\mathrm{err}_{L^2} := \|r - R\|_0/\|r\|_0$) to measure the accuracy of the CDNNs against the exact solution ($r$: the exact solution; $R$: the neural network solution).
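On sampled values, the two error measures reduce to simple array operations; a sketch (array shapes and test values are ours):

```python
import numpy as np

def relative_errors(R, r):
    """Relative L1 and L2 errors between network output R and exact solution r."""
    err_l1 = np.sum(np.abs(r - R)) / np.sum(np.abs(r))
    err_l2 = np.linalg.norm(r - R) / np.linalg.norm(r)
    return err_l1, err_l2

r = np.array([1.0, 2.0, 3.0])      # "exact" values at sample points
R = np.array([1.1, 2.0, 3.0])      # "network" values at the same points
e1, e2 = relative_errors(R, r)
print(round(e1, 4))                # 0.1/6, i.e. 0.0167
```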

    In this subsection, we study the performance of the CDNNs for the benchmark problem presented in Ref. [56]. This problem is defined on $\Omega_S = (0,1)^2$, $\Omega_D = (0,1)\times(1,2)$, and the interface $\Gamma = \{0 < x < 1,\ y = 1\}$, with
$$\boldsymbol{u}_S = \begin{pmatrix} \pi x^2 (x-1)^2 \sin(2\pi y) \\ -2x(2x-1)(x-1)\sin^2(\pi y) \end{pmatrix}, \qquad p_S = (\cos(1) - 1)\sin(1) + \cos(y)\sin(x),$$
$$\boldsymbol{u}_D = \begin{pmatrix} \sin(\pi x)\sin(\pi y) \\ -2x(2x-1)\sin^2(\pi y) \end{pmatrix}, \qquad p_D = \sin(\pi x)\cos(\pi y).$$

    Similarly to Ref. [56], we fix $\boldsymbol{K}$ to be the identity tensor in $\mathbb{R}^{2\times 2}$ and $\mu = \rho = \beta = \nu = 1$. Because the interface conditions (8) and (9) are violated, we use the interface conditions (55) and (56), where $g_1$ and $g_2$ can be computed from the exact solution. Specifically, the errors converge as the number of hidden layers increases in Fig. 3(a). Figure 3(b) reveals that the amount of training data has no significant influence on the errors once the size exceeds $10^2$. In particular, Fig. 4 displays the exact solution of the coupled Stokes and Darcy–Forchheimer problems and the results of the CDNNs; one can observe that both are approximately identical. The point-wise errors are depicted in Fig. 5. As can be seen, the absolute errors are almost all 0, which also indicates the closeness between the exact solution and the approximate solution. Table 1 exhibits the details of the results, which are consistent with our theory.

    3.  The influence of different numbers of hidden layers and different amounts of training data on err_L2 (Test 1): (a) 400 training points, (b) one hidden layer.
    4.  The contrast of the exact solution and the CDNNs (Test 1).
    5.  The point-wise errors (Test 1).
    Table 1.  The relative errors of Test 1 (400 sampled points).

                U_S         P_S         U_D         P_D
    1 layer
      err_L1    2.49×10⁰    9.15×10⁰    8.72×10⁰    2.74×10⁻¹
      err_L2    4.84×10⁰    9.42×10⁰    1.94×10⁰    2.99×10⁻¹
    2 layers
      err_L1    4.85×10⁻¹   3.59×10⁰    9.62×10⁻²   4.03×10⁻²
      err_L2    9.01×10⁻¹   3.21×10⁰    2.19×10⁻¹   4.18×10⁻²
    3 layers
      err_L1    5.80×10⁻³   5.26×10⁻²   1.01×10⁻²   3.25×10⁻³
      err_L2    1.09×10⁻²   4.66×10⁻²   2.29×10⁻²   3.38×10⁻³
    In this example, we consider $\Omega_S = (0,1)^2$, $\Omega_D = (0,1)\times(1,2)$, and the interface $\Gamma = \{0 < x < 1,\ y = 1\}$ with an analytical solution presented in Ref. [56]. We set $\boldsymbol{K}$ to be the identity tensor in $\mathbb{R}^{2\times 2}$, $\mu = \rho = \beta = \nu = 1$, and the exact solution is given by
$$\boldsymbol{u}_S = \begin{pmatrix} \cos^2\big(\frac{\pi y}{2}\big)\sin\big(\frac{\pi x}{2}\big) \\ -\frac{1}{4}\cos\big(\frac{\pi x}{2}\big)\big(\sin(\pi y) + \pi y\big) \end{pmatrix}, \qquad p_S = \frac{\pi}{4}\cos\Big(\frac{\pi x}{2}\Big)\Big(y - 2\cos^2\Big(\frac{\pi y}{2}\Big)\Big),$$
$$\boldsymbol{u}_D = \begin{pmatrix} -\frac{1}{8}\sin\big(\frac{\pi x}{2}\big) \\ -\frac{\pi}{4}\cos\big(\frac{\pi x}{2}\big) \end{pmatrix}, \qquad p_D = \frac{\pi}{4}\cos\Big(\frac{\pi x}{2}\Big)\, y.$$

    Naturally, the corresponding $\boldsymbol{f}_S$, $f_D$, and $\boldsymbol{g}_D$ can be calculated from the exact solution. Note that this example satisfies the interface conditions (7)–(9). Guided by Test 1, we choose appropriate training data and hidden layers to solve the second example. Figure 6 and Table 2 show the accuracy of the CDNNs for solving the coupled problems in detail.

    6.  The point-wise errors (Test 2).
    Table 2.  The relative errors of Test 2 (400 sampled points).

                U_S         P_S         U_D         P_D
    1 layer
      err_L1    1.82×10⁻²   1.19×10⁻¹   1.57×10⁻²   1.08×10⁻²
      err_L2    3.66×10⁻²   1.23×10⁻¹   4.62×10⁻²   1.11×10⁻²
    2 layers
      err_L1    2.27×10⁻⁴   1.74×10⁻³   1.04×10⁻⁴   4.67×10⁻⁵
      err_L2    4.21×10⁻⁴   2.00×10⁻³   3.18×10⁻⁴   5.55×10⁻⁵
    3 layers
      err_L1    1.65×10⁻⁴   1.01×10⁻³   1.13×10⁻⁴   7.69×10⁻⁵
      err_L2    3.37×10⁻⁴   1.50×10⁻³   3.44×10⁻⁴   8.32×10⁻⁵
    In this subsection, we solve the coupling of the Stokes and Darcy–Forchheimer problems with highly oscillatory permeability over the domains $\Omega_S = (0,1)\times(0,1/2)$, $\Omega_D = (0,1)\times(1/2,1)$, and the interface $\Gamma = \{0 < x < 1,\ y = 1/2\}$, presented in Ref. [56]. Here we set $\mu = \rho = \beta = \nu = 1$ and $\boldsymbol{K}^{-1} = \varrho \boldsymbol{I}$, where $\varrho$ is defined by
$$\varrho = \frac{2 + 1.8\sin(2\pi x/\varepsilon)}{2 + 1.8\sin(2\pi y/\varepsilon)} + \frac{2 + 1.8\sin(2\pi y/\varepsilon)}{2 + 1.8\sin(2\pi x/\varepsilon)},$$
with $\varepsilon = 1/16$. The profile of $\varrho$ is shown in Fig. 7. The exact solution is given by
$$\boldsymbol{u}_S = \begin{pmatrix} 16y\cos^2(\pi x)\,(y^2 - 0.25) \\ 8\pi\cos(\pi x)\sin(\pi x)\,(y^2 - 0.25)^2 \end{pmatrix}, \qquad p_S = x^2,$$
$$\boldsymbol{u}_D = \begin{pmatrix} \sin(2\pi x)\cos(2\pi y) \\ \cos(2\pi x)\sin(2\pi y) \end{pmatrix}, \qquad p_D = \cos(2\pi x)\cos(2\pi y).$$
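The oscillatory coefficient $\varrho$ can be sampled directly to see the contrast it introduces; a short sketch (the grid resolution is our choice):

```python
import numpy as np

eps = 1.0 / 16.0

def varrho(x, y):
    """Oscillatory inverse-permeability coefficient of Test 3."""
    a = 2.0 + 1.8 * np.sin(2.0 * np.pi * x / eps)
    b = 2.0 + 1.8 * np.sin(2.0 * np.pi * y / eps)
    return a / b + b / a

# Sample on a fine grid: a/b + b/a >= 2 everywhere by AM-GM, while the
# ratio of extremes (2 + 1.8)/(2 - 1.8) = 19 drives large oscillations.
xs = np.linspace(0.0, 1.0, 401)
X, Y = np.meshgrid(xs, xs)
vals = varrho(X, Y)
print(vals.min(), vals.max())   # min near 2, max well above 10
```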

    We compute the relative errors in Table 3 to reflect the ability of the CDNNs to solve the coupled problems with highly oscillatory permeability (Fig. 7). Figure 8 reveals that the CDNNs handle the highly oscillatory permeability coupled problems without losing accuracy.

    7.  The value of ϱ (Test 3).
    8.  The point-wise errors (Test 3).
    The problems studied so far have exact solutions. In this example, we consider the coupling of the Stokes and Darcy–Forchheimer problems with no exact solution over $\Omega_S = (-1/2, 3/2)\times(0,2)$, $\Omega_D = (-1/2, 3/2)\times(-2,0)$, and the interface $\Gamma = \{-1/2 < x < 3/2,\ y = 0\}$. Specifically, in the Stokes region, the Dirichlet boundary condition is given by the Kovasznay flow, [57]
$$\boldsymbol{u}_S = \begin{pmatrix} 1 - {\rm e}^{\lambda x}\cos(2\pi y) \\ \dfrac{\lambda}{2\pi}\, {\rm e}^{\lambda x}\sin(2\pi y) \end{pmatrix}, \qquad \lambda = \frac{-8\pi^2}{1 + \sqrt{1 + 64\pi^2}}.$$
Moreover, we set $\mu = \rho = \beta = \nu = 1$ and $\boldsymbol{g}_D = 0$, $f_D = 0$, $\boldsymbol{f}_S = 0$. In addition, $p_D$ satisfies the homogeneous Dirichlet boundary condition along $y = -2$; otherwise, it has a homogeneous Neumann boundary condition. The permeability is taken to be $\boldsymbol{K} = \varepsilon \boldsymbol{I}$ with $\varepsilon = 10^4$. Since the exact solution for this example is unavailable, we provide the $L^2$ errors of the interface conditions to demonstrate the accuracy of the CDNNs in Table 4. Clearly, the error decreases gradually as the number of hidden layers increases. Furthermore, Figs. 9 and 10 display the results of the CDNNs in detail.
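Independently of the value of $\lambda$, the Kovasznay velocity field is divergence-free, which makes it admissible Dirichlet data for the Stokes side; a quick finite-difference check (the sample point and step size are our choices):

```python
import numpy as np

lam = -8.0 * np.pi**2 / (1.0 + np.sqrt(1.0 + 64.0 * np.pi**2))  # as in Test 4

def u_kov(x, y):
    """Kovasznay velocity used as Dirichlet data on the Stokes side."""
    return np.array([1.0 - np.exp(lam * x) * np.cos(2.0 * np.pi * y),
                     lam / (2.0 * np.pi) * np.exp(lam * x) * np.sin(2.0 * np.pi * y)])

# Central-difference divergence at a sample point: vanishes for any lam,
# since d/dx of the first component cancels d/dy of the second exactly.
h, x0, y0 = 1e-5, 0.3, 0.7
div = ((u_kov(x0 + h, y0)[0] - u_kov(x0 - h, y0)[0])
       + (u_kov(x0, y0 + h)[1] - u_kov(x0, y0 - h)[1])) / (2.0 * h)
print(abs(div) < 1e-6)   # True
```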

    9.  The results of CDNNs (Test 4).
    10.  The velocity flows of Stokes and Darcy (Test 4).
    Table 3.  The relative errors of Test 3 (400 sampled points).

                U_S         P_S         U_D         P_D
    1 layer
      err_L1    3.59×10⁻¹   5.03×10⁰    5.52×10⁻²   7.59×10⁻²
      err_L2    6.79×10⁻¹   5.20×10⁰    1.73×10⁻¹   7.07×10⁻²
    2 layers
      err_L1    8.42×10⁻⁴   9.41×10⁻³   1.15×10⁻³   1.32×10⁻³
      err_L2    1.65×10⁻³   1.05×10⁻²   3.43×10⁻³   1.56×10⁻³
    3 layers
      err_L1    1.89×10⁻⁴   3.04×10⁻³   2.97×10⁻⁴   6.70×10⁻⁵
      err_L2    3.65×10⁻⁴   3.37×10⁻³   8.98×10⁻⁴   8.40×10⁻⁵
    Table 4.  The L² errors of the interface conditions for Test 4 (K = 10⁴ I; 400 sampled points).

                Condition 1   Condition 2   Condition 3
    1 layer     6.49×10⁻²     9.14×10⁻²     3.03×10⁻²
    2 layers    3.67×10⁻⁵     4.74×10⁻²     6.27×10⁻⁴
    3 layers    3.37×10⁻⁵     7.53×10⁻³     1.44×10⁻⁵

    We conclude this section with a physical flow, where $\Omega_S = (0,1)\times(1,2)$, $\Omega_D = (0,1)^2$, and the interface $\Gamma = \{0 < x < 1,\ y = 1\}$. In $\Omega_S$, the boundaries of the cavity are walls with the no-slip condition, except for the upper boundary, where a uniform tangential velocity $\boldsymbol{u}_S(x, 2) = (1, 0)^{\rm T}$ is imposed; this is the lid-driven cavity flow. More precisely, we enforce homogeneous Neumann and Dirichlet boundary conditions, respectively, on $\Gamma_{D,N} = \{x = 0 \text{ or } y = 0\}$ and $\Gamma_{D,D} = \{x = 1\}$. In addition, we set $\boldsymbol{K}$ to be the identity tensor in $\mathbb{R}^{2\times 2}$, $\mu = \rho = \beta = \nu = 1$, and $f_D = 0$, $\boldsymbol{f}_S = 0$, $\boldsymbol{g}_D = 0$. The results of the CDNNs are depicted in Fig. 11. More vividly, we display the velocity flows of the free-flow and porous media zones in Fig. 12.

    11.  The results of the CDNNs (Test 5).
    12.  The velocity flows of Stokes and Darcy (Test 5).

    In summary, we have proposed the CDNNs to study the coupled Stokes and Darcy–Forchheimer problems. Our method properly compiles the interface conditions of the coupled problems into the networks and can serve as an efficient alternative for complex coupled problems. The CDNNs avoid limitations of traditional methods, such as decoupling, grid construction, and the treatment of complicated interface conditions. Furthermore, the method is meshfree and parallel, and it can solve multiple variables independently at the same time. In particular, we prove the convergence of the loss function and the convergence of the CDNNs to the exact solution, and the numerical results are fully consistent with our theory. We leave the following issues for future work: (1) combining data-driven with model-driven approaches to solve high-dimensional coupled problems, (2) determining the specific size of the networks through theoretical analysis, and (3) combining traditional numerical methods with deep learning to solve more complicated high-dimensional coupled problems.

    Acknowledgements Project supported in part by the National Natural Science Foundation of China (Grant No. 11771259), the Special Support Program to Develop Innovative Talents in the Region of Shaanxi Province, the Innovation Team on Computationally Efficient Numerical Methods Based on New Energy Problems in Shaanxi Province, and the Innovative Team Project of Shaanxi Provincial Department of Education (Grant No. 21JP013).
  • [1] Li J, Bai Y, Zhao X 2023 Modern Numerical Methods for Mathematical Physics Equations (Beijing: Science Press) p 10 (in Chinese)
    [2] Li J, Lin X, Chen Z 2022 Finite Volume Methods for the Incompressible Navier–Stokes Equations (Berlin: Springer) p 15
    [3] Li J 2019 Numerical Methods for the Incompressible Navier–Stokes Equations (Beijing: Science Press) p 8
    [4] Saffman P G 1971 Stud. Appl. Math. 50 93 doi: 10.1002/sapm.v50.2
    [5] Forchheimer P 1901 Z. Ver. Deutsch. Ing. 45 1782
    [6] Park E J 1995 SIAM J. Numer. Anal. 32 865 doi: 10.1137/0732040
    [7] Kim M Y, Park E J 1999 Comput. Math. Appl. 38 113 doi: 10.1016/S0898-1221(99)00291-6
    [8] Park E J 2005 Numer. Methods Part. Differ. Equ. 21 213 doi: 10.1002/num.20035
    [9] Discacciati M, Miglio E, Quarteroni A 2002 Appl. Numer. Math. 43 57 doi: 10.1016/S0168-9274(02)00125-3
    [10] Layton W J, Schieweck F, Yotov I 2003 SIAM J. Numer. Anal. 40 2195 doi: 10.1137/S0036142901392766
    [11] Riviere B 2005 J. Sci. Comput. 22 479 doi: 10.1007/s10915-004-4147-3
    [12] Riviere B, Yotov I 2005 SIAM J. Numer. Anal. 42 1959 doi: 10.1137/S0036142903427640
    [13] Burman E, Hansbo P 2007 J. Comput. Appl. Math. 198 35 doi: 10.1016/j.cam.2005.11.022
    [14] Gatica G N, Oyarzúa R, Sayas F J 2011 Math. Comput. 80 1911
    [15] Girault V, Vassilev D, Yotov I 2014 Numer. Math. 127 93 doi: 10.1007/s00211-013-0583-z
    [16] Lipnikov K, Vassilev D, Yotov I 2014 Numer. Math. 126 321 doi: 10.1007/s00211-013-0563-3
    [17] Qiu C X, He X M, Li J, Lin Y P 2020 J. Comput. Phys. 411 109400 doi: 10.1016/j.jcp.2020.109400
    [18] Li R, Gao Y L, Li J, Chen Z X 2018 J. Comput. Appl. Math. 334 111 doi: 10.1016/j.cam.2017.11.011
    [19] He Y N, Li J 2010 Int. J. Numer. Methods Fluids 62 647 doi: 10.1002/fld.2035
    [20] Liu X, Li J, Chen Z X 2018 J. Comput. Appl. Math. 333 442 doi: 10.1016/j.cam.2017.11.010
    [21] Li J, Mei L Q, He Y N 2006 Appl. Math. Comput. 182 24 doi: 10.1016/j.amc.2006.03.030
    [22] Zhu L P, Li J, Chen Z X 2011 J. Comput. Appl. Math. 235 2821 doi: 10.1016/j.cam.2010.12.001
    [23] Krizhevsky A, Sutskever I, Hinton G E 2012 Commun. ACM 64 84 doi: 10.1145/3065386
    [24] Hinton G, Deng L, Yu D, et al. 2012 IEEE Signal Process. Mag. 29 82 doi: 10.1109/MSP.2012.2205597
    [25] He K M, Zhang X Y, Ren S Q, et al. 2016 Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 27–30, 2016, Las Vegas, NV, USA, p 770
    [26] Cotter N E 1990 IEEE Trans. Neural Networks 4 290 doi: 10.1109/72.80265
    [27] Hornik K, Stinchcombe M, White H 1989 Neural Networks 2 359 doi: 10.1016/0893-6080(89)90020-8
    [28] Hornik K, Stinchcombe M, White H 1990 Neural Networks 3 551 doi: 10.1016/0893-6080(90)90005-6
    [29] Hornik K 1991 Neural Networks 4 251 doi: 10.1016/0893-6080(91)90009-T
    [30] Cybenko G 1989 Math. Control Signals Syst. 2 303 doi: 10.1007/BF02551274
    [31] Telgarsky M 2016 Proc. Mach. Learn. Res. 49 1517
    [32] Mhaskar H, Liao Q L, Poggio T 2016 arXiv:1603.00988v4 [cs.LG]
    [33] Khoo Y, Lu J F, Ying L X 2017 arXiv:1707.03351 [math.NA]
    [34] Li J, Yue J, Zhang W, et al. 2022 J. Sci. Comput. doi: 10.1007/s10915-022-01930-8
    [35] Li J, Zhang W, Yue J 2021 Int. J. Numer. Anal. Model. 18 427
    [36] Yue J, Li J 2022 Int. J. Numer. Methods Fluids 94 1416 doi: 10.1002/fld.5095
    [37] Yue J, Li J 2023 Appl. Math. Comput. 437 127514 doi: 10.1016/j.amc.2022.127514
    [38] Fan Y W, Lin L, Ying L X, et al. 2018 arXiv:1807.01883 [math.NA]
    [39] Wang M, Cheung S W, Chung E T, et al. 2018 arXiv:1810.12245 [math.NA]
    [40] Li X 1996 Neurocomputing 12 327 doi: 10.1016/0925-2312(95)00070-4
    [41] Lagaris I E, Likas A C, Fotiadis D I 1998 IEEE Trans. Neural Networks 9 987 doi: 10.1109/72.712178
    [42] Lagaris I E, Likas A C, Papageorgiou D G 2000 IEEE Trans. Neural Networks 11 1041 doi: 10.1109/72.870037
    [43] McFall K S, Mahan J R 2009 IEEE Trans. Neural Networks 20 1221 doi: 10.1109/TNN.2009.2020735
    [44] Raissi M, Perdikaris P, Karniadakis G E 2017 arXiv:1711.10561 [cs.AI]
    [45] Raissi M, Perdikaris P, Karniadakis G E 2017 arXiv:1711.10566 [cs.AI]
    [46] Raissi M, Perdikaris P, Karniadakis G E 2019 J. Comput. Phys. 378 686 doi: 10.1016/j.jcp.2018.10.045
    [47] Yang L, Meng X H, Karniadakis G E 2021 J. Comput. Phys. 425 109913 doi: 10.1016/j.jcp.2020.109913
    [48] Rao C P, Sun H, Liu Y 2020 arXiv:2006.08472v1 [math.NA]
    [49] Olivier P, Fablet R 2020 arXiv:2002.01029 [physics.comp-ph]
    [50] Lu L, Meng X H, Mao Z P, et al. 2021 SIAM Rev. 63 208 doi: 10.1137/19M1274067
    [51] Fang Z W, Zhan J 2020 IEEE Access 8 26328 doi: 10.1109/ACCESS.2019.2963390
    [52] Pang G F, Lu L, Karniadakis G E 2019 SIAM J. Sci. Comput. 41 A2603 doi: 10.1137/18M1229845
    [53] Zhu Y H, Zabaras N, Koutsourelakis P S, et al. 2019 J. Comput. Phys. 394 56 doi: 10.1016/j.jcp.2019.05.024
    [54] Sirignano J, Spiliopoulos K 2018 J. Comput. Phys. 375 1339 doi: 10.1016/j.jcp.2018.08.029
    [55] Beavers G S, Joseph D D 1967 J. Fluid Mech. 30 197 doi: 10.1017/S0022112067001375
    [56] Zhao L, Chung E T, Park E J, Zhou G 2021 SIAM J. Numer. Anal. 59 1 doi: 10.1137/19M1268525
    [57] Kovasznay L I G 1948 Math. Proc. Cambridge Philos. Soc. 44 58 doi: 10.1017/S0305004100023999
    [58] Bottou L 2012 Lecture Notes in Computer Science, eds Montavon G, Orr G B, Müller K-R (Berlin: Springer) pp 430–445
