Variational Inequality

We present a classical example of a variational inequality.

Let us denote \(\mathcal{C} = \{ u\in H^1_0(\Omega), u \le g \}\)

The problem is:

\[u = \arg\min_{u\in \mathcal{C}} J(u) = \frac{1}{2} \int_\Omega \nabla u . \nabla u - \int_\Omega f u\]

where \(f\) and \(g\) are given functions.

The solution is the projection of \(f^\star\) on the convex set \(\mathcal{C}\) for the scalar product \(((v,w)) = \int_\Omega \nabla v . \nabla w\) of \(H^1_0(\Omega)\), where \(f^\star\) is the solution of:

\[((f^\star, v )) = \int_{\Omega}{f v}, \quad \forall v \in H^1_0(\Omega)\]
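In FreeFEM terms, \(f^\star\) is simply the unconstrained solution of the corresponding Poisson problem. A minimal sketch, assuming the same unit square mesh, P1 space and \(f = 1\) as in the full example below (the identifiers Th0, Vh0, fstar and RieszF exist only for this illustration):

mesh Th0 = square(20, 20);
fespace Vh0(Th0, P1);
Vh0 fstar, v;

solve RieszF (fstar, v)
    = int2d(Th0)(
          dx(fstar)*dx(v)
        + dy(fstar)*dy(v)
    )
    - int2d(Th0)(
          1.*v //f = 1
    )
    + on(1, 2, 3, 4, fstar=0)
    ;

plot(fstar, cmm="fstar (unconstrained solution)", wait=true);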

The projection on the convex set clearly satisfies \(\forall v \in \mathcal{C}, \quad (( u -v , u - f^\star )) \leq 0\), and after expanding, we get the classical inequality:

\[\forall v \in \mathcal{C}, \quad \int_\Omega \nabla(u -v) . \nabla u \leq \int_\Omega (u-v) f\]
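The expansion only uses the bilinearity of \(((\cdot , \cdot))\) and the definition of \(f^\star\):

\[(( u -v , u - f^\star )) = \int_\Omega \nabla(u -v) . \nabla u - \int_\Omega \nabla(u -v) . \nabla f^\star = \int_\Omega \nabla(u -v) . \nabla u - \int_\Omega (u-v) f \leq 0 .\]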

We can also rewrite the problem as a saddle point problem:

Find \(\lambda, u\) such that:

\[\max_{\lambda\in L^2(\Omega), \lambda\geq 0} \min_{u\in H^1_0(\Omega)} \mathcal{L}(u,\lambda) = \frac{1}{2} \int_\Omega \nabla u . \nabla u - \int_\Omega f u + \int_{\Omega} \lambda (u-g)^+\]

where \((u-g)^+ = \max(0, u-g)\).

This saddle point problem is equivalent to finding \(u, \lambda\) such that:

\[\left\{ \begin{array}{cc} \displaystyle \int_\Omega \nabla u . \nabla v + \lambda v^+ \,d\omega = \int_\Omega f v , &\forall v \in H^1_0(\Omega) \cr \displaystyle \int_\Omega \mu (u-g)^+ = 0 , & \forall \mu \in L^2(\Omega) , \mu \geq 0, \lambda \geq 0, \end{array}\right.\]
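Note that the second equation, which holds for every nonnegative \(\mu\), recovers the constraint: taking \(\mu = (u-g)^+\) gives

\[\int_\Omega \left((u-g)^+\right)^2 = 0 \quad\Longrightarrow\quad (u-g)^+ = 0 \text{ a.e.} \quad\Longrightarrow\quad u \leq g \text{ a.e. in } \Omega.\]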

An algorithm to solve the previous problem is:

  1. \(k=0\), and choose \(\lambda_0\) in \(H^{-1}(\Omega)\)

  2. Loop on \(k = 0, \ldots\)

    • set \(\mathcal{I}_{k} = \{ x \in \Omega / \lambda_{k} + c ( u_{k+1} - g) \leq 0 \}\)

    • \(V_{g,k+1} = \{ v\in H^1_0(\Omega) / v = g\) on \(\mathcal{I}_{k} \}\),

    • \(V_{0,k+1} = \{ v\in H^1_0(\Omega) / v = 0\) on \(\mathcal{I}_{k} \}\),

    • Find \(u_{k+1} \in V_{g,k+1}\) and \(\lambda_{k+1} \in H^{-1}(\Omega)\) such that

      \[\left\{\begin{array}{cc} \displaystyle \int_\Omega \nabla u_{k+1}. \nabla v_{k+1} \,d\omega = \int_\Omega f v_{k+1} , &\forall v_{k+1} \in V_{0,k+1} \cr \displaystyle <\lambda_{k+1},v> = \int_\Omega \nabla u_{k+1}. \nabla v - f v \,d\omega & \end{array}\right.\]

      where \(\langle \cdot , \cdot \rangle\) is the duality bracket between \(H^{1}_0(\Omega)\) and \(H^{-1}(\Omega)\), and \(c\) is a penalty constant (large enough).

You can find all the mathematics about this algorithm in [ITO2003] and [HINTERMULLER2002].

Now how to do that in FreeFEM? The full example is:

Tip

Variational inequality

load "medit"

// Parameters
real eps = 1e-5;
real c = 1000; //penalty parameter of the algorithm
real tgv = 1e30; //a huge value for exact penalization
func f = 1; //right hand side function
func fd = 0; //Dirichlet boundary condition function

// Mesh
mesh Th = square(20, 20);

// Fespace
fespace Vh(Th, P1);
int n = Vh.ndof; //number of degrees of freedom
Vh uh, uhp; //u^{n+1} and u^n
Vh Ik; //to define the set where the constraint is reached
Vh g = 0.05; //discrete function g
Vh lambda = 0;

// Problem
varf a (uh, vh)
    = int2d(Th)(
          dx(uh)*dx(vh)
        + dy(uh)*dy(vh)
    )
    - int2d(Th)(
          f*vh
    )
    + on(1, 2, 3, 4, uh=fd)
    ;

//the mass Matrix construction
varf vM (uh, vh) = int2d(Th)(uh*vh);

//two versions of the matrix of the problem
matrix A = a(Vh, Vh, tgv=tgv, solver=CG); //one that is modified at each iteration
matrix AA = a(Vh, Vh, solver=CG); //one kept unchanged to compute the residual

matrix M = vM(Vh, Vh); //for a fast computation of the L^2 norm: sqrt(u'*(w=M*u))

real[int] Aiin(n);
real[int] Aii = A.diag; //get the diagonal of the matrix
real[int] rhs = a(0, Vh, tgv=tgv);
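//remark: rhs holds the linear part of the varf, i.e. -\int f v, plus the tgv*fd terms of the on() condition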

// Initialization
Ik = 0;
uhp = -tgv;

// Loop
for(int iter = 0; iter < 100; ++iter){
    // Update
    real[int] b = rhs; //get a copy of the right-hand side
    real[int] Ak(n); //the complement of Ik (Ak = 1 - Ik)
    Ak = 1.; Ak -= Ik[];
    //add the new locking conditions on b and on the diagonal where Ik == 1
    b = Ik[] .* g[]; b *= tgv; b -= Ak .* rhs;
    Aiin = Ik[] * tgv; Aiin += Ak .* Aii; //set Aii = tgv for i in Ik
    A.diag = Aiin; //set the matrix diagonal
    set(A, solver=CG); //important to change preconditioning for solving

    // Solve
    uh[] = A^-1* b; //solve the problem with the new locking conditions

    // Residual
    lambda[] = AA * uh[]; //compute the residual (fast with matrix)
    lambda[] += rhs; //remark rhs = -\int f v

    Ik = (lambda + c*( g- uh)) < 0.; //the new locking value

    // Plot
    plot(Ik, wait=true, cmm=" lock set ", value=true, fill=true);
    plot(uh, wait=true, cmm="uh");

    // Error
    //trick to compute L^2 norm of the variation (fast method)
    real[int] diff(n), Mdiff(n);
    diff = uh[] - uhp[];
    Mdiff = M*diff;
    real err = sqrt(Mdiff'*diff);
    cout << "|| u_{k=1} - u_{k} ||_2 = " << err << endl;

    // Stop test
    if(err < eps) break;

    // Update
    uhp[] = uh[];
}

// Plot
medit("uh", Th, uh);

Note

As you can see in this example, some vector and matrix operators are not implemented, so a workaround is to split the expression and use the operators +=, -= to merge the partial results.
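For instance, the combined update b = Ik[].*g[]*tgv - Ak.*rhs used in the loop may not be accepted as a single array expression, so it is built step by step. A minimal, hypothetical sketch of the idiom (the arrays u, v, w and r exist only for this illustration):

int m = 5;
real[int] u(m), v(m), w(m), r(m);
u = 2.; v = 3.; w = 1.;

r = u .* v; //start from the elementwise product u_i*v_i
r *= 10.; //scale the intermediate result in place
r -= w; //merge the last term with the operator -=
cout << r << endl;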
