In this article, the probabilistic metric distance between two disjoint sets is used to establish criteria for the existence and uniqueness of the best proximity point, which is viewed as a global optimization problem. Solving this problem amounts to obtaining an optimal approximation to the solution of a fixed point equation. We introduce two types of probabilistic proximal contraction mappings and use a geometric property called the Ω-property in the context of probabilistic metric spaces. We also obtain some consequences for self-mappings, which yield fixed point results. Some examples are provided to validate the findings. As an application, we obtain the solution of a second-order boundary value problem using the minimum t-norm in the context of probabilistic metric spaces.
This manuscript considers a global optimization problem in the context of probabilistic metric spaces (PM spaces). A PM space is the probabilistic generalization of a metric space, in which a distribution function plays the role of the metric. Our objective is to identify the minimum distance between two subsets of a PM space. We use two different types of non-self contraction mappings to solve proximity point problems and propose the concept of probabilistic distance between any two subsets of a PM space.
The proximity point problem originated in the work of Eldred and Veeramani [4] in 2006 and has since been developed extensively in many works. The problem is described as follows. If P and Q are non-empty subsets of a metric space (Y, δ), a pair (p1, q1) ∈ P × Q is called a best proximity pair if δ (p1, q1) = δ (P, Q) = inf {δ (p, q) : p ∈ P and q ∈ Q}. Further, given a mapping from P to Q, a point w ∈ P is called a best proximity point (with respect to the mapping) if the function attains at w its global minimum with the value δ (P, Q); that is, . Thus the problem is one of global minimization. In another approach, it can be viewed as an approximate fixed point problem [1, 12]. For a non-self mapping , the idea of a fixed point is not pertinent when P and Q are disjoint. Even in cases where P ∩ Q ≠ ∅, a fixed point of exists only under special conditions. But it may be possible to find an approximate fixed point of by minimizing the function . If the minimized value is δ (P, Q), we obtain a proximity point at which the proximity pair is realized. Thus, the proximity point problem is to find an optimal approximate solution of the fixed point equation .
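In the finite case, the minimization described above can be carried out by direct search. The following Python sketch computes a best proximity pair; the sets P, Q and the metric are hypothetical data chosen only to illustrate the definition:

```python
from itertools import product

def best_proximity_pair(P, Q, delta):
    """Return a pair (p, q) in P x Q at which delta attains delta(P, Q)."""
    return min(product(P, Q), key=lambda pq: delta(*pq))

# Hypothetical finite subsets of the real line with the usual metric.
P = [0.0, 0.5, 1.0]
Q = [2.0, 3.0]
delta = lambda p, q: abs(p - q)

p_star, q_star = best_proximity_pair(P, Q, delta)
# delta(P, Q) = 1, realised by the best proximity pair (1.0, 2.0).
```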
Several exciting results regarding the global optimality approach for single-valued and multi-valued contractions in the setting of various spaces have appeared in the literature; see, for example, [5, 14–16] and the references therein.
In this article, a novel approach is taken to tackle the challenge of defining essential criteria for the existence and uniqueness of the best proximity point within the framework of probabilistic metric spaces. To validate the proposed findings, the article furnishes a set of examples that not only illustrate the concepts but also underscore their practical applicability. These examples provide concrete evidence of the validity and effectiveness of the presented approach. As an application, we obtain the solution to a second-order boundary value problem in the context of probabilistic metric spaces by utilizing a minimum t-norm. This application highlights the framework’s adaptability and ability to handle various real-world problems.
Mathematical preliminaries
Some technical accounts of several concepts associated with the structure of the probabilistic metric space are given below, which are required in our main discussion.
Definition 2.1. [6, 18] A distribution function is a mapping that is non-decreasing and left continuous with and . The Heaviside function is a standard example of a distribution function.
Every pair of elements in PM spaces is assigned a distribution function, which makes them probabilistic generalizations of metric spaces. Schweizer and Sklar [18] comprehensively considered several aspects of such spaces.
Definition 2.2. A probabilistic metric space (PM-space) (see [6, 18]) is an ordered pair (Y, ϝ), where Y is a non-empty set and ϝ is a mapping from Y × Y into the set of all distribution functions. The distribution function ϝ (u, v) is denoted by ϝu,v and satisfies the following conditions for all u, v, w ∈ Y,
(i) ϝu,v (0) =0,
(ii) ϝu,v (l) =1 for all l > 0 if and only if u = v,
(iii) ϝu,v (l) = ϝ v,u (l) for all l > 0,
(iv) if ϝu,v (l1) =1 and ϝv,w (l2) =1 then ϝu,w (l1 + l2) =1, for l1, l2 > 0.
Example 2.1. Consider Y = [1, 2] and , for all u, v ∈ Y. Then (Y, ϝ) is a PM space.
A Menger space is a particular type of PM-space in which the triangle inequality is formulated using a t-norm.
Shi et al. [21] introduced the following definition of n-th order t-norm.
Definition 2.3. A mapping is called an n-th order t-norm [21] if the following conditions are satisfied:
= Δ (p1, p2, …, pn-1, Δ (pn, q2, q3, …, qn)). When n = 2, we have a binary t-norm, which is commonly known as a t-norm.
Some examples of different types of t-norms are:
(i) The Lukasiewicz t-norm, Δ = TL, defined by TL (p1, q1) = max {p1 + q1 - 1, 0},
(ii) The minimum t-norm, Δ = Tm, defined by Tm (p1, q1) = min {p1, q1},
(iii) The product t-norm, Δ = Tp, defined by Tp (p1, q1) = p1 q1.
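These three standard t-norms can be sketched and spot-checked numerically as follows; the grid of test values is arbitrary, and the checks cover commutativity, monotonicity, and 1 as the identity element:

```python
# The names follow the text: Lukasiewicz, minimum, and product t-norms,
# with arguments in [0, 1].
def t_lukasiewicz(a, b):
    return max(a + b - 1.0, 0.0)

def t_min(a, b):
    return min(a, b)

def t_product(a, b):
    return a * b

# Spot-check the binary t-norm axioms on a grid.
grid = [i / 10 for i in range(11)]
for T in (t_lukasiewicz, t_min, t_product):
    for a in grid:
        assert abs(T(a, 1.0) - a) < 1e-12          # 1 is the identity
        for b in grid:
            assert abs(T(a, b) - T(b, a)) < 1e-12  # commutativity
            for c in grid:
                if b <= c:
                    assert T(a, b) <= T(a, c) + 1e-12  # monotonicity
```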
Definition 2.4. A Menger space, also known as a Menger PM-space (see [6, 18]) is a triplet (Y, ϝ , Δ), where Y is a non-empty set, ϝ is a function defined on Y × Y to the set of distribution functions and Δ is a t-norm, satisfying the following conditions:
ϝu,v (0) = 0 for all u, v ∈ Y,
ϝu,v (z) = 1 for all z > 0 and u, v ∈ Y if and only if u = v,
ϝu,v (z) = ϝv,u (z) for all u, v ∈ Y, z > 0 and
ϝu,v (κ + μ) ≥ Δ (ϝu,w (κ) , ϝw,v (μ)) for all κ, μ ≥ 0 and u, v, w ∈ Y.
Menger spaces are particular types of PM-spaces. A metric space becomes a Menger space if we write ϝu,v (l) = H (l - δ (u, v)), where H is the Heaviside function. Sehgal and Bharucha-Reid [19] initiated fixed point results in PM-spaces. Following that, many results appeared in the literature, and Hadzic and Pap [6] provided a comprehensive survey. For details on convergence, Cauchy sequences and completeness in Menger spaces, see [6, 18].
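The Heaviside construction above can be checked numerically. The following sketch verifies the Menger triangle inequality for ϝu,v (l) = H (l - δ (u, v)) under the minimum t-norm; the point set and the choice of t-norm are ours, for illustration only:

```python
import itertools

def heaviside(x):
    # Left-continuous Heaviside step: H(x) = 1 for x > 0 and H(x) = 0 otherwise.
    return 1.0 if x > 0 else 0.0

def delta(u, v):
    return abs(u - v)  # the ordinary metric on the reals

def F(u, v, l):
    # The induced probabilistic metric: F_{u,v}(l) = H(l - delta(u, v)).
    return heaviside(l - delta(u, v))

# Spot-check the Menger triangle inequality with the minimum t-norm:
# F_{u,v}(k + m) >= min(F_{u,w}(k), F_{w,v}(m)).
pts = [0.0, 1.0, 2.5]
for u, v, w in itertools.product(pts, repeat=3):
    for k in (0.5, 1.0, 2.0):
        for m in (0.5, 1.0, 2.0):
            assert F(u, v, k + m) >= min(F(u, w, k), F(w, v, m))
```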
Definition 2.5. [20] Let P and Q be two nonempty subsets of a PM-space (Y, ϝ , Δ). We define P0 and Q0 as follows:
P0 = {u ∈ P : ∃ v ∈ Q such that ϝu,v (l) = ϝP,Q (l) for all l > 0},
Q0 = {v ∈ Q : ∃ u ∈ P such that ϝu,v (l) = ϝP,Q (l) for all l > 0}.
The distance of a point u ∈ Y from a nonempty set P for l > 0 is defined as
and
the distance between two nonempty sets P and Q for l > 0 is defined as
An element u* ∈ P is said to be the best proximity point of the mapping if it satisfies the condition that for all l > 0
,
where denotes the image of u*.
Here, we use a geometric property called Ω-property in the context of PM-spaces.
Definition 2.6. [20] Let (P, Q) be a pair of nonempty subsets of a PM-space (Y, ϝ , Δ). Then the pair (P, Q) is said to have the Ω-property if and only if
ϝu1,v1 (l) = ϝ P,Q (l)
and
ϝu2,v2 (l) = ϝ P,Q (l),
which imply that ϝu1,u2 (l) = ϝv1,v2 (l) for all l > 0, where u1, u2 ∈ P and v1, v2 ∈ Q.
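The Ω-property can be tested numerically on a configuration of the kind used later in Example 4.2 (two parallel segments). The distribution function ϝx,y (l) = l / (l + d (x, y)) below is a hypothetical choice, made only for this illustration:

```python
import math

def d(x, y):
    return math.hypot(x[0] - y[0], x[1] - y[1])

def F(x, y, l):
    # Hypothetical distribution function built from the Euclidean distance.
    return l / (l + d(x, y))

P = [(u / 10, 1.0) for u in range(11)]   # points on the segment y = 1
Q = [(v / 10, -1.0) for v in range(11)]  # points on the segment y = -1
l = 1.0
F_PQ = max(F(p, q, l) for p in P for q in Q)  # realised exactly when u = v

# Pairs realising the probabilistic distance between P and Q.
prox = [(p, q) for p in P for q in Q if abs(F(p, q, l) - F_PQ) < 1e-12]
for (u1, v1) in prox:
    for (u2, v2) in prox:
        # Omega-property: F_{u1,u2}(l) = F_{v1,v2}(l) for proximal pairs.
        assert abs(F(u1, u2, l) - F(v1, v2, l)) < 1e-12
```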
The following result of Saha et al. [13] will be used in the sequel. The authors showed that the distance between two non-empty disjoint subsets of probabilistic metric spaces is a distribution function.
Lemma 2.1. Let P and Q be two non-empty subsets of a probabilistic metric space (Y, ϝ , Δ). Then ϝP,Q (l), l > 0, is a distribution function.
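Lemma 2.1 can be illustrated numerically on finite sets. The sketch below assumes the usual convention ϝP,Q (l) = sup over u ∈ P, v ∈ Q of ϝu,v (l), together with a hypothetical distribution function ϝu,v (l) = l / (l + |u - v|); both are assumptions for illustration. The computed values are non-decreasing in l and tend to 1, as the lemma asserts:

```python
def F_point(u, v, l):
    # Hypothetical distribution function on the real line.
    return l / (l + abs(u - v)) if l > 0 else 0.0

def F_sets(P, Q, l):
    # Assumed probabilistic distance: supremum over all pairs.
    return max(F_point(u, v, l) for u in P for v in Q)

P, Q = [0.0, 1.0], [2.0, 4.0]
ls = [0.1 * i for i in range(1, 200)]
vals = [F_sets(P, Q, l) for l in ls]

# Non-decreasing in l and approaching 1: the hallmarks of a distribution function.
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
assert vals[-1] > 0.9
```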
The main features of the paper are:
1. We establish two proximity point theorems for two non-self contraction mappings.
2. Some consequences are obtained for self-mapping.
3. Some examples are given here to validate our results.
4. An application to a second-order differential equation is provided.
Main Result
In this section we establish some generalized best proximity point results on PM spaces.
Before the main results, we define two probabilistic proximal contraction mappings.
Definition 3.1. Let (P, Q) be a pair of nonempty subsets of a Menger space (Y, ϝ , Δ), where Δ is a continuous t-norm which satisfies Δ (p, p) = p, p ∈ [0, 1]. A mapping is called
generalized type-I proximal contraction if
where l, l1, l2, l3 > 0 with l = l1 + l2 + l3 and 0 < p + q + r < 1
and
generalized type-II proximal contraction if
where l = l1 + l2 + l3 + l4 + l5 and 0 < p + q + r + s + t < 1.
Theorem 3.1. Let (P, Q) be a pair of nonempty subsets of a complete Menger space (Y, ϝ , Δ), where Δ is a continuous t-norm which satisfies Δ (p, p) = p, p ∈ [0, 1]. Suppose that P is closed and is a generalized type-II proximal contractive mapping satisfying the following conditions: (i) and (P, Q) satisfies the Ω-property,
(ii) there exist u0, u1 ∈ P0 such that for all l > 0,
(iii) for each sequence {un} in Y tending to u*, if is convergent, then it converges to .
Then there exists an element u* ∈ P0 such that , that is, has a best proximity point.
Proof. By assumption (ii) of the theorem, there exist u0, u1 ∈ P0 such that
for all l > 0. Since , there exists u2 ∈ P0 such that
So, for all l > 0, and .
Again, as , there exists u3 ∈ P0 such that
So, for all l > 0, and .
Using induction, for all l > 0, we have
and
Using (3.3) and (3.4) and Ω-property of the pair (P, Q), for all l > 0, n ≥ 1, we get
Using (3.2), (3.3), (3.4) and definition of ϝP,Q (l), for all l > 0, we have
where , , and .
Now, here we get two possible cases.
Case I: If , then using the property of the t-norm, we get
Therefore, from (3.6), we have
Choose , , , , that is, , , , , where λ = p + q + r + t, then obviously 0 < λ < 1. From (3.7), we get
We now claim that for l > 0,
If possible, let for some z > 0,
Then, we have
For λ ∈ (0, 1), and using the monotone property of ϝ, we get
which is a contradiction. Hence
Iterating n times successively, we have
Now, taking the limit on both sides and using λ ∈ (0, 1), we have
Case II: If , then using the property of t-norm, we have
Using a similar procedure as in Case I, we get
Combining Case I and Case II, we have
Now, we have to prove that {un} is a Cauchy sequence. Suppose, to the contrary, that {un} is not a Cauchy sequence. Then, there exist ϵ > 0 and 0 < θ < 1 for which we can find subsequences {um(k)} and {un(k)} of {un} with n (k) > m (k) > k such that
and
Take ϵ1 < ϵ. Using the monotone property of ϝ, for all k > 0, we have
Now from (3.12),
Further, we choose sufficiently small such that . Using the left continuity property of ϝ and (3.13), we have
Again using (3.11), for sufficiently large k, we get
Now, using (3.15), (3.16) in (3.14), we have
which is a contradiction. Hence {un} is a Cauchy sequence in P. Since (Y, ϝ , Δ) is complete and P is closed, there exists u* ∈ P such that
Using (3.4), for each m > n > 0 and l > 0, we have
and
Using Ω-property, we get
Since {un} is a Cauchy sequence, the above equation implies that is also a Cauchy sequence and hence convergent, that is,
For any arbitrary ρ > 0 and using (3.4), we get
Taking limit n→ ∞ in the above inequality, we have
As ρ is an arbitrary positive number and, by Lemma 2.1, ϝP,Q (l) is left continuous, from the above we have
It implies
This completes the proof.
It is easy to see the following result.
Theorem 3.2. Let (P, Q) be a pair of nonempty subsets of a complete Menger space (Y, ϝ , Δ), where Δ is a continuous t-norm which satisfies Δ (p, p) = p, p ∈ [0, 1]. Suppose that P is closed and is a generalized type-I proximal contractive mapping satisfying the following conditions:
(i) and (P, Q) satisfies the Ω-property,
(ii) there exist u0, u1 ∈ P0 such that for all l > 0,
(iii) for each sequence {un} in Y tending to u*, if is convergent, then it converges to .
Then there exists an element u* ∈ P0 such that , that is, has a proximity point.

If we take a self-mapping, that is, P = Q, then we get fixed point results as consequences of Theorems 3.1 and 3.2. The following corollaries are deduced from the work of Babacev [2] and Shams and Jafari [17].
Corollary 3.1. Let P be a nonempty closed subset of a complete Menger space (Y, ϝ , Δ), where Δ is a continuous t-norm which satisfies Δ (p, p) = p, p ∈ [0, 1] and let be a self mapping satisfying the following condition
where u, v ∈ P, l = l1 + l2 + l3 + l4 + l5 and 0 < p + q + r + s + t < 1. Then has a unique fixed point in P.
Now, taking l = l1 = l2 = l3 = l4 = l5 and p = q = r = s = t = α, we get another corollary.
Corollary 3.2. Let P be a nonempty closed subset of a complete Menger space (Y, ϝ , Δ), where Δ is a continuous t-norm which satisfies Δ (p, p) = p, p ∈ [0, 1] and let be a self mapping satisfying the following condition for all l > 0, 0 < α < 1, u, v ∈ P,
Then has a unique fixed point in P.
Corollary 3.3. Let P be a nonempty closed subset of a complete Menger space (Y, ϝ , Δ), where Δ is a continuous t-norm which satisfies Δ (p, p) = p, p ∈ [0, 1] and let be a self mapping satisfying the following condition
where u, v ∈ P, l, l1, l2, l3 > 0 with l = l1 + l2 + l3 and 0 < ω + φ + ψ < 1. Then has a unique fixed point in P.

Taking l = l1 = l2 = l3 and ω = φ = ψ = c, we get another result for self-mappings.
Corollary 3.4. Let P be a nonempty closed subset of a complete Menger space (Y, ϝ , Δ), where Δ is a continuous t-norm which satisfies Δ (p, p) = p, p ∈ [0, 1] and let be a self mapping satisfying the following condition for all u, v ∈ P, l > 0, and 0 < c < 1,
Then has a unique fixed point in P.
Remark 3.1.
It is interesting to see the above results for different types of t-norms.
We know that a contraction mapping may be self or non-self. In the case of a self-mapping, we can find a solution of the equation , known as a fixed point. But in the present work, we consider non-self mappings, for which the two sets may be disjoint. We aim to find the optimal distance, that is, the smallest distance between y and .
Best proximity points may be considered a generalization of best approximations. Best approximation results give an idea about the existence of an approximate solution, while best proximity point theorems are used to find an approximate solution that is also optimal, namely a best proximity point of . If we consider a self-mapping, a best proximity theorem reduces to a fixed point result.
It may be noted that best approximation theorems and best proximity point theorems differ with respect to their topological structures: a best approximate solution need not be optimal, whereas a best proximity solution is optimal.
Illustrations
In this section, we provide some examples that support our results.
By the property of t-norm,
If Δ is the minimum t-norm, then Δ (p1, q1) = min {p1, q1} = q1 if p1 > q1, and Δ (p1, q1) = min {p1, q1} = p1 if q1 > p1. So we choose the minimum t-norm in our examples.
Example 4.1. Suppose that and ϝ is a distribution function defined on Y × Y by , u, v ∈ Y, l > 0. Define a t-norm by Δ (p1, q1) = min {p1, q1}. It is easy to see that (Y, ϝ , Δ) is a PM space. Choose P = [0, 1] and as closed subsets of Y. For the only point 1 ∈ P and the only point , we have for all l ≥ 0, so the pair (P, Q) trivially satisfies the Ω-property. Define a mapping by , u ∈ P. Here, P0 = {1} and such that . Now, for all u, v ∈ P and l > 0, we have to show that
where l, l1, l2, l3 > 0 with l = l1 + l2 + l3 and 0 < p + q + r < 1. Here, .
Further, take such that 0 < p + q + r < 1. From (4.23), we have
Now, we get the following three possible cases:
Case I: Take . Then, after rearranging, we get , l - 2l1 ≥ 0, that is, l ≥ 2l1 > l1 > 0.
Case II: Take . Then, after rearranging, we get . For u, v ∈ P = [0, 1], this obviously holds, with l > 2l2 > l2 > 0.
Case III: Take . Then, after rearranging, we get . For u, v ∈ P = [0, 1], this obviously holds, with l > 2l3 > l3 > 0.
Hence all the conditions of Theorem 3.2 hold for all u, v ∈ P, l > 2 max {l1, l2, l3} and has a best proximity point 1 ∈ P.
Example 4.2. Suppose that and , and ϝ is a distribution function defined on Y × Y by , (u, v), (u′, v′) ∈ Y, l > 0. Define the minimum t-norm as
Δ (p1, q1) = min {p1, q1}. It is easy to see that (Y, ϝ , Δ) is a PM space.
Consider the closed subsets P and Q of Y given by P = {(u, 1) : 0 ≤ u ≤ 1} and Q = {(u, -1) : 0 ≤ u ≤ 1}. Let be a mapping defined by , u ∈ P.
Here, , Q0 = Q such that . Also, for all l > 0.
Now, we have to show that the pair (P, Q) satisfies the Ω-property.
Let κ1 = (u1, 1) , κ2 = (u2, 1) ∈ P and μ1 = (v1, - 1) , μ2 = (v2, - 1) ∈ Q with
and
From (4.24), we get for all l > 0
which implies that u1 = v1. Similarly from (4.25), we get for all l > 0, u2 = v2.
Now for all l > 0, we have
So, the pair (P, Q) satisfies the Ω-property.
Now, for all u, v ∈ P and l > 0, we have to show that
where l, l1, l2, l3, l4, l5 > 0 with l = l1 + l2 + l3 + l4 + l5 and 0 < p + q + r + s + t < 1. Let κ = (u, 1) , μ = (v, 1) ∈ P.
Here, .
Further, take such that 0 < p + q + r + s + t < 1. From (4.26), we have
Now, we get the following five possible cases:
Case I: Take . So after rearranging, we get . So Case-I is true.
Case II: Take . Since 0 ≤ u, v ≤ 1, max |u - v|=1. Then, after rearranging, we get 24l + 6lu - l2 ≥ 0. As l ≥ l2, 24l + 6lu ≥ l2. Hence Case-II is also true.
Case III: Take . Since 0 ≤ u, v ≤ 1, max |u - v|=1. Then, after rearranging, we get 24l + 6lv - l3 ≥ 0. As l ≥ l3, 24l + 6lv ≥ l3. Hence Case-III is also true.
Case IV: Take . Since 0 ≤ u, v ≤ 1, max |u - v|=1. Then after rearranging, we get . As l ≥ l4, .
Hence Case-IV is also true.
Case V: Take . Since 0 ≤ u, v ≤ 1, max |u - v|=1. Then after rearranging, we get . As l ≥ l5, . Therefore, Case V is also true.
Combining Cases I–V, we conclude that all the conditions of Theorem 3.1 hold.
Therefore (0, 1) ∈ P is the unique best proximity point of in P0,
, that is, .
The following example shows that we get a unique fixed point for self-mapping but a proximity point for non-self mapping under the same inequality.
Example 4.3. Let Y = {y1, y2, y3, y4}, let the t-norm be Δ (p1, q1) = min (p1, q1) and let ϝu,v (l) be defined as
and
It is easy to see that (Y, ϝ , Δ) is a complete Menger space. Define a self mapping as follows: , , , . Then it satisfies inequality (3.2), and we get y2 as a fixed point, that is, .
Now, consider the mapping where P = {y1, y2} and Q = {y3, y4}, defined as follows: , . Hence, the mapping is non-self. Here, we do not get any solution of the equation , that is, there is no fixed point, but we can find a proximity point.
Now, δ (P, Q) = inf {δ (p1, q1) : p1 ∈ P, q1 ∈ Q}.
So, δ (P, Q) = min {|y1 - y3|, |y2 - y4|}.
If , then y1 is a proximity point of in P.
Otherwise, y2 is a proximity point of in P.
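A numeric sketch of this example follows; the values of y1, …, y4 and the non-self mapping (denoted T below) are hypothetical, since the example does not fix them here:

```python
# Hypothetical points and non-self mapping T: P -> Q with T(y1) = y3, T(y2) = y4.
y1, y2, y3, y4 = 0.0, 1.0, 2.0, 4.0
T = {y1: y3, y2: y4}
P = [y1, y2]

# T has no fixed point (T(u) != u for every u in P), but we can minimise
# the distance |u - T(u)|: min{|y1 - y3|, |y2 - y4|} = min{2, 3} = 2.
u_star = min(P, key=lambda u: abs(u - T[u]))
# u_star == y1, so y1 is the proximity point of T in P for these values.
```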
Application
In this section, we give an application that finds the solution of a second-order differential equation. We consider the following boundary value problem for a second-order differential equation:
where is a continuous function.
It is easy to see that the Green’s function G (l, z) for the associated boundary value problem is
Let Y = C (I), where I = [0, 1], be the space of all continuous functions defined on I. It is well known that this space, with the metric given by
is a complete metric space. Choose a distribution function . Define a t-norm as Δ (p1, q1) = min {p1, q1}. It is easy to see that (Y, ϝ , Δ) is a complete PM space.
Suppose that . Now, we have to show that the above-mentioned differential equation satisfies the following inequality
Therefore,
that is,
So,
There exists α > 0 such that for all u, v ∈ C (I), l, z ∈ I and , we get
Now, it is well known that u ∈ C2 (I) is a solution of the given differential equation if and only if u ∈ C (I) is a solution of the integral equation
Define the operator by
For all u, v ∈ C (I); l, z ∈ I, we have
Also, = 0. It implies
for all u, v ∈ C (I). Therefore, all the conditions of Corollary 3.2 are satisfied, and we have a solution in C (I).
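The fixed point iteration underlying this application can be sketched numerically. The code below assumes a standard form of the problem, u'' = f (l, u) with u (0) = u (1) = 0, and the corresponding Green's function; these are assumptions made for illustration, since the paper's exact problem data are not restated here. For f ≡ -2 the exact solution is u (l) = l (1 - l):

```python
# Picard iteration u_{n+1}(l) = -∫_0^1 G(l, z) f(z, u_n(z)) dz on a grid,
# using the trapezoidal rule. G, f and the boundary conditions are assumed,
# not taken from the paper's exact data.
N = 200
grid = [i / N for i in range(N + 1)]

def G(l, z):
    # Green's function of -u'' with u(0) = u(1) = 0 on [0, 1].
    return z * (1.0 - l) if z <= l else l * (1.0 - z)

def picard_step(u_vals, f):
    h = 1.0 / N
    new_vals = []
    for l in grid:
        integrand = [G(l, z) * f(z, uz) for z, uz in zip(grid, u_vals)]
        integral = h * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))
        new_vals.append(-integral)
    return new_vals

f = lambda z, u: -2.0  # u'' = -2, whose exact solution is u(l) = l(1 - l)
u = [0.0] * (N + 1)
for _ in range(3):
    u = picard_step(u, f)
# u now approximates l(1 - l); e.g. u at l = 0.5 is close to 0.25.
```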
Conclusion
Probabilistic optimization is a vast area of applied mathematics with applications in different domains. This article considers the global optimization problem, in probabilistic metric spaces, of finding the probabilistic distance between two sets. As a continuation of the current work, it can be explored how various types of probabilistic contractions contribute to proximity point theory in the probabilistic metric space structure.
Declarations
Availability of data and materials
Data sharing does not apply to this article as no datasets were generated or analysed during the current study.
Ethical Approval
Not applicable.
Funding
Basque Government: Grant IT1555-22; Basque Government: Grant KK-2022/00090; MCIN/AEI/10.13039/501100011033/FEDER, UE: Grant PID2021-1235430B-C21; MCIN/AEI/10.13039/501100011033/FEDER, UE: Grant PID2021-1235430B-C22.
Competing Interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally.
References
1. Di Bari C., Suzuki T. and Vetro C., Best proximity points for cyclic Meir-Keeler contractions, Nonlinear Anal. 69 (2008), 3790–3794.
3. Choudhury B.S. and Maity P., Best proximity point results in generalized metric spaces, Vietnam J. Math. 44 (2016), 339–349.
4. Eldred A.A. and Veeramani P., Existence and convergence of best proximity points, J. Math. Anal. Appl. 323 (2006), 1001–1006.
5. Gabeleh M., A note on the paper "A global optimality result in probabilistic spaces using control function", Optimization (2022). DOI: 10.1080/02331934.2022.2072219.
6. Hadzic O. and Pap E., Fixed Point Theory in Probabilistic Metric Spaces, Kluwer Academic Publishers, 2001.
7. Isik H., Sezen M.S. and Vetro C., Φ-Best proximity point theorems and applications to variational inequality problems, J. Fixed Point Theory Appl. 19 (2017), 3177–3189.
8. Jleli M. and Samet B., Best proximity points for α–ψ-proximal contractive type mappings and applications, Bull. Sci. Math. 137 (2013), 977–995.
9. Jleli M., Karapinar E. and Samet B., Best proximity point results for MK-proximal contractions, Abstr. Appl. Anal. 2012 (2012), Article ID 193085.
10. Jain S.K., Meena G., Singh D. and Mitra J.K., Best proximity point results with their consequences and application, J. Inequal. Appl. 2022 (2022), 73. DOI: 10.1186/s13660-022-02807-y.
11. Karapinar E., Best proximity points of cyclic mappings, Appl. Math. Lett. 25 (2012), 1761–1766.
12. Sankar Raj V., A best proximity point theorem for weakly contractive non-self-mappings, Nonlinear Anal. 74 (2011), 4804–4808.
13. Saha P., Guria S., Bhandari S.K. and Choudhury B.S., A global optimality result in probabilistic spaces using control function, Optimization (2020). DOI: 10.1080/02331934.2020.1781118.
14. Saha P., Guria S., Choudhury B.S. and de la Sen M., Determining fuzzy distance between sets by application of fixed point technique using weak contractions and fuzzy geometric notions, Symmetry 11(6) (2019), 812. DOI: 10.3390/sym11060812.
15. Saha P., Guria S. and Choudhury B.S., Determining fuzzy distance through non-self fuzzy contractions, Yugoslav J. Oper. Res. (2019). DOI: 10.2298/YJOR180515002S.
16. Saleem N., Abbas M. and Farooq S., Best proximity point results in generalized metric spaces, Thai J. Math. 20(2) (2022), 589–603.
17. Shams M. and Jafari S., Some fixed point results in probabilistic Menger space, Bol. Soc. Paran. Mat. 35(3) (2017), 9–24.
18. Schweizer B. and Sklar A., Probabilistic Metric Spaces, North-Holland, New York, 1983.
19. Sehgal V.M. and Bharucha-Reid A.T., Fixed points of contraction mappings on PM spaces, Math. Syst. Theory 6(2) (1972), 97–100.
20. Su Y. and Zhang J., Fixed point and best proximity point theorems for contractions in new classes of probabilistic metric spaces, Fixed Point Theory Appl. 2014 (2014), 170.
21. Shi Y., Ren L. and Wang X., The extension of fixed point theorems for set valued mapping, J. Appl. Math. Comput. 13 (2003), 277–286.