
KKT Proximity Measure Versus Augmented Achievement Scalarization Function

International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Year of Publication: 2018
Mohamed Abouhawwash, M. A. Jameel

Mohamed Abouhawwash and M A Jameel. KKT Proximity Measure Versus Augmented Achievement Scalarization Function. International Journal of Computer Applications 182(24):1-7, October 2018. BibTeX

@article{ijca2018917986,
	author = {Mohamed Abouhawwash and M. A. Jameel},
	title = {KKT Proximity Measure Versus Augmented Achievement Scalarization Function},
	journal = {International Journal of Computer Applications},
	issue_date = {October 2018},
	volume = {182},
	number = {24},
	month = {Oct},
	year = {2018},
	issn = {0975-8887},
	pages = {1-7},
	numpages = {7},
	url = {},
	doi = {10.5120/ijca2018917986},
	publisher = {Foundation of Computer Science (FCS), NY, USA},
	address = {New York, USA}
}


The KKT proximity measure (KKTPM) is a metric that estimates how close a solution is to a corresponding Pareto-optimal (PO) point without any knowledge of the true optimum. The measure builds on a common scalarization method that is guaranteed to find any PO solution, the achievement scalarizing function (ASF); specifically, the KKTPM formulation is based on the augmented achievement scalarizing function (AASF) in order to avoid weakly PO solutions. This paper studies the relationship between KKTPM values and AASF values, with the aim of identifying the advantages and disadvantages of both measures. It also discusses several special cases that highlight the merits of each measure and confirm that the KKT proximity measure is an essential measure of convergence. In addition, the study examines correlation plots between the two measures on the ZDT test problems; the results show that the values differ and that a perfect correlation between KKTPM values and AASF values cannot be obtained. Hence, the KKT proximity measure can be regarded as the better convergence indicator.
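To make the scalarization concrete, the following is a minimal sketch of the AASF in its standard Wierzbicki-style form: the maximum of the weighted deviations of the objective vector from a reference point, plus a small augmentation term that penalizes weakly Pareto-optimal solutions. The function name, the example point, weights, and the value of rho are illustrative choices, not the paper's exact settings.

```python
def aasf(f, z, w, rho=1e-4):
    """Augmented achievement scalarizing function (standard form, assumed here).

    f   -- objective vector of the candidate solution
    z   -- reference point (e.g. the ideal point)
    w   -- positive weight vector
    rho -- small augmentation coefficient; the sum term breaks ties
           so that weakly Pareto-optimal points are not favored
    """
    # Weighted deviation of each objective from the reference point.
    terms = [(fi - zi) / wi for fi, zi, wi in zip(f, z, w)]
    # Plain ASF is max(terms); the augmentation adds rho * sum(terms).
    return max(terms) + rho * sum(terms)

# Example: a bi-objective point evaluated against the ideal point (0, 0)
# with equal weights. The max term dominates; the sum term is a small tie-breaker.
value = aasf(f=[0.5, 0.25], z=[0.0, 0.0], w=[1.0, 1.0], rho=1e-4)
# value = 0.5 + 1e-4 * 0.75 = 0.500075
```

With rho = 0 the expression reduces to the plain ASF, which can return the same value for a weakly PO point and a PO point; the augmentation term is what distinguishes them.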


  1. Mohamed Abouhawwash, Haitham Seada, and Kalyanmoy Deb. Towards faster convergence of evolutionary multi-criterion optimization algorithms using Karush-Kuhn-Tucker optimality based local search. Computers & Operations Research, 79:331–346, 2017.
  2. Roberto Andreani, José Mario Martínez, and Benar Fux Svaiter. A new sequential optimality condition for constrained optimization and algorithmic consequences. SIAM Journal on Optimization, 20(6):3533–3554, 2010.
  3. Johannes Bader, Kalyanmoy Deb, and Eckart Zitzler. Faster hypervolume-based search using Monte Carlo sampling. In Multiple Criteria Decision Making for Sustainable Energy and Transportation Systems, pages 313–326. Springer, 2010.
  4. Dimitri P. Bertsekas, Angelia Nedić, and Asuman E. Ozdaglar. Convex Analysis and Optimization. Athena Scientific, 2003.
  5. Ş. İlker Birbil, Johannes Bartholomeus Gerardus Frenk, and Georg J. Still. An elementary proof of the Fritz-John and Karush–Kuhn–Tucker conditions in nonlinear programming. European Journal of Operational Research, 180(1):479–484, 2007.
  6. Lam Thu Bui, Slawomir Wesolkowski, Axel Bender, Hussein A. Abbass, and Michael Barlow. A dominance-based stability measure for multi-objective evolutionary algorithms. In IEEE Congress on Evolutionary Computation (CEC 2009), pages 749–756. IEEE, 2009.
  7. K. Deb and M. Abouhawwash. An optimality theory-based proximity measure for set-based multiobjective optimization. IEEE Transactions on Evolutionary Computation, 20(4):515–528, Aug 2016.
  8. K. Deb, M. Abouhawwash, and H. Seada. A computationally fast convergence measure and implementation for single-, multiple-, and many-objective optimization. IEEE Transactions on Emerging Topics in Computational Intelligence, 1(4):280–293, Aug 2017.
  9. Kalyanmoy Deb and Himanshu Jain. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. IEEE Transactions on Evolutionary Computation, 18(4):577–601, 2014.
  10. Joydeep Dutta, Kalyanmoy Deb, Rupesh Tulshyan, and Ramnik Arora. Approximate KKT points and a proximity measure for termination. Journal of Global Optimization, 56(4):1463–1499, 2013.
  11. Michael Emmerich, André Deutz, and Nicola Beume. Gradient-based/evolutionary relay hybrid for computing Pareto front approximations maximizing the S-metric. In International Workshop on Hybrid Metaheuristics, pages 140–156. Springer, 2007.
  12. Gabriel Haeser and María Laura Schuverdt. On approximate KKT condition and its extension to continuous variational inequalities. Journal of Optimization Theory and Applications, 149(3):528–539, 2011.
  13. H. W. Kuhn and A. W. Tucker. Nonlinear programming. In Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pages 481–492, Berkeley, Calif., 1951. University of California Press.
  14. Luis Martí, Jesús García, Antonio Berlanga, and José M. Molina. An approach to stopping criteria for multi-objective optimization evolutionary algorithms: The MGBM criterion. In IEEE Congress on Evolutionary Computation (CEC 2009), pages 1263–1270. IEEE, 2009.
  15. Kaisa Miettinen. Nonlinear Multiobjective Optimization, volume 12. Springer Science & Business Media, 2012.
  16. Yury Nikulin, Kaisa Miettinen, and Marko M. Mäkelä. A new achievement scalarizing function based on parameterization in multiobjective optimization. OR Spectrum, 34(1):69–87, 2012.
  17. A. Ravindran, Gintaras Victor Reklaitis, and Kenneth Martin Ragsdell. Engineering Optimization: Methods and Applications. John Wiley & Sons, 2006.
  18. Ralph Tyrell Rockafellar. Convex Analysis. Princeton University Press, 2015.
  19. Rupesh Tulshyan, Ramnik Arora, Kalyanmoy Deb, and Joydeep Dutta. Investigating EA solutions for approximate KKT conditions in smooth problems. In Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation (GECCO), pages 689–696. ACM, 2010.
  20. Tobias Wagner, Heike Trautmann, and Luis Martí. A taxonomy of online stopping criteria for multi-objective evolutionary algorithms. In EMO, volume 11, pages 16–30. Springer, 2011.
  21. Lyndon While, Philip Hingston, Luigi Barone, and Simon Huband. A faster algorithm for calculating hypervolume. IEEE Transactions on Evolutionary Computation, 10(1):29–38, 2006.
  22. Andrzej P. Wierzbicki. The use of reference objectives in multiobjective optimization. In Multiple Criteria Decision Making Theory and Application, pages 468–486. Springer, 1980.
  23. Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele. Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary Computation, 8(2):173–195, 2000.


Multi-objective optimization, Exact KKT proximity measure, Direct KKT proximity measure, AASF approach