2023 France-Berkeley Fund: two recipients from CREST


The France-Berkeley Fund

Established in 1993 as a partnership with the French Ministry of Foreign Affairs, the France-Berkeley Fund (FBF) promotes and supports scholarly exchange in all disciplines between faculty and research scientists at the University of California and their counterparts in France.

Through its annual grant competition, the FBF provides seed money for innovative, bi-national collaborations. The Fund’s core mission is to advance research of the highest caliber, to foster interdisciplinary inquiry, to encourage new partnerships, and to promote lasting institutional and intellectual cooperation between France and the United States.

2023-2024 Call: two CREST recipients

For the 2023-2024 call, two projects involving CREST researchers were submitted and selected for funding:

• Decentralizing divorces
A project developed by Matias Nunez (CNRS research fellow at CREST) and his counterpart Federico Echenique, Professor of Economics and Social Sciences at UC Berkeley.

Abstract:
This project focuses on the development of practical applications of mechanism design, a branch of economics concerned with designing well-functioning institutions that ensure efficient and fair outcomes. In particular, we will focus on legal settings where two parties need to reach an agreement while their preferences are misaligned. Examples include the dissolution of partnerships, the allocation of rights and duties among conflicting agents, and divorce. While a judge, legal experts, and lengthy bargaining procedures are often needed in practice, we plan to develop economic tools for appraising reasonable compromises, reducing both cost and time.
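For a concrete flavor of what a decentralized mechanism can look like here, consider the textbook buy-sell ("shotgun") clause for dissolving a 50-50 partnership; this is a classical illustration, not necessarily a mechanism the project will study. One party names a price \(p\) for the whole asset; the other then chooses either to buy the proposer's share at \(p/2\) or to sell her own share at \(p/2\). Writing \(v_1, v_2\) for the two parties' valuations of the full asset, the payoffs are

\[
u_{\text{proposer}}(p) =
\begin{cases}
p/2 & \text{if the chooser buys},\\
v_1 - p/2 & \text{if the chooser sells},
\end{cases}
\qquad
u_{\text{chooser}}(p) = \max\left\{\, v_2 - \frac{p}{2},\; \frac{p}{2} \,\right\} \;\ge\; \frac{v_2}{2}.
\]

Naming \(p = v_1\) makes the proposer indifferent between the two outcomes, so each party secures at least half of her own valuation and the asset changes hands without a judge. Mechanism design asks when such simple rules are efficient and fair, and how to improve them when they are not.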

• Towards Local, Distribution-Free and Efficient Guarantees in Aggregation and Statistical Learning
A project developed by Jaouad Mourtada (CREST, ENSAE Paris) and his counterpart Nikita Zhivotovskiy, Assistant Professor in Statistics at UC Berkeley.

Description:
Statistical learning theory is dedicated to the analysis of procedures that learn from data. The general aim is to understand what guarantees on prediction accuracy can be obtained, under which conditions, and by which procedures. It can inform the design of sound and robust methods that withstand corruption in the data or departures from an idealized model, without sacrificing accuracy or efficiency in more favorable situations. In particular, the problem of aggregation can be formulated as follows: given a class of predictors and a sample, form a new predictor that is guaranteed to have an accuracy approaching that of the best predictor within the class, up to an error that should be as small as possible.
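One standard way to make this objective precise (our gloss, in generic notation rather than the project's own) is the following: given a class \(\mathcal{F}\) of predictors and a loss \(\ell\), build from an \(n\)-sample a predictor \(\hat{f}\) satisfying

\[
\mathbb{E}\,\ell(\hat{f}) \;\le\; \min_{f \in \mathcal{F}} \mathbb{E}\,\ell(f) \;+\; \delta_n(\mathcal{F}),
\]

where the remainder \(\delta_n(\mathcal{F})\) is the aggregation rate. Classically, for aggregating \(M\) predictors under bounded squared loss, rates of order \((\log M)/n\) are achievable, whereas simply selecting the empirically best predictor can in general only guarantee the slower \(\sqrt{(\log M)/n}\); a distribution-free guarantee further asks that such a bound hold without assumptions on the data-generating distribution.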
This problem can be cast in several settings and has been investigated from various angles in Statistics and Computer Science. While the topic is classical, it has seen renewed interest through, for instance, the recent direction of robust statistical learning, which raises the question of the most general conditions under which good accuracy can be achieved. Despite substantial progress, several basic questions remain unanswered in the literature, and these are what we aim to study.