Speakers

Speaker Abstracts

Joann de Zegher: Crowdsourcing Information in Informal Supply Chains

Over 60% of global employment is informal, meaning that the majority of today’s economy operates without formal contracts. In such informal working environments, people make decisions based on information that is exchanged through informal networks. This limits the information available to each member of the network and, therefore, their ability to make optimal decisions. In informal supply chains, members of the network also compete with each other, which disincentivizes the sharing of potentially valuable information. In this paper, we study the design of incentive-compatible information-sharing mechanisms for such informal supply chains. We incentivize information-sharing by selecting information disclosure policies that are contingent on the amount of information shared by each individual. We characterize optimal information disclosure policies and show that information sharing emerges as a subgame-perfect Nash equilibrium that is welfare-improving for all participants. Our model is informed by an information-sharing platform in informal palm oil supply chains in rural Indonesia, and we are currently designing a field experiment to test our proposed policies. To operationalize the theoretical results and test them in practice, we developed several extensions of the model.
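
For illustration only, the toy sketch below shows one way a platform could make disclosure contingent on contribution: members who report their local price see the aggregated signal, while members who do not share see nothing. The trader names, prices, and the simple averaging rule are all invented and are not the mechanism analyzed in the paper.

```python
# Toy illustration (not the paper's mechanism): a quid-pro-quo disclosure rule in
# which the platform reveals aggregate market information only to members who
# contributed their own observation. All names and numbers are made up.

def disclose(reports):
    """reports: dict mapping member id -> locally observed price, or None if the
    member chose not to share. Returns the signal each member gets to see."""
    shared = {m: p for m, p in reports.items() if p is not None}
    if not shared:
        return {m: None for m in reports}
    avg_price = sum(shared.values()) / len(shared)
    # Contingent disclosure: only contributors see the aggregate signal.
    return {m: (avg_price if m in shared else None) for m in reports}

if __name__ == "__main__":
    reports = {"trader_A": 120.0, "trader_B": 110.0, "trader_C": None}
    print(disclose(reports))
    # {'trader_A': 115.0, 'trader_B': 115.0, 'trader_C': None}
```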

Daniel Freund: A Simple Condition for Constant Regret in Online Decision-Making

We consider a finite time-horizon T over which a decision-maker observes samples arriving independently from a known distribution. In each iteration, an irreversible decision needs to be made; examples of such problems include online versions of bin packing, knapsack, and revenue management, as well as many other problems of practical interest. In all of these settings the goal is to find a solution that has small loss relative to that of an offline-optimal solution, i.e., one in which all samples are known a priori. We provide a simple, yet general, condition under which the Bayesian prophet framework, introduced by Vera & Banerjee (2018), is guaranteed to yield constant regret, and show how it applies to different problems.
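
As a rough illustration of the regret benchmark described above, the sketch below simulates a toy multisecretary-style instance and compares a simple re-solving threshold policy against the offline optimum. The instance, the uniform rewards, and the particular re-solving policy are assumptions for the example; they are not the talk's general framework or the exact Bayes-selector policy of Vera & Banerjee.

```python
# A minimal simulation sketch: T arrivals with i.i.d. Uniform(0,1) rewards, a
# budget of B acceptances; a re-solving quantile-threshold policy is compared
# against the offline optimum that accepts the B largest rewards.
import numpy as np

rng = np.random.default_rng(0)
T, B = 1000, 300

def offline_value(rewards, budget):
    return np.sort(rewards)[-budget:].sum()

def resolving_policy(rewards, budget):
    value, remaining = 0.0, budget
    for t, r in enumerate(rewards):
        if remaining == 0:
            break
        periods_left = T - t
        # Accept if the reward clears the quantile at which, in expectation,
        # `remaining` of the `periods_left` remaining rewards exceed it.
        threshold = 1.0 - remaining / periods_left
        if r >= threshold:
            value += r
            remaining -= 1
    return value

regrets = []
for _ in range(200):
    rewards = rng.uniform(size=T)
    regrets.append(offline_value(rewards, B) - resolving_policy(rewards, B))
print(f"mean regret over 200 runs: {np.mean(regrets):.2f}")
```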

Thiago Serra: Bounding and Counting Linear Regions of Deep Neural Networks

One way to characterize the expressiveness of a piecewise linear neural network is the number of linear regions, or pieces, of the function it models. In this talk, we investigate the number of linear regions that these networks can attain, both theoretically and empirically. We present upper and lower bounds on the maximum number of linear regions of rectifier and maxout networks, as well as a method for exactly counting the regions by modeling the neural network as a mixed-integer linear program. The bounds come from leveraging the dimension of the space defining each linear region, and they indicate that a shallow network may define more linear regions than any deep network. We also show how to approximate the number of linear regions of a rectifier network with an algorithm for computing probabilistic lower bounds on the number of solutions of mixed-integer linear programs. This algorithm is several orders of magnitude faster than exact counting and yields estimates of a similar order of magnitude, making it a viable way to compare the expressiveness of such networks.
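
To make the notion of a linear region concrete, the sketch below samples inputs to a small random rectifier network and counts the distinct ReLU activation patterns it encounters, which gives a crude empirical lower bound on the number of regions. This sampling shortcut is only illustrative; it is not the exact mixed-integer counting method or the probabilistic bounding algorithm from the talk, and the widths and sample size are arbitrary.

```python
# Estimate a lower bound on the number of linear regions of a random ReLU
# network by counting the distinct activation patterns seen on random inputs.
import numpy as np

rng = np.random.default_rng(1)
layers = [(rng.standard_normal((8, 2)), rng.standard_normal(8)),
          (rng.standard_normal((8, 8)), rng.standard_normal(8))]

def activation_pattern(x):
    pattern = []
    for W, b in layers:
        x = W @ x + b
        pattern.append(tuple(x > 0))   # which ReLUs are active at this layer
        x = np.maximum(x, 0.0)
    return tuple(pattern)

samples = rng.uniform(-1.0, 1.0, size=(50_000, 2))
patterns = {activation_pattern(x) for x in samples}
print(f"distinct activation patterns found: {len(patterns)}")  # lower bound on #regions
```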

Yifan Feng: Learning Customer Preferences from Personalized Assortments

A company wishes to commercialize the best version of a product out of a menu of available alternatives. Unaware of customers’ true preferences, the company relies on a feedback system that allows potential buyers to indicate their preferred versions. Under a ranking-based choice model framework, we study how to dynamically individualize the set of versions on which each customer is asked to provide feedback. This allows the company to identify the top-ranked version with a fixed probabilistic confidence level using a minimal amount of feedback.
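
The sketch below is a hypothetical illustration of the feedback-driven identification task: a successive-elimination style loop shows pairs of versions to simulated customers and drops versions that fall clearly behind. The preference weights, the pairwise feedback model, and the elimination margin are all invented and do not correspond to the talk's ranking-based choice model or its algorithm.

```python
# Toy successive-elimination loop over product versions with simulated feedback.
import numpy as np

rng = np.random.default_rng(2)
true_pref = {"v1": 0.9, "v2": 0.4, "v3": 0.2}   # hypothetical ground truth

def customer_picks(a, b):
    """Simulated feedback: choose between two versions with probability
    proportional to the (unknown) preference weights."""
    p = true_pref[a] / (true_pref[a] + true_pref[b])
    return a if rng.random() < p else b

active = list(true_pref)
wins = {v: 0 for v in active}
rounds = 0
while len(active) > 1 and rounds < 200:
    rounds += 1
    for i, a in enumerate(active):
        for b in active[i + 1:]:
            wins[customer_picks(a, b)] += 1
    # Drop any version that trails the current leader by a (heuristic) margin.
    leader = max(active, key=wins.get)
    active = [v for v in active if wins[leader] - wins[v] < 15]
print("surviving version(s):", active)
```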

Neda Mirzaeian: A Queueing Model and Analysis for Autonomous Vehicles on Highways

We investigate the effects of autonomous vehicles (AVs) on highway congestion. AVs have the potential to significantly reduce highway congestion, since they can maintain smaller inter-vehicle gaps and travel together in larger platoons than human-driven vehicles (HVs). Various policies have been proposed to regulate AVs, yet no in-depth comparison of these policies exists. To address this shortcoming, we develop a queueing model for a multi-lane highway and analyze two policies: the designated-lane policy (“D policy”), under which one lane is designated to AVs, and the integrated policy (“I policy”), under which AVs travel together with HVs in all lanes. We use a Markovian arrival process to connect the service rate to inter-vehicle gaps and congestion, and we measure performance using mean travel time and throughput. Our analysis shows that although the I policy performs at least as well as a benchmark case with no AVs, the D policy outperforms the benchmark only when the highway is heavily congested and AVs constitute the majority of vehicles; even then, the D policy may outperform the I policy only in terms of throughput. These findings caution against recent industry and government proposals that the D policy be employed at the beginning of the mass adoption of AVs. Finally, we calibrate our model to data and show that for highly congested highways, a moderate share of AVs can yield a substantial improvement (e.g., 22% AVs can improve throughput by 30%), and when all vehicles are AVs, throughput can increase by over 400%.
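
The back-of-the-envelope sketch below conveys the flavor of the D-versus-I comparison using only average headways and a fixed AV share. The headway values, speed, lane count, and AV fraction are made up, and the calculation ignores the queueing dynamics, platooning behavior, and Markovian arrival process that drive the talk's actual results.

```python
# Crude throughput comparison: designated AV lane vs. fully mixed lanes.
V = 100.0                    # km/h, common speed (assumed)
H_AV, H_HV = 0.010, 0.020    # km headway for AVs and HVs (assumed)
LANES = 3
p = 0.3                      # fraction of traffic that is AVs (assumed)

def lane_capacity(avg_headway):
    return V / avg_headway   # vehicles per hour per lane

# Integrated policy: every lane carries the same AV/HV mix.
mixed_headway = p * H_AV + (1 - p) * H_HV
throughput_I = LANES * lane_capacity(mixed_headway)

# Designated-lane policy: one AV-only lane, the rest HV-only. Total flow is
# limited by whichever side saturates first given the p / (1-p) demand split.
cap_av_lane = lane_capacity(H_AV)
cap_hv_lanes = (LANES - 1) * lane_capacity(H_HV)
throughput_D = min(cap_av_lane / p, cap_hv_lanes / (1 - p))

print(f"integrated: {throughput_I:,.0f} veh/h, designated: {throughput_D:,.0f} veh/h")
```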

Raghav Singal: Shapley Meets Uniform: An Axiomatic Framework for Attribution in Online Advertising

A central challenge in online advertising is attribution, namely, assessing the contribution of individual ads to a product purchase. We develop an axiomatic framework for attribution in online advertising. Under a Markovian model of user behavior, we illustrate the limitations of existing heuristics and propose a novel framework motivated by causality and game theory. Furthermore, we establish that our framework coincides with an adjusted “unique-uniform” attribution scheme. This scheme is efficiently implementable and can be interpreted as a correction to the commonly used uniform attribution scheme. We supplement our theory with numerical experiments inspired by a real-world dataset.
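
As a minimal illustration of Shapley-style attribution, the sketch below computes Shapley values for three ad channels from an invented coalition value function; in the talk, the analogous values come from a Markovian model of user behavior, which is not reproduced here.

```python
# Shapley-value attribution over ad channels via averaging marginal contributions
# across all orderings. The coalition values v(S) below are hypothetical.
from itertools import permutations

channels = ["display", "search", "email"]
# v(S): hypothetical purchase probability if only the ads in S are shown.
v = {frozenset(): 0.00,
     frozenset({"display"}): 0.05, frozenset({"search"}): 0.10, frozenset({"email"}): 0.02,
     frozenset({"display", "search"}): 0.16, frozenset({"display", "email"}): 0.08,
     frozenset({"search", "email"}): 0.13,
     frozenset(channels): 0.20}

shapley = {c: 0.0 for c in channels}
orders = list(permutations(channels))
for order in orders:
    seen = set()
    for c in order:
        shapley[c] += v[frozenset(seen | {c})] - v[frozenset(seen)]
        seen.add(c)
shapley = {c: val / len(orders) for c, val in shapley.items()}
print(shapley)   # attribution shares; by efficiency they sum to v(all channels)
```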

Bradley Sturt: Dynamic Optimization With Side Information

In this talk, we present a scalable approach for leveraging machine learning in dynamic optimization under uncertainty. Our approach finds policies by solving a robust optimization problem with multiple uncertainty sets, which are constructed from historical data using machine learning methods such as k-nearest neighbors and random forests. Through a novel concentration inequality, we show that the resulting policies have near-optimal average performance in big-data settings with probability one. We also develop a tractable approximation algorithm for our approach based on overlapping linear decision rules. Across data-driven examples from inventory management and finance with up to twelve stages, our method finds policies in under one minute with near-optimal average performance (a 15% improvement over alternatives).
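
The sketch below is a heavily simplified, single-stage stand-in for this idea: given new side information, it forms small uncertainty intervals around the demands of the k nearest historical observations and picks the order quantity with the best average worst-case profit. The synthetic data, the newsvendor objective, and the interval radius are all assumptions for the example; the talk's method handles multistage problems via overlapping linear decision rules.

```python
# k-nearest-neighbor uncertainty sets for a toy newsvendor with side information.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(500, 2))                    # historical side information
D = 50 + 40 * X[:, 0] + 5 * rng.standard_normal(500)    # historical demands

def knn_demands(x_new, k=20):
    idx = np.argsort(np.linalg.norm(X - x_new, axis=1))[:k]
    return D[idx]

def sample_robust_order(neighbors, price=2.0, cost=1.0, eps=2.0):
    candidates = np.linspace(neighbors.min(), neighbors.max(), 200)
    def avg_worst_case(q):
        # Profit is nondecreasing in demand, so the worst case on the interval
        # [d - eps, d + eps] is attained at d - eps.
        return np.mean([price * min(q, d - eps) - cost * q for d in neighbors])
    return max(candidates, key=avg_worst_case)

x_new = np.array([0.8, 0.4])
print(f"order quantity for x_new: {sample_robust_order(knn_demands(x_new)):.1f}")
```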

Mika Sumida: Approximation Algorithms for Network Revenue Management

We present an approximation algorithm for network revenue management problems. Our algorithm constructs an approximate policy using value function approximations expressed as linear combinations of basis functions, with the coefficients of the basis functions computed by a backward recursion. If each product uses at most L resources, then the total expected revenue obtained by our approximate policy is at least 1/(1+L) of the optimal total expected revenue. In many network revenue management settings, although the number of resources and products can become large, the number of resources used by each product remains bounded; in this case, our approximate policy provides a constant-factor performance guarantee. Our approach can incorporate customer choice behavior among the products and allows products to use multiple units of a resource, while still maintaining the performance guarantee.
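
The toy sketch below mimics the structure of this approach on an invented two-resource, three-product instance: per-resource value functions are built by a crude backward recursion that splits each fare across the resources the product uses, and a request is accepted if its fare covers the resulting opportunity cost. The instance data and the particular recursion are assumptions; they do not reproduce the talk's coefficients or its 1/(1+L) guarantee.

```python
# Separable value-function approximation with a simple backward recursion.
import numpy as np

T = 30                       # decision periods
CAP = [5, 5]                 # initial capacity of resources 0 and 1 (assumed)
products = {                 # product -> (fare, resources it uses), all made up
    "local_0": (100.0, [0]),
    "local_1": (120.0, [1]),
    "through": (180.0, [0, 1]),
}
arrival_prob = {"local_0": 0.3, "local_1": 0.3, "through": 0.3}

# v[t][i][c]: approximate value of holding c units of resource i with t periods
# to go, with each fare split equally across the resources the product uses.
v = [[np.zeros(cap_i + 1) for cap_i in CAP] for _ in range(T + 1)]
for t in range(1, T + 1):
    for i, cap_i in enumerate(CAP):
        for c in range(cap_i + 1):
            stay = v[t - 1][i][c]
            gain = 0.0
            for name, (fare, uses) in products.items():
                if i in uses and c >= 1:
                    share = fare / len(uses)
                    gain += arrival_prob[name] * max(share + v[t - 1][i][c - 1] - stay, 0.0)
            v[t][i][c] = stay + gain

def accept(t_remaining, inventory, product):
    """Accept a request if its fare covers the approximate opportunity cost of
    the resources it consumes (the sum of per-resource marginal values)."""
    fare, uses = products[product]
    if any(inventory[i] == 0 for i in uses):
        return False
    opp_cost = sum(v[t_remaining - 1][i][inventory[i]] - v[t_remaining - 1][i][inventory[i] - 1]
                   for i in uses)
    return fare >= opp_cost

print(accept(T, CAP, "through"))   # decision for a 'through' request at the start
```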

Alfredo Torrico: Submodular Optimization: Greedy Adapts to Sharpness

Submodular maximization has been widely studied over the past decades, mostly because of its numerous applications to real-world problems. It is well known that the standard greedy algorithm guarantees a worst-case approximation factor of 1-1/e when maximizing a monotone submodular function under a cardinality constraint. However, empirical studies show that its performance is substantially better in practice. This raises the natural question of explaining the improved performance of the greedy algorithm. In this work, we define sharpness for submodular functions as a candidate explanation for this phenomenon. The sharpness criterion is inspired by the concept of strong convexity in convex optimization. We show that the greedy algorithm provably performs better as the sharpness of the submodular function increases. This is joint work with Mohit Singh and Sebastian Pokutta.
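
For reference, the sketch below runs the standard greedy algorithm on a small, invented coverage instance, the setting in which the 1-1/e worst-case factor applies; the sharpness analysis itself is not reproduced.

```python
# Standard greedy for monotone submodular maximization (set coverage) under a
# cardinality constraint k. The ground sets below are invented.
def coverage(selected, sets):
    return len(set().union(*(sets[i] for i in selected))) if selected else 0

def greedy(sets, k):
    chosen = []
    for _ in range(k):
        best = max((i for i in range(len(sets)) if i not in chosen),
                   key=lambda i: coverage(chosen + [i], sets))
        chosen.append(best)
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}, {2, 5}]
picked = greedy(sets, k=2)
print(picked, coverage(picked, sets))   # greedy value is >= (1 - 1/e) * optimum
```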
