CMS Upcoming Seminars
CMS Upcoming Seminar Feed
http://cms.caltech.edu/seminars.rss
en

TCS+ talk: Explicit Binary Tree Codes with Polylogarithmic Size Alphabet
bjleung@caltech.edu (Bonnie Leung)
TCS+ talk
<strong>Speaker(s):</strong> Leonard Schulman (Caltech)<br>
<strong>Location:</strong> Annenberg 205<br>
<p><strong>Abstract: </strong>Tree codes are "real-time" or "causal" error-correcting codes. They are known to exist, but an explicit construction has been a longstanding open problem. We report on progress on this problem.</p>
<p>For every constant delta we give an explicit binary tree code with distance delta and alphabet size poly(log n), where n is the depth of the tree. This is the first improvement over a two-decade-old construction whose alphabet is exponentially larger, of size poly(n).</p>
<p>As part of the analysis, we prove a bound on the number of positive integer roots a real polynomial can have in terms of its sparsity with respect to the Newton basis, a result of independent interest.</p>
<p>Joint work with Gil Cohen (Princeton) and Bernhard Haeupler (CMU).</p>
Wed, 23 May 2018 10:00:00 -0700
http://cms.caltech.edu/events/81681

Young Investigators Lecture: Low-Complexity Modeling for Visual Data: Representations and Algorithms
lchavarr@caltech.edu (Liliana Chavarria)
Young Investigators Lecture
<strong>Speaker(s):</strong> Yuqian Zhang<br>
<strong>Location:</strong> Moore B280<br>
<p><strong>ABSTRACT: </strong>This talk focuses on representations and algorithms for visual data, in light of recent theoretical and algorithmic developments in high-dimensional data analysis. We first consider the problem of modeling a given dataset as superpositions of basic motifs. This simple model arises from several important applications, including microscopy image analysis, neural spike sorting, and image deblurring. This motif-finding problem can be phrased as "short-and-sparse" blind deconvolution, in which the goal is to recover a short motif (convolution kernel) from its convolution with a random spike train.
We assume the kernel has unit Frobenius norm and formulate recovery as a nonconvex optimization problem over the sphere. By analyzing the optimization landscape, we argue that when the target spike train is sufficiently sparse, then on a region of the sphere every local minimum is equivalent to the ground truth. This geometric characterization implies that efficient methods obtain the ground truth under the same conditions. We next consider the problem of modeling physical nuisances across a collection of images, in the context of illumination-invariant object detection and recognition. We study the image formation process for general nonconvex objects (e.g., faces), and propose a test data construction methodology that achieves object verification with worst-case performance guarantees. In addition, we leverage tools from sparse and low-rank decomposition to reduce the complexity of both storage and computation. These examples show the possibility of formalizing certain vision problems with rigorous guarantees.</p>
<p><strong>BIO:</strong> Yuqian Zhang is a Ph.D. candidate in the Electrical Engineering Department at Columbia University, advised by Professor John Wright. She received her B.S. in Electrical Engineering from Xi'an Jiaotong University. She was selected to participate in the Rising Stars in EECS Workshop 2017. Her research spans optimization, computer vision, signal processing, and machine learning. Her primary interest is developing efficient, reliable, and robust algorithms for applications in computer vision, scientific data analysis, and related areas.
</p>
Wed, 23 May 2018 16:00:00 -0700
http://cms.caltech.edu/events/81127

Rigorous Systems Research Group (RSRG) Seminar: Generalized Benders Cuts for Infinite-Horizon Control Problems
ysui@caltech.edu (Yanan Sui)
Rigorous Systems Research Group (RSRG) Seminar
<strong>Speaker(s):</strong> Joe Warrington (Swiss Federal Institute of Technology)<br>
<strong>Location:</strong> Annenberg 213<br>
<p>We describe a nonlinear generalization of dual dynamic programming theory and its application to value function estimation for deterministic control problems over continuous state and input spaces, in a discrete-time infinite-horizon setting. We prove that the result of a one-stage policy evaluation can be used to produce nonlinear lower bounds on the optimal value function that are valid over the entire state space. These bounds reflect the functional form of the system's costs, dynamics, and constraints. We provide an iterative algorithm that produces successively better approximations of the optimal value function, prove some key properties of the algorithm, and describe means of certifying the quality of the output. We demonstrate the efficacy of the approach on systems whose dimensions are too large for conventional dynamic programming approaches to be practical.</p>
Thu, 24 May 2018 12:00:00 -0700
http://cms.caltech.edu/events/82087

Rigorous Systems Research Group (RSRG) Seminar: Addressing Challenges in Autonomy: Lessons from Information and Control Theories
james@caltech.edu (James Anderson)
Rigorous Systems Research Group (RSRG) Seminar
<strong>Speaker(s):</strong> Reza Ahmadi (UT Austin)<br>
<strong>Location:</strong> Annenberg 314<br>
<p>We live in the prolific age of artificial intelligence and machine learning. These automation technologies underlie real systems (e.g., robots and self-driving vehicles) and virtual systems (e.g., financial and inventory management).
The problem is that many of these autonomous systems have become so intricate and black-box that we hit a complexity roadblock. For example, it can be difficult to tell why a classifier or a recommendation engine based on machine learning works. Moreover, when the algorithms do work, how can we quantify their limitations, safety, privacy, and performance with guarantees? In this talk, I borrow notions from control and information theory to address two challenges in autonomy. The first, motivated by the Mars 2020 project, concerns navigation of an autonomous agent in an uncertain environment (modeled by a Markov decision process) subject to communication and sensing limitations (in terms of transfer entropy) and a high-level mission specification (characterized by linear temporal logic formulae). The second concerns belief verification in autonomous systems (represented by a partially observable Markov decision process), with applications in privacy verification of autonomous systems (e.g., a robot) operating on shared infrastructure, and in machine teaching.</p>
Fri, 25 May 2018 14:00:00 -0700
http://cms.caltech.edu/events/82302

TCS+ talk: Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
bjleung@caltech.edu (Bonnie Leung)
TCS+ talk
<strong>Speaker(s):</strong> Michael Kearns (University of Pennsylvania)<br>
<strong>Location:</strong> Annenberg 205<br>
<p><strong>Abstract: </strong>The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of pre-defined groups and then ask for parity of some statistic of the classifier across these groups. Constraints of this form are susceptible to intentional or inadvertent "fairness gerrymandering", in which a classifier appears to be fair on each individual group but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes.
We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. This interpolates between statistical definitions of fairness and recently proposed individual notions of fairness, but it raises several computational challenges: it is no longer clear how to audit a fixed classifier to see whether it satisfies such a strong definition of fairness.</p>
<p>We prove that the computational problem of auditing subgroup fairness, for both equality of false positive rates and statistical parity, is equivalent to the problem of weak agnostic learning. This means it is computationally hard in the worst case, even for simple structured subclasses, but it also raises the possibility of applying empirically successful machine learning methods to the problem.</p>
<p>We then derive two algorithms that provably converge to the best fair classifier, given access to oracles that can solve the agnostic learning problem. The algorithms are based on a formulation of subgroup fairness as a two-player zero-sum game between a Learner and an Auditor. Our first algorithm provably converges in a polynomial number of steps. Our second algorithm is only guaranteed to converge asymptotically, but has the merit of simplicity and faster per-step computation.
We implement the simpler algorithm using linear regression as a heuristic oracle and show that we can effectively both audit and learn fair classifiers on real datasets.</p>
<p>Joint work with Seth Neel, Aaron Roth, and Zhiwei Steven Wu.</p>
Wed, 30 May 2018 10:00:00 -0700
http://cms.caltech.edu/events/81627

Rigorous Systems Research Group (RSRG) Seminar: Online Experiment Design
ysui@caltech.edu (Yanan Sui)
Rigorous Systems Research Group (RSRG) Seminar
<strong>Speaker(s):</strong> Reza Eghbali (Simons Institute for the Theory of Computing)<br>
<strong>Location:</strong> Annenberg 213<br>
<p>We consider a new and general online resource allocation problem, where the goal is to maximize a function of a positive semidefinite (PSD) matrix subject to a scalar budget constraint. The problem data arrive online, and the algorithm must make an irrevocable decision at each step. Of particular interest are classic experiment design problems in the online setting, with the algorithm deciding whether to allocate budget to each experiment as new experiments become available sequentially.</p>
<p>We analyze two greedy primal-dual algorithms and provide bounds on their competitive ratios. Our analysis relies on a smooth surrogate of the objective function that must satisfy a new diminishing-returns (PSD-DR) property: its gradient is order-reversing with respect to the PSD cone. Using the representation of monotone maps on the PSD cone given by Löwner's theorem, we obtain a convex parametrization of the family of functions satisfying PSD-DR. We then formulate a convex optimization problem to directly optimize our competitive ratio bound over this set. This design problem can be solved offline before the data start arriving. The online algorithm that uses the designed smoothing is tailored to the given cost function and enjoys a competitive ratio at least as good as our optimized bound.
We provide examples of computing the smooth surrogate for D-optimal and A-optimal experiment design, and demonstrate the performance of the custom-designed algorithm.</p>
Thu, 31 May 2018 12:00:00 -0700
http://cms.caltech.edu/events/81415

Finance Seminar: Chad Kendall, USC: Herding and Contrarianism: A Matter of Preference?
sabrina@hss.caltech.edu (Sabrina De Jaegher)
Finance Seminar: Chad Kendall, USC
<strong>Speaker(s):</strong> Chad Kendall (University of Southern California)<br>
<strong>Location:</strong> Baxter B125<br>
<p>Abstract: Herding and contrarian strategies in financial markets produce informational inefficiencies because investors ignore private information, instead following or bucking past trends. In a simple trading environment, I demonstrate theoretically that investors with prospect theory preferences ignore private information by following a strategy that looks like herding or contrarianism but is actually trend-independent. I confirm the theory's predictions in a laboratory experiment designed to rule out other sources of these behaviors, and find that approximately 70% of subjects exhibit herd-like behavior. Finally, I perform a calibration exercise using actual market data to demonstrate the applicability of the results to more general settings.</p>
<p><em><em>Finance Seminars at Caltech are funded through the generous support of The Ronald and Maxine Linde Institute of Economic and Management Sciences (lindeinstitute.caltech.edu) and Stephen A.
Ross.</em></em></p>
Thu, 31 May 2018 16:00:00 -0700
http://cms.caltech.edu/events/79342

Linde Institute/SISL Seminar: Luciano Pomatto, Caltech: Topic to be announced
mmartin@caltech.edu (Mary Martin)
Linde Institute/SISL Seminar: Luciano Pomatto, Caltech
<strong>Speaker(s):</strong> Luciano Pomatto (Caltech)<br>
<strong>Location:</strong> Baxter 127<br>
<p>Please check later for additional details.</p>
Fri, 01 Jun 2018 12:00:00 -0700
http://cms.caltech.edu/events/82026

PhD Thesis Seminar: The Nested Periodic Subspaces - Extensions of Ramanujan Sums for Period Estimation
PhD Thesis Seminar
<strong>Speaker(s):</strong> <br>
<strong>Location:</strong> Moore B270<br>
Fri, 01 Jun 2018 14:00:00 -0700
http://cms.caltech.edu/events/82295

CNS Seminar: TBD
mbereal@caltech.edu (Minah Bereal)
CNS Seminar
<strong>Speaker(s):</strong> Brian Wiltgen (UC Davis)<br>
<strong>Location:</strong> Beckman Behavioral Biology B180<br>
<p><strong>Title</strong>: Manipulating memory traces in the hippocampus</p>
<p><strong>Abstract:</strong> Since the discovery of patient H.M., researchers have known that the hippocampus is important for memory. Subsequent animal work reinforced this finding by showing that hippocampal dysfunction produces profound amnesia for spatial and contextual information. Despite these well-established facts, it is still unclear why the hippocampus is so fundamental for memory. The dominant idea, based on the work of Marr, is that memory is retrieved when the hippocampus reinstates patterns of cortical activity that were observed during learning. This idea is supported by spatial studies in rodents showing that learned sequences are replayed in the hippocampus and cortex after training. However, it has not been determined whether reactivation of cortical representations during replay (or memory retrieval) requires the hippocampus. To examine this idea, my lab uses fos-tTA mice to tag active hippocampal neurons with the long-lasting fluorescent protein H2B-GFP and light-sensitive opsins.
These proteins allow us to identify 'encoding neurons' several days after learning and to manipulate their activity with laser stimulation. When tagged hippocampal neurons are silenced, we find that memory retrieval is impaired and representations in the cortex and amygdala cannot be reactivated. Memory retrieval is also induced when tagged cells are stimulated, but only under certain conditions. These data are consistent with the idea that the hippocampus retrieves memory by reinstating patterns of cortical activity that were present during learning.</p>
Mon, 04 Jun 2018 16:00:00 -0700
http://cms.caltech.edu/events/82203

Rigorous Systems Research Group (RSRG) Seminar: TBA
ysui@caltech.edu (Yanan Sui)
Rigorous Systems Research Group (RSRG) Seminar
<strong>Speaker(s):</strong> Nan Jiang (Microsoft Research, NYC)<br>
<strong>Location:</strong> Annenberg 213<br>
<p>TBA</p>
Thu, 07 Jun 2018 12:00:00 -0700
http://cms.caltech.edu/events/82088

IQIM Postdoctoral and Graduate Student Seminar: TBA
marciab@caltech.edu (Marcia Brown)
IQIM Postdoctoral and Graduate Student Seminar
<strong>Speaker(s):</strong> Oskar Painter (Caltech)<br>
<strong>Location:</strong> East Bridge 114<br>
<p><strong>Abstract</strong>: TBA</p>
Fri, 15 Jun 2018 12:00:00 -0700
http://cms.caltech.edu/events/82244

Finance Seminar: Topic to be announced
sabrina@hss.caltech.edu (Sabrina De Jaegher)
Finance Seminar
<strong>Speaker(s):</strong> Terrance Odean (UC Berkeley)<br>
<strong>Location:</strong> Baxter B125<br>
<p>Please check later for additional details.</p>
<p><em><em>Finance Seminars at Caltech are funded through the generous support of The Ronald and Maxine Linde Institute of Economic and Management Sciences (lindeinstitute.caltech.edu) and Stephen A.
Ross.</em></em></p>
Thu, 11 Oct 2018 16:00:00 -0700
http://cms.caltech.edu/events/82361

Finance Seminar: Topic to be announced
sabrina@hss.caltech.edu (Sabrina De Jaegher)
Finance Seminar
<strong>Speaker(s):</strong> Alex Imas<br>
<strong>Location:</strong> Baxter B125<br>
<p>Please check later for additional details.</p>
<p><em><em>Finance Seminars at Caltech are funded through the generous support of The Ronald and Maxine Linde Institute of Economic and Management Sciences (lindeinstitute.caltech.edu) and Stephen A. Ross.</em></em></p>
Thu, 29 Nov 2018 16:00:00 -0800
http://cms.caltech.edu/events/82343

Finance Seminar: Topic to be announced
sabrina@hss.caltech.edu (Sabrina De Jaegher)
Finance Seminar
<strong>Speaker(s):</strong> Camelia M. Kuhnen (University of North Carolina at Chapel Hill)<br>
<strong>Location:</strong> Baxter B125<br>
<p>Please check later for additional details.</p>
<p><em><em>Finance Seminars at Caltech are funded through the generous support of The Ronald and Maxine Linde Institute of Economic and Management Sciences (lindeinstitute.caltech.edu) and Stephen A. Ross.</em></em></p>
Thu, 30 May 2019 16:00:00 -0700
http://cms.caltech.edu/events/82359