Interesting facts about quantum computing

24 October 2018, FT中文网



Margolis (马戈利斯): Quantum computing may be imprecise. Your iPhone 27 may not be a quantum computer, but its battery may well have been designed by one.


Thermal preferences vary from person to person and may change over time. The objective of this paper is to sequentially pose intelligent queries to occupants in order to optimally learn the room temperatures which maximize their satisfaction. Our central hypothesis is that an occupant's preference relation over room temperatures can be described by a scalar function of those temperatures, which we call the "occupant's thermal utility function". Information about an occupant's preferences is available to us through their responses to thermal preference queries: "prefer warmer," "prefer cooler," and "satisfied," which we interpret as statements about the derivative of the utility function, i.e., that the utility function is increasing, decreasing, or constant, respectively. We model this hidden utility function using a Gaussian process with a built-in unimodality constraint, i.e., the utility function has a unique maximum, and we train the model using Bayesian inference. This permits an expected-improvement-based selection of the next preference query to pose to the occupant, which balances exploration (sampling from areas of high uncertainty) against exploitation (sampling from areas likely to offer an improvement over the current best observation). We use this framework to sequentially design experiments and illustrate its benefits by showing that it requires drastically fewer observations to learn the maximally preferred room temperature values than other methods. This framework is an important step towards intelligent HVAC systems able to respond to individual occupants' personalized thermal comfort needs. To encourage the use of our preference elicitation (PE) framework and to ensure reproducibility, we publish an implementation of our work, named GPPrefElicit, as an open-source Python package.
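The paper's selection rule is expected improvement on a unimodality-constrained Gaussian process; as a much simpler sketch of why derivative-sign responses are so informative, the snippet below locates a simulated occupant's preferred temperature by bisection. This is not the authors' GPPrefElicit method: the occupant model, the preferred temperature, and all function names here are invented for illustration.

```python
# Simplified illustration (not the paper's GP-based method): each response is
# read as the sign of the utility derivative, and since the utility is assumed
# unimodal, its maximizer can be bracketed by bisection on that sign.

def simulate_response(temp, preferred=22.5, tol=0.25):
    """Hypothetical occupant's answer, interpreted as a derivative sign."""
    if temp < preferred - tol:
        return "prefer warmer"   # utility increasing at temp
    if temp > preferred + tol:
        return "prefer cooler"   # utility decreasing at temp
    return "satisfied"           # utility roughly constant: near the maximum

def elicit_preferred_temp(lo=16.0, hi=30.0, max_queries=20):
    """Shrink the bracket [lo, hi] around the maximizer one query at a time."""
    for n in range(1, max_queries + 1):
        mid = (lo + hi) / 2.0
        response = simulate_response(mid)
        if response == "satisfied":
            return mid, n
        if response == "prefer warmer":
            lo = mid             # maximizer lies above mid
        else:
            hi = mid             # maximizer lies below mid
    return (lo + hi) / 2.0, max_queries

temp, n_queries = elicit_preferred_temp()
```

Bisection halves the bracket per query, so a handful of responses suffices; the paper's GP framework additionally handles noisy answers and quantifies uncertainty, which this sketch does not.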


Recently, Blanchet, Kang, and Murthy (2016) and Blanchet and Kang (2017) showed that several machine learning algorithms, such as square-root Lasso, support vector machines, and regularized logistic regression, among many others, can be represented exactly as distributionally robust optimization (DRO) problems, in which the distributional uncertainty set is a neighborhood centered at the empirical distribution. In this work, we propose a methodology which learns such a neighborhood in a natural, data-driven way. We also apply robust optimization techniques to inform the choice of transportation cost. We show rigorously that our framework encompasses adaptive regularization as a particular case, and we demonstrate empirically that our proposed methodology improves upon a wide range of popular machine learning estimators.
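The robustification-equals-regularization connection can be seen concretely in the one-dimensional linear-regression case: with quadratic transportation cost on the covariate, the worst-case empirical squared loss over a ball of radius delta has the closed form (sqrt(MSE) + sqrt(delta)·|beta|)^2, a square-root-Lasso-style penalty. The sketch below is a simplified illustration of this special case under those assumptions, not the paper's general result (all variable names are made up); it verifies the formula by constructing the covariate perturbation that attains the bound.

```python
import numpy as np

# Hypothetical 1-D regression data and a candidate coefficient beta.
rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)
beta, delta = 1.5, 0.1

r = y - beta * x                 # residuals at beta
mse = np.mean(r**2)

# Closed-form worst-case expected loss over the radius-delta ball:
# (sqrt(MSE) + sqrt(delta) * |beta|)^2 -- a square-root-Lasso-type penalty.
closed_form = (np.sqrt(mse) + np.sqrt(delta) * abs(beta))**2

# The bound is attained by shifting each x_i against its residual, with the
# perturbation budget (1/n) * sum(d_i^2) spent exactly: it equals delta.
d = -np.sign(beta) * r * np.sqrt(delta / mse)
worst_case = np.mean((y - beta * (x + d))**2)
```

Each perturbed residual scales by the same factor (1 + |beta| * sqrt(delta / mse)), which is why the adversary's optimal attack produces exactly the additive sqrt(delta)·|beta| penalty on the root-mean-square error.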
