## Getting Started

1. Linear Regression — official scikit-learn documentation (English). Author: the scikit-learn project. http://scikit-learn.org/stable/supervised_learning.html#supervised-learning

2. scikit-learn 0.18 Chinese documentation. Author: ApacheCN (Apache 中文网). http://cwiki.apachecn.org/pages/viewpage.action?pageId=10030181

3. Linear regression and the gradient descent algorithm. Author: 王大宝的CD. http://blog.csdn.net/sinat_22594309/article/details/55203609

4. Notes on linear regression, logistic regression, and related regression concepts. Author: viewcode. http://blog.csdn.net/viewcode/article/details/8794401

5. Logistic Regression. Author: ApacheCN machine learning study group (629470233). https://github.com/apachecn/MachineLearning/blob/master/docs/5.Logistic%E5%9B%9E%E5%BD%92.md

6. Linear regression. Author: 大饼博士X. http://blog.csdn.net/xbinworld/article/details/43919445

7. Linear regression: the least-squares method. Author: sbp810050504. http://sbp810050504.blog.51cto.com/2799422/1269572

8. Stanford machine learning notes, part 1: linear regression. Author: llhthinker. http://www.cnblogs.com/llhthinker/p/5248586.html

9. Linear regression. Author: Duanxx. http://blog.csdn.net/daunxx/article/details/51556677

10. Predicting data with linear regression in Python. Author: 伯乐在线 (translated by HanKai). http://python.jobbole.com/81215/

11. Linear regression. Author: futurehau. http://www.cnblogs.com/futurehau/p/6105011.html

12. Linear regression basics with a diabetes-prediction example. Author: Eastmount. http://blog.csdn.net/eastmount/article/details/52929765

13. Simple analysis of oxide data with the LinearRegression model. Author: Eastmount. http://blog.csdn.net/eastmount/article/details/60468818
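Several of the tutorials above walk through fitting a linear regression with scikit-learn, including a diabetes-prediction example. A minimal sketch in that spirit, using the diabetes dataset bundled with scikit-learn (the split and `random_state` are illustrative choices, not taken from any of the linked posts):

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Load the bundled diabetes regression dataset (10 features, continuous target)
X, y = load_diabetes(return_X_y=True)

# Hold out a test set so the score reflects generalization, not memorization
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ordinary least squares fit
model = LinearRegression().fit(X_train, y_train)

# score() reports R^2 on the held-out data
print(model.score(X_test, y_test))
```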

## Notable Researchers

### Course Topics (CSC2515, University of Toronto)

• Introduction to nearest neighbors
• Decision tree ensembles
• Linear regression and linear classification
• Softmax regression, SVMs, boosting
• PCA, k-means, maximum likelihood
• Probabilistic graphical models
• Expectation maximization
• Neural networks
• Convolutional neural networks
• Reinforcement learning
• Differential privacy
• Algorithmic fairness

https://www.cs.toronto.edu/~huang/courses/csc2515_2020f/

### Recent Papers

Estimating causal effects from randomized experiments is central to clinical research. Reducing the statistical uncertainty in these analyses is an important objective for statisticians. Registries, prior trials, and health records constitute a growing compendium of historical data on patients under standard-of-care that may be exploitable to this end. However, most methods for historical borrowing achieve reductions in variance by sacrificing strict type-I error rate control. Here, we propose a use of historical data that exploits linear covariate adjustment to improve the efficiency of trial analyses without incurring bias. Specifically, we train a prognostic model on the historical data, then estimate the treatment effect using a linear regression while adjusting for the trial subjects' predicted outcomes (their prognostic scores). We prove that, under certain conditions, this prognostic covariate adjustment procedure attains the minimum variance possible among a large class of estimators. When those conditions are not met, prognostic covariate adjustment is still more efficient than raw covariate adjustment and the gain in efficiency is proportional to a measure of the predictive accuracy of the prognostic model above and beyond the linear relationship with the raw covariates. We demonstrate the approach using simulations and a reanalysis of an Alzheimer's Disease clinical trial and observe meaningful reductions in mean-squared error and the estimated variance. Lastly, we provide a simplified formula for asymptotic variance that enables power calculations that account for these gains. Sample size reductions between 10% and 30% are attainable when using prognostic models that explain a clinically realistic percentage of the outcome variance.
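The procedure the abstract describes can be sketched on simulated data: fit a prognostic model on historical standard-of-care records, predict each trial subject's control outcome (the prognostic score), then estimate the treatment effect with a linear regression that adjusts for both the raw covariates and that score. This is a toy illustration, not the paper's implementation — the gradient-boosting prognostic model, the data-generating process, and all sample sizes below are assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
beta = np.array([1.0, 0.5, -0.5, 0.2, 0.0])  # assumed linear part of the outcome

def control_outcome(X, rng):
    # Outcome under standard of care: linear term plus a nonlinearity
    # that raw linear covariate adjustment cannot capture
    return X @ beta + np.sin(3 * X[:, 0]) + rng.normal(size=len(X))

# Historical (standard-of-care) data: covariates and observed outcomes
X_hist = rng.normal(size=(2000, 5))
y_hist = control_outcome(X_hist, rng)

# Step 1: train a prognostic model on the historical data
prognostic = GradientBoostingRegressor(random_state=0).fit(X_hist, y_hist)

# Simulated randomized trial with true treatment effect tau = 1.0
n_trial, tau = 400, 1.0
X = rng.normal(size=(n_trial, 5))
w = rng.integers(0, 2, size=n_trial)          # 1:1 randomization
y = control_outcome(X, rng) + tau * w

# Step 2: prognostic scores = predicted control outcomes for trial subjects
m = prognostic.predict(X)

# Step 3: linear regression of outcome on treatment, raw covariates,
# and the prognostic score; the treatment coefficient estimates tau
design = np.column_stack([w, X, m])
fit = LinearRegression().fit(design, y)
tau_hat = fit.coef_[0]
print(tau_hat)
```

Because treatment is randomized, the adjustment does not bias `tau_hat`; the prognostic score only soaks up outcome variance the raw covariates miss (here, the `sin` term), which is the efficiency gain the abstract quantifies.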
