Installing the TensorFlow 2.0 Preview for Deep Learning (with Jupyter Notebooks)

January 11 · Zhuanzhi (专知)

[Overview] The TensorFlow 2.0 (preview) deep learning framework can now be installed with pip. This article explains how to install the TensorFlow 2.0 preview and introduces tf2_course, a GitHub project containing TensorFlow 2 exercises and solutions in the form of Jupyter Notebooks.



TensorFlow is one of the most popular deep learning frameworks, and the long-awaited TensorFlow 2.0 is now available as a preview that can be installed directly with pip. The current TensorFlow 2.0 preview may still contain bugs, and there is no guarantee it will match the final 2.0 release.


Installing the TensorFlow 2.0 Preview


We successfully installed the TensorFlow 2.0 preview in a Python 3.6 environment on Ubuntu (we have not yet succeeded on Windows or with Python 3.5). First, create a Python 3.6 environment named python36 with Miniconda or Anaconda:

conda create -n python36 python=3.6

Once it is created, activate the python36 environment (newer versions of conda use conda activate python36 instead):

source activate python36

Then install the TensorFlow 2.0 preview directly with pip (this is the GPU build; a CPU-only build is published as tf-nightly-2.0-preview):

pip install tf-nightly-gpu-2.0-preview

Note: running tf-nightly-gpu-2.0-preview requires CUDA 10; otherwise you will see the following error:

ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory
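After installation, a quick sanity check confirms the version and that eager execution is on by default in 2.0 (a minimal sketch; it assumes the preview installed successfully and, for the GPU build, that CUDA 10 is visible):

```python
import tensorflow as tf

# The preview reports a 2.0 development version string,
# e.g. "2.0.0-dev20190111".
print(tf.__version__)

# In TensorFlow 2.0, eager execution is enabled by default,
# so operations run immediately and return concrete values.
print(tf.executing_eagerly())  # True

# A small computation runs without building a graph or session;
# the elements of x sum to 10.0.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.reduce_sum(x))
```

If the import itself fails with the libcublas error above, the CUDA 10 libraries are not on your loader path.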


The tf2_course Tutorial


tf2_course is a Jupyter Notebook project on GitHub containing exercises and solutions for TensorFlow 2. The GitHub address is: https://github.com/ageron/tf2_course


Clone the tutorial to your machine with git (replace $HOME with whatever directory you prefer):

$ cd $HOME  # or any other development directory you prefer
$ git clone https://github.com/ageron/tf2_course.git
$ cd tf2_course

The tutorial's Jupyter Notebooks are all in the tf2_course directory, including:

  • Neural Nets with Keras

    Summary: common neural network operations using tensorflow.keras.

    Link: https://github.com/ageron/tf2_course/blob/master/01_neural_nets_with_keras.ipynb

  • Low-Level TensorFlow API

    Summary: basics of the low-level TensorFlow API, such as defining layers.

    Link: https://github.com/ageron/tf2_course/blob/master/02_low_level_tensorflow_api.ipynb

  • Loading and Preprocessing Data

    Summary: data loading and preprocessing, e.g. with tf.data.Dataset.

    Link: https://github.com/ageron/tf2_course/blob/master/03_loading_and_preprocessing_data.ipynb
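As a taste of the ground these notebooks cover, here is a minimal sketch of our own (an illustrative example, not code from the course): a tiny classifier built with the tf.keras Sequential API, fed from a tf.data.Dataset. The toy data and layer sizes are arbitrary.

```python
import numpy as np
import tensorflow as tf

# Toy data: 100 samples with 4 features each, 3 classes (illustrative only).
features = np.random.rand(100, 4).astype("float32")
labels = np.random.randint(0, 3, size=(100,))

# tf.data.Dataset handles shuffling and batching of the input pipeline.
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(100).batch(32)

# A small fully connected classifier built with the Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Keras models consume tf.data datasets directly.
model.fit(dataset, epochs=2, verbose=0)

# Predictions are probability distributions over the 3 classes.
probs = model.predict(features[:5])
print(probs.shape)  # (5, 3)
```

The first notebook works through the Keras part of this pipeline in much more depth; the third covers the tf.data side.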


Reference links:

  • https://github.com/ageron/tf2_course


-END-
