Installing the TensorFlow 2.0 Preview for Deep Learning (with Jupyter Notebooks)

January 11 · 专知

[Overview] The deep learning framework TensorFlow 2.0 (preview) can now be installed with pip. This article shows how to install the TensorFlow 2.0 preview and introduces tf2_course, a GitHub project that provides TensorFlow 2 exercises and solutions in the form of Jupyter notebooks.



TensorFlow is one of the most popular deep learning frameworks, and the long-awaited TensorFlow 2.0 is now available as a preview that can be installed directly with pip. Note that the current preview may still contain bugs and is not guaranteed to match the final 2.0 release.


Installing the TensorFlow 2.0 Preview


We successfully installed the TensorFlow 2.0 preview in a Python 3.6 environment on Ubuntu (we have not yet managed to install it on Windows or under Python 3.5). First, use Miniconda or Anaconda to create a Python 3.6 environment named python36:

conda create -n python36 python=3.6

Once the environment is created, activate it with:

source activate python36

Then install the TensorFlow 2.0 preview directly with pip:

pip install tf-nightly-gpu-2.0-preview

Note: running tf-nightly-gpu-2.0-preview requires CUDA 10 to be installed; otherwise you will see the following error:

ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory
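After installation, a quick sanity check confirms that the preview imports and that eager execution is on by default in TensorFlow 2.0. This is a minimal sketch (the exact dev version string will differ from one nightly build to the next):

```python
import tensorflow as tf

# The preview reports a dev version string such as "2.0.0-dev20190111"
print(tf.__version__)

# In TF 2.0, eager execution is enabled by default
print(tf.executing_eagerly())

# Operations run immediately, no Session needed
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.reduce_sum(x).numpy())  # 10.0
```

If the import itself fails with the libcublas error above, check the CUDA 10 installation before anything else.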


The tf2_course Tutorial


tf2_course is a Jupyter Notebook project on GitHub containing TensorFlow 2 exercises and solutions: https://github.com/ageron/tf2_course


Clone the tutorial locally with git; replace $HOME with whatever directory you prefer:

$ cd $HOME  # or any other development directory you prefer
$ git clone https://github.com/ageron/tf2_course.git
$ cd tf2_course
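With the repository cloned, the notebooks can be opened with Jupyter. A typical sequence (assuming Jupyter is not yet present in the python36 environment) is:

```shell
$ pip install jupyter   # skip if Jupyter is already installed
$ jupyter notebook      # starts the notebook server in tf2_course
```

Your browser should open at a file listing from which each notebook below can be launched.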

The tutorial's Jupyter notebooks all live in the tf2_course directory, including:

  • Neural Nets with Keras

    Overview: common neural-network tasks using tensorflow.keras.

    Link: https://github.com/ageron/tf2_course/blob/master/01_neural_nets_with_keras.ipynb

  • Low-Level TensorFlow API

    Overview: basic low-level TensorFlow APIs, such as defining layers by hand.

    Link: https://github.com/ageron/tf2_course/blob/master/02_low_level_tensorflow_api.ipynb

  • Loading and Preprocessing Data

    Overview: data preprocessing, e.g. with tf.data.Dataset.

    Link: https://github.com/ageron/tf2_course/blob/master/03_loading_and_preprocessing_data.ipynb
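As a taste of what the first and third notebooks cover, here is a hedged sketch (not taken from the course; the toy data and layer sizes are illustrative) that trains a small tf.keras model on a tf.data pipeline:

```python
import numpy as np
import tensorflow as tf

# Illustrative toy data: 100 samples with 8 features, binary labels
X = np.random.rand(100, 8).astype("float32")
y = (X.sum(axis=1) > 4.0).astype("float32")

# A small sequential model built with the tf.keras API (notebook 01)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# tf.data.Dataset (notebook 03) wraps the arrays into a shuffled, batched pipeline
ds = tf.data.Dataset.from_tensor_slices((X, y)).shuffle(100).batch(32)
model.fit(ds, epochs=2, verbose=0)

print(model.predict(X[:3], verbose=0).shape)  # (3, 1): one probability per sample
```

The notebooks develop each of these pieces in much more depth, with exercises and worked solutions.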


References:

  • https://github.com/ageron/tf2_course


-END-
