A visual identity (VI) system is a systematic, unified system of visual symbols. Visual identity is the concrete, visualized form in which static identification symbols are communicated; it covers the largest number of items and the widest range of levels, and its effect is the most direct. As the VI component of a Corporate Identity System (CIS), it uses a complete and systematic visual communication framework to translate abstract meanings such as corporate philosophy, cultural character, service content, and corporate norms into concrete symbols, thereby shaping a distinctive corporate image. A visual identity system has two parts: a basic element system and an application element system. The basic element system mainly includes the corporate name, logo, standard typeface, standard colors, symbolic patterns, slogans, and marketing reports. The application system mainly includes office stationery, production equipment, architectural environments, product packaging, advertising media, vehicles, uniforms, flags, signboards, signage, shop windows, and display exhibits. Within the CI system, visual identity (VI) is the form most readily accepted by the public and holds the dominant position.

VIP Content

Title: Deep Isometric Learning for Visual Recognition

Abstract: Initialization, normalization, and skip connections are considered three indispensable techniques for training very deep convolutional neural networks and achieving state-of-the-art performance. This paper shows that deep convolutional networks without normalization or skip connections can also be trained to achieve surprisingly good performance on standard image recognition benchmarks. This is accomplished by enforcing the convolution kernels to be near isometric during initialization and training, and by using a variant of ReLU that preserves the isometry. Further experiments show that, when combined with skip connections, such near-isometric networks can match the performance of ResNet on the ImageNet and COCO datasets even without any normalization.
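
The near-isometry idea can be made concrete with a small sketch. The snippet below is only an illustration of the concept, not the authors' implementation: the kernel shapes, the identity-style "delta" initialization, and the exact form of the orthogonality penalty on the flattened kernel are assumptions chosen for clarity.

import numpy as np

def delta_init(out_ch, in_ch, k):
    # Identity-like ("delta") initialization: the center tap copies each
    # input channel to the matching output channel, so the layer starts
    # out as a (near) isometry.
    w = np.zeros((out_ch, in_ch, k, k), dtype=np.float32)
    c = k // 2
    for i in range(min(out_ch, in_ch)):
        w[i, i, c, c] = 1.0
    return w

def isometry_penalty(w):
    # Penalize deviation of the flattened kernel from a row-orthonormal
    # matrix: ||W W^T - I||_F^2. Adding such a term to the training loss
    # keeps the kernels close to isometric during training.
    out_ch = w.shape[0]
    W = w.reshape(out_ch, -1)
    gram = W @ W.T
    return float(np.sum((gram - np.eye(out_ch)) ** 2))

w = delta_init(out_ch=64, in_ch=64, k=3)
print(isometry_penalty(w))  # 0.0 at initialization

A freshly delta-initialized kernel incurs zero penalty; the penalty grows as training drives the kernel away from an isometry, which is what enforcing near-isometry during training is meant to prevent.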


Latest Papers

Gaussian processes (GPs) provide a framework for Bayesian inference that can offer principled uncertainty estimates for a large range of problems. For example, if we consider regression problems with Gaussian likelihoods, a GP model enjoys a posterior in closed form. However, identifying the posterior GP scales cubically with the number of training examples and requires storing all examples in memory. In order to overcome these obstacles, sparse GPs have been proposed that approximate the true posterior GP with pseudo-training examples. Importantly, the number of pseudo-training examples is user-defined and enables control over computational and memory complexity. In the general case, sparse GPs do not enjoy closed-form solutions and one has to resort to approximate inference. In this context, a convenient choice for approximate inference is variational inference (VI), where the problem of Bayesian inference is cast as an optimization problem -- namely, maximizing a lower bound of the log marginal likelihood. This paves the way for a powerful and versatile framework, where pseudo-training examples are treated as optimization arguments of the approximate posterior and are jointly identified together with the hyperparameters of the generative model (i.e. prior and likelihood). The framework can naturally handle a wide scope of supervised learning problems, ranging from regression with heteroscedastic and non-Gaussian likelihoods to classification with discrete labels, as well as multilabel problems. The purpose of this tutorial is to make the basic material accessible to readers without prior knowledge of either GPs or VI. A proper exposition of the subject also opens the way to more recent advances (such as importance-weighted VI as well as interdomain, multioutput, and deep GPs) that can serve as an inspiration for new research ideas.
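
To make the closed-form posterior and the cubic cost concrete, below is a minimal NumPy sketch of exact GP regression with an RBF kernel. The kernel choice, noise level, and toy data are illustrative assumptions rather than part of the tutorial.

import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) covariance function.
    sqdist = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_posterior(X, y, X_star, noise_var=0.1):
    # Closed-form posterior mean and variance at test inputs X_star.
    # The Cholesky factorization of the n x n kernel matrix is the O(n^3)
    # step that motivates sparse approximations with pseudo-training points.
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)
    K_ss = rbf_kernel(X_star, X_star)

    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss - v.T @ v)
    return mean, var

# Toy example: noisy observations of sin(x).
X = np.linspace(0, 5, 20).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.randn(20)
X_star = np.linspace(0, 5, 100).reshape(-1, 1)
mean, var = gp_posterior(X, y, X_star)

A sparse variational GP replaces the n training inputs inside this Cholesky step with a much smaller, user-chosen set of pseudo-inputs and optimizes their locations together with the kernel hyperparameters by maximizing the variational lower bound described above.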
