A Technical Overview of AI & ML in 2018 & Trends for 2019

December 24, 2018 · 待字闺中

Introduction

The last few years have been a dream run for Artificial Intelligence enthusiasts and machine learning professionals. These technologies have evolved from being a niche to becoming mainstream, and are impacting millions of lives today. Countries now have dedicated AI ministers and budgets to make sure they stay relevant in this race.

The same has been true for data science professionals. A few years back, you would have been comfortable knowing a few tools and techniques. Not anymore! There is so much happening in this domain, and so much to keep pace with, that it feels mind-boggling at times.

This is why I thought of taking a step back and looking at the developments in some of the key areas in Artificial Intelligence from a data science practitioner's perspective. What were these breakthroughs? What happened in 2018, and what can be expected in 2019? Read this article to find out!

P.S. As with any forecasts, these are my takes. These are based on me trying to connect the dots. If you have a different perspective — I would love to hear it. Do let me know what you think might change in 2019.

Areas we’ll cover in this article

  • Natural Language Processing (NLP)

  • Computer Vision

  • Tools and Libraries

  • Reinforcement Learning

  • AI for Good — A Move Towards Ethical AI

Natural Language Processing (NLP)


Making machines parse words and sentences has always seemed like a dream. There are way too many nuances and aspects of a language that even humans struggle to grasp at times. But 2018 has truly been a watershed moment for NLP.

We saw one remarkable breakthrough after another: ULMFiT, ELMo, OpenAI's Transformer and Google's BERT, to name a few. The successful application of transfer learning (applying pretrained models to your own data) to NLP tasks has blown open the door to potentially unlimited applications. Our podcast with Sebastian Ruder further cemented our belief in how far this field has come in recent times. As a side note, it's a must-listen podcast for all NLP enthusiasts.

Let's look at some of these key developments in a bit more detail. And if you want to learn the ropes of NLP and need a place to get started, make sure you head over to this 'NLP using Python' course. It's as good a place as any to start your text-fuelled journey!

ULMFiT

Designed by Sebastian Ruder and fast.ai’s Jeremy Howard, ULMFiT was the first framework that got the NLP transfer learning party started this year. For the uninitiated, it stands for Universal Language Model Fine-Tuning. Jeremy and Sebastian have truly put the word Universal in ULMFiT — the framework can be applied to almost any NLP task!

The best part about ULMFiT and the subsequent frameworks we’ll see soon? You don’t need to train models from scratch! These researchers have done the hard bit for you — take their learning and apply it in your own projects. ULMFiT outperformed state-of-the-art methods in six text classification tasks.

You can read this excellent tutorial by Prateek Joshi on how to get started with ULMFiT for any text classification problem.
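To make the two-stage transfer learning idea concrete, here is a minimal sketch using the fastai v1 text API that ULMFiT ships with. The CSV file, column names and hyperparameters are illustrative assumptions, and the exact function signatures vary between fastai releases, so treat this as an outline rather than a recipe.

```python
# Hedged sketch of ULMFiT-style transfer learning with the fastai v1 text API.
# 'reviews.csv' and its 'text'/'label' columns are hypothetical placeholders.
import pandas as pd
from fastai.text import (TextLMDataBunch, TextClasDataBunch, AWD_LSTM,
                         language_model_learner, text_classifier_learner)

df = pd.read_csv('reviews.csv')
train_df, valid_df = df[:-2000], df[-2000:]          # naive split for illustration

# Step 1: fine-tune the pretrained language model on the target corpus.
data_lm = TextLMDataBunch.from_df('.', train_df=train_df, valid_df=valid_df,
                                  text_cols='text')
lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
lm.fit_one_cycle(1, 1e-2)
lm.save_encoder('ft_enc')                            # keep the fine-tuned encoder

# Step 2: reuse that encoder inside a text classifier.
data_clas = TextClasDataBunch.from_df('.', train_df=train_df, valid_df=valid_df,
                                      text_cols='text', label_cols='label',
                                      vocab=data_lm.train_ds.vocab)
clf = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
clf.load_encoder('ft_enc')
clf.fit_one_cycle(1, 1e-2)                           # gradual unfreezing can follow
```

The key design choice is reuse: the encoder fine-tuned as a language model in step 1 is loaded straight into the classifier in step 2, which is what lets ULMFiT perform so well with relatively little labelled data.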

ELMo

Want to take a guess at what ELMo stands for? It's short for Embeddings from Language Models. Pretty creative, eh? Apart from its name resembling the famous Sesame Street character, ELMo grabbed the attention of the ML community as soon as it was released.

ELMo uses language models to obtain an embedding for each word while also taking into account the context in which that word appears in the sentence or paragraph. Context is a crucial aspect of NLP that most earlier approaches failed to capture. ELMo uses bi-directional LSTMs to create the embeddings. Don't worry if that sounds like a mouthful: check out this article for a really simple overview of what LSTMs are and how they work.

Like ULMFiT, ELMo significantly improves the performance of a wide variety of NLP tasks, like sentiment analysis and question answering. Read more about it here.
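For a feel of what "contextual" means in practice, here is a hedged sketch using the ElmoEmbedder helper from the AllenNLP 0.x releases that accompanied the paper; the class name and default weights are my recollection of that version, so check against the documentation of the release you are using.

```python
# Hedged sketch: contextual word vectors from a pretrained ELMo model via AllenNLP 0.x.
from allennlp.commands.elmo import ElmoEmbedder

elmo = ElmoEmbedder()                                # downloads the default pretrained biLM
tokens = ["The", "bank", "raised", "interest", "rates"]
vectors = elmo.embed_sentence(tokens)

# Shape (3, num_tokens, 1024): one 1024-d vector per token from each of the
# three biLM layers. The same word ("bank") would get a different vector in
# "The boat reached the river bank", which is exactly the contextual behaviour
# that static embeddings like word2vec cannot provide.
print(vectors.shape)
```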

Google’s BERT

Quite a few experts have claimed that the release of BERT marks a new era in NLP. Following ULMFiT and ELMo, BERT really blew away the competition with its performance. As the original paper states, "BERT is conceptually simple and empirically powerful".

BERT obtained state-of-the-art results on 11 (yes, 11!) NLP tasks. Check out their results on the SQuAD benchmark:

SQuAD v1.1 Leaderboard (Oct 8th, 2018)

  Rank and Model                    Test EM   Test F1
  1st Place Ensemble - BERT         87.4      93.2
  2nd Place Ensemble - nlnet        86.0      91.7
  1st Place Single Model - BERT     85.1      91.8
  2nd Place Single Model - nlnet    83.5      90.1

Interested in getting started? You can use either the PyTorch implementation or Google's own TensorFlow code to try and replicate the results on your own machine.

I’m fairly certain you are wondering what BERT stands for at this point.

It’s Bidirectional Encoder Representations from Transformers. Full marks if you got it right the first time.
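If you just want to poke at the model, here is a hedged sketch of extracting features with the pytorch-pretrained-bert package (the 2018 PyTorch port mentioned above); the model name and return values are as I recall that release, so verify them before relying on this.

```python
# Hedged sketch: extracting BERT features with the pytorch-pretrained-bert package.
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

text = "[CLS] BERT is conceptually simple and empirically powerful [SEP]"
tokens = tokenizer.tokenize(text)
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    encoded_layers, pooled = model(input_ids)    # hidden states per layer + pooled [CLS]

print(len(encoded_layers), encoded_layers[-1].shape)   # 12 layers, (1, seq_len, 768)
```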

Facebook’s PyText

How could Facebook stay out of the race? They have open sourced their own deep learning NLP framework called PyText. It was released earlier this week, so I'm yet to experiment with it, but the early reviews are extremely promising. According to research published by Facebook, PyText has led to a 10% increase in the accuracy of conversational models and has reduced training time as well.

PyText is actually behind a few of Facebook's own products, like Facebook Messenger. So working with it adds some real-world value to your portfolio (apart from the invaluable knowledge you'll gain, obviously).

You can try it out yourself by downloading the code from this GitHub repo.

Google Duplex

If you haven’t heard of Google Duplex yet, where have you been?! Sundar Pichai knocked it out of the park with this demo and it has been in the headlines ever since:

Since this is a Google product, there’s a slim chance of them open sourcing the code behind it. But wow! That’s a pretty awesome audio processing application to showcase. Of course it raises a lot of ethical and privacy questions, but that’s a discussion for later in this article. For now, just revel in how far we have come with ML in recent years.

NLP Trends to Expect in 2019

Who better than Sebastian Ruder himself to provide a handle on where NLP is headed in 2019? Here are his thoughts:

  • Pretrained language model embeddings will become ubiquitous; it will be rare to have a state-of-the-art model that is not using them

  • We’ll see pretrained representations that can encode specialized information which is complementary to language model embeddings. We will be able to combine different types of pretrained representations depending on the requirements of the task

  • We’ll see more work on multilingual applications and cross-lingual models. In particular, building on cross-lingual word embeddings, we will see the emergence of deep pretrained cross-lingual representations

Computer Vision


This is easily the most popular field right now in the deep learning space. I feel like we have plucked the low-hanging fruits of computer vision to quite an extent and are already in the refining stage. Whether it’s image or video, we have seen a plethora of frameworks and libraries that have made computer vision tasks a breeze.

We at Analytics Vidhya spent a lot of time this year working on democratizing these concepts. Check out our computer vision specific articles here, covering topics from object detection in videos and images to lists of pretrained models to get your deep learning journey started.

Here’s my pick of the best developments we saw in CV this year.

And if you're curious about this wonderful field (set to become one of the hottest job areas in the industry soon), go ahead and start your journey with our 'Computer Vision using Deep Learning' course.

The Release of BigGANs

Ian Goodfellow designed GANs in 2014, and the concept has spawned multiple and diverse applications since. Year after year we have seen the original concept tweaked to fit new practical use cases. But one thing had remained fairly consistent until this year: images generated by machines were fairly easy to spot. There would always be some inconsistency in the frame which made the distinction obvious.

But that boundary has started to blur in recent months. And with the creation of BigGANs, it could be removed permanently. Check out the below images generated using this method:


Unless you take a microscope to it, you won’t be able to tell if there’s anything wrong with that collection. Concerning or exciting? I’ll leave that up to you, but there’s no doubt GANs are changing the way we perceive digital images (and videos).

For the data scientists out there, these models were trained first on the ImageNet dataset and then on the JFT-300M data to show that they transfer well from one dataset to the other. I would also like to direct you to the GAN Dissection page, a really cool way to visualize and understand GANs.
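If you want to play with these models yourself, DeepMind published pretrained BigGAN generators on TensorFlow Hub. The sketch below is a hedged example of sampling from one of them; the module URL, input names ('z', 'y', 'truncation') and latent dimensionality are my recollection of that release, so double-check them on the Hub page.

```python
# Hedged sketch: sampling class-conditional images from a pretrained BigGAN on TF Hub.
import numpy as np
import tensorflow as tf          # TF 1.x style, matching the 2018 release
import tensorflow_hub as hub

module = hub.Module('https://tfhub.dev/deepmind/biggan-256/2')   # assumed module path

batch, dim_z, num_classes, truncation = 4, 140, 1000, 0.5
z = truncation * np.random.randn(batch, dim_z).astype(np.float32)  # (roughly) truncated latents
y = np.eye(num_classes, dtype=np.float32)[[207] * batch]           # ImageNet class 207: golden retriever

samples = module(dict(z=z, y=y, truncation=truncation))

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    images = sess.run(samples)   # (4, 256, 256, 3) images with values in [-1, 1]
```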

Fast.ai’s Model being Trained on ImageNet in 18 Minutes

This was a really cool development. There is a very common belief that you need a ton of data along with heavy computational resources to perform proper deep learning tasks. That includes training a model from scratch on the ImageNet dataset. I understand that perception — most of us thought the same before a few folks at fast.ai found a way to prove all of us wrong.

Their model reached 93% accuracy in an impressive 18 minutes. The hardware they used, detailed in their blog post, consisted of 16 public AWS cloud instances, each with 8 NVIDIA V100 GPUs. They built the algorithm using the fastai and PyTorch libraries.


The total cost of putting the whole thing together came out to be just $40. Jeremy has described their approach, including techniques, in much more detail here. A win for everyone!

NVIDIA’s vid2vid technique

Image processing has come on leaps and bounds in the last 4-5 years, but what about video? Translating methods from a static frame to a dynamic one has proved to be a little tougher than most imagined. Can you take a video sequence and predict what will happen in the next frame? It had been explored before, but the published research had been vague, at best.

NVIDIA decided to open source their approach earlier this year, and it was met with widespread praise. The goal of their vid2vid approach is to learn a mapping function from a given input video in order to produce an output video which depicts the contents of the input video with incredible precision.


You can try out their PyTorch implementation available on their GitHub here.

Computer Vision Trends to Expect in 2019

Like I mentioned earlier, we might see modifications rather than inventions in 2019. It might feel like more of the same — self-driving cars, facial recognition algorithms, virtual reality, etc. Feel free to disagree with me here and add your point of view — I would love to know what else we can expect next year that we haven’t already seen.

Drones, pending political and government approvals, might finally get the green light in the United States (India is far behind there). Personally, I would like to see a lot of the research being implemented in real-world scenarios. Conferences like CVPR and ICML portray the latest in this field but how close are those projects to being used in reality?

Visual question answering and visual dialog systems could finally make their long-awaited debut soon. These systems lack the ability to generalize but the expectation is that we’ll see an integrated multi-modal approach soon.

Self-supervised learning came to the forefront this year. I can bet on it being used in far more studies next year. It's a really cool line of research: the labels are derived directly from the data we input, rather than wasting time labelling images manually. Fingers crossed!

Tools and Libraries

This section will appeal to all data science professionals. Tools and libraries are the bread and butter of data scientists. I have been a part of plenty of debates about which tool is the best, which framework supersedes the other, which library is the most computationally economical, and so on. I'm sure quite a lot of you will be able to relate to this as well.

But one thing we can all agree on — we need to be on top of the latest tools in the field, or risk being left behind. The pace with which Python has overtaken everything else and planted itself as the industry leader is example enough of this. Of course a lot of this comes down to subjective choices (what tool is your organization using, how feasible is it to switch from the current framework to a new one, etc.), but if you aren’t even considering the state-of-the-art out there, then I implore you to start NOW.

So what made the headlines this year? Let’s find out!

PyTorch 1.0

What’s all the hype about PyTorch? I’ve mentioned it multiple times already in this article (and you’ll see more instances later). I’ll leave it to my colleague Faizan Shaikh to acquaint you with the framework.


That's one of my favorite deep learning articles on AV and a must-read! Given how slow TensorFlow can be at times, it opened the door for PyTorch to capture the deep learning market in double-quick time. Most of the code that I see open sourced on GitHub is a PyTorch implementation of the concept. That's no coincidence: PyTorch is super flexible, and the latest version (v1.0) already powers many Facebook products and services at scale, including performing 6 billion text translations a day.
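Part of that flexibility is the define-by-run style: the computation graph is just ordinary Python, so debugging a model feels like debugging any other script. A tiny, self-contained illustration (not tied to any Facebook product) looks like this:

```python
# A minimal PyTorch training step: the graph is built on the fly by autograd.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = TinyNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 10)            # dummy batch of features
y = torch.randint(0, 2, (64,))     # dummy labels

loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()                    # gradients computed through the dynamic graph
optimizer.step()
print(loss.item())
```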

PyTorch’s adoption rate is only going to go up in 2019 so now is as good a time as any to get on board.

AutoML — Automated Machine Learning

Automated machine learning (or AutoML) has been gradually making inroads in the last couple of years. Companies like RapidMiner, KNIME, DataRobot and H2O.ai have released excellent products showcasing the immense potential of this service.

Can you imagine working on an ML project where you only need a drag-and-drop interface, without writing any code? It's a scenario that's not too far off in the future. But apart from these companies, there was one significant release in the ML/DL space this year: Auto-Keras!


It's an open source library for performing AutoML tasks. The idea behind it is to make deep learning accessible to domain experts who perhaps don't have an ML background. Make sure you check it out here; it is primed to make a huge run in the coming years.
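To give a flavour of how low the barrier is, here is a hedged sketch of the early (0.x) Auto-Keras quickstart as I recall it: hand raw arrays to an ImageClassifier and let it search for an architecture. Method names and arguments have changed in later releases, so treat this purely as an illustration of the idea.

```python
# Hedged sketch of the Auto-Keras 0.x quickstart: architecture search on MNIST.
from keras.datasets import mnist
from autokeras import ImageClassifier

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape + (1,))   # add a channel dimension
x_test = x_test.reshape(x_test.shape + (1,))

clf = ImageClassifier(verbose=True)
clf.fit(x_train, y_train, time_limit=60 * 60)      # search for up to one hour
clf.final_fit(x_train, y_train, x_test, y_test, retrain=True)
print(clf.evaluate(x_test, y_test))
```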

TensorFlow.js — Deep Learning in the Browser

We’ve been building and designing machine learning and deep learning models in our favorite IDEs and notebooks since we got into this line of work. How about taking a step out and trying something different? Yes, I’m talking about performing deep learning in your web browser itself!


This is now a reality thanks to the release of TensorFlow.js. That link has a few demos as well which demonstrate how cool this open source concept is. There are primarily three advantages/features of TensorFlow.js:

  • Develop and deploy machine learning models with JavaScript

  • Run pre-existing TensorFlow models in your browser

  • Retrain pre-existing models

AutoML Trends to Expect in 2019

I wanted to focus particularly on AutoML in this section. Why? Because I feel it's going to be a real game-changer in the data science space in the next few years. But don't just take my word for it! Here's H2O.ai's Marios Michailidis, Kaggle Grandmaster, with his view of what to expect from AutoML in 2019:

Machine learning continues its march towards being one of the most important trends of the future and of where the world is heading. This expansion has increased the demand for skilled applications in this space. Given its growth, it is imperative that automation becomes the key to utilising data science resources as best as possible. The applications are limitless: credit, insurance, fraud, computer vision, acoustics, sensors, recommenders, forecasting, NLP, you name it. It is a privilege to be working in this space. The trends that will continue to be important can be defined as:
  • Providing smart visualisations and insights to help describe and understand the data

  • Finding/building/extracting better features for a given dataset

  • Building more powerful/smarter predictive models — quickly

  • Bridging the gap between black-box modelling and the productionisation of these models with machine learning interpretability (MLI)

  • Facilitating the productionisation of these models

Reinforcement Learning


If I had to pick one field where I want to see more penetration, it would be reinforcement learning. Apart from the occasional headlines we see at irregular intervals, there hasn’t yet been a game-changing breakthrough. The general perception I have seen in the community is that it’s too math-heavy and there are no real industry applications to work on.

While this is true to a certain extent, I would love to see more practical use cases coming out of RL next year. In my monthly GitHub and Reddit series, I try to include at least one repository or discussion on RL to help foster conversation around the topic. This might well be the next big thing to come out of all that research.

OpenAI have released a really helpful toolkit to get beginners started with the field, which I have mentioned below. You can also check out this beginner-friendly introduction on the topic (it has been super helpful for me).

If there’s anything I have missed, would love to hear your thoughts on it.

OpenAI’s Spinning Up in Deep Reinforcement Learning


If research in RL has been slow, the educational material around it has been minimal (at best). But true to their word, OpenAI have open sourced some awesome material on the subject. They are calling this project ‘Spinning Up in Deep RL’ and you can read all about it here.

It's actually quite a comprehensive list of resources on RL, and they have attempted to keep the code and explanations as simple as possible. There is quite a lot of material, including RL terminology, advice on how to grow into an RL research role, a list of important papers, a supremely well-documented code repository, and even a few exercises to get you started.

No more procrastinating now — if you were planning to get started with RL, your time has come!
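As an example of how approachable the package is, the algorithms can be launched straight from Python. The sketch below follows the launch pattern in the Spinning Up docs at release (TensorFlow 1 era); the exact keyword arguments are assumptions from memory, so check the current documentation.

```python
# Hedged sketch: launching Spinning Up's PPO implementation on a Gym environment.
import gym
import tensorflow as tf
from spinup import ppo

env_fn = lambda: gym.make('CartPole-v0')       # any Gym environment should work

ppo(env_fn=env_fn,
    ac_kwargs=dict(hidden_sizes=(64, 64), activation=tf.nn.relu),
    steps_per_epoch=4000,
    epochs=50,
    logger_kwargs=dict(output_dir='out/ppo_cartpole', exp_name='ppo_cartpole'))
```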

Dopamine by Google

To accelerate research and get the community more involved in reinforcement learning, the Google AI team has open sourced Dopamine, a TensorFlow framework that aims to speed up research by making it more flexible and reproducible.


You can find the entire training data along with the TensorFlow code (just 15 Python notebooks!) on this GitHub repository. It's the perfect platform for running quick experiments in a controlled and flexible environment, which sounds like a dream for any data scientist.

Reinforcement Learning Trends to Expect in 2019

Xander Steenbrugge, speaker at DataHack Summit 2018 and founder of the ArxivInsights channel, is quite the expert in reinforcement learning. Here are his thoughts on the current state of RL and what to expect in 2019:

  • I currently see three major problems in the domain of RL:

  • Sample complexity (the amount of experience an agent needs to see/gather in order to learn)

  • Generalization and transfer learning (Train on task A, test on related task B)

  • Hierarchical RL (automatic subgoal decomposition)

  • I believe that the first two problems can be addressed with a similar set of techniques, all related to unsupervised representation learning. Currently in RL, we are training deep neural nets that map from the raw input space (e.g. pixels) to actions in an end-to-end manner (e.g. with backpropagation), using sparse reward signals (e.g. the score of an Atari game or the success of a robotic grasp). The problem here is that:

  • It takes a really long time to actually "grow" useful feature detectors because the signal-to-noise ratio is very low. RL basically starts with random actions until it is lucky enough to stumble upon a reward, and then it needs to figure out how that specific reward was actually caused. Further exploration is either hardcoded (epsilon-greedy exploration) or encouraged with techniques like curiosity-driven exploration. This is not efficient, and it leads to problem 1.

  • Secondly, these deep NN architectures are known to be very prone to overfitting, and in RL we generally tend to test agents on the training data, so overfitting is actually encouraged in this paradigm.

  • A possible path forward that I am very enthusiastic about is to leverage unsupervised representation learning (autoencoders, VAEs, GANs, ...) to transform a messy, high-dimensional input space (e.g. pixels) into a lower-dimensional 'conceptual' space that has certain desirable properties such as:

  • Linearity, disentanglement, robustness to noise, …

  • Once you can map pixels into such a useful latent space, learning suddenly becomes much easier/faster (problem 1), and you can also hope that policies learned in this space will generalize much better because of the properties mentioned above (problem 2); see the sketch after this list for the flavour of this idea

  • I am not an expert on the Hierarchy problem, but everything mentioned above also applies here: it’s easier to solve a complicated hierarchical task in latent space than it is in raw input space.
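To make the "map pixels to a conceptual space" idea a little more tangible, here is an illustrative PyTorch sketch (my own, not from Xander's talk): a small convolutional encoder compresses raw frames into a low-dimensional latent vector, and a policy would then be learned on top of that latent rather than on raw pixels. In practice the encoder would be trained with a reconstruction, VAE or similar unsupervised objective.

```python
# Illustrative sketch: an encoder that maps 84x84 grayscale frames to a 32-d latent.
import torch
import torch.nn as nn

class PixelEncoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),   # 84 -> 20
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),  # 20 -> 9
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),  # 9 -> 7
        )
        self.fc = nn.Linear(64 * 7 * 7, latent_dim)

    def forward(self, obs):
        return self.fc(self.conv(obs).flatten(start_dim=1))

encoder = PixelEncoder()
frames = torch.rand(16, 1, 84, 84)     # a fake batch of observations
latents = encoder(frames)              # (16, 32): the compact 'conceptual' space
print(latents.shape)

# A policy head (e.g. nn.Linear(32, num_actions)) would now act on these latents,
# which is far easier to learn from than the raw 7056-dimensional pixel input.
```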

BONUS: Check out Xander’s video about overcoming sparse rewards in Deep RL (the first challenge highlighted above).

  • Sample complexity will continue to improve as more and more auxiliary learning tasks are added to augment the sparse, extrinsic reward signal (things like curiosity-driven exploration, autoencoder-style pretraining, disentangling causal factors in the environment, ...). This will work especially well in very sparse reward environments (such as the recent Go-Explore results on Montezuma's Revenge)

  • Because of this, training systems directly in the physical world will become more and more feasible (instead of current applications, which are mostly trained in simulated environments and then use domain randomization to transfer to the real world). I predict that 2019 will bring the first truly impressive robotics demos that are only possible using deep learning approaches and cannot be hardcoded / human-engineered (unlike most demos we have seen so far)

  • Following the major success of Deep RL in the AlphaGo story (especially with the recent AlphaFold results), I believe RL will gradually start delivering actual business applications that create real-world value outside of the academic space. This will initially be limited to applications where accurate simulators are available to do large-scale, virtual training of these agents (e.g. drug discovery, electronic-chip architecture optimization, vehicle & package routing, ...)

  • As has already started to happen (see here or here) there will be a general shift in RL development where testing an agent on the training data will no longer be considered ‘allowed’. Generalization metrics will become core, just as is the case for supervised learning methods

AI for Good — A Move Towards Ethical AI

Imagine a world ruled by algorithms that dictate every action humans take. Not exactly a rosy scenario, is it? Ethics in AI is a topic we at Analytics Vidhya have always been keen to talk about. It tends to get lost amid all the technical discussions, when it should really be considered alongside them.

Quite a few organizations were left with egg on their face this year, with Facebook's Cambridge Analytica scandal and Google's internal strife over designing weapons headlining the list. But all of this led to the big tech companies penning down charters and guidelines they intend to follow.

There isn't one out-of-the-box, one-size-fits-all solution for handling the ethical aspects of AI. It requires a nuanced approach combined with a structured path put forward by leadership. Let's look at a couple of major moves that shook up the landscape this year.

Campaigns by Google and Microsoft

It was heartening to see the big corporations putting emphasis on this side of AI (even though the road that led to this point wasn’t pretty). I want to direct your attention to the guidelines and principles released by a couple of these companies:

  • Google’s AI Principles

  • Microsoft’s AI Principles

These all essentially talk about fairness in AI and about when and where to draw the line. It's always a good idea to reference them when you're starting a new AI-based project.

How GDPR has Changed the Game

GDPR, or the General Data Protection Regulation, has definitely had an impact on the way data is collected for building AI applications. GDPR came into play to ensure users have more control over their data (what information is collected and shared about them).

So how does that affect AI? Well, if data scientists do not have data (or enough of it), building any model becomes a non-starter. This has certainly thrown a spanner into the way social platforms and other sites used to operate. GDPR will make for a fascinating case study down the line, but for now, it has limited the usefulness of AI for a lot of platforms.

Ethical AI Trends to Expect in 2019

This is a bit of a grey area. Like I mentioned, there's no single solution. We have to come together as a community to integrate ethics within AI projects. How can we make that happen? As Analytics Vidhya's Founder and CEO Kunal Jain highlighted in his talk at DataHack Summit 2018, we will need to pen down a framework that others can follow.

I expect to see new roles added in organizations that primarily deal with ethical AI. Corporate best practices will need to be restructured and governance approaches redrawn as AI becomes central to companies' visions. I also expect governments to play a more active role in this regard, with new or modified policies coming into play. 2019 will be a very interesting year, indeed.

End Notes

Impactful — the only word that succinctly describes the amazing developments in 2018. I’ve become an avid user of ULMFiT this year and I’m looking forward to exploring BERT soon. Exciting times, indeed.

I would love to hear from you as well! What developments did you find the most useful? Are you working on any project using the frameworks/tools/concepts we saw in this article? And what are your predictions for the coming year? I look forward to hearing your thoughts and ideas in the comments section below.


