January 29, 2019: Daily English Speech Video Recommendation



Speaker: Josh Prager

Talk title: Wisdom from great writers on every year of life



I'm turning 44 next month, and I have the sense that 44 is going to be a very good year, a year of fulfillment, realization. I have that sense, not because of anything particular in store for me, but because I read it would be a good year in a 1968 book by Norman Mailer.


"He felt his own age, forty-four ..." wrote Mailer in "The Armies of the Night," "... felt as if he were a solid embodiment of bone, muscle, heart, mind, and sentiment to be a man, as if he had arrived."


Yes, I know Mailer wasn't writing about me. But I also know that he was; for all of us -- you, me, the subject of his book, age more or less in step, proceed from birth along the same great sequence: through the wonders and confinements of childhood; the emancipations and frustrations of adolescence; the empowerments and millstones of adulthood; the recognitions and resignations of old age. There are patterns to life, and they are shared. As Thomas Mann wrote: "It will happen to me as to them."


We don't simply live these patterns. We record them, too. We write them down in books, where they become narratives that we can then read and recognize. Books tell us who we've been, who we are, who we will be, too. So they have for millennia. As James Salter wrote, "Life passes into pages if it passes into anything."


And so six years ago, a thought leapt to mind: if life passed into pages, there were, somewhere, passages written about every age. If I could find them, I could assemble them into a narrative. I could assemble them into a life, a long life, a hundred-year life, the entirety of that same great sequence through which the luckiest among us pass. I was then 37 years old, "an age of discretion," wrote William Trevor. I was prone to meditating on time and age. An illness in the family and later an injury to me had long made clear that growing old could not be assumed. And besides, growing old only postponed the inevitable, time seeing through what circumstance did not. It was all a bit disheartening.


A list, though, would last. To chronicle a life year by vulnerable year would be to clasp and to ground what was fleeting, would be to provide myself and others a glimpse into the future, whether we made it there or not. And when I then began to compile my list, I was quickly obsessed, searching pages and pages for ages and ages. Here we were at every annual step through our first hundred years. "Twenty-seven ... a time of sudden revelations," "sixty-two, ... of subtle diminishments."


I was mindful, of course, that such insights were relative. For starters, we now live longer, and so age more slowly. Christopher Isherwood used the phrase "the yellow leaf" to describe a man at 53, only one century after Lord Byron used it to describe himself at 36.




I was mindful, too, that life can swing wildly and unpredictably from one year to the next, and that people may experience the same age differently. But even so, as the list coalesced, so, too, on the page, clear as the reflection in the mirror, did the life that I had been living: finding at 20 that "... one is less and less sure of who one is;" emerging at 30 from the "... wasteland of preparation into active life;" learning at 40 "... to close softly the doors to rooms [I would] not be coming back to." There I was.


Of course, there we all are. Milton Glaser, the great graphic designer whose beautiful visualizations you see here, and who today is 85 -- all those years "... a ripening and an apotheosis," wrote Nabokov -- noted to me that, like art and like color, literature helps us to remember what we've experienced.


And indeed, when I shared the list with my grandfather, he nodded in recognition. He was then 95 and soon to die, which, wrote Roberto Bolaño, "... is the same as never dying." And looking back, he said to me that, yes, Proust was right that at 22, we are sure we will not die, just as a thanatologist named Edwin Shneidman was right that at 90, we are sure we will. It had happened to him, as to them.


Now the list is done: a hundred years. And looking back over it, I know that I am not done. I still have my life to live, still have many more pages to pass into. And mindful of Mailer, I await 44.


Thank you.





TED (an acronym for Technology, Entertainment, Design) is a private non-profit organization in the United States, best known for the TED Conference it organizes. Every March, the TED Conference gathers outstanding figures from science, design, literature, music, and other fields to share their thinking and explorations on technology, society, and humanity. TED talks are free of tedious, drawn-out academic lecturing: they make their points boldly, get straight to the matter, cover a wide variety of topics, and offer fresh perspectives.

Machine learning models are becoming increasingly proficient at complex tasks. However, even for experts in the field, it can be difficult to understand what a model has learned. This hampers trust and acceptance, and it obstructs the possibility of correcting the model. There is therefore a need for transparency in machine learning models. The development of transparent classification models has received much attention, but there have been few developments toward transparent Reinforcement Learning (RL) models. In this study we propose a method that enables an RL agent to explain its behavior in terms of the expected consequences of state transitions and outcomes. First, we define a translation of states and actions into a description that is easier for human users to understand. Second, we develop a procedure that enables the agent to obtain the consequences of a single action, as well as of its entire policy. The method calculates contrasts between the consequences of a policy derived from a user query and those of the agent's learned policy. Third, a format for generating explanations was constructed. A pilot survey study was conducted to explore users' preferences for different explanation properties. Results indicate that human users tend to favor explanations about the policy rather than about single actions.
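The contrastive idea in the abstract can be illustrated with a toy sketch: roll out both the agent's learned policy and a policy derived from a user's "what if" query, describe the outcomes in human-readable terms, and report the difference. This is a minimal illustration, not the authors' implementation; the corridor environment, the outcome labels, and both policies are made up for the example.

```python
# Hypothetical sketch: contrasting the expected consequences of two policies
# in a toy 1-D corridor MDP. Dynamics and outcome labels are illustrative only.
from collections import Counter

def rollout(policy, start=0, goal=4, trap=-2, max_steps=10):
    """Follow a deterministic policy and record human-readable outcomes."""
    state, outcomes = start, []
    for _ in range(max_steps):
        state += policy(state)          # each action is a step of -1 or +1
        if state == goal:
            outcomes.append("reach goal")
            break
        if state == trap:
            outcomes.append("fall in trap")
            break
        outcomes.append("keep moving")
    return Counter(outcomes)

agent_policy = lambda s: +1             # learned policy: always move right
query_policy = lambda s: -1             # user's queried policy: move left

agent_out = rollout(agent_policy)
query_out = rollout(query_policy)

# Contrast: how the agent's outcomes differ from those of the queried policy.
contrast = {k: agent_out[k] - query_out.get(k, 0) for k in agent_out}
print(contrast)
```

An explanation generator could then verbalize `contrast`, e.g. "my policy reaches the goal, whereas the policy you asked about falls in the trap."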


Multispectral imaging is an important technique for improving the readability of written or printed text where the letters have faded, either through deliberate erasure or simply through the ravages of time. Often the text can be read simply by looking at individual wavelengths, but in some cases the images need further enhancement to maximise the chances of reading the text. There are many possible enhancement techniques, and this paper assesses and compares an extended set of dimensionality reduction methods for image processing. We assess 15 dimensionality reduction methods on two different manuscripts. This assessment was performed both subjectively, by asking scholars who were experts in the languages used in the manuscripts which of the techniques they preferred, and objectively, by using the Davies-Bouldin and Dunn indexes to assess the quality of the resulting image clusters. We found that the Canonical Variates Analysis (CVA) method, implemented in Matlab and used by us previously to enhance multispectral images, was indeed superior to all the other tested methods. However, it is very likely that other approaches will be more suitable in specific circumstances, so we would still recommend that a range of these techniques be tried. In particular, CVA is a supervised clustering technique, so it requires considerably more user time and effort than an unsupervised technique such as the much more commonly used Principal Component Analysis (PCA). If the results from PCA are adequate to allow a text to be read, then the added effort required for CVA may not be justified. For the purposes of comparing computational times and image results, a CVA method was also implemented in the C programming language using the GNU Scientific Library (GSL) and the OpenCV computer vision library.
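The unsupervised PCA baseline mentioned above can be sketched in a few lines: stack the per-wavelength captures into a pixel-by-band matrix, centre it, and project onto the leading eigenvectors of the band covariance. This is a generic PCA sketch on synthetic data, not the paper's Matlab or C/GSL/OpenCV implementation; the image dimensions and band count are assumptions.

```python
# Illustrative sketch: unsupervised PCA over a multispectral image stack,
# the simpler baseline the paper compares against supervised CVA.
import numpy as np

rng = np.random.default_rng(0)
h, w, bands = 8, 8, 15                    # e.g. 15 registered wavelength images
cube = rng.normal(size=(h, w, bands))     # synthetic stand-in for the captures

pixels = cube.reshape(-1, bands).astype(float)  # one row per pixel
pixels -= pixels.mean(axis=0)             # centre each band

# Eigendecomposition of the band covariance gives the principal components.
cov = pixels.T @ pixels / (pixels.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending order
order = eigvals.argsort()[::-1]
components = eigvecs[:, order[:3]]        # keep the top 3 components

# Each channel of `enhanced` is one candidate enhanced image of the page.
enhanced = (pixels @ components).reshape(h, w, 3)
print(enhanced.shape)
```

In practice a scholar would inspect each component image separately, since faded ink often separates from parchment in a lower-variance component rather than the first.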


Sentiment analysis is a widely studied NLP task whose goal is to determine the opinions, emotions, and evaluations of users towards a product, an entity, or a service that they are reviewing. One of the biggest challenges for sentiment analysis is that it is highly language dependent: word embeddings, sentiment lexicons, and even annotated data are language specific. Further, optimizing models for each language is very time consuming and labor intensive, especially for recurrent neural network models. From a resource perspective, it is very challenging to collect data for different languages. In this paper, we look for an answer to the following research question: can a sentiment analysis model trained on one language be reused for sentiment analysis in other languages (Russian, Spanish, Turkish, and Dutch) where the data are more limited? Our goal is to build a single model in the language with the largest dataset available for the task, and to reuse it for languages that have limited resources. For this purpose, we train a sentiment analysis model using recurrent neural networks on reviews in English. We then translate reviews in the other languages and reuse this model to evaluate their sentiments. Experimental results show that our approach of a single model trained on English reviews statistically significantly outperforms the baselines in several different languages.
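The reuse strategy above is a two-stage pipeline: translate into English, then score with the one English-trained model. A minimal sketch of that pipeline follows; the toy word-for-word lexicon stands in for the machine translator and the keyword scorer stands in for the trained RNN, neither of which is reproduced here.

```python
# Minimal sketch of the translate-then-classify reuse idea. The lexicon and
# the word-count "model" below are hypothetical stand-ins for a real machine
# translation system and the English-trained recurrent network.

TO_ENGLISH = {"muy": "very", "bueno": "good", "malo": "bad"}  # toy Spanish lexicon

POSITIVE = {"good", "great"}
NEGATIVE = {"bad", "awful"}

def translate(review):
    """Word-by-word stand-in for the machine translation step."""
    return " ".join(TO_ENGLISH.get(tok, tok) for tok in review.lower().split())

def english_sentiment(text):
    """Stand-in for the single English-trained sentiment model."""
    toks = text.split()
    score = sum(t in POSITIVE for t in toks) - sum(t in NEGATIVE for t in toks)
    return "positive" if score >= 0 else "negative"

# One English model serves reviews originally written in another language.
print(english_sentiment(translate("muy bueno")))
print(english_sentiment(translate("muy malo")))
```

The design point is that only the translation step is language specific; the expensive-to-train sentiment model is built once.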


We observe that end-to-end memory networks (MN) trained for task-oriented dialogue, such as for recommending restaurants to a user, suffer from an out-of-vocabulary (OOV) problem: the entities returned by the Knowledge Base (KB) may not have been seen by the network at training time, making it impossible for the network to use them in dialogue. We propose a Hierarchical Pointer Memory Network (HyP-MN), in which the next word may be generated from the decoder vocabulary or copied from a hierarchical memory maintaining KB results and previous utterances. Evaluating on the dialog bAbI tasks, we find that HyP-MN drastically outperforms MN, obtaining 12% overall accuracy gains. Further analysis reveals that MN fails completely at recommending any relevant restaurant, whereas HyP-MN recommends the best next restaurant 80% of the time.
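The generate-or-copy mechanism that lets such a model emit OOV entities can be sketched numerically: the final next-word distribution mixes a vocabulary softmax with a pointer distribution over words held in memory, weighted by a gate. This is a generic pointer-network illustration, not the HyP-MN architecture itself; the gate value, scores, and entity name are invented for the example.

```python
# Toy sketch of the generate-or-copy idea behind pointer mechanisms: mix a
# vocabulary distribution with a pointer distribution over memory entries
# (here, a KB result the network never saw at training time).
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

vocab = ["the", "a", "restaurant", "<unk>"]
memory = ["Bombay_Palace"]                  # hypothetical OOV entity from the KB

gen_probs = softmax([1.0, 0.5, 2.0, 0.1])   # decoder's vocabulary distribution
copy_probs = softmax([3.0])                 # pointer scores over memory slots
gate = 0.7                                  # learned P(copy) in a real model

# Final distribution over vocabulary words plus memory words.
mixed = {w: (1 - gate) * p for w, p in zip(vocab, gen_probs)}
for w, p in zip(memory, copy_probs):
    mixed[w] = mixed.get(w, 0.0) + gate * p

best = max(mixed, key=mixed.get)
print(best)   # the OOV entity can win despite being absent from the vocabulary
```

Because the copy path assigns probability directly to memory entries, the model can utter entities that have no embedding in the output vocabulary, which is exactly the OOV failure mode described above.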


Policy gradient methods are widely used in reinforcement learning algorithms to search for better policies in the parameterized policy space. They perform gradient search in the policy space and are known to converge very slowly. Nesterov developed an accelerated gradient search algorithm for convex optimization problems, which has recently been extended to non-convex and stochastic optimization. We use Nesterov's acceleration for policy gradient search in the well-known actor-critic algorithm and show convergence using the ODE method. We tested this algorithm on a scheduling problem in which an incoming job is scheduled into one of four queues based on the queue lengths. Experimental results show that the algorithm using Nesterov's acceleration performs significantly better than the algorithm without acceleration. To the best of our knowledge, this is the first time Nesterov's acceleration has been used with an actor-critic algorithm.
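The acceleration being plugged into the actor's update can be seen in isolation on a simple quadratic objective: Nesterov's method evaluates the gradient at a momentum "look-ahead" point rather than at the current iterate. This is a sketch of the update rule only, on f(x) = x², with an assumed learning rate and momentum; the actor-critic machinery and the scheduling problem are omitted.

```python
# Sketch of Nesterov's accelerated update versus plain gradient descent on
# f(x) = x**2, whose minimum is at 0. Step sizes are illustrative choices.

def grad(x):
    """Gradient of f(x) = x**2."""
    return 2.0 * x

def plain_gd(x, lr=0.01, steps=100):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def nesterov(x, lr=0.01, momentum=0.9, steps=100):
    v = 0.0
    for _ in range(steps):
        # Key difference: the gradient is taken at the look-ahead point.
        v = momentum * v - lr * grad(x + momentum * v)
        x += v
    return x

x0 = 5.0
print(abs(plain_gd(x0)), abs(nesterov(x0)))
```

With the same learning rate and budget, the accelerated run ends much closer to the optimum, which is the behavior the paper transfers to the actor's parameter updates.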
