** Detecting objects in a video is a compute-intensive task. In this paper we propose CaTDet, a system to speed up object detection by leveraging the temporal correlation in video. CaTDet consists of two DNN models that form a cascaded detector, and an additional tracker to predict regions of interest based on historic detections. We also propose a new metric, mean Delay (mD), which is designed for latency-critical video applications. Experiments on the KITTI dataset show that CaTDet reduces the operation count by 5.1-8.7x with the same mean Average Precision (mAP) as the single-model Faster R-CNN detector, while incurring an additional delay of only 0.3 frames. On the CityPersons dataset, CaTDet achieves a 13.0x reduction in operations with a 0.8% mAP loss. **
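The cascade described above can be sketched as follows: a tracker proposes regions of interest from historic detections, a cheap proposal model filters them, and only high-confidence candidates reach the expensive refinement model. This is a minimal illustrative sketch with stand-in detector functions (`cheap_detector`, `expensive_detector`, and `catdet_step` are hypothetical names, not the paper's API):

```python
def cheap_detector(frame, rois):
    # Stand-in for the small proposal DNN: scores each tracker-proposed ROI.
    return [(roi, 0.9) for roi in rois]

def expensive_detector(frame, boxes):
    # Stand-in for the large refinement DNN, run only on surviving boxes.
    return [(box, score) for box, score in boxes]

def catdet_step(frame, tracker_rois, threshold=0.5):
    """One frame of a CaTDet-style cascade: the tracker proposes ROIs from
    historic detections, the cheap model filters them, and the expensive
    model refines only the survivors -- saving most of the computation."""
    proposals = cheap_detector(frame, tracker_rois)
    selected = [(b, s) for b, s in proposals if s >= threshold]
    return expensive_detector(frame, selected)

detections = catdet_step(frame=None, tracker_rois=[(0, 0, 10, 10)])
```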

** The persistence diagram is an increasingly useful tool arising from the field of Topological Data Analysis. However, using these diagrams in conjunction with machine learning techniques requires some mathematical finesse. The most success to date has come from finding methods for turning persistence diagrams into vectors in $\mathbb{R}^n$ in a way which preserves as much of the space of persistence diagrams as possible, commonly referred to as featurization. In this paper, we describe a mathematical framework for featurizing the persistence diagram space using template functions. These functions are general in that they are only required to be continuous, have compact support, and separate points. We discuss two example realizations of these functions: tent functions and Chebyshev interpolating polynomials. Both of these functions are defined on a grid superimposed on the birth-lifetime plane. We then combine the resulting features with machine learning algorithms to perform supervised classification and regression on several example data sets, including manifold data, shape data, and an embedded time series from a Rössler system. Our results show that the template function approach yields high accuracy rates that match and often exceed the results of existing methods for featurizing persistence diagrams. One counter-intuitive observation is that in most cases using interpolating polynomials, where each point contributes globally to the feature vector, yields significantly better results than using tent functions, where the contribution of each point is localized to its grid cell. Along the way, we also provide a complete characterization of compact sets in persistence diagram space endowed with the bottleneck distance. **
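The tent-function featurization can be illustrated concretely: each grid center carries a tent that is 1 at the center and falls linearly to 0 at radius $\delta$ (one common form uses the sup-norm), and the feature for that center is the sum of tent values over all diagram points in birth-lifetime coordinates. A minimal sketch under those assumptions (grid and $\delta$ values are made up for illustration):

```python
import numpy as np

def tent(x, y, a, b, delta):
    """Tent centered at grid point (a, b) with radius delta, evaluated at
    a diagram point (birth=x, lifetime=y); sup-norm version."""
    return max(0.0, 1.0 - max(abs(x - a), abs(y - b)) / delta)

def tent_features(diagram, grid, delta):
    """Feature vector: for each grid center, sum the tent values over all
    points of the diagram. Each point only affects nearby grid cells."""
    return np.array([sum(tent(x, y, a, b, delta) for x, y in diagram)
                     for a, b in grid])

# Toy diagram with two points in (birth, lifetime) coordinates.
diagram = [(0.0, 1.0), (0.5, 0.5)]
grid = [(0.0, 1.0), (1.0, 1.0)]
feats = tent_features(diagram, grid, delta=1.0)
```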

** We introduce an original mathematical model to analyze the diffusion of posts within a generic online social platform. Each user of such a platform has his own Wall and Newsfeed, as well as his own self-posting and re-posting activity. As a main result, using our developed model, we derive in closed form the probabilities that posts originating from a given user are found on the Wall and Newsfeed of any other. These probabilities are the solution of a linear system of equations. Conditions for the existence of the solution are provided, and two ways of solving the system are proposed, one using matrix inversion and another using fixed-point iteration. Comparisons with simulations show the accuracy of our model and its robustness with respect to the modeling assumptions. Hence, this article introduces a novel measure for ranking users by their influence on the social platform, taking into account not only the social graph structure, but also the platform design, user activity (self- and re-posting), and competition among posts. **
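The two solution strategies mentioned above can be compared on a toy linear system of the form $p = b + Ap$, where the entries of $A$ and $b$ here are invented purely for illustration (standing in, loosely, for re-posting influence and self-posting activity). Matrix inversion solves $(I - A)p = b$ directly; fixed-point iteration repeatedly applies $p \leftarrow b + Ap$ and converges when the spectral radius of $A$ is below 1:

```python
import numpy as np

# Hypothetical 3-user system p = b + A p (values made up for illustration).
A = np.array([[0.0, 0.2, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.1, 0.0]])
b = np.array([0.5, 0.3, 0.2])

# Method 1: direct solve of (I - A) p = b (matrix inversion).
p_direct = np.linalg.solve(np.eye(3) - A, b)

# Method 2: fixed-point iteration p_{k+1} = b + A p_k.
# Row sums of A are below 1, so the iteration is a contraction.
p = np.zeros(3)
for _ in range(200):
    p = b + A @ p
```

Both methods agree to numerical precision; the fixed-point variant avoids forming or factoring the matrix, which matters for large systems.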

** Over the past few years, Spiking Neural Networks (SNNs) have become popular as a possible pathway to enable low-power event-driven neuromorphic hardware. However, their application in machine learning has largely been limited to very shallow neural network architectures for simple problems. In this paper, we propose a novel algorithmic technique for generating an SNN with a deep architecture, and demonstrate its effectiveness on complex visual recognition problems such as CIFAR-10 and ImageNet. Our technique applies to both VGG and Residual network architectures, with significantly better accuracy than the state-of-the-art. Finally, we present an analysis of the sparse event-driven computations to demonstrate reduced hardware overhead when operating in the spiking domain. **
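A key building block behind deep-SNN conversion schemes in general (sketched here as background, not as the paper's specific algorithm) is that an integrate-and-fire neuron driven by a constant input fires at a rate approximating ReLU of that input. A minimal sketch:

```python
def if_neuron_rate(input_current, threshold=1.0, steps=1000):
    """Integrate-and-fire neuron under a constant input current.
    Its firing rate over many timesteps approximates
    ReLU(input)/threshold, which underlies ANN-to-SNN conversion."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current
        if v >= threshold:
            spikes += 1
            v -= threshold  # reset-by-subtraction preserves the rate
        # negative inputs never reach threshold, so the rate is ~ReLU
    return spikes / steps

rate_pos = if_neuron_rate(0.3)   # fires on roughly 30% of timesteps
rate_neg = if_neuron_rate(-0.3)  # never fires
```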


** The Poison Game is a two-player game played on a graph in which one player can influence which edges the other player is able to traverse. It operationalizes the notion of existence of credulously admissible sets in an argumentation framework or, in graph-theoretic terminology, the existence of non-trivial semi-kernels. We develop a modal logic (poison modal logic, PML) tailored to represent winning positions in such a game, thereby identifying the precise modal reasoning that underlies the notion of credulous admissibility in argumentation. We study model-theoretic and decidability properties of PML, and position it with respect to recently studied logics at the crossroads of modal logic, argumentation, and graph games. **

** In this paper, we study the problem of minimizing a sum of smooth and strongly convex functions split over the nodes of a network in a decentralized fashion. We propose the algorithm $ESDACD$, a decentralized accelerated algorithm that only requires local synchrony. Its rate depends on the condition number $\kappa$ of the local functions as well as the network topology and delays. Under mild assumptions on the topology of the graph, $ESDACD$ takes a time $O((\tau_{\max} + \Delta_{\max})\sqrt{{\kappa}/{\gamma}}\ln(\epsilon^{-1}))$ to reach a precision $\epsilon$ where $\gamma$ is the spectral gap of the graph, $\tau_{\max}$ the maximum communication delay and $\Delta_{\max}$ the maximum computation time. Therefore, it matches the rate of $SSDA$, which is optimal when $\tau_{\max} = \Omega\left(\Delta_{\max}\right)$. Applying $ESDACD$ to quadratic local functions leads to an accelerated randomized gossip algorithm of rate $O( \sqrt{\theta_{\rm gossip}/n})$ where $\theta_{\rm gossip}$ is the rate of the standard randomized gossip. To the best of our knowledge, it is the first asynchronous gossip algorithm with a provably improved rate of convergence of the second moment of the error. We illustrate these results with experiments in idealized settings. **
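The standard randomized gossip baseline referenced above is easy to illustrate: at each step a random edge is activated and its two endpoints average their values, which preserves the global sum while all values converge to the network mean at a rate governed by the spectral gap of the graph. A minimal sketch on a small ring (the graph size and step count are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard randomized gossip on a ring of n nodes: at each step a
# uniformly random node averages its value with its right neighbor.
n = 4
x = rng.normal(size=n)
target = x.mean()  # the averaging steps preserve the global mean

for _ in range(5000):
    i = rng.integers(n)
    j = (i + 1) % n
    x[i] = x[j] = (x[i] + x[j]) / 2.0
# All node values are now (numerically) equal to the initial mean.
```

Accelerated variants such as the ESDACD-derived gossip scheme improve on the convergence rate of exactly this kind of iteration.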

** Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN. **
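The simplification described above reduces to a fixed propagation step: compute the symmetrically normalized adjacency with self-loops, apply it $K$ times to the features, and train an ordinary linear classifier on the result. A minimal sketch of that propagation (the toy graph and features are made up for illustration):

```python
import numpy as np

def sgc_features(A, X, K):
    """Simplified-GCN propagation: S^K X, where S is the symmetrically
    normalized adjacency with self-loops. A plain logistic regression on
    these fixed features then replaces the multi-layer nonlinear GCN."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt   # fixed low-pass filter
    for _ in range(K):
        X = S @ X
    return X

# Tiny 3-node path graph with a scalar feature on the first node.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1.], [0.], [0.]])
Z = sgc_features(A, X, K=2)
```

Because $S$ contains no trainable parameters, $S^K X$ can be precomputed once, which is where the large speedup over sampled multi-layer training comes from.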

** Autonomous micro aerial vehicles still struggle with fast and agile maneuvers, dynamic environments, imperfect sensing, and state estimation drift. Autonomous drone racing brings these challenges to the fore. Human pilots can fly a previously unseen track after a handful of practice runs. In contrast, state-of-the-art autonomous navigation algorithms require either a precise metric map of the environment or a large amount of training data collected in the track of interest. To bridge this gap, we propose an approach that can fly a new track in a previously unseen environment without a precise map or expensive data collection. Our approach represents the global track layout with coarse gate locations, which can be easily estimated from a single demonstration flight. At test time, a convolutional network predicts the poses of the closest gates along with their uncertainty. These predictions are incorporated by an extended Kalman filter to maintain optimal maximum-a-posteriori estimates of gate locations. This allows the framework to cope with misleading high-variance estimates that could stem from poor observability or lack of visible gates. Given the estimated gate poses, we use model predictive control to quickly and accurately navigate through the track. We conduct extensive experiments in the physical world, demonstrating agile and robust flight through complex and diverse previously-unseen race tracks. The presented approach was used to win the IROS 2018 Autonomous Drone Race Competition, outracing the second-place team by a factor of two. **
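The uncertainty-aware fusion step can be illustrated with a scalar Kalman update for a single gate coordinate: the network's pose prediction is weighted by its predicted variance, so a high-variance (unreliable) prediction barely moves the map estimate. This is a generic textbook Kalman update, not the paper's full EKF; the numbers are made up:

```python
def kalman_update(mean, var, obs, obs_var):
    """Scalar Kalman/MAP update: fuse a prior estimate (mean, var) with
    an observation (obs, obs_var). The gain shrinks as the observation's
    variance grows, so uncertain predictions are largely ignored."""
    k = var / (var + obs_var)  # Kalman gain in [0, 1]
    return mean + k * (obs - mean), (1.0 - k) * var

# A confident observation pulls the gate estimate strongly...
m1, v1 = kalman_update(mean=0.0, var=1.0, obs=2.0, obs_var=0.1)
# ...while a high-variance one is almost discarded.
m2, v2 = kalman_update(mean=0.0, var=1.0, obs=2.0, obs_var=100.0)
```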

** The availability and accuracy of Channel State Information (CSI) play a crucial role for coherent detection in almost every communication system. Particularly in the recently proposed cell-free massive MIMO system, in which a large number of distributed Access Points (APs) are connected to a Central Processing Unit (CPU) for joint decoding, acquiring CSI at the CPU may improve performance through the use of detection algorithms such as minimum mean square error (MMSE) or zero forcing (ZF). There are also significant challenges, especially the increase in fronthaul load arising from the transfer of high-precision CSI, with the resulting complexity and scalability issues. In this paper, we address these CSI acquisition problems by utilizing vector quantization with a precision of only a few bits, and we show that the accuracy of the channel estimate at the CPU can be increased by exploiting the spatial correlation subject to this limited fronthaul load. Further, we derive an estimator for the simple \emph{Quantize-and-Estimate} (QE) strategy based on the Bussgang theorem and compare its performance to \emph{Estimate-and-Quantize} (EQ) in terms of Mean Squared Error (MSE). Our simulation results indicate that QE with few-bit vector quantization can outperform EQ and individual scalar quantization at moderate SNR for small numbers of bits per dimension. **

** Segmentation is a key stage in dermoscopic image processing, where the accuracy of the border line that defines skin lesions is of utmost importance for subsequent algorithms (e.g., classification) and computer-aided early diagnosis of serious medical conditions. This paper proposes a novel segmentation method based on Local Binary Patterns (LBP), where LBP and K-Means clustering are combined to achieve a detailed delineation in dermoscopic images. In comparison with the usual dermatologist-like segmentation (i.e., the available ground truth), the proposed method is capable of finding more realistic borders of skin lesions, i.e., with much more detail. The results also exhibit reduced variability amongst different performance measures and are consistent across different images. The proposed method can also be applied to cell-based segmentation adapted to the specific growth of the lesion border. Hence, the method is suitable for following the growth dynamics associated with the lesion border geometry in skin melanocytic images. **
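The basic LBP operator underlying the method above is simple to state: each interior pixel receives an 8-bit code obtained by thresholding its eight neighbors against the center value, and those codes (or their local histograms) are then what a clustering step such as K-Means groups into lesion vs. background. A minimal sketch of the 3x3 operator (the clustering stage is omitted):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel gets an 8-bit
    code, one bit per neighbor, set when the neighbor is >= the center."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= img[i, j]:
                    code |= 1 << bit
            codes[i - 1, j - 1] = code
    return codes

# On a perfectly uniform patch every neighbor ties the center,
# so every bit is set and all codes equal 255.
flat = np.full((4, 4), 7.0)
codes = lbp_codes(flat)
```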

** Composite endpoints that combine multiple outcomes on different scales are common in clinical trials, particularly in chronic conditions. In many of these cases, patients will have to cross a predefined responder threshold in each of the outcomes to be classed as a responder overall. One instance of this occurs in systemic lupus erythematosus (SLE), where the responder endpoint combines two continuous, one ordinal and one binary measure. The overall binary responder endpoint is typically analysed using logistic regression, resulting in a substantial loss of information. We propose a latent variable model for the SLE endpoint, which assumes that the discrete outcomes are manifestations of latent continuous measures and can proceed to jointly model the components of the composite. We perform a simulation study and find the method to offer large efficiency gains over the standard analysis. We find that the magnitude of the precision gains are highly dependent on which components are driving response. Bias is introduced when joint normality assumptions are not satisfied, which we correct for using a bootstrap procedure. The method is applied to the Phase IIb MUSE trial in patients with moderate to severe SLE. We show that it estimates the treatment effect 2.5 times more precisely, offering a 60% reduction in required sample size. **

** The ability to detect heavy hitters in real time is beneficial to many network applications, such as DoS and anomaly detection. Through programmable languages such as P4, heavy hitter detection can be implemented directly in the data-plane, allowing custom actions to be applied to packets as they are processed at a network node. This enables networks to immediately respond to changes in network traffic in the data-plane itself and allows for different QoS profiles for heavy hitter and non-heavy hitter traffic. Current interval-based methods that flush the whole counting structure are not well suited to programmable hardware (the data-plane), because they either require more resources than are available in that hardware, do not provide good accuracy, or require too many actions from the control-plane. A sliding window approach that maintains accuracy over time would solve these issues. However, to the best of our knowledge, the concept of sliding windows in programmable hardware has not been studied yet. In this paper, we develop streaming approaches to detect heavy hitters in the data-plane. We consider the problems of (1) adopting a sliding window and (2) identifying heavy hitters separately, and propose multiple memory- and processing-efficient solutions for each of them. These solutions are suitable for P4 programmable hardware and can be combined at will to solve the streaming variant of the heavy hitter detection problem. **
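One simple way to approximate a sliding window without flushing the whole structure (sketched here as a generic illustration, not the paper's specific P4 design) is to keep two alternating panes: the current pane plus the previous one together cover the window, counts are off by at most one pane, and rotation touches only a pointer rather than every counter. The class name and parameters below are hypothetical:

```python
from collections import Counter

class SlidingHeavyHitters:
    """Approximate sliding-window counting with two alternating panes.
    A window of W packets is covered by the current pane plus the
    previous one, so per-key counts are accurate up to one pane."""
    def __init__(self, window):
        self.pane = window // 2
        self.cur, self.prev = Counter(), Counter()
        self.seen = 0

    def add(self, key):
        self.cur[key] += 1
        self.seen += 1
        if self.seen == self.pane:  # rotate: drop the oldest pane
            self.prev, self.cur = self.cur, Counter()
            self.seen = 0

    def count(self, key):
        return self.cur[key] + self.prev[key]

    def heavy_hitters(self, threshold):
        keys = set(self.cur) | set(self.prev)
        return {k for k in keys if self.count(k) >= threshold}

hh = SlidingHeavyHitters(window=8)
for pkt in ["a", "a", "b", "a", "a", "c", "a", "b"]:
    hh.add(pkt)
```

Rotation discards the stale pane in O(1), which is the property that makes pane-based designs attractive where per-entry flushing from the control-plane is too slow.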

** Modern blockchains, such as Ethereum, enable the execution of so-called smart contracts - programs that are executed across the decentralised blockchain network. As smart contracts become more popular and carry more value, they become a more interesting target for attackers. In the past few years, several smart contracts have been found to be vulnerable and thus exploited by attackers. However, a new trend towards a more proactive approach seems to be on the rise, where attackers do not search for vulnerable contracts anymore. Instead, they try to lure their victims into traps by deploying vulnerable-looking contracts that contain hidden traps. This type of contract is commonly referred to as a honeypot. In this paper, we present the first systematic analysis of honeypots, by investigating their prevalence, behaviour and impact on the Ethereum blockchain. We develop a taxonomy of honeypot techniques and use this to build HONEYBADGER - a tool that employs symbolic execution and well-defined heuristics to expose smart contract honeypots. We perform a large-scale analysis of more than 2 million smart contracts and show that our tool not only achieves high precision, but also high scalability. We identify 690 honeypots as well as 240 victims in the wild, with an accumulated profit of more than $90,000 for the honeypot creators. Our manual validation shows that 87% of the reported contracts are indeed honeypots. **