The journal Artificial Intelligence in Medicine (AIM) publishes original articles from a wide range of disciplines concerning the theory and practice of artificial intelligence in medicine, medically oriented human biology, and health care. Artificial intelligence in medicine can be described as the scientific discipline concerned with research, projects, and applications that aim to support decision-based medical tasks through knowledge-based or data-intensive computer solutions, ultimately supporting and improving the performance of human care providers. Official website: http://dblp.uni-trier.de/db/journals/artmed/

The Massively Parallel Computation (MPC) model is an emerging model which distills core aspects of distributed and parallel computation. The aim is to solve (typically graph) problems in systems where the input is distributed over many machines with limited space. Recent work has focused on the regime in which machines have sublinear memory, with randomized algorithms presented for the tasks of Maximal Matching, Maximal Independent Set (MIS), and $(\Delta+1)$-Coloring. However, there have been no prior corresponding deterministic algorithms. We introduce a new graph sparsification technique that deterministically computes a low-degree subgraph with additional desired properties. The degree of the nodes in this subgraph is small in the sense that the edges of each node can now be stored on a single machine. Depending on the specific local problem, the low-degree subgraph also has the property that solving the problem on this subgraph yields large global progress. We obtain the following deterministic MPC algorithms with $O(n^{\epsilon})$ space on each machine for any constant $\epsilon > 0$:
- We give $O(\log \Delta+\log\log n)$-round algorithms for maximal matching and MIS, obtained by derandomizing the well-known randomized algorithm of Luby [SICOMP'86] (sketched below). Based on the recent work of Ghaffari et al. [FOCS'18], the additive $O(\log\log n)$ factor is conditionally essential. These algorithms can also be shown to run in $O(\log \Delta)$ rounds in the closely related Congested Clique model, improving upon the state-of-the-art bound of $O(\log^2 \Delta)$ rounds by Censor-Hillel et al. [DISC'17].
- By combining our graph sparsification technique with palette sparsification, we give a deterministic (deg+1)-list coloring (and thus also a $(\Delta+1)$-coloring) algorithm in $O(\log \Delta+\log\log n)$ rounds.
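
As background for the derandomization result above, the randomized algorithm of Luby that the abstract refers to can be sketched as follows. This is a plain sequential simulation for illustration only, not the deterministic MPC algorithm described in the paper.

```python
import random

def luby_mis(adj):
    """Simulate Luby's randomized MIS algorithm sequentially.

    adj: dict mapping each node to a set of neighbouring nodes.
    Returns a maximal independent set as a Python set.
    """
    alive = set(adj)                      # nodes not yet decided
    mis = set()
    while alive:
        # Each alive node draws an independent random value.
        r = {v: random.random() for v in alive}
        # A node enters the MIS if its value beats all of its alive neighbours.
        winners = {v for v in alive
                   if all(r[v] < r[u] for u in adj[v] if u in alive)}
        mis |= winners
        # Remove the winners and their neighbours from the graph.
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & alive
        alive -= removed
    return mis

# Example: the 4-cycle 0-1-2-3-0.
graph = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(luby_mis(graph))
```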

The field of transparent Machine Learning (ML) has contributed many novel methods aiming at better interpretability for computer vision and ML models in general. But how useful the explanations provided by transparent ML methods are for humans remains difficult to assess. Most studies evaluate interpretability through qualitative comparisons, use experimental paradigms that do not allow for direct comparisons among methods, or report only offline experiments with no humans in the loop. While evaluations with no humans in the loop have clear advantages, such as scalability, reproducibility, and less algorithmic bias than human-in-the-loop evaluations, these metrics are of limited usefulness if we do not understand how they relate to metrics that take human cognition into account. Here we investigate the quality of interpretable computer vision algorithms using techniques from psychophysics. In crowdsourced annotation tasks we study the impact of different interpretability approaches on annotation accuracy and task time. To relate these findings to quality measures for interpretability without humans in the loop, we compare quality metrics with and without humans in the loop. Our results demonstrate that psychophysical experiments allow for robust quality assessment of transparency in machine learning. Interestingly, the quality metrics computed without humans in the loop neither provided a consistent ranking of interpretability methods nor were representative of how useful an explanation was for humans. These findings highlight the potential of methods from classical psychophysics for modern machine learning applications. We hope that our results provide convincing arguments for evaluating interpretability in its natural habitat, human-ML interaction, if the goal is to obtain an authentic assessment of interpretability.
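
To make the comparison described above concrete, one simple way to relate offline interpretability metrics to human-in-the-loop results is to rank-correlate the two orderings of methods. The sketch below uses made-up method names and scores purely for illustration; it is not the authors' evaluation pipeline.

```python
from scipy.stats import spearmanr

# Hypothetical scores for four interpretability methods (higher is better).
methods = ["saliency", "lime", "gradcam", "shap"]
offline_metric = [0.61, 0.74, 0.58, 0.80]      # computed without humans in the loop
human_accuracy = [0.71, 0.65, 0.83, 0.69]      # crowdsourced annotation accuracy

rho, p = spearmanr(offline_metric, human_accuracy)
print(f"Spearman rank correlation: {rho:.2f} (p={p:.2f})")
# A low or negative rho would indicate that the offline metric does not
# predict how useful an explanation is for human annotators.
```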

Dynamic Time Division Duplexing (D-TDD) allows cells to accommodate asymmetric traffic variations with high resource-assignment flexibility. However, this flexibility is limited by two additional types of interference between cells operating in opposite transmission directions: downlink (DL) to uplink (UL) and UL to DL interference. Hence, using this mode with macro-cell deployments requires interference mitigation techniques to reduce the strong DL-to-UL interference. 3D beamforming is an efficient technique that minimizes interference and enhances performance by intelligently exploiting a large 2D antenna array. Therefore, combining D-TDD and 3D beamforming can make D-TDD feasible for macro-cells. The aim of this work is to provide an analytical model of 3D beamforming in a D-TDD-based macro-cell deployment, where the horizontal and vertical beamforming radiation patterns depend on the spatial distribution of random user locations. We evaluate interference in terms of the Interference-to-Signal Ratio (ISR). We show that the cumulative ISR can be written in terms of convergent series and that its expectation is an almost surely convergent series. Numerical results are presented to justify the applicability of this scheme.
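
As a rough illustration of how such a cumulative ISR series behaves, the sketch below sums interference from rings of co-channel cells under a simple power-law path-loss model with invented parameters; the actual model in the paper, with 3D beamforming patterns and random user locations, is considerably richer.

```python
import numpy as np

def cumulative_isr(path_loss_exp=3.5, rings=30, cell_radius=1.0):
    """Truncated cumulative ISR for a user served at distance r0, with
    co-channel interferers placed on concentric rings of a hexagonal-like
    grid and a power-law path loss d**(-eta).  Purely illustrative numbers."""
    r0 = 0.5 * cell_radius                  # distance to the serving base station
    signal = r0 ** (-path_loss_exp)
    interference = 0.0
    for k in range(1, rings + 1):
        d_k = np.sqrt(3) * k * cell_radius  # distance of ring k
        n_k = 6 * k                         # number of interferers on ring k
        interference += n_k * d_k ** (-path_loss_exp)
    return interference / signal

for rings in (5, 10, 20, 30):
    print(rings, cumulative_isr(rings=rings))
# The partial sums level off, illustrating convergence of the ISR series
# when the path-loss exponent exceeds 2.
```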

Counterfactual explanations are gaining prominence within technical, legal, and business circles as a way to explain the decisions of a machine learning model. These explanations share a trait with the long-established "principal reason" explanations required by U.S. credit laws: they both explain a decision by highlighting a set of features deemed most relevant--and withholding others. These "feature-highlighting explanations" have several desirable properties: They place no constraints on model complexity, do not require model disclosure, detail what needed to be different to achieve a different decision, and seem to automate compliance with the law. But they are far more complex and subjective than they appear. In this paper, we demonstrate that the utility of feature-highlighting explanations relies on a number of easily overlooked assumptions: that the recommended change in feature values clearly maps to real-world actions, that features can be made commensurate by looking only at the distribution of the training data, that features are only relevant to the decision at hand, and that the underlying model is stable over time, monotonic, and limited to binary outcomes. We then explore several consequences of acknowledging and attempting to address these assumptions, including a paradox in the way that feature-highlighting explanations aim to respect autonomy, the unchecked power that feature-highlighting explanations grant decision makers, and a tension between making these explanations useful and the need to keep the model hidden. While new research suggests several ways that feature-highlighting explanations can work around some of the problems that we identify, the disconnect between features in the model and actions in the real world--and the subjective choices necessary to compensate for this--must be understood before these techniques can be usefully implemented.
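
For readers unfamiliar with the object under discussion, a counterfactual explanation can be illustrated on a toy linear scorer: starting from a rejected input, search for a nearby point whose decision flips and report the per-feature change. The model, feature names, and numbers below are entirely hypothetical.

```python
import numpy as np

def counterfactual(x, weights, bias, threshold=0.0, step=0.05, max_iter=1000):
    """Greedy search for a nearby point x' with a flipped decision.

    The model is a toy linear scorer: accept iff weights @ x + bias >= threshold.
    Returns x' and the per-feature change, i.e. the 'highlighted' features.
    """
    x_cf = x.astype(float).copy()
    for _ in range(max_iter):
        if weights @ x_cf + bias >= threshold:   # decision flipped: done
            return x_cf, x_cf - x
        # Move along the gradient of the score (here simply the weights),
        # so high-weight features change the most.
        x_cf += step * weights / np.linalg.norm(weights)
    raise RuntimeError("no counterfactual found within the step budget")

# Hypothetical credit example: features = [income, debt_ratio, age].
w = np.array([0.8, -1.2, 0.1])
x = np.array([0.3, 0.9, 0.5])                    # currently rejected applicant
x_cf, delta = counterfactual(x, w, bias=-0.2)
print("change needed per feature:", np.round(delta, 3))
```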

In Reinforcement Learning we assume that a model of the world exists. We assume furthermore that the model in question is perfect (i.e. it describes the world completely and unambiguously). This article demonstrates that it does not make sense to search for the perfect model, because such a model is too complicated and practically impossible to find. We argue that we should abandon the pursuit of perfection and pursue Event-Driven (ED) models instead. These models are a generalization of Markov Decision Process (MDP) models, and this generalization is essential because nothing can be found without it. Rather than a single MDP, we aim to find a raft of neat, simple ED models, each describing a simple dependency or property. In other words, we replace the search for a single, complex, perfect model with a search for a large number of simple models.

The calcaneus is the largest tarsal bone and is designed to withstand the daily stresses of weight bearing. Calcaneal fractures are the most common type of tarsal bone fracture. When a fracture is suspected, plain radiographs should be requested first. Bohler's Angle (BA) and the Critical Angle of Gissane (CAG), measured from four anatomic landmarks in the lateral foot radiograph, can aid operative restoration of the fractured calcaneus as well as fracture diagnosis and assessment. Due to differing patient postures and X-ray machine operating conditions, the calcaneus in a lateral foot radiograph may exhibit large variations in rotation and scale, and the views of the radiographs range from partial ankle to whole leg. The aim of this study is to develop a system that automatically locates the four anatomic landmarks and calculates the BA and CAG for fracture assessment. We propose a coarse-to-fine Rotation-Invariant Regression-Voting (RIRV) landmark detection approach based on Support Vector Regression (SVR) and the Scale-Invariant Feature Transform (SIFT) patch descriptor. We also design a multi-stream CNN structure with multi-region input that identifies fractures with a sensitivity of 96.81% and a specificity of 90%.
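
Once the four landmarks have been located, the angle computation itself is elementary geometry. The sketch below computes Bohler's Angle from hypothetical landmark coordinates; the coordinates, and the assumption that the angle is measured between the two landmark-defined lines, are illustrative only, while the landmark detection step (SVR with SIFT descriptors) is what the study actually contributes.

```python
import numpy as np

def line_angle_deg(a1, a2, b1, b2):
    """Angle (degrees) between the line a1->a2 and the line b1->b2."""
    u = np.asarray(a2, float) - np.asarray(a1, float)
    v = np.asarray(b2, float) - np.asarray(b1, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical landmark coordinates (pixels, image y-axis pointing down):
anterior_process = (40.0, 60.0)    # highest point of the anterior process
posterior_facet = (120.0, 40.0)    # highest point of the posterior facet
tuberosity = (190.0, 70.0)         # most superior point of the tuberosity

# Bohler's angle: between the anterior-process -> posterior-facet line and
# the posterior-facet -> tuberosity line (normal range roughly 20-40 degrees).
ba = line_angle_deg(anterior_process, posterior_facet, posterior_facet, tuberosity)
print(f"Bohler's angle: {ba:.1f} degrees")
```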

Multiple rotation averaging is an essential task for structure from motion, mapping, and robot navigation. The task is to estimate the absolute orientations of several cameras given some of their noisy relative orientation measurements. Conventional methods seek the absolute-orientation parameters that best agree with the observed noisy measurements according to a robust cost function. These robust cost functions are highly nonlinear and are designed based on certain assumptions about the noise and outlier distributions. In this work, we aim to build a neural network that learns the noise patterns from the data and predicts/regresses the model parameters from the noisy relative orientations. The proposed network is a combination of two networks: (1) a view-graph cleaning network, which detects outlier edges in the view-graph and rectifies noisy measurements; and (2) a fine-tuning network, which fine-tunes, in a single step, an initialization of absolute orientations bootstrapped from the cleaned graph. The proposed combined network is very fast; moreover, being trained on a large number of synthetic graphs, it is more accurate than conventional iterative optimization methods. Although the idea of replacing robust optimization methods with a graph-based network is demonstrated only for multiple rotation averaging, it could easily be extended to other graph-based geometric problems, for example pose-graph optimization.
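
The quantity that both classical robust costs and the view-graph cleaning network reason about is the discrepancy between a measured relative rotation and the one implied by the current absolute estimates. A minimal sketch of that residual, using SciPy rotations and invented camera orientations (and one common composition convention), is shown below; it is not the proposed network.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def relative_residual_deg(R_i, R_j, R_ij_measured):
    """Angular error (degrees) between the measured relative rotation R_ij
    and the one implied by the absolute estimates, here R_j * R_i^{-1}."""
    R_ij_pred = R_j * R_i.inv()
    err = R_ij_measured * R_ij_pred.inv()
    return np.degrees(np.linalg.norm(err.as_rotvec()))

# Hypothetical example: two cameras and a slightly noisy relative measurement.
R_i = R.from_euler("xyz", [10, 0, 0], degrees=True)
R_j = R.from_euler("xyz", [10, 30, 0], degrees=True)
noise = R.from_rotvec(np.radians(2) * np.array([0.0, 0.0, 1.0]))
R_ij_meas = noise * R_j * R_i.inv()

print(f"residual: {relative_residual_deg(R_i, R_j, R_ij_meas):.2f} deg")
# Edges with large residuals are natural candidates for the cleaning step.
```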

Wireless capsule endoscopy is a medical procedure used to visualize the entire gastrointestinal tract and to diagnose intestinal conditions, such as polyps or bleeding. Current analyses are performed by manually inspecting nearly every frame of the video, a tedious and error-prone task. Automatic image analysis methods can reduce the time physicians need to evaluate a capsule endoscopy video; however, these methods are still in a research phase. In this paper we focus on computer-aided polyp detection in capsule endoscopy images. This is a challenging problem because of the diversity of polyp appearance, the imbalanced dataset structure, and the scarcity of data. We have developed a new polyp computer-aided decision system that combines a deep convolutional neural network and metric learning. The key point of the method is the use of the triplet loss function, with the aim of improving feature extraction from the images when only a small dataset is available. The triplet loss function makes it possible to train robust detectors by forcing images from the same category to be represented by similar embedding vectors while ensuring that images from different categories are represented by dissimilar vectors. Empirical results show a meaningful increase in AUC values compared to baseline methods. Good performance is not the only requirement when considering the adoption of this technology in clinical practice: trust and explainability of decisions are as important as performance. To this end, we also provide a method to generate visual explanations of the outcome of our polyp detector. These explanations can be used to build a physician's trust in the system and also to convey information about the inner workings of the method to the designer for debugging purposes.
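
The triplet loss mentioned above has a standard form: it pulls an anchor embedding towards a same-class (positive) embedding and pushes it away from a different-class (negative) embedding by at least a margin. The sketch below uses made-up embedding vectors for illustration and is not tied to the specific network of the paper.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on embedding vectors: pull same-class pairs
    together and push different-class pairs apart by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# Hypothetical embeddings of three capsule-endoscopy image crops.
a = np.array([0.9, 0.1, 0.0])   # polyp (anchor)
p = np.array([0.8, 0.2, 0.1])   # another polyp (positive)
n = np.array([0.1, 0.9, 0.3])   # normal mucosa (negative)
print(triplet_loss(a, p, n))    # 0.0 once the embedding separates the classes
```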

We study the problem of efficiently disseminating authenticated blockchain information from blockchain nodes (servers) to Internet of Things (IoT) devices through a wireless base station (BS). In existing blockchain protocols, upon generation of a new block, each IoT device receives a copy of the block header, authenticated via digital signature by one or more trusted servers. Since this relies on unicast transmissions, the required communication resources grow linearly with the number of IoT devices. We propose a more efficient scheme in which a single copy of each block header is multicast together with the signatures of the servers. In addition, if IoT devices tolerate a delay, we exploit the blockchain structure to amortize the authentication over time by transmitting only a subset of the signatures in each block period. Finally, the BS sends redundant information via a repetition code to cope with the unreliable wireless channel, with the aim of decreasing the amount of feedback required from IoT devices. Our analysis shows the trade-off between timely authentication of blocks and reliability of the communication, depending on the packet loss rate of the channel. The numerical results show that the performance benefits of the proposed scheme make it a viable starting point for designing new lightweight protocols for blockchains.
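
The amortization idea can be illustrated with a small sketch: rather than multicasting every server signature with every block header, the base station cycles through a rotating subset each block period, so a delay-tolerant device still collects all signatures within a few periods. The schedule below is one simple policy chosen for illustration, not the exact protocol of the paper.

```python
def signatures_for_period(signatures, period, subset_size):
    """Return the rotating subset of server signatures multicast in a given
    block period.  Over ceil(len(signatures) / subset_size) consecutive
    periods, a device observes every signature at least once."""
    n = len(signatures)
    start = (period * subset_size) % n
    return [signatures[(start + k) % n] for k in range(subset_size)]

# Hypothetical setup: 6 trusted servers, 2 signatures multicast per block period.
sigs = [f"sig_server_{i}" for i in range(6)]
for t in range(4):
    print(t, signatures_for_period(sigs, t, 2))
# A device tolerating a 3-block delay authenticates against all 6 servers.
```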

The aim of this paper is to provide a review of the state of the art in Search and Rescue (SAR) robotics. Suitable robotic applications in the SAR domain are described, and SAR-specific demands and requirements on the various components of a robotic system are discussed. Current research and development in SAR robotics is outlined, and an overview of robotic systems and sub-systems currently in use in SAR and disaster response scenarios is given. Finally, we point out a number of possible research directions for SAR robots which might change the overall design and operation of SAR robotics in the longer-term future. All of this supports our main idea of taking SAR applications as an applied benchmark for the Field Robotics (FR) domain.

Oceans are the essential lifeblood of the Earth: they provide over 70% of the oxygen and over 97% of the water. Plankton and corals are two of the most fundamental components of ocean ecosystems, the former due to their function at many levels of the ocean's food chain, the latter because they provide spawning and nursery grounds to many fish populations. Studying and monitoring plankton distribution and coral reefs is vital for environmental protection. In recent years there has been a massive proliferation of digital imagery for the monitoring of underwater ecosystems, and much research is concentrated on the automated recognition of plankton and corals. In this paper, we present a study of an automated system for monitoring underwater ecosystems. The proposed system is based on the fusion of different deep learning methods: we study how to create an ensemble of different CNN models, fine-tuned on several datasets, with the aim of exploiting their diversity. The aim of our study is to assess the feasibility of fine-tuning pretrained CNNs for underwater imagery analysis, the benefit of using different datasets for pretraining models, and the possibility of designing an ensemble using the same architecture with small variations in the training procedure. The experimental results are very encouraging: our experiments on five well-known datasets (3 plankton and 2 coral datasets) show that the fusion of such different CNN models in a heterogeneous ensemble yields a substantial performance improvement with respect to other state-of-the-art approaches in all the tested problems. One of the main contributions of this work is a wide experimental evaluation of well-known CNN architectures, reporting the performance of both single CNNs and ensembles of CNNs on different problems. Moreover, we show how to create an ensemble that improves on the performance of the best single model.
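
One common way to build such a heterogeneous ensemble is to fuse the per-class probability outputs of the individual networks by (weighted) averaging, the so-called sum rule. The sketch below illustrates this with invented model names and softmax outputs; it is not a claim about which fusion rule the paper ultimately adopts.

```python
import numpy as np

def fuse_predictions(prob_list, weights=None):
    """Fuse per-model class-probability matrices (n_samples x n_classes)
    by (weighted) averaging, i.e. the sum rule for classifier ensembles."""
    probs = np.stack(prob_list)                      # (n_models, n, c)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    fused = np.tensordot(weights, probs, axes=1)     # (n, c)
    return fused.argmax(axis=1), fused

# Hypothetical softmax outputs of three fine-tuned CNNs on two plankton images.
resnet = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
densenet = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
vgg = np.array([[0.5, 0.4, 0.1], [0.4, 0.3, 0.3]])
labels, fused = fuse_predictions([resnet, densenet, vgg])
print(labels)        # ensemble class decisions, here [0 1]
```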

Event cameras are biologically inspired sensors that capture the temporal evolution of the scene: they sense pixel-wise brightness variations and output a corresponding stream of asynchronous events. Despite having multiple advantages with respect to traditional cameras, their use is partially hindered by the limited applicability of traditional data processing and vision algorithms. To address this, we present a framework that exploits the output stream of event cameras to synthesize RGB frames, relying on an initial or periodic set of color key-frames and the sequence of intermediate events. Differently from existing work, we propose a deep learning-based frame synthesis method consisting of an adversarial architecture combined with a recurrent module. Qualitative results and quantitative per-pixel, perceptual, and semantic evaluation on four public datasets confirm the quality of the synthesized images.
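
As a point of reference for what the learned synthesis network replaces, a naive baseline is to integrate events directly on top of a key frame in the log-intensity domain. The sketch below does exactly that with an invented contrast threshold and event list; the adversarial, recurrent architecture described in the abstract is of course far more capable.

```python
import numpy as np

def integrate_events(key_frame, events, contrast=0.15):
    """Naive event integration: start from a grayscale key frame and apply
    each event as a +/- `contrast` step in the log-intensity domain.
    events: iterable of (x, y, polarity) tuples with polarity in {-1, +1}."""
    log_img = np.log(key_frame.astype(float) + 1e-3)
    for x, y, pol in events:
        log_img[y, x] += pol * contrast
    return np.clip(np.exp(log_img), 0.0, 255.0)

# Hypothetical 4x4 key frame and a handful of events.
frame = np.full((4, 4), 128.0)
evts = [(1, 2, +1), (1, 2, +1), (3, 0, -1)]
print(integrate_events(frame, evts).round(1))
```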
