The Knaster-Tarski theorem, characterising the greatest fixpoint of a monotone function over a complete lattice as the largest post-fixpoint, naturally leads to the so-called coinduction proof principle for showing that some element is below the greatest fixpoint (e.g., for providing bisimilarity witnesses). The dual principle, used for showing that an element is above the least fixpoint, is related to inductive invariants. In this paper we provide proof rules which are similar in spirit but for showing that an element is above the greatest fixpoint or, dually, below the least fixpoint. The theory is developed for non-expansive monotone functions on suitable lattices of the form $\mathbb{M}^Y$, where $Y$ is a finite set and $\mathbb{M}$ is an MV-algebra, and it is based on the construction of (finitary) approximations of the original functions. We show that our theory applies to a wide range of examples, including termination probabilities, behavioural distances for probabilistic automata and bisimilarity. Moreover, quite interestingly, it allows us to derive new algorithms for solving simple stochastic games.
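
For context, the two classical principles that these new rules complement can be stated in standard lattice-theoretic notation (a textbook formulation, not taken from the paper): for a monotone $f$ on a complete lattice,
$$\nu f \;=\; \bigvee\{\,x \mid x \sqsubseteq f(x)\,\}, \qquad \mu f \;=\; \bigwedge\{\,x \mid f(x) \sqsubseteq x\,\},$$
so coinduction concludes $x \sqsubseteq \nu f$ from $x \sqsubseteq f(x)$, and the dual (inductive-invariant) rule concludes $\mu f \sqsubseteq x$ from $f(x) \sqsubseteq x$; the rules developed in the paper instead bound $\nu f$ from above and $\mu f$ from below.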

Related content

For a class $\mathcal{G}$ of graphs, the problem SUBGRAPH COMPLEMENT TO $\mathcal{G}$ asks whether one can find a subset $S$ of vertices of the input graph $G$ such that complementing the subgraph induced by $S$ in $G$ results in a graph in $\mathcal{G}$. We investigate the complexity of the problem when $\mathcal{G}$ is $H$-free for $H$ being a complete graph, a star, a path, or a cycle. We obtain the following results:
- When $H$ is a $K_t$ (a complete graph on $t$ vertices) for any fixed $t\geq 1$, the problem is solvable in polynomial time. This applies even when $\mathcal{G}$ is a subclass of $K_t$-free graphs recognizable in polynomial time, for example, the class of $(t-2)$-degenerate graphs.
- When $H$ is a $K_{1,t}$ (a star graph on $t+1$ vertices), the problem is NP-complete for every $t\geq 5$. This, along with known results, leaves only two unresolved cases: $K_{1,3}$ and $K_{1,4}$.
- When $H$ is a $P_t$ (a path on $t$ vertices), the problem is NP-complete for every $t\geq 7$, leaving only two unresolved cases: $P_5$ and $P_6$.
- When $H$ is a $C_t$ (a cycle on $t$ vertices), the problem is NP-complete for every $t\geq 8$, leaving four unresolved cases: $C_4, C_5, C_6,$ and $C_7$.
Further, we prove that these hard problems do not admit subexponential-time algorithms (algorithms running in time $2^{o(|V(G)|)}$), assuming the Exponential Time Hypothesis. A simple complementation argument implies that results for $\mathcal{G}$ are applicable to $\overline{\mathcal{G}}$, thereby yielding similar results for $H$ being the complement of a complete graph, a star, a path, or a cycle. Our results generalize two main results and resolve one open question by Fomin et al. (Algorithmica, 2020).
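
To make the complementation operation itself concrete, here is a minimal Python sketch (the function name `subgraph_complement` and the adjacency-set representation are illustrative choices, not taken from the paper):

```python
def subgraph_complement(adj, S):
    """Return a copy of the graph `adj` (dict: vertex -> set of neighbours)
    with the subgraph induced by the vertex set `S` complemented:
    edges inside S are removed, non-edges inside S become edges."""
    S = set(S)
    new_adj = {v: set(nbrs) for v, nbrs in adj.items()}
    for u in S:
        for v in S:
            if u < v:  # assumes comparable (e.g. integer) vertex labels
                if v in new_adj[u]:          # edge inside S: delete it
                    new_adj[u].discard(v)
                    new_adj[v].discard(u)
                else:                        # non-edge inside S: add it
                    new_adj[u].add(v)
                    new_adj[v].add(u)
    return new_adj

# Example: complementing {0, 1, 2} in the path 0-1-2-3 toggles the pairs
# (0,1), (1,2), (0,2), leaving edges 0-2 and 2-3.
g = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(subgraph_complement(g, {0, 1, 2}))  # {0: {2}, 1: set(), 2: {0, 3}, 3: {2}}
```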

Let $P$ be a set of $n$ points in $\mathbb{R}^2$. For a given positive integer $w<n$, our objective is to find a set $C \subset P$ of at most $n-w$ points such that the convex hull $CH(P\setminus C)$ has the smallest possible number of vertices. We discuss the $O(wn^3)$-time dynamic programming algorithm for monotone decomposable functions (MDFs), introduced for finding a class of optimal convex $w$-gons with vertices chosen from $P$, and improve it to $O(n^3 \log w)$ time; this also improves the existing algorithm for MDFs when their input is a convex polygon.
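
As a point of reference for very small instances, a brute-force sketch of the objective is given below (exponential time, purely illustrative; the helper names are hypothetical and this is not the $O(n^3 \log w)$ algorithm):

```python
from itertools import combinations

def hull_vertices(points):
    """Andrew's monotone chain; returns the extreme vertices of CH(points)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def min_hull_after_removal(P, w):
    """Brute force: remove at most len(P)-w points so that the hull of the
    remaining points has as few vertices as possible."""
    n = len(P)
    best = None
    for k in range(0, n - w + 1):
        for C in combinations(P, k):
            removed = set(C)
            remaining = [p for p in P if p not in removed]
            h = len(hull_vertices(remaining))
            if best is None or h < best[0]:
                best = (h, removed)
    return best

# Tiny example: 5 points, keep at least w = 3 of them.
P = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 6)]
print(min_hull_after_removal(P, w=3))  # (3, ...): removing two points leaves a triangle
```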

While there has been extensive previous work on efficient quantum algorithms for linear differential equations, analogous progress for nonlinear differential equations has been severely limited due to the linearity of quantum mechanics. Despite this obstacle, we develop a quantum algorithm for initial value problems described by dissipative quadratic $n$-dimensional ordinary differential equations. Assuming $R < 1$, where $R$ is a parameter characterizing the ratio of the nonlinearity to the linear dissipation, this algorithm has complexity $T^2\mathrm{poly}(\log T, \log n, \log 1/\epsilon)/\epsilon$, where $T$ is the evolution time and $\epsilon$ is the allowed error in the output quantum state. This is an exponential improvement over the best previous quantum algorithms, whose complexity is exponential in $T$. We achieve this improvement using the method of Carleman linearization, for which we give a novel convergence theorem. This method maps a system of nonlinear differential equations to an infinite-dimensional system of linear differential equations, which we discretize, truncate, and solve using the forward Euler method and the quantum linear system algorithm. We also provide a lower bound on the worst-case complexity of quantum algorithms for general quadratic differential equations, showing that the problem is intractable for $R \ge \sqrt{2}$. Finally, we discuss potential applications of this approach to problems arising in biology as well as in fluid and plasma dynamics.
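
To illustrate the classical core of the method, here is a hedged sketch of Carleman linearization with truncation and forward Euler for a scalar quadratic ODE $\dot{u} = a u + b u^2$ (illustrative only; the paper's quantum algorithm and convergence analysis are not reproduced, and all names here are assumptions):

```python
import numpy as np

def carleman_matrix(a, b, N):
    """Truncated Carleman matrix for du/dt = a*u + b*u**2.
    With y_k = u**k we have dy_k/dt = k*a*y_k + k*b*y_{k+1};
    truncation at order N drops the y_{N+1} term."""
    A = np.zeros((N, N))
    for k in range(1, N + 1):
        A[k-1, k-1] = k * a
        if k < N:
            A[k-1, k] = k * b
    return A

def solve_truncated(a, b, u0, T, steps, N):
    """Forward Euler on the truncated linear system; returns approx u(T)."""
    A = carleman_matrix(a, b, N)
    y = np.array([u0**k for k in range(1, N + 1)], dtype=float)
    h = T / steps
    for _ in range(steps):
        y = y + h * (A @ y)
    return y[0]  # y_1 approximates u

# Dissipative example (a < 0 dominates the nonlinearity):
# close to the exact value 1/(0.2 + 1.8*np.e**2) ~ 0.074.
print(solve_truncated(a=-1.0, b=0.2, u0=0.5, T=2.0, steps=2000, N=6))
```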

Many parallel algorithms use at least linear auxiliary space in the size of the input to enable computations to be done independently without conflicts. Unfortunately, this extra space can be prohibitive for memory-limited machines, preventing large inputs from being processed. Therefore, it is desirable to design parallel in-place algorithms that use sublinear (or even polylogarithmic) auxiliary space. In this paper, we bridge the gap between theory and practice for parallel in-place (PIP) algorithms. We first define two computational models based on fork-join parallelism, which reflect modern parallel programming environments. We then introduce a variety of new parallel in-place algorithms that are simple and efficient, both in theory and in practice. Our algorithmic highlight is the Decomposable Property introduced in this paper, which enables existing non-in-place but highly-optimized parallel algorithms to be converted into parallel in-place algorithms. Using this property, we obtain algorithms for random permutation, list contraction, tree contraction, and merging that take linear work, $O(n^{1-\epsilon})$ auxiliary space, and $O(n^\epsilon\cdot\text{polylog}(n))$ span for $0<\epsilon<1$. We also present new parallel in-place algorithms for scan, filter, merge, connectivity, biconnectivity, and minimum spanning forest using other techniques. In addition to theoretical results, we present experimental results for implementations of many of our parallel in-place algorithms. We show that on a 72-core machine with two-way hyper-threading, the parallel in-place algorithms usually outperform existing parallel algorithms for the same problems that use linear auxiliary space, indicating that the theory developed in this paper indeed leads to practical benefits in terms of both space usage and running time.
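
As a rough illustration of the space/block trade-off behind such algorithms (a sequential Python sketch with hypothetical names, not the paper's Decomposable Property construction), a prefix sum can be computed in place with only one auxiliary entry per block, i.e. sublinear extra space, and both block-level passes are parallelisable over blocks:

```python
import math

def blocked_prefix_sum(a):
    """In-place inclusive prefix sum illustrating the blocked structure used by
    space-efficient scans: the only auxiliary storage is one partial sum per
    block, i.e. O(sqrt(n)) extra space for block size ~ sqrt(n)."""
    n = len(a)
    if n == 0:
        return a
    B = max(1, math.isqrt(n))                     # block size ~ sqrt(n)
    starts = range(0, n, B)
    # Pass 1 (parallelisable over blocks): per-block sums.
    block_sums = [sum(a[s:s + B]) for s in starts]
    # Small sequential scan over the n/B block sums to get block offsets.
    offsets, acc = [], 0
    for s in block_sums:
        offsets.append(acc)
        acc += s
    # Pass 2 (parallelisable over blocks): scan inside each block.
    for off, s in zip(offsets, starts):
        running = off
        for j in range(s, min(s + B, n)):
            running += a[j]
            a[j] = running
    return a

print(blocked_prefix_sum([1, 2, 3, 4, 5]))        # [1, 3, 6, 10, 15]
```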

In general coding theory, we often assume that errors occur in transferring or storing encoded symbols, while the process of encoding itself is error-free. Motivated by recent applications of coding theory, in this paper, we consider the case where the process of encoding is distributed and prone to error. We introduce the problem of distributed encoding, consisting of $K\in\mathbb{N}$ isolated source nodes and $N\in\mathbb{N}$ encoding nodes. Each source node has one symbol from a finite field and sends it to all encoding nodes. Each encoding node stores an encoded symbol, as a function of the received symbols. However, some of the source nodes are controlled by the adversary and may send different symbols to different encoding nodes. Depending on the number of adversarial nodes, denoted by $\beta\in\mathbb{N}$, and the number of symbols that each one generates, denoted by $v\in\mathbb{N}$, the process of decoding from the encoded symbols could be impossible. Assume that a decoder connects to an arbitrary subset of $t \in\mathbb{N}$ encoding nodes and wants to decode the symbols of the honest nodes correctly, without necessarily identifying the sets of honest and adversarial nodes. In this paper, we study $t^*\in\mathbb{N}$, the minimum value of $t$, as a function of $K$, $N$, $\beta$, and $v$. We show that when the encoding nodes use linear coding, $t^*_{\textrm{linear}}=K+2\beta(v-1)$, if $N\ge K+2\beta(v-1)$, and $t^*_{\textrm{linear}}=N$, if $N\le K+2\beta(v-1)$. In order to achieve $t^*_{\textrm{linear}}$, we use random linear coding and show that in any feasible solution that the decoder finds, the messages of the honest nodes are decoded correctly. For the converse of the fundamental limit, we show that when the adversary behaves in a particular way, it can always confuse the decoder between two feasible solutions that differ in the message of at least one honest node.
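
Reading off the stated fundamental limit, the minimum number of encoding nodes a decoder must contact under linear coding can be computed directly (a tiny helper with an assumed name, following the formula above):

```python
def t_star_linear(K, N, beta, v):
    """Minimum number of contacted encoding nodes for correct decoding with
    linear codes: K + 2*beta*(v-1), capped at the number of encoding nodes N."""
    return min(N, K + 2 * beta * (v - 1))

# Example: K=4 sources, N=12 encoders, beta=1 adversarial source sending v=3 versions.
print(t_star_linear(K=4, N=12, beta=1, v=3))  # 8
```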

Two vectorial Boolean functions $F$ and $G$ are Extended Affine (EA) equivalent if there exist two affine permutations $A$, $B$, and an affine function $C$ satisfying $G = A \circ F \circ B + C$. While a priori simple, it is very difficult in practice to test whether two functions are EA-equivalent. This problem has two variants: EA-testing deals with figuring out whether the two functions can be EA-equivalent, and EA-recovery is about recovering the tuple $(A,B,C)$ if it exists. In this paper, we present a new algorithm that efficiently solves the EA-recovery problem for quadratic functions. Though its worst-case complexity is obtained when dealing with APN functions, it outperforms all previously known algorithms, even in this case. This approach is based on the Jacobian matrix of the functions, a tool whose study in this context can be of independent interest. In order to tackle EA-testing efficiently, the best approach in practice relies on class invariants. We provide an overview of the literature on said invariants along with a new one based on the \emph{ortho-derivative}, which is applicable to quadratic APN functions, a specific type of functions that is of great interest, and of which tens of thousands need to be sorted into distinct EA-classes. Our ortho-derivative-based invariant is both very fast to compute and highly discriminating.
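
For concreteness, verifying a candidate tuple $(A,B,C)$ against the EA relation is straightforward when $F$ and $G$ are given as lookup tables over $\mathbb{F}_2^n$; the sketch below (hypothetical helper names, numpy 0/1 matrices for the affine maps, all maps assumed to act on $\mathbb{F}_2^n$) checks the relation but does not perform the recovery, which is the hard part addressed by the paper:

```python
import numpy as np

def apply_affine(M, c, x, n):
    """Apply x -> M*x + c over GF(2); x is an n-bit integer, M an (n, n)
    numpy 0/1 matrix, c a length-n numpy 0/1 vector."""
    bits = np.array([(x >> i) & 1 for i in range(n)], dtype=np.uint8)
    out = (M @ bits + c) % 2
    return int(sum(int(b) << i for i, b in enumerate(out)))

def is_ea_equivalent_via(F, G, A, B, C, n):
    """Check G(x) = A(F(B(x))) + C(x) for all x in F_2^n, where F, G are lookup
    tables of length 2**n and A, B, C are (matrix, constant) pairs (A and B must
    additionally be invertible for EA equivalence). Addition in F_2^n is XOR."""
    (MA, cA), (MB, cB), (MC, cC) = A, B, C
    for x in range(2 ** n):
        rhs = apply_affine(MA, cA, F[apply_affine(MB, cB, x, n)], n) \
              ^ apply_affine(MC, cC, x, n)
        if G[x] != rhs:
            return False
    return True
```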

We show that for the problem of testing if a matrix $A \in F^{n \times n}$ has rank at most $d$, or requires changing an $\epsilon$-fraction of entries to have rank at most $d$, there is a non-adaptive query algorithm making $\widetilde{O}(d^2/\epsilon)$ queries. Our algorithm works for any field $F$. This improves upon the previous $O(d^2/\epsilon^2)$ bound (SODA'03), and bypasses an $\Omega(d^2/\epsilon^2)$ lower bound of (KDD'14) which holds if the algorithm is required to read a submatrix. Our algorithm is the first such algorithm which does not read a submatrix, and instead reads a carefully selected non-adaptive pattern of entries in rows and columns of $A$. We complement our algorithm with a matching query complexity lower bound for non-adaptive testers over any field. We also give tight bounds of $\widetilde{\Theta}(d^2)$ queries in the sensing model for which query access comes in the form of $\langle X_i, A\rangle:=tr(X_i^\top A)$; perhaps surprisingly these bounds do not depend on $\epsilon$. We next develop a novel property testing framework for testing numerical properties of a real-valued matrix $A$ more generally, which includes the stable rank, Schatten-$p$ norms, and SVD entropy. Specifically, we propose a bounded entry model, where $A$ is required to have entries bounded by $1$ in absolute value. We give upper and lower bounds for a wide range of problems in this model, and discuss connections to the sensing model above.
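
To fix notation, a single query in the sensing model is simply the trace inner product; the following sanity-check sketch (illustrative only, not the tester itself) confirms that $\langle X, A\rangle := \mathrm{tr}(X^\top A)$ coincides with the entrywise inner product:

```python
import numpy as np

def sensing_query(X, A):
    """One query in the sensing model: <X, A> := tr(X^T A)."""
    return np.trace(X.T @ A)

# tr(X^T A) equals the sum of entrywise products sum(X * A).
rng = np.random.default_rng(0)
X, A = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
assert np.isclose(sensing_query(X, A), np.sum(X * A))
```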

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst-case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
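
The anytime guarantee rests on weak duality (standard reasoning, stated generically rather than with the paper's exact notation): if the verification problem is written as $\max_{x \in \mathcal{X}} g(x)$ subject to constraints $h(x) = 0$ encoding the network's layer equations, then for any multipliers $\lambda$,
$$\max_{x \in \mathcal{X},\, h(x)=0} g(x) \;\le\; \max_{x \in \mathcal{X}} \big(g(x) + \lambda^\top h(x)\big),$$
so every choice of $\lambda$ yields a valid upper bound on the maximum violation, and the bound can only improve as $\lambda$ is optimised.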

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
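
The smoothing step behind DRS can be sketched as follows (a standard construction for randomized-smoothing methods; the precise constants and the distributed analysis are in the paper): replace $f$ by
$$f_\gamma(x) \;=\; \mathbb{E}_{u}\big[f(x + \gamma u)\big],$$
with $u$ uniform on the Euclidean unit ball. If $f$ is $L$-Lipschitz, then $f_\gamma$ is still $L$-Lipschitz, satisfies $|f_\gamma(x) - f(x)| \le \gamma L$, and is differentiable with a Lipschitz-continuous gradient, so smooth (accelerated) first-order methods can be applied at the cost of a dimension-dependent factor.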

M. Christandl conjectured that the composition of any trace-preserving PPT map with itself is entanglement breaking. We prove that Christandl's conjecture holds asymptotically by showing that the distance between the iterates of any unital or trace-preserving PPT map and the set of entanglement breaking maps tends to zero. Finally, for every graph we define a one-parameter family of maps on matrices and determine the least value of the parameter such that the map is, variously, positive, completely positive, PPT, or entanglement breaking, in terms of properties of the graph. Our estimates are sharp enough to conclude that Christandl's conjecture holds for these families.
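
For reference (standard definitions, not specific to this paper): a linear map $\Phi$ on matrices is PPT if both $\Phi$ and $T\circ\Phi$ are completely positive, where $T$ is the transpose map; a quantum channel $\Phi$ is entanglement breaking if $(\mathrm{id}\otimes\Phi)(\rho)$ is separable for every bipartite state $\rho$, which for channels is equivalent to the Holevo form
$$\Phi(\rho) \;=\; \sum_k \mathrm{tr}(F_k \rho)\,\sigma_k$$
for some POVM $\{F_k\}$ and states $\sigma_k$.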
