Natural Computing refers both to computational processes observed in nature and to human-designed computing inspired by nature. When complex natural phenomena are analyzed in terms of computational processes, our understanding of both nature and the essence of computation is deepened. Human-designed computing inspired by nature is characterized by the metaphorical use of concepts, principles and mechanisms underlying natural systems. Natural computing includes evolutionary algorithms, neural networks, molecular computing and quantum computing.
Official website: http://dblp.uni-trier.de/db/journals/nc/

** In this paper we present a numerical approach to solve the Navier-Stokes equations on moving domains with second-order accuracy. The space discretization is based on the ghost-point method, which falls under the category of unfitted boundary methods, since the mesh does not adapt to the moving boundary. The equations are advanced in time by the Crank-Nicolson scheme. The momentum and continuity equations are solved simultaneously for the velocity and the pressure by adopting a proper multigrid approach. To avoid the checkerboard instability for the pressure, a staggered grid is adopted, where velocities are defined at the sides of the cell and the pressure is defined at the centre. The lack of uniqueness for the pressure is circumvented by the inclusion of an additional scalar unknown, representing the average divergence of the velocity, and an additional equation setting the average pressure to zero. Several tests are performed to simulate the motion of an incompressible fluid around a moving object, as well as lid-driven cavity tests around steady objects. The object is implicitly defined by a level-set approach, which allows a natural computation of geometrical properties such as distance from the boundary, normal directions and curvature. Different shapes are tested: circle, ellipse and flower. Numerical results show second-order accuracy for the velocity and the divergence (which decays to zero with second order) and the efficiency of the multigrid, which is comparable with tests available in the literature for rectangular domains without objects, showing that the presence of a complex-shaped object does not degrade the performance. **
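As a minimal sketch of how a level-set function yields the geometric quantities mentioned in the abstract above (distance from the boundary, unit normals, curvature), take phi to be the signed distance to a circle and differentiate it by central differences. This is not the paper's implementation; the grid resolution and the circle are illustrative choices.

```python
import numpy as np

n, h = 101, 2.0 / 100                     # grid resolution and spacing on [-1, 1]^2
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.5          # signed distance to a circle of radius 0.5

# Gradient of phi by central differences (interior points only).
phi_x = (phi[2:, 1:-1] - phi[:-2, 1:-1]) / (2 * h)
phi_y = (phi[1:-1, 2:] - phi[1:-1, :-2]) / (2 * h)
grad_norm = np.maximum(np.sqrt(phi_x**2 + phi_y**2), 1e-12)  # ~1 for a distance function

# Outward unit normal n = grad(phi) / |grad(phi)|.
nx, ny = phi_x / grad_norm, phi_y / grad_norm

# Curvature kappa = div(n); for a circle of radius 0.5 it equals 2 on the boundary.
kappa = (nx[2:, 1:-1] - nx[:-2, 1:-1]) / (2 * h) + (ny[1:-1, 2:] - ny[1:-1, :-2]) / (2 * h)
```

At the boundary point (0.5, 0) the discrete values reproduce |grad phi| = 1 and kappa = 1/r = 2 to second order in h, consistent with the second-order accuracy claimed above.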

** The Turing Machine is the paradigmatic case of computing machines, but there are others, such as Artificial Neural Networks, Table Computing, Relational-Indeterminate Computing and diverse forms of analogical computing, each of which is based on a particular underlying intuition of the phenomenon of computing. This variety can be captured in terms of system levels, re-interpreting and generalizing Newell's hierarchy, which includes the knowledge level at the top and the symbol level immediately below it. In this re-interpretation the knowledge level consists of human knowledge and the symbol level is generalized into a new level that here is called the Mode of Computing. Natural computing performed by the brains of humans and of non-human animals with a sufficiently developed neural system should be understood in terms of a hierarchy of system levels too. By analogy with standard computing machinery, there must be a system level above the neural circuitry levels and directly below the knowledge level, named here the Mode of Natural Computing. A central question for Cognition is the characterization of this mode. The Mode of Computing provides a novel perspective on the phenomena of computing and interpreting, on the representational and non-representational views of cognition, and on consciousness. **

** Zero-knowledge and multi-prover systems are both central notions in classical and quantum complexity theory. There is, however, little research on quantum multi-prover zero-knowledge systems. This paper studies complexity-theoretic aspects of quantum multi-prover zero-knowledge systems and has two results: 1. QMIP* systems with honest zero-knowledge can be converted into general zero-knowledge systems without any assumptions. 2. QMIP* has computational quantum zero-knowledge systems if a natural computational conjecture holds. One of the main tools is a test (called the GHZ test) that uses GHZ states shared by the provers, which prevents the verifier's attack in the above two results. Another main tool is what we call the Local Hamiltonian based Interactive protocol (LHI protocol). The LHI protocol makes previous research on Local Hamiltonians applicable to checking the history state of interactive proofs, and we then apply Broadbent et al.'s zero-knowledge protocol for QMA \cite{BJSW} to quantum multi-prover systems in order to obtain the second result. **

** Flip graphs are a ubiquitous class of graphs, which encode relations induced on a set of combinatorial objects by elementary, local changes. Skeletons of associahedra, for instance, are the graphs induced by quadrilateral flips in triangulations of a convex polygon. For some definition of a flip graph, a natural computational problem to consider is the flip distance: Given two objects, what is the minimum number of flips needed to transform one into the other? We consider flip graphs on orientations of simple graphs, where flips consist of reversing the direction of some edges. More precisely, we consider so-called $\alpha$-orientations of a graph $G$, in which every vertex $v$ has a specified outdegree $\alpha(v)$, and a flip consists of reversing all edges of a directed cycle. We prove that deciding whether the flip distance between two $\alpha$-orientations of a planar graph $G$ is at most two is \NP-complete. This also holds in the special case of perfect matchings, where flips involve alternating cycles. This problem amounts to finding geodesics on the common base polytope of two partition matroids, or, alternatively, on an alcoved polytope. It therefore provides an interesting example of a flip distance question that is computationally intractable despite having a natural interpretation as a geodesic on a nicely structured combinatorial polytope. We also consider the dual question of the flip distance between graph orientations in which every cycle has a specified number of forward edges, and a flip is the reversal of all edges in a minimal directed cut. In general, the problem remains hard. However, if we restrict to flips that only change sinks into sources, or vice versa, then the problem can be solved in polynomial time. Here we exploit the fact that the flip graph is the cover graph of a distributive lattice. This generalizes a recent result of Zhang, Qian, and Zhang. **
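The flip operation on $\alpha$-orientations described above reverses all edges of a directed cycle, which leaves every vertex's outdegree unchanged, so the result is again an $\alpha$-orientation. A minimal Python sketch (the 4-cycle graph and the cycle chosen are illustrative, not taken from the paper):

```python
from collections import Counter

def outdegrees(orientation):
    """Outdegree of every vertex in a set of directed edges (u, v)."""
    return Counter(u for u, v in orientation)

def flip(orientation, cycle):
    """Reverse every edge of a directed cycle, given as a vertex list."""
    cycle_edges = set(zip(cycle, cycle[1:] + cycle[:1]))
    assert cycle_edges <= orientation, "cycle must be directed in this orientation"
    return (orientation - cycle_edges) | {(v, u) for (u, v) in cycle_edges}

# A directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0; here alpha(v) = 1 for every vertex.
orient = {(0, 1), (1, 2), (2, 3), (3, 0)}
flipped = flip(orient, [0, 1, 2, 3])
# Outdegrees are unchanged, so `flipped` is again an alpha-orientation.
```

The hardness result above says that even deciding whether two such orientations are at most two flips apart is \NP-complete for planar graphs.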

** Natural computing offers new opportunities to understand, model and analyze the complexity of the physical and human-created environment. This paper examines the application of natural computing in environmental informatics, by investigating related work in this research field. Various nature-inspired techniques are presented, which have been employed to solve different relevant problems. Advantages and disadvantages of these techniques are discussed, together with analysis of how natural computing is generally used in environmental research. **

** The 9th International Workshop on Physics and Computation (PC 2018) was held as a satellite workshop of the 17th International Conference on Unconventional Computation and Natural Computation (UCNC 2018) in Fontainebleau, France, on 25-29 June 2018. PC 2018 was an interdisciplinary meeting that brought together researchers from various domains with interests in physics and computation. Research and important issues relating to the interface between physics and the theories of computation, computability and information, including their application to physical systems, were presented and discussed. **

** Dimensionality-reduction techniques are a fundamental tool for extracting useful information from high-dimensional data sets. Because secant sets encode manifold geometry, they are a useful tool for designing meaningful data-reduction algorithms. In one such approach, the goal is to construct a projection that maximally avoids secant directions and hence ensures that distinct data points are not mapped too close together in the reduced space. This type of algorithm is based on a mathematical framework inspired by the constructive proof of Whitney's embedding theorem from differential topology. Computing all (unit) secants for a set of points is by nature computationally expensive, thus opening the door for exploitation of GPU architecture for achieving fast versions of these algorithms. We present a polynomial-time data-reduction algorithm that produces a meaningful low-dimensional representation of a data set by iteratively constructing improved projections within the framework described above. Key to our algorithm design and implementation is the use of GPUs which, among other things, minimizes the computational time required for the calculation of all secant lines. One goal of this report is to share ideas with GPU experts and to discuss a class of mathematical algorithms that may be of interest to the broader GPU community. **
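As a rough illustration of the secant framework described above (the function names and random data are invented for the example; the paper's iterative projection construction and GPU kernels are not reproduced), one can compute all unit secants of a point cloud and score a projection by how much it shrinks the worst secant:

```python
import numpy as np

def unit_secants(points):
    """All pairwise unit secant directions of an (n, d) point array."""
    diffs = points[:, None, :] - points[None, :, :]   # (n, n, d) pairwise differences
    iu = np.triu_indices(len(points), k=1)            # each unordered pair once
    secants = diffs[iu]
    return secants / np.linalg.norm(secants, axis=1, keepdims=True)

def projection_quality(P, points):
    """Smallest norm of a projected unit secant: close to 1 is good,
    close to 0 means two distinct points nearly collide under P."""
    return np.linalg.norm(unit_secants(points) @ P.T, axis=1).min()

rng = np.random.default_rng(0)
pts = rng.standard_normal((50, 5))
P = np.eye(2, 5)                    # project onto the first two coordinates
q = projection_quality(P, pts)      # lies in (0, 1]
```

The quadratic number of secants (n(n-1)/2 pairs) is exactly what makes this computation expensive and, as the abstract notes, a natural fit for GPU parallelism.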

** Through the theory of Lie bialgebroids and generalized complex structures, one can define a cohomology theory naturally associated to a holomorphic Poisson structure. It is known to be the hypercohomology of a bi-complex in which one of the two operators is the classical $\overline{\partial}$-operator; the other is the adjoint action of the Poisson bivector with respect to the Schouten-Nijenhuis bracket. The hypercohomology is naturally computed by either of the two associated spectral sequences. In a prior publication, the author of this article and his collaborators investigated the degeneracy of this spectral sequence on the second page. In this note, the author investigates conditions under which this spectral sequence degenerates on the first page. Particular attention is devoted to nilmanifolds with abelian complex structures. **
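In the notation common in this area (a sketch; the abstract itself does not fix notation), the bi-complex lives on Dolbeault-type spaces of polyvector fields, with the holomorphic Poisson bivector $\Lambda$ acting through the Schouten-Nijenhuis bracket:

```latex
D \;=\; \overline{\partial} \;+\; [\Lambda,\,\cdot\,]
\;\colon\; \bigoplus_{p+q=n} \Omega^{0,q}\!\bigl(\wedge^{p} T^{1,0}\bigr)
\;\longrightarrow\; \bigoplus_{p+q=n+1} \Omega^{0,q}\!\bigl(\wedge^{p} T^{1,0}\bigr)
```

Since $\overline{\partial}$ raises $q$ by one and $[\Lambda,\cdot]$ raises $p$ by one, filtering by $p$ or by $q$ yields the two spectral sequences mentioned above, and the degeneracy questions concern their first and second pages.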

** We study the complexity of computational problems from quantum physics. Typically, they are studied using the complexity class QMA (quantum counterpart of NP) but some natural computational problems appear to be slightly harder than QMA. We introduce new complexity classes consisting of problems that are solvable with a small number of queries to a QMA oracle and use these complexity classes to quantify the complexity of several natural computational problems (for example, the complexity of estimating the spectral gap of a Hamiltonian). **

** Nature can be seen as an informational structure with computational dynamics (info-computationalism), where an (info-computational) agent is needed for the potential information of the world to actualize. Starting from the definition of information as a difference in one physical system that makes a difference in another physical system, which combines Bateson's and Hewitt's definitions, the argument is advanced for natural computation as a computational model of the dynamics of the physical world, where information processing is constantly going on at a variety of levels of organization. This setting helps elucidate the relationships between computation, information, agency and cognition within a common conceptual framework, which has special relevance for biology and robotics. **

** This article presents a naturalist approach to cognition understood as a network of info-computational, autopoietic processes in living systems. It provides a conceptual framework for the unified view of cognition as evolved from the simplest to the most complex organisms, based on new empirical and theoretical results. It addresses three fundamental questions: what cognition is, how cognition works and what cognition does at different levels of complexity of living organisms. By explicating the info-computational character of cognition, its evolution, agent-dependency and generative mechanisms, we can better understand its life-sustaining and life-propagating role. The info-computational approach contributes to rethinking cognition as a process of natural computation in living beings that can be applied for cognitive computation in artificial systems. **

** We describe the theoretical and computational framework for the Dynamic Signatures for Genetic Regulatory Network (DSGRN) database. The motivation stems from the urgent need to understand the global dynamics of biologically relevant signal transduction/gene regulatory networks that have at least 5 to 10 nodes, multiple interactions, and decades of parameters. The input to the database computations is a regulatory network, i.e.\ a directed graph with edges indicating up or down regulation, from which a computational model based on switching networks is generated. The phase space dimension equals the number of nodes. The associated parameter space consists of one parameter for each node (a decay rate) and three parameters for each edge (low and high levels of expression, and a threshold at which expression levels change). Since the nonlinearities of switching systems are piecewise constant, there is a natural decomposition of phase space into cells from which the dynamics can be described combinatorially in terms of a state transition graph. This in turn leads to a compact representation of the global dynamics, called an annotated Morse graph, that identifies recurrent and nonrecurrent dynamics. The focus of this paper is the construction of a natural computable finite decomposition of parameter space into domains where the annotated Morse graph description of the dynamics is constant. We use this decomposition to construct an SQL database that can be effectively searched for dynamic signatures such as bistability, stable or unstable oscillations, and stable equilibria. We include two simple 3-node networks to provide small explicit examples of the type of information stored in the DSGRN database. To demonstrate the computational capabilities of this system we consider a simple network associated with p53 that involves 5 nodes and a 29-dimensional parameter space. **
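As a toy illustration of the combinatorial dynamics described above (the network, the update rule and the coarse two-cells-per-node decomposition are simplifications; DSGRN's actual cell and parameter decompositions are much richer), consider a two-node switching network with activation x -> y and repression y -| x. Labelling each cell of phase space by whether each variable is below (0) or above (1) its threshold, the asynchronous state transition graph is a single 4-cycle, the combinatorial signature of oscillation expected from a negative feedback loop:

```python
from itertools import product

def target(state):
    """Side of the threshold each variable is driven toward in a given cell."""
    x, y = state
    # x is repressed by y: x is driven high iff y is low.
    # y is activated by x: y is driven high iff x is high.
    return (1 - y, x)

def state_transition_graph(n_nodes=2):
    """Edges of the state transition graph: move one coordinate at a
    time toward its target value (asynchronous updates)."""
    edges = set()
    for state in product((0, 1), repeat=n_nodes):
        tgt = target(state)
        for i in range(n_nodes):
            if state[i] != tgt[i]:
                succ = list(state)
                succ[i] = tgt[i]
                edges.add((state, tuple(succ)))
    return edges

stg = state_transition_graph()
# (0,0) -> (1,0) -> (1,1) -> (0,1) -> (0,0): every state is recurrent,
# so the annotated Morse graph would report a single oscillatory Morse set.
```

In DSGRN the analogous graph is computed per parameter domain, and its Morse decomposition is what gets stored and queried in the SQL database.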