System Design: Foreign Literature and Translation (Zhang Suowei)

Nanjing Institute of Technology: English Literature and Translation
Author: Zhang Suowei. Student ID: 209100738. Department: School of Economics and Management. Major: Information Management and Information Systems. Topic: Design and Implementation of an "Input-Output Analysis System". Supervisor: Associate Professor Huang Chuanfeng. February 2014.

Emerging Challenges in Regional Input-Output Analysis

Geoffrey J. D. Hewings, University of Illinois at Urbana-Champaign, USA
and
Rodney C. Jensen, University of Queensland, St. Lucia, Australia

Abstract. The changing interests and focus of research in the field of regional input-output analysis are examined. After reviewing some of the recent trends and suggesting the tenor of the prevailing philosophy in the field, attention is focused on three interdependent emerging trends. These are characterized as (1) the conceptualization of input-output within the traditions of econometric analysis; (2) the integration of input-output with other regional and interregional models; and (3) attempts to link input-output analysis with regional growth and development theory.

I. Preface

To our knowledge, Michael Mischaikow's research interests have not directly encompassed regional input-output analysis. However, as Editor of the Annals, and as a highly respected statesman in regional science, he has had a significant influence in fostering the growth and development of many analytical techniques in regional analysis, including input-output analysis. Several very important and influential articles have appeared in the Annals, many as the result of Mischaikow's initiative and encouragement. He has been a firm, committed champion of our sub-field of regional input-output analysis. We are pleased to have the opportunity to offer this paper as part of his Festschrift, both as a mark of personal appreciation and encouragement, and to honor the outstanding contribution of a valued colleague.

II. Introduction

The field of regional analytical modelling is undergoing a significant new surge of interest and development. In this paper, some of these developments will be reported in the context of a set of emerging challenges in the field of regional input-output analysis.
First, however, the current state of the art will be reviewed briefly. Thereafter, some general comments will be made about the prevailing philosophy in input-output analysis. The fifth section of the paper will address the emerging trends as a way of establishing a possible agenda for the future. The final part of the paper provides some concluding comments.

III. The State of the Art in Regional Input-Output Analysis

With several recent contributions under this umbrella (Miller and Blair, 1985; Richardson, 1985; Hewings and Jensen, 1986), the need for yet another comprehensive review of input-output at the regional level is not a high priority. The objective in this section is not to provide the detailed coverage that these papers and monographs have contributed, but rather a summary and overview of general trends and directions. This overview is intended to facilitate the discussion in later sections of the paper.

Two important points need to be established as a preface to an overview of important developments in input-output. First, while regional input-output models have become an accepted and much-used part of the arsenal of analytical techniques, there is a strong suspicion that many analysts have a higher level of awareness of the model's limitations than they have of its utility. This comment is made on the basis of casual empiricism derived from referees' comments on input-output papers submitted to journals and commentary made on similar papers presented at professional meetings. We see this primarily as a result of the continued high level of debate, evaluation, and testing which has characterized the field of regional input-output, probably far more than most other fields of economic analysis. This healthy debate and critical introspection has been in the traditions of an academic and professional environment aimed at continued improvement and evaluation of existing analytical skills.
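For readers new to the technique, the core quantity model that all of these applications share fits in a few lines: total output x solves x = (I - A)^(-1) f, where A is the matrix of direct input coefficients and f is final demand. The 2-sector numbers below are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical 2-sector direct-requirements (technical coefficients) matrix:
# A[i, j] = input from sector i needed per unit of sector j's output.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
f = np.array([100.0, 200.0])            # final demand by sector

# Leontief quantity model: solve (I - A) x = f for total output x.
x = np.linalg.solve(np.eye(2) - A, f)

# Column sums of the Leontief inverse give simple output multipliers.
L = np.linalg.inv(np.eye(2) - A)
multipliers = L.sum(axis=0)
```

Multipliers greater than one reflect the indirect interindustry requirements that regional impact studies are built to trace.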
It is no exaggeration to suggest that the most informed and consistent critics of aspects of the input-output methodology are those actively and diligently involved in research on the technique. As is so often the case, the negative aspects of such activities tend to filter through to those with marginal interest in and knowledge of the technique with more efficiency than the positive aspects, creating in this case an image of input-output which is less encouraging than that warranted by the reality of progress. In a sense, this situation provides evidence of a Gresham's Law of information: the bad driving out the good.

Second, there is also an underlying perception that input-output models have not adapted well to the needs of the modern analyst; there appears to be some feeling that "more modern" methods are increasingly required in routine analytical situations, as the "life cycle" of input-output analysis proceeds past some peak of activity. We see such an attitude as extraordinary in light of the current unprecedented expansion in the use of input-output at the regional level, where the technique is rapidly becoming routine in planning and impact studies, and is obviously filling a need to an increasing extent. While it would be unfortunate if this paper devolved merely into a defense of the technique, some of these perceptions are widely held. Hence, some summary statement of the current status of the field and its developments appears to be appropriate and is provided below.

An examination of the literature suggests that a number of important developments can be highlighted:

(1) The rapid growth in the adoption of input-output analysis, for planning, forecasting and general impact analysis, at the regional level in countries of all political persuasions and levels of development, to the stage of routine application. Recent experience would appear to be counter to any suggestion that input-output analysis has reached and passed its zenith.
In fact, input-output appears to be entering a new stage of expanded routine application. In many ways, the challenges facing the construction and ultimate use of regional input-output models are as great as they were two or three decades ago, yet they are of a different kind. Of particular importance has been the gradual use of these models in developing economies, particularly in the context of integrated models (see below).

(2) The decline in the attention given to the production of regional input-output tables; new input-output tables are appropriately regarded as routine rather than significant events. No special fanfare is accorded the production of regional input-output tables in the developed world, unless some novel accounting scheme, data collection method or particular application has been associated with their development. The process has become routine, accelerated by the availability of several competing personal computer packages which enable the construction of a regional input-output table with minimal regional data (for a partial review, see Sivitanidou and Polenske, 1987). Furthermore, there has been a greater recognition in the literature of the linkages between a number of important modelling paradigms.
Thus, the biproportional or RAS technique, which was first developed for updating input-output tables, has been shown to be part of a broad family of matrix estimation techniques on the one hand (see Batten, 1982; Boyce and Batten, 1986; Nagurney, 1987; Willekens, 1981) and a special case of general error analysis in matrix systems on the other (see Sonis and Hewings, 1987). This recognition has led to a great deal of shared expertise and an enriching of the analytical tools; the flexibility afforded by entropy, contingency-table and network approaches to matrix estimation (such as those of Kadas and Klafsky, 1976, and the variational inequality proposals of Nagurney, 1987) has provided the analyst with a choice of approaches which, for the most part, do not depend entirely on the data set available.

(3) A movement towards the development of hybrid input-output tables, a compromise position between groups who advocated the construction of tables from survey data alone and those whose position is that nonsurvey techniques will produce tables of the requisite quality. Alternative approaches to the construction of regional tables dominated the literature for much of the 1970s. While extreme positions were taken in the earlier years, there would appear to have been a significant mellowing of opinions and movement towards the center, the center being defined as the recognition that partial-survey or hybrid tables would become the dominant construction technique.

In part, this compromise was reached on both pragmatic and analytic grounds. In the former case, the recognition that the days of massive appropriation of funds by state, provincial and local government agencies for the de novo construction of input-output tables were over propelled researchers to ponder the alternatives. On the other hand, there was increasing evidence that the census mentality, namely that all entries in an input-output table had to be obtained from survey data, was probably misplaced.
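The RAS procedure itself is a short iterative computation: alternately rescale the rows and columns of a prior matrix until both sets of target margins are met. A minimal sketch on a hypothetical 2x2 prior table, illustrating the mechanics only and not the estimator of any particular study cited here:

```python
import numpy as np

def ras_update(A0, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Biproportional (RAS) adjustment: scale rows and columns of a prior
    matrix alternately until its margins match the target row/column sums."""
    A = A0.astype(float).copy()
    for _ in range(max_iter):
        A *= (row_targets / A.sum(axis=1))[:, None]   # match row sums
        A *= (col_targets / A.sum(axis=0))[None, :]   # match column sums
        if np.allclose(A.sum(axis=1), row_targets, atol=tol):
            break
    return A

# Hypothetical 2-sector prior table and new, consistent margins
# (row and column targets must share the same grand total, here 30).
A0 = np.array([[10.0, 5.0],
               [4.0, 6.0]])
row_t = np.array([18.0, 12.0])   # target row sums
col_t = np.array([16.0, 14.0])   # target column sums
A = ras_update(A0, row_t, col_t)
```

For strictly positive prior matrices with consistent margins the alternating scaling converges, and RAS is equivalent to a minimum cross-entropy adjustment, which is one reason for the family resemblance to the entropy and contingency-table formulations noted above.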
Notions of analytical importance began to pave the way for a compromise which would allow the investigator to maximize the quality of the effort involved in any data collection by focusing on parts of the system whose direct estimation was deemed critical. While debate now centers on the identification of these elements, few articles have appeared in the recent past which have ventured far from a notion of support for the development of hybrid tables.

(4) A recognition that integrated and more specialized models, with the input-output model as one component, will be more important at the regional level in the coming years, and the development of operational models of this type. While Isard's "Channels of Synthesis" chapter in Methods of Regional Analysis was considered by some as an unattainable goal, it was visionary and propitious in its ability to suggest a trend which appears to be dominating the development of regional analytical models in the 1980s. More attention will be devoted to this issue in Section V.

(5) The increasing attention which has been given to problems of errors and sensitivity in input-output models, allowing reasonable perspectives of model robustness to be established. While some of the earlier input-output analysts raised questions about errors in the input-output model (in construction and in application), these issues were treated sotto voce by the profession for the most part. Only recently have some of these problems been revisited; rather than presenting major impediments to the use of the input-output model, a case can be made that the raising of the issue has created some significant breakthroughs in the analytical utility of the model. Again, more discussion will be devoted to this issue in Section V.

IV. The Prevailing Philosophy in Regional Input-Output Analysis

This section presents some summary statements which describe our view of the prevailing philosophy in input-output analysis.
In the preceding section, it was noted that over the last two decades there has been a significant change in the focus of attention in regional input-output analysis. Three phases in the development may be articulated, and these have led to the currently developing philosophy which many analysts in the field seem to share.

First, the generation of regional input-output tables, a daunting challenge given computational and data resources, dominated the early phase of regional input-output survey-based work. For the most part, the tables which were constructed were influenced very heavily by the type of table constructed at the national level; concomitantly, most of the applications with the associated regional model mirrored the developments at the national level. In fact, during this period, the separation of "regional" and "national" analysts was relatively weak; thereafter, this separation increased to the point where, today, there appears to be little or no contact between the two groups.

The second phase may be ascribed the label "accuracy issues," as debate centered on the acceptability of various (survey versus nonsurvey) methods for construction. The resolution on table construction saw movement towards a common ground, aided by Jensen's (1980) attempt to distinguish between holistic and partitive accuracy. This distinction, often misinterpreted as simply a choice between two operational views of accuracy (see Richardson, 1985), raised the notion that the integrity of the table as a whole as a "portrait" of an economy is perhaps a more worthy objective than attending to the accuracy of large numbers of analytically insignificant cells.

The third phase has witnessed several different approaches to the construction of hybrid tables, as analysts sought to provide more explicit ways of identifying the set of entries for which "superior data" were required.
Unresolved at the present time is the issue of the a priori identification of these critical data sets in cases in which no existing table is available. This dilemma has led to two important new developments which provide substantial challenges not only for input-output analysis but for regional analysis in general. Essentially, these developments seek to view the matrix of transactions (broadly defined to include other than interindustry transactions) as a representation of the structure of the economy. With this representation, the second perspective then seeks to use these structures to establish a taxonomy of economies and suggest the development of possible theories about the evolution of economies over time and space. While these ideas have been articulated in detail elsewhere (see Hewings, Jensen and West, 1987; Jensen, West and Hewings, 1987; Jensen et al., 1987), some of the implications will be reviewed here.

The first point to note is that regional growth and development theory has, for the most part, ignored the issue of structure in the sense of the structure of the economy embodied in the input-output table and associated model. Attention has been focused on aggregate indicators such as the distribution of output, income or employment among major sectors, but rarely on the interdependence existing among these sectors. While some parts of growth center theory have attempted to view regional growth in terms of the generation of linkages, the overall changes in the structure of the economy are rarely mentioned.

Secondly, there exists a large number of input-output tables for regions of different sizes and at different stages of development. No attempt has been made to regard these as a sample of "photographs" of the structure of their economies at one point in time; while the photographic record is incomplete, the opportunity to view these as samples from a space-time development continuum appears to have been ignored.
The need for some creative interpretive techniques to handle missing "records" (i.e., input-output tables for some regions or points in time) would appear to be paramount if this opportunity is not to be lost.

While trying to avoid any suggestion of economic determinism, the third issue which arises focuses on our ability to use information about regional economic structures in building up a taxonomy of regional economies. It is felt that this taxonomy would be useful for a number of reasons: assisting the development of hybrid input-output tables for regions in which only limited information is available, providing guidance on possible development trends in regions undergoing change and, most importantly, testing for the existence of a fundamental economic structure. Further discussion on this topic will follow in Section V.

This change in philosophy reflects a sense in which the input-output model is now seen as part of the broader picture of regional growth and development rather than simply as an analytical tool designed to assist in providing answers for limited impact analyses. It is this broader picture which will be traced in more detail in the next section.

V. Emerging Trends in Regional Input-Output Analysis

In this section, the discussion will amplify many of the issues raised thus far, but will focus them more specifically on a set of three interdependent emerging trends. The term "regional" should be read as covering single-region, interregional and multiregional models, and the term "input-output" should be considered very broadly (see Figure 1).
The three trends may be labelled as (1) econometric input-output analysis; (2) integrated input-output modelling; and (3) input-output and regional growth and development theory.

5.1 Econometric Input-Output Analysis

Scholars exposed to input-output analysis for the first time are keenly aware of the fact that, while conceptualized within conventional economics, it has been necessarily operationalized and developed, for the most part, outside the mainstream of rigorous econometric practice. While some early attempts were made to avoid this unfortunate distinction (see Jackson and West, 1987 and Sonis and Hewings, 1987 for a review), the fact remains that input-output analysis has often been presented, like much economic analysis, as more accurate than justified by available data sources. For example, little attention in the literature has been given to the sampling problems involved in data collection and the impact that errors in data might have on the model's reliability. The prevailing view that data inconsistencies in many survey-based models were rather crudely arbitraged led Gerking (1976a, 1976b, 1979) to initiate a renewed charge for more analytical rigor in the development of the tables. The subsequent debate with Miernyk (1976, 1979) is now well known and will not be reviewed here. However, several important developments took place almost simultaneously: Bullard and Sebald's (1977) research on the issue of analytical importance, and the work of West (1981, 1986) and Jackson (1986) in developing distributions associated with errors in individual coefficients.

5.2 Integrated Input-Output Modelling

It is unfortunate that in many textbooks, regional analytical models are presented as competitors rather than as alternatives within a broader conceptual framework.
In the last decade, however, some of the most important developments in the field have been in the direction of linking the input-output model with one or more other models (see Batey and Madden, 1981, 1983 and 1986 for an excellent collection). This research has generated a new perspective on the input-output model; concerns about data collection, aggregation, accuracy and causality have had to be raised anew in the context of a larger, more encompassing analytical framework. Figure 1 describes some of the "linkages" in these new developments; only a few examples of the linked models are provided. The main purpose of the illustration is to reveal that there are a number of alternative approaches which seem to be converging on what we might term a general equilibrium framework. In this regard, input-output analysis is reestablishing the spirit of the Walrasian framework from which it was initially derived by Leontief. This trend may be illustrated in the evolution of the computable general equilibrium models, which grew out of dissatisfaction with the input-output model and the limitations of the linear programming formulation. Here, the input-output model is but one part of the general interdependence captured in the economy, although a very important component nonetheless. The expansion of the input-output model in the direction of social accounts has recently been extended in the form of transaction-value social accounts. Here, the accounting framework has been enhanced through the development of equations designed to estimate some of the parameters which appear in the SAM per se (see Drud, Grais, and Pyatt, 1985). Clearly, the distinction between the two approaches, general equilibrium and transaction-value social accounts, may soon disappear. One major advantage of the framework is the added flexibility afforded to the analyst interested in developing more detail in one aspect of the model, as many actors and relationships are made endogenous.
For example, Van Dijk and Oosterhaven (1986) have been able to link a vacancy chain model (to capture labor market changes) with an input-output model to explore the effects of government policy initiatives in employment creation in the northern part of The Netherlands. Batey and Madden's work (1981, 1983) in linking the input-output model with the consumption behavior of households of different types is well known. Less well known is the important contribution of Bell, Hazell and Slade (1982) in linking the social accounting system to a project appraisal framework along the lines initially proposed by Tinbergen (1966) and implemented for an input-output model alone for Papua New Guinea by Karunaratne (1976).

Some of the more ambitious attempts to link models have involved econometric, linear programming and environmental components integrated with, or joined in some fashion to, regional input-output models. Several of these efforts have been reviewed in Hewings and Jensen (1986) and Hewings (1986); in almost all these cases, the additional insights gained from the linkage far outweighed the extra effort involved in developing the model. More recently, there have been some ventures into computable general equilibrium modelling at the regional level (see Ko and Hewings, 1986; Spencer, 1987; Harrigan, McGregor and Swales, as well as Dixon et al., 1982).

However, the linkage of models is not costless; in particular, one needs to draw attention to what Taylor and Lysy (1979) refer to as closure rules. In many cases, the models contain a structure in which there are more variables than equations, leading to the delicate decision as to what is endogenous and what exogenous. The issue is not trivial, and different choices can often lead to very different results in the estimation of many parameters. From another perspective, there is the issue of just what is driving the system (see Hewings, 1986).
As more and more activities and relationships are determined endogenously, the driving mechanism for the regional or national economy is often reduced to a small subset of exogenous variables.

5.3 Input-Output and Regional Growth and Development Theory

In Section IV, it was noted that some recent work had proposed utilizing the set of input-output tables now available in a more creative way to further regional growth and development theory. In this regard, a taxonomy of regional economies was proposed as a major initial objective, to provide the basis for the development of a hypothesis to explain the evolution of economies over time and space. A critical component of this taxonomy was thought to be the notion of a fundamental economic structure. This notion represents an elaboration of the concept of a fundamental structure of production introduced by Simpson and Tsukui (1965), since it would include aspects other than interindustry transactions.

The major research questions here are the degree to which a fundamental economic structure can be identified (several approaches are reviewed in Hewings, Jensen and West, 1987), its stability over time (within one region) and over space (across the spectrum of economies from small, rural ones to sophisticated national-level economies), and its importance in conditioning the pace and future development of economies.

There is a further opportunity to integrate some of the issues noted in the first two emerging trends in Section V, namely econometric analysis and integrated modelling. For example, the issue of the analytical importance of the elements in the fundamental economic structure offers a challenge to examine the nature of change and its repercussions throughout the rest of the economy in question. In addition, the issue of integration may be addressed in the same context.
Assume, for example, that this change is generated by the adoption of an innovation in production by a small number of firms within one region, or by the production of a new product. In this regard, the opportunity now exists to link input-output analysis and innovation theory in a rigorous analytical fashion (see Hewings, Sonis and Jensen, 1987). Changes in the regional input-output portion of the production function may be traced in terms of their impact upon the rest of the system and, in an interregional context, in terms of the competition for inputs or markets. The input-output model affords the possibility of generating the indirect effects of innovation diffusion in production and of innovation adoption in consumption by households and other components of what is traditionally referred to as final demand. On the other hand, the rich conceptual framework offered by innovation diffusion theory provides an appropriate context in which the general process of coefficient change can be considered.

VI. Conclusions

From this discussion, is there a sense of a substantial set of opportunities in the field of input-output analysis? The answer would appear to be most strongly affirmative. The issues which have been raised and the work conducted to date have merely scratched the surface; the new sets of issues facing input-output analysts are more challenging, interesting and exciting than many of those of the past.

Emerging Challenges in Regional Input-Output Analysis

Geoffrey J. D. Hewings, University of Illinois at Urbana-Champaign, USA
and
Rodney C. Jensen, University of Queensland, St. Lucia, Australia

Abstract: The changing interests and research focus of the field of regional input-output analysis are examined. Some recent trends and the prevailing philosophy of regional input-output analysis are reviewed, and attention is focused on three interdependent emerging trends.
Language and Logic, Part III: Propositional Logic

• The syntax and semantics of propositional calculus
• Tautologies and contradictions
• Truth tables
Propositional logic: connectives
• "p ∧ q" is the proposition "p and q" (the conjunction of p and q).
• "p ∨ q" is the proposition "p or q or both" (the disjunction of p and q).
• "p → q" is the proposition "if p then q" (the implication of p and q).
• "p ↔ q" is the proposition "if p then q, and vice versa" (the equivalence of p and q).
• "¬p" is the proposition "not p".
Propositional logic
• The meaning of the logical connectives: conjunction ∧ (also written &).
• Difference between ∧ and natural-language "and":
  (1) Run a mile every day and you will feel like a new man.
  This is not a conjunction but an implication:
  (2) If you run a mile every day then you will feel like a new man. (p → q)
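The implication reading of the running-a-mile sentence can be checked mechanically by enumerating all truth assignments. A minimal sketch; the helper name `implies` is ours, not a standard library function:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Truth table for p -> q over all four assignments of p and q.
rows = [(p, q, implies(p, q)) for p, q in product([True, False], repeat=2)]
for p, q, r in rows:
    print(f"{p!s:5}  {q!s:5}  {r!s:5}")
```

The table shows the single falsifying row (p true, q false): running a mile every day without feeling like a new man.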
Complexity at Mesoscales: A Common Challenge in Developing Artificial Intelligence

Research: Artificial Intelligence (Perspective)

Complexity at Mesoscales: A Common Challenge in Developing Artificial Intelligence

Li Guo a,b, Jun Wu c, Jinghai Li a,b,*
a State Key Laboratory of Multiphase Complex Systems, Institute of Process Engineering, Chinese Academy of Sciences, Beijing 100190, China
b School of Chemical Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
c School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China

Article history: Received 15 July 2018; Revised 29 July 2019; Accepted 12 August 2019; Available online 21 August 2019

Keywords: Artificial intelligence; Deep learning; Mesoscience; Mesoscale; Complex system

Abstract: Exploring the physical mechanisms of complex systems and making effective use of them are the keys to dealing with the complexity of the world. The emergence of big data and the enhancement of computing power, in conjunction with the improvement of optimization algorithms, are leading to the development of artificial intelligence (AI) driven by deep learning. However, deep learning fails to reveal the underlying logic and physical connotations of the problems being solved. Mesoscience provides a concept to understand the mechanism of the spatiotemporal multiscale structure of complex systems, and its capability for analyzing complex problems has been validated in different fields. This paper proposes a research paradigm for AI, which introduces the analytical principles of mesoscience into the design of deep learning models. This is done to address the fundamental problem of deep learning models detaching the physical prototype from the problem being solved; the purpose is to promote the sustainable development of AI.

© 2019 The Authors. Published by Elsevier LTD on behalf of Chinese Academy of Engineering and Higher Education Press Limited Company. This is an open access article under the CC BY-NC-ND license.

1. AI has achieved significant development and has increasingly become a common multidisciplinary technique

In recent years, various scientific and technological breakthroughs have become a reality. AlphaGo, developed by Google, defeated top Go players such as Lee Sedol and Ke Jie; driverless cars have traveled safely for millions of kilometers and obtained legal driving rights in more than ten states in the United States; and image- and speech-recognition techniques have gradually matured and been widely used in consumer products such as cameras and smartphones, bringing great convenience. As a result, the old term "artificial intelligence" (AI) has reappeared in the public sight, triggering a new round of technological revolution. Today, AI is included in the major development strategies of many countries, and is regarded as a core capability [1-4]. As AI is closely related to many fields, it is expected to become a model framework for interdisciplinary research and to further promote the coordinated development of various fields.

AI has existed for more than 60 years, since its birth at the Dartmouth conference in 1956. The development of its mainstream techniques has gone through three key periods: the reasoning period, the knowledge period, and the learning period. From 1956 to the early 1970s, AI research was in the reasoning period, and mainly concentrated on rule-based symbolic representation and reasoning; the representative achievements in this period were various automatic theorem-proving procedures [5]. However, with the increasing difficulty of problems, it was challenging for machines to be intelligent purely on the basis of logical reasoning. Hence, some researchers turned from the exploration of general laws of thinking to the application of specialized knowledge, and AI research entered the knowledge period. From the early 1970s to the end of the 1980s, a large number of expert systems [6] were established and achieved remarkable results in specific application areas. However, with the expansion of the application scale, it was very difficult to
summarize knowledge and then teach it to computers.Therefore,some researchers advocated having comput-ers learn knowledge from data automatically,and AI research entered the learning period.Since the early 1990s,AI research has been devoted to machine learning theory and algorithm research [7].Machine learning has not only made breakthroughs⇑Corresponding author.E-mail address:jhli@ (J.Li).in traditional AI tasks such as image recognition [8–10]and speech recognition [11,12],but also played an important role in many novel applications,such as predicting the activity of poten-tial drug molecules [13],analyzing particle accelerator procedure [14],reconstructing brain circuits [15],identifying exoplanets [16],diagnosing skin cancer [17],and predicting the effects of mutations in non-coding DNA on gene expression and disease [18,19].The revival of AI can be attributed to three main factors.First,the significant progress of data acquisition,storage,and transmis-sion techniques has resulted in big data,which is an essential resource for AI research.Moreover,the maturity of high-performance computing techniques and the emergence of power-ful computing hardware (e.g.,graphics processing units (GPUs)and central processing unit (CPU)clusters)have laid a solid foun-dation for the study of st but not least,researchers have accu-mulated abundant experience and skills in modeling large-scale complex problems,leading to the rapid development of machine learning methods represented by deep neural networks [20],which provide an effective approach to study AI.Today,AI is a mul-tidisciplinary technique,which can be used in any area that requires data analysis.2.Deep learning is prevailing,but its physical mechanism remains unclearA deep learning model is actually a multilayer artificial neural network.To avoid confusion,it is hereinafter referred to as a deep neural network,the structure of which is shown in Fig.1.The model consists of three main parts:the input layer,hidden layer 
(s), and output layer. Each node of the input layer corresponds to one dimension of the input data (e.g., a pixel of an image), each node of the output layer corresponds to a decision variable (e.g., a semantic category), and the hidden layers are made up of many "neurons."

In terms of the biological mechanism, a neuron receives potential signals transmitted by other neurons and is activated, emitting an output signal, when the accumulated signal is higher than its own potential. This process can be formalized as $y = f(\mathbf{w}^{\mathrm{T}}\mathbf{x} + b)$, where $\mathbf{x} = [x_1 \cdots x_n]^{\mathrm{T}}$ denotes a multidimensional input signal, $y$ denotes a one-dimensional output signal, $\mathbf{w} = [w_1 \cdots w_n]^{\mathrm{T}}$ denotes the weights of the input signal, $b$ denotes a bias, and $f$ is an activation function. It can be seen that a deep neural network is essentially a mathematical model produced by nesting a simple function hierarchically. Although the deep neural network is inspired by neurophysiology, its working principle is far from the brain simulation depicted by the media. In fact, the working principle of the human brain has not yet been fully explored.

When many neurons are hierarchically organized as a deep neural network, this deep model is equivalent to a nested composite function, and each layer of the network corresponds to a nonlinear mapping (the output signal of the previous layer is used as the input signal of the next layer). Signal transmission throughout the network can be formally described as $y = f_{W_L}\{\cdots f_{W_2}[f_{W_1}(x + b_1) + b_2] \cdots + b_L\}$, where $W_l$ and $b_l$ $(l = 1, 2, \ldots, L)$ respectively denote the weight matrix and the bias vector of the $l$th layer (i.e., the model parameters to be solved). Here, the model parameters are packed into matrices and vectors, since each layer of the network contains multiple neurons. Given the application task, a loss function (used to measure the difference between the actual and expected outputs of a deep neural network) should be designed first, and then the model parameters can be solved by optimizing the loss
function with a backpropagation algorithm [21], so that a multilevel abstract representation collectively hidden in a dataset can be learned.

As illustrated by statistical learning theory [22], the more numerous the parameters are, the higher the model complexity will be, and hence the stronger the learning ability will be. We can increase the model complexity of a deep neural network by "widening" or "deepening" it; the latter works better in practice. While "widening" only increases the number of basis functions, "deepening" increases not only the number of functions but also the depth of function nesting, making the model more powerful in terms of functional expression. Therefore, "deepening" is more helpful for improving model complexity and learning ability. Taking the ImageNet competition in the computer vision field as an example, neural architectures are getting deeper and deeper, from the eight-layer AlexNet [8] to the 16-layer VggNet [9], and then to the 152-layer ResNet [10]. At present, the deepest neural networks have reached thousands of layers, with the number of model parameters reaching as many as several billions.

Fig. 1. An example of a deep neural network. The eight images on the bottom right of Fig. 1 were adapted from ImageNet.

L. Guo et al. / Engineering 5 (2019) 924–929

It is worth noting that one crucial aspect of a deep neural network is how to design the neural architecture reasonably. Existing neural architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Therefore, a great deal of effort is currently being put into automated machine learning (AutoML), with a particular focus on methods of automated neural architecture search (NAS) [23]. The concept behind NAS is to explore a search space (which defines all of the architectures that can be represented in principle) by Bayesian optimization [24], reinforcement learning [25], or neuro-evolutionary learning [26], with the goal of finding the architectures that achieve high predictive
performance on unseen data.

Although deep learning has achieved many successes, the interpretability of such models remains limited. When researchers apply a deep neural network to model their problems, they regard it as a "black box" and only focus on the input and output. Most employed neural architectures are designed entirely on the basis of the experience and intuition of the researchers, and fail to link the problem to be solved with its physical background. Although the calculation process of a deep neural network can be explicitly represented by a mathematical formula, it is difficult to explain at the level of physical meaning: the model lacks a physical connotation that can reflect the essence of the problem. Even if AutoML is helpful for finding "better" neural architectures in terms of predictive performance, these still cannot be physically explained. A few researchers have tried to explain deep neural networks in terms of specific application tasks. Taking image recognition as an example, researchers performed a de-convolution operation on a deep convolutional neural network (CNN) to visualize the visual features learned by its layers, hoping to explain the microscopic process of image recognition [27]. However, such a heuristic explanation is not universal and can hardly be applied extensively to other cases; furthermore, it fails to reveal the physical mechanism of the model. Therefore, the interpretability of deep neural networks is a bottleneck in their further development.

3. Mesoscience is expected to be a possible solution to reveal the physical mechanism of deep learning and further promote the development of AI

Associating the design of a deep neural network with the physical mechanism of the problem to be solved is a prerequisite for realizing breakthrough progress in AI, and the universality of the physical mechanism determines the scope of AI applications, which is a fundamental problem for AI in the future.

Mesoscience is based on the idea that complexity
originates from the compromise in the competition between two or more dominant mechanisms in a system, resulting in a complex spatiotemporal dynamic structure [28]. Almost all systems studied in AI research are complex systems. Introducing mesoscience principles and methods into AI research (mainly with respect to deep neural networks) might be a promising way to address the aforementioned problems.

Starting with the study of gas–solid fluidization in chemical engineering [29], mesoscience has successively been applied to studies of gas–liquid fluidization [30], turbulent flow [31,32], protein structure [33], catalysis [34], and so forth, and a universal law has gradually been summarized. The main spirit of mesoscience can be summarized as follows [35,36]: In general, a complex problem has multiple levels, each of which has multiscale characteristics, and the different levels are related to each other. A complex system consists of countless elements, and a spatiotemporal multiscale structure is likely to exist between the system scale and the element scale due to the collective effect of the elements. There are three types of regimes in such a structure (taking a system controlled by two dominant mechanisms as an example), with completely different properties as the boundary and external conditions change:

A–B regime: This regime is jointly controlled by physical mechanisms A and B, and is known as the mesoregime. The structure of the mesoregime shows the alternation of two states, controlled by the compromise in the competition between mechanism A and mechanism B. In this regime, the system structure conforms to the multi-objective variational condition: A → min and B → min simultaneously.

A-dominated regime: As the external conditions change, mechanism B disappears and mechanism A alone dominates the system. In this case, the system structure characteristics are simple and in line with: A = min.

B-dominated regime: As the external conditions change in the opposite direction, mechanism A disappears and mechanism B dominates the system; the
system structure now conforms to: B = min.

Most importantly, the transition between the A, A–B, and B regimes is often accompanied by sudden changes in the system's characteristics and function.

The problem handled by deep learning can often be regarded as a complex system, and the correlation between the input and output of such a system is usually modeled as a nonlinear nested function. Using mesoscience theory to examine existing deep learning models, this paper proposes the following research and application mode of mesoscience-based AI. Supposing that there is a huge training dataset, and that we want to establish a model expressing the inherent laws of the dataset through deep learning, then, according to the concept and logic of mesoscience, the following steps should be taken:

(1) Analyze the data to determine how many levels they involve.

(2) For each level, analyze the existence of the three regimes.

(3) If the level belongs to regime A or regime B, the structure is simple and can be solved by existing deep learning techniques.

(4) If it is in the A–B mesoregime, the system has a significant spatiotemporal dynamic structure. It is necessary to analyze its dominant physical mechanisms first; next, use a multi-objective variational model for the two or more mechanisms, together with classical gradient descent, to help train the model.

(5) After analyzing each level, carry out association and integration among the different levels.

Examining the above steps, we find that, for problems belonging to regime A or regime B, the extreme conditions are relatively simple from the mesoscience perspective; thus, existing deep learning techniques can be used to iterate quickly to the solution and establish the mathematical model. However, for problems in the A–B mesoregime, the correlation between the input and output is controlled by multiple (at least two) physical mechanisms.
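The compromise-in-competition idea in step (4) can be illustrated with a deliberately minimal sketch. Here `mechanism_A`, `mechanism_B`, and the scalarization weight `alpha` are invented toy stand-ins (not the paper's EMMS formulation): two objectives compete, and gradient descent on their weighted sum settles on a compromise between the two single-mechanism optima.

```python
# Illustrative sketch only: a toy version of step (4), where two competing
# "dominant mechanisms" are expressed as objectives A(w) and B(w), and a
# compromise is sought by gradient descent on a weighted (scalarized)
# combination. All functions and constants here are hypothetical.

def mechanism_A(w):          # mechanism A favors w near 1
    return (w - 1.0) ** 2

def mechanism_B(w):          # mechanism B favors w near 3
    return (w - 3.0) ** 2

def grad(f, w, eps=1e-6):    # central-difference gradient; keeps the sketch dependency-free
    return (f(w + eps) - f(w - eps)) / (2 * eps)

def compromise(alpha=0.5, w=0.0, lr=0.1, steps=200):
    """Minimize alpha*A + (1 - alpha)*B, a scalarized multi-objective problem."""
    for _ in range(steps):
        g = alpha * grad(mechanism_A, w) + (1 - alpha) * grad(mechanism_B, w)
        w -= lr * g
    return w

# With equal weights, the compromise lands between the single-mechanism
# optima (1.0 and 3.0), loosely mirroring the A-B "mesoregime":
print(round(compromise(alpha=0.5), 2))  # 2.0
```

Setting `alpha` to 1 or 0 recovers the A-dominated (w → 1) or B-dominated (w → 3) limiting regime, which is the qualitative point of the three-regime picture above.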
Therefore, a time-consuming parameter-learning procedure is inevitable if the conventional deep learning method is used. Alternatively, if the physical mechanism is first decomposed according to the concept of mesoscience, and the multi-objective variational method is adopted to analyze the different control mechanisms, coupled with the classical backpropagation algorithm, a deep learning solution satisfying the error condition can be obtained more quickly.

For complex problems dealt with by deep learning, if the control mechanism is first analyzed at the physical level, the model can be established following the above steps. This helps to speed up model training and facilitates a deeper understanding of the physical nature of the system. A general way to address the intrinsic problems of AI may be to introduce the analytical and processing means of mesoscience into deep learning, such as region decomposition, identification of dominant mechanisms, and the multi-objective variational method.

4. The problem-solving paradigm of AI can be improved by mesoscience

A general flowchart of AI theoretical and application research based on deep learning is shown in Fig. 2. The main steps can be summarized as follows:

(1) Collecting training data: Gather (sufficient) data from application scenarios (often complex systems), and label the dataset if supervised learning is involved.

(2) Constructing a deep neural network: Choose a suitable neural architecture and optimization algorithm to train a statistical model that can capture the potential patterns hidden in the dataset.

(3) Applying the model: Predict results for new data using the well-trained model.

The essential step in this flowchart is constructing the deep neural network. At present, researchers rely entirely on their own experience and intuition to complete this step, due to the "black box" issue inherent in artificial neural networks. To this end, we suggest that mesoscience principles and methods
could be considered for the construction of a deep neural network; the resulting improved flowchart is shown in Fig. 3.

Solving the problems of complex systems is a critical goal of AI: as illustrated by Figs. 2 and 3, complex systems provide both the application scenarios and the massive datasets for AI. The human brain is also a complex system. On the one hand, brain science [37], that is, the study of the material basis and mechanism of human thinking, is an important support for the future development of AI. On the other hand, improving our understanding of complex systems will also help brain science research. The recent success of the deep learning technique can be regarded as a mathematical success. If the research results of brain science can be integrated into AI in the future, this will inevitably and significantly promote the research and application of AI.

Brain scientists have been trying to reveal the secrets of the human brain, not only from the perspective of biology and anatomy, but also in terms of the development of the cognitive mechanism. By combining this knowledge of how intelligence occurs with advanced computer hardware and software technology, it may be possible to build an "artificial brain" comparable to its human counterpart. However, after receiving the corresponding education and training, different people use differing abilities to solve problems with their brains, even though their brains have essentially identical structure and functions. The key is that when a problem arises, people make use of their understanding of the problem's physical nature, together with their brain's reasoning and induction abilities, to obtain the correct solution within a limited time. Therefore, the ability of AI to solve practical problems should depend on the progress of brain science, the development of information technology (IT), the understanding of the physical nature of the problems, and the effective integration and coupling among them.

Although mesoscience originates from
the field of chemical engineering, its basic principles are pervasively applicable to other complex systems. The core concept of mesoscience is to find the multilevel correlations and multiscale associations in a system, and to identify the mesoregimes and their dominant physical mechanisms at the different levels; the multi-objective variational method is then used to seek the law of compromise in the competition of the dominant mechanisms in order to solve the problem. In the new paradigm illustrated by Fig. 3, mesoscience plays an important role in improving the model architecture and learning algorithm, in addition to improving the related computing hardware and computational methods.

From the mesoscience concept to AI applications, many issues remain to be explored. For example, Google's AlphaGo Fan [38] adopted deep reinforcement learning, which integrates the perception ability of deep learning with the decision-making ability of reinforcement learning, to beat the human Go world champion. By combining the deep reinforcement learning technique with the Monte Carlo search strategy, AlphaGo assesses the current board situation through the value network to reduce the search depth, and uses the policy network to reduce the search width, in order to improve the search efficiency. AlphaGo is a successful application example of deep reinforcement learning. From the perspective of system structural analysis, deep reinforcement learning can be divided into three levels: ① tens of thousands of perceptrons; ② several deep learning networks; and ③ the deep reinforcement learning strategy. These levels coincide with the three scales in mesoscience: ① the element scale; ② the mesoscale; and ③ the system scale.

Fig. 2. Schematic of existing AI research and applications.

It is worth investigating whether it is possible to directly apply such analytical methods to deep reinforcement learning. Notably, DeepMind has developed four main versions of AlphaGo: Fan, Lee, Master, and Zero. The earlier
versions of AlphaGo, such as Fan and Lee [38], were trained by both supervised learning and reinforcement learning, while the latest version, AlphaGo Zero [39], was trained solely by self-play reinforcement learning without any human knowledge, and uses a single deep neural network rather than separate policy and value networks. Here, we take only AlphaGo Fan as an example, for two reasons. First, AlphaGo Fan [38] is the most complicated version, and this paper focuses on analyzing complex systems; thus, AlphaGo Fan is the most typical case among the four versions. Second, no matter whether the policy and value networks are separate (e.g., AlphaGo Fan and Lee [38]) or merged (e.g., AlphaGo Zero [39]), they still correspond to the mesoscale in accordance with the mesoscience concept.

Another example is the generative adversarial network (GAN) [40], one of the most popular and successful deep learning models. A GAN performs learning tasks by means of a mutual game between a generative model and a discriminative model. The GAN's goal is to use the generative model, with the help of the discriminative model, to generate pseudo data consistent with the distribution of the real data. The two models also have their own goals: the generative model attempts to generate data that can deceive the discriminative model, while the discriminative model strives to distinguish the generated data from the real data. In the process of training the GAN, the two models restrain each other, and each tries to pull in the direction of its own advantage. Finally, under the constraint of the GAN objective function, the two models reach equilibrium and compromise with each other. If the behavior of the two models is regarded as equivalent to the two dominant mechanisms, A and B, in the mesoregime, then the GAN's training is a process of compromise in the competition of two dominant mechanisms in the mesoregime. In this way, the spirit of mesoscience may be beneficial for training a GAN model and further boosting
its applications.

Through an analysis of the progress of AI and big data during the past years, two conclusions can be drawn, as shown in Fig. 3: ① With the continuous development of brain science, the working principle of the human brain is gradually being revealed, and a breakthrough could be realized in AI using such achievements. ② Big data has its own complexity; in order to tackle the complexity behind big data and build a physical model conforming to objective laws, it is necessary to identify the physical mechanism behind the complexity. These two aspects are logically consistent; that is, exploring the physical mechanisms of complex systems and making effective use of them are the keys to dealing with complexity. Reflecting this logic, this paper advocates applying the principles and methods of mesoscience to AI.

5. Conclusion

The emergence of big data, along with the advancement of computing hardware, has prompted great developments in AI, leading to its application in many fields. However, due to certain problems inherent in deep learning, the interpretability of deep learning is limited. Although mesoscience originates from chemical engineering, its analytical methods, which include multilevel and multiscale analysis as well as the idea of compromise in the competition of dominant mechanisms in the mesoregime, can also be applied to other complex systems. In recent years, mesoscience has achieved successful applications in different fields, and is expected to provide a novel concept for improving the interpretability of deep learning.

At present, the proposed mesoscience-based AI is a preliminary research idea, and its verification and expansion require the joint effort of researchers from various disciplines. In particular, exploration of its specific applications is required in the future.

Acknowledgements

We would like to thank Dr. Wenlai Huang, Dr. Jianhua Chen, and Dr. Lin Zhang for valuable discussions. We thank the editors and reviewers for their valuable comments about this
article. We gratefully acknowledge the support of the National Natural Science Foundation of China (91834303).

Compliance with ethics guidelines

Li Guo, Jun Wu, and Jinghai Li declare that they have no conflict of interest or financial conflicts to disclose.

Fig. 3. Schematic of mesoscience-based AI research and applications.

References

[1] The State Council of China. Development plans of new generation artificial intelligence; 2017.
[2] European Commission. Artificial intelligence for Europe; 2018.
[3] UK Government Office for Science. Artificial intelligence: opportunities and implications for the future of decision making; 2016.
[4] National Science and Technology Council, Networking and Information Technology Research and Development Subcommittee. The national artificial intelligence research and development strategic plan. Washington: Executive Office of the President of the United States; 2016 Oct.
[5] Chang CL, Lee RC. Symbolic logic and mechanical theorem proving. New York: Academic Press; 1973.
[6] Puppe F. Systematic introduction to expert systems. Berlin: Springer-Verlag; 1993.
[7] Jordan MI, Mitchell TM. Machine learning: trends, perspectives, and prospects. Science 2015;349(6245):255–60.
[8] Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Proceedings of the Neural Information Processing Systems; 2012 Dec 3–8; Lake Tahoe, NV, USA. San Diego: Neural Information Processing Systems Foundation; 2012.
[9] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In: Proceedings of the 3rd International Conference on Learning Representations (ICLR); 2015 May 7–9; San Diego, CA, USA; 2015. arXiv:1409.1556.
[10] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016 Jun 27–30; Las Vegas, NV, USA. New York: IEEE; 2016.
[11] Hinton G, Deng L, Yu D, Dahl G, Mohamed A, Jaitly N, et al. Deep neural networks for acoustic modeling in speech
recognition. IEEE Signal Process Mag 2012;29(6):82–97.
[12] Sainath TN, Mohamed A, Kingsbury B, Ramabhadran B. Deep convolutional neural networks for LVCSR. In: Proceedings of the 38th IEEE International Conference on Acoustics, Speech and Signal Processing; 2013 May 26–31; Vancouver, Canada. New York: IEEE; 2013.
[13] Ma J, Sheridan RP, Liaw A, Dahl GE, Svetnik V. Deep neural nets as a method for quantitative structure–activity relationships. J Chem Inf Model 2015;55(2):263–74.
[14] Ciodaro T, Deva D, de Seixas JM, Damazio D. Online particle detection with neural networks based on topological calorimetry information. J Phys Conf Ser 2012;368:012030.
[15] Helmstaedter M, Briggman KL, Turaga SC, Jain V, Seung HS, Denk W. Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature 2013;500:168–74.
[16] Shallue CJ, Vanderburg A. Identifying exoplanets with deep learning: a five-planet resonant chain around Kepler-80 and an eighth planet around Kepler-90. Astron J 2018;155:94.
[17] Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:115–8.
[18] Leung MK, Xiong HY, Lee LJ, Frey BJ. Deep learning of the tissue-regulated splicing code. Bioinformatics 2014;30(12):i121–9.
[19] Xiong HY, Alipanahi B, Lee LJ, Bretschneider H, Merico D, Yuen RKC, et al. The human splicing code reveals new insights into the genetic determinants of disease. Science 2015;347(6218):1254806.
[20] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521:436–44.
[21] Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature 1986;323(6088):533–6.
[22] Vapnik V. The nature of statistical learning theory. New York: Springer; 1998.
[23] Elsken T, Metzen JH, Hutter F. Neural architecture search: a survey. J Mach Learn Res 2019;20:1–21.
[24] Mendoza H, Klein A, Feurer M, Springenberg JT, Hutter F. Towards automatically-tuned neural networks. In: Proceedings of the International Conference on Machine Learning; 2016 Jun 19–24; New York, NY, USA. JMLR Workshop
Proc 2016;64:58–65.
[25] Zoph B, Le QV. Neural architecture search with reinforcement learning. In: Proceedings of the 5th International Conference on Learning Representations; 2017 Apr 24–26; Toulon, France. La Jolla: ICLR; 2017.
[26] Real E, Aggarwal A, Huang Y, Le QV. Aging evolution for image classifier architecture search. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence; 2019 Jan 27–Feb 1; Honolulu, HI, USA. Palo Alto: AAAI; 2019.
[27] Zeiler M, Fergus R. Visualizing and understanding convolutional networks. In: Proceedings of the 13th European Conference on Computer Vision; 2014 Sep 6–12; Zurich, Switzerland. Cham: Springer; 2014.
[28] Li J, Ge W, Yang N, Liu X, Wang L, He X, et al. From multiscale modeling to meso-science. Berlin: Springer; 2013.
[29] Li J, Tung Y, Kwauk M. Method of energy minimization in multi-scale modeling of particle–fluid two-phase flow. In: Proceedings of the 2nd International Conference on Circulating Fluidized Beds; 1988 Mar 14–18; Compiègne, France. Circ Fluid Bed Technol; 1988:89–103.
[30] Yang N, Wu Z, Chen J, Wang Y, Li J. Multi-scale analysis of gas–liquid interaction and CFD simulation of gas–liquid flow in bubble columns. Chem Eng Sci 2011;66(14):3212–22.
[31] Wang L, Qiu X, Zhang L, Li J. Turbulence originating from the compromise-in-competition between viscosity and inertia. Chem Eng J 2016;300:83–97.
[32] Li J, Zhang Z, Ge W, Sun Q, Yuan J. A simple variational criterion for turbulent flow in pipe. Chem Eng Sci 1999;54(8):1151–4.
[33] Ren Y, Gao J, Xu J, Ge W, Li J. Key factors in chaperonin-assisted protein folding. Particuology 2012;10(1):105–16.
[34] Huang WL, Li J. Mesoscale model for heterogeneous catalysis based on the principle of compromise in competition. Chem Eng Sci 2016;147:83–90.
[35] Li J, Huang W, Chen J, Ge W, Hou C. Mesoscience based on the EMMS principle of compromise in competition. Chem Eng J 2018;333:327–35.
[36] Li J, Huang W. From multiscale to mesoscience: addressing mesoscales in mesoregimes of different levels. Annu Rev Chem Biomol Eng 2018;9:41–60.
[37] Kandel ER, Schwartz JH, Jessell TM, Siegelbaum SA, Hudspeth AJ, Mack S. Principles of neural science. 5th ed. New York: McGraw-Hill Education; 2012.
[38] Silver D, Huang A, Maddison CJ, Guez A, Sifre L, van den Driessche G, et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016;529(7587):484–9.
[39] Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, et al. Mastering the game of Go without human knowledge. Nature 2017;550(7676):354–9.
[40] Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. In: Proceedings of the Advances in Neural Information Processing Systems; 2014 Dec 8–13; Montréal, Canada. San Diego: Neural Information Processing Systems Foundation; 2014.
Description Logic

3 Research Progress in Description Logics
◆ Foundational research on description logics
This line of work studies the constructors of description logics and the basic problems of representation and reasoning, such as satisfiability, subsumption checking, consistency, and decidability. Usually, further constructors are added on top of the basic logic ALC, such as number restrictions, inverse roles, functional roles, and role composition, along with reasoning problems over the TBox and ABox, subsumption-checking algorithms, and so on. Schmidt-Schauß and Smolka first established a tableau algorithm for the description logic ALC, which decides the satisfiability of ALC concepts (a problem shown to be PSPACE-complete).
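A tableau procedure of the kind pioneered by Schmidt-Schauß and Smolka can be sketched for the TBox-free case. The tuple encoding, function name, and rule ordering below are our own illustrative choices, not the original presentation:

```python
# A minimal sketch of a tableau-style satisfiability test for (TBox-free) ALC
# concepts in negation normal form. Concepts are encoded as tuples:
#   ('atom', A) | ('not', A) | ('and', C, D) | ('or', C, D)
#   ('some', r, C) = exists r.C      ('all', r, C) = forall r.C

def satisfiable(label):
    """Return True iff the set `label` of ALC concepts is jointly satisfiable."""
    label = set(label)
    # conjunction rule: replace each C ⊓ D by its conjuncts
    while any(c[0] == 'and' for c in label):
        c = next(c for c in label if c[0] == 'and')
        label.remove(c)
        label.update([c[1], c[2]])
    # clash test on atomic concepts: A together with not-A
    atoms = {c[1] for c in label if c[0] == 'atom'}
    negs = {c[1] for c in label if c[0] == 'not'}
    if atoms & negs:
        return False
    # disjunction rule: branch on some C ⊔ D
    for c in label:
        if c[0] == 'or':
            rest = label - {c}
            return satisfiable(rest | {c[1]}) or satisfiable(rest | {c[2]})
    # existential rule: each exists r.C spawns an r-successor holding C
    # plus the fillers of every forall r.D in the current label
    for c in label:
        if c[0] == 'some':
            succ = {c[2]} | {d[2] for d in label if d[0] == 'all' and d[1] == c[1]}
            if not satisfiable(succ):
                return False
    return True

print(satisfiable([('and', ('atom', 'Man'), ('not', 'Man'))]))                   # False
print(satisfiable([('some', 'r', ('atom', 'A')), ('all', 'r', ('not', 'A'))]))   # False
print(satisfiable([('or', ('atom', 'A'), ('not', 'A'))]))                        # True
```

Without a TBox no blocking is needed: each successor label mentions strictly smaller concepts, so the recursion terminates.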
The relationship between subsumption and satisfiability
C ⊑ D iff C ⊓ ¬D is unsatisfiable. C ⊑_T D iff C ⊓ ¬D is unsatisfiable with respect to T. C is consistent (satisfiable) with respect to T iff C ⋢_T A ⊓ ¬A.
Advanced Artificial Intelligence
Chapter 2 Logic for Artificial Intelligence
Part II
Shi Zhongzhi
Institute of Computing Technology, Chinese Academy of Sciences
Description Logics
Outline
◆ What is description logic?
◆ Why use description logic?
◆ Research progress in description logics
◆ Architecture of description logics
◆ Constructors of description logics
◆ Reasoning problems in description logics
◆ Our work
◆ Is C consistent with respect to the TBox T?
That is, check whether there is a model I of T such that C^I ≠ ∅.
◆ Is the knowledge base <T, A> consistent?
That is, check whether there is a model (an interpretation) I of <T, A>.
Concept Satisfiability (2)
In addition, there are two constructors analogous to the universal set (true) and the empty set (false) in FOL:
⊤ (top): ⊤^I = Δ^I (e.g., Male ⊔ ¬Male)
⊥ (bottom): ⊥^I = ∅ (e.g., Man ⊓ ¬Man)
Formal Analysis of Business Processes Based on TLA

Abstract: This paper first surveys the main research on the formal analysis and verification of business processes, and sets out the advantages of TLA-based formal analysis of business processes. It then discusses how to extend the TLA theoretical framework and, taking BPEL as an example, studies how to translate mainstream business-process description languages.

Keywords: temporal logic of actions; business process; formal analysis; BPEL
CLC number: TP301  Document code: A  Article ID: 1007-9599 (2013) 07-0000-02

1 Introduction

The temporal logic of actions (TLA) [1,2] is a logical method proposed by Leslie Lamport in 1990, built on a logic of actions and linear temporal logic. Through the work of Lamport and other researchers, and the development of checking tools at Compaq and Microsoft, TLA, its specification language TLA+ [1,2], and its model checker TLC [1,2] have gradually been perfected. This paper discusses how to use the temporal logic of actions to formally analyze business processes in an e-commerce environment.
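The TLA style of specification (an initial-state predicate plus a next-state relation, checked by exhaustively exploring reachable states as TLC does) can be illustrated with a deliberately small sketch. The "order process" states, the transition relation, and the invariant below are hypothetical examples of ours, not TLA+ syntax:

```python
# Toy sketch of a TLA-style specification: Init (initial states) and a
# next-state relation, with a TLC-like reachability check of an invariant.
# The business process modeled here is invented for illustration.

INIT = [{"status": "new", "paid": False}]

def next_states(s):
    """Next-state relation of the toy order process."""
    succ = []
    if s["status"] == "new":
        succ.append({"status": "paid", "paid": True})
        succ.append({"status": "cancelled", "paid": False})
    if s["status"] == "paid":
        succ.append({"status": "shipped", "paid": True})
    return succ

def invariant(s):
    # Safety property: an order is never shipped without having been paid.
    return s["status"] != "shipped" or s["paid"]

def check(init, step, inv):
    """Explore all reachable states, in the spirit of the TLC model checker."""
    seen, frontier = [], [dict(s) for s in init]
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        if not inv(s):
            return False, s          # counterexample state
        seen.append(s)
        frontier.extend(step(s))
    return True, None

ok, bad = check(INIT, next_states, invariant)
print(ok)  # True: the invariant holds in every reachable state
```

A BPEL-to-TLA translation in the spirit of this paper would produce such an Init/Next pair mechanically from the process description; the checker itself is independent of how the relation was obtained.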
2 State of the art in the formal analysis and verification of business processes

Recent research has begun to focus on the specification and verification of business processes, for example using Petri nets, automata, and process algebras to specify and verify BPEL models of business service processes [3,4]. Xiaochuan Yi [5] used colored Petri nets to design and verify Web service processes: a process can be translated into an equivalent CPN model and then analyzed and verified with CPN Tools to check its correctness. Yanping Yang [6,7] translated BPEL into hierarchical colored Petri nets and then verified them with CPN Tools. Chun Ouyang [8] provided a fairly complete mapping from BPEL control flow to Petri nets. Xiang Fu [9] translated BPEL into automata, then into the Promela language, and performed verification with the model checker SPIN. Wombacher et al. [10] used automata extended with logical expressions to model businesses formally; Kochut et al. [11] proposed a Petri-net-based design and verification framework that supports the visualization, creation, and verification of BPEL processes; and reference [12] gives a complete, formal Petri-net semantics that automatically translates BPEL processes into Petri-net models, on which a variety of verification tools can perform automatic analysis.
Monitorability of LTL Properties

" " 反 证 法 。 假 设 L 不 可 监 视 , 则
y , p y , 对 任 意 非 空 开 集 V p , 都 有 V L 并 且 V L c 。 对 任 意 A , 令 x A.y ,则 A.p A.y,A.V A.p ,有 A.V .L 并 且 A.V .L c , 即 A.V L 并 且 A.V L c ,从而, L 不可监视。矛盾。
(⇒) Let p be a prefix with |p| ≥ 2, say p = A₀A₁⋯Aₖ (1 ≤ i ≤ k). Since L is monitorable, there is a nonempty open set V ⊆ ↑(A₁⋯Aₖ) (and hence A₀.V ⊆ ↑p) such that V ⊆ L or V ⊆ Lᶜ. Therefore, the nonempty open set A₀.V satisfies A₀.V ⊆ Σ.L or A₀.V ⊆ (Σ.L)ᶜ. For a prefix of length 1, i.e., |p| = 1 and p = A₀, there likewise exists a nonempty open set A₀.V ⊆ ↑A₀ contained in Σ.L or in (Σ.L)ᶜ. Therefore, Σ.L is monitorable.
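The good/bad-prefix idea underlying monitorability can be made concrete with a small sketch. The alphabet, the two properties, and the monitor functions below are invented toy examples of ours (not from this paper): a property is monitorable when every finite prefix can be extended to one on which a verdict is settled, and a runtime monitor reports that verdict as soon as such a prefix is seen.

```python
# Toy three-valued runtime monitors over finite prefixes.
# For the safety property "err never occurs" (G not-err), every finite prefix
# containing 'err' is a bad prefix, so the monitor can report False; otherwise
# the verdict stays inconclusive ('?'). For "eventually ok" (F ok), prefixes
# containing 'ok' are good prefixes and settle the verdict True.

def monitor_globally_not(letter, prefix):
    """Monitor G not-letter: False on a bad prefix, else inconclusive."""
    return False if letter in prefix else '?'

def monitor_finally(letter, prefix):
    """Monitor F letter: True on a good prefix, else inconclusive."""
    return True if letter in prefix else '?'

trace = ['init', 'work', 'err', 'work']
# verdicts after each observed step of the trace
g_verdicts = [monitor_globally_not('err', trace[:i + 1]) for i in range(len(trace))]
f_verdicts = [monitor_finally('ok', trace[:i + 1]) for i in range(len(trace))]
print(g_verdicts)  # ['?', '?', False, False]
print(f_verdicts)  # ['?', '?', '?', '?']
```

Both toy properties are monitorable in the sense used above: from any finite prefix, some extension reaches a good or bad prefix, even if (as in the second row) the prefix observed so far settles nothing.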
Starting from the semantics of LTL and the definition of monitorability, we prove the monitorability of LTL properties; to this end, we first define the prefixes and suffixes of infinite words.
1 Notation

In this paper, Σ denotes a nonempty finite alphabet, and we assume |Σ| ≥ 2. The elements of Σ are abstract symbols, called letters, and are denoted by capital letters A, B, C, .... A word over Σ is a finite or infinite sequence of letters of Σ, i.e., a word has the form p = A₀A₁⋯Aₙ (n ∈ ℕ) or
but it is not closed under U (until). Further, by strengthening the conditions, we obtain several sufficient conditions that ensure the monitorability of φ₁ U φ₂.

Key Words: monitorable; LTL formula; model checking; topology
Temporal RDF
Claudio Gutierrez¹, Carlos Hurtado¹, and Alejandro Vaisman²
¹ Department of Computer Science, Universidad de Chile, {cgutierr,churtado}@dcc.uchile.cl
² Department of Computer Science, Universidad de Buenos Aires, avaisman@dc.uba.ar

Abstract. The Resource Description Framework (RDF) is a metadata model and language recommended by the W3C. This paper presents a framework to incorporate temporal reasoning into RDF, yielding temporal RDF graphs. We present a semantics for temporal RDF graphs, a syntax to incorporate temporality into standard RDF graphs, an inference system for temporal RDF graphs, and complexity bounds showing that entailment in temporal RDF graphs does not yield extra asymptotic complexity with respect to standard RDF graphs, and we sketch a temporal query language for RDF.

1 Introduction

The Resource Description Framework (RDF) [14] is a metadata model and language recommended by the W3C for building an infrastructure of machine-readable semantics for data on the Web, a long-term vision known as the Semantic Web. In the RDF model, the universe to be modeled is a set of resources, essentially anything that can have a universal resource identifier (URI). The language to describe them is a set of properties, technically binary predicates. Descriptions are statements very much in the subject-predicate-object structure.
Both the subject and the object can be anonymous objects, known as blank nodes. In addition, the RDF specification includes a built-in vocabulary with a normative semantics (RDFS) [4]. This vocabulary deals with inheritance of classes and properties, as well as typing, among other features, allowing the description of the concepts and relationships that can exist for a community of people and software agents, and enabling knowledge sharing and reuse.

Although some studies exist about addressing changes in an ontology [15], or the need for temporal annotations on Web documents [22], little attention has been paid to the problem of representing, updating, and querying temporal information in RDF. Time is present in almost any Web application. Indeed, as pointed out by Abiteboul [1], the modeling of time is one of the key primitives needed in a query language for Web and semistructured data. Thus, there is a clear need to apply temporal database concepts to RDF to allow metadata navigation across time.

Fig. 1. Initial RDF graph (left) and after some changes (right).

Consider an RDF graph describing information about a university, as of its creation time, Figure 1 (left). Students were classified as technical, graduate, or undergraduate, and the only graduate programs offered were at the level of 'Master' studies, like an MBA or MSc; 'Professional Diploma' was the only program offered at the technical level. As the university evolved, a Ph.D. program was created. Figure 1 (right) illustrates the new situation. Notice the dynamics of this example: students (e.g., John) can enroll in one program (e.g., Undergraduate), then shift to another one (e.g., Master), and so on. The figures show that the impact of disregarding the time dimension is twofold: on the one hand, when a change occurs, a new metadata document must be created (and the current document dropped); on the other hand, queries asking for past states of the metadata cannot be supported. For instance, we cannot ask for the programs offered at the time when the university was created.

1.1 Problem Statement: Introducing Time into RDF

Generally speaking, a temporal database is a repository of temporal information. Although temporal databases were initially studied for adding the time dimension to relational databases, as new data models emerged, temporal extensions to these models were also proposed (see Section 1.2). We next discuss the main issues that arise when extending RDF with temporal information.

Versioning vs. Time Labeling. There are two mechanisms for adding the time dimension to non-temporal RDF graphs: labeling and versioning (following the timestamp and snapshot models, respectively). The former consists in labeling the elements subject to change (i.e., triples). The latter is based on maintaining a snapshot of each state of the graph: for instance, each time a triple changes, a new version of the RDF graph is created, and the past state is stored somewhere. Although both models are equivalent, versioning appears to be unsuitable for queries of the form "all time instants where Φ holds in the database."

There are at least two temporal dimensions to consider when dealing with temporal databases: valid time and transaction time. Valid time is the time when data is valid in the modeled world; transaction time is the time when data is actually stored in the database. The versioning approach captures transaction time, while labeling is mostly used to represent valid time. The approach we present in this paper supports both time dimensions.

Fig. 2. A temporal RDF graph accounting for the evolution of the university ontology.

In summary, we believe that for RDF data, labeling is better than versioning, because (a) it preserves the spirit of the distributed and extensible nature of RDF, and (b) in scenarios where changes are frequent and affect only a few elements of the document, creating a new physical version of the graph each time an update occurs may lead to large overheads when processing temporal queries that span multiple versions.

Time Points vs. Time Intervals. We will work with
the point-based temporal domain for defining our data model and query language,but we will encode time-points in intervals when possible,for the sake of clarity.We will consider time as a discrete,linearly ordered domain,as usual in virtually all temporal database applications.An ordered pair[a,b]of time points,with a≤b,denotes the closed interval from a to b.Figure2shows a temporal RDF graph for the university example above.The arcs in the graph are labeled with their interval of validity.3 For example,the interval[0,Now]says that the triple(technical,sc,student)is valid from the document’s creation time to the current time.Also,note that the figure shows that John was an Undergraduate student in the interval[0,10],and now he is a PhD student.Vocabulary for Temporal Labeling Temporal labeling can be implemented within the RDF specification,making use of a simple additional vocabulary,as Figure3 shows.As we adopted the point-based,discrete and linearly ordered temporal domain,the left and right hand sides of Figure3are equivalent.We will use both representations indistinctly.Moreover,we define constructs that allow moving between intervals and time instants as follows:the instants depicted in Figure3 (left)can be encoded in an interval as shown in Figure3(right).Both alternatives will be used in the query language.3Note that the standard graph(ical)representation of an RDF graph is not the most faithful to convey the idea of statements(triples)being labeled by a temporal ele-ment.Technically,temporal labels should be attached to a whole subgraph u p→v, and not only to an arc.aFig.3.Point-based labeling(left)and Interval-based labeling(right). 
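The equivalence between the instant-based and the interval-based encodings can be sketched in a few lines of plain Python (ours, not part of the paper's formalism): a temporal triple (a, b, c) : [t] is modeled as a pair ((a, b, c), t).

```python
def to_instants(triple, interval):
    """Expand (a,b,c):[t1,t2] into the set {(a,b,c):[t] | t1 <= t <= t2}."""
    t1, t2 = interval
    return {(triple, t) for t in range(t1, t2 + 1)}

def to_intervals(labeled):
    """Collapse instant labels back into maximal closed intervals per triple."""
    by_triple = {}
    for triple, t in labeled:
        by_triple.setdefault(triple, set()).add(t)
    result = {}
    for triple, instants in by_triple.items():
        ts = sorted(instants)
        intervals, start = [], ts[0]
        for prev, cur in zip(ts, ts[1:]):
            if cur != prev + 1:          # gap: close the current interval
                intervals.append((start, prev))
                start = cur
        intervals.append((start, ts[-1]))
        result[triple] = intervals
    return result

g = to_instants(("technical", "sc", "student"), (0, 3))
print(to_intervals(g))  # {('technical', 'sc', 'student'): [(0, 3)]}
```

Round-tripping through `to_instants` and `to_intervals` is the discrete, linearly ordered equivalence the text appeals to.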
Temporal Entailment. An RDF graph can be regarded as a knowledge base from which new knowledge, i.e., other graphs, may be entailed. Entailment in a temporal setting is slightly more involved in the RDF case than in the standard database case. In principle, one may be tempted to define the semantics as in temporal relational databases, i.e., defining the temporal database as the union of all of its snapshots. (A snapshot at time t of a temporal RDF graph G is the corresponding subgraph formed by the triples labeled by an instant t.) Blank nodes impose some constraints on this naive approach. For example, each of the three snapshots of Figure 4 (right) entails the corresponding snapshot of Figure 4 (left). However, the whole graph of Figure 4 (left) cannot be entailed by the graph of Figure 4 (right). Indeed, the graph of Figure 4 (left) states that there is an anonymous object X in the triple (a, b, X) at times 3 and 4, which is not the case for the other graph.

Temporal Query Language. Regarding query languages in temporal databases, there are basically two choices for defining the temporal domain: the point-based and the interval-based temporal domains, yielding different query languages [20, 3]. In the point-based approach, temporal variables in query languages refer to individual time instants, while in the interval-based domain, variables in the queries range over intervals, making queries more complicated and unnatural. In any case, one can move easily between these two domains.

1.2 Related Work

The RDF model was introduced five years ago as a W3C recommendation [14].
Formal work in RDF includes the study of formal aspects of RDF data and query languages [10, 21], considering RDF features like entailment, the presence of blank nodes, reification, premises in queries, and the RDFS vocabulary with predefined semantics. Several languages for querying RDF data have been proposed and implemented, some of them along the lines of traditional database query languages (e.g., SQL, OQL), others based on logic and rule languages. Good surveys are [13, 16]. To the best of our knowledge, there is still no formal study of temporality issues in RDF graphs and RDF query languages.

Temporal database management has been extensively studied, including data models, mostly based on the relational model, and query languages [19], leading to the TSQL2 language [18]. Beyond the relational model, managing historical semistructured data was first proposed by Chawathe et al. [6], who extended the Object Exchange Model (OEM) with the ability to represent updates and to keep track of them by means of "deltas." Later, Dyreson et al. [7] allowed annotations on the edges of the database graph. In the XML world, Amagasa et al. [2] introduced a temporal data model based on XPath for the first time. Dyreson [8] proposed an extension of XPath with support for transaction time by means of the addition of several temporal axes for specifying temporal directions, focusing on document versioning over the web in the absence of explicit time stamps.

Fig. 4. Temporal entailment: for each t the corresponding snapshots at t are equivalent, but the graph on the left is not entailed by the graph on the right.
Chien et al. [5] proposed update and versioning schemes for XML through an edit-based schema in which the most current version of the document is maintained, and reverse edit scripts allow moving backward in time. Gao et al. [9] introduced τXQuery, an extension to XQuery supporting valid time while keeping the data model unchanged. Mendelzon et al. [17] proposed a temporal model for XML, a temporal extension to XPath, and a novel indexing strategy for temporal XML documents. As in our approach, they use labeling, and a point-based temporal domain and query language. Finally, Visser et al. [22] proposed a temporal reasoning framework for the Semantic Web, which has been applied in BUSTER, an ontology-based prototype developed at the University of Bremen, supporting the so-called concept@location in time type of query.

1.3 Contributions

In this paper we present a framework to incorporate temporal reasoning into RDF, yielding temporal RDF graphs. In particular, we present the following contributions: (a) a semantics for temporal RDF graphs in terms of the semantics of non-temporal RDF and RDFS graphs; (b) a study of properties of temporal RDF graphs and the interplay between timestamp and snapshot semantics in temporal RDF graphs; (c) a syntax to incorporate this framework into standard RDF graphs, which includes a vocabulary and rules; the syntax uses the standard RDF vocabulary plus temporal labels; (d) a sound and complete inference system for temporal RDF graphs; (e) complexity bounds which show that entailment in temporal RDF graphs does not yield extra asymptotic time complexity with respect to standard RDF graphs; (f) a sketch of a temporal query language for RDF. For the sake of space, we do not include proofs in this version of the paper.

2 RDF Preliminaries

In this section we present a streamlined formalization of the RDF model following the W3C documents [14, 12, 4], along the lines of [10].

2.1 RDF Graphs

Assume there is an infinite set U (RDF URI references); an infinite set B = {Nj : j ∈ N} (blank nodes); and an infinite set L (RDF literals). A triple (v1, v2, v3) ∈ (U ∪ B) × U × (U ∪ B ∪ L) is called an RDF triple. In such a triple, v1 is called the subject, v2 the predicate and v3 the object. We often denote by UBL the union of the sets U, B and L.

An RDF graph (just graph from now on) is a set of RDF triples. A subgraph is a subset of a graph. The universe of a graph G, universe(G), is the set of elements of UBL that occur in the triples of G. The vocabulary of G is the set universe(G) ∩ (U ∪ L). We will use letters N, X, Y, ... to denote blank nodes, and a, b, c, ... for URIs and literals. A graph is ground if it has no blank nodes. Graphically we represent RDF graphs as follows: each triple (a, b, c) is represented by the labeled arc a →b c. Note that the set of arc labels can have non-empty intersection with the set of node labels.

A map is a function μ : UBL → UBL preserving URIs and literals, i.e., μ(u) = u and μ(l) = l for all u ∈ U and l ∈ L. Given a graph G, we define μ(G) as the set of all (μ(s), μ(p), μ(o)) such that (s, p, o) ∈ G. A map μ is consistent with G if μ(G) is an RDF graph, i.e., if s is the subject of a triple, then μ(s) ∈ UB, and if p is the predicate of a triple, then μ(p) ∈ U. In this case, we say that the graph μ(G) is an instance of the graph G. An instance of G is proper if μ(G) has fewer blank nodes than G. This means that either μ sends a blank node to a URI or a literal, or identifies two blank nodes of G. We will overload the meaning of map and speak of a map μ : G1 → G2 if there is a map μ such that μ(G1) is a subgraph of G2.

Two graphs G1, G2 are isomorphic, denoted G1 ≅ G2, if there are maps μ1, μ2 such that μ1(G1) = G2 and μ2(G2) = G1.

We define two operations on graphs. The union of G1, G2, denoted G1 ∪ G2, is the set-theoretical union of their sets of triples. The merge of G1, G2, denoted G1 + G2, is the union G1 ∪ G2′, where G2′ is an isomorphic copy of G2 whose set of blank nodes is disjoint with that of G1. Note that G1 + G2 is unique up to isomorphism.

2.2 RDFS Vocabulary

There is a set of reserved words defined in the RDF vocabulary description language, RDF
Schema [4] (just rdfs vocabulary for us) that may be used to describe properties like attributes of resources (traditional attribute-value pairs), and also to represent relationships between resources. It defines classes and properties that may be used for describing groups of related resources and relationships between resources.⁴ Classes are sets of resources; elements of a class are known as instances of that class. To state that a resource is an instance of a class, the property rdf:type may be used. The most important classes are (in brackets, the name we will use in this paper): rdfs:Resource [res], rdfs:Class [class], rdfs:Literal [literal], rdfs:Datatype [datatype], rdf:XMLLiteral [xmlLit], rdf:Property [property]. Properties are binary relations between subject resources and object resources. The built-in properties are: rdfs:range [range], rdfs:domain [dom], rdf:type [type], rdfs:subClassOf [sc], rdfs:subPropertyOf [sp].

3 Temporal RDF Graphs

In this paper we extend RDF graphs by allowing temporal elements to label triples. A temporal label is a temporal element t labeling a triple (a, b, c). For simplicity, and without loss of generality, we will work with single intervals instead of temporal elements. In an RDF graph, given a triple (a, b, c), the temporal element t represents the time period when the triple was valid, i.e., the valid time of the triple. At this time we do not deal with transaction time, which can be addressed in an analogous way.

3.1 Basic Definitions

In this section we define the notion of temporal RDF at a conceptual level.

Definition 1 (Temporal graph).
1. A temporal triple is an RDF triple with a temporal label (a natural number). We will use the notation (a, b, c) : [t]. The expression (a, b, c) : [t1, t2] is a notation for {(a, b, c) : [t] | t1 ≤ t ≤ t2}.
2. A temporal graph is a set of temporal triples. A subgraph is a subset of the graph.

For a temporal graph G, define the snapshot at time t as the RDF graph

G(t) = {(a, b, c) | (a, b, c) : [t] ∈ G}.

The underlying RDF graph of a temporal RDF graph G, denoted
u(G), is ∪t G(t), the union of the graphs G(t). For an RDF graph G, define G^t as the temporalization of all its triples by a temporal mark t, that is, G^t = {(a, b, c) : [t] | (a, b, c) ∈ G}.

⁴ We omit in this paper the vocabulary intended to describe lists, collections, some variations on these, as well as the vocabulary that helps document and describe other functionalities for which there is no normative semantics. The complete vocabulary can be consulted in [4].

The above definitions yield the following elementary consequences about the relationship between RDF graphs and temporal RDF graphs.

Lemma 1. Let G be an RDF graph, and G′ be a temporal RDF graph. Then: (1) (G^t)(t) = G; (2) (G′(t))^t ⊆ G′; and (3) G′ = ∪t (G′(t))^t.

Several issues concerning the definition of temporal RDF graphs are in order:
– Recall we use a temporal model where an interval [a, b] is of the form [a, a+1, ..., b] for a given unit of time, which we assume to be universal in this paper. The natural way to approach this issue is to specify, together with the temporal mark, the unit of time it represents. All the results given here extend without difficulty to this setting.
– Temporal triples do not belong to the RDF syntax. In the next section we introduce an RDF-compliant syntax for temporal triples, using a small temporal vocabulary.
– Source of a temporal statement: due to the extensible nature of the RDF model, it is possible to include the source of a temporal statement (i.e., who is the author of the temporal statement), and other properties that apply. Although our model (see next section) allows this, we will not study the semantic consequences of this extra information in this paper, but rather stay in the classic setting of temporal models.

3.2 Semantics

In what follows, we present the semantics for the notion of entailment for temporal graphs, based on the corresponding notion for RDF graphs.

Definition 2 (Temporal Entailment). Let G1, G2 be temporal RDF graphs.
– For ground temporal RDF graphs G1, G2, define G1 |=t G2 as G1(t) |= G2(t) for each t;
– For general
graphs, G1 |=t G2 iff there exist ground instances μ1(G1) and μ2(G2) such that (μ1(G1))(t) |= (μ2(G2))(t) for each t.

Note that the definition for ground graphs resembles classical temporal definitions:

Proposition 1. Let G1, G2 be temporal graphs. Then G1 |=t G2 implies G1(t) |= G2(t) for all t, and the converse is true for ground graphs.

In fact, the problems for general graphs are introduced by blank nodes and the notion of entailment. For example, G1(t) |= G2(t) for all t does not imply G1 |=t G2 (see Figure 4). We have the following issues:
– A blank node represents the same (unnamed) resource throughout the time range, rather than a sequence of different resources. This makes the behavior of temporal marks in temporal RDF different from the classical setting. Temporal marks here (contrary to temporal XML, for example) are not only a relation among fixed objects, but also among time-varying objects, the blank nodes. See the example in Figure 4.
– The notion of entailment for temporal RDF needs a basic arithmetic of intervals in order to combine the notions of temporality and deduction. For example, if we have (a, sc, c) : [2, 3] and (c, sc, d) : [2], then we should be able to derive (a, sc, d) : [2], but not (a, sc, d) : [3].

In the rest of this section, we show that the notions of closure, lean graph and core (fundamental for defining normal forms of this data) can be extended without difficulty to the temporal setting (compare the discussion in [10]). The closure of a temporal graph G, denoted tcl(G), is a maximal set of temporal triples G′ over universe(G) plus the RDF vocabulary such that G′ contains G and is equivalent to it.

Proposition 2 (Entailment for temporal graphs). Let G, G1, G2 be temporal RDF graphs. Then:
1. tcl(G) = ∪t (cl(G(t)))^t;
2. G1 |=t G2 if and only if tcl(G1) |=t G2.

A temporal graph G is lean if and only if there is no proper temporal subgraph G′ of G such that G |=t G′. The core of G, core(G), is a lean subgraph of G equivalent to it. With these notions, for a temporal RDF graph
G we can define, as in the case of RDF graphs, a notion of normal form, denoted nf_t(G), as follows: nf_t(G) = core(G′) for a temporal closure G′ of G. The computational complexities of computing the core and of testing whether a graph is lean are asymptotically the same as in the case of standard RDF graphs [10].

Proposition 3. Let G, G′ be graphs.
1. The problem of deciding if G′ is the closure of G is DP-complete.
2. The problem of deciding if G′ is the normal form of G is DP-complete.

4 Syntax and Deductive System for Temporal Graphs

We present a deductive system for temporal RDF. It is based on a sound and complete set of rules given in [12], plus three rules capturing temporal issues.

4.1 RDF Syntax of Temporal Triples

Definition 3 (Temporal vocabulary). The temporal vocabulary is the following: temporal (abbreviated as tpl), instant, interval, initial and final, all of type property, and now of type plain literal. The range of instant, initial and final is the set of natural numbers.

We will use the following notational shortcut: reif(a, b, c, X) denotes the set of triples (X, tsubj, a), (X, tpred, b), (X, tobj, c) (a kind of "temporal reification" of (a, b, c)).⁵

Definition 4 (Temporal triples and graphs). Temporal triples are the following graphs using the temporal vocabulary:
– (a, b, c), reif(a, b, c, X), (X, tpl, Y), (Y, instant, n), where n is a natural number; we will summarize this as (a, b, c) : [X, Y, n];
– (a, b, c), reif(a, b, c, X), (X, tpl, Y), (Y, interval, Z), (Z, initial, I), (Z, final, F), where I, F are natural numbers; we will summarize this as (a, b, c) : [X, Y, I, F].

A temporal graph is defined as a merge of a set of temporal triples.

Because RDF is extensible, nothing prevents the use of the blank nodes included in the definition as target or source of other properties beyond the temporal vocabulary. We also want a definition of temporal triple that is independent of the blank nodes occurring in the proposed syntactic definition; e.g., we want (a, b, c) : [X, Y, n] to be essentially equivalent to (a, b, c) : [n]. Both previous issues
are overcome in our syntax by adding certain rules which regulate the temporal vocabulary.

4.2 Rules

The set of rules is arranged in four groups. Groups A, B, C, and D are intended to describe the classical RDFS semantics, and we follow the approach in [10]. We omit another group of rules that has to do with internal relationships of the RDF model itself and that we do not consider in this paper. The novelty here is Group T (temporal rules), whose main objective is to standardize the interval version and the instant version, as well as to help define "absolute" temporal marks. (Rules are written premises / conclusion.)

GROUP A (Existential). For a map μ : G′ → G:
  G / G′

GROUP B (Subproperty).
  (a, type, property) / (a, sp, a)
  (a, sp, b), (b, sp, c) / (a, sp, c)
  (a, sp, b), (x, a, y) / (x, b, y)

GROUP C (Subclass).
  (a, type, class) / (a, sc, a)
  (a, sc, b), (b, sc, c) / (a, sc, c)
  (a, sc, b), (x, type, a) / (x, type, b)

GROUP D (Typing).
  (a, dom, c), (x, a, y) / (x, type, c)
  (a, range, d), (x, a, y) / (y, type, d)

GROUP T (Temporal).
  (i2t) (X, tpl, Y), (Y, interval, Z), (Z, initial, t1), (Z, final, t2) / {(X, tpl, Y), (Y, instant, n) : n ∈ [t1, t2]}
  (t2i) {(X, tpl, Y), (Y, instant, n) : n ∈ [t1, t2]} / (X, tpl, Y), (Y, interval, Z), (Z, initial, t1), (Z, final, t2)
  (abs) (a, b, c) : [X1, Y1, n1], (a, b, c) : [X2, Y2, n2] / (a, b, c) : [U, V, n1], (a, b, c) : [U, V, n2], with U, V fresh

Rules (i2t) (interval to instants) and (t2i) (instants to interval) are needed to standardize the interval version and the instant version, by making them equivalent. Rule (abs) essentially says that marks (instants) can be collected in a single node. This permits concentrating on temporal marks independently of other contexts in which the variables involving temporal vocabulary are immersed. The definition behaves well, in the sense of the following lemma.

⁵ We could have used here the standard reification vocabulary of RDF. We chose not to, in order to stress the fact that the notions presented in this paper are independent of any view one may have about the concept of reification in RDF.

Lemma 2.
1. There exist blank nodes X, Y such that G |=t (a, b, c) : [X, Y, t1, t2] if and only if there exist blank nodes U, V such that for all j with t1 ≤ j ≤ t2, G |=t (a, b, c) : [U, V, j].
2. There exist blank nodes X1, Y1, X2, Y2 such that G |=t (a, b, c) : [X1, Y1, t1] and G |=t (a, b, c) : [X2, Y2, t2] if and only if there exist blank nodes U, V such that G |=t (a, b, c) : [U, V, t1] and G |=t (a, b, c) : [U, V, t2].

For a temporal RDF graph G, define G^* as the RDF graph {(a, b, c) : [Xt, Yt, t] | (a, b, c) : [t] ∈ G}, where Xt, Yt are free blank variables, different for each t. Conversely, for each RDF graph G with temporal vocabulary, define G_* as the temporal graph {(a, b, c) : [t] | ∃X ∃Y (a, b, c) : [X, Y, t] ∈ G}.

Theorem 1.
1. Let G1, G2 be temporal RDF graphs. Then G1 |=t G2 implies G1^* |= G2^*.
2. Let G1, G2 be RDF graphs with temporal vocabulary. Then G1 |= G2 implies (G1)_* |=t (G2)_*.
3. Let G be a temporal RDF graph, and G′ an RDF graph with temporal vocabulary. Then (G^*)_* = G and G′ |= ((G′)_*)^*.

Now we can show that the syntax introduced captures the semantics of temporal RDF. The following deductive system, based on the rules presented, is sound and complete for entailment of RDF graphs with rdfs vocabulary.

Definition 5. Let G be a graph. For each rule r : A / B above, define G ⊢r G ∪ μ(B) iff there is a map μ : A → G. Also define G ⊢s G′ if and only if G′ is a subgraph of G. Define G ⊢ G′ if there is a finite sequence of graphs G1, ..., Gn such that (1) G = G1; (2) G′ = Gn; and (3) for each i, either Gi ⊢r Gi+1 for some rule r, or Gi ⊢s Gi+1.

The following theorem shows that one can give a syntactic characterization, over RDF graphs with temporal vocabulary, of entailment of temporal RDF graphs:

Theorem 2. For any pair of temporal RDF graphs G1, G2: G1 |=t G2 if and only if G1^* ⊢ G2^*.

Note that we cannot establish the theorem in its full generality, namely, to prove that for RDF graphs G1, G2 with temporal vocabulary, G1 ⊢ G2 if and only if (G1)_* |=t (G2)_*. (Both graphs in Figure 4 have identical (·)_*-images but are not ⊢-equivalent.) The previous theorem permits us to concentrate in the following sections on temporal RDF (instead of diving into syntactic
issues).

5 Query Language

In this section we present a query language for temporal RDF graphs, along with its semantics. We also present a brief study of the complexity of query processing.

5.1 The Query Language by Example

We will give the flavor of the query language using our running example, the database of Figure 2. Let us begin with a simple query: "Find students who have taken a Master course between year 2000 and now, and return them qualified by 21-century-student". This query can be expressed as:

(?X, type, 21-century-student) ← (?X, takes, ?C) : [?T], (?C, type, Master) : [?T], 2000 ≤ ?T, ?T ≤ Now.

This example query illustrates the need for a built-in arithmetic language in order to reason about time and intervals. Another important observation is that a temporal query may output a non-temporal RDF graph, as the previous query does. For the query asking for a snapshot of the graph at t1, we have:

(?X, ?Y, ?Z) ← (?X, ?Y, ?Z) : [t1].

Now consider the query "Students taking Ph.D. courses together, and the time instants when this occurred". For simplicity we express this as a point-based query; the translation of the result into intervals is straightforward.

(?X, together, ?Y) : [?T] ← (?X, type, Ph.D) : [?T], (?Y, type, Ph.D) : [?T].

Next, we give examples of queries that use temporal triples with intervals. The query "Time intervals when the IT Master was offered" can be expressed as follows:

(X, interval, Y), (Y, initial, ?Ti), (Y, final, ?Tf) ← (IT Master, sc, Prof.Master) : [?Ti, ?Tf].

Observe that the previous query returns a set of intervals. In order to retrieve maximal intervals we need a more subtle query, since their computation does not follow from the temporal rules. For the query "Compute the maximal interval when the triple (a, b, c) holds", we need the aggregate operators MAX and MIN.
(a, b, c) : [?T1, ?T2] ← (a, b, c) : [?Ti, ?Tf], ?T1 = MIN(?Ti), ?T2 = MAX(?Tf).

For a query asking for "Students applying for jobs at time t, after finishing their Ph.D. program in no more than 4 years", we have:

(?X, apply, job) ← (?X, type, Ph.D) : ⟨ti, tf⟩, tf − ti < 4, tf < t.

Here, the notation ⟨ti, tf⟩ denotes that ti and tf match the maximal interval for the corresponding triple, computed with the query given above.

5.2 Semantics and Complexity

Let V be a set of variables (disjoint from UBLT). Individual variables will be denoted ?X, ?Y, ?Z, etc. There is also a set of temporal variables Vt ⊂ V. The query language we define is analogous to the one presented by Gutierrez et al. [11]. A query is a temporal tableau, which is a pair (H, B ∪ A), where H and B are temporal RDF graphs with some elements of UBL replaced by variables in V and some elements of T replaced by variables in Vt, B has no blank nodes, and all the variables in H occur also in B. The set A has the usual arithmetic built-in predicates, such as <, >, =, over elements of Vt and T. We adopt the usual notion of safe rule from Datalog to prevent operations on infinite predicates. A rule is safe if all its variables are limited. A variable is limited if one of the following holds: the variable appears as an argument in a non-built-in predicate of the body; the variable X appears in a subgoal X = t (or t = X), where t is a constant in T; or the variable X appears in a subgoal X = Y (or Y = X), where Y is limited.

The semantics is the usual one in these cases. Given a temporal tableau (H, B ∪ A) and a temporal RDF graph G, for each matching of the graph pattern B in G, pick up the values of the variables and check whether they satisfy the built-in predicates in A. If this is the case, construct a pre-answer, which is the graph resulting from substituting the values of the variables in the head. Finally, the answer to the query is the union of all pre-answers.

We end this section by showing that the additional time dimension in our model does not play any relevant role in the complexity of query answering, that is, the query language preserves the tractability of answers. In order to do this, we consider the simpler problem of testing emptiness of the query answer set in the following forms: (1) Query complexity version: for a fixed database D, given a query q, is q(D) non-empty? (2) Data complexity version: for a fixed query q, given a database D, is q(D) non-empty?

Theorem 3. The evaluation problem is NP-complete for the query complexity version, and polynomial for the data complexity version.
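To make the matching semantics above concrete, here is a naive plain-Python sketch (ours, not the paper's evaluation algorithm) of point-based conjunctive matching over temporal triples, followed by a built-in arithmetic filter, in the spirit of the first example query of Section 5.1. All data values are illustrative.

```python
def match(patterns, graph, binding=None):
    """Enumerate variable bindings for a conjunction of temporal triple
    patterns (s, p, o, t) over a list of pairs ((s, p, o), t).
    Variables are strings starting with '?'."""
    binding = binding or {}
    if not patterns:
        yield binding
        return
    (ps, pp, po, pt), rest = patterns[0], patterns[1:]
    for (s, p, o), t in graph:
        b, ok = dict(binding), True
        for var, val in ((ps, s), (pp, p), (po, o), (pt, t)):
            if isinstance(var, str) and var.startswith("?"):
                if b.setdefault(var, val) != val:   # inconsistent rebinding
                    ok = False
                    break
            elif var != val:                        # constant mismatch
                ok = False
                break
        if ok:
            yield from match(rest, graph, b)

g = [(("John", "takes", "DB"), 5), (("DB", "type", "Master"), 5),
     (("Ann", "takes", "AI"), 12), (("AI", "type", "Master"), 12)]

# Pattern matching first, then the arithmetic built-in (here ?T <= 10).
answers = {b["?X"] for b in match([("?X", "takes", "?C", "?T"),
                                   ("?C", "type", "Master", "?T")], g)
           if b["?T"] <= 10}
print(answers)  # {'John'}
```

Each satisfying binding corresponds to a pre-answer; substituting it into a head pattern and taking the union over all bindings would produce the answer graph.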
Lecture 12-13: First-Order Logic
Epistemic commitment of various logics:
- Propositional logic: true / false / unknown
- First-order logic: true / false / unknown
- Temporal logic: true / false / unknown
- Probability theory: degree of belief 0…1
- Fuzzy logic: known interval value
CS561 - Lecture 11-12 - Macskassy - Fall 2010
Semantics / Interpretation

There is a correspondence between:
◦ functions, which return values
◦ predicates, which are true or false

Function: father_of(Mary) = Bill
Predicate: father_of(Mary, Bill)
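The same distinction can be sketched in a few lines of Python (ours, not from the slides): a function maps arguments to a value, while a predicate only answers true or false.

```python
# Function view: father_of maps a person to a value.
father_of = {"Mary": "Bill"}

# Predicate view: father_of is a set of tuples for which it holds.
fathers = {("Mary", "Bill")}

print(father_of["Mary"])            # Bill
print(("Mary", "Bill") in fathers)  # True
```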
Why first-order logic?
Pros and cons of propositional logic
- Propositional logic is declarative: pieces of syntax correspond to facts.
- Propositional logic allows partial / disjunctive / negated information (unlike most data structures and databases).
Lectures: MW 5:00-6:20pm, ZHS 159
Office hours: by appointment
Class page: /~macskass/CS561/Fall2010/
This class will use https:///webapps/login/ and the class webpage for:
- up-to-date information
- lecture notes
- relevant dates, links, etc.
Course material: [AIMA] Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig (3rd ed.)
Network Attack and Defense, Part 3: Vulnerabilities
Sources of Vulnerabilities
• Among the most frequently mentioned sources of security vulnerability problems in computer networks are:
  – design flaws
  – incorrect implementation
  – poor security management
  – social engineering
Formal Methods

• The use of the Internet brings security to the attention of the masses.
  – What kinds of problems can formal methods help to solve in security?
  – What problems will formal methods never help to solve?
What Are Software Vulnerabilities?
• A software vulnerability is an instance of a fault in the specification, development, or configuration of software such that its execution can violate the (implicit or explicit) security policy.
Security Vulnerabilities
• Security bugs: what can an attacker do?
  – avoid authentication
  – escalate privileges
  – bypass security checks
  – deny service (crash / hose configuration)
  – run code remotely
LTL and CTL Lecture Slides
LTL response patterns (scoped):
- Before R: <>R -> ((P -> (!R U (S & !R))) U R)
- After Q: [](Q -> [](P -> <>S))

We believe that almost all properties one wants to express about software lie in the intersection of LTL and CTL.
Motivation for Specification Patterns
Computation Tree Logic

[Diagrams: computation trees illustrating the CTL operators]
- AX p: p holds in all immediate successor states.
- EX p: p holds in some immediate successor state.
- A[p U q]: along every path from the current state, p holds until q holds.
- E[p U q]: along some path from the current state, p holds until q holds.
Example CTL Specifications
Response: a state/event P must always be followed by a state/event Q within a scope.
Chain Precedence: A sequence of state/events P1, …, Pn must always be preceded by a sequence of states/events Q1, …, Qm within a scope
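As a minimal sketch (ours, not from the slides) of the unscoped Response pattern, G(P -> F Q), checked on a finite trace of labeled states: every position where P holds must be followed, at that position or later, by one where Q holds.

```python
def response(trace, p, q):
    """Check the Response pattern G(p -> F q) on a finite trace.
    Each state in the trace is a set of atomic propositions."""
    return all(any(q in later for later in trace[i:])
               for i, state in enumerate(trace) if p in state)

print(response([{"P"}, set(), {"Q"}], "P", "Q"))  # True: P at 0 is answered at 2
print(response([{"Q"}, {"P"}, set()], "P", "Q"))  # False: P at 1 is never answered
```

On infinite computation trees this property would be model-checked (e.g., as the CTL formula AG(P -> AF Q)); the finite-trace version here only illustrates the pattern's intent.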