Accepted Papers (with Abstracts)
Session 01 - Formal requirement analysis and specification
  • 1.1 - Haitao Dan, Rob Hierons and Steve Counsell. Non-local Choice and Implied Scenarios

    A number of issues, such as non-local choice and implied scenarios, that arise in Message Sequence Charts (MSCs) have been investigated.

    However, existing research on these two issues disagrees about how they are related.

    In this paper, we analyse the relations among existing conditions for non-local choice and Closure Conditions (CCs) for implied scenarios.

    On the basis of this we propose a new condition for non-local choice derived from CCs of implied scenarios under safe realisability.

    Compared to the existing conditions, we argue that the new condition covers more cases. In addition, we formally show that the existence of non-local choices in an MSC specification results in implied scenarios at runtime, and that the appearance of implied scenarios according to the corresponding CC means there are non-local choices in the specification.

  • 1.2 - Faraz Hussain and Gary Leavens. temporaljmlc: A JML Runtime Assertion Checker Extension for Specification and Checking of Temporal Properties

    Most mainstream specification languages primarily deal with a program's functional behavior.

    However, for many common problems, besides the system's functionality, it is necessary to be able to express its temporal properties, such as the necessity of calling methods in a certain order.

    We have developed temporaljmlc, a tool that performs runtime assertion checking of temporal properties specified in an extension of the Java Modeling Language (JML).

    The benefit of temporaljmlc is that it allows succinct specification of temporal properties that would otherwise be tedious and difficult to specify.
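    temporaljmlc itself extends JML for Java programs; as a language-neutral illustration of what runtime checking of a temporal call-order property involves, the following is a minimal Python sketch (the ProtocolMonitor class, its states and event names are hypothetical, not part of the tool):

```python
class ProtocolMonitor:
    """Runtime monitor for a simple temporal property:
    'open' must precede any 'read', and 'close' ends the session."""

    def __init__(self):
        self.state = "init"
        # allowed transitions of the property automaton
        self.transitions = {
            ("init", "open"): "opened",
            ("opened", "read"): "opened",
            ("opened", "close"): "closed",
        }

    def observe(self, event):
        nxt = self.transitions.get((self.state, event))
        if nxt is None:
            raise AssertionError(
                f"temporal property violated: '{event}' in state '{self.state}'")
        self.state = nxt


m = ProtocolMonitor()
for ev in ["open", "read", "read", "close"]:
    m.observe(ev)          # conforming trace: no violation raised
print(m.state)             # → closed
```

    A runtime assertion checker in this style instruments each method call to feed an event to the monitor, so a violating call order fails immediately at the offending call.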

  • 1.3 - Emilia Katz and Shmuel Katz. User Queries for Specification Refinement Treating Shared Aspect Join Points

    We present an interactive semi-automatic procedure to help users refine their requirements formally and precisely, using knowledge the user possesses but does not notice as relevant and has difficulty formalizing.

    Questions in natural language are presented to the user, and augmentations to specifications, written in Linear Temporal Logic, are automatically created according to the answers.

    We apply our approach to a case study on specifying the desired aspect behavior in a delicate case when multiple aspects can share a join-point, i.e., be applied at the same state of base program computation.

    The series of questions used in the case study is a result of an in-depth analysis of semantics and mutual influence of aspects at a shared join-point, presented in the paper.

    Aspects sharing a join-point might, but do not have to, semantically interfere.

    Our analysis and specification refinement will enable programmers to distinguish between potential and actual interference among aspects at shared join-points, when aspects are modeled as state transition diagrams, and specifications are given as LTL assumptions and guarantees.

    The refined aspect specification, obtained from the interactive procedure we describe, enables modular verification and interference detection among aspects even in the presence of shared join-points.

  • 1.4 - Adrien de Kermadec, Frederic Dadeau and Fabrice Bouquet. Assessing the Quality of B Models

    This paper proposes to define and assess the notion of quality of B models aiming at providing an automated feedback on a model by performing systematic checks on its content.

    We define and classify classes of automatic verification steps that help the modeller determine whether the model is well written.

    This technique is defined in the context of "behavioral models" that describe the behavior of a system using the generalized substitutions mechanism.

    From these models, verification conditions are automatically computed and discharged using a dedicated tool.

    This technique has been adapted to the B notation, especially B abstract machines, and implemented within a tool interfaced with a constraint solver that is able to find counter-examples to invalid verification conditions.

Session 02 - Special track in memory of Geoff Dromey
  • 2.1 - Peter A. Lindsay. Behavior Trees: from Systems Engineering to Software Engineering

    Geoff Dromey's Behavior Engineering method provides a vital link between systems engineering processes and software engineering processes.

    It has proven particularly effective in industry when applied to large complex systems, to help understand the problem space and clarify system and software requirements.

    In this paper we examine what it is about Behavior Trees that makes them succeed as a system design notation and method. We compare the method with some of the most widely used system design methods, including State Transition Diagrams, Algorithmic State Machines, Object Oriented Design in some of its various manifestations, IDEF0, UML and SysML.
    The comparison draws on the Design-Methods Comparison Project undertaken by Bahill et al in 1998, and uses their Traffic Lights case study to illustrate the similarities with, and differences from, the Behavior Engineering method.

  • 2.2 - Kirsten Winter, Ian J. Hayes and Robert Colvin. Integrating Requirements: The Behavior Tree Approach

    Behavior Trees were invented by Geoff Dromey as a graphical modelling notation.

    Their design was driven by the wish to ease the task of capturing functional system requirements and to bridge the gap between informal language description and the formal model.

    Vital to Dromey's intention is the idea of incrementally creating the model out of its building blocks, the functional requirements.

    This is done by graphically representing each requirement as a mini Behavior Tree and incrementally merging the trees to form a more complete model of the system.

    In this paper we investigate the essence of this constructive approach to creating a model in general notation-independent terms and discuss its advantages and disadvantages.

    The result can be seen as a framework of rules and provides us with a semantic underpinning of requirements integration.

    We then use Behavior Trees as an example of how this framework can be put into practice.

  • 2.3 - Daniel Powell. Behavior Engineering - A Scalable Modeling and Analysis Method

    The impact of failing to develop a shared understanding of the requirements describing and constraining large, complex projects and programs with many, possibly distributed, stakeholders and suppliers is enormous.

    Traditional engineering methods provide little in the way of taming complexity when synthesising, analysing and communicating the requirements of such projects.

    The Behavior Engineering method, developed at Griffith University and employed on a number of large, complex and nationally critical defence, aerospace, transport and government projects and programs, addresses the problems of scale and complexity head on.

    The Behavior Engineering method is presented in this paper as an efficient and effective method for modeling, analysing, evaluating and communicating large requirements specifications comprising thousands of requirements as well as a method that facilitates the synthesis of quality requirements from architectural models and scenarios.

    It is demonstrated through analysis of industry data that Behavior Engineering facilitates the development and communication of a deep, accurate and holistic understanding of the system needs, significantly reducing the risk of failure to capture and preserve intent in the development of large and complex systems.

Session 03 - Program analysis
  • 3.1 - José Barros, Daniela da Cruz, Pedro Rangel Henriques and Jorge Sousa Pinto. Assertion-based Slicing and Slice Graphs

    This paper revisits the idea of slicing programs based on their axiomatic semantics, rather than using syntactic criteria.

    We show how the forward propagation of preconditions and the backward propagation of postconditions can be combined in a new slicing algorithm that is more precise than the existing specification-based algorithms.

    The algorithm is based on (i) a precise test for removable statements, and (ii) the construction of a "slice graph", a program control flow graph extended with semantic labels.

    It improves on previous approaches in two aspects: it does not fail to identify removable commands; and it produces the smallest possible slice that can be obtained (in a relative sense that will be made precise).

    The paper also reviews in detail, through examples, the ideas behind the use of preconditions and postconditions for slicing programs.

  • 3.2 - Michele Risi, Giuseppe Scanniello and Genoveffa Tortora. Architecture Recovery using Latent Semantic Indexing and K-Means: an Empirical Evaluation

    A number of clustering based approaches and tools have been proposed in the past to partition a software system into subsystems.

    Most of these approaches are semiautomatic, thus requiring human decisions to identify the best partition of software entities into clusters among the possible partitions.

    In addition, some approaches are conceived for software systems implemented using a particular programming language (e.g., C and C++).

    In this paper we present an approach to automate the partitioning of a given software system into subsystems.

    In particular, the approach first analyzes the software entities (e.g., programs or classes) and then computes the dissimilarity between these entities using Latent Semantic Indexing.

    Finally, software entities are grouped by iteratively applying the k-means clustering algorithm.

    The approach has been implemented in a prototype supporting tool, an Eclipse plug-in.

    Finally, to assess the approach and the plug-in, we have conducted an empirical investigation on three open source software systems implemented using the programming languages Java and C/C++.
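    The clustering step described above can be illustrated with a minimal sketch. This is not the authors' implementation: it is a plain k-means with deterministic seeding over toy 2-D vectors, standing in for entity coordinates that LSI would produce in concept space:

```python
def kmeans(points, k, iters=10):
    """Group entity vectors (e.g. LSI concept-space coordinates) into k
    clusters: a minimal k-means sketch with deterministic initialisation."""
    centroids = [list(p) for p in points[:k]]  # first k points as seeds
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to the nearest centroid (squared Euclidean)
            j = min(range(k), key=lambda c: sum((a - b) ** 2
                    for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # recompute the centroid as the mean of its members
                centroids[j] = [sum(xs) / len(cl) for xs in zip(*cl)]
    return clusters

# two obvious groups of 2-D "entities"
pts = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 4.9)]
groups = kmeans(pts, 2)
print(sorted(len(g) for g in groups))   # → [2, 2]
```

    An iterative application, as in the abstract, would rerun this with varying k (or recursively within clusters) to refine the recovered subsystem decomposition.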

  • 3.3 - Cristina Tudose and Radu Oprisa. A method for testing software systems based on state design pattern using symbolic execution

    The paper reports a new testing method for software systems designed using the State design pattern.

    The tests are performed in terms of symbolic execution aiming to identify conditions and values of some input parameters that violate assertions at runtime.

    The state based architecture of such systems allows a direct mapping of the methods to the transitions of the underlying finite state machine (FSM).

    In order to identify the methods that contain failing assertions, the Java PathFinder Symbolic Execution framework extension (JPF-SE) is used for an out of context execution of each method.

    We propose a new algorithm to compute a transition path from the initial state of the system to each error-prone transition.

    The computation is carried out using a backward traversal scheme of the FSM support graph where the JPF-SE symbolically executes each transition of the path.

    The transition execution performed by JPF-SE leads to the backward propagation of the conditions imposed on the input parameters.

    The final section of the paper contains some experimental results, conclusions and suggestions for further work.
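    The path-computation idea, stripped of the symbolic-execution machinery, amounts to a traversal of the FSM support graph from an error-prone transition back to the initial state. A hypothetical simplified sketch (the triple encoding of transitions is an assumption, not the paper's notation):

```python
from collections import deque

def path_to_transition(fsm, init, error_transition):
    """Find a sequence of transitions from the initial state to the source
    of an error-prone transition, by backward traversal of the FSM support
    graph (a simplified sketch of the idea; the paper additionally executes
    each transition symbolically to propagate input-parameter conditions)."""
    src, _label, _dst = error_transition
    # index predecessors of each state
    preds = {}
    for (s, l, t) in fsm:
        preds.setdefault(t, []).append((s, l))
    # walk backwards from the error transition's source towards init
    frontier = deque([(src, [])])
    seen = {src}
    while frontier:
        state, suffix = frontier.popleft()
        if state == init:
            return suffix + [error_transition]
        for (p, l) in preds.get(state, []):
            if p not in seen:
                seen.add(p)
                frontier.append((p, [(p, l, state)] + suffix))
    return None

fsm = [("s0", "a", "s1"), ("s1", "b", "s2"), ("s2", "c", "s0")]
print(path_to_transition(fsm, "s0", ("s2", "c", "s0")))
# → [('s0', 'a', 's1'), ('s1', 'b', 's2'), ('s2', 'c', 's0')]
```

    In the actual method, each transition on such a path corresponds to a method of the State-pattern implementation and is replayed by JPF-SE.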

  • 3.4 - Zhiwu Xu, Lixiao Zheng and Haiming Chen. A Toolkit for Generating Sentences from Context-Free Grammars

    Producing sentences from a grammar, according to various coverage criteria, is required in many applications.

    It is also a basic building block for grammar testing, which is one of the important techniques for grammar engineering.

    This paper presents a toolkit for context-free grammars, which mainly consists of several algorithms for sentence generation or enumeration and for coverage analysis for context-free grammars.

    The toolkit deals with general context-free grammars, and is implemented in Java.

    Besides providing implementations of algorithms, the toolkit also provides a simple graphical user interface, through which the user can use the toolkit directly.

    The toolkit is available at http://lcs.ios.ac.cn/zhiwu/toolkit.php.

    In the paper, the toolkit and the major algorithms implemented in it are presented, together with experimental results.
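    The core task of sentence generation can be sketched in a few lines. This naive breadth-first enumerator is an illustration of the problem, not the toolkit's algorithm (which applies coverage criteria); it assumes a grammar without ε-productions, so sentential forms never shrink and over-long forms can be pruned:

```python
from collections import deque

def generate_sentences(grammar, start, max_len=4):
    """Enumerate terminal sentences of a context-free grammar up to a
    length bound, shortest derivations first (naive BFS sketch)."""
    sentences = []
    queue = deque([(start,)])
    seen = {(start,)}
    while queue:
        form = queue.popleft()
        # locate the leftmost nonterminal, if any
        i = next((k for k, s in enumerate(form) if s in grammar), None)
        if i is None:
            sentences.append(" ".join(form))  # all-terminal: a sentence
            continue
        for rhs in grammar[form[i]]:
            new = form[:i] + rhs + form[i + 1:]
            # no ε-productions => derived sentences are at least this long,
            # so forms longer than max_len can be pruned
            if len(new) <= max_len and new not in seen:
                seen.add(new)
                queue.append(new)
    return sentences

# toy grammar: S -> 'a' S 'b' | 'a' 'b'
g = {"S": [("a", "S", "b"), ("a", "b")]}
print(generate_sentences(g, "S", max_len=4))
# → ['a b', 'a a b b']
```

    Coverage-driven generation, as in the toolkit, would instead steer this search so that, e.g., every production is exercised by at least one generated sentence.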

Session 04 - Formal approaches to service-oriented computing
  • 4.1 - Natallia Kokash, Christian Krause and Erik de Vink. Timed Data-centric Analysis of Graphical Business Process Models

    Reo is a graphical channel-based coordination language that enables modeling of complex behavioral protocols using a small set of channel types with well-defined behavior.

    Reo has been developed for the coordination of stand-alone components or services, which makes it suitable for the modeling of service-based business processes.

    The existence of formal semantic models for Reo enables computer-aided analysis of different aspects of Reo diagrams, including their animation, simulation, and control- and dataflow verification by means of model checking techniques.

    In this paper, we discuss the verification of Reo business process models with the mCRL model checking toolset, and extend this approach with timed analysis.

    Furthermore, we present a detailed example that illustrates the application of Reo and the aforementioned framework to time-aware modeling and verification of data-centric business processes.

  • 4.2 - Christian Leuxner, Bernd Spanfelner and Wassiou Sitou. A formal model for work flows

    Within this paper, we present a structured, formal model for the specification and analysis of work flows.

    The model provides a graphical representation supporting a modular description of work flows together with a formal semantics accurately reflecting the execution of work flows and laying the foundation for tool support.

    Methodically relevant concepts such as hierarchy and refinement are both supported.

  • 4.3 - Ábel Hegedüs, Gábor Bergmann, István Ráth and Daniel Varro. Back-annotation of Simulation Traces with Change-Driven Model Transformations

    Model-driven analysis aims at detecting design flaws early in high-level design models by automatically deriving mathematical models.

    These analysis models are subsequently investigated by formal verification and validation (V&V) tools, which may retrieve traces violating a certain requirement.

    Back-annotation aims at mapping back the results of V&V tools to the design model in order to highlight the real source of the fault, to ease making necessary amendments.

    Here we propose a technique for the back-annotation of simulation traces based on change-driven model transformations.

    Simulation traces of analysis models will be persisted as a change model with high-level change commands representing macro steps of a trace.

    This trace is back-annotated to the design model using change-driven transformation rules, which bridge the conceptual differences between macro steps in the analysis and design traces.

    Our concepts will be demonstrated on the back-annotation problem for analyzing BPEL processes using a Petri net simulator.

  • 4.4 - Eduardo Mazza, Daniel Le Métayer and Marie-Laure Potet. Designing log architectures for legal evidence

    Establishing contractual liabilities in case of litigation is generally a delicate matter.

    It becomes even more challenging when IT systems are involved.

    At the core of the problem lies the issue of the evidence provided by the opposing parties.

    We believe that the means to constitute evidence that could be used in case of conflict should be considered from the onset of IT projects and be part of the requirements for the design of IT systems.

    This paper proposes criteria to characterize acceptable log architectures based on two critical properties: observability of events and neutrality of witnesses.

    We show how these criteria depend on the architecture of the system and the potential claims between the parties and propose techniques to enhance a non-acceptable log architecture into an acceptable log architecture.

    We illustrate our framework with the case study of a distributed travel booking system with multiple bilateral service agreements.

Session 05 - Applications of formal methods
  • 5.1 - Mieke Massink, Diego Latella, Michael Harrison and Andrea Bracciali. A Scalable Fluid Flow Process Algebraic Approach to Emergency Egress Analysis

    Pervasive environments offer an increasing number of services to a large number of people moving within these environments including timely information about where to go and when.

    People using these services interact with the system but they are also meeting other people and performing other activities as relevant opportunities arise.

    The design of such systems and the analysis of collective dynamic behaviour of people within them is a challenging problem.

    In previous work we have successfully explored a scalable analysis of stochastic process algebraic models of smart signage systems.

    In this paper we focus on the validation of a representative example of this class of models in the context of emergency egress.

    This context has the advantage that there is detailed data available from studies with alternative analysis methods.

    A second aim is to show how realistic human behaviour, often observed in emergency egress, can be embedded in the model and how the effect of this behaviour on building evacuation can be analysed in an efficient and scalable way.

  • 5.2 - Peter Lindsay, Kirsten Winter and Nisansala Yatapanage. Safety assessment using Behavior Trees and Model Checking

    This paper demonstrates the use of Behavior Trees and model checking to assess system safety requirements for a system containing substantial redundancy.

    The case study concerns the hydraulics systems for the Airbus A320 aircraft, which are critical for aircraft control.

    The system design is supposed to be able to handle up to 3 different components failing individually, without loss of all hydraulic power.

    Verifying the logic of such designs is difficult for humans because of the sheer amount of detail and number of different cases that need to be considered.

    The paper demonstrates how model checking can yield insights into what combinations of component failures can lead to system failure.
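    The combinatorial nature of the problem described above can be conveyed with a brute-force stand-in for what a model checker explores symbolically. The component names and the redundancy predicate below are entirely hypothetical, chosen only to mimic a system with paired redundancy:

```python
from itertools import combinations

def minimal_failure_combos(components, system_ok, max_size=3):
    """Exhaustively check which combinations of component failures (up to
    max_size) cause loss of the system function, keeping only minimal ones.
    A toy stand-in for the state-space search a model checker performs."""
    found = []
    for n in range(1, max_size + 1):
        for combo in combinations(components, n):
            if not system_ok(set(combo)):
                # discard supersets of already-found failure combinations
                if not any(set(f) <= set(combo) for f in found):
                    found.append(combo)
    return found

# hypothetical redundancy: the system works while at least one
# pump/circuit pair is fully intact
components = ["pump1", "pump2", "circuitA", "circuitB"]
ok = lambda failed: ("pump1" not in failed and "circuitA" not in failed) \
                 or ("pump2" not in failed and "circuitB" not in failed)
for combo in minimal_failure_combos(components, ok):
    print(combo)
```

    Model checking replaces this enumeration with a symbolic search over the Behavior Tree model's state space, which is what makes designs with many components and failure modes tractable.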

  • 5.3 - Davide Benetti, Massimo Merro and Luca Viganò. Model Checking Ad Hoc Network Routing Protocols: ARAN vs. endairA

    Several different secure routing protocols have been proposed for determining the appropriate paths on which data should be transmitted in ad hoc networks.

    In this paper, we focus on two of the most relevant such protocols, ARAN and endairA, and present the results of a formal analysis that we have carried out using the AVISPA Tool, an automated model checker for the analysis of security protocols.

    By model checking ARAN with the AVISPA Tool, we have discovered two attacks (a spoofing attack and a disruption one), while our analysis of endairA revealed no attacks.

  • 5.4 - Michael Goldsmith and Sadie Creese. Refinement-Friendly Bigraphs and Spygraphs

    Over the past decade, the successful approach to specification and mechanical analysis of correctness and security properties using CSP and its refinement checker FDR has been extended to contexts including mobile ad-hoc networks and pervasive systems. However, the more scope for network reconfiguration a system exhibits, the more intricate and less obviously accurate the models must become in order to accommodate such dynamic behaviour in a language with a basically static process and communication graph.

    Milner's Bigraph framework, on the other hand, and particularly Blackwell's Spygraph specialisation, are ideally suited for describing intuitively such dynamic reconfigurations of a system, and support notions of locality and adjacency which fit them well for reasoning, for instance, about the interface between physical and electronic security; but they lack powerful analytic tool support.

    Our long-term goal is to combine the best of both approaches.

    Unfortunately, the canonical labelled transition system induced by the category-theoretic semantics of a bigraphical reactive system presents a number of challenges to the refinement-based approach.

    Prominent amongst these is the feature that the label on a transition is the 'borrowed context' required to make the redex of some reaction rule appear in the augmented source bigraph; this means that any reaction which can already take place entirely within a given bigraph gives rise to a transition labelled only with the trivial identity context, equivalent to a tau transition in CCS or CSP, with the result that neither the reaction rule nor the agents involved can be distinguished.

    This makes it quite impossible for an observer of the transition system to determine whether such a reaction was desirable with respect to any specification.

    We are investigating ways to remedy this situation.

    Here we present a systematic transformation of a bigraphical reactive system, both its rules and the underlying bigraphs, with the effect that every transition becomes labelled with the specific rule that gave rise to it and the set of agents involved.

    We also consider how that now possibly over-precise labelling can be restricted through selective hiding and judicious forgetful renaming.

Session 06 - Model checking
  • 6.1 - Maria-del-Mar Gallardo and David Sanan. Verification of Dynamic Data Tree with muCalculus extended with Separation

    The problem of verifying software systems that use dynamic data structures (such as linked lists, queues, or binary trees) has attracted increasing interest over the last decade.

    Dynamic structures are barely supported by verification techniques because, among other reasons, it is difficult to efficiently manage the pointer-based internal representation.

    This is a key aspect when the goal is to construct a verification tool based on model checking techniques, for instance.

    In addition, since new nodes may be dynamically inserted into or extracted from the structure, the shape of the dynamic data (and other more specific properties) may vary at runtime, making it difficult to detect errors such as, for instance, undesirable sharing between two nodes.

    In this paper, we propose to use muCalculus to describe and analyze, by model checking techniques, dynamic data such as lists, and non-linear data structures like trees.

    The expressiveness of mu-calculus makes it possible to naturally describe these structures.

    In addition, following the ideas of separation logic, the logic has been extended with a new operator able to describe the non-sharing property which is essential when analyzing data structures of this type.

  • 6.2 - Jiri Barnat, Lubos Brim and Petr Rockai. Parallel Partial Order Reduction with Topological Sort Proviso

    Partial order reduction and distributed-memory processing are two essential techniques to fight the well known state explosion problem in explicit state model checking.

    Unfortunately, the two techniques have not been fully satisfactorily combined yet.

    While for the verification of safety properties there are rather successful approaches to parallel partial order reduction, for full LTL model checking all suggested approaches are either too technically involved to be smoothly incorporated into the existing parallel algorithms, or simply weak in the sense that the achieved reduction in the size of the state space is minor.

    The main source of difficulties is the cycle proviso that requires one fully expanded state on every cycle in the reduced state space graph.

    This can easily be achieved in the sequential case by employing a depth-first search strategy for state space generation.

    Unfortunately, this strategy is incompatible with parallel (hence distributed-memory) processing, which limits the application of the partial order reduction technique to the sequential case.

    In this paper we suggest a new technique that guarantees correct construction of the reduced state space graph w.r.t. the cycle proviso.

    Our new technique is fully compatible with parallel graph traversal procedures, while at the same time providing reduction of the state space competitive with the serial case.

    The new technique has been implemented within the parallel and distributed-memory LTL model checker DiVinE, and its performance is reported in this paper.

  • 6.3 - Franz Weitl, Shin Nakajima and Burkhard Freitag. Structured Counterexamples for the Temporal Description Logic ALCCTL

    A new algorithm for generating counterexamples for the temporal description logic ALCCTL is presented.

    ALCCTL is a decidable combination of the description logic ALC and computation tree logic CTL.

    It extends CTL by unary and binary first order predicates and a restricted form of first order quantification.

    Predicates and quantified expressions are required for representing properties in application domains such as structured web documents and they are frequently used in software and hardware specifications which are verified by model checking.

    In the case of a specification violation, existing algorithms generate counterexamples that tend to be complex yet imprecise if specifications contain first order quantified expressions. The presented algorithm is the first algorithm for generating counterexamples for a temporal description logic that considers first order predicates and quantification.

    The algorithm is sound and semi-complete for ALCCTL.

    The generated counterexamples are both more precise and comprehensible than counterexamples generated by the previous algorithms.

  • 6.4 - Marcello M. Bersani, Luca Cavallaro, Achille Frigeri, Matteo Pradella and Matteo Rossi. SMT-based Verification of LTL Specifications with Integer Constraints and its Application to Runtime Checking of Service Substitutability

    An important problem that arises during the execution of service-based applications concerns the ability to determine whether a running service can be substituted with one with a different interface, for example if the former is no longer available.

    Standard Bounded Model Checking techniques can be used to perform this check, but they must be able to provide answers very quickly, lest the check hamper the operation of the application instead of aiding it.

    The problem becomes even more complex when conversational services are considered, i.e., services that expose operations that have Input/Output data dependencies among them.

    In this paper we introduce a formal verification technique for an extension of Linear Temporal Logic that allows users to include constraints on integer variables in formulae.

    This technique, applied to the substitutability problem for conversational services, is shown to be considerably faster and to have a smaller memory footprint than existing ones.

Session 07 - Formal approaches for testing
  • 7.1 - José Pablo Escobedo, Christophe Gaston, Pascale Le Gall and Ana Cavalli. Testing Web Service Orchestrators in context: a symbolic approach

    An orchestrator in a Web service system is a locally deployed piece of software used both to allow users to interact with the system and to communicate with remote components (Web services) in order to fulfill a goal.

    We propose a symbolic model-based approach to test orchestrators in the context of the systems they pilot.

    Our approach only takes as input a model of the orchestrator and no models of the Web services.

    In our approach the testing architecture is a parameter: communications between Web services and the orchestrator can be either simulated, or hidden or observable.

    When they are simulated, the orchestrator is tested in isolation and our approach reduces to previously defined classical model-based testing approaches.

    When the SUT is connected with Web services (that is, in actual usage) it is no longer fully controlled by the tester, but tested in context.

    In that case two situations may occur: either communications with web services are observable or they are hidden.

    Our approach copes with all those cases.

    We give theorems relating our notion of conformance in context to classical conformance in isolation.

    We present a test case generation algorithm based on symbolic execution techniques: it takes into account the status (controllable, hidden, or observable) of communication channels between the orchestrator and Web services.

    The algorithm has been implemented and is illustrated on a representative case study.

  • 7.2 - Maximiliano Cristia, Pablo Albertengo and Pablo Rodriguez Monetti. Pruning Testing Trees in the Test Template Framework by Detecting Mathematical Contradictions

    Fastest is an automatic implementation of Phil Stocks and David Carrington's Test Template Framework (TTF), a model-based testing (MBT) framework for the Z formal notation.

    In this paper we present a new feature of Fastest that helps TTF users to eliminate inconsistent test classes automatically.

    The method is very simple and practical, and makes use of the peculiarities of the TTF.

    Perhaps its most interesting features are extensibility and ease of use, since it does not assume previous knowledge of theorem proving.

    We also compare the solution with a first attempt using the Z/EVES proof assistant and with the HOL-Z environment.

    At the end, we show the results of an empirical assessment based on applying Fastest to four real-world, industrial-strength case studies and to six toy examples.

  • 7.3 - Peter Gorm Larsen, Kenneth Lausdahl and Nick Battle. Combinatorial Testing for VDM

    Combinatorial testing in VDM involves the automatic generation and execution of a large collection of test cases derived from templates provided in the form of trace definitions added to a VDM specification.

    The main value of this is the rapid detection of run-time errors caused by forgotten pre-conditions as well as broken invariants and post-conditions.

    Trace definitions are defined as regular expressions describing possible sequences of operation calls, and are conceptually similar to the input provided to model checkers.

    In this paper we present a tool enabling test automation based on VDM traces, and explain how it is possible to reduce large collections of test cases in different fashions.

    Its use is illustrated with a small case study.
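    The expansion of a trace definition into concrete test cases can be sketched as follows. The mini-notation here (a list of steps, each either an operation name or a set of alternatives) is a hypothetical simplification of VDM trace definitions, which are full regular expressions:

```python
from itertools import product

def expand_trace(trace):
    """Expand a simple trace definition into concrete test sequences.
    Each step is an operation name or a list of alternatives; the
    result enumerates every combination (sketch of the idea only)."""
    steps = [step if isinstance(step, list) else [step] for step in trace]
    return [list(seq) for seq in product(*steps)]

# analogous to the trace 'Push; (Pop | Top); Push'
trace = ["Push", ["Pop", "Top"], "Push"]
for test in expand_trace(trace):
    print(test)
# → ['Push', 'Pop', 'Push']
#   ['Push', 'Top', 'Push']
```

    Because such expansions grow combinatorially, reduction strategies of the kind discussed in the paper (selecting subsets of the generated test cases) become essential in practice.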

  • 7.4 - Stefan Galler, Martin Weiglhofer and Franz Wotawa. Synthesize it: from Design by Contract to Meaningful Test Input Data
    Abstract

    Generating test input data is a complex task that is nowadays mostly tackled with random approaches.

    Random testing of methods that take non-primitive data types as parameters, e.g. objects encapsulating database interactions, is a vain endeavor.

    In particular, if the precondition of the method under test (MUT) requires the method's parameters to be in a particular object state, random approaches rarely succeed.

    In this paper we present a technique to automatically synthesize implementations for the parameters of a MUT from a given Design by Contract specification.

    These implementations behave as described by the Design by Contract specification, but do not interact with their environment (e.g. database, network and file system).

    Furthermore, we can set the initial state of the synthesized implementations to the state required by the MUT's precondition.

    Besides a formal discussion of our approach we present results obtained by applying our technique to two case studies: a stack-based calculator and a real-world data collection tool from the telecommunication industry.

    The presented approach outperforms random data generation on both case studies in terms of the number of methods tested (function coverage) and line coverage.
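    The gist of synthesizing parameter implementations from contracts can be sketched as follows. This is a hedged illustration in Python rather than Java, and the class, contracts and method under test are invented for the example, not taken from the authors' tool.

```python
# Hedged sketch (invented names and contracts): synthesize an
# environment-free stand-in for a contract-specified parameter and
# initialize it to the state the method under test's precondition demands.

class SynthesizedStack:
    """Behaves per its contract: push grows the stack; pop requires non-empty."""
    def __init__(self, items=()):
        self._items = list(items)   # pure in-memory state, no database/network

    def size(self):
        return len(self._items)

    def push(self, x):
        self._items.append(x)

    def pop(self):
        assert self._items, "precondition violated: pop on empty stack"
        return self._items.pop()

def satisfy_precondition(min_size):
    """Build a SynthesizedStack whose state meets `size() >= min_size`."""
    return SynthesizedStack(range(min_size))

def top_two_sum(stack):
    """Method under test; precondition: stack.size() >= 2."""
    assert stack.size() >= 2
    return stack.pop() + stack.pop()

stub = satisfy_precondition(2)      # state [0, 1] satisfies the precondition
result = top_two_sum(stub)          # meaningful input without random search
```

    Where a random generator would have to stumble on a stack of the right depth, the synthesized stand-in is constructed directly in the required state, which is why the contract-driven approach reaches more methods.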

Session 08 - Formal methods for real-time and timed analysis
  • 8.1 - Niusha Hakimipour, Paul Strooper and Andy Wellings.TART: Timed-Automata to Real-Time Java Tool
    Abstract

    In previous work, we have proposed a model-based approach to developing real-time Java programs from timed automata.

    This approach allows us to verify the timed automata model mechanically by using current real-time model-checking tools.

    Programs are then derived from the model by following a systematic approach.

    TART (Timed Automata to RTSJ Tool) is a prototype tool to support this approach.

    This paper presents TART, including its limitations, and discusses its application to four examples.

  • 8.2 - Thomas Goethel and Sabine Glesner.Towards the Semi-Automatic Verification of Parameterized Real-Time Systems using Network Invariants
    Abstract

    Real-time systems often have to cope with an unbounded number of components.

    For example, an operating system scheduler has to be able to manage an arbitrary number of threads.

    At the same time, the correctness of central control units such as schedulers is crucial for the correctness of the whole system.

    However, the comprehensive and semi-automatic verification of real-time systems that are parameterized with an unbounded number of components is still an open problem.

    In this paper, we propose an approach in which parameterized systems can be verified using a combination of theorem proving and model checking.

    The interactive theorem prover is used for the overall verification task, delegating subsequent proof goals to automatic verification tools.

    The central proof method is based on so-called network invariants.

    The idea of network invariants is to overapproximate all instances of a parameterized system and to perform the verification on the abstract model.

    We have adopted an existing network invariant approach for the verification of centralized real-time systems such as schedulers and formalized the theory in the Isabelle/HOL theorem prover.

    Preliminary results on applying our framework to small examples are promising and make it worthwhile to evaluate the approach on larger case studies in future work.

  • 8.3 - Kun Wei, Jim Woodcock and Alan Burns.A Timed Model of Circus with the Reactive Design Miracle
    Abstract

    We propose a timed model of Circus, a compact extension of the original Circus.

    Apart from introducing time, this model uses UTP-style semantics to describe each process as a reactive design.

    One of the significant contributions of our timed model is its extensive exploration of the reactive design miracle, the top element of a complete lattice with respect to the implication ordering.

    Employing the miracle brings a number of new features, such as strong deadlines and uninterrupted events, which provide more powerful and flexible expressiveness in system specifications.

  • 8.4 - Massimo Bartoletti and Roberto Zunino.Static Enforcement of Service Deadlines
    Abstract

    We consider the problem of statically deciding when a service always provides its functionality within a given amount of time.

    In a timed pi-calculus, we propose a two-phase static analysis guaranteeing that processes enjoy both the maximal progress and well-timedness properties.

    Exploiting this analysis, we devise a decision procedure for checking service deadlines.

Posters And Tool Demo Session
  • Maximiliano Cristia, Pablo Albertengo and Pablo Rodriguez Monetti.Fastest: a Model-Based Testing Tool for the Z Notation
    Abstract

    Fastest is a model-based testing tool for the Z notation providing an almost automatic implementation of the Test Template Framework.

    The core of this document is an example showing how to use Fastest to automatically derive abstract test cases from a Z specification.

  • Laura Panizo, Maria del Mar Gallardo, Pedro Merino, David Sanán and Antonio Linares.Dam Management Based on Model Checking Techniques
    Abstract

    Dams are critical systems which, traditionally, have been managed manually and, more recently, by means of decision support systems based on numerical models and simulations.

    In this work, we describe how model checking techniques may be used to develop a tool oriented to dam management.

    The dam is modeled as a hybrid system to take into account the discrete and continuous aspects of its behavior.

    Although SPIN is not intended for analyzing hybrid systems, the fact that the dam's continuous variables evolve by difference equations has allowed us to use it, along with the PROMELA language, both to describe the hybrid model and to carry out the analysis needed to construct the automatic decision support tool.
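    A minimal sketch of this idea, with invented constants and in Python rather than PROMELA: the continuous water level is advanced by a difference equation, and an exhaustive search over the discrete gate decisions plays the role that SPIN's state-space exploration plays in the tool.

```python
# Hedged sketch (all constants illustrative, not from the paper): the dam's
# level follows a difference equation; a bounded exhaustive search over
# discrete gate decisions checks whether a safe operating strategy exists.

INFLOW = 40.0            # m^3 per step, assumed constant here
OUTFLOW_PER_GATE = 25.0  # m^3 per step released by each open gate
AREA = 100.0             # reservoir surface area, m^2
MAX_LEVEL = 10.0         # safety threshold, m

def step(level, open_gates):
    """One difference-equation step: dl = (inflow - outflow) / area."""
    return level + (INFLOW - OUTFLOW_PER_GATE * open_gates) / AREA

def safe_strategy_exists(level, horizon, max_gates=2):
    """Explore every gate schedule up to `horizon` steps; report whether
    some schedule keeps the level strictly below MAX_LEVEL throughout."""
    if level >= MAX_LEVEL:
        return False
    if horizon == 0:
        return True
    return any(safe_strategy_exists(step(level, gates), horizon - 1, max_gates)
               for gates in range(max_gates + 1))
```

    With two gates the maximum outflow exceeds the inflow, so a safe schedule exists from a level of 9.0 m; with only one gate the level rises by at least 0.15 m per step and every schedule eventually crosses the threshold, which is the kind of verdict the decision support tool reports.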

  • Neeraj Singh and Dominique Méry.EB2C: A Tool for Event-B to C Conversion Support
    Abstract

    To use formal models effectively in a formal-methods-based development process, it is highly desirable that the formal specification be converted to C code, a de facto standard in many industrial application domains such as medical, avionics and automotive control.

    In this paper we present the design methodology of a tool that translates an Event-B formal specification to equivalent C code with proper correctness assurance.

  • Daniela da Cruz, Pedro Henriques and Jorge Sousa Pinto.A tool to verify, slice and animate annotated components: GamaSlicer
    Abstract

    In this paper we propose to demonstrate the features of GamaSlicer, an online tool for experimenting with and animating verification and assertion-based slicing algorithms on annotated components written in Java/JML.

  • Ade Azurat, Rikky Wenang Purbojati, Api Perdana and Heru Suhartanto.A Preliminary Exercise of a Database Application Verification using LinguSQL
    Abstract

    LinguSQL is an experimental development tool that integrates both black-box testing and white-box code verification during the code-writing activity.

    This research explores the application of LinguSQL in the development of a stock transaction application tool.

    This preliminary exercise is expected to benefit the overall testing and verification phase and, in the context of tool implementation, to expose the parts that need further improvement for real-world application.

  • Lorena Chavarría-Báez and Xiaoou Li.A Petri Net-based Editor to Develop Active Rules
    Abstract

    Active rules are a powerful mechanism to represent reactive behavior.

    In this paper we describe a software tool that creates active rules from a Petri net structure and automatically verifies the rule base, i.e., it finds structural errors such as redundancy, inconsistency, incompleteness and circularity.

  • Ábel Hegedüs, István Ráth and Daniel Varro.From BPEL to SAL And Back: a Tool Demo on Back-Annotation with VIATRA2
    Abstract

    Model-driven analysis aims at detecting design flaws early in high-level design models by automatically deriving mathematical models.

    These analysis models are subsequently investigated by formal verification and validation (V&V) tools, which may retrieve traces violating a certain requirement.

    Back-annotation aims at mapping the results of V&V tools back to the design model in order to highlight the real source of the fault and ease making the necessary amendments.

    In this tool demonstration we present an end-to-end V&V tool for BPEL business processes that includes complex back-annotation support for representing V&V results as process execution traces in the design environment.

  • Luca Cesari, Rosario Pugliese and Francesco Tiezzi.BliteC: a tool for developing WS-BPEL applications
    Abstract

    WS-BPEL is establishing itself as a standard for the orchestration of web services.

    However, there are still some well-known difficulties that make programming in WS-BPEL a tricky task.

    We present here BliteC, a software tool we have developed to support rapid and easy development of WS-BPEL applications.

    BliteC translates service orchestrations written in Blite, a formal language inspired by but simpler than WS-BPEL, into readily executable WS-BPEL programs.

    We illustrate our approach by means of a practical programming example.

  • Peter Gorm Larsen and Kenneth Lausdahl.Overture/VDM Tool Status
    Abstract

    Overture is a community-based initiative that aims to develop a common open-source platform integrating a range of tools for constructing and analysing formal models of systems using VDM.

    The mission is both to provide an industrial-strength tool set for VDM and to provide an environment that allows researchers and other stakeholders to experiment with modifications and extensions to the tools and language.

    This page presents a short status of the Overture project; more details can be found in [1].

   

 
www.sefm2010.isti.cnr.it (last updated: 10 Nov, 2010)