Energy characterization of IoT systems through design aspect monitoring
Year: 2021 Journal: International Journal on Software Tools for Technology Transfer Volume: 0 Pages: 1-18 Publisher: Springer Authors: Alexios Lekidis, Panagiotis Katsaros
The technological revolution brought by the Internet of Things (IoT) is characterized by a high level of automation based, to a large extent, on battery autonomy. Important risks hindering its wide adoption, though, are associated with device battery lifetime, which is affected by system design aspects such as connectivity, data processing and storage, as well as security protection against cyber-threats. Even though simulation can help with the energy cost estimation of IoT applications before their actual deployment, it is still challenging, and extensive effort is required to converge to a feasible architectural deployment scenario. This article introduces a method to address this challenge by estimating the energy cost of the IoT design aspects and identifying the feasible deployment scenarios for an IoT system architecture. The method is illustrated on a smart city application that consists of subsystems for building management and intelligent transportation. These two subsystems employ a variety of IoT devices connected to an Orion border router. We estimate the feasibility of various architectural deployments with respect to the system requirements and single out those that are feasible, as feedback to the system designers. The case study results include quantitative metrics for the evaluation of system requirements using a new aspect monitoring technique and the well-established Statistical Model Checking (SMC) approach. -
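To make the feasibility check concrete, here is a back-of-the-envelope sketch (not taken from the article; the per-aspect current draws, battery capacity and lifetime requirement are all hypothetical) that estimates battery lifetime from the energy cost of each design aspect and compares it against a requirement:

```python
# Hypothetical per-aspect average current draws (mA) for one deployment scenario;
# the actual method estimates such figures via aspect monitoring and SMC.
aspects_mA = {
    "connectivity": 1.8,   # radio duty-cycled listening + transmissions
    "processing":   0.6,   # sensing and data processing
    "storage":      0.2,   # local buffering / flash writes
    "security":     0.4,   # encryption and integrity checks
}

battery_mAh = 2400          # assumed battery capacity
required_lifetime_h = 8760  # assumed requirement: at least one year

total_mA = sum(aspects_mA.values())
lifetime_h = battery_mAh / total_mA

print(f"average draw: {total_mA:.1f} mA, estimated lifetime: {lifetime_h:.0f} h")
print("deployment feasible" if lifetime_h >= required_lifetime_h else "deployment infeasible")
```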
A survey on the formalisation of system requirements and their validation
Year: 2020 Journal: Array Volume: 7 Pages: 100030 (9 pages) Publisher: Elsevier Authors: Konstantinos Mokos, Panagiotis Katsaros
System requirements define conditions and capabilities to be met by a system under design. They are a partial definition in natural language, with inevitable ambiguities. Formalisation concerns the transformation of requirements into a specification with unique interpretation, for resolving ambiguities and underspecified references and for assessing whether requirements are consistent, correct (i.e. valid for an acceptable solution) and attainable. Formalisation and validation of system requirements provide early evidence of adequate specification, for reducing the validation tests and high-cost corrective measures in the later system development phases. This article has the following contributions. First, we characterise the specification problem based on an ontology for some domain. Thus, requirements represent a particular system among many possible ones, and their specification takes the form of mapping their concepts to a semantic model of the system. Second, we analyse the state-of-the-art of pattern-based specification languages, which are used to avoid ambiguity. We then discuss the semantic analyses (missing requirements, inconsistencies etc.) supported in such a framework. Third, we survey related research on the derivation of formal properties from requirements, i.e. verifiable specifications that constrain the system’s structure and behaviour. Possible flaws in requirements may render the derived properties unsatisfiable or not realizable. Finally, this article discusses the important challenges that current requirements analysis tools must address, towards their adoption in industrial-scale projects. -
Correct-by-construction model-based design of reactive streaming software for multi-core embedded systems
Year: 2020 Journal: International Journal on Software Tools for Technology Transfer Volume: 22 Number: 1 Pages: 3-32 Publisher: Springer Authors: Fotios Gioulekas, Peter Poplavko, Panagiotis Katsaros, Saddek Bensalem, Pedro Palomo
We present a model-based design approach toward correct-by-construction implementations of reactive streaming software for multi-core systems. A system’s implementation is derived from a high-level process network model by applying semantics-preserving model transformations. The so-called fixed priority process networks (FPPNs) are programmed independently from the execution platform and combine streaming and reactive control behavior with task parallelism for utilizing multi-core processing. We first define the FPPN sequential execution semantics that specifies precedence constraints between job executions of different tasks. Applications thus behave deterministically: for any given test stimuli, a unique output response is expected. Furthermore, we define the FPPN real-time semantics based on a timed-automata modeling framework. This is provably a functionally equivalent semantics specifying the real-time execution of FPPNs and enabling runtime managers for scheduling jobs on multi-cores. A model transformation framework has been developed for deriving executable implementations of FPPNs on the BIP (Behavior–Interaction–Priority) runtime environment, ported on multi-core platforms. Schedulability is established by static analysis of the FPPN, and it is guaranteed by construction. Thus, the developers do not need to program low-level real-time OS services (e.g., for task management) and applications are amenable to testing, as opposed to the case where their outputs depend on timing behavior. We have successfully ported a guidance-navigation and control application of a satellite system onto a radiation-hardened multi-core platform. Various implementation scenarios for efficiently utilizing HW resources are illustrated, and the test results are discussed. -
Correction to: Correct-by-construction model-based design of reactive streaming software for multi-core embedded systems
Year: 2020 Journal: International Journal on Software Tools for Technology Transfer Volume: 22 Number: 1 Pages: 33–34 Publisher: Springer Authors: Fotios Gioulekas, Peter Poplavko, Panagiotis Katsaros, Saddek Bensalem, Pedro Palomo
Correction to: International Journal on Software Tools for Technology Transfer http://doi.org/10.1007/s10009-019-00521-7
Updated Fig. 7 (p_i is the job’s process, k_i is the job’s invocation count, A_i is the invocation time, D_i is the absolute deadline and C_i is the WCET). -
Formal analysis of DeGroot Influence Problems using probabilistic model checking
Year: 2018 Journal: Simulation Modelling Practice and Theory Volume: 89 Pages: 144-159 Publisher: Elsevier Authors: Sotirios Gyftopoulos, Pavlos S. Efraimidis, Panagiotis Katsaros
DeGroot learning is a model of opinion diffusion and formation in a social network. We examine the behaviour of the DeGroot learning model when external strategic players that aim to influence the opinion formation process are introduced. More specifically, we consider the case of a single decision maker and that of two competing players, with a fixed number of possible influence actions for each of them. In the former case, the DeGroot model takes the form of a Markov Decision Process (MDP), while in the latter case it takes the form of a Stochastic Game (SG). These models are solved using probabilistic model checking techniques, as well as other solution techniques beyond model checking. The viability of our analysis is demonstrated on a well-known social network, Zachary’s karate club. Finally, the evaluation of influence in a social network simultaneously with the decision maker’s cost is also supported, encoded as a multi-objective model checking problem. -
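For intuition about the underlying opinion dynamics, a toy iteration is sketched below (the paper itself analyses MDP and stochastic-game formulations with probabilistic model checking, not this direct simulation; the trust matrix, opinions and influence action are hypothetical):

```python
import numpy as np

# Toy DeGroot update with one external strategic player.
T = np.array([[0.5, 0.3, 0.2],   # row-stochastic trust matrix: how much each agent
              [0.2, 0.6, 0.2],   # weighs the opinions of the others
              [0.1, 0.4, 0.5]])
x = np.array([0.9, 0.1, 0.5])    # initial opinions in [0, 1]

influenced = 0                   # influence action: the player pins agent 0's opinion

for _ in range(50):
    x = T @ x                    # each agent adopts the weighted average of its neighbours
    x[influenced] = 1.0          # the strategic player keeps re-asserting its opinion

print(np.round(x, 3))            # opinions drift towards the influencer's value
```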
Early validation of system requirements and design through correctness-by-construction
Year: 2018 Journal: Journal of Systems and Software Volume: 145 Pages: 52-78 Publisher: Elsevier Authors: Emmanouela Stachtiari, Anastasia Mavridou, Panagiotis Katsaros, Simon Bliudze, Joseph Sifakis
Early validation of requirements aims to reduce the need for the high-cost validation testing and corrective measures at late development stages. This work introduces a systematic process for the unambiguous specification of system requirements and the guided derivation of formal properties, which should be implied by the system’s structure and behavior in conjunction with its external stimuli. This rigorous design takes place through the incremental construction of a model using the BIP (Behavior-Interaction-Priorities) component framework. It allows building complex designs by composing simpler reusable designs enforcing given properties. If some properties are neither enforced nor verified, the model is refined or certain requirements are revised. A validated model provides evidence of requirements’ consistency and design correctness. The process is semi-automated through a new tool and existing verification tools. Its effectiveness was evaluated on a set of requirements for the control software of the CubETH nanosatellite and an extract of software requirements for a Low Earth Orbit observation satellite. Our experience and obtained results helped in identifying open challenges for applying the method in an industrial context. These challenges concern the domain knowledge representation, the expressiveness of the used specification languages, the library of reusable designs and scalability. -
Maximal software execution time: a regression-based approach
Year: 2018 Journal: Innovations in Systems and Software Engineering Volume: 14 Pages: 101–116 Publisher: Springer Authors: Ayoub Nouri, Peter Poplavko, Lefteris Angelis, Alexandros Zerzelidis, Saddek Bensalem, Panagiotis Katsaros
This work aims at facilitating the schedulability analysis of non-critical systems, in particular those that have soft real-time constraints, where worst-case execution times (WCETs) can be replaced by less stringent probabilistic bounds, which we call maximal execution times (METs). To this end, it is possible to obtain adequate probabilistic execution time models by separating the non-random dependency on input data from a modeling error that is purely random. The proposed approach first utilizes multivariate execution time measurements for building a multiple regression model and then uses the theory related to confidence bounds of coefficients, in order to estimate the upper bound of execution time. Although our method certainly cannot directly achieve the extreme probability levels that are usually expected for WCETs, it is an attractive alternative for MET analysis, since it can arguably guarantee safe probabilistic bounds. The method’s effectiveness is demonstrated on a JPEG decoder running on an industrial SPARC V8 processor. -
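A minimal sketch of the regression idea follows, on synthetic measurements with hypothetical features; it takes an upper confidence bound of the fitted mean as a stand-in for the MET, which simplifies the paper's treatment of coefficient confidence bounds:

```python
import numpy as np
from scipy import stats

# Synthetic execution time measurements as a function of two hypothetical input features.
rng = np.random.default_rng(0)
n = 200
pixels = rng.integers(1_000, 50_000, n)
blocks = rng.integers(10, 500, n)
time_us = 50 + 0.8 * pixels + 3.0 * blocks + rng.normal(0, 40, n)

# Ordinary least-squares fit of a multiple regression model.
X = np.column_stack([np.ones(n), pixels, blocks])
beta, *_ = np.linalg.lstsq(X, time_us, rcond=None)

resid = time_us - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
cov = sigma2 * np.linalg.inv(X.T @ X)          # covariance of the coefficient estimates

# Upper confidence bound of the predicted mean at worst-case feature values (one-sided 99.9%).
x_new = np.array([1.0, 50_000, 500])
pred = x_new @ beta
se = np.sqrt(x_new @ cov @ x_new)
met = pred + stats.t.ppf(0.999, dof) * se
print(f"predicted mean: {pred:.0f} us, MET-style bound: {met:.0f} us")
```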
Model-based design of IoT systems with the BIP component framework
Year: 2018 Journal: Software: Practice and Experience Volume: 48 Number: 6 Pages: 1167-1194 Publisher: Wiley Authors: Alexios Lekidis, Emmanouela Stachtiari, Panagiotis Katsaros, Marius Bozga, Christos K. Georgiadis
The design of software for networked systems with nodes running an Internet of Things operating system faces important challenges due to the heterogeneity of interacting things and the constraints stemming from the often limited amount of available resources. In this context, it is hard to build confidence that a design solution fulfills the application's requirements. This paper introduces a design flow for web service applications of the representational state transfer style that is based on a formal modeling language, the behaviour, interaction, priority (BIP) component framework. The proposed flow applies the principles of separation of concerns in a component-based design process that supports the modular design and reuse of model artifacts. The BIP tools for state-space exploration allow verifying qualitative properties for service responsiveness, i.e., the timely handling of events. Moreover, essential quantitative properties are validated through statistical model checking of a stochastic BIP model. All properties are preserved in the actual implementation by ensuring that the deployed code is consistent with the validated model. We illustrate the design of a representational state transfer sense-compute-control application for a Wireless Personal Area Network architecture with nodes running the Contiki operating system. The results validate qualitative and quantitative properties for the system and include the study of error behaviours. -
Abstract model repair for probabilistic systems
Year: 2018 Journal: Information and Computation Volume: 259 Number: 1 Pages: 142-160 Publisher: Elsevier Authors: George Chatzieleftheriou, Panagiotis Katsaros
Given a Discrete Time Markov Chain M and a probabilistic temporal logic formula φ, where M violates φ, the problem of Model Repair is to obtain a new model M', such that M' satisfies φ. Additionally, the changes made to M in order to obtain M' should be minimum with respect to all such M'. The state explosion problem makes the repair of large probabilistic systems almost infeasible. In this paper, we use the abstraction of Discrete Time Markov Chains in order to speed up the process of model repair for temporal logic reachability properties. We present a framework based on abstraction and refinement, which reduces the state space of the probabilistic system to repair at the price of obtaining an approximate solution. A metric space is defined over the set of DTMCs, in order to measure the differences between the initial and the repaired models. For the repair, we introduce an algorithm and we discuss its important properties, such as soundness and complexity. As a proof of concept, we provide experimental results for probabilistic systems with diverse structures of state spaces, including the well-known Craps game, the IPv4 Zeroconf protocol, a message authentication protocol and the gambler's ruin model. -
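A toy illustration of the repair objective only (not of the abstraction/refinement algorithm), on a hypothetical three-state DTMC and reachability bound:

```python
# Three-state DTMC: 0 = start, 1 = goal (absorbing), 2 = fail (absorbing).
# From state 0: go to goal with p, stay in 0 with 0.2, fail with the remaining mass.
# Reachability: P0 = p + 0.2 * P0  =>  P0 = p / 0.8.
def reach_goal_prob(p):
    return p / 0.8

bound = 0.9
p = 0.5                                        # original model violates P(reach goal) >= 0.9
print(reach_goal_prob(p))                      # 0.625 < 0.9

# Minimal repair of the transition probability: solve p / 0.8 >= 0.9 for p.
p_repaired = bound * 0.8
print(p_repaired, reach_goal_prob(p_repaired)) # 0.72 -> 0.9, property now satisfied
```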
Compositional execution semantics for business process verification
Year: 2018 Journal: Journal of Systems and Software Volume: 137 Pages: 217-238 Publisher: Elsevier Authors: Emmanouela Stachtiari, Panagiotis Katsaros
Service compositions are programmed as executable business processes in languages like WS-BPEL (or BPEL in short). In such programs, activities are nested within concurrency, isolation, compensation and event handling constructs that cause an overwhelming number of execution paths. Program correctness has to be verified based on a formal definition of the language semantics. For BPEL, previous works have proposed execution semantics in formal languages amenable to model checking. Most of the time, the service composition structure is not preserved in the formal model, which impedes tracing the verification findings in the original program. Here, we propose a compositional semantics and a structure-preserving translator of BPEL programs onto the BIP component framework. In addition, we verify essential correctness properties that affect process responsiveness, and the compliance with partner services. The scalability of the proposed translation and analysis is demonstrated on BPEL programs of various sizes. Our compositional translation approach can also be applied to other executable languages with nesting syntax. -
Ensuring business and service requirements in enterprise mashups
Year: 2018 Journal: Information Systems and e-Business Management Volume: 16 Number: 1 Pages: 205-242 Publisher: Springer Authors: Nikolaos Vesyropoulos, Christos K. Georgiadis, Panagiotis Katsaros
During the past few years, mashups have gained wide attention as they utilize Web 2.0 technologies in order to combine data, as well as the functionalities of numerous services, in a simple web application. While developing mashups for simple user-specific needs is not a demanding procedure, this is not the case for value-added services that need to satisfy specific properties and business needs, known as enterprise mashups. As a number of business requirements have to be satisfied, and execution faults are less tolerated compared to user-centric scenarios, a rigorous approach for their development is required. In this work we present such an approach utilizing model checking techniques, provided by the behavior, interaction, priorities (BIP) component framework. In addition, a methodology for the transformation of business process model and notation models, describing the business logic of a requested mashup, into the corresponding BIP models is proposed. The generated models enable the verification of requested properties. -
Program analysis with risk-based classification of dynamic invariants for logical error detection
Year: 2017 Journal: Computers & Security Volume: 71 Pages: 36-50 Publisher: Elsevier Authors: George Stergiopoulos, Panayiotis Katsaros, Dimitris Gritzalis
The logical errors in programs causing deviations from the intended functionality cannot be detected by automated source code analysis, which mainly focuses on known defects and code vulnerabilities. To this end, we introduce a combination of analysis techniques implemented in a proof-of-concept prototype called PLATO. First, a set of dynamic invariants is inferred from the source code that represents the program's logic. The code is instrumented with assertions from the invariants, which are subsequently evaluated through the program's symbolic execution. The findings are ranked using a fuzzy logic system with two scales characterizing their impact: (i) a Severity scale for the execution paths' characteristics and their Information Gain, (ii) a Reliability scale based on the measured Computational Density. Real, as well as synthetic, applications with at least four different types of logical errors were analyzed. The method's effectiveness was assessed based on a dataset from 25 experiments. Albeit not without restrictions, the proposed automated analysis seems able to detect a wide variety of logical errors, while filtering out false positives. -
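A minimal sketch of how an inferred dynamic invariant becomes an executable assertion; the function, invariant bounds and inputs are hypothetical, and the ranking and symbolic-execution stages are omitted:

```python
# An invariant inferred from observed runs is instrumented as an assertion that later
# analyses (e.g. symbolic execution) try to violate.

def apply_discount(price: float, loyalty_years: int) -> float:
    rate = 0.05 * loyalty_years
    # Inferred invariant from profiled executions: 0.0 <= rate <= 0.25.
    # A path that violates it signals a possible deviation from the intended pricing logic.
    assert 0.0 <= rate <= 0.25, f"possible logical error: rate={rate}"
    return price * (1.0 - rate)

print(apply_discount(100.0, 3))      # within the profiled behaviour: rate = 0.15
try:
    apply_discount(100.0, 9)         # outside it: rate = 0.45 triggers the assertion
except AssertionError as err:
    print("flagged:", err)
```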
Cost-aware horizontal scaling of NoSQL databases using probabilistic model checking
Year: 2017 Journal: Cluster Computing Volume: 20 Number: 3 Pages: 2687–2701 Publisher: Springer Authors: Athanasios Naskos, Anastasios Gounaris, Panagiotis Katsaros
In this work we target horizontal scaling of NoSQL databases, which exhibit highly varying, unpredictable and difficult to model behavior coupled with transient phenomena during VM removals and/or additions. We propose a solution that is cost-aware, systematic and dependable, while accounting for performance unpredictability and volatility. To this end, we model the elasticity as a dynamically instantiated Markov decision process, which can be both solved and verified using probabilistic model checking. Further, we propose a range of complementary decision making policies, which are thoroughly evaluated on workloads from real traces. The evaluation provides strong insights into the trade-offs between performance and cost that our policies can achieve and proves that we can avoid both over- and under-provisioning. -
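For intuition only, a toy, myopic version of an elasticity decision is sketched below (the paper solves a dynamically instantiated MDP via probabilistic model checking rather than this one-step rule); all probabilities and costs are hypothetical:

```python
# One-step expected-cost comparison of elasticity actions for a NoSQL cluster.
actions = {
    #            P(latency SLO violation after action), VM-hours cost
    "add_vm":    (0.05, 6.0),
    "no_op":     (0.30, 5.0),
    "remove_vm": (0.70, 4.0),
}
violation_penalty = 10.0   # assumed cost of missing the latency SLO

def expected_cost(p_violation, vm_cost):
    return vm_cost + p_violation * violation_penalty

best = min(actions, key=lambda a: expected_cost(*actions[a]))
for a, (p, c) in actions.items():
    print(f"{a:10s} expected cost = {expected_cost(p, c):.2f}")
print("chosen action:", best)
```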
Security-aware elasticity for NoSQL databases in multi-cloud environments
Year: 2017 Journal: International Journal of Intelligent Information and Database Systems Volume: 10 Number: 3 Pages: 168-190 Publisher: Inderscience Authors: Athanasios Naskos, Anastasios Gounaris, Haralambos Mouratidis, Panagiotis Katsaros
We focus on horizontally scaling NoSQL databases in a cloud environment, in order to meet performance requirements while respecting security constraints. The performance requirements refer to strict latency limits on the query response time. The security requirements are derived from the need to address two specific kinds of threats that exist in cloud databases, namely data leakage, mainly due to malicious activities of actors hosted on the same physical machine, and data loss after one or more node failures. A key feature of our approach is that we account for multiple cloud providers offering resources of different characteristics. We explain that usually there is a trade-off between performance and security requirements and we derive a model checking approach to drive runtime decisions that strike a user-defined balance between them taking into account the infrastructure heterogeneity. Finally, we evaluate our proposal using real traces to prove the effectiveness in configuring the trade-offs. -
Abstract Model Repair
Year: 2017 Journal: Logical Methods in Computer Science Volume: 11 Number: 3 Pages: 43 pages Publisher: Logical Methods in Computer Science e.V. Authors: George Chatzieleftheriou, Borzoo Bonakdarpour, Panagiotis Katsaros, Scott A. Smolka
Given a Kripke structure M and CTL formula φ, where M does not satisfy φ, the problem of Model Repair is to obtain a new model M' such that M' satisfies φ. Moreover, the changes made to M to derive M' should be minimum with respect to all such M'. As in model checking, state explosion can make it virtually impossible to carry out model repair on models with infinite or even large state spaces. In this paper, we present a framework for model repair that uses abstraction refinement to tackle state explosion. Our framework aims to repair Kripke Structure models based on a Kripke Modal Transition System abstraction and a 3-valued semantics for CTL. We introduce an abstract-model-repair algorithm for which we prove soundness and semi-completeness, and we study its complexity class. Moreover, a prototype implementation is presented to illustrate the practical utility of abstract-model-repair on an Automatic Door Opener system model and a model of the Andrew File System 1 protocol. -
Online Analysis of Security Risks in Elastic Cloud Applications
Year: 2016 Journal: IEEE Cloud Computing Magazine Volume: 3 Number: 5 Pages: 26-33 Publisher: IEEE Authors: Athanasios Naskos, Anastasios Gounaris, Haralambos Mouratidis, Panagiotis Katsaros
Security-related concerns in elastic cloud applications call for a risk-based approach due to the inherent trade-offs among security and other nonfunctional requirements, such as performance. To this end, the authors advocate a solution that can be efficiently realized through modeling the application behavior as a Markov decision process, on top of which probabilistic model checking is applied. The article explains the main steps in this approach and illustrates its use in online analysis and decision making regarding elasticity decisions. The runtime analysis is capable of providing evidence for key security-related aspects of the running applications, such as the probability of data leakage in the next hour. -
Spacecraft early design validation using formal methods
Year: 2014 Journal: Reliability Engineering & System Safety Volume: 132 Pages: 20-35 Publisher: Elsevier Authors: Marco Bozzano, Alessandro Cimatti, Joost-Pieter Katoen, Panagiotis Katsaros, Konstantinos Mokos, Viet Yen Nguyen, Thomas Noll, Bart Postma, Marco Roveri
The size and complexity of software in spacecraft is increasing exponentially, and this trend complicates its validation within the context of the overall spacecraft system. Current validation methods are labor-intensive as they rely on manual analysis, review and inspection. For future space missions, we developed – with challenging requirements from the European space industry – a novel modeling language and toolset for a (semi-)automated validation approach. Our modeling language is a dialect of AADL and enables engineers to express the system, the software, and their reliability aspects. The COMPASS toolset utilizes state-of-the-art model checking techniques, both qualitative and probabilistic, for the analysis of requirements related to functional correctness, safety, dependability and performance. Several pilot projects have been performed by industry, with two of them having focused on the system-level of a satellite platform in development. Our efforts resulted in a significant advancement of validating spacecraft designs from several perspectives, using a single integrated system model. The associated technology readiness level increased from level 1 (basic concepts and ideas) to early level 4 (laboratory-tested). -
A simulation process for asynchronous event processing systems: Evaluating performance and availability in transaction models
Year: 2012 Journal: Simulation Modelling Practice and Theory Volume: 29 Pages: 66-77 Publisher: Elsevier Authors: Anakreon Mentis, Panagiotis Katsaros, Lefteris Angelis
Simulation is essential for understanding the performance and availability behavior of complex systems, but there are significant difficulties when trying to simulate systems with multiple components, which interact with asynchronous communication. A systematic process is needed, in order to cope with the complexity of asynchronous event processing and the failure semantics of the interacting components. We address this problem by introducing an approach that combines formal techniques for faithful representation of the complex system effects and a statistical analysis for simultaneously studying multiple simulation outcomes, in order to interpret them. Our process has been successfully applied to a synthetic workload for distributed transaction processing. We outline the steps followed towards generating a credible simulation model and subsequently we report and interpret the results of the applied statistical analysis. This serves as a proof of concept that the proposed simulation process can be also effective in other asynchronous system contexts, like for example distributed group communication systems, file systems and so on. -
Economic assessment of externalities for interactive audio media anti-SPIT protection of internet services
Year: 2012 Journal: International Journal of Electronic Security and Digital Forensics (IJESDF) Volume: 4 Number: 2 Pages: 164-177 Publisher: Inderscience Authors: Theodosios Tsiakis, Panagiotis Katsaros, Dimitris Gritzalis
Spam over internet telephony (SPIT) refers to all unsolicited and massive scale attempts to establish voice communication with oblivious users of voice over internet protocol (VoIP) services. SPIT has exhibited a significant increase over the last years, thus developing into a serious threat with adverse impact and costs for the business economy. An audio completely automated public Turing test to tell computers and humans apart (CAPTCHA) has been introduced as a means to distinguish automated software agents (bots) from humans. CAPTCHA has been proposed as a security measure against SPIT. In this paper, we lay the principles for an adequate understanding of the SPAM-related economic models, as well as their analogies to the SPIT phenomenon, so as to weigh the benefits of audio CAPTCHA protection against the incurred costs. Our approach is based on the economic assessment of externalities, i.e., the economic impact associated with the SPIT side effects on the everyday life. -
Model checking and code generation for transaction processing software
Year: 2012 Journal: Concurrency and Computation: Practice and Experience Volume: 24 Number: 7 Pages: 711-722 Publisher: Wiley Authors: Anakreon Mentis, Panagiotis Katsaros
In modern transaction processing software, the ACID properties (atomicity, consistency, isolation, durability) are often relaxed, in order to address requirements that arise in computing environments of today. Typical examples are the long‐running transactions in mobile computing, in service‐oriented architectures and B2B collaborative applications. These new transaction models are collectively known as advanced or extended transactions. Formal specification and reasoning for transaction properties have been limited to proof‐theoretic approaches, despite the recent progress in model checking. In this work, we present a model‐driven approach for generating a provably correct implementation of the transaction model of interest. The model is specified by state machines for the transaction participants, which are synchronized on a set of events. All possible execution paths of the synchronized state machines are checked for property violations. An implementation for the verified transaction model is then automatically generated. To demonstrate the approach, the specification of nested transactions is verified, because it is the basis for many advanced transaction models. -
Formal analysis for robust anti-SPIT protection using model checking
Year: 2012 Journal: International Journal of Information Security Volume: 11 Number: 2 Pages: 121–135 Publisher: Springer Authors: Dimitris Gritzalis, Panagiotis Katsaros, Stylianos Basagiannis, Yannis Soupionis
Anti-SPIT policies counter the SPam over Internet Telephony (SPIT) by distinguishing bots launching unsolicited bulks of VoIP calls from human beings. We propose an Anti-SPIT Policy Management mechanism (aSPM) that detects spam calls and prevents VoIP session establishment by the Session Initiation Protocol (SIP). The SPIN model checker is used to formally model and analyze the robustness of the aSPM mechanism in execution scenarios with parallel SIP sessions. In case of a possible design flaw, the model checker provides a trace of the caught unexpected behavior (counterexample), that can be used for the revision of the mechanism’s design. Our SPIN model is parameterized, based on measurements from experiments with VoIP users. Non-determinism plays a key role in representing all possible anti-SPIT policy decisions, in terms of the SIP messages that may be exchanged. The model checking results provide evidence for the timeliness of the parallel SIP sessions, the absence of deadlocks or livelocks, and the fairness for the VoIP service users. These findings ensure robust anti-SPIT protection, meaning that the aSPM mechanism operates as expected, despite the occurrence of random SPIT calls and communication error messages. To the best of our knowledge, this is the first analysis for exhaustively searching security policy flaws, due to complex interactions between anti-SPIT measures and the SIP protocol services. -
Quantitative analysis of a certified e-mail protocol in mobile environments: A probabilistic model checking approach
Year: 2011 Journal: Computers & Security Volume: 30 Number: 4 Pages: 257-272 Publisher: Elsevier Authors: Stylianos Basagiannis, Sophia G. Petridou, Nikolaos Alexiou, Georgios I. Papadimitriou, Panagiotis Katsaros
Formal analysis techniques, such as probabilistic model checking, offer an effective mechanism for model-based performance and verification studies of communication systems’ behavior that can be abstractly described by a set of rules, i.e., a protocol. This article presents an integrated approach for the quantitative analysis of the Certified E-mail Message Delivery (CEMD) protocol that provides security properties to electronic mail services. The proposed scheme employs a probabilistic model checking analysis and provides for the first time insights on the impact of CEMD’s error tolerance on computational and transmission cost. It exploits an efficient combination of quantitative analysis and specific computational and communication parameters, i.e., the widely used Texas Instruments TMS320C55x Family operating in a High Speed Downlink Packet Access (HSDPA) mobile environment, where multiple CEMD participants execute parallel sessions with high bit error rates (BERs). Furthermore, it offers a tool-assisted approach for the protocol designers and analysts towards the verification of their products under varying parameters. Finally, this analysis can also be utilized towards reliably addressing cost-related issues of certain communication protocols and deciding on their cost-dependent viability, taking into account limitations that are introduced by hardware specifications of mobile devices and noisy mobile environments. -
Synthesis of attack actions using model checking for the verification of security protocols
Year: 2011 Journal: Security and Communication Networks Volume: 4 Number: 2 Pages: 147-161 Publisher: Wiley Authors: Stylianos Basagiannis, Panagiotis Katsaros, Andrew Pombortsis
Model checking of cryptographic protocols has evolved into a valuable method for discovering counterintuitive security flaws, which make it possible for a hostile agent to subvert the goals of the protocol. Published works and existing security analysis tools are usually based on general intruder models that embody at least some aspects of the seminal work of Dolev–Yao, in an attempt to detect failures of secrecy. In this work, we propose an alternative intruder model, which is based on a thorough analysis of how potential attacks might proceed. We introduce an intruder model that provides an open-ended base for the integration of multiple basic attack tactics. These attack tactics can be combined to compose complex attack actions that require a number of procedural steps from the intruder's side, such as a Denial of Service attack. In our model checking approach, protocol correctness is checked by appropriate user-supplied assertions or reachability of invalid end states. The analyst can express security properties of specific attack actions that are not restricted to safety violations captured by a generic model checker. The described intruder model methodology was implemented within the SPIN model checker for verifying two security protocols, Micromint and PayWord. -
Quantification of interacting runtime qualities in software architectures: Insights from transaction processing in client–server architectures
Year: 2010 Journal: Information and Software Technology Volume: 52 Number: 12 Pages: 1331-1345 Publisher: Elsevier Authors: Anakreon Mentis, Panagiotis Katsaros, Lefteris Angelis, George Kakarontzas
Context
Architecture is fundamental for fulfilling requirements related to the non-functional behavior of a software system, such as the quality requirement that response time does not degrade to a point where it is noticeable. Approaches like the Architecture Tradeoff Analysis Method (ATAM) combine qualitative analysis heuristics (e.g. scenarios) for one or more quality metrics with quantitative analyses. A quantitative analysis evaluates a single metric such as response time. However, since quality metrics interact with each other, a change in the architecture can unpredictably affect multiple quality metrics.
Objective
This paper introduces a quantitative method that determines the impact of a design change on multiple metrics, thus reducing the risks in architecture design. As a proof of concept, the method is applied to a simulation model of transaction processing in a client–server architecture.
Method
Factor analysis is used to unveil latent (i.e. not directly measurable) quality features represented by new variables that reflect architecture-specific correlations between metrics. Separate Analyses of Variance (ANOVA) are then applied to these variables, for interpreting the tradeoffs detected by factor analysis in terms of the quantified metrics (a minimal sketch of this pipeline follows the abstract).
Results
The results for the examined transaction processing architecture show three latent quality features, the corresponding groups of strongly correlated quality metrics and the impact of architecture characteristics on the latent quality features.
Conclusion
The proposed method is a systematic way for relating the variability of quality metrics and the implied tradeoffs to specific architecture characteristics. -
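A minimal sketch of the statistical pipeline described under Method, on synthetic data (hypothetical metrics and design factor, not the paper's simulation output), assuming scikit-learn and SciPy are available:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from scipy.stats import f_oneway

# Synthetic quality-metric matrix: response time, throughput, abort rate, queue length.
rng = np.random.default_rng(1)
n_runs = 120
metrics = rng.normal(size=(n_runs, 4))
metrics[:, 1] = -0.8 * metrics[:, 0] + 0.2 * rng.normal(size=n_runs)  # a correlated pair

# Step 1: factor analysis extracts latent quality features from the correlated metrics.
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(metrics)          # latent quality features per simulation run

# Step 2: one-way ANOVA per latent feature against a design factor with three levels
# (e.g. a hypothetical thread-pool size).
levels = np.repeat([0, 1, 2], n_runs // 3)
for k in range(scores.shape[1]):
    groups = [scores[levels == g, k] for g in (0, 1, 2)]
    f_stat, p_val = f_oneway(*groups)
    print(f"latent feature {k}: F = {f_stat:.2f}, p = {p_val:.3f}")
```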
Component Certification as a Prerequisite for Widespread OSS Reuse
Year: 2010 Journal: Electronic Communications of the European Association of Software Science and Technology (EASST) Volume: 33 Pages: 20 pages Publisher: European Association of Software Science and Technology Authors: George Kakarontzas, Panagiotis Katsaros, Ioannis Stamelos
Open source software is the product of a community process that in a single project may employ different development techniques and volunteers with diverse skills, interests and hardware. Reuse of OSS software in systems that will have to guarantee certain product properties is still complicated. The main reason is the many different levels of trust that can be placed on the various OSS sources and the lack of information about the impact that a reused OSS component can have on the system properties. A prerequisite for promoting widespread reuse of OSS software is certification at the component level at an affordable cost. This work addresses the main technical issues behind OSS component certification by formal and semiformal techniques, as well as the incentives that raised the need for the OPEN-SME European funded project. OPEN-SME introduces an OSS software reuse service for SMEs, in order to address the problem that OSS is evolved by volunteers who follow different development processes. We discuss the requirements relating to OSS software reuse based on the findings of a survey. Then we present the OPEN-SME tool-set and approach for OSS reuse and finally we show how the provision of verifiable certificates can provide assurance that an OSS component conforms to one or more anticipated requirements, necessary for reusing it in a system. -
An intruder model with message inspection for model checking security protocols
Year: 2010 Journal: Computers & Security Volume: 29 Number: 1 Pages: 16-34 Publisher: Elsevier Authors: Stylianos Basagiannis, Panagiotis Katsaros, Andrew Pombortsis
Model checking security protocols is based on an intruder model that represents the eavesdropping or interception of the exchanged messages, while at the same time performing attack actions against the ongoing protocol session(s). Any attempt to enumerate all messages that can be deduced by the intruder and the possible actions in all protocol steps results in an enormous branching of the model's state-space. In the current work, we introduce a new intruder model that can be exploited for state-space reduction, optionally in combination with known techniques, such as partial order and symmetry reduction. The proposed intruder modeling approach, called Message Inspection (MI), is based on enhancing the intruder's knowledge with metadata for the exchanged messages. In a preliminary simulation run, the intruder tags the analyzed messages with protocol-specific values for a set of predefined parameters. This metadata is used to identify possible attack actions, for which it is a priori known that they cannot cause a security violation. The MI algorithm selects attack actions that can be discarded, from an open-ended base of primitive attack actions. Thus, model checking focuses only on attack actions that may disclose a security violation. The most interesting consequence is a non-negligible state-space pruning, but at the same time our approach also allows customizing the behavior of the intruder model, in order, for example, to make it appropriate for model checking problems that involve liveness. We provide experimental results obtained with the SPIN model checker, for the Needham–Schroeder security protocol. -
Probabilistic model checking for the quantification of DoS security threats
Year: 2009 Journal: Computers & Security Volume: 28 Number: 6 Pages: 450-465 Publisher: Elsevier Authors: Stylianos Basagiannis, Panagiotis Katsaros, Andrew Pombortsis, Nikolaos Alexiou
Secure authentication features of communication and electronic commerce protocols involve computationally expensive and memory intensive cryptographic operations that have the potential to be turned into denial-of-service (DoS) exploits. Recent proposals attempt to improve DoS resistance by implementing a trade-off between the resources required for the potential victim(s) and the resources used by a prospective attacker. Such improvements have been proposed for the Internet Key Exchange (IKE), the Just Fast Keying (JFK) key agreement protocol and the Secure Sockets Layer (SSL/TLS) protocol. In the present article, we introduce probabilistic model checking as an efficient tool-assisted approach for systematically quantifying DoS security threats. We model a security protocol with a fixed network topology using probabilistic specifications for the protocol participants. We attach to the protocol model a probabilistic attacker model, which performs DoS related actions with assigned cost values. The costs for the protocol participants and the attacker reflect the level of some resource expenditure (memory, processing capacity or communication bandwidth) for the associated actions. From the developed model we obtain a Discrete Time Markov Chain (DTMC) via property preserving discrete-time semantics. The DTMC model is verified using the PRISM model checker that produces probabilistic estimates for the analyzed DoS threat. In this way, it is possible to evaluate the level of resource expenditure for the attacker, beyond which the likelihood of a widespread attack is reduced, and subsequently to compare alternative design considerations for optimal resistance to the analyzed DoS threat. Our approach is validated through the analysis of the Host Identity Protocol (HIP). The HIP base-exchange is seen as a cryptographic key-exchange protocol with special features related to DoS protection. We analyze a serious DoS threat, for which we provide probabilistic estimates, as well as results for the associated attacker and participants' costs. -
A roadmap to electronic payment transaction guarantees and a Colored Petri Net model checking approach
Year: 2009 Journal: Information and Software Technology Volume: 51 Number: 2 Pages: 235-257 Publisher: Elsevier Authors: Panagiotis Katsaros
Electronic payment systems play a vital role in modern business-to-consumer and business-to-business e-commerce. Atomicity, fault tolerance and security concerns form a problem domain of interdependent issues that are taken into account to assure the transaction guarantees of interest. We focus on the most notable payment transaction guarantees: money conservation, no double spending, goods atomicity, distributed payment atomicity, certified delivery or validated receipt and the high-level guarantees of fairness and protection of payment participants’ interests. Apart from a roadmap to the forenamed transaction guarantees, this work contributes a full-fledged methodology for building and validating high-level protocol models and for proving payment transaction guarantees by model checking them from different participants’ perspectives (payer, as well as payee). Our approach relies on the use of Colored Petri Nets and the CPN Tools environment (i) for editing and analyzing protocol models, (ii) for proving the required transaction guarantees by CTL-based (Computation Tree Temporal Logic) model checking and (iii) for evaluating the need for candidate security requirements. -
Performance and effectiveness trade-off for checkpointing in fault-tolerant distributed systems
Year: 2007 Journal: Concurrency and Computation: Practice and Experience Volume: 19 Number: 1 Pages: 37-63 Publisher: Wiley Authors: Panagiotis Katsaros, Lefteris Angelis, Constantine Lazos
Checkpointing has a crucial impact on systems' performance and fault‐tolerance effectiveness: excessive checkpointing results in performance degradation, while deficient checkpointing incurs expensive recovery. In distributed systems with independent checkpoint activities there is no easy way to determine checkpoint frequencies optimizing response‐time and fault‐tolerance costs at the same time. The purpose of this paper is to investigate the potentialities of a statistical decision‐making procedure. We adopt a simulation‐based approach for obtaining performance metrics that are afterwards used for determining a trade‐off between checkpoint interval reductions and efficiency in performance. Statistical methodology including experimental design, regression analysis and optimization provides us with the framework for comparing configurations, which use possibly different fault‐tolerance mechanisms (replication‐based or message‐logging‐based). Systematic research also allows us to take into account additional design factors, such as load balancing. The method is described in terms of a standardized object replication model (OMG FT‐CORBA), but it could also be applied in other (e.g. process‐based) computational models. -
Evaluation of composite object replication schemes for dependable server applications
Year: 2006 Journal: Information and Software Technology Volume: 48 Number: 9 Pages: 795-806 Publisher: Elsevier Authors: Panagiotis Katsaros, Nantia Iakovidou, Theodoros Soldatos
Object oriented dependable server applications often rely on fault-tolerance schemes, which are comprised of different replication policies for the constituent objects (composite replication schemes). This paper introduces a simulation-based evaluation approach for quantifying the tradeoffs between fault-tolerance overhead and fault-tolerance effectiveness in composite replication schemes. Compared to other evaluation approaches: (a) we do not use the well-known reliability blocks based simulation, but a hybrid reliability and system's traffic simulation and (b) we make a clear distinction between the measures used for the fault-affected service response times from those used for the fault-unaffected ones. The first mentioned feature allows taking into account additional concerns other than fault-tolerance, like for example load balancing and multithreading. The second feature renders the proposed approach suitable for design studies that aim to determine either optimal replication properties for the constituent objects or quality of service (QoS) guarantees for the perceived service response times. We obtain results for a case system model, based on different assumptions on what happens when server-objects fail (loss scenarios). The presented results give insight in the design of composite method request-retry schemes with appropriate request timeouts. -
A simulation test-bed for the design of dependable e-services
Year: 2003 Journal: WSEAS Transactions on Computers Volume: 2 Number: 4 Pages: 915-919 Publisher: WSEAS Authors: Panajotis Katsaros, Constantine Lazos
In this paper, we present the design and development of a simulation test-bed, for the performance analysis of dependable and mission critical e-services. Key characteristics of such systems, like for example, the object state transfer and recovery policies to be applied together with the chosen load distribution strategy, play the determinant role in the service’s performance and customer response perception. Analytic models usually fail to realistically capture the effects of the available design alternatives. On the other hand, the published simulation studies are usually bound to algorithms, which are not covered by the recently published standards, for the development of object based services. Our work aims to provide a comparison framework that adheres to the published standards, allows the compositional development of the service configurations of interest and the estimation of meaningful performance measures, in an efficient way. -
A technique for determining queuing network simulation length based on desired accuracy
Year: 2000 Journal: International Journal of Computer Systems Science and Engineering Volume: 15 Number: 6 Pages: 399-404 Publisher: CRL Publishing Ltd. Authors: Panajotis Katsaros, Constantine Lazos
Although simulation is an approach commonly used by computer systems performance engineers and communication systems engineers, not enough consideration is usually given to the quality of simulation results. However, the queuing network simulation is not only a representation of the dynamic behaviour of a system, but it is basically a stochastic process whose output has to be statistically analyzed. Moreover, the random selection of the simulation run length often leads to multiple replications of the same experiment in order to achieve the desired level of estimation accuracy. This paper presents an algorithm for dynamically determining the simulation run length with regard to the required accuracy. The latter is defined as an acceptable confidence interval length for the estimated values (confidence intervals are produced by the statistical analysis of the estimation results with the use of the regenerative approach).
-
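A minimal sketch of accuracy-driven run-length control follows; it uses a plain confidence interval over independent samples rather than the paper's regenerative method, and the output distribution and target accuracy are hypothetical:

```python
import numpy as np
from scipy import stats

# Extend the simulation run until the 95% confidence interval half-width of the
# estimated mean drops below a target accuracy.
rng = np.random.default_rng(42)
target_half_width = 0.05
batch = 500
samples = []

while True:
    # Stand-in for another batch of simulation output (e.g. observed response times).
    samples.extend(rng.exponential(scale=1.0, size=batch))
    x = np.asarray(samples)
    half_width = stats.t.ppf(0.975, len(x) - 1) * x.std(ddof=1) / np.sqrt(len(x))
    if half_width < target_half_width:
        break

print(f"stopped after {len(samples)} observations, "
      f"mean = {x.mean():.3f} +/- {half_width:.3f}")
```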
Scalable IoT architecture for balancing performance and security in mobile crowdsensing systems
Year: 2020 Book title: 7th International Conference on Internet of Things: Systems, Management and Security (IOTSMS) Publisher: IEEE Pages: 1-8 Authors: Theodoros Nestoridis, Chrysa Oikonomou, Anastasios Temperekidis, Fotios Gioulekas, Panagiotis Katsaros
Crowdsourcing aims to deliver services and content by aggregating contributions from a large user population. For mobile networks and IoT systems, crowdsourcing is used to gather and process sensor data from mobile devices (crowdsensing), in order to deliver real-time, context-aware services and possibly support user collaboration in extended geographic areas. In applications like geo-sensitive navigation, location-based activity sharing and recommendations, adequate service quality and user experience may be at stake, as the services are provided securely to an ever-growing user population. This happens due to the inherent trade-off between security and real-time performance that ultimately casts doubt on any scalability prospect beyond a certain user-interaction load. This work introduces a publish-subscribe architecture for mobile crowdsensing systems, which can be transparently scaled up to higher usage load, while retaining adequate performance and security by load balancing across multiple MQTT brokers. The security support combines a lightweight TLS implementation with an integrated mechanism for two-level access control: user-device interactions and message topics. We provide proof-of-concept measurements that show how our solution scales to increasing interaction loads through load-balancing the processing cost, which includes the overhead of the applied security mechanisms. The system architecture was implemented in a vehicular crowdsensing navigation network that allows exchanging navigation information in real time, for improved routing of vehicles to their destinations. -
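A minimal sketch of the broker load-balancing idea, assuming the paho-mqtt client library; broker addresses, topic layout and the hash-based assignment are hypothetical, and the two-level access control of the actual architecture is omitted:

```python
import json
import zlib
import paho.mqtt.client as mqtt   # assumes the paho-mqtt package is installed

# Hypothetical MQTT broker pool behind the crowdsensing service.
BROKERS = [("broker-0.example.org", 8883), ("broker-1.example.org", 8883)]

def broker_for(device_id: str):
    # Deterministic assignment of a device to one of the brokers.
    return BROKERS[zlib.crc32(device_id.encode()) % len(BROKERS)]

def publish_reading(device_id: str, reading: dict) -> None:
    host, port = broker_for(device_id)
    client = mqtt.Client()            # paho-mqtt 1.x constructor; 2.x also takes a CallbackAPIVersion
    client.tls_set()                  # lightweight TLS, as in the described setup
    client.connect(host, port)
    client.publish(f"crowdsensing/{device_id}/gps", json.dumps(reading), qos=1)
    client.disconnect()

publish_reading("vehicle-17", {"lat": 40.63, "lon": 22.95, "speed_kmh": 42})
```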
Runtime Verification of Autonomous Driving Systems in CARLA
Year: 2020 Book title: 20th International Conference on Runtime Verification (RV) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 12399 Pages: 172-183 Authors: Eleni Zapridou, Ezio Bartocci, Panagiotis Katsaros
Urban driving simulators, such as CARLA, provide 3-D environments and useful tools to easily simulate sensorimotor control systems in scenarios with complex multi-agent dynamics. This enables design exploration at the early system development stages, reducing high infrastructure costs and high risks. However, due to the high-dimensional input and state spaces of closed-loop autonomous driving systems, their testing and verification are very challenging and have not yet taken advantage of the recent developments in theory and tools for runtime verification. We show here how to integrate the recently introduced rtamt library, for runtime verification of STL (Signal Temporal Logic) specifications, with the CARLA simulator. Finally, we also present the results obtained from quantitatively monitoring interesting requirements for an experimental Adaptive Cruise Control system tested in CARLA. -
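A minimal sketch of quantitative (robustness) monitoring for an ACC-style distance requirement, written in plain Python rather than with the rtamt library's API; the threshold and signal values are hypothetical:

```python
# Robustness monitoring of "always (distance >= SAFE_DISTANCE_M)" over a finite trace:
# a positive value is the margin of satisfaction, a negative value the margin of violation.
SAFE_DISTANCE_M = 10.0

def robustness_always_ge(signal, threshold):
    # Worst margin over the trace defines the robustness of the "always" requirement.
    return min(x - threshold for x in signal)

distance_trace = [25.3, 18.7, 14.2, 11.9, 10.4, 12.8]   # metres, one value per simulation step
rho = robustness_always_ge(distance_trace, SAFE_DISTANCE_M)
print(f"robustness = {rho:.2f} m ->", "satisfied" if rho >= 0 else "violated")
```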
Formal Verification of Network Interlocking Control by Distributed Signal Boxes
Year: 2019 Book title: 6th International Symposium on Model-Based Safety and Assessment (IMBSA) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 11842 Pages: 204-221 Authors: Stylianos Basagiannis, Panagiotis Katsaros
Interlocking control prevents certain operations from occurring, unless preceded by specific events. It is used in traffic network control systems (e.g. railway interlocking control), piping and tunneling control systems and in other applications like for example communication network control. Interlocking systems have to comply with certain safety properties and this fact elevates formal modeling as the most important concern in their design. This paper introduces an interlocking control algorithm based on the use of what we call Distributed Signal Boxes (DSBs). Distributed control eliminates the intrinsic complexity of centralized interlocking control solutions, which are mainly developed in the field of railway traffic control. Our algorithm uses types of network control units, which do not store state information. Control units are combined according to a limited number of patterns that in all cases yield safe network topologies. Verification of safety takes place by model checking a network that includes all possible interconnections between neighbor nodes. Obtained results can be used to generalize correctness by compositional reasoning for networks of arbitrary complexity that are formed according to the verified interconnection cases. -
Model-Based Energy Characterization of IoT System Design Aspects
Year: 2019 Book title: From Reactive Systems to Cyber-Physical Systems - Essays Dedicated to Scott A. Smolka on the Occasion of His 65th Birthday Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 11500 Pages: 165-180 Authors: Alexios Lekidis, Panagiotis Katsaros
The advances towards IoT systems with increased autonomy support improvements to existing applications and open new perspectives for other application domains. However, the design of IoT systems is challenging, due to the multiple design aspects that need to be considered. Connectivity and storage aspects are amongst the most significant ones, as IoT devices are resource-constrained and in many cases battery-powered. On top of these, it is also essential to consider privacy and security aspects that are linked to the protection of the IoT system, as well as of the data exchanged through its connectivity interfaces. Ensuring security in an IoT system, though, is an evident need and a complex challenge, due to its impact on the battery lifetime. In this paper, we propose a methodology to manage energy consumption through a model-based approach for the energy characterization of IoT design aspects using the BIP (Behavior, Interaction, Priority) component framework. Our approach is exemplified based on an Intelligent Transport System (ITS) that uses Zolertia Zoul devices placed in traffic lights and road signs to broadcast environmental and road hazard information to crossing vehicles. The results allow finding a feasible design solution that respects battery lifetime and security requirements. -
Model-based design of energy-efficient applications for IoT systemsYear: 2018 Book title: 1st International Workshop on Methods and Tools for Rigorous System Design (MeTRiD) Publisher: Electronic Proceedings in Theoretical Computer Science (EPTCS) Pages: 24-38Authors: Alexios Lekidis, Panagiotis Katsaros
Model-based design of energy-efficient applications for IoT systems
A major challenge that is currently faced in the design of applications for the Internet of Things (IoT) concerns the optimal use of available energy resources, given the battery lifetime of the IoT devices. The challenge stems from the heterogeneity of the devices, in terms of their hardware and the provided functionalities (e.g. data processing/communication). In this paper, we propose a novel method for (i) characterizing the parameters that influence energy consumption and (ii) validating the energy consumption of IoT devices against the system's energy-efficiency requirements (e.g. lifetime). Our approach is based on energy-aware models of the IoT application's design in the BIP (Behavior, Interaction, Priority) component framework. This allows for a detailed formal representation of the system's behavior and its subsequent validation, thus providing feedback for enhancements in the pre-deployment or pre-production stages. We illustrate our approach through a Building Management System, using well-known IoT devices running the Contiki OS that communicate by diverse IoT protocols (e.g. CoAP, MQTT). The results allow deriving tight bounds for the energy consumption of various device functionalities, as well as validating lifetime requirements through Statistical Model Checking. -
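To fix ideas on how lifetime requirements relate to the characterized energy parameters, the following back-of-the-envelope Python sketch estimates device lifetime from per-state current draws and a duty cycle. All figures are hypothetical and are not taken from the paper.

```python
# Sketch: estimating IoT device battery lifetime from per-state current draws
# and a duty cycle. All numbers are hypothetical, for illustration only.

BATTERY_MAH = 2400.0  # assumed battery capacity (mAh)

# Assumed average current draw per device state (mA) and share of time spent there.
states = {
    "radio_tx": (17.4, 0.01),
    "radio_rx": (18.8, 0.02),
    "cpu_active": (3.0, 0.05),
    "sleep": (0.02, 0.92),
}

def average_current_ma(profile):
    """Weighted average current over the duty cycle."""
    return sum(current * share for current, share in profile.values())

if __name__ == "__main__":
    avg_ma = average_current_ma(states)
    lifetime_h = BATTERY_MAH / avg_ma
    print(f"average current: {avg_ma:.3f} mA, estimated lifetime: {lifetime_h / 24:.1f} days")
```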
Process Network Models for Embedded System Design Based on the Real-Time BIP Execution EngineYear: 2018 Book title: 1st International Workshop on Methods and Tools for Rigorous System Design (MeTRiD) Publisher: Electronic Proceedings in Theoretical Computer Science (EPTCS) Pages: 79-92Authors: Fotios Gioulekas, Peter Poplavko, Panagiotis Katsaros, Pedro Palomo
Process Network Models for Embedded System Design Based on the Real-Time BIP Execution Engine
Existing model-based processes for embedded real-time systems support the analysis of various non-functional properties, most notably schedulability, through model checking, simulation or other means. The analysis results are then used for modifying the system's design, so that the expected properties are satisfied. A rigorous model-based design flow differs in that it aims at a system implementation derived from high-level models by applying a sequence of semantics-preserving transformations. Properties established at any design step are preserved throughout the subsequent steps, including the executable implementation. We introduce such a design flow using a process network model of computation for application design at a high level, which combines streaming and reactive control processing with task parallelism. The schedulability of the so-called FPPNs (Fixed Priority Process Networks) is well-studied and various solutions have been presented. This article focuses on the design flow's steps for deriving executable implementations on the BIP (Behavior - Interaction - Priority) runtime environment. FPPNs are designed using the TASTE toolset, a convenient architecture description interface. In this way, the developers do not explicitly program low-level real-time OS services, and the schedulability properties are guaranteed throughout the design steps by construction. The approach has been validated on the design of a real spacecraft on-board application that has been scheduled for execution on an industrial multicore platform. -
A Process Network Model for Reactive Streaming Software with Deterministic Task ParallelismYear: 2018 Book title: 21st International Conference on Fundamental Approaches to Software Engineering (FASE) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 10802 Pages: 94-110Authors: Fotios Gioulekas, Peter Poplavko, Panagiotis Katsaros, Saddek Bensalem, Pedro Palomo
A Process Network Model for Reactive Streaming Software with Deterministic Task Parallelism
A formal semantics is introduced for a Process Network model, which combines streaming and reactive control processing with task parallelism properties suitable for exploiting multi-cores. Applications that react to environment stimuli are implemented by communicating sporadic and periodic tasks, programmed independently from an execution platform. Two functionally equivalent semantics are defined, one for sequential execution and one for real-time execution. The former ensures functional determinism by implying precedence constraints between jobs (task executions); hence, the program outputs are independent of the task scheduling. The latter specifies concurrent execution on a real-time platform, guaranteeing all of the model's constraints; it has been implemented in an executable formal specification language. The model's implementation runs on multi-core embedded systems, and supports integration of run-time managers for shared HW/SW resources (e.g. for controlling QoS, resource interference or power consumption). Finally, a model transformation approach has been developed, which allowed porting and statically scheduling a real spacecraft on-board application on an industrial multi-core platform. -
Regression-based Statistical Bounds on Software Execution TimeYear: 2017 Book title: 11th International Conference on Verification and Evaluation of Computer and Communication Systems (VECoS) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 10466 Pages: 48-63Authors: Peter Poplavko, Ayoub Nouri, Lefteris Angelis, Alexandros Zerzelidis, Saddek Bensalem, Panagiotis Katsaros
Regression-based Statistical Bounds on Software Execution Time
Our work aims at facilitating the schedulability analysis of non-critical systems, in particular those that have soft real-time constraints, where WCETs can be replaced by less stringent probabilistic bounds, which we call Maximal Execution Times (METs). In our approach, we can obtain adequate probabilistic execution time models by separating the non-random input data dependency from a modeling error that is purely random. To achieve this, we propose to take advantage of the rich set of available statistical model-fitting techniques, in particular linear regression. Although the proposed technique certainly cannot directly achieve the extreme probability levels that are usually expected for WCETs, it is an attractive alternative for MET analysis, since it can arguably guarantee safe probabilistic bounds. We demonstrate our method on a JPEG decoder running on an industrial SPARC V8 processor. -
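To illustrate the kind of regression-based bound described above, the sketch below fits a linear model of execution time against an input-size feature and derives a probabilistic upper bound by adding a high quantile of the residuals to the prediction. It is an assumption-laden simplification, not the paper's method; the data, feature and 0.99 level are invented.

```python
# Sketch: a regression-based probabilistic bound on execution time (MET-style).
# Fits time ~ a*size + b, then bounds new predictions by adding a high quantile
# of the observed residuals. Data and the 0.99 level are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
size = rng.uniform(1e3, 1e5, 200)                   # input-size feature
time = 0.8 * size + 500 + rng.normal(0, 300, 200)   # measured times (us)

a, b = np.polyfit(size, time, 1)                    # least-squares fit
residuals = time - (a * size + b)
margin = np.quantile(residuals, 0.99)               # empirical 99% residual quantile

def met_bound(new_size):
    """Probabilistic upper bound on execution time for a given input size."""
    return a * new_size + b + margin

print(f"MET bound for size 5e4: {met_bound(5e4):.0f} us")
```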
Design of embedded systems with complex task dependencies and shared resource interference (Short Paper)Year: 2017 Book title: 15th International Conference on Software Engineering and Formal Methods (SEFM) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 10469 Pages: 401-407Authors: Fotios Gioulekas, Peter Poplavko, Rany Kahil, Panagiotis Katsaros, Marius Bozga, Saddek Bensalem, Pedro Palomo
Design of embedded systems with complex task dependencies and shared resource interference (Short Paper)
Languages for embedded systems ensure predictable timing behavior by specifying constraints based on either data streaming or reactive control models of computation. Moreover, various toolsets facilitate the incremental integration of application functionalities and the system design by evolutionary refinement and model-based code generation. Modern embedded systems involve various sources of interference in shared resources (e.g. multicores) and advanced real-time constraints, such as mixed-criticality levels. A sufficiently expressive modeling approach for complex dependency patterns between real-time tasks is needed along with a formal analysis of models for runtime resource managers with timing constraints. Our approach utilizes a model of computation, called Fixed-Priority Process Networks, which ensures functional determinism by unifying streaming and reactive control within a timed automata framework. The tool flow extends the open source TASTE tool-suite with model transformations to the BIP language and code generation tools. We outline the use of our flow on the design of a spacecraft on-board application running on a quad-core LEON4FT processor. -
Cache activity profiling tool for the LEON4 processorYear: 2017 Book title: 6th International Conference on Modern Circuits and Systems Technologies (MOCAST) Publisher: IEEE Pages: 1-4Authors: Maria Ntogramatzi, Panagiotis Katsaros, Spyridon Nikolaidis
Cache activity profiling tool for the LEON4 processor
The performance of modern systems depends significantly on the cache activity. Hence, tools that monitor cache performance are very useful for optimizing a system's software or even, whenever this is possible, its hardware. In this paper, a tool that provides statistics about the cache activity of the LEON4-N2X embedded system is discussed. The tool analyzes the memory access trace of a program executed bare-metal and uses the reuse distance to calculate the latency added by the cache activity. Furthermore, the tool measures all the misses that occur and classifies them by their cause. -
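The reuse distance mentioned above is the number of distinct memory blocks referenced between two accesses to the same block; a fully-associative LRU cache of C blocks hits exactly when that distance is below C. A minimal Python sketch of this classification, with a hypothetical trace and cache size, follows.

```python
# Sketch: classifying cache hits/misses from a memory access trace via the
# reuse distance. A fully-associative LRU cache with CACHE_BLOCKS entries hits
# when the reuse distance (distinct blocks since the last use) is < CACHE_BLOCKS.
# The trace and the cache size are hypothetical.
from collections import OrderedDict

CACHE_BLOCKS = 4

def classify(trace):
    history = OrderedDict()      # block -> None, ordered by recency of use
    hits = misses = 0
    for block in trace:
        if block in history:
            # reuse distance = number of distinct blocks touched more recently
            distance = list(reversed(history)).index(block)
            history.move_to_end(block)
            if distance < CACHE_BLOCKS:
                hits += 1
                continue
        else:
            history[block] = None
        misses += 1              # cold miss or capacity miss
    return hits, misses

if __name__ == "__main__":
    print(classify([1, 2, 3, 1, 4, 5, 6, 1, 2]))   # -> (2, 7)
```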
Test-Driving Static Analysis Tools in Search of C Code Vulnerabilities II (Extended Abstract)Year: 2014 Book title: 6th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation (ISoLA) - Part II Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 8803 Pages: 486-488Authors: George Chatzieleftheriou, Apostolos Chatzopoulos, Panagiotis Katsaros
Test-Driving Static Analysis Tools in Search of C Code Vulnerabilities II (Extended Abstract)
A large number of tools that automate the process of finding errors in programs have recently emerged in the software development community. Many of them use static analysis as the main method for analyzing and capturing faults in the source code. Static analysis is deployed as an approximation of the programs' runtime behavior, with inherent limitations regarding its ability to detect actual code errors; it belongs to the class of computational problems which are undecidable [2]. For any such analysis, the major issues are: (1) the programming language of the source code where the analysis is applied, (2) the type of errors to be detected, (3) the effectiveness of the analysis and (4) the efficiency of the analysis. -
Securing Legacy Code with the TRACER PlatformYear: 2014 Book title: 18th Panhellenic Conference on Informatics (PCI) Publisher: ACM Pages: 26:1-26:6Authors: Kostantinos Stroggylos, Dimitris Mitropoulos, Zacharias Tzermias, Panagiotis Papadopoulos, Fotios Rafailidis, Diomidis Spinellis, Sotiris Ioannidis, Panagiotis Katsaros
Securing Legacy Code with the TRACER Platform
Software vulnerabilities can severely affect an organization's infrastructure and cause significant financial damage to it. A number of tools and techniques are available for performing vulnerability detection in software written in various programming platforms, in pursuit of mitigating such defects. However, since the requirements for running such tools and the formats in which they store and present their results vary widely, it is difficult to utilize many of them in the scope of a project. By simplifying the process of running a variety of vulnerability detectors and collecting their results in an efficient, automated manner during development, the task of tracking security defects throughout the evolution history of software projects is bolstered. In this paper we present tracer, a software framework and platform to support the development of more secure applications by constantly monitoring software projects for vulnerabilities. The platform allows the easy integration of existing tools that statically detect software vulnerabilities and promotes their use during software development and maintenance. To demonstrate the efficiency and usability of the platform, we integrated two popular static analysis tools, FindBugs and Frama-C, as sample implementations, and report on preliminary results from their use. -
TRACER: A Platform for Securing Legacy CodeYear: 2014 Book title: 7th International Conference on Trust and Trustworthy Computing (TRUST) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 8564 Pages: 218-219Authors: Kostantinos Stroggylos, Dimitris Mitropoulos, Zacharias Tzermias, Panagiotis Papadopoulos, Fotios Rafailidis, Diomidis Spinellis, Sotiris Ioannidis, Panagiotis Katsaros
TRACER: A Platform for Securing Legacy Code
In this paper we present tracer, a framework to support the development of secure applications by constantly monitoring software projects for vulnerabilities. tracer simplifies the integration of existing tools that detect software vulnerabilities and promotes their use during development and maintenance. -
Stochastic Game-Based Analysis of the DNS Bandwidth Amplification Attack Using Probabilistic Model CheckingYear: 2014 Book title: 10th European Dependable Computing Conference (EDCC) Publisher: IEEE Pages: 226-237Authors: Tushar Deshpande, Panagiotis Katsaros, Scott A. Smolka, Scott D. Stoller
Stochastic Game-Based Analysis of the DNS Bandwidth Amplification Attack Using Probabilistic Model Checking
The Domain Name System (DNS) is an Internet-wide, hierarchical naming system used to translate domain names into numeric IP addresses. Any disruption of DNS service can have serious consequences. We present a formal game-theoretic analysis of a notable threat to DNS, namely the bandwidth amplification attack (BAA), and the countermeasures designed to defend against it. We model the DNS BAA as a two-player, turn-based, zero-sum stochastic game between an attacker and a defender. The attacker attempts to flood a victim DNS server with malicious traffic by choosing an appropriate number of zombie machines with which to attack. In response, the defender chooses among five BAA countermeasures, each of which seeks to increase the amount of legitimate traffic the victim server processes. To simplify the model and optimize the analysis, our model does not explicitly track the handling of each packet. Instead, our model is based on calculations of the rates at which the relevant kinds of events occur in each state. We use our game-based model of DNS BAA to generate optimal attack strategies, which vary the number of zombies, and optimal defense strategies, which aim to enhance the utility of the BAA countermeasures by combining them in advantageous ways. The goal of these strategies is to optimize the attacker's and defender's payoffs, which are defined using probabilistic reward-based properties, and are measured in terms of the attacker's ability to minimize the volume of legitimate traffic that is processed, and the defender's ability to maximize the volume of legitimate traffic that is processed. -
Correct-by-Construction Web Service ArchitectureYear: 2014 Book title: 8th International Symposium on Service Oriented System Engineering (SOSE) Publisher: IEEE Pages: 47-58Authors: Emmanouela Stachtiari, Nikos Vesyropoulos, George Kourouleas, Christos K. Georgiadis, Panagiotis Katsaros
Correct-by-Construction Web Service Architecture
Service-Oriented Computing aims to facilitate the development of large-scale applications out of loosely coupled services. The service architecture sets the framework for achieving coherence and interoperability despite service autonomy and the heterogeneity in data representation and protocols. Service-Oriented Architectures are based on standardized service contracts, in order to infuse characteristic properties (stateless interactions, atomicity, etc.). However, contracts cannot ensure correctness of services if essential operational details are overlooked, as is usually the case. We introduce a modeling framework for the specification of Web Service architectures, in terms of formal operational semantics. Our approach aims to enable rigorous design of Web Services, based on the Behaviour, Interaction, Priority (BIP) component framework and the principles of correctness-by-construction. We provide executable BIP models for SOAP-based and RESTful Web Services and for a service architecture with session replication. The architectures are treated as reusable design artifacts that may be composed, such that their characteristic properties are preserved. -
Inlined monitors for security policy enforcement in web applicationsYear: 2013 Book title: 17th Panhellenic Conference on Informatics (PCI) Publisher: ACM Pages: 75-82Authors: Fotios Rafailidis, Ioannis Panagos, Panagiotis Katsaros, Alexandros Arvanitidis
Inlined monitors for security policy enforcement in web applications
Improper input validation in Web Applications undermines their security, and this may have disastrous consequences for the users. Input data may or may not be harmful, depending on how they are used with regard to the interactions with the clients and the accessed sensitive resources (e.g. databases and files). Existing application frameworks cannot guarantee safe input sanitization with respect to all vulnerabilities. Also, when legacy code is incorporated that was not originally written for the Web, its security hardening is costly and error-prone. We propose a reference monitor inlining approach that treats input injection vulnerabilities as a cross-cutting concern. Our monitors enforce high-level security policies for taint propagation control, by weaving checks and repair actions into the untrusted code. Taint policies are specified in JavaMOP, a programming framework for generating runtime monitors, which are woven into the application through an automated Aspect-Oriented Programming process. When monitor design is guided by preliminary static taint analysis, the incurred overhead can be reduced. Further improvements are feasible through JavaMOP's optimizations. As a proof of concept, we present the design and experimental validation of inlined monitors against SQL injection and cross-site scripting attacks. -
An Interdisciplinary Perspective to the Design and Decision Support of Integral Safety SystemsYear: 2013 Book title: 4th IFAC Workshop on Dependable Control of Discrete Systems (DCDS) Publisher: Elsevier Series: IFAC Proceedings Volumes Volume: 46 Pages: 145-150Authors: Christian Berger, Panagiotis Katsaros, Mahdi Bohlouli, Lefteris Angelis
An Interdisciplinary Perspective to the Design and Decision Support of Integral Safety Systems
Next generation integral safety systems are expected to provide better protection against traffic accidents by interlinking sensors and actuators of active and passive safety. A series of advanced functions will be used to mitigate collisions and, if they cannot be avoided, at least reduce their severity. We explore the interplay between key technology areas towards a holistic approach in the design and decision support of integral safety systems. First, we refer to the main problems in the design of effective systems and the associated software engineering challenges. Recent advances in sensor data analytics are then explored and their integration with decision support for vehicle control is examined. Finally, we envision that rigorous design techniques based on models for human-machine interaction are essential for achieving adequate performance and robustness of integral safety systems. -
The Sphinx enigma in critical VoIP infrastructures: Human or botnet?Year: 2013 Book title: 4th Int. Conference on Information, Intelligence, Systems and Applications (IISA) Publisher: IEEE Pages: 1-6Authors: Dimitris Gritzalis, Yannis Soupionis, Vasilios Katos, Ioannis Psaroudakis, Panajotis Katsaros, Anakreon Mentis
The Sphinx enigma in critical VoIP infrastructures: Human or botnet?
Sphinx was a monster in Greek mythology devouring those who could not solve her riddle. In VoIP, a new service in the role of Sphinx provides protection against SPIT (Spam over Internet Telephony) by discriminating human callers from botnets. The VoIP Sphinx tool uses audio CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) that are controlled by an anti-SPIT policy mechanism. The design of the Sphinx service has been formally verified for the absence of side-effects in the VoIP services (robustness), as well as for its DoS-resistance. We describe the principles and innovations of Sphinx, together with experimental results from pilot use cases. -
Probabilistic Model Checking of CAPTCHA Admission Control for DoS Resistant Anti-SPIT ProtectionYear: 2012 Book title: 7th International Workshop on Critical Information Infrastructures Security (CRITIS 12) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 7722 Pages: 143-154Authors: Emmanouela Stachtiari, Yannis Soupionis, Panagiotis Katsaros, Anakreontas Mentis, Dimitris Gritzalis
Probabilistic Model Checking of CAPTCHA Admission Control for DoS Resistant Anti-SPIT Protection
Voice over IP (VoIP) service is expected to play a key role in new ways of communication. It takes advantage of Internet Protocols by using packet networks to transmit voice and multimedia data, thus providing extreme cost savings. On the other hand, this technology has inherited drawbacks, like SPAM over Internet Telephony (SPIT). A well-established method to tackle SPIT is the use of CAPTCHAs. CAPTCHAs are vulnerable to Denial of Service (DoS) attacks, due to their excessive demands for bandwidth. We suggest that anti-SPIT protection should be combined with appropriate admission control policies, for mitigating the effects of DoS attacks. In order to identify how effective this technique is, we quantify the costs and the benefits in bandwidth usage through probabilistic model checking of four different admission control policies. We conclude with comments on how appropriate each policy is for tackling DoS attacks. -
Rigorous Analysis of Service Composability by Embedding WS-BPEL into the BIP Component FrameworkYear: 2012 Book title: 19th IEEE International Conference on Web Services (ICWS 12) Publisher: IEEE Computer Society Pages: 319-326Authors: Emmanouela Stachtiari, Anakreon Mentis, Panagiotis Katsaros
Rigorous Analysis of Service Composability by Embedding WS-BPEL into the BIP Component Framework
Behavioral correctness of service compositions refers to the absence of service interaction flaws, so that essential service properties like deadlock freedom are preserved and correctness properties related to safety and liveness are assured. Model checking is a widespread technique and it is based on extracting an abstract model representation of the program defining a service orchestration or choreography. During model extraction, the original structure of the service composition cannot be preserved and backwards traceability of the verification findings is not possible. We propose a rigorous analysis within the BIP component framework. Being rigorous means that the analyst is able to reason on which properties hold and why. The BIP language offers a sound execution semantics for a minimal set of primitives and constructs for modeling and composing layered components. We formally define the WS-BPEL 2.0 execution semantics and we provide a structure-preserving translation (embedding) of WS-BPEL to BIP. Structure preservation is feasible, due to the formally grounded expressiveness properties of BIP. As a proof of concept, we apply the developed embedding to a sample BPEL program and present the analysis results for a safety property. By exploiting the BIP model structure we interpret the analysis findings in terms of the service interactions stated in the BPEL source code. A significant benefit of BIP is that it applies compositional reasoning on the model structure to guarantee essential correctness properties and avoid, as much as possible, the scalability limitations of conventional model checking. -
Abstract Model RepairYear: 2012 Book title: 4th International NASA Formal Methods Symposium (NFM 12) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 7226 Pages: 341-355Authors: George Chatzieleftheriou, Borzoo Bonakdarpour, Scott A. Smolka, Panagiotis Katsaros
Abstract Model Repair
Given a Kripke structure M and a CTL formula φ, where M ⊭ φ, the problem of Model Repair is to obtain a new model M′ such that M′ ⊧ φ. Moreover, the changes made to M to derive M′ should be minimal with respect to all such M′. As in model checking, state explosion can make it virtually impossible to carry out model repair on models with infinite or even large state spaces. In this paper, we present a framework for model repair that uses abstraction refinement to tackle state explosion. Our model-repair framework is based on Kripke Structures, a 3-valued semantics for CTL, and Kripke Modal Transition Systems (KMTSs), and features an abstract-model-repair algorithm for KMTSs. Application to an Automatic Door Opener system is used to illustrate the practical utility of abstract model repair. -
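To make the setting concrete, the following Python sketch builds a tiny explicit-state Kripke structure and checks an AG (invariant) CTL property by a reachability search. It only illustrates the kind of model M and formula φ that model repair starts from; the structure and labels are invented, and this is not the abstract-model-repair algorithm of the paper.

```python
# Sketch: a tiny Kripke structure and an 'AG p' (invariant) check by
# reachability from the initial states. The structure and labels are
# hypothetical; this is not the abstract-model-repair algorithm itself.

initial = {"s0"}
transitions = {"s0": {"s1"}, "s1": {"s1", "s2"}, "s2": {"s0"}}
labels = {"s0": {"p"}, "s1": {"p"}, "s2": set()}   # s2 violates p

def check_AG(prop):
    """M |= AG prop iff every reachable state is labelled with prop."""
    seen, stack = set(initial), list(initial)
    while stack:
        s = stack.pop()
        if prop not in labels[s]:
            return False, s                        # counterexample state
        for t in transitions[s] - seen:
            seen.add(t)
            stack.append(t)
    return True, None

print(check_AG("p"))   # (False, 's2') -> a candidate target for model repair
```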
Formal Analysis of the DNS Bandwidth Amplification Attack and Its Countermeasures Using Probabilistic Model CheckingYear: 2011 Book title: 2011 IEEE 13th International Symposium on High-Assurance Systems Engineering (HASE 11) Publisher: IEEE Computer Society Pages: 360-367Authors: Tushar Deshpande, Panagiotis Katsaros, Stylianos Basagiannis, Scott A. Smolka
Formal Analysis of the DNS Bandwidth Amplification Attack and Its Countermeasures Using Probabilistic Model Checking
The DNS Bandwidth Amplification Attack (BAA) is a distributed denial-of-service attack in which a network of computers floods a DNS server with responses to requests that have never been made. Amplification enters into the attack by virtue of the fact that a small 60-byte request can be answered by a substantially larger response of 4,000 bytes or more in size. We use the PRISM probabilistic model checker to introduce a Continuous Time Markov Chain model of the DNS BAA and three recently proposed countermeasures, and to perform an extensive cost-benefit analysis of the countermeasures. Our analysis, which is applicable to both DNS and DNSSec (a security extension of DNS), is based on objective metrics that weigh the benefits for a server in terms of the percentage increase in the processing of legitimate packets against the cost incurred by incorrectly dropping legitimate traffic. The results we obtain, gleaned from more than 450 PRISM runs, demonstrate significant differences between the countermeasures as reflected by their respective net benefits. Our results also reveal that DNSSec is more vulnerable than DNS to a BAA attack, and, relatedly, DNSSec derives significantly less benefit from the countermeasures. -
Elastic Component Characterization with Respect to Quality Properties: An Intuitionistic Fuzzy-Based ApproachYear: 2011 Book title: 15th Panhellenic Conference on Informatics (PCI 11) Publisher: IEEE Computer Society Pages: 270-274Authors: George Kakarontzas, Vassilis C. Gerogiannis, Ioannis Stamelos, Panagiotis Katsaros
Elastic Component Characterization with Respect to Quality Properties: An Intuitionistic Fuzzy-Based Approach
Component selection based on quality properties is a fuzzy process because measurable component attributes cannot be attributed with certainty to high-level quality properties such as the ones proposed by the ISO/IEC 9126 quality model and other similar models. In addition, measurable component quality attributes can be characterized differently for different application domains (e.g., a total execution time value can be considered very satisfactory for one application domain and extremely unsatisfactory for another). In this paper we demonstrate the usage of an intuitionistic fuzzy approach in selecting components originating from an elastic component repository. Elastic components are the output of a quality-driven process for component development that results in component variants based on quality discrimination. During reuse, the utilization of intuitionistic fuzzy sets can prove an efficient solution for deriving a first-level characterization of the available components, by considering not only the uncertainty of the reusers but also their hesitation degree. -
Economic Evaluation of Interactive Audio Media for Securing Internet ServicesYear: 2011 Book title: Joint Conferences on Global Security, Safety and Sustainability and e-Democracy (ICGS3/e-Democracy 11) Publisher: Springer Series: Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering Volume: 99 Pages: 46-53Authors: Theodosios Tsiakis, Panagiotis Katsaros, Dimitris Gritzalis
Economic Evaluation of Interactive Audio Media for Securing Internet Services
Internet Telephony (Voice over Internet Protocol or VoIP) has recently become increasingly popular, mainly due to its cost advantages and range of advanced services. At the same time, SPam over Internet Telephony (SPIT), referring to unsolicited bulk calls sent via VoIP networks by botnets, is expected to become a serious threat in the near future. Audio CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) mechanisms were introduced and employed as a security measure to distinguish automated software agents from human beings. The scope of this paper is to present the security economics framework and to provide an in-depth review of the related economic models of SPAM and their analogies with SPIT. -
A Framework for Access Control with Inference ConstraintsYear: 2011 Book title: 35th Annual IEEE Computer Software and Applications Conference (COMPSAC 11) Publisher: IEEE Computer Society Pages: 289-297Authors: Vasilios Katos, Dimitrios Vrakas, Panagiotis Katsaros
A Framework for Access Control with Inference Constraints
In this paper we present an approach for investigating the feasibility of reducing inference control to access control, as the latter is a more desirable means of preventing unauthorized access to sensitive data. Access control is preferable over inference control in terms of efficiency, but it fails to offer confidentiality in the presence of inference channels. We argue that during the design phase of a data schema and the definition of user roles, inference channels should be considered. An approach is introduced that can be integrated into a risk assessment exercise to assist in determining the roles and/or attributes that lower the risks associated with information disclosure from inference. The residual risk from the remaining inference channels could be treated by well known inference control mechanisms. -
Test-Driving Static Analysis Tools in Search of C Code VulnerabilitiesYear: 2011 Book title: 35th Annual IEEE Computer Software and Applications Conference Workshops (COMPSACW 11) Publisher: IEEE Computer Society Pages: 96-103Authors: George Chatzieleftheriou, Panagiotis Katsaros
Test-Driving Static Analysis Tools in Search of C Code Vulnerabilities
Recently, a number of tools for automated code scanning came into the limelight. Due to the significant costs associated with incorporating such a tool in the software lifecycle, it is important to know what defects are detected and how accurate and efficient the analysis is. We focus specifically on popular static analysis tools for C code defects. Existing benchmarks include the actual defects of open source programs, but they lack systematic coverage of possible code defects and the coding complexities in which they arise. We introduce a test suite implementing the discussed requirements for frequent defects selected from public catalogues. Four open source and two commercial tools are compared in terms of the effectiveness and efficiency of their detection capability. A wide range of C constructs is taken into account and appropriate metrics are computed, which show how the tools balance the inherent analysis tradeoffs and efficiency. The results are useful for identifying the appropriate tool, in terms of cost-effectiveness, while the proposed methodology and test suite may be reused. -
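Benchmark comparisons of this kind typically reduce to detection-capability metrics over a labelled test suite. A small Python sketch of computing precision, recall and an F-measure from hypothetical tool reports follows; the test case identifiers and findings are invented and the metric choice is only one plausible option.

```python
# Sketch: detection-capability metrics for a static analysis tool over a
# labelled test suite. The test cases and the tool report are hypothetical.

seeded_defects = {"tc01", "tc02", "tc03", "tc04", "tc05"}   # known true defects
tool_reports   = {"tc01", "tc03", "tc06", "tc07"}           # findings of one tool

tp = len(seeded_defects & tool_reports)    # true positives
fp = len(tool_reports - seeded_defects)    # false positives (noise)
fn = len(seeded_defects - tool_reports)    # false negatives (missed defects)

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```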
Quantitative model checking of an RSA-based email protocol on mobile devicesYear: 2011 Book title: IEEE Symposium on Computers and Communications (ISCC 11) Publisher: IEEE Pages: 639-645Authors: Sophia Petridou, Stylianos Basagiannis, Nikolaos Alexiou, Georgios Papadimitriou, Panagiotis Katsaros
Quantitative model checking of an RSA-based email protocol on mobile devices
The current proliferation of mobile devices has resulted in a large diversity of hardware specifications, each designed for different services and applications (e.g. cell phones, smart phones, PDAs). At the same time, e-mail message delivery has become a vital part of everyday communications. This article provides a cost-aware study of an RSA-based e-mail protocol executed upon the widely used Apple iPhone 1,2 with ARM1176JZF-S, operating in a High Speed Downlink Packet Access (HSDPA) mobile environment. The proposed study employs formal analysis techniques, such as probabilistic model checking, and proceeds to a quantitative analysis of the email protocol, taking into account computational parameters derived from the devices' specifications. The value of this study is to form a computer-aided framework which balances the tradeoff between gaining in security, using high-length RSA keys, and conserving CPU resources, due to the hardware limitations of mobile devices. To the best of our knowledge, this is the first time that probabilistic model checking is utilized towards verifying a secure e-mail protocol under hardware constraints. In fact, the proposed analysis can be widely exploited by protocol designers in order to verify their products in conjunction with specific mobile devices. -
Model Repair for Probabilistic SystemsYear: 2011 Book title: 17th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 11) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 6605 Pages: 326-340Authors: Ezio Bartocci, Radu Grosu, Panagiotis Katsaros, C. R. Ramakrishnan, Scott A. Smolka
Model Repair for Probabilistic Systems
We introduce the problem of Model Repair for Probabilistic Systems as follows. Given a probabilistic system M and a probabilistic temporal logic formula φ such that M fails to satisfy φ, the Model Repair problem is to find an M′ that satisfies φ and differs from M only in the transition flows of those states in M that are deemed controllable. Moreover, the cost associated with modifying M’s transition flows to obtain M′ should be minimized. Using a new version of parametric probabilistic model checking, we show how the Model Repair problem can be reduced to a nonlinear optimization problem with a minimal-cost objective function, thereby yielding a solution technique. We demonstrate the practical utility of our approach by applying it to a number of significant case studies, including a DTMC reward model of the Zeroconf protocol for assigning IP addresses, and a CTMC model of the highly publicized Kaminsky DNS cache-poisoning attack. -
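As a toy illustration of the repair problem stated above (not the parametric model-checking reduction used in the paper), the Python sketch below takes a small three-state DTMC whose goal-reachability probability misses a bound and searches for the smallest change to a controllable transition probability that restores the property. The chain, parameters, threshold and cost notion are hypothetical.

```python
# Sketch: a toy "model repair" for a 3-state DTMC. From s0 we move to s1 with
# probability a (controllable) or to an absorbing fail state; from s1 we reach
# goal with probability b or return to s0. Reachability of goal from s0 is
# p(a) = a*b / (1 - a*(1-b)); we look for the smallest increase of a so that
# p(a) meets a threshold. Parameters and the cost notion |delta a| are invented.

b = 0.7            # fixed transition probability s1 -> goal
a0 = 0.8           # current controllable probability s0 -> s1
THRESHOLD = 0.9    # required reachability probability

def reach_goal(a):
    return a * b / (1.0 - a * (1.0 - b))

def minimal_repair(a_init, threshold, eps=1e-9):
    """Bisection for the smallest a in [a_init, 1] with reach_goal(a) >= threshold."""
    if reach_goal(a_init) >= threshold:
        return a_init
    lo, hi = a_init, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if reach_goal(mid) >= threshold else (mid, hi)
    return hi

a_star = minimal_repair(a0, THRESHOLD)
print(f"repaired a = {a_star:.4f}, repair cost |delta a| = {a_star - a0:.4f}")
```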
Formal Analysis of the Kaminsky DNS Cache-Poisoning Attack Using Probabilistic Model CheckingYear: 2010 Book title: 2010 IEEE 12th International Symposium on High Assurance Systems Engineering (HASE 10) Publisher: IEEE Computer Society Pages: 94-103Authors: Nikolaos Alexiou, Stylianos Basagiannis, Tushar Deshpande, Panagiotis Katsaros, Scott A. Smolka
Formal Analysis of the Kaminsky DNS Cache-Poisoning Attack Using Probabilistic Model Checking
We use the probabilistic model checker PRISM to formally model and analyze the highly publicized Kaminsky DNS cache-poisoning attack. DNS (Domain Name System) is an internet-wide, hierarchical naming system used to translate domain names such as google.com into physical IP addresses such as 208.77.188.166. The Kaminsky DNS attack is a recently discovered vulnerability in DNS that allows an intruder to hijack a domain, i.e. corrupt a DNS server so that it replies with the IP address of a malicious web server when asked to resolve URLs within a non-malicious domain such as google.com. A proposed fix for the attack is based on the idea of randomizing the source port a DNS server uses when issuing a query to another server in the DNS hierarchy. We use PRISM to introduce a Continuous Time Markov Chain representation of the Kaminsky attack and the proposed fix, and to perform the required probabilistic model checking. Our results, gleaned from more than 240 PRISM runs, formally validate the existence of the Kaminsky cache-poisoning attack even in the presence of an intruder with virtually no knowledge of the victim DNS server's actions. They also serve to quantify the effectiveness of the proposed fix: using nonlinear least-squares curve fitting, we show that the probability of a successful attack obeys a 1/N distribution, where N is the upper limit on the range of source-port ids. We also demonstrate an increasing attack probability with an increasing number of attempted attacks or increasing rate at which the intruder guesses the source-port id. -
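The 1/N behaviour reported above can be sanity-checked with an elementary calculation: if the intruder must guess a source-port id drawn uniformly from N values and makes k independent guesses per response window, the per-window success probability is 1 - (1 - 1/N)^k, which is approximately k/N for k much smaller than N. The Python sketch below uses invented values of N and k purely for illustration.

```python
# Sketch: per-window probability that an off-path intruder guesses the DNS
# source-port id, assuming a uniformly random port out of N and k independent
# guesses before the legitimate response arrives. Numbers are hypothetical.

def guess_probability(n_ports, k_guesses):
    return 1.0 - (1.0 - 1.0 / n_ports) ** k_guesses

for n in (2**8, 2**12, 2**16):            # widening the source-port range
    p = guess_probability(n, k_guesses=50)
    print(f"N={n:6d}: P(success in one window) = {p:.4f}  (~ k/N = {50 / n:.4f})")
```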
A Formally Verified Mechanism for Countering SPITYear: 2010 Book title: 5th International Workshop on Critical Information Infrastructures Security (CRITIS 10) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 6712 Pages: 128-139Authors: Yannis Soupionis, Stylianos Basagiannis, Panagiotis Katsaros, Dimitris Gritzalis
A Formally Verified Mechanism for Countering SPIT
Voice over IP (VoIP) is a key technology, which provides new ways of communication. It enables the transmission of telephone calls over the Internet, which delivers economical telephony that can clearly benefit both consumers and businesses, but it also provides a cheap method of mass advertising. Such bulk unsolicited calls are known as SPam over Internet Telephony (SPIT). In this paper we illustrate an anti-SPIT policy-based management (aSPM) mechanism which can handle the SPIT phenomenon. Moreover, we introduce formal verification as a means for validating the effectiveness of the aSPM against its intended goals. We provide model checking results that report upper bounds on the duration of call session establishment for the analyzed anti-SPIT policy over the Session Initiation Protocol (SIP) and prove the absence of deadlocks. -
Ontology-Based Model Driven Engineering for Safety VerificationYear: 2010 Book title: 36th EUROMICRO Conference on Software Engineering and Advanced Applications (EUROMICRO SEAA 10) Publisher: IEEE Computer Society Pages: 47-54Authors: Konstantinos Mokos, George Meditskos, Panagiotis Katsaros, Nick Bassiliades, Vangelis Vasiliades
Ontology-Based Model Driven Engineering for Safety Verification
Safety assessment of dependable systems is a complex verification task that should be explicitly incorporated into the development cycle during the very early stages of a project. The main reason is that the cost to correct a safety error at the late stages of system development is excessively high. Towards this aim, we introduce an ontology-based model-driven engineering process for automating transformations of models that are utilized as reusable artifacts. The logical and syntactical structures of the design and safety models have to conform to a number of metamodel constraints. These constraints are semantically represented by mapping them onto an OWL domain ontology, allowing the incorporation of a Description Logic OWL reasoner and inference rules, in order to detect missing model elements and semantically inconsistent parts. Model validation throughout the ontology-based transformation assures that the generated formal safety model fulfils a series of requirements that render it analyzable. Our approach has been implemented as a response to an industrial problem, where the architecture design is expressed in the Architecture Analysis and Design Language (AADL) and safety models are specified in the AltaRica formal language. -
ACID Sim Tools: A simulation framework for distributed transaction processing architecturesYear: 2010 Book title: Industry Track of 1st Int. Conference on Simulation Tools and Techniques for Communications, Networks and Systems (SIMUtools 08) Publisher: ACM Pages: 8 pagesAuthors: Anakreon Mentis, Panagiotis Katsaros, Lefteris Angelis
ACID Sim Tools: A simulation framework for distributed transaction processing architectures
Modern network-centric information systems implement highly distributed architectures that usually include multiple application servers. Application design is mainly based on the fundamental object-oriented principles and the adopted architecture matches the logical decomposition of applications (into several tiers like presentation, logic and data) to their software and hardware structuring. The provided recovery solutions ensure at-most-once service request processing by an existing transaction processing infrastructure. However, in published works, performance evaluation of transaction processing aspects is focused on the computational model of database servers. Also, there are no available tools which enable exploring the performance and availability trade-offs that arise when applying different combinations of concurrency control, atomic commit and recovery protocols. This paper introduces ACID Sim Tools, a publicly available tool and at the same time an open source framework for interactive and batch-mode simulation of transaction processing architectures that adopt the basic assumptions of an object-based computational model. -
Synthetic Metrics for Evaluating Runtime Quality of Software Architectures with Complex TradeoffsYear: 2009 Book title: 35th Euromicro Conference on Software Engineering and Advanced Applications (EUROMICRO SEAA 09) Publisher: IEEE Computer Society Pages: 237-242Authors: Anakreon Mentis, Panagiotis Katsaros, Lefteris Angelis
Synthetic Metrics for Evaluating Runtime Quality of Software Architectures with Complex Tradeoffs
Runtime quality of software, such as availability and throughput, depends on architectural factors and execution environment characteristics (e.g. CPU speed, network latency). Although the specific properties of the underlying execution environment are unknown at design time, the software architecture can be used to assess the inherent impact of the adopted design decisions on runtime quality. However, the design decisions that arise in complex software architectures exhibit non-trivial interdependencies. This work introduces an approach that discovers the most influential factors, by exploiting the correlation structure of the analyzed metrics via factor analysis of simulation data. A synthetic performance metric is constructed for each group of correlated metrics. The variability of these metrics summarizes the combined factor effects; hence, it is easier to assess the impact of the analyzed architecture decisions on the runtime quality. The approach is applied on experimental results obtained with the ACID Sim Tools framework for simulating transaction processing architectures. -
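For readers unfamiliar with the statistical step, the sketch below runs a factor analysis over a small matrix of correlated runtime metrics and derives synthetic scores per simulated configuration. The metric names and data are invented, and scikit-learn's FactorAnalysis is used merely as an off-the-shelf stand-in for the paper's analysis.

```python
# Sketch: deriving synthetic metrics from correlated simulation measurements
# via factor analysis. Metric names and data are invented; scikit-learn's
# FactorAnalysis serves as a generic stand-in for the statistical step.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 2))                  # two hidden "quality" factors
loadings = np.array([[0.9, 0.1],    # throughput
                     [0.8, 0.2],    # commit rate
                     [0.1, 0.9],    # response time
                     [0.2, 0.8]])   # queue length
metrics = latent @ loadings.T + 0.1 * rng.normal(size=(100, 4))

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(metrics)                  # one synthetic metric per factor

print("estimated loadings (metrics x factors):")
print(np.round(fa.components_.T, 2))
print("synthetic metric values for the first 3 simulated configurations:")
print(np.round(scores[:3], 2))
```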
Hands on Dependability EconomicsYear: 2009 Book title: International Conference on Dependability (DEPEND 09) Publisher: IEEE Computer Society Pages: 117-121Authors: Theodosios Tsiakis, Panagiotis Katsaros
Hands on Dependability Economics
Contemporary societies (from individuals to organizations) depend on services delivered by systems to achieve individual goals, meaning that a system must have engineered and guaranteed dependability, regardless of continuous, rapid and unpredictable technological and context changes. The simplest questions that surface are how to evaluate dependability and how much (money) we should spend on it. Dependability risk management is an effective process that can determine, with simplicity, the likelihood of an accident and the severity of its consequences. On the other hand, we also need quantitative methods, in order to assess the cost of the various measures which can be taken to reduce the dependability risk. In this paper, we survey quantitative methods that can play an important role in Dependability Economics. These methods aim at providing estimates for the economic consequences of lowering dependability levels and the costs to implement dependability. -
The ACID model checker and code generator for transaction processingYear: 2009 Book title: 2009 International Conference on High Performance Computing & Simulation (HPCS 09) Publisher: IEEE Pages: 138-144Authors: Anakreon Mentis, Panagiotis Katsaros
The ACID model checker and code generator for transaction processing
Traditional transaction processing aims at delivering the ACID properties (Atomicity, Consistency, Isolation, Durability), which nowadays are often relaxed, due to the need for transaction models that suit modern computing environments and workflow management applications. Typical examples are the requirements of long-running transactions in mobile computing or in the web, as well as the requirements of business-to-business collaborative applications. However, there is a lack of tools for automatically verifying the correctness of transaction model implementations. This work presents the ACID model checker and code generator, which plays a vital role in developing correct simulation models for the ACID Sim Tools environment. In essence, our contribution introduces an approach for automatically generating provably correct implementations of transaction management, for the transaction model of interest. -
Product Line Variability with Elastic Components and Test-Driven DevelopmentYear: 2008 Book title: International Conference on Computational Intelligence for Modelling Control & Automation (CIMCA 08) Publisher: IEEE Computer Society Volume: 1 Pages: 146-151Authors: George Kakarontzas, Ioannis Stamelos, Panagiotis Katsaros
Product Line Variability with Elastic Components and Test-Driven Development
In this work we present a systematic approach for the creation of new variant software components via customization of existing core assets of a software product line. We consider both functional and quality variants and address the issue of a controlled creation of variants, which considers the reference architecture and its co-evolution with a number of other artifacts, including components and functional and quality test suites. Furthermore, we discuss the popular agile practice of Test-Driven Development (TDD) and how it can be used to assist the evolution of software components in a software product line. -
Static Program Analysis for Java Card AppletsYear: 2008 Book title: 8th IFIP WG 8.8/11.2 International Conference on Smart Card Research and Advanced Applications (CARDIS 08) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 5189 Pages: 17-31Authors: Vasilios Almaliotis, Alexandros Loizidis, Panagiotis Katsaros, Panagiotis Louridas, Diomidis Spinellis
Static Program Analysis for Java Card Applets
The Java Card API provides a framework of classes and interfaces that hides the details of the underlying smart card interface, thus relieving developers from going through the swamps of microcontroller programming. This allows application developers to concentrate most of their effort on the details of the application, assuming proper use of the Java Card API calls regarding (i) the correctness of the methods' invocation targets and their arguments and (ii) temporal safety, i.e. the requirement that certain method calls have to be used in certain orders. Several characteristics of the Java Card applets and their multiple-entry-point program structure make it possible for a potentially unhandled exception to reach the invoked entry point. This contingency opens a possibility to leave the applet in an unpredictable state that is potentially dangerous for the application's security. Our work introduces automatic static program analysis as a means for the early detection of misused and therefore dangerous API calls. The presented analyses have been implemented within the FindBugs bug detector, an open source framework that applies static analysis functions on the applet bytecode. -
Towards Compositional Safety Analysis via Semantic Representation of Component Failure BehaviourYear: 2008 Book title: Eighth Joint Conference on Knowledge-Based Software Engineering (JCKBSE 08) Publisher: IOS Press Series: Frontiers in Artificial Intelligence and Applications Volume: 180 Pages: 405 - 414Authors: Konstantinos Mokos, Panagiotis Katsaros, Nick Bassiliades, Vangelis Vassiliadis, Maxime Perrotin
Towards Compositional Safety Analysis via Semantic Representation of Component Failure Behaviour
In dependable systems engineering, safety assessment of complex designs that involve software and hardware components is one of the most difficult tasks required. Due to the different modelling languages and models that are used for complementary tasks, the model and specification artefacts are not easily shared by the experts involved in the design process. Moreover, the structural and semantic differences of the used language representations open a possibility for inconsistencies between the corresponding models. This work explores the role of an ontology representation of component failure behaviour as a basis for automated model transformations, as well as a library of reusable knowledge artefacts to be used in different modelling languages and models. The presented approach was motivated by recent findings and requirements derived from European industrial-driven research and development projects. -
A Probabilistic Attacker Model for Quantitative Verification of DoS Security ThreatsYear: 2008 Book title: 32nd Annual IEEE International Computer Software and Applications Conference (COMPSAC 08) Publisher: IEEE Computer Society Pages: 12-19Authors: Stylianos Basagiannis, Panagiotis Katsaros, Andrew Pombortsis, Nikolaos Alexiou
A Probabilistic Attacker Model for Quantitative Verification of DoS Security Threats
This work introduces probabilistic model checking as a viable tool-assisted approach for systematically quantifying DoS security threats. The proposed analysis is based on a probabilistic attacker model implementing N simultaneous zombie participants, which subvert secure authentication features in communication protocols and electronic commerce systems. DoS threats are expressed as probabilistic reachability properties that are automatically verified over an appropriate Discrete Time Markov Chain representing the protocol participants and attacker models. The overall analysis takes place in a mature probabilistic model checking toolset called PRISM. We believe that the applied quantitative verification approach is a valuable means for comparing protocol implementations with alternative parameter choices, for optimal resistance to the analyzed threats. -
Intrusion Attack Tactics for the model checking of e-commerce security guaranteesYear: 2007 Book title: 26th International Conference on Computer Safety, Reliability, and Security (SAFECOMP 07) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 4680 Pages: 238-251Authors: Stylianos Basagiannis, Panagiotis Katsaros, Andrew Pombortsis
Intrusion Attack Tactics for the model checking of e-commerce security guarantees
In existing security model-checkers the intruder’s behavior is defined as a message deducibility rule base governing use of eavesdropped information, with the aim to find out a message that is meant to be secret or to generate messages that impersonate some protocol participant(s). The advent of complex protocols like those used in e-commerce brings to the foreground intrusion attacks that are not always attributed to failures of secrecy or authentication. We introduce an intruder model that provides an open-ended base for the integration of multiple attack tactics. In our model checking approach, protocol correctness is checked by appropriate user-supplied assertions or reachability of invalid end states. Thus, the analyst can express e-commerce security guarantees that are not restricted to the absence of secrecy and the absence of authentication failures. The described intruder model was implemented within the SPIN model-checker and revealed an integrity violation attack on the PayWord micro payment protocol. -
Elastic Components: Addressing variance of quality properties in componentsYear: 2007 Book title: 33rd Euromicro Conference on Software Engineering and Advanced Applications (EUROMICRO SEAA 07) Publisher: IEEE Computer Society Pages: 31-38Authors: George Kakarontzas, Panagiotis Katsaros, Ioannis Stamelos
Elastic Components: Addressing variance of quality properties in components
The quality properties of a software component, although verified by the component developer and even certified by a trusted third-party, might very well be inappropriate for the requirements of a new system. This is what we call the quality mismatch problem: the mismatch between the quality requirements of a new system and the quality properties exhibited by the components that we want to use for its development. This work contributes to the understanding of the quality mismatch problem between component properties and component-based systems requirements. To solve this problem we introduce the concept of elastic components. An elastic component is an open-ended hierarchy of the same pure component with variants that differ from one another in the quality properties that they exhibit. We present a quality-driven design approach that can be effectively applied for the design and implementation of elastic components. -
Interlocking control by Distributed Signal Boxes: design and verification with the SPIN model checkerYear: 2006 Book title: 4th International Symposium on Parallel and Distributed Processing and Applications (ISPA 06) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 4330 Pages: 317-328Authors: Stylianos Basagiannis, Panagiotis Katsaros, Andrew Pombortsis
Interlocking control by Distributed Signal Boxes: design and verification with the SPIN model checker
Control systems are required to comply with certain safety and liveness correctness properties. In most cases, such systems have an intrinsic degree of complexity and it is not easy to formally analyze them, due to the resulting large state space. Also, exhaustive simulation and testing can easily miss system errors, whether they are life-critical or not. In this work, we introduce an interlocking control approach that is based on the use of the so-called Distributed Signal Boxes (DSBs). The proposed control design is applied to a railway-interlocking problem and, more precisely, to the Athens underground metro system. Signal boxes correspond to the network's interlocking points and communicate only with their neighbor signal boxes. Communication takes place over rendezvous channels. This design results in a simple interlocking control approach that, compared to other centralized solutions, produces a smaller and easier-to-analyze state space. Formal analysis and verification is performed with the SPIN model checker. -
Simulation and verification of information flow paths for access control policies specified in the CORBA Security settingYear: 2005 Book title: Local Proceedings of the 10th Panhellenic Conference in Informatics (PCI 05) Publisher: University of Thessaly, Volos Pages: 225-237Authors: Panagiotis Katsaros
Simulation and verification of information flow paths for access control policies specified in the CORBA Security setting
The OMG CORBA security specification defines the core facilities and interfaces for ensuring the required level of security in CORBA-compliant systems. However, for a secure application it is not enough to control access to objects, without taking into account the information flow paths implied by a given, outstanding collection of access rights. The requirement to prevent insecure information leakage among objects is a key concern that has to be satisfied. We describe a Colored Petri Net model that allows simulating and verifying information flow security for access control policies specified in the OMG CORBA Security setting. The proposed model possesses the virtue of simulating insecure information leakage in a graphical environment and allows querying about the detected information flow paths and their dependencies. -
On the design of access control to prevent sensitive information leakage in distributed object systems: a Colored Petri Net based modelYear: 2005 Book title: OTM Confederated International Conferences, CoopIS, DOA, and ODBASE Part II (DOA 05) Publisher: Springer, Berlin, Heidelberg Series: Lecture Notes in Computer Science Volume: 3761 Pages: 941-959Authors: Panagiotis Katsaros
On the design of access control to prevent sensitive information leakage in distributed object systems: a Colored Petri Net based model
We introduce a Colored Petri Net model for simulating and verifying information flow in distributed object systems. Access control is specified as prescribed by the OMG CORBA security specification. An insecure flow arises when information is transferred from one object to another in violation of the applied security policy. We provide precise definitions, which determine how discretionary access control is related to the secure or insecure transfer of information between objects. The model can be queried regarding the detected information flow paths and their dependencies. This is a valuable means for the design of multilevel mandatory access control that addresses the problem of enforcing object classification constraints, in order to prevent undesirable leakage and inference of sensitive information. -
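A deliberately simplified sketch of the underlying idea (this is not the paper's Colored Petri Net model; the subjects, objects and access sets are invented for illustration): information may leak from object a to object b whenever some subject is allowed to read a and to write b, and chaining such steps yields the implied information flow paths.

```python
# Hypothetical illustration of information flow paths implied by access rights.
from itertools import product

READ = {"s1": {"payroll"}, "s2": {"scratch"}}      # objects each subject may read
WRITE = {"s1": {"scratch"}, "s2": {"public"}}      # objects each subject may write

def direct_flows():
    # A direct flow a -> b exists when one subject may read a and write b.
    return {(a, b) for s in READ for a, b in product(READ[s], WRITE.get(s, set()))}

def all_flows():
    flows = direct_flows()
    changed = True
    while changed:                                  # transitive closure of the flow relation
        changed = False
        for (a, b), (c, d) in product(flows, repeat=2):
            if b == c and (a, d) not in flows:
                flows.add((a, d))
                changed = True
    return flows

print(all_flows())   # payroll -> scratch -> public appears as an implied (insecure) path
```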
Colored Petri Net based model checking and failure analysis for e-commerce protocolsYear: 2005 Book title: Sixth Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools (CPN 05) Publisher: Department of Computer Science, Aarhus University Pages: 267-283Authors: Panagiotis Katsaros, Vasilis Odontidis, Maria Gousidou-Koutita
Colored Petri Net based model checking and failure analysis for e-commerce protocols
We present a Colored Petri Net approach to model check three atomicity properties for the NetBill electronic cash system. We verify that the protocol satisfies money atomicity, goods atomicity and certified delivery in the presence of potential site or communication failures and all possible unilateral transaction abort cases. Model checking is performed in CPN Tools, a graphical ML-based tool for editing and analyzing Colored Petri Nets (CP-nets). In case of property violation, protocol failure analysis aims at exploring all property violation scenarios, in order to correct the protocol’s design. Model checking exploits the provided state space exploration functions and the supported Computation Tree-like temporal logic (CTL). On the other hand, protocol failure analysis is performed by inspection of appropriately selected markings and, if necessary, by interactively simulating certain property violation scenarios. In e-commerce, Colored Petri Net model checking has been used in verifying absence of deadlocks, absence of livelocks and absence of unexpected dead transitions, as well as in verifying a protocol against its service. To the best of our knowledge, our work is the first attempt to employ CP-nets for model checking atomicity properties. We believe that the described approach can also be applied in model checking other functional properties that are not directly related to the structural properties of the generated state space graph. -
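As a rough illustration of the kind of property being checked (not the exact formulas used with CPN Tools; the atomic propositions are assumed for the example), money atomicity, i.e. that electronic funds are neither created nor destroyed, can be sketched in CTL as:

```latex
% Illustrative CTL sketch; 'debited', 'credited' and 'refunded' are hypothetical propositions.
\mathrm{AG}\big(\mathit{debited} \rightarrow \mathrm{AF}\,(\mathit{credited} \lor \mathit{refunded})\big)
```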
Simulation and verification of atomicity properties for an electronic cash systemYear: 2005 Book title: European Simulation and Modelling Conference (ESMc 05) Publisher: EUROSIS Pages: 558-563Authors: Panagiotis Katsaros, Vasilis Odontidis, Maria Gousidou-Koutita
Simulation and verification of atomicity properties for an electronic cash system
An electronic cash system, like any distributed system, is subject to site and communication failures, as well as potential security attacks. This work focuses on simulation and verification of three important correctness properties, for ensuring failure resilience or detecting potential property violation scenarios. We refer to the NetBill electronic cash system and the properties to be checked are money atomicity, goods atomicity and certified delivery. We introduce a Colored Petri Net that models NetBill in the presence of site or communication failures and all possible transaction abort cases. The presented model has been implemented in CPN Tools, a graphical ML-based tool for analyzing Colored Petri Nets. This allows us to combine the provided state space exploration functions and the supported Computation Tree-like temporal logic (CTL), for model checking the aforementioned atomicity properties. At the same time, it is possible to exploit the provided interactive simulation facilities to explore potential property violation scenarios and correct the protocol’s design. -
Assessing the modifiability of two object-oriented design alternatives - A controlled experiment replicationYear: 2004 Book title: 5th EUROSIM Congress on Modelling and Simulation Publisher: EUROSIM, Federation of European Simulation Societies Pages: 6 pagesAuthors: Ignatios Deligiannis, Panagiotis Sfetsos, Ioannis Stamelos, Lefteris Angelis, Alexandros Xatzigeorgiou, Panagiotis Katsaros
Assessing the modifiability of two object-oriented design alternatives - A controlled experiment replication
This paper presents a replication study of a controlled experiment, investigating the impact of several design characteristics on one of the most desirable quality factors, modifiability. Two alternative design structures were used: a responsibility-driven (RD) versus a control-oriented “mainframe” (MF) design. Two groups of undergraduate students participated, each working on one of the two designs. The subjects designed, implemented in Java, and tested a set of three maintenance tasks, in order to assess the degree of their understanding, effort, and performance. The results indicate that the RD version, due to its delocalised structure, exhibited higher correctness, better extensibility, and greater design stability than the MF version. In order to provide an objective assessment of the differences between the two versions, a considerable number of metrics were applied to the delivered solutions, quantifying separately each produced design’s characteristics. -
Simulation metamodeling for the design of reliable object based systemsYear: 2004 Book title: 5th EUROSIM Congress on Modelling and Simulation Publisher: EUROSIM, Federation of European Simulation Societies Pages: 6 pagesAuthors: Panagiotis Katsaros, Lefteris Angelis, Constantine Lazos
Simulation metamodeling for the design of reliable object based systems
Replication is a suitable approach for the provision of fault tolerance and load balancing in distributed systems. Object replication takes place on the basis of well-designed interaction protocols that preserve object state consistency in an application-transparent manner. The published analytic performance models can only be applied to single-server process replication schemes and are not suitable for schemes composed of miscellaneous policies, such as those arising in object-based systems. This work introduces a general simulation metamodeling technique that makes feasible the comparative evaluation of composite fault tolerance schemes, on the basis of small-size uniform experimental designs. The technique makes it possible to take different design concerns into account in a combined manner (e.g. fault tolerance combined with load balancing and multithreading). The overall approach is presented in terms of a system case study. We provide interesting results that reveal a dependence of the optimal adjustments on the system load level. This finding suggests devising dynamically adjusted fault tolerance schemes. -
Optimal object state transfer - recovery policies for fault tolerant distributed systemsYear: 2004 Book title: 2004 International Conference on Dependable Systems and Networks (DSN 04) Publisher: IEEE Computer Society Volume: 1 Pages: 762-771Authors: Panagiotis Katsaros, Constantine Lazos
Optimal object state transfer - recovery policies for fault tolerant distributed systems
Recent developments in the field of object-based fault tolerance and the advent of the first OMG FT-CORBA compliant middleware raise new requirements for the design process of distributed fault-tolerant systems. In this work, we introduce a simulation-based design approach founded on the optimum effectiveness of the compared fault tolerance schemes. Each scheme is defined as a set of fault tolerance properties for the objects that compose the system. Its optimum effectiveness is determined by the tightest effective checkpoint intervals for the passively replicated objects. Our approach allows mixing miscellaneous fault tolerance policies, as opposed to the published analytic models, which are best suited to the evaluation of single-server process replication schemes. Special emphasis has been given to the accuracy of the generated estimates, using an appropriate simulation output analysis procedure. We provide showcase results and compare two characteristic warm passive replication schemes: one with periodic and one with load-dependent object state checkpoints. Finally, a trade-off analysis is applied for determining appropriate checkpoint properties with respect to a specified design goal. -
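For orientation only, a classical first-order rule of thumb (Young's approximation), which is not the simulation-based criterion used in the paper, relates a checkpoint interval T to the checkpoint cost C and the mean time between failures M:

```latex
% Young's first-order approximation for the checkpoint interval (background baseline only).
T_{\mathrm{opt}} \approx \sqrt{2\,C\,M}
```

Simulation-based evaluation becomes necessary precisely where such closed-form approximations break down, for instance when mixed replication policies and load-dependent checkpointing interact, as in the schemes compared above.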
Single-pass static semantic check for efficient translation in YAPLYear: 2003 Book title: 1st Balkan Conference in Informatics (BCI 03) Publisher: Thessaloniki : Technological Educational Institute Pages: 623-632Authors: Zafiris Karaiskos, Panajotis Katsaros, Constantine Lazos
Single-pass static semantic check for efficient translation in YAPL
Static semantic checking remains an active research topic in the construction of compiler front-ends. The main reasons lie in the ever-increasing set of semantic properties that have to be checked in modern languages and the diverse requirements on the timing of checks during the compilation process. Single-pass compilers are particularly challenging, since they impose the development of appropriate parse-driven attribute evaluation mechanisms. LR parsing is amenable to L-attributed definitions, where, in every syntax tree, the attributes may be evaluated in a single left-to-right depth-first traversal. In such definitions, for every production of the grammar, the inherited attributes on the right side depend only on attributes to the left of themselves. When the LR parser begins to analyze a word for a non-terminal, the values of its inherited attributes must be made available. This outstanding difficulty constitutes the major concern of the existing attribute evaluation mechanisms found in the literature. We review them and then introduce the evaluation mechanism which we successfully utilized in YAPL, a complete programming language built exclusively for educational purposes. Our approach takes advantage of an effective attribute-collection management scheme, between a parse-driven symbol node stack and the symbol table, for delivering synthesized and inherited attribute values. Finally, we demonstrate the applicability of the suggested approach to representative problems of enforcing language rules for declaration- and scope-related properties and for type checking of assignments. -
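As a textbook-style illustration of what L-attributedness means (this example is not taken from YAPL), a declaration list can propagate the inherited attribute inh strictly from left to right, so each inherited value is available by the time the parser needs it:

```latex
% Productions with semantic rules; L.inh is inherited, T.type is synthesized.
\begin{aligned}
D &\rightarrow T\ L                 &\quad L.\mathit{inh} &= T.\mathit{type}\\
T &\rightarrow \textbf{int}         &\quad T.\mathit{type} &= \mathit{integer}\\
L &\rightarrow L_1\ ,\ \textbf{id}  &\quad L_1.\mathit{inh} &= L.\mathit{inh};\ \ \mathit{addtype}(\textbf{id}, L.\mathit{inh})\\
L &\rightarrow \textbf{id}          &\quad &\ \mathit{addtype}(\textbf{id}, L.\mathit{inh})
\end{aligned}
```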
Approximate and simulation based analysis for distributed object software performance modelsYear: 2003 Book title: European Simulation and Modelling Conference (ESMc 03) Publisher: EUROSIS Pages: 409-414Authors: Panajotis Katsaros, Constantine Lazos
Approximate and simulation based analysis for distributed object software performance models
In complex software systems, the effectiveness of model-based performance predictions is limited by the availability of appropriate solution techniques. These techniques should make it possible to take into account the interaction effects of software components. In distributed object systems, the main problem is the simultaneous resource possession caused by the synchronous, often nested object invocations, which block the callers until they get the replies. This paper provides a review of the analysis techniques which address that fact, while preserving the abstract system view offered by a queuing network representation. Two of these techniques were proposed for solving a general class of models with one or more layers of software servers, and a third technique was designed specifically for distributed object software performance models. The advent of an extended flow-equivalent approximation, which is also described, opens new prospects for the development of efficient solution algorithms. Finally, simulation-based estimation is discussed, with respect to the applicability of the well-founded and accurate single-run regenerative method. -
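For background, the standard flow-equivalent (Norton-style) aggregation on which such approximations build replaces a subnetwork by a single load-dependent server whose service rate with n jobs equals the throughput of the subnetwork analysed in isolation with n jobs (this is the standard construction, not the paper's extension):

```latex
% Flow-equivalent server: load-dependent rates taken from the isolated subnetwork's throughput.
\mu_{\mathrm{FES}}(n) \;=\; X_{\mathrm{sub}}(n), \qquad n = 1, \dots, N
```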
Regenerative estimation variants of response times in closed networks of queuesYear: 2002 Book title: 2nd WSEAS International Conference on Simulation, Modeling and Optimization (ICOSMO 02) Publisher: WSEAS Pages: 2051-2056Authors: Panajotis Katsaros, Constantine Lazos
Regenerative estimation variants of response times in closed networks of queues
In this paper, we present a comparison of the possible regenerative estimation variants of response times in multivariate simulations of closed queuing networks. The underlying stochastic framework of the techniques under study is first described and the applicability of each of them is discussed. An appropriate sequential control procedure has been selected, in order to produce confidence intervals of the same nominal level and similar width for the response times of interest. The first experimental results exhibit improved coverage of the corresponding analytic solutions when a marked-job-based method is used, instead of an indirect estimation that simulates a single regeneration sequence of the common number-in-queue process (usually used when estimating other characteristics such as throughputs, utilizations and queue lengths). This finding suggests the simultaneous use of more than one regeneration sequence for multivariate studies that include response time characteristics. -
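For context, the classical regenerative point estimate of a mean response time is a ratio estimator over independent regeneration cycles; here Y_i denotes the response time accumulated in cycle i and alpha_i the number of job completions in that cycle (notation assumed for illustration, not taken from the paper):

```latex
% Classical regenerative ratio estimator over n regeneration cycles.
\hat{r}_n \;=\; \frac{\sum_{i=1}^{n} Y_i}{\sum_{i=1}^{n} \alpha_i}
```

Confidence intervals then follow from a central limit theorem for ratio estimators, which is what sequential control procedures of the kind mentioned above exploit.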
Structured performance modeling and analysis for object based distributed software systemsYear: 2002 Book title: 15th ISCA International Conference on Parallel and Distributed Computing Systems (PDCS 02) Publisher: International Society for Computers and Their Applications (ISCA) Pages: 96-102Authors: Panagiotis Katsaros, Constantine Lazos
Structured performance modeling and analysis for object based distributed software systems
In this paper, we address the problems related to the performance modeling of object-based software systems distributed across multiple platforms. Classical flat queuing models seem to be inappropriate, because of the inevitable complexity caused by the platform heterogeneity and the dual client/server role that objects often play in their interactions. Models should be structured vertically, in hierarchical levels, as well as horizontally, in interconnected model components. Each component may use several analytically intractable queuing network extensions, for the precise representation of the synchronization phenomena that arise in the various object interaction cases. The overall model structure is utilized not only for incrementally specifying a complex workload and resource contention description, but also for approximately analyzing it, by successive flow equivalence aggregations. In this context, we discuss the properties that the model should possess for achieving satisfactory levels of accuracy. The modeling of common design cases, like the synchronous object invocation, the distributed callback and the multithreaded process structure, is illustrated in the context of CORBA-based object interactions. The suggested approach promotes the accomplishment of the appropriate type of analysis at the most suitable level of abstraction, with respect to the specific credibility and cost requirements of each study. -
Steady-state simulation of queuing processes in parallel time streams: problems and potentialitiesYear: 2001 Book title: 5th Hellenic-European Conference on Computer Mathematics and its Applications (HERCMA 01) Publisher: LEA Press, Athens Pages: 370-376Authors: Panajotis Katsaros, Constantine Lazos
Steady-state simulation of queuing processes in parallel time streams: problems and potentialities
This paper addresses statistical issues that arise in stochastic simulations of the steady-state behavior of queuing processes, when they are executed in parallel time streams. This is often a desirable choice since otherwise computer running times for such simulations tend to be large. A proper statistical methodology, which will enable the simulator to achieve the required statistical precision as quickly as possible, has to be used. A random number generator with a large cycle length has to be chosen, since any correlation across the parallel streams can affect the randomness of the results. Another critical issue is that no procedure in which the run length is fixed before the simulation begins can guarantee accurate results. Instead, sequential procedures, which determine the length of the simulation during the course of the run, are preferred. The parallel processing speedup to be achieved is always dependent on the simulated model’s structure. However, the number of processors to be used has to be decided on the basis of the chosen accuracy requirements, since experimental evidence shows that the quality of the results deteriorates as the number of processors used increases. -
Applied multiresponse metamodeling for queuing network simulation experiments: problems and perspectivesYear: 2001 Book title: 4th EUROSIM Congress on Modelling and Simulation Publisher: EUROSIM, Federation of European Simulation Societies Pages: 6 pagesAuthors: Panajotis Katsaros, Eleftherios Angelis, Constantine Lazos
Applied multiresponse metamodeling for queuing network simulation experiments: problems and perspectives
A complete performance evaluation study of a simulated system should consider possible alternatives and response predictions for potential parameter changes. Simulation sensitivity analysis and metamodeling constitute an efficient approach for this kind of problem. However, this approach is often neglected, mainly because a sophisticated methodological treatment is required. Such a methodology should take into account peculiarities inherent to queuing network models, such as multiple responses, a large number of model parameters and many qualitative parameters. This work aims to illustrate the combined use of the proper statistical techniques to cope with this sort of problem and to show the need for a sound methodological framework that will bring this approach closer to queuing network simulation practice. -
Shared memory parallel regenerative queuing network simulationYear: 2001 Book title: 15th European Simulation Multiconference (ESM 01) Publisher: SCS Europe Pages: 736-740Authors: Panajotis Katsaros, Constantine Lazos
Shared memory parallel regenerative queuing network simulation
Discrete-event stochastic simulation is one of the most commonly used tools for performance modeling and evaluation. Parallel/distributed simulation enables a simulation program to execute on a computing system containing multiple processors and aims at reducing the model’s execution time. Three basic types of execution mechanisms have appeared. The first two (the conservative and the optimistic approach) aim at partitioning the simulation model into a number of sub-models, also called logical processes (LPs). Their emphasis lies on the specification of the appropriate synchronization, deadlock handling and/or memory management algorithms. The third approach (known as the time-parallel approach or simply as Multiple Replications in Parallel Time Streams) aims at overcoming the need for sufficiently long runs in steady-state stochastic simulations, by executing multiple replications of the entire model in a parallel fashion. This work presents a fast parallel OpenMP-based implementation for multivariate queuing network simulations. The simulation results are statistically processed by applying the classical regenerative method under the Lavenberg & Sauer sequential analysis procedure. The first experimental results indicate significant speedups accompanied by acceptable confidence interval coverage. -
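A minimal sketch of the Multiple Replications in Parallel idea, written here in Python with process-level parallelism purely for illustration (the paper's implementation is OpenMP-based and applies regenerative output analysis rather than simple averaging; the queue parameters below are made up):

```python
# Hypothetical MRIP sketch: independent replications of a trivial M/M/1 model run in parallel.
import random
from multiprocessing import Pool

def run_replication(seed, n_customers=10_000):
    """Simulate an M/M/1 queue and return the mean response time of one replication."""
    rng = random.Random(seed)                        # independent random stream per replication
    clock = busy_until = total = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(1.0)                # next arrival (rate 1.0)
        start = max(clock, busy_until)               # wait if the server is still busy
        busy_until = start + rng.expovariate(1.25)   # service completion (rate 1.25)
        total += busy_until - clock                  # response time of this customer
    return total / n_customers

if __name__ == "__main__":
    with Pool(4) as pool:                            # four parallel time streams
        estimates = pool.map(run_replication, range(4))
    print(sum(estimates) / len(estimates))           # pooled point estimate
```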
Regenerative Queuing Network Distributed SimulationYear: 2000 Book title: 14th European Simulation Multiconference - Simulation and Modelling: Enablers for a Better Quality of Life (ESM 00) Publisher: SCS Europe Pages: 109-113Authors: Panajotis Katsaros, Constantine Lazos
Regenerative Queuing Network Distributed Simulation
Complex closed or open queuing network simulations have proved to be particularly time-expensive experiments when an acceptable level of estimation accuracy is to be achieved. Moreover, in most cases the same experiment has to be repeated many times, by varying the model's parameters. Thus, a distributed execution architecture is often adopted as more suitable. At least two different types of distributed discrete-event simulation have been suggested. Their emphasis lies on the development of the appropriate synchronisation, deadlock handling and/or memory management mechanisms. Our approach moves the focus of interest to the exploitation of the statistical nature of the simulation experiment, in order to obtain more reliable results. Thus, the regenerative property and its presence in queuing network simulations is studied. We suggest a way of distributing computation in regenerative queuing network simulation and we prove that our approach is theoretically correct. The method is applicable to a broad class of queuing network simulations with significant gains. -
Automating the evaluation of educational softwareYear: 1999 Book title: 5th International Conference of the Decision Sciences Institute (DSI 99) Publisher: Decision Sciences Institute Volume: 2 Pages: 1369-1373Authors: Ioannis Stamelos, Ioannis Refanidis, Panagiotis Katsaros, Alexandros Tsoukias, Ioannis Vlahavas, Andreas Pombortsis
Automating the evaluation of educational software
This paper proposes a framework for educational software evaluation based on the Multiple Criteria Decision Aid methodology, supported by ESSE, an Expert System for Software Evaluation. An evaluation example is presented that illustrates the overall evaluation process. Evaluating educational software products is a twofold process: both the technical and the educational aspect of the evaluated products have to be considered. As far as the product's educational effectiveness is concerned, the flexibility of ESSE in problem modeling allows the development and use of a set of criteria which clearly describe the context and the educational setting in which the software products are to be used. From the technical point of view, a software attribute set based on the ISO/IEC 9126 standard has been chosen, together with the accompanying measurement guidelines.
No books.
No book chapters.
-
Correct by construction model based design for systems and softwareYear: 2018Author: Emmanouela I. Stachtiari Supervisor: Panagiotis Katsaros Committee members: I. Stamelos, L. Angelis, S. Bensalem, A. Chatzigeorgiou, A. Symeonidis, S. Bliudze
Correct by construction model based design for systems and software
This thesis introduces correctness-by-construction techniques for rigorous system design. In particular, we focused on how to produce and validate a functional application model from a set of requirements or from application code. First, we dealt with the early validation of system requirements and design, in order to eliminate the need for a-posteriori verification at the later stages of development. Second, we focused on the automated generation of functional application models from programs with nested syntax, while maintaining the program semantics. Finally, we proposed a design flow that aims to maintain the consistency between the application model and the application code, using a new domain-specific language that focuses on the design of resource-constrained applications for the Internet of Things. -
Abstract Repair of Transition SystemsYear: 2018Author: George Chatzieleftheriou Supervisor: Panagiotis Katsaros Committee members: S. A. Smolka, I. Stamelos, J.-P. Katoen, D. Parker, G. Rahonis, K. Tsichlas
Abstract Repair of Transition Systems
Given a transition system M and a specification formula f, the problem of model checking is to determine if M satisfies f. An extended problem of model checking is that of model repair. In the case that M violates f, the problem of model repair is to obtain a new model M′, such that M′ satisfies f. Moreover, the changes made to M to derive M′ should be minimal with respect to all such M′. As in model checking, state explosion can make it virtually impossible to carry out model repair on models with infinite or even large state spaces. This thesis examines the problem of model repair for (i) Kripke structures and Computation Tree Logic and (ii) probabilistic systems and reachability temporal logic properties. For Kripke structures, this thesis presents a framework for model repair that uses abstraction refinement to tackle state space explosion. The proposed framework aims to repair Kripke structure models based on a Kripke Modal Transition System abstraction and a 3-valued semantics for CTL. An abstract-model-repair algorithm is introduced, for which soundness and semi-completeness are proven, and its complexity class is studied. Moreover, a prototype implementation is presented to illustrate the practical utility of abstract model repair on an Automatic Door Opener system model and a model of the Andrew File System 1 protocol. For probabilistic systems, this thesis presents a framework based on abstraction and refinement, which reduces the state space of the probabilistic system to repair, at the price of obtaining an approximate solution. A metric space is defined over the set of DTMCs, in order to measure the differences between the initial and the repaired models. For the repair, this thesis introduces an algorithm and discusses its important properties, such as soundness and complexity. As a proof of concept, experimental results are provided for probabilistic systems with diverse structures of state spaces, including the well-known Craps game, the IPv4 Zeroconf protocol, a message authentication protocol and the gambler's ruin model. -
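Stated compactly, and following the description in the abstract (the symbol d stands for the distance over models, e.g. the metric defined on DTMCs), the repair problem can be written as an optimisation over models that satisfy the specification:

```latex
% Model repair: among the models satisfying f, pick one at minimal distance from M.
M' \;=\; \operatorname*{arg\,min}_{\,N \,\models\, f}\; d(M, N)
```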
Τυπική ανάλυση και απόδοση αρχιτεκτονικών κατανεμημένων συναλλαγών (Formal analysis and performance evaluation of software architecture for distributed transactions)Year: 2012Author: Ανακρέοντας Π. Μεντής (Anakreontas Mentis) Supervisor: Π. Κατσαρός Committee members: Λ. Αγγελής, Ι. Σταμέλος, Α. Χατζηγεωργίου, Α. Παπαδόπουλος, Ε. Καρατζά, Χ. Γεωργιάδης
Τυπική ανάλυση και απόδοση αρχιτεκτονικών κατανεμημένων συναλλαγών (Formal analysis and performance evaluation of software architecture for distributed transactions)
Modern network centric information systems implement highly distributed architectures that usually include multiple application servers. Application design is mainly based on the fundamental object-oriented principles, and the adopted architecture maps the logical decomposition of applications into several tiers, such as presentation, logic and data, onto their software and hardware structuring. Transactions offer a convenient mechanism for structuring distributed workloads, due to the ACID properties (Atomicity, Consistency, Isolation, Durability). For some applications, however, the ACID properties are too restrictive. In mobile computing or in web applications, for example, long intervals of non-responsiveness or frequent disconnections lead to long-running transactions that may block concurrently running transactions and render acquired resources unavailable to other processes. New transaction models often relax one of the ACID properties in order to cope with the challenges of the aforementioned execution environments. In transactional memory, for example, consistency and durability guarantees are not provided. The new transaction models, often termed advanced transactions, may not fulfill some intended properties. Formal specification of and reasoning about transaction properties is essential for proving the validity of transaction models, but it has been limited to proof-theoretic approaches, despite the recent progress in model checking. Moreover, we have not seen so far automated techniques that can derive a correct implementation from a valid transaction model. The performance and availability of transaction processing depend on the adopted architectural design for a given transaction model. Design decisions have measurable effects on the availability and performance of transaction processing systems. Existing trade-offs among the design factors of an architecture affect the quality of the transaction processing service in a complex manner.
In order to address the discussed problems, we introduce a model-driven architecture based approach with a two-fold contribution.
First, we provide a systematic process for model checking a transaction model and for automatically generating a correct implementation. Models are specified by state machines that represent interacting transactional entities synchronized on a set of events. Verification is performed by examining all possible execution paths of the synchronized state machines for property violations; a minimal illustrative sketch of such synchronized exploration is given after this abstract.
Second, we elaborate on an approach for discovering architecture trade-offs and for evaluating the impact of design decisions on performance and availability. The proposed method is applicable to experimental data obtained by simulation or from a prototype implementation. If an architecture design fails to fulfil the performance and availability goals for the considered workloads, the method is iteratively applied towards modifying the initial architecture until the goals are achieved.
The aforementioned contributions are part of a process for the development of transaction systems that we believe is capable of coping with the inherent complexity of distributed transaction processing. -
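A toy sketch of the kind of exploration described above (illustrative only, not the thesis tooling; the two-phase-commit-style machines and the checked property are invented): two transactional entities are modelled as state machines synchronised on shared events, all reachable joint states are enumerated by breadth-first search, and a simple safety property is asserted in every visited state.

```python
# Hypothetical sketch: exhaustive exploration of two event-synchronised state machines.
from collections import deque

# Transition relations: (source state, event) -> target state.
COORDINATOR = {("init", "prepare"): "waiting",
               ("waiting", "commit"): "committed",
               ("waiting", "abort"): "aborted"}
PARTICIPANT = {("idle", "prepare"): "ready",
               ("ready", "commit"): "done",
               ("ready", "abort"): "undone"}

def successors(state):
    c, p = state
    for (src, event), dst in COORDINATOR.items():
        if src == c and (p, event) in PARTICIPANT:   # both machines synchronise on the event
            yield (dst, PARTICIPANT[(p, event)])

def explore(start=("init", "idle")):
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        # Example safety property: the coordinator never commits while the participant has undone.
        assert state != ("committed", "undone")
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print(sorted(explore()))
```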
Ανάπτυξη Τυπικών Μεθόδων για τον Έλεγχο Πρωτοκόλλων Ασφαλείας (Developing formal methods for security protocols validation)Year: 2010Author: Στυλιανός Δ. Μπασαγιάννης (Stylianos Basagiannis) Supervisor: Α. Πομπόρτσης Committee members: Π. Κατσαρός, Γ. Παπαδημητρίου, Γ. Πάγκαλος, Ε. Καρατζά, Κ. Καρανίκας, Β. Κάτος
Ανάπτυξη Τυπικών Μεθόδων για τον Έλεγχο Πρωτοκόλλων Ασφαλείας (Developing formal methods for security protocols validation)
Nowadays, computer networks and Internet communications are spreading rapidly, creating an ever stronger need for quality of service and, above all, for security. This doctoral thesis deals with the verification of security protocols with the use of formal methods.
The research conducted studies the security guarantees that most modern security protocols tend to offer to the participating entities. Related work has shown that, in order to successfully verify a security protocol, it is necessary to check the protocol's correctness in the presence of a strong intruder entity. Using automatic model checking tools, we have developed discrete formal method techniques that, combined with the proposed intruder theories, can be used for the exhaustive verification of a series of security properties. Using these intruder theories, we verify the tolerance of security protocols against common attack policies that can be applied today by dishonest protocol users.
One of the major problems found today in the research area of formal methods is the well-known problem of state space explosion. When modeling a system for model checking, the produced state space may increase dramatically due to the complexity of the processes involved in the model or the mechanisms embedded into it (e.g. random number generators, cryptographic functions). Furthermore, the incorporation of a powerful intruder model into the security system may lead to an enormous state space, resulting in a difficult and time-consuming security analysis.
In the present thesis, three distinct intruder theories and models have been developed, in order to make the analysis of security protocols considerably easier for the analyst. Since the analyst today faces a wide range of security properties to verify, the described intruder theories guide the selection of the appropriate intruder model, depending on the properties of interest, and thus provide an effective means of revealing potential security flaws. We formally describe these three intruder theories and verify their success over a series of security protocols. As a result, the proposed intruders have not only dramatically decreased the produced state space during model checking, but have also proved capable of revealing previously unknown security flaws in the tested protocol systems.
To conclude, the contribution of this thesis lies in the successful verification of the security guarantees of security protocols with the help of specific intruder models, which are independent of the means of communication used by the protocols' participants. Using the developed intruder creation methodologies, the analyst may exhaustively check a protocol for security flaws with formal methods techniques, without producing a state space so large that the analysis becomes infeasible. -
Κατανεμημένα Υπολογιστικά Συστήματα (Distributed Computing Systems)Year: 2002Author: Παναγιώτης Κατσαρός (Panagiotis Katsaros) Supervisor: Κωνσταντίνος Λάζος Committee members: Γ. Μπλέρης, Ε. Καρατζά, Ι. Μανωλόπουλος, Α. Πομπόρτσης, Ι. Τσουκαλάς, Κ. Τσούρος
Κατανεμημένα Υπολογιστικά Συστήματα (Distributed Computing Systems)
Modern data processing needs lead to the development of distributed client-server architectures. Every new computing system now consists of a set of hardware and software components, possibly of different technologies or even from different vendors, whose interconnection is supported by workstations that are members of a network.
In this context, identifying the factors that decisively affect performance, and estimating that performance under alternative system configurations, require the adoption of a new approach beyond the classical use of analytic or simulation-based "flat" queuing networks.
The aim of this thesis was the development and/or improvement of the techniques that are necessary for the performance analysis of software with a distributed architecture.
No technical reports.