RSS Advisory Board

RSS Validator

This service tests the validity of a Really Simple Syndication feed, checking to see that it follows the rules of the RSS specification.

Enter the URL of an RSS feed:



[Valid RSS] This is a valid RSS feed.


This feed is valid, but interoperability with the widest range of feed readers could be improved by implementing the following recommendations.

  • line 8, column 4: A channel should not include both pubDate and dc:date [help]

  • line 18, column 6: An item should not include both pubDate and dc:date (50 occurrences) [help]

  • line 658, column 2: Missing atom:link with rel="self" [help]
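The redundant-date warnings above are mechanical to check. The sketch below reproduces just that one check using only the Python standard library; the embedded `FEED` is a hypothetical fragment modeled on this feed's structure (with the usual Dublin Core namespace URI filled in), not the actual feed source.

```python
# Minimal sketch of the validator's "should not include both pubDate and
# dc:date" check. FEED is a hypothetical fragment, not the real feed.
import xml.etree.ElementTree as ET

# dc:date resolves to this namespace-qualified tag after parsing
DC_DATE = "{http://purl.org/dc/elements/1.1/}date"

FEED = """<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
 <channel>
  <title>Example channel</title>
  <pubDate>Sat, 01 Nov 2014 00:03:07 GMT</pubDate>
  <dc:date>2014-11-01T00:03:07Z</dc:date>
  <item>
   <title>Example item</title>
   <pubDate>Thu, 30 Oct 2014 10:28:23 GMT</pubDate>
   <dc:date>2014-10-30T10:28:23Z</dc:date>
  </item>
 </channel>
</rss>"""

def redundant_date_elements(feed_text):
    """Return tags of channel/item elements carrying both pubDate and dc:date."""
    channel = ET.fromstring(feed_text).find("channel")
    flagged = []
    for element in [channel] + channel.findall("item"):
        if element.find("pubDate") is not None and element.find(DC_DATE) is not None:
            flagged.append(element.tag)
    return flagged

print(redundant_date_elements(FEED))  # ['channel', 'item']
```

Note that ElementTree expands the `dc:` prefix to the full namespace URI, so the lookup must use the `{namespace}date` form rather than the literal `dc:date`.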


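Concretely, the recommendations amount to keeping a single date element per channel and item, and declaring the feed's own URL via `atom:link`. A corrected channel opening might look like the following sketch; the `href` is a placeholder, since the listing below elides the feed's URLs.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
 <channel>
   <title>LRA Community:</title>
   <!-- keep only one date element (pubDate or dc:date, not both) -->
   <pubDate>Sat, 01 Nov 2014 00:03:07 GMT</pubDate>
   <!-- rel="self" identifies the feed's own URL; href is a placeholder -->
   <atom:link rel="self" type="application/rss+xml" href="https://example.org/feed.rss"/>
   <!-- items unchanged -->
 </channel>
</rss>
```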

  1. <?xml version="1.0" encoding="UTF-8"?>
  2. <rss xmlns:dc="" version="2.0">
  3.  <channel>
  4.    <title>LRA Community:</title>
  5.    <link></link>
  6.    <description />
  7.    <pubDate>Sat, 01 Nov 2014 00:03:07 GMT</pubDate>
  8.    <dc:date>2014-11-01T00:03:07Z</dc:date>
  9.    <item>
  10.      <title>User Activity Recognition through Software Sensors</title>
  11.      <link></link>
  12.      <description>Title: User Activity Recognition through Software Sensors
  13. Authors: Reiff-Marganiec, Stephan
  14. Abstract: Context-aware systems are an instance of the ubiquitous or pervasive computing vision. They sense the users’ physical and virtual surroundings to identify the best system support for that user and adapt the system behaviour accordingly. The overall architecture of a context-aware system can be broken into a number of logical aspects: gathering context data, storing the data, deriving knowledge through reasoning and mining, and retrieving that knowledge to finally adapt system behaviours. Context is anything characterizing the environment of the user – their location, the ambient temperature, the people they are with, their current activity; some even consider the user’s mood. Traditionally, context information has been gathered through hardware sensors, such as GPS sensors or smart badges for location, and there has been work to track users’ eye movements at their desk to see which application they are using. However, the activity of a user has proven elusive to sense with hardware sensors. As users use web services more frequently, they exchange messages with the services through the SOAP protocol. SOAP messages contain data which is valuable if gathered and interpreted correctly – especially as this data can shed light on the activity of a user that goes beyond “sitting at the computer and typing”. We propose a complementary sensor technology through the use of software sensors. The software sensors are essentially based on monitoring SOAP messages and inserting data for further reasoning and querying into a semantic context model. In this paper we consider details of extracting the data from SOAP messages in a non-obtrusive way and show a solution to map the data from a SOAP message to our OWL ontology model automatically. On the latter, we specifically explain the methodology to map from SOAP messages to an existing structure of knowledge.
  15. Description: The file associated with this record is embargoed until 18 months after the date of publication. The final published version may be available through the links above.</description>
  16.      <pubDate>Thu, 30 Oct 2014 10:28:23 GMT</pubDate>
  17.      <guid isPermaLink="false"></guid>
  18.      <dc:date>2014-10-30T10:28:23Z</dc:date>
  19.    </item>
  20.    <item>
  21.      <title>Succinct indices for range queries with applications to orthogonal range maxima</title>
  22.      <link></link>
  23.      <description>Title: Succinct indices for range queries with applications to orthogonal range maxima
  24. Authors: Farzan, Arash; Munro, J. Ian; Raman, Rajeev
  25. Abstract: We consider the problem of preprocessing N points in 2D, each endowed with a priority, to answer the following queries: given an axis-parallel rectangle, determine the point with the largest priority in the rectangle. Using the ideas of the effective entropy of range maxima queries and succinct indices for range maxima queries, we obtain a structure that uses O(N) words and answers the above query in O(logN loglogN) time. This is a direct improvement of Chazelle's result from 1985 [10] for this problem - Chazelle required O(N/ε) words to answer queries in O((logN)[superscript 1+ε]) time for any constant ε &gt; 0.
  26.      <pubDate>Tue, 28 Oct 2014 10:40:48 GMT</pubDate>
  27.      <guid isPermaLink="false"></guid>
  28.      <dc:date>2014-10-28T10:40:48Z</dc:date>
  29.    </item>
  30.    <item>
  31.      <title>Succinct representations of ordinal trees</title>
  32.      <link></link>
  33.      <description>Title: Succinct representations of ordinal trees
  34. Authors: Raman, Rajeev; Rao, S. Srinivasa
  35. Abstract: We survey succinct representations of ordinal, or rooted, ordered trees. Succinct representations use space that is close to the appropriate information-theoretic minimum, but support operations on the tree rapidly, usually in O(1) time.</description>
  36.      <pubDate>Tue, 28 Oct 2014 10:33:36 GMT</pubDate>
  37.      <guid isPermaLink="false"></guid>
  38.      <dc:date>2014-10-28T10:33:36Z</dc:date>
  39.    </item>
  40.    <item>
  41.      <title>On probabilistic models for uncertain sequential pattern mining</title>
  42.      <link></link>
  43.      <description>Title: On probabilistic models for uncertain sequential pattern mining
  44. Authors: Muzammal, Muhammad; Raman, Rajeev
  45. Editors: Cao, L.; Feng, Y.; Zhong, J.
  46. Abstract: We study uncertainty models in sequential pattern mining. We consider situations where there is uncertainty either about a source or an event. We show that both these types of uncertainty can be modelled using probabilistic databases, and give possible-worlds semantics for both. We then describe “interestingness” criteria based on two notions of frequentness (previously studied for frequent itemset mining), namely expected support [C. Aggarwal et al., KDD’09; Chui et al., PAKDD’07,’08] and probabilistic frequentness [Bernecker et al., KDD’09]. We study the interestingness criteria from a complexity-theoretic perspective, and show that in the case of source-level uncertainty, evaluating probabilistic frequentness is #P-complete, so no polynomial-time algorithms are likely to exist, but that the interestingness predicate can be evaluated in polynomial time in the remaining cases.
  47.      <pubDate>Tue, 28 Oct 2014 10:26:52 GMT</pubDate>
  48.      <guid isPermaLink="false"></guid>
  49.      <dc:date>2014-10-28T10:26:52Z</dc:date>
  50.    </item>
  51.    <item>
  52.      <title>Encodings for range selection and top-k queries</title>
  53.      <link></link>
  54.      <description>Title: Encodings for range selection and top-k queries
  55. Authors: Grossi, Roberto; Iacono, John; Navarro, Gonzalo; Raman, Rajeev; Rao, S. Srinivasa
  56. Abstract: We study the problem of encoding the positions of the top-k elements of an array A[1..n] for a given parameter 1 ≤ k ≤ n. Specifically, for any i and j, we wish to create a data structure that reports the positions of the largest k elements in A[i..j] in decreasing order, without accessing A at query time. This is a natural extension of the well-known encoding range-maxima query problem, where only the position of the maximum in A[i..j] is sought, and finds applications in document retrieval and ranking. We give (sometimes tight) upper and lower bounds for this problem and some variants thereof.
  57.      <pubDate>Tue, 28 Oct 2014 10:16:57 GMT</pubDate>
  58.      <guid isPermaLink="false"></guid>
  59.      <dc:date>2014-10-28T10:16:57Z</dc:date>
  60.    </item>
  61.    <item>
  62.      <title>Discovery of network properties with all-shortest-paths queries</title>
  63.      <link></link>
  64.      <description>Title: Discovery of network properties with all-shortest-paths queries
  65. Authors: Bilo, Davide; Erlebach, Thomas Rainer; Mihalak, Matus; Widmayer, Peter
  66. Editors: Shvartsman, A. A.; Felber, P.
  67. Abstract: We consider the problem of discovering properties (such as the diameter) of an unknown network G(V,E) with a minimum number of queries. Initially, only the vertex set V&#xD;
  68. of the network is known. Information about the edges and non-edges of the network can be obtained&#xD;
  69. by querying nodes of the network. A query at a node q∈V returns the&#xD;
  70. union of all shortest paths from q to all other nodes in V. We study the&#xD;
  71. problem as an online problem - an algorithm does not initially know the&#xD;
  72. edge set of the network, and has to decide where to make the next query&#xD;
  73. based on the information that was gathered by previous queries. We&#xD;
  74. study how many queries are needed to discover the diameter, a minimal&#xD;
  75. dominating set, a maximal independent set, the minimum degree, and&#xD;
  76. the maximum degree of the network. We also study the problem of deciding with a minimum number of queries whether the network is 2-edge or&#xD;
  77. 2-vertex connected. We use the usual competitive analysis to evaluate the&#xD;
  78. quality of online algorithms, i.e., we compare online algorithms with optimum offline algorithms. For all properties except maximal independent&#xD;
  79. set and 2-vertex connectivity we present and analyze online algorithms.&#xD;
  80. Furthermore we show, for all the aforementioned properties, that "many"&#xD;
  81. queries are needed in the worst case. As our query model delivers more&#xD;
  82. information about the network than the measurement heuristics that are&#xD;
  83. currently used in practice, these negative results suggest that a similar&#xD;
  84. behavior can be expected in realistic settings, or in more realistic models&#xD;
  85. derived from the all-shortest-paths query model.</description>
  86.      <pubDate>Thu, 23 Oct 2014 15:40:02 GMT</pubDate>
  87.      <guid isPermaLink="false"></guid>
  88.      <dc:date>2014-10-23T15:40:02Z</dc:date>
  89.    </item>
  90.    <item>
  91.      <title>Asymptotically optimal encodings for range selection</title>
  92.      <link></link>
  93.      <description>Title: Asymptotically optimal encodings for range selection
  94. Authors: Navarro, Gonzalo; Raman, Rajeev; Satti, Srinivasa Rao
  95. Abstract: We consider the problem of preprocessing an array A[1..n] to answer range selection and range top-k queries. Given a query interval [i..j] and a value k, the former query asks for the position of the kth largest value in A[i..j], whereas the latter asks for the positions of all the k largest values in A[i..j]. We consider the encoding version of the problem, where A is not available at query time, and an upper bound kappa on k, the rank that is to be selected, is given at construction time. We obtain data structures with asymptotically optimal size and query time on a RAM model with word size Θ(lg n): our structures use O(n lg kappa) bits and answer range selection queries in time O(1 + lg k / lg lg n) and range top-k queries in time O(k), for any k ≤ kappa.
  96. Description: The file associated with this record is embargoed until the date of the conference.</description>
  97.      <pubDate>Wed, 22 Oct 2014 14:00:04 GMT</pubDate>
  98.      <guid isPermaLink="false"></guid>
  99.      <dc:date>2014-10-22T14:00:04Z</dc:date>
  100.    </item>
  101.    <item>
  102.      <title>Mining state-based models from proof corpora</title>
  103.      <link></link>
  104.      <description>Title: Mining state-based models from proof corpora
  105. Authors: Gransden, Thomas; Walkinshaw, Neil; Raman, Rajeev
  106. Abstract: Interactive theorem provers have been used extensively to reason about various software/hardware systems and mathematical theorems. The key challenge when using an interactive prover is that finding a suitable sequence of proof steps leading to a successful proof requires a significant amount of human intervention. This paper presents an automated technique that takes as input examples of successful proofs and infers an Extended Finite State Machine as output. This can in turn be used to generate proofs of new conjectures. Our preliminary experiments show that the inferred models are generally accurate (contain few false-positive sequences) and that representing existing proofs in such a way can be very useful when guiding new ones.
  107.      <pubDate>Tue, 07 Oct 2014 14:53:42 GMT</pubDate>
  108.      <guid isPermaLink="false"></guid>
  109.      <dc:date>2014-10-07T14:53:42Z</dc:date>
  110.    </item>
  111.    <item>
  112.      <title>Dynamizing succinct tree representations</title>
  113.      <link></link>
  114.      <description>Title: Dynamizing succinct tree representations
  115. Authors: Joannou, Stelios; Raman, Rajeev
  116. Abstract: We consider succinct, or space-efficient, representations of ordinal trees. Representations exist that take 2n + o(n) bits to represent a static n-node ordinal tree - close to the information-theoretic minimum - and support navigational operations in O(1) time on a RAM model; and some implementations have good practical performance. The situation is different for dynamic ordinal trees. Although there is theoretical work on succinct dynamic ordinal trees, there is little work on the practical performance of these data structures. Motivated by applications to representing XML documents, in this paper, we report on a preliminary study on dynamic succinct data structures. Our implementation is based on representing the tree structure as a sequence of balanced parentheses, with navigation done using the min-max tree of Sadakane and Navarro (SODA '10). Our implementation shows promising performance for update and navigation, and our findings highlight two issues that we believe will be important to future implementations: the difference between the finger model of (say) Farzan and Munro (ICALP '09) and the parenthesis model of Sadakane and Navarro, and the choice of the balanced tree used to represent the min-max tree.</description>
  117.      <pubDate>Thu, 18 Sep 2014 08:51:47 GMT</pubDate>
  118.      <guid isPermaLink="false"></guid>
  119.      <dc:date>2014-09-18T08:51:47Z</dc:date>
  120.    </item>
  121.    <item>
  122.      <title>A model for supporting electrical engineering with e-learning</title>
  123.      <link></link>
  124.      <description>Title: A model for supporting electrical engineering with e-learning
  125. Authors: Akaslan, Dursun
  126. Abstract: The overall goal of this research was to develop and evaluate a model for supporting electrical engineering with e-learning. The model development was based on survey data collected from representative teachers and students in Turkey, whereas the model evaluation was conducted in the relevant HEIs in Turkey and the United Kingdom. To develop the model, the study investigated the attitudes of representative key stakeholders towards e-learning in Turkey by administering questionnaires and interviews with teachers and students. The responses of the teachers and students were then compared. Based on the results, I proposed a model with a multi-dimensional approach to e-learning: (1) self-directed learning by studying e-books, (2) self-assessment by solving e-exercises, (3) teacher-directed learning by attending classroom sessions as an integral part of blended learning, (4) teacher-assessment by solving e-exercises, (5) computer-directed learning by playing e-games and (6) computer-assessment by solving e-exercises.&#xD;
  127. To evaluate the applicability of the model in different conditions, a case-control study was conducted to determine whether the model had the intended effect on the participating students in HEIs in Turkey and the United Kingdom. As a result of the case-control study, the effects of e-learning, blended learning and traditional learning were verified. However, there were significant differences among the groups. The overall scores indicated that e-learning and blended learning were more effective than traditional learning. The results of our study indicated that the knowledge increase in e-learners seemed to be gradual because they tended to study daily by completing each activity on time. However, the traditional learners did not show the same pattern because they usually did not read the core text and did not solve e-exercises regularly before the classroom sessions. The results of pre-placement, post-placement and middle tests also justified these assumptions.</description>
  128.      <pubDate>Mon, 08 Sep 2014 14:27:39 GMT</pubDate>
  129.      <guid isPermaLink="false"></guid>
  130.      <dc:date>2014-09-08T14:27:39Z</dc:date>
  131.    </item>
  132.    <item>
  133.      <title>Compressed representation of XML documents with rapid navigation</title>
  134.      <link></link>
  135.      <description>Title: Compressed representation of XML documents with rapid navigation
  136. Authors: Kharabsheh, Mohammad Kamel Ahmad
  137. Abstract: XML (Extensible Markup Language) is a language used in data representation and&#xD;
  138. storage, and in the transmission and manipulation of data. Excessive memory consumption&#xD;
  139. is an important challenge when representing XML documents in main memory.&#xD;
  140. Document Object Model (DOM) APIs are used at a processing level that provides&#xD;
  141. access to all parts of XML documents through navigation operations. Although&#xD;
  142. DOM serves as a general-purpose tool that can be used in different applications,&#xD;
  143. it has a high memory cost, particularly if used naïvely. The space usage of DOM has&#xD;
  144. been reduced significantly while keeping fast processing speeds, by use of succinct&#xD;
  145. data structures in SiXDOM [1]. However, SiXDOM does not explore in depth XML&#xD;
  146. data compression principles to improve in-memory space usage. Such XML data&#xD;
  147. compression techniques have been proven to be very effective in on-disk compression&#xD;
  148. of XML documents. In this thesis we propose a new approach to represent XML&#xD;
  149. documents in-memory using XML data compression ideas to further reduce space&#xD;
  150. usage while rapidly supporting operations of the kind supported by DOM.&#xD;
  151. Our approach is based upon a compression method [2] which represents an XML&#xD;
  152. document as a directed acyclic graph (DAG) by sharing common subtrees. However,&#xD;
  153. this approach does not permit the representation of attributes and textual data,&#xD;
  154. and furthermore, a naïve implementation of this idea gives very poor space usage&#xD;
  155. relative to other space-efficient DOM implementations [1]. In order to realise the&#xD;
  156. potential of this compression method as an in-memory representation, a number&#xD;
  157. of optimisations are made by application of succinct data structures and variable-length&#xD;
  158. encoding. Furthermore, a framework for supporting attribute and textual&#xD;
  159. data nodes is introduced. Finally, we propose a novel approach to representing the&#xD;
  160. textual data using Minimal Perfect Hashing (MPH).&#xD;
  161. We have implemented our ideas in a software library called DAGDOM and performed&#xD;
  162. extensive experimental evaluation on a number of standard XML files. DAGDOM&#xD;
  163. yields good results and we are able to obtain significant space reductions over existing&#xD;
  164. space-efficient DOM implementations (typically 2 to 5 times space reduction),&#xD;
  165. with very modest degradations in CPU time for navigational operations.</description>
  166.      <pubDate>Fri, 05 Sep 2014 15:30:29 GMT</pubDate>
  167.      <guid isPermaLink="false"></guid>
  168.      <dc:date>2014-09-05T15:30:29Z</dc:date>
  169.    </item>
  170.    <item>
  171.      <title>Backward analysis via over-approximate abstraction and under-approximate subtraction</title>
  172.      <link></link>
  173.      <description>Title: Backward analysis via over-approximate abstraction and under-approximate subtraction
  174. Authors: Piterman, Nir; Bakhirkin, Alexey; Berdine, Josh
  175. Abstract: We propose a novel approach for computing weakest liberal safe preconditions of programs. The standard approaches, which call for either under-approximation of a greatest fixed point, or complementation of a least fixed point, are often difficult to apply successfully. Our approach relies on a different decomposition of the weakest precondition of loops. We exchange the greatest fixed point for the computation of a least fixed point above a recurrent set, instead of the bottom element. Convergence is achieved using over-approximation, while in order to maintain soundness we use an under-approximating logical subtraction operation. Unlike general complementation, subtraction more easily allows for increased precision in case its arguments are related. The approach is not restricted to a specific abstract domain and we use it to analyze programs using the abstract domains of intervals and of 3-valued structures.
  176. Description: The file associated with this record is embargoed until 12 months after the date of publication. The final published version may be available through the links above.</description>
  177.      <pubDate>Thu, 26 Jun 2014 14:09:45 GMT</pubDate>
  178.      <guid isPermaLink="false"></guid>
  179.      <dc:date>2014-06-26T14:09:45Z</dc:date>
  180.    </item>
  181.    <item>
  182.      <title>Mining sequential patterns from probabilistic databases</title>
  183.      <link></link>
  184.      <description>Title: Mining sequential patterns from probabilistic databases
  185. Authors: Muzammal, Muhammad; Raman, Rajeev
  186. Abstract: This paper considers the problem of sequential pattern mining (SPM) in&#xD;
  187. probabilistic databases. Specifically, we consider SPM in situations where there is uncertainty&#xD;
  188. in associating an event with a source, model this kind of uncertainty in the probabilistic&#xD;
  189. database framework and consider the problem of enumerating all sequences&#xD;
  190. whose expected support is sufficiently large. We give an algorithm based on dynamic&#xD;
  191. programming to compute the expected support of a sequential pattern. Next, we propose&#xD;
  192. three algorithms for mining sequential patterns from probabilistic databases. The&#xD;
  193. first two algorithms are based on the candidate generation framework – one each based&#xD;
  194. on a breadth-first (similar to GSP) and a depth-first (similar to SPAM) exploration&#xD;
  195. of the search space. The third one is based on the pattern growth framework (similar&#xD;
  196. to PrefixSpan). We propose optimizations that mitigate the effects of the expensive&#xD;
  197. dynamic programming computation step. We give an empirical evaluation of the probabilistic&#xD;
  198. SPM algorithms and the optimizations, and demonstrate the scalability of the&#xD;
  199. algorithms in terms of CPU time and the memory usage. We also demonstrate the&#xD;
  200. effectiveness of the probabilistic SPM framework in extracting meaningful sequences in&#xD;
  201. the presence of noise.
  202. Description: The file associated with this record is embargoed until 12 months after the date of publication. The final published version may be available through the links above.</description>
  203.      <pubDate>Thu, 26 Jun 2014 13:15:31 GMT</pubDate>
  204.      <guid isPermaLink="false"></guid>
  205.      <dc:date>2014-06-26T13:15:31Z</dc:date>
  206.    </item>
  207.    <item>
  208.      <title>Design-by-contract for software architectures</title>
  209.      <link></link>
  210.      <description>Title: Design-by-contract for software architectures
  211. Authors: Poyias, Kyriakos
  212. Abstract: We propose a design-by-contract (DbC) approach to specify and maintain architectural&#xD;
  213. level properties of software. Such properties are typically relevant in the design&#xD;
  214. phase of the development cycle but may also impact the execution of systems. We give a&#xD;
  215. formal framework for specifying software architectures (and their refinements) together&#xD;
  216. with contracts that architectural configurations abide by. In our framework, we can&#xD;
  217. specify that if an architecture guarantees a given pre-condition and a refinement rule&#xD;
  218. satisfies a given contract, then the refined architecture will enjoy a given post-condition.&#xD;
  219. Methodologically, we take Architectural Design Rewriting (ADR) as our architectural&#xD;
  220. description language. ADR is a rule-based formal framework for modelling (the&#xD;
  221. evolution of) software architectures. We equip the reconfiguration rules of an ADR&#xD;
  222. architecture with pre- and post-conditions expressed in a simple logic; a pre-condition&#xD;
  223. constrains the applicability of a rule while a post-condition specifies the properties&#xD;
  224. expected of the resulting graphs. We give an algorithm to compute the weakest precondition&#xD;
  225. out of a rule and its post-condition. Furthermore, we propose a monitoring&#xD;
  226. mechanism for recording the evolution of systems after certain computations, maintaining&#xD;
  227. the history in a tree-like structure. The hierarchical nature of ADR allows us to&#xD;
  228. take full advantage of the tree-like structure of the monitoring mechanism. We exploit&#xD;
  229. this mechanism to formally define new rewriting mechanisms for ADR reconfiguration&#xD;
  230. rules. Also, by monitoring the evolution we propose a way of identifying which part of&#xD;
  231. a system has been affected when unexpected run-time behaviours emerge. Moreover,&#xD;
  232. we propose a methodology that allows us to select which rules can be applied at the&#xD;
  233. architectural level to reconfigure a system so as to regain its architectural style when it&#xD;
  234. becomes compromised by unexpected run-time reconfigurations.</description>
  235.      <pubDate>Mon, 16 Jun 2014 15:40:37 GMT</pubDate>
  236.      <guid isPermaLink="false"></guid>
  237.      <dc:date>2014-06-16T15:40:37Z</dc:date>
  238.    </item>
  239.    <item>
  240.      <title>A structured approach to VO reconfigurations through Policies</title>
  241.      <link></link>
  242.      <description>Title: A structured approach to VO reconfigurations through Policies
  243. Authors: Reiff-Marganiec, Stephan
  244. Abstract: One of the strengths of Virtual Organisations is their ability to dynamically and rapidly adapt in response&#xD;
  245. to changing environmental conditions. Dynamic adaptability has been studied in other system&#xD;
  246. areas as well, and system management through policies has crystallized as a very prominent solution&#xD;
  247. in system and network administration. However, these areas are often concerned with very&#xD;
  248. low-level technical aspects. Previous work on the APPEL policy language has been aimed at dynamically&#xD;
  249. adapting system behaviour to satisfy end-user demands and – as part of STPOWLA – APPEL&#xD;
  250. was used to adapt workflow instances at runtime. In this paper we explore how the ideas of APPEL&#xD;
  251. and STPOWLA can be extended from workflows to the wider scope of Virtual Organisations. We will&#xD;
  252. use a Travel Booking VO as an example.</description>
  253.      <pubDate>Fri, 30 May 2014 10:52:19 GMT</pubDate>
  254.      <guid isPermaLink="false"></guid>
  255.      <dc:date>2014-05-30T10:52:19Z</dc:date>
  256.    </item>
  257.    <item>
  258.      <title>Maintaining transactional integrity in long running workflow service : a policy-driven framework</title>
  259.      <link></link>
  260.      <description>Title: Maintaining transactional integrity in long running workflow service : a policy-driven framework
  261. Authors: Reiff-Marganiec, Stephan; Ali, Manar S.
  262. Abstract: This chapter presents a framework to provide autonomous handling of long running transactions based&#xD;
  263. on dependencies which are derived from the workflow. Business Processes naturally involve long running&#xD;
  264. activities and require transactional behaviour across them. This framework presents a solution for forward&#xD;
  265. recovery from errors by automatic application of compensation to executing instances of workflows. The&#xD;
  266. mechanism is based on propagation of failures through a recursive hierarchical structure of transaction&#xD;
  267. components (nodes and execution paths). The authors discuss a transaction management system that is&#xD;
  268. implemented as a reactive system controller, where system components change their states based on rules&#xD;
  269. in response to triggering of events, such as activation, failure, force-fail, completion, or compensation&#xD;
  270. events. One notable feature of the model is the distinction of vital and non-vital components, allowing&#xD;
  271. the process designer to express the cruciality of activities in the workflow with respect to the business&#xD;
  272. logic. Another novel feature is that in addition to dependencies arising from the structure of the workflow,&#xD;
  273. the approach also permits the workflow designer to specify additional dependencies which will also be&#xD;
  274. enforced. Thus, the authors introduce new techniques and architectures supporting enterprise integration&#xD;
  275. solutions that cater to the dynamics of business needs. The approach is implemented through workflow&#xD;
  276. actions executed by services and allows management of faults through a policy-driven framework.</description>
  277.      <pubDate>Thu, 29 May 2014 15:48:08 GMT</pubDate>
  278.      <guid isPermaLink="false"></guid>
  279.      <dc:date>2014-05-29T15:48:08Z</dc:date>
  280.    </item>
  281.    <item>
  282.      <title>Encoding range minima and range top-2 queries</title>
  283.      <link></link>
  284.      <description>Title: Encoding range minima and range top-2 queries
  285. Authors: Davoodi, Pooya; Navarro, Gonzalo; Raman, Rajeev; Rao, S. Srinivasa
  286. Abstract: We consider the problem of encoding range minimum queries (RMQs): given an array A[1..n] of distinct totally ordered values, to pre-process A and create a data structure that can answer the query RMQ(i,j), which returns the index containing the smallest element in A[i..j], without access to the array A at query time. We give a data structure whose space usage is 2n+o(n) bits, which is asymptotically optimal for worst-case data, and answers RMQs in O(1) worst-case time. This matches the previous result of Fischer and Heun, but is obtained in a more natural way. Furthermore, our result can encode the RMQs of a random array A in 1.919n+o(n) bits in expectation, which is not known to hold for Fischer and Heun’s result. We then generalize our result to the encoding range top-2 query (RT2Q) problem, which is like the encoding RMQ problem except that the query RT2Q(i,j) returns the indices of both the smallest and second smallest elements of A[i..j]. We introduce a data structure using 3.272n+o(n) bits that answers RT2Qs in constant time, and also give lower bounds on the effective entropy of the RT2Q problem.</description>
  287.      <pubDate>Wed, 28 May 2014 10:49:40 GMT</pubDate>
  288.      <guid isPermaLink="false"></guid>
  289.      <dc:date>2014-05-28T10:49:40Z</dc:date>
  290.    </item>
  291.    <item>
  292.      <title>Optimal indexes for sparse bit vectors</title>
  293.      <link></link>
  294.      <description>Title: Optimal indexes for sparse bit vectors
  295. Authors: Golynski, Alexander; Orlandi, Alessio; Raman, Rajeev; Rao, S. Srinivasa
  296. Abstract: We consider the problem of supporting rank and select operations on a bit vector of length m with n 1-bits. The problem is considered in the succinct index model, where the bit vector is stored in "read-only" memory and an additional data structure, called the index is created during pre-processing to help answer the above queries. We give asymptotically optimal density-sensitive trade-offs, involving both m and n, that relate the size of the index to the number of accesses to the bit vector (and processing time) needed to answer the above queries. The results are particularly interesting for the case where n=o(m).</description>
  297.      <pubDate>Tue, 27 May 2014 10:19:32 GMT</pubDate>
  298.      <guid isPermaLink="false"></guid>
  299.      <dc:date>2014-05-27T10:19:32Z</dc:date>
  300.    </item>
  301.    <item>
  302.      <title>An empirical evaluation of extendible arrays</title>
  303.      <link></link>
  304.      <description>Title: An empirical evaluation of extendible arrays
  305. Authors: Joannou, Stelios; Raman, Rajeev
  306. Editors: Pardalos, PM; Rebennack, S
  307. Abstract: We study the performance of several alternatives for implementing extendible arrays, which allow random access to elements stored
  308. in them, whilst allowing the arrays to be grown and shrunk. The study
  309. not only looks at the basic operations of grow/shrink and accessing data, but also at the effects of memory fragmentation on performance.
  310. Description: The final publication is
  311. available at</description>
  312.      <pubDate>Wed, 09 Apr 2014 10:40:44 GMT</pubDate>
  313.      <guid isPermaLink="false"></guid>
  314.      <dc:date>2014-04-09T10:40:44Z</dc:date>
  315.    </item>
  316.    <item>
  317.      <title>Cell-cycle regulation of NOTCH signaling during C. elegans vulval development</title>
  318.      <link></link>
  319.      <description>Title: Cell-cycle regulation of NOTCH signaling during C. elegans vulval development
  320. Authors: Nusser-Stein, Stefanie; Beyer, Antje; Rimann, Ivo; Adamczyk, Magdalene; Piterman, Nir; Hajnal, Alex; Fisher, Jasmin
  321. Abstract: C. elegans vulval development is one of the best‐characterized systems to study cell fate specification during organogenesis. The detailed knowledge of the signaling pathways determining vulval precursor cell (VPC) fates permitted us to create a computational model based on the antagonistic interactions between the epidermal growth factor receptor (EGFR)/RAS/MAPK and the NOTCH pathways that specify the primary and secondary fates, respectively. A key notion of our model is called bounded asynchrony, which predicts that a limited degree of asynchrony in the progression of the VPCs is necessary to break their equivalence. While searching for a molecular mechanism underlying bounded asynchrony, we discovered that the termination of NOTCH signaling is tightly linked to cell‐cycle progression. When single VPCs were arrested in the G1 phase, intracellular NOTCH failed to be degraded, resulting in a mixed primary/secondary cell fate. Moreover, the G1 cyclins CYD‐1 and CYE‐1 stabilize NOTCH, while the G2 cyclin CYB‐3 promotes NOTCH degradation. Our findings reveal a synchronization mechanism that coordinates NOTCH signaling with cell‐cycle progression and thus permits the formation of a stable cell fate pattern.</description>
  322.      <pubDate>Fri, 07 Mar 2014 13:50:41 GMT</pubDate>
  323.      <guid isPermaLink="false"></guid>
  324.      <dc:date>2014-03-07T13:50:41Z</dc:date>
  325.    </item>
  326.    <item>
  327.      <title>Faster temporal reasoning for infinite-state programs</title>
  328.      <link></link>
  329.      <description>Title: Faster temporal reasoning for infinite-state programs
  330. Authors: Piterman, Nir; Cook, Byron; Khlaaf, Heidy
  331. Abstract: In many model checking tools that support temporal logic, performance is hindered by redundant reasoning performed in the presence of nested temporal operators. In particular, tools supporting the state-based temporal logic CTL often symbolically partition the system's state space using the sub-formulae of the input temporal formula. This can lead to repeated work when tools are applied to infinite-state programs, as the characterizations of the state-spaces for nearby program locations are often similar and interrelated. In this paper, we describe a new symbolic procedure for CTL verification of infinite-state programs. Our procedure uses the structure of the program's control-flow graph in combination with the nesting of temporal operators in order to optimize reasoning performed during symbolic model checking. An experimental evaluation against competing tools demonstrates that our approach not only gains orders-of-magnitude speed improvements, but also allows temporal reasoning to scale to larger programs.</description>
  332.      <pubDate>Thu, 20 Feb 2014 14:08:53 GMT</pubDate>
  333.      <guid isPermaLink="false"></guid>
  334.      <dc:date>2014-02-20T14:08:53Z</dc:date>
  335.    </item>
  336.    <item>
  337.      <title>Strongly complete logics for coalgebras</title>
  338.      <link></link>
  339.      <description>Title: Strongly complete logics for coalgebras
  340. Authors: Kurz, Alexander; Rosicky, Jiri
  341. Abstract: Coalgebras for a functor model different types of transition systems in a uniform way. This paper focuses on a uniform account of finitary logics for set-based coalgebras. In particular, a general construction of a logic from an arbitrary set-functor is given and proven to be strongly complete under additional assumptions. We proceed in three parts. Part I argues that sifted colimit preserving functors are those functors that preserve universal algebraic structure. Our main theorem here states that a functor preserves sifted colimits if and only if it has a finitary presentation by operations and equations. Moreover, the presentation of the category of algebras for the functor is obtained compositionally from the presentations of the underlying category and of the functor. Part II investigates algebras for a functor over ind-completions and extends the theorem of Jónsson and Tarski on canonical extensions of Boolean algebras with operators to this setting. Part III shows, based on Part I, how to associate a finitary logic to any finite-sets preserving functor T. Based on Part II we prove the logic to be strongly complete under a reasonable condition on T.</description>
  342.      <pubDate>Fri, 14 Feb 2014 10:52:40 GMT</pubDate>
  343.      <guid isPermaLink="false"></guid>
  344.      <dc:date>2014-02-14T10:52:40Z</dc:date>
  345.    </item>
  346.    <item>
  347.      <title>Completeness for the coalgebraic cover modality</title>
  348.      <link></link>
  349.      <description>Title: Completeness for the coalgebraic cover modality
  350. Authors: Kupke, Clemens; Kurz, Alexander; Venema, Yde
  351. Abstract: We study the finitary version of the coalgebraic logic introduced by L. Moss. The syntax of this logic, which is introduced uniformly with respect to a coalgebraic type functor (required to preserve weak pullbacks), extends that of classical propositional logic with a so-called coalgebraic cover modality depending on the type functor. Its semantics is defined in terms of a categorically defined relation lifting operation. As the main contributions of our paper we introduce a derivation system, and prove that it provides a sound and complete axiomatization for the collection of coalgebraically valid inequalities. Our soundness and completeness proof is algebraic, and we employ Pattinson's stratification method, showing that our derivation system can be stratified in countably many layers, corresponding to the modal depth of the formulas involved. In the proof of our main result we identify some new concepts and obtain some auxiliary results of independent interest. We survey properties of the notion of relation lifting, induced by an arbitrary but fixed set functor. We introduce a category of Boolean algebra presentations, and establish an adjunction between it and the category of Boolean algebras. Given the fact that our derivation system involves only formulas of depth one, it can be encoded as an endo-functor on Boolean algebras. We show that this functor is finitary and preserves embeddings, and we prove that the Lindenbaum-Tarski algebra of our logic can be identified with the initial algebra for this functor.</description>
  352.      <pubDate>Fri, 14 Feb 2014 10:40:06 GMT</pubDate>
  353.      <guid isPermaLink="false"></guid>
  354.      <dc:date>2014-02-14T10:40:06Z</dc:date>
  355.    </item>
  356.    <item>
  357.      <title>Activity awareness in context-aware systems using software sensors</title>
  358.      <link></link>
  359.      <description>Title: Activity awareness in context-aware systems using software sensors
  360. Authors: Pathan, Kamran Taj
  361. Abstract: Context-aware systems, being components of a ubiquitous or pervasive computing environment, sense the users’ physical and virtual surroundings to adapt their behaviour accordingly. To capture activity context, tracking devices are common practice. Service Oriented Architecture is based on collections of services that communicate with each other. The communication between users and services involves data that can be used to sense the activity context of the user. SOAP is a simple protocol to let applications exchange their information over the web. The Semantic Web provides standards to express the relationships between data, allowing machines to process data more intelligently.
  362. This work proposes an approach for supporting context-aware activity sensing using software sensors. The main challenges in the work are specifying context information in a machine-processable form, developing a mechanism that can understand the data extracted from exchanges of services, utilising the data extracted from these services, and the architecture that supports sensing with software sensors. To address these issues, we have provided a bridge to combine traditional web services with semantic web technologies, a knowledge structure that supports activity context information in context-aware environments, and mapping methods that extract the data out of exchanges occurring between user and services and map it into a context model. The Direct Match, the Synonym Match and the Hierarchical Match methods are developed to map the extracted data from services into the knowledge structure.
  363. This research opens the door to further development of automated and dynamic context-aware systems that can exploit software sensors to sense the activity of the user in context-aware environments.</description>
  364.      <pubDate>Fri, 08 Nov 2013 15:49:07 GMT</pubDate>
  365.      <guid isPermaLink="false"></guid>
  366.      <dc:date>2013-11-08T15:49:07Z</dc:date>
  367.    </item>
  368.    <item>
  369.      <title>Pure Type Systems with Corecursion on Streams: From Finite to Infinitary Normalisation</title>
  370.      <link></link>
  371.      <description>Title: Pure Type Systems with Corecursion on Streams: From Finite to Infinitary Normalisation
  372. Authors: Severi, Paula; de Vries, Fer-Jan
  373. Abstract: In this paper, we use types for ensuring that programs involving streams are well-behaved. We extend pure type systems with a type constructor for streams, a modal operator next and a fixed point operator for expressing corecursion. This extension is called Pure Type Systems with Corecursion (CoPTS). The typed lambda calculus for reactive programs defined by Krishnaswami and Benton can be obtained as a CoPTS. CoPTSs allow us to study a wide range of typed lambda calculi extended with corecursion using only one framework. In particular, we study this extension for the calculus of constructions, which is the underlying formal language of Coq. We use the machinery of infinitary rewriting and formalise the idea of well-behaved programs using the concept of infinitary normalisation. The set of finite and infinite terms is defined as a metric completion. We establish a precise connection between the modal operator (• A) and the metric at a syntactic level by relating a variable of type (• A) with the depth of all its occurrences in a term. This syntactic connection between the modal operator and the depth is the key to the proofs of infinitary weak and strong normalisation.</description>
  374.      <pubDate>Mon, 28 Oct 2013 15:24:43 GMT</pubDate>
  375.      <guid isPermaLink="false"></guid>
  376.      <dc:date>2013-10-28T15:24:43Z</dc:date>
  377.    </item>
  378.    <item>
  379.      <title>Succinct Representations of Binary Trees for Range Minimum Queries</title>
  380.      <link></link>
  381.      <description>Title: Succinct Representations of Binary Trees for Range Minimum Queries
  382. Authors: Davoodi, Pooya; Raman, Rajeev; Satti, Satti Srinivasa
  383. Abstract: We provide two succinct representations of binary trees that can be used to represent the Cartesian tree of an array A of size n. Both the representations take the optimal 2n + o(n) bits of space in the worst case and support range minimum queries (RMQs) in O(1) time. The first one is a modification of the representation of Farzan and Munro (SWAT 2008); a consequence of this result is that we can represent the Cartesian tree of a random permutation in 1.92n + o(n) bits in expectation. The second one uses a well-known transformation between binary trees and ordinal trees, and ordinal tree operations to effect operations on the Cartesian tree. This provides an alternative, and more natural, way to view the 2D-Min-Heap of Fischer and Heun (SICOMP 2011). Furthermore, we show that the pre-processing needed to output the data structure can be performed in linear time using o(n) bits of extra working space, improving the result of Fischer and Heun who use n + o(n) bits working space.</description>
  384.      <pubDate>Mon, 28 Oct 2013 13:05:12 GMT</pubDate>
  385.      <guid isPermaLink="false"></guid>
  386.      <dc:date>2013-10-28T13:05:12Z</dc:date>
  387.    </item>
  388.    <item>
  389.      <title>Succinct representations of permutations and functions</title>
  390.      <link></link>
  391.      <description>Title: Succinct representations of permutations and functions
  392. Authors: Munro, J. Ian; Raman, Rajeev; Raman, Venkatesh; Rao, Satti Srinivasa
  393. Abstract: We investigate the problem of succinctly representing an arbitrary permutation, π, on {0, . . . , n−1} so that π[superscript k](i) can be computed quickly for any i and any (positive or negative) integer power k. A representation taking (1 + ϵ)n lg n + O(1) bits suffices to compute arbitrary powers in constant time, for any positive constant ϵ ≤ 1. A representation taking the optimal ⌈lg n!⌉ + o(n) bits can be used to compute arbitrary powers in O(lg n/ lg lg n) time.
  394. We then consider the more general problem of succinctly representing an arbitrary function, f : [n] → [n] so that f[superscript k](i) can be computed quickly for any i and any integer power k. We give a representation that takes (1 + ϵ)n lg n + O(1) bits, for any positive constant ϵ ≤ 1, and computes arbitrary positive powers in constant time. It can also be used to compute f[superscript k](i), for any negative integer k, in optimal O(1+ | f[superscript k](i) |) time. We place emphasis on the redundancy, or the space beyond the information-theoretic lower bound that the data structure uses in order to support operations efficiently. A number of lower bounds have recently been shown on the redundancy of data structures. These lower bounds confirm the space–time optimality of some of our solutions.
  395. Furthermore, the redundancy of one of our structures "surpasses" a recent lower bound by Golynski [Golynski, SODA 2009], thus demonstrating the limitations of this lower bound.
  396. Description: NOTICE: this is the author’s version of a work that was accepted for publication in Theoretical Computer Science. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Theoretical Computer Science, 2012, 438, pp. 47-88, DOI: 10.1016/j.tcs.2012.03.005.</description>
  397.      <pubDate>Mon, 28 Oct 2013 12:29:09 GMT</pubDate>
  398.      <guid isPermaLink="false"></guid>
  399.      <dc:date>2013-10-28T12:29:09Z</dc:date>
  400.    </item>
  401.    <item>
  402.      <title>Dynamic Compressed Strings with Random Access</title>
  403.      <link></link>
  404.      <description>Title: Dynamic Compressed Strings with Random Access
  405. Authors: Grossi, Roberto; Raman, Rajeev; Rao, Satti Srinivasa; Venturini, Rossano
  406. Abstract: We consider the problem of storing a string S in dynamic compressed form, while permitting operations directly on the compressed representation of S: access a substring of S; replace, insert or delete a symbol in S; count how many occurrences of a given symbol appear in any given prefix of S (called rank operation) and locate the position of the ith occurrence of a symbol inside S (called select operation). We discuss the time complexity of several combinations of these operations along with the entropy space bounds of the corresponding compressed indexes. In this way, we extend or improve the bounds of previous work by Ferragina and Venturini [TCS, 2007], Jansson et al. [ICALP, 2012], and Nekrich and Navarro [SODA, 2013].</description>
  407.      <pubDate>Fri, 04 Oct 2013 12:14:04 GMT</pubDate>
  408.      <guid isPermaLink="false"></guid>
  409.      <dc:date>2013-10-04T12:14:04Z</dc:date>
  410.    </item>
  411.    <item>
  412.      <title>Maintaining Transactional Integrity in Long Running Workflow Services: A Policy-Driven Framework</title>
  413.      <link></link>
  414.      <description>Title: Maintaining Transactional Integrity in Long Running Workflow Services: A Policy-Driven Framework
  415. Authors: Ali, Manar Sayed Salamah
  416. Abstract: Business-to-business integration is enhanced by workflow structures, which allow for aggregating web services as interconnected business tasks to achieve a business outcome. Business processes naturally involve long running activities, and require transactional behavior across them, addressed through general management, failure handling and compensation mechanisms. Loose coupling and the asynchronous nature of Web Services make a long-running transaction (LRT) subject to a wider range of communication failures. Two basic requirements of transaction management models are reliability and consistency despite failures. This research presents a framework to provide autonomous handling of long running transactions, based on dependencies which are derived from the workflow. The framework presents a solution for forward recovery from errors and compensations automatically applied to executing instances of workflows. The failure handling mechanism is based on the propagation of failures through a recursive hierarchical structure of transaction components (nodes and execution paths). The management system of transactions (COMPMOD) is implemented as a reactive system controller, where system components change their states based on rules in response to triggering of execution events. One practical feature of the model is the distinction of vital and non-vital components, allowing the process designer to express the criticality of activities in the workflow with respect to the business logic. A novel feature of this research is that the approach permits the workflow designer to specify additional compensation dependencies which will be enforced. A notable feature is the extensibility of the model, which is eased by its simple, declarative formalism. In our approach, the main concern is the provision of flexible and reliable underlying control flow mechanisms supported by management policies.
The main idea for incorporating policies is to manage the static structure of the workflow, as well as handling arbitrary failure and compensation events. Thus, we introduce new techniques and architectures to support enterprise integration solutions that support the dynamics of business needs.</description>
  417.      <pubDate>Thu, 12 Sep 2013 10:44:21 GMT</pubDate>
  418.      <guid isPermaLink="false"></guid>
  419.      <dc:date>2013-09-12T10:44:21Z</dc:date>
  420.    </item>
  421.    <item>
  422.      <title>Computing Minimum Spanning Trees with Uncertainty</title>
  423.      <link></link>
  424.      <description>Title: Computing Minimum Spanning Trees with Uncertainty
  425. Authors: Erlebach, Thomas; Hoffmann, Michael; Krizanc, Danny; Mihal’ák, Matúš; Raman, Rajeev
  426. Editors: Albers, S.; Weil, P.
  427. Abstract: We consider the minimum spanning tree problem in a setting where information about the edge weights of the given graph is uncertain. Initially, for each edge e of the graph only a set Aₑ, called an uncertainty area, that contains the actual edge weight wₑ is known. The algorithm can ‘update’ e to obtain the edge weight wₑ ∈ Aₑ. The task is to output the edge set of a minimum spanning tree after a minimum number of updates.
  428. An algorithm is k-update competitive if it makes at most k times as many updates as the optimum. We present a 2-update competitive algorithm if all areas Aₑ are open or trivial, which is the best possible among deterministic algorithms. The condition on the areas Aₑ is to exclude degenerate inputs for which no constant update competitive algorithm can exist.
  429. Next, we consider a setting where the vertices of the graph correspond to points in Euclidean space and the weight of an edge is equal to the distance of its endpoints. The location of each point is initially given as an uncertainty area, and an update reveals the exact location of the point. We give a general relation between the edge uncertainty and the vertex uncertainty versions of a problem and use it to derive a 4-update competitive algorithm for the minimum spanning tree problem in the vertex uncertainty model. Again, we show that this is best possible among deterministic algorithms.</description>
  430.      <pubDate>Tue, 10 Sep 2013 13:05:03 GMT</pubDate>
  431.      <guid isPermaLink="false"></guid>
  432.      <dc:date>2013-09-10T13:05:03Z</dc:date>
  433.    </item>
  434.    <item>
  435.      <title>Inferring Extended Finite State Machine Models from Software Executions</title>
  436.      <link></link>
  437.      <description>Title: Inferring Extended Finite State Machine Models from Software Executions
  438. Authors: Walkinshaw, Neil; Taylor, Ramsay; Derrick, John
  439. Abstract: The ability to reverse-engineer models of software behaviour is valuable for a wide range of software maintenance, validation and verification tasks. Current reverse-engineering techniques focus either on control-specific behaviour (e.g. in the form of Finite State Machines), or data-specific behaviour (e.g. as pre/post-conditions or invariants). However, typical software behaviour is usually a product of the two; models must combine both aspects to fully represent the software’s operation. Extended Finite State Machines (EFSMs) provide such a model. Although attempts have been made to infer EFSMs, these have been problematic. The models inferred by these techniques can be non-deterministic, and the inference algorithms can be inflexible and applicable only to traces with specific characteristics. This paper presents a novel EFSM inference technique that addresses the problems of inflexibility and non-determinism. It also adapts an experimental technique from the field of Machine Learning to evaluate EFSM inference techniques, and applies it to two open-source software projects.</description>
  440.      <pubDate>Wed, 04 Sep 2013 09:00:20 GMT</pubDate>
  441.      <guid isPermaLink="false"></guid>
  442.      <dc:date>2013-09-04T09:00:20Z</dc:date>
  443.    </item>
  444.    <item>
  445.      <title>Attitudes towards User Experience (UX) Measurement</title>
  446.      <link></link>
  447.      <description>Title: Attitudes towards User Experience (UX) Measurement
  448. Authors: Law, Lai-Chong; van Schaik, Paul
  449. Editors: Wiedenbeck, S
  450. Abstract: User experience (UX), as an immature research area, is still haunted by the challenges of defining the scope of UX in general and operationalising experiential qualities in particular. To explore the basic question of whether UX constructs are measurable, we conducted semi-structured interviews with eleven UX professionals where a set of questions in relation to UX measurement was explored. The interviewees expressed scepticism as well as ambivalence towards UX measures and shared anecdotes related to such measures in different contexts. Moreover, the data suggested that design-oriented UX professionals tended to be sceptical about UX measurement. To examine whether such an attitude prevailed in the HCI community, we conducted a survey with essentially the same set of questions used in the interviews. Altogether 367 responses were received; 170 of them were valid and analysed. The survey provided empirical evidence on this issue as a baseline for progress in UX measurement. Overall, results indicated that attitude was favourable and there were nuanced views on details of UX measurement, implying good prospects for its acceptance, given further progress in research and education in UX measurement where UX modelling grounded in theories can play a crucial role. Mutual recognition of the value of objective measures and subjective accounts of user experience can enhance the maturity of this area.</description>
  451.      <pubDate>Tue, 03 Sep 2013 14:47:00 GMT</pubDate>
  452.      <guid isPermaLink="false"></guid>
  453.      <dc:date>2013-09-03T14:47:00Z</dc:date>
  454.    </item>
  455.    <item>
  456.      <title>Algorithms for Wireless Communication and Sensor Networks</title>
  457.      <link></link>
  458.      <description>Title: Algorithms for Wireless Communication and Sensor Networks
  459. Authors: Grant, Thomas
  460. Abstract: In this thesis we address four problems concerned with algorithmic issues that arise from communication and sensor networks.
  461. The problem of scheduling wireless transmissions under SINR constraints has received much attention for unicast (one-to-one) transmissions. We consider the scheduling problem for multicast requests of one sender to many receivers, and present a logarithmic approximation algorithm and an online lower bound for arbitrary power assignments.
  462. We study the problem of maximising the lifetime of a sensor network for fault-tolerant target coverage in a setting with composite events, where a composite event is the simultaneous occurrence of one or more atomic events. We are the first to study this variation of the problem from a theoretical perspective, where each event must be covered twice and there are several event types, and we present a (6 + ɛ)-approximation algorithm for the problem.
  463. The online strongly connected dominating set problem concerns the construction of a dominating set that is strongly connected at all times: for every vertex not in the dominating set, there exists an edge to some vertex in the dominating set and an edge from some vertex in the dominating set. We present a lower bound for deterministic online algorithms and an algorithm whose competitive ratio matches the lower bound.
  464. The monotone barrier resilience problem is to determine how many sensors must be removed from a sensor network, such that a monotone path can exist between two points that does not intersect any sensor. We present a polynomial time algorithm that can determine the monotone barrier resilience for sensor networks of convex pseudo-disks of equal width.</description>
  465.      <pubDate>Thu, 29 Aug 2013 10:44:20 GMT</pubDate>
  466.      <guid isPermaLink="false"></guid>
  467.      <dc:date>2013-08-29T10:44:20Z</dc:date>
  468.    </item>
  469.    <item>
  470.      <title>Mining Sequential Patterns from Probabilistic Databases</title>
  471.      <link></link>
  472.      <description>Title: Mining Sequential Patterns from Probabilistic Databases
  473. Authors: Muzammal, Muhammad; Raman, Rajeev
  474. Editors: Huang, J.Z.; Cao, L.; Srivastava, J.
  475. Abstract: We consider sequential pattern mining in situations where there is uncertainty about which source an event is associated with. We model this in the probabilistic database framework and consider the problem of enumerating all sequences whose expected support is sufficiently large. Unlike frequent itemset mining in probabilistic databases [C. Aggarwal et al. KDD’09; Chui et al., PAKDD’07; Chui and Kao, PAKDD’08], we use dynamic programming (DP) to compute the probability that a source supports a sequence, and show that this suffices to compute the expected support of a sequential pattern. Next, we embed this DP algorithm into candidate generate-and-test approaches, and explore the pattern lattice both in a breadth-first (similar to GSP) and a depth-first (similar to SPAM) manner. We propose optimizations for efficiently computing the frequent 1-sequences, for re-using previously-computed results through incremental support computation, and for eliminating candidate sequences without computing their support via probabilistic pruning. Preliminary experiments show that our optimizations are effective in reducing the CPU cost.
  476. Description: Full text of this item is not currently available on the LRA.  The final published version may be available through the links above.</description>
  477.      <pubDate>Thu, 25 Jul 2013 10:38:00 GMT</pubDate>
  478.      <guid isPermaLink="false"></guid>
  479.      <dc:date>2013-07-25T10:38:00Z</dc:date>
  480.    </item>
  481.    <item>
  482.      <title>Range Extremum Queries</title>
  483.      <link></link>
  484.      <description>Title: Range Extremum Queries
  485. Authors: Raman, Rajeev
  486. Abstract: There has been a renewal of interest in data structures for range extremum queries. In such problems, the input comprises N points, which are either elements of a d-dimensional matrix, that is, their coordinates are specified by the 1D submatrices they lie in (row and column indices for d = 2), or they are points in ℝ[superscript d]. Furthermore, associated with each point is a priority that is independent of the point’s coordinate. The objective is to pre-process the given points and priorities to answer the range maximum query (RMQ): given a d-dimensional rectangle, report the points with maximum priority. The aim is to minimize the space used by the data structure and the time taken to answer the above query. This talk surveys a number of recent developments in this area, focussing on the cases d = 1 and d = 2.</description>
  487.      <pubDate>Thu, 25 Jul 2013 09:04:54 GMT</pubDate>
  488.      <guid isPermaLink="false"></guid>
  489.      <dc:date>2013-07-25T09:04:54Z</dc:date>
  490.    </item>
  491.    <item>
  492.      <title>Random Access to Grammar-Compressed Strings</title>
  493.      <link></link>
  494.      <description>Title: Random Access to Grammar-Compressed Strings
  495. Authors: Bille, Philip; Landau, Gad M.; Raman, Rajeev; Sadakane, Kunihiko; Satti, Srinivasa Rao; Weimann, Oren
  496. Abstract: Let S be a string of length N compressed into a context-free grammar S of size n. We present two representations of S achieving O(logN) random access time, and either O(n · α[subscript k](n)) construction time and space on the pointer machine model, or O(n) construction time and space on the RAM. Here, α[subscript k](n) is the inverse of the k[superscript th] row of Ackermann's function. Our representations also efficiently support decompression of any substring in S: we can decompress any substring of length m in the same complexity as a single random access query and additional O(m) time. Combining these results with fast algorithms for uncompressed approximate string matching leads to several efficient algorithms for approximate string matching on grammar-compressed strings without decompression. For instance, we can find all approximate occurrences of a pattern P with at most k errors in time O(n(min{|P|k,k[superscript 4] + |P|}+logN)+occ), where occ is the number of occurrences of P in S. Finally, we are able to generalize our results to navigation and other operations on grammar-compressed trees. All of the above bounds significantly improve the currently best known results. To achieve these bounds, we introduce several new techniques and data structures of independent interest, including a predecessor data structure, two "biased" weighted ancestor data structures, and a compact representation of heavy-paths in grammars.</description>
  497.      <pubDate>Thu, 11 Jul 2013 12:05:53 GMT</pubDate>
  498.      <guid isPermaLink="false"></guid>
  499.      <dc:date>2013-07-11T12:05:53Z</dc:date>
  500.    </item>
  501.    <item>
  502.      <title>Using Evidential Reasoning to Make Qualified Predictions of Software Quality</title>
  503.      <link></link>
  504.      <description>Title: Using Evidential Reasoning to Make Qualified Predictions of Software Quality
  505. Authors: Walkinshaw, Neil
  506. Editors: Wagner, S
  507. Abstract: Software quality is commonly characterised in a top-down manner. High-level notions such as quality are decomposed into hierarchies of sub-factors, ranging from abstract notions such as maintainability and reliability to lower-level notions such as test coverage or team-size. Assessments of abstract factors are derived from relevant sources of information about their respective lower-level sub-factors, by surveying sources such as metrics data and inspection reports. This can be difficult because (1) evidence might not be available, (2) interpretations of the data with respect to certain quality factors may be subject to doubt and intuition, and (3) there is no straightforward means of blending hierarchies of heterogeneous data into a single coherent and quantitative prediction of quality. This paper shows how Evidential Reasoning (ER) - a mathematical technique for reasoning about uncertainty and evidence - can address this problem. It enables the quality assessment to proceed in a bottom-up manner, by the provision of low-level assessments that make any uncertainty explicit, and automatically propagating these up to higher-level 'belief-functions' that accurately summarise the developer's opinion and make explicit any doubt or ignorance.</description>
  508.      <pubDate>Thu, 04 Jul 2013 15:35:16 GMT</pubDate>
  509.      <guid isPermaLink="false"></guid>
  510.      <dc:date>2013-07-04T15:35:16Z</dc:date>
  511.    </item>
  512.    <item>
  513.      <title>Ant Colony Optimization in Stationary and Dynamic Environments</title>
  514.      <link></link>
  515.      <description>Title: Ant Colony Optimization in Stationary and Dynamic Environments
  516. Authors: Mavrovouniotis, Michalis
  517. Abstract: The ant colony optimization (ACO) metaheuristic is inspired by the foraging behaviour of real ant colonies. As with other metaheuristics, ACO suffers from stagnation behaviour, where all ants construct the same solution from early stages.&#xD;
  518. As a result, the solution quality may be degraded because the population may get trapped in local optima. In this thesis, we propose a novel approach, called the direct communication (DC) scheme, that helps ACO algorithms escape from a local optimum if they get trapped. The experimental results on two routing problems showed that the DC scheme is effective.&#xD;
  519. Usually, researchers focus on problems with a static environment.&#xD;
  520. In the last decade, there has been growing interest in applying nature-inspired metaheuristics to optimization problems with dynamic environments. Usually, dynamic optimization problems (DOPs) are addressed using evolutionary algorithms. In this thesis, we apply several novel ACO algorithms to two routing DOPs. The proposed ACO algorithms are integrated with immigrants schemes in which immigrant ants are generated, either randomly or with the use of knowledge from previous environment(s), and replace other ants in the current population. The experimental results showed that each proposed algorithm performs better in different dynamic cases, and that they have better performance than other peer ACO algorithms in general.&#xD;
  521. The existing benchmark generators for DOPs are developed for binary-encoded combinatorial problems. Since routing problems are usually permutation-encoded combinatorial problems, the dynamic environments used in the experiments are generated using a novel benchmark generator that converts a static problem instance to a dynamic one. The specific dynamic benchmark generator changes the fitness landscape of the problem, which causes the optimum to change in every environmental change. Furthermore, in this thesis, another benchmark generator is proposed which moves the population to another location in the fitness landscape, instead of modifying it. In this way, the optimum is known and one can see how close to the optimum an algorithm performs during the environmental changes.</description>
  522.      <pubDate>Fri, 14 Jun 2013 09:45:27 GMT</pubDate>
  523.      <guid isPermaLink="false"></guid>
  524.      <dc:date>2013-06-14T09:45:27Z</dc:date>
  525.    </item>
  526.    <item>
  527.      <title>Completeness of Conversion between Reactive Programs for Ultrametric Models</title>
  528.      <link></link>
  529.      <description>Title: Completeness of Conversion between Reactive Programs for Ultrametric Models
  530. Authors: Severi, Paula; de Vries, Fer-Jan
  531. Abstract: In 1970 Friedman proved completeness of beta eta conversion in the simply-typed lambda calculus for the set-theoretical model. Recently Krishnaswami and Benton have captured the essence of Hudak’s reactive programs in an extension of simply typed lambda calculus with causal streams and a temporal modality and provided this typed lambda calculus for reactive programs with a sound ultrametric semantics.&#xD;
  532. We show that beta eta conversion in the typed lambda calculus of reactive programs is complete for the ultrametric model.</description>
  533.      <pubDate>Tue, 11 Jun 2013 10:45:55 GMT</pubDate>
  534.      <guid isPermaLink="false"></guid>
  535.      <dc:date>2013-06-11T10:45:55Z</dc:date>
  536.    </item>
  537.    <item>
  538.      <title>Broadcasting, Coverage, Energy Efficiency and Network Capacity in Wireless Networks</title>
  539.      <link></link>
  540.      <description>Title: Broadcasting, Coverage, Energy Efficiency and Network Capacity in Wireless Networks
  541. Authors: Henna, Shagufta
  542. Abstract: Broadcasting, coverage, duty cycling, and capacity improvement are some of the important areas of interest in Wireless Networks. We address different problems related to broadcasting, duty cycling, and capacity improvement by sensing different network conditions and dynamically adapting to them. We propose two cross layer broadcasting protocols called CASBA and CMAB which dynamically adapt to network conditions of congestion and mobility. We also propose a broadcasting protocol called DASBA which dynamically adapts to local node density. CASBA, CMAB, and DASBA improve the reachability while minimizing the broadcast cost. Duty cycling is an efficient mechanism to conserve energy in Wireless Sensor Networks (WSNs). Existing duty cycling techniques are unable to handle the contention under dynamic traffic loads. Our proposed protocol called SA-RI-MAC handles traffic contention much more efficiently than RI-MAC without sacrificing the energy efficiency. It improves the delivery ratio with a significant reduction in the latency and energy consumption. Due to limited battery life and fault tolerance issues posed by WSNs, efficient methods which ensure reliable coverage are highly desirable. One solution is to use disjoint set covers to cover the targets. We formulate a problem called MDC which addresses the maximum coverage by using disjoint set covers S1 and S2. We prove that MDC is NP-complete and propose a √n-approximation algorithm for the MDC problem to cover n targets. The use of multi-channel MAC protocols improves the capacity of wireless networks. Efficient multi-channel MAC protocols aim to utilize multiple channels effectively. Our proposed multi-channel MAC protocol called LCV-MMAC effectively utilizes the multiple channels by handling the control channel saturation. LCV-MMAC demonstrates significantly better throughput and fairness compared to DCA, MMAC, and AMCP in different network scenarios.</description>
  543.      <pubDate>Wed, 13 Mar 2013 11:14:56 GMT</pubDate>
  544.      <guid isPermaLink="false"></guid>
  545.      <dc:date>2013-03-13T11:14:56Z</dc:date>
  546.    </item>
  547.    <item>
  548.      <title>Zooming out of Membrane Graph Transformation Systems</title>
  549.      <link></link>
  550.      <description>Title: Zooming out of Membrane Graph Transformation Systems
  551. Authors: Bapodra, Mayur
  552. Abstract: Living cells offer a rich variety of complex interactions and interesting structures to those wishing to model processes in systems biology. Of particular interest is the hierarchical nature of cell configurations, the compartmentalized reactions that can occur within individual cells, and the interaction between different levels of this hierarchy. Graph transformation systems are an intuitive and readable modelling paradigm that lends itself to representing such systems since graphs can be utilised to represent this rich structural information, while graph rewriting rules can concisely describe cell reactions. We formulate a generic graph transformation model that captures many functional properties of membrane (or P) systems that take inspiration from such cell biological processes. The main focus is then on abstraction of systems defined as instances of this metamodel, which we refer to as membrane graph transformation systems. Often, such systems are analysed by stochastic simulation, as this allows us to examine their overall, emergent behaviour, incorporating the effect that randomness may have on the results. Stochastic simulation can be resource intensive, limiting the applicability of many modelling languages to real biological systems. To improve performance and the scalability of modelling, we formalize a methodology that hides detail in the lowest level of the hierarchy, but retains any important information as attributes. We then train the parameter of the abstract model using Bayesian networks so that the local, per-rule behaviour of the original, concrete model is preserved. Consequently, trends in global properties are preserved, such as the way in which they change with respect to the stochastic parameters of certain rules. The methodology is demonstrated and evaluated against two case studies: a hypothetical immunological response and a peer-to-peer voice over IP network.</description>
  553.      <pubDate>Tue, 12 Mar 2013 11:26:40 GMT</pubDate>
  554.      <guid isPermaLink="false"></guid>
  555.      <dc:date>2013-03-12T11:26:40Z</dc:date>
  556.    </item>
  557.    <item>
  558.      <title>The SiXML Project: SiXDOM 1.2</title>
  559.      <link></link>
  560.      <description>Title: The SiXML Project: SiXDOM 1.2
  561. Authors: Delpratt, O’Neil; Joannou, Stelios; Rahman, Naila; Raman, Rajeev
  562. Abstract: This White Paper introduces the SiXML project, developed by O’Neil Delpratt, Stelios Joannou, Naila Rahman and Rajeev Raman at the University of Leicester. SiXML uses the novel technology of succinct data structures to process XML documents in their entirety in main memory with a very low memory footprint. SiXML greatly improves the scalability of XML processing, both in terms of volume of data that can be handled and processing time. SiXML software can be downloaded in the accompanying zip file.
  563. Description: This is an updated version of the SDOM 1.0 software available at</description>
  564.      <pubDate>Fri, 08 Feb 2013 09:10:48 GMT</pubDate>
  565.      <guid isPermaLink="false"></guid>
  566.      <dc:date>2013-02-08T09:10:48Z</dc:date>
  567.    </item>
  568.    <item>
  569.      <title>Matching of Service Feature Diagrams based on Linear Logic</title>
  570.      <link></link>
  571.      <description>Title: Matching of Service Feature Diagrams based on Linear Logic
  572. Authors: Naeem, Muhammad
  573. Abstract: Managing variability is essential for an efficient implementation of end-user services that can be customised to individual needs. Apart from variations in selection and orchestration, third-party services may also have to be customisable. Although feature diagrams provide a high-level visual notation for variability, their use for specifying variability of services raises the problem of matching a required feature diagram against a set of provided ones.&#xD;
  574. In particular, the established interpretation of feature diagrams in Propositional Logic is not expressive enough for matching in the context of service variability. The problem becomes more visible when a certain requirement is going to be satisfied by a combination of multiple offers with overlapping features, which is a consequence of idempotence in Propositional Logic.&#xD;
  575. To address this problem, we propose service feature diagrams with semantics in Linear Logic. Linear Logic only allows the use of idempotence on the propositions with modalities. The permissible selection of features of a service feature diagram is called an instance diagram. We provide rules to obtain instance diagrams from the service feature diagram. The semantics of instance diagrams are also supported by Linear Logic.&#xD;
  576. This thesis not only introduces service feature diagrams, but also formalises their matching as linear deduction. We propose two categories of rules to verify diagrammatically whether a collection of service descriptions satisfies the requirements. First, graphical matching rules are used to match the service feature diagrams of requestor and provider. Second, graphical merging rules are used to merge multiple feature diagrams contributing to satisfying the requestor’s demands. We prove the correctness of these rules using the inference system of Linear Logic. We also provide an analysis of the graphical rules and show that their application is independent of the context in a service feature diagram, i.e., a graphical rule can be applied anywhere in a service feature diagram.</description>
  577.      <pubDate>Thu, 07 Feb 2013 10:46:02 GMT</pubDate>
  578.      <guid isPermaLink="false"></guid>
  579.      <dc:date>2013-02-07T10:46:02Z</dc:date>
  580.    </item>
  581.    <item>
  582.      <title>Online Algorithms for Temperature Aware Job Scheduling Problems</title>
  583.      <link></link>
  584.      <description>Title: Online Algorithms for Temperature Aware Job Scheduling Problems
  585. Authors: Birks, Martin David
  586. Abstract: Temperature is an important consideration when designing microprocessors. When exposed to high temperatures, component reliability can be reduced, and some components fail completely above certain temperatures. We consider the design and analysis of online algorithms; in particular, algorithms that use knowledge of the amount of heat a job will generate. We consider algorithms with two main objectives. The first is maximising job throughput. We show upper and lower bounds for the case where jobs are unit length, both when jobs are weighted and unweighted. Many of these bounds are matching for all cooling factors in the single and multiple machine case.&#xD;
  587. We extend this to consider the single machine case where jobs are longer than unit length. When all jobs are equal length we show matching bounds for the case without preemption. We also show that both models of preemption enable at most a slight reduction in the competitive ratio of algorithms. We then consider when jobs have variable lengths. We analyse both the model of unweighted jobs and that of jobs with weights proportional to their length. We show bounds that match within constant factors, in the non-preemptive and both preemptive models.&#xD;
  588. The second objective we consider is minimising flow time. We consider the objective of minimising the total flow time of a schedule. We show NP-hardness and inapproximability results for the offline case, as well as giving an approximation algorithm for the case where all release times are equal. For the online case we give some negative results for the case where maximum job heats are bounded. We also give some results for a resource augmentation model that include a 1-competitive algorithm when the extra power for the online algorithm is high enough. Finally we consider the objective of minimising the maximum flow time of any job in a schedule.</description>
  589.      <pubDate>Thu, 17 Jan 2013 11:55:28 GMT</pubDate>
  590.      <guid isPermaLink="false"></guid>
  591.      <dc:date>2013-01-17T11:55:28Z</dc:date>
  592.    </item>
  593.    <item>
  594.      <title>Partner-Based Scheduling and Routing for Grid Workflows</title>
  595.      <link></link>
  596.      <description>Title: Partner-Based Scheduling and Routing for Grid Workflows
  597. Authors: Ashraf, Jawad
  598. Abstract: The Grid has enabled the scientific community to make faster progress. Scientific experiments and data analyses once spanning several years can now be completed in a matter of hours. With the advancement of technology, the execution of scientific experiments, often represented as workflows, has become more demanding. Thus, there is a vital need for improvements in the scheduling of scientific workflows. Efficient execution of scientific workflows can be achieved by the timely allocation of the resources. Advance reservation can ensure the future availability of heterogeneous resources and help a scheduler to produce better schedules.&#xD;
  599. We propose a novel resource mapping technique for jobs of a Grid workflow in an advance reservation environment. Using a dynamic critical path based job selection method, our proposed technique considers the conditional mapping of parent and child jobs to the same resource, trying to minimise the communication duration between jobs and thus optimising the workflow completion time. The proposed method is analysed in both static and dynamic environments, and the simulation results show encouraging performance especially for workflows where the communication costs are higher than the computation costs.&#xD;
  600. We also propose a hybrid of multiple scheduling heuristics for the aforementioned problem, which chooses the best among multiple schedules computed by different algorithms. Simulation results show a significant improvement over well known scheduling heuristics in terms of workflow completion time.&#xD;
  601. Considering the advance reservation environment, a better schedule for the earliest completion of a workflow can be achieved if better paths can be found for the transfer of data files between jobs executed on different resources. We propose a K-shortest path based routing algorithm for finding good paths in the advance reservation environment. The results show that our proposed algorithm performs very well in terms of the earliest arrival time of the data.&#xD;
  602. Finally, we also study a modified partner based scheduling heuristic for non-advance reservation environments. The results demonstrate that our proposed algorithm is a promising candidate for adoption in such Grid environments.</description>
  603.      <pubDate>Wed, 16 Jan 2013 14:15:54 GMT</pubDate>
  604.      <guid isPermaLink="false"></guid>
  605.      <dc:date>2013-01-16T14:15:54Z</dc:date>
  606.    </item>
  607.    <item>
  608.      <title>Mining Sequential Patterns from Probabilistic Data</title>
  609.      <link></link>
  610.      <description>Title: Mining Sequential Patterns from Probabilistic Data
  611. Authors: Muzammal, Muhammad
  612. Abstract: Sequential Pattern Mining (SPM) is an important data mining problem. Although it is assumed in classical SPM that the data to be mined is deterministic, it is now recognized that data obtained from a wide variety of sources is inherently noisy or uncertain, such as data from sensors or data collected from the web from different (potentially conflicting) sources. Probabilistic databases are a popular framework for modelling uncertainty. Recently, several data mining and ranking problems have been studied in probabilistic databases. To the best of our knowledge, this is the first systematic study of mining sequential patterns from probabilistic databases.&#xD;
  613. In this work, we consider the kind of uncertainties that could arise in SPM. We propose four novel uncertainty models for SPM, namely tuple-level uncertainty, event-level uncertainty, source-level uncertainty and source-level uncertainty in deduplication, all of which fit into the probabilistic databases framework, and motivate them using potential real-life scenarios. We then define the interestingness predicate for two measures of interestingness, namely expected support and probabilistic frequentness. Next, we consider the computational complexity of evaluating the interestingness predicate, for various combinations of uncertainty models and interestingness measures, and show that different combinations have very different outcomes from a complexity theoretic viewpoint: whilst some cases are computationally tractable, we show other cases to be computationally intractable.&#xD;
  614. We give a dynamic programming algorithm to compute the source support probability and hence the expected support of a sequence in a source-level uncertain database. We then propose optimizations to speed up the support computation task. Next, we propose probabilistic SPM algorithms based on the candidate generation and pattern growth frameworks for the source-level uncertainty model and the expected support measure. We implement these algorithms, give an empirical evaluation of the probabilistic SPM algorithms, and show their scalability under different parameter settings using both real and synthetic datasets. Finally, we demonstrate the effectiveness of the probabilistic SPM framework at extracting meaningful patterns in the presence of noise.</description>
  615.      <pubDate>Thu, 20 Dec 2012 11:37:59 GMT</pubDate>
  616.      <guid isPermaLink="false"></guid>
  617.      <dc:date>2012-12-20T11:37:59Z</dc:date>
  618.    </item>
  619.    <item>
  620.      <title>Model-Based Testing Using Visual Contracts</title>
  621.      <link></link>
  622.      <description>Title: Model-Based Testing Using Visual Contracts
  623. Authors: Khan, Tamim Ahmed
  624. Abstract: Web services only expose interface-level information, abstracting away implementation details. Testing is a time-consuming and resource-intensive activity. Therefore, it is important to minimize the set of test cases executed without compromising quality. Since white-box testing techniques and traditional structural coverage criteria require access to code, we require a model-based approach for web service testing. Testing relies on oracles to provide expected outcomes for test cases and, if implemented manually, they depend on testers’ understanding of functional requirements to decide the correct response of the system on every given test case. As a result, they are costly to create and maintain, and their quality depends on the correct interpretation of the requirements. Alternatively, if suitable specifications are available, oracles can be generated automatically at lower cost and with better quality. We propose to specify service operations as visual contracts with executable formal specifications as rules of a typed attributed graph transformation system. We associate operation signatures with these rules for providing test oracles.&#xD;
  625. We analyze dependencies and conflicts between visual contracts to develop a dependency graph. We propose model-based coverage criteria, considering this dependency graph, to assess the completeness of test suites. We also propose a mechanism to find out which of the potential dependencies and the conflicts were exercised by a given test case. While executing the tests, the model is simulated and coverage is recorded as well as measured against the criteria. The criteria are formalized and the dynamic detection of conflicts and dependencies is developed. This requires keeping track of occurrences and overlaps of pre- and post-conditions, their enabling and disabling, in successive model states, and interpreting these in terms of the static dependency graph.&#xD;
  626. Systems evolve over time and need retesting each time there is a change. In order to verify that the quality of the system is maintained, we use regression testing. Since regression test suites tend to be large, we isolate the affected part in the system only retesting affected parts by rerunning a selected subset of the total test suite. We analyze the test cases that were executed on both versions and propose a mechanism to transfer the coverage provided by these test cases. This information helps us to assess the completeness of the test suite on the new version without executing all of it.</description>
  627.      <pubDate>Fri, 02 Nov 2012 12:37:51 GMT</pubDate>
  628.      <guid isPermaLink="false"></guid>
  629.      <dc:date>2012-11-02T12:37:51Z</dc:date>
  630.    </item>
  631.    <item>
  632.      <title>PCTL model checking of Markov chains: Truth and falsity as winning strategies in games</title>
  633.      <link></link>
  634.      <description>Title: PCTL model checking of Markov chains: Truth and falsity as winning strategies in games
  635. Authors: Fecher, H; Huth, M; Piterman, N; Wagner, D</description>
  636.      <pubDate>Wed, 24 Oct 2012 09:21:52 GMT</pubDate>
  637.      <guid isPermaLink="false"></guid>
  638.      <dc:date>2012-10-24T09:21:52Z</dc:date>
  639.    </item>
  640.    <item>
  641.      <title>A hierarchy of reverse bisimulations on stable configuration structures</title>
  642.      <link></link>
  643.      <description>Title: A hierarchy of reverse bisimulations on stable configuration structures
  644. Authors: Phillips, I; Ulidowski, I</description>
  645.      <pubDate>Wed, 24 Oct 2012 09:21:34 GMT</pubDate>
  646.      <guid isPermaLink="false"></guid>
  647.      <dc:date>2012-10-24T09:21:34Z</dc:date>
  648.    </item>
  649.    <item>
  650.      <title>Enriched Logical Connections</title>
  651.      <link></link>
  652.      <description>Title: Enriched Logical Connections
  653. Authors: Kurz, A; Velebil, J</description>
  654.      <pubDate>Wed, 24 Oct 2012 09:10:17 GMT</pubDate>
  655.      <guid isPermaLink="false"></guid>
  656.      <dc:date>2012-10-24T09:10:17Z</dc:date>
  657.    </item>
  658.  </channel>
  659. </rss>
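
The recommendations reported for this feed (a channel or item should not include both pubDate and dc:date) can be checked locally before resubmitting. Below is a minimal sketch of such a check using only Python's standard library; the helper name and the inline sample feed are illustrative, and the sample declares the real Dublin Core namespace rather than the empty xmlns:dc="" that appears in the feed above:

```python
import xml.etree.ElementTree as ET

# ElementTree expands dc:date to its namespace-qualified form.
DC_DATE = "{http://purl.org/dc/elements/1.1/}date"

def duplicate_date_elements(feed_xml):
    """Return labels of the channel/items that carry both pubDate and dc:date."""
    root = ET.fromstring(feed_xml)
    channel = root.find("channel")
    offenders = []
    if channel.find("pubDate") is not None and channel.find(DC_DATE) is not None:
        offenders.append("channel")
    for i, item in enumerate(channel.findall("item")):
        if item.find("pubDate") is not None and item.find(DC_DATE) is not None:
            offenders.append("item[%d]" % i)
    return offenders

# A tiny sample feed mirroring the duplication flagged by the validator.
sample = """<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <pubDate>Sat, 01 Nov 2014 00:03:07 GMT</pubDate>
    <dc:date>2014-11-01T00:03:07Z</dc:date>
    <item>
      <pubDate>Thu, 25 Jul 2013 09:04:54 GMT</pubDate>
      <dc:date>2013-07-25T09:04:54Z</dc:date>
    </item>
  </channel>
</rss>"""

print(duplicate_date_elements(sample))  # ['channel', 'item[0]']
```

Fixing the flagged entries is then a matter of dropping one of the two date elements from each offender (RSS 2.0 itself defines only pubDate; dc:date comes from the Dublin Core extension).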
