RSS Advisory Board

RSS Validator

This service tests the validity of a Really Simple Syndication (RSS) feed, checking that it follows the rules of the RSS specification.


Congratulations!

[Valid RSS] This is a valid RSS feed.

Recommendations

This feed is valid, but interoperability with the widest range of feed readers could be improved by implementing the following recommendations (a corrected channel header is sketched after the list).

  • line 8, column 4: A channel should not include both pubDate and dc:date [help]

        <dc:date>2014-12-17T00:56:24Z</dc:date>
        ^
  • line 17, column 6: An item should not include both pubDate and dc:date (50 occurrences) [help]

          <dc:date>2014-12-15T10:36:12Z</dc:date>
          ^
  • line 675, column 2: Missing atom:link with rel="self" [help]

      </channel>
      ^
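
Applying all three recommendations means keeping a single date element per channel and per item (pubDate is retained here and the redundant dc:date dropped, though keeping dc:date instead would also satisfy the validator), declaring the Atom namespace on the rss element, and adding an atom:link that points back at the feed's own URL, taken here from the Source line below. A minimal sketch of a corrected channel header, with the items elided:

    <?xml version="1.0" encoding="UTF-8"?>
    <rss xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
      <channel>
        <title>LRA Community:</title>
        <link>http://hdl.handle.net/2381/316</link>
        <description />
        <pubDate>Wed, 17 Dec 2014 00:56:24 GMT</pubDate>
        <!-- dc:date removed: pubDate already carries the channel date -->
        <atom:link href="https://lra.le.ac.uk/feed/rss_2.0/2381/316"
                   rel="self" type="application/rss+xml" />
        <!-- items follow, each likewise keeping pubDate and omitting dc:date -->
      </channel>
    </rss>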

Source: https://lra.le.ac.uk/feed/rss_2.0/2381/316

  1. <?xml version="1.0" encoding="UTF-8"?>
  2. <rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  3.  <channel>
  4.    <title>LRA Community:</title>
  5.    <link>http://hdl.handle.net/2381/316</link>
  6.    <description />
  7.    <pubDate>Wed, 17 Dec 2014 00:56:24 GMT</pubDate>
  8.    <dc:date>2014-12-17T00:56:24Z</dc:date>
  9.    <item>
  10.      <title>Flexibo : language and its application to static analysis</title>
  11.      <link>http://hdl.handle.net/2381/30106</link>
  12.      <description>Title: Flexibo : language and its application to static analysis
  13. Authors: Zhou, Jianguo
  14. Abstract: This thesis introduces a new object-based language FlexibO to support prototype development paradigm and more importantly, program static analysis. FlexibO offers extreme flexibility and hence enables developers to write programs that contain rich information for further analysis and optimization. FlexibO interpreter's seamless integration with Java (including direct access to Java classes and methods and direct inheritance of Java classes) makes it a suitable tool for fast prototype software development. FlexibO's extreme flexibility allows developers to redefine the behavior of program evaluation by overriding its default evaluation method. This mechanism can be used to translate FlexibO to other efficient languages. In this thesis we design a translator in FlexibO to translate Bulk-Synchronous Parallel specifications (expressed in FlexibO) to executable C programs linked with BSPLib. Before translation, the tool first checks syntax and type, then statically analyzes potential communication conflicts, and finally generates C code. The translation process can accurately analyze primitive commands but requires approximation (using abstract interpretation) for more advanced commands such as loops. The appropriateness of the translator and the associated static analysis can be formally analyzed using the technique of normal form.</description>
  15.      <pubDate>Mon, 15 Dec 2014 10:36:12 GMT</pubDate>
  16.      <guid isPermaLink="false">http://hdl.handle.net/2381/30106</guid>
  17.      <dc:date>2014-12-15T10:36:12Z</dc:date>
  18.    </item>
  19.    <item>
  20.      <title>Management concerns in service-driven applications</title>
  21.      <link>http://hdl.handle.net/2381/30107</link>
  22.      <description>Title: Management concerns in service-driven applications
  23. Authors: Alghamdi, Ahmed Musfer
  24. Abstract: With the abundance of functionally-similar Web-Services, the offered or agreed-on qualities are becoming decisive factors in attracting private as well as corporate customers to a given service, among all others. Nevertheless, the state-of-art in handling qualities, in this emerging service paradigm, remains largely bound to the aspects of technology and their standards (e.g. time-response, availability, throughputs). However, current approaches still ignore capital domain-based business qualities and management concerns (e.g. customer profiles, business deadlines). The main objective of this thesis is to leverage the handling of quality and management issues in service-driven business applications toward the intuitive business level supported by a precise and flexible conceptualisation. Thus, instead of addressing qualities using just rigid IT-SLA (service-level agreements) as followed by Web Services technology and standards, we propose to cope with more abstract and domain-dependent and adaptive qualities in an intuitive, yet conceptual, manner. The approach is centred on evolving business rules and policies for management, with a clean separation of functionalities as specific rules. At the conceptual level, we propose specialised architectural connectors called management laws that we also separate from coordination laws for functionality issues. We further propose a smooth and compliant mapping of the conceptualisation toward service technology, using existing rule-based standards.</description>
  25.      <pubDate>Mon, 15 Dec 2014 10:36:12 GMT</pubDate>
  26.      <guid isPermaLink="false">http://hdl.handle.net/2381/30107</guid>
  27.      <dc:date>2014-12-15T10:36:12Z</dc:date>
  28.    </item>
  29.    <item>
  30.      <title>Automatic presentations of groups and semigroups</title>
  31.      <link>http://hdl.handle.net/2381/30105</link>
  32.      <description>Title: Automatic presentations of groups and semigroups
  33. Authors: Oliver, Graham
  34. Abstract: Effectively deciding the satisfiability of logical sentences over structures is an area well-studied in the case of finite structures. There has been growing work towards considering this question for infinite structures. In particular the theory of automatic structures, considered here, investigates structures representable by finite automata. The closure properties of finite automata lead naturally to algorithms for deciding satisfiability for some logics. The use of finite automata to investigate infinite structures has been inspired by the interplay between the theory of finite automata and the theory of semigroups. This inspiration has come in particular from the theory of automatic groups and semigroups, which considers (semi)groups with regular sets of normal forms over their generators such that generator-composition is also regular. The work presented here is a contribution to the foundational problem for automatic structures: given a class of structures, classify those members that have an automatic presentation. The classes considered here are various interesting subclasses of the class of finitely generated semigroups, as well as the class of Cayley Graphs of groups. Although similar, the theories of automatic (semi)groups and automatic presentation differ in their construction. A classification for finitely generated groups allows a direct comparison of the theory of automatic presentations with the theory of automatic groups.</description>
  35.      <pubDate>Mon, 15 Dec 2014 10:36:10 GMT</pubDate>
  36.      <guid isPermaLink="false">http://hdl.handle.net/2381/30105</guid>
  37.      <dc:date>2014-12-15T10:36:10Z</dc:date>
  38.    </item>
  39.    <item>
  40.      <title>Formal languages and the irreducible word problem in groups</title>
  41.      <link>http://hdl.handle.net/2381/30103</link>
  42.      <description>Title: Formal languages and the irreducible word problem in groups
  43. Authors: Fonseca, Ana
  44. Abstract: There exist structural classifications of groups with a regular, one-counter or context-free word problem. Following on from this, the main object of the work presented here has been the irreducible word problem of a group, a notion introduced by Haring-Smith, who defined it as the subset of the word problem consisting of the non-empty words which have no non-empty proper subword equal to the identity. He proved that the groups with a finite irreducible word problem with respect to some group generating set are exactly the plain groups.;We know that the class of groups with a context-free irreducible word problem is a proper subclass of the virtually free groups. We look at direct products of finitely generated free groups by finite groups and also at the plain groups and consider their irreducible word problems with respect to minimal group generating sets. We prove that, of all the direct products of the infinite cyclic group by a non-trivial finite group, only Z x Z2 and Z x Z3 have context-free irreducible word problem (and only with respect to a few group generating sets). We also exhibit a plain group that has context-free irreducible word problem with respect to every minimal group generating set.;Looking at the direct products of finitely generated free groups by non-trivial finite groups, we have found that the only irreducible word problem that is one-counter is that of Z x Z2 with respect to the canonical group generating set.;As for irreducible word problems lying in classes of languages above context-free, on one hand, we prove that having a recursive irreducible word problem is equivalent to having a recursive word problem. On the other hand, we prove that, while there are groups such that the fact that their irreducible word problem is recursively enumerable implies that they are recursive, that is not always the case.</description>
  45.      <pubDate>Mon, 15 Dec 2014 10:36:09 GMT</pubDate>
  46.      <guid isPermaLink="false">http://hdl.handle.net/2381/30103</guid>
  47.      <dc:date>2014-12-15T10:36:09Z</dc:date>
  48.    </item>
  49.    <item>
  50.      <title>Optimization problems in communication networks</title>
  51.      <link>http://hdl.handle.net/2381/30104</link>
  52.      <description>Title: Optimization problems in communication networks
  53. Authors: Mihalak, Matus
  54. Abstract: We study four problems arising in the area of communication networks. The minimum-weight dominating set problem in unit disk graphs asks, for a given set D of weighted unit disks, to find a minimum-weight subset D' ⊆ D such that the disks D' intersect all disks D. The problem is NP-hard and we present the first constant-factor approximation algorithm. Applying our techniques to other geometric graph problems, we can obtain better (or new) approximation algorithms. The network discovery problem asks for a minimum number of queries that discover all edges and non-edges of an unknown network (graph). A query at node v discovers a certain portion of the network. We study two different query models and show various results concerning the complexity, approximability and lower bounds on competitive ratios of online algorithms. The OVSF-code assignment problem deals with assigning communication codes (nodes) from a complete binary tree to users. Users ask for codes of a certain depth and the codes have to be assigned such that (i) no assigned code is an ancestor of another assigned code and (ii) the number of (previously) assigned codes that have to be reassigned (in order to satisfy (i)) is minimized. We present hardness results and several algorithms (optimal, approximation, online and fixed-parameter tractable). The joint base station scheduling problem asks for an assignment of users to base stations (points in the plane) and for an optimal colouring of the resulting conflict graph: user u with its assigned base station b is in conflict with user v, if a disk with center at b, and u on its perimeter, contains v. We study the complexity, and present and analyse optimal, approximation and greedy algorithms for general and various special cases.</description>
  55.      <pubDate>Mon, 15 Dec 2014 10:36:09 GMT</pubDate>
  56.      <guid isPermaLink="false">http://hdl.handle.net/2381/30104</guid>
  57.      <dc:date>2014-12-15T10:36:09Z</dc:date>
  58.    </item>
  59.    <item>
  60.      <title>Categories of containers</title>
  61.      <link>http://hdl.handle.net/2381/30102</link>
  62.      <description>Title: Categories of containers
  63. Authors: Abbott, Michael Gordon
  64. Abstract: This thesis develops a new approach to the theory of datatypes based on separating data and storage resulting in a class of datatype called a container. The extension of a container is a functor which can be regarded as a generalised polynomial functor in type variables. A representation theorem allows every natural transformation between container functors to be represented as a unique pair of morphisms in a category.;Under suitable assumptions on the ambient category container functors are closed under composition, limits, coproducts and the construction of initial algebras and final coalgebras. These closure properties allow containers to provide a functorial semantics for an important class of polymorphic types including the strictly positive types.;Since polymorphic functions between functorial polymorphic types correspond to natural transformations, every polymorphic function can be represented as a container morphism; this simplifies reasoning about such functions and provides a framework for developing concepts of generic programming.;Intuitionistic well-founded trees or W-types are the initial algebras of container functors in one parameter; the construction of the initial algebra of a container in more than one parameter leads to the solution of a problem left incomplete by earlier work of Dybjer.;We also find that containers provide a suitable framework to define the derivative of a functor as a kind of linear exponential. We show that the use of containers is essential for this approach.;The theory is developed in the context of a fairly general category to allow for a wide choice of applications. We use the language of dependent type theory to develop the theory of containers in an arbitrary extensive locally cartesian closed category; much of the development in this thesis can also be generalised to display map categories. We develop the appropriate internal language and its interpretation in a category with families.</description>
  65.      <pubDate>Mon, 15 Dec 2014 10:36:09 GMT</pubDate>
  66.      <guid isPermaLink="false">http://hdl.handle.net/2381/30102</guid>
  67.      <dc:date>2014-12-15T10:36:09Z</dc:date>
  68.    </item>
  69.    <item>
  70.      <title>Alternative approaches to optophonic mappings</title>
  71.      <link>http://hdl.handle.net/2381/30101</link>
  72.      <description>Title: Alternative approaches to optophonic mappings
  73. Authors: Capp, Michael.
  74. Abstract: This thesis presents a number of modifications to a blind aid, known as the video optophone, which enables a blind user to more readily interpret that local environment for enhanced mobility and navigation.  Versions of this form of blind aid are generally both difficult to use and interpret, and are therefore inadequate for safe mobility.  The reason for this severe problem lies in the complexity and excessive bandwidth of the optophonic output after the conversion from scene-to-sound.;The work herein describes a number of modifications that can be applied to the current optophonic process to make more efficient use of the limited bandwidth provided by the auditory system when converting scene images to sound.  Various image processing and stereo techniques have been employed to artificially emulate the human visual system through the use of depth maps that successfully fade out the quantity of relatively unimportant image features, whilst emphasising the more significant regions such as nearby obstacles.;A series of experiments was designed to test these various modifications to the optophonic mapping by studying important factors of mobility and subject response whilst going about everyday life.  The devised system, labelled DeLIA for the Detection, Location, Identification, and Avoidance (or Action) of obstacles, provided a means for gathering statistical data on users' interpretation of the optophonic output.  An analysis of this data demonstrated a significant improvement when using the stereo cartooning technique, developed as part of this work, over the more conventional plain image as an input to an optophonic mapping from scene-to-sound.;Lastly, conclusions were drawn from the results, which indicated that the use of a stereo depth map as an input to a video optophone would improve its usefulness as an aid to general mobility.  For the purposes of detecting and determining text or similar detail, either a plain unmodified image or some form of edge (depth) image were found to produce the best results.</description>
  75.      <pubDate>Mon, 15 Dec 2014 10:36:08 GMT</pubDate>
  76.      <guid isPermaLink="false">http://hdl.handle.net/2381/30101</guid>
  77.      <dc:date>2014-12-15T10:36:08Z</dc:date>
  78.    </item>
  79.    <item>
  80.      <title>Multimodal human-computer interaction for enhancing customers’ decision-making and experience on B2C e-commerce websites</title>
  81.      <link>http://hdl.handle.net/2381/29330</link>
  82.      <description>Title: Multimodal human-computer interaction for enhancing customers’ decision-making and experience on B2C e-commerce websites
  83. Authors: Al Sokkar, Abdullah Ahmad Musa
  84. Abstract: The main aim of this thesis was to identify, complement and refine the factors that contribute to users’ intention to purchase, satisfaction and attitude toward using a particular B2C online environment, as well as the causal relationships between these factors. A systematic literature review on Information System (IS), Market Research, and User Experience (UX), which has informed the design and development of a pilot study, has been conducted. Results have led to the conception of an online shopping decision-making (OSDM) model called ‘Episodic UX Model on Decision-Making’ (EUX-DM). It has been developed by integrating the established Technology Acceptance Model (TAM) as well as Information System Success Model (ISSM), and emerging UX models, and Expectation-Confirmation Theory (ECT). Results from analysing 305 responses to the web-based questionnaire aimed to evaluate EUX-DM verified its validity. In addition, after investigating the users’ preferences for the possible modifications related to the use of visual avatar in a particular B2C e-Commerce website for information presentation, another research focus has been placed on identifying the real conversational functions and their related communicational behaviour in designing male and female visual avatars’ facial expressions and body gestures. Following this, four different types of information presentations have been developed to be used in a contrived B2C online shopping environment, namely: (i) 2D static graphical and textual information, (ii) non-expressive avatars, (iii) avatars with facial expressions, (iv) and avatars with facial expressions and body gestures information presentations. Consequently, these information presentations were empirically investigated through two experimental studies. The outcomes of these studies indicated that the gender of the avatar and participants were found to be insignificant factors for any of the measured qualities, and the use of visual avatars with animated facial expressions and body gestures positively influenced customers’ usage attitude, intention to purchase and satisfaction.</description>
  85.      <pubDate>Tue, 09 Dec 2014 11:49:14 GMT</pubDate>
  86.      <guid isPermaLink="false">http://hdl.handle.net/2381/29330</guid>
  87.      <dc:date>2014-12-09T11:49:14Z</dc:date>
  88.    </item>
  89.    <item>
  90.      <title>Lem : Reusable engineering of real-world semantics</title>
  91.      <link>http://hdl.handle.net/2381/29328</link>
  92.      <description>Title: Lem : Reusable engineering of real-world semantics
  93. Authors: Mulligan, Dominic P.; Gray, Kathryn E.; Sewell, Peter; Owens, Scott; Ridge, Tom
  94. Abstract: Recent years have seen remarkable successes in rigorous engineering: using mathematically rigorous semantic models (not just idealised calculi) of real-world processors, programming languages, protocols, and security mechanisms, for testing, proof, analysis, and design. Building these models is challenging, requiring experimentation, dialogue with vendors or standards bodies, and validation; their scale adds engineering issues akin to those of programming to the task of writing clear and usable mathematics. But language and tool support for specification is lacking. Proof assistants can be used but bring their own difficulties, and a model produced in one, perhaps requiring many person-years effort and maintained over an extended period, cannot be used by those familiar with another. We introduce Lem, a language for engineering reusable large-scale semantic models. The Lem design takes inspiration both from functional programming languages and from proof assistants, and Lem definitions are translatable into OCaml for testing, Coq, HOL4, and Isabelle/HOL for proof, and LaTeX and HTML for presentation. This requires a delicate balance of expressiveness, careful library design, and implementation of transformations - akin to compilation, but subject to the constraint of producing usable and human-readable code for each target. Lem's effectiveness is demonstrated by its use in practice. © 2014 ACM.</description>
  95.      <pubDate>Tue, 09 Dec 2014 10:33:34 GMT</pubDate>
  96.      <guid isPermaLink="false">http://hdl.handle.net/2381/29328</guid>
  97.      <dc:date>2014-12-09T10:33:34Z</dc:date>
  98.    </item>
  99.    <item>
  100.      <title>Simple, efficient, sound and complete combinator parsing for all context-free grammars, using an oracle</title>
  101.      <link>http://hdl.handle.net/2381/29327</link>
  102.      <description>Title: Simple, efficient, sound and complete combinator parsing for all context-free grammars, using an oracle
  103. Authors: Ridge, Tom
  104. Abstract: Parsers for context-free grammars can be implemented directly and naturally in a functional style known as “combinator parsing”, using recursion following the structure of the grammar rules. Traditionally parser combinators have struggled to handle all features of context-free grammars, such as left recursion.&#xD;
  105. &#xD;
  106. Previous work introduced novel parser combinators that could be used to parse all context-free grammars. A parser generator built using these combinators was proved both sound and complete in the HOL4 theorem prover. Unfortunately the performance was not as good as other parsing methods such as Earley parsing.&#xD;
  107. &#xD;
  108. In this paper, we build on this previous work, and combine it in novel ways with existing parsing techniques such as Earley parsing. The result is a sound-and-complete combinator parsing library that can handle all context-free grammars, and has good performance.
  109. Description: timestamp: Tue, 09 Sep 2014 10:57:11 +0200 biburl: http://dblp.uni-trier.de/rec/bib/conf/sle/Ridge14 bibsource: dblp computer science bibliography, http://dblp.org</description>
  110.      <pubDate>Tue, 09 Dec 2014 10:20:37 GMT</pubDate>
  111.      <guid isPermaLink="false">http://hdl.handle.net/2381/29327</guid>
  112.      <dc:date>2014-12-09T10:20:37Z</dc:date>
  113.    </item>
  114.    <item>
  115.      <title>On the Synthesis of Choreographies</title>
  116.      <link>http://hdl.handle.net/2381/29310</link>
  117.      <description>Title: On the Synthesis of Choreographies
  118. Authors: Lange, Julien
  119. Abstract: The theories based on session types stand out as effective methodologies&#xD;
  120. to specify and verify properties of distributed systems. A key result in the area shows the&#xD;
  121. suitability of choreography languages and session types as a basis for a choreography-driven&#xD;
  122. methodology for distributed software development. The methodology it advocates&#xD;
  123. is as follows: a team of programmers designs a global view of the interactions to be&#xD;
  124. implemented (i.e., a choreography), then the choreography is projected onto each role.&#xD;
  125. Finally, each program implementing one or more roles in the choreography is validated&#xD;
  126. against its corresponding projection(s).&#xD;
  127. This is an ideal methodology but it may not always be possible to design one set of&#xD;
  128. choreographies that will drive the overall development of a distributed system. Indeed,&#xD;
  129. software needs maintenance, specifications may evolve (sometimes also during the development),&#xD;
  130. and issues may arise during the implementation phase. Therefore, there is a&#xD;
  131. need for an alternative approach whereby it is possible to infer a choreography from local&#xD;
  132. behavioural specifications (i.e., session types).&#xD;
  133. We tackle the problem of synthesising choreographies from local behavioural specifications&#xD;
  134. by introducing a type system which assigns – if possible – a choreography to&#xD;
  135. a set of session types. We demonstrate the importance of obtaining a choreography from&#xD;
  136. local specifications through two applications. Firstly, we give three algorithms and a&#xD;
  137. methodology to help software designers refine a choreography into a global assertion,&#xD;
  138. i.e., a choreography decorated with logical formulae specifying senders’ obligations and&#xD;
  139. receivers’ requirements. Secondly, we introduce a formal model for distributed systems&#xD;
  140. where each participant advertises its requirements and obligations as behavioural contracts&#xD;
  141. (in the form of session types), and where multiparty sessions are started when a set&#xD;
  142. of contracts allows to synthesise a choreography.</description>
  143.      <pubDate>Thu, 04 Dec 2014 12:10:48 GMT</pubDate>
  144.      <guid isPermaLink="false">http://hdl.handle.net/2381/29310</guid>
  145.      <dc:date>2014-12-04T12:10:48Z</dc:date>
  146.    </item>
  147.    <item>
  148.      <title>User Activity Recognition through Software Sensors</title>
  149.      <link>http://hdl.handle.net/2381/29228</link>
  150.      <description>Title: User Activity Recognition through Software Sensors
  151. Authors: Reiff-Marganiec, Stephan
  152. Abstract: Context-aware systems are an instance of the ubiquitous or pervasive computing vision. They sense the users’ physical and virtual surrounding to identify the best system support for that user and adapt the system behaviour accordingly. The overall architecture of a context aware system can be broken into a number of logical aspects: gathering context data, storing the data, deriving knowledge through reasoning and mining and retrieving that knowledge to finally adapting system behaviours. Context is anything characterizing the environment of the user – their location, the ambient temperature, the people they are with, their current activity and some even consider the user’s mood. Traditionally context information has been gathered through the use of hardware sensors, such as GPS sensors or smart badges for locations and there has been work to track user’s eye movements at their desk to see which application they are using. However, determining the activity of a user has shown to be elusive to being sensed with hardware sensors. As users use web services more frequently they are exchanging messages with the services through the SOAP protocol. SOAP messages contain data, which is valuable if gathered and interpreted right – especially as this data can be shedding information on the activity of a user that goes beyond “sitting at the computer and typing”. We propose a complimentary sensor technology through the use of software sensors. The software sensors are essentially based on monitoring SOAP messages and inserting data for further reasoning and querying into a semantic context model. In this paper we consider details of extracting the data from SOAP messages in a non-obstructive way and show a solution to map the data from a SOAP message to our OWL ontology model automatically. On the latter, we specifically explain the methodology to map from SOAP messages to an existing structure of knowledge.
  153. Description: The file associated with this record is embargoed until 18 months after the date of publication. The final published version may be available through the links above.</description>
  154.      <pubDate>Thu, 30 Oct 2014 10:28:23 GMT</pubDate>
  155.      <guid isPermaLink="false">http://hdl.handle.net/2381/29228</guid>
  156.      <dc:date>2014-10-30T10:28:23Z</dc:date>
  157.    </item>
  158.    <item>
  159.      <title>Succinct indices for range queries with applications to orthogonal range maxima</title>
  160.      <link>http://hdl.handle.net/2381/29206</link>
  161.      <description>Title: Succinct indices for range queries with applications to orthogonal range maxima
  162. Authors: Farzan, Arash; Munro, J. Ian; Raman, Rajeev
  163. Abstract: We consider the problem of preprocessing N points in 2D, each endowed with a priority, to answer the following queries: given an axis-parallel rectangle, determine the point with the largest priority in the rectangle. Using the ideas of the effective entropy of range maxima queries and succinct indices for range maxima queries, we obtain a structure that uses O(N) words and answers the above query in O(logN loglogN) time. This is a direct improvement of Chazelle's result from 1985 [10] for this problem - Chazelle required O(N/ε) words to answer queries in O((logN)[superscript 1+ε]) time for any constant ε &gt; 0.</description>
  164.      <pubDate>Tue, 28 Oct 2014 10:40:48 GMT</pubDate>
  165.      <guid isPermaLink="false">http://hdl.handle.net/2381/29206</guid>
  166.      <dc:date>2014-10-28T10:40:48Z</dc:date>
  167.    </item>
  168.    <item>
  169.      <title>Succinct representations of ordinal trees</title>
  170.      <link>http://hdl.handle.net/2381/29205</link>
  171.      <description>Title: Succinct representations of ordinal trees
  172. Authors: Raman, Rajeev; Rao, S. Srinivasa
  173. Abstract: We survey succinct representations of ordinal, or rooted, ordered trees. Succinct representations use space that is close to the appropriate information-theoretic minimum, but support operations on the tree rapidly, usually in O(1) time.</description>
  174.      <pubDate>Tue, 28 Oct 2014 10:33:36 GMT</pubDate>
  175.      <guid isPermaLink="false">http://hdl.handle.net/2381/29205</guid>
  176.      <dc:date>2014-10-28T10:33:36Z</dc:date>
  177.    </item>
  178.    <item>
  179.      <title>On probabilistic models for uncertain sequential pattern mining</title>
  180.      <link>http://hdl.handle.net/2381/29204</link>
  181.      <description>Title: On probabilistic models for uncertain sequential pattern mining
  182. Authors: Muzammal, Muhammad; Raman, Rajeev
  183. Editors: Cao, L.; Feng, Y.; Zhong, J.
  184. Abstract: We study uncertainty models in sequential pattern mining. We consider situations where there is uncertainty either about a source or an event. We show that both these types of uncertainties could be modelled using probabilistic databases, and give possible-worlds semantics for both. We then describe ”interestingness” criteria based on two notions of frequentness (previously studied for frequent itemset mining) namely expected support [C. Aggarwal et al. KDD’09;Chui et al., PAKDD’07,’08] and probabilistic frequentness [Bernecker et al., KDD’09]. We study the interestingness criteria from a complexity-theoretic perspective, and show that in case of source-level uncertainty, evaluating probabilistic frequentness is #P-complete, and thus no polynomial time algorithms are likely to exist, but evaluate the interestingness predicate in polynomial time in the remaining cases.</description>
  185.      <pubDate>Tue, 28 Oct 2014 10:26:52 GMT</pubDate>
  186.      <guid isPermaLink="false">http://hdl.handle.net/2381/29204</guid>
  187.      <dc:date>2014-10-28T10:26:52Z</dc:date>
  188.    </item>
  189.    <item>
  190.      <title>Encodings for range selection and top-k queries</title>
  191.      <link>http://hdl.handle.net/2381/29203</link>
  192.      <description>Title: Encodings for range selection and top-k queries
  193. Authors: Grossi, Roberto; Iacono, John; Navarro, Gonzalo; Raman, Rajeev; Rao, S. Srinivasa
  194. Abstract: We study the problem of encoding the positions of the top-k elements of an array A[1..n] for a given parameter 1 ≤ k ≤ n. Specifically, for any i and j, we wish to create a data structure that reports the positions of the largest k elements in A[i..j] in decreasing order, without accessing A at query time. This is a natural extension of the well-known encoding range-maxima query problem, where only the position of the maximum in A[i..j] is sought, and finds applications in document retrieval and ranking. We give (sometimes tight) upper and lower bounds for this problem and some variants thereof.</description>
  195.      <pubDate>Tue, 28 Oct 2014 10:16:57 GMT</pubDate>
  196.      <guid isPermaLink="false">http://hdl.handle.net/2381/29203</guid>
  197.      <dc:date>2014-10-28T10:16:57Z</dc:date>
  198.    </item>
  199.    <item>
  200.      <title>Discovery of network properties with all-shortest-paths queries</title>
  201.      <link>http://hdl.handle.net/2381/29198</link>
  202.      <description>Title: Discovery of network properties with all-shortest-paths queries
  203. Authors: Bilo, Davide; Erlebach, Thomas Rainer; Mihalak, Matus; Widmayer, Peter
  204. Editors: Shvartsman, A. A.; Felber, P.
  205. Abstract: We consider the problem of discovering properties (such as the diameter) of an unknown network G(V,E) with a minimum number of queries. Initially, only the vertex set V&#xD;
  206. of the network is known. Information about the edges and non-edges of the network can be obtained&#xD;
  207. by querying nodes of the network. A query at a node q∈V returns the&#xD;
  208. union of all shortest paths from q to all other nodes in V. We study the&#xD;
  209. problem as an online problem - an algorithm does not initially know the&#xD;
  210. edge set of the network, and has to decide where to make the next query&#xD;
  211. based on the information that was gathered by previous queries. We&#xD;
  212. study how many queries are needed to discover the diameter, a minimal&#xD;
  213. dominating set, a maximal independent set, the minimum degree, and&#xD;
  214. the maximum degree of the network. We also study the problem of deciding with a minimum number of queries whether the network is 2-edge or&#xD;
  215. 2-vertex connected. We use the usual competitive analysis to evaluate the&#xD;
  216. quality of online algorithms, i.e., we compare online algorithms with optimum offline algorithms. For all properties except maximal independent&#xD;
  217. set and 2-vertex connectivity we present and analyze online algorithms.&#xD;
  218. Furthermore we show, for all the aforementioned properties, that "many"&#xD;
  219. queries are needed in the worst case. As our query model delivers more&#xD;
  220. information about the network than the measurement heuristics that are&#xD;
  221. currently used in practice, these negative results suggest that a similar&#xD;
  222. behavior can be expected in realistic settings, or in more realistic models&#xD;
  223. derived from the all-shortest-paths query model.</description>
  224.      <pubDate>Thu, 23 Oct 2014 15:40:02 GMT</pubDate>
  225.      <guid isPermaLink="false">http://hdl.handle.net/2381/29198</guid>
  226.      <dc:date>2014-10-23T15:40:02Z</dc:date>
  227.    </item>
  228.    <item>
  229.      <title>Asymptotically optimal encodings for range selection</title>
  230.      <link>http://hdl.handle.net/2381/29193</link>
  231.      <description>Title: Asymptotically optimal encodings for range selection
  232. Authors: Navarro, Gonzalo; Raman, Rajeev; Satti, Srinivasa Rao
  233. Abstract: We consider the problem of preprocessing an array A[1..n] to answer range selection and range top-k queries. Given a query interval [i..j] and a value k, the former query asks for the position of the kth largest value in A[i..j], whereas the latter asks for the positions of all the k largest values in A[i..j]. We consider the encoding version of the problem, where A is not available at query time, and an upper bound kappa on k, the rank that is to be selected, is given at construction time. We obtain data structures with asymptotically optimal size and query time on a RAM model with word size Θ(lg n) : our structures use O(n lg kappa) bits and answer range selection queries in time O(1+ lg k / lg lg n) and range top-k queries in time O(k), for any k ≤ kappa.
  234. Description: The file associated with this record is embargoed until the date of the conference.</description>
  235.      <pubDate>Wed, 22 Oct 2014 14:00:04 GMT</pubDate>
  236.      <guid isPermaLink="false">http://hdl.handle.net/2381/29193</guid>
  237.      <dc:date>2014-10-22T14:00:04Z</dc:date>
  238.    </item>
  239.    <item>
  240.      <title>Mining state-based models from proof corpora</title>
  241.      <link>http://hdl.handle.net/2381/29141</link>
  242.      <description>Title: Mining state-based models from proof corpora
  243. Authors: Gransden, Thomas; Walkinshaw, Neil; Raman, Rajeev
  244. Abstract: Interactive theorem provers have been used extensively to reason about various software/hardware systems and mathematical theorems. The key challenge when using an interactive prover is that finding a suitable sequence of proof steps that will lead to a successful proof requires a significant amount of human intervention. This paper presents an automated technique that takes as input examples of successful proofs and infers an Extended Finite State Machine as output. This can in turn be used to generate proofs of new conjectures. Our preliminary experiments show that the inferred models are generally accurate (contain few false-positive sequences) and that representing existing proofs in such a way can be very useful when guiding new ones.</description>
  245.      <pubDate>Tue, 07 Oct 2014 14:53:42 GMT</pubDate>
  246.      <guid isPermaLink="false">http://hdl.handle.net/2381/29141</guid>
  247.      <dc:date>2014-10-07T14:53:42Z</dc:date>
  248.    </item>
  249.    <item>
  250.      <title>Dynamizing succinct tree representations</title>
  251.      <link>http://hdl.handle.net/2381/29103</link>
  252.      <description>Title: Dynamizing succinct tree representations
  253. Authors: Joannou, Stelios; Raman, Rajeev
  254. Abstract: We consider succinct, or space-efficient, representations of ordinal trees. Representations exist that take 2n + o(n) bits to represent a static n-node ordinal tree - close to the information-theoretic minimum - and support navigational operations in O(1) time on a RAM model; and some implementations have good practical performance. The situation is different for dynamic ordinal trees. Although there is theoretical work on succinct dynamic ordinal trees, there is little work on the practical performance of these data structures. Motivated by applications to representing XML documents, in this paper, we report on a preliminary study on dynamic succinct data structures. Our implementation is based on representing the tree structure as a sequence of balanced parentheses, with navigation done using the min-max tree of Sadakane and Navarro (SODA '10). Our implementation shows promising performance for update and navigation, and our findings highlight two issues that we believe will be important to future implementations: the difference between the finger model of (say) Farzan and Munro (ICALP '09) and the parenthesis model of Sadakane and Navarro, and the choice of the balanced tree used to represent the min-max tree.</description>
  255.      <pubDate>Thu, 18 Sep 2014 08:51:47 GMT</pubDate>
  256.      <guid isPermaLink="false">http://hdl.handle.net/2381/29103</guid>
  257.      <dc:date>2014-09-18T08:51:47Z</dc:date>
  258.    </item>
  259.    <item>
  260.      <title>A model for supporting electrical engineering with e-learning</title>
  261.      <link>http://hdl.handle.net/2381/29066</link>
  262.      <description>Title: A model for supporting electrical engineering with e-learning
  263. Authors: Akaslan, Dursun
  264. Abstract: The overall goal of this research work was developing and evaluating a model for supporting electrical engineering with e-learning. The model development was based on the survey data collected from representative teachers and students in Turkey whereas the model evaluation was conducted in the relevant HEIs in Turkey and the United Kingdom. To develop the model, the study investigated the attitudes of representative key stakeholders towards e-learning in Turkey by administrating questionnaires and interviews with teachers and students. Then the responses of the teachers and students were compared. Based on the results, I proposed a model with a multi-dimensional approach to e-learning: (1) self-directed learning by studying e-book, (2) self-assessment by solving e-exercises, (3) teacher-directed learning by attending classroom sessions as an integral part of the blended learning (4) teacher-assessment by solving e-exercises, (5) computer-directed learning by playing e-games and (6) computer-assessment by solving e-exercises.&#xD;
  265. To evaluate the applicability of the model in different conditions, a case-control study was conducted to determine whether the model had the intended effect on the participating students in HEIs in Turkey and the United Kingdom. As the result of the case-control study, the effects of e-learning, blended learning and traditional learning were verified. However, there were significant differences among the groups. The overall scores indicated that e-learning and blended learning was more effective as compared to the traditional learning. The results of our study indicated that the knowledge increase in e-learners seemed to be gradual because they tended to study daily by completing each activity on time. However, the traditional learners did not have the same pattern because they usually did not read the core text and did not solve e-exercise regularly before the classroom sessions. The results of pre-placement, post-placement tests and middle tests also justified these assumptions.</description>
  266.      <pubDate>Mon, 08 Sep 2014 14:27:39 GMT</pubDate>
  267.      <guid isPermaLink="false">http://hdl.handle.net/2381/29066</guid>
  268.      <dc:date>2014-09-08T14:27:39Z</dc:date>
  269.    </item>
  270.    <item>
  271.      <title>Compressed representation of XML documents with rapid navigation</title>
  272.      <link>http://hdl.handle.net/2381/29062</link>
  273.      <description>Title: Compressed representation of XML documents with rapid navigation
  274. Authors: Kharabsheh, Mohammad Kamel Ahmad
  275. Abstract: XML (Extensible Markup Language) is a language used in data representation and&#xD;
  276. storage, and transmission and manipulation of data. Excessive memory consumption&#xD;
  277. is an important challenge when representing XML documents in main memory.&#xD;
  278. Document Object Model (DOM) APIs are used in a processing level that provides&#xD;
  279. access to all parts of XML documents through the navigation operations. Although&#xD;
  280. DOM serves as a general purpose tool that can be used in different applications,&#xD;
  281. it has high memory cost particularly if using naïve. The space usage of DOM has&#xD;
  282. been reduced significantly while keeping fast processing speeds, by use of succinct&#xD;
  283. data structures in SiXDOM [1]. However, SiXDOM does not explore in depth XML&#xD;
  284. data compression principles to improve in-memory space usage. Such XML data&#xD;
  285. compression techniques have been proven to be very effective in on-disk compression&#xD;
  286. of XML document. In this thesis we propose a new approach to represent XML&#xD;
  287. documents in-memory using XML data compression ideas to further reduce space&#xD;
  288. usage while rapidly supporting operations of the kind supported by DOM.&#xD;
  289. Our approach is based upon a compression method [2] which represents an XML&#xD;
  290. document as a directed acyclic graph (DAG) by sharing common subtrees. However,&#xD;
  291. this approach does not permit the representation of attributes and textual data,&#xD;
  292. and furthermore, a naive implementation of this idea gives very poor space usage&#xD;
  293. relative to other space-efficient DOM implementations [1]. In order to realise the&#xD;
  294. potential of this compression method as an in-memory representation, a number&#xD;
  295. of optimisations are made by application of succinct data structures and variable-length&#xD;
  296. encoding. Furthermore, a framework for supporting attribute and textual&#xD;
  297. data nodes is introduced. Finally, we propose a novel approach to representing the&#xD;
  298. textual data using Minimal Perfect Hashing (MPH).&#xD;
  299. We have implemented our ideas in a software library called DAGDOM and performed&#xD;
  300. extensive experimental evaluation on a number of standard XML files. DAGDOM&#xD;
  301. yields a good result and we are able to obtain significant space reductions over existing&#xD;
  302. space-efficient DOM implementations (typically 2 to 5 times space reduction),&#xD;
  303. with very modest degradations in CPU time for navigational operations.</description>
  304.      <pubDate>Fri, 05 Sep 2014 15:30:29 GMT</pubDate>
  305.      <guid isPermaLink="false">http://hdl.handle.net/2381/29062</guid>
  306.      <dc:date>2014-09-05T15:30:29Z</dc:date>
  307.    </item>
  308.    <item>
  309.      <title>Backward analysis via over-approximate abstraction and under-approximate subtraction</title>
  310.      <link>http://hdl.handle.net/2381/28950</link>
  311.      <description>Title: Backward analysis via over-approximate abstraction and under-approximate subtraction
  312. Authors: Piterman, Nir; Bakhirkin, Alexey; Berdine, Josh
  313. Abstract: We propose a novel approach for computing weakest liberal safe preconditions of programs. The standard approaches, which call for either under-approximation of a greatest fixed point, or complementation of a least fixed point, are often difficult to apply successfully. Our approach relies on a different decomposition of the weakest precondition of loops. We exchange the greatest fixed point for the computation of a least fixed point above a recurrent set, instead of the bottom element. Convergence is achieved using over-approximation, while in order to maintain soundness we use an under-approximating logical subtraction operation. Unlike general complementation, subtraction more easily allows for increased precision in case its arguments are related. The approach is not restricted to a specific abstract domain and we use it to analyze programs using the abstract domains of intervals and of 3-valued structures.
  314. Description: The file associated with this record is embargoed until 12 months after the date of publication. The final published version may be available through the links above.</description>
  315.      <pubDate>Thu, 26 Jun 2014 14:09:45 GMT</pubDate>
  316.      <guid isPermaLink="false">http://hdl.handle.net/2381/28950</guid>
  317.      <dc:date>2014-06-26T14:09:45Z</dc:date>
  318.    </item>
  319.    <item>
  320.      <title>Mining sequential patterns from probabilistic databases</title>
  321.      <link>http://hdl.handle.net/2381/28949</link>
  322.      <description>Title: Mining sequential patterns from probabilistic databases
  323. Authors: Muzammal, Muhammad; Raman, Rajeev
  324. Abstract: This paper considers the problem of sequential pattern mining (SPM) in&#xD;
  325. probabilistic databases. Specifically, we consider SPM in situations where there is uncertainty&#xD;
  326. in associating an event with a source, model this kind of uncertainty in the probabilistic&#xD;
  327. database framework and consider the problem of enumerating all sequences&#xD;
  328. whose expected support is sufficiently large. We give an algorithm based on dynamic&#xD;
  329. programming to compute the expected support of a sequential pattern. Next, we propose&#xD;
  330. three algorithms for mining sequential patterns from probabilistic databases. The&#xD;
  331. first two algorithms are based on the candidate generation framework – one each based&#xD;
  332. on a breadth-first (similar to GSP) and a depth-first (similar to SPAM) exploration&#xD;
  333. of the search space. The third one is based on the pattern growth framework (similar&#xD;
  334. to PrefixSpan). We propose optimizations that mitigate the effects of the expensive&#xD;
  335. dynamic programming computation step. We give an empirical evaluation of the probabilistic&#xD;
  336. SPM algorithms and the optimizations, and demonstrate the scalability of the&#xD;
  337. algorithms in terms of CPU time and the memory usage. We also demonstrate the&#xD;
  338. effectiveness of the probabilistic SPM framework in extracting meaningful sequences in&#xD;
  339. the presence of noise.
  340. Description: The file associated with this record is embargoed until 12 months after the date of publication. The final published version may be available through the links above.</description>
  341.      <pubDate>Thu, 26 Jun 2014 13:15:31 GMT</pubDate>
  342.      <guid isPermaLink="false">http://hdl.handle.net/2381/28949</guid>
  343.      <dc:date>2014-06-26T13:15:31Z</dc:date>
  344.    </item>
  345.    <item>
  346.      <title>Design-by-contract for software architectures</title>
  347.      <link>http://hdl.handle.net/2381/28924</link>
  348.      <description>Title: Design-by-contract for software architectures
  349. Authors: Poyias, Kyriakos
  350. Abstract: We propose a design by contract (DbC) approach to specify and maintain architectural&#xD;
  351. level properties of software. Such properties are typically relevant in the design&#xD;
  352. phase of the development cycle but may also impact the execution of systems. We give a&#xD;
  353. formal framework for specifying software architectures (and their refinements) together&#xD;
  354. with contracts that architectural configurations abide by. In our framework, we can&#xD;
  355. specify that if an architecture guarantees a given pre-condition and a refinement rule&#xD;
  356. satisfies a given contract, then the refined architecture will enjoy a given post-condition.&#xD;
  357. Methodologically, we take Architectural Design Rewriting (ADR) as our architectural&#xD;
  358. description language. ADR is a rule-based formal framework for modelling (the&#xD;
  359. evolution of) software architectures. We equip the reconfiguration rules of an ADR&#xD;
  360. architecture with pre- and post-conditions expressed in a simple logic; a pre-condition&#xD;
  361. constrains the applicability of a rule while a post-condition specifies the properties&#xD;
  362. expected of the resulting graphs. We give an algorithm to compute the weakest precondition&#xD;
  363. out of a rule and its post-condition. Furthermore, we propose a monitoring&#xD;
  364. mechanism for recording the evolution of systems after certain computations, maintaining&#xD;
  365. the history in a tree-like structure. The hierarchical nature of ADR allows us to&#xD;
  366. take full advantage of the tree-like structure of the monitoring mechanism. We exploit&#xD;
  367. this mechanism to formally define new rewriting mechanisms for ADR reconfiguration&#xD;
  368. rules. Also, by monitoring the evolution we propose a way of identifying which part of&#xD;
  369. a system has been affected when unexpected run-time behaviours emerge. Moreover,&#xD;
  370. we propose a methodology that allows us to select which rules can be applied at the&#xD;
  371. architectural level to reconfigure a system so to regain its architectural style when it&#xD;
  372. becomes compromised by unexpected run-time reconfigurations.</description>
  373.      <pubDate>Mon, 16 Jun 2014 15:40:37 GMT</pubDate>
  374.      <guid isPermaLink="false">http://hdl.handle.net/2381/28924</guid>
  375.      <dc:date>2014-06-16T15:40:37Z</dc:date>
  376.    </item>
  377.    <item>
  378.      <title>A structured approach to VO reconfigurations through Policies</title>
  379.      <link>http://hdl.handle.net/2381/28881</link>
  380.      <description>Title: A structured approach to VO reconfigurations through Policies
  381. Authors: Reiff-Marganiec, Stephan
  382. Abstract: One of the strengths of Virtual Organisations is their ability to dynamically and rapidly adapt in response&#xD;
  383. to changing environmental conditions. Dynamic adaptability has been studied in other system&#xD;
  384. areas as well and system management through policies has crystallized itself as a very prominent solution&#xD;
  385. in system and network administration. However, these areas are often concerned with very&#xD;
  386. low-level technical aspects. Previous work on the APPEL policy language has been aimed at dynamically&#xD;
  387. adapting system behaviour to satisfy end-user demands and – as part of STPOWLA – APPEL&#xD;
  388. was used to adapt workflow instances at runtime. In this paper we explore how the ideas of APPEL&#xD;
  389. and STPOWLA can be extended from workflows to the wider scope of Virtual Organisations. We will&#xD;
  390. use a Travel Booking VO as an example.</description>
  391.      <pubDate>Fri, 30 May 2014 10:52:19 GMT</pubDate>
  392.      <guid isPermaLink="false">http://hdl.handle.net/2381/28881</guid>
  393.      <dc:date>2014-05-30T10:52:19Z</dc:date>
  394.    </item>
  395.    <item>
  396.      <title>Maintaining transactional integrity in long running workflow service : a policy-driven framework</title>
  397.      <link>http://hdl.handle.net/2381/28880</link>
  398.      <description>Title: Maintaining transactional integrity in long running workflow service : a policy-driven framework
  399. Authors: Reiff-Marganiec, Stephan; Ali, Manar S.
  400. Abstract: This chapter presents a framework to provide autonomous handling of long running transactions based&#xD;
  401. on dependencies which are derived from the workflow. Business Processes naturally involve long running&#xD;
  402. activities and require transactional behaviour across them. This framework presents a solution for forward&#xD;
  403. recovery from errors by automatic application of compensation to executing instances of workflows. The&#xD;
  404. mechanism is based on propagation of failures through a recursive hierarchical structure of transaction&#xD;
  405. components (nodes and execution paths). The authors discuss a transaction management system that is&#xD;
  406. implemented as a reactive system controller, where system components change their states based on rules&#xD;
  407. in response to triggering of events, such as activation, failure, force-fail, completion, or compensation&#xD;
  408. events. One notable feature of the model is the distinction of vital and non-vital components, allowing&#xD;
  409. the process designer to express the cruciality of activities in the workflow with respect to the business&#xD;
  410. logic. Another novel feature is that in addition to dependencies arising from the structure of the workflow,&#xD;
  411. the approach also permits the workflow designer to specify additional dependencies which will also be&#xD;
  412. enforced. Thus, the authors introduce new techniques and architectures supporting enterprise integration&#xD;
  413. solutions that cater to the dynamics of business needs. The approach is implemented through workflow&#xD;
  414. actions executed by services and allows management of faults through a policy-driven framework.</description>
  415.      <pubDate>Thu, 29 May 2014 15:48:08 GMT</pubDate>
  416.      <guid isPermaLink="false">http://hdl.handle.net/2381/28880</guid>
  417.      <dc:date>2014-05-29T15:48:08Z</dc:date>
  418.    </item>
  419.    <item>
  420.      <title>Encoding range minima and range top-2 queries</title>
  421.      <link>http://hdl.handle.net/2381/28856</link>
  422.      <description>Title: Encoding range minima and range top-2 queries
  423. Authors: Davoodi, Pooya; Navarro, Gonzalo; Raman, Rajeev; Rao, S. Srinivasa
  424. Abstract: We consider the problem of encoding range minimum queries (RMQs): given an array A[1..n] of distinct totally ordered values, to pre-process A and create a data structure that can answer the query RMQ(i,j), which returns the index containing the smallest element in A[i..j], without access to the array A at query time. We give a data structure whose space usage is 2n+o(n) bits, which is asymptotically optimal for worst-case data, and answers RMQs in O(1) worst-case time. This matches the previous result of Fischer and Heun, but is obtained in a more natural way. Furthermore, our result can encode the RMQs of a random array A in 1.919n+o(n) bits in expectation, which is not known to hold for Fischer and Heun’s result. We then generalize our result to the encoding range top-2 query (RT2Q) problem, which is like the encoding RMQ problem except that the query RT2Q(i,j) returns the indices of both the smallest and second smallest elements of A[i..j]. We introduce a data structure using 3.272n+o(n) bits that answers RT2Qs in constant time, and also give lower bounds on the effective entropy of the RT2Q problem.</description>
  425.      <pubDate>Wed, 28 May 2014 10:49:40 GMT</pubDate>
  426.      <guid isPermaLink="false">http://hdl.handle.net/2381/28856</guid>
  427.      <dc:date>2014-05-28T10:49:40Z</dc:date>
  428.    </item>
  429.    <item>
  430.      <title>Optimal indexes for sparse bit vectors</title>
  431.      <link>http://hdl.handle.net/2381/28854</link>
  432.      <description>Title: Optimal indexes for sparse bit vectors
  433. Authors: Golynski, Alexander; Orlandi, Alessio; Raman, Rajeev; Rao, S. Srinivasa
  434. Abstract: We consider the problem of supporting rank and select operations on a bit vector of length m with n 1-bits. The problem is considered in the succinct index model, where the bit vector is stored in "read-only" memory and an additional data structure, called the index is created during pre-processing to help answer the above queries. We give asymptotically optimal density-sensitive trade-offs, involving both m and n, that relate the size of the index to the number of accesses to the bit vector (and processing time) needed to answer the above queries. The results are particularly interesting for the case where n=o(m).</description>
  435.      <pubDate>Tue, 27 May 2014 10:19:32 GMT</pubDate>
  436.      <guid isPermaLink="false">http://hdl.handle.net/2381/28854</guid>
  437.      <dc:date>2014-05-27T10:19:32Z</dc:date>
  438.    </item>
  439.    <item>
  440.      <title>An empirical evaluation of extendible arrays</title>
  441.      <link>http://hdl.handle.net/2381/28738</link>
  442.      <description>Title: An empirical evaluation of extendible arrays
  443. Authors: Joannou, Stelios; Raman, Rajeev
  444. Editors: Pardalos, PM; Rebennack, S
  445. Abstract: We study the performance of several alternatives for implementing extendible arrays, which allow random access to elements stored&#xD;
  446. in them, whilst allowing the arrays to be grown and shrunk. The study&#xD;
  447. not only looks at the basic operations of grow/shrink and accessing data, but also the effects of memory fragmentation on performance.
  448. Description: The final publication is&#xD;
  449. available at link.springer.com</description>
  450.      <pubDate>Wed, 09 Apr 2014 10:40:44 GMT</pubDate>
  451.      <guid isPermaLink="false">http://hdl.handle.net/2381/28738</guid>
  452.      <dc:date>2014-04-09T10:40:44Z</dc:date>
  453.    </item>
  454.    <item>
  455.      <title>Cell-cycle regulation of NOTCH signaling during C. elegans vulval development</title>
  456.      <link>http://hdl.handle.net/2381/28643</link>
  457.      <description>Title: Cell-cycle regulation of NOTCH signaling during C. elegans vulval development
  458. Authors: Nusser-Stein, Stefanie; Beyer, Antje; Rimann, Ivo; Adamczyk, Magdalene; Piterman, Nir; Hajnal, Alex; Fisher, Jasmin
  459. Abstract: C. elegans vulval development is one of the best‐characterized systems to study cell fate specification during organogenesis. The detailed knowledge of the signaling pathways determining vulval precursor cell (VPC) fates permitted us to create a computational model based on the antagonistic interactions between the epidermal growth factor receptor (EGFR)/RAS/MAPK and the NOTCH pathways that specify the primary and secondary fates, respectively. A key notion of our model is called bounded asynchrony, which predicts that a limited degree of asynchrony in the progression of the VPCs is necessary to break their equivalence. While searching for a molecular mechanism underlying bounded asynchrony, we discovered that the termination of NOTCH signaling is tightly linked to cell‐cycle progression. When single VPCs were arrested in the G1 phase, intracellular NOTCH failed to be degraded, resulting in a mixed primary/secondary cell fate. Moreover, the G1 cyclins CYD‐1 and CYE‐1 stabilize NOTCH, while the G2 cyclin CYB‐3 promotes NOTCH degradation. Our findings reveal a synchronization mechanism that coordinates NOTCH signaling with cell‐cycle progression and thus permits the formation of a stable cell fate pattern.</description>
  460.      <pubDate>Fri, 07 Mar 2014 13:50:41 GMT</pubDate>
  461.      <guid isPermaLink="false">http://hdl.handle.net/2381/28643</guid>
  462.      <dc:date>2014-03-07T13:50:41Z</dc:date>
  463.    </item>
  464.    <item>
  465.      <title>Faster temporal reasoning for infinite-state programs</title>
  466.      <link>http://hdl.handle.net/2381/28591</link>
  467.      <description>Title: Faster temporal reasoning for infinite-state programs
  468. Authors: Piterman, Nir; Cook, Byron; Khlaaf, Heidy
  469. Abstract: In many model checking tools that support temporal logic, performance is hindered by redundant reasoning performed in the presence of nested temporal operators. In particular, tools supporting the state-based temporal logic CTL often symbolically partition the system's state space using the sub-formulae of the input temporal formula. This can lead to repeated work when tools are applied to infinite-state programs, as often the characterizations of the state-spaces for nearby program locations are similar and interrelated. In this paper, we describe a new symbolic procedure for CTL verification of infinite-state programs. Our procedure uses the structure of the program's control-flow graph in combination with the nesting of temporal operators in order to optimize reasoning performed during symbolic model checking. An experimental evaluation against competing tools demonstrates that our approach not only gains orders-of-magnitude speed improvements, but also allows temporal reasoning to scale to larger programs.</description>
  470.      <pubDate>Thu, 20 Feb 2014 14:08:53 GMT</pubDate>
  471.      <guid isPermaLink="false">http://hdl.handle.net/2381/28591</guid>
  472.      <dc:date>2014-02-20T14:08:53Z</dc:date>
  473.    </item>
  474.    <item>
  475.      <title>Strongly complete logics for coalgebras</title>
  476.      <link>http://hdl.handle.net/2381/28587</link>
  477.      <description>Title: Strongly complete logics for coalgebras
  478. Authors: Kurz, Alexander; Rosicky, Jiri
  479. Abstract: Coalgebras for a functor model different types of transition systems in a uniform way. This paper focuses on a uniform account of finitary logics for set-based coalgebras. In particular, a general construction of a logic from an arbitrary set-functor is given and proven to be strongly complete under additional assumptions. We proceed in three parts. Part I argues that sifted colimit preserving functors are those functors that preserve universal algebraic structure. Our main theorem here states that a functor preserves sifted colimits if and only if it has a finitary presentation by operations and equations. Moreover, the presentation of the category of algebras for the functor is obtained compositionally from the presentations of the underlying category and of the functor. Part II investigates algebras for a functor over ind-completions and extends the theorem of Jónsson and Tarski on canonical extensions of Boolean algebras with operators to this setting. Part III shows, based on Part I, how to associate a finitary logic to any finite-sets preserving functor T. Based on Part II we prove the logic to be strongly complete under a reasonable condition on T.</description>
  480.      <pubDate>Fri, 14 Feb 2014 10:52:40 GMT</pubDate>
  481.      <guid isPermaLink="false">http://hdl.handle.net/2381/28587</guid>
  482.      <dc:date>2014-02-14T10:52:40Z</dc:date>
  483.    </item>
  484.    <item>
  485.      <title>Completeness for the coalgebraic cover modality</title>
  486.      <link>http://hdl.handle.net/2381/28586</link>
  487.      <description>Title: Completeness for the coalgebraic cover modality
  488. Authors: Kupke, Clemens; Kurz, Alexander; Venema, Yde
  489. Abstract: We study the finitary version of the coalgebraic logic introduced by L. Moss. The syntax of this logic, which is introduced uniformly with respect to a coalgebraic type functor, required to preserve weak pullbacks, extends that of classical propositional logic with a so-called coalgebraic cover modality depending on the type functor. Its semantics is defined in terms of a categorically defined relation lifting operation. As the main contributions of our paper we introduce a derivation system, and prove that it provides a sound and complete axiomatization for the collection of coalgebraically valid inequalities. Our soundness and completeness proof is algebraic, and we employ Pattinson's stratification method, showing that our derivation system can be stratified in countably many layers, corresponding to the modal depth of the formulas involved. In the proof of our main result we identify some new concepts and obtain some auxiliary results of independent interest. We survey properties of the notion of relation lifting, induced by an arbitrary but fixed set functor. We introduce a category of Boolean algebra presentations, and establish an adjunction between it and the category of Boolean algebras. Given the fact that our derivation system involves only formulas of depth one, it can be encoded as an endo-functor on Boolean algebras. We show that this functor is finitary and preserves embeddings, and we prove that the Lindenbaum-Tarski algebra of our logic can be identified with the initial algebra for this functor.</description>
  490.      <pubDate>Fri, 14 Feb 2014 10:40:06 GMT</pubDate>
  491.      <guid isPermaLink="false">http://hdl.handle.net/2381/28586</guid>
  492.      <dc:date>2014-02-14T10:40:06Z</dc:date>
  493.    </item>
  494.    <item>
  495.      <title>Activity awareness in context-aware systems using software sensors</title>
  496.      <link>http://hdl.handle.net/2381/28379</link>
  497.      <description>Title: Activity awareness in context-aware systems using software sensors
  498. Authors: Pathan, Kamran Taj
  499. Abstract: Context-aware systems, being a component of ubiquitous or pervasive computing environments, sense the users’ physical and virtual surroundings to adapt their behaviour accordingly. To achieve activity context, tracking devices are common practice. Service Oriented Architecture is based on collections of services that communicate with each other. The communication between users and services involves data that can be used to sense the activity context of the user. SOAP is a simple protocol to let applications exchange their information over the web. Semantic Web provides standards to express the relationship between data to allow machines to process data more intelligently.&#xD;
  500. This work proposes an approach for supporting context-aware activity sensing using software sensors. The main challenges in the work are specifying context information in a machine processable form, developing a mechanism that can understand the data extracted from exchanges of services, utilising the data extracted from these services, and the architecture that supports sensing with software sensors. To address these issues, we have provided a bridge to combine the traditional web services with the semantic web technologies, a knowledge structure that supports the activity context information in the context-aware environments and mapping methods that extract the data out of exchanges occurring between user and services and map it into a context model. The Direct Match, the Synonym Match and the Hierarchical Match methods are developed to put the extracted data from services to the knowledge structure.&#xD;
  501. This research will open doors to further develop automated and dynamic context-aware systems that can exploit the software sensors to sense the activity of the user in the context-aware environments.</description>
  502.      <pubDate>Fri, 08 Nov 2013 15:49:07 GMT</pubDate>
  503.      <guid isPermaLink="false">http://hdl.handle.net/2381/28379</guid>
  504.      <dc:date>2013-11-08T15:49:07Z</dc:date>
  505.    </item>
  506.    <item>
  507.      <title>Pure Type Systems with Corecursion on Streams: From Finite to Infinitary Normalisation</title>
  508.      <link>http://hdl.handle.net/2381/28332</link>
  509.      <description>Title: Pure Type Systems with Corecursion on Streams: From Finite to Infinitary Normalisation
  510. Authors: Severi, Paula; de Vries, Fer-Jan
  511. Abstract: In this paper, we use types for ensuring that programs involving streams are well-behaved. We extend pure type systems with a type constructor for streams, a modal operator next and a fixed point operator for expressing corecursion. This extension is called Pure Type Systems with Corecursion (CoPTS). The typed lambda calculus for reactive programs defined by Krishnaswami and Benton can be obtained as a CoPTS. CoPTSs allow us to study a wide range of typed lambda calculi extended with corecursion using only one framework. In particular, we study this extension for the calculus of constructions which is the underlying formal language of Coq. We use the machinery of infinitary rewriting and formalise the idea of well-behaved programs using the concept of infinitary normalisation. The set of finite and infinite terms is defined as a metric completion. We establish a precise connection between the modal operator (• A) and the metric at a syntactic level by relating a variable of type (• A) with the depth of all its occurrences in a term. This syntactic connection between the modal operator and the depth is the key to the proofs of infinitary weak and strong normalisation.</description>
  512.      <pubDate>Mon, 28 Oct 2013 15:24:43 GMT</pubDate>
  513.      <guid isPermaLink="false">http://hdl.handle.net/2381/28332</guid>
  514.      <dc:date>2013-10-28T15:24:43Z</dc:date>
  515.    </item>
  516.    <item>
  517.      <title>Succinct Representations of Binary Trees for Range Minimum Queries</title>
  518.      <link>http://hdl.handle.net/2381/28331</link>
  519.      <description>Title: Succinct Representations of Binary Trees for Range Minimum Queries
  520. Authors: Davoodi, Pooya; Raman, Rajeev; Satti, Srinivasa Rao
  521. Abstract: We provide two succinct representations of binary trees that can be used to represent the Cartesian tree of an array A of size n. Both the representations take the optimal 2n + o(n) bits of space in the worst case and support range minimum queries (RMQs) in O(1) time. The first one is a modification of the representation of Farzan and Munro (SWAT 2008); a consequence of this result is that we can represent the Cartesian tree of a random permutation in 1.92n + o(n) bits in expectation. The second one uses a well-known transformation between binary trees and ordinal trees, and ordinal tree operations to effect operations on the Cartesian tree. This provides an alternative, and more natural, way to view the 2D-Min-Heap of Fischer and Heun (SICOMP 2011). Furthermore, we show that the pre-processing needed to output the data structure can be performed in linear time using o(n) bits of extra working space, improving the result of Fischer and Heun who use n + o(n) bits working space.</description>
  522.      <pubDate>Mon, 28 Oct 2013 13:05:12 GMT</pubDate>
  523.      <guid isPermaLink="false">http://hdl.handle.net/2381/28331</guid>
  524.      <dc:date>2013-10-28T13:05:12Z</dc:date>
  525.    </item>
  526.    <item>
  527.      <title>Succinct representations of permutations and functions</title>
  528.      <link>http://hdl.handle.net/2381/28330</link>
  529.      <description>Title: Succinct representations of permutations and functions
  530. Authors: Munro, J. Ian; Raman, Rajeev; Raman, Venkatesh; Rao, Satti Srinivasa
  531. Abstract: We investigate the problem of succinctly representing an arbitrary permutation, π, on {0, ..., n−1} so that π^k(i) can be computed quickly for any i and any (positive or negative) integer power k. A representation taking (1 + ϵ)n lg n + O(1) bits suffices to compute arbitrary powers in constant time, for any positive constant ϵ ≤ 1. A representation taking the optimal ⌈lg n!⌉ + o(n) bits can be used to compute arbitrary powers in O(lg n/ lg lg n) time.&#xD;
  532. We then consider the more general problem of succinctly representing an arbitrary function, f : [n] → [n] so that f^k(i) can be computed quickly for any i and any integer power k. We give a representation that takes (1 + ϵ)n lg n + O(1) bits, for any positive constant ϵ ≤ 1, and computes arbitrary positive powers in constant time. It can also be used to compute f^k(i), for any negative integer k, in optimal O(1 + |f^k(i)|) time. We place emphasis on the redundancy, or the space beyond the information-theoretic lower bound that the data structure uses in order to support operations efficiently. A number of lower bounds have recently been shown on the redundancy of data structures. These lower bounds confirm the space–time optimality of some of our solutions.&#xD;
  533. Furthermore, the redundancy of one of our structures "surpasses" a recent lower bound by Golynski [Golynski, SODA 2009], thus demonstrating the limitations of this lower bound.
  534. Description: NOTICE: this is the author’s version of a work that was accepted for publication in Theoretical Computer Science. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Theoretical Computer Science, 2012, 438, pp. 47-88, DOI: 10.1016/j.tcs.2012.03.005.</description>
  535.      <pubDate>Mon, 28 Oct 2013 12:29:09 GMT</pubDate>
  536.      <guid isPermaLink="false">http://hdl.handle.net/2381/28330</guid>
  537.      <dc:date>2013-10-28T12:29:09Z</dc:date>
  538.    </item>
  539.    <item>
  540.      <title>Dynamic Compressed Strings with Random Access</title>
  541.      <link>http://hdl.handle.net/2381/28247</link>
  542.      <description>Title: Dynamic Compressed Strings with Random Access
  543. Authors: Grossi, Roberto; Raman, Rajeev; Rao, Satti Srinivasa; Venturini, Rossano
  544. Abstract: We consider the problem of storing a string S in dynamic compressed form, while permitting operations directly on the compressed representation of S: access a substring of S; replace, insert or delete a symbol in S; count how many occurrences of a given symbol appear in any given prefix of S (called rank operation) and locate the position of the ith occurrence of a symbol inside S (called select operation). We discuss the time complexity of several combinations of these operations along with the entropy space bounds of the corresponding compressed indexes. In this way, we extend or improve the bounds of previous work by Ferragina and Venturini [TCS, 2007], Jansson et al. [ICALP, 2012], and Nekrich and Navarro [SODA, 2013].</description>
  545.      <pubDate>Fri, 04 Oct 2013 12:14:04 GMT</pubDate>
  546.      <guid isPermaLink="false">http://hdl.handle.net/2381/28247</guid>
  547.      <dc:date>2013-10-04T12:14:04Z</dc:date>
  548.    </item>
  549.    <item>
  550.      <title>Maintaining Transactional Integrity in Long Running Workflow Services: A Policy-Driven Framework</title>
  551.      <link>http://hdl.handle.net/2381/28168</link>
  552.      <description>Title: Maintaining Transactional Integrity in Long Running Workflow Services: A Policy-Driven Framework
  553. Authors: Ali, Manar Sayed Salamah
  554. Abstract: Business to Business integration is enhanced by Workflow structures, which allow for aggregating web services as interconnected business tasks to achieve a business outcome. Business processes naturally involve long running activities, and require transactional behavior across them addressed through general management, failure handling and compensation mechanisms. Loose coupling and the asynchronous nature of Web Services make an LRT subject to a wider range of communication failures. Two basic requirements of transaction management models are reliability and consistency despite failures. This research presents a framework to provide autonomous handling of long running transactions, based on dependencies which are derived from the workflow. The framework presents a solution for forward recovery from errors and compensations automatically applied to executing instances of workflows. The failure handling mechanism is based on the propagation of failures through a recursive hierarchical structure of transaction components (nodes and execution paths). The management system of transactions (COMPMOD) is implemented as a reactive system controller, where system components change their states based on rules in response to triggering of execution events. One practical feature of the model is the distinction of vital and non-vital components, allowing the process designer to express the cruciality of activities in the workflow with respect to the business logic. A novel feature of this research is that the approach permits the workflow designer to specify additional compensation dependencies which will be enforced. A notable feature is the extensibility of the model that is eased by the simple and declarative based formalism. In our approach, the main concern is the provision of flexible and reliable underlying control flow mechanisms supported by management policies. The main idea for incorporating policies is to manage the static structure of the workflow, as well as handling arbitrary failure and compensation events. Thus, we introduce new techniques and architectures to support enterprise integration solutions that support the dynamics of business needs.</description>
  555.      <pubDate>Thu, 12 Sep 2013 10:44:21 GMT</pubDate>
  556.      <guid isPermaLink="false">http://hdl.handle.net/2381/28168</guid>
  557.      <dc:date>2013-09-12T10:44:21Z</dc:date>
  558.    </item>
  559.    <item>
  560.      <title>Computing Minimum Spanning Trees with Uncertainty</title>
  561.      <link>http://hdl.handle.net/2381/28154</link>
  562.      <description>Title: Computing Minimum Spanning Trees with Uncertainty
  563. Authors: Erlebach, Thomas; Hoffmann, Michael; Krizanc, Danny; Mihal’ák, Matúš; Raman, Rajeev
  564. Editors: Albers, S.; Weil, P.
  565. Abstract: We consider the minimum spanning tree problem in a setting where information about the edge weights of the given graph is uncertain. Initially, for each edge e of the graph only a set Aₑ, called an uncertainty area, that contains the actual edge weight wₑ is known. The algorithm can ‘update’ e to obtain the edge weight wₑ ∈ Aₑ. The task is to output the edge set of a minimum spanning tree after a minimum number of updates.&#xD;
  566. An algorithm is k-update competitive if it makes at most k times as many updates as the optimum. We present a 2-update competitive algorithm if all areas Aₑ are open or trivial, which is the best possible among deterministic algorithms. The condition on the areas Aₑ is to exclude degenerate inputs for which no constant update competitive algorithm can exist.&#xD;
  567. Next, we consider a setting where the vertices of the graph correspond to points in Euclidean space and the weight of an edge is equal to the distance of its endpoints. The location of each point is initially given as an uncertainty area, and an update reveals the exact location of the point. We give a general relation between the edge uncertainty and the vertex uncertainty versions of a problem and use it to derive a 4-update competitive algorithm for the minimum spanning tree problem in the vertex uncertainty model. Again, we show that this is best possible among deterministic algorithms.</description>
  568.      <pubDate>Tue, 10 Sep 2013 13:05:03 GMT</pubDate>
  569.      <guid isPermaLink="false">http://hdl.handle.net/2381/28154</guid>
  570.      <dc:date>2013-09-10T13:05:03Z</dc:date>
  571.    </item>
  572.    <item>
  573.      <title>Inferring Extended Finite State Machine Models from Software Executions</title>
  574.      <link>http://hdl.handle.net/2381/28128</link>
  575.      <description>Title: Inferring Extended Finite State Machine Models from Software Executions
  576. Authors: Walkinshaw, Neil; Taylor, Ramsay; Derrick, John
  577. Abstract: The ability to reverse-engineer models of software behaviour is valuable for a wide range of software maintenance, validation and verification tasks. Current reverse-engineering techniques focus either on control-specific behaviour (e.g. in the form of Finite State Machines), or data-specific behaviour (e.g. as pre/post-conditions or invariants). However, typical software behaviour is usually a product of the two; models must combine both aspects to fully represent the software’s operation. Extended Finite State Machines (EFSMs) provide such a model. Although attempts have been made to infer EFSMs, these have been problematic. The models inferred by these techniques can be non-deterministic, and the inference algorithms can be inflexible and applicable only to traces with specific characteristics. This paper presents a novel EFSM inference technique that addresses the problems of inflexibility and non-determinism. It also adapts an experimental technique from the field of Machine Learning to evaluate EFSM inference techniques, and applies it to two open-source software projects.</description>
  578.      <pubDate>Wed, 04 Sep 2013 09:00:20 GMT</pubDate>
  579.      <guid isPermaLink="false">http://hdl.handle.net/2381/28128</guid>
  580.      <dc:date>2013-09-04T09:00:20Z</dc:date>
  581.    </item>
  582.    <item>
  583.      <title>Attitudes towards User Experience (UX) Measurement</title>
  584.      <link>http://hdl.handle.net/2381/28125</link>
  585.      <description>Title: Attitudes towards User Experience (UX) Measurement
  586. Authors: Law, Lai-Chong; van Schaik, Paul
  587. Editors: Wiedenbeck, S
  588. Abstract: User experience (UX), as an immature research area, is still haunted by the challenges of defining the scope of UX in general and operationalising experiential qualities in particular. To explore the basic question whether UX constructs are measurable, we conducted semi-structured interviews with eleven UX professionals where a set of questions in relation to UX measurement were explored. The interviewees expressed scepticism as well as ambivalence towards UX measures and shared anecdotes related to such measures in different contexts. Besides, the data suggested that design-oriented UX professionals tended to be sceptical about UX measurement. To examine whether such an attitude prevailed in the HCI community, we conducted a survey with essentially the same set of questions used in the interviews. Altogether 367 responses were received; 170 of them were valid and analysed. The survey provided empirical evidence on this issue as a baseline for progress in UX measurement. Overall, results indicated that attitude was favourable and there were nuanced views on details of UX measurement, implying good prospects for its acceptance, given further progress in research and education in UX measurement where UX modelling grounded in theories can play a crucial role. Mutual recognition of the value of objective measures and subjective accounts of user experience can enhance the maturity of this area.</description>
  589.      <pubDate>Tue, 03 Sep 2013 14:47:00 GMT</pubDate>
  590.      <guid isPermaLink="false">http://hdl.handle.net/2381/28125</guid>
  591.      <dc:date>2013-09-03T14:47:00Z</dc:date>
  592.    </item>
  593.    <item>
  594.      <title>Algorithms for Wireless Communication and Sensor Networks</title>
  595.      <link>http://hdl.handle.net/2381/28100</link>
  596.      <description>Title: Algorithms for Wireless Communication and Sensor Networks
  597. Authors: Grant, Thomas
  598. Abstract: In this thesis we will address four problems concerned with algorithmic issues that arise from communication and sensor networks. &#xD;
  599. The problem of scheduling wireless transmissions under SINR constraints has received much attention for unicast (one to one) transmissions. We consider the scheduling problem for multicast requests of one sender to many receivers, and present a logarithmic approximation algorithm and an online lower bound for arbitrary power assignments.&#xD;
  600. We study the problem of maximising the lifetime of a sensor network for fault-tolerant target coverage in a setting with composite events, where a composite event is the simultaneous occurrence of one or more atomic events. We are the first to study this variation of the problem from a theoretical perspective, where each event must be covered twice and there are several event types, and we present a (6 + ɛ)-approximation algorithm for the problem.&#xD;
  601. The online strongly connected dominating set problem concerns the construction of a dominating set that is strongly connected at all times, and for every vertex not in the dominating set, there exists an edge to some vertex in the dominating set, and an edge from a vertex in the dominating set. We present a lower bound for deterministic online algorithms and present an algorithm that achieves competitive ratio matching the lower bound.&#xD;
  602. The monotone barrier resilience problem is to determine how many sensors must be removed from a sensor network, such that a monotone path can exist between two points that does not intersect any sensor. We present a polynomial time algorithm that can determine the monotone barrier resilience for sensor networks of convex pseudo-disks of equal width.</description>
  603.      <pubDate>Thu, 29 Aug 2013 10:44:20 GMT</pubDate>
  604.      <guid isPermaLink="false">http://hdl.handle.net/2381/28100</guid>
  605.      <dc:date>2013-08-29T10:44:20Z</dc:date>
  606.    </item>
  607.    <item>
  608.      <title>Mining Sequential Patterns from Probabilistic Databases</title>
  609.      <link>http://hdl.handle.net/2381/28080</link>
  610.      <description>Title: Mining Sequential Patterns from Probabilistic Databases
  611. Authors: Muzammal, Muhammad; Raman, Rajeev
  612. Editors: Huang, J.Z.; Cao, L.; Srivastava, J.
  613. Abstract: We consider sequential pattern mining in situations where there is uncertainty about which source an event is associated with. We model this in the probabilistic database framework and consider the problem of enumerating all sequences whose expected support is sufficiently large. Unlike frequent itemset mining in probabilistic databases [C. Aggarwal et al. KDD’09; Chui et al., PAKDD’07; Chui and Kao, PAKDD’08], we use dynamic programming (DP) to compute the probability that a source supports a sequence, and show that this suffices to compute the expected support of a sequential pattern. Next, we embed this DP algorithm into candidate generate-and-test approaches, and explore the pattern lattice both in a breadth-first (similar to GSP) and a depth-first (similar to SPAM) manner. We propose optimizations for efficiently computing the frequent 1-sequences, for re-using previously-computed results through incremental support computation, and for eliminating candidate sequences without computing their support via probabilistic pruning. Preliminary experiments show that our optimizations are effective in improving the CPU cost.
  614. Description: Full text of this item is not currently available on the LRA.  The final published version may be available through the links above.</description>
  615.      <pubDate>Thu, 25 Jul 2013 10:38:00 GMT</pubDate>
  616.      <guid isPermaLink="false">http://hdl.handle.net/2381/28080</guid>
  617.      <dc:date>2013-07-25T10:38:00Z</dc:date>
  618.    </item>
  619.    <item>
  620.      <title>Range Extremum Queries</title>
  621.      <link>http://hdl.handle.net/2381/28079</link>
  622.      <description>Title: Range Extremum Queries
  623. Authors: Raman, Rajeev
  624. Abstract: There has been a renewal of interest in data structures for range extremum queries. In such problems, the input comprises N points, which are either elements of a d-dimensional matrix, that is, their coordinates are specified by the 1D submatrices they lie in (row and column indices for d = 2), or they are points in ℝ^d. Furthermore, associated with each point is a priority that is independent of the point’s coordinate. The objective is to pre-process the given points and priorities to answer the range maximum query (RMQ): given a d-dimensional rectangle, report the points with maximum priority. The objective is to minimize the space used by the data structure and the time taken to answer the above query. This talk surveys a number of recent developments in this area, focussing on the cases d = 1 and d = 2.</description>
  625.      <pubDate>Thu, 25 Jul 2013 09:04:54 GMT</pubDate>
  626.      <guid isPermaLink="false">http://hdl.handle.net/2381/28079</guid>
  627.      <dc:date>2013-07-25T09:04:54Z</dc:date>
  628.    </item>
  629.    <item>
  630.      <title>Random Access to Grammar-Compressed Strings</title>
  631.      <link>http://hdl.handle.net/2381/28052</link>
  632.      <description>Title: Random Access to Grammar-Compressed Strings
  633. Authors: Bille, Philip; Landau, Gad M.; Raman, Rajeev; Sadakane, Kunihiko; Satti, Srinivasa Rao; Weimann, Oren
  634. Abstract: Let S be a string of length N compressed into a context-free grammar S of size n. We present two representations of S achieving O(log N) random access time, and either O(n · α_k(n)) construction time and space on the pointer machine model, or O(n) construction time and space on the RAM. Here, α_k(n) is the inverse of the k-th row of Ackermann's function. Our representations also efficiently support decompression of any substring in S: we can decompress any substring of length m in the same complexity as a single random access query and additional O(m) time. Combining these results with fast algorithms for uncompressed approximate string matching leads to several efficient algorithms for approximate string matching on grammar-compressed strings without decompression. For instance, we can find all approximate occurrences of a pattern P with at most k errors in time O(n(min{|P|k, k^4 + |P|} + log N) + occ), where occ is the number of occurrences of P in S. Finally, we are able to generalize our results to navigation and other operations on grammar-compressed trees. All of the above bounds significantly improve the currently best known results. To achieve these bounds, we introduce several new techniques and data structures of independent interest, including a predecessor data structure, two "biased" weighted ancestor data structures, and a compact representation of heavy-paths in grammars.</description>
  635.      <pubDate>Thu, 11 Jul 2013 12:05:53 GMT</pubDate>
  636.      <guid isPermaLink="false">http://hdl.handle.net/2381/28052</guid>
  637.      <dc:date>2013-07-11T12:05:53Z</dc:date>
  638.    </item>
  639.    <item>
  640.      <title>Using Evidential Reasoning to Make Qualified Predictions of Software Quality</title>
  641.      <link>http://hdl.handle.net/2381/28050</link>
  642.      <description>Title: Using Evidential Reasoning to Make Qualified Predictions of Software Quality
  643. Authors: Walkinshaw, Neil
  644. Editors: Wagner, S
  645. Abstract: Software quality is commonly characterised in a top-down manner. High-level notions such as quality are decomposed into hierarchies of sub-factors, ranging from abstract notions such as maintainability and reliability to lower-level notions such as test coverage or team-size. Assessments of abstract factors are derived from relevant sources of information about their respective lower-level sub-factors, by surveying sources such as metrics data and inspection reports. This can be difficult because (1) evidence might not be available, (2) interpretations of the data with respect to certain quality factors may be subject to doubt and intuition, and (3) there is no straightforward means of blending hierarchies of heterogeneous data into a single coherent and quantitative prediction of quality. This paper shows how Evidential Reasoning (ER) - a mathematical technique for reasoning about uncertainty and evidence - can address this problem. It enables the quality assessment to proceed in a bottom-up manner, by the provision of low-level assessments that make any uncertainty explicit, and automatically propagating these up to higher-level 'belief-functions' that accurately summarise the developer's opinion and make explicit any doubt or ignorance.</description>
  646.      <pubDate>Thu, 04 Jul 2013 15:35:16 GMT</pubDate>
  647.      <guid isPermaLink="false">http://hdl.handle.net/2381/28050</guid>
  648.      <dc:date>2013-07-04T15:35:16Z</dc:date>
  649.    </item>
  650.    <item>
  651.      <title>Ant Colony Optimization in Stationary and Dynamic Environments</title>
  652.      <link>http://hdl.handle.net/2381/27971</link>
  653.      <description>Title: Ant Colony Optimization in Stationary and Dynamic Environments
  654. Authors: Mavrovouniotis, Michalis
  655. Abstract: The ant colony optimization (ACO) metaheuristic is inspired by the foraging behaviour of real ant colonies. As with other metaheuristics, ACO suffers from stagnation behaviour, where all ants construct the same solution from early stages.&#xD;
  656. As a result, the solution quality may be degraded because the population may get trapped in local optima. In this thesis, we propose a novel approach, called the direct communication (DC) scheme, that helps ACO algorithms to escape from a local optimum if they get trapped. The experimental results on two routing problems showed that the DC scheme is effective.&#xD;
  657. Usually, researchers focus on problems with a static environment.&#xD;
  658. In the last decade, there has been growing interest in applying nature-inspired metaheuristics to optimization problems with dynamic environments. Usually, dynamic optimization problems (DOPs) are addressed using evolutionary algorithms. In this thesis, we apply several novel ACO algorithms in two routing DOPs. The proposed ACO algorithms are integrated with immigrants schemes in which immigrant ants are generated, either randomly or with the use of knowledge from previous environment(s), and replace other ants in the current population. The experimental results showed that each proposed algorithm performs better in different dynamic cases, and that they have better performance than other peer ACO algorithms in general.&#xD;
  659. The existing benchmark generators for DOPs are developed for binary-encoded combinatorial problems. Since routing problems are usually permutation-encoded combinatorial problems, the dynamic environments used in the experiments are generated using a novel benchmark generator that converts a static problem instance to a dynamic one. The specific dynamic benchmark generator changes the fitness landscape of the problem, which causes the optimum to change in every environmental change. Furthermore in this thesis, another benchmark generator is proposed which moves the population to another location in the fitness landscape, instead of modifying it. In this way, the optimum is known and one can see how close to the optimum an algorithm performs during the environmental changes.</description>
  660.      <pubDate>Fri, 14 Jun 2013 09:45:27 GMT</pubDate>
  661.      <guid isPermaLink="false">http://hdl.handle.net/2381/27971</guid>
  662.      <dc:date>2013-06-14T09:45:27Z</dc:date>
  663.    </item>
  664.    <item>
  665.      <title>Completeness of Conversion between Reactive Programs for Ultrametric Models</title>
  666.      <link>http://hdl.handle.net/2381/27961</link>
  667.      <description>Title: Completeness of Conversion between Reactive Programs for Ultrametric Models
  668. Authors: Severi, Paula; de Vries, Fer-Jan
  669. Abstract: In 1970 Friedman proved completeness of beta eta conversion in the simply-typed lambda calculus for the set-theoretical model. Recently Krishnaswami and Benton have captured the essence of Hudak’s reactive programs in an extension of simply typed lambda calculus with causal streams and a temporal modality and provided this typed lambda calculus for reactive programs with a sound ultrametric semantics.&#xD;
  670. We show that beta eta conversion in the typed lambda calculus of reactive programs is complete for the ultrametric model.</description>
  671.      <pubDate>Tue, 11 Jun 2013 10:45:55 GMT</pubDate>
  672.      <guid isPermaLink="false">http://hdl.handle.net/2381/27961</guid>
  673.      <dc:date>2013-06-11T10:45:55Z</dc:date>
  674.    </item>
  675.  </channel>
  676. </rss>
  677.  
  678.  

If you would like to create a banner that links to this page (i.e. this validation result), do the following:

  1. Download the "valid RSS" banner.

  2. Upload the image to your own server. (This step is important. Please do not link directly to the image on this server.)

  3. Add this HTML to your page (change the image src attribute if necessary):
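  The exact markup the validator offers to copy is not preserved in this capture, so the snippet below is only an illustrative sketch: the image filename valid-rss.gif and the alt/title text are assumptions (use whatever name you gave the banner you uploaded in step 2), while the link target is the validation-result URL given at the end of this page.

      <!-- Sketch only: valid-rss.gif is a placeholder; point src at the banner you uploaded in step 2. -->
      <a href="http://www.rssboard.org/rss-validator/check.cgi?url=https%3A//lra.le.ac.uk/feed/rss_2.0/2381/316">
        <img src="valid-rss.gif" alt="[Valid RSS]" title="Validate my RSS feed" />
      </a>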

If you would like to create a text link instead, here is the URL you can use:

http://www.rssboard.org/rss-validator/check.cgi?url=https%3A//lra.le.ac.uk/feed/rss_2.0/2381/316
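For example, the same URL can serve as an ordinary HTML text link; the anchor text below is only a suggestion:

    <!-- The anchor text is arbitrary; the href is the validation-result URL shown above. -->
    <a href="http://www.rssboard.org/rss-validator/check.cgi?url=https%3A//lra.le.ac.uk/feed/rss_2.0/2381/316">Valid RSS</a>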