Computer Science and Software Engineering


Computer Science and Software Engineering University of Canterbury, Christchurch, New Zealand

Contacts

  • +64 3 364 2362
  • admin@cosc.canterbury.ac.nz
  • Computer Science and Software Engineering,
    University of Canterbury,
    Private Bag 4800,
    Christchurch 8140,
    New Zealand.

Abstracts

Jason Alexander: Improving Document Navigation using Space-Filling Thumbnails (Hons)

Space-Filling Thumbnails (SFT) is a new technique for document navigation that disposes of traditional scrollbars. SFT provides a thumbnail image of every page in a document on a single screen. Pages may then be selected for viewing at full zoom.

SFT offers advantages in visual search time for locating targets, and exploits spatial memory to a greater degree than scrollbars. Results from an evaluation already conducted show a promising decrease in target location time, especially when re-locating a previously visited page.

This seminar will present a discussion of this new technique, its advantages and limitations along with initial results, further planned evaluations, applications and future areas for consideration.

Taher Amer: Evaluation of Swiftpoint for target acquisition (MSc)

Graphical user interfaces are now the standard in computer interaction, and pointing devices are the predominant mechanism for issuing commands within desktop user interfaces. Improving pointing devices would therefore increase users’ efficiency and productivity.

Pointing devices such as the mouse, touchpad and trackball are neither typical nor efficient solutions for mobile users, because of the constrained space available. Hence a new pointing device, Swiftpoint, was designed to overcome the shortcomings of other pointing devices and to satisfy the requirements of mobility. It is small and can be used on top of a keyboard, thus reducing homing time and solving the problem of constrained space.

In my research I will evaluate Swiftpoint against other pointing devices, in mobile and desktop computers, by applying the latest recommendations of the ISO 9241 standard for the evaluation of pointing devices.

Sung Bae: Improved Algorithms for the K-Maximum Subarrays Problem (PhD)

The maximum subarray problem is to find the contiguous array portion that maximises the sum of its elements. This generic problem arises in various fields, including graphics and data mining, and efficient algorithms are demanded by time-critical applications in the military and medical sectors.

We generalize this problem to find K subarrays with largest sums and propose efficient algorithms.

The first published result for this problem was our O(Kn) time algorithm, which Bengtsson and Chen subsequently reduced to O(min{n√K, K + n log² n}) time.

We show that a simple modification to our O(Kn) time solution leads to O(K² + n log K) time. Our recent development finally removed the K² term and established O(n log K) time using an advanced data structure and a selection algorithm.
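For context, the base problem (K = 1) can be solved in O(n) time by Kadane's well-known scan. The following minimal Python sketch illustrates that base case only; the algorithms above generalise it to the K largest sums.

```python
def max_subarray(a):
    """Kadane's algorithm: O(n) maximum subarray sum for the K = 1 case."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)    # extend the current run or start a new one
        best = max(best, cur)
    return best

print(max_subarray([3, -5, 4, -1, 2, -3, 6, -2]))  # 8, from the run 4, -1, 2, -3, 6
```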

Nilufar Baghaei: COLLECT-UML: A Collaborative Constraint-based Intelligent System for Learning Object-Oriented Analysis and Design using UML (PhD)

In this talk we present COLLECT-UML, a constraint-based tutoring system that teaches Object-Oriented analysis and design using UML. The system observes students' actions and adapts to their knowledge and learning abilities. We describe the system's architecture and functionality. The effectiveness of the system has been evaluated in two studies with students taking ITS and software engineering courses. Objective data shows that students' performance increases significantly while interacting with the system. The students have enjoyed the system's adaptivity and found it a valuable asset to their learning. The goal of future work is to extend the system to support collaborative learning addressing both collaborative issues and task-oriented issues.

Oliver Batchelor: Volumetric reconstruction using voxel colouring (MSc)

Voxel colouring is a method of volumetric scene reconstruction from multiple views, with uses such as image-based scanning, heritage preservation and 3D photography. Recent advances in the area include optimisation-based colour consistency criteria and methods for arbitrary viewpoints, but the expensive visibility calculation makes these impractical. We have looked at some (hopefully more practical) alternatives for calculating visibility based on voxel ray casting, and make some simple comparisons between commonly used algorithms and variations based on ray casting. Ray casting is also useful for level of detail and for optimisation-based colour consistency.

Lee Begg: Introduction to multimedia streaming, protocols and problems (MSc)

This presentation will give an overview of multimedia streaming, highlighting the importance of streaming and current and future applications. The issues surrounding the modelling, simulation and development of media streaming systems and applications will be shown and current problems in research will be discussed, focused specifically on jitter control and transport layer improvements. Finally, a brief overview of high quality video streaming using SCTP over CDMA2000 will be presented and how the SCTP transport layer interacts with video and the lower layer protocol and physical transmission.

Carey Bishop: Usability Issues of Multiple Layer Display Technology (Hons)

PureDepth, an Auckland-based company, produces a device known as a Multiple Layer Display (MLD). MLDs are a novel invention designed to bring new possibilities to the field of Human Computer Interaction. Multiple layer displays are a type of LCD monitor that introduces depth to what would otherwise be a two-dimensional display. The transparency of each layer is determined by the colours displayed on it, while the depth between the layers provides a parallax effect. This creates some interesting issues regarding the colours displayed on each layer and text legibility. This presentation introduces multiple layer displays (MLDs) and discusses some of their unique characteristics.

Nick Brettell: Advanced terrain rendering using geometry clipmaps (Hons)

A primary difficulty in Terrain Rendering is displaying realistic terrains to the user at real-time frame rates.  A number of algorithms exist that use Level of Detail (LOD) to do this.  The geometry clipmap is a recently proposed approach that utilises the potential of modern graphics hardware.  LOD is achieved using a number of regular nested grids of increasing size and decreasing detail, centred around the viewpoint.  Height values are stored in vertex buffers on the graphics card for fast access, and can be updated incrementally using toroidal arrays. We have been studying the implementation aspects of the geometry clipmap algorithm using different sets of real terrain data. The project also aims to perform a comparative analysis with other optimized terrain rendering techniques to find if and when the geometry clipmap is the most effective algorithm.

Philip Brock: An investigation of target acquisition with visually expanding targets in constant motor-space (Hons)

Target acquisition is a core part of modern computer use. Fitts' law has frequently been proven to predict performance of target acquisition tasks; even with targets that change size as the cursor approaches. Research into expanding targets has focussed on targets that expand in both visual- and motor-space. We investigate whether a visual expansion with no change in motor-space offers any performance benefit, and whether Fitts' law models the acquisition of the target before or after its expansion. We investigate constant motor-space visual expansion in both abstract pointing tasks (based on the ISO standard) and in a realistic deployment of the technique within Fisheye Menus. Our fisheye menu system eliminates the "hunting effect" of target acquisition observed in Bederson's initial proposal of Fisheye Menus.
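The abstract above can be grounded in the common Shannon formulation of Fitts' law, which predicts movement time from target distance D and width W. A small illustrative sketch follows; the constants a and b are placeholders (in practice they are fit by regression for a given device), not values from this research.

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.1):
    """Shannon formulation: MT = a + b * log2(D/W + 1).
    a, b are device-specific constants; the defaults are illustrative only."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A purely visual expansion leaves the motor-space W, and hence the
# predicted movement time, unchanged; any observed benefit must come
# from somewhere Fitts' law does not model.
print(round(fitts_mt(300, 20), 3))  # 0.6
```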

James Carpinter: Evaluating classification ensembles for spam filtering (Hons)

Ensemble classifiers combine the relative strengths of different classification models to form a more accurate model of the classification function. Caruana et al. (2004) propose a library-based approach to ensemble generation, where several thousand models are built, and an ensemble is formed from a near-optimal subset of the collection. Their results showed consistent improvements over other standard machine learning techniques. This research aims to evaluate this method in a difficult, real-world domain: spam filtering. Spam is a growing threat to internet users worldwide: it is estimated to cost US$50 billion in lost productivity and increased network maintenance costs. Anti-spam research has largely focused on applying machine learning techniques to form superior filtering solutions; this work is yet another step towards this goal. In this seminar, the underlying concepts of this method will be presented, along with preliminary results.

Adrian Clark: An expert system using scene analysis and adaptive algorithm switching for Image-Based Object Registration (PhD)

Current Image-Based Object Registration research is generally limited to a single core algorithm, with the occasional use of a second algorithm when the core algorithm fails. While this works in constrained laboratory environments, the changing environmental conditions in public spaces cause such an approach to perform poorly. This research examines various environmental conditions and their relationship to a range of established registration algorithms. This knowledge database of relationships is used by an expert system that examines an image of a scene to choose which of the known algorithms performs optimally given the environmental features and the object to be recognised.

Carl Cook: Computer supported collaborative software engineering (PhD)

Tools to support real-time Collaborative Software Engineering (CSE) have many perceived benefits including increased programmer communication and faster resolution of development conflicts. Demand and support for such tools is rapidly increasing, but the cost of developing such tools is prohibitively expensive. During the course of my research towards enabling CSE, I have developed an architecture, CAISE, to support the rapid development of CSE tools.  In this presentation I will give a brief history of CSE, discuss the key aspects of the CAISE architecture, and outline the papers that summarise our investigations.

Anthony Dale: A hybrid approach to workflow resource allocation using the PM/PMM framework (MSc)

Traditionally, resource allocation in workflow processes is done in one of these ways:

1. Static worklists, where a literal resource is assigned to a task.  The worklists may be hierarchical, allowing some subdivision of the work.

2. Role-based assignment, where the assignment is to a resource type rather than a literal resource.  Different resources may assume different roles at different times, rather than being "hard-coded" into roles by the worklist approach.

3. A broker-based system, such as tendering for the cheapest or fastest resource to perform a task.

In all cases, the resource assignment has to happen in the instantaneous context of the process, because the assumption is that we are executing tasks in real time.

This is not necessarily the case when using our framework, and so we have a significant advantage in the area of resource allocation: because the framework has access to both static project data and dynamic process data, we can apply a hybrid approach to resource allocation. We do this by running a PMM forward into the future as far as possible (say, by listing all the tasks in a sequence) and then applying static task-resource allocation approaches to the task list.  For example, resource levelling algorithms from the static project domain could be incrementally applied to a dynamic PMM as "chunks" of the PMM are executed. A PMM chunk is a sequence of steps which are not dependent on a control structure being instantiated, e.g. the contents of a control step itself, or the contents of a sequenced step.  Traditional workflow approaches can still be applied to the resource allocation problem, and so we have the best of both worlds.

Mirko Eickhoff: Sequential simulation in MRIP: beyond mean value analysis (PhD)

Stochastic discrete event simulation is well known to be a powerful approach to investigate dynamic behaviour of complex systems. Nowadays much research work in this area is focused on sequential and automated methods of output analysis to guarantee a satisfactory level of confidence of the final results.

The estimation of mean values has traditionally been the main goal of simulation output analysis. However, in many situations mean value analysis is not sufficient. The estimation of quantiles is known to provide the analyst with a deeper insight into the system’s behaviour. The main problem facing quantile estimation is that the output streams from discrete event simulation are auto-correlated and observations are not identically distributed. The use of multiple independent replications in parallel (MRIP) within one simulation experiment enables the investigation of effective statistical methods of quantile analysis and offers a new paradigm for studying performance of complex systems.

The ultimate goal of this doctoral thesis is to investigate the probability distribution of arbitrary performance measures, based on quantile analysis. The results are calculated with a certain confidence level given by a sequential and automated approach within the MRIP-scenario.
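A toy sketch of the replication-based idea follows. Everything in it is an assumption for illustration (the function name, the constants, and the AR(1) stand-in output process are not the thesis method): each independent replication yields one quantile estimate from its own autocorrelated output stream, and because the estimates across replications are i.i.d., their mean and spread support a confidence statement.

```python
import random
import statistics

def mrip_quantile(q, n_reps=20, run_len=1000, seed=1):
    """Estimate the q-quantile of a steady-state output process from
    n_reps independent replications. Within a replication, observations
    are autocorrelated; across replications, the estimates are i.i.d."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_reps):
        x, stream = 0.0, []
        for _ in range(run_len):
            x = 0.8 * x + rng.gauss(0, 1)  # stand-in AR(1) output stream
            stream.append(x)
        stream.sort()
        estimates.append(stream[int(q * run_len)])  # per-replication estimate
    mean = statistics.mean(estimates)
    std_err = statistics.stdev(estimates) / len(estimates) ** 0.5
    return mean, std_err
```

Averaging per-replication estimates, rather than pooling raw observations, is what sidesteps the within-run autocorrelation noted above.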

Hongzai Gao: PEDDA: Pixel Exclusion Double Difference Algorithm to track fast objects in noisy images (PhD)

Pixel Exclusion Double Difference Algorithm (PEDDA) is a novel motion image segmentation algorithm, designed for segmenting fast-moving objects from a noisy background in a sequence of images. In addition to the double differential calculations implemented in the traditional Double Difference Algorithm (DDA), PEDDA applies a pixel exclusion operation over the double differential result and the next frame in the same sequence.

A comparative study has been conducted of the Adjacent Frames Difference Algorithm (AFDA), DDA and PEDDA. The experimental results show that, compared with the other two algorithms, the novel PEDDA algorithm better emphasises fast-moving targets in the image differencing results.
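The classical double-difference step that PEDDA builds on can be sketched as follows. This is a pure-Python illustration of the standard DDA only; PEDDA's pixel-exclusion operation is the contribution described above and is not reproduced here.

```python
def double_difference(f1, f2, f3, threshold=10):
    """Classical DDA: flag a pixel as moving at the time of frame f2 only if
    it changes both between f1 and f2 AND between f2 and f3. This suppresses
    changes present in only one frame pair, such as the 'ghost' an object
    leaves at its old position. Frames are 2-D lists of grey levels."""
    h, w = len(f1), len(f1[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d12 = abs(f2[y][x] - f1[y][x]) > threshold
            d23 = abs(f3[y][x] - f2[y][x]) > threshold
            mask[y][x] = 1 if d12 and d23 else 0
    return mask

# An object moving right one pixel per frame: only its position at f2
# survives; the pixel it vacated after f1 (index 0) and the one it will
# occupy in f3 (index 2) each changed in only one pair and are rejected.
f1, f2, f3 = [[90, 0, 0]], [[0, 90, 0]], [[0, 0, 90]]
print(double_difference(f1, f2, f3))  # [[0, 1, 0]]
```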

Andrew Gin: The Performance of the IEEE 802.11i/WPA2 Security Specifications in Wireless LANs (Hons)

The use of wireless networks has increased over recent years. Mobility and the convenience of connecting without wires have contributed to their popularity. However, wireless networks are inherently more vulnerable than wired networks, since data is broadcast over the air. Where a wired network needs to have its lines, hub or switch tapped to compromise its security, a wireless network can be compromised by anyone receiving the signal. Several measures have been developed to deal with this vulnerability. WEP was originally designed to make wireless networks as secure as an unsecured wired network; however, it has major flaws. WPA was released to fix these flaws, but it was developed within the constraints of hardware designed for WEP. WPA2 was released in late 2004 and is a completely new security specification. This project evaluates the effects of the WPA2 specification on performance (such as throughput and latency) and compares this performance with the existing architectures: WEP and WPA.

Christiaan Gough: Better Realising Direct Manipulation

Direct Manipulation is an approach to designing user interfaces that forms the basis of the Graphical User Interfaces we are familiar with today. The central idea of a Direct Manipulation interface is to improve usability by reducing the distance between the user and computer.

Despite the importance of the concept of Direct Manipulation and the interest in research on improving user interfaces, no formal model of Direct Manipulation has yet been adopted.
This paper proposes a mathematical model of Direct Manipulation, which combines the traditional aspects with some new ones. By establishing that the proposed relationship is correct, we will gain a new understanding of how to construct, and how to quantitatively compare or predict, the most usable user interfaces.

Robert Grant: Constructing a 3D persistent local terrain map of an unconstrained environment for navigation (PhD)

This research investigates methods of video and other sensor data acquisition and processing for maintaining a persistent 3D terrain map of a local region through motion. Prior research into atmospheric condition filtering, illumination change compensation and motion blur reduction enables a knowledge base supporting a novel merged environmental filtering algorithm. Such an algorithm is robust to variations from outdoor to indoor environments. The sensors deployed include a 3D IR camera, a colour camera, an accelerometer and a gyrometer. The 3D image from this merged data uses odometry from modified optical flow and Lucas-Kanade algorithms. Texture, odometry and structure are used to match this successive 3D data with the existing persistent 3D map to provide an up-to-date model of the surrounding environment. Traversability of terrain areas on the map is evaluated to determine the suitability of vehicles to attempt such paths. For example, a car may pass over rough stones which are too large for a wheelchair to traverse, whereas no vehicle can pass through a wall. Computational requirements of this system are reduced using computer vision and by updating 3D environment data at a low camera frame rate while sampling accelerometer and gyrometer data at high rates. This enables fast interactions in real time while more complex map building is performed only a few times a second. This novel solution has the potential to allow vehicle navigation systems to finally leave the laboratory and be deployed in the real world, where many applications await.

Jörg Hauber: Supporting Social Presence in Collaborative Virtual Environments (PhD)

Collaborative Virtual Environments (CVE) are shared artificial spaces where distant participants can meet virtually and work together with others connected through networked computers. This research investigates how some of the limitations of conventional two dimensional videoconferencing can be overcome by integrating real videos of remote people into the spatial context of a CVE. Prototypes of different CVEs are developed and evaluated with regard to the social presence they can support. From the data collected in numerous user studies a set of practical interface design guidelines will be derived. These guidelines will inform how to make telecommunication through CVEs a more natural, effective, pleasurable, simply better experience in the future.

Jay Holland: A constraint-based ITS for the Java programming language (MSc)

Acquisition of computer programming skill is a core component of the Computer Science curriculum, a fact reflected by the many first-year tertiary prescriptions that require a student to undertake some kind of programming course. The Java programming language provides an appropriate introductory programming syllabus; due to its abstractions of low-level functions and its system-independent nature, the student is able to concentrate more on general programming concepts rather than system idiosyncrasies.

Although the relevant material tends to be taught in lectures, most learning reinforcement takes place in laboratory classes, where practical tasks are carried out. An increasingly popular and effective way of improving student learning in a laboratory setting is through Intelligent Tutoring Systems (ITSs), which enhance the learning experience by providing feedback personalised to a user. This presentation provides an outline for the development and evaluation of an ITS for the Java programming language, and the progress made thus far.

Oliver Hunt: Haskell.NET: The Art of Avoiding Work (MSc)

We are investigating the compilation of the Haskell programming language to the Common Language Runtime (CLR).

There are significant obstacles to direct compilation of Haskell to the CLR; one of the most significant is Haskell's non-strict evaluation semantics.
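Non-strictness means an expression is evaluated only when demanded, and at most once; on a strict platform this is typically simulated with thunks. The following sketch (in Python, purely to illustrate the idea, not the Haskell.NET implementation) shows the behaviour a CLR-hosted scheme must reproduce:

```python
class Thunk:
    """Illustrative thunk: delays evaluation until forced, then caches
    the result so the suspended expression runs at most once."""
    def __init__(self, compute):
        self._compute = compute
        self._forced = False
        self._value = None

    def force(self):
        if not self._forced:
            self._value = self._compute()
            self._compute = None   # drop the closure once evaluated
            self._forced = True
        return self._value

calls = []
t = Thunk(lambda: calls.append("eval") or 42)
print(t.force(), t.force(), len(calls))  # 42 42 1
```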

Previous work in this area includes the non-strict functional language Mondrian, which was specifically designed to interwork in an OO environment. However, Mondrian was targeted at scripting applications and used an interpretive approach to non-strictness. Versions of Haskell have also been developed which provide a bridge between code executing on the targeted virtual machine and Haskell programs running natively.

Recent work at Canterbury developed a method for supporting limited non-strictness in languages executing on the CLR. Our work is a development and extension of this.

In our report we will discuss this extension, as well as other problems we have encountered.

Warwick Irwin: Understanding and improving object-oriented software through static software analysis (PhD)

The process of designing and constructing software requires software engineers to understand an intricate structure of components with many and various inter-relationships.  In OO software, classes, attributes, methods, control structures and other elements participate in inheritance, containment, invocation and usage relationships, among others.  The resulting configuration gives rise to diverse program characteristics, design forces and software neighbourhoods that must be understood and modified by the designers.  In current software development practice, this understanding is gained with little help from tools, other than text editors that simply show the source code and perhaps UML diagram generators.  This research addresses the practical difficulties of developing tools that are better able to inform software engineers about software structure.  In particular, we identify and overcome the practical limitations of conventional parser generators, and develop a semantic model of Java programs that provides complete and rigorous software structure information.

Michael JasonSmith: Designing a Temporal Document-Organisation System (PhD)

Organising documents, by naming them and placing them in folders, is a daily chore for users of computing systems. Time is a compelling method of organising documents because:

  • People have a good memory for order,
  • Temporally-ordered documents effectively act as reminders, and
  • Organisation is automatic, so the user is not burdened with naming and classifying documents.

I present the design guidelines used to create Swaca: a temporal document-organisation system that does not use files. I will show how the timeline-based system in Swaca can effectively be used to carry out document retrieval and error recovery.

Mayank Keshariya: A New Interoperability Architecture for a Real Time Policy Driven Wireless Mobile IP Environment (PhD)

Managing networks, both homogeneous and heterogeneous, remains a difficult task: more network devices need to be set up, ongoing configuration control must be maintained to meet business changes, and real-time configuration demands become more complex. Policy Based Network Management (PBNM) has been introduced to simplify and control network management, targeting QoS and security. It provides automatic configuration of network devices, ensures end-to-end network management, and predicts and controls preferential treatment. Network management is greatly simplified because the network is treated as a single entity rather than a collection of individual network devices.

As various wireless architectures provide heterogeneous interfaces, provisioning of authentication across these different networks as well as support for real-time policy-based systems poses some real research challenges while achieving an Always Best Connected state. This implies provisioning of a managed system, which can be connected transparently using different technologies, at different locations, and securely switch between them without disturbing or requiring any input from the user.

We propose a vision for 4G wireless network architecture to achieve anywhere, anytime – transparent, secure, managed, automated, scalable, modular, dynamic, standards-compliant, inter-domain always best-connected services; bringing benefits to both end-users and service providers to provide ubiquitous wireless coverage and high throughput across all geographical areas.

Yi Liu: A Bayesian Network Inference System for Smart Badge (MSc)

This presentation describes a Bayesian network inference system for a smart electronic badge system. The inference system is able to infer the interests of the person wearing the smart badge from time and position information collected by the badge. Meanwhile, the badge presents feedback from the inference system to the wearer, so that people may have more effective conversations.

The evaluation of the inference system was carried out in a simulation of people's behaviour. We tuned the parameters of the model using several test situations. Although a strictly quantitative evaluation is difficult to make, the simulated behaviour appeared realistic.

Julian Looser: Augmented Reality Magic Lenses (PhD)

This work investigates using Magic Lenses as a tool for Augmented Reality interfaces. Magic Lenses are interface components that allow a region of the user’s workspace to be viewed in a different way. Lenses can be moved around and used to expose more detail, modify a visualisation, or perform dynamic queries, for example.

Originally designed for 2D desktop applications, Magic Lenses have also been implemented in 3D virtual environments. We have extended this previous work so that Magic Lenses can be utilised in Augmented Reality applications.

Magic Lenses have never been satisfactorily evaluated and there are few guidelines to direct their use. This work aims to resolve this issue by analysing Magic Lenses and providing a detailed description of their operation, potential uses, advantages and disadvantages. User evaluations will test the effectiveness of Magic Lenses in a variety of scenarios, in both traditional 2D interfaces as well as Augmented Reality.

Ryan Mallon: History Variables: Implementation in a Procedural Programming Language (MSc)

A history variable is a special kind of variable that stores more than the current value: it stores the previous values, up to the given history depth, in addition to the newly assigned value. In this seminar I will introduce the concept of history variables and detail the implementation issues for simple variables and arrays. I will present my current algorithms for history variable storage.
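As an illustration of the concept only (the class name and API below are hypothetical, not those of the thesis implementation), a history variable of a given depth might behave like this:

```python
from collections import deque

class HistoryVar:
    """Hypothetical sketch of a history variable: each assignment pushes the
    old value into a bounded history of the given depth."""
    def __init__(self, value, depth=3):
        self._value = value
        self._history = deque(maxlen=depth)  # oldest values fall off the end

    def set(self, value):
        self._history.appendleft(self._value)  # most recent previous value first
        self._value = value

    def get(self, back=0):
        """back=0 gives the current value; back=k the value k assignments ago."""
        return self._value if back == 0 else self._history[back - 1]

x = HistoryVar(1, depth=2)
x.set(2)
x.set(3)
print(x.get(), x.get(1), x.get(2))  # 3 2 1
```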

Nancy Milik: Enhancing ITS by providing A Question-asking Module with Styled Answers (MSc)

The process of building up and correcting knowledge structures is driven by the questions we ask. Asking good questions plays a central role in learning. Such a crucial meta-cognitive skill is evidently mastered only through experience, and personal differences also play a vital role in acquiring it. This presentation will look at the potential of providing a question-asking module in an Intelligent Tutoring System that will allow students to ask open-ended questions. The implications of presenting answers tailored to various learning styles will be discussed. Some initial experiences and future directions will also be outlined.

Blair Neate: An object-oriented semantic model for .NET (Hons)

Static analysis of source code helps developers to understand their programs. Semantic modelling involves identifying all entities in a program and the relationships between these entities. This has many useful applications including calculating metrics and generating data for software visualisation.


The Common Intermediate Language (CIL) is part of the Microsoft .NET Framework. It is a low-level object-oriented language that runs on the .NET virtual machine. This is a significant change in programming technology because this language defines common semantics for any language targeted to run within the .NET Framework.


My research has involved designing a semantic model for any .NET program. This presentation describes the semantic model we have designed and how this model is populated from source code. An application of the model will be shown, in which the model has been used to calculate a new metric for measuring code reuse.

Trond Nilsen: Game design guidelines for Augmented Reality table top games (MSc)

Augmented reality offers a new platform for tabletop games. Using it, game designs may be able to blend some of the best elements of traditional games and computer games, surmounting old limitations and affording new game designs. A naïve approach to AR game development has been to simply merge design elements from computer and tabletop games. However, this does not take into account the unique nature of interaction, visualization and collaboration in AR. This research addresses the development of guidelines for augmented reality games based on three approaches – examination of HCI issues within the AR platform, the experience of designing AR games, and the gathered experience of players.

Edward Okoko: Authentication architectures in mobile wireless local and wide area networks (Hons)

Achieving Internet access “anywhere, anytime” is an ideal that much research has been working towards. One aspect of this research is the integration of different network technologies such as wireless local area network (WLAN) technologies and third generation (3G) technologies into so-called heterogeneous networks. With such integration, an important issue is handling mobility both within a given network technology and among the various ones. Mobile IP is a standard that supports transparent mobility at the network layer. However, another issue that arises is authentication and authorisation of users. The Mobile IP standard does not adequately address this issue. This presentation will discuss extensions to an implementation of Mobile IP to address authentication.

Vincent Pau: Development of Secure IPSec Tunneling in a Mobile IP Architecture (Hons)

Internet Protocol Security (IPsec) is a widely accepted standard for securing IP network traffic but has limited functionality in a Mobile IP environment. Previous research suggests two general approaches to solving this problem: to run IPsec over Mobile IP, or to dynamically update the IPsec tunnel endpoints. This study has two objectives. Firstly, the study proposes a variation of the latter approach, whereby Mobile IP registration messages are used to update the IPsec tunnel endpoints. Secondly, the study also aims to compare the performance of the proposed solution against running IPsec over Mobile IP, and the current approach of re-establishing new IPsec tunnels. Although the proposed solution is more complex compared to running IPsec over Mobile IP, we will show that it is more efficient in terms of bandwidth overhead. We will also show that the proposed solution should have a lower handoff delay compared to the current approach of re-establishing new IPsec tunnels.

Jung Shin: Skeleton based toon shading of 3D human characters (MSc)

Non-photorealistic rendering techniques are becoming increasingly popular in computer graphics and animation, particularly in the field of cartoon shading. Most of the cartoon shading techniques researched so far focus on real-time rendering, as they are used primarily for games and similar applications requiring interactive frame rates. Cartoon shading (or toon shading, as it is often called) has also been used for rendering background scenes and mechanical objects, but not generally character models in cartoons. That is mainly because the artefacts introduced by toon shading are often perceived as computer generated.

This study aims to develop a GPU-based algorithm for cartoon rendering of 3D human character models. Various attributes of the input geometry, such as colour, surface normals, and depth, are rendered into textures. 2D image processing operations are then performed on these textures to extract silhouette edges and to enhance their rendering quality. A method based on Bezier curves is applied to the silhouette edges to create the appearance of hand-drawn images. The effects of variations in distance and orientation with respect to the camera, and in illumination conditions, have been analysed. Further work is directed towards generating artistic levels of detail for the 3D model, and carrying out a comparative analysis of performance using models with differing geometrical characteristics.
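As a rough illustration of the image-space step, a silhouette detector can mark pixels where the rendered depth texture changes abruptly between neighbours. On the GPU this would be a fragment shader over the depth texture; the pure-Python version below, and its threshold value, are illustrative only:

```python
def silhouette_edges(depth, threshold=0.5):
    """Mark pixels where the depth texture jumps between a pixel and
    its right/lower neighbour -- a crude image-space silhouette test
    (the threshold is an illustrative choice)."""
    h, w = len(depth), len(depth[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):  # right and lower neighbours
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and abs(depth[y][x] - depth[ny][nx]) > threshold:
                    edges[y][x] = True
    return edges

# A 3x3 depth map: an object at depth 0.2 against a background at 1.0.
depth = [[1.0, 1.0, 1.0],
         [1.0, 0.2, 1.0],
         [1.0, 1.0, 1.0]]
edges = silhouette_edges(depth)  # True along the object's boundary
```

The same normal and colour textures mentioned above can be tested with analogous discontinuity measures.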

Steve Violich: Monster Garage: Combinatorial Generation by Fusing Loopless Algorithms (MSc)

Some combinatorial generation problems can be broken into subproblems for which loopless algorithms already exist. We discuss means by which loopless algorithms can be fused to produce a new loopless algorithm that solves the original problem. We demonstrate this method with two new loopless algorithms, MIXPAR and MULTPERM. MIXPAR generates well-formed parenthesis strings containing two different types of parentheses. MULTPERM generates multiset permutations in linear space using only arrays; it is simpler and more efficient than the recent algorithm of Korsh and LaFollette.
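For readers unfamiliar with the term, a loopless algorithm produces each successive object in worst-case constant time after linear-time initialisation. The classic Bitner–Ehrlich–Reingold Gray code generator is a standard example of the style; it is shown here only as background, not as MIXPAR or MULTPERM:

```python
def loopless_gray(n):
    """Bitner-Ehrlich-Reingold loopless generation of the binary
    reflected Gray code: after O(n) setup, each successive tuple is
    produced with a constant amount of work (one bit flip and a few
    pointer updates) -- the defining property of a loopless algorithm."""
    a = [0] * n             # the current bit string
    f = list(range(n + 1))  # focus pointers
    while True:
        yield tuple(a)
        j = f[0]
        f[0] = 0
        if j == n:          # all 2**n strings have been visited
            return
        f[j] = f[j + 1]
        f[j + 1] = j + 1
        a[j] ^= 1           # flip exactly one bit

codes = list(loopless_gray(2))  # successive tuples differ in one bit
```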

Pramudi Suraweera: A Knowledge Acquisition System for Intelligent Tutoring Systems (PhD)

Numerous empirical studies have shown that Intelligent Tutoring Systems (ITSs) are highly effective tools for education. However, developing an ITS is a time- and labour-intensive process, with a major portion of the effort devoted to composing the domain knowledge essential for providing pedagogical assistance. Our main goal is to dramatically reduce the time and effort required to compose such domain models.


We have developed CAS (Constraint Acquisition System), an authoring system that automatically generates the required domain model with the assistance of a domain expert. We will present an overview of the authoring process and details of the system’s architecture, along with the promising results of preliminary evaluations of the system.

Zhiqi Tu: Enhancements of a Public-Key Cryptosystem Based on the Non-Linear Knapsack Problem (MSc)

All existing public-key cryptosystems fall into three categories. The first is based on the difficulty of factoring the product of two large prime numbers; the most widely used example is the RSA cryptosystem. The second, exemplified by the ElGamal cryptosystem, is based on the difficulty of computing discrete logarithms. The last is based on the NP-completeness of the knapsack problem. The first two categories have survived cryptographic attacks, whereas the last was broken and such cryptosystems have fallen out of use.

To revive the last category, Kiriyama proposed a new public-key cryptosystem based on the non-linear knapsack problem, called the Non-Linear Knapsack Cryptosystem. Owing to the properties of the non-linear knapsack, this system resists all known attacks on the linear knapsack problem. Building on this work, we extend the research in two directions.

The first is to enable authentication of multiple identities in one execution of our authentication protocol. The second is to propose a digital signature scheme based on the non-linear knapsack problem and the Chinese Remainder Theorem.
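For context, the broken linear category can be sketched via the classic Merkle–Hellman scheme: a superincreasing private sequence is disguised by modular multiplication, and a ciphertext is the subset sum of the public key selected by the plaintext bits. The toy parameters below are illustrative; this shows the linear scheme that non-linear variants aim to strengthen, not Kiriyama’s system:

```python
def encrypt(bits, public_key):
    # Ciphertext = subset sum of the public key selected by the bits.
    return sum(b * k for b, k in zip(bits, public_key))

def decrypt(cipher, private_key, m, w_inv):
    # Undo the modular disguise, then solve the easy superincreasing
    # knapsack greedily from the largest element down.
    s = (cipher * w_inv) % m
    bits = []
    for k in reversed(private_key):
        if s >= k:
            bits.append(1)
            s -= k
        else:
            bits.append(0)
    return list(reversed(bits))

# Toy parameters (illustrative only): a superincreasing private sequence,
# a modulus m larger than its sum, and a multiplier w coprime to m.
private = [2, 3, 7, 14, 30]
m, w = 61, 17
w_inv = pow(w, -1, m)                    # modular inverse (Python 3.8+)
public = [(w * k) % m for k in private]  # the disguised public key

message = [1, 0, 1, 1, 0]
cipher = encrypt(message, public)
recovered = decrypt(cipher, private, m, w_inv)
```

The linear structure of the subset sum is precisely what lattice-based attacks exploit, which motivates the move to non-linear knapsacks.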

Amali Weerasinghe: Use of an affective model to enhance learning (PhD)

A tutor's ability to adapt the tutorial strategy to a student's emotional and cognitive states is an important factor in the effectiveness of human one-to-one tutoring. Even though tutoring systems were developed with the aim of providing the experience of human one-to-one tutoring to masses of students in an economical way, the use of learners' emotional states to adapt tutorial strategies was ignored until very recently. This presentation proposes an initial study to understand how human tutors adapt their teaching strategies to the affective needs of students. The findings of the study will be used to investigate how these strategies could be incorporated into an existing tutoring system, which could then adapt the tutoring environment based on the learner's affective and cognitive models.

Alexander Wong: Investigating Noise Tolerance in Generalised Nearest Neighbour Learning (Hons)

The goal of any machine learning algorithm is to classify unseen instances without error. One prohibiting factor is that the data sets from which machine learners build their predictions often contain noise, arising from human input error or errors in taking readings. We examine the Nearest Neighbour with Generalised Exemplars (NNGE) algorithm, which is susceptible to noise, and try to improve it by applying two noise-tolerating nearest neighbour techniques: k-NN and IB3. We predict that, because NNGE is already a generalised version of simple nearest neighbour, applying k-NN will add little to its current classification performance. Integrating IB3 with NNGE, however, is not as simple as integrating k-NN, and variations had to be made. Given IB3’s complexity, finding a suitable variation could significantly improve NNGE’s noise tolerance.
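The intuition behind k-NN’s noise tolerance is simply majority voting: with k > 1, a single mislabelled neighbour can be outvoted by correct ones. A minimal sketch (the data points and helper names are illustrative, and this is plain k-NN, not NNGE):

```python
from collections import Counter

def knn_classify(query, examples, k=3):
    """Majority vote among the k nearest training examples. With k > 1,
    a single mislabelled (noisy) neighbour is outvoted by correct
    ones -- the noise-tolerance idea behind k-NN."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(examples, key=lambda ex: sq_dist(query, ex[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Three clean "A" points and one mislabelled "B" point near the query.
data = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"), ((0.1, 0.1), "B")]
noisy_vote = knn_classify((0, 0.2), data, k=1)   # follows the noisy point
robust_vote = knn_classify((0, 0.2), data, k=3)  # the noise is outvoted
```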

Konstantin Zakharov: Recognition and support of affective states in ITSs (MSc)

The current level of development of Intelligent Tutoring Systems (ITSs) allows for successful support of the cognitive aspects of learning. At the same time, a number of studies suggest that learning outcomes are significantly influenced by a complex interaction between the cognitive and affective states of learners. One approach to affective state recognition is based on processing the physiological signals that accompany the elicitation of emotions. Recording and interpreting galvanic skin response, electromyographic activity, heart rate, and respiration rate can provide an overall picture of a person’s affective state. Little research has been done to investigate the effectiveness of learning with the help of affect-aware ITSs. On the other hand, animated pedagogical agents are known to improve learners’ engagement and motivation. Our research aims to develop and evaluate an affect-aware animated pedagogical agent for use in ITSs.

© University of Canterbury - Christchurch, New Zealand