This site also provides information about Experience-Based Language Acquisition (EBLA), the software system that I developed as part of my dissertation research at the LSU Department of Computer Science.
Brian E. Pangburn
May 27, 2003
KurzweilAI had a link to this Guardian Unlimited article discussing Bruce Lahn's research on the sophistication of the human brain.
From the article, "Professor Lahn's research, published this week in the journal Cell, suggests that humans evolved their cognitive abilities not owing to a few sporadic and accidental genetic mutations - as is the usual way with traits in living things - but rather from an enormous number of mutations in a short period of time, acquired through an intense selection process favouring complex cognitive abilities."
The First Symposium on Semantic Mining in Biomedicine (SMBM 2005) will be held April 10-13, 2005. It is being hosted by the European Bioinformatics Institute (EBI) in Hinxton, Cambridge, UK.
More information is available here.
A workshop on the Analysis of Informal and Formal Information Exchange during Negotiations will be held May 26-27, 2005 at the University of Ottawa in Ottawa, Ontario, Canada.
More information is available here.
The AAAI-05 Workshop on Question Answering in Restricted Domains will be held on July 9 or 10, 2005 (date TBA) in Pittsburgh, Pennsylvania as part of AAAI-05.
More information is available here.
This research focuses on the development of a telerobotic system that employs several state-action policies to carry out a task using on-line learning with human operator (HO) intervention through a virtual reality (VR) interface. The case-study task is to empty the contents of an unknown bag for subsequent scrutiny.

A system state is defined as a condition that exists in the system for a significant period of time and consists of the following sub-states: 1) the bag, which includes a feature set such as its type (e.g., plastic bag, briefcase, backpack, or suitcase); 2) the robot (e.g., gripper spatial coordinates, home position, idle, performing a task); and 3) other objects (e.g., contents that fell out of the bag, obstructions).

A system action takes the system to a new state. Examples of actions include selecting an initial grasping point, executing a lift-and-shake trajectory, and re-arranging the position of a bag to prepare it for better grasping and to enable the system to verify that all of the bag's contents have been extracted.

Given the system state and a set of actions, a policy is a set of state-action pairs to perform a robotic task. The system starts with knowledge of the individual operators of the robot arm, such as opening and closing the gripper, but it has neither a policy for deciding when these operators are appropriate nor knowledge about the special properties of the bags. The optimal policy is defined as the best action for a given state. The system learns this policy from experience and human guidance; a policy is found to be beneficial if a bag was grasped successfully and all its contents were extracted.

The expected contributions of this research include the system's ability to accept or reject new HO policies (e.g., adding or removing possible grasping points, lifting, and shaking trajectories) through a VR interface when it encounters a situation it cannot handle or when its learning rate is considered to be low. Examples are letting the HO classify the type of a bag (e.g., a briefcase) when it has been mistakenly recognized as a different type (e.g., a suitcase), and letting the HO provide a set of possible grasping points when the system finds it difficult to recognize points that are beneficial for completing the task. When HO intervention is found to be beneficial, the system learns, and its dependence on the HO decreases.

Learning the optimal policy for bag classification will be conducted using support vector machines (SVMs). The output of the SVMs will be a recommendation for a set of possible grasping points. Reinforcement learning (e.g., Q-learning) will be used to find the best action (e.g., determining the optimal grasping point followed by a lift-and-shake trajectory) for a given state.

To test the above, an advanced virtual reality (VR) telerobotic bag-shaking system is proposed. It is assumed that several kinds of bags are placed on a platform, with all locks removed and all latches and zippers opened. The task of the system is to empty the contents of an unknown bag onto the platform for subsequent scrutiny. It is assumed that the bag has already passed X-ray inspection to ensure that it is not empty and does not contain obvious explosives (e.g., mines, bullets).

Keywords: support vector machines, robot learning, machine vision, virtual reality, human-robot collaboration, telerobotics
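To make the state-action formulation above concrete, here is a minimal tabular Q-learning sketch in Python. The bag-type states, action names, and reward values are hypothetical stand-ins invented for illustration; the abstract does not specify the actual encodings.

```python
import random
from collections import defaultdict

# Hypothetical discretization of the state and action spaces; the
# abstract's actual encodings are not given.
ACTIONS = ["grasp_corner", "grasp_handle", "lift_and_shake", "reposition_bag"]

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.2  # exploration rate

# Q-table: maps (state, action) -> estimated value, defaulting to 0.
Q = defaultdict(float)

def choose_action(state):
    """Epsilon-greedy action selection over the discrete action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Illustrative episode step: a successful grasp on a plastic bag.
state = ("plastic_bag", "robot_idle")
action = choose_action(state)
update(state, action, reward=1.0, next_state=("plastic_bag", "bag_emptied"))
```

In the abstract's terms, a "beneficial" policy, one that grasps the bag and extracts all of its contents, would correspond to episodes ending with positive reward, pushing the Q-values of the responsible state-action pairs upward.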
A cooperative human-robot learning system for remote robotic operations using a virtual reality (VR) interface is presented. The paper describes the overall system architecture and the VR telerobotic system interface. Initial tests using on-line control through the VR interface for the task of shaking out the contents of a plastic bag are presented. The system employs several state-action policies. System states are defined by the type of bag, the status of the robot, and the environment. Actions are defined by the initial grasping point and the lift-and-shake trajectory. A policy is a set of state-action pairs to perform a robotic task. The system starts with knowledge of the individual operators of the robot arm, such as opening and closing the gripper, but it has no policy for deciding when these operators are appropriate, nor does it have knowledge about the special properties of the bags. An optimal policy is defined as the best action for a given state, learned from experience and human guidance. A policy is found to be beneficial if a bag was grasped successfully and all its contents extracted.
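The cooperative fallback described above, where the system defers to the human operator for states it has no policy for, might look like the following sketch. The function and state names are hypothetical; in the real system the request would be routed through the VR interface.

```python
# Learned policy: maps a state (bag type, robot status, ...) to an action.
policy = {}

def request_operator_action(state):
    """Hypothetical stand-in for the VR interface: the human operator
    supplies an action (e.g., a grasping point) for an unfamiliar state."""
    return "lift_and_shake"  # placeholder; supplied interactively in practice

def select_action(state):
    """Use the learned policy when one exists; otherwise defer to the human
    operator and remember the choice, so that dependence on the operator
    decreases over time."""
    if state not in policy:
        policy[state] = request_operator_action(state)
    return policy[state]

print(select_action(("plastic_bag", "robot_idle")))
```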
This paper describes a telerobotic system operated through a virtual reality (VR) interface. A least-squares method is used to find the transformation mapping from the virtual to the real environment. Results revealed an average transformation error of 3 mm. The system was tested for the task of planning minimum-time shaking trajectories to discharge the contents of a suspicious package onto a workstation platform. Performance times for carrying out the task directly through the VR interface showed rapid learning, reaching the standard time (288 seconds) within 7 to 8 trials and exhibiting a learning rate of 0.79.
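The abstract does not give the exact least-squares formulation, but a standard choice for mapping point correspondences between a virtual and a real environment is the SVD-based rigid-body fit (the Kabsch algorithm) sketched below in Python with NumPy. The calibration points here are synthetic stand-ins.

```python
import numpy as np

def fit_rigid_transform(virtual_pts, real_pts):
    """Least-squares rigid transform (rotation R, translation t) mapping
    virtual-environment points onto their real-world counterparts via the
    standard SVD (Kabsch) solution."""
    v_mean = virtual_pts.mean(axis=0)
    r_mean = real_pts.mean(axis=0)
    H = (virtual_pts - v_mean).T @ (real_pts - r_mean)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = r_mean - R @ v_mean
    return R, t

# Synthetic calibration data: a known rotation about z plus a translation.
virtual = np.random.rand(10, 3)
M = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
real = virtual @ M.T + np.array([0.5, 0.1, 0.0])
R, t = fit_rigid_transform(virtual, real)
errors = np.linalg.norm((virtual @ R.T + t) - real, axis=1)
print("mean transformation error:", errors.mean())
```

The mean residual printed at the end is the analogue of the reported 3 mm average transformation error. As an aside, if the 0.79 learning rate follows the conventional learning-curve model T_n = T_1 * n^(log2 0.79), performance time drops to about 79% of its previous value each time the number of trials doubles.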
The last stage of any type of automatic surveillance system is the interpretation of the information acquired from its sensors. This work focuses on the interpretation of motion pictures taken from a surveillance camera, i.e., image understanding. A prototype of a fuzzy expert system is presented which can describe, in a natural-language-like manner, simple human activity in the field of view of a surveillance camera. The system comprises three components: a pre-processing module for image segmentation and feature extraction, an object-identification fuzzy expert system (static model), and an action-identification fuzzy expert system (dynamic temporal model). The system was tested on a video segment of a pedestrian passageway taken by a surveillance camera.
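As a rough illustration of the action-identification (dynamic) component, the Python sketch below classifies an object's motion using triangular fuzzy membership functions over image-plane speed. The breakpoints, labels, and rules are invented for illustration; the paper's actual knowledge base is not given.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def describe_motion(speed_px_per_frame):
    """Fuzzy action identification: map an object's image-plane speed to
    degrees of membership in linguistic activity labels, then report the
    strongest label in a natural-language-like phrase."""
    memberships = {
        "standing": tri(speed_px_per_frame, -1.0, 0.0, 1.5),
        "walking":  tri(speed_px_per_frame, 0.5, 3.0, 6.0),
        "running":  tri(speed_px_per_frame, 4.0, 8.0, 12.0),
    }
    label = max(memberships, key=memberships.get)
    return f"the person is {label}", memberships

print(describe_motion(2.5)[0])  # -> "the person is walking"
```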
The last stage of any type of automatic surveillance system is the interpretation of the information acquired from the sensors. This work focuses on the interpretation of motion pictures taken from a surveillance camera, i.e., image understanding. An expert system is presented which can describe, in a natural-language-like manner, simple human activity in the field of view of a surveillance camera. The system has three components: a pre-processing module for image segmentation and feature extraction, an object-identification expert system (static model), and an action-identification expert system (dynamic temporal model). The system was tested on a video segment of a pedestrian passageway taken by a surveillance camera.