8:45am
Welcome and Information
9:00am
Invited Speaker: Christos Papadimitriou (COLT/ICML)
(Physics Theatre)
10:00am
Ensemble Learners
Chair: Patricia Riddle
Hierarchical Reinforcement Learning
Chair: Tom Dietterich
Text Learning
Chair: Ian Witten
Is Combining Classifiers Better than Selecting the Best One?
Discovering Hierarchy in Reinforcement Learning with HEXQ
Learning word normalization using word suffix and context from unlabeled data
11:00am
A Unified Decomposition of Ensemble Loss for Predicting Ensemble Performance
Automatic Creation of Useful Macro-Actions in Reinforcement Learning
A New Statistical Approach on Personal Name Extraction
11:30am
Cranking: An Ensemble Method for Combining Rankers using Conditional Probability Models on Permutations
Using Abstract Models of Behaviours to Automatically Generate Reinforcement Learning Hierarchies
IEMS - The Intelligent Email Sorter
Elisabeth Crawford, Judy Kay, Eric McCreath
12:00pm
Active + Semi-supervised Learning = Robust Multi-View Learning
Model-based Hierarchical Average-reward Reinforcement Learning
Combining Labeled and Unlabeled Data for MultiClass Text Categorization
Rayid Ghani
12:30pm
Lunch (Square House)
Decision Trees
Chair: Ross Quinlan
Chair: Prasad Tadepalli
Text Learning
2:00pm
Fast Minimum Training Error Discretization
Hierarchically Optimal Average Reward Reinforcement Learning
Mohammad Ghavamzadeh, Sridhar Mahadevan
Partially Supervised Classification of Text Documents
Bing Liu, Wee Sun Lee, Philip S. Yu, Xiaoli Li
2:30pm
Learning Decision Trees Using the Area Under the ROC Curve
Cesar Ferri, Peter Flach, Jose Hernandez-Orallo
Action Refinement in Reinforcement Learning by Probability Smoothing
Thomas Dietterich, Didac Busquets, Ramon Lopez de Mantaras, Carles Sierra
Syllables and other String Kernel Extensions
Craig Saunders, Hauke Tschach, John Shawe-Taylor
3:00pm
An Analysis of Functional Trees
Learning Spatial and Temporal Correlation for Navigation in a 2-Dimensional Continuous World
A Boosted Maximum Entropy Model for Learning Text Chunking
3:30pm
Afternoon Tea (Physics Lawn)
Decision Trees
Chair: Mike Cameron-Jones
Reinforcement Learning
Chair: Sridhar Mahadevan
Data Mining
Chair: Marko Grobelnik
4:00pm
Classification Value Grouping
Scalable Internal-State Policy-Gradient Methods for POMDPs
Using Unlabelled Data for Text Classification through Addition of Cluster Parameters
Bhavani Raskutti, Adam Kowalczyk, Herman Ferra
4:30pm
Finding an Optimal Gain-Ratio Subset-Split Test for a Set-Valued Attribute in Decision Tree Induction
An epsilon-Optimal Grid-Based Algorithm for Partially Observable Markov Decision Processes
From Instance-level Constraints to Space-Level Constraints: Making the Most of Prior Knowledge in Data Clustering
Dan Klein, Sepandar Kamvar, Christopher Manning
5:00pm
Adaptive View Validation: A First Step Towards Automatic View Detection
On the Existence of Fixed Points for Q-Learning and Sarsa in Partially Observable Domains
Mining Both Positive and Negative Association Rules
Chengqi Zhang, Xindong Wu, Shichao Zhang
9:00am
Invited Speaker: Saso Dzeroski (ICML/ILP)
(Physics Theatre)
10:00am
Support Vector Machines
Chair: Alex Smola
Behavioural Cloning/Scientific Discovery
Chair: Pat Langley
Theory
Chair: John Case
Anytime Interval-Valued Outputs for Kernel Machines: Fast Support Vector Machine Classification via Distance Geometry
Reinforcement Learning and Shaping: Encouraging Intended Behaviors
Sufficient Dimensionality Reduction - A novel Analysis Principle
11:00am
Multi-Instance Kernels
Thomas Gaertner, Peter Flach, Adam Kowalczyk, Alex Smola
Separating Skills from Preference: Using Learning to Program by Reward
Combining Training Set and Test Set Bounds
11:30am
Kernels for Semi-Structured Data
Learning to Fly by Controlling Dynamic Instabilities
Learning k-Reversible Context-Free Grammars from Positive Structural Examples
12:00pm
A Fast Dual Algorithm for Kernel Logistic Regression
Sathiya Keerthi, Kaibo Duan
Qualitative reverse engineering
On generalization bounds, projection profile, and margin distribution
Ashutosh Garg, Sariel Har-Peled, Dan Roth
12:30pm
Lunch (Square House)
Cost Sensitive Learning
Chair: Rob Holte
Scientific Discovery/Reinforcement Learning
Chair: Ivan Bratko
Chair: Chengqi Zhang
2:00pm
An Alternate Objective Function for Markovian Fields
Sham Kakade, Yee Whye Teh, Sam Roweis
Inducing Process Models from Continuous Data
Pat Langley, Javier Sanchez,
Ljupco Todorovski, Saso Dzeroski
Non-Disjoint Discretization for Naive-Bayes Classifiers
2:30pm
Issues in Classifier Evaluation using Optimal Cost Curves
Integrating Experimentation and Guidance in Relational Reinforcement Learning
Numerical Minimum Message Length Inference of Univariate Polynomials
Leigh Fitzgibbon, David Dowe, Lloyd Allison
3:00pm
Pruning Improves Heuristic Search for Cost-Sensitive Learning
Approximately Optimal Approximate Reinforcement Learning
Learning to Share Distributed Probabilistic Beliefs
Christopher Leckie,
Ramamohanarao Kotagiri
3:30pm
Afternoon Tea (Physics Lawn)
Unsupervised Learning
Chair: Eibe Frank
Reinforcement Learning
Chair: Mark Pendrith
4:00pm
Semi-supervised Clustering by Seeding
Sugato Basu, Arindam Banerjee,
Raymond Mooney
Competitive Analysis of the Explore/Exploit Tradeoff
John Langford, Martin Zinkevich, Sham Kakade
Markov Chain Monte Carlo Sampling using Direct Search Optimization
Malcolm Strens, Mark Bernhardt, Nicholas Everett
4:30pm
Exploiting Relations Among Concepts to Acquire Weakly Labeled Training Data
Investigating the Maximum Likelihood Alternative to TD(lambda)
Exact model averaging with naive Bayesian classifiers
5:00pm
Interpreting and Extending Classical Agglomerative Clustering Algorithms using a Model-Based approach
Sepandar Kamvar, Dan Klein, Christopher Manning
Coordinated Reinforcement Learning
Carlos Guestrin, Michail Lagoudakis,
Ronald Parr
MMIHMM: Maximum Mutual Information Hidden Markov Models
Friday
9:00am
Invited Speaker: Sebastian Thrun
(Physics Theatre)
10:00am
Morning Tea (Physics Lawn)
Ensemble Learners
Chair: Bernhard Pfahringer
Feature Selection
Incorporating Prior Knowledge into Boosting
Robert Schapire, Marie Rochery,
Mazin Rahim, Narendra Gupta
Refining the Wrapper Approach - Smoothed Error Estimates for Feature Selection
Loo-Nin Teow, Hwee Tou Ng, Haifeng Liu, Eric Yap
Feature Subset Selection and Inductive Logic Programming
11:00am
Modeling Auction Price Uncertainty Using Boosting-based Conditional Density Estimation
Robert Schapire, Peter Stone, David McAllester, Michael Littman, Janos Csirik
Feature Selection with Active Learning
Inductive Logic Programming out of Phase Transition: A bottom-up constraint-based approach
Jacques Ales Bianchetti,
Celine Rouveirol, Michele Sebag
11:30am
How to Make Stacking Better and Faster While Also Taking Care of an Unknown Weakness
Randomized Variable Elimination
Graph-Based Relational Concept Learning
Jesus Gonzalez
12:00pm
Towards "Large Margin" Speech Recognizers by Boosting and Discriminative Training
Discriminative Feature Selection via Multiclass Variable Memory Markov Model
Noam Slonim, Gill Bejerano,
Shai Fine, Naftali Tishby
Descriptive Induction through Subgroup Discovery: A Case Study in a Medical Domain
12:30pm
Lunch (Square House)
Support Vector Machines
Chair: Peter Flach
2:00pm
Statistical Behavior and Consistency of Support Vector Machines, Boosting, and Beyond
Sparse Bayesian Learning for Regression and Classification using Markov Chain Monte Carlo
Shien-Shin Tham, Arnaud Doucet, Ramamohanarao Kotagiri
Linkage and Autocorrelation Cause Feature Selection Bias in Relational Learning
2:30pm
The Perceptron Algorithm with Uneven Margins
Yaoyong Li, Hugo Zaragoza, Ralf Herbrich,
John Shawe-Taylor, Jaz Kandola
Modeling for Optimal Probability Prediction
Algorithm-Directed Exploration for Model-Based Reinforcement Learning
Carlos Guestrin, Relu Patrascu,
Dale Schuurmans
3:00pm
Learning the Kernel Matrix with Semi-Definite Programming
Gert Lanckriet, Nello Cristianini, Peter Bartlett, Laurent El Ghaoui, Michael Jordan
Representational Upper Bounds of Bayesian Networks
Huajie Zhang, Charles Ling
A Necessary Condition of Convergence for Reinforcement Learning with Function Approximation
3:30pm
Afternoon Tea (Physics Lawn)
Support Vector Machines/Rule Learning
Chair: Alan Blair
4:00pm
Diffusion Kernels on Graphs and Other Discrete Structures
Learning Decision Rules by Randomized Iterative Local Search
Michael Chisholm, Prasad Tadepalli
Stock Trading System Using Reinforcement Learning with Cooperative Agents
Jangmin O, Jae Won Lee, Byoung-Tak Zhang
4:30pm
Learning from Scarce Experience
Leonid Peshkin, Christian Shelton
Transformation-Based Regression
Bjorn Bringmann, Stefan Kramer, Friedrich Neubarth, Hannes Pirker, Gerhard Widmer
Content-Based Image Retrieval Using Multiple-Instance Learning
Qi Zhang, Wei Yu, Sally Goldman, Jason Fritts