Calendar

Jan
30
Thu
U of T Statistics Seminar @ Sidney Smith Room 1074
Jan 30 @ 15:15 – 16:30

This is an announcement of the Statistics departmental seminar. Coffee and refreshments will be served at 3:15pm.

Speaker: Aaron Smith, Tutte Institute for Mathematics and Computing

Time: 3:15-4:30pm on Thursday, Jan 30, 2014.
Location: Sidney Smith Hall, Room 1074

Title: Efficiency Bounds and Concentration Inequalities for Adaptive Samplers

Markov chain Monte Carlo (MCMC) is a ubiquitous tool for estimating integrals over complicated probability distributions. In practice, the performance of MCMC algorithms depends heavily on a large number of tuning parameters that can be difficult to select. This problem is sometimes solved by using “adaptive” MCMC methods to learn parameters on the fly. Although these methods are popular, very little is known about the properties of estimates that they produce. In this talk, I present new finite-time error bounds and concentration inequalities for a popular class of adaptive algorithms, the equi-energy (EE) sampler. These ideas are also used to provide the first proofs that the EE sampler can be more efficient than its non-adaptive competitors.
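As a rough illustration of what "learning parameters on the fly" means, here is a minimal R sketch of an adaptive random-walk Metropolis sampler that tunes its proposal scale toward a target acceptance rate. This is a generic adaptive-MCMC toy, not the equi-energy sampler from the talk; the target density and the adaptation rule are illustrative assumptions.

```r
## Minimal sketch of the adaptive-MCMC idea (not the equi-energy sampler):
## a random-walk Metropolis sampler that tunes its proposal scale on the
## fly, aiming for a roughly 0.44 acceptance rate.
set.seed(1)
log_target <- function(x) dnorm(x, mean = 0, sd = 2, log = TRUE)  # toy target

n_iter <- 5000
x      <- numeric(n_iter)
scale  <- 1                       # tuning parameter learned during the run
for (t in 2:n_iter) {
  prop  <- x[t - 1] + rnorm(1, sd = scale)
  alpha <- exp(log_target(prop) - log_target(x[t - 1]))
  acc   <- runif(1) < alpha
  x[t]  <- if (acc) prop else x[t - 1]
  ## diminishing adaptation: step sizes shrink, so the adaptation settles
  scale <- scale * exp((acc - 0.44) / sqrt(t))
}
mean(x)   # MCMC estimate of the target mean
```

The shrinking 1/sqrt(t) step is one standard way to make the adaptation eventually freeze, the kind of condition under which adaptive samplers can be shown to produce valid estimates.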

Jan
31
Fri
Credit Risk Modeling in Canadian Banks @ University of Toronto, HS614
Jan 31 @ 17:00 – 20:00

Speaker: Shan Jiang, PhD, Bank of Montreal

Date: Friday, January 31, 2014
Registration and Networking: 5:00pm – 5:45pm
Presentation: 5:45pm – 6:15pm
Dinner together at Asian Legend: 6:15pm – 8:00pm

Registration: Please send an email to seminar.sora@gmail.com with your affiliation. You will receive a confirmation letter if there is a seat available.

Location
University of Toronto, HS614, 155 College Street

Feb
3
Mon
Mixed Effects Models for Item Response Data @ York University Department of Psychology
Feb 3 @ 10:15 – 11:15
Quantitative Methods Forum @ Norm Endler Room (BSB 164)

Speaker: Phil Chalmers, York University
Department of Psychology

Title: Mixed effects models for item response data

Abstract: A special selection of item response theory (IRT) models can be understood as generalized linear mixed-effects models (GLMMs), and as such can be estimated using existing software packages such as lme4 in R or PROC NLMIXED in SAS. The benefit of estimating IRT models using GLMM methodology is the ability to include additional fixed and random effect variables to help explain the rich properties a test may possess. However, although a GLMM approach can be used for some IRT models, it is not flexible enough to include many of the more common models seen in the educational and psychological testing literature. This talk will explore a newer estimation framework designed to be flexible to user specifications, to be accurate in the presence of multiple random effect covariates, and to allow a much larger number of useful IRT models to be utilized in item analysis work. The GLMM approach to modelling IRT data will be contrasted with the proposed estimation framework, and analyses of simulated and empirical data will be presented.
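The GLMM connection in the abstract can be made concrete with lme4, in the spirit of the De Boeck et al. (2011) reading below: a Rasch model is a binomial GLMM with item fixed effects and person random effects. The simulated data here are purely illustrative.

```r
## Sketch of the GLMM view of IRT: a Rasch model fit with lme4::glmer,
## with fixed item effects and random person intercepts, on simulated
## binary responses.
library(lme4)
set.seed(2)
n_person <- 200; n_item <- 10
theta <- rnorm(n_person)                       # person abilities
beta  <- seq(-1.5, 1.5, length.out = n_item)   # item easiness
dat <- expand.grid(person = factor(1:n_person), item = factor(1:n_item))
eta <- theta[as.integer(dat$person)] + beta[as.integer(dat$item)]
dat$resp <- rbinom(nrow(dat), 1, plogis(eta))

## Rasch model as a GLMM
fit <- glmer(resp ~ 0 + item + (1 | person), data = dat, family = binomial)
fixef(fit)   # item parameter estimates
```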

Suggested Readings:
De Boeck, P., et al. (2011). The estimation of item response models with the lmer function from the lme4 package in R. Journal of Statistical Software, 39, 1-28.

Feb
6
Thu
**CANCELLED** Shape Constrained Regression Using Gaussian Process Projections @ Sidney Smith Room 1074
Feb 6 @ 15:30 – 16:30

**CANCELLED DUE TO WEATHER**

The seminar will be rescheduled soon.

Shape Constrained Regression Using Gaussian Process Projections

Lizhen Lin, Duke University

Shape constrained regression analysis has applications in dose-response modeling, environmental risk assessment, disease screening and many other areas. Incorporating the shape constraints can improve estimation efficiency and avoid implausible results. In this talk, I will discuss nonparametric methods for estimating shape constrained (mainly monotone constrained) regression functions. I will focus on a novel Bayesian method from our recent work for estimating monotone curves and surfaces using Gaussian process projections. Inference is based on projecting posterior samples from the Gaussian process. Theory is developed on continuity of the projection and rates of contraction. Our approach leads to simple computation with good performance in finite samples. The projection approach can be applied to other constrained function estimation problems, including in multivariate settings.
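A crude sketch of the projection idea in base R: draw an ensemble of unconstrained curve estimates and project each one onto the monotone cone. Here a bootstrap of smoothing splines stands in for posterior sampling from a Gaussian process, and isotonic regression (isoreg) stands in for the projection; both substitutions are illustrative assumptions, not the method from the talk.

```r
## Project unconstrained curve draws onto monotone functions, then average.
set.seed(3)
n <- 50
x <- sort(runif(n))
y <- x^2 + rnorm(n, sd = 0.1)            # true curve is monotone

draws <- replicate(100, {
  idx <- sample(n, replace = TRUE)       # stand-in for posterior sampling
  fit <- smooth.spline(x[idx], y[idx])
  predict(fit, x)$y
})
mono_draws <- apply(draws, 2, function(f) isoreg(x, f)$yf)  # projection step
f_hat <- rowMeans(mono_draws)            # monotone point estimate
all(diff(f_hat) >= 0)                    # TRUE: monotonicity holds
```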

Feb
10
Mon
Overview of Likelihood-Based Inference @ York University Department of Psychology
Feb 10 @ 10:15 – 11:15
Quantitative Methods Forum @ Norm Endler Room (BSB 164)

Speaker: Dr. Augustine Wong, York University
Department of Mathematics and Statistics

Title: Overview of Likelihood-Based Inference

Abstract: Obtaining a confidence region or performing a significance test for a parameter based on the likelihood function is common practice in statistics. In last year’s presentation, Professor Pek introduced two likelihood-based methods: the Wald method (based on the maximum likelihood estimate of the parameter) and the Wilks method (the likelihood ratio method). In this talk, the accuracy of these two methods is examined. When the parameter of interest is a scalar, a special way of combining the Wald and Wilks methods is proposed. The proposed method gives remarkably accurate inference results even when the sample size is extremely small.
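To see the contrast between the two methods, here is a small R example computing Wald and Wilks (likelihood ratio) 95% confidence intervals for the rate of an exponential model from a small sample; the model and data are illustrative.

```r
## Wald vs. Wilks confidence intervals for an exponential rate, n = 10.
set.seed(4)
y <- rexp(10, rate = 2)
lam_hat <- 1 / mean(y)                    # MLE of the rate
loglik  <- function(lam) sum(dexp(y, rate = lam, log = TRUE))

## Wald: MLE +/- 1.96 standard errors (observed information = n / lambda^2)
se_wald <- lam_hat / sqrt(length(y))
wald_ci <- lam_hat + c(-1, 1) * 1.96 * se_wald

## Wilks: invert the likelihood ratio statistic 2 * (l(hat) - l(lam))
lr <- function(lam) 2 * (loglik(lam_hat) - loglik(lam)) - qchisq(0.95, 1)
wilks_ci <- c(uniroot(lr, c(1e-6, lam_hat))$root,
              uniroot(lr, c(lam_hat, 50))$root)
rbind(wald = wald_ci, wilks = wilks_ci)
```

Note that the Wilks interval is not forced to be symmetric about the MLE and cannot cross zero, which is one reason it tends to behave better than the Wald interval in small samples.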

Suggested Readings:
1. Barndorff-Nielsen, O.E., & Cox, D.R. (1994). Inference and Asymptotics. Chapman & Hall.
2. Bedard, M., Fraser, D.A.S., & Wong, A. (2007). Higher accuracy for Bayesian and frequentist inference: large sample theory for small sample likelihood. Statistical Science 22, 301-321.
3. Doganaksoy, N. & Schmee, J. (1993). Comparisons of approximate confidence intervals for distributions used in life-data analysis. Technometrics 35, 175-184.
4. Fraser, D.A.S. (1990). Tail probabilities from observed likelihoods. Biometrika 77, 65-76.
5. Fraser, D.A.S., Reid, N. & Wu, J. (1999). A simple general formula for tail probabilities for frequentist and Bayesian inference. Biometrika 86, 249-264.
6. Reid, N. (1988). Saddlepoint methods and statistical inference. Statistical Science 3, 213-238.
7. Reid, N. (1996). Higher order asymptotics and likelihood: a review and annotated bibliography. Canadian Journal of Statistics 24, 141-166.
8. Wong, A. & Wu, J. (2000). Practical use of small sample asymptotics for distributions used in life-data analysis. Technometrics 42, 149-155.
9. Wong, A. & Wu, J. (2001). Approximate inference for the factor loading of a simple factor analysis model. Scandinavian Journal of Statistics 28, 407-414.

(Note: 1, 4, 5, 6, 7 are background material, 2 relates to Bayesian inference, and the rest are specific applications.)

Feb
11
Tue
Pseudo-likelihood methods for community detection in large sparse networks @ Sidney Smith Hall, Room 2118
Feb 11 @ 15:30 – 16:30

Tuesday February 11, 2014 at 3:30pm

Sidney Smith Hall, Room 2118

*Refreshments will be served at 3:15pm

Pseudo-likelihood methods for community detection in large sparse networks 

Dr. Arash Amini, University of Michigan

We consider the problem of community detection in a network, that is, partitioning the nodes into groups that, in some sense, reveal the structure of the network. Many algorithms have been proposed for fitting network models with communities, but most of them do not scale well to large networks, and often fail on sparse networks. We present a fast pseudo-likelihood method for fitting the stochastic block model, a well-known model for networks with communities, as well as a variant that allows for an arbitrary degree distribution by conditioning on degrees.

We provide empirical results showing that the algorithms perform well under a range of settings, including on very sparse networks, and illustrate on the example of a network of political blogs. We also present spectral clustering with perturbations, a method of independent interest, which works well on sparse networks where regular spectral clustering fails, and use it to provide an initial value for pseudo-likelihood. We discuss theoretical results showing that pseudo-likelihood provides consistent estimates of the communities under mild conditions on the starting value, for the case of a block model with two communities. Time permitting, we give some insights as to why perturbations help with spectral clustering on sparse networks.
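A hedged sketch of the "spectral clustering with perturbations" step in R: regularize a sparse adjacency matrix by adding a small constant before the eigendecomposition, then cluster the leading eigenvectors with k-means. The planted two-block graph and the choice of perturbation strength are illustrative, not the paper's exact construction.

```r
## Spectral clustering with a perturbation (regularization) on a sparse
## two-block stochastic block model.
set.seed(5)
n <- 200
z <- rep(1:2, each = n / 2)                 # true communities
p <- ifelse(outer(z, z, "=="), 0.05, 0.01)  # sparse block probabilities
A <- matrix(rbinom(n * n, 1, p), n, n)
A[lower.tri(A)] <- t(A)[lower.tri(A)]       # symmetrize
diag(A) <- 0

tau   <- mean(colSums(A)) / n               # perturbation strength (assumed)
A_reg <- A + tau                            # perturbed adjacency matrix
eig   <- eigen(A_reg, symmetric = TRUE)
est   <- kmeans(eig$vectors[, 1:2], centers = 2, nstart = 20)$cluster
table(est, z)                               # recovery of the two blocks
```

In the talk's pipeline, a labeling like `est` would then serve as the starting value for the pseudo-likelihood iterations.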

http://www.utstat.toronto.edu/wordpress/wp-content/uploads/2014/01/ArashAminiFeb112014.pdf

Feb
12
Wed
Beyond Measurement Artifacts: Integrating Measurement Equivalence with Theory Development in Cross Cultural Research @ York University Schulich School of Business, N106
Feb 12 @ 11:30 – 13:00
The Organization Studies area at Schulich invites you to attend a seminar with Professor Gordon Cheung (The Chinese University of Hong Kong). Prof. Cheung is an outstanding and innovative researcher with expertise in research methods and structural equation modelling, as well as international and cross-cultural research.
Prof. Cheung is currently a professor in the Department of Management at The Chinese University of Hong Kong. He has published more than 20 articles on research methodology, which have been cited about 4,000 times. He has twice received the Sage Best Paper Award from the Research Methods Division of the Academy of Management (2000 and 2009), and in 2008 he received the Best Published Paper Award from Organizational Research Methods. He served as Division Chair of the Research Methods Division of the Academy of Management in 2006/07.

Prof. Cheung’s research interest in measurement equivalence/invariance (ME/I) began more than 15 years ago, and he has published over 10 papers in this area. His papers “Testing Factorial Invariance Across Groups: A Reconceptualization and Proposed New Method,” published in the Journal of Management in 1999, and “Assessing Extreme and Acquiescence Response Sets in Cross-Cultural Research Using Structural Equations Modeling,” published in the Journal of Cross-Cultural Psychology in 2000, defined how ME/I should be examined. His paper “Evaluating Goodness-of-Fit Indices for Testing Measurement Invariance,” published in 2002 in Structural Equation Modeling, which set the standard for how nested models should be compared, has received more than 2,000 citations. The paper “Testing Equivalence in the Structure, Means, and Variances of Higher-Order Constructs with Structural Equation Modeling,” published in 2008 in Organizational Research Methods, received the 2008 Best Paper Published in Organizational Research Methods Award.

Feb
13
Thu
Computational Foundations of Bayesian Inference and Probabilistic Programming @ Sidney Smith Hall, Room 1074
Feb 13 @ 15:30 – 16:45

Thursday February 13, 2014 at 3:30pm

Sidney Smith Hall, Room 1074

**Refreshments will be served at 3:15pm

Computational Foundations of Bayesian Inference and Probabilistic Programming

Dr. Daniel Roy, University of Cambridge

The complexity, scale, and variety of data sets we now have access to have grown enormously, and present exciting opportunities for new applications.  Just as high-level programming languages and compilers empowered experts to solve computational problems more quickly, and made it possible for non-experts to solve them at all, a number of high-level probabilistic programming languages with computationally universal inference engines have been developed with the potential to similarly transform the practice of Bayesian statistics.  These systems provide formal languages for specifying probabilistic models compositionally, and general algorithms for turning these specifications into efficient algorithms for inference.

In this talk, I will address three key questions at the theoretical and algorithmic foundations of probabilistic programming—and probabilistic modeling more generally—that can be answered using tools from probability theory, computability and complexity theory, and nonparametric Bayesian statistics.  Which Bayesian inference problems can be automated, and which cannot?  Can probabilistic programming languages represent the stochastic processes at the core of state-of-the-art nonparametric Bayesian models?  And if not, can we construct useful approximations?  I’ll close by relating these questions to other challenges and opportunities ahead at the intersections of computer science, statistics, and probability.
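A toy R illustration of the core idea: the model is written as an ordinary generative program, and a single generic inference routine (here, naive rejection sampling) is reused unchanged to condition on observed output. The model, the observation, and the routine are illustrative assumptions, not any particular system from the talk.

```r
## A generative program plus one generic inference routine.
set.seed(6)
model <- function() {
  bias  <- runif(1)                        # latent parameter
  flips <- rbinom(1, size = 10, prob = bias)
  list(bias = bias, flips = flips)
}

## Generic inference: condition any generative program on its output
rejection_sample <- function(program, condition, n = 1e4) {
  draws <- replicate(n, {
    s <- program()
    if (condition(s)) s$bias else NA
  })
  draws[!is.na(draws)]
}

post <- rejection_sample(model, function(s) s$flips == 8)
mean(post)   # approx. posterior mean of the bias given 8 heads out of 10
```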

http://www.utstat.toronto.edu/wordpress/?page_id=18

Feb
24
Mon
Integrating Ratings of Child Psychopathology across Multiple Informants @ Norm Endler Room (BSB 164)
Feb 24 @ 10:15 – 11:15
Quantitative Methods Forum @ Norm Endler Room (BSB 164)

Speaker: Dr. Andrea Howard, Carleton University
Department of Psychology

Title: Integrating Ratings of Child Psychopathology across Multiple Informants

Abstract: One of the most significant challenges facing researchers and practitioners who assess child psychopathology is how to integrate information about a child’s symptoms from multiple sources when those sources provide discrepant ratings (De Los Reyes & Kazdin, 2004). It is common to obtain ratings for a single target child from informants such as parents, teachers, and peers, but it is less clear how to combine the information provided by multiple informants to derive an integrated measure of the psychopathology trait of interest that is not confounded with informants’ unique perspectives. A new approach to this problem specifies a trifactor measurement model to analytically disaggregate informants’ unique perspectives on children’s symptoms from a cross-informant consensus rating of their true symptoms (Bauer, Howard et al., 2013). Preliminary results from a new study expand the trifactor model to a three-informant, multi-trait assessment of inattention and hyperactivity/impulsivity symptoms, using data drawn from baseline assessments of children enrolled in a randomized controlled trial of treatments for Attention-Deficit/Hyperactivity Disorder (ADHD).
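To show the shape of such a model, here is a schematic lavaan specification in the spirit of the trifactor idea: each rating loads on a cross-informant consensus factor plus its own informant's perspective factor, with all factors kept orthogonal. This is a simplified sketch; the variable names and data frame are hypothetical, and the published trifactor model includes additional structure.

```r
## Schematic bifactor-style sketch of the trifactor idea in lavaan.
## m1-m3 are hypothetical mother ratings, t1-t3 teacher ratings.
library(lavaan)
trifactor <- '
  consensus =~ m1 + m2 + m3 + t1 + t2 + t3   # shared view of the child
  mother    =~ m1 + m2 + m3                  # mother-specific perspective
  teacher   =~ t1 + t2 + t3                  # teacher-specific perspective
'
fit <- cfa(trifactor, data = ratings_data,   # ratings_data is hypothetical
           orthogonal = TRUE, std.lv = TRUE)
summary(fit, fit.measures = TRUE)
```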

Suggested Readings:
Bauer, D. J., Howard, A. L., Baldasaro, R. E., Curran, P. J., Hussong, A. M., Chassin, L., & Zucker, R. A. (2013). A trifactor model for integrating ratings across multiple informants. Psychological Methods, 18(4), 475-493.