Bayesian Statistics


To be announced

Scope

Nowadays, with advances in computing and Markov chain Monte Carlo (MCMC) algorithms, Bayesian statistics has become a powerful alternative to traditional frequentist statistics. The philosophy behind Bayesian statistics is discussed. Practical examples are studied and analysed using the (freeware) program WinBUGS. Participants will be surprised how easily they can tackle problems that are quite complicated to handle with traditional frequentist statistics.

Programme

There are two branches of statistics: frequentist statistics and Bayesian statistics. These branches involve different concepts of probability: probability as a "frequency in the long run" or as a "degree of belief". "Degree of belief" comprises the concept of a prior distribution. Bayesians combine two sources of information: prior information, summarised in a prior distribution, and data, represented by a model and its associated likelihood. The idea of a likelihood function and of maximum likelihood estimation in frequentist statistics is briefly introduced (or refreshed). The two sources of information in Bayesian statistics are combined with a theorem of Thomas Bayes. This theorem forms the basis of Bayesian statistics. Possibly, Bayes would have been quite surprised about the consequences of his paper for statistics.
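For a binary hypothesis, combining a prior probability with the likelihood of the data via Bayes' theorem takes only a few lines. The sketch below is in the spirit of the haemophilia example treated in the course, but the specific numbers are illustrative assumptions, not necessarily those used in the course material:

```python
# Bayes' theorem for a binary hypothesis H, applied sequentially.
# Illustrative numbers only (hypothetical carrier-status example).

def posterior(prior, lik_h, lik_not_h):
    """P(H | data) from P(H), P(data | H) and P(data | not H)."""
    evidence = prior * lik_h + (1 - prior) * lik_not_h
    return prior * lik_h / evidence

# Prior: a woman is a carrier of a recessive allele with probability 0.5.
# Likelihood: a carrier's son is unaffected with probability 0.5;
# a non-carrier's son is unaffected with probability 1.
# Update after observing two unaffected sons:
p = 0.5
for _ in range(2):
    p = posterior(p, 0.5, 1.0)
print(p)  # 0.2
```

Each observed son lowers the carrier probability: from 0.5 to 1/3 after the first, and to 0.2 after the second, showing how the data pull the posterior away from the prior.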

We briefly introduce (and probably refresh):

  • The concept of conditional probability
  • Bayes' theorem
  • An example with haemophilia. This example, which fits into the framework of frequentist statistics as well, has the advantage that implementing prior information is straightforward. It offers a first impression of how prior information and data are combined.
  • Combining the prior information (prior distribution) and the information in the data (likelihood) into a posterior distribution. The posterior distribution summarises all we know about a parameter or parameters based on the prior knowledge and the data.
  • Posterior inference = deriving conclusions from the posterior distribution.
  • Informative priors, non-informative priors, improper priors, conjugate priors.
  • Posterior inference with MCMC, Gibbs sampling and the WinBUGS package.
  • Examples with different models from different areas of application.
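In the course, posterior inference with MCMC is carried out in WinBUGS; the following plain-Python sketch of a Gibbs sampler for a normal model with unknown mean and variance only illustrates the idea of alternating draws from full conditional distributions. The priors, data, and chain settings below are assumptions for demonstration, not the course's own examples:

```python
# Minimal Gibbs sampler for y_i ~ Normal(mu, sigma^2), both parameters unknown.
# Assumed conjugate priors: mu ~ N(m0, v0), sigma^2 ~ Inv-Gamma(a0, b0).
import random
import statistics

random.seed(1)

# Simulated data (assumed true values: mu = 5, sd = 2)
y = [random.gauss(5.0, 2.0) for _ in range(200)]
n, ybar = len(y), statistics.fmean(y)

m0, v0 = 0.0, 100.0   # prior mean and variance for mu (vague)
a0, b0 = 2.0, 2.0     # Inv-Gamma shape and scale for sigma^2

mu, sigma2 = 0.0, 1.0  # starting values
mus = []
for it in range(3000):
    # 1) Draw mu | sigma^2, y  ~  Normal(mean_n, var_n)
    var_n = 1.0 / (1.0 / v0 + n / sigma2)
    mean_n = var_n * (m0 / v0 + n * ybar / sigma2)
    mu = random.gauss(mean_n, var_n ** 0.5)
    # 2) Draw sigma^2 | mu, y  ~  Inv-Gamma(a0 + n/2, b0 + 0.5*sum((y_i-mu)^2))
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * sum((yi - mu) ** 2 for yi in y)
    sigma2 = 1.0 / random.gammavariate(a_n, 1.0 / b_n)
    if it >= 500:      # discard burn-in draws
        mus.append(mu)

print(round(statistics.fmean(mus), 2))  # posterior mean of mu, close to ybar
```

With a vague prior and 200 observations, the posterior mean of mu ends up close to the sample mean; WinBUGS automates exactly this kind of sampling for much richer models.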
 
General information
 
Target group: The course is aimed at PhD candidates and other academics
Group size: 24 participants
Course duration: 2 days
Language of instruction: English
Frequency of recurrence: Once a year (autumn)
Number of credits: 0.6 ECTS
Lecturers: Dr. Gerrit Gort, Dr. Bas Engel
Prior knowledge: Basic statistics
Location: Wageningen University Campus
Options for accommodation: Accommodation is not included in the fee of the course, but there are several possibilities in Wageningen. For information on B&Bs and hotels in Wageningen, please visit proefwageningen.nl. Another option is Short Stay Wageningen. Furthermore, Airbnb offers several rooms in the area. Note that besides the restaurants in Wageningen, there are also options to have dinner at Wageningen Campus.

 

Fees ¹

Generally, the following fees apply for this course, but note that the actual fees may be somewhat different for the next edition of this course.

PE&RC ² / SENSE / WASS / EPS PhD candidates with an approved TSP: € 150,-
All other PhD candidates, postdocs and other academic staff: € 300,-
Participants from the private sector: € 600,-

¹ The course fee includes a reader, coffee/tea, and lunches. Accommodation is not included (NB: options for accommodation are given above).
² Those defending their thesis at Wageningen University and those who are a member of IBED Amsterdam or VU Amsterdam.

More information

Claudius van de Vijver (PE&RC)
Phone: +31 (0) 317 485116
Email: claudius.vandevijver@wur.nl

Lennart Suselbeek (PE&RC)
Phone: +31 (0) 317 485426
Email: lennart.suselbeek@wur.nl

Registration

This course is currently closed for registration, and will re-open once a new edition of this course is scheduled.