Learning Series: Stochastic/Markov Processes, Infinitesimal Generators, & Bayesian Networks (SIGNs), Part 0

As part of preparing for an internship, and for the sake of learning in its own right, I’m going to be writing a series of posts on the mathematics, statistics, and machine learning perspectives surrounding stochastic processes and probability theory as they relate to some papers my manager at IBM, Kush Varshney, recommended.

In particular, I’ll be writing up my own understanding of the theory of stochastic processes (likely emphasizing Markov & Feller processes) as it relates to continuous time Bayesian Networks used in machine learning.

I’m hoping to give a few different levels of explanation: if I have the time and discipline, I’ll try to give one to three levels of description for the topic(s) covered in each post, ranging from layman to undergraduate to graduate level. Other times, if there are many topics under one umbrella, I may just cover those fields’ perspectives, the language they use, and their interrelationships.

General list of topics I’ll discuss (not exhaustive):

  • Semigroup theory as related to Markov processes

  • Terminology between fields for the same objects (e.g., infinitesimal generators & rate/intensity matrices; see the quick example after this list)

  • Maybe some linear algebra review

  • Time-homogeneous Markov processes (discrete and continuous time)

  • Dynamic Bayesian Networks

  • Continuous Time Bayesian Networks

  • Event-Driven Continuous Time Bayesian Networks

  • Dynamic Programming and Reinforcement Learning as related to these topics

  • Functional analysis as needed throughout

  • Maybe formalizing an idea sketch for some multi-agent reinforcement learning and agent-based modeling at the end

  • Comments about intuition, examples, questions I had along the way, and the best answers I have so far
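As a quick preview of the terminology bullet above (my own sketch of the standard finite-state setup, not tied to any particular paper): for a continuous-time Markov chain on a finite state space, what probabilists call the infinitesimal generator is the same object the CTBN literature calls the rate or intensity matrix $Q$. Its off-diagonal entries $q_{ij} \ge 0$ are transition rates, each diagonal entry is $q_{ii} = -\sum_{j \ne i} q_{ij}$ so that rows sum to zero, and the time-$t$ transition matrix is recovered via the matrix exponential:

$$P(t) = e^{tQ}, \qquad \left.\frac{d}{dt} P(t)\right|_{t=0} = Q.$$

Later posts will unpack why the semigroup $\{P(t)\}_{t \ge 0}$ and its generator $Q$ carry the same information.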

For now, I’ll start by making the title shorter: I’ll refer to this series as the SIGNs series, a rough acronym for (S)tochastic processes, (I)nfinitesimal (G)enerators, & Bayesian (N)etworks. It’s not perfect, but it’s way better than the full title.

Conor Artman