SIAM Student Seminar
Friday, September 11, 2009 - 13:00
1 hour (actually 50 minutes)
We develop a stochastic control problem from a continuous-time Principal-Agent model in which both the principal and the agent have imperfect information and hold different beliefs about the project. We aim to maximize the agent's utility function under the agent's belief. Via the corresponding Hamilton-Jacobi-Bellman (HJB) equation, we prove that the value function is jointly continuous and satisfies the Dynamic Programming Principle. These properties lead directly to the conclusion that the value function is a viscosity solution of the HJB equation. Uniqueness of the solution is then also established.
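As an illustrative sketch only (the abstract does not specify the model's dynamics, beliefs, or utility, so the symbols below are generic assumptions, not the speaker's actual model): for controlled diffusion dynamics and a terminal-utility objective, the value function of a stochastic control problem typically satisfies an HJB equation of the following form.

```latex
% Generic one-dimensional sketch; b, \sigma, U, and the control set A
% are hypothetical placeholders, not taken from the talk.
% State dynamics: dX_t = b(X_t, a_t)\,dt + \sigma(X_t, a_t)\,dW_t,
% value function: V(t,x) = \sup_{a \in A} \mathbb{E}\big[ U(X_T) \mid X_t = x \big].
\[
  \partial_t V(t,x)
  + \sup_{a \in A}\Big\{ b(x,a)\,\partial_x V(t,x)
  + \tfrac{1}{2}\,\sigma^2(x,a)\,\partial_{xx} V(t,x) \Big\} = 0,
  \qquad V(T,x) = U(x).
\]
```

When V lacks the smoothness this equation formally requires, it is interpreted in the viscosity sense, which is the solution concept the talk establishes.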