Basic Wiener Process
$$\mathrm{d}z = \varepsilon\sqrt{\mathrm{d}t}$$
where $\varepsilon \sim \phi(0, 1)$, a standardized normal distribution.
Generalized Wiener Process
A generalized Wiener process for a variable $x$ can be written as
$$\mathrm{d}x = a\,\mathrm{d}t + b\,\mathrm{d}z$$
where $a$ and $b$ are constants.
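As a quick numerical illustration (a sketch with arbitrarily chosen `a`, `b`, and horizon, assuming NumPy is available), simulating the discretized process confirms that $x_T - x_0$ is normally distributed with mean $aT$ and variance $b^2T$:

```python
import numpy as np

# Simulate dx = a*dt + b*dz for constant a, b (generalized Wiener process).
a, b = 0.3, 0.5                      # arbitrary drift and volatility parameters
T, n_steps, n_paths = 2.0, 400, 50_000
dt = T / n_steps

rng = np.random.default_rng(0)
dz = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)  # dz = eps*sqrt(dt)
x_T = (a * dt + b * dz).sum(axis=1)                         # x_T - x_0 per path

print(x_T.mean())   # close to a*T = 0.6
print(x_T.var())    # close to b**2*T = 0.5
```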
Ito Process
An Ito process for a variable $x$ can be written as
$$\mathrm{d}x = a(x, t)\,\mathrm{d}t + b(x, t)\,\mathrm{d}z$$
where the drift $a(x, t)$ and the volatility $b(x, t)$ are functions of both $x$ and $t$.
Ito's Lemma
Suppose that the value of a variable $x$ follows the Ito process above, and let $G$ be a function of $x$ and $t$. Then
$$dG = \left( \frac{\partial G}{\partial x}a(x,t) + \frac{\partial G}{\partial t} + \frac{1}{2}\frac{\partial^2 G}{\partial x^2}b(x,t)^2 \right)\mathrm{d}t + \frac{\partial G}{\partial x}b(x,t)\,\mathrm{d}z$$
Note that $G$ also follows an Ito process.
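The lemma can be sanity-checked by Monte Carlo. Below is a sketch (constants chosen arbitrarily) for $G = x^2$ under $dx = a\,dt + b\,dz$: the lemma gives $dG = (2xa + b^2)\,dt + 2xb\,dz$, and integrating the drift implies $E[G_T] = (x_0 + aT)^2 + b^2T$.

```python
import numpy as np

# Check Ito's Lemma for G = x^2 with dx = a*dt + b*dz (a, b constant).
# The lemma gives dG = (2*x*a + b**2)*dt + 2*x*b*dz, which implies
# E[G_T] = (x0 + a*T)**2 + b**2 * T.
x0, a, b, T = 1.0, 0.5, 0.4, 1.0
n_steps, n_paths = 200, 100_000
dt = T / n_steps

rng = np.random.default_rng(1)
x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += a * dt + b * np.sqrt(dt) * rng.standard_normal(n_paths)

mc = (x**2).mean()
theory = (x0 + a * T)**2 + b**2 * T   # = 2.41
print(mc, theory)                      # the two should nearly agree
```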
Proof
First of all, let's work out the mean and variance of $Y = X^2$, where $X \sim \phi(0, \sigma^2)$.
Since $\sigma^2 = E(X^2) - E(X)^2$ and $E(X) = 0$, we have
$$E(X^2) = \sigma^2$$
And because $\mathrm{Var}(X^2) = E(X^4) - (E(X^2))^2$, we also need $E(X^4)$:
$$\begin{aligned} E(X^4) &= \int_{-\infty}^{+\infty} \frac{x^4}{\sqrt{2\pi}\sigma}e^{-\frac{x^2}{2\sigma^2}}\,dx \\ &= \int_{-\infty}^{+\infty} -\frac{\sigma^2x^3}{\sqrt{2\pi}\sigma}\,de^{-\frac{x^2}{2\sigma^2}} \\ &= \frac{1}{\sqrt{2\pi}\sigma}\left( -\sigma^2x^3e^{-\frac{x^2}{2\sigma^2}}\Big|_{-\infty}^{+\infty} + 3\sigma^2\int_{-\infty}^{+\infty} x^2e^{-\frac{x^2}{2\sigma^2}}\,dx \right) \\ &= \frac{1}{\sqrt{2\pi}\sigma}\left( 0 - 3\sigma^4xe^{-\frac{x^2}{2\sigma^2}}\Big|_{-\infty}^{+\infty} + 3\sigma^4\int_{-\infty}^{+\infty} e^{-\frac{x^2}{2\sigma^2}}\,dx \right) \\ &= 3\sigma^4 \end{aligned}$$
In fact, the generalized form for $E[X^{2n}]$ is
$$E\left[ X^{2n}\right] = (2n - 1)!!\,\sigma^{2n}$$
where $!!$ denotes the double factorial.
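The double-factorial formula is easy to verify by simulation; a minimal sketch (σ chosen arbitrarily, NumPy assumed):

```python
import numpy as np

# Monte Carlo check of E[X^(2n)] = (2n-1)!! * sigma^(2n) for X ~ N(0, sigma^2).
sigma = 1.3
rng = np.random.default_rng(2)
x = rng.standard_normal(2_000_000) * sigma

ratios = []
for n, dfact in [(1, 1), (2, 3), (3, 15)]:      # (2n-1)!! = 1, 3, 15
    ratios.append((x**(2 * n)).mean() / (dfact * sigma**(2 * n)))
print(ratios)   # each ratio should be close to 1
```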
Thus,
$$\begin{aligned} \mathrm{Var}(X^2) &= E(X^4) - (E(X^2))^2 \\ &= 3\sigma^4 - \sigma^4 \\ &= 2\sigma^4 \end{aligned}$$
Now consider a continuous and differentiable function $G$ of two variables $x$ and $t$. A Taylor series expansion of $\Delta G$ gives
$$\Delta G = \frac{\partial G}{\partial x}\Delta x + \frac{\partial G}{\partial t}\Delta t + \frac{1}{2}\frac{\partial^2 G}{\partial x^2}\Delta x^2 + \frac{\partial^2 G}{\partial x \partial t}\Delta x\Delta t + \frac{1}{2}\frac{\partial^2 G}{\partial t^2}\Delta t^2 + \dots$$
Discretize the Ito process to
$$\begin{aligned} \Delta x &= a(x, t)\Delta t + b(x, t)\Delta z \\ &= a(x, t)\Delta t + b(x, t)\varepsilon\sqrt{\Delta t} \end{aligned}$$
Then we have
$$\Delta x^2 = b^2(x, t)\varepsilon^2\Delta t + \text{terms of higher order in } \Delta t$$
This shows that the term involving $\Delta x^2$ in the Taylor expansion is of order $\Delta t$ and cannot be ignored. Recall the results above:
$$E(X^2) = \sigma^2 \quad \text{and} \quad \mathrm{Var}(X^2) = 2\sigma^4$$
Since the variance of a standard normal distribution is 1, $E(\varepsilon^2\Delta t) = \Delta t$ and $\mathrm{Var}(\varepsilon^2\Delta t) = 2\Delta t^2$.
The variance of the change in a stochastic variable in time $\Delta t$ is proportional to $\Delta t$, not $\Delta t^2$. The variance of $\varepsilon^2\Delta t$ is therefore of too high an order to matter: as $\Delta t \to 0$, $\varepsilon^2\Delta t$ becomes non-stochastic and can be treated as equal to its expected value, $\Delta t$.
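The point can be seen numerically: as $\Delta t$ shrinks, the standard deviation of $\varepsilon^2\Delta t$ (of order $\Delta t$) becomes negligible relative to the leading stochastic term $\varepsilon\sqrt{\Delta t}$ (of order $\sqrt{\Delta t}$). A minimal sketch:

```python
import numpy as np

# Compare the randomness of eps^2*dt (std ~ sqrt(2)*dt) with that of the
# leading stochastic term eps*sqrt(dt) (std ~ sqrt(dt)) as dt -> 0.
rng = np.random.default_rng(3)
eps = rng.standard_normal(1_000_000)

for dt in (1e-1, 1e-2, 1e-3, 1e-4):
    ratio = np.std(eps**2 * dt) / np.std(eps * np.sqrt(dt))
    print(dt, ratio)   # ratio ~ sqrt(2*dt), shrinking with dt
```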
Taking limits as $\Delta x$ and $\Delta t$ tend to zero, we may therefore write $dx^2 = b^2(x,t)\,dt$, and the Taylor expansion reduces to
$$dG = \frac{\partial G}{\partial x}dx + \frac{\partial G}{\partial t}dt + \frac{1}{2}\frac{\partial^2 G}{\partial x^2}b^2(x,t)\,dt$$
Substituting $\mathrm{d}x = a(x, t)\,\mathrm{d}t + b(x, t)\,\mathrm{d}z$, we obtain
$$dG = \left( \frac{\partial G}{\partial x}a(x,t) + \frac{\partial G}{\partial t} + \frac{1}{2}\frac{\partial^2 G}{\partial x^2}b^2(x,t) \right)dt + \frac{\partial G}{\partial x}b(x,t)\,dz$$

Property of Stock Prices
The Process for a Stock Return
The most widely used model of stock returns is
$$\frac{dS}{S} = \mu\,\mathrm{d}t + \sigma\,\mathrm{d}z \quad \text{and} \quad \frac{\Delta S}{S} \sim \phi(\mu\Delta t, \sigma^2\Delta t)$$
where $\mu$ is the stock's expected rate of return and $\sigma$ is the volatility of the stock price. In a risk-neutral world, $\mu$ equals the risk-free rate $r$.
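A short simulation of the discretized model (parameters arbitrary, NumPy assumed) shows that the one-period returns $\Delta S/S$ indeed have mean $\mu\Delta t$ and variance $\sigma^2\Delta t$:

```python
import numpy as np

# Simulate one step of dS/S = mu*dt + sigma*dz across many paths.
mu, sigma, dt = 0.10, 0.30, 1 / 252       # e.g. daily steps, annualized params
rng = np.random.default_rng(4)
eps = rng.standard_normal(1_000_000)
returns = mu * dt + sigma * np.sqrt(dt) * eps   # Delta S / S

print(returns.mean(), mu * dt)            # sample mean vs mu*dt
print(returns.var(), sigma**2 * dt)       # sample variance vs sigma^2*dt
```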
The Log Return
The Taylor series expansion of $\ln(1+x)$ about $x = 0$ is
$$\ln(1+x) = 0 + \frac{1}{1+0}\times x - \frac{1}{2}\frac{1}{(1+0)^2} \times x^2 + \dots \approx x$$
for small $x$.
Thus,
$$\ln\frac{S_t}{S_0} = \ln\left(1 + \frac{S_t - S_0}{S_0}\right) \approx \frac{\Delta S}{S_0} \quad \text{ONLY for small } \Delta S$$
where $S_t$ is the stock price at time $t$.
The log rate of return is also called the continuously compounded rate of return.
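How good the approximation is, and how it degrades for large moves, is easy to check; a small sketch:

```python
import math

# ln(1+x) ~ x is accurate for small returns and poor for large ones.
for x in (0.001, 0.01, 0.1, 0.5):
    print(x, math.log(1 + x), math.log(1 + x) - x)   # error grows with x
```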
The Lognormal Property of Stock Prices
Define $G = \ln S$, where $S$ follows
$$\frac{dS}{S} = \mu\,\mathrm{d}t + \sigma\,\mathrm{d}z$$
Since
$$\frac{\partial G}{\partial S} = \frac{1}{S}, \quad \frac{\partial^2 G}{\partial S^2} = -\frac{1}{S^2}, \quad \frac{\partial G}{\partial t} = 0$$
using Ito's Lemma with $a(S, t) = \mu S$ and $b(S, t) = \sigma S$, we get
$$dG = \left( \mu - \frac{\sigma^2}{2}\right)dt + \sigma\,dz$$
Since $\mu$ and $\sigma$ are constant, $G = \ln S$ follows a generalized Wiener process with constant drift rate $\mu - \sigma^2/2$ and constant variance rate $\sigma^2$. The change in $\ln S$ between time 0 and time $T$ is therefore normally distributed with mean $\left( \mu - \sigma^2/2 \right)T$ and variance $\sigma^2T$:
$$\ln\frac{S_T}{S_0} \sim \phi\left[\left( \mu - \sigma^2/2 \right)T, \sigma^2T \right]$$
where $S_T$ is the stock price at time $T$ and $S_0$ is the stock price at time 0.
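This can be verified without assuming the result: simulate $dS = \mu S\,dt + \sigma S\,dz$ by Euler discretization and inspect the distribution of $\ln(S_T/S_0)$. A sketch with arbitrary parameters, assuming NumPy:

```python
import numpy as np

# Euler-simulate dS = mu*S*dt + sigma*S*dz and check that ln(S_T/S_0)
# has mean (mu - sigma^2/2)*T and variance sigma^2*T.
mu, sigma, T = 0.10, 0.30, 1.0
S0, n_steps, n_paths = 100.0, 500, 20_000
dt = T / n_steps

rng = np.random.default_rng(5)
S = np.full(n_paths, S0)
for _ in range(n_steps):
    S *= 1 + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

log_ret = np.log(S / S0)
print(log_ret.mean(), (mu - sigma**2 / 2) * T)   # both near 0.055
print(log_ret.var(), sigma**2 * T)               # both near 0.09
```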
Black-Scholes Equation
$$c = S_0N(d_1) - Ke^{-rT}N(d_2)$$
$$p = Ke^{-rT}N(-d_2) - S_0N(-d_1)$$
where
$$d_1 = \frac{\ln\left( S_0/K \right) + \left( r + \sigma^2/2\right)T}{\sigma\sqrt{T}}$$
$$d_2 = \frac{\ln\left( S_0/K \right) + \left( r - \sigma^2/2\right)T}{\sigma\sqrt{T}} = d_1 - \sigma\sqrt{T}$$

Proof
Key Result
If $V$ is lognormally distributed, the mean of $\ln V$ is $m$, and the standard deviation of $\ln V$ is $w$, then
$$E(\max(V - K, 0)) = E(V)N(d_1) - KN(d_2)$$
where
$$d_1 = \frac{\ln\left[ E(V)/K\right] + w^2/2}{w}$$
$$d_2 = \frac{\ln\left[ E(V)/K\right] - w^2/2}{w}$$
Note that, for the call option,
$N(d_2)$ is the probability that the option will be exercised in a risk-neutral world, and
$S_0e^{rT}N(d_1)/N(d_2)$ is the expected stock price at time $T$ in a risk-neutral world, conditional on the option being exercised.
Similarly, for the put option,
$N(-d_2)$ is the risk-neutral probability of exercise, and
$S_0e^{rT}N(-d_1)/N(-d_2)$ is the conditional expected stock price at time $T$.
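Before proving the key result, it can be checked by simulation; a sketch with arbitrarily chosen $m$, $w$, $K$ (NumPy assumed, standard normal CDF via `math.erf`):

```python
import math

import numpy as np

# Monte Carlo check of E[max(V-K, 0)] = E(V)*N(d1) - K*N(d2)
# for lognormal V with ln V ~ N(m, w^2).
m, w, K = 0.2, 0.5, 1.1
rng = np.random.default_rng(6)
V = np.exp(m + w * rng.standard_normal(2_000_000))


def N(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))


EV = math.exp(m + w**2 / 2)               # mean of a lognormal variable
d1 = (math.log(EV / K) + w**2 / 2) / w
d2 = (math.log(EV / K) - w**2 / 2) / w

mc = np.maximum(V - K, 0).mean()
closed_form = EV * N(d1) - K * N(d2)
print(mc, closed_form)                    # the two should nearly agree
```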
Proof of Key Result
Define $g(V)$ as the probability density function of $V$. Then
$$E\left[ \max(V - K, 0) \right] = \int_{K}^{\infty} (V - K)g(V)\,dV$$
From the properties of the lognormal distribution,
$$E(V) = e^{m + \frac{1}{2}w^2} \quad \Rightarrow \quad m = \ln E(V) - w^2/2$$
Define a new variable
$$Q = f(V) = \frac{\ln V - m}{w}$$
It follows a standard normal distribution. Denote the density function of $Q$ by $h(Q)$:
$$h(Q) = \frac{1}{\sqrt{2\pi}}e^{-Q^2/2}$$
And since $f(V)$ is strictly increasing, we can use the formula for the density of a strictly increasing function of a random variable. With support $R_Q = \left\{ q = f(v) : v \in R_V \right\}$, the density is
$$h(q) = \left\{ \begin{aligned} &g(f^{-1}(q))\frac{df^{-1}(q)}{dq}, && \text{if } q \in R_Q \\ &0, && \text{if } q \notin R_Q \end{aligned} \right.$$
Hence
$$\begin{aligned} E\left[ \max(V - K, 0) \right] &= \int_{K}^{\infty} (V - K)g(V)\,dV \\ &= \int_{f(K)}^{\infty} \left[ f^{-1}(Q) - K \right]g(f^{-1}(Q))\frac{d(f^{-1}(Q))}{dQ}\,dQ \\ &= \int_{(\ln K - m)/w}^{\infty} \left( e^{Qw + m} - K \right)h(Q)\,dQ \\ &= \int_{(\ln K - m)/w}^{\infty} e^{Qw + m}h(Q)\,dQ - K\int_{(\ln K - m)/w}^{\infty} h(Q)\,dQ \end{aligned}$$
Now
$$\begin{aligned} e^{Qw + m}h(Q) &= \frac{1}{\sqrt{2\pi}}e^{(-Q^2 + 2Qw + 2m)/2} \\ &= \frac{1}{\sqrt{2\pi}}e^{\left[ -(Q - w)^2 + 2m + w^2 \right]/2} \\ &= \frac{e^{m + w^2/2}}{\sqrt{2\pi}}e^{-(Q - w)^2/2} \\ &= e^{m + w^2/2}h(Q - w) \end{aligned}$$
This means that
$$\begin{aligned} E(\max(V - K, 0)) &= e^{m + w^2/2}\int_{(\ln K - m)/w}^{\infty}h(Q - w)\,dQ - K\int_{(\ln K - m)/w}^{\infty}h(Q)\,dQ \\ &= e^{m + w^2/2}\int_{(\ln K - m)/w - w}^{\infty}h(Q - w)\,d(Q - w) - K\int_{(\ln K - m)/w}^{\infty}h(Q)\,dQ \\ &= e^{m + w^2/2}\left[1 - N\left(\frac{\ln K - m}{w} - w\right)\right] - K\left[1 - N\left(\frac{\ln K - m}{w}\right)\right] \\ &= e^{m + w^2/2}N\left(\frac{-\ln K + m}{w} + w\right) - KN\left(\frac{-\ln K + m}{w}\right) \\ &= E(V)N\left(\frac{-\ln K + \ln E(V) - w^2/2}{w} + w\right) - KN\left(\frac{-\ln K + \ln E(V) - w^2/2}{w}\right) \\ &= E(V)N\left(\frac{\ln\frac{E(V)}{K} + w^2/2}{w}\right) - KN\left(\frac{\ln\frac{E(V)}{K} - w^2/2}{w}\right) \\ &= E(V)N(d_1) - KN(d_2) \end{aligned}$$
where
$$d_1 = \frac{\ln\frac{E(V)}{K} + w^2/2}{w} \qquad d_2 = \frac{\ln\frac{E(V)}{K} - w^2/2}{w}$$

The Black-Scholes Result
We now consider a call option on a non-dividend-paying stock maturing at time $T$. The strike price is $K$, the risk-free rate is $r$, the current stock price is $S_0$, the stock price at maturity is $S_T$, and the volatility is $\sigma$.
In a risk-neutral world, the call price is the discounted expected payoff:
$$c = e^{-rT}\hat{E}(\max(S_T - K, 0))$$
where $\hat{E}$ denotes the expected value in a risk-neutral world. Under this measure, $S_T$ satisfies
$$\ln\frac{S_T}{S_0} \sim \phi\left[\left( r - \sigma^2/2 \right)T, \sigma^2T \right]$$
Based on the properties of the lognormal distribution, $\hat{E}(S_T/S_0) = e^{rT}$ and $\mathrm{Var}\left( S_T/S_0 \right) = e^{2rT}\left(e^{\sigma^2T} - 1 \right)$. Applying the key result with $E(V) = \hat{E}(S_T) = S_0e^{rT}$ and $w = \sigma\sqrt{T}$ gives
$$c = e^{-rT}\left[ S_0e^{rT}N(d_1) - KN(d_2) \right] = S_0N(d_1) - Ke^{-rT}N(d_2)$$
where
$$d_1 = \frac{\ln\left[ \hat{E}(S_T)/K\right] + \sigma^2T/2}{\sigma\sqrt{T}} = \frac{\ln(S_0/K) + (r + \sigma^2/2)T}{\sigma\sqrt{T}}$$
$$d_2 = \frac{\ln\left[ \hat{E}(S_T)/K\right] - \sigma^2T/2}{\sigma\sqrt{T}} = \frac{\ln(S_0/K) + (r - \sigma^2/2)T}{\sigma\sqrt{T}}$$

Proof of the Put Option
We could proceed in the same way as above; instead, here is an alternative route.
We know that, for the put option, $N(-d_2)$ is the risk-neutral probability that the option is exercised, i.e. that $S_T < K$. Thus
$$\begin{aligned} \hat{E}(\max(K - S_T, 0)) &= \int_{-\infty}^{-d_2}\left( K - S_0e^{\left(r - \frac{1}{2}\sigma^2\right)T + \sigma\sqrt{T}\varepsilon}\right)f(\varepsilon)\,d\varepsilon \\ &= KN(-d_2) - \int_{-\infty}^{-d_2}S_0e^{rT}\frac{1}{\sqrt{2\pi}}e^{-\frac{(\varepsilon - \sigma\sqrt{T})^2}{2}}\,d\varepsilon \\ &= KN(-d_2) - \int_{-\infty}^{-d_2 - \sigma\sqrt{T}}S_0e^{rT}\frac{1}{\sqrt{2\pi}}e^{-\frac{(\varepsilon - \sigma\sqrt{T})^2}{2}}\,d(\varepsilon - \sigma\sqrt{T}) \\ &= KN(-d_2) - \int_{-\infty}^{-d_1}S_0e^{rT}\frac{1}{\sqrt{2\pi}}e^{-\frac{(\varepsilon - \sigma\sqrt{T})^2}{2}}\,d(\varepsilon - \sigma\sqrt{T}) \\ &= KN(-d_2) - S_0e^{rT}N(-d_1) \end{aligned}$$
where
$\varepsilon \sim \phi(0, 1)$, with density
$$f(\varepsilon) = \frac{1}{\sqrt{2\pi}}e^{-\frac{\varepsilon^2}{2}}$$
Discounting at the risk-free rate then gives $p = e^{-rT}\hat{E}(\max(K - S_T, 0)) = Ke^{-rT}N(-d_2) - S_0N(-d_1)$.
Strictly speaking, this is not a full derivation of the Black-Scholes equation, because we used a result taken from the equation itself: that $N(-d_2)$ is the risk-neutral probability of exercise.
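Putting the formulas together: below is a minimal implementation of the call and put prices (a sketch using only the standard library, with arbitrarily chosen parameter values), checked against put-call parity $c - p = S_0 - Ke^{-rT}$:

```python
import math


def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))


def black_scholes(S0: float, K: float, r: float, sigma: float, T: float):
    """Return (call, put) prices for European options on a non-dividend stock."""
    d1 = (math.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    call = S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    put = K * math.exp(-r * T) * norm_cdf(-d2) - S0 * norm_cdf(-d1)
    return call, put


c, p = black_scholes(S0=100, K=105, r=0.05, sigma=0.2, T=1.0)
print(c, p)
print(c - p, 100 - 105 * math.exp(-0.05))   # put-call parity: both equal
```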