
Logistic-Regression

  • Notation $$X\in \mathbb{R}^{n\times (k+1)}:\ \text{train data}$$

$$X= \begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n\end{pmatrix},\ \ \ \ n=\text{number of samples}$$

$$x_i=\begin{pmatrix} 1 & x_{i1} & x_{i2} & \cdots & x_{ik} \end{pmatrix},\ \ \ \ k=\text{number of features}$$

$$\beta=\begin{pmatrix} \beta_0 & \beta_1 & \cdots & \beta_k \end{pmatrix}^T\in\mathbb{R}^{k+1}:\ \text{parameter}$$

$$f(x)=\frac{\exp(x)}{1+\exp(x)}:\ \text{sigmoid function}$$

$$\hat y=f(X\beta)\in \mathbb{R}^{n}:\ \text{predicted value}$$

$$y\in \{0,1\}^n:\ \text{true value}$$
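A minimal NumPy sketch of this notation (the repository's actual code may differ; the function names here are illustrative):

```python
import numpy as np

def sigmoid(z):
    """f(z) = exp(z) / (1 + exp(z)); equivalent to 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def predict(X, beta):
    """y_hat = f(X beta); X is n x (k+1) with a leading column of ones."""
    return sigmoid(X @ beta)
```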

  • Cross Entropy

$$L(\beta;y)=-\sum_i [y_i\log f(x_i\beta)+(1-y_i)\log (1-f(x_i\beta))]$$

$$=-\sum_i [y_i(\log \exp(x_i\beta)-\log (1+\exp(x_i\beta)))-(1-y_i)\log (1+\exp(x_i\beta))]$$

$$=-\sum_i [y_ix_i\beta-\log(1+\exp(x_i\beta))]$$
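The last form avoids evaluating $f(x_i\beta)$ directly and is convenient numerically. A sketch (the function name is mine; `np.logaddexp(0, z)` computes $\log(1+\exp(z))$ without overflow):

```python
def cross_entropy(beta, X, y):
    """L(beta; y) = -sum_i [ y_i * (x_i beta) - log(1 + exp(x_i beta)) ]."""
    z = X @ beta
    return -np.sum(y * z - np.logaddexp(0.0, z))
```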

  • Minimize Loss

$$\underset{\beta}{\min}\,L(\beta;y)\Rightarrow\frac{\partial L(\beta;y)}{\partial \beta}=0$$

$$\frac{\partial L(\beta;y)}{\partial \beta_j}=-\sum_i \left[y_ix_{ij}-\frac{\exp(x_i\beta)}{1+\exp(x_i\beta)}x_{ij}\right]=\sum_i (\hat y_i-y_i)x_{ij}$$

$$\frac{\partial L(\beta;y)}{\partial \beta}=\sum_i x_i^T(\hat y_i-y_i)=X^T(\hat y-y)$$
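Setting the gradient to zero has no closed-form solution, so the loss is minimized iteratively. A gradient-descent sketch using the $X^T(\hat y-y)$ expression derived above (the learning rate and iteration count are illustrative, not taken from the repository):

```python
def fit(X, y, lr=0.1, n_iter=1000):
    """Gradient descent on L(beta; y) using grad = X^T (y_hat - y)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        y_hat = sigmoid(X @ beta)
        grad = X.T @ (y_hat - y) / len(y)  # mean gradient for a stable step size
        beta -= lr * grad
    return beta
```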

Results

Learned beta: [ 0.4286 -0.2562  0.3251  0.485   0.6253 -0.7556]
True beta:    [ 0.4    -0.2     0.3     0.5     0.6    -0.7   ]
  • Train loss vs. number of iterations: see output10.26.png

  • Test data accuracy: 0.846
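The README does not include the experiment code; one plausible way to reproduce a result of this shape is to sample synthetic labels from the stated true beta and fit them with the functions sketched above (the sample size, seed, and hyperparameters here are guesses):

```python
rng = np.random.default_rng(0)
true_beta = np.array([0.4, -0.2, 0.3, 0.5, 0.6, -0.7])  # true beta from the README
n, k = 1000, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # leading intercept column
y = (rng.random(n) < sigmoid(X @ true_beta)).astype(float)  # Bernoulli(f(x_i beta)) labels

beta_hat = fit(X, y, lr=0.5, n_iter=5000)
accuracy = np.mean((predict(X, beta_hat) >= 0.5) == y)
print("Learned beta:", beta_hat)
print("Accuracy:", accuracy)
```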
