흰둥이 Barks While Writing Code (Woof!)
(Python) Deep Learning
1. Perceptron
1-1. Biological Neurons
- The human brain contains billions of neurons
- A neuron is a connected brain nerve cell that processes and transmits chemical and electrical signals

1-2. Artificial Neuron (Perceptron)
- In 1943, Warren McCulloch and Walter Pitts published a simplified model of the brain cell
- They described the nerve cell as a simple logic gate with binary output
- A mathematical function based on the biological neuron model: each neuron multiplies its inputs by individual weights, sums them, and passes the sum through a nonlinear function to produce the output
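The weighted-sum-then-nonlinearity description above can be sketched in plain Python. The step activation and the hand-picked weights below are illustrative assumptions, not learned values:

```python
def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs, then a nonlinear (step) activation
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total >= 0 else 0

# With weights [1, 1] and bias -0.5 this neuron computes logical OR
print(perceptron([0, 0], [1, 1], -0.5))  # 0
print(perceptron([1, 0], [1, 1], -0.5))  # 1
```

The training loops below do the same computation, except that the weights and bias are learned by gradient descent and the step function is replaced by a smooth sigmoid.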
1-3. Solving the OR Problem with Logistic Regression (Single-Layer Perceptron)

In [ ]:
import torch
import torch.nn as nn
import torch.optim as optim
In [ ]:
X = torch.FloatTensor([[0, 0], [0, 1], [1, 0], [1, 1]])
y = torch.FloatTensor([[0], [1], [1], [1]])
In [ ]:
model = nn.Sequential(
    nn.Linear(2, 1),
    nn.Sigmoid()
)
print(model)
Sequential(
  (0): Linear(in_features=2, out_features=1, bias=True)
  (1): Sigmoid()
)
In [ ]:
optimizer = optim.SGD(model.parameters(), lr=1)
epochs = 1000
for epoch in range(epochs + 1):
    y_pred = model(X)
    loss = nn.BCELoss()(y_pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch % 100 == 0:
        y_bool = (y_pred >= 0.5).float()
        accuracy = (y == y_bool).float().sum() / len(y) * 100
        print(f'Epoch {epoch:4d}/{epochs} Loss: {loss:.6f} Accuracy: {accuracy:.2f}')
Epoch 0/1000 Loss: 0.626946 Accuracy: 50.00
Epoch 100/1000 Loss: 0.091468 Accuracy: 100.00
Epoch 200/1000 Loss: 0.047505 Accuracy: 100.00
Epoch 300/1000 Loss: 0.031712 Accuracy: 100.00
Epoch 400/1000 Loss: 0.023702 Accuracy: 100.00
Epoch 500/1000 Loss: 0.018887 Accuracy: 100.00
Epoch 600/1000 Loss: 0.015682 Accuracy: 100.00
Epoch 700/1000 Loss: 0.013399 Accuracy: 100.00
Epoch 800/1000 Loss: 0.011692 Accuracy: 100.00
Epoch 900/1000 Loss: 0.010368 Accuracy: 100.00
Epoch 1000/1000 Loss: 0.009312 Accuracy: 100.00
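After training, the model can be checked directly on the four OR inputs. The sketch below re-trains the same tiny model from scratch in one self-contained cell (seeded for reproducibility), so its exact loss values will differ slightly from the run above:

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)
X = torch.FloatTensor([[0, 0], [0, 1], [1, 0], [1, 1]])
y = torch.FloatTensor([[0], [1], [1], [1]])

model = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())
optimizer = optim.SGD(model.parameters(), lr=1)

for _ in range(1000):
    loss = nn.BCELoss()(model(X), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Threshold the sigmoid outputs at 0.5 to get hard 0/1 predictions
with torch.no_grad():
    preds = (model(X) >= 0.5).float()
print(preds.flatten().tolist())  # [0.0, 1.0, 1.0, 1.0] — the OR truth table
```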
1-4. Solving the AND Problem with Logistic Regression (Single-Layer Perceptron)

In [ ]:
X = torch.FloatTensor([[0, 0], [0, 1], [1, 0], [1, 1]])
y = torch.FloatTensor([[0], [0], [0], [1]])

model = nn.Sequential(
    nn.Linear(2, 1),
    nn.Sigmoid()
)

optimizer = optim.SGD(model.parameters(), lr=1)
epochs = 1000
for epoch in range(epochs + 1):
    y_pred = model(X)
    loss = nn.BCELoss()(y_pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch % 100 == 0:
        y_bool = (y_pred >= 0.5).float()
        accuracy = (y == y_bool).float().sum() / len(y) * 100
        print(f'Epoch {epoch:4d}/{epochs} Loss: {loss:.6f} Accuracy: {accuracy:.2f}')
Epoch 0/1000 Loss: 0.631607 Accuracy: 75.00
Epoch 100/1000 Loss: 0.141134 Accuracy: 100.00
Epoch 200/1000 Loss: 0.080766 Accuracy: 100.00
Epoch 300/1000 Loss: 0.056088 Accuracy: 100.00
Epoch 400/1000 Loss: 0.042781 Accuracy: 100.00
Epoch 500/1000 Loss: 0.034502 Accuracy: 100.00
Epoch 600/1000 Loss: 0.028870 Accuracy: 100.00
Epoch 700/1000 Loss: 0.024800 Accuracy: 100.00
Epoch 800/1000 Loss: 0.021723 Accuracy: 100.00
Epoch 900/1000 Loss: 0.019319 Accuracy: 100.00
Epoch 1000/1000 Loss: 0.017389 Accuracy: 100.00
1-5. Solving the XOR Problem with Logistic Regression (Single-Layer Perceptron)

In [ ]:
X = torch.FloatTensor([[0, 0], [0, 1], [1, 0], [1, 1]])
y = torch.FloatTensor([[0], [1], [1], [0]])

model = nn.Sequential(
    nn.Linear(2, 1),
    nn.Sigmoid()
)

optimizer = optim.SGD(model.parameters(), lr=1)
epochs = 1000
for epoch in range(epochs + 1):
    y_pred = model(X)
    loss = nn.BCELoss()(y_pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch % 100 == 0:
        y_bool = (y_pred >= 0.5).float()
        accuracy = (y == y_bool).float().sum() / len(y) * 100
        print(f'Epoch {epoch:4d}/{epochs} Loss: {loss:.6f} Accuracy: {accuracy:.2f}')
Epoch 0/1000 Loss: 0.797920 Accuracy: 50.00
Epoch 100/1000 Loss: 0.693147 Accuracy: 50.00
Epoch 200/1000 Loss: 0.693147 Accuracy: 25.00
Epoch 300/1000 Loss: 0.693147 Accuracy: 75.00
Epoch 400/1000 Loss: 0.693147 Accuracy: 50.00
Epoch 500/1000 Loss: 0.693147 Accuracy: 50.00
Epoch 600/1000 Loss: 0.693147 Accuracy: 50.00
Epoch 700/1000 Loss: 0.693147 Accuracy: 50.00
Epoch 800/1000 Loss: 0.693147 Accuracy: 50.00
Epoch 900/1000 Loss: 0.693147 Accuracy: 50.00
Epoch 1000/1000 Loss: 0.693147 Accuracy: 50.00
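The loss stalls at 0.693147 = ln(2), the BCE loss of a model that outputs 0.5 everywhere. The reason is that XOR is not linearly separable: no single line can split the four points correctly. The brute-force sweep below over a finite grid of linear classifiers sign(w1·x1 + w2·x2 + b) illustrates this (the grid is an illustrative check, not a formal proof):

```python
import itertools

# XOR truth table: ((x1, x2), label)
points = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
grid = [i / 4 for i in range(-8, 9)]  # weights and bias swept over [-2, 2]

best = 0
for w1, w2, b in itertools.product(grid, repeat=3):
    correct = sum(
        int((w1 * x1 + w2 * x2 + b >= 0) == bool(label))
        for (x1, x2), label in points
    )
    best = max(best, correct)

print(best)  # 3 — no linear boundary gets all 4 XOR points right
```

This is exactly why the next section stacks layers: a hidden layer lets the network bend the decision boundary.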
2. Backpropagation
- 1974: first described by Paul Werbos
- 1986: popularized by Rumelhart, Hinton, and Williams

In [ ]:
model = nn.Sequential(
    nn.Linear(2, 64),
    nn.Sigmoid(),
    nn.Linear(64, 32),
    nn.Sigmoid(),
    nn.Linear(32, 16),
    nn.Sigmoid(),
    nn.Linear(16, 1),
    nn.Sigmoid()
)
print(model)
Sequential(
  (0): Linear(in_features=2, out_features=64, bias=True)
  (1): Sigmoid()
  (2): Linear(in_features=64, out_features=32, bias=True)
  (3): Sigmoid()
  (4): Linear(in_features=32, out_features=16, bias=True)
  (5): Sigmoid()
  (6): Linear(in_features=16, out_features=1, bias=True)
  (7): Sigmoid()
)
In [ ]:
X = torch.FloatTensor([[0, 0], [0, 1], [1, 0], [1, 1]])
y = torch.FloatTensor([[0], [1], [1], [0]])

optimizer = optim.SGD(model.parameters(), lr=1)
epochs = 5000
for epoch in range(epochs + 1):
    y_pred = model(X)
    loss = nn.BCELoss()(y_pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch % 100 == 0:
        y_bool = (y_pred >= 0.5).float()
        accuracy = (y == y_bool).float().sum() / len(y) * 100
        print(f'Epoch {epoch:4d}/{epochs} Loss: {loss:.6f} Accuracy: {accuracy:.2f}')
Epoch 0/5000 Loss: 0.701658 Accuracy: 50.00
Epoch 100/5000 Loss: 0.693139 Accuracy: 50.00
Epoch 200/5000 Loss: 0.693136 Accuracy: 50.00
Epoch 300/5000 Loss: 0.693133 Accuracy: 50.00
Epoch 400/5000 Loss: 0.693129 Accuracy: 50.00
Epoch 500/5000 Loss: 0.693125 Accuracy: 50.00
Epoch 600/5000 Loss: 0.693121 Accuracy: 50.00
Epoch 700/5000 Loss: 0.693117 Accuracy: 50.00
Epoch 800/5000 Loss: 0.693112 Accuracy: 50.00
Epoch 900/5000 Loss: 0.693107 Accuracy: 50.00
Epoch 1000/5000 Loss: 0.693101 Accuracy: 50.00
Epoch 1100/5000 Loss: 0.693095 Accuracy: 50.00
Epoch 1200/5000 Loss: 0.693088 Accuracy: 50.00
Epoch 1300/5000 Loss: 0.693081 Accuracy: 50.00
Epoch 1400/5000 Loss: 0.693072 Accuracy: 50.00
Epoch 1500/5000 Loss: 0.693062 Accuracy: 50.00
Epoch 1600/5000 Loss: 0.693051 Accuracy: 50.00
Epoch 1700/5000 Loss: 0.693038 Accuracy: 50.00
Epoch 1800/5000 Loss: 0.693023 Accuracy: 50.00
Epoch 1900/5000 Loss: 0.693005 Accuracy: 50.00
Epoch 2000/5000 Loss: 0.692984 Accuracy: 50.00
Epoch 2100/5000 Loss: 0.692959 Accuracy: 50.00
Epoch 2200/5000 Loss: 0.692928 Accuracy: 50.00
Epoch 2300/5000 Loss: 0.692889 Accuracy: 50.00
Epoch 2400/5000 Loss: 0.692840 Accuracy: 50.00
Epoch 2500/5000 Loss: 0.692778 Accuracy: 50.00
Epoch 2600/5000 Loss: 0.692694 Accuracy: 50.00
Epoch 2700/5000 Loss: 0.692581 Accuracy: 50.00
Epoch 2800/5000 Loss: 0.692419 Accuracy: 50.00
Epoch 2900/5000 Loss: 0.692177 Accuracy: 50.00
Epoch 3000/5000 Loss: 0.691788 Accuracy: 50.00
Epoch 3100/5000 Loss: 0.691096 Accuracy: 50.00
Epoch 3200/5000 Loss: 0.689670 Accuracy: 50.00
Epoch 3300/5000 Loss: 0.685924 Accuracy: 50.00
Epoch 3400/5000 Loss: 0.791805 Accuracy: 50.00
Epoch 3500/5000 Loss: 0.668625 Accuracy: 50.00
Epoch 3600/5000 Loss: 0.547393 Accuracy: 50.00
Epoch 3700/5000 Loss: 0.329915 Accuracy: 100.00
Epoch 3800/5000 Loss: 0.012595 Accuracy: 100.00
Epoch 3900/5000 Loss: 0.005424 Accuracy: 100.00
Epoch 4000/5000 Loss: 0.003303 Accuracy: 100.00
Epoch 4100/5000 Loss: 0.002328 Accuracy: 100.00
Epoch 4200/5000 Loss: 0.001777 Accuracy: 100.00
Epoch 4300/5000 Loss: 0.001427 Accuracy: 100.00
Epoch 4400/5000 Loss: 0.001187 Accuracy: 100.00
Epoch 4500/5000 Loss: 0.001012 Accuracy: 100.00
Epoch 4600/5000 Loss: 0.000880 Accuracy: 100.00
Epoch 4700/5000 Loss: 0.000777 Accuracy: 100.00
Epoch 4800/5000 Loss: 0.000694 Accuracy: 100.00
Epoch 4900/5000 Loss: 0.000627 Accuracy: 100.00
Epoch 5000/5000 Loss: 0.000571 Accuracy: 100.00
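The multi-layer network does solve XOR, but notice the long plateau near ln(2) for roughly 3,500 epochs before the loss drops. A likely contributor is the sigmoid activation between every layer: its derivative is at most 0.25, so gradients can shrink multiplicatively as backpropagation passes through the four sigmoid layers. The sketch below is a rough numeric illustration of that bound, not a trace of the actual training run:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def sigmoid_grad(x):
    # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
    s = sigmoid(x)
    return s * (1 - s)

max_grad = sigmoid_grad(0)  # the derivative is largest at x = 0
print(max_grad)             # 0.25
print(max_grad ** 4)        # 0.00390625 — bound on the factor across 4 layers
```

This shrinking-gradient effect is one reason later architectures replace hidden-layer sigmoids with activations such as ReLU.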