๋ณธ๋ฌธ ๋ฐ”๋กœ๊ฐ€๊ธฐ

๐Ÿš“ Self Study65

ํ”„๋กœ๊ทธ๋ž˜๋จธ์Šค (๋ฌธ์ž์—ด ์••์ถ•, 2020 KAKAO BLIND RECRUITMENT) C++ ๋ฌธ์ž์—ด์„ ๊ฐ€์ง€๊ณ  ์žฅ๋‚œ์„ ์น˜๋Š” ๋ฌธ์ œ๋ผ๊ณ  ์ƒ๊ฐ์ด ๋“ค์—ˆ๋‹ค. substr์„ ์ž์œ ์ž์žฌ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š”์ง€, erase๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ์˜ index ๋ฌธ์ œ์ ์„ ์ž˜ ํŒŒ์•…ํ•˜๊ณ  ์žˆ๋Š”์ง€๋ฅผ ๋ฌผ์–ด๋ณด๋Š” ๋ฌธ์ œ ๊ฐ™์•˜๋‹ค. ์‚ฌ์‹ค ์›ํ•˜๋Š” ๊ฒฐ๊ณผ๊ฐ’์€ ๋ฐ˜๋ณต๋˜๋Š” ๋ฌธ์ž์—ด์˜ ๊ฐœ์ˆ˜์™€ ๋ฌธ์ž์—ด์˜ ํฌ๊ธฐ์ด๋ฏ€๋กœ ์ด๋ฅผ ๊ฐ๊ฐ ๊ตฌํ•ด์„œ ๋”ํ•˜๋Š” ๋ฐฉ์‹์œผ๋กœ ์ ‘๊ทผํ•ด๋„ ๋œ๋‹ค. ๋ณต์žกํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์š”๊ตฌํ•˜์ง€๋Š” ์•Š์ง€๋งŒ ์กฐ๊ฑด์„ ์ž˜ ์„ธ์›Œ์„œ ์ •๋‹ต์„ ๊ตฌํ•  ์ˆ˜ ์žˆ๋Š” ๋…ผ๋ฆฌ๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ๋Š”์ง€๋ฅผ ํŒŒ์•…ํ•˜๋Š” ๋ฌธ์ œ์˜€๋‹ค. #include #include #include using namespace std; int solution(string s) { int answer = 0; int total = 0; int count = 1; int min = 99999; string restore = s; string.. 2021. 12. 29.
ํ”„๋กœ๊ทธ๋ž˜๋จธ์Šค (์นด์นด์˜คํ”„๋ Œ์ฆˆ ์ปฌ๋Ÿฌ๋ง๋ถ, 2017 ์นด์นด์˜ค์ฝ”๋“œ ์˜ˆ์„ ) C++ ๋ฌธ์ œ๋ฅผ ๋ณด๊ณ  ์ธ์ ‘ํ•œ Node๋ฅผ ํƒ์ƒ‰ํ•˜๋Š” DFS๋ฅผ ์‚ฌ์šฉํ•ด์•ผ๊ฒ ๋‹ค ๋ผ๋Š” ์ƒ๊ฐ์ด ๋“ค์—ˆ๋‹ค. DS์—์„œ ๋ฐฐ์šด DFS๋Š” Adjacent List๋ฅผ ๋งŒ๋“ค์–ด์„œ ํƒ์ƒ‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์ด๋ผ ์˜จ์ข…์ผ ๊ทธ ์ƒ๊ฐ๋ฐ–์— ์•ˆ ๋“ค์–ด์„œ Adjacent List๋งŒ ๋งŒ๋“ค๋‹ค๊ฐ€ ๋‹ค์‹œ ๋œฏ์–ด ๊ณ ์ณค๋‹ค. ๊ธฐ๋ณธ์ ์œผ๋กœ ํ•„์š”ํ•œ ๋ณ€์ˆ˜๋Š” 2์ฐจ์› vector์™€ visited ์œ ๋ฌด๋ฅผ ํŒ๋‹จํ•˜๋Š” 2์ฐจ์› ๋ฐฐ์—ด์ด๋ฉด ๋ชจ๋“  ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ถฉ์กฑํ•  ์ˆ˜ ์žˆ๋„๋ก ๋งŒ๋“ค ์ˆ˜ ์žˆ๋‹ค. MxN Matrix ์ฒ˜๋Ÿผ ์ฃผ์–ด์ง€๊ณ  ์ธ์ ‘ํ•œ ์˜์—ญ๊ฐ„์˜ ๊ด€๊ณ„๋ฅผ ๋ฌผ์–ด๋ณด๋Š” ๋ฌธ์ œ์—์„œ๋Š” ๊ดœํžˆ 2์ฐจ์› ์ธ์ ‘ ๋ฆฌ์ŠคํŠธ๋กœ ๋งŒ๋“ค์ง€ ๋ง๊ณ  dx, dy์— ๋Œ€ํ•œ ๋ณ€์ˆ˜๋ฅผ ํ• ๋‹นํ•˜์—ฌ ์ƒ, ํ•˜, ์ขŒ, ์šฐ๋ฅผ ํƒ์ƒ‰ํ•  ์ˆ˜ ์žˆ๋Š” ์•Œ๊ณ ๋ฆฌ์ฆ˜์œผ๋กœ ์ž‘์„ฑํ•  ์ˆ˜ ์žˆ๋Š” ์ƒ๊ฐ์„ ๊ธฐ๋ฅด๋Š” ์ค‘.. #include #include using namespace std; int dx[4.. 2021. 12. 29.
Deep Learning (CNN, Convolution Layers, Dilated Layers, Separable Convolution, Max-Pooling Convolution) — Convolution Network. Deep learning uses deep neural nets with many layers, and each layer computes a weighted sum. W is a parameter learned from the data, and a loss function is defined at the end; in other words, training is the process of optimizing the weights of every layer. Each layer operation merges its input information via a weighted sum, and the key point is the ratio at which the inputs are merged. Deep Learning Architecture: CNN -> image processing, natural language, speech recognition: it can better exploit speedups from parallel processing; for natural language, it proved more effective at Machine Translation. RNN -> T.. 2021. 12. 29.
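To make "one layer = one weighted sum" concrete, a single layer's operation can be written as (standard notation, not taken from the post):

$$ y_j = f\Big(\sum_i w_{ji}\, x_i + b_j\Big) $$

where the learned weights $w_{ji}$ are exactly the merge ratios mentioned above and $f$ is the activation function applied after the sum.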
Deep Learning (Regularization, Transfer Learning, Internal Covariate Shift, Batch Normalization, ReLU Activation Function, Sparse Coding) — Practical Problems and Solutions. Practical issues: deep learning needs a lot of training data; the remedy is collecting lots of data, which is exactly what machine-learning practitioners have been doing lately. Regularization techniques / data augmentation: synthesizing more data from a small dataset. Unsupervised / semi-supervised / reinforcement learning: proceeds on feedback rather than ground-truth labels. It also requires a great deal of computation. Regularization: if the decision boundary is too complex, the model does well on the training data but struggles with data it has never seen.. 2021. 12. 29.
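One common way to keep the boundary from getting too complex — shown here as a standard formulation the post only names, not quotes — is to add a weight penalty to the data loss:

$$ L_{\text{total}} = L_{\text{data}} + \lambda \lVert W \rVert^2 $$

so a larger $\lambda$ trades training-set fit for a smoother boundary that generalizes to unseen data.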
Deep Learning (Deep Generative Model, Convolutional Neural Networks, Recurrent Neural Network) — Deep Learning Approaches. The most important part of a CNN is the convolution layer -> it extracts position-invariant local features. The input flows straight through to the output. RNN (Recurrent Neural Network): used for time-series data, where the past influences the present. There are nodes that feed back, so the current input value is combined with the context before being used. Deep Generative Model: the most famous is the GAN network; Attention models also exist. Convolutional Neural Network: the layers are organized in a 3-D structure.. 2021. 12. 29.
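The "current input combined with the context" can be written as the usual RNN recurrence (standard form, assumed rather than quoted from the post):

$$ h_t = f(W_x x_t + W_h h_{t-1} + b) $$

where $h_{t-1}$ is the fed-back context from the previous step and $h_t$ is the new state.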
Deep Learning (ImageNet, ProGAN, Vanishing Gradient Problem) — Deep Learning. ImageNet Dataset: there is a dataset called ImageNet for benchmarking image-recognition performance. At ILSVRC 2012, deep learning was applied and the error rate dropped sharply; the human error rate can be taken as roughly 5 percent. Instance Segmentation: find where each object is and produce its silhouette at the pixel level. Image Synthesis: ProGAN — human faces generated by a neural net. Deep learning uses a very large number of layers, from dozens up to a thousand. Why are more layers better? Each layer merges its input information, and a layer above ends up at a somewhat higher level than the layer below it. Why Deep Learning H.. 2021. 12. 29.
Deep Learning (Gradient and Jacobian, Back-Propagation, Training of 1st and 2nd Layers) — Gradient and Jacobian. Gradient vector: what you get when you differentiate a scalar by a vector; differentiating the error function by a vector gives the gradient. Differentiating a vector by a vector yields a matrix, called the Jacobian matrix, whose rows and columns correspond to the number of outputs and the number of inputs. The chain rule between vectors can thus be seen as taking matrix form: multiplying the gradient with respect to the output by the Jacobian matrix gives the gradient with respect to the input. Back-Propagation on Neural Nets: when a vector comes up from the layer below, a weighted sum produces the output.. 2021. 12. 29.
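In symbols, for $y = g(x)$ with $y \in \mathbb{R}^m$ and $x \in \mathbb{R}^n$, the Jacobian is the $m \times n$ matrix $J_{ij} = \partial y_i / \partial x_j$, and the back-propagation step the post describes is (in the column-vector convention, which adds the transpose):

$$ \nabla_x E = J^\top \, \nabla_y E $$

i.e., multiplying the gradient at the output by the (transposed) Jacobian yields the gradient at the input, one layer further down.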
Deep Learning (MLP Learning, Loss Function, Back-Propagation, Matrix Notation, Chain Rule) — Back-propagation. MLP Learning: suppose we denote each layer with a superscript on x; x^0 is the input vector. The nodes corresponding to the label are all 1. The loss function corresponds to the difference between the desired output and the real output. Training starts from a random point and proceeds by repeatedly subtracting a small step of the gradient. Loss Function (error criterion): at the final output layer, subtract the desired output, square it, and divide by C, the number of outputs; this gives the error, and the process amounts to a mean square. Recently the cross-entropy function.. 2021. 12. 23.
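Written out, with $C$ output nodes, desired output $d_k$, and real output $y_k$, the loss described above is the mean squared error, and the cross-entropy the post starts to mention is the usual alternative:

$$ E_{\text{MSE}} = \frac{1}{C}\sum_{k=1}^{C}(d_k - y_k)^2, \qquad E_{\text{CE}} = -\sum_{k=1}^{C} d_k \log y_k $$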
Deep Learning (Activation Function, Softmax, Hidden Units, Output Units) — Activation Function: also called a "non-linearity function"; it processes the weighted sum one more time. Activation functions — Sigmoid function; hyperbolic tangent function: takes values from -1 to +1; ReLU function: takes the max of the net value and 0. Softmax function: used when you want to express several categories probabilistically. A probability must lie between 0 and 1, but a net value on its own is never a probability, so take the exponential to make everything positive, then divide by the sum of the exp values over all nodes. Then the values as a whole sum to 1.. 2021. 12. 23.
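The softmax construction described above, written out for net values $net_i$:

$$ p_i = \frac{e^{net_i}}{\sum_{j} e^{net_j}} $$

Exponentiation makes every term positive, and the normalization forces $\sum_i p_i = 1$, so the outputs can be read as category probabilities.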
Deep Learning (Multi-Layer Perceptron) — Multi-Layer. Limitation of the single-layer perceptron: the XOR problem. If 11 or 00 comes in, the output must be 0; if 01 or 10 comes in, it must be 1. The neural network has to be able to draw this decision boundary well — that is, the problem must be separable by a straight line. There are two classes, with outputs 0 and 1, but this classification problem cannot be solved with a single line; since even XOR cannot be handled, the limitation of the single layer was exposed. Multi-Layer Perceptron: suppose two lines H1 and H2 exist. Each line can be expressed by a perceptron, and y1 and y2 each.. 2021. 12. 23.
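A minimal sketch of the two-line idea: wire H1 and H2 as perceptrons and combine them in a second layer (the weights below are illustrative assumptions, not values from the post):

```cpp
#include <iostream>

// Step activation: fires when the weighted sum crosses the threshold.
int step(double v) { return v > 0 ? 1 : 0; }

// Two hidden perceptrons (the two lines H1, H2) plus one output layer.
int xor_mlp(int x1, int x2) {
    int h1 = step(x1 + x2 - 0.5);  // H1 behaves like OR
    int h2 = step(x1 + x2 - 1.5);  // H2 behaves like AND
    return step(h1 - h2 - 0.5);    // OR and NOT(AND) together give XOR
}

int main() {
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            std::cout << a << " XOR " << b << " = " << xor_mlp(a, b) << "\n";
    return 0;
}
```

Each hidden unit draws one of the two lines in input space, and the output unit classifies based on which side of both lines a point falls — exactly the construction the post is building toward.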