
Conv Nets: A Modular Perspective


Original post: http://colah.github.io/posts/2014-07-Conv-Nets-Modular/

Introduction

Over the last few years, deep neural networks have produced breakthroughs in pattern recognition problems such as computer vision and speech recognition. One of the most important components behind these results is a special kind of neural network called a convolutional neural network.

At the most basic level, convolutional neural networks can be thought of as a kind of neural network that uses many identical copies of the same neuron.

This allows the network to have lots of neurons, and to express computationally large models, while keeping the number of actual parameters small.


This trick of having multiple copies of the same neuron is roughly analogous to the abstraction of functions in mathematics and computer science.

When programming, we write a function once and use it in many places. Not writing the same code hundreds of times in different places makes programming faster and results in fewer bugs.

Similarly, a convolutional neural network can learn a neuron once and use it in many places, making the model easier to learn and reducing error.

Structure of Convolutional Neural Networks

Suppose you want a neural network to look at audio samples and predict whether a human is speaking in them. Maybe you want to do more analysis if someone is speaking. You get audio samples at different points in time, and the samples are evenly spaced.


The simplest way to try and classify them with a neural network is to just connect them all to a fully-connected layer. There are a bunch of different neurons, and every input connects to every neuron.


A more sophisticated approach notices a kind of symmetry in the properties it's useful to look for in the data. We care a lot about local properties of the data: what is the frequency of the sound around a given time? Is it increasing or decreasing? And so on.

We care about the same properties at every point in time. It's useful to know the frequencies at the beginning, in the middle, and at the end. These properties are local, in the sense that we only need to look at a small window of the data to determine them.

So, we can create a group of neurons, $A$, that looks at small time segments of our data. $A$ looks at all such segments, computing certain features. The output of this convolutional layer is then fed into a fully-connected layer, $F$.


In the above example, $A$ only looked at segments consisting of two points. This isn't realistic; usually a convolutional layer's window is much larger.

In the following example, $A$ looks at 3 points. That isn't realistic either; sadly, it's tricky to visualize $A$ connecting to lots of points.
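To make the structure concrete, here is a minimal numpy sketch of such a layer, assuming a window of 3 points, a single feature per segment, and made-up (untrained) weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One neuron A: one weight per input in its 3-point window, plus a bias.
# These values are placeholders; in a real network they would be learned.
W = np.array([0.5, -1.0, 0.25])
b = 0.1

def neuron_A(segment):
    """Apply the same neuron A to a single 3-point segment."""
    return sigmoid(np.dot(W, segment) + b)

x = np.array([0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0, 0.0])  # 9 audio samples

# The convolutional layer: identical copies of A, one per segment.
y = np.array([neuron_A(x[n:n+3]) for n in range(len(x) - 2)])
print(y.shape)  # (7,) -- one feature value per 3-point window
```

The outputs `y` would then be fed into the fully-connected layer $F$, just as in the figures above.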


A very nice property of convolutional layers is that they're composable: you can feed the output of one convolutional layer into another. With each layer, the network can detect higher-level, more abstract features.

In the next example, we have a new group of neurons, $B$. $B$ is used to create another convolutional layer stacked on top of the previous one.


Convolutional layers are often interweaved with pooling layers. In particular, there is a kind of layer called a max-pooling layer that is extremely popular.

Often, from a high-level perspective, we don't care whether a frequency shows up slightly earlier or slightly later. A max-pooling layer takes the maximum of features over small blocks of the previous layer. The output tells us whether a feature was present in a region of the previous layer, but not precisely where.

Max-pooling layers kind of "zoom out". They allow later convolutional layers to work on larger sections of the data, because a small patch after the pooling layer corresponds to a much larger patch before it. They also make us invariant to some very small transformations of the data.
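A sketch of 1-dimensional max pooling under the same assumptions as before (a block size of 2 is an arbitrary choice for illustration):

```python
import numpy as np

def max_pool_1d(y, block=2):
    """Max over non-overlapping blocks of the previous layer's features."""
    trimmed = y[: (len(y) // block) * block]   # drop any ragged tail
    return trimmed.reshape(-1, block).max(axis=1)

y = np.array([0.1, 0.9, 0.3, 0.2, 0.8, 0.4])   # outputs of a conv layer
print(max_pool_1d(y))  # [0.9 0.3 0.8] -- says a feature fired, not exactly where
```

A convolutional layer applied after this pooling step still reads a few values at a time, but each pooled value now summarizes two of the earlier features, so its effective window on the raw input is correspondingly wider.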

In our previous examples, we used 1-dimensional convolutional layers. However, convolutional layers work equally well on higher-dimensional data. In fact, the most famous success of convolutional neural networks has been applying 2D convolutional neural networks to recognizing images.


In a 2-dimensional convolutional layer, instead of looking at segments, $A$ will now look at patches.

For each patch, $A$ will compute features. For example, it might learn to detect the presence of an edge. Or it might learn to detect a texture. Or perhaps a contrast between two colors.

In the previous example, we fed the output of the convolutional layer into a fully-connected layer. But, as in the one-dimensional case, we can also compose two convolutional layers.


We can also do max pooling in two dimensions. Here, we take the maximum of features over a small patch.
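The two-dimensional versions are direct analogues of the 1-dimensional code above; a sketch, again with arbitrary 2×2 patch and pool sizes and untrained weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))   # one neuron's weights over a 2x2 patch (untrained)
b = 0.0

def conv2d(x):
    """Slide the same neuron A over every 2x2 patch of a 2-D input."""
    rows, cols = x.shape
    return np.array([[sigmoid(np.sum(W * x[i:i+2, j:j+2]) + b)
                      for j in range(cols - 1)]
                     for i in range(rows - 1)])

def max_pool_2d(y):
    """Max over non-overlapping 2x2 patches of the feature map."""
    r, c = (y.shape[0] // 2) * 2, (y.shape[1] // 2) * 2
    return y[:r, :c].reshape(r // 2, 2, c // 2, 2).max(axis=(1, 3))

x = rng.normal(size=(9, 5))           # a tiny 9x5 "image"
print(max_pool_2d(conv2d(x)).shape)   # (4, 2)
```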

What this really boils down to is that, when considering an entire image, we don’t care about the exact position of an edge, down to a pixel. It’s enough to know where it is to within a few pixels.

For data such as video or volumetric data (e.g. 3D medical scans), three-dimensional convolutional networks are also sometimes used. However, they aren't very widely used, and they are much harder to visualize.

We've said that $A$ is a group of neurons; we should be a bit more precise about this. What exactly is $A$? In traditional convolutional layers, $A$ is a bunch of neurons in parallel that all get the same inputs and compute different features. For example, in a 2-dimensional convolutional layer, one neuron might detect horizontal edges, another might detect vertical edges, and another might detect green-red color contrasts.
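In code, "a bunch of neurons in parallel" is just a stack of weight vectors applied to the same patch. A sketch, where the three hand-written filters are only illustrative stand-ins for learned ones:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three parallel neurons over a flattened 2x2 patch: each row of W is one neuron.
W = np.array([[ 1.0,  1.0, -1.0, -1.0],   # responds to a horizontal-edge-like pattern
              [ 1.0, -1.0,  1.0, -1.0],   # responds to a vertical-edge-like pattern
              [ 1.0, -1.0, -1.0,  1.0]])  # responds to a checkerboard-like contrast
b = np.zeros(3)

def A(patch):
    """Every neuron in A sees the same patch; each computes its own feature."""
    return sigmoid(W @ patch.ravel() + b)

patch = np.array([[1.0, 1.0],
                  [0.0, 0.0]])
print(A(patch))  # three feature values for this one patch
```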


That said, in the recent paper 'Network in Network' (Lin et al. (2013)), a new "Mlpconv" layer is proposed. In this model, $A$ has multiple layers of neurons, with the final layer outputting higher-level features for the region it looks at. In the paper, the model achieves some very impressive results, setting the state of the art on a number of benchmark datasets.
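A minimal sketch of the Mlpconv idea: $A$ becomes a small multi-layer perceptron applied to each patch. The layer sizes and activations below are arbitrary choices for illustration, not the architecture from Lin et al. (2013):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # hidden layer over a flattened 2x2 patch
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)   # final layer: 3 higher-level features

def mlpconv_A(patch):
    """A is now itself a tiny MLP; its last layer emits the patch's features."""
    hidden = sigmoid(W1 @ patch.ravel() + b1)
    return sigmoid(W2 @ hidden + b2)

print(mlpconv_A(np.ones((2, 2))))  # three feature values for one patch
```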


For the purposes of this post, we will focus on standard convolutional layers. There's already enough for us to consider there!

Results of Convolutional Neural Networks

Earlier, we mentioned recent breakthroughs in computer vision using convolutional neural networks. I'd like to briefly go over some of these results.

In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton blew existing image classification results out of the water (Krizhevsky et al. (2012)).

Their progress was the result of combining together a bunch of different pieces. They used GPUs to train a very large, deep, neural network. They used a new kind of neuron (ReLUs) and a new technique to reduce a problem called ‘overfitting’ (DropOut). They used a very large dataset with lots of image categories (ImageNet). And, of course, it was a convolutional neural network.

Their architecture, illustrated below, was very deep. It has 5 convolutional layers, with pooling interspersed, and three fully-connected layers. The early layers are split over the two GPUs.

They trained their network to classify images into a thousand different categories.

Randomly guessing, one would guess the correct answer 0.1% of the time. Krizhevsky, et al.’s model is able to give the right answer 63% of the time. Further, one of the top 5 answers it gives is right 85% of the time!


Top: 4 correctly classified examples. Bottom: 4 incorrectly classified examples. Each example has an image, followed by its label, followed by the top 5 guesses with probabilities. From Krizhevsky et al. (2012).

Even some of its errors seem pretty reasonable to me!

We can also examine what the first layer of the network learns to do.

Recall that the convolutional layers were split between the two GPUs. Information doesn’t go back and forth each layer, so the split sides are disconnected in a real way. It turns out that, every time the model is run, the two sides specialize.

Neurons in one side focus on black and white, learning to detect edges of different orientations and sizes. Neurons on the other side specialize on color and texture, detecting color contrasts and patterns. Remember that the neurons are randomly initialized. No human went and set them to be edge detectors, or to split in this way. It arose simply from training the network to classify images.

These remarkable results (and other exciting results around that time) were only the beginning. They were quickly followed by a lot of other work testing modified approaches and gradually improving the results, or applying them to other areas. And, in addition to the neural networks community, many in the computer vision community have adopted deep convolutional neural networks.

Convolutional neural networks are an essential tool in computer vision and modern pattern recognition.

Formalizing Convolutional Neural Networks

Consider a 1-dimensional convolutional layer with inputs $\{x_n\}$ and outputs $\{y_n\}$:


It's relatively easy to describe the outputs in terms of the inputs:

$$y_n = A(x_n, x_{n+1}, ...)$$

For example:

$$y_0 = A(x_0, x_1)$$
$$y_1 = A(x_1, x_2)$$
Similarly, if we consider a 2-dimensional convolutional layer, it has inputs $\{x_{n,m}\}$ and outputs $\{y_{n,m}\}$:


Again, we can write out the outputs in terms of the inputs:

$$y_{n,m} = A\left(\begin{array}{ccc} x_{n,~m}, & x_{n+1,~m}, & ..., ~\\ x_{n,~m+1}, & x_{n+1,~m+1}, & ..., ~\\ & ... \\ \end{array}\right)$$

For example:

$$y_{0,0} = A\left(\begin{array}{cc} x_{0,~0}, & x_{1,~0}, ~\\ x_{0,~1}, & x_{1,~1} ~\\ \end{array}\right)$$
$$y_{1,0} = A\left(\begin{array}{cc} x_{1,~0}, & x_{2,~0}, ~\\ x_{1,~1}, & x_{2,~1} ~\\ \end{array}\right)$$

If one combines this with the equation for $A(x)$,

$$A(x) = \sigma(Wx + b)$$

one has everything they need to implement a convolutional neural network, at least in theory.
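As a sanity check, here is a direct numpy transcription of these formulas, assuming a window of 2 inputs and placeholder values for $W$ and $b$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W = np.array([0.5, -0.5])   # placeholder weights for a 2-input window
b = 0.0

def A(x_window):
    """A(x) = sigma(Wx + b), applied to one window of inputs."""
    return sigmoid(W @ x_window + b)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([A(x[n:n+2]) for n in range(len(x) - 1)])
# y[0] == A(x_0, x_1), y[1] == A(x_1, x_2), matching the equations above.
print(y)
```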

In practice, however, this isn't the best way to think about convolutional neural networks. There is an alternative formulation, in terms of a mathematical operation called convolution.

The convolution operation is a powerful tool. In mathematics, it comes up in diverse contexts, ranging from the study of partial differential equations (PDEs) to probability theory. Thanks in part to its role in PDEs, convolution is very important in the physical sciences. It also plays an important role in applied areas like computer graphics and signal processing.

For us, convolution will provide a number of benefits. Firstly, it will allow us to create much more efficient implementations of convolutional layers than the naive perspective might suggest. Secondly, it will remove a lot of messiness from our formulation, handling all the bookkeeping presently showing up in the indexing of $x$s – the present formulation may not seem messy yet, but that's only because we haven't got into the tricky cases yet.
Finally, convolution will give us a significantly different perspective for reasoning about convolutional layers.
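To illustrate the first of these benefits: the linear part of the layer above, before the nonlinearity, is exactly a discrete convolution, so an optimized routine such as numpy's `np.convolve` can replace the explicit Python loop. A sketch (numpy's convention flips the kernel, hence the `W[::-1]`):

```python
import numpy as np

W = np.array([0.5, -1.0, 0.25])                 # the same 3-point window weights
x = np.random.default_rng(0).normal(size=1000)  # a long input signal

# Naive sliding-window dot products...
naive = np.array([W @ x[n:n+3] for n in range(len(x) - 2)])

# ...give the same numbers as a single convolution call.
fast = np.convolve(x, W[::-1], mode='valid')
print(np.allclose(naive, fast))  # True
```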

I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot.  — Albert Einstein
